OpenAI recently announced the establishment of a Preparedness team dedicated to assessing, forecasting, and safeguarding against the risks posed by highly capable AI systems.
In keeping with its mission of creating safe Artificial General Intelligence (AGI), OpenAI has consistently emphasised the importance of addressing safety risks across the entire spectrum of AI technologies, from today's models to potential future superintelligent systems. This endeavor is in line with…