A year ago, Jan Leike co-led OpenAI’s since-disbanded superalignment team with the company’s co-founder and chief scientist, Ilya Sutskever. As the most ambitious of the company’s three safety teams, the superalignment group was focused on ensuring that if AI systems surpass human-level intelligence, they remain under human control. But in May, Leike made a dramatic exit, accusing OpenAI of prioritizing “shiny products” over safety. He wrote that his team had been struggling to…