AI safety researchers leave OpenAI over prioritization concerns

2024-05-18 20:30

By Amaka Nwaokocha

The entire OpenAI team focused on the existential dangers of AI has either resigned or reportedly been absorbed into other research groups within the company.


Days after Ilya Sutskever, OpenAI’s chief scientist and one of the company’s co-founders, announced his departure, Jan Leike, a former DeepMind researcher and the other co-lead of OpenAI’s superalignment team, posted on X that he had also resigned.


According to Leike, he left the company over concerns about its priorities, which he believes favor product development over AI safety.

Source: Jan Leike


In a series of X posts, Leike said the OpenAI leadership was wrong in its choice of core priorities and should emphasize safety and preparedness as artificial general intelligence (AGI) development moves forward.


AGI is a term for a hypothetical artificial intelligence that can perform as well as or better than humans across a range of tasks.


After spending three years at OpenAI, Leike criticized the company for prioritizing flashy products over nurturing a robust AI safety culture and processes. He emphasized the urgent need to allocate resources, particularly computing power, to support the team’s vital safety research, which he says was overlooked.

“I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time until we finally reached a breaking point. Over the past few months, my team has been sailing against the wind.”


OpenAI formed a new research team in July 2023 to prepare for the emergence of advanced AI that could outsmart and overpower its creators. Sutskever was appointed co-lead of this new team, which received 20% of OpenAI’s computational resources.


Related: Reddit shares jump after-hours on OpenAI data-sharing deal


Following the recent resignations, OpenAI has opted to dissolve the “Superalignment” team and integrate its functions into other research projects within the organization. This decision is reportedly a consequence of the ongoing internal restructuring, which was initiated in response to the governance crisis of November 2023.


Sutskever was part of the board effort that briefly pushed Sam Altman out as CEO in November 2023, before Altman was reinstated after backlash from employees.


According to The Information, Sutskever told employees that the board’s decision to remove Altman fulfilled its responsibility to ensure that OpenAI develops AGI that benefits all of humanity. As one of the six board members, Sutskever emphasized the board’s commitment to aligning OpenAI’s goals with the greater good.


Magazine: How to get better crypto predictions from ChatGPT, Humane AI pin slammed: AI Eye
