US lawmakers send a letter to OpenAI requesting government access

2024-07-24 03:16 AM

Vince Quill

Congressional Democrats asked the AI firm, "What is the percentage of computing resources that OpenAI is dedicating to AI safety research?"

Senate Democrats and one independent lawmaker have sent a letter to OpenAI CEO Sam Altman regarding the company's safety standards and its employment practices toward whistleblowers.


Perhaps the most significant portion of the letter, first obtained by The Washington Post, was item 9, which read, “Will OpenAI commit to making its next foundation model available to U.S. Government agencies for pre-deployment testing, review, analysis, and assessment?”


The letter outlined 11 additional points to be addressed, including securing OpenAI's commitment to dedicate 20% of its computing power to safety research and to establish protocols preventing a malicious actor or foreign adversary from stealing AI products from OpenAI.

First page of the letter from Senate Democrats. Source: The Washington Post

Regulatory scrutiny


Although regulatory scrutiny is nothing new for OpenAI and the broader artificial intelligence sector, the letter from Democratic lawmakers was prompted by whistleblower reports alleging that safety standards for GPT-4 Omni were relaxed to ensure the product's market release was not delayed.


Related: Synchron, ChatGPT to help paralyzed patients chat and text again


OpenAI whistleblowers also claimed that efforts to bring safety concerns to management were met with retaliation and allegedly illegal non-disclosure agreements, driving the whistleblowers to file a complaint with the US Securities and Exchange Commission in June 2024.


Shortly after, in July, tech giants Microsoft and Apple walked away from seats on OpenAI's board due to increased regulatory scrutiny. The decision not to take up a board seat came despite Microsoft's $13-billion investment in OpenAI in 2023.

Existential fears persist


Former OpenAI employee William Saunders recently revealed that he quit the company because he felt the ongoing research at OpenAI might pose an existential threat to humanity, likening the potential trajectory of OpenAI to the infamous sinking of the RMS Titanic in 1912.


Saunders clarified that he was not concerned about the current iteration of OpenAI’s ChatGPT large language model but was more concerned with future versions of ChatGPT and the potential development of advanced superhuman intelligence.


The whistleblower argued that employees working within the artificial intelligence sector have a right to warn the public about potentially dangerous capabilities exhibited by the rapid development of synthetic intelligence.


Magazine: AI Eye: Is AI a nuke-level threat? Why AI fields all advance at once, dumb pic puns
