Would Sam Altman's $7 trillion ask really secure our future?


Opinion by Dr. Merav Ozair

OpenAI CEO Sam Altman wants to raise $7 trillion for chip production to power AI systems, but he has yet to show that AI is moving in a direction that will help us.

OpenAI co-founder Sam Altman is reportedly seeking to raise up to $7 trillion for a project addressing the massive global shortage of semiconductor chips, prompted by the rapid growth in demand for generative artificial intelligence (GenAI). But it’s much more than that, according to Altman:


“We believe the world needs more AI infrastructure — fab capacity, energy, data centers, etc. — than people are currently planning to build. Building massive-scale AI infrastructure, and a resilient supply chain, is crucial to economic competitiveness. OpenAI will try to help!” Altman wrote in an X post on February 7, 2024.


Scaling with this amount of money implies that everything will be built on GenAI, with the end goal of achieving artificial general intelligence (AGI): systems that surpass human intelligence, a goal that is itself debatable.


Related: Bitcoin might drop to $30,000, but that’s OK


And why would we need “massive-scaling” of AI infrastructure?


“You can grind to help secure our collective future or you can write substacks about why we are going fail,” Altman added in a subsequent post on February 11, 2024.


Is it indeed for securing “our collective future”? Or OpenAI’s future?


OpenAI needs more computing power and more data centers (currently, it relies on Microsoft) to overcome its growth limitations, particularly the shortage of AI chips, which are essential for training large language models (LLMs) such as those behind ChatGPT.


Aside from the enormous amount of money — which exceeds the GDP of every country except the United States and China — there is something irresponsible about Altman’s “ask.”


No technology is perfect, and AI is no exception. AI’s potential to bring immense benefits to society is as great as its potential to cause damage and harm. Legislators require companies to adhere to responsible AI and responsible innovation, and we, as a society, should demand it.


Responsible innovation is the idea of making new technologies work for society without causing more problems than they solve. This applies to all technologies, all innovations, across all organizations, industries, and regions.


Aren’t we getting ahead of ourselves? Shouldn’t we address the risks and challenges that come with AI systems, mitigating and controlling those risks and making sure they don’t cause more problems than they solve, before scaling them?

AI risks and challenges


AI is data driven, and with GenAI we’re looking at vast amounts of data. This reliance on data brings critical risks and challenges. Data might be incomplete or erroneous, or be used inappropriately, incorrectly, or inaccurately. If the input is incorrect, so too will be the output: “Garbage in, garbage out.” In the world of LLMs, we’re now facing “garbage in, garbage out” on steroids. When LLMs process poor or outdated information, they don’t just replicate it. They amplify it, making it sound correct and plausible. This “garbage on steroids” phenomenon brings us to a crucial juncture.


Moreover, one of the central problems with AI systems is algorithmic bias, and it has been well documented that it leads to discrimination. This problem has not been appropriately addressed yet, even though legislators have asked tech companies to do so.


Related: 2024 will be the Ethereum network’s biggest year in history


And there are other problems, especially with GenAI: hallucinations, mis- and disinformation, lack of explainability, scams, copyright, user privacy, and data security — all of which have not been fully addressed and mitigated. A less discussed but essential issue is AI’s environmental impact. AI systems are voracious consumers of energy, which they require for computing and data centers.


The International Energy Agency has forecast that electricity demand from data centers, driven in part by the growth of AI, could double by 2026. This problem might be mitigated as hardware becomes more efficient, through techniques that reduce energy consumption, or through the use of renewables. But these potential solutions have not been tested, and many have not been fully developed yet.

The Biden administration and the European Union call for responsible AI


Lawmakers are calling for “responsible AI” — safe, secure, and trustworthy. President Joe Biden signed an executive order (EO) in October 2023 requiring, among other things, that companies: 1) develop AI tools to find and fix cybersecurity vulnerabilities; 2) develop and use privacy-preserving techniques, such as cryptographic tools that preserve the privacy of the individuals represented in the training data; 3) protect consumers, patients, and students, so that AI does not raise the risk of injuring, misleading, or otherwise harming Americans; 4) protect workers against the dangers of increased workplace surveillance, bias, and job displacement; and 5) place a special focus on algorithmic bias and discrimination, ensuring that such bias is addressed throughout the development and training of these systems.


In July 2023, OpenAI signed a voluntary commitment with the Biden administration to manage the risks posed by AI and adhere to responsible AI. OpenAI hasn’t quite demonstrated the actionable “responsible AI” it pledged to undertake.

[Image: The European Commission’s AI Act. Source: euAIact.com]


Like the EO, the European Union’s AI Act requires transparency of downstream development documentation and auditing, especially for foundation models and GenAI. AI systems are not set up to provide this information, and legislators have not offered any practical solutions. A need for auditable responsible AI emerges. This is where blockchain technology can help, providing a solution that enables companies to comply with legislators’ requests and implement “auditable responsible AI” — safe, secure, and trustworthy. Perhaps OpenAI could consider implementing such a solution and demonstrate appropriate auditability of its AI systems.
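To make the idea of “auditable responsible AI” a little more concrete, here is a minimal sketch, in Python, of the kind of tamper-evident record-keeping a blockchain-based approach would build on: each record about a model’s training data, evaluations, or bias reviews is hashed and chained to the previous record, so an auditor can verify that the development trail has not been quietly altered. The `AuditTrail` class and the event fields are illustrative assumptions, not part of any regulator’s or vendor’s specification.

```python
import hashlib
import json
import time

# Illustrative sketch only: a hash-chained audit trail for AI development records.
# Field names and structure are assumptions, not an official schema.

class AuditTrail:
    def __init__(self):
        self.entries = []  # each entry stores its payload plus the previous entry's hash

    def _hash(self, payload: dict, prev_hash: str) -> str:
        # Deterministic serialization so the same record always hashes the same way.
        body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
        return hashlib.sha256(body.encode("utf-8")).hexdigest()

    def record(self, payload: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        entry_hash = self._hash(payload, prev_hash)
        self.entries.append({"payload": payload, "prev": prev_hash, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        # Recompute every hash; any edited entry breaks the chain.
        prev_hash = "genesis"
        for entry in self.entries:
            if entry["prev"] != prev_hash or entry["hash"] != self._hash(entry["payload"], prev_hash):
                return False
            prev_hash = entry["hash"]
        return True


trail = AuditTrail()
trail.record({"event": "dataset_registered", "dataset": "example-corpus-v1", "ts": time.time()})
trail.record({"event": "bias_evaluation", "result": "passed", "ts": time.time()})
print(trail.verify())  # True; tampering with any stored entry would make this False
```

In a blockchain-backed setup, the latest chain hash could be anchored on a public ledger at regular intervals, letting external auditors confirm the integrity of the development trail without needing access to the raw records themselves.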


Implementing responsible AI — including making AI systems auditable and mitigating their energy implications — should be accomplished, with satisfactory results, before these systems are scaled, let alone “massively scaled.”


Innovating responsibly and making sure that AI systems are safe, secure, and trustworthy will secure our collective future. This may not be Sam Altman’s way, but it’s the right way.

Dr. Merav Ozair is developing and teaching emerging technologies courses at Wake Forest University and Cornell University. She was previously a FinTech professor at Rutgers Business School, where she taught courses on Web3 and related emerging technologies. She is a member of the academic advisory board at the International Association for Trusted Blockchain Applications (INATBA) and serves on the advisory board of EQM Indexes — Blockchain Index Committee. She is the founder of Emerging Technologies Mastery, a Web3 and AI end-to-end consultancy shop, and holds a PhD from Stern Business School at NYU.


This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.
