Biden AI executive order ‘certainly challenging’ for open-source AI — Industry insiders
Savannah Fortis

The executive order on AI safety from the Biden administration has laid out its standards for the industry. However, its vagueness has raised concerns among the AI community over stifling innovation.

United States President Joe Biden issued a lengthy executive order on Oct. 30, which intends to protect citizens, government agencies and companies by ensuring artificial intelligence (AI) safety standards.
The order established six new standards for AI safety and security, along with intentions for ethical AI usage within government agencies. Biden said the order aligns with the government’s principles of “safety, security, trust, openness.”

“My Executive Order on AI is a testament to what we stand for: Safety, security, and trust.” pic.twitter.com/rmBUQoheKp — President Biden (@POTUS) October 31, 2023
It includes sweeping mandates, such as requiring companies developing “any foundation model that poses a serious risk to national security, national economic security, or national public health and safety” to share the results of safety tests with officials, and “accelerating the development and use of privacy-preserving techniques.”
However, the lack of detail accompanying these statements has left many in the industry wondering whether the order could stifle companies from developing top-tier models.
Adam Struck, a founding partner at Struck Capital and an AI investor, told Cointelegraph that the order displays a level of “seriousness around the potential of AI to reshape every industry.”
He also pointed out that it is tricky for developers to anticipate the future risks the legislation asks them to address, since those assessments rest on assumptions about products that aren’t fully developed yet.

“This is certainly challenging for companies and developers, particularly in the open-source community, where the executive order was less directive.”
However, he said the administration’s intentions to manage the guidelines through chiefs of AI and AI governance boards in specific regulatory agencies means that companies building models within those agencies’ purview should have a “tight understanding of regulatory frameworks” from that agency.

“Companies that continue to value data compliance and privacy and unbiased algorithmic foundations should operate within a paradigm that the government is comfortable with.”
The government has already released more than 700 use cases detailing how it uses AI internally via its “ai.gov” website.
Martin Casado, a general partner at the venture capital firm Andreessen Horowitz, posted on X (formerly Twitter) that he, along with several researchers, academics and founders in AI, had sent a letter to the Biden administration over its potential for restricting open-source AI.
“We believe strongly that open source is the only way to keep software safe and free from monopoly. Please help amplify,” he wrote.

“1/ We’ve submitted a letter to President Biden regarding the AI Executive Order and its potential for restricting open source AI. We believe strongly that open source is the only way to keep software safe and free from monopoly. Please help amplify.” pic.twitter.com/Mbhu35lWvt — martin_casado (@martin_casado) November 3, 2023
The letter called the executive order “overly broad” in its definition of certain AI model types and expressed fears that smaller companies could get tangled up in requirements designed for larger ones.
Jeff Amico, the head of operations at Gensyn, posted a similar sentiment, calling the order terrible for innovation in the United States.

“Biden’s AI Executive Order is out and it’s terrible for US innovation. Here are some of the new obligations, which only large incumbents will be able to comply with.” pic.twitter.com/R3Mum6NCq5 — Jeff Amico (@_jamico) October 31, 2023
Related: Adobe, IBM, Nvidia join US President Biden’s efforts to prevent AI misuse
Struck also highlighted this point, saying that while regulatory clarity can be “helpful for companies that are building AI-first products,” it is also important to note that the goals of “Big Tech” players like OpenAI or Anthropic differ greatly from those of seed-stage AI startups.

“I would like to see the interests of these earlier stage companies represented in the conversations between the government and the private sector, as it can ensure that the regulatory guidelines aren’t overly favorable to just the largest companies in the world.”
Matthew Putman, the CEO and co-founder of Nanotronics — a global leader in AI-enabled manufacturing — also commented to Cointelegraph that the order signals a need for regulatory frameworks that ensure consumer safety and the ethical development of AI on a broader scale.
“How these regulatory frameworks are implemented now depends on regulators’ interpretations and actions,” he said.

“As we have witnessed with cryptocurrency, heavy-handed constraints have hindered the exploration of potentially revolutionary applications.”
Putman said that fears about AI’s “apocalyptic” potential are “overblown relative to its prospects for near-term positive impact.”
He said it’s easier for those not directly involved in building the technology to construct narratives around the hypothetical dangers without observing the “truly innovative” applications, which he says are taking place outside of public view.
Industries including advanced manufacturing, biotech and energy are, in Putman’s words, “driving a sustainability revolution” with new autonomous process controls that are significantly improving yields and reducing waste and emissions.

“These innovations would not have been discovered without purposeful exploration of new methods. Simply put, AI is far more likely to benefit us than destroy us.”
While the executive order is still fresh and industry insiders are rushing to analyze its intentions, the U.S. National Institute of Standards and Technology and the Department of Commerce have already begun soliciting members for the newly established Artificial Intelligence Safety Institute Consortium.