UK AI Safety Institute ventures across the pond with new US location

2024-05-20 19:00

By Savannah Fortis

The U.K. expands its AI Safety Institute to San Francisco, aiming to leverage Bay Area tech talent and strengthen global AI safety partnerships.

The United Kingdom’s AI Safety Institute is set to expand internationally with a new location in the United States.


On May 20, Michelle Donelan, the U.K. Technology Secretary, announced that the institute will open its first overseas office in San Francisco in the summer.


The announcement said the strategic choice of a San Francisco office would allow the U.K. to “tap into the wealth of tech talent available in the Bay Area” and to engage with one of the world’s largest artificial intelligence (AI) labs located between London and San Francisco.


It added that the move will help the institute “cement” relationships with key players in the U.S. as it pushes for global AI safety “for the public interest.”


The institute’s London branch already has a team of 30, which it plans to scale up and equip with further expertise, particularly in risk assessment for frontier AI models.


Donelan said the expansion represents the U.K.’s leadership and vision for AI safety in action: “It is a pivotal moment in the UK’s ability to study both the risks and potential of AI from a global lens, strengthening our partnership with the US and paving the way for other countries to tap into our expertise as we continue to lead the world on AI safety.”


This follows the U.K.’s landmark AI Safety Summit, held at Bletchley Park in November 2023. The summit was the first of its kind to focus on AI safety on a global scale.


Related: Microsoft faces multibillion-dollar fine in EU over Bing AI


The event drew leaders from around the world, including from the U.S. and China, as well as leading voices in the AI space such as Microsoft president Brad Smith, OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis and Elon Musk.


In this latest announcement, the U.K. also said it is releasing a selection of the institute’s recent results from safety testing it conducted on five publicly available advanced AI models.


The institute anonymized the models and said the results provide a “snapshot” of their capabilities rather than designating them as “safe” or “unsafe.”


Among the findings, several models could complete cybersecurity challenges, though others struggled with more advanced ones, and several were found to have PhD-level knowledge of chemistry and biology.


It concluded that all tested models were “highly vulnerable” to basic jailbreaks and that none could complete more “complex, time-consuming tasks” without human supervision.


Ian Hogarth, the chair of the institute, said the assessments would contribute to an empirical picture of model capabilities: “AI safety is still a very young and emerging field. These results represent only a small portion of the evaluation approach AISI is developing.”


Magazine: ‘Sic AIs on each other’ to prevent AI apocalypse: David Brin, sci-fi author

# UK Government # United States # San Francisco # United Kingdom # AI
