OpenAI fears people will form bonds with the AI it developed to fool humans

2024-08-12 03:42 AM

Tristan Greene

The warning shows that developers are aware that anthropomorphization is a legitimate concern in the AI industry.

When a safety tester working with OpenAI’s GPT-4o sent a message to the chatbot stating “this is our last day together,” it became clear to company researchers that some form of bonding had happened between the AI and the human using it.


In a blog post detailing the company’s safety efforts in developing GPT-4o, the flagship model for ChatGPT users, the company explained that these bonds could pose risks to humanity.


Per OpenAI:

“Users might form social relationships with the AI, reducing their need for human interaction—potentially benefiting lonely individuals but possibly affecting healthy relationships. Extended interaction with the model might influence social norms. For example, our models are deferential, allowing users to interrupt and ‘take the mic’ at any time, which, while expected for an AI, would be anti-normative in human interactions.”


There’s a lot to unpack there, but essentially OpenAI worries that people could come to prefer interacting with AI due to its passivity and perpetual availability.


The potential for this scenario should surprise nobody, especially not OpenAI. The company’s stated mission is to develop artificial general intelligence. At nearly every step of its business process, OpenAI has described its products in terms of their human equivalency.


OpenAI isn’t the only company to do so; in fact, it appears to be an industry-wide practice. In marketing terms, it helps to explain technical qualities such as “token size” and “parameter count” in ways that make sense to non-scientists.


Unfortunately, one of the primary side effects of doing so is anthropomorphization: treating an object like a person.

Artificial bonds


One of the earliest attempts to create a chatbot occurred in the mid-1960s when scientists at MIT launched “ELIZA,” a natural language processing program named after a literary character. The purpose of the project was to see if the machine could fool a human into thinking it was one of them.


In the time since, the generative AI industry has continued to embrace the personification of AI. The first wave of modern natural language processing products included assistants named Siri, Bixby, and Alexa, and even those without human names, such as Google Assistant, still had a human voice. Both the general public and the news media pounced on the anthropomorphization and, to this day, still refer to most interactive AI products as “he/him” and “she/her.”


While it’s beyond the scope of this article, or of OpenAI’s current research, to determine the long-term effects of human-AI interaction, the fact that people are likely to form bonds with helpful, subservient machines designed to act like us seems to be exactly the scenario that companies selling access to AI models are aiming for.


Related: OpenAI claims GPT-4o poses ‘medium risk’ of political persuasion
