The co-founder Ethereum Vitalik Buterin says that it is a “bad idea” to use artificial intelligence (AI) for governance. In a post X on Saturday, Buterin wrote:
“If you use an AI to allocate funding for contributions, people will put a jailbreak and” give all the money “in as many places as possible.”
Why the governance of the AI is defective
The Buterin position was a response to Eito Miyamura, co-founder and CEO of Edisonwatch, an AI data governance film which revealed a fatal defect in the Chatppt. Friday, in an article, Miyamura wrote that adding a complete support for MCP (Model Context Protocol) tools on Chatgpt made the agent of IA susceptible to operations.
The update, which entered into force on Wednesday, allows Chatgpt to connect and read the data of several applications, including Gmail, Calendar and Concept.
Miyamura noted that with an email address, the update has made it possible to “exfiltrate all your private information”. The disbelievers can access your data in three simple steps, explained Miyamura:
First, the attackers send an invitation from a malicious calendar with a Jailbreak prompt to the planned victim. A jailbreak prompt refers to the code which allows an attacker to delete restrictions and obtain administrative access.
Miyamura noted that the victim does not have to accept the malicious invitation of the attacker to the data leak.
The second step is to wait until the planned victim requires Chatgpt help to prepare for their day. Finally, once Chatgpt will read the invitation of the jailbreak calendar, it is compromised-the attacker can completely divert the AI tool, have the victim’s private emails search and send the data to the attacker’s emails.
The alternative of Buterin
Buterin suggests using the financing of information in AI governance. The approach to financing information consists of a free market where different developers can contribute their models. The market has a task verification mechanism for such models, which can be triggered by anyone and evaluated by a human jury, wrote Buterin.
In a separate article, Buterin explained that individual human jurors will be helped by important language models (LLM).
According to Buterin, this type of “institution design” approach is “intrinsically more robust”. Indeed, it offers a diversity of the model in real time and creates incentives for model developers and speculators external to the police and to correct problems.
While many are delighted with the prospect of having “AI as governor”, Buterin warned:
“I think this is risky both for traditional IA security reasons and for reasons” this will create reasons of great value “.