Vitalik Buterin Criticizes Agentic AI, Calls for Editable Models and Neurotech Integration

Vitalik Buterin

Ethereum co-founder Vitalik Buterin took to X (formerly Twitter) on Monday to push back against the current rush toward so-called “agentic” AI systems, models built to act autonomously with minimal human supervision, and instead made the case for models that let humans steer and edit output as it is generated. “Echoing something Andrej Karpathy recently said, it does frustrate me how a lot of AI development is trying to be as ‘agentic’ as possible,” Buterin wrote.

He added that increasing paths for human input not only improves outputs but is “better for safety.” The Ethereum co-founder said he’s “much more excited about open-weights AI models with good editing functionality than those that are just for creating from scratch,” and even sketched a medium-term wish: “some fancy BCI thing where it shows me the thing as it’s being generated and detects in real time how I feel about each part of it and adjusts accordingly.”

The post directly echoed recent remarks from Andrej Karpathy, who, in a high-profile keynote and follow-up interviews, has cautioned against treating large language models like infallible autonomous agents and urged developers to keep AI “on a leash,” stressing that human oversight, careful prompting, and incremental development are still essential.

Why This Matters Now

Buterin’s comments arrive at a moment when major labs are publicly shifting their stance on openness and user control. In early August, OpenAI surprised many by releasing a family of open-weights models, gpt-oss-120b and gpt-oss-20b, that can be downloaded, inspected and fine-tuned by outside developers, a move framed by some as a step toward democratizing powerful AI tooling. Supporters say open weights make it easier to build custom, human-centric workflows and inspect models for safety problems; critics warn about the misuse risks of widely distributed, powerful models.
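In practical terms, “open weights” means the model files themselves can be downloaded and run or fine-tuned locally rather than accessed only through a hosted API. A minimal sketch of the downloadable half of that claim, assuming a recent transformers install and the Hugging Face hub id openai/gpt-oss-20b (the hub id and parameters here are illustrative, not taken from the article):

```python
from transformers import pipeline

# Pull the open weights locally (no hosted API involved) and run inference.
generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # assumed hub id for the open-weights release
    torch_dtype="auto",
    device_map="auto",           # shard across whatever devices are available
)

messages = [{"role": "user", "content": "Why do open-weights models matter?"}]
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1])  # the last message is the model's reply
```

Because the weights sit on disk rather than behind an API, the same checkpoint can be inspected, probed for safety issues, or fine-tuned on custom data, which is exactly the property supporters highlight.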

Buterin’s public preference for “editing functionality” ties into an emerging thread in the industry: instead of asking models to produce finished work and then stepping back, many researchers want interfaces that let humans intervene mid-generation, pruning, correcting, or guiding output in real time. Proponents argue this reduces hallucinations, keeps the human firmly in control of intent, and produces artifacts that better match human tastes and safety constraints. Karpathy’s recent talk similarly emphasized the need for tighter human-in-the-loop workflows.
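One way to picture that “intervene mid-generation” pattern is a loop that pauses between generated spans and lets a human accept, rewrite, or stop before generation continues. The sketch below is hypothetical and not tied to any real product; generate_chunk stands in for whatever streaming call a given model exposes:

```python
# Hypothetical human-in-the-loop generation loop; all names are illustrative.
def generate_chunk(prompt: str, draft_so_far: str) -> str:
    """Stub for a streaming model call that proposes the next span of text."""
    return " ...next model-proposed span..."

def interactive_draft(prompt: str, max_chunks: int = 20) -> str:
    draft = ""
    for _ in range(max_chunks):
        chunk = generate_chunk(prompt, draft)
        print(f"model proposes:{chunk}")
        choice = input("[a]ccept / [e]dit / [s]top? ").strip().lower()
        if choice == "s":
            break
        if choice == "e":
            chunk = input("your replacement: ")  # the human rewrites the span
        draft += chunk  # accepted or edited text conditions the next proposal
    return draft
```

The design point is that every accepted or edited span feeds back into the context for the next proposal, so human intent steers the trajectory continuously instead of being applied once, after the fact.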

BCI + Generative AI

Buterin’s offhand desire for a brain-computer interface (BCI) that senses emotional responses as content is generated isn’t pure fantasy. This year has seen a flurry of tangible BCI progress: companies from Neuralink to Synchron and a crop of startups are rolling out human trials, less-invasive implants and EEG-based wearables aimed at decoding attention and affective states in real time.

Academic work on EEG-based emotion detection and “affective computing” has improved too, suggesting that devices capable of flagging when a user likes, dislikes, or is neutral toward something are increasingly plausible. Still, fully reliable, consumer-grade emotional decoding, especially non-invasively, remains an active research challenge.
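To make the wished-for loop concrete, here is a purely hypothetical sketch of affect-steered generation: a stubbed decoder emits a like/dislike score for each proposed span, and low-scoring spans are regenerated with a steering hint. Real-time EEG affect decoding at this fidelity does not exist yet, as the paragraph above notes; every name and threshold here is invented for illustration:

```python
import random

# Stub for an affect decoder; a real one is an open research problem.
def read_affect() -> float:
    """Return a like/dislike score in [-1, 1] for the span just shown."""
    return random.uniform(-1.0, 1.0)

def propose_chunk(draft: str, nudge: str = "") -> str:
    """Stub for a model call; a real version would condition on the nudge."""
    return f" [span|{nudge or 'default'}]"

def affect_steered_generation(steps: int = 10, threshold: float = 0.0) -> str:
    draft = ""
    for _ in range(steps):
        chunk = propose_chunk(draft)
        if read_affect() >= threshold:  # viewer liked it: keep the span
            draft += chunk
        else:                           # disliked: retry with a steering hint
            draft += propose_chunk(draft, nudge="try a different direction")
    return draft

print(affect_steered_generation())
```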

What the Debate Reveals

Taken together, the signals point to two simultaneous currents in AI development. One current pushes toward more agentic systems that can run complex tasks with little or no human supervision; the other, voiced by figures like Karpathy and now echoed by Buterin, argues for tooling that keeps humans tightly coupled to model behavior, whether via better editing UIs, human-in-the-loop verification, or even neurotech feedback.

For industry watchers, the balance between those approaches will shape product design, regulation and public trust. Open-weights releases make it easier for independent teams to experiment with human-centric editing layers, but they also widen the distribution of powerful models, which regulators and safety researchers worry could increase misuse.

Meanwhile, as BCI and affect detection make incremental advances, new interfaces may indeed let creators “nudge” models with far richer signals than typed prompts, raising fresh questions about privacy, consent and how inner states are measured and stored. Vitalik Buterin’s tweet is less a technical blueprint than a value statement: better outputs and safer systems come from keeping people and the messy, nuanced signals of human preference at the center of AI workflows.

Whether that future arrives through editable open models today or brain-bound interfaces tomorrow, the conversation is already shifting away from pure autonomy toward richer human-AI collaboration. As Buterin put it in his post, he’d rather watch something be born and be able to steer it than hand over the reins completely.

Disclaimer: This article is copyrighted by the original author and does not represent MyToken’s views and positions. If you have any questions regarding content or copyright, please contact us at www.mytokencap.com.