AI agents will become persistent, autonomous, and deeply integrated into everyday workflows. But once they are able to act on our behalf, harder questions arise: who controls the data, the execution, and the trust layer?
Today, NEAR AI has offered an answer. Announced live at NEARCON 2026, IronClaw is a new open-source, verifiable AI agent runtime designed for a future where agents run continuously without exposing sensitive data, credentials, or user intent.
A runtime built for autonomous AI: no blind trust
IronClaw builds on the original OpenClaw vision but fundamentally strengthens it with cryptographic guarantees. Written in Rust and deployed inside an encrypted trusted execution environment (TEE), the NEAR AI Cloud runtime lets AI agents access tools, maintain memory, and perform actions on your behalf, all within a tightly controlled security perimeter.
Rather than asking users to trust an opaque platform, IronClaw shifts the trust model to verifiable execution: data and inference stay protected at the hardware level, and agents operate under explicit, enforceable permissions.
Security through architecture, not add-ons
IronClaw is designed around the core principle of defense in depth.
All untrusted and third-party tools run in their own sandbox, limited to only the resources they are explicitly allowed to access. Network calls are restricted to approved destinations. Sensitive credentials are injected only at runtime and are never exposed directly to tools or external services.
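IronClaw's actual policy format has not been published, so the mechanism can only be sketched. As an illustration under assumed names (`EgressPolicy`, `Permit` are hypothetical), a per-tool destination allowlist of the kind described above might look like this:

```go
package main

import (
	"fmt"
	"net/url"
)

// EgressPolicy is a hypothetical per-tool allowlist: a sandboxed tool may
// only reach hosts that were explicitly approved for it.
type EgressPolicy struct {
	AllowedHosts map[string]bool
}

// Permit reports whether an outbound request to rawURL targets an
// approved destination. Anything not on the allowlist is denied.
func (p EgressPolicy) Permit(rawURL string) (bool, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return false, err
	}
	return p.AllowedHosts[u.Hostname()], nil
}

func main() {
	policy := EgressPolicy{AllowedHosts: map[string]bool{
		"api.example.com": true, // the only destination this tool was granted
	}}

	for _, target := range []string{
		"https://api.example.com/v1/data",
		"https://attacker.example.net/exfil",
	} {
		ok, _ := policy.Permit(target)
		fmt.Printf("%s -> allowed=%v\n", target, ok)
	}
}
```

The design choice worth noting is default-deny: a destination is reachable only if it appears in the policy, which is what keeps a compromised tool from calling arbitrary endpoints.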
Agent activity is continuously monitored to detect exploits, including protection against prompt injection attacks and unauthorized resource consumption. All user data is stored locally in PostgreSQL, encrypted with AES-256-GCM, and never shared externally. Just as important is what IronClaw does not collect: no telemetry, no analytics, ensuring that execution remains entirely private.
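IronClaw's key management and record layout are not public, so the following is only a sketch of the cipher the article names. It shows an AES-256-GCM round trip for a single stored record, with the random nonce prepended to the ciphertext (the function names `sealRecord` and `openRecord` are assumptions, not IronClaw APIs):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// sealRecord encrypts plaintext with AES-256-GCM under a 32-byte key,
// prepending the random nonce to the ciphertext so openRecord can
// recover it. GCM also authenticates the data: tampering is detected
// on decryption.
func sealRecord(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // a 32-byte key selects AES-256
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// openRecord reverses sealRecord, splitting off the nonce and
// verifying the GCM authentication tag.
func openRecord(key, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, ct := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

func main() {
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	sealed, _ := sealRecord(key, []byte("agent memory row"))
	plain, _ := openRecord(key, sealed)
	fmt.Printf("round trip: %s\n", plain)
}
```

A fresh nonce per record matters here: GCM's guarantees collapse if a nonce is ever reused under the same key.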
Full audit logs give users visibility into every tool interaction, providing transparency without surveillance.
Deploy privacy-first AI now
IronClaw launches with a free starter tier that includes one hosted agent instance running on NEAR AI's secure environment and inference infrastructure. Developers and organizations can scale up through flexible paid tiers as their needs grow.
The goal is not just to make agents safer, but to make them practical to deploy without forcing teams to choose between convenience and control.
Why this matters
As AI systems increasingly serve corporate incentives and rely on opaque data pipelines, IronClaw points in a different direction: local control, verifiable execution, and privacy by default.
Illia Polosukhin, co-founder of NEAR Protocol and founder of NEAR AI, describes IronClaw as an "agent harness designed for security": a full-stack trust model extending from NEAR's blockchain infrastructure to the AI layer itself.
Rather than bolting security onto agentic AI after the fact, IronClaw builds security into the runtime itself, combining confidential inference, cryptographic verification, and hardware-backed execution into one system.
The foundation of responsible agentic AI
George Zeng, Chief Product Officer and General Manager of NEAR AI, puts the announcement more bluntly:
"AI agents are already entering critical workflows, but security, compliance, and data ownership remain unresolved. IronClaw aims to fill that gap, giving developers and enterprises the confidence to deploy always-on agents without giving up transparency or control."
IronClaw is available now, and the code can be accessed on the NEAR AI GitHub.
As AI moves from tool to actor, IronClaw takes a clear position: autonomy should not come at the expense of privacy, nor should intelligence require blind trust.

