Today we celebrate: International Romani Day

NVIDIA, New Regulations, and a Wave of Funding: The Week AI Agents Enter the Mainstream

NVIDIA Builds a Stack for Agents

At the GTC 2026 conference, NVIDIA announced the NVIDIA Agent Toolkit[3] – a software stack for building autonomous AI agents. It includes the open-source Nemo runtime, Claw, the open AI agent blueprint AI-Q, and the Nemotron family of open models. At the same time, NVIDIA announced full support for the Open Claw agent platform[2], which can run locally on DGX Spark stations and GeForce RTX laptops. According to figures presented at the conference, Open Claw has already generated about 1.5 million agents[1] in less than three months. The company has not disclosed pricing for the Agent Toolkit[11], but emphasizes that it will earn primarily from hardware and cloud services.

Europe Focuses on Licensing and Transparency

Alongside NVIDIA’s moves in infrastructure, key regulatory changes have taken place. On March 10, the European Parliament adopted a resolution[18] titled “Copyright and Generative Artificial Intelligence – Opportunities and Challenges,” in which it calls on the European Commission and the European Union Intellectual Property Office[21] to establish stronger legal frameworks for licensing content for generative systems. The resolution envisions EUIPO acting as a “trusted intermediary” that would maintain opt-out registers and facilitate sectoral licensing, as well as a presumption of infringement if a model provider fails to demonstrate compliance with transparency obligations. On March 18, the UK government, through the Department for Science, Innovation and Technology and the Department for Culture, Media and Sport, published a report and impact assessment on the use of copyrighted works in AI[4]. The government abandoned the broad text-and-data-mining exemption with an opt-out option and declared that no new copyright exceptions for AI would be introduced, signaling a shift to a “licensing-first” model.

New Models and Developer Tools

New tools for agent builders have appeared on the model market[22]. On March 17, OpenAI announced GPT-5.4 mini and GPT-5.4 nano[7] – smaller models designed for fast, high-volume tasks, including sub-agent roles. They expand the GPT-5.4 family[23], aimed at professional work and agent applications, in which the large model (e.g., GPT-5.4 Pro) acts as a coordinator while mini and nano handle specialized subtasks at lower cost and latency. Detailed pricing has not been disclosed[24], but OpenAI emphasized that these models are cheaper than the main GPT-5.4 and target applications requiring very high query volumes. Google, meanwhile, updated the Gemini API, introducing the ability to combine[5] built-in tools with user-defined functions in a single call, as well as spatial grounding of responses in Google Maps for Gemini 3 models. These features, available through the Gemini API and tools such as AI Studio and the Gemini CLI, aim to simplify building agents that draw on multiple data sources and actions.
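To make the Gemini change concrete, here is a minimal sketch of what a combined request body could look like at the REST level. The field names (`google_maps` as a built-in tool, `function_declarations` for the user function) follow the shape of the public Gemini API documentation, but the exact schema may differ by API version, and `get_order_status` is a hypothetical user-defined function invented for illustration:

```python
import json

def build_request(prompt: str) -> dict:
    """Sketch of a generateContent request body that mixes a built-in
    tool (Google Maps grounding) with a user-defined function in the
    same `tools` array of a single call."""
    return {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "tools": [
            # Built-in tool: spatial grounding via Google Maps
            {"google_maps": {}},
            # User-defined function, declared in the same request
            {"function_declarations": [{
                "name": "get_order_status",
                "description": "Look up an order in the shop backend.",
                "parameters": {
                    "type": "object",
                    "properties": {"order_id": {"type": "string"}},
                    "required": ["order_id"],
                },
            }]},
        ],
    }

body = build_request("Find the nearest pickup point and check order 1234.")
print(json.dumps(body, indent=2))
```

The point of the update is precisely that both entries can sit in one `tools` list; previously, built-in tools and custom functions typically had to be handled in separate calls or separate agents.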

Agent Security Under Scrutiny

Agent security is drawing increasing attention[8]. In mid-March, an internal AI agent at Meta caused a security incident[25] classified as “Sev 1” when an incorrect answer to a technical question led an employee to temporarily extend access permissions to sensitive data. The incident lasted about two hours[9]; Meta maintains that no user data was misused, but has not disclosed the potential scale of harm. On March 17, the Center for Internet Security (CIS) published a white paper identifying prompt injection as an “inherent threat” to GenAI systems and noting that such attacks are already being tested by criminal groups. The next day, Databricks presented its own guidelines for securing agents[28], highlighting three risk pillars – access to sensitive data, exposure to untrusted data, and the ability to alter state – and recommending that a single agent meet no more than two of them simultaneously. Additionally, a report published on March 15 showed that the success rate of indirect prompt injection attacks ranges[29] from 0.5% to 8.5%, depending on the model.
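The “no more than two pillars” recommendation lends itself to a simple automated check in an agent deployment pipeline. The sketch below is an illustration of that rule of thumb, not code from the Databricks guidelines; the class and field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AgentProfile:
    # The three risk pillars named in the guidelines
    reads_sensitive_data: bool      # pillar 1: access to sensitive data
    processes_untrusted_input: bool  # pillar 2: exposure to untrusted data
    can_change_state: bool           # pillar 3: ability to alter state

def violates_rule_of_two(agent: AgentProfile) -> bool:
    """Return True if the agent combines all three risk pillars,
    i.e. exceeds the recommended maximum of two at once."""
    risks = [
        agent.reads_sensitive_data,
        agent.processes_untrusted_input,
        agent.can_change_state,
    ]
    return sum(risks) > 2

# A support bot that reads customer data and external emails, but
# cannot change anything, stays within the recommendation.
support_bot = AgentProfile(True, True, False)
# An agent with all three capabilities should be flagged for review.
admin_bot = AgentProfile(True, True, True)

print(violates_rule_of_two(support_bot))  # False
print(violates_rule_of_two(admin_bot))    # True
```

Such a check does not replace sandboxing or permission reviews, but it makes the risk model explicit at the point where an agent's capabilities are declared.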

Another important thread concerns political and market tensions around model providers. After previously classifying Anthropic as a “supply chain risk” and excluding it from US Department of Defense programs, the administration of President Donald Trump announced legal action on March 17[42] aimed at removing Anthropic from all federal government agencies. In response, the company announced the creation of the Anthropic Institute think tank[41], combining teams focused on frontier-model red-teaming, social impact analysis, and economic research on the effects of AI. Meanwhile, data provided by Ramp and reported on March 18 by Axios show that Anthropic already captures over 73% of companies’ first-time AI tool spending[44], surpassing OpenAI in the new enterprise deployment category. The Claude models are also available in all versions of Microsoft Copilot[45], significantly expanding their reach in the enterprise sector.

Capital Flows into Agent Infrastructure

The startup Nyne raised $5.3 million in seed funding[35] from Wischoff Ventures and angels including Gil Elbaz to build a “human context layer” for agents. AgentMail secured $6 million from General Catalyst, Y Combinator, Phosphor Capital, and angels such as Paul Graham and Dharmesh Shah to create the first email provider for AI agents[33]. The Handle operating platform raised a $6 million seed round[36] led by Andreessen Horowitz (a16z), with planned expansion into markets including Mexico. CometChat secured $6.5 million in strategic funding from Run Ventures[31], bringing total capital raised for its communication-agents platform to $21.1 million. Health Universe raised $6 million in seed funding[10] from Kleiner Perkins, bringing its total funding to $9.5 million. For Polish companies, this means growing availability of ready-made API and SaaS components for building agents without having to create the entire infrastructure from scratch.

