Today we celebrate: International Romani Day

LLMs and AI Agents Accelerate: Record Contracts, New Models, and Regulatory Pressure

Meta Platforms signed a multi-year agreement with AMD[3] to supply up to 6 GW of computing power through Instinct GPUs and related CPUs for AI data centers. Independent analyses describe it as the largest single infrastructure contract of the current era of artificial intelligence. The deal includes warrants for 160 million AMD shares, about 10% of the company, with vesting tied to the delivery of additional gigawatts of capacity and to stock price milestones. Estimates of the partnership's value reach up to $100 billion over five years. For Meta, this means a leap in training and deploying Llama-family models and agents; for the market, it erodes Nvidia's long-standing dominance in the GPU segment for LLMs.

Record Meta and AMD Contract

Simultaneously, 61 data protection authorities worldwide[5], including the Polish Personal Data Protection Office (UODO), adopted a joint position titled "Joint Statement on AI-Generated Imagery," coordinated by the Global Privacy Assembly. The document, dated February 23, 2026, warns of the growing threat posed by realistic AI-generated images and videos, including deepfakes and non-consensual pornography. The regulators state plainly that creating intimate content without consent may constitute both a crime and a data protection violation under laws such as the GDPR in many jurisdictions. In practice, this sets a global standard of expectations for providers and users of generative systems, strengthening the basis for oversight and legal action, including in Poland.

Regulators Target Deepfakes

Legal analysts and the European Commission clarified the implementation timeline[10] for the AI Act, which sets the framework for LLM and agent use within the European Union. Bans on AI systems deemed "unacceptable," along with AI literacy duties, took effect on February 2, 2025. The next phase, which began on August 2, 2025, covers providers of general-purpose AI models (GPAI), and the full requirements for high-risk systems—including many LLM applications in finance, health, public administration, and employment—enter into force on August 2, 2026. The most serious AI Act violations can bring fines of up to €35 million or 7% of global annual turnover, whichever is higher, compelling entities operating in Poland to inventory their AI uses and build risk management documentation.
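The fine cap described above follows a "higher of" rule, so the effective exposure grows with company size. A minimal sketch of that arithmetic (the function name is ours, not from the regulation):

```python
def max_ai_act_fine(global_turnover_eur: float) -> float:
    """Upper bound of an AI Act fine for the most serious violations:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    FIXED_CAP = 35_000_000
    TURNOVER_SHARE = 0.07
    return max(FIXED_CAP, TURNOVER_SHARE * global_turnover_eur)

# For a firm with EUR 2 billion in global turnover, the 7% branch dominates:
print(max_ai_act_fine(2_000_000_000))   # 140 million EUR
# For a EUR 100 million firm, the fixed EUR 35 million cap dominates:
print(max_ai_act_fine(100_000_000))     # 35 million EUR
```

In other words, for any company with global turnover above €500 million, the percentage branch is the binding one.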

AI Act Sets Strict Deadlines

The model market has seen new architectures aimed at agent workloads. Startup Inception announced the Mercury 2 model[11], a diffusion LLM that generates text by "denoising" many tokens in parallel rather than through classic sequential prediction. According to the company, Mercury 2 achieves throughput of around 1,000 tokens per second with quality comparable to the Claude 4.5 Haiku and GPT-5.2 Mini models on reasoning benchmarks; independent preliminary measurements report a 10–13x speedup over small autoregressive models. Meanwhile, Guide Labs released the open-source Steerling-8B model with 8 billion parameters, trained on about 1.35 trillion tokens and designed so that every generated token can be traced back to a concept layer and to specific pieces of training data. The company claims roughly 90% of top-model quality while using 2 to 7 times less data, which eases audits and bias control in regulated sectors.
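The core idea behind diffusion-style decoding is that the model starts from a fully masked sequence and commits the most confident positions in parallel at each step, instead of emitting tokens strictly left to right. A toy sketch of that loop, with a random stand-in for the learned denoiser (all names and the tiny vocabulary are ours, not Inception's):

```python
import random

MASK = "_"

def toy_denoiser(tokens):
    # Stand-in for a learned denoiser: proposes a candidate token and a
    # confidence score for every still-masked position. Here it is random.
    vocab = ["the", "cat", "sat", "on", "a", "mat"]
    return {i: (random.choice(vocab), random.random())
            for i, tok in enumerate(tokens) if tok == MASK}

def diffusion_decode(length=6, steps=3):
    # Start fully masked; at each step, commit the top-confidence fraction
    # of masked positions in parallel -- the key contrast with
    # autoregressive one-token-at-a-time decoding.
    tokens = [MASK] * length
    for step in range(steps):
        proposals = toy_denoiser(tokens)
        if not proposals:
            break
        # Unmask roughly an equal share of the remaining positions per step.
        k = max(1, len(proposals) // (steps - step))
        best = sorted(proposals.items(), key=lambda kv: -kv[1][1])[:k]
        for pos, (tok, _conf) in best:
            tokens[pos] = tok
    return tokens

random.seed(0)
print(" ".join(diffusion_decode()))
```

Because several positions are filled per step, the sequence is completed in a handful of passes rather than one pass per token, which is where the claimed throughput gains come from.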

New Models for Ultra-Fast, Transparent Agents

In the agent product layer, there is a shift from developer tools toward mass-market-ready solutions. Anthropic expanded the Claude Cowork platform with 10[18] specialized agents for finance, HR, law, engineering, and design, built partly with FactSet, S&P Global, and the London Stock Exchange Group (LSEG), with first deployments at clients such as Thomson Reuters and RBC Wealth Management. Startup Cursor, creator of an agent-integrated IDE and now valued at about $29.3 billion, introduced the ability to run multiple coding agents in parallel cloud development environments with automatic testing and documentation. Meanwhile, Veza presented Veza Access Agents and an AI Agent Security module that automatically maps agent permissions, detects "shadow AI," and analyzes potential "blast radius" impacts across infrastructure, preparing large organizations for scenarios in which AI agents outnumber humans by as much as 80 to 1.
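Conceptually, a "blast radius" analysis of the kind described above amounts to a reachability search over a permission graph: everything an identity can touch, directly or through service accounts it controls. A minimal sketch under our own assumptions (the graph, names, and function are hypothetical, not Veza's API):

```python
from collections import deque

# Hypothetical permission graph: which resources each identity
# (human, AI agent, or service account) can reach directly.
PERMISSIONS = {
    "invoice-agent": ["erp-db", "svc-billing"],
    "svc-billing":   ["payments-api"],
    "payments-api":  ["bank-gateway"],
    "hr-agent":      ["hris-db"],
}

def blast_radius(identity):
    """Everything transitively reachable from one identity -- a rough
    measure of the damage a compromised agent could do."""
    seen, queue = set(), deque([identity])
    while queue:
        node = queue.popleft()
        for nxt in PERMISSIONS.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(blast_radius("invoice-agent")))
# ['bank-gateway', 'erp-db', 'payments-api', 'svc-billing']
```

The interesting output is usually not the set itself but its size relative to the agent's stated job: an invoicing agent that can transitively reach a bank gateway is exactly the kind of finding such tooling is meant to surface.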

Agents Enter Enterprises and Smartphones

The clearest consumer signal of the shift is the launch of the Samsung Galaxy S26 series[24], which Samsung positions as the "Agentic AI Phone." The devices integrate three agent assistants at the system level: Bixby, Google Gemini, and Perplexity AI's solution based on the Sonar API. The agents have dedicated voice commands and access to native apps such as Notes, Calendar, and Gallery, plus selected external services, and can carry out complex background tasks such as planning travel or booking services. Combined with investments like that in Axelera AI, which secured over $250 million to develop energy-efficient inference chips delivering up to 629 TOPS, this accelerates agent adoption both in data center infrastructure and on edge devices such as smartphones, with direct impact on the market and regulation in Poland.

