
Capital, Agents, and Regulations: A Breakthrough Week in the Global AI Race

SpaceX–xAI Merger and Nvidia Investment

On February 2, SpaceX announced the acquisition of xAI[1], Elon Musk’s artificial intelligence company and creator of the Grok chatbot, in a stock-swap deal exchanging 1 xAI share for 0.1433 SpaceX shares. The combined company, valued at 1.25 trillion dollars[2], integrates rockets, Starlink satellites, the X platform[3], and the Grok AI models, making it the most valuable private company in the world[6]. After a January Series E round, xAI was valued at 250 billion dollars[4], while SpaceX was valued at about 1 trillion dollars. Elon Musk announced an initial public offering around June 2026[5], aiming to raise close to 50 billion dollars while shielding SpaceX from potential legal liabilities linked to xAI, including an ongoing investigation into deepfakes generated by Grok[40].

Simultaneously, Nvidia finalized a 20 billion dollar investment[7] in OpenAI, part of a broader funding round[8] reaching 100 billion dollars and valuing OpenAI at roughly 830 billion dollars[9]. OpenAI’s new models are trained on NVIDIA GB200 NVL72 systems[10], further cementing the company’s technological dependence[11] on Nvidia’s architecture and raising the entry barrier for competing chip manufacturers[41].

New AI Agent Models and Platforms

On February 5, OpenAI released the GPT-5.3-Codex model[12], which merges the Codex and GPT-5 training stacks[13] and operates 25% faster than its predecessor[14]. The company classified the model as having high capabilities in cybersecurity[15][42], while admitting that earlier versions of the model helped debug and manage its own training process[43]. Due to abuse risks, OpenAI launched a trusted access program for security experts[16], offering 10 million dollars in API credits for cyber-defense projects[17] and temporarily delaying full API access[18], making the model available mainly in ChatGPT for paying users[44].

That same day, OpenAI also launched the Frontier platform[19], an enterprise offering intended to serve as an operating system for AI agents acting as digital coworkers. Frontier integrates with internal systems such as CRM and HR databases[20] and provides memory, agent onboarding, and detailed permissions management[21]. Developed under the supervision of Fidji Simo, it launches without disclosed pricing[22], with early customers including Intuit, State Farm, Thermo Fisher, Uber, BBVA, Cisco, and T-Mobile[46]. Frontier is a direct response to Anthropic’s Claude Cowork offering; on February 5, Anthropic unveiled the Claude Opus 4.6 model[23], with a context window of 1 million tokens[47], targeting long-term professional tasks in finance[24].

Agent Oversight Crisis and EU Regulations

The AI agent landscape is rounded out by a Gravitee report[25], prepared with research firm Opinion Matters and published on February 3. A survey of 750 IT directors in the US and UK estimated[48] that organizations operate roughly 3 million AI agents[26], of which 53%, about 1.6 million, are neither actively monitored nor properly secured[27]. As many as 88% of companies experienced a security incident involving an AI agent in the last 12 months[28]. Experts from Beauceron Security and Kapittx warn that the deployment pace outstrips security teams’ capabilities[29][49], and that the global market urgently needs governance tools[30] for agent systems.

The implementation of the EU AI Act regulation[31] in Europe enters a critical phase. The European Commission was due by February 2, 2026, to publish guidelines[32] for the practical enforcement of Article 6, which governs the classification of high-risk systems, and the full rules for such systems take effect on August 2, 2026[33]. Operators of high-risk AI systems must demonstrate comprehensive risk management[34]; the schedule already includes a ban on unacceptable practices, effective since February 2025[35], and requirements for general-purpose models, applicable since August 2025[36]. For companies in Poland, this means rapidly identifying their own high-risk systems[37], preparing for full enforcement of the law[38], planning audits and documentation, and cooperating closely with agencies such as the Office of Competition and Consumer Protection and the Personal Data Protection Office.

