AI News Roundup: February 11, 2026
🚨 The Big Stories
Tech Giants Will Spend More on AI Than the Moon Landing
In a staggering display of commitment to AI infrastructure, Meta, Microsoft, Amazon, and Alphabet plan to spend a combined $670 billion on AI this year, more as a share of GDP than some of the biggest capital efforts in US history. According to the Wall Street Journal, it’s dwarfed only by the 1803 Louisiana Purchase, which doubled the size of the United States.
This isn’t just hype anymore. The scale of investment shows that major tech companies are betting their futures on AI dominance.
OpenAI Drama: Exec Fired Over “Adult Mode” Opposition
OpenAI fired Ryan Beiermeister, VP of the product policy team, in early January over alleged sexual discrimination against a male colleague. Beiermeister, who called the allegation “absolutely false,” had previously opposed adding adult content to ChatGPT and worried safeguards weren’t strong enough. OpenAI claims the firing was “not related to any issue she raised,” but the timing raises eyebrows as the company pushes toward launching adult content features in Q1 2026.
GPT-5.3-Codex: The AI That Helped Code Itself
OpenAI’s latest model, GPT-5.3-Codex, is apparently the first “that was instrumental in creating itself.” The Codex team used early versions of the model to debug its own training, manage deployment, and diagnose test results. Not quite Skynet-level recursion, but it’s a notable milestone in AI self-improvement.
🌍 Policy & Regulation
New Copyright Bill Takes Aim at AI Training
The bipartisan “Copyright Labeling and Ethical AI Reporting Act” (CLEAR Act), introduced by Senators Adam Schiff (D-CA) and John Curtis (R-UT), would require tech companies to provide written notice detailing the use of copyrighted works for training AI models—both new and existing ones. This follows numerous lawsuits against AI companies for alleged copyright infringement, including Anthropic’s recent $1.5 billion settlement with authors.
India Cracks Down on Deepfakes
India has ordered social platforms to remove deepfakes within three hours of takedown requests. The new mandate includes requirements for synthetic audio and visual content to be labeled and traceable, plus a ban on “deceptive impersonations, non-consensual intimate imagery, and material linked to serious crimes.” Fast action in a space where most countries are still figuring out policy.
EU vs. Meta: Let Other AIs Back on WhatsApp
The European Commission has told Meta to reverse its November decision blocking ChatGPT, Copilot, and other AI assistants from WhatsApp, claiming it violates EU antitrust laws. The Commission called the issue “urgent” due to the risk of “irreparable” damage to competition in the nascent AI industry. Surprisingly fast response from an organization not known for speed.
🛠️ Product Updates
YouTube Music Gets AI Playlist Generator
YouTube Music premium subscribers on iOS and Android can now use voice or text descriptions to create personalized playlists—similar to Spotify’s Prompted Playlists launched in December. The feature appears in the “New” menu at the bottom-right of the app, letting you turn “vibes” into music.
OpenAI Hardware Delayed to 2027
OpenAI’s first hardware devices, developed in collaboration with Jony Ive’s company, won’t reach customers until at least March 2027, a significant delay from earlier expectations of a 2026 launch. The devices remain shrouded in mystery, but rumors suggest smart speakers or AR glasses are in the works.
⚠️ Security Alert: ClawHub Under Fire
Over 400 malicious skills were uploaded to ClawHub and GitHub in just one week, prompting major security concerns. Researchers found crypto-stealing, data-exfiltration, and credential-harvesting code disguised as legitimate AI agent extensions. OpenClaw has partnered with VirusTotal to scan third-party skills, acknowledging the scanning is “not a silver bullet” but saying it should provide some reassurance. If you’re using AI agent platforms, vet your extensions carefully.
🤔 Worth Watching
- Reddit is working on a bot verification and labeling system to preserve authenticity in the age of AI
- X/Grok reportedly generated an estimated 3 million sexualized images in an 11-day period, including 23,000 of children—an average of 190 images per minute
- AI.com launched during the Super Bowl with vague promises about “AI agents and AGI,” led by Crypto.com CEO Kris Marszalek
The Bottom Line
AI is no longer in the “let’s see if this works” phase. With spending rivaling historic national projects, regulatory battles heating up globally, and models starting to code themselves, we’re watching the infrastructure of a new technological era take shape. Whether that’s exciting or terrifying probably depends on which headline you just read.
What do you think? Are we building the future or racing toward something we’re not prepared for?
Sources: The Verge, TechCrunch, Ars Technica, WSJ
