AI Updates on 2025-11-09

AI Model Announcements

  • OpenAI partially released GPT-5-Codex-Mini, a new model for code generation tasks with no API access yet; for now it is accessible only through the Codex CLI tool @simonw

AI Industry Analysis

  • Chris Lattner, creator of Swift and Mojo, argues against designing new programming languages specifically for LLMs, suggesting current languages are sufficient for AI-assisted development @GergelyOrosz
  • TechCrunch examines whether the AI hype cycle is eating itself, analyzing SoftBank and OpenAI's new joint venture as a case study @TechCrunch
  • MIT Technology Review reports that energy is king in AI development, with the US falling behind in this critical infrastructure race @techreview
  • Google generates roughly 10^15 tokens per month, about the volume of the internet's high-quality text every week, and at current growth rates its cumulative output will exceed all human speech in history by May 2032 @deedydas
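
A back-of-the-envelope sketch of the kind of extrapolation behind that claim. The monthly growth factor and the Fermi estimate of "all human speech in history" below are illustrative assumptions, not figures from the thread; the point is the method, and the answer swings by years with small changes to the growth assumption.

```python
# Extrapolate when cumulative token output would pass a fixed target.
# All parameters are illustrative assumptions, not the thread's figures.

MONTHLY_TOKENS = 1e15   # assumed current monthly token volume
MONTHLY_GROWTH = 1.15   # assumed month-over-month growth factor (+15%)

# Fermi estimate of all human speech in history:
# ~100 billion humans ever * ~5e8 spoken words per lifetime * ~1.3 tokens/word
HUMAN_SPEECH_TOKENS = 100e9 * 5e8 * 1.3   # ~6.5e19 tokens

def months_until_exceed(monthly: float, growth: float, target: float) -> int:
    """Count months until cumulative output passes `target`."""
    cumulative, months = 0.0, 0
    while cumulative < target:
        cumulative += monthly
        monthly *= growth
        months += 1
    return months

print(months_until_exceed(MONTHLY_TOKENS, MONTHLY_GROWTH, HUMAN_SPEECH_TOKENS))
# -> 66 under these assumptions, i.e. about five and a half years out
```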

AI Ethics & Society

  • Reid Hoffman emphasizes that technologists have an obligation to build technology that expands human agency rather than eroding it, advocating for a balanced approach between acceleration and thoughtful steering @reidhoffman
  • AI-generated anti-immigrant songs dominate the viral top 10 on Spotify in the Netherlands, with 8 of the 10 allegedly boosted by bot farms, raising concerns about AI-driven manipulation of cultural platforms @deedydas
  • Gergely Orosz warns that LLM hallucinations require constant validation, sharing an example where Claude fabricated quotes that didn't exist in the input text (a quote-checking sketch follows this list) @GergelyOrosz
  • OpenAI's Sora watermark now includes an account identifier, applied retroactively to previously generated content @AndrewCurran_
  • Simon Willison demonstrates how MCP uses OAuth's Dynamic Client Registration extension (RFC 7591), marking the first time this little-known feature has been deployed in widely used software (a minimal registration sketch follows this list) @simonw
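
For context on the Dynamic Client Registration item above: under RFC 7591, a client that has never met an authorization server can POST its own metadata to the server's registration endpoint and receive a client_id on the fly, which is what lets an MCP client talk to arbitrary servers without a developer pre-registering it. Below is a minimal sketch of such a request; the endpoint URL, client name, and redirect URI are placeholders, and this is not the actual MCP client code.

```python
import requests

# RFC 7591 Dynamic Client Registration: POST client metadata to the
# authorization server's registration endpoint, get credentials back.
REGISTRATION_ENDPOINT = "https://auth.example.com/oauth/register"  # placeholder

client_metadata = {
    "client_name": "example-mcp-client",                  # placeholder name
    "redirect_uris": ["http://localhost:8976/callback"],  # loopback redirect for a local client
    "grant_types": ["authorization_code", "refresh_token"],
    "response_types": ["code"],
    "token_endpoint_auth_method": "none",                 # public client, no secret
}

resp = requests.post(REGISTRATION_ENDPOINT, json=client_metadata, timeout=10)
resp.raise_for_status()
registration = resp.json()

# The server mints a client_id at request time; no manual registration step.
print(registration["client_id"])
```

The unusual part is exactly this self-service registration: classic OAuth assumes a developer registers each client by hand, while MCP clients need to onboard themselves against servers they discover at runtime.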
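
On the hallucination warning above, one cheap mechanical defense against fabricated quotes is to verify that every string the model presents as a verbatim quote actually occurs in the source text. A minimal sketch follows; the example strings are invented for illustration, and it only catches quotes that fail an exact, whitespace-normalized match.

```python
import re

def find_fabricated_quotes(source_text: str, quotes: list[str]) -> list[str]:
    """Return the quotes that do not appear verbatim in the source text.

    Whitespace is normalized on both sides so line wrapping does not
    cause false positives; anything else that differs is flagged.
    """
    normalize = lambda s: re.sub(r"\s+", " ", s).strip().lower()
    haystack = normalize(source_text)
    return [q for q in quotes if normalize(q) not in haystack]

# Usage: `quotes` would be the strings the LLM attributed to the input document.
source = "The migration took three weeks and required two rollbacks."
quotes = [
    "The migration took three weeks",        # genuine
    "the team described it as a disaster",   # fabricated: not in the source
]
print(find_fabricated_quotes(source, quotes))
# -> ['the team described it as a disaster']
```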

AI Applications

  • Evaluation shows Kimi K2 Thinking performs on par with GPT-5 on agentic customer support tasks, with no other LLM reaching its level of orchestration and reasoning @omarsar0
  • Kimi K2 Thinking produces significantly more thinking tokens than other models, generating 1,595 tokens for simple queries like "write me a really good sentence about cheese" compared to DeepSeek's 110 tokens @emollick
  • Research demonstrates that providing first-generation college students with LLM guidance significantly closes the gap in understanding unwritten rules for academic success, such as the value of internships and student clubs @emollick
  • Claude Code successfully organized, improved, and updated multiple small programs originally created with GPT-4, demonstrating the moving frontier of AI coding capabilities @emollick
  • Simon Willison hacked OpenAI's Codex CLI tool to add a new prompt command, enabling access to private models and getting the tool to reverse-engineer and extend itself @simonw
  • Perplexity announces Comet Android early access invites, prioritizing users based on Android usage and Pro/Max subscription status @AravSrinivas

AI Research

  • Ethan Mollick raises concerns about academia's lack of mechanisms to accommodate, review, and disseminate a potential sudden increase in AI-generated scientific discoveries, questioning who will read, integrate, and build upon thousands of new papers @emollick
  • Analysis suggests that while AI doing novel science seems plausible in some fields, tasks requiring integration and theorizing across wide knowledge ranges remain further outside the current frontier @emollick
  • Comparing AI models on prompts asking where they would intervene in history reveals that even Chinese models suggested only Western and Middle Eastern interventions, with none selecting options in Asia, Africa, or the Americas despite considering them in their thinking traces @emollick
  • Critique suggests that DPO (Direct Preference Optimization) was an accidentally effective decelerationist paper, causing academic resources to focus on variants instead of building infrastructure for policy gradients at scale @kalomaze
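
For context on what that critique targets: the DPO objective from Rafailov et al. (2023) is an offline, classification-style loss over preference pairs, which is exactly what let it sidestep the reward-model and rollout infrastructure that policy-gradient methods such as PPO need:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) =
-\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}
\left[\log \sigma\!\left(
\beta \log \frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)}
-\beta \log \frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}
\right)\right]
$$

where $y_w$ and $y_l$ are the preferred and rejected completions for prompt $x$, $\pi_{\mathrm{ref}}$ is a frozen reference policy, and $\beta$ controls how far $\pi_\theta$ may drift from it. The critique is that this simplicity drew academic effort toward DPO variants rather than toward the harder systems work of scaling on-policy policy-gradient training.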