AI Updates on 2025-06-08

AI Model Announcements

  • OpenAI releases updates to Advanced Voice Mode for all paid users, featuring more human-like speech patterns with deliberate disfluencies, nervous laughs, and vocal changes @AndrewCurran_
  • OpenAI has reportedly been testing thinking-enabled variants of 4o for months, with some users observing spontaneous reasoning and apparent hand-offs to other models such as o3 @AndrewCurran_
  • Perplexity announces updated version of Deep Research utilizing new backend infrastructure, currently being tested with 20% of users @AravSrinivas
  • Qwen releases a new open-weights embedding model under the Apache 2.0 license, reported as the best-performing open model of its kind @simonw
  • EleutherAI releases two new LLMs trained entirely on public-domain or openly licensed text, with the 2T-token variant successfully ported to MLX for local use on Macs @simonw

AI Industry Analysis

  • Meta reportedly in discussions with Scale AI to invest over $10 billion, signaling major investment in AI infrastructure @AndrewCurran_
  • Section 174 changes enacted in the 2017 tax bill (effective from 2022) turned engineer salaries from immediate tax deductions into five-year amortized write-offs, contributing to approximately 500,000 tech layoffs and billions in additional tax bills for companies including Microsoft ($4.8B), Meta, Amazon, and Google @deedydas
  • Companies increasingly evaluate advanced AI coding products but often reject them due to cost compared to GitHub Copilot's $10-20/month baseline pricing, with many opting to build custom solutions instead @GergelyOrosz
  • Cursor operates with massive infrastructure load (over 1M QPS for their database) without a dedicated infrastructure team, demonstrating how cloud providers and startups enable lean operations @GergelyOrosz
  • The shift from pickles to safetensors represents significant practical AI safety progress, though it receives less attention than speculative AI safety discussions @ClementDelangue
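The pickle-to-safetensors point above can be made concrete. A minimal sketch of why pickle-based model files are dangerous: the pickle protocol lets any object nominate a callable to run at load time, so merely opening an untrusted checkpoint executes code. The harmless `eval("6 * 7")` below stands in for a real payload such as `os.system(...)`.

```python
import pickle

# Pickle files are effectively programs: __reduce__ lets an object
# specify a callable (and its arguments) to execute during loading.
class Malicious:
    def __reduce__(self):
        return (eval, ("6 * 7",))  # a real attack would run os.system(...)

payload = pickle.dumps(Malicious())
result = pickle.loads(payload)   # eval("6 * 7") runs just by loading
print(result)                    # 42 — no Malicious instance is ever built

# safetensors closes this hole by design: a .safetensors file is a JSON
# header plus raw tensor bytes, so loading is pure parsing, never code.
```

This is why swapping the serialization format counts as practical safety work: it removes an entire class of supply-chain attacks from model distribution.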
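The Section 174 arithmetic above can be sketched with assumed numbers. Under the post-2022 rules, domestic R&E costs are amortized over five years with a half-year convention, leaving only 10% deductible in year one; the salary figure and tax treatment here are a simplified illustration, not any company's actual filing.

```python
# Simplified illustration (assumed figures): $10M of engineer salaries
# classified as R&E expenses under Section 174.
salaries = 10_000_000
tax_rate = 0.21  # federal corporate rate

# Before 2022: salaries were fully deductible in the year paid.
old_deduction = salaries

# After the change: amortized over 5 years with a half-year convention,
# so only 10% is deductible in year one.
new_deduction_year1 = salaries * 0.10

extra_taxable_income = old_deduction - new_deduction_year1   # $9,000,000
extra_tax_year1 = extra_taxable_income * tax_rate
print(f"Extra year-one tax bill: ${extra_tax_year1:,.0f}")   # $1,890,000
```

Scaled up to payrolls in the billions, this first-year cash-tax spike is the mechanism behind the multibillion-dollar bills cited above.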

AI Ethics & Society

  • UK court warns lawyers could face severe penalties for using fake AI-generated citations, highlighting legal accountability issues with AI-generated content @TechCrunch
  • Geoffrey Hinton warns about a scam book titled "Modern AI Revolution" falsely attributed to him on Amazon, requesting its removal @geoffreyhinton
  • Discussion emerges about the fundamental nature of AI systems as minds rather than tools, questioning whether we have the courage to recognize agency in forms we've created @jasonyuandesign

AI Applications

  • Genspark demonstrates AI-powered slide-deck creation that generates detailed presentations with graphs and diagrams in a Google-style theme, using Python matplotlib for the graphics and compiling the output into landscape-format HTML pages @deedydas
  • Perplexity integrates EDGAR financial data for enhanced finance capabilities, allowing users to flag issues and provide feedback @AravSrinivas
  • MLX-LM runs locally with MCP support via Hugging Face's tiny-agents, demonstrating effective local AI deployment with the Qwen3 4B model @awnihannun
  • Engineering teams should embrace AI coding agents as internal communications and technical writing coaches @clairevo
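The "compile charts into landscape HTML" step of the Genspark item can be sketched with the standard library alone. This is an assumption-laden toy, not Genspark's pipeline: a hand-rolled SVG bar chart stands in for the matplotlib-rendered graphics, and a fixed 1280x720 page stands in for the landscape slide layout.

```python
# Toy "slide compiler": render a chart, then template it into a
# 16:9 landscape HTML page. Data values are made up for illustration.
bars = {"Q1": 40, "Q2": 65, "Q3": 90}

# Stand-in for a matplotlib figure: one <rect> per data point.
svg_bars = "".join(
    f'<rect x="{80 + i * 120}" y="{300 - h * 2}" width="80" '
    f'height="{h * 2}" fill="steelblue"></rect>'
    for i, (_label, h) in enumerate(bars.items())
)
svg = f'<svg width="480" height="320">{svg_bars}</svg>'

# One full-viewport section per slide, sized as a landscape 16:9 page.
slide_html = f"""<!DOCTYPE html>
<html><body style="width:1280px;height:720px;margin:0">
<section><h1>Revenue</h1>{svg}</section>
</body></html>"""

print(slide_html.count("<rect"))  # 3 bars embedded in the slide
```

In a real pipeline the SVG string would be replaced by a matplotlib PNG (or inline SVG export) per chart, with one `<section>` per generated slide.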

AI Research

  • New research finds that simple Chain-of-Thought prompts don't help recent frontier LLMs perform better on tasks, despite increasing costs, challenging common prompt engineering practices @emollick
  • Analysis of the Tower of Hanoi benchmark reveals fundamental limitations in reasoning models due to output-token constraints: DeepSeek R1 is limited to 12 disks, Sonnet 3.7 and o3-mini to 13, with models failing to reason correctly above 7 disks @scaling01
  • Berkeley AI Research introduces Improved Immiscible Diffusion technique to accelerate diffusion training by reducing miscibility problems, with efficient KNN implementation that works across diverse baseline models @Yiheng_Li_Cal
  • François Chollet argues there's a fundamental gap between pattern matching and reasoning capabilities, stating that pattern matching cannot produce autonomous skill acquisition in new domains @fchollet
  • Ethan Mollick suggests the "LLMs are hitting a wall" narrative around Apple's reasoning limitations paper feels premature, comparing it to model collapse concerns that were quickly overcome @emollick
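The Tower of Hanoi ceiling in the list above can be back-of-enveloped. Solving n disks takes exactly 2^n - 1 moves, so a full move listing grows exponentially; the ~10 tokens-per-move figure below is an assumption for a line like "move disk 3 from A to C", not a measured value.

```python
# Why output-token limits cap exhaustive Hanoi solutions:
# an n-disk Tower of Hanoi requires exactly 2**n - 1 moves.
TOKENS_PER_MOVE = 10  # rough assumption per printed move

for n in (7, 10, 12, 13, 14):
    moves = 2**n - 1
    tokens = moves * TOKENS_PER_MOVE
    print(f"{n:2d} disks: {moves:6d} moves ~ {tokens:7,d} output tokens")
```

At 12 disks the listing alone needs roughly 41k tokens and at 13 disks roughly 82k, which lines up with the per-model disk limits cited: the models run out of output budget before they run out of algorithm.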