AI Updates on 2026-02-10
AI Model Announcements
- Perplexity upgrades Advanced Deep Research to run on Opus 4.6, extending its lead on Google's DSQA benchmark @AravSrinivas
- OpenAI launches deep research powered by GPT-5.2 with app connections, real-time progress tracking, and fullscreen reports @OpenAI
- Cursor releases Composer 1.5, striking a balance between intelligence and speed for AI-assisted coding @cursor_ai
AI Industry Analysis
- Former GitHub CEO Thomas Dohmke raises a record $60M seed round at a $300M valuation for Entire, an agent-first dev platform @TechCrunch
- OpenAI's Codex App surpasses 1 million downloads in first week with 60% growth in overall Codex users @sama
- Andrew Ng observes AI causing subtle job displacement as businesses replace employees who don't adapt to AI tools with those who do @AndrewYNg
- LLMs have tripled new book releases since 2022; while average quality fell, quality among books ranked 100-1,000 per category improved, and pre-LLM authors became more productive @emollick
AI Ethics & Society
- Anthropic relies primarily on an internal employee survey to determine whether Opus 4.6 crossed the autonomous AI R&D threshold, raising concerns about deployment responsibility @polynoamial
- AI Now Institute releases essays on accountability, frugal AI, democratization, and "AI for Good" questioning tech industry narratives @AINowInstitute
- Stanford HAI releases policy brief warning AI could worsen health insurance delays and wrongful denials without proper safeguards @StanfordHAI
AI Applications
- Isomorphic Labs' drug design engine more than doubles AlphaFold 3's performance on key benchmarks for biomolecular structure prediction @demishassabis
- MIT graduate develops optical AI system analyzing figure skaters' jumps, working with NBC Sports for 2026 Winter Olympics coverage @MIT
AI Research
- ALMA system enables AI agents to automatically design memory mechanisms, outperforming hand-crafted designs across sequential decision-making domains @jeffclune
- Unsloth releases Triton kernels enabling 12× faster MoE model training with 35% less VRAM and no accuracy loss @UnslothAI
- New research derives neural scaling law exponents from natural language statistics, predicting data-limited scaling from first principles @SuryaGanguli
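For context on the scaling-law result above: this is an illustrative sketch of the standard power-law form such laws take, not the paper's actual derivation, and the symbols (L_inf, c, alpha_D) are generic placeholders rather than the paper's notation.

```latex
% Data-limited scaling is conventionally written as a power law in dataset size D:
%   L(D) ~ L_inf + c * D^{-alpha_D}
\[
  L(D) \;\approx\; L_{\infty} + c\,D^{-\alpha_D},
\]
% where L_inf is the irreducible loss, c a constant, and alpha_D the data-scaling
% exponent. The cited work's claim is that alpha_D can be predicted from the
% statistics of natural language (e.g. the heavy tail of the token-frequency
% distribution) rather than fit empirically to training runs.
```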