Maligned - November 22, 2025
AI news without the BS
Here's what actually matters in AI today. No fluff, no hype - just 5 developments worth your time.
Today's Top 5 AI Developments
1. One LLM, Many Sizes: Nemotron Elastic Cuts Costs 💰
Training a suite of LLMs for different deployment needs is a huge drain. Nemotron Elastic changes the game by embedding multiple nested submodels within a single parent model. This means you get 9B and 6B models from a 12B parent with zero-shot extraction and massive cost savings: we're talking 360x less than training from scratch (rough sketch of the nesting idea below).
Source: arXiv Link: https://arxiv.org/abs/2511.16664v1
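To make the "nested submodels" idea concrete, here's a toy PyTorch sketch of weight slicing, not NVIDIA's actual Nemotron Elastic architecture: the parent's leading rows and columns double as a smaller child model, so a child can be pulled out with zero extra training. The ElasticLinear class, widths, and init values are all invented for illustration.

```python
# Toy illustration of nested ("elastic") submodels via weight slicing.
# NOT Nemotron Elastic's real code; class name and sizes are made up.
import torch
import torch.nn as nn

class ElasticLinear(nn.Module):
    """A linear layer whose leading rows/columns form smaller nested layers."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x, in_width=None, out_width=None):
        # Slice the shared parameters down to the requested child width.
        in_w = in_width or self.weight.shape[1]
        out_w = out_width or self.weight.shape[0]
        return x[..., :in_w] @ self.weight[:out_w, :in_w].T + self.bias[:out_w]

layer = ElasticLinear(in_features=1024, out_features=1024)  # the "parent"
x = torch.randn(2, 1024)

full = layer(x)                                 # full-width parent path
child = layer(x, in_width=768, out_width=768)   # nested child, extracted zero-shot
print(full.shape, child.shape)                  # torch.Size([2, 1024]) torch.Size([2, 768])
```

Because the child's parameters are literally a slice of the parent's, training the parent with the child widths in the mix keeps every nested size usable, which is the rough intuition behind getting several model sizes for one training bill.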
2. LMMs Learn to Self-Improve, No Humans Needed 🤖
EvoLMM introduces a self-evolving framework for Large Multimodal Models (LMMs) that cuts out the need for human-curated data or external reward models. It works by having two agents from the same backbone, a Proposer and a Solver, cooperatively generate and solve image-grounded questions, with internal consistency as the training signal (toy loop sketched below). This unsupervised method shows consistent gains on multimodal math-reasoning benchmarks, pointing to a future where LMMs truly improve themselves.
Source: arXiv Link: https://arxiv.org/abs/2511.16672v1
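Here's a toy Python sketch of what a proposer/solver consistency loop can look like, not EvoLMM's actual training code: backbone() is a placeholder for any multimodal model call, and the reward here is plain majority agreement among sampled answers, just one assumed way to measure internal consistency.

```python
# Toy proposer/solver self-improvement loop (illustrative, not EvoLMM's code).
from collections import Counter

def backbone(prompt, image, temperature=1.0):
    """Placeholder for a shared LMM call; both roles use the same model."""
    raise NotImplementedError("plug in your multimodal model here")

def self_evolve_step(image, n_answers=8, threshold=0.5):
    # Proposer: invent an image-grounded question (no human annotation involved).
    question = backbone("Ask a challenging question about this image.", image)

    # Solver: sample several answers to the proposer's question.
    answers = [backbone(question, image, temperature=1.0) for _ in range(n_answers)]

    # Internal-consistency reward: how strongly the sampled answers agree.
    top_answer, count = Counter(answers).most_common(1)[0]
    consistency = count / n_answers

    # Keep confidently-agreed pairs as fresh training data for the same backbone.
    if consistency >= threshold:
        return {"question": question, "answer": top_answer, "reward": consistency}
    return None
```

The appeal is that nothing outside the model grades the model: agreement among its own samples stands in for a reward, so the loop can run on unlabeled images.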
3. Robots Master Dexterity from Smart Glasses 👓
Imagine robots learning complex multi-fingered tasks just by watching humans wearing smart glasses. That's the leap AINA makes, enabling robot policies to be learned directly from "in-the-wild" human demonstrations captured by devices like Aria Gen 2 glasses. This bypasses the usual embodiment gap and costly robot data collection, bringing us closer to generalizable robot manipulation in our messy human environments (generic imitation-learning sketch below).
Source: arXiv Link: https://arxiv.org/abs/2511.16661v1
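For flavor, here's a generic behavior-cloning sketch in PyTorch, not AINA's actual pipeline: assume the glasses footage has already been turned into observation features and the tracked human hand poses retargeted to robot joint targets, then a small policy is fit to those pairs. The feature size, action dimension, and random "demo" tensors are all stand-ins.

```python
# Generic behavior cloning from demonstration pairs (illustrative only;
# feature/action dimensions and the fake demo data are assumptions).
import torch
import torch.nn as nn

class Policy(nn.Module):
    def __init__(self, obs_dim=512, act_dim=22):  # e.g. multi-fingered hand joints
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs):
        return self.net(obs)

# Stand-ins for features from egocentric video and retargeted hand-pose actions.
obs = torch.randn(1000, 512)
actions = torch.randn(1000, 22)

policy = Policy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for epoch in range(20):
    loss = ((policy(obs) - actions) ** 2).mean()  # plain MSE imitation loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The hard parts live upstream of a loop like this: extracting reliable hand and scene data from in-the-wild glasses recordings and crossing the human-to-robot embodiment gap.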
4. Ditch Text: Video Is the New AI Answer 🎬
Why read instructions when you can watch them? VANS pioneers "Video-as-Answer" (VNEP), where AI models predict and generate dynamic video responses for procedural or predictive questions, rather than just spitting out text. This shift from telling to showing unlocks more intuitive learning and creative exploration, especially for tasks where visual demonstration is critical, and could fundamentally change how we interact with AI.
Source: arXiv Link: https://arxiv.org/abs/2511.16669v1
5. LLMs Don't Think Like You (And Here's Why) 🧠
Before we crown LLMs as true reasoners, we need to get real. Research from "Cognitive Foundations" reveals LLMs primarily use shallow forward chaining, lacking the hierarchical nesting and meta-cognitive monitoring humans employ. While they exhibit behaviors associated with success, they often fail to deploy them spontaneously, especially on ill-structured problems. The paper calls out how much research neglects these critical "meta-cognitive controls" in favor of easily quantifiable metrics, a much-needed dose of reality.
Source: arXiv Link: https://arxiv.org/abs/2511.16660v1
That's it for today. Stay aligned. 🎯
Maligned - AI news without the BS