Article • Scaling Test-Time Compute to Achieve Gold Medal at IOI 2025 with Open-Weight Models • Oct 20 • 20
Article • FineWeb-C: A Community-Driven Dataset for Educational Quality Annotations in 122 Languages • Jul 8 • 32
Article • Explore, Build, and Innovate AI Reasoning with NVIDIA’s Open Models and Recipes • Jun 4 • 21
Paper • SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics • 2506.01844 • Published Jun 2 • 143
Paper • Unified Reward Model for Multimodal Understanding and Generation • 2503.05236 • Published Mar 7 • 122
Article • A Deepdive into Aya Vision: Advancing the Frontier of Multilingual Multimodality • Mar 4 • 78
Collection • Cohere Labs Aya Vision • Aya Vision is a state-of-the-art family of vision models that brings multimodal capabilities to 23 languages. • 5 items • Updated Jul 31 • 70
Paper • How to Get Your LLM to Generate Challenging Problems for Evaluation • 2502.14678 • Published Feb 20 • 18
Paper • From Tools to Teammates: Evaluating LLMs in Multi-Session Coding Interactions • 2502.13791 • Published Feb 19 • 5
Paper • SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training • 2501.17161 • Published Jan 28 • 123