We just raised our Series B! NeuralScale is building the infrastructure layer that will power the next generation of AI applications.
Think of us as 'Vercel for AI inference', but with 10x lower latency and 3x lower cost than existing solutions.
Hiring across the board: infra engineers, ML engineers, and product managers. DM me or check the jobs page.
The new FLUX model is genuinely impressive for real-time video understanding. I've been benchmarking it against our production pipeline at Tesla.
Key observations:
• 3x faster inference than previous SOTA
• Better temporal consistency across frames
• Still struggles with fine-grained action recognition
• Impressive zero-shot performance on edge cases
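If you want to sanity-check the latency numbers on your own hardware, a minimal timing harness like the sketch below is enough. Everything model-specific is a stand-in: the Identity modules are placeholders for the actual FLUX pipeline and your own baseline, and the clip shape is just a dummy 16-frame 224x224 RGB tensor.

```python
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

def bench(model, clip, warmup=10, iters=100):
    """Median forward-pass latency in milliseconds for one clip."""
    with torch.inference_mode():
        for _ in range(warmup):                  # warm up kernels and caches
            model(clip)
        times = []
        for _ in range(iters):
            if device == "cuda":
                torch.cuda.synchronize()         # don't time queued async work
            start = time.perf_counter()
            model(clip)
            if device == "cuda":
                torch.cuda.synchronize()
            times.append((time.perf_counter() - start) * 1e3)
    return sorted(times)[len(times) // 2]        # median resists stragglers

# Dummy clip; the Identity modules are stand-ins for the real FLUX pipeline
# and your production model.
clip = torch.randn(1, 16, 3, 224, 224, device=device)
for name in ("flux", "baseline"):
    model = torch.nn.Identity().to(device)
    print(f"{name}: {bench(model, clip):.2f} ms/clip")
```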
Anyone else running comparisons?
@sophiakim · AI Research Scientist at Google DeepMind
Our latest paper on emergent world models in large language models just got accepted at ICML 2026! 🎉
Key finding: When you train LLMs on enough diverse data, they develop internal representations that look remarkably like simplified physics engines.
This has huge implications for how we think about grounding and embodied AI.
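For those asking how a finding like this is typically established (not necessarily this paper's exact method): the standard tool is linear probing. Fit a linear map from a layer's hidden activations to ground-truth physical state; a high held-out R^2 from a purely linear map is evidence the state is encoded near-linearly. A toy sketch with synthetic data standing in for real activations:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins: in a real probe, `hidden` is the LLM's layer activations for
# text describing each scene, `positions` the true object coordinates.
positions = rng.uniform(-1, 1, size=(2000, 2))               # (x, y) per scene
W = rng.normal(size=(2, 768))
hidden = positions @ W + 0.1 * rng.normal(size=(2000, 768))  # fake activations

h_tr, h_te, p_tr, p_te = train_test_split(hidden, positions, random_state=0)
probe = Ridge(alpha=1.0).fit(h_tr, p_tr)
# High held-out R^2 from a *linear* probe = state is linearly decodable.
print(f"probe R^2: {probe.score(h_te, p_te):.3f}")
```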
Unpopular opinion: The best MLOps is the MLOps you don't need.
Before building a complex ML pipeline, ask:
1. Can a simpler model solve this?
2. Do you actually need real-time inference?
3. Is batch processing good enough?
4. Can you use a managed API instead?
90% of the time, the answer to at least one of these is 'yes'. Stop over-engineering.
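To make question 1 concrete: a cross-validated linear baseline is a few lines with scikit-learn, and it sets the bar any fancier pipeline has to beat. A minimal sketch, with load_digits standing in for your own (X, y):

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)  # stand-in: substitute your own data

# The "simpler model" from question 1: scaled logistic regression.
baseline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(baseline, X, y, cv=5)
print(f"baseline accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
# If this clears the product bar, you just saved yourself a pipeline.
```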
@sarahchen · ML Engineering Lead · Previously DeepMind
Curating the best open-source AI tools released in Q1 2026:
1. Llama 4 Scout — Meta's most capable open model yet
2. Stable Diffusion 4 — Incredible image quality improvements
3. DeepSeek-R1 — Best reasoning in open source
4. Whisper V4 — Near-human transcription quality
5. Moshi by Kyutai — Real-time multimodal conversation
What am I missing? Drop your favorites below. 👇
Controversial: The 'bigger is better' era of foundation models is ending.
Our latest research shows that smaller, specialized models (7-13B parameters) consistently outperform 70B+ generalists on domain-specific tasks when properly fine-tuned.
The future isn't one mega-model. It's an ecosystem of specialized experts.
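One cheap way to get that specialization is LoRA fine-tuning, which freezes the base weights and trains low-rank adapters on top (the post doesn't specify a method; LoRA is just the common choice). A minimal sketch with Hugging Face peft; the model name is a placeholder and the target module names vary by architecture:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Placeholder name; substitute any 7-13B base model you have access to.
model = AutoModelForCausalLM.from_pretrained("small-specialist-7b")

config = LoraConfig(
    r=16,                                 # rank of the low-rank update
    lora_alpha=32,                        # scaling applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # names vary by architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)     # base frozen, adapters added
model.print_trainable_parameters()        # typically well under 1% trainable
# Train on the domain corpus with your usual loop or transformers.Trainer.
```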