We just raised our Series B! NeuralScale is building the infrastructure layer that will power the next generation of AI applications.
Think of us as 'Vercel for AI inference' — but with 10x lower latency and 3x lower cost than existing solutions.
Hiring across the board: infra engineers, ML engineers, and product managers. DM me or check the jobs page.
@elenarodriguez · Research Director at Hugging Face
Open-source AI is winning, and here's the data to prove it.
We analyzed model adoption across 500 companies:
• 67% use at least one open model in production
• Open model usage grew 340% year-over-year
• Cost savings average 4.2x vs proprietary APIs
• Customization cited as #1 reason (not cost)
The ecosystem has never been healthier.
@sophiakim · AI Research Scientist at Google DeepMind
Fascinating result from our experiments on in-context learning:
We found that the order of few-shot examples matters dramatically — sometimes more than the examples themselves.
Optimal ordering improved accuracy by 15-30% across 12 benchmarks. We're calling it 'positional priming' and working on a paper.
Has anyone else observed this?
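The effect is straightforward to probe yourself: permute the few-shot block and score each ordering on a held-out set. A minimal sketch — the `score` function here is a hypothetical stand-in for an actual model evaluation, and the examples are illustrative:

```python
from itertools import permutations

# Few-shot examples as (input, label) pairs (illustrative toy data).
examples = [
    ("great movie", "positive"),
    ("terrible plot", "negative"),
    ("loved the acting", "positive"),
]

def build_prompt(ordering, query):
    """Concatenate the examples in the given order, then append the query."""
    shots = "\n".join(f"Input: {x}\nLabel: {y}" for x, y in ordering)
    return f"{shots}\nInput: {query}\nLabel:"

def score(prompt):
    # Hypothetical stand-in: a real experiment would send the prompt to a
    # model and return held-out accuracy for this particular ordering.
    return len(prompt) % 7  # deterministic placeholder

# Exhaustive search over orderings — feasible only for small shot counts
# (k! prompts); larger k needs sampling or a heuristic.
best = max(
    permutations(examples),
    key=lambda ordering: score(build_prompt(ordering, "what a film")),
)
```

Even this brute-force loop makes the point: with k shots there are k! orderings, so if ordering really shifts accuracy by double digits, the prompt-construction step deserves a search budget of its own.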
@marcusthompson · Founding Engineer at Runway · Ex-OpenAI
Lessons from building real-time AI video generation at Runway:
1. Latency budgets are everything — users notice >100ms
2. Streaming architectures beat batch processing 10:1
3. Progressive rendering is key to perceived speed
4. GPU memory management is the real engineering challenge
5. The gap between demo and production is 6-12 months
Shipping creative AI is wildly different from shipping chatbots.
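Points 2 and 3 above can be sketched in a few lines: a batch pipeline makes the user wait for the whole clip, while a streaming pipeline yields each frame as soon as it is rendered, so perceived latency drops to one frame. This is a toy illustration, not Runway's actual architecture — the renderer below is a placeholder for a real generation step:

```python
def generate_frames_batch(n, render):
    # Batch: nothing is shown until all n frames are done,
    # so perceived latency is the full clip's render time.
    return [render(i) for i in range(n)]

def generate_frames_streaming(n, render):
    # Streaming: yield each frame the moment it is rendered,
    # so perceived latency is a single frame's render time.
    for i in range(n):
        yield render(i)

# Toy renderer standing in for a real model's per-frame generation.
render = lambda i: f"frame-{i}"

batch = generate_frames_batch(4, render)
streamed = list(generate_frames_streaming(4, render))
```

Both produce the same frames; the difference is purely in when the first one reaches the user, which is what progressive rendering optimizes for.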