Plot Twist: OpenAI Just Changed the Game (Again)
Two open source, open weights models just dropped — and they’re good.
We’re talking Codeforces scores on par with o3 and o4-mini, open licensing (Apache 2.0), and models efficient enough to run on consumer hardware.
In this video, I break down:
Why GPT-OSS caught everyone off guard
The benchmarks that rival closed models
How these models were trained (spoiler: cutting-edge RL with OpenAI’s secret sauce)
What this means for tool use, agent workflows, and on-device inference
And why this could be the biggest open source moment in AI since LLaMA
Also: safety concerns, risks of open weights, and what OpenAI might be signaling with this release — just 48 hours before GPT-5 is expected to launch.
Yes, this is real. Yes, it’s a comeback for open source.
And yes… it’s a plot twist.
👉 Watch till the end to find out how this shapes the future of democratized AI.
The latest AI news. Learn about LLMs and Gen AI, and get ready for the rollout of AGI. Wes Roth covers the latest happenings in the world of OpenAI, Google, Anthropic, NVIDIA, and Open Source AI.
______________________________________________
My Links 🔗
➡️ Twitter: https://x.com/WesRothMoney
➡️ AI Newsletter: https://natural20.beehiiv.com/subscribe
Want to work with me?
Brand, sponsorship & business inquiries: wesroth@smoothmedia.co
Check out my AI podcast, where Dylan and I interview AI experts:
https://www.youtube.com/@Wes-Dylan
______________________________________________
⏱️ Video Chapters
0:00 – Open Source Shock Drop
OpenAI quietly releases two open-source models (GPT-OSS) that rival o3 and o4-mini. No one saw this coming.
1:50 – Performance Benchmarks: OSS vs GPT
Codeforces, MMLU, HumanEval, and more — how these open models hold up to closed models on reasoning and tool use.
4:10 – Training Secrets & Universal Verifier
A look into the rumored training methods like OpenAI’s "universal verifier" and its link to recent coding/math breakthroughs.
6:00 – Model Architecture & Deployment
Mixture of Experts (MoE), tokenizers, consumer-hardware efficiency, and how the 20B model can run on just 16 GB of RAM.
8:40 – Risks of Open Weights & Chain of Thought
Why open weights can’t be recalled, and the emerging safety issues with chain-of-thought supervision and misuse.
11:00 – The Bigger Picture: Decentralization & Comeback
Why this open release might be OpenAI returning to its roots — and how it repositions the U.S. in the open-source AI race.
#ai #openai #llm