How GPT‑5 Helped Crack a 40-Year-Old Optimization Puzzle
Breakthroughs don’t always come from all‑nighters and chalkboards covered in symbols. Sometimes, they come from a quiet moment, a stubborn question, and an unexpectedly smart conversation with AI! 🤖
That’s exactly what happened to Professor Ernest Ryu, a UCLA mathematician who has spent most of his career swimming in the world of optimization theory. After years of wrestling with a long‑standing open problem, he did something unusual for someone in his field: he asked GPT‑5 to help him think.
Ryu wasn’t new to language models. Back in 2023, he tested ChatGPT, then running GPT‑3.5, on small logic puzzles: things like scheduling meetings across time zones. It could pick up subtle hints and implicit constraints, but its reasoning was hit‑and‑miss. Useful, sure. Reliable? Not quite.
But by 2025, GPT‑5 had a reputation for much stronger mathematical reasoning. So Ryu gave it another try, this time with a problem people had been trying (and failing) to solve since the early ’80s. He wasn’t expecting a miracle, but he was curious enough to see whether the model could spark new ideas.
It did exactly that.

The Old Problem Everyone Knew… but No One Could Explain
The question bugging Ryu wasn’t loud or dramatic. It was one of those quiet, nagging mysteries that sat at the center of optimization theory:
Why does Nesterov Accelerated Gradient (NAG) stay stable even when momentum increases?
NAG, introduced in 1983 by Yurii Nesterov, is famous for giving gradient methods a significant speed boost. Instead of taking a step, checking in, and adjusting, NAG looks ahead, making updates based on where the solution is headed rather than where it currently is.
Picture walking downhill while leaning slightly forward. You move faster, but you somehow don’t fall. That’s NAG in a nutshell.
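To make that concrete, here’s a minimal sketch of the textbook NAG update in Python (our illustration, not Ryu’s code or analysis). The toy quadratic, the 1/L step size, and the common k/(k+3) momentum schedule are standard textbook choices assumed here for demonstration:

```python
import numpy as np

def nag(grad, x0, alpha, num_iters):
    """Nesterov Accelerated Gradient on a smooth convex function.

    grad      -- gradient oracle for the objective
    x0        -- starting point
    alpha     -- step size (typically 1/L for an L-smooth objective)
    num_iters -- number of iterations
    """
    x_prev = x0.copy()
    y = x0.copy()
    for k in range(num_iters):
        # Gradient step taken from the look-ahead point y, not from x itself.
        x = y - alpha * grad(y)
        # Momentum: extrapolate past the new iterate, using the standard
        # k/(k+3) schedule so the momentum coefficient grows toward 1.
        beta = k / (k + 3)
        y = x + beta * (x - x_prev)
        x_prev = x
    return x

# Toy demo: minimize f(x) = 0.5 * x^T A x, whose minimizer is the origin.
A = np.diag([1.0, 10.0])                    # largest eigenvalue L = 10
grad_f = lambda x: A @ x
x_star = nag(grad_f, np.array([5.0, 5.0]), alpha=1.0 / 10.0, num_iters=200)
print(x_star)                               # close to [0, 0]
```

The look‑ahead in the y update is the whole trick: the gradient is evaluated where the iterates are headed, not where they are.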
The weird part? Adding momentum should make things shaky. In theory, it should introduce overshooting or instability. But NAG doesn’t. It’s fast and stable. For decades, everyone could see that it worked, but no one could fully explain why.
And in optimization, “we know it works but don’t know why” is not a satisfying answer. Stability matters. Efficiency matters. Predictability matters. Understanding the mechanics behind NAG wasn’t just academic curiosity; it influenced how algorithms were designed and used in machine learning, engineering, and beyond.
When GPT‑5 Became a Research Partner
Ryu approached GPT‑5 with a simple goal: explore the problem from new angles. Not “solve this for me,” but “help me think differently.”
What surprised him was how well the model could surface niche mathematical tools, reference obscure papers, and connect ideas that normally would’ve taken days of reading to uncover. It didn’t produce a finished proof, but it rapidly generated possibilities, sketched out conceptual paths, and challenged assumptions in a way that sped up Ryu’s reasoning.
Instead of being a machine that spits out answers, GPT‑5 became an intellectual partner. It sorted through the vast landscape of optimization literature and cross‑linked concepts, providing the spark Ryu needed to see the problem with fresh clarity.
That’s what finally opened the door.
A problem that had stubbornly resisted decades of human effort began to unravel, not because the AI solved it, but because it helped illuminate the path forward.
Turning Prompts Into Progress: Ryu’s Process with GPT‑5
Ryu didn’t go into this expecting a breakthrough. He was just curious. Night after night, after his kids were asleep, he fired up GPT‑5 and started poking around with ideas. What came back was weird, creative, and (occasionally) completely wrong.
But that’s what made it valuable.
GPT‑5 wasn’t inventing brand-new math. It was connecting dots. It pulled in references from obscure corners of academic literature and offered unexpected perspectives, sometimes messy, sometimes brilliant. For Ryu, it became a sounding board. A hyperactive one!
The AI Research Assistant That Never Gets Tired
Throughout the process, GPT‑5 wasn’t flawless; it often suggested arguments that looked fine on the surface but collapsed under scrutiny. But that didn’t matter. What mattered was speed and volume. GPT‑5 could generate more variations, faster, than any human peer could.
It was like exploring a massive maze with someone who could instantly light up the next 20 paths. Most still led nowhere, but now Ryu could rule them out in minutes instead of days. That acceleration changed everything.
In just three days (roughly twelve hours of deep exploration), Ryu found a working approach to a problem that had baffled experts for four decades.
Why This Collaboration Mattered
Ryu didn’t hand over the problem and wait for a solution. He stayed in the driver’s seat, constantly testing, verifying, and guiding the model. The final proof? He wrote it. But several of the most important insights along the way came directly from GPT‑5’s suggestions.
He even developed a trick to improve results: when checking an argument, it worked better to start a fresh chat than to ask the model to reflect on its own answers in the same thread. That simple shift reduced error carryover and made the process more efficient.
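As a rough sketch of that workflow (our illustration, not Ryu’s actual setup; the model name is a hypothetical placeholder), the fresh‑chat trick amounts to sending the candidate argument with no conversation history attached:

```python
from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment
MODEL = "gpt-5"     # placeholder model name, assumed for illustration

def check_in_fresh_chat(argument: str) -> str:
    """Verify a mathematical argument in a brand-new conversation.

    Instead of asking the model to critique its own answer inside the
    thread that produced it, we send the argument alone, with no prior
    context for the model to lean on.
    """
    response = client.chat.completions.create(
        model=MODEL,
        messages=[  # fresh thread: a single message, no history
            {
                "role": "user",
                "content": (
                    "Carefully check the following proof sketch for errors. "
                    "Point out any step that does not follow:\n\n" + argument
                ),
            }
        ],
    )
    return response.choices[0].message.content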
AI in Research Isn’t About Replacing People
Ryu’s takeaway was clear: GPT‑5 isn’t a replacement for human researchers; it’s a tool that’s only useful when wielded by someone who knows what they’re doing. You still need to know your field inside out. You still need to double- and triple-check everything. But if you do that, the model becomes a genuinely valuable research partner.
