OpenAI just rolled out a bold claim: its GPT-5 model now matches or exceeds human performance on about 40.6 percent of tasks across 44 occupations in nine major U.S. economic sectors, according to a newly published evaluation called GDPval. The benchmark, which grades AI-generated work like legal briefs, engineering analyses, clinical notes, and financial research against the output of human professionals, is OpenAI's attempt to quantify how close large language models are getting to real-world, economically valuable work. That said, OpenAI and observers stress the limitations: the test covers only certain task types, not full job responsibilities, and it doesn't yet account for human oversight, long-term planning, or context-heavy decisions. Other perspectives likewise caution against overinterpreting the data. According to reporting from Axios, scores nearly tripled from GPT-4o to GPT-5, yet speed, cost, and real-world nuance complicate how much this translates into job substitution. In The Guardian, the model is praised for leaps in coding, creative writing, and accuracy, yet Sam Altman notes it still lacks traits like continuous learning and autonomous adaptation. Developers offer more mixed praise: Wired describes GPT-5 as a "mixed bag" in coding tasks and calls some of the benchmarking "misleading."
Key Takeaways
– GDPval marks a new step in AI benchmarking by comparing models directly with human professionals on real-work deliverables, and GPT-5 achieves parity or better in roughly 40.6% of those tasks.
– The results are notable but limited: the benchmark doesn’t represent full job complexity, long-term planning, or the context and oversight humans bring.
– Reactions are mixed — OpenAI views it as progress toward AGI, while critics warn that hype may outpace practical utility, especially in real production environments.
In-Depth
OpenAI’s announcement of GPT-5’s performance claims, backed by the new GDPval benchmark, turns up the volume on the conversation about where AI is heading. At its core, GDPval (short for “Gross Domestic Product valuation,” per OpenAI) is designed to assess how capable models are at tasks that matter economically: drafting reports, summarizing financial results, producing medical notes, building competitor analyses, or even sketching engineering work. In its first iteration, GDPval covers 1,320 distinct tasks across 44 occupations in nine sectors deemed most important to the U.S. economy. The outputs are graded by experienced professionals, who judge AI-produced and human-produced deliverables in blinded pairwise comparisons. OpenAI reports that its “GPT-5-high” configuration was ranked as equal to or better than industry professionals about 40.6 percent of the time.
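The scoring scheme described above boils down to a simple win-rate calculation over blinded pairwise judgments, where parity with the human professional counts toward the model's score. The sketch below is purely illustrative: the `win_rate` function, the grade labels, and the sample distribution are hypothetical, not OpenAI's actual grading tooling or data.

```python
from collections import Counter

def win_rate(grades):
    """Share of blinded pairwise judgments in which the AI deliverable
    was ranked as good as or better than the human professional's.
    `grades` is a list of "win", "tie", or "loss" labels (hypothetical)."""
    counts = Counter(grades)
    favorable = counts["win"] + counts["tie"]  # parity counts toward the score
    return favorable / len(grades)

# Illustrative only: 1,320 graded tasks yielding a ~40.6% "equal or better" rate
sample = ["win"] * 300 + ["tie"] * 236 + ["loss"] * 784
print(f"{win_rate(sample):.1%}")  # → 40.6%
```

Note that under this kind of metric, ties and outright wins are indistinguishable in the headline number, which is one reason observers urge caution when reading "40.6 percent" as a measure of substitution rather than parity.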
That’s a striking leap: older models like GPT-4o scored only ~13.7 percent in the same evaluation, so by the company’s measurement, GPT-5 nearly tripled task-level “win rates.” OpenAI casts this as a sign that AI is increasingly able to “offload some of the work” from professionals, letting humans focus on higher-value parts of their jobs. The firm also emphasizes that this is not a claim of job replacement: GDPval doesn’t test management, negotiation, long-horizon strategy, human judgment at scale, or messy real-world workflows.
Still, the bold framing has drawn both excitement and skepticism. The Axios write-up underscores that speed, cost, and benchmarking limits muddy the translation from scores to real impact: a model might perform well in a controlled task environment yet struggle in live, messy settings. In The Guardian, OpenAI is praised for improving GPT-5’s coding, creativity, safety, and integration features, but Sam Altman admits the model lacks continuous learning and other hallmarks of true AGI. Wired offers a more critical technical take: some developers say GPT-5’s benchmarking glosses over weaknesses such as uneven code quality, verbosity, and hallucination, and that comparisons to rivals like Claude show how benchmark design and presentation can shape perception.
To be conservative about what this means: GPT-5’s progress is real and impressive in task-based settings. It signals that AI is not just a toy for trivia or narrow questions anymore, but is creeping into work domains formerly reserved for skilled professionals. Yet turning those task successes into safe, reliable, contextual, scalable applications is a different matter entirely. The gulf between producing a decent draft of a legal memo in isolation and reliably advising in ongoing client cases is still wide. For now, the most pragmatic view is to see GPT-5 as a powerful assistant — one that may handle chunks of work, but still needs human direction, review, and judgment to stay on track.