What People Don’t Like About GPT-5
“I lost my only friend overnight,” one user posted on Reddit, as frustration with GPT-5 spread online.
Welcome to The Median, DataCamp’s newsletter for August 15, 2025.
In this edition: OpenAI responds quickly to the GPT-5 backlash, Anthropic joins OpenAI in the $1 government AI race, Google releases a new small language model that can run on edge devices, Claude Sonnet 4 gets a massive context window bump, and Meta releases DINOv3.
This Week in 60 Seconds
OpenAI Faces GPT-5 Backlash and Responds Fast
OpenAI launched GPT-5, unifying its previous models into one system with faster reasoning and new features. But the debut sparked mixed reactions: praise for speed and accuracy, but also frustration over a “colder” tone and the sudden removal of GPT-4o. Within days, OpenAI restored legacy models, fixed bugs, added manual controls, and promised a “warmer” GPT-5 personality. We cover the reception of GPT-5 in this week’s Deeper Look section.
Anthropic Joins the $1 Government AI Race
This week, Anthropic offered Claude for Government and Claude for Enterprise to all three branches of the U.S. government for just $1 per agency, matching OpenAI’s earlier deal for the federal executive branch. The move sharpens the fight to become Washington’s go-to AI provider and is sending signals abroad as governments worldwide prepare their own adoption plans. In this week's Deeper Look section, we explain why this $1 strategy benefits both the AI companies and the government.
Google Releases Gemma 3 270M
Google’s Gemma 3 270M is a compact 270M-parameter model designed for task-specific fine-tuning and strong instruction-following right out of the box. As an on-device AI, it can run directly on phones, laptops, or even wearables without sending data to the cloud, making it faster, cheaper, and more private. It’s optimized for energy efficiency, with tests showing minimal battery use on devices like the Pixel 9 Pro.
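If you want to try a model this small yourself, here is a minimal sketch of running it locally with Hugging Face transformers. The checkpoint name below is our assumption for the instruction-tuned variant, so verify it on the model card before running.

```python
# Minimal sketch: run a small instruction-tuned model locally with transformers.
# The checkpoint id below is an assumption; verify the exact name on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-3-270m-it"  # assumed instruction-tuned 270M checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "List three benefits of on-device AI."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=64)

# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the model is only 270M parameters, the same script runs comfortably on a laptop CPU, which is the whole point of an on-device release like this one.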
Claude Sonnet 4 Brings 1M-Token Context to API
Anthropic has expanded Claude Sonnet 4’s API context window to 1 million tokens. This is a 5x jump that lets you send the model the equivalent of hundreds of documents or an entire 75,000-line codebase in a single request. A context window is the amount of information the AI can consider at once, so bigger windows mean it can keep more details in mind. This upgrade is API-only for now, in public beta on Anthropic’s API and Amazon Bedrock, with Google Cloud support coming soon.
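For developers, opting into the larger window happens at the API level. A minimal sketch with the anthropic Python SDK might look like the following; the model string and beta flag reflect Anthropic’s documentation at the time of writing and should be double-checked before use.

```python
# Minimal sketch: request the 1M-token context window via the beta messages API.
# The beta flag and model id below are assumptions based on Anthropic's docs;
# confirm both in the current documentation before relying on them.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("codebase_dump.txt") as f:  # hypothetical large input file
    long_input = f.read()

response = client.beta.messages.create(
    model="claude-sonnet-4-20250514",   # assumed Sonnet 4 model id
    betas=["context-1m-2025-08-07"],    # assumed 1M-context beta flag
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": f"Summarize the main modules in this codebase:\n\n{long_input}",
    }],
)
print(response.content[0].text)
```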
Meta’s DINOv3 Excels in Vision Without Labels
Meta has released DINOv3, a self-supervised computer vision model that learns from 1.7B unlabeled images instead of relying on human-made captions. It’s built to be a universal “vision backbone” for tasks like image classification, semantic segmentation, object detection, and even tracking in videos. Because it works without labeled data, it’s ideal for domains like satellite imagery, medical scans, and environmental monitoring—where annotations are scarce or expensive.
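In practice, a backbone like this is usually kept frozen while a small task-specific head is trained on top of its features. Here is a minimal sketch using Hugging Face transformers; the checkpoint id is illustrative, so check the DINOv3 model cards for the exact names.

```python
# Minimal sketch: extract patch features from a frozen vision backbone.
# The checkpoint id is illustrative; look up the exact DINOv3 names on the Hub.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

model_id = "facebook/dinov3-vitb16-pretrain-lvd1689m"  # assumed checkpoint id
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

image = Image.open("satellite_tile.png")  # hypothetical unlabeled input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    # Patch-level embeddings that a lightweight segmentation or detection
    # head can be trained on without fine-tuning the backbone itself.
    features = model(**inputs).last_hidden_state

print(features.shape)
```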
New Course: Building Agentic Workflows with LlamaIndex
This course is free for the next two months.
A Deeper Look at This Week’s News
GPT-5’s Rollout: Divided Reception, Fast Course Corrections
OpenAI’s release of GPT-5 in August 2025 arrived amid high expectations after the remarkable impact of GPT-4. Billed as the “best AI system yet” with “PhD-level” expertise in any subject, GPT-5 promised major advances in reasoning, coding, and task performance. However, the rollout quickly proved controversial.
Public sentiment and user reception
The initial public sentiment around GPT-5’s launch was largely negative on social platforms like Reddit, X, and tech forums. A Reddit thread bluntly titled “GPT-5 is horrible” gained over 6,000 upvotes and more than 2,000 comments as users vented that GPT-5 felt like a step backward.
Many noticed GPT-5’s tone was “bland,” “business-like,” or “emotionally distant” compared to GPT-4o. This perceived emotional flatness upset users who had grown attached to GPT-4o as a creative companion, or even as a pseudo-therapist, close friend, or soulmate.
Users observed that GPT-5 deliberately avoids emotional and sensitive topics, likely due to reinforced safety guardrails. While OpenAI chose this alignment intentionally, some felt the AI had become overly cautious or “sanitized,” and saw the trade-off for improved safety as a loss of utility or fun.
Despite GPT-5’s greater theoretical capabilities, users circulated examples of it fumbling simple tasks. Memes and screenshots on X showed GPT-5 failing at basic geography or math, undermining the “PhD-level expert” billing.
A map of North America, according to GPT-5. Source: X
Timeline of OpenAI’s reactions
OpenAI moved quickly to address the backlash, issuing fixes, reversing unpopular changes, and communicating openly in the days after launch. Here’s a timeline of what happened:
Aug 7: GPT-5 launches, replacing GPT-4o and other older models without warning. Backlash erupts over bugs, tone changes, and the sudden loss of familiar tools.
Aug 8: CEO Sam Altman blames a bug in the auto-switcher for inconsistent answers and announces that Plus users will still be able to access GPT-4o.
Aug 9–10: Petitions to keep GPT-4o gain traction; the story spreads in tech media.
Aug 11: Sam Altman posts a reflective thread on user “attachment” to specific AI models, explaining GPT-5’s tone shift as a safety choice.
Aug 12: GPT-4o is restored for paying users. Altman pledges future removals will come with notice.
Aug 13: Manual controls for Fast, Thinking, and Pro modes launch; Thinking limits are doubled; OpenAI announces work on a “warmer” GPT-5 personality.
The Race to Power Government AI
AI in government is moving from pilot experiments to large-scale rollout. As OpenAI and Anthropic race for position, we should expect more $1 “foot in the door” deals because switching vendors becomes much harder once an AI system is integrated into daily workflows.
$1 offers hit Washington
Two big moves from OpenAI and Anthropic this month made one thing clear: the battle to become the default AI partner for the U.S. government is officially on.
OpenAI struck first with a deal to provide ChatGPT Enterprise to the entire federal executive branch for $1 per agency for a year. Anthropic followed days later, offering Claude for Government and Claude for Enterprise to all three branches (executive, legislative, and judiciary) on the same terms.
AI companies are playing the long game
For the AI companies, a $1 first year is a strategic loss leader. It buys trust, data integration, and a foothold in one of the world’s largest enterprise markets.
For the government, it’s a low-risk way to get frontier AI tools into the hands of workers without major upfront spend.
Beyond chatbots
This is not only about answering citizen questions. Federal and state agencies are already testing AI for regulatory reviews, climate disaster planning, and national security intelligence. The Department of Defense alone signed contracts with OpenAI, Anthropic, Google, and xAI—each worth up to $200M—to prototype agentic AI for mission planning, logistics, and analysis.
The global view
While the U.S. is the primary battleground, other governments are watching closely. The UK has partnered with OpenAI to explore AI-powered public services, from small-business support chatbots to internal assistants for civil servants.
Across the EU, countries like Estonia and Finland are trialing AI for administrative tasks under strict transparency rules. Canada is pairing generative AI pilots with a national framework that mandates algorithmic impact assessments before deployment.
These moves are more cautious and smaller in scale, but they signal a broader shift: governments worldwide are laying the groundwork for AI in public service.
Industry Use Cases
AI Analyzes Global Soundscapes to Protect Endangered Species
Google DeepMind’s latest update to Perch, its open-source bioacoustics model, can process massive amounts of environmental audio to identify species and track ecosystem health. Trained on nearly twice the data—including mammals, amphibians, and underwater environments—it has helped detect elusive species like the Plains Wanderer and monitor honeycreepers in Hawaii 50x faster than before. Scientists can now build new species classifiers in under an hour from just one audio example.
Llama Powers Rapid Antibiotic Resistance Detection
Brazil’s Biofy Technologies built Abby Recommender using Meta’s Llama 3.2 90B to cut antibiotic resistance diagnostics from five days to under four hours. The system analyzes bacterial DNA, generates synthetic genome data, and recommends effective antibiotics. Running on Oracle Cloud Infrastructure, the open-source setup allows quick adaptation to rapidly mutating strains.
NASA and Google Test AI Medic for Mars Missions
NASA and Google are developing the Crew Medical Officer Digital Assistant to help astronauts handle health issues without immediate contact with Earth. Running on Vertex AI, it supports speech, text, and image inputs and achieved up to 88% diagnostic accuracy in early tests. Future updates will integrate medical device data and adapt to space-specific conditions.
Tokens of Wisdom
We plan to follow the principle of “treat adult users like adults”, which in some cases will include pushing back on users to ensure they are getting what they really want.
—Sam Altman, CEO of OpenAI
Honestly, I couldn't believe that Z.AI: GLM 4.5V was so good.
It was released last week.