Bye Prompts, Hello Context: Context Engineering, Affective AI, and More
What is context engineering, and why is everyone suddenly talking about it?
Welcome to The Median, DataCamp’s newsletter for June 27, 2025.
In this edition: A new term arrives in the AI lexicon, Anthropic uncovers the emotional uses of Claude, copyright battles continue, and developers get new tools.
This Week in 60 Seconds
“Context Engineering” to Replace Prompt Engineering?
A new term, "context engineering," is gaining traction in the AI engineering world, stirring discussion on its relationship to "prompt engineering." Context engineering is the process of building dynamic systems to provide an LLM with the right information and tools, in the right format, so it can plausibly accomplish a given task. Experts like Andrej Karpathy argue that "prompt engineering" can downplay the intricate process of carefully preparing an LLM's context window for real-world applications, which involves elements like RAG, few-shot examples, and managing historical data. We’ll explore this subject in the “Deeper Look” section.
Anthropic Reports on Claude's Affective Uses
Anthropic released a study this week on how people use its AI assistant, Claude, for affective conversations (interactions driven by emotional or psychological needs). Affective conversations make up only a small share of Claude's usage (2.9%), and companionship and roleplay are even rarer (less than 0.5% combined). Users turn to Claude for practical, emotional, and even existential concerns, ranging from career development to managing loneliness. The study noted that Claude rarely pushes back in these supportive chats, typically only for safety reasons, and that user sentiment tends to become more positive over the course of these conversations. We’ll explore this study in the “Deeper Look” section.
Court Decides that Anthropic's AI Training Is "Fair Use"
A U.S. federal judge has ruled that Anthropic's use of copyrighted books to train its Claude language model constitutes "fair use" and does not breach copyright law. However, the same ruling found that Anthropic's copying and storage of over 7 million pirated books in a central library did infringe on copyrights, ordering a December trial to determine damages. This is a significant development in the ongoing legal battles between AI firms and creative industries over data used for training generative AI models. Anthropic expressed satisfaction that the court recognized its AI training as transformative and consistent with copyright's purpose.
Google Releases Gemini CLI
Google has introduced Gemini CLI, a new open-source AI agent that brings Gemini directly into developers' terminals. Available free of charge, Gemini CLI comes with a 1 million token context window and allows up to 1,000 requests per day. It's also integrated with Gemini Code Assist, providing AI-first coding assistance within VS Code and the CLI. Our DataCamp team moved quickly and published a hands-on tutorial on getting started with Gemini CLI.
Google DeepMind Introduces AlphaGenome
Google DeepMind has released AlphaGenome, a new AI tool designed to advance the understanding of the human genome. The model predicts, more accurately than previous models, how single DNA variants or mutations affect the biological processes that regulate genes. AlphaGenome can process long DNA sequences (up to 1 million base pairs), predict thousands of molecular properties, and score the effects of a genetic variant in about a second. The tool is now available in preview via API for non-commercial research, aiming to help scientists with disease understanding, synthetic biology, and fundamental genomic research.
Learn How to Build Multi-Agent Systems With LangGraph
A Deeper Look at This Week’s News
Context Engineering: Fad or Trend?
A new term, "context engineering," seems to have entered the AI lexicon, signaling a shift in how developers approach building with language models.
What is context engineering?
Context engineering is the practice of setting up everything a language model needs to perform a task well. The goal is to create a “context” that helps the model understand what it’s supposed to do.
Instead of just writing a clever prompt, context engineering involves giving the model the right information, in the right format, at the right time. This might include examples of what you want, background knowledge, previous messages, or even access to tools and documents.
Source: Dex Horthy
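To make the idea concrete, here's a minimal Python sketch of what a context-assembly step might look like in practice. Everything in it (the retrieve_documents helper, the message format, the example data) is a hypothetical illustration of the pattern, not any particular framework's API.

```python
# A minimal, hypothetical sketch of context engineering: instead of writing
# one clever prompt, we assemble everything the model needs into its context
# window. All names here are illustrative placeholders.

def retrieve_documents(query: str, k: int = 3) -> list[str]:
    """Placeholder for a RAG retrieval step (vector search, keyword search, ...)."""
    return [f"[doc {i}] background relevant to: {query}" for i in range(k)]

FEW_SHOT_EXAMPLES = [
    {"user": "Summarize: The cat sat on the mat.", "assistant": "A cat sat on a mat."},
]

def build_context(task: str, history: list[dict]) -> list[dict]:
    """Assemble instructions, retrieved knowledge, examples, and recent
    conversation history into a single message list for the LLM."""
    messages = [{"role": "system",
                 "content": "You are a concise assistant. Cite retrieved docs."}]

    # 1. Background knowledge retrieved for this specific task (RAG).
    docs = "\n".join(retrieve_documents(task))
    messages.append({"role": "system", "content": f"Retrieved context:\n{docs}"})

    # 2. Few-shot examples showing the desired output format.
    for ex in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": ex["user"]})
        messages.append({"role": "assistant", "content": ex["assistant"]})

    # 3. Recent conversation history, truncated to fit the context window.
    messages.extend(history[-6:])

    # 4. Finally, the actual task.
    messages.append({"role": "user", "content": task})
    return messages

if __name__ == "__main__":
    context = build_context("Summarize this week's AI news.", history=[])
    for m in context:
        print(f"{m['role']}: {m['content'][:60]}")
```

The takeaway from the sketch: the user's prompt is only the last message. Most of the engineering happens in the system that gathers, filters, and formats everything that comes before it.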
Tracing the origin of the term
Preparing a language model’s input isn't new, but the term "context engineering" has recently gained prominence.
Tobi Lutke, CEO of Shopify, publicly endorsed the term, stating on June 19, 2025, that he "really like[s] the term 'context engineering' over prompt engineering" because "It describes the core skill better: the art of providing all the context for the task to be plausibly solvable by the LLM."
Source: @tobi
Andrej Karpathy further popularized the term this week with a post on X. He clarified that "people associate prompts with short task descriptions you'd give an LLM in your day-to-day use," but in "every industrial-strength LLM app, context engineering is the delicate art and science of filling the context window with just the right information for the next step."
Walden Yan of Cognition AI (the team behind Devin) appears to have been among the first to use the term, in a blog post titled "Don't Build Multi-Agents" published on June 12, 2025.
While the exact origin of the term "context engineering" is hard to pin down, there's broad consensus on the practical activity it describes. That agreement on the underlying practice suggests the name may well become standard.
Will “context engineering” pick up?
The shift from "prompt engineering" to "context engineering" reflects the increasing sophistication of language model applications. As these systems become more autonomous and integrated into complex workflows, the need for precise and dynamic context provision becomes critical.
The endorsement from prominent figures in the AI community, coupled with the practical necessity it addresses, suggests that "context engineering" is likely to pick up as a core concept and skill in the development of AI systems.
What do you think?
How Do People Use Claude for Emotional Needs?
This week, Anthropic shared a new study on an intriguing aspect of AI usage: "affective conversations." These are interactions where users engage with the AI to address emotional or psychological needs.
While Claude is not designed as a therapeutic tool, the findings offer early insights into how people use language models for personal support.
Affective interactions account for 2.9% of the total
The study analyzed approximately 4.5 million Claude conversations and found that affective interactions make up a relatively small slice of overall usage: just 2.9% of the total.
Within this category, interpersonal advice and coaching were the most common, while companionship and roleplay comprised less than 0.5% combined.
Source: Anthropic
This aligns with similar research from OpenAI and MIT, suggesting that for most users, AI remains primarily a tool for work and content creation.
From everyday worries to existential questions
People turn to AI for a surprisingly wide range of concerns, from navigating career transitions and relationship challenges to grappling with loneliness and existential questions about meaning and consciousness.
Interestingly, the study also observed a dual use in "counseling" conversations, where Claude assists not only with personal struggles but also with practical tasks for mental health professionals, such as drafting clinical documentation.
When does Claude push back?
A key aspect explored was Claude's "pushback" (instances where the AI resists user requests). In supportive contexts, this happens rarely, in less than 10% of conversations. When Claude does push back, it's typically driven by safety concerns or policy compliance.
Examples include refusing to provide dangerous weight loss advice or explicitly stating it cannot offer professional therapy or medical diagnoses.
Source: Anthropic
A gentle upward trend in sentiment
The research also looked at how users' emotional tone evolves during these conversations. Across coaching, counseling, companionship, and interpersonal advice interactions, human sentiment generally became slightly more positive from the start to the end of the conversation.
While the study doesn't claim lasting emotional benefits or rule out the potential for emotional dependency (both areas for future research), it's a notable finding that suggests Claude typically doesn't reinforce negative emotional patterns in these exchanges.
Industry Use Cases
AI Tools Support Farmers in Weed Management and Operations
AI tools are finding applications in agriculture, particularly in tasks like weed control and enhancing operational efficiency. Companies such as Carbon Robotics utilize AI-powered machines, including the LaserWeeder G2, which employs neural networks to precisely detect and eliminate weeds without harming crops. John Deere, the world's largest agricultural equipment company, integrates convolutional neural networks (CNNs) into its autonomous tractors and See & Spray systems. These systems have demonstrated reduced herbicide use, with one system reportedly saving 8 million gallons of herbicide.
AI-Powered Tools Join the Fight Against Shoplifting
Retailers are increasingly turning to advanced AI technologies to combat a rise in shoplifting and organized retail crime. Companies are deploying AI-powered computer vision at checkouts to identify products by shape, color, and packaging, aiming to prevent scams like barcode switching. Beyond checkout, smart surveillance systems, some utilizing facial recognition, are becoming more common to spot potential repeat offenders. Paris-based startup Veesion, for example, uses machine learning algorithms to detect suspicious body movements, claiming its technology can reduce shoplifting by up to 60%. While these AI-driven solutions offer promising results in deterring theft and recovering losses, they also raise important questions about customer privacy and the potential for false accusations.
Goldman Sachs Rolls Out Firm-Wide AI Assistant
Goldman Sachs has launched its generative AI assistant across the entire firm, making it available to all employees after over a year of internal development and testing with 10,000+ employees. The "GS AI Assistant" is a conversational AI interface that allows employees to interact with large language models like GPT and Gemini within the bank's secure compliance framework. This tool aims to help manage tasks like summarizing complex documents, drafting content, and analyzing data, with the goal of improving employee productivity rather than replacing jobs. Other applications include a developer copilot and a "Banker Copilot" to streamline investment banking workflows.
Tokens of Wisdom
If AI provides endless empathy with minimal pushback, how does this reshape people's expectations for real-world relationships?
—The Anthropic Team