Despicable AI? Disney and Universal Sue Midjourney, Researchers Advance World Models
Lawsuits heat up, world models improve, and reasoning gets better (and pricier).
Welcome to The Median, DataCamp’s newsletter for June 13, 2025.
In this edition: Disney and Universal want to take Midjourney to court, researchers advance AI world models, Mistral debuts Magistral and launches its own compute stack, and OpenAI updates o3.
This Week in 60 Seconds
Mistral Debuts Magistral, Its First Reasoning Model
Mistral AI has launched Magistral, its first reasoning model. The release includes two versions: Magistral Small (24B parameters, open-source under Apache 2.0) and Magistral Medium (enterprise tier). Our team at DataCamp moved quickly and published a hands-on tutorial on running Magistral Small with Ollama and vLLM.
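If you're curious what running it locally looks like, here's a minimal sketch using the Ollama Python client. We're assuming you've installed Ollama and already pulled the model (the magistral tag below is an assumption; check the Ollama library for the exact name), and the tutorial walks through both the Ollama and vLLM routes in more detail:

```python
# Minimal sketch: querying Magistral Small through the Ollama Python client.
# Assumes Ollama is running locally and the model has been pulled first,
# e.g. with `ollama pull magistral` (the tag name is an assumption).
import ollama

response = ollama.chat(
    model="magistral",  # assumed tag for Magistral Small
    messages=[
        {
            "role": "user",
            "content": "A train leaves at 9:00 and covers 120 km at 80 km/h. When does it arrive?",
        }
    ],
)

print(response["message"]["content"])
```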
Disney and Universal Sue Midjourney Over AI-Generated IP
Disney and Universal have filed a joint copyright infringement lawsuit against AI image generator Midjourney, alleging the platform enables users to create unauthorized reproductions of their iconic characters, including Darth Vader, Elsa, and the Minions. The studios claim that despite cease-and-desist requests, Midjourney continued to allow prompts like “Yoda with lightsaber” to yield infringing images. We’ll dive into this in more detail in the “Deeper Look” section and look at a few examples from the official complaint.
Meta Releases V-JEPA 2, a World Model for Physical Reasoning
Meta has open-sourced V-JEPA 2, a “world model” trained on video to help AI understand and predict how the physical world works. Unlike LLMs, which guess the next word, world models aim to build physical intuition, similar to how a toddler learns that dropped toys fall or that hot pans burn. V-JEPA 2 learns from raw video to anticipate motion, plan actions, and reason about cause and effect. We’ll learn more about world models in the “Deeper Look” section.
OpenAI Releases o3-pro: Enhanced Reasoning
OpenAI has introduced o3-pro, its most advanced reasoning model to date, now accessible to ChatGPT Pro and Team users. O3-pro offers improved performance but comes with increased latency and higher costs—$20 per million input tokens and $80 per million output tokens, compared to o3’s $2 and $8, respectively. Expert reviewers consistently rated o3-pro higher than o3, citing stronger performance across domains like science, education, programming, data analysis, and writing.
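To put those prices in perspective, here's a quick back-of-the-envelope comparison. The per-token prices are the published ones above; the request size is made up purely for illustration:

```python
# Rough cost comparison for a single request at the published per-token prices.
# The request size (5,000 input tokens, 2,000 output tokens) is illustrative only.
PRICES_PER_MILLION = {  # USD per million tokens: (input, output)
    "o3": (2.00, 8.00),
    "o3-pro": (20.00, 80.00),
}

input_tokens, output_tokens = 5_000, 2_000

for model, (price_in, price_out) in PRICES_PER_MILLION.items():
    cost = input_tokens / 1e6 * price_in + output_tokens / 1e6 * price_out
    print(f"{model}: ${cost:.3f} per request")
```

Same request, roughly ten times the cost, so it's worth reserving o3-pro for the problems that actually need it.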
Mistral Launches Compute Stack for Sovereign and Enterprise AI
Mistral AI has announced Mistral Compute, a full-stack infrastructure offering aimed at governments, companies, and research labs looking to build and operate their own AI environments. The platform includes hardware (tens of thousands of NVIDIA GPUs), orchestration tools, and deployment options ranging from bare metal to fully managed PaaS (Platform as a Service). It’s designed to support training and inference across high-stakes domains like defense, pharma, finance, and scientific research.
Learn AI Ethics With DataCamp
A Deeper Look at This Week’s News
Why Are Disney and Universal Suing Midjourney?
On June 11, Disney and Universal filed a joint 110-page complaint against AI image generator Midjourney, accusing it of direct and secondary copyright infringement.
The studios claim that Midjourney trained its models on massive volumes of copyrighted material, including characters from Star Wars, Marvel, Frozen, Shrek, Despicable Me, and more, without licensing or permission.
You can read the full complaint here.
How are the studios making their case?
The complaint presents dozens of examples where simple prompts like “Darth Vader walking on the Death Star” or “Bart Simpson on a skateboard” yield unmistakable copies of copyrighted characters.
Source: Official complaint
The studios describe Midjourney as a “bottomless pit of plagiarism” and a “virtual vending machine” for unlicensed derivative works, with no content safeguards in place.
They also point to Midjourney’s marketing materials, which feature characters like the Minions and Darth Vader, as evidence of implicit endorsement or promotional misuse.
Source: Official complaint
What are they asking for?
Disney and Universal are demanding:
A jury trial
Monetary damages
An injunction to stop Midjourney from generating, distributing, or hosting infringing content
Preventive measures, including filters that would reject protected character prompts or block the generation of known IP
They also warn that Midjourney is preparing a video generation service and could use the same copyrighted training data again, amplifying the harm.
Why does this matter?
This lawsuit follows on the heels of The New York Times’ lawsuit against OpenAI and Microsoft, which raised similar questions about whether AI models should be allowed to reproduce protected content on request.
As more of these high-stakes copyright trials move forward, the courts could end up drawing new lines between inspiration, replication, and theft.
How do you think these lawsuits will play out?
World Models 101
We’ve seen LLMs become adept at completing text, answering questions, even reasoning through problems. But language isn’t the whole world—it’s just how we describe it. To build AI that can interact with the world, not just talk about it, researchers are turning to something else: world models.
What is a world model?
A world model is an AI system that learns how the physical world behaves. Instead of predicting the next word, it predicts what happens next: where an object will move, what effect an action will have, or how a scene might change over time.
Think of it like this: when you reach to grab a glass of water, you just know how things will go—how your arm will move, how heavy the glass might feel, what could go wrong if it’s wet. A world model tries to give AI that same kind of intuitive foresight.
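As a loose analogy (real world models learn this from raw video rather than hand-written physics), the shift is from “predict the next token” to “predict the next state.” Here's a toy sketch of what a next-state predictor looks like for a falling glass:

```python
# Toy analogy only: a hand-coded state-transition function standing in for a
# learned world model. Real world models learn this mapping from video and
# sensor data instead of using explicit physics formulas.
from dataclasses import dataclass

@dataclass
class GlassState:
    height_m: float      # metres above the floor
    velocity_m_s: float  # vertical speed; negative means falling

def predict_next_state(state: GlassState, dt: float = 0.1) -> GlassState:
    """Predict the state a tenth of a second later under simple gravity."""
    g = -9.81
    new_height = max(0.0, state.height_m + state.velocity_m_s * dt)
    new_velocity = 0.0 if new_height == 0.0 else state.velocity_m_s + g * dt
    return GlassState(new_height, new_velocity)

state = GlassState(height_m=1.0, velocity_m_s=0.0)  # the glass slips off the table
for step in range(6):
    state = predict_next_state(state)
    print(f"t={0.1 * (step + 1):.1f}s  height={state.height_m:.2f} m")
```

A real world model replaces that hand-coded `predict_next_state` with a neural network trained on video, so it can handle situations nobody wrote equations for.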
Why does this matter?
Language models are great at answering questions, summarizing text, and generating content. But they don’t have a real sense of space, time, or cause and effect. They can describe a game of pool, but they can’t predict where the cue ball will go. World models aim to fill that gap.
They’re especially important for robotics. If we want robots to work in real homes, hospitals, or disaster zones, they need to reason about unpredictable environments without being manually trained for every situation.
What’s new with V-JEPA 2?
Meta’s new V-JEPA 2 is one of the most advanced world models yet. Trained on over a million hours of video, it can understand and predict physical interactions and can even guide a robot to complete tasks with unfamiliar objects, without prior tuning. This is known as zero-shot planning.
Source: Meta AI
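Conceptually, planning with a world model boils down to a simple loop: imagine several candidate actions, predict where each one leads, and keep the one that ends up closest to the goal. The sketch below illustrates that loop with hypothetical stand-ins (`encode`, `predict_next`, `distance_to_goal`) for a learned encoder, a learned dynamics model, and a goal metric; it's not Meta's actual API, just the general idea:

```python
# Conceptual sketch of planning with a world model; not Meta's V-JEPA 2 API.
# `encode`, `predict_next`, and `distance_to_goal` are hypothetical stand-ins.
import random

def encode(observation):            # stand-in for a learned encoder (image -> state)
    return observation

def predict_next(state, action):    # stand-in for learned dynamics in latent space
    return [s + a for s, a in zip(state, action)]

def distance_to_goal(state, goal):  # stand-in for a goal metric
    return sum((s - g) ** 2 for s, g in zip(state, goal))

def plan(observation, goal, n_candidates=64):
    """Sample candidate actions, 'imagine' each outcome, keep the best one."""
    state = encode(observation)
    best_action, best_score = None, float("inf")
    for _ in range(n_candidates):
        action = [random.uniform(-1.0, 1.0) for _ in state]
        imagined = predict_next(state, action)
        score = distance_to_goal(imagined, goal)
        if score < best_score:
            best_action, best_score = action, score
    return best_action

print(plan(observation=[0.0, 0.0], goal=[1.0, 0.5]))
```

In practice, systems like V-JEPA 2 run this kind of loop over learned video representations rather than raw coordinates, which is what lets them handle objects and scenes they haven't been tuned on.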
Where is this heading?
The field is still young. World models today are best at short-horizon tasks: predicting a few seconds ahead, like where to place a cup or how to navigate a table.
They struggle with long-term planning or abstract reasoning. But they’re improving fast—and paired with language models, they could form the backbone of next-generation agents that can act in the physical world.
If LLMs gave us smart talkers, world models may give us smart doers.
Industry Use Cases
Uber Plans Self-Driving Robotaxi Trial in London
Uber is set to launch a trial of self-driving robotaxis in London by spring 2026, partnering with UK-based AI firm Wayve. These autonomous vehicles, previously tested with safety drivers, represent Uber’s first such pilot in the UK. The initiative aims to deploy driverless cars more widely once regulations permit, with the UK government forecasting that autonomous vehicle technology could create 38,000 jobs and contribute £42 billion to the economy by 2035.
AI Helps Chipotle Launch a New Location Every 24 Hours
Chipotle is using AI to expedite its expansion plans, aiming to open nearly one new restaurant every 24 hours in 2025. The company employs the Ava Cado hiring system, an AI tool developed in partnership with Paradox, which cuts hiring time by 75% through streamlined candidate communication, interview scheduling, and offer delivery. Furthermore, AI-driven personalized promotions through the rewards program are enhancing customer engagement, contributing to a 6.4% year-over-year increase in Q1 revenue.
LVMH Turns to AI Amid Luxury Market Slowdown
LVMH (a luxury conglomerate that owns Tiffany, Dior, and Celine) is turning to AI to maintain its competitive edge during a luxury market slowdown. The group employs predictive and generative AI for supply chain planning, dynamic pricing, product design, targeted marketing, and customer personalization across its 75 brands. For instance, AI agents at Tiffany help personalize customer communications, while creative teams across Dior and others use generative tools to develop product ideas and marketing assets tailored to regional trends. The company’s centralized AI platform, MaIA, processes over 2 million requests each month, supporting around 40,000 employees across the globe in optimizing daily operations.
Tokens of Wisdom
Allowing machines to understand the physical world is very different from allowing them to understand language. The world model is like an abstract digital twin of reality that an AI can reference to understand the world and predict the consequences of its actions.
—Yann LeCun, Chief AI Scientist at Meta
Well Done!