CES 2026 Recap: 9 Key AI Announcements
From Nvidia’s Rubin architecture to production-grade humanoids and reasoning-based vehicles.
Welcome to The Median, DataCamp’s newsletter for January 9, 2026.
In this edition: CES 2026 signals a shift toward physical AI, OpenAI debuts a dedicated health experience in ChatGPT, Google upgrades Gmail with an “AI Inbox,” and xAI and Anthropic secure massive new funding rounds.
This Week in 60 Seconds
CES 2026 Signals a Shift to Physical AI
The Consumer Electronics Show (CES), held annually in Las Vegas, is the world’s premier technology event and a reliable bellwether for the innovations that will shape the coming year. CES 2026 (January 6–9) marked AI’s shift from digital interfaces into the physical world, with physical AI and proactive agents taking center stage. NVIDIA led the show with its Vera Rubin architecture, while Intel, AMD, and Qualcomm unveiled high-performance neural processing units (NPUs) that enable local execution of massive models on new AI PCs. The event also featured production-ready humanoids like the electric Boston Dynamics Atlas. In the Deeper Look section, we explore the announcements most relevant to the AI industry.
NVIDIA Launches Rubin Architecture to Replace the Blackwell Series
NVIDIA’s official launch of the Rubin platform stands as one of the most important announcements at CES 2026. The new architecture comes with a suite of six new chips (including the Vera CPU and Rubin GPU) engineered to reduce AI infrastructure costs. NVIDIA says the architecture achieves a 10x reduction in inference token costs and requires 4x fewer GPUs to train Mixture-of-Experts (MoE) models than its Blackwell predecessor. A standout feature is the new Inference Context Memory Storage platform, which uses BlueField-4 processors to accelerate multistep agentic reasoning. Major cloud providers, including Microsoft, AWS, and Google, have already committed to deploying Rubin-based superfactories in the second half of 2026 to power frontier models from OpenAI, Anthropic, and xAI.
OpenAI Launches ChatGPT Health as a Dedicated App Experience
OpenAI has introduced ChatGPT Health, a dedicated experience that securely connects medical records and wellness apps to ChatGPT. Built in collaboration with over 260 physicians, the tool helps users interpret lab results and prepare for appointments while operating in an isolated space with purpose-built encryption. OpenAI says these conversations are not used for model training and rely on separate health memories so that sensitive information remains compartmentalized. The feature is currently rolling out via waitlist to U.S. users on web and iOS.
Google Launches AI Inbox and AI Overviews to Modernize Gmail
Google has integrated Gemini 3 into Gmail to transform the platform into a proactive personal assistant. The update introduces AI Overviews, which summarize long email threads and allow users to find specific information through natural language queries. Additionally, a new AI Inbox feature filters out clutter to highlight critical to-dos and priority contacts. While writing tools like “Help Me Write” are now free for everyone, advanced features such as inbox querying and “Proofread” are reserved for Google AI Pro and Ultra subscribers. The features are currently rolling out to English-speaking users in the United States.
xAI and Anthropic Secure Record-Breaking Funding Rounds to Start 2026
xAI has completed a $20 billion Series E round, exceeding its initial $15 billion target with backing from NVIDIA, Cisco, and the Qatar Investment Authority. The funds will fuel the expansion of its Colossus supercomputer cluster (which already operates over one million H100 GPU equivalents) and support the development of Grok 5. Simultaneously, Anthropic is reportedly in talks to raise $10 billion at a $350 billion valuation, according to The Wall Street Journal. This new financing, led by GIC and Coatue Management, accompanies a separate deal where NVIDIA and Microsoft plan to invest up to $15 billion in exchange for $30 billion in compute capacity. These deals follow a record-breaking 2025 in which AI startups raised $222 billion globally.
Start 2026 with the AI Fundamentals Track (Certification Available)
A Deeper Look at This Week’s News
CES 2026: 9 Key AI Announcements
The 2026 Consumer Electronics Show (CES) in Las Vegas signaled a transition in the AI industry. While previous editions were dominated by digital generative tools, 2026 showcased “physical AI”: intelligence embedded into hardware, robotics, and autonomous systems capable of navigating the real world with human-like fidelity. Below, we cover the nine most important announcements.
The New Foundations of AI Infrastructure
1. NVIDIA Rubin Architecture
Replacing the Blackwell generation, Nvidia’s Rubin platform is engineered as a holistic architecture for the next era of AI factories. The system utilizes extreme codesign across six new chips (including the Vera CPU with its 88 custom Olympus cores) to address the massive memory and interconnect demands of AI models.
Source: Inside the NVIDIA Rubin Platform
By optimizing for agentic reasoning, the platform claims a 10x reduction in inference cost per token and requires 4x fewer GPUs for training compared to previous hardware, effectively serving as the foundation for the emerging world of physical intelligence.
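To put the claimed 10x cost reduction in perspective, a quick back-of-envelope calculation shows what it would mean for a high-volume application. All figures below (token volume and per-token rates) are illustrative assumptions, not NVIDIA or cloud-provider pricing:

```python
# Back-of-envelope sketch of what a claimed 10x cut in inference cost
# per token means at scale. All numbers are illustrative assumptions.

def monthly_inference_cost(tokens_per_month: float,
                           cost_per_million_tokens: float) -> float:
    """Total monthly cost for a given inference traffic volume."""
    return tokens_per_month / 1_000_000 * cost_per_million_tokens

TOKENS = 500e9                     # assumed monthly token volume
BLACKWELL_RATE = 2.00              # assumed $ per million tokens today
RUBIN_RATE = BLACKWELL_RATE / 10   # the claimed 10x reduction

before = monthly_inference_cost(TOKENS, BLACKWELL_RATE)
after = monthly_inference_cost(TOKENS, RUBIN_RATE)
print(f"before: ${before:,.0f}/mo, after: ${after:,.0f}/mo")
```

Under these assumptions, a workload costing $1,000,000 per month would drop to $100,000, which is why hyperscalers are committing to the platform before general availability.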
2. Intel Panther Lake (Core Ultra Series 3)
Intel has launched its newest laptop processor, Panther Lake. It is the first chip built on Intel’s new 18A manufacturing process, whose smaller, more advanced transistors deliver faster performance at lower power.
The chip includes a specialized neural processing unit (NPU) that handles background AI tasks without slowing down other apps. Combined with the CPU and GPU, the platform can perform 180 trillion operations per second, enough to run large AI assistants directly on the device. Running these agentic assistants locally rather than in the cloud makes them faster, keeps them available offline, and keeps personal data on the device.
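Raw operations per second are only part of the story: for generating text with a local LLM, the practical ceiling is usually memory bandwidth, since each generated token streams the full model weights from memory. A first-order estimate of on-device decode speed can be sketched as follows; the bandwidth and model figures are illustrative assumptions, not Intel specifications:

```python
# Rough sketch of why on-device LLM inference is feasible on an AI PC.
# Token generation is typically memory-bandwidth bound, so a first-order
# upper bound divides memory bandwidth by the bytes read per token.
# All figures are illustrative assumptions, not vendor specifications.

def tokens_per_second(bandwidth_gb_s: float, params_billions: float,
                      bytes_per_param: float) -> float:
    """Upper-bound decode speed: each token streams the full weights once."""
    model_gb = params_billions * bytes_per_param
    return bandwidth_gb_s / model_gb

# Assumed laptop memory bandwidth and a 7B model quantized to 4 bits.
bw = 120.0                             # GB/s, LPDDR5X-class assumption
tps = tokens_per_second(bw, 7, 0.5)    # 0.5 bytes per parameter at 4-bit
print(f"~{tps:.0f} tokens/s upper bound")  # ≈ 34 tokens/s
```

Tens of tokens per second is comfortably faster than reading speed, which is why quantized mid-size models are now viable as local assistants.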
3. Qualcomm Snapdragon X2 Plus
The Qualcomm Snapdragon X2 Plus is designed to bring high-end AI performance to more affordable laptops. It features a specialized AI processor capable of 80 trillion operations per second, which is significantly faster than many current alternatives.
The faster NPU completes complex AI tasks more efficiently, and Qualcomm claims a 43% reduction in power consumption compared to the previous generation. By bringing this level of performance to mid-range devices, Qualcomm makes it possible for smart background assistants to run all day without draining the battery.
4. AMD Ryzen AI Max+
The AMD Ryzen AI Max+ is an all-in-one chip built for professional creators that combines powerful processing with high-end graphics. A key feature is its massive 128GB of shared high-speed memory, which allows the laptop to run extremely large AI models entirely on the device.
In traditional laptops, the split between system memory and limited graphics memory often creates a bottleneck that caps the size and complexity of AI models that can run. This new design removes that barrier, enabling a portable computer to handle professional-grade AI work that previously required a much larger and more expensive desktop.
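A simple weight-footprint estimate shows why 128GB of shared memory changes what can run locally. The model sizes and the 16GB discrete-GPU comparison below are illustrative assumptions, and real runtimes need extra headroom for the KV cache and activations:

```python
# Sketch: does a model fit in memory? A weight-footprint estimate shows
# why 128 GB of shared memory changes what runs locally. Model sizes and
# the discrete-GPU figure are illustrative; real runtimes also need
# headroom for the KV cache and activations.

def weight_footprint_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate GB needed just to hold the model weights."""
    return params_billions * bytes_per_param

SHARED_MEM_GB = 128    # Ryzen AI Max+ class machine (from the article)
DISCRETE_VRAM_GB = 16  # assumed typical laptop GPU for comparison

for params in (8, 70, 120):                # model sizes in billions
    gb = weight_footprint_gb(params, 1.0)  # assume 8-bit quantization
    print(f"{params}B @ 8-bit ≈ {gb:.0f} GB | "
          f"fits shared: {gb <= SHARED_MEM_GB} | "
          f"fits discrete GPU: {gb <= DISCRETE_VRAM_GB}")
```

Under these assumptions, an 8-bit 70B model (~70GB of weights) fits in the shared pool but is far beyond a typical discrete laptop GPU, which is the bottleneck the paragraph above describes.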
Robotics: From Research to Production
5. Boston Dynamics Atlas
Boston Dynamics has showcased the production-ready version of its Atlas robot, marking its transition from a high-tech experiment into a promising industrial tool. Unlike earlier versions that used complex fluid-powered systems (hydraulics), this new model is fully electric, making it tougher and more suited for long shifts in factories.
Source: Hyundai newsroom
Atlas will now be powered by Google DeepMind’s Gemini Robotics reasoning models, which help it plan complex movements and solve problems on the fly. With joints that can rotate a full 360 degrees, the robot can move in ways physically impossible for humans, making it more efficient in tight spaces. It is currently being tested ahead of a 2028 rollout at Hyundai’s car manufacturing plants.
6. LG CLOiD
LG has introduced CLOiD, a two-armed robot on wheels designed to help with common chores like folding laundry or loading a dishwasher. The robot uses specialized AI models that act like a bridge between “seeing” a task and “performing” the physical action, allowing it to interpret voice commands and translate them into a series of steps.
Recent demonstrations showed that while the robot is adept at understanding what needs to be done, its physical movements are still relatively slow. This is a classic example of Moravec’s Paradox: it is often easier to teach a computer to solve a complex math problem than to teach it the simple physical coordination a human uses to fold a shirt. Still, the commitment from a major company like LG signals serious long-term belief in mobile, helpful home appliances.
AI in Transportation
7. NVIDIA Alpamayo
NVIDIA has introduced Alpamayo, a new system for self-driving cars that focuses on deliberate, human-like reasoning rather than just quick reactions. Instead of simply following rigid rules, this platform uses advanced AI to understand complex and ambiguous situations (such as recognizing that a police officer is using hand signals to direct traffic through a red light).
To help people understand and trust its decisions, the system can provide human-readable explanations, known as “reasoning traces,” for why it took a specific action. Alpamayo is set to debut on U.S. roads in the Mercedes-Benz CLA in 2026.
8. The Lucid-Uber-Nuro Robotaxi
Lucid, Uber, and Nuro unveiled a production-ready robotaxi based on the all-electric Lucid Gravity SUV. The vehicle uses the Nuro Driver, a Level 4 autonomy system that handles all driving tasks without human intervention within designated areas such as the San Francisco Bay Area.
Source: Lucid press release
Inside, passengers can use interactive screens to control their comfort and even see a live map of what the car is “thinking.” Autonomous testing is already happening on public roads, with full production and a rider launch expected later this year.
9. Ford AI Assistant
Ford is launching a new AI assistant that is more capable than standard voice controls because it is deeply connected to a vehicle’s physical sensors. While many AI tools are general, this assistant has state awareness, meaning it knows specific details like tire pressure, oil life, and cargo capacity.
In one example, the system can use cameras to look at a pile of mulch and calculate exactly how many bags will fit into the bed of that specific truck. The assistant will launch first in Ford’s mobile app in 2026, with an in-car version following in 2027.
Industry Use Cases
Computer Vision Powers the Move to Physical AI
The Innovation Awards at CES 2026 highlighted a shift as computer vision moves from general digital tools to specialized physical AI designed for complex, real-world industry tasks. Notable winners include VIXallcam, which provides all-weather vision for long-haul trucks, and AA-2, an indoor delivery robot that autonomously navigates elevators. Other breakthroughs featured AI that detects driver impairment from eyelid dynamics and drones that use multispectral imaging for precision agriculture. Read more in this article from BasicAI.
AI Copilot Accelerates High-Stakes Physics at Berkeley Lab
Researchers at the Lawrence Berkeley National Laboratory have deployed the Accelerator Assistant, a specialized AI agent designed to manage complex experiments at the Advanced Light Source (ALS) particle accelerator. Powered by NVIDIA H100 GPUs and the Berkeley-developed Osprey framework, the system routes requests through frontier models like Gemini, Claude, and ChatGPT to autonomously navigate over 230,000 process variables. The assistant can generate Python code and execute multistage physics experiments, reducing preparation effort by two orders of magnitude (100x). Read more in this article from NVIDIA.
Caterpillar Brings Edge AI to the Jobsite
Caterpillar is transforming heavy industry by integrating NVIDIA’s Jetson Thor edge AI platform and Qwen3 4B language models into its machinery, enabling natural language interaction directly on the jobsite. Demonstrated on a Cat 306 CR Mini Excavator, the Cat AI Assistant operates locally without a cloud connection, allowing operators to set safety boundaries, troubleshoot behavior, and access technical documentation through intuitive voice commands. Read more in this article from NVIDIA.
Tokens of Wisdom
We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
—Roy Amara, researcher and futurist