Every generational shift in wireless carries the same temptation: to imagine the future arriving all at once — a new “G,” a sudden leap, a clean break from the past. Yet networks have always evolved differently. Their most lasting changes emerge through disciplined, cumulative progress that gradually reshapes how we connect and communicate.
Artificial intelligence, by contrast, has followed a much faster path. In just a few years, it has produced tools that can reason, create and assist, expanding what digital work can be. Still, these remain tools of the digital realm: powerful extensions of human thought and creativity, but not yet forces that fully bridge into the physical world.
That separation is now ending.
We are entering the era of Physical AI, systems that do not just analyze the world, but interact with it. Autonomous machines, robots, cameras, digital twins and intelligent infrastructure all depend on real-time perception, reasoning, decision-making and action. Intelligence is no longer confined to centralized clouds. It must operate where the physical and digital worlds meet, in real time.
That shift fundamentally changes what networks are required to do.
Generative AI is poised to unlock an estimated $3–5 trillion a year, mostly in the digital world. When intelligence enters the Physical AI economy, across robots, factories and infrastructure, it unlocks tens of trillions of dollars in opportunity. In this realm, AI doesn't just generate content; it moves the world.
From Informational Tokens to Kinetic Tokens
Today’s AI systems are built largely around what we might call informational tokens, units of data that describe, summarize or predict. Physical AI introduces something different.
In physical systems, data must carry intent, context and timing. It must be actionable. These are what I refer to as kinetic tokens: data constructs that do not just represent information, but initiate physical outcomes such as movement, control, adaptation or coordination in the real world.
Kinetic tokens demand more than bandwidth. They require time-space coherency, deterministic performance, ultra-low latency, synchronization across devices and continuous learning at the edge. Most importantly, they require a network that understands that these tokens are not passive. They are operational. This is where telecom networks become central to the future of Physical AI.
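To make the contrast concrete, here is a minimal sketch of what a kinetic token might carry compared with a purely informational one. The class, the field names and the deadline check are illustrative assumptions for this post, not a standard or a production design.

```python
from dataclasses import dataclass
import time

@dataclass
class KineticToken:
    """Illustrative only: a data construct carrying intent, context and timing,
    not just a description of the world."""
    intent: str          # the physical outcome requested, e.g. "rotate_joint"
    parameters: dict     # context the actuator needs, e.g. {"axis": "z", "degrees": 12.5}
    issued_at: float     # when the decision was made (seconds, epoch time)
    deadline_ms: float   # how long the action remains valid
    source_device: str   # which sensor or agent produced it
    target_device: str   # which actuator or agent must act on it

    def still_actionable(self) -> bool:
        """A stale kinetic token must be dropped, not replayed: timing is part of its meaning."""
        elapsed_ms = (time.time() - self.issued_at) * 1000.0
        return elapsed_ms <= self.deadline_ms

# An informational token might simply describe: "the arm is at 30 degrees."
# A kinetic token instead initiates: "move the arm 12.5 degrees on the z axis within 20 ms."
token = KineticToken(
    intent="rotate_joint",
    parameters={"axis": "z", "degrees": 12.5},
    issued_at=time.time(),
    deadline_ms=20.0,
    source_device="camera-07",
    target_device="arm-03",
)
```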
Why Telcos Matter in the Physical AI Era
Telecom networks already operate at the intersection of scale, reliability and real-time performance. As Physical AI emerges, those characteristics become strategic assets. Telcos are uniquely positioned to enable kinetic tokens because we combine:
- Distributed edge infrastructure close to where action occurs
- Predictable, deterministic connectivity required for real-time control
- Security, identity and trust frameworks essential for autonomous systems
- Operational expertise at national scale
Physical AI systems cannot rely solely on distant hyperscale data centers. Decisions increasingly need to be made on edge devices and communicated peer-to-peer across multiple Physical AI devices for collaborative tasks. The network becomes not just a transport layer, but the nervous system of Physical AI solutions.
6G as the Connective Tissue for Physical AI
This is why 6G matters, not as a speed upgrade, but as an architectural inflection point.
6G is being designed as the first AI-native generation in wireless, with intelligence embedded directly into the fabric of the network. As networks evolve from moving information to shaping action, they become the connective tissue for Physical AI — machines and systems that perceive, reason and act as part of their environment.
6G brings the convergence of connectivity, sensing, localization and computing into a fabric precise and responsive enough for machines to perceive their environment, coordinate with one another and make decisions in real time. What changes isn't just network performance, but the role the network plays: from passive data pipe to the active nervous system of physical intelligence itself.
One of its most important capabilities is Integrated Sensing and Communications (ISAC), the ability for the network to simultaneously communicate and sense the physical environment. Combined with AI, ISAC enables networks to sense, interpret and respond to real-world conditions in real time. That closes the loop between perception and action, a foundational requirement for Physical AI.
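As a rough illustration of that closed loop, the sketch below shows a single sense, interpret, act cycle under a tight latency budget. The function names (`sense_environment`, `infer_action`, `dispatch`) and the 10 ms budget are placeholders for the capabilities described above, not real network APIs or specified 6G numbers.

```python
import time

LOOP_BUDGET_MS = 10.0  # assumed per-cycle latency budget, for illustration only

def control_loop(sense_environment, infer_action, dispatch):
    """One perception-to-action cycle: the network senses, an edge model interprets,
    and a kinetic instruction is dispatched before the latency budget expires."""
    while True:
        start = time.perf_counter()

        observation = sense_environment()   # ISAC: radio signals double as sensing data
        action = infer_action(observation)  # edge AI: interpret real-world conditions
        if action is not None:
            dispatch(action)                # close the loop: perception becomes physical action

        elapsed_ms = (time.perf_counter() - start) * 1000.0
        if elapsed_ms > LOOP_BUDGET_MS:
            # A missed deadline is a failure mode, not merely a slow response:
            # deterministic timing is part of the requirement.
            print(f"warning: cycle took {elapsed_ms:.1f} ms, over the {LOOP_BUDGET_MS} ms budget")
```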
AI-RAN: Supercharging the Future of Mobile Networks
AI-RAN is not a single capability. It is an architectural effort focused on how radio networks can evolve to support both traditional telecom workloads and AI workloads concurrently on shared infrastructure. That ability is essential if networks are to support Physical AI efficiently, flexibly and at scale, and it is the reason AI-RAN exists.
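The core scheduling idea behind sharing that infrastructure can be sketched simply: latency-critical RAN tasks always run first, and AI workloads absorb whatever accelerator capacity remains in each slot. Everything below (the task names, the slot budget, the queue structure) is an illustrative assumption, not how any vendor's AI-RAN stack is implemented.

```python
import heapq

SLOT_BUDGET_MS = 0.5  # assumed processing budget per scheduling slot, for illustration only

def schedule_slot(ran_tasks, ai_tasks):
    """Fill one accelerator slot: deadline-critical RAN work first, AI inference with what remains.

    ran_tasks: list of (cost_ms, name) Layer 1 tasks that must complete this slot.
    ai_tasks:  heap of (priority, cost_ms, name) inference jobs that can tolerate deferral.
    Returns the ordered list of tasks admitted to this slot.
    """
    admitted, used_ms = [], 0.0

    # RAN signal processing is non-negotiable: it defines the slot.
    for cost_ms, name in ran_tasks:
        admitted.append(name)
        used_ms += cost_ms

    # AI workloads soak up leftover capacity instead of letting the hardware idle.
    while ai_tasks and used_ms + ai_tasks[0][1] <= SLOT_BUDGET_MS:
        _, cost_ms, name = heapq.heappop(ai_tasks)
        admitted.append(name)
        used_ms += cost_ms

    return admitted

# Example: channel estimation and decoding always run; inference fills the remaining capacity.
ran = [(0.18, "channel_estimation"), (0.22, "ldpc_decode")]
ai = [(1, 0.05, "video_caption_batch"), (2, 0.12, "anomaly_detect")]
heapq.heapify(ai)
print(schedule_slot(ran, ai))  # ['channel_estimation', 'ldpc_decode', 'video_caption_batch']
```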
At T-Mobile’s AI-RAN Innovation Center in Bellevue, we are grounding this exploration in real engineering. The Innovation Center serves as a dedicated environment for ongoing research and development, where teams and partners can continuously test, refine and validate new network architectures through close collaboration. Working with partners including Nokia, Ericsson and NVIDIA, we are validating that RAN and AI workloads can operate side by side on commercial platforms using live spectrum, commercial radios and real devices, while also assessing the total cost of ownership.
These demonstrations matter not because they represent a finished system, but because they establish something foundational. The network can evolve into a multi-function, multi-cloud platform capable of supporting connectivity and intelligence simultaneously. That flexibility is a prerequisite for Physical AI.
With Nokia, we recently completed an industry-first demonstration call using commercial RAN software in which key Layer 1 workloads ran on GPU acceleration on the NVIDIA Grace Hopper platform. The call was executed over the air using live spectrum, a commercial C-band radio and a standard 5G smartphone, while the same GPU simultaneously supported an AI application delivering real-time video captioning. This demonstrated that AI and RAN workloads can coexist on shared infrastructure without compromising performance.
We achieved a similar milestone with Ericsson, completing a demonstration call using its commercial Cloud RAN software with forward error correction offloaded to GPU acceleration on the same NVIDIA platform, again using commercial radios and devices.
Momentum Matters
Some have asked whether AI belongs in the RAN now or whether it should wait for 6G. The answer is that the timing is exactly right.
As 6G standards are being defined, we have a rare opportunity to design intelligence into the network from the beginning rather than retrofit it later. Our nationwide 5G Standalone and 5G Advanced network gives us a living platform to demonstrate this today while building toward what comes next.
When we say our network is born ready for 6G, we mean that the architectural choices we made early (cloud-native cores, 5G Advanced operation and software-driven design) were made to support this convergence of AI, cloud and connectivity.
Setting Direction Through Proof
Leadership in this moment does not come from declarations. It comes from proof points earned through discipline, bold innovation and partnerships that bring complementary strengths together.
As Jensen Huang highlighted at T-Mobile’s Capital Markets Day: AI continues to transform every industry and will also revolutionize telecommunications. Like electricity and the internet, AI is essential infrastructure. Every consumer will use it, every company will be powered by it and every country will build it. Now intelligence is moving into the physical world with robots, autonomous vehicles and cities. A billion cars, billions of robots in the future, millions of factories and hundreds of millions of farms will all be connected to intelligence. AI will be distributed at the edge.
At T-Mobile we are building the network foundations Physical AI requires, engineered for real-time intelligence at the edge. Because the future of AI will not live only in the cloud. It will live in the physical world.