- Global Supply Chain Alliances Take Shape as Micro LED CPO Optical Transceiver Market Projected to Reach US$848 Million by 2030, Says TrendForce
May 11, 2026
TrendForce Corp.
TAIPEI, Taiwan, May 11, 2026 (GLOBE NEWSWIRE) -- TrendForce’s latest research into the Micro LED industry highlights how generative AI is driving rapid growth in demand for high-speed optical communications. Micro LED technology offers power consumption as low as 1–2 pJ/bit and ultra-low bit error rates (BER) of ≤10⁻¹⁰. It’s also emerging as one of the three major short-distance, high-speed intra-rack transmission solutions for scale-up data center networks, alongside active electrical cables (AEC) and VCSEL-based near-packaged optics (VCSEL NPO). As a result, TrendForce projects that the Micro LED CPO optical transceiver market will reach US$848 million by 2030.
Global supply chain players are expanding into optical communications and optical interconnects. Microsoft has introduced the MOSAIC Micro LED CPO architecture, while MediaTek provides integrated active optical cable (AOC) solutions. AEC leader Credo Technology Group acquired Hyperlume in 3Q25 to broaden its optical interconnect portfolio.
Startup Avicena has developed its ultra-low-power LightBundle™ technology and is preparing to launch a 512 Gbps Micro LED optical interconnect solution, with an 896 Gbps version scheduled for advancement in 2Q26 to further improve data transmission efficiency and power consumption.
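As a rough, back-of-the-envelope illustration (not from the source), the energy-per-bit and data-rate figures above can be combined to estimate transceiver power. The assumption that the cited 1–2 pJ/bit figure applies to the full link is mine:

```python
# Back-of-the-envelope link power from the energy-per-bit figures cited above.
# Assumption (mine, not from the source): the pJ/bit figure covers the whole link.

def link_power_watts(data_rate_gbps: float, energy_pj_per_bit: float) -> float:
    """Power (W) = data rate (bits/s) * energy per bit (J/bit)."""
    return data_rate_gbps * 1e9 * energy_pj_per_bit * 1e-12

# Avicena's planned 896 Gbps interconnect at the cited 1-2 pJ/bit range:
low = link_power_watts(896, 1.0)   # ~0.90 W
high = link_power_watts(896, 2.0)  # ~1.79 W
print(f"896 Gbps at 1-2 pJ/bit: {low:.2f}-{high:.2f} W")
```

Under those assumptions, an 896 Gbps Micro LED link would dissipate well under 2 W, which is the order of magnitude that makes the technology attractive for intra-rack scale-up networks.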
Meanwhile, ams OSRAM has signed a development agreement with a leading global AI data center infrastructure partner to accelerate the commercialization of Micro LED optical interconnects. Its in-house solution—targeted for launch in 2027—is expected to integrate Micro LED chips, optical components, and dedicated ASICs.
In Taiwan, AUO is combining technologies from Ennostar and Tyntek to launch Micro LED CPO on glass RDL interposers. This enables customers to adopt the technology without needing dedicated mass-transfer equipment. Innolux is also expected to utilize resources from bEMC to gradually enhance its vertical integration and competitive edge. PlayNitride has already partnered with Brillink to expand into this sector. In China, HC Semitek has collaborated with Shanghai New Vision Microelectronics to develop Micro LED optical interconnect technologies.
TrendForce highlights that as global alliances continue to take shape, defining product specifications and completing customer sample validation will still require time. Therefore, shipments of Micro LED CPO optical transceivers are expected to begin scaling significantly in the second half of 2028, eventually contributing approximately US$848 million in market value by 2030.
For more information on TrendForce’s optoelectronics reports and market data, please visit the Report Page, or email the Sales Department at OR_MI@trendforce.com.
For more on the latest technology industry news and trends, please visit News.
About TrendForce
TrendForce is a global leader in technology industry analysis and consulting services. With deep expertise spanning foundry, DRAM, HBM, NAND Flash, AI servers, robotics, near-eye displays, display panels, LEDs, MLCC, and green energy, it also offers in-depth research into key market drivers such as AI, automotive technologies, 5G/6G communications, LEO satellites, and the IoT.
Backed by a team of top industry professionals, TrendForce has been at the forefront of global market research for over 25 years. More than 60% of its clients are among the world’s top 500 companies. TrendForce’s global footprint includes Taipei, Shenzhen, Silicon Valley, New York, and Tokyo. With timely and strategic industry analysis, TrendForce delivers the critical information that empowers businesses to make smarter, faster decisions.
A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/a7310270-fdd4-474c-85bc-70a018d4a3bf
- Arm Is Quietly Becoming The CPU Backbone Of AI
May 8, 2026
Introduction
Over more than three decades, Arm evolved into a leading CPU architecture supplier for smartphones and embedded devices, licensing CPU architectures and core designs to customers such as Apple (AAPL), Qualcomm (QCOM), Nvidia (NVDA), Samsung, and MediaTek instead of manufacturing chips itself. In 2025, that business model began changing with the introduction of Arm’s AGI CPU program, under which the company started offering direct CPU products targeting hyperscale AI infrastructure, manufactured by undisclosed foundry partners. Arm nevertheless remains fabless.
Arm has long been associated with smartphones and low-power processors, but the company’s role in artificial intelligence is rapidly expanding far beyond mobile devices. While investors continue to focus primarily on GPUs and AI accelerators, Arm is increasingly positioning itself as the CPU architecture layer underneath the entire AI ecosystem, spanning hyperscale cloud infrastructure, edge AI devices, PCs, and physical AI systems.
In the past, I’ve written several articles centered on Arm and AI infrastructure, covering companies including Nvidia, Broadcom (AVGO), Qualcomm, and Advanced Micro Devices (AMD). In a July 2024 Seeking Alpha article entitled “Arm Holdings: A Complementary Investment To Nvidia,” I discussed how Arm’s Neoverse platform was steadily increasing its presence in AI data centers through relationships with hyperscalers such as Amazon AWS, Google Cloud, and Microsoft Azure, as well as Nvidia Grace-based systems.
Since its IPO, Arm has increasingly positioned itself not simply as an architecture supplier for smartphones and embedded devices, but as a central CPU platform for AI infrastructure.
According to CEO Rene Haas during the company's May 6, 2026 earnings call:
"As AI is moving from human-based queries to continuous agent-driven workloads, this shift is expanding the role of the CPU. These Agentic workloads require CPUs to coordinate tasks, move data, manage memory, enforce security and orchestrate workloads around accelerators."
Importantly, Nvidia remains the dominant GPU supplier, yet even Nvidia is increasingly relying on Arm-based CPUs underneath its AI systems.
Nvidia’s Vera CPU is based on Arm v9.2 architecture and is designed to work alongside Rubin GPUs in next-generation AI infrastructure systems. Although Nvidia designs the CPU internally, it relies on Arm's instruction-set architecture and is tightly integrated with Rubin GPUs for orchestration, memory management, networking, scheduling, and accelerator coordination.
Arm’s AI Total Addressable Market Forecast FY2026–FY2031
According to Chart 1, Arm estimates that its total AI-related addressable market will expand from approximately $535 billion in FY2026 to more than $1.5 trillion by FY2031. The largest expansion occurs within Cloud AI, where the XPU opportunity alone is projected to exceed $1 trillion by FY2031.
The chart also demonstrates that Arm’s opportunity extends across three major AI categories: Cloud AI, Edge AI, and Physical AI. While Edge AI remains an important revenue contributor, the long-term expansion of Cloud AI infrastructure is becoming the dominant growth driver for the company.
The most important aspect of the chart is the projected expansion of data-center CPU opportunity from approximately $50 billion to more than $100 billion by FY2031. This supports Arm’s argument that CPUs remain essential in AI systems despite the rapid growth of accelerators and GPUs.
Chart 1: Arm AI TAM Forecast FY2026–FY2031
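The growth rates implied by the Chart 1 figures can be checked with a quick calculation. Treating FY2026→FY2031 as a five-year compounding window is my assumption, not something stated in the chart:

```python
# Implied compound annual growth rates (CAGR) from the TAM figures cited above.
# Assumption (mine): FY2026 -> FY2031 is a five-year compounding window.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over a number of years."""
    return (end / start) ** (1 / years) - 1

total_tam = cagr(535, 1500, 5)  # total AI TAM, $B: ~22.9%/yr
dc_cpu = cagr(50, 100, 5)       # data-center CPU opportunity, $B: ~14.9%/yr
print(f"Total AI TAM CAGR:    {total_tam:.1%}")
print(f"Data-center CPU CAGR: {dc_cpu:.1%}")
```

In other words, even the "slower" data-center CPU slice of the forecast implies roughly 15% annual growth, which is the quantitative backbone of the article's claim that CPUs remain essential.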
Server CPU TAM and Market Share Forecast
According to Table 1, the AI data-center buildout is not eliminating CPUs but increasing their strategic importance. The total server CPU market is projected to expand from $27.7 billion in 2025 to $56.2 billion in 2028, while Arm’s server CPU market share rises from 13.4% to 23.1%.
The gains come primarily at the expense of Intel (INTC), whose market share declines from 52.0% to 43.9% over the same period. AMD’s share remains comparatively stable, while Arm gains share through the continued adoption of custom Arm-based CPUs by hyperscalers and accelerator vendors.
The shift reflects growing deployments of AWS Graviton, Google Axion, Microsoft Cobalt, Nvidia Grace and Vera, TPU host CPUs, and Trainium infrastructure. Increasingly, AI infrastructure is being built around heterogeneous systems where CPUs and accelerators work together rather than independently.
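A back-of-the-envelope sketch of what these shares imply in dollar terms; the simplifying assumption (mine) is that market share applies uniformly to the dollar TAM:

```python
# Implied Arm server-CPU revenue opportunity from the market sizes and shares
# cited above. Assumption (mine): share applies uniformly to the dollar TAM.

def implied_revenue(tam_billion: float, share: float) -> float:
    """Dollar opportunity ($B) = total market ($B) * market share."""
    return tam_billion * share

arm_2025 = implied_revenue(27.7, 0.134)  # ~$3.7B
arm_2028 = implied_revenue(56.2, 0.231)  # ~$13.0B
growth = (arm_2028 / arm_2025) ** (1 / 3) - 1  # three-year CAGR, ~52%/yr
print(f"Arm server-CPU opportunity 2025: ${arm_2025:.1f}B")
print(f"Arm server-CPU opportunity 2028: ${arm_2028:.1f}B")
print(f"Implied CAGR 2025-2028: {growth:.0%}")
```

The combination of a growing market and a growing share is what produces the outsized implied growth rate: Arm's dollar opportunity roughly triples while the market itself only doubles.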
Why AI Increases CPU Demand Rather Than Replacing CPUs
One of the most important themes emerging from Arm’s recent earnings call is that AI agents increase CPU demand rather than replace CPUs.
Historically, much of the market narrative implied that GPUs would dominate AI infrastructure while CPUs became secondary components. Arm is now arguing the opposite. As AI systems become increasingly agentic and autonomous, orchestration complexity rises dramatically. CPUs are responsible for task scheduling, memory management, networking, security, data movement, and coordination among accelerators.
During the conference call, CEO Haas repeatedly emphasized that in the data center, CPU core counts could ultimately become more important than chip counts themselves. Arm’s AGI CPUs feature 136 cores, while Nvidia’s Vera CPU contains 88 cores. Haas indicated that future generations could potentially double or quadruple those counts over time.
This thesis also aligns with broader hyperscaler trends. Google’s newest TPU systems now integrate Arm-based Axion CPUs. Nvidia’s Vera platform combines Arm CPUs with Rubin accelerators. AWS continues expanding Graviton alongside Trainium and Inferentia deployments. Microsoft is aggressively scaling Arm-based Cobalt systems throughout Azure.
The Shift Toward Heterogeneous AI Infrastructure
Arm’s growing importance reflects the broader transition toward heterogeneous computing infrastructure.
In earlier generations of AI systems, accelerators operated primarily alongside traditional x86 CPUs. Today, the architecture is evolving toward tightly integrated CPU-accelerator systems optimized around efficiency, memory bandwidth, networking, and orchestration.
This trend benefits Arm because its instruction set is optimized for high core-count efficiency and lower power consumption. According to management commentary, Google’s new TPU systems using Arm Axion CPUs improved overall platform performance by approximately 80% while reducing power consumption by roughly 50% compared with previous x86-based designs.
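Taken together, the two cited figures imply a large performance-per-watt gain. The assumption (mine, not from management) is that the +80% performance and −50% power figures apply to the same workload:

```python
# Implied performance-per-watt gain from the management figures cited above.
# Assumption (mine): the +80% performance and -50% power figures apply to
# the same workload, so they can be combined into a single ratio.

perf_multiple = 1.0 + 0.80    # ~80% higher platform performance
power_multiple = 1.0 - 0.50   # ~50% lower power consumption
perf_per_watt = perf_multiple / power_multiple  # ~3.6x
print(f"Implied perf/W vs. prior x86 design: {perf_per_watt:.1f}x")
```

Under that reading, the Axion-based TPU platforms deliver roughly 3.6x the performance per watt of the x86 designs they replaced, which is the efficiency argument underpinning Arm's data-center thesis.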
Management disclosed during the FY2026 Q4 earnings call that customer demand for AGI CPUs exceeded $2 billion across FY2027–FY2028 shortly after launch, although the company maintained a more conservative near-term revenue outlook of approximately $1 billion because of supply constraints involving wafers, packaging, memory, and testing capacity.
The market reacted negatively to the more conservative near-term revenue outlook, sending Arm shares down roughly 10% following the earnings call.
PC CPU Market Share and Revenue Share Forecast
According to Table 2, Arm is also steadily gaining share in AI PCs and edge devices. Arm’s PC CPU unit share is projected to rise from 13.6% in 2025 to 18.6% in 2028, while revenue share increases from 11.7% to 15.5%.
Intel’s dominance continues to erode in both units and revenue, while AMD gains more gradually. The market transition reflects increasing adoption of Arm-based AI PCs built around Qualcomm Snapdragon X processors, Apple’s M-series processors, and Microsoft’s Copilot+ ecosystem.
The expansion of local inference workloads further benefits Arm because on-device AI processing emphasizes power efficiency and integrated neural processing rather than maximum raw compute alone.
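One detail worth extracting from the Table 2 figures: Arm's revenue share trails its unit share in both years. Under the simplifying assumption (mine) that revenue share divided by unit share approximates average selling price relative to the market average, Arm-based PCs sell below the market-average price point:

```python
# Relative ASP implied by the unit-share and revenue-share figures cited above.
# Assumption (mine): revenue share / unit share approximates ASP relative to
# the market-average selling price.

def relative_asp(revenue_share: float, unit_share: float) -> float:
    """Implied ASP as a multiple of the market-average ASP."""
    return revenue_share / unit_share

asp_2025 = relative_asp(0.117, 0.136)  # ~0.86x market-average ASP
asp_2028 = relative_asp(0.155, 0.186)  # ~0.83x
print(f"Arm PC relative ASP 2025: {asp_2025:.2f}x")
print(f"Arm PC relative ASP 2028: {asp_2028:.2f}x")
```

This is consistent with Arm-based AI PCs competing partly on price and efficiency rather than purely on premium positioning.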
Arm’s Position In Edge AI And AI PCs
Edge AI remains an important long-term opportunity for Arm even as Cloud AI becomes the dominant revenue driver.
Arm architecture already powers the overwhelming majority of smartphones globally, while Apple (AAPL), Samsung, Qualcomm, and MediaTek continue building increasingly advanced AI functionality around Arm-based designs.
In a November 16, 2025 Substack article entitled “Arm at the Edge – Apple’s AI Paradox,” I discussed how Apple’s relatively conservative AI implementation demonstrated the difference between architecture and implementation. While Apple’s processors rely on Arm architecture, competing Android platforms increasingly emphasize larger neural processing blocks and local generative AI workloads.
Importantly, Arm benefits regardless of which OEM ultimately delivers superior AI functionality because both high-volume and high-performance devices continue paying royalties back to Arm’s architecture ecosystem.
Physical AI Expands Arm’s Long-Term Opportunity
One of the least appreciated components of Arm’s opportunity may be Physical AI.
According to the company’s projections, Physical AI TAM could expand from approximately $25 billion in FY2026 to roughly $50 billion by FY2031. This category includes robotics, autonomous systems, industrial automation, automotive AI, sensors, drones, and humanoid robotics.
Unlike cloud AI systems, physical AI devices require extremely efficient local processing, low latency, low power consumption, and scalable edge inference. These are areas where Arm architecture has historically maintained strong advantages.
As AI increasingly migrates from cloud-only deployments toward distributed intelligent systems, Arm’s architectural footprint across embedded devices may become increasingly valuable.
Investor Takeaway
Arm’s investment thesis has evolved significantly beyond smartphones and low-power mobile processors.
The company is increasingly becoming the CPU architecture layer underneath hyperscale AI infrastructure itself. AI agents, heterogeneous systems, and accelerator orchestration are increasing CPU importance rather than reducing it. This trend directly benefits Arm through deployments across AWS Graviton, Google Axion, Nvidia Grace and Vera, Microsoft Cobalt, TPU host CPUs, and AI edge devices.
Chart 1 above demonstrates that Arm’s addressable AI opportunity could expand from roughly $535 billion to more than $1.5 trillion by FY2031. Meanwhile, Table 1 above shows Arm steadily gaining server CPU share as hyperscalers continue migrating away from traditional x86 infrastructure toward custom Arm-based designs.
The market continues focusing overwhelmingly on GPUs and accelerators. However, CPUs remain essential components of AI infrastructure, particularly as AI systems become increasingly agentic and orchestration-intensive. Arm’s architecture is positioning itself directly in the middle of that transition.
While valuation remains elevated and royalty monetization still lags licensing momentum, Arm increasingly appears less like a smartphone IP company and more like a foundational AI infrastructure platform.
- RadixArk Launches with $100 Million in Seed Funding Led by Accel to Grow SGLang and Democratize Frontier AI Infrastructure
May 5, 2026
Founded by the creators and core maintainers of SGLang, the open-source inference engine powering trillions of tokens daily for Google, Microsoft, NVIDIA, Oracle, AMD, Nebius, LinkedIn, xAI, Thinking Machines Lab, and humans&, RadixArk emerges to build frontier AI infrastructure for all
PALO ALTO, Calif., May 05, 2026--(BUSINESS WIRE)--RadixArk, the company democratizing access to frontier AI infrastructure, launched today with $100 million in Seed funding at a $400 million post-money valuation. The round was led by Accel and co-led by Spark Capital, with participation from NVentures (NVIDIA’s venture capital arm), Salience Capital, A&E Investments, HOF Capital, Walden Catalyst Ventures, AMD, LDV Partners, WTT Investment, and MediaTek.
Other investors include Igor Babuschkin (Co-Founder of xAI), Lip-Bu Tan (CEO of Intel), Hock Tan (CEO of Broadcom), John Schulman (Co-Founder of OpenAI and Thinking Machines Lab), Soumith Chintala (PyTorch creator and CTO of Thinking Machines Lab), Olivier Pomel (Co-Founder of Datadog), Thomas Wolf (Co-Founder of Hugging Face), William Fedus (Co-Founder of Periodic Labs), Robert Nishihara (Co-Founder of Anyscale), Eric Zelikman (Co-Founder of humans&), and Logan Kilpatrick (Gemini Product Lead). The company will use the capital to grow SGLang, accelerate support for emerging model architectures and frontier hardware, and build large-scale inference and training infrastructure for the next generation of AI applications.
RadixArk was founded by Ying Sheng and Banghua Zhu, AI infrastructure and modeling veterans from xAI and NVIDIA. In 2023, Sheng and others created SGLang, an open-source inference engine for serving models at scale. SGLang quickly became a de facto open-source standard, stewarded by a global community of thousands of contributors across hundreds of companies, universities, and research organizations. SGLang is now deployed across hundreds of thousands of GPUs worldwide and generates trillions of tokens daily for Google, Microsoft, NVIDIA, Oracle, AMD, Nebius, LinkedIn, xAI, Thinking Machines Lab, and humans&.
Today, the most sophisticated AI infrastructure is only available to a handful of companies. Neo-labs must rebuild core training and inference stacks from scratch, while infrastructure teams at every company from enterprises to startups are understaffed and underresourced. The result is enormous waste from duplicated effort, siloed research insights, and impeded progress for the entire AI ecosystem. By treating infrastructure as a first-class priority, RadixArk delivers the foundational open systems needed to build the next generation of AI.
"Our mission is simple yet ambitious: make frontier-level AI infrastructure open and accessible to everyone," said Ying Sheng, co-founder and CEO of RadixArk. "We believe the next generation of AI won’t be defined by who owns the biggest private infrastructure, but by who builds the most meaningful applications on top of shared, world-class systems. We aim to make these systems orders of magnitude cheaper and more accessible, so everyone can build on them."
RadixArk will go beyond traditional inference solutions that offer compute access for off-the-shelf or open-source models. Instead, the company is building an end-to-end platform that supports the full lifecycle of model development, including training proprietary models, fine-tuning open models, running reinforcement learning, and deploying and running inference at scale. By standardizing on a single platform, RadixArk customers maintain ownership and control of their models while having access to best-in-class infrastructure primitives.
"RadixArk is building the open foundation for the next era of AI—where companies don’t just consume models, they train and manage them as a core part of product development," said Ivan Zhou, partner at Accel. "By democratizing training and inference infrastructure, RadixArk enables any engineer to experiment and innovate at the frontier, fully owning how AI powers their products."
RadixArk’s platform is built on battle-tested, open foundations across the AI stack. Inference runs on SGLang, the fastest and most flexible open engine for modern models, while reinforcement learning is powered by Miles, the company’s open-source framework for large-scale training. SGLang was incubated at LMSys, a nonprofit organization founded by researchers from Stanford, Carnegie Mellon, UC Berkeley, and other universities.
"Some of the most important software of the last decade started as open-source projects run by small groups of researchers who refused to compromise on quality. SGLang sits in that lineage: born at LMSys, maintained by thousands of contributors, now the de facto standard for modern inference. RadixArk carries that same spirit into a company, and we're honored to help Ying and Banghua scale it," said Arpan Shah, general partner at Spark Capital. "Frontier AI is at risk of becoming the private infrastructure of a handful of companies. RadixArk is the counterweight: a belief that the next generation of AI products will be built on open, shared systems that any team can run, tune, and own."
SGLang has day-0 support for virtually every open model family (Llama, Qwen, DeepSeek, Kimi, GLM, GPT, Gemma, Mistral, etc.) and hardware provider (NVIDIA GPUs, AMD GPUs, Intel CPUs, Google TPUs, etc.). Together, these frameworks are the starting point for a suite of managed infrastructure and tooling that supports everyone building AI systems, from individual developers to startups, enterprises, and research labs.
"SGLang is the absolute best inference framework for large language models," said Igor Babuschkin, Co-Founder of xAI and RadixArk angel investor. "It was a crucial part of the infrastructure at xAI, because it enabled folks to run large models faster and more efficiently than many alternatives. I’m excited to see Ying and Banghua expand that vision with RadixArk."
"Durable technology shifts are built on infrastructure that empowers entire ecosystems, not just individual companies," said Lip-Bu Tan, CEO of Intel and RadixArk angel investor. "RadixArk has a compelling mission to build the next generation AI infrastructure stack, and SGLang is already emerging as a dominant inference engine for large models. I’m glad to support the company as an early investor."
About RadixArk
RadixArk is an AI infrastructure company building open, scalable systems for training, deploying, and running frontier models. Founded by the creators and core maintainers behind SGLang — the open-source inference engine serving trillions of tokens daily — RadixArk is building an end-to-end infrastructure platform, treating inference, training, and post-training as core first-class citizens. The company builds on two open-source foundations: SGLang for inference and Miles for reinforcement learning. On top of these, RadixArk ships managed infrastructure and tooling that enable developers, startups, enterprises, and research labs to build and operate advanced AI systems with greater speed, control, and performance.
Founded by AI infrastructure and modeling veterans Ying Sheng and Banghua Zhu, RadixArk has raised $100 million in funding from Accel, Spark Capital, NVentures (NVIDIA’s venture capital arm), Salience Capital, A&E Investments, HOF Capital, Walden Catalyst Ventures, AMD, LDV Partners, WTT Investment, MediaTek, Igor Babuschkin (Co-Founder of xAI), Lip-Bu Tan (CEO of Intel), Hock Tan (CEO of Broadcom), John Schulman (Co-Founder of OpenAI and Thinking Machines Lab), Soumith Chintala (PyTorch creator and CTO of Thinking Machines Lab), Olivier Pomel (Co-Founder of Datadog), Thomas Wolf (Co-Founder of Hugging Face), William Fedus (Co-Founder of Periodic Labs), Robert Nishihara (Co-Founder of Anyscale), Eric Zelikman (Co-Founder of humans&), and Logan Kilpatrick (Gemini Product Lead). Learn more at radixark.com.
View source version on businesswire.com: https://www.businesswire.com/news/home/20260505077157/en/
Contacts
Press Contact
Kira Wolfe
Command for RadixArk
kira@heycommand.com
- MediaTek hires former TSMC executive to boost AI chip packaging
May 4, 2026
By Wen-Yee Lee
TAIPEI, May 4 (Reuters) - Taiwanese chip designer MediaTek said it has appointed former TSMC executive Douglas Yu as a part-time adviser as it steps up advanced packaging work and expands into the AI chip market.
Here are a few details:
• Yu joined TSMC in 1994 and retired in 2025, holding a range of roles in backend research and development. He played a key part in developing TSMC's advanced packaging technologies, including its CoWoS (Chip on Wafer on Substrate).
• CoWoS is a key chip packaging technology widely used in artificial intelligence chips, including Nvidia chips.
• "We look forward to leveraging his extensive industry experience and technical expertise to support the company's exploration and roadmap planning for future advanced packaging technologies, as well as to guide our R&D and investment strategy in advanced packaging-related products and technologies associated with TSMC," MediaTek said in a statement on Saturday.
• TSMC's CoWoS capacity has been in high demand, with customers such as Nvidia and cloud service providers scrambling to secure capacity.
• Last week, MediaTek said it expects to generate multiple billions of dollars in revenue from its AI accelerator ASIC chips by 2027.
(Reporting by Wen-Yee Lee; Editing by Sherry Jacob-Phillips)
- OpenAI Is Building What Comes After the iPhone
Apr 30, 2026
The smartphone has been the center of the consumer technology universe for nearly 20 years — a stalwart fixture in an industry rife with flash-in-the-pan fads.
That’s because it was not just a product. It was a platform. And the investment universe reorganized around it.
Apple (AAPL) became the most valuable company on Earth. Amazon (AMZN) transformed from a bookstore into the world’s largest retailer. Facebook reinvented itself as Meta (META), the kingpin of social media. Google cemented its grip on how the world finds information. All of it ran through the same chokepoint: the smartphone screen.
But now we’re catching glimpses of a future no longer ruled by the traditional smartphone — because it is no longer the obvious endpoint of consumer computing.
The OpenAI-Qualcomm Partnership Signals a New Computing Era
According to reports leaked earlier this week, OpenAI has tapped Qualcomm (QCOM) and MediaTek to develop chips for an AI-first consumer device, with Luxshare — a major player in Apple’s supply chain — “filling the role of exclusive system co-design and manufacturing partner.”
The device isn’t expected to reach mass production until 2028. But the signal here is still enormous.
OpenAI is trying to build the next consumer computing interface — and it’s already picking its manufacturing partners.
That is how platform shifts usually begin.
A new device starts as a curiosity, then becomes a companion, then becomes the default interface. At first, people ask why anyone needs it. A few years later, they question how anyone lived without it. The smartphone did that to the PC. AI can do that to the traditional smartphone.
The Rise of Ambient AI Devices
The traditional smartphone era is centered around on-screen intelligence. You pull out your ‘pocket computer,’ open an app, type something, tap something, scroll something, buy something – then usually repeat that ritual hundreds of times per day.
The AI era is different. It’s all about ambient intelligence: intelligence that can meet the user in the real world, see what they see, hear what they hear, understand context, and act on their behalf.
That requires a different kind of device — which means the ubiquitous consumer electronic device of the future will be an AI-native device designed around continuous context. It may look like glasses, earbuds, a pendant, a wearable, or a phone-like companion device.
Eventually, it will evolve into something even more ambient. But whatever the shape, the AI-native device will be designed around agents.
The user gives intent. The agent does the work. The system handles execution.
That kind of computing works best when freed from an app grid: present, contextual, multimodal, and persistent. It wants ‘eyes,’ ‘ears,’ local processing, connectivity, and cloud access.
That brings us to Qualcomm.
Why Qualcomm Could Win the AI Device Era
The irony here is that Qualcomm — the company that made its fortune on the back of the smartphone boom – may be one of the biggest winners of the ambient-AI era. Because the thing replacing the traditional smartphone will still need almost everything Qualcomm is good at — and probably more of it.
An AI-native mobile device needs low power consumption, local AI inference, memory efficiency, wireless connectivity, camera/audio/sensor integration, and thermal discipline. It needs to run small models on-device while handing bigger reasoning workloads to frontier models in the cloud.
That is not a side quest for Qualcomm. That is Qualcomm’s resume – and it’s why the OpenAI report matters so much.
It reframes Qualcomm from a mature handset chip supplier into a potential platform company for the next device era.
That is the investment opportunity.
The bear case on Qualcomm has always been that smartphones are mature. Unit growth is slow. Apple is trying to bring more silicon in-house. The Android market is cyclical. Margins can be pressured.
Wall Street has treated Qualcomm like a company tied to the last platform shift. But what if it’s actually tied to the next one?
The AI-native device thesis says Qualcomm’s future is not fewer phones but more intelligent devices.
Phones, glasses, earbuds, wearables, robots, drones, vehicles, industrial machines, smart home devices – all will become AI endpoints. All will require efficient edge AI compute, connectivity, and the ability to sense the physical world in real time.
In that world, Qualcomm is a toll road.
And if the ambient-AI mobile device category grows faster than the legacy smartphone category declines, Qualcomm’s growth profile could actually accelerate.
This is why QCOM stock is the most obvious buy for this theme.
But it is not the only one…
AI Stocks Positioned for the Post-Smartphone Era
If AI is about to transform the smartphone era, the entire hardware stack will get repriced.
Coherent (COHR) and Lumentum (LITE) are worth watching. Ambient AI devices will need optical systems, lasers, and sensing components that go well beyond what data center optics demand — and these two are among the few pure-play suppliers of that hardware. Taiwan Semiconductor (TSM) is a natural toll road. The advanced foundry king remains deeply embedded in the global AI semiconductor supply chain — and every advanced chip in this new device category still runs through its fabs. Arm (ARM) matters because the future of ambient AI will be built around power-efficient compute. The device needs efficient architectures, and Arm remains central to that universe. Meta is a major platform play because it is already pushing AI glasses into the market through Ray-Ban Meta. If AI glasses become the first successful post-smartphone form factor, Meta may own one of the earliest mainstream distribution channels for ambient AI. Google’s Gemini, Android, Maps, Search, YouTube, Gmail, and Android XR give it a natural path into AI-native devices. The opportunity is enormous — but so is the risk. If agents replace search as the default interface, Google’s core business gets abstracted away. If Google wins the agent layer instead, it becomes the operating system for the physical world. Apple is the giant question mark. It has the consumer trust, design talent, installed base, wearables franchise, silicon capability, and retail distribution to win the next era. But it also has the most to lose if the moat shifts from hardware plus apps to hardware plus agents. Right now, OpenAI has the agent mindshare.
Every one of these companies is circling the same question: who owns the interface when the screen is no longer the center? The OpenAI-Qualcomm report is the first concrete answer. That is why this reported deal feels so important.
It is not really about one future device in 2028. It is about the direction of travel.
The Bottom Line: AI Is Moving Off the Screen
AI is moving from the cloud into the physical. From apps into agents. From screen-based into ambient.
The first phase of the AI boom was about training giant models in giant data centers. Nvidia (NVDA) won because intelligence needed a cloud-based brain.
The next phase is about deploying intelligence into billions of real-world devices. Qualcomm can win because intelligence now needs a body.
OpenAI is trying to build the next great consumer device. Qualcomm may be supplying the silicon foundation for it. And investors who still think of QCOM as just a boring old smartphone chip stock may be missing one of the most important AI hardware pivots hiding in plain sight.
The difference between those two eras isn’t just growth — it’s the difference between supplying one device category and supplying every intelligent endpoint on the planet.
Be early to the next choke point.
For nearly 20 years, the smartphone controlled the flow — of information, attention, and trillions of dollars. The companies that owned that screen captured the upside.
Now that control layer is shifting.
As computing moves off the screen and into the real world, the winners won’t just be the companies building new devices but the ones that control what those devices do — especially when it comes to money…
Because every new interface needs a financial backbone.
That’s exactly what Elon Musk is building inside X: a system designed to move, store, and deploy your money instantly, without the friction of traditional banks or apps.
Two massive shifts are colliding at once: a new way to interact with technology and a new way money moves through it.
If you can see how those pieces connect, you’re already ahead of the market.
Click here to see how we’re positioning for it — and the key opportunities we’re tracking before they go mainstream.
The post OpenAI Is Building What Comes After the iPhone appeared first on InvestorPlace.
- QUALCOMM Stock Jumps Double Digits After Surprise AI Chip Report
Apr 30, 2026
With stock buybacks of $9.91 billion in the 12 months through September 2025, QUALCOMM Incorporated (NASDAQ:QCOM) is among the 20 Stocks with the Biggest Share Buybacks.
QUALCOMM Incorporated (NASDAQ:QCOM) shares surged on April 27 after analyst Ming-Chi Kuo stated on social media that industry checks indicate OpenAI is working with MediaTek Inc. and Qualcomm to develop smartphone processors. Following the report, Qualcomm shares rose 11%, or $16.65, to $165.50 in premarket trading, reflecting investor enthusiasm over a potential high-profile AI-driven mobile chip opportunity.
Earlier that same day, analyst Ming-Chi Kuo said via X that, according to his latest supply-chain checks, OpenAI is collaborating with MediaTek and QUALCOMM Incorporated (NASDAQ:QCOM) on smartphone processors, with Luxshare Precision Industry Co., Ltd. serving as the exclusive system co-design and manufacturing partner. Kuo added that mass production is expected in 2028 and that Qualcomm and MediaTek could benefit from long-term device replacement demand.
QUALCOMM Incorporated (NASDAQ:QCOM) is a leading American technology company focused on wireless innovation, semiconductors, and mobile connectivity solutions. Headquartered in San Diego, California, the company was founded in 1985 and remains a core supplier of modem, processor, and communications technologies used across the global smartphone ecosystem.
Potential involvement in next-generation AI smartphones could open a new growth avenue for Qualcomm while reinforcing its leadership in premium mobile chipsets. Combined with $9.91 billion of stock buybacks over the prior twelve months, the company offers investors a compelling mix of innovation exposure and capital returns.
While we acknowledge the potential of QCOM as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk.
Disclosure: None.
- No question the AI megatrend continues, Taiwan's MediaTek says
Apr 30, 2026
TAIPEI, April 30 (Reuters) - The CEO of MediaTek, Taiwan's top chip design company, on Thursday said he has no doubts about the strength of the artificial intelligence wave, adding that demand for data centres is accelerating.
Taiwanese tech companies like MediaTek and TSMC, the world's largest contract chipmaker, have reported surging business thanks to the AI boom, despite recent worries among market participants that tech firms' breakneck spending would not yield sufficient returns.
On an earnings call, MediaTek's Rick Tsai said demand momentum is particularly strong for AI data centres.
"Everyone can see that demand for data centres continues to grow and if anything to accelerate," he said. "There is no question that the AI megatrend continues."
Tsai added that by 2027, MediaTek expects to generate revenue of multiple billions of dollars from its AI accelerator ASIC chips.
The market size for data centre ASIC chips is now estimated to be $70 billion to $80 billion in 2027, he said, up from a previous estimate of $50 billion to $70 billion.
MediaTek is a customer of TSMC, which earlier this month said its first-quarter profit rose 58% to a record high, beating estimates.
Tsai's comments added to other bullish remarks from companies about AI.
Alphabet topped Wall Street estimates for quarterly revenue on Wednesday, as enterprise spending on AI delivered the best quarter of reported growth for its cloud unit yet.
South Korea's Samsung Electronics, the world's largest memory chipmaker, earlier on Thursday said first-quarter operating profit jumped eightfold to a record, underpinned by higher chip prices as the AI boom led to a supply crunch.
MediaTek is the third most valuable company on the Taiwan stock exchange with a market capitalisation of $131 billion.
On Thursday, MediaTek reported first-quarter revenue of T$149.15 billion ($4.71 billion), a 2.7% drop from a year earlier, while net income was down 17.4% to T$24.38 billion.
It blamed the revenue fall on a decline in its mobile phone business, which offset revenue growth for Smart Edge Platforms, the segment that includes chips for AI servers.
MediaTek shares have surged 83% this year, outperforming a 34% rise for the benchmark index. The stock closed up 1.4% on Thursday ahead of its earnings release.
($1 = 31.6940 Taiwan dollars)
(Reporting by Ben Blanchard; Additional reporting by Wen-Yee Lee; Editing by Thomas Derpinghaus)
- Your Smartphone Could Soon Be an AI Agent, and Qualcomm Stock Is Positioned to Profit Big
Apr 29, 2026
Semiconductor stocks are not in an easy environment right now. A global memory shortage has squeezed smartphone production, analysts have been slashing targets, and investors have been running scared from anything with handset exposure.
Now, Qualcomm (QCOM) is suddenly in the spotlight for a very different reason. TF International Securities analyst Ming-Chi Kuo says OpenAI is working with Qualcomm and MediaTek to develop processors for an AI agent smartphone. This would be a phone with no apps, where AI agents do everything for you.
That is not automatically a business-altering event overnight, but it does add a new layer of excitement around a stock that has been beaten down.
Why This AI Agent Phone News Matters for Qualcomm
Snapdragon processors and modems are found inside most high-end Android devices, but Qualcomm's smartphone empire is not out of the woods. Memory-chip shortages have curtailed handset production, while Apple's (AAPL) efforts to replace Qualcomm's modem chips in its own devices cloud the long-term outlook.
An OpenAI partnership that translates into actual product plans would be a cleaner story. The prospect of chips in AI-first devices entering mass production in 2028 draws the eye away from short-term handset woes and toward a potential edge-AI business. Still, the market's muted follow-through after the initial pop suggests investors want more than unconfirmed supply-chain chatter.
How Did Qualcomm Stock Perform?
Amid the broader pressure, QCOM stock has been hit harder than many peers. Shares tumbled as much as 25% earlier in 2026 before clawing back, and as of late April the stock was still down 8.62% year-to-date (YTD). The pain came from a nasty memory chip shortage: DRAM suppliers are funneling capacity toward high-bandwidth memory for AI data centers, leaving fewer memory chips for smartphone makers. Fewer phones mean fewer Snapdragon processors sold.
On top of that, Apple is building its own modem chip and will eventually stop paying Qualcomm. That double whammy crushed sentiment. Over the past twelve months, the stock has gained only 6.42%.
From a valuation perspective, Qualcomm does not look extreme in either direction. QCOM stock trades at 30.62 times trailing earnings, which looks expensive at first glance. But the forward price-to-earnings multiple sits at 17.69 times, well below the semiconductor sector median, which often runs in the low 20s. The market is not treating Qualcomm as a high-growth story right now, but it is also not pricing in a total collapse.
Qualcomm Faces Earnings Pressure
Qualcomm is set to report its Q1 earnings today after the close, and Wall Street expects revenue to fall 3.6% to $10.58 billion, with adjusted EPS slipping 10.2% to $2.56. The bar looks a bit lower after the company’s February guidance missed estimates and raised concerns that a memory chip shortage is hitting handset production.
J.P. Morgan cut the stock to “Neutral” from “Overweight” and lowered its target to $140, while Barclays also turned cautious on what it called a difficult handset environment. JPM sees Qualcomm’s QCT handset business falling 22% in calendar 2026, versus 17% consensus, as Apple and Samsung remain longer-term headwinds.
Investors will be watching whether automotive, IoT, and AI can help soften the smartphone slowdown. Qualcomm’s results also serve as a read on broader personal-electronics chip demand, and the company has been leaning more heavily on diversification beyond handsets.
What Do Analysts Think of QCOM Stock?
Wall Street is deeply divided on Qualcomm. Barclays analyst Thomas O’Malley resumed coverage with an “Underperform” rating and a $130 price target. He warned that memory shortages will lead to a double-digit contraction in handset volumes this year and said the company still needs to prove its data center story.
JPMorgan, which cut its price target from $185 to $140 alongside its downgrade, pointed to slow diversification beyond smartphones and a lack of near-term catalysts.
Morgan Stanley initiated coverage with an “Underweight” rating and a $132 target, highlighting lingering margin concerns. On the flip side, Loop Capital upgraded Qualcomm to a “Strong Buy” with a $185 target, calling the sell-off overdone. Wells Fargo upgraded from “Underweight” to “Equal Weight” and lifted its target to $150.
Even so, the aggregate view is cautious but not outright bearish. Based on current data, Qualcomm carries a consensus “Hold” rating. The average price target sits at $155.85, which implies a 4% downside, but a $205 Street-high target shows potential 31.1% upside from here.
On the date of publication, Nauman Khan did not have (either directly or indirectly) positions in any of the securities mentioned in this article. All information and data in this article is solely for informational purposes. This article was originally published on Barchart.com
- Analysts skeptical Qualcomm stock rally can continue
Apr 28, 2026
Investing.com -- Qualcomm shares surged over 11% on Friday before adding a further 1% on Monday, but analysts are urging caution, suggesting the rally owes more to short-covering than improving fundamentals heading into Wednesday's earnings report.
The catalyst for the latest leg higher was an unconfirmed report out of Asia suggesting Qualcomm and MediaTek are working with OpenAI on a processor for an upcoming smartphone, with 2028 production cited as the target timeline.
Wolfe Research analyst Chris Caso acknowledged the move but was measured in his assessment.
"We would be similarly skeptical that an OpenAI phone would materialize into much of a catalyst," he wrote, noting that iPhone and Android's grip on the market "would be tough to overcome."
Caso also cautioned that if OpenAI were to take share from Android, it would only cannibalize Qualcomm's existing business rather than generate incremental revenue.
Bernstein analyst Stacy Rasgon, who rates Qualcomm Market Perform with a $140 price target, echoed that skepticism.
"The responses feel a bit more 'squeeze-y' than fundamental to us at this point," he wrote. Rasgon models full-year fiscal 2026 earnings per share of $10.64, below the Street's $11.02, citing weakening smartphone dynamics and the beginning of a significant Apple content roll-off into year-end.
Mizuho TMT specialist Jordan Klein was the most direct, saying he remains negative on Qualcomm and characterizing the move as a "massive short squeeze and dumb retail chasers."
Klein flagged surging memory prices as a headwind to smartphone demand in the second half of 2026 and warned that Qualcomm's April 29 earnings guidance could serve as a downside catalyst.
- Qualcomm Jumps 8.2% After Report Signals OpenAI Smartphone Chip Collaboration
Apr 28, 2026
This article first appeared on GuruFocus.
Qualcomm (NASDAQ:QCOM) shares jumped 8.2% on Monday after TF International Securities analyst Ming-Chi Kuo indicated that industry checks point to a potential collaboration involving OpenAI, MediaTek, and Qualcomm to develop smartphone processors tied to an AI-focused device. Kuo added that Luxshare Precision Industry could act as the exclusive system co-design and manufacturing partner, a detail that coincided with Luxshare shares rising as much as 10% in Shenzhen trading. At the same time, Apple Inc. shares fell as much as 2.2% in New York, suggesting investors could be reassessing competitive dynamics if AI-native smartphones begin to take shape.
According to Kuo, the project appears to be positioned as a longer-term effort, with mass production expected in 2028, while specifications and supplier decisions could be finalized by late 2026 or the first quarter of 2027. None of the companies involved immediately responded to requests for comment, leaving the scope and structure of the collaboration unconfirmed. Still, the combination of a leading AI developer alongside multiple chipmakers could point to a broader push toward embedding more advanced AI capabilities directly into consumer hardware over time.
The timing of the report is notable for Qualcomm, which has faced uneven momentum this year despite a rebound of about 28% from a recent low through Friday's close, while still remaining down 7% in 2026. That performance places it as the weakest component in the Philadelphia Semiconductor Index, which has risen nearly 50% this year following an 18-session winning streak. Part of the pressure appears tied to strong demand for memory driven by AI data center buildouts, which has constrained supply and increased costs for consumer electronics makers, dynamics that could continue to weigh on Qualcomm as it heads into its second-quarter earnings release scheduled after Wednesday's market close.