The AI effect: Large language models aren’t only driving semiconductor demand

Vijay Rakesh, Senior Semiconductors and Automotive Technologies Analyst
August 11, 2023

The launch of ChatGPT last fall was a watershed moment in the development of artificial intelligence, with some analysts comparing it to the arrival of cloud computing or even the creation of the Internet. Developers are now racing to build Large Language Models (LLMs), the generative AI technology behind ChatGPT, to tackle everything from consumer healthcare and drug development to weather forecasting, stock trading and more. But to many investors, one beneficiary of AI's growth stands out: the semiconductor industry.

It’s true that LLMs require immense processing power that will drive demand for semiconductors, but not every type of processor is right for the task. What’s more, designing, training, and operating LLMs relies on a broader technology ecosystem built for AI applications, including specialized software, storage capacity, and networking components that can handle vast quantities of data at high speeds. Investors therefore must examine the AI technology landscape carefully when looking for the strongest opportunities.

A computing explosion

Today’s large language models work by training on vast amounts of data, tuning internal parameters until the model captures the connections and patterns it then uses to make predictions and generate outputs. Two years ago, a model might have run 50-70 billion parameters; today, the largest models use trillions.
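
To put those figures in perspective, here is a rough back-of-envelope sketch. The calculation is illustrative only and not drawn from this article; it simply shows how much memory is needed just to hold model weights at these scales.

```python
# Illustrative back-of-envelope sketch (not from the article): memory needed
# just to store a model's parameters at 16-bit precision.

def param_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Gigabytes needed to hold the weights alone.

    bytes_per_param: 2 for 16-bit floats (FP16/BF16), 4 for 32-bit floats.
    """
    return num_params * bytes_per_param / 1e9

for label, n in [("70 billion parameters", 70e9), ("1 trillion parameters", 1e12)]:
    print(f"{label}: ~{param_memory_gb(n):,.0f} GB of weights in 16-bit precision")

# 70B parameters -> ~140 GB; 1T parameters -> ~2,000 GB, far more than any single
# processor can hold, which is why training is spread across many accelerators.
```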
 
Running a model with this level of sophistication requires a tremendous amount of processing power. Servers built around CPUs (central processing units) would take weeks or months to work through trillions of parameters during an LLM training run. Instead, LLM developers are moving from CPUs to GPUs (graphics processing units), which offer a much faster solution. Thanks to their parallel architecture, which harnesses thousands of cores to perform many calculations simultaneously across data streams, GPUs can work through trillions of parameters in a day, sharply accelerating LLM training time.
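
The serial-versus-parallel contrast can be seen even on a laptop. The sketch below uses NumPy on a CPU purely as a stand-in for the idea: an element-by-element loop mimics a single scalar core, while the vectorized call batches the whole calculation the way a GPU spreads it across thousands of cores.

```python
# Minimal sketch of the serial-vs-parallel idea described above, using NumPy
# as an analogy (a real GPU distributes the work via frameworks such as CUDA or ROCm).
import time
import numpy as np

x = np.random.rand(2_000_000).astype(np.float32)
w = np.random.rand(2_000_000).astype(np.float32)

# Serial: one multiply-accumulate at a time, the way a single scalar core works.
start = time.perf_counter()
acc = 0.0
for i in range(len(x)):
    acc += x[i] * w[i]
serial_s = time.perf_counter() - start

# Parallel/vectorized: the whole array handled as one batched operation.
start = time.perf_counter()
acc_vec = float(np.dot(x, w))
vector_s = time.perf_counter() - start

print(f"serial loop: {serial_s:.3f} s")
print(f"vectorized:  {vector_s:.4f} s  (~{serial_s / vector_s:.0f}x faster)")
```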

The most obvious beneficiaries of this demand are GPU hardware manufacturers like NVIDIA (NVDA) and Advanced Micro Devices (AMD). Although AMD is making inroads with its AI GPUs, which we believe could add $1.1 billion in revenues in FY 2024, NVIDIA remains a clear leader in the AI market because of its combined GPU hardware and software capabilities. With over 75% market share, NVDA could drive approximately $300 billion in AI-specific revenue by the end of 2027, in our view.

The key issue to understand is that AI development requires specialized expertise to create models that can handle varied data formats, including text, audio and video, and work in multiple operating environments, such as enterprise databases and public or private clouds. NVIDIA offers a mature programming platform called CUDA to help build and train LLMs, providing an attractive option for companies that aren’t software development experts, such as biotech, pharmaceutical and energy companies, that want to accelerate development of their in-house LLMs.
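
In practice, most teams tap CUDA indirectly through a high-level framework rather than writing GPU kernels themselves. The sketch below assumes PyTorch, which is one common choice built on NVIDIA's CUDA libraries; the framework and the toy model are illustrative, not something the article specifies.

```python
# A minimal sketch (assuming PyTorch on top of CUDA) of one training step on a GPU.
# The tiny model here is a stand-in; real LLMs stack thousands of such layers.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

batch = torch.randn(32, 1024, device=device)    # fake training batch
target = torch.randn(32, 1024, device=device)   # fake targets

loss = nn.functional.mse_loss(model(batch), target)
loss.backward()        # gradients computed by CUDA kernels when device == "cuda"
optimizer.step()
print(f"one training step on {device}, loss = {loss.item():.4f}")
```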

AMD has compelling hardware but relies on an open-source software framework (ROCm). That approach may be a better fit for companies with in-house software development expertise, such as Microsoft, Amazon, or Google (covered by other Mizuho Americas equity analysts), which can customize their LLM systems.

Meeting networking and storage demands

Beyond GPU hardware and software, LLMs require sufficient storage capacity to capture and update massive amounts of data every day, along with connectivity to transport that data from storage to the GPUs in milliseconds. These technology demands are driving potential growth for memory and network connectivity OEMs. 
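
A quick bit of arithmetic shows why link speed matters so much here. The bandwidth figures below are assumptions chosen for illustration, not figures from this article.

```python
# Illustrative arithmetic (link speeds are assumptions, not from the article):
# time to move a chunk of training data over Ethernet links of different speeds.

def transfer_ms(data_gb: float, link_gbps: float) -> float:
    """Milliseconds to move data_gb gigabytes over a link of link_gbps gigabits/second."""
    return data_gb * 8 / link_gbps * 1000

batch_gb = 1.0  # hypothetical 1 GB shard of training data
for label, gbps in [("100 GbE", 100), ("400 GbE", 400), ("800 GbE", 800)]:
    print(f"{label}: ~{transfer_ms(batch_gb, gbps):.0f} ms per GB")

# Faster links cut per-gigabyte transfer time from ~80 ms toward ~10 ms, which is
# what keeps pools of GPUs from sitting idle while they wait for data.
```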

For example, Broadcom (AVGO) is strongly positioned on the networking side. The company already controls 80%-85% of the Ethernet switching market, and its new Tomahawk 5 and Jericho 3AI Ethernet switches offer the high-bandwidth capabilities needed for AI/LLM applications. We estimate AI infrastructure investment could generate approximately $7.5 billion for Broadcom in FY 2024, a 75% year-over-year increase. 

On the storage side, Micron (MU) is an industry leader in High Bandwidth Memory (HBM), a type of DRAM designed to deliver the speed necessary to enable parallel processing across multiple GPUs.
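
Memory bandwidth matters for the same reason network bandwidth does. The sketch below uses assumed round numbers, not vendor specifications, to show how much faster high-bandwidth memory can stream a model's weights than conventional DRAM.

```python
# Rough, illustrative comparison (bandwidth figures are assumed round numbers):
# time to stream a model's weights out of memory once.

def read_time_s(weights_gb: float, bandwidth_gb_s: float) -> float:
    """Seconds for one full pass over the weights at a given memory bandwidth."""
    return weights_gb / bandwidth_gb_s

weights_gb = 140  # e.g. a 70B-parameter model stored in 16-bit precision
for label, bw in [("conventional DRAM, ~50 GB/s assumed", 50),
                  ("high bandwidth memory, ~1,000 GB/s assumed", 1000)]:
    print(f"{label}: ~{read_time_s(weights_gb, bw):.2f} s per pass over the weights")

# Generating each token of output requires reading essentially all the weights,
# so memory bandwidth, not just raw compute, sets the pace for large models.
```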

A long runway

Just a few months after the release of ChatGPT, OpenAI launched GPT-4, but the wave of AI implementation is just getting started. An estimated 95% of LLMs are still in training, and new use cases will emerge as developers fine-tune those models. As these applications come online and adoption grows, the development of new processing and networking technologies will continue to accelerate.

For example, Compute Express Link (CXL) is one enticing technology that speeds up memory access and processing. CXL is an open, high-speed interconnect standard that can accelerate LLM development by enabling memory pooling across multiple storage environments and between CPUs and GPUs. CXL differs from competing approaches in offering low latency and cache coherence, and we estimate it can reduce server costs by 5%-10%.
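
The toy simulation below illustrates why pooling can cut costs; the demand numbers are random and the model is not Mizuho's cost estimate. Without pooling, every server must be provisioned for its own peak memory demand; with a CXL-style shared pool, the fleet only needs to cover the combined peak, which is usually smaller because individual peaks rarely line up.

```python
# Toy illustration (hypothetical demand, not Mizuho's cost model) of why
# pooled memory needs less total capacity than per-server provisioning.
import random

random.seed(0)
servers, hours = 16, 24
# Hypothetical hourly memory demand per server, in GB.
demand = [[random.randint(200, 512) for _ in range(hours)] for _ in range(servers)]

# Dedicated: size each server for its own worst hour.
dedicated = sum(max(per_server) for per_server in demand)
# Pooled: size one shared pool for the worst combined hour across all servers.
pooled = max(sum(demand[s][h] for s in range(servers)) for h in range(hours))

print(f"dedicated memory provisioned: {dedicated:,} GB")
print(f"pooled memory provisioned:    {pooled:,} GB")
print(f"reduction: {100 * (dedicated - pooled) / dedicated:.0f}%")
```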

Originally, CXL was expected to take another three to four years to reach the market, but with the LLM explosion, major semiconductor OEMs have all adopted CXL and are accelerating its development. It’s just one example of how everything in the realm of AI is now spinning faster. As a result, investors must understand the full picture of the AI hardware and software ecosystem and remain on top of technological developments to find opportunities in this emerging — and rapidly changing — market.   
