AI semiconductor revenue is expected to exceed $70 billion this year, as generative AI adoption fuels demand for high-performance chips.

The worldwide scramble to adopt and monetise generative artificial intelligence (AI) is accelerating an already bullish semiconductor market, according to new data gathered by Gartner. 

According to the company’s latest report, global AI semiconductor revenue will likely grow by 33% in 2024. By the end of the year, the market is expected to total $71 billion.

“Today, generative AI (GenAI) is fueling demand for high-performance AI chips in data centers. In 2024, the value of AI accelerators used in servers, which offload data processing from microprocessors, will total $21 billion, and increase to $33 billion by 2028,” said Alan Priestley, VP Analyst at Gartner.

Breaking down the spending across market segments, AI chip revenue from computer electronics is expected to total $33.4 billion in 2024, accounting for just under half (47%) of all AI semiconductor revenue. AI chip revenue from automotive electronics will probably reach $7.1 billion, and revenue from consumer electronics $1.8 billion, in 2024.

AI chips’ biggest year yet 

Semiconductor revenues for AI deployments will continue to experience double-digit growth through the forecast period, but 2024 is predicted to be the fastest year of revenue growth. Revenues will likely rise again in 2025 (to just under $92 billion), albeit at a slower rate.

Incidentally, Gartner’s analysts also note that the corporations currently dominating the AI semiconductor market can expect more competition in the near future. Increasingly, chipmakers like NVIDIA could face a more challenging market as major tech companies look to build their own chips.

Until now, focus has primarily been on high-performance graphics processing units (GPUs) for new AI workloads. However, major hyperscalers (including AWS, Google, Meta and Microsoft) are reportedly all working to develop their own chips optimised for AI. While this is an expensive process, hyperscalers clearly see the long-term cost savings as worth the effort. Using custom-designed chips has the potential to dramatically improve operational efficiency, reduce the cost of delivering AI-based services, and lower the cost for users to access new AI-based applications.

“As the market shifts from development to deployment, we expect to see this trend continue,” said Priestley.
