Artificial intelligence (AI) technology is soaring in popularity, from smart glasses that can record video to AI pins that act as wearable personal assistants and control various smart devices. Each month, AI applications are adopted more widely to streamline work.
Until recently, graphics processing units (GPUs) from prominent manufacturers such as Nvidia and AMD have been the primary workhorses for training general-purpose large language models (LLMs) such as those behind ChatGPT.
Despite their versatility, these general-purpose GPUs can fall short on performance and efficiency compared with hardware built for a specific workload.
Now, the era of custom Application-Specific Integrated Circuit (ASIC) chips, meticulously crafted to provide more efficient and tailored solutions for diverse AI applications, appears to have arrived. Cryptocurrency mining offers a prime example: Bitcoin miners, who once relied heavily on power-hungry central processing units (CPUs), now find that ASIC processors outpace both CPUs and GPUs, thanks to the custom chips’ greater computing throughput and lower electricity consumption.
Morgan Stanley projects that the market for ASIC chips dedicated to artificial intelligence will grow by roughly 85% a year between 2023 and 2027, reaching $30 billion.
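For a sense of scale, here is a rough back-of-envelope check of that projection. It assumes the 85% figure is a compound annual growth rate over the four years from 2023 to 2027, which the estimate does not state explicitly:

```python
# Rough sanity check of the Morgan Stanley projection quoted above.
# Assumption (not stated in the article): 85% is a compound annual
# growth rate applied over the four years from 2023 to 2027.

cagr = 0.85            # 85% growth per year
years = 4              # 2023 -> 2027
market_2027 = 30e9     # $30 billion projected for 2027

growth_factor = (1 + cagr) ** years           # total growth over four years
implied_2023 = market_2027 / growth_factor    # implied 2023 market size

print(f"Growth over {years} years: {growth_factor:.1f}x")               # ~11.7x
print(f"Implied 2023 market size: ${implied_2023 / 1e9:.2f} billion")   # ~$2.56 billion
```

Under that assumption, the projection implies a market of roughly $2.5 billion in 2023 expanding nearly twelvefold by 2027.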
Notably, tech giants are steering their attention towards ASICs as well. Google, in collaboration with Broadcom, is spearheading the development of its fifth-generation Tensor Processing Units (TPUs). These chips boast the ability to handle AI workloads approximately ten times faster than conventional CPUs and GPUs.
Application-Specific Integrated Circuits (ASICs) are custom electronic circuits designed for a particular purpose or application. In contrast to general-purpose integrated circuits (ICs), such as microprocessors and memory chips, which can be adapted to a wide range of devices, ASICs are developed for a single, specific task. This specialization enables ASICs to outperform general-purpose ICs in performance, power efficiency, and compactness.
ASICs achieve this efficiency because their circuit layouts and designs are optimized for their designated tasks. Engineers can also tailor an ASIC to minimize power consumption, a crucial consideration for battery-powered devices and other scenarios where power efficiency is paramount.
Moreover, the compact nature of ASICs results from their singular application focus. This can prove advantageous in space-constrained environments, such as smartphones and other mobile devices.
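To make the specialization trade-off concrete, here is a loose software analogy (illustrative only; real ASIC gains come from custom silicon, not code). A general-purpose path decides what to do at runtime, while a fixed-function path is hard-wired for a single operation, much as an ASIC is:

```python
# A loose software analogy for the ASIC-versus-general-purpose trade-off.
# Illustrative only: real ASIC gains come from custom circuitry, not Python.
import timeit

def general_purpose_op(a: float, b: float, operation: str) -> float:
    """General-purpose path: dispatches on the requested operation at
    runtime, the way a CPU or GPU executes arbitrary instruction streams."""
    if operation == "multiply_add":
        return a * b + a
    if operation == "subtract":
        return a - b
    raise ValueError(f"unsupported operation: {operation}")

def fixed_function_op(a: float, b: float) -> float:
    """'ASIC-like' path: hard-wired for one task, with no dispatch overhead."""
    return a * b + a

# The specialized path does the same work with less per-call overhead,
# a very rough software analogue of why a chip built for one task can
# beat a general-purpose chip on speed and power for that task.
general = timeit.timeit(lambda: general_purpose_op(3.0, 4.0, "multiply_add"), number=1_000_000)
fixed = timeit.timeit(lambda: fixed_function_op(3.0, 4.0), number=1_000_000)
print(f"general-purpose path: {general:.3f}s  fixed-function path: {fixed:.3f}s")
```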
ASICs find their niche in applications demanding a trifecta of high performance, low power consumption, and minimal size. Conversely, Field Programmable Gate Arrays (FPGAs) take the spotlight in scenarios valuing flexibility and rapid prototyping, while general-purpose ICs shine in applications where cost-effectiveness and ease of integration are paramount.
As generative AI gains momentum, chip developers are focusing on silicon that optimizes data centers. Amazon, in collaboration with Marvell, is spearheading the development of its Graviton chips, which are engineered to enhance processing speed and curtail power consumption. Additionally, Amazon is teaming up with Alchip Technologies on its Trainium and Inferentia lines, which aim to streamline AI model training and deliver accurate predictions.
Yan Taw Boon, the head of Thematic for Asia at investment management company Neuberger Berman, said: “In the realm of AI, we firmly believe that a one-size-fits-all approach is insufficient, and the pursuit of personalization holds promising opportunities for chipmakers.”
Simultaneously, Microsoft is partnering with Global UniChip and Marvell on the Maia 100 and Cobalt 100 chips, respectively. The Maia 100 chip prioritizes minimal power consumption during AI model training, while the Cobalt 100 chip focuses on boosting processing speed. In the automotive sector, Tesla has joined forces with Alchip to develop its Dojo AI supercomputing chip, designed explicitly to enable self-driving vehicles.