With the emergence of cloud computing and 5G connectivity, Moore’s Law is entering a new phase led by the development of high-performance computing chips.

The recent passing of Gordon Moore has reignited the debate over whether Moore’s Law is dead. The late Intel co-founder predicted in 1965 that the number of transistors in an integrated circuit would double approximately every two years, leading to exponential growth in computing power. This observation held true for several decades and has been a major driving force behind the flourishing semiconductor industry, encompassing the technological advances that enabled the PC era of 1990–2010 and the subsequent smartphone and mobile-app economy. However, as transistor miniaturization approaches its physical limits, it is becoming increasingly challenging to keep up with the pace of cost reduction and performance improvement projected by Moore’s Law. So, what’s next?
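To make the compounding concrete: doubling every two years implies ten doublings over two decades, roughly a thousand-fold increase. A back-of-the-envelope sketch (the starting count of 2,300 transistors, the scale of Intel's first microprocessor, is used purely for illustration):

```python
# Back-of-the-envelope sketch of Moore's Law compounding (illustrative only).
def projected_transistors(initial: int, years: float, doubling_period: float = 2.0) -> int:
    """Transistor count after `years`, assuming a doubling every `doubling_period` years."""
    return round(initial * 2 ** (years / doubling_period))

# Ten doublings over 20 years: roughly a 1,000x (2^10 = 1,024x) increase.
print(projected_transistors(initial=2_300, years=20))  # 2,355,200
```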

With the emergence of cloud computing and 5G connectivity, we believe Moore’s Law is entering a new phase led by high-performance computing (HPC). Faster, lower-latency wireless communications allow intensive computing workloads on battery-powered mobile devices to be shifted to cloud infrastructure, giving users access to high-performance computing chips, high-speed networking and massive storage resources. 5G technology also allows far more mobile devices to connect to the cloud simultaneously. As a result, rather than focusing solely on making chips ever smaller and more powerful to achieve higher computing capabilities, the industry’s efforts have shifted toward developing HPC chips for centralized cloud infrastructure.

It is no longer just about transistor miniaturization. The most advanced tech companies are focusing on 3D packaging of multiple computing and storage chips into an HPC system. As a key component of cloud infrastructure, HPC chips perform intensive computational tasks at high speed by dividing them into parallel workloads, such as training and optimizing deep-learning models for artificial intelligence (AI) applications. As the industry races to launch its own cutting-edge HPC systems, the boundary between semiconductor and cloud computing companies is blurring. For example, technology conglomerates such as Google, Amazon and Microsoft have been hiring chip designers to build their own HPC chips and enhance their cloud infrastructure, while companies like Nvidia, AMD and Marvell are no longer seen as just cyclical semiconductor companies but as integrated system companies, given their expertise in the chips and software platforms that enable next-generation cloud computing infrastructure. As demand for HPC and AI continues to grow, we believe Nvidia is well positioned to benefit, given its leading position as a provider of graphics processing units (GPUs) for HPC applications.

Nvidia

Co-founded in 1993 by Jensen Huang, Nvidia has emerged as a leader in Silicon Valley and is known for reinventing the GPU (graphics processing unit) market in 1999¹. It currently commands 80% of the global GPU market share², setting new standards in modern computing. With advanced hardware and software solutions, Nvidia has become the go-to choice for top-of-the-line computing and the development of cutting-edge applications across a range of industries.

Since establishing itself as a leading graphics chip provider for gaming, Nvidia has expanded into high-performance computing (HPC) and artificial intelligence (AI), where the same graphics processors are repurposed for different computational tasks. Among data center operators, preference has been growing for GPUs over CPUs because GPUs can handle large datasets and parallel computing workloads far more efficiently. Nvidia is well positioned to benefit from this growing demand for GPUs, but has gone a step further to provide complete systems and software solutions that can handle complex HPC workloads. For example, Nvidia’s CUDA software (a parallel computing platform and programming model) allows developers to dramatically speed up computing applications by harnessing the power of GPUs. The company has also developed its own high-speed interconnect (NVLink for chip-to-chip communication) and networking technology (via its acquisition of Mellanox).
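The parallelism CUDA exploits can be sketched in miniature: a "kernel" function is written for a single data element, and the runtime launches it across many elements at once rather than looping serially. The sketch below imitates that model with Python threads; this is a CPU-side stand-in only, and the function names are our own illustration, not CUDA APIs:

```python
# Illustrative sketch of the GPU "kernel" style: one function body per element,
# executed concurrently, instead of a serial loop. Names are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def saxpy_kernel(i: int, a: float, x: list, y: list, out: list) -> None:
    # In CUDA, each of thousands of GPU threads runs this body for one index i.
    out[i] = a * x[i] + y[i]

def saxpy(a: float, x: list, y: list) -> list:
    out = [0.0] * len(x)
    with ThreadPoolExecutor() as pool:
        # Launch one task per element, mimicking a data-parallel kernel launch.
        list(pool.map(lambda i: saxpy_kernel(i, a, x, y, out), range(len(x))))
    return out

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))  # [12.0, 24.0, 36.0]
```

SAXPY (a·x + y) is a common introductory example of this data-parallel pattern; on a GPU the same per-element structure scales to millions of elements at once.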

This complete set of systems and services allows Nvidia’s partners, such as Google, Microsoft, Oracle and other leading businesses, to bring new AI, simulation and supercomputing capabilities to every industry. Nvidia eyes a US$1 trillion addressable market³ and is targeting new growth opportunities in the emerging areas of autonomous driving and the metaverse. Looking ahead, it is clear that HPC and AI will extend into many areas of our lives, and Nvidia is likely to be a key enabler as it continues to put its ecosystem of advanced tools and solutions to work, remaining at the forefront of this exciting and rapidly evolving field.