NVIDIA First Look: How They Operate

Estimated reading time: 7 minutes

Just when I think "There's no way they beat last quarter" after their revenue jumped from $7 billion to $13 billion, NVIDIA blows out another quarter with over $18 billion in revenue. Just unreal.

After being proven so wrong, I want to know what's going on and what makes them special. In the hunt for quality, it pays to understand a company, even one that might be extremely overvalued right now. (As of today NVIDIA has a P/E ratio of 55. Even for the best company in the world this is too much.)
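One way to make a P/E of 55 concrete is to invert it into an earnings yield, the annual earnings you get per dollar paid. A minimal sketch:

```python
# A P/E ratio inverted is the earnings yield: annual earnings per dollar paid.
pe_ratio = 55
earnings_yield = 1 / pe_ratio
print(f"{earnings_yield:.1%}")  # about 1.8% per year at today's earnings
```

At today's earnings, every dollar paid for NVDA buys roughly 1.8 cents of annual profit; anything above that must come from growth.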

So let's dive in, take a look, and see what we can learn! For this post I'll look at business segments, fundamental data, and the latest earnings call. Let's go!

NVIDIA Business Segments

The hardest part about NVIDIA is understanding exactly what they do and how they make money. Here's their official "about" blurb:

NVIDIA Corporation provides graphics, and compute and networking solutions in the United States, Taiwan, China, and internationally. The company's Graphics segment offers GeForce GPUs for gaming and PCs, the GeForce NOW game streaming service and related infrastructure, and solutions for gaming platforms; Quadro/NVIDIA RTX GPUs for enterprise workstation graphics; vGPU software for cloud-based visual and virtual computing; automotive platforms for infotainment systems; and Omniverse software for building 3D designs and virtual worlds. Its Compute & Networking segment provides Data Center platforms and systems for AI, HPC, and accelerated computing; Mellanox networking and interconnect solutions; automotive AI Cockpit, autonomous driving development agreements, and autonomous vehicle solutions; cryptocurrency mining processors; Jetson for robotics and other embedded platforms; and NVIDIA AI Enterprise and other software. The company's products are used in gaming, professional visualization, datacenter, and automotive markets. NVIDIA Corporation sells its products to original equipment manufacturers, original device manufacturers, system builders, add-in board manufacturers, retailers/distributors, independent software vendors, Internet and cloud service providers, automotive manufacturers and tier-1 automotive suppliers, mapping companies, start-ups, and other ecosystem participants. It has a strategic collaboration with Kroger Co. NVIDIA Corporation was incorporated in 1993 and is headquartered in Santa Clara, California.

Here is annual NVIDIA business segment revenue.

Notice how in 2020 they transitioned from reporting GPU and Tegra Processor segments to Graphics and Compute & Networking. More on this later.

Let's dive into fundamentals.

NVIDIA Fundamentals and Valuation

They're large, they're well run, and they know how to operate. These next slides show just that.

NVIDIA absolutely crushed the growth game recently:

A lot of that revenue is being retained as free cash flow (yay!):

Management is reinvesting into the business wisely (also yay!):

Note: the lowest ROIC since 2018 was over 12%, which is outstanding. The latest reading of 45% is sky-high, but there's no way it can last. Companies can't grow to the sky forever.

Management is maintaining an attractive share count:

NVIDIA is priced extremely expensively, and deservedly so. They've crushed it lately.

"Latest" EV/FCF as of 2023-11-28.

They're big, expensive, crushing it quarter after quarter, and at the top of their game. Even without digging deeper I can see the trap: it's common for investors to overpay for quality, and this is exactly that case. Investors are jumping in. While I'd like to join them, the last graph on EV/FCF says not yet. This is still extremely expensive. NVIDIA today is arguably the same company it was in 2019. All it takes is a shift in market perception to re-value NVIDIA to an EV/FCF of ~25. That means perception is the only thing standing between NVIDIA and a 50% haircut. That isn't the boat I want to be in.
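The multiple-compression math can be sketched directly. The 50x starting multiple here is an assumption for illustration, not the exact reading from the chart:

```python
def haircut(current_multiple, rerated_multiple):
    """Fractional fall in enterprise value if free cash flow stays flat
    and only the EV/FCF multiple the market pays changes."""
    return 1 - rerated_multiple / current_multiple

# Assuming roughly 50x EV/FCF today, a re-rating to ~25x halves the price:
print(f"{haircut(50, 25):.0%}")  # 50%
```

Nothing about the business has to deteriorate for this to happen; the market only has to change its mind about what a dollar of NVIDIA's free cash flow is worth.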

With that out of the way let's dig into the latest earnings call.

Earnings Call Highlights

Most earnings calls, especially from large companies, are rife with lip service and a choreographed dance with chosen "analysts" to 1) make the company look good, and 2) make the analyst look good. It's a symbiotic relationship that benefits those two parties, but rarely the independent investor. It's increasingly rare for an analyst to ask the pressing questions that truth-seekers like me want answered, the kind that get at the boiled-down essence of what makes a company a worthwhile investment.

Still, we must dive in and try to see the forest for the trees while the lumberjacks throw sawdust in our faces.

This section of the post is quite text-heavy, but it's fascinating to see how NVIDIA operates. It's well worth the read, especially amid today's AI explosion.

Colette M. Kress, CFO: Some of the most exciting generative AI applications are built and run on NVIDIA, including Adobe Firefly, ChatGPT, Microsoft 365 Copilot, CoAssist, Now Assist with ServiceNow and Zoom AI Companion. Our data center compute revenue quadrupled from last year and networking revenue nearly tripled. Investment in infrastructure for training and inferencing large language models, deep learning recommender systems and generative AI applications is fueling strong broad-based demand for NVIDIA accelerated computing. Inferencing is now a major workload for NVIDIA AI computing.

It's pretty cool to see just how widely NVIDIA products are embedded in other large companies.

Kress goes on to explain how this is just the beginning:

Kress: Most major consumer Internet companies are racing to ramp up generative AI deployment. The enterprise wave of AI adoption is now beginning. Enterprise software companies such as Adobe, Databricks, Snowflake and ServiceNow are adding AI copilots and assistants to their platforms. And broader enterprises are developing custom AI for vertical industry applications such as Tesla and autonomous driving. Cloud service providers drove roughly the other half of our data center revenue in the quarter. 

But there's an economic danger here between China, the US, and NVIDIA:

Kress: Toward the end of the quarter, the U.S. government announced a new set of export control regulations for China and other markets, including Vietnam and certain countries in the Middle East. These regulations require licenses for the export of a number of our products including our Hopper and Ampere 100 and 800 series and several others. Our sales to China and other affected destinations derived from products that are now subject to licensing requirements have consistently contributed approximately 20% to 25% of data center revenue over the past few quarters. We expect that our sales to these destinations will decline significantly in the fourth quarter, though we believe they'll be more than offset by strong growth in other regions. The U.S. government designed the regulation to allow the U.S. industry to provide data center compute products to markets worldwide, including China, continuing to compete worldwide as the regulations encourage, promote U.S. technology leadership, spurs economic growth and support U.S. jobs. For the highest performance levels, the government requires licenses. For lower performance levels, the government requires a streamlined prior notification process. And for products even lower performance levels, the government does not require any notice at all. Following the government's clear guidelines, we are working to expand our data center product portfolio to offer compliant solutions for each regulatory category, including products for which the U.S. government does not wish to have advanced notice before each shipment. We are working with some customers in China and the Middle East to pursue licenses from the U.S. government. It is too early to know whether these will be granted for any significant amount of revenue. Many countries are awakening to the need to invest in sovereign AI infrastructure to support economic growth and industrial innovation. 
With investments in domestic compute capacity, nations can use their own data to train LLMs and support their local generative AI ecosystems.

Lesson: over-reliance on revenue from China or other countries that don't historically play nice with the USA is unstable, no matter how much that revenue has grown in recent years and quarters.

Kress goes on to explain how LLMs are growing exponentially (bold added by me):

Kress: At last week's Microsoft Ignite, we deepened and expanded our collaboration with Microsoft across the entire stack. We introduced an AI foundry service for the development and tuning of custom generative AI enterprise applications running on Azure. Customers can bring their domain knowledge and proprietary data and we help them build their AI models using our AI expertise and software stack in our DGX Cloud, all with enterprise-grade security and support. SAP and Amdocs are the first customers of the NVIDIA AI foundry service on Microsoft Azure. In addition, Microsoft will launch new confidential computing instances based on the H100. The H100 remains the top-performing and most versatile platform for AI training and by a wide margin, as shown in the latest MLPerf industry benchmark results. Our training cluster included more than 10,000 H100 GPUs or 3x more than in June, reflecting very efficient scaling. Efficient scaling is a key requirement in generative AI because LLMs are growing by an order of magnitude every year. Microsoft Azure achieved similar results on the nearly identical cluster, demonstrating the efficiency of NVIDIA AI in public cloud deployments.

This is one point that slightly justifies the high valuation. If NVIDIA is looking to operate in an industry that is having an "iPhone moment" then who knows just how high revenues will go in the next 5 years. The more I read about recent developments, the more I believe it. It's imperative that we not jump quickly here.

Let's keep exploring.

Kress: Networking now exceeds a $10 billion annualized revenue run rate. Strong growth was driven by exceptional demand for InfiniBand, which grew fivefold year-on-year. InfiniBand is critical to gaining the scale and performance needed for training LLMs. Microsoft made this very point last week highlighting that Azure uses over 29,000 miles of InfiniBand cabling, enough to circle the globe. We are expanding NVIDIA networking into the Ethernet space. Our new Spectrum-X end-to-end Ethernet offering with technologies, purpose-built for AI will be available in Q1 next year with support from leading OEMs, including Dell, HPE and Lenovo.
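Quick sanity check on the "circle the globe" claim, using Earth's equatorial circumference of roughly 24,901 miles:

```python
EARTH_CIRCUMFERENCE_MILES = 24_901   # equatorial circumference
azure_infiniband_miles = 29_000      # figure cited by Microsoft
trips = azure_infiniband_miles / EARTH_CIRCUMFERENCE_MILES
print(f"{trips:.2f} trips around the equator")  # 1.16 trips
```

The claim checks out, with about 4,000 miles of cable to spare.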

Here is a little more info about InfiniBand:

Source: NVIDIA Quantum-2 InfiniBand Architecture

400 gigabits per second is amazing. There's a lot of tech-speak here that I don't understand, but that's OK (for now). InfiniBand clearly matters if Azure is using 29,000 miles of it.
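To make 400 gigabits per second concrete, convert it to bytes and time a large transfer. The 1 TB payload below is just an example, not anything from the call:

```python
link_gbps = 400                       # Quantum-2 InfiniBand per-port speed
gigabytes_per_second = link_gbps / 8  # 8 bits per byte
payload_gb = 1_000                    # example: a 1 TB dataset
seconds = payload_gb / gigabytes_per_second
print(f"{gigabytes_per_second:.0f} GB/s; 1 TB in {seconds:.0f} s")  # 50 GB/s; 1 TB in 20 s
```

That kind of per-link throughput is why InfiniBand matters for training: a GPU cluster is only as fast as its slowest data path.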

Jensen Huang, President and CEO: Generative AI is the largest TAM expansion of software and hardware that we've seen in several decades. The -- at the core of it, what's really exciting is that what was largely a retrieval-based computing approach, almost everything that you do is retrieved off of storage somewhere, has been augmented now, added with a generative method, and it's changed almost everything. You could see that text to text, text to image, text to video, text to 3D, text to protein, text to chemicals, these are things that were processed and typed in by humans in the past, and these are now generative approaches. The way that we access data is changed. It used to be based on explicit queries. It is now based on natural language queries, intention queries, semantic queries.

Pretty cool that AI is so widely used and has fundamentally changed how we interact with technology. "Semantic queries" is especially interesting.

Here's the last quote from the recent earnings call that stood out to me.

Harlan L. Sur, Executive Director and Head of U.S. Semiconductor & Semiconductor Capital Equipment at JPMorgan Chase & Co, Research Division: If you look at the history of the tech industry, right, those companies that have been successful have always been focused on ecosystem, silicon, hardware, software, strong partnerships and just as importantly, right, an aggressive cadence of new products, more segmentation over time. The team recently announced a more aggressive new product cadence in data center from 2 years to now every year with higher levels of segmentation, training, optimization, inferencing, CPU, GPU, DPU networking. How do we think about your R&D OpEx growth that looks to support a more aggressive and expanding forward road map? But more importantly, what is the team doing to manage and drive execution through all of this complexity?

Huang: Gosh, boy, that's just really excellent. You just wrote NVIDIA's business plan. And you described our strategy. First of all, there is a fundamental reason why we accelerate our execution. And the reason for that is because it fundamentally drives down cost. When -- the combination of TensorRT LLM and H200 reduced the cost for our customers for large model inference by a factor of 4, and so that includes, of course, our speeds and feeds, but mostly, it's because of our software. Mostly, the software benefits because of the architecture. And so, we want to accelerate our road map for that reason. The second reason is to expand the reach of generative AI. The world's number of data center configurations, this is kind of the amazing thing. NVIDIA is in every cloud, but not one cloud is the same. NVIDIA is working with every single cloud service provider and not one of their networking control plane security posture is the same. Everybody's platform is different, and yet we're integrated into all of their stacks, all of their data centers, and we work incredibly well with all of them. And not to mention, we then take the whole thing, and we create AI factories that are stand-alone. We take our platform. We can put them into supercomputers. We can put them into enterprise. Bringing AI to enterprise is something -- generative AI to enterprise is something nobody has ever done before. And we're right now in the process of going to market with all of that. And so the complexity includes, of course, all the technologies and segments and the pace. It includes the fact that we are architecturally compatible across every single one of those. It includes all of the domain-specific libraries that we create. The reason why you -- every computer company without thinking can integrate NVIDIA into their road map and take it to market and the reason for that is because there's market demand for it. There's market demand in health care. There's market demand in manufacturing. 
There's market demand, and of course, in AI, in financial services and supercomputing and quantum computing. The list of markets and segments that we have domain-specific libraries is incredibly broad. And then finally, now we have an end-to-end solution for data centers. InfiniBand network -- InfiniBand networking, Ethernet networking, x86, ARM, just about every permutation combination of solutions, technology solutions and software stacks provided. And that translates to having the largest number of ecosystem software developers, the largest ecosystem of system makers, the largest and broadest distribution partnership network, and ultimately, the greatest reach. And that takes -- surely, that takes a lot of energy. But the thing that really holds it together, and this is a great decision that we made decades ago, which is everything is architecturally compatible. When you -- when we develop a domain-specific language that runs on one GPU, it runs on every GPU. When we optimize TensorRT for the cloud, we optimized it for enterprise. When we do something that brings in a new feature, a new library, a new feature or a new developer, they instantly get the benefit of all of our reach. And so that discipline, that architecture compatible discipline that has lasted more than a couple of decades now is one of the reasons why NVIDIA is still really, really efficient. I mean we're 28,000 people large and serving just about every single company, every single industry, every single market around the world.

This is all great, but leads me to ask "Then what are the other semiconductor companies talking about?" Aren't AMD and Intel direct competitors?

Huang makes great points, and it's invigorating, but it's too pie-in-the-sky for me. I want to know what the next big risk is for NVIDIA, and none of the analysts asked that hard question. In NVIDIA's recent filings the risks are listed generically; there's little to no insight there.

I'd like to know what makes NVIDIA better/different than all the other semiconductor companies.

Conclusion


Disclaimer: The information provided on this website is for general informational purposes only and should not be considered investment advice. Please read our full disclaimer for more information.

Responses

  1. “As of today NVIDIA has a P/E ratio of 55. Even for the best company in the world this is too much”

    This is an interesting comment to make, especially in an article about a company where the “right” P/E ratio at various points over the past five years on (then) current earnings was probably much higher than 55x. I can understand what you’re saying as a rule of thumb, but worth thinking about scenarios where it may or may not be appropriate.

    – Alex

    1. That’s a very good point. A company with a P/E >=55 is expected to increase earnings, or “grow into it.” There are certainly times when such a valuation works, especially when looking at the history of NVDA and how they’ve grown into those historical valuations.

      With my current imperfect knowledge of NVIDIA I am personally not comfortable with a P/E of ~55 beyond a very small starter position.

      Off the top of my head I cannot think of any good point to buy a company at a P/E of 55 unless I can see the future. Then again, I think I have 1/20 of your experience. Do you think there's a time when buying/holding at 55 is right or wise?

      1. All fair, particularly as it relates to “imperfect knowledge” and the use of a rule of thumb. On the Q, not to be flippant, but the answer is when current earnings do not fairly represent what the true earnings power of the business will be in the future. Even if I have 20x your experience (using your number!), there are very few instances where I would be comfortable paying that price – but it isn’t zero.

        1. I like it. I also wouldn’t call that flippant, but succinct, almost like you thought a lot about this or something.

          “Do current earnings fairly represent the true earnings power of the business in the future?” points me to answer the latter half of the question for myself… and hopefully subscribers.

          Today was a good start.
