Looking back, he can connect the dots and see how he arrived at the forefront of artificial intelligence and machine learning. His journey into computing began with a passion for science fiction, fantasy, and video games.
"I read science fiction and fantasy copiously as a kid, but I also loved video games because they could bring that imagery to life," David Kanter told The Forecast.
This curiosity led him to explore what computers are good for, diving deep into the economics of semiconductors for his University of Chicago thesis. In his 20s, he became a technology reviewer and then an analyst through the personal computer revolution and the digital transformation that followed the internet boom. Today, he stands at ground zero of the explosion in AI innovation, as co-founder of MLCommons, a collaborative engineering consortium dedicated to accelerating machine learning innovation and accessibility, primarily known for developing the MLPerf benchmarks.
In this Tech Barometer podcast, David Kanter talks about his inspiration and insights from working with industry leaders to measure hardware performance for AI and machine learning workloads. He describes how the AI/ML environment is evolving, the implications of power-hungry AI applications, and the kind of thinking that will be required to foster AI development in the coming decades.
Transcript:
David Kanter: The current era of machine learning is very much informed by the supercomputing community. I think the seminal moment for this era of machine learning was AlexNet and ImageNet, which demonstrated that if you took enough compute power, you could train a machine learning model, a neural network, that outperforms humans at image recognition. And that was 2012, and that kind of launched the modern era.
Jason Lopez: The voice of David Kanter, co-founder of MLCommons, the organization which developed the benchmarks to measure the performance of machine learning and artificial intelligence hardware, software, and computer systems. This is the Tech Barometer Podcast. I'm Jason Lopez. For the past two decades, computer scientists have been working on the performance of machine learning models. Kanter and his colleagues have explored how computer systems across a range of functions can keep up with AI training. Their benchmark suite, MLPerf Storage, reveals the performance of storage systems when running AI.
David Kanter: Debo Dutta and I had sort of brainstormed about this.
Jason Lopez: Debo Dutta is the chief AI officer of Nutanix and a co-founder of MLCommons.
David Kanter: He said, you know, we've got this wonderful professor at McGill that we've worked with. She was at Nutanix, I think, for a period of time and runs a wonderful lab up at McGill. And so we started some of the brainstorming. And from there, the idea emerged that, hey, we should look at what is the performance that you need out of a storage system to support AI training. That's really the goal of MLPerf Storage: answering that question.
Jason Lopez: Measuring performance is complex, especially at different scales. Machine learning workloads vary significantly, and they work on a variety of applications and use different levels of computational power.
David Kanter: If you've got a system that has, you know, 65 of this generation of accelerators, five nodes of Nutanix may be right for you. Or maybe you're looking to build a cluster that's a bit more expandable, and maybe you should get 10. But that's ultimately what the benchmark is telling us. And we've got three different workloads in the benchmark: 3D images, 2D images, and a scientific workload.
Jason Lopez: Machine learning operates across a big range, from small, energy-efficient models to very large systems.
David Kanter: One of the things I like to say is it spans from microwatts to megawatts, because some of the largest MLPerf submissions on the compute side are for MLPerf Training and MLPerf HPC. We had one submission on Fugaku, at the time, I think, the world's number one supercomputer. And then we've had systems with 10,000, 11,000 accelerators. Those are some of the largest supercomputers out there.
Jason Lopez: One of the most exciting applications of AI is in scientific research, where machine learning can enhance simulations to make them more efficient.
David Kanter: You can do an exact simulation, but that's very computationally expensive.
Jason Lopez: An exact simulation means, for example, in the case of charting the transformation of a molecule, the system expends the time and energy to go through an exhaustive analysis of how atoms and molecules interact and how they bond and change over time.
David Kanter: What happens if, instead of doing pure simulation, we use machine learning to do some prediction that might be a lot more affordable computationally than an exact simulation? So one of the ways that people are looking at AI for science is training the machine learning model on these exact simulations so that it can approximate them and make very efficient predictions. And then you can use the exact simulation only on the ones that you already believe to be safe.
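The workflow Kanter describes, training a cheap surrogate model on exact simulation results and reserving the expensive simulation for the most interesting candidates, can be sketched in a few lines of Python. This is a toy illustration only: the "exact simulation" here is a stand-in analytic function, and the polynomial surrogate is a deliberately simple choice, not anything MLPerf or MLCommons prescribes.

```python
import numpy as np

def exact_simulation(x):
    # Hypothetical stand-in for an expensive physics simulation:
    # here, just a cheap analytic "energy" curve.
    return np.sin(3 * x) + 0.5 * x**2

# 1. Run the exact simulation on a small set of training points.
train_x = np.linspace(-2.0, 2.0, 40)
train_y = exact_simulation(train_x)

# 2. Fit a cheap surrogate (polynomial regression) to those results.
coeffs = np.polyfit(train_x, train_y, deg=9)
surrogate = np.poly1d(coeffs)

# 3. Screen a large pool of candidates cheaply with the surrogate...
candidates = np.linspace(-2.0, 2.0, 1000)
predicted = surrogate(candidates)

# 4. ...and re-run the exact simulation only on the top few candidates
#    (here, the five with the lowest predicted energy).
top = candidates[np.argsort(predicted)[:5]]
verified = exact_simulation(top)
```

The economics are the point: step 3 costs a polynomial evaluation per candidate, while step 4 pays the full simulation price only five times instead of a thousand.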
Jason Lopez: Machine learning, by extension AI, isn't just for research institutions anymore. Its tools and infrastructure are becoming more mainstream, with applications extending far beyond traditional supercomputing environments.
David Kanter: AI is taking a lot of those same tools and infrastructure, including the storage, mainstream, and operating them in a different way. Now you have all the leaders in AI and machine learning with those same needs for totally different data. We're taking the storage that you needed for supercomputers, and now there's a much broader pool of people who need it for a wide variety of applications.
Jason Lopez: One field where this shift is already visible is in autonomous driving, where AI must analyze complex data in real time.
David Kanter: The data needs there are super complicated. You've got 2D images, you've got 3D volumes, you've got LiDAR that gives you depth, you've got radar. And so you have to combine all of this together and analyze it so that we can get cars that will help inform me as a driver: "Hey David, you've got to stay in the lane," or "This deer is coming out of nowhere."
Jason Lopez: As the field of AI grows, so do the options for access. Companies must decide whether to buy or lease computing power, a challenge that Kanter and his team considered while designing MLPerf benchmarks.
David Kanter: There's a lot of folks who are going to be doing that in the cloud who may not want to buy, they may want to lease, or it may be a combination of the two. And just the variety of approaches there is really astounding. And that was all something we had to take into account when we built the benchmark to be as inclusive as possible. I read science fiction and fantasy copiously as a kid, but I also loved video games because they could bring that imagery to life. So that got me interested in, well, what computers are good at running video games and why? And those were questions I started to ask myself when I was in middle school and high school. That ultimately led me down the path into computing.
Jason Lopez: In some ways, Kanter's work with MLPerf is just an extension of what he's been doing since he was a kid. His father was a doctor, and he says he was raised in an atmosphere of inquiry.
David Kanter: I didn't learn until later in life that actually most people don't talk about medical cases over the dinner table. That was one of the ways in which I became maybe a little bit better rounded in college.
Jason Lopez: His passion for technology carried him through the dot-com boom and into a career where he helps define the future of computing.
David Kanter: In the late 90s, I got really interested in the internet and was starting to read some of the early websites on computers and reviews. AnandTech, Tom's Hardware Guide, a lot of the early review sites I read when I was in high school. And then when I was in college, I ended up getting involved with Real World Tech, which was actually kind of the granddaddy of some of these, and still is a website for computer engineers, not the general public, focused on understanding computer technology. And so I started writing there while I was in college, actually, and ended up running the website when I moved out to Silicon Valley in 2005. And that's when I first came into contact with Intel, Itanium, and the transition to 300-millimeter wafers and all sorts of marvels that we bring to life.
Jason Lopez: Today, as AI continues to evolve, the challenge for Kanter remains the same: how to define progress.
David Kanter: Part of the goal is getting the whole industry oriented around what does it mean to be better? And part of that is helping customers understand what they should be buying. And which, again, coming back full circle, that's why I was reading review websites in middle school and high school, because I was trying to figure out what computer to buy.
Jason Lopez: From figuring out what computer to buy, Kanter is now helping people figure out how best to implement enterprise AI to run organizations. The work he's doing for MLCommons is big. Many challenges lie ahead, like the power consumption of inference and training systems. Some of their new work involves measuring AI risk, reliability, and safety.
David Kanter: I'm thrilled that we delivered power measurement for inference a couple of years ago, and then training, which is a bit more complicated because those are bigger systems. We were able to get the first power measurements for AI training systems late last year. And we've got, hopefully, more coming soon with MLPerf Training.
Jason Lopez: David Kanter is the co-founder of MLCommons, a consortium that develops benchmarks such as MLPerf Storage for evaluating machine learning performance across hardware, software, and cloud platforms. You can find them at mlcommons.org. In our ongoing series on AI leaders, listen to our follow-up segment where David Kanter discusses how they're creating benchmarks for data storage. This is the Tech Barometer podcast. I'm Jason Lopez. Thanks for listening. Tech Barometer is a production of The Forecast. Find us at theforecastbynutanix.com.
Jason Lopez is executive producer of Tech Barometer, the podcast outlet for The Forecast. He’s the founder of Connected Social Media. Previously, he was executive producer at PodTech and a reporter at NPR.
Ken Kaplan contributed to this podcast.
© 2025 Nutanix, Inc. All rights reserved. For additional information and important legal disclaimers, please go here.