The Network Behind the Revolution: How Cornelis is Powering the Next Wave of AI Innovation
Executive Spotlight on Lisa Spelman, CEO of Cornelis Networks
What if the most complex and data-intensive problems, like predicting climate change, decoding genomes, or designing the next generation of autonomous vehicles, could be solved faster, more efficiently, and with greater precision? From predicting the next pandemic to advancing space exploration, these monumental challenges are powered by high-performance computing (HPC) and artificial intelligence (AI). Yet despite rapid advances in these fields, the infrastructure required to support them often remains a bottleneck.
The problem? Inefficiency in the network that connects all the compute power: the central processing units (CPUs), graphics processing units (GPUs), and accelerators used to run these workloads. As data moves across systems, much of that computing power sits idle, waiting for information to arrive. This delay wastes valuable processing power, energy, and time, hindering the potential for groundbreaking discoveries and innovations.
Enter Cornelis Networks. An end-to-end network solution provider, Cornelis is revolutionizing the way AI and HPC workloads are powered. Specializing in delivering high-performance, scalable networks, Cornelis is addressing the critical need for faster, more efficient data movement. By optimizing the network infrastructure that supports AI and HPC environments, Cornelis is enabling organizations to overcome the limitations of traditional systems and unlock the true potential of their compute assets. Ridgeline first backed the company in 2021, as it was spinning out of Intel, supporting its mission to advance AI and high-performance computing to new heights.
At the helm of this exciting transformation is Lisa Spelman, a visionary leader who recently joined Cornelis as CEO. With over 20 years of experience at Intel, where she spearheaded major advancements in data center and AI technology, Lisa brings unmatched expertise and a forward-thinking approach to Cornelis. Under her leadership, the company is poised to drive significant growth and play a pivotal role in the future of AI and high-performance computing.
In this Executive Spotlight, we take a closer look at Lisa’s leadership journey, her vision for the future of AI infrastructure, and how Cornelis is tackling the world’s most pressing challenges with cutting-edge network technology.
(Note: this interview has been lightly edited for readability.)
What does Cornelis Networks do, and why is it important for the future of AI and high-performance computing?
LS: Cornelis Networks is a company that is absolutely focused on solving the world's AI efficiency challenge. This is a little-known fact, but when you hear about all the AI training and use cases happening in the world, the compute behind AI is actually only being utilized about 50% of the time—sometimes even less, especially with the really big models. That's because data is spending time in the network. While the data is in the network, the GPU, or whatever you're using for your compute, is sitting there idle, waiting to be fed. This leads to wasted power and inefficiency, and it means one of your most expensive and strategic assets for driving your business is fundamentally not reaching its full potential. Here’s where Cornelis Networks comes in. We focus on driving more data to your compute assets faster, so you can reduce the time spent in the network. By gaining that time back as computing output, you can advance the next frontier of AI and scientific discovery. We're really excited about contributing to these advancements in such a positive way.
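(Editor's note: to make the scale of that idle time concrete, here is a back-of-the-envelope sketch. The cluster size, power draw, and cost figures are illustrative assumptions, not Cornelis data; only the roughly 50% utilization figure comes from the interview.)

```python
# Illustrative estimate of compute wasted while GPUs wait on the network.
# All inputs are hypothetical assumptions except the ~50% utilization.

num_gpus = 10_000          # assumed cluster size
utilization = 0.50         # useful-work fraction cited in the interview
hours_per_year = 24 * 365
gpu_power_kw = 0.7         # assumed average draw per GPU (kW)
cost_per_gpu_hour = 2.00   # assumed effective cost per GPU-hour (USD)

idle_gpu_hours = num_gpus * hours_per_year * (1 - utilization)
wasted_energy_mwh = idle_gpu_hours * gpu_power_kw / 1_000
wasted_dollars = idle_gpu_hours * cost_per_gpu_hour

print(f"Idle GPU-hours per year:  {idle_gpu_hours:,.0f}")
print(f"Energy spent while idle:  {wasted_energy_mwh:,.0f} MWh")
print(f"Effective cost of idling: ${wasted_dollars:,.0f}")
```

Under these assumptions, a 10,000-GPU cluster at 50% utilization burns tens of millions of idle GPU-hours a year, which is the waste Spelman is describing.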
How does Cornelis solve the efficiency challenges in large-scale AI and HPC environments?
LS: The way we do this is by providing an end-to-end high-performance network. When I say end-to-end, I mean we have the adapter or the NIC portion of the network, the switch portion, and the system and software that tie it all together. It’s a complete solution. By building this end-to-end network, we have the opportunity to innovate and create advanced features. We focus on two of the biggest problems. First, data loss: we are a lossless network, which means we keep all your data intact, no matter how much is being sent. Second, congestion: think of congestion like traffic on a freeway. Over the last decade, we’ve built advanced algorithms to move traffic in the most efficient way between GPUs and other compute agents. We say we’re a lossless and congestion-free end-to-end network: that’s our “secret sauce” that keeps data moving faster and more efficiently.
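(Editor's note: lossless fabrics are commonly built on link-level, credit-based flow control, where a sender transmits only while the receiver has advertised free buffer space, so packets are never dropped under load. The toy simulation below illustrates that general mechanism; it is not a description of Cornelis's actual algorithms.)

```python
# Toy model of link-level, credit-based flow control. The sender may only
# transmit while it holds credits, so the receiver's buffer can never
# overflow and no packet is ever dropped. Generic illustration only.

from collections import deque

BUFFER_SLOTS = 4          # receiver buffer capacity, in packets

credits = BUFFER_SLOTS    # sender starts with one credit per buffer slot
rx_buffer = deque()
sent = delivered = 0

for step in range(20):
    # Sender side: transmit only if a credit is available.
    if credits > 0:
        credits -= 1
        rx_buffer.append(sent)
        sent += 1
    # Receiver side: a slow consumer drains one packet every other step
    # and returns a credit as buffer space frees up.
    if step % 2 == 1 and rx_buffer:
        rx_buffer.popleft()
        delivered += 1
        credits += 1
    # Losslessness invariant: the buffer is never overrun.
    assert len(rx_buffer) <= BUFFER_SLOTS

print(f"sent={sent}, delivered={delivered}, dropped=0")
```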
“What I learned at Intel, and what’s helping me at Cornelis, is how to scale technology and make it available to solve customer problems. It’s not enough to invent a really cool architecture—you also have to build it in a form factor and capability that can be adopted and deployed.”
What options do your customers have in scale-out architectures, and what differentiates Omni-Path as an architecture option?
LS: One of the challenges our customers face in scale-out architectures is that while there are existing solutions available, those were not originally designed for this problem. Scale-out, especially for AI computing, presents two major challenges. One is the massive scale—how do you connect 100,000 GPUs? Or 500,000 GPUs? There are even discussions about getting a million GPUs connected. That’s a huge physical, technological, and logical challenge. The second challenge is highly parallel computing, meaning all compute assets must work on the same problem and share data. No single server or GPU can hold all that data—it’s just too much. That’s where the network comes in to make sure all compute elements are using the same data. Existing architectures weren’t designed for these challenges, which is why we have a market opportunity. As a startup with a team of about 200 experts in networking, we’re focused on building solutions in silicon and software that are designed to address these problems, with losslessness and scalability built into the architecture.
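(Editor's note: to give a sense of the physical scale involved, here is standard fat-tree topology math, used as a generic illustration rather than a description of Omni-Path's actual topology. A non-blocking three-tier fat tree built from radix-k switches can connect at most k³/4 endpoints, so hundreds of thousands of GPUs demand very high-radix switches, additional tiers, or both.)

```python
# Generic fat-tree scaling math (illustrative; not Omni-Path specifics).
# A non-blocking three-tier fat tree of radix-k switches can connect
# at most k**3 / 4 end hosts.

def max_hosts_3tier(radix: int) -> int:
    """Maximum endpoints in a non-blocking three-tier fat tree."""
    return radix ** 3 // 4

for radix in (48, 64, 128):
    print(f"radix {radix:>3}: up to {max_hosts_3tier(radix):>9,} endpoints")

# Output:
#   radix  48: up to    27,648 endpoints
#   radix  64: up to    65,536 endpoints
#   radix 128: up to   524,288 endpoints
#
# Even radix-128 switches top out near half a million endpoints at three
# tiers; a million-GPU cluster needs a higher radix or a fourth tier.
```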
So it’s exciting to hear about the CN5000 family. Can you tell us what sets this new product apart and why you believe it’ll be a leading product option in AI and HPC environments?
LS: Our next product is the CN5000, which is an end-to-end network solution—again, with the adapter and switch—and it's a 400-gig product. It’s really targeted at enterprise AI and high-performance computing environments. We already have a great slate of customers who are eager to get their hands on these capabilities. What they look for, beyond the core differentiators of losslessness and being congestion-free, is extremely low latency and incredibly high message rates. You get excellent network bandwidth and the ability to push a massive number of messages very quickly. The combination of these features allows us to be applicable across training environments, inference environments, and HPC environments. These use cases have shared requirements but prioritize different features, which makes the CN5000 incredibly versatile.
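(Editor's note: a simple worked calculation to put a 400-gig link in perspective. The payload and message sizes below are illustrative assumptions, not CN5000 specifications, and real-world message rates are bounded by packet-processing capability, not just line rate.)

```python
# Back-of-the-envelope numbers for a 400 Gb/s link. Payload and message
# sizes are illustrative assumptions, not CN5000 specifications.

LINK_GBPS = 400
link_bytes_per_sec = LINK_GBPS * 1e9 / 8       # 50 GB/s of raw bandwidth

# Bulk transfer: moving an assumed 10 GB model shard between nodes.
shard_bytes = 10 * 1e9
transfer_ms = shard_bytes / link_bytes_per_sec * 1e3
print(f"10 GB shard at line rate: {transfer_ms:.0f} ms")

# Small messages: at 64-byte payloads the same link could, in principle,
# carry hundreds of millions of messages per second -- which is why
# message rate, not just bandwidth, matters for tightly coupled work.
msg_bytes = 64
print(f"64 B messages/s at line rate: {link_bytes_per_sec / msg_bytes:,.0f}")
```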
You joined Cornelis from Intel, where you led significant advancements in AI technology. What drew you to Cornelis, and how do you see your experience at Intel shaping the future vision at Cornelis?
LS: I spent the first 20-plus years of my career at Intel, and I learned a tremendous amount there. I have a deep respect for Intel and the opportunities we had to make a global impact. Before I left, I was leading the Xeon product line, which is deployed in nearly every data center worldwide. What I learned at Intel, and what’s helping me at Cornelis, is how to scale technology and make it available to solve customer problems. It’s not enough to invent a really cool architecture—you also have to build it in a form factor and capability that can be adopted and deployed. One of the most amazing things we’ve achieved at Cornelis is getting all the major OEMs on board to sell and market our products. That isn’t just because they like us; it’s because the technology is strong and performs well. What drew me to Cornelis was seeing how AI workloads were pushing the compute layer forward. I realized the network was the next great optimization frontier, and that improving network performance can do so much: boost energy efficiency and utilization, accelerate scientific discovery, and help address humanity’s greatest challenges. I got to know the team and the architecture, and I was compelled by the vision being built here. I knew I could help, and it’s been a fantastic seven months so far. We’re off to a great start, and 2025 will be a breakout year for us.
In what industries do Cornelis products get deployed, and can you describe the ecosystem that surrounds Cornelis and how your technology eventually makes it to the end customer?
LS: What’s interesting about Cornelis is how we get our products to customers. We don’t sell directly to enterprises, universities, or government entities. Instead, we sell through OEMs, companies like Dell, Lenovo, HPE, Penguin, and Supermicro, and they take our technology to the end customers. We need great relationships with our OEM partners, as well as with the end customers, to make sure they see the performance data and understand how our products can benefit their applications. We’re generally focused on enterprise AI and HPC workloads, but here are a few examples of where our technology is used:
- Genetic research, like cancer treatments, which is a traditional high-performance computing problem.
- Automotive, where every engine design is modeled with airflow analysis before it’s built physically.
- Weather prediction and oil and gas exploration, both of which require massive computing power.
- Private AI clouds: some auto manufacturers are now collecting vast amounts of data from consumer cars, and we help them build their own enterprise AI solutions.
We cover a lot of verticals, helping industries push the frontiers of discovery, from HPC challenges to enterprise AI opportunities.
Can you share a real-world example of how Cornelis’ interconnect solutions have helped an organization or industry achieve tangible results? What kind of impact did it have on their ability to address complex challenges or drive innovation?
LS: Sure! I mentioned automotive earlier. We've worked with many car manufacturers around the world in their wind tunnel and body design work, which are HPC problems. But these manufacturers are now also building enterprise AI systems to collect and analyze data from the cars on the road. This data is incredibly intensive, and they need an AI infrastructure capable of handling that. They’ve been able to deliver real-time services through their applications using our technology. This is driving innovation, improving customer experiences, and accelerating advancements in automotive and other industries.
What does the future of AI infrastructure look like, and what role will Cornelis play in shaping that future?
LS: What excites me about the future of AI infrastructure is that the advancements aren’t stopping. Both on the application and infrastructure sides, the pace of change is accelerating. At Cornelis, we see ourselves playing a critical role in developing AI infrastructure that serves everyone. We’re building products that can support up to 2 million endpoints in a single cluster, which is an amazing innovation. This allows us to add massive capacity on the compute side while still keeping network performance high. We’re also excited about the opportunities in test-time compute on the inference side, where inference can help inform training. Enterprises will be the ones to build businesses around AI, and we’re helping them do that with the right infrastructure, tools, and support. With our HPC heritage, we’re in a great position to serve mixed-use cases and continue being the leading network technology in AI, inference, and HPC environments. We see enormous potential ahead, and the challenge now is how quickly we can scale to meet growing demand.
Learn more about Cornelis at cornelisnetworks.com.