Talking about innovations in the data center industry with John Belizaire
December 6, 2022
Prylada held a series of customer development interviews with experts from the data center industry to gain more insight into the challenges data centers face during their routine operations.
This article is an excerpt from our talk with John Belizaire, CEO at Soluna Computing. Here we discuss data center trends, the industry's struggle for sustainability, and even the downsides of an innovation business. Join us in our exciting conversation.
You offer a rather unique solution on the market, and when you talk about it, it sounds so brilliant and … logical. But could you tell us a bit more about what makes your solution innovative?
Soluna is really on a mission to have renewables become the dominant source of energy on the planet. We do that by solving a little-known problem in the renewable energy space. When you replace fossil fuel power plants with renewable energy power plants on the grid, you start to have issues where the grid can't control how much power is generated. Sometimes it's too much, sometimes it's not enough. And when it's too much, the power plants tend to get asked to shed the power or turn off parts of the plant. So there's a lot of wasted energy as a result.
From our perspective, there's a very elegant solution you can bring to that problem. The solution is scalable, can be deployed anywhere on the planet, and is specifically designed for this purpose. All it required was finding a consumer for the wasted energy. And that is what we've done.
We've built a data center with a focus on being green, using a completely different design from traditional data centers. It focuses on power consumption and power usage effectiveness that's best in class in the industry. And it allows us to turn around and point that highly scalable facility at customers who need intensive computing focused on analytics, such as AI and machine learning, and who are interested in putting their computing on a sustainable platform.
Existing clients already have traditional data centers. Does Soluna provide an easy onboarding and migration process?
100%. Our approach is to give customers tools to easily transition their data over to our platform. We give them the same user experience they have on their existing platform, where they can upload code and use container-based systems, and we do that essentially by running the same platforms.
If you're going to run a very large GPU supercomputer in your office, there's a certain stack you're going to build on that machine. We provide the same stack on our platform. So it should feel like a typical cloud when you engage with us, but it's different in that we only run certain types of applications and our uptime will not be 24/7. Our uptime is much lower than a traditional data center's, but that's okay for these types of applications, because they're not real-time.
In software development, there was an era of monolithic applications, which later gave way to microservices and containerization. Could we observe a similar tendency in the data center industry in the next 10 to 20 years? Will we witness different types of data centers designed for specific tasks?
Yeah, that's a very good point. Actually, I started my career as a software entrepreneur, so I spent a lot of time working with companies, building applications, understanding application architecture, IT architecture, the cloud, all that sort of thing. So it's interesting that I'm bringing that expertise here to this new wave of computing.
My first company was actually in the component software space. That's what it was called 25 years ago. Now we call it microservices. So I would literally walk around with a box of Legos and I would say, ‘This is how software will be built in the future’.
Now I feel like I'm doing the same thing 25 years later. I'm saying data centers will no longer be monolithic beasts that can process anything that you want. You actually will start to see more efficiently designed modular facilities that can be placed all around the world, because the demand for data centers is just going in one direction and the types of applications are continually moving towards intelligent applications, data focused applications, analytical applications, modeling, that sort of thing. And it's becoming much harder to build all of your data centers in these concentrated parts of the planet.
That concentration is creating a number of obstacles to reaching our climate change goals. But what if you could decompose the data center and redesign it in a more modular way, like we're seeing on the software side? I think that's a trend you're going to see.
Schneider Electric is already designing a host of modular data center facilities that you can place in remote areas. You're starting to see data centers become decomposed and then reassembled in a more efficient way. That's what we've done.
We've taken a traditional data center, and we've designed it such that we can create a super efficient machine that converts power to really high-end computing for very specific applications. And the design of the building is focused around being a modular design. I think that's going to change the nature of computing down the road, and it will become more advanced.
You'll have different types of computers in the building. So today we use computers that have multiple chips in them. What we're starting to see is a big trend where we're getting much bigger chips so that you can put way more computing power in one concentrated area.
And what about the physical infrastructure control in the data center buildings? With the new modular system, will the importance of facilities monitoring increase?
So, all of our facilities are computer controlled. We have a software platform, which is essentially a facility management system, and it's also a decision system.
We're connected to the local power infrastructure and the grid system. So we can monitor how much output is coming from the wind farm, what the power prices are in that location, what the trending of that power price is, and what the next day will be.
That's actually our software: it sizes the data center's load to optimize the cost of energy and the amount of energy we need for certain applications. We're constantly tuning that up and down.
That allows us to do some very cool things. First, when we receive a job: if a customer gives us a job to run in our facilities, we can choose which data center to place the job in, such that the job gets done on time with the best possible cost optimization and the greenest energy.
The second thing it allows us to do: since we have this predictive ability around power availability, and we know the characteristics of the application, we can pause an application, lift and shift it to another facility, and run it there. The result is that we've essentially created one very large distributed data center.
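The placement logic described here, picking a site by deadline, cost, and energy mix, can be sketched roughly as follows. This is a minimal illustration, not Soluna's actual system: every name, weight, and number is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    price_per_mwh: float      # forecast energy price at this site (USD/MWh)
    green_fraction: float     # share of power from renewables, 0..1
    free_capacity_mw: float   # capacity available right now

@dataclass
class Job:
    name: str
    demand_mw: float
    hours_needed: float
    hours_to_deadline: float

def place_job(job: Job, sites: list[Site], green_weight: float = 50.0):
    """Pick the feasible site with the best blend of cost and greenness.

    A site is feasible if it has spare capacity and the job can still
    finish before its deadline. The score is forecast energy cost minus
    a bonus for renewable share; lower is better. An infeasible job is
    paused (None) and retried when conditions change.
    """
    feasible = [
        s for s in sites
        if s.free_capacity_mw >= job.demand_mw
        and job.hours_needed <= job.hours_to_deadline
    ]
    if not feasible:
        return None  # pause the job, re-place it later
    def score(s: Site) -> float:
        cost = s.price_per_mwh * job.demand_mw * job.hours_needed
        return cost - green_weight * s.green_fraction
    return min(feasible, key=score)

# Hypothetical fleet of two sites and one batch job.
sites = [
    Site("wind-farm-tx", price_per_mwh=12.0, green_fraction=0.95, free_capacity_mw=40),
    Site("hydro-ky",     price_per_mwh=25.0, green_fraction=0.80, free_capacity_mw=60),
]
job = Job("model-training", demand_mw=10, hours_needed=8, hours_to_deadline=24)
best = place_job(job, sites)  # the cheap, green wind site wins here
```

The same scoring function can be re-run when a site's forecast changes, which is the "pause, lift, and shift" behavior: a paused job is simply placed again against the updated site list.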
When talking about the data center industry in general, from your perspective and experience, what are the key challenges in the industry now? What is troubling vendors the most?
When we talk to the big hyperscalers, you basically have these really highly tuned machines. They're like F1 vehicles: built to go fast, cooled in a very specialized way, and using a lot of water. They also need stable, abundant power. I'm talking about the hundreds-of-megawatts level. So sometimes they put strains on the grid, or they have to put legacy fuels, like diesel, behind the data center to make sure it stays on.
It also depends on how cooling is done. More data centers are moving, or trying to move, to oil cooling instead of water-based, air-conditioning-style systems.
The other challenge is that customers can't be 100% confident that their data center uses only green energy. What you're starting to see is data centers going out and buying green power, then using it to offset the carbon footprint of their specific location. You're seeing companies do what's called a 360 mapping, where they look at every hour that they're computing and try to match green power to each of those hours. And that's really difficult to prove, because you actually have to go all the way back and support the development of the power so that you can land on those times. Those are the big challenges we've seen.
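The hour-by-hour matching described here differs from annual offsetting: surplus green power in one hour cannot cover a deficit in another. A toy illustration of why the two metrics diverge (all figures hypothetical):

```python
def hourly_match_score(load_mwh: list[float], green_mwh: list[float]) -> float:
    """Fraction of compute energy covered by green power, hour by hour.

    Each hour's credit is capped at that hour's load, so buying more
    green energy in total does not guarantee a perfect hourly match.
    """
    matched = sum(min(load, green) for load, green in zip(load_mwh, green_mwh))
    total = sum(load_mwh)
    return matched / total if total else 1.0

# Four hypothetical hours: steady 10 MWh compute load, variable wind output.
load = [10, 10, 10, 10]          # 40 MWh consumed in total
wind = [25, 5, 0, 18]            # 48 MWh generated: a 120% annual offset
score = hourly_match_score(load, wind)  # yet hourly match is only 0.625
```

On an annual basis this example looks fully offset (48 MWh bought against 40 MWh used), but only 62.5% of the hours' consumption is actually backed by concurrent green generation, which is exactly what makes the hourly claim hard to prove.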
Could you describe the top three to five disturbing or annoying issues you've faced in your job recently?
Well, once you've entered an innovation business, there are three things you have to do. I'm in the innovation business, and these are the three things I spend most of my time doing.
On the education side, I have to educate people that it's okay to build a data center that isn't 24/7, and that, in fact, future data centers won't be 24/7, because they'll be located all over the world, near sources of renewable energy. The persuasion side is about convincing partners that they shouldn't put a battery out there, that computing is a better battery now, and that the opportunity is to build renewable energy and this form of data center, this form of computing, at the same time.
We think that many plants that couldn't be built before can now be built, because we've solved their biggest problem with this solution. And we also believe that, having built a very large fleet of these facilities, we've brought forward a new form of computing platform that's more sustainable and is a true catalyst for creating new energy.
What advice can you give to the companies that want to keep a balance between being profitable and being environmentally friendly?
Most businesses are becoming computing businesses now. If you look at just about any enterprise, more and more companies are trying to figure out how to make use of their data and how to make better business decisions through forecasting and computing, that sort of thing.
And what they're realizing is that this computing is becoming such a big part of their business that they need cost-effective solutions for it. Traditionally, customers sign multi-year deals with the big hyperscalers, and those facilities are designed to be very flexible, in the sense that you can do any type of computing. It's a general-purpose data center. So you're paying for 24/7 high availability at those facilities. You're paying for the ability to put different types of compute, real-time and non-real-time, in the facility.
There's a cost associated with that, and lots of IT organizations are beginning to realize that they don't actually use all of those capabilities, even though they're paying for them. That's starting to create this concept of multi-cloud, where you can sign up with several different providers for different purposes. And one of those purposes might be to take very specific jobs and move them to a platform that is tuned for that job. And because it's tuned and that's all it does, it's a lot cheaper. When companies want to train predictive models or AI platforms, they're starting to look for platforms that specialize in exactly that.