When you’re planning a $100 million wind farm on 98,000 acres of varied terrain, you’ll want to know a few things. You want to know that you’re optimally siting your multi-million-dollar turbines. You want to know that the turbines you source can handle the strongest gust of wind they’ll ever encounter without failing spectacularly in some viral YouTube video. And you want to test potential use cases and changes in software, which is cheap and changeable, not hardware, which is expensive and hard to modify.
Which is why global renewable energy company Siemens Gamesa is working with NVIDIA to build AI-powered digital twins of its turbines.
The digital twins let the company simulate scenarios before committing real hardware to them, Greg Oxley, Principal Data Scientist at Siemens Gamesa, told me on TechFirst’s latest podcast. “This could be…weather events coming, and we want to see how to optimally operate this wind farm as we move through these kinds of events. Or we could test new control strategies, something we want to look at in the future, and see how the wind farm is going to perform under these new control models.”
Siemens Gamesa has thousands of turbines around the world that together produce more than 100 gigawatts of wind power: enough, the company says, to power 87 million homes each year. That scale is ample reason to optimize how the turbines work and to protect them during storms.
Shutting turbines down in very strong winds is not a step to be taken lightly, since it cuts off power generation, but it’s also important for protecting expensive infrastructure. That requires dealing with unknowns, Oxley says, and it’s crucial to get it right.
“We always try to mitigate what we don’t know and set up appropriate barriers…but this puts us in an imperfect position,” he says. “We’d rather clarify this, understand the unknown as best we can, and get to real optimization rather than just adding buffers on top of everything.”
In other words, a margin of safety is both a good and a bad thing. It’s good when it saves money by not destroying turbines, but bad when it results in unnecessary shutdowns that cost money. Digital twins help Siemens Gamesa gain a more honest understanding of its equipment’s capabilities and limitations, and provide the company with the data and models it needs to respond optimally.
Doing so just got easier, says Dion Harris, product manager at NVIDIA. The company says its latest chipsets and AI frameworks can run simulation modeling up to 4,000 times faster than traditional methods.
“We were only using 22 GPU-accelerated nodes and were able to deliver the performance of roughly…984,000 nodes on a given system,” Harris told me. “It’s really about how we can simulate these very complex environments, but in a way that’s actually efficient. Because if money is no object, if power is no object, you can just throw CPUs at it all day and you can get there… AI gives us some tools to model these very complex systems in a way that is both extremely time and energy efficient.”
NVIDIA helps build digital twins of Siemens Gamesa’s wind farms using NVIDIA Omniverse, a 3D design technology to “connect and create digital worlds,” and NVIDIA Modulus, a “neural network framework that blends the power of physics in the form of governing partial differential equations with data to build high-fidelity, parameterized surrogate models with near-real-time latency.”
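The core idea behind physics-plus-data surrogate modeling can be shown in miniature. The sketch below is purely illustrative (it is not the Modulus API, and all names and numbers are made up): it fits a cubic-polynomial surrogate to the toy governing equation u′(x) + u(x) = 0, blending the equation’s residual at collocation points with one measured data point, u(0) = 1, in a single least-squares objective. The exact solution is u(x) = e⁻ˣ, so we can check how close the surrogate lands.

```python
import numpy as np

# Toy "physics-informed" surrogate (illustrative only, not the Modulus API):
# fit u(x) = w0 + w1*x + w2*x^2 + w3*x^3 so that it both satisfies the
# governing equation u'(x) + u(x) = 0 on [0, 1] and matches one measured
# data point, u(0) = 1. Exact solution: u(x) = exp(-x).

x = np.linspace(0.0, 1.0, 20)            # collocation points for the physics term

# The residual u' + u at each collocation point is linear in the weights:
# u' + u = (w0 + w1) + (w1 + 2*w2)*x + (w2 + 3*w3)*x^2 + w3*x^3
P = np.stack([np.ones_like(x),           # coefficient of w0
              1.0 + x,                   # coefficient of w1
              2.0*x + x**2,              # coefficient of w2
              3.0*x**2 + x**3], axis=1)  # coefficient of w3

D = np.array([[1.0, 0.0, 0.0, 0.0]])     # data operator: evaluates u at x = 0
y = np.array([1.0])                      # measured value u(0) = 1

# Blend physics and data into one least-squares system and solve it directly.
A = P.T @ P / len(x) + D.T @ D
b = D.T @ y
w = np.linalg.solve(A, b)

u = lambda xq: w[0] + w[1]*xq + w[2]*xq**2 + w[3]*xq**3
print(u(0.0), u(1.0))   # tracks exp(-x): u(0) near 1, u(1) near exp(-1) ≈ 0.37
```

Modulus does this with neural networks, turbulent flow equations, and far larger parameter spaces, but the blending principle is the same: the physics term constrains the model where data is sparse, and the data term anchors it to reality.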
Translation: using AI to model the real world with high accuracy, and making the result available not just as numbers in a spreadsheet, but as an explorable visual experience.
What’s the result of all this high-tech, VR-ish metaversing for renewable energy systems? Fewer known unknowns, and fewer unknown unknowns.
“What this allows us to do is really get rid of the unknown,” Oxley says.
Within reason, of course. As always when modeling physical reality at scale, the question is how you ensure your model accurately reflects current real-world systems in all their near-infinite complexity, and how well it predicts future events.
That comes down mostly to boots on the ground, says Oxley, plus continuous fine-tuning of artificially intelligent knobs and dials.
“We’re always actively measuring performance in the field,” he says. “So you’re always actively working in a physics-based model, turning the knobs that you need to across a large scale, [tracking] …the slightest deviation from what is actually happening in the field. And the same with machine learning models: you’re constantly training, they’re constantly improving. So you need that feedback of actual performance in the field, the ‘reality’ of what’s going on, fed back to your original predictions, tuning back and forth all the time.”