In part two of our Energy Transition Talks conversation on generative artificial intelligence (AI), CGI experts Diane Gutiw and Peter Warren further explore the implications and applications of AI in the energy and utilities industry. Building upon their discussion in part one, they examine how digital twins, change management and trusted data are shaping the use and performance of AI in energy organizations, ultimately looking to the future of AI as a multimodal, human-driven technology solution.

The key to realizing AI value: integrated solutions and digital twins

Increasingly, the greatest benefits of generative AI are emerging not in single solutions, but in integrated, multi-model, multimodal ways of pulling in information, producing expert advice and automating certain functions.

The energy industry, says Diane, is “a great example of a very complex environment with lots of different types of media and data that can be leveraged by these new and upcoming technologies.”

In her view, AI is headed toward digital twin models and integrated solutions. In the energy industry, this increased data-driven automation can help make both the grid and operations more efficient.

Peter Warren shares that one key use case for digital twins is helping organizations better understand other markets as they transition their current model. “You might know your existing industry well,” he says, “but as you move from traditional carbon-based energy to something less carbon-based, be it hydrogen or electricity, you may not know those markets; being able to create a digital twin of something you haven’t formally understood is a huge benefit.”

Diane agrees and suggests that the adoption of a digital twin to represent an organization’s current environment is a great use case, especially where there’s a data-intensive end-to-end workflow. Not only does this provide a robust view of the existing environment, she says, “but also it allows organizations to look at different scenarios and leverage AI to say, for example, ‘What would happen to the grid if this event happened, and how could I automatically adjust?’”
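
To make the scenario idea concrete, here is a minimal sketch of how a toy digital twin of a grid could play out a “what if this feeder failed” question and propose an automatic adjustment. It is illustrative only, not a CGI solution or a real power-flow model; every feeder name, load and capacity in it is invented.

```python
# Minimal, hypothetical sketch of "what-if" scenario analysis on a toy grid
# digital twin. A real digital twin would sit on live operational data and
# proper power-flow models; all names and numbers here are invented.
from dataclasses import dataclass

@dataclass
class Feeder:
    name: str
    load_mw: float       # load currently served by this feeder
    capacity_mw: float   # maximum load the feeder can carry

    @property
    def headroom_mw(self) -> float:
        return self.capacity_mw - self.load_mw

def simulate_outage(feeders: list[Feeder], failed: str) -> dict:
    """Play out a feeder-outage scenario and propose an automatic adjustment."""
    lost = next(f for f in feeders if f.name == failed)
    survivors = [f for f in feeders if f.name != failed]
    to_transfer = lost.load_mw
    plan = []
    # Shift the lost load onto the feeders with the most spare capacity first.
    for f in sorted(survivors, key=lambda f: f.headroom_mw, reverse=True):
        take = min(f.headroom_mw, to_transfer)
        if take > 0:
            plan.append((f.name, round(take, 1)))
            to_transfer -= take
    return {"scenario": f"loss of {failed}",
            "transfer_plan_mw": plan,
            "unserved_mw": round(max(to_transfer, 0.0), 1)}

grid = [Feeder("F1", 40, 60), Feeder("F2", 35, 50), Feeder("F3", 20, 45)]
print(simulate_outage(grid, "F2"))
# Proposes shifting F2's 35 MW onto F3 and F1 and flags any unserved load.
```

A production digital twin would run scenarios like this against live operational data and real network models, but the shape of the question and the answer is the same.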

The role of organizational change and trusted data in shaping AI performance

Change management plays a large role in the structuring and maintenance of AI within an organization. Data-based decision-making relies on trusting the source of the output and the validity of the output itself. This requires a large structure of people to ensure the data is maintained, reliable and actionable.

As Peter says, “There’s quite a bit of an organizational change to actually make AI function within an organization, and energy utilities are no different.” 

The key, Diane believes, is trustworthiness. Revisiting the digital twin example, she suggests that playing out various scenarios based on an event will create more trust when the event happens in real life.

“There’s lots of different ways to build out change management to support and enhance the use of AI in an organization, but it really comes down to trust, and that’s why transparency is so important.”

The use of AI solutions to optimize resource capacity, cost and safety

A trend that took root during COVID is the use of information and data in different media to enable remote work. This continues to prove valuable for energy organizations in terms of cost, safety and logistics.

Diane explains how AI solutions are optimizing transportation routes and determining where and when to send crews, allowing organizations to better manage resource capacity, travel time and cost. These solutions also address safety concerns, as satellite and drone imagery solutions help collect information, data and images without having to, for example, send humans up poles.
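
As a rough illustration of the routing idea, the sketch below matches crews to jobs by travel distance. The crew names, coordinates and the greedy nearest-crew rule are hypothetical stand-ins; a real dispatch system would work from road networks, job priorities, time windows and an optimization solver rather than straight-line distance.

```python
# Hypothetical sketch of crew dispatch by travel cost. A greedy nearest-crew
# assignment stands in for real route optimization; all data is invented.
import math

crews = {"Crew A": (49.28, -123.12), "Crew B": (49.10, -122.66)}   # (lat, lon)
jobs = {"Pole inspection #41": (49.25, -123.00),
        "Transformer check #7": (49.05, -122.80)}

def distance_km(a, b) -> float:
    """Great-circle distance, good enough for a dispatch sketch."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

available = dict(crews)
for job, site in jobs.items():
    # Send whichever available crew has the shortest trip to the job site.
    crew = min(available, key=lambda c: distance_km(available[c], site))
    print(f"{job}: send {crew} ({distance_km(available[crew], site):.1f} km away)")
    del available[crew]   # one job per crew in this toy example
```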

Further, these solutions can accelerate understanding and resolution of issues. “Video data, still images, device data can be consumed much quicker with generative AI tools, in combination with traditional AI methods, to gather that information and visualize it for the operator at the end.”

The future of AI solutions within organizations

Looking at the next five years, Diane reveals three key trends already emerging in the AI space, which she believes will be critical to organizations in the future:

  1. More hybrid solutions, in which organizations keep a human in the loop, refining their AI with human input and leveraging the technologies to get smarter.
  2. ‘Swarming,’ a term Diane credits to futurist Mike Walsh, in which groups of hybrid solutions such as digital twins and other multimodal solutions work together to complete complex tasks, advancing programmatic AI.
  3. Increased proactive AI, in which AI interactions prompt humans to make a decision or ask a question, for example in predictive maintenance or personal health devices.

Although she admits we will have to debunk some AI-generated information, Diane points out there are effective emerging tools already doing just that.

She doesn’t believe we need to be afraid of the future of AI, and quotes Picasso to highlight why: “Computers are useless. They can only give you answers.” For Diane, as long as we’re asking the questions and AI is the one giving us the answer, “we’re really in control of where this is going in the future.”

Listen to other podcasts in this series to learn more about the energy transition

Read the transcript

1. Introduction and continuation from part one

Peter Warren:

Hey everyone, welcome back for part two of our series with Dr. Diane Gutiw on AI. Last time, we explored the background of how to approach an AI project, should you be afraid of it, how's it going to go, and I think we kind of cleared up that yes, it's coming, but don't be too fearful.

Diane, just to kick that off again, picking up from that last point about being fearful. In the energy transition space, as we dive deeper into this here, we see that it's not just energy for the traditional energy players; it's manufacturers wanting to know how to acquire better energy at a better price, maybe with lower carbon, and if they do that, do they participate in the energy market? People are crossing bounds with this type of technology. Any thoughts on that from you?

2. Multimodal, integrated solutions are where AI is headed

Diane Gutiw:

I think that the energy industry, as well as the digital twin models and these integrated solutions, really is where AI is going. Where it's really going to show its greatest benefits is not just in a single solution, (and we talked a lot about that, what are some great uses of generative AI), but in these integrated, multi-model, multimodal ways of pulling in information and being able to provide advice.

Being able to automate certain functions, provide expert advice will help make both the grid and operations more efficient, as well as provide more safety for workers. You can also deal with things like resource capacity concerns when you bring all of the tools together into an integrated environment.

 The energy industry is a great example of a very complex environment with lots of different types of media and different types of data, be it document or video or images, that can be leveraged by these new and upcoming technologies that are rolling out.

3. Digital twins are key to increasing AI use cases and value

Peter Warren:

You mentioned a couple of really key things there; the fact that you might know your existing industry well, but you could understand it even better. But, as people go from traditional carbon-based energy to something less carbon-based—be it hydrogen or into electricity—they may not know those markets, so being able to do a digital twin of something they haven't formally understood is a huge benefit. Do you see that in other industries as people sort of cross between points, maybe in manufacturing, for example?

Diane Gutiw:

I think where there's an end-to-end workflow that's very data intensive, whether it's in healthcare, telecom, transportation or lots of different industries, there are fantastic opportunities to adopt digital twin models to represent your current environment. It isn't just a representation of what's happening now in my environment, which is a great use of a digital twin. It also allows you then to look at different scenarios and leverage AI to say, "What would happen if I was to adjust something in this way? What would happen to the grid if this event happened, and how could I automatically adjust?" Those are the sorts of things that AI and digital twins together really are able to expand.

4. Organizational change and trusted data help organizations enhance AI performance

Peter Warren:

It's interesting, you talked about things, and I know Andrew, who works on your team. He and I did a presentation to a client and it was about how to leverage the AI, but there was this whole change management aspect of it, too. For example, if the data tells you that you should be turning left and you're insisting, “No, I've always turned right here,” you really have to have the ability to understand those things, but you also have to know that the data was right. If the data's not right, you have a whole structure of people to make sure that the data is being maintained. There's quite a bit of an organizational change to actually make AI function within an organization, and energy utilities are no different.

Diane Gutiw:

You're absolutely right. The real key to that, I think, is trustworthiness. Until you trust what models are telling you, until you trust an output, you're not going to fully embrace it. There's a whole governance and risk management model around AI, which helps and advances change management, that focuses on that trust. A big part of that is very principles-based, and a big part of it is transparency. As long as you are able to understand the source of the information and how that information is presented, I think that's really critical in building trust.

But going back to the digital twin, if you can play out different scenarios and see what's the next best action based on this event and be able to play it in different scenarios, you're going to have more trustworthiness when the event happens in real life. There are lots of different ways to build out change management to support and enhance the use of AI in an organization, but I do think it really comes down to trust, and that's why this transparency is so important. We need to understand the source and what it's telling us in order to be able to embrace it and use it as part of our business operations.

Peter Warren:

I was impressed with Andrew's description to this client that you must have a structure to correct the data in a real-time fashion. It's not just a case of letting things be, because now you're really making decisions, so there's an organizational structure that has to go with this, and that's something maybe we can dive into a bit more. But I had a question, too.

5. AI is helping organizations optimize resource capacity, cost and safety

Peter Warren:

In your opening connection, you talked about the workers, the workforce, and in the previous session of this, we talked about people using chatbots and AIs to optimize, but the person in the field, the person in the office, how do you see AI changing their lives?

Diane Gutiw:

I think we saw during COVID the use of information and data in different mediums to be able to work remotely. We're now seeing that continue, for safety, for cost, for logistics, for optimizing transportation routes and where we're sending different crews. It also really helps make sure that, if you leverage AI solutions, you're able to understand what the issue is, send the right crew to the right location and optimize the amount of travel that's needed. This has already been tested, and it's something that we're seeing more and more for resource capacity concerns, as well as for team safety concerns when people are going way out in the field or in unsafe areas.

We're also seeing the combination of satellite imagery and drone imagery to avoid sending people up poles so that we're able to collect information, data, and images. Again, looking at people's safety, as well as resource capacity. How can we do something remotely more efficiently, collect the data, and then use that data to help make decisions without having to send people out?

Peter Warren:

I think that's something that people don't realize, that at CGI, we do a ton of work with space data, even helping various companies and countries fly satellites and everything else. Up to now, people have wanted to use satellites and drones, but really what you're saying is that, from a single person's point of view, they may not be able to digest all that data. Now, the diverse data, the unstructured data within documents combined with real-time data, is something that you could put together in a use case in a way that you've never been able to do before, because one human probably couldn't digest all that. Is that more or less a good statement?

Diane Gutiw:

Yeah, absolutely. It still doesn't resolve the issue of when you're collecting IoT and edge device data every two seconds, do you need to consume it all? You still need to focus on what is the use of that information, what's the problem you're trying to solve, and then pull in the information that you need. But if you're starting to collect the video data, the still images, as well as the device data, all of that information can be consumed much quicker with generative AI tools in combination with a lot of traditional AI methods to be able to get that information and visualize it for the operator at the end.

Peter Warren:

That's brilliant. I know that in my own history here, I've been up utility poles that I probably shouldn't have been up, so I understand that whole comment.

6. The future of generative AI will be proactive, hybrid and human-driven

Peter Warren:

Getting into the last bit here, there's the snapshot of today, when people are thinking they're pretty advanced, but where do you see this if we jump ahead five years? What's going to be the next future?

Diane Gutiw:

That's a great question, and we've had a lot of eyes on frontier AI, which you may have heard about, which is all of the big partners getting together to start to look at how we move forward in a productive way. With frontier AI, there's a lot of speculation about what's coming next, a lot of talk about artificial general intelligence.

What I think is coming in the next five years and what we're starting to see is a few things. One would be more hybrid solutions, where we're able to continue to refine our AI with human input, having a human in the loop, to leverage these technologies to get smarter.

The next, to use the terminology of Mike Walsh (a great futurist, if you haven't listened to him), is what he's called swarming, which is basically a group of these hybrid things working together; a lot like what we were talking about with the digital twin, using these multi-model, multimodal media types of solutions to be able to do quite complex tasks. More programmatic AI, where we're firing a task at AI: “We'd like you to be an ethical hacker, here's a hard problem, we'd like to see if there's a vulnerability in our infrastructure,” and you can set the AI to a very specific task. You can monitor it and you can observe the outcomes.

And then the last one, which Mike Walsh doesn't cover, but for which we're starting to see requests, is more proactive AI: being able to use AI to prompt us to make a decision, whether it's predictive maintenance or personal health devices prompting you to do more. You're starting to see it in your smart devices. I think we're going to see that in our work coming up more, where the AI interactions are prompting us to make a decision, to ask a question.

 I'm going to come back as we're wrapping this up. I think why we don't need to be afraid is the one thing that I don't see changing, and it comes from the old Picasso quote, which is, loosely, “Computers are useless because they can only answer questions.” I think as long as we're asking the questions and AI is the one giving us the answer, we're really in control of where this is going in the future.

I think that it is a brilliant tool for helping us answer questions, but setting what those questions are, setting those use cases, and designing AI for a purpose really is where we're going. As for the fear around AI and people using AI for nefarious purposes, it's going to be very similar to the internet, where we're going to have tools to debunk these things, whether it's deep fake videos or plagiarized documents. It's getting easier and easier.

I've got a son who just started university who already sees at the top of his papers that no paper will be accepted that has been generated using generative AI. We're going to see that more and more, and the tools are getting very good at debunking those, so I think that's going to be part of how we're going to get better at being discerning in how we use them.

Peter Warren:

I think that's brilliant. Well, I couldn't think of a better way of wrapping that up. Thank you very much for your time and expertise. I know you've had a very busy day today, and it's probably not over, so thank you very much, Diane, for making time for us, and thank you for everybody that listened in. Diane, thank you.

Diane Gutiw:

Yeah. Great conversation. Thanks, Peter.

Peter Warren:

Cheers.