The AI That Cried Wolf: How VIA Refines Algorithms

As energy companies begin exploring AI solutions, we hear a recurring set of questions: How do your algorithms work? How much data do you need to make predictions? How do you measure the accuracy of your algorithms? With that in mind, we want to shed some light on one of the questions we are asked most often.

By Colin Gounden

Does VIA use subject matter expertise to build models or does it rely solely on AI algorithms?

The short answer is both. We start by building AI algorithms from a combination of a client’s equipment data (age, location, and equipment type) and contextual information such as pollution or weather data. We use this data to create initial predictions, which are then refined with input from the client’s internal subject matter experts. That’s right: our goal is to create software that works alongside human experts rather than replacing them.
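To make the idea concrete, here is a minimal sketch of what that first step can look like. It is not VIA’s actual pipeline: the column names, the join key, the synthetic values, and the choice of a random forest are all illustrative assumptions, standing in for whatever the real data and model happen to be.

```python
# Minimal sketch (not VIA's actual pipeline): joining equipment records
# with contextual data and training a first-pass failure model.
# All column names and values below are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical equipment records: age, location, equipment type.
equipment = pd.DataFrame({
    "asset_id":   [1, 2, 3, 4],
    "age_years":  [12, 3, 25, 8],
    "region":     ["north", "south", "north", "east"],
    "asset_type": ["transformer", "breaker", "transformer", "breaker"],
    "failed":     [1, 0, 1, 0],        # historical outcome to learn from
})

# Hypothetical contextual data (pollution, weather) keyed by region.
context = pd.DataFrame({
    "region":        ["north", "south", "east"],
    "avg_pollution": [41.0, 18.5, 27.3],
    "avg_humidity":  [0.71, 0.44, 0.60],
})

# Join equipment and contextual features; one-hot encode categoricals.
data = equipment.merge(context, on="region")
X = pd.get_dummies(data.drop(columns=["asset_id", "failed"]))
y = data["failed"]

# First-pass model whose predictions (and explanations) then go to the
# client's subject matter experts for review.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(model.predict_proba(X)[:, 1])    # initial failure-risk estimates
```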

How does our collaborative approach to refining the algorithm work? To explain, let’s take the example of an AI model trained to distinguish photos of huskies from photos of wolves. Initial algorithms had a hard time with this task. VIA’s key differentiator is a mathematical approach that extracts from the AI an “explanation” for each prediction. In this example, early results revealed that the model classified subjects as wolves whenever there was snow in the photo: an obvious mistake to a person, but not to a computer. Once the snow feature was removed, the algorithm’s accuracy improved more than three-fold. In the same way, we present an algorithm’s initial predictions and the corresponding explanations to a client’s team of experts, and de-prioritize the features they flag as spurious.
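The sketch below illustrates that explain-review-retrain loop on synthetic data. VIA’s own explanation method is not described here, so scikit-learn’s permutation importance serves as a stand-in, and the features (a genuinely predictive “snout_len” and a spurious “snow_in_photo”) are invented analogues of the husky-versus-wolf example.

```python
# Illustrative only: permutation importance stands in for VIA's
# proprietary explanation method, and the data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500

# Synthetic husky-vs-wolf analogue: "snout_len" genuinely predicts the
# label, while "snow_in_photo" is a spurious correlate baked into the
# training set (wolves photographed in snow more often).
snout_len = rng.normal(0, 1, n)
label = (snout_len + rng.normal(0, 0.5, n) > 0).astype(int)   # 1 = wolf
snow_in_photo = (label + rng.normal(0, 0.3, n) > 0.5).astype(float)
X = np.column_stack([snout_len, snow_in_photo])
features = ["snout_len", "snow_in_photo"]

model = RandomForestClassifier(random_state=0).fit(X, label)

# "Explanation" step: measure how much each feature drives predictions.
imp = permutation_importance(model, X, label, random_state=0)
for name, score in zip(features, imp.importances_mean):
    print(f"{name}: {score:.3f}")    # experts spot snow_in_photo here

# Expert feedback: snow says nothing about the animal, so drop the
# feature and retrain on what remains.
X_fixed = X[:, [0]]
model_fixed = RandomForestClassifier(random_state=0).fit(X_fixed, label)
```

In practice the expert review would cover many predictions and explanations, and “de-prioritizing” a feature might mean down-weighting rather than dropping it outright, but the loop is the same: explain, review, adjust, retrain.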

We see two big advantages to this approach. First, we remove spurious correlations that an AI algorithm may be picking up. Second, we gain buy-in from experts and users for the software’s predictions. Experts increasingly show “algorithm aversion”: they won’t blindly trust black-box predictions. Giving them explanations and input into the algorithms builds credibility in both the software and its recommendations.