Generalized AI: Generalized AI is a straightforward pipeline: you take some data, process it, feed it into a machine learning algorithm, and then predict on it.
Verticalized AI: Verticalized AI is a collection of statistical, scientific, and machine learning models that work together. It’s not a single clear pipeline; there’s a lot of cross-communication between models.
Sparse Data: Data acquired point by point rather than contiguously. Sparse data is rare and often the most useful kind; its gaps are best filled by verticalized AI.
Dense Data: A continuous, voluminous cloud of data. Dense data is a large amount of information about one specific kind of subject.
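The generalized pipeline in the first definition (data → process → model → predict) can be sketched in a few lines of Python. The function names and the deliberately trivial mean-of-targets model are illustrative only, not any particular product’s implementation:

```python
# Minimal sketch of the generalized-AI pipeline described above:
# raw data -> process -> learn -> predict.

def process(rows):
    """Cleaning step: drop rows with missing values."""
    return [r for r in rows if None not in r]

def fit(rows):
    """Learning step: a toy model that predicts the mean target."""
    targets = [y for _, y in rows]
    mean = sum(targets) / len(targets)
    return lambda x: mean  # prediction step ignores the input entirely

raw = [(1.0, 2.0), (2.0, None), (3.0, 4.0)]
model = fit(process(raw))
print(model(10.0))  # -> 3.0
```

The point of the sketch is the shape, not the model: each stage knows nothing about the domain, which is exactly the property verticalized AI replaces.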
Verticalized AI, as found in the Petro.ai platform, incorporates two critical components for solving complex oil and gas (O&G) well spacing problems: utilizing domain knowledge and understanding data. Generalized AI, or “AI for all,” has found its niche in broad market applications but lacks the domain-specific functionality to work on serious industry problems. The challenge is real, and the answer is in: verticalized AI is the future.
Utilizing Domain Knowledge
Dr. Derek Ruths, Chief Data Scientist of Petro.ai, explains,
“Verticalized AI is AI that’s developed with an understanding of what the actual problem is and designed with that problem in mind. The well spacing apps we’ve built have AI in them and are structured using a tremendous amount of domain knowledge. It’s specialized AI for a specific problem.
“A good analogy for verticalized AI,” Derek continues, “is in the concept of visualizations. Spotfire or any BI tool provides generalized visualizations. You can use it for anything, but the problem is that every time you build a plot, you have to build it from scratch. You have to figure out where your data is coming from. You’re solving the whole problem every time you’re sitting down to make a plot.
“Verticalized visualizations are what you use in Petro.ai. They’re based on solid expert knowledge of information that’s needed by engineers in O&G. Because the visualizations already understand what you’re looking for, the plot is well defined. You may tweak some things, but you don’t have to re-decide what goes on your x axis, y axis, what data to pull in. All those things are known and part of the AI solution.”
Derek adds, “An important part of what makes a vertical AI verticalized is the quality of the data you’re putting in, which again depends on domain expertise.
“Quality has a couple of properties. First, the data is representative, meaning it reflects the conditions under which the model is going to be used. So having a domain expert keenly involved in preparing a representative data set is vitally important.
“The second attribute of a quality data set is the labeling: knowing how you’re flagging the data. You have to apply intricate domain knowledge to go in and differentiate what’s right from what’s wrong. AI is completely dependent on being trained with correctly sorted patterns of information.
“It’s that combination, the representativeness of the data and the quality of the labeling, that you have to put into the AI. That’s where you need deep domain expertise. There has to be a tight integration between your domain experts and your data scientists or AI engineers. It’s that combination that will allow you to form an output that mimics understanding the problem.”
“And that output must be connected to key business decisions,” Derek emphasizes. “An expert has labeled it. That expert decision maker, who understands the problem being contextualized in the AI, has mapped the output to a business decision.”
Understanding Data
Dr. Troy Ruths, CEO of Petro.ai, weighs in on the first complexity of data, sparsity: “As you collect more data, it’s intrinsically going to have higher dimensions, meaning there are more columns in the data; there’s more complexity in the data. And when that happens, you’re creating a sparser data set. This is the great irony of big data. Big data is typically not dense data. As you collect more information and integrate it, you’re actually creating sparse data sets. And it’s sparse because you’re putting in different views of the same problem.
“For instance, in Petro.ai we have microseismic data, stratigraphic models, surveys; these are all different data types, but they’re actually sparse when you think about all the information that could be there. Generalized machine learning algorithms don’t perform well when there’s a lot of sparse data. One of the roles of verticalized AI is to help combat sparsity by putting in things like physical principles and understanding how you can connect two sparse points or interpolate between them.”
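Troy’s point that integrating more data types makes big data sparse can be shown with a toy experiment: holding the sample count fixed, the fraction of a discretized feature space that actually receives a sample collapses as the number of dimensions (columns) grows. The bin counts and sample sizes here are arbitrary choices for illustration:

```python
import random

random.seed(0)
n_samples, bins = 1000, 10
coverage = {}

for dims in (1, 2, 3, 4):
    cells = bins ** dims  # size of the discretized feature space
    # Record which cells receive at least one random sample.
    seen = {tuple(random.randrange(bins) for _ in range(dims))
            for _ in range(n_samples)}
    coverage[dims] = len(seen) / cells

print(coverage)  # observed fraction falls off sharply with dimensionality
```

With 1,000 samples, one dimension of 10 bins is fully covered, while four dimensions (10,000 cells) can be at most 10% covered: the same data volume, viewed in more dimensions, is sparse.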
“Data sets can also have information gaps or be incomplete,” Derek continues. “But this isn’t a problem for verticalized AI. The beauty is that, because of the physical modeling built into the AI, it knows how to fill in that missing data in a more meaningful way.
“When you’re talking about a verticalized AI, as we’ve noted, the AI contains domain knowledge that is relevant to estimating data gaps. If you had a generalized system and didn’t have the well depth, the generalized AI would approximate using a 10’ well, a 500’ well, and a 1000’ well, because to it those are all equally likely. Whereas a verticalized system would say: I know what wells look like, I’ve seen this information pattern, wells are typically a kilometer deep, and then the AI will use that to fill in the missing data.”
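Derek’s missing-depth example can be sketched as two imputers. The “about a kilometer” prior and the simple equal-weight blend below are assumptions made for illustration, not Petro.ai’s actual method:

```python
# Hedged sketch: filling a missing well depth with and without a
# domain prior. All constants and function names are illustrative.

FT_PER_KM = 3280.84  # the "typical well is ~1 km deep" domain prior

def generalized_impute(candidate_depths_ft):
    """No domain knowledge: treat every candidate depth as equally
    likely and average them, as in the 10'/500'/1000' example."""
    return sum(candidate_depths_ft) / len(candidate_depths_ft)

def verticalized_impute(observed_depths_ft, prior_ft=FT_PER_KM):
    """Domain prior: start from a typical ~1 km depth and blend it
    with depths of comparable wells we have actually seen."""
    if not observed_depths_ft:
        return prior_ft
    observed_mean = sum(observed_depths_ft) / len(observed_depths_ft)
    return 0.5 * prior_ft + 0.5 * observed_mean  # naive 50/50 blend

print(round(generalized_impute([10, 500, 1000])))    # -> 503
print(round(verticalized_impute([3100.0, 3400.0])))  # -> 3265
```

The generalized guess (about 503 feet) is physically implausible, while the prior-informed guess stays near realistic well depths even with only two comparable observations.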
Last month, Microsoft acknowledged the role of verticalized AI: “The rapid pace of innovation in artificial intelligence has led us to the next stage of AI development, which we call ‘industry verticalization’: designing AI products to meet the needs of specific industries.” For O&G, that innovation is already here in the verticalized AI of Petro.ai, where our data science experts and O&G experts combine to make the industry smarter, from supermajors to mid-size operators to independents. We are all impacted by the Intelligence Revolution.