Ep. 152
How can process manufacturing use AI for system-wide optimization
Jag Gattu, Founder & CEO, UptimeAI
Tuesday, November 08, 2022

This week we interviewed Jag Gattu, founder and CEO of UptimeAI. UptimeAI makes artificial intelligence-based plant monitoring software that combines predictive maintenance with explainable failure modes, recommendations, and self-learning workflows to mitigate equipment failures and performance loss in the process industries.

In this episode, we talked about the current state of artificial intelligence for system optimization in process manufacturing, with a focus on performance degradation and equipment failure. We also discussed energy efficiency improvements and long-term sustainability priorities.

Key Questions:

What is the value UptimeAI brings to the market?

What are areas of confidence and challenges for predictive analytics and system-wide optimization?

What are the steps a company should take to provide energy-efficient solutions?

How do you see the data availability in the manufacturing industry?

Transcript:

Erik: Jag, thank you so much for joining us today.

Jag: Thank you, Erik. A pleasure to be here.

Erik: Jag, you know I'm in Shanghai, as I just mentioned. I have a Chinese wife and a three-year-old. Already, she's planning his career — academic career in particular — as you can imagine. So, just looking at your CV, I have to imagine that your mother is very proud of you. You have this perfect path: IIT engineering, a Michigan State engineering MS, and then the MBA from Kellogg. Obviously, you're a very hard-working person. Where did you grow up?

Jag: So, first of all, thank you. That was pretty nice of you to say that. Now, I grew up in India. All my education — my primary, middle school, high school, and then my undergrad — all of that was pretty much in India. Then after my undergrad, that's when I moved to the US for a master's. Since then, I've been in the U.S. for the past maybe 20 plus years.

Erik: Then I see in your CV that you were working with MathWorks for a while and then moved over to GE. I guess MathWorks is a good place to build a technical foundation. At GE, I imagine you were working with oil and gas on product development. So, that must have been where you started to get more into this topic of helping operating companies improve operations. What were you working on? Can you mention just a couple of highlights of the projects you were working on at GE?

Jag: My background, my undergrad, was in mechanical engineering. But I was always fascinated by the intersection of the domain, which is mechanical, and computers. I mean, this was back in the day. So, programming, optimization, simulation: that always kept me very interested and intrigued. After my undergrad, when I did my master's, it was again in the same space. I was doing my master's and research in stress analysis, simulation, fluid-solid interactions, modeling, and so on. So, it was very natural for me to look at an opportunity at MathWorks, which was again very much at that intersection with industrial software.

MathWorks makes software that is used in a lot of industrial companies for data analysis, design, and production. Their tools are used in control system design for automotive and aerospace. Their software is used to design a lot of the software components in cars and other systems. Again, it was a very good intersection.

Now, interestingly, when I started my career at MathWorks, I started working a lot with industrial companies. I worked with a lot of the automotive companies to begin with, and then with defense companies: the U.S. Air Force, U.S. Navy, NASA, Boeing. Then energy companies, including Halliburton, Marathon, and Baker Hughes.

The foundation of how I started working with industrial companies was laid almost right out of my master's. Then, after my journey at MathWorks, which lasted close to a decade, I moved more into commercial roles and product innovation roles at GE. So, I did my business degree, and then I went to GE. At GE, I was mainly focusing on new product innovation. I managed some of the new product opportunities in the space of smart instrumentation: adding more sensors, and analytics both at the edge and at the cloud level. I also managed some of the asset and equipment monitoring software, including predictive maintenance solutions. So, that was my journey into the space.

Erik: Got you. Then it was in 2019, fairly recently, that you founded UptimeAI. What was the impetus for you? You had a successful career. I'm always curious about what drives an entrepreneur to say, "I'm going to give up this stable paycheck and this successful career, and test my fate." What was it for you?

Jag: Like you said, it's always a very personal and interesting journey; every founder has their own. For me, personally, I always liked to build things. I was always very hands-on. I took a lot of pride in what I made, whether it was at home, or electronics, or things that I did on the computer. For me, that process of building something to solve problems — something that other people will be excited about, that impacts other people, other industries, people's lives — was very, very rewarding personally. So, the idea, or the interest, to start a company and build something innovative that makes an impact on the world and the society around us was always there.

So, in my career, whether it was doing a business degree or moving into a more commercial role, those were steps that I took to round out my skills and my experience so that I could be ready to start a company. Then when I was at GE, during my last years at the company, I started working very closely with a lot of the process supermajors. I mean, the very large process companies, very large industrial companies across the globe, including the U.S., the Middle East, Europe, Russia, Egypt, Algeria, and Southeast Asia. That gave me a lot of perspective on how the industrial space is evolving.

Specifically, people are looking at how they can start to improve profitability using data. It was also interesting that companies wanted to move beyond just KPIs and dashboards, or even simple alarms or analytical tools. They want to solve the problem. They don't want to just get a bunch of alerts or look at dashboards and KPIs. Coupled with that, companies are also looking at the challenge of expertise drain: senior experts leaving the space, and the younger generation not necessarily coming in at the same rate.

Seeing all of this, I saw that there was a huge opportunity. There's a lot of movement in the development of technologies in other areas and other industries, but manufacturing itself, for the right reasons, has been more conservative in adopting them. So, I saw that it was ripe for disruption. That's when I decided to start UptimeAI.

Erik: Got it. Yeah, timing is very important, obviously. For anybody that has a physical footprint, things are going to move more slowly, and they're not going to be the one to make the first move. But we do seem to be reaching the point where there have been enough pilots, enough proof cases, that companies feel a higher level of confidence.

One more question before we get into the specific topic, which is this question of a startup versus a multinational, and bringing new technology to market. Because if you just put bullet points on paper, you would say the multinational has 20 times more right to win. They have assets in the field. They have existing customers, et cetera. But then, when you look at who is actually successfully bringing product to market, in a lot of cases, startups are the companies that are able to move faster. Why do you think that is? Why do you see the right to win for UptimeAI versus the larger incumbents in the market?

Jag: Yeah, that's a great point. There are many classic case studies on this kind of topic. Most of them talk about how success is your enemy, in a way. It is very much true. When you look at a large company, generally, they have a large portfolio of products. For example, large companies have a lot of service-driven solutions that they provide, which generate a lot of revenue.

Now, if you were to develop a software application that gives companies the capability to do the work on their own, a self-service kind of capability, it would disrupt the business line they have that is driven by services. So, there's always a conflict. And the larger the company, the more likely you are to have multiple conflicts when you try to do something very disruptive.

It often takes someone to take a very bold step and say, it's okay, I'm going to take that chance. I'm going to risk my existing revenue lines to do something disruptive, which is, by definition, a risky proposition. Typically, that is a hard decision to make. That's one of the reasons why it's very hard for large companies to innovate. The other reason: on paper, a hardware company should be able to develop software. But when people talk about culture, when people talk about the core strengths of a company, those things are real. When your people and your leadership come from a pure hardware space, building a software business is not the same model anymore. That is another reason I see: the company's core strengths do not necessarily align with the space they want to go into.

Then there are also the constraints. Innovation happens when there is a need, a necessity. I meet a lot of people who either come from or have experience in large enterprises. Sometimes they say that a new initiative is almost like a startup in a large company. But let me tell you, it's not actually a real startup. When people say it's a startup in a large company, they just mean that the product might be new. They still have a large sales force. They still have a large team of software engineers. They still have all the IT support. Everything is there. It's just that the product is new.

Whereas in a startup by itself, you have a lot more constraints, which means that you have to be much more focused. You can't just ask, can I grow my team? You really have to grow the business before you can grow the team. I think a lot of these factors make the conditions more conducive for a startup, for a small company, to really innovate, rather than a very large company.

Erik: Okay.

Jag: It's a long answer.

Erik: No, that's useful. That mirrors my experience. I was chatting with an executive from a telco a couple of months ago. He was saying, "Yeah, we have a lot of really innovative product teams that are doing really great products. But as soon as they start to get any traction, somebody kills them." Because when it's just a small team working on something, okay, we can give them freedom. But as soon as they start getting some traction in the market, somebody says, "Okay. We need to integrate that into my portfolio," or as you said, "That might cannibalize our business. Let's hold on this. Let's have a discussion." Then those discussions go on and on and on.

Jag: Exactly.

Erik: Okay. Great. It's a very interesting dynamic. But let's get into the topic now. Maybe we can start with the value proposition. To some extent, it's in your name UptimeAI. But obviously, you do a lot more than uptime. So, what is the high level value proposition that you're bringing to the market?

Jag: I think, fundamentally, let's say, if a person gets sick, we go to the doctor to get diagnosed and figure out what we need to do. If there is a problem with the machine or an operational issue, typically, people go to an expert. These are experts who have 30, 40 years of experience. They can connect the dots, diagnose and fix the issues.

Now, we know that in the past few years a significant number of these experts, by some estimates almost 75% or more of the senior experts, have already retired. You see such a drain of expertise. The younger generation is expecting tools and solutions that guide them or give them these insights in a more efficient way. At the same time, there is more emphasis on sustainability, and the machines are getting older. How do you run these plants more profitably, more safely, more sustainably? That's what UptimeAI does. It solves these problems by providing a solution that mimics experts in learning and solving operational issues. It's a pure software-based application that enables operations teams to solve reliability and equipment issues, as well as performance and energy efficiency issues, faster, with less effort, and at scale. That's what we do.

Erik: I would love to dive into a little bit the status quo of where we are. Because we're in this interesting space where, as you mentioned, not too long ago, people were more or less satisfied with visualizations and just being able to access the data remotely on a dashboard. Now there's the expectation that you're going to be able to do, to some extent, predictive analytics and optimize systems. We have success cases of that. But it's not easy, right?

Then there's always the question for companies of, what are the quick wins? Where are the areas where I have a fairly high level of confidence that this initiative will be successful? What are the use cases that are challenging and there's a possibility of failure, but they're worth trying because they're very high value? Then how do you connect those? How do you build a roadmap where you can say, "Well, we're not willing to invest a tremendous amount of money in a high-risk venture, but we want to start building the infrastructure, developing the data sets, and moving in that direction?" How do we do that?

I imagine you're having these conversations a lot with your customers, helping them to determine what do we do today, what do we plan for tomorrow, and then what do we think about in the future. How do you think about structuring these roadmaps and prioritizing initiatives?

Jag: Sure. So, let me start with the use cases and the status quo, and then we will go into how a company can look at it when they're thinking about moving the needle further, into more sophisticated solutions.

So, if you look at the market: I've talked about solving operational issues. We're talking about performance issues, like consuming more energy than necessary. If you look at a cement plant, energy could be 40% to 60% of the cost. If you are spending 1% more on energy, you could be losing maybe $5 million or $10 million a year just in that 1%.

Similarly in a refinery or a solar plant. In a solar plant, if you have dust accumulated on the panels, you could be losing up to 7% of production, which is an efficiency loss. Then look at a refinery with 300,000 barrels per day of production; it can typically use up to 1,000 tons per hour of steam, continuously. That's a lot of energy in steam. If you're able to save even 2% of that, the value is in the tens of millions of dollars annually. These are all problems with regard to performance: energy efficiency, raw material efficiency, water efficiency, and so on.
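[Editor's note: to make the order of magnitude concrete, here is a rough back-of-envelope sketch in Python. The 1,000 tons-per-hour steam rate and the 2% saving come from Jag's example; the cost per ton is an assumed, illustrative figure, since actual steam costs vary widely by site and pressure level.]

```python
# Back-of-envelope: value of a 2% steam saving at a large refinery.
# The steam cost below is an illustrative assumption; real costs
# vary widely with fuel prices and steam pressure levels.

steam_rate_tph = 1000        # tons of steam per hour (from the example above)
savings_fraction = 0.02      # 2% reduction
hours_per_year = 24 * 365    # continuous operation

steam_cost_per_ton = 30.0    # USD, assumed purely for illustration

tons_saved = steam_rate_tph * savings_fraction * hours_per_year
annual_value = tons_saved * steam_cost_per_ton

print(f"Steam saved: {tons_saved:,.0f} tons/year")
print(f"Annual value at ${steam_cost_per_ton:.0f}/ton: ${annual_value:,.0f}")
# ~175,200 tons/year; the dollar value scales linearly with the site's steam cost.
```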

Now, the other part of operational issues is equipment failures. These are generally more popular as use cases because they are more visible in nature. If a piece of equipment fails, people notice it very directly. Whereas from a performance standpoint, you could be losing 5% efficiency for a year and not even know it. It's quite common that people miss that kind of opportunity. So, these are the operational issues. If you address them efficiently at scale, then you reduce your maintenance costs, you improve your people's productivity, you increase your enterprise-wide consistency. All of those secondary effects follow.

Now, if you look at those use cases and then at the status quo, at how people are solving these problems today, you do have tools in the market. There are some digital tools, whether DCS alarms or a lot of point solutions. There are solutions for looking at performance. There are solutions for predictive analytics on rotating equipment. There are tools for looking at maintenance data. These are all different point solutions for different equipment, for different functions.

The problem is that if you really want to move the needle, you have to look at things more holistically. Take an example: one of the customers we were working with had a case on a turbine where they were seeing an axial shift on the turbine. They thought it could be a reliability issue, and the reliability team looked at it. They couldn't find anything wrong. Then they sent it to maintenance, then to instrumentation. Then they pulled in a senior expert from outside as a third-party consultant. That person came in, looked at a bunch of data, and analyzed it for a couple of weeks. He figured out that it was actually a performance or process issue, not a mechanical issue.

The reason why I mention this is that the whole process took twelve weeks. It showed very clearly that these aspects of reliability, performance, and maintenance are all interrelated. So, what people are trying to do now is, number one, look at things more holistically and optimize the opportunity in a consolidated way, rather than using point solutions that miss those opportunities. That is one thing people want.

On the other hand, people are trying to do this with, let's say, advanced analytics. That's where all the initiatives come from: now I have a lot of data, how can I utilize it? The challenge with a pure analytics approach is that it requires a lot of data science activity and effort. That could take months. You have to hire people, or you have to spend a lot of money. We have seen many cases where deploying for 1,000 pieces of equipment could take years before you go live. That's a lot of money and effort. Then even after you deploy, those models don't give you information about the diagnosis, the cause of the issue. They still only tell you symptoms: pressure high, temperature low. So, you're still dependent on experts.

The third main challenge with a pure analytics approach is, let's say you build a model. It throws an alert. Then the operator says it's okay, there's nothing wrong; the alert is actually a false alarm. What happens to the model? Nothing, because there is no connection between what the operator is saying and what the model is doing. These are the reasons why a lot of customers are stuck in pilots. It's great to take an example pump or compressor and analyze the data. But to actually make it an operational tool that can scale is a different ballgame. That's where UptimeAI comes in.

So, we have a purpose-built application for monitoring assets at the full unit level, the full plant level, that will predict the problem, connect the dots to help you identify the cause, and give you a recommendation on how to fix it. Then as the operators take actions, it automatically learns from those actions to adjust and get better and better. It mimics how experts actually do things. So, that's the journey.
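[Editor's note: the self-learning loop Jag describes, where operator dispositions on alerts feed back into the model, can be pictured with a minimal sketch like the one below. This is an illustrative toy, not UptimeAI's actual implementation; the class, names, and threshold-adjustment rule are all invented for the example.]

```python
# Minimal sketch of a feedback loop between operator dispositions and an
# anomaly model, illustrating the "self-learning" idea described above.
# Illustrative toy only; not UptimeAI's actual implementation.

from dataclasses import dataclass

@dataclass
class AnomalyMonitor:
    baseline: float      # expected sensor value
    threshold: float     # allowed deviation before alerting
    step: float = 0.1    # how much each disposition adjusts the threshold

    def check(self, value: float) -> bool:
        """Return True if the reading deviates enough to raise an alert."""
        return abs(value - self.baseline) > self.threshold

    def feedback(self, value: float, confirmed: bool) -> None:
        """Operator disposition closes the loop: a false alarm widens the
        tolerance slightly; a confirmed issue tightens it so similar
        deviations are caught earlier next time."""
        deviation = abs(value - self.baseline)
        if confirmed:
            self.threshold = max(self.threshold - self.step, 0.5 * deviation)
        else:
            self.threshold = max(self.threshold, deviation + self.step)

monitor = AnomalyMonitor(baseline=100.0, threshold=5.0)
reading = 106.0
if monitor.check(reading):                       # alert fires
    monitor.feedback(reading, confirmed=False)   # operator: false alarm
print(monitor.threshold)  # widened, so this signature no longer alerts
```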

You also asked how customers look at this. It seems like a daunting task to get from where they are to more sophisticated plant-level monitoring. Typically, when we talk to customers, there are customers who have significant experience because they have tried a bunch of things. We find a lot of customers who have tried a lot of pilots but were not able to scale. They have burn marks on their hands, so they are more careful; they know what to look for.

But there are also customers who are just starting their journey. What we tell customers is: first and foremost, start with use cases in mind. Don't just start with "I want to build a huge data lake" or "I want to add hundreds of sensors." Because at the end of the day, those hundreds of sensors may or may not actually solve the problem you're trying to address. You want to start from the point of "How do I generate value?"

So, what we suggest is: first, look at the use case. Work with someone who understands the use case. Don't just go with databases or data logging, the pure technical aspects; look at the business aspects first. The second part is that many of these companies already have a lot of good data. So, we always suggest looking at how you can generate value from existing data first. Work with vendors, utilize existing data to the maximum extent, and then see what is missing: what are the gaps that need to be filled, so that you can actually justify a project to add those?

The third thing is, you want to have a broader picture to begin with. Don't start with point solutions, because you will end up with a lot of point solutions. Plants are generally very expansive, very broad. There are utilities. There's process. There are a lot of different types of equipment. So, the only way you can scale is if you start with a vision of how you can cover things more broadly. Otherwise, what happens — we have seen customers who have built 10 or 15 small apps for 15 different use cases, and it becomes a nightmare to manage and maintain them. They cannot really scale. So, I think these are the three things that we work with customers on. We talk to customers and really help guide them through this decision-making process.

Erik: Yeah, it really is about connecting those dots. That is the challenging thing. Because setting up the point solutions, I mean, it's not that it's easy, but it's relatively easier, lower risk. A small team can do it, because they have ownership of that asset. Defining the vision often is not too challenging, because you hire some consultants and put together PowerPoint decks. You have a vision.

But then, connecting those two in a way where you're maybe starting with smaller solutions, but you're doing it in such a way that they can be integrated with each other and they logically fit with each other, then building those into a larger solution, that has more of a process perspective as opposed to an asset perspective. That does seem to be the big challenge here.

Are there any cases? Maybe that would be a useful way to think through this: look at an example of a company and how they've walked through it. Is there any company that comes to mind that has made this journey, or at least is underway in the journey, that could help illustrate this?

Jag: Yeah, absolutely. We have deployments in power, refining, chemicals, renewables. Take a traditional refining company. This is a traditional plant, probably 30-plus years old. They are looking to improve both reliability and sustainability, especially around energy usage. So, the way we looked at it: the business goal is clearly defined. It's essentially, how can we reduce energy consumption? We know the different types or sources of energy being consumed. We first look at the baseline, and understand that from there we have to move forward and reduce that energy consumption.

Then we looked at, okay, what are the different systems or units in the refinery that are the higher consumers of this energy? We took it unit by unit. Then we started monitoring those units to identify where there are opportunities around excess energy consumption. For example, typically what happens is, let's say you're consuming maybe 1,000 tons per hour today as a daily average. Tomorrow, it increases to 1,100. The question becomes: why this extra 100 tons of steam consumption? Is it valid or not? Is it because of inefficiency, or is it because the ambient temperature decreased so I had to pump in more energy? Or is it because my input, the load on the units, has increased? That's what becomes very complex for engineers to look at, because there are a lot of parameters.

So, what we have done is look at the unit data or the sensor data, both the process and the mechanical information that is there. Then we've used our AI engine to monitor not only at the unit level but also at the equipment level. What we can do now is look at, okay, there is an extra 100 tons per hour of consumption. Is there a problem with that, or is it okay? That analysis is done by our application. If maybe five tons of that 100 is actually because of inefficiency, our application is able to identify that there is a problem, maybe with a bad gas compressor, or that your main air blower is consuming more energy than it should.
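[Editor's note: one way to frame the "is the extra 100 tons per hour valid?" question is to model expected consumption from its known drivers, such as unit load and ambient temperature, and attribute only the residual to inefficiency. The sketch below illustrates that idea with a simple least-squares baseline; the data, drivers, and coefficients are invented for illustration and do not describe UptimeAI's method.]

```python
# Sketch: attribute a steam-consumption increase to its drivers.
# Expected consumption is modelled from known drivers (load, ambient
# temperature); the residual is the candidate inefficiency. The linear
# model and all numbers are invented for illustration only.

import numpy as np

# Historical baseline data: columns = [unit load (kt/day), ambient temp (C)]
X = np.array([[280, 25], [300, 20], [310, 15], [290, 22], [305, 18]], dtype=float)
y = np.array([950, 1000, 1040, 975, 1020], dtype=float)  # steam, tons/hour

# Fit expected consumption = a*load + b*temp + c by least squares.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def expected_steam(load: float, temp: float) -> float:
    return coef[0] * load + coef[1] * temp + coef[2]

# Today: consumption jumped to 1,100 t/h. How much is explained by
# higher load and colder weather, and how much is unexplained?
actual = 1100.0
explained = expected_steam(load=315, temp=12)
residual = actual - explained
print(f"expected {explained:.0f} t/h, unexplained residual {residual:.0f} t/h")
# A persistent positive residual is the flag to drill down to equipment level.
```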

So, we always start this exercise with the existing data, the existing sensors. That's what we did for this customer as well. We were able to identify issues. For example, for compressors, we were able to identify that some of the recirculation or surge valves were open to a certain extent, which was causing excess energy consumption. It was causing inefficiency, and fixing it could lead to savings of a couple of tons per hour of steam. Those are the kinds of issues we were able to demonstrate for them.

Then as we go deeper, we're covering pretty much all the units in the entire refinery, to be able to look at exactly what's going on. As we do that, we identify that there are maybe some gaps in the sensor data for particular units, particular equipment. That's where we advise the customer: "Hey, the next set of improvement opportunities is to add these sensors for this equipment, so that we can get more meaningful insights and a more accurate diagnosis of the cause of the problem." So, that's one example.

Erik: Great. Maybe if we use that example, we can look at it from a different perspective — the organizational perspective. As you start to connect different components of the solution, you start to require a lot of domain expertise to understand why these dynamics might be occurring. Also, when you look at how to diagnose this and what can be done differently, again, you need particular expertise. So, who would be the internal stakeholders that should be part of this project team? And are there externals? Are you pulling in the OEMs? Are you pulling in other technology partners or system integration partners into these project teams?

Jag: This is a great question. One of the unique things about what we do, when we say UptimeAI has a purpose-built application for operations in heavy industries, is that our solution combines data science and domain expertise. The way we do that is with built-in libraries of failure modes and recommendations for different types of equipment. We built that library by working with senior subject matter experts who each have around 40 years of experience in the industry. We have a dozen or so such experts in various areas: someone from boilers, someone from rotating equipment, someone from energy integration, someone from instrumentation. That's how we built that library to begin with.

What happens is, with our application, it can detect and predict the problem. It uses that knowledge base to diagnose what the issue could be. It can look at: hey, for my fan, the current is increasing, but my inlet pressure is decreasing, and my blade pitch position is continuously increasing. That could be a sign of a choked filter. That interpretation is already built into the application. There is a built-in failure mode and effects analysis, so that it can look at the symptoms and diagnose what the issue might be, what the failure mode might be, and what recommended actions should be taken.
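[Editor's note: the built-in failure-mode library can be thought of as symptom patterns mapped to diagnoses and recommended actions. The toy sketch below echoes the fan example above; the rule contents, names, and data structure are illustrative, not UptimeAI's actual schema.]

```python
# Toy sketch of a failure-mode library: symptom patterns map to a
# diagnosis and a recommended action, echoing the fan example above.
# Rule contents and structure are illustrative, not UptimeAI's schema.

# Symptom trends observed by the monitoring layer: "up", "down", or "flat".
symptoms = {
    "motor_current": "up",
    "inlet_pressure": "down",
    "blade_pitch_position": "up",
}

# Each failure mode lists the symptom pattern it matches, plus a
# recommendation, as a senior expert might encode it.
FAILURE_MODE_LIBRARY = [
    {
        "mode": "Choked inlet filter (fan)",
        "pattern": {"motor_current": "up", "inlet_pressure": "down",
                    "blade_pitch_position": "up"},
        "recommendation": "Inspect and replace the inlet air filter.",
    },
    {
        "mode": "Bearing degradation (fan)",
        "pattern": {"motor_current": "up", "bearing_vibration": "up"},
        "recommendation": "Schedule vibration analysis on the drive-end bearing.",
    },
]

def diagnose(observed: dict) -> list:
    """Return failure modes whose full symptom pattern is present."""
    return [fm for fm in FAILURE_MODE_LIBRARY
            if all(observed.get(s) == trend for s, trend in fm["pattern"].items())]

for fm in diagnose(symptoms):
    print(fm["mode"], "->", fm["recommendation"])
```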

Now, that is a good baseline to start with. On top of it, we do bring in the site teams. Because the site teams will have more specialized knowledge of the operations. Definitely, they are the experts at their specific operation. They can then add on to the knowledge base that we're bringing to them as a baseline.

Our application is used, in the case of refining, by users from the energy team, from process, from operations, and from mechanical reliability. We have representatives from each of these functions all simultaneously looking at issues within our application, to identify: is this a reliability issue? Is this a process issue? They work together. They then coordinate with the operations teams in the control rooms to resolve these issues. And within the application, there's a collaboration space where you can go back and forth. That activity then becomes the knowledge base for future issues.

Erik: Got you. Okay. Great. I think that combination of data science and built-in domain expertise, plus external domain expertise, seems to be really critical in these complex situations. But let's look at one other use case: energy efficiency. This was already a long-term priority because of the steady push towards a more sustainable economy. I think everybody who's providing solutions here needs to send a dividend to President Putin, because his war in Ukraine has really accelerated the need. Then we also have a drought in the northern hemisphere, which is putting a lot of pressure on utilities.

On the whole, this has been a tremendous year, I suppose, for anybody offering energy efficiency solutions. But I think there's still a fair bit of complexity here. Even just tracking where energy is being used, a lot of companies are not yet doing that on a granular basis, let alone looking at how to optimize. So, what's your viewpoint on where we are? And what steps should companies be taking?

Jag: You bring up a really good point. Traditionally, a lot of digital solutions have focused on the reliability of critical rotating equipment. That has been the darling of digital tools, and for the right reasons; there is good value in looking at failures of critical equipment, no question. But energy efficiency and performance issues are more difficult to detect. That's partially because when something is failing, there is a continuous degradation that you can actually observe; there is a significant change before it ultimately fails.

For performance and energy efficiency, you're looking at maybe 2%, 3%, or 5% improvements overall. Many times, it might not actually degrade continuously. You might have been losing 5% efficiency all along and not even know it, because that's how you have been operating. These degradations are very, very slow in nature, generally speaking.

So, the tools and the technology that you use should be able to identify those nuances and minor changes, as opposed to the major symptoms that are typical of a reliability issue. One, you have to build or have solutions that are specifically meant for these types of problems. Second, like you said, you want to be able to track it, because ultimately you have to measure first. Then the second part is identifying what could be contributing to those changes, and having the intelligence to address them.
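[Editor's note: detecting a slow 2-5% drift calls for methods that accumulate small deviations rather than thresholding single readings. A standard textbook approach is a CUSUM-style detector, sketched below as a generic illustration; it is not a description of UptimeAI's method.]

```python
# Generic CUSUM-style drift detector: accumulates small deviations from a
# baseline so a slow efficiency loss eventually triggers an alarm, even
# though no single reading would trip a simple threshold. Illustrative
# only; not a description of UptimeAI's actual method.

def cusum_drift(readings, baseline, slack, limit):
    """Return the index at which accumulated downward drift exceeds `limit`.

    `slack` sets how much shortfall per sample is tolerated as noise;
    `limit` sets how much accumulated evidence is needed to alarm.
    """
    cum = 0.0
    for i, x in enumerate(readings):
        # Accumulate only the part of the shortfall beyond the slack.
        cum = max(0.0, cum + (baseline - x) - slack)
        if cum > limit:
            return i
    return None

# Efficiency slowly degrading from 85% by ~0.02 points per day.
readings = [85.0 - 0.02 * day for day in range(365)]
day = cusum_drift(readings, baseline=85.0, slack=0.05, limit=2.0)
print(f"Drift alarm on day {day}")  # fires long before a +/-5% threshold would
```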

Companies, because of all the reasons that you mentioned, are significantly focused on these aspects. So, overall: connecting all the different points of information, organizing it, looking at it holistically, and from there being able to understand and analyze why things are the way they are and whether there is a problem or not. That is essential, but it is a complex solution in itself.

Erik: What do you see in terms of data availability here? We have one case with a client supporting automotive OEMs here in China on this. Historically, the OEMs have been pretty comfortable just tracking energy consumption at the plant level. But when you start looking at how to improve, that's certainly not going to do it. Do you see this more widely across industry, or in the industries that you're working in, do you already see better data availability?

Jag: Typically, we segment the manufacturing space into discrete and process. Process industries are things like utilities, refining, and chemicals. They are generally continuously operated industries. They run 365 days a year, and they're fully automated as well. So, we find there is more likelihood of finding that data. I'm not saying it's always 100% comprehensive, but you typically do get more granularity in the energy consumption information in these process industries.

On the other hand, if you look at the discrete manufacturing space, like automotive or electronics manufacturing or some of the other industries, it is more likely that there are fewer sensors. So, you might have to add some additional hardware to start measuring those points. That's the reason why we started our journey on the process side: the availability of data. Today, we have deployments in power and the utility sector, in chemicals, in refining. As I said, it's not 100%. Do we have every single sensor that we want? No. But can we cover maybe 75% to 80% of the value? Yes, we can.

Erik: Well, let's wrap up here with a look at what's next for UptimeAI. You're still a relatively young company. I'm sure that you have a very ambitious product roadmap. Can you share — if you look out over the next 24, 36 months — what are the big priorities for you?

Jag: Given that this is a conservative space, we've always wanted customers to look at what we can deliver as value, and to take those stories to other customers as proof points, so that we can give other customers the same value that we're giving our existing customers.

In the past two and a half years or so, we've been fortunate to work with some of the industry thought leaders. In India, we work with Tata Power. We are working with several companies in cement and in renewables. We're doing deployments even for large companies like Shell. Now what we're doing is taking those cases and expanding globally. Today we have customers in Asia, in the Middle East, and in North America.

Now, in the next 24 to 36 months, we are looking to grow probably 2x to 3x in terms of team size, and to go to market in regions where we are less present. As part of that, we're also bringing in a lot of partners from various industries and geographies, so that we can offer a solution that is much more comprehensive.

Erik: Great. Those partners, is this more on the system integration side or the technology integration side?

Jag: It's both: system integration partners, implementation partners, and sales partners as well.

Erik: Okay. Great. Jag, I think we've covered a good bit of territory here. Anything we didn't touch on that is important for folks to know?

Jag: No, I think we did cover a lot. I know that companies going through this journey may have a lot of questions, and we're always happy to engage and support. We have incredible people in the company who have significant experience working with customers on building their roadmaps, on both existing solutions as well as next-generation solutions. So, we're more than happy to engage and work with customers.

We do have some white papers on our website, which talk about how to evaluate solutions, the difference between a platform and a purpose-built application, the different types of case studies that we have, and how to differentiate between an actual AI application and an application that claims to be AI but just uses machine learning or some kind of regression models. A lot of resources. We're happy to engage. Our website is www.uptimeai.com. I can also be reached on LinkedIn and other social media platforms.

Erik: Okay. Perfect. Jag, thanks for the time today.

Jag: Thank you, Erik. Yeah, a pleasure to be here.
