Ep. 212
From Legacy Data to Predictive Power: Overcoming AI Challenges in Traditional Industries
Zohar Bronfman, Co-founder & CEO, Pecan
Friday, November 29, 2024

In this episode, we spoke with Zohar Bronfman, co-founder and CEO of Pecan, a predictive analytics platform that integrates machine learning and generative AI to simplify and accelerate AI adoption. Zohar shares how Pecan bridges technical and organizational barriers to help companies harness predictive insights for business decision-making. We discuss the evolving role of AI across industries, the challenges faced by traditional companies, and how Predictive GenAI can democratize access to advanced analytics.

Key Questions Explored:

• How does Predictive GenAI simplify AI adoption for non-experts?

• Why is automation key to reducing AI project timelines?

• How can small-scale AI testing minimize risk and ensure scalability?

• What challenges hinder AI adoption, and how can they be overcome?

• Why will predictive analytics shape the future of business decision-making?

 

For more information on Pecan and how predictive analytics can transform your business, visit pecan.ai.

Industrial IoT Spotlight podcast is produced by Asia Growth Partners (AGP): asiagrowthpartners.com.

Transcript.

Erik: Zohar, thanks for joining me on the podcast today.

Zohar: My pleasure, Erik. Good to be here.

Erik: Yeah, this is going to be an interesting one. I've been covering this topic from a few different perspectives but not really yet from predictive analytics. So looking forward to understanding what you're doing there. But before we get into it, I'd love to understand your background a bit. You're the first person, I believe, that I've had on the podcast who has two PhDs — one in computational cognitive neuroscience and the other in history and philosophy of science and technology. So what's the story there? You did that just so that your mother could brag? Or how did you end up with dual PhDs?

Zohar: Yeah, I did it mostly for my mother to brag, but also because I found the two disciplines complementary. So I started back in the day with philosophy. I fell in love with philosophy. I thought it was extremely interesting, and it broadened my horizons. I was always very much interested in the mind: studying the mind, understanding the mind, understanding how people think, how people act, why they act the way they act. As I went deeper and deeper into philosophy, it became clearer to me that the experimental, empirical, "scientific" aspect of the mind is just as important, and that if you really want a fuller picture, you probably have to study both. That's why I parallelized and went on to do the computational cognitive neuroscience degree, which is, in fact, about emulating the brain processes that underlie human behavior: decision-making, mental models, things like that. In parallel, I was investigating the same thing via a completely different method, basically the philosophical method of inquiry. And I have to say it was almost like being in paradise in terms of intellectual gratification. The PhD studies are also where I met my co-founder and friend, Noam Brezis, who's our CTO here at Pecan. And that's what eventually led us to start the company.

Erik: So you started the company in 2018. Were you still involved in your PhD then, or was this right after graduation? How did you transition? Because I believe, if I'm correct, this was kind of the first step after your PhD — founding Pecan.

Zohar: Yeah, indeed, Erik. So we started Pecan back-to-back with graduation. Days apart from formally getting our PhDs, Noam and I started Pecan. And it was, for us, a very natural evolution of things. While we were working on emulating brain processes, we obviously harnessed computational methods of different types: statistical methods, algorithms, machine learning, AI, many different ways of explaining and accounting for different mental processes. And we fell in love with predictive analytics and machine learning. The concept of taking historical data of things that happened, being able to run algorithms that identify hidden patterns, and then projecting the probability of a future event with a high degree of accuracy is mind-blowing when you think about it. So many things in the world can be predicted even though our own brains don't necessarily know how to predict them. And the fact that machine learning algorithms are able to do so in areas where our brains fall short, the first time we saw this work, it gave us goosebumps. Like, oh my god, you can actually predict the future.

Then very naturally, you know, a little bit like a child, we naively asked ourselves: why isn't everyone, every company on the face of the planet, constantly predicting everything about the future? It goes without saying: if you run a business and you know what's going to happen, you can just run a better business. And in fact, we know the greatest, biggest, best companies on the planet are companies that have machine learning and predictive analytics in their core processes. It's not a secret. You can think of the giants like Google and Facebook, and you can think of the most successful tech companies of the last decade like Uber, or Spotify, or whatnot. For all of them, at the heart of the story is a good, smart way of leveraging those predictions. So we asked ourselves: how is it that no one, or very few businesses, are actually doing it? What's the gap here? Then we embarked on a mission of understanding, first and foremost, what's wrong. Why isn't everyone doing it?

We came to the realization that there are several different aspects that prevent companies from adopting machine learning in a meaningful way. It's very interesting: those aspects are somewhat independent. So you can think of it as a perfect storm in a negative sense, a significant barrier to entry for these companies. We decided we were going on a very ambitious mission of addressing those barriers via technology. We were going to build a platform, a technology, that helps companies overcome these barriers to entry and adopt real, full-blown predictive machine learning so that they can improve their business. In our vision, companies should just run their business through the predictive lens.

Erik: So I'm really interested in hearing your analysis of what those barriers are. Because some of the companies that you mentioned I would consider to be almost single-product companies whose product is essentially a digital solution. If we think about Google or Uber, I mean, Google has multiple products but really one major product or a couple of major products. Then I'm working, for example, with chemical companies and components manufacturers, companies that have a lot of different products and often don't collect data by default. The data they do have is often attached to a facility, an operating facility, and there's often a lot more value in real-time results than in longer-term trend analysis. So what are those challenges that you saw? Maybe you can help illustrate: what are the challenges that a platform or technology company might have, and then what are the challenges that a manufacturing company or some other traditional industry company might have?

Zohar: So I'll start by saying what the general challenges are that probably 95% of companies face, and then I'll double-click on the more traditional, physical-goods types of companies. The general challenges are, first and foremost, challenges that relate to human capital. It turns out that the type of individual who can do the technical work of building AI, or machine learning, or predictive models (these are, in many cases, synonyms), make those models work in a real business context, and help the business use them in a meaningful way is a very small subset of people. So to begin with, you have relatively few data scientists out there. Data scientist is the technical term for people who know how to build predictive models. Funny anecdote: the vast majority of data scientists and AI professionals actually work for Google, Facebook, Amazon, and so on. So when you look at the companies that are not those giants, the availability of human capital that specializes in building those models is very scarce, and these individuals are very hard to recruit. They will usually prefer working for, as an example, Google. So availability of human capital is one challenge.

Then, interestingly enough, there's also the challenge of business implementation. That's something people sometimes overlook. They think if they have a good model, they're done. In reality, a good model is a necessary but not a sufficient condition for driving business value. You can predict something very accurately, but if you don't act on that prediction in a way that optimizes a specific KPI, then it just won't matter. I always give this example: I can build a very accurate predictive model that will tell you which of your customers are going to churn. But if the model captures only individuals who have already made up their minds, and they're going to churn regardless, what good does it do to know they're going to churn if you can't do anything to retain them? So understanding the business context and implementing it hand-in-hand with the technical model is another area of challenge. The statistics are shocking: that's why anywhere between 70% and 90% of AI initiatives end up failing. Imagine how much money is going down the drain, and imagine how much potential ROI is lost because those projects never see the light of day. So these were the realizations at a very generic level.

For the more traditional companies, there are additional challenges on top of those I just mentioned. One of them is the amount and quality of data. When you think about traditional companies, manufacturers, sometimes retailers, sometimes insurance companies, in some cases they have legacy or historical data that is not in its best state, that is missing, that is fragmented. Sometimes they just don't have data. Everyone says garbage in, garbage out, but in reality, much can be done about data challenges. You do, however, have to have the data. That's one thing AI has proven: it has to digest a lot of historical data if you want it to say something meaningful about the future. So the quality of data, on top of the insufficient human capital and the usually lacking business context, are the three major challenges traditional companies face when they think about implementing AI.

Erik: One other topic I'm curious about, and this might differ significantly, since I guess you're probably more focused on the European and American markets whereas I'm sitting here in Asia. We also see companies here struggle with deciding which data sets they're willing to put on the cloud or run through an AI algorithm, right? The data might already be sitting somewhere on the cloud, but in some cases there's still this concern: what if I put my data into an online service and it's used for training or something? It might not be a very realistic concern, but it nonetheless sits at the back of a non-technical manager's mind. It's maybe something where lawyers might get involved, and once lawyers get involved, the decision timeline extends significantly. Do you see this as a challenge also in the North American and European markets? And how do you have those conversations, if you do, when companies have concerns about putting certain data sets into your models?

Zohar: So I take a relatively extreme position on this specific point, because I argue that if you're reluctant to use cloud infrastructure, you will, as a company, end up losing the battle. All of the technology today, not only AI but definitely AI, and not only purely statistical modeling but everything data-related at the periphery of AI, is cloud-driven. The pace is just so fast that on-premise technology is going to be insufficient within a few years. I think it's very short-sighted not to use cloud these days, and it puts the company that chooses to avoid it at risk.

Yes, like you said, we focus mainly on the European and mostly on the North American market. I can say that organizations with the most sensitive data you can think of, healthcare organizations, insurance companies, banks, all use cloud infrastructure. That's why digital security, cybersecurity, is booming every month. Because we all understand that, on the one hand, we have to have good infrastructure for implementing technology, and on-premise is just a bad way of doing it. On the other hand, we understand that keeping the data, the algorithms, and the proprietary knowledge safe is crucial for each and every company. This trend, in my mind, from everything I'm seeing, is not going to slow down. On the contrary, it's accelerating every day. I will say that whenever a company is looking for an AI vendor or a data vendor, they should definitely look for all the security certifications. There are many industry standards and regulatory standards that companies have to meet.

Also, another thing to consider technically: when you're thinking about implementing predictive models, you do not need PII. You don't need private information. For the machine, it doesn't matter whether the customer we're going to analyze is Erik or Zohar. As long as each customer has some kind of identifier, which can be completely random, with the data describing Erik's behavior or Zohar's behavior attached to it, the machine will make sense of the data and extract the patterns. So you do reduce a bit of the anxiety, call it, by not sharing or integrating data that has personal identification attached to it.
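To make that concrete, here is a minimal sketch of that kind of pseudonymization, assuming the data lives in a pandas DataFrame; the column names and values are invented for illustration:

```python
import uuid
import pandas as pd

# Illustrative behavioral data; column names are hypothetical.
events = pd.DataFrame({
    "customer_name": ["Erik", "Zohar", "Erik"],
    "purchase_amount": [120.0, 75.5, 40.0],
    "event_date": pd.to_datetime(["2024-01-05", "2024-01-07", "2024-02-11"]),
})

# Map each customer to a random identifier. The model only needs a stable
# key to group behavior over time, not the person's real identity.
id_map = {name: uuid.uuid4().hex for name in events["customer_name"].unique()}
events["customer_id"] = events["customer_name"].map(id_map)

# Drop the PII column before the data ever leaves your environment.
anonymized = events.drop(columns=["customer_name"])
print(anonymized)
```

The mapping table stays on your side, so you can still join predictions back to real customers internally.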

Erik: Yeah, that makes sense. You can anonymize certain metadata. I guess it really comes down a bit to an education topic. When you talk about the limitation of talent, most senior management is, let's say, 40 to 60 years old, very unlikely to have any kind of data science background, and therefore maybe not in the best position to be evaluating risk. So there's an educational component.

Coming back to the first challenges that you focused on, one is a talent issue: a very limited supply of data science capability, and that supply tends to be dominated by the big tech companies. Then there's the cost and the time it takes, if you're developing something in-house, to get to the point where you can determine whether it makes sense. You end up making expensive bets, basically, whereas you'd prefer to be making inexpensive bets. If you're not sure of the result, you're going to be writing 80% of them off because the use case doesn't really make sense, so you want to be making quick, inexpensive bets and then doubling down. But that's hard to do if you're building up code and doing all the data cleaning and so forth from scratch. So help us understand: what is the process for you? How do you step in and address these two challenges?

Zohar: Yes. Like you mentioned, the payoff from an AI project is usually very delayed. You invest a lot up front by hiring the relevant talent and by putting manual labor and resources into prepping the data, building the models, testing them, and so on. Then you deploy the model, start measuring success, and usually you have to iterate. It becomes an extremely expensive project, and it can take quarters or even years before you're able to say whether it's a success or not. That's why so many projects fail: people pull the plug earlier because they say, I can't continue pouring money into that hole.

To your question, what we've done is leverage technology, a variety of technological capabilities I'll talk about in a minute, to change this payoff curve and make the modeling process far faster in terms of understanding whether AI can help with a specific use case, yes or no. In parallel, and very relatedly, we lowered the bar of proficiency required for using the platform. Instead of requiring experienced data scientists who can write code and understand statistics, we built the platform with so-called data analysts in mind: individuals who are data savvy but don't have data science expertise. They know how to do classic data analysis, which is often referred to as BI. Think analyzing historical data, building dashboards and reports. Basically, people who write SQL queries and work with some kind of BI or analytics platform, whether that's Tableau, Power BI, or other tools that provide basic BI capabilities. Those individuals have everything it takes, if you think about it. They know the data. They understand the business. They are eager to do more with their capabilities. But like I mentioned, they don't know data science. They don't have the machine learning expertise.

So what we've done is build a ton of automation, first and foremost around data preparation, which, like I mentioned, is one of the hardest challenges, especially for traditional companies. So a lot of automation around data preparation, and a lot of simplification, guidance, and sanity checks that help analysts build models without knowing in advance what a good model is or what the path to one should look like. It's basically guiding them through the process. All of that is packaged in a relatively friendly, simple-to-use platform. So it's a very bold act of democratizing data science and helping organizations reach conclusions far faster because of the automation. Now you don't have to wait six months for the data to be prepared, then a couple of months for the model to be deployed, and only then check how well it went. You can automate the data preparation and build a model in a couple of weeks, sometimes a couple of days. You'll get an answer: either this is something worth pursuing and investing more in, or let me aim in another direction because a model is not going to work out here.

Erik: Let me ask a question about your experience working with different customers in terms of how they structure their teams. You have one scenario where there's a CIO office with a lot of data analysts. They're more likely to take a top-down approach: "Here are our six strategic priorities in terms of the big solutions we might want to address, which might relate to our CRM or some of the big enterprise systems." Then you also have more of a bottom-up approach, where the different functions, maybe the CFO team, the operations team, the maintenance team, have their own data scientists. They will likely be closer to the business and understand the pain points of the front-line teams more clearly, so they'll be more likely to identify smaller, quicker wins, but they might not have as much scale. So there are these different tracks a company can take. I know there's no right answer here. But in your experience, do you see one of these strategies tending to work better? Are there circumstances where you'd say companies in this situation should take more of a top-down approach, choosing a couple of big bets, while companies in that situation might want to diversify their bets and take more of a bottom-up ideation approach?

Zohar: Yes. First of all, I think it's extremely interesting to analyze the market and the landscape based on types of companies and how their data and AI practice is managed, based on their industry and their age, so to speak. I can say, again, you find everything out there. But if we're trying to segment the market, I'd say the traditional companies, the more legacy companies, have a more historical structure akin to what you mentioned: having a CIO. Nowadays, the CIO would usually have someone like a chief data officer, or a VP of AI, or something like that. These are cost centers that serve the different business units. Usually, this setup isn't working amazingly well. I'm trying to be cautious here, but from what we are seeing, in many cases there are gaps: business gaps, context gaps. The concept of a centralized data organization that serves different business units is weaker than a dedicated data team that sits within the business unit and draws on a common infrastructure that the CIO and CDO manage.

So what we are seeing with the more traditional companies is that there's a "monopoly" of the centralized data science organization, and in many cases it's a decelerator of local initiatives. What I've seen as a great example of how more traditional companies can rearrange their internal org structure is to have a centralized internal infrastructure team. The infrastructure should be common. Governance, like I mentioned, data quality, all of that should be common, because you want one single source of truth when it comes to data; otherwise it's a mess. But the specific use cases, especially when it comes to AI, the data products, should sit within the business unit. That allows for faster iteration and better sharing of knowledge. The newer, more digital companies are usually organized this way. They have an infrastructural data team; sometimes I call it a guild. And then each and every business unit, like you mentioned, the CFO, the marketing team, has its own dedicated analysts and data scientists, serving the specific priorities of that business unit on a daily basis.

Erik: Thank you. That makes sense. Let me ask one more question around this topic of how companies should be structuring themselves, and then I'd like to move into understanding your technology better. One of the key challenges you raised earlier is the topic of ROI, right? These initiatives need to have a payback, but they also require a certain amount of experimentation. There's a big risk that the experimentation ends too early, before you can realize the payback, and there's also a risk that it ends too late and you've sunk too many resources in. So you have to figure out the right level of investment, the right number of iterations, and so forth. What is the logic you would recommend somebody follow? Let's say you do a workshop with a company and come out of it with 80 use cases, right? It's such a general-purpose technology that people can come up with a lot of different potential scenarios. You then start to prioritize those. You don't know yet which ones are going to have a great payback; you might have an idea. So what logic do you go through to determine the right amount of time and effort to put in before you decide whether something is going to pay back, or whether you should kill the idea and move on to the next?

Zohar: Yeah, there's a whole playbook we've developed internally here at Pecan based on all the work our customers have been doing; we've obviously worked with them on prioritizing use cases. It's actually multi-dimensional. I'll give you a taste of the dimensions. First and foremost, there's a huge difference when a use case already has a business process associated with it, and you're trying to improve that process by infusing it with AI in place of some kind of rule-based logic you already have.

So let me give you just one example. Let's say we're talking about predictive maintenance, and we want to predict whether a machine is going to fail in the next week so that we can preempt that failure. It's a classic use case for predictive analytics. Suppose there's already a routine that says, for example: every machine that has been working for more than 600 hours straight and is showing some decrease in throughput (I'm just making the rule up, obviously) gets shut down for maintenance on the spot. Think of that as a business process based on a business rule. Maybe the reason for the rule is the operator's intuition. Maybe it's based on some simple analytics done historically, where people saw that after 600 hours there's a higher likelihood of the machine failing. If you take this process and the only thing you change is the brain of the process, so to speak, the logic, and you say: instead of shutting down a machine that has worked for 600 hours with a decrease in throughput, we shut down a machine that got a predictive score of higher than 75% likelihood to fail in the next week, which is a classic AI predictive score, then you're in a great position to find yourself, A, using the predictions in a meaningful way within your business context and, B, measuring the impact. Right? Because you have the historical downtime, so you know how much downtime you had, and now you have the new downtime in your test period, where instead of relying on your rule, you're relying on the AI score, and you can just compare. So the ROI is extremely quantifiable. That's one example of a dimension you have to consider when thinking about the ROI of a specific use case.
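Here is a minimal sketch of that "swap only the logic" idea, with the thresholds taken from Zohar's made-up rule; the function and field names are illustrative:

```python
RULE_HOURS = 600        # legacy rule: hours of continuous operation
SCORE_THRESHOLD = 0.75  # AI rule: predicted failure probability for next week

def rule_based_shutdown(hours_running: float, throughput_drop: bool) -> bool:
    """Legacy business rule: 600+ hours straight plus a throughput decrease."""
    return hours_running > RULE_HOURS and throughput_drop

def score_based_shutdown(failure_probability: float) -> bool:
    """AI-infused rule: shut down when predicted failure risk exceeds 75%."""
    return failure_probability > SCORE_THRESHOLD

# Only the decision logic changes; the surrounding maintenance process stays
# the same, which is what makes before/after downtime directly comparable.
machine = {"hours_running": 640, "throughput_drop": False, "failure_probability": 0.82}
print(rule_based_shutdown(machine["hours_running"], machine["throughput_drop"]))  # False
print(score_based_shutdown(machine["failure_probability"]))                       # True
```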

Obviously, another dimension, which we mentioned briefly earlier, is the data dimension. Do you have the data available? Is it something you can use in an ongoing manner, or does it require significant, heavy-duty extraction that you'd have to go through continuously? So availability of data, quality of data, relevance of the business use case, an existing process, tangible and measurable outcomes: these are just a few examples of the things you have to go through when prioritizing potential use cases. I also recommend, especially for companies adopting AI for the first time, and I would say strongly recommend, starting with a relatively small, well-defined implementation or project. Taking the predictive maintenance example, don't implement across a whole factory or a whole geography right off the bat. Start with maybe one machine, or a couple of machines at one small site where the data is decent. If you see it works well there, you can always scale; AI is scalable. But if you do it at large scale right out of the gate, you risk, like you mentioned earlier, Erik, sinking too many of your resources prematurely.
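As a sketch of how those dimensions might be turned into a first-pass ranking, here is a toy scoring rubric. The dimensions follow Zohar's list, but the use cases, scores, and equal weighting are invented for illustration:

```python
# Score each candidate use case 1-5 on the dimensions Zohar lists.
DIMENSIONS = ["data_availability", "data_quality", "existing_process", "measurable_outcome"]

use_cases = {
    "predictive maintenance (one machine)": [4, 3, 5, 5],
    "customer churn prediction":            [5, 4, 3, 4],
    "demand forecast for a new market":     [2, 2, 1, 3],
}

def priority(scores: list[int]) -> float:
    # Equal weights here; in practice you'd weight dimensions by strategy.
    return sum(scores) / len(scores)

# Rank: the highest-scoring use case is the best candidate for a small pilot.
for name, scores in sorted(use_cases.items(), key=lambda kv: -priority(kv[1])):
    print(f"{priority(scores):.2f}  {name}")
```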

Erik: I know the answer here is going to be, it depends. But let's say this type of data is available, and it's defined as a relatively quick win or a less risky investment. What is a reasonable range for the payback period? What would be a threshold? Two years? Twelve months?

Zohar: So it depends. But I would say that in the best projects we were involved in, the best cases we've seen with our customers, you're looking at full payback within a month. Obviously, it depends on how much you invest. But if it's a really valuable implementation, within a month you should see your ROI. One year would be a decent benchmark. More than a year means the effect of AI is probably not as significant as it could be. A priori, I wouldn't go for a payback time longer than a year.
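For reference, the payback arithmetic itself is simple; the figures below are invented for illustration and sit inside Zohar's one-month-to-one-year band:

```python
def payback_months(upfront_cost: float, monthly_net_benefit: float) -> float:
    """Months until cumulative benefit covers the upfront investment."""
    return upfront_cost / monthly_net_benefit

# Hypothetical pilot: $30k invested, $25k/month saved in avoided downtime.
print(round(payback_months(30_000, 25_000), 1))  # 1.2 months: well inside a year
```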

(break)

If you're still listening, then you must take technology seriously. So I'd like to introduce you to our case study database. Think of it as a roadmap that can help you understand which use cases and technologies are being deployed in the world today. We have catalogued more than 9,000 case studies and are adding 500 per month with detailed tagging by industry and function. Our goal is to help you make better investment decisions that are backed by data. Check it out at the link below. Please share your thoughts. We'd love to hear how we can improve it. Thank you, and back to the show.

(interview)

Erik: Okay. So coming back to your technology. You launched the company in 2018. I guess the concept of generative AI existed back then, but it hadn't really been widely adopted, so your focus was on predictive AI. But on your website, you talk quite often about Predictive GenAI. I assume you were working with machine learning, more traditional predictive analytics, let's say. How do those two approaches, I don't know if we'd call them technologies, interact with each other in your system?

Zohar: Machine learning and GenAI are, let's call it, cousins, not even brothers, within the AI landscape. Everything is eventually a statistical model that extracts patterns from data and projects them into the future in one way or another. But if you look at the entire AI landscape, the classic machine learning algorithms and the generative neural networks are relatively far apart. In reality, it's almost impossible to generate strong predictions with the large language models or the generative models that are now so popular with the GenAI revolution. They are not well suited for making transactional or tabular predictions, which is the world of machine learning. And obviously, classic machine learning is very poor at generating language or video the way neural networks and large language models do. So what we've done is fuse the two technologies into our platform so that, together, you get an experience, or a solution, I should say, that leverages the great aspects of both.

At its core, our platform is still a machine learning platform, in the sense that it generates tabular, transactional predictions: downtime, or failure, or conversion of a customer, or a sales forecast. You name it. Every event that has a moment in time, where you want to know whether it's going to happen, yes or no, is basically a structure the machine learning platform supports. So at its core, our platform still does that. But the layer that allows us to democratize this capability, so that it can be done extremely fast and by individuals who are not experienced modelers, leverages generative AI. The LLMs we've developed do a lot of the data preparation for the user, a lot of the data modeling, and a lot of what we call validations. What that actually means is the set of tests and checks you have to go through as a data scientist to make sure the model is real and true and valid and isn't suffering from the classic machine learning pitfalls. There are a ton of pitfalls that analysts who are not proficient won't know about, because they don't have the experience of doing this before. So the GenAI layer does much of the technical implementation, what data scientists would have done manually had they done the work. The core patented technology we have makes the predictions out of the data brought by the user.
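As one concrete example of the kind of pitfall such validations can catch (the specific check here is our illustration, not a description of Pecan's internals): splitting time-ordered events at random leaks future information into training, so a safer default is to validate only on events after a time cutoff. A sketch, assuming scikit-learn and pandas, with fully synthetic data and invented column names:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Synthetic, time-ordered machine telemetry; columns are invented.
rng = np.random.default_rng(0)
n = 1_000
df = pd.DataFrame({
    "event_time": pd.date_range("2024-01-01", periods=n, freq="h"),
    "usage_hours": rng.uniform(0, 800, n),
    "error_count": rng.poisson(2, n),
})
df["failed"] = ((df["usage_hours"] > 600) & (df["error_count"] > 2)).astype(int)

# Pitfall guard: split on time, not at random, so the model is validated
# only on events that happen after everything it trained on.
cutoff = df["event_time"].quantile(0.8)
train, test = df[df["event_time"] <= cutoff], df[df["event_time"] > cutoff]

features = ["usage_hours", "error_count"]
model = GradientBoostingClassifier().fit(train[features], train["failed"])
scores = model.predict_proba(test[features])[:, 1]
print(f"Holdout AUC: {roc_auc_score(test['failed'], scores):.3f}")
```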

Erik: Okay. So the GenAI layer is kind of like a very specialized co-pilot that helps to automate a lot of the processes for defining the instructions for the machine learning.

Zohar: Yes, it's a very interesting, call it, co-pilot type of addition, with one additional aspect which is crucial, and you also mentioned it earlier. A lot of the time, you're not exactly sure, as a business and as a user, what it is you'd like to predict and what use cases are relevant for your data. We also have LLMs that help you with that ideation and exploration. You can chat, ask questions, upload your data, and get guidance not only on the actual building of the model and prepping of the data, but also on determining what predictions and what models you should be pursuing.

Erik: Okay. It's a brilliant solution. I've come across a couple of companies that have developed their own internal chatbots to suggest potential solutions. My feeling is they're probably not very accurate, and in any case, it would be much more valuable to be able to say that 70 other companies, perhaps anonymized, have found value in this use case. When you're deciding what you should be doing, that's quite a credible data point. One topic I wanted to discuss with you, because it's always interesting to me when a company provides visibility into its pricing: it helps to understand how you measure value creation. Your pricing model is defined by monthly predictions: the number of predictions in the hundreds of thousands, the number of trained models in the tens, monthly uploaded rows of data in the millions or tens of millions. How did you come upon this way of measuring value creation? Because I guess there's no standard yet for how this should be done for this type of solution.

Zohar: There is indeed no standard when it comes to democratization of predictive capabilities. The number one principle we had in front of us is that we wanted to make this capability a no-brainer. So put aside the way we calculate the pricing; I'll get to it in a minute. The bigger picture is that we charge a ridiculous fraction of, for example, the cost of just one data scientist. We wanted to make this a no-brainer decision, for companies to say, "Hey, I'm not going to invest a couple of million dollars up front and potentially see all of that go down the drain a year from now. I'm going to invest a very small amount of money compared to the alternatives for implementing AI, and see whether it's valuable or not within, like I said, weeks." That was the number one agenda when we were thinking about our pricing.

Now, like you said, it's crucial to correlate pricing to value. You want to charge more as a company when your partners and customers are getting more value. The unit that carries the atoms of value for us is the prediction. If you predict more, it means you use the brain of the platform, so to speak, more. That's why the amount of predictions is the core unit of scale for us. Then, obviously, computationally, the more models you build and the more data you crunch, the higher the infrastructure cost. So put very simply, we are not trying to make money on the number of models or the amount of data crunched; that part basically covers the computational costs. We are trying to make money on the predictions, because that's where the value for our customers lies.
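As a toy illustration of that structure, here is a hypothetical bill calculation. The per-model rate echoes the $7-per-model figure Erik cites below; the other rates are invented and are not Pecan's actual pricing:

```python
# Hypothetical rates for illustration only.
PRICE_PER_1K_PREDICTIONS = 1.50   # the value unit, where the margin lives
PRICE_PER_MODEL_TRAINED = 7.00    # roughly covers training compute
PRICE_PER_1M_ROWS = 5.00          # roughly covers data-crunching compute

def monthly_bill(predictions: int, models_trained: int, rows_uploaded: int) -> float:
    return (predictions / 1_000 * PRICE_PER_1K_PREDICTIONS
            + models_trained * PRICE_PER_MODEL_TRAINED
            + rows_uploaded / 1_000_000 * PRICE_PER_1M_ROWS)

# 500k predictions, 20 models, 10M rows: predictions dominate the bill.
print(f"${monthly_bill(500_000, 20, 10_000_000):,.2f}")  # $940.00
```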

Erik: Yeah, I see. Okay. And for the listeners, this starts at about $1,000 to $1,700 a month, and then, of course, there's an enterprise tier. We're really talking about roughly the salary of an intern in Seattle, and $7 per model trained, which you might benchmark against the cost of a Starbucks latte. So really, for anybody who is not yet investing, as long as you have a couple of smart, motivated people on your team, there's no reason not to start at least exploring what's possible. The pricing is really not a barrier at all. Which I think is incredible: AI has gone quite quickly from "we have to compete with Google for PhDs priced at $500,000 a year" to "for the annual salary of an intern, we can start doing some pretty sophisticated things and at least test what might work."

Let me ask you, Zohar, about the future. Because I think we're still just at an early stage in terms of the development of this industry. Maybe we can start with the near future. I'm really interested to hear over the next, let's say, 12 months, what is exciting for you? What are you guys rolling out? But then, if you also want to share a bit more around maybe the medium to longer-term vision that you have for how you see this technology developing and Pecan developing along with it.

Zohar: So for the short term, Erik, I'd say that we are now at a very exciting phase as a company where, touch wood, because like you mentioned, the combination of the technology and the pricing turns something that was a big pain into a no-brainer, we're experiencing, I don't want to jinx it, a very good pull from the market. So we, as a company, are extremely focused on onboarding as many customers as we can and putting that remarkable predictive power in their hands. From that perspective, it's quite easy: we just need to execute. But as a company, these are extremely exciting times for us.

Looking at a broader time horizon, both for us as a company and for the market in general, I would say it's very clear that the way businesses are run is going to change. That's not, I think, a question; I think everyone agrees that's the case. The question, obviously, is when. Today, and I share this intuition, the classic prediction is that it will take years before the full implementation of AI in its various forms. So it's not that it will happen two years from now. But five, seven, ten years from now, businesses are going to be operating, at their core, on data and AI, period. We, as individuals who run companies, work for companies, and own specific aspects of a company, will need to learn how to leverage those capabilities, and we will become operators of technology, first and foremost. We are seeing it already in some areas, and it will just become ubiquitous.

So from that perspective, which I'm very confident about, deriving back to Pecan, which is at the forefront of bringing AI to the masses of small, mid-size, and large enterprises, I would say our main vector, technologically, is constantly simplifying and expediting the implementation of predictive capabilities. When we started six years ago, an end-to-end project was a two-year timeline. We, as a company, were able to reduce it to a couple of weeks, a month, maybe two months, and we can theoretically reduce it even further. And if six years ago, when we started, you had to be a PhD-level data scientist with a lot of business domain experience for the project to succeed, today we proudly serve hundreds, if not thousands, of data analysts who had never done AI before using Pecan. Two to four years from now, we see a clear path to bringing those capabilities beyond data analysts, to laymen like you and me, people who, like I said, own processes, have some business interest, and potentially have access to the data. We don't see a reason why those individuals shouldn't or couldn't also run predictions and make business decisions based on what will happen in the future instead of just gut feeling.

Erik: Great. Zohar, thank you. Any other things that we haven't touched on yet that are important for folks to understand about either your business or how you see the industry, and how they should be approaching AI development for their companies?

Zohar: First of all, Erik, thank you for the conversation. I think we covered great topics here. Maybe the thought I'd like people to leave today's discussion with is that, on the one hand, the AI train is leaving the station. Companies that fall short of implementing, or at least experimenting with, AI in the coming years are just going to be irrelevant in the future business landscape, for sure. On the other hand, there are no more excuses for not implementing AI. The way we see it, if back in the day the costs, the talent, and the data were all objective, challenging hurdles, technology has made sure we are no longer there. If you care about the future of your business, and if you care about getting on the AI train before it leaves the station, look to Pecan or other companies that help solve these AI problems. Now is the time.

Erik: Great, yeah. Zohar, if folks want to learn more, what's the best way for them to reach out to your team?

Zohar: Specifically for Pecan, they can contact us via the website. We're very responsive. We do work with, like I said, the longer tail of the companies out there. So it shouldn't be too hard.

Erik: Great. So that is pecan.ai. Zohar, thank you so much.

Zohar: Thank you, Erik.

 
