Ep. 092
The importance of low code platforms
Lucas Funes & Cecilia Flores, CEO & COO of Webee
Monday, June 21, 2021

In this episode, we discuss the important role low code platforms have in reducing the time and cost to deploy an IoT system and we also explore diverse use cases for edge data and machine learning to improve sustainability, efficiency, safety and overall profitability.

Lucas Funes and Cecilia Flores are the CEO & COO of Webee. Webee’s low-code platform lets business users in manufacturing, agriculture, smart cities, utilities, and more use natural language to make sense of all their sensor-generated data as easily as doing a Google search. Its drag-and-drop interface doesn’t require technical expertise to build custom IoT apps or to connect sensors and third-party sources like weather forecasts and historical data sets.

Transcript.

Erik: Welcome to the Industrial IoT Spotlight, your number one spot for insight from industrial IoT thought leaders who are transforming businesses today with your host, Erik Walenza.

Welcome back to the Industrial IoT Spotlight podcast. I'm your host, Erik Walenza, CEO of IoT ONE. And our guests today are Lucas Funes and Cecilia Flores, CEO and COO of Webee. Webee is an intuitive IoT and AI solution builder that allows organizations to reimagine their operations using easy-to-deploy solutions implemented in just hours. In this talk, we discuss the important role low code platforms have in reducing the time and cost to deploy an IoT system, and we also explore diverse use cases for edge data and machine learning to improve sustainability, efficiency, safety, and overall profitability.

If you find these conversations valuable, please leave us a comment and a five-star review. And if you'd like to share your company's story or recommend a speaker, please email us at team@IoTone.com. Thank you.

Lucas, Cecilia, thank you so much for joining us today.

Cecilia: Thank you. It’s a pleasure to be here.

Lucas: Thank you, Erik. It's a pleasure.

Erik: Before we get into the company, the business, the technology, I'd love to learn a little bit more about the both of you. So you set up Webee, was it about eight years ago now, is that right? I know you're cofounders. Had you known each other before setting up the company? I suppose you did to some extent. But was this a long-term friendship? Was it from working together at Stanford? What was the backstory?

Cecilia: So we founded the company back in 2013. We had recently met each other. My background is very much on the corporate side, and I could see from the inside how much value technology brings to the enterprise. And Lucas, who was born an entrepreneur, was coming more from the engineering side with this idea of IoT, back when not a lot of people were talking about it.

So I got fascinated by the idea of connected things, and how that technology could evolve and help fix some of the world's most pressing issues. So we started playing around with the idea of forming a company around an IoT product that could help us really democratize this adoption. From the beginning, we were imagining how many areas could be affected by this technology. And so, just by playing around with ideas, we decided to found the company around that.

Erik: And then Lucas, you were running a couple companies previously, so there’s Enorbitas and ESG Studios, were you touching on IoT there, or what was the key focus of those two companies?

Lucas: We were focused on software, but also on engineering development. In some of those companies, we used to develop software products and cloud services, though there wasn't much cloud 20 years ago; we were using WAP and WML instead of the smartphones you have today. And we touched on some parts of electronic hardware and security, so it was combining different technologies. But nobody talked specifically about IoT or artificial intelligence 20 years ago. We used to call them [inaudible 03:54] networks or those types of things.

Erik: Or low code, for that matter. When you first set up the company, was the vision already kind of clearly in mind, or did you basically say we think IoT is going to be interesting, let's do something and then kind of figure out the portfolio as you went?

Cecilia: So our mission has always been to make technology accessible. We got fascinated by IoT, but it still continued to be confusing. Back then, it was very much a dream to think about an end user who could just decide what type of sensor they want to connect, and what type of information they need to structure and access in real time.

So the end goal has always been: how can we make it easy for people to use technology in various applications and really help the industry take off? We navigated different ways to bring this product into the market, to build that idea and that vision into a product. And we started testing different markets.

And then as we went down that path, we started engaging with different large enterprises that were trying to do the same. They were trying to understand: okay, what does this IoT future look like for my clients, and how can I add value to them through connected technology? So that's how our product was brought to life. It was really by understanding how the enterprise works, what their goals were, and how we could help them build this type of technology, with the vision in mind to make it easy and simple to deploy.

Lucas: With that vision in mind of how to really democratize connectivity, we built the company with one clear thing in mind: creating a way to connect things easily, wirelessly, without using wires. In my previous background in factory automation, there were really complex wired technologies that didn't work well. So we built the company on this vision of connecting things wirelessly and simply, with battery-powered devices.

Erik: So you were really quite early in a trend that's now very much mainstream. Back in 2013, you were probably still figuring out how to make a lot of things work?

Lucas: Yes. In the first steps of the company, we had to build our own wireless protocol to connect things and automate machines. And then, of course, now there are many other technologies that we use instead of that first wireless protocol we built. Now you can find technologies like LoRaWAN that are mainstream. So we adopted those open-source, open protocols for connectivity that are proven.

Erik: Just from the entrepreneur's perspective, how long did it take you from the founding of the company to doing your first proper project for a customer? Not a pilot, but to say, okay, we've got a product, you're going to buy the product, and you're going to deploy it at a meaningful scale. What was the timeline from zero to one for you?

Lucas: That's kind of an amazing question.

Cecilia: Because I think the product always evolved, right, and that’s [inaudible 07:17]

Lucas: The first one was about six months from the starting point. And then within a year, we started running pilots with the IoT platform and some hardware that we had to develop. That delayed our first production deployments, because we had to build some hardware at that point; we couldn't find the right wireless product in the market. You asked about how to find the right customers, and then match the value prop against the pricing and how to solve the customer's problem. We had to pivot to find those customers with a clear vision in mind. It took between three and four pivots on the go-to-market strategy before we finally got it.

Erik: It's always a super interesting time for a company. You're trying to figure out what the tech stack looks like and where the customers are, and then you find that match?

Cecilia: Yeah, I guess the complexity in IoT is that it's a combination, in all cases, of software and hardware. You need the hardware to be able to collect data, and hardware makes it a little more difficult to understand how you're going to approach the market. So I think the vision has always been there. But as the market continued to evolve, the hardware market also became very confusing, and it was complicated to understand what the right way to go was. So I think that delayed the go-to-market a little bit, in the sense of our understanding: how can we continue to make sense of what is going on in the industry and, at the same time, provide value to the client at a price they are willing to pay? So that's another angle.

Erik: What does your customer look like today? I know you have a strong focus on manufacturing, and you also have a strong focus on smart farming. Is it basically factories and farms? And are you looking primarily at the largest players that have very complex operations, or midsize players that really value simplicity? What's the composition?

Cecilia: So we built a no-code tool set that is end-to-end and really allows enterprises to accelerate the deployment of IoT solutions. But most of our clients don't buy IoT solutions; they buy technological solutions to fix some of their inefficiencies. It might sound like the same thing, but for them the approach to technology is different. And the reason I'm bringing this up is because, as a go-to-market for us, we work with companies in the food and beverage industry to help reduce some of the food waste produced by inefficiencies in manufacturing.

And the way we approach them is really by helping them fix some of the issues they have already identified in the production line, because they know exactly where inefficiencies happen; they just don't have access to a solution that can be implemented fast enough to prove return on investment, in an industry that is very focused on efficiency. So our main markets are the food industry and agriculture, because they are super interrelated too. A lot of the processes depend on one another, and there are a lot of inefficiencies that could be solved by working on the industry as a whole.

The way we approach them is different from most of the IoT companies out there in the market: we really work bottoms-up, working with the production manager, with the plant manager, the people who are on the frontline of the inefficiencies. They have a specific process they want to optimize. They have a specific component they need much more information about, in a more reliable or flexible way. And they need to fix it right away to improve efficiency.

Sometimes top-down approaches are too comprehensive for solutions to be able to prove the return on investment, and some of the projects die on the way to taking off and getting internal approval. So working bottoms-up really helped us tap into a couple of things. One is proving value from the beginning: the timeframe for the proof of concept is short, so you can prove how this investment is going to bring you efficiency. The other is reducing the friction for technology adoption internally. Because when you focus on industries that are so traditional, there's also some friction in understanding: how do I make this new technology work with my legacy infrastructure, and how do I make sense of it?

And then you get internal buy-in, and digital transformation starts happening internally and organically. It's not because they think this is something they have to do; it's because they are seeing the value firsthand. So we have proven that this approach really helps us, and helps clients understand technology better. Because, like we were saying before, IoT is so confusing, and there are so many routes you can go, that having a very specific place to start helps them get into a real digital transformation approach.

Erik: In your solution, you have a platform that's fairly horizontal, which is around connecting sensors and gaining access to that data, plus analytics tools to make sense of the data and turn it into some useful output. And then you also have what might be more like SaaS products: fully built applications targeting specific solutions. Is it the platform that came first, and then as you started to get experience solving different problems, you built the SaaS tools on top of the platform? Is that the right way to think about the suite?

Cecilia: Yes, that is the right way. And the reason we are creating these pre-built products, or pre-built applications, within the platform is to accelerate the deployment even more. The no-code component of our platform is critical, because it is the enabler for enterprises to continue to add more data sources and data points into the platform. But beyond simplifying connectivity, we need to show them and guide them on what the best place to start is and what information they could be acting upon. So that's why we build pre-built products that can be a kind of out-of-the-box solution for them, and then they continue to build on top of that.

Lucas: And actually, what happened in our journey was that we built the platform as a first step to solve a specific pain point, but as a platform that could scale. In the beginning, we built custom solutions running on top of that platform, and all of that was developed by programmers, by coding. And of course, to offer no code, you need to do a lot of coding in the backend to be ready to be a no-code platform.

And then, when we wanted to go to different customers, we discovered that even the same type of customers, like chocolate manufacturers, the same type of companies that should have the same problem and the same machines, needed the solution changed by maybe 5-7%. So at that point, to deliver the different solutions for the customers, we had to put our own programmers and solution architects on customizing the software for the customers.

And then there was a moment, around 2016, that was a really big pain point for us. Even when we wanted to grow and have more customers, we had to grow our engineering teams: more headcount, and more headcount means more time. And at that point, even though we had the platform, it took about six months to deliver a market-ready product to solve the problem. So we said, oh, there is a huge problem here: we need something that makes it very easy to build a solution, reusing what we already built, but so simple to customize and scale, as if you were drawing on a whiteboard or a canvas. And that is why we developed the no-code layer on this end-to-end platform, to really customize and create solutions from scratch, and then scale from the previous solutions. Examples of those are the smart farming and smart factory solutions we built.

Erik: Okay. So customization is always necessary based on the needs of different businesses, but now the customers can basically do that themselves. Or they can hire a consultancy, whatever, but you don't need to work hand in hand with the customers through this process. Can you explain in a bit more detail what low code actually looks like? I studied engineering for one year, and then I dropped out and switched over to philosophy, so I'm probably a perfect target customer for you. If I were to use your solution to set something up, what would it look like from the end user perspective?

Lucas: So the end user is someone who wants to solve a problem, a domain expert. Suppose you are an expert in agriculture, or a veterinarian, and you know how to solve a problem on a whiteboard. Our vision was that this should be that simple, and that human, without needing technical skills: as simple as drawing the solution on a whiteboard. So you go to a canvas in the system, and then you can drag and drop the different components that will solve the problem.

The way we see this end-to-end no-code technology is as a three-step process. The first step is to collect the data and connect with the devices. That has the complexity you're talking about: many protocols, different technologies, and so on. We simplify that by giving you pre-built connectors that can extract information from different software like SAP or Oracle NetSuite, or from APIs; but you can also extract information from factories, and from a LoRaWAN sensor, as one example.

Cecilia: So in a traditional system, in most cases, you want to use one communication protocol, and you're going to connect certain data points, let's say either sensors or data tags in the factory line. You would have to have someone manually integrate them into the platform so they can show up in the software, and then you can start creating the conditionals. Our no-code tool set recognizes the data points automatically and puts them at your disposal on the canvas, so you can start playing around and working with the workflows, which is the second step.

Lucas: Recover granted patterns related to that process that is called intelligent mapping and discovery that discover all the objects, the devices, the things, different protocols, and then you can use it without doing anything on the coding side. If you go to the coding, you need having skills like field work programmers, and research programming and [inaudible 18:48] devices and that. That’s the first step.

The second step is, once you get that data, how you normalize that data and really contextualize it, data that could be coming from devices or software or APIs or services. At that point, you would normally need another type of programming skill: data scientists who can annotate data. Here, you have different boxes on the drag-and-drop canvas where you can start extracting the data and getting insights.

Cecilia: Instead of manually coding the workflow and telling the software what you want the sensor to do, let's say you want to set up an alert: if the temperature goes below or above a certain level, you want to receive an alert. Instead of coding that manually, just by dragging and dropping elements on a canvas, you can get that workflow down. And then a step beyond that, the third step, is how you visualize the data. Once you have all the data streams and you have told the system what you want to do, you have to create an application so you can actually access the information. And in the case of the enterprise, that info needs to be accessed by different levels within the organization.

So you can do that, also intuitively, in the same application, without having to have someone program the application. You can just drag and drop elements and create as many as you need, depending on what type of information you want to share, and also create communication channels. So if you want to send a text message with an alert, or send an email for a specific alert on a specific reading, you do the same thing without the need for coding.
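The workflow Cecilia describes, a sensor reading feeding a threshold condition that triggers an SMS or email, might compile down to something like the following minimal Python sketch. Everything here (the node names, the `threshold_node` helper, the list standing in for an SMS channel) is hypothetical and for illustration only; it is not Webee's actual API.

```python
# Minimal sketch of a flow-based alert workflow: each box on the canvas
# becomes a function, and the canvas wiring becomes a pipeline of nodes.
# All names here are made up for illustration.

def threshold_node(low, high, alert_sink):
    """Return a node that forwards a reading and fires an alert
    when the value leaves the [low, high] band."""
    def node(reading):
        value = reading["value"]
        if value < low or value > high:
            alert_sink(f"ALERT: {reading['sensor']} at {value} "
                       f"(allowed {low}-{high})")
        return reading
    return node

def run_workflow(readings, nodes):
    """Push each reading through the wired nodes in order."""
    for reading in readings:
        for node in nodes:
            reading = node(reading)

# Example wiring: one temperature sensor, one SMS-like sink.
sent_alerts = []                       # stands in for an SMS/email channel
temp_check = threshold_node(2.0, 8.0, sent_alerts.append)

readings = [
    {"sensor": "boiler-1", "value": 5.1},
    {"sensor": "boiler-1", "value": 9.4},   # out of band, triggers an alert
]
run_workflow(readings, [temp_check])
print(sent_alerts)
```

The point of the sketch is that the canvas only has to generate this kind of wiring; the user never sees the code.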

Lucas: So generally speaking, consider building an end-to-end IoT solution in the traditional way. Suppose you would like to monitor the vibration of a machine and have an alert by SMS and a mobile app that allows you to see the alert notifications on machine health status in real time. That would require a firmware developer, a [inaudible 20:59] developer, an edge developer, then someone who can manage cloud infrastructure, using for example Azure or AWS, and then another programmer who can program the code for the thresholds, and a data scientist to incorporate AI algorithms. And then you need mobile developers to get the alerts on iOS or Android, as an example.

With this no-code technology, you have a pre-built solution, and then you can scale with just drag and drop. As an example, you can stick and play this non-intrusive, battery-powered, long-range LoRaWAN sensor onto a machine. By scanning the QR code, it is onboarded to the system; you stick it to the machine, and then you start getting the data. Then with drag and drop, you just drag a box that is an anomaly detector and connect it to the digital twin of the machine, like a mixer in a factory. And then you create the alerts, very, very easily. We lowered the time from six months to one day to have a fully ready, end-to-end solution customized for a specific customer.

Erik: And I suppose there's a lot of pre-work done on APIs for different sensors and so forth. I imagine there's a lot of diversity in terms of the infrastructure that your customers have. Are you regularly confronted with situations where there's a particular type of sensor, or other data source, that you haven't integrated before, and you have to make a business case around whether to devote the engineering resources to do that for a particular customer? Or do you feel like, by this point, 95% of the assets you might want to integrate with, the data sources, are already there? There's a tremendous amount of complexity in different communications protocols and sensor models in the market. What does this complexity look like from your perspective?

Lucas: There is huge fragmentation in the protocols and on the edge side. And even though there are common protocols like MQTT, you'll find different vendors across different protocols, with different payloads inside. So for that, we had to find a way to onboard and certify devices or data sources without coding. This is why we built what we call a component-driven architecture; all our architecture is built that way, with a flow-based programming approach as well.

So now we support more than 700 types of devices, different types of devices with different protocols and technologies, from LoRaWAN to WiFi and so on. But in many cases, there are new data sources that require some integration to certify and support them. You can do that without coding, just by clicking on the capabilities and using our intelligent mapping and discovery service, which is patented. It's no-code software that runs like a robot that discovers the capabilities and then helps you build the decoders without [inaudible 24:18]
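Under the hood, a component-driven device architecture like the one Lucas describes might look roughly like this sketch: each supported device type registers a payload decoder in a registry, so onboarding a new sensor means adding one component rather than writing an integration end to end. The registry, device-type names, and payload formats below are assumptions for illustration, not Webee's implementation.

```python
# Sketch of a component-driven device registry: each supported device type
# registers a payload decoder, and ingestion dispatches by type.
# Device names and payload formats are made up for illustration.

DECODERS = {}

def register(device_type):
    """Decorator that registers a decoder component for a device type."""
    def wrap(fn):
        DECODERS[device_type] = fn
        return fn
    return wrap

@register("lorawan-temp-v1")
def decode_temp(payload: bytes) -> dict:
    # Hypothetical format: two bytes, big-endian, tenths of a degree C.
    raw = int.from_bytes(payload[:2], "big")
    return {"temperature_c": raw / 10.0}

@register("lorawan-vibration-v1")
def decode_vibration(payload: bytes) -> dict:
    # Hypothetical format: one byte of RMS vibration in mm/s.
    return {"vibration_mm_s": payload[0]}

def ingest(device_type: str, payload: bytes) -> dict:
    """Dispatch a raw payload to the registered decoder component."""
    try:
        return DECODERS[device_type](payload)
    except KeyError:
        raise ValueError(f"unsupported device type: {device_type}")

print(ingest("lorawan-temp-v1", bytes([0x00, 0xFA])))
```

The design choice this illustrates is that supporting a new vendor only touches the registry, which is what makes "700 types of devices" tractable.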

Cecilia: Yeah, so the beauty of focusing on some specific industries is that most of the infrastructure is pretty much the same, so the use cases are similar. That allows for less manual integration of devices and fewer different settings and customizations.

Erik: So it sounds like, because of your industry focus, you're already covering most of these situations, and then the low-code solution allows new devices to be onboarded fairly easily. Let's go into the tech stack a little bit. We've talked about the platform and the SaaS. This is a fully integrated solution. I suppose you have some customers that already have all of the required data sources in place, and other customers that also require putting that infrastructure in place. Can you give us a high-level perspective of what the tech stack looks like for a full-scale, end-to-end deployment? What are you doing internally, and where are you buying things, either hardware or software, off the shelf and then integrating them into your stack?

Lucas: In the first step, which is connecting to devices, things, and gateways, we onboard and certify different devices from different vendors; we have many, many supported. In that respect, we have our own gateway and sensors, but with more than 700 types of devices, we have many vendors already supported there. Also, in the technology and software stack, we run agnostic to the cloud infrastructure, and we can run on premise as well. And we can leverage the customer's infrastructure, the IT and the IoT infrastructure. As an example, you can go to a factory that already has SCADA or PLCs working in an automated factory, and we can connect to those PLCs and extract information, so they don't need to rebuild or replace the current technology.

Then there's an additional layer for when they have legacy machines that are not connected to PLCs, or through WiFi and MQTT; you can stick and play those non-intrusive sensors and extract information from them. That is the data extraction part. The software cannot only run on premises, but can also leverage their current infrastructure. You will find many factories that already have data lakes, like Azure data lakes or AWS databases, and with this visual designer and no-code controls, you can extract information from there, but also push information to those data lakes they have.

And then, in the third layer, the third step of the end-to-end, there are many systems like SAP or Oracle or IBM Maximo that handle maintenance or work orders on the machines, and we can push the data to those. This is the way we see our tool set: as a visual designer to orchestrate the [inaudible 27:30] and the IT and OT infrastructures.

Erik: And from an analytics perspective, maybe this is more a question of what you see in the market rather than what your tech stack is able to do, or maybe both perspectives are interesting. We have rules-based systems, which still work in many cases, and then we have machine learning solutions that are rapidly maturing. To what extent do you see your customers working with fairly simple rules-based systems versus trying to adopt machine learning solutions to improve precision or take on new challenges? Where do you see that dynamic in the market right now?

Lucas: If we go and preach to a customer that we have AI and machine learning, a customer in agriculture or in a factory like a food processing or food manufacturing plant, they don't buy the AI for what the word means now; they want to solve their problem. They are not really focused on whether the problem is solved with machine learning and AI, or with simple thresholds, or rules-based logic.

But the problem is that many of the problems they have cannot be solved with just simple statistics or rule-based logic. Take an anomaly detector: if you would like to detect an anomaly in a motor or an incubator or a mixer or a refrigerator, in many cases you will not be able to detect that anomaly with just simple statistics and rule-based thresholds. You will need to run a machine-learning algorithm, learning algorithms that characterize the behavior, and then run AI inference to detect the anomaly.
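The distinction Lucas is pointing at, between a hand-set threshold and a model that first characterizes normal behavior, can be sketched in a few lines. Here a toy detector learns the mean and spread of normal vibration readings and flags values far outside that learned band. A real deployment would use richer features and models, so treat this purely as a minimal illustration, not as what Webee runs.

```python
import math

class LearnedAnomalyDetector:
    """Toy detector: learns normal behavior (mean/std) from observed data,
    then flags readings far outside the learned band."""

    def __init__(self, z_limit=3.0):
        self.z_limit = z_limit
        self.mean = None
        self.std = None

    def fit(self, normal_values):
        """Characterize normal behavior from a sample of readings."""
        n = len(normal_values)
        self.mean = sum(normal_values) / n
        var = sum((v - self.mean) ** 2 for v in normal_values) / n
        self.std = math.sqrt(var) or 1e-9  # avoid divide-by-zero

    def is_anomaly(self, value):
        """Inference: is this reading far from the learned normal band?"""
        return abs(value - self.mean) / self.std > self.z_limit

# Learn "normal" from observed machine data instead of guessing
# a fixed threshold by hand.
detector = LearnedAnomalyDetector()
detector.fit([4.9, 5.0, 5.1, 5.0, 4.8, 5.2, 5.0, 5.1])
print(detector.is_anomaly(5.05))  # close to the learned normal behavior
print(detector.is_anomaly(9.0))   # far outside the learned behavior
```

The fitted band here is per-machine, which is why the same generic component can be dropped onto very different assets.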

For that, it's very important to provide them with built-in algorithms as part of the solution. What also happens is that you can start by building a solution with a generic anomaly detector, but then you need to give users the ability to train that algorithm in real time, because the generic algorithm may not have the accuracy, say 90% accuracy, in detecting the anomaly. So with our no-code tool set, you can drag and drop the asset, the machine, then you can drag and drop the anomaly detector box, and then you can drag and drop the SMS box to send the notification.

You can then connect the anomaly detector box to the SMS, and the system will start learning. The first time the generic anomaly detector detects what may be a problem with the machine, it will trigger the SMS and then ask the user whether there is a problem or not. The user can just select no, and that will be training the custom algorithm implemented for that machine. In the traditional way, what would happen is: collect the data, then put a data scientist on training the algorithm. Here, it is supervised learning, in real time, built with drag-and-drop technology.
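The supervised feedback loop Lucas describes, where every alert the user confirms or dismisses becomes a training signal, might be sketched like this. The toy detector below just widens its tolerance after a dismissed (false) alarm; a production system would retrain a real model, so this only shows the shape of the loop, and all names are hypothetical.

```python
class FeedbackTunedDetector:
    """Toy human-in-the-loop detector: starts from a generic threshold,
    then widens it whenever the user dismisses an alert as a false alarm."""

    def __init__(self, threshold=6.0, widen_factor=1.2):
        self.threshold = threshold
        self.widen_factor = widen_factor

    def check(self, value, ask_user):
        """Flag a reading, ask the user to confirm, and learn from the answer."""
        if value <= self.threshold:
            return False                 # looks normal, no alert
        confirmed = ask_user(value)      # e.g. the SMS reply: real problem?
        if not confirmed:
            # False alarm: widen the learned band so similar readings
            # stop triggering alerts for this particular machine.
            self.threshold = max(self.threshold * self.widen_factor, value)
        return confirmed

detector = FeedbackTunedDetector()
# Simulated user replies: this machine normally runs a bit hot,
# so the first alert is dismissed, and the detector adapts.
print(detector.check(7.0, ask_user=lambda v: False))  # asked, dismissed
print(detector.check(7.0, ask_user=lambda v: True))   # no longer flagged
```

Each dismissed alarm permanently customizes the detector for that machine, which is the "supervised learning in real time" idea in miniature.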

Erik: So you've integrated training into the normal workflow. What is your experience when you develop an algorithm with, say, 85% effectiveness for one model of a particular asset, a pump, and then another customer wants to deploy an algorithm on another pump, maybe a different model produced 10 years earlier, from a different vendor? So basically the same dynamics, but a different configuration for the asset. What's your experience with the time required? You first deploy the generic algorithm on the new asset, and then you need to go through this training period. How much training data is needed? What's the time to get from a generic result of 60% up toward a more optimal 80-90% accuracy rate based on the training?

Lucas: It doesn't depend on whether the pump is new or old; it depends on the use case. You can have a brand new pump and an old pump that even have different values for energy consumption, as an example, and that can work out of the box with the algorithm. But if the use case, the way the pump is used, changes, that is what really needs training. With self-learning, we have successful deployments that learn within one to two weeks, and sometimes that can be lowered to three days. We have some cases where, within those three days, we had the opportunity to detect anomalies, and that accelerated the learning process. Because for learning, we need to detect the normal behavior, and the learning time is accelerated when you get an anomaly in the meantime; if not, you need more normal behavior, and then you can simulate those anomalies.

Erik: But even one or two weeks is still a pretty quick turnaround. Just one more question on the business and then I'd love to hear one or two use cases or case studies. The business model, is this just a SaaS based model, maybe with some setup costs? What does that look like?

Cecilia: Yes, it's a software-as-a-service subscription, and then we have some additional services depending on the setting and whether there's some customization of the solution.

Erik: Let's go into a couple of case studies. Are there any that you're able to talk about, or maybe some common case studies that you think would be most useful to discuss?

Cecilia: Yeah, so we can cover one on the [inaudible 33:54] side and the other one in manufacturing. In general, there's this very nice use case of a company that had a process that was super sensitive to temperature. What they needed was to connect the boilers to make sure they could get real-time alerts when there was any change in temperature. The problem was that the system they had, a SCADA system, was already generating alerts, but the alerts were too frequent, so they couldn't discriminate the information. And by the time they would get to the production line (it was a chocolate manufacturer), all of the production already had to be tossed away.

Just because they couldn't detect that there was an anomaly in temperature, and because of safety and quality protocols, they had to toss away, or dispose of, all the chocolate in the pipes and spend one day setting up the line again. Just one day of downtime for this client was $1.5 million or more in losses. So what we did was get into their production line with a non-intrusive approach; we structured the data and we put it on the canvas so they could set up all the alerts and the dashboards they wanted, and they could come back to the canvas to change them anytime they want.

For them, that was critical, because there are some adjustments on the production line that need to be constantly changed. So they needed to have that power. They had actually been dreaming about having the power to get the alerts they need, discriminate among all the data being produced and reported by the production system, and at the same time have something they could rely on.

As for the implementation, I like this use case because we implemented it at the beginning of the pandemic. We had plans to travel to that factory, and we couldn't, and we really wanted to see in real life what it looked like from the client's perspective to set up the whole system. So we did it 100% remotely, and it worked perfectly well. And for them, it was also a solution that helped them navigate the pandemic, because they had less workforce in the plant and they still needed to make sure they could control all of the critical processes.

So a very nice use case for them. The cost was just the implementation and the license of the software. And it's already proving itself: they are saving millions and millions of dollars in losses, losses they had previously budgeted for every year. So that's one of the use cases.

And the other one that I like a lot is in pig production. There's a stage in pig production, the maternity stage, where the mother lies down on top of a piglet and that piglet dies, just because they don't have enough room to move. There are many variables that have been measured there. What we built is a very innovative solution with computer vision: we can detect when that crushing accident is happening and generate a real-time alert, actually a local alarm that sounds super loud, so the farmers know it's happening and can go and prevent that piglet from dying.

They have 17% production loss in the maternity process, because it's super sensitive to many different things, and this is helping them drastically reduce the losses in that part of the process. So two different use cases, two different sets of variables to measure, two different applications of the technology, but the same stack and the same no-code approach.

Erik: The second one is an interesting machine vision case. Often you see machine vision doing object recognition on standardized objects, and I suppose a pig is one, but here you're not looking at a face and measuring angles and so forth. You're looking at a body and how that body is moving in a pen relative to another body. What was the training timeline on this? Do you know how long it took to reach a reasonable accuracy level?

Lucas: Yes, that was kind of amazing. We had to put 30 cameras in a big farm, because the main problem was to detect crushing mortality, to capture the crushing event between the mother and the piglets. To accelerate things, it took three weeks: we connected 30 cameras, which means we were monitoring 30 mothers with their piglets. Actually, 50% of the crushing happens in the first three days in the nursery.

So we had to put in the 30 cameras to detect the crushing events. In three weeks we captured those events, and with just six crushing events we could train the algorithms to detect a crush between the mother and the piglets. Imagine looking at the pen from the top with a camera: there's a rectangle that represents the mother, and small squares that represent the piglets. When there is a crushing event, you see that one of the small squares, a piglet, stays intersected with the mother for more than three seconds. And that intersection is different from when the piglets are nursing or playing around.
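The detection rule Lucas describes, a piglet's bounding box overlapping the mother's for more than three seconds, can be sketched in a few lines. This is a minimal illustration only; the class and parameter names (and the frame rate) are my own assumptions, not Webee's actual implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Box:
    """Axis-aligned bounding box: (x1, y1) top-left, (x2, y2) bottom-right."""
    x1: float
    y1: float
    x2: float
    y2: float

def overlap_area(a: Box, b: Box) -> float:
    """Area of the intersection of two boxes (0.0 if they don't touch)."""
    w = min(a.x2, b.x2) - max(a.x1, b.x1)
    h = min(a.y2, b.y2) - max(a.y1, b.y1)
    return w * h if w > 0 and h > 0 else 0.0

class CrushDetector:
    """Flags a piglet when its box overlaps the mother's box for more than
    `threshold_s` seconds of continuous video."""

    def __init__(self, threshold_s: float = 3.0, fps: float = 10.0):
        self.threshold_frames = int(threshold_s * fps)
        self.overlap_frames: dict[int, int] = {}  # piglet id -> consecutive overlapping frames

    def update(self, mother: Box, piglets: dict[int, Box]) -> list[int]:
        """Process one frame; return ids of piglets past the threshold."""
        alerts = []
        for pid, box in piglets.items():
            if overlap_area(mother, box) > 0:
                self.overlap_frames[pid] = self.overlap_frames.get(pid, 0) + 1
                if self.overlap_frames[pid] >= self.threshold_frames:
                    alerts.append(pid)
            else:
                self.overlap_frames[pid] = 0  # contact broken, reset the timer
        return alerts
```

The duration threshold is what separates a crush from nursing or play: brief overlaps reset the counter, and only sustained contact raises the alarm.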

Erik: Still, about three weeks, a very quick timeline. Are both of these solutions cloud-based, or do your customers often require private cloud or on-premise?

Lucas: For that one, we had to use edge computing. The training was done by storing the videos on the cameras, then extracting and tagging that video and running the training on it. The cameras run the trained algorithms at the edge to detect the collision, the crushing [inaudible 40:37]. And when a crush is detected, the device uses LoRaWAN to send that message back to the cloud, or to sound the alarm locally.

Erik: So edge computing there, since video data is a bit heavy to be moving around. And for the factory case, was that on-premise, public cloud, or private cloud?

Lucas: For the factory, we used the cloud.

Erik: What's your feeling right now? I guess your market is primarily in the US. Are factory GMs, or factory managers, open to public cloud now, or is everything still primarily private?

Lucas: We are seeing a change. Two years ago, all the factories wanted a really on-premise, local solution for the factory. But for big data and data lakes, they are getting educated and moving to the cloud. And in the cloud, the large companies mostly don't want to be exposed in a generic SaaS cloud, a multi-tenant environment. They focus more on having a private cloud, with VPNs connecting from the factory to the cloud, so private cloud connectivity across the whole network infrastructure.

So not multi-tenant for the large enterprises that have premises worldwide, and a different infrastructure on the network, with VPNs even in the data transmission: using 3G or 4G to send the data packages, but within a closed VPN.

Erik: That's a lot of what we see as well. It's a trend, but maybe a slow-moving one here. So, what is next for you? It sounds like this solution has evolved continuously since you set the company up eight years ago. If you look two or three years into the future, what are the next features or directions we might see the stack taking?

Lucas: Regarding the technology, we are putting more effort and investment into the artificial intelligence components, the cognitive services, and also natural language processing. Because the true value of IoT is to extract the insights and expose them to the users in the moment they need them. Sometimes that means real time; sometimes it means historic insights the user can ask for.

A key component in that is what we call self-driven AI analytics: a simplified user interface for the operations manager, the machine operator, the plant manager, or even a farmer, where they have a search box, like Google's, and can ask questions in natural language, and the system answers. This is something we already have: you can ask a question like "How is my machine?" and the system will answer, "The machine is okay, and we believe you need to do maintenance in advance."

Cecilia: Which is a way to cut through all the information and data that has been generated and get exactly what you're looking for, in real time.

Lucas: So, AI computer vision algorithms, more [inaudible 44:00], and an advanced self-driven NLP search box are our focus on the technology side. We also have on our roadmap integrations with more ERPs and different APIs, and building new end-to-end solutions that solve specific cases, like the smart factory and smart farming products we launched. This year those solutions are focused on the food and beverage and agriculture industries, across the whole supply chain, what's called grapes-to-glass or farm-to-table. And then next year we're expanding those ready-made solutions to other industries [inaudible 44:48].

Cecilia: Also, something super important that we are adding to our platform is the ability to give users insight into their sustainability goals. We see that most of our clients already have 2040 or 2050 carbon-neutral goals, and those are very much linked to production, to efficiency in the use of natural resources, and to how their food is being produced. Our plan is to continue to help them do that, but not only to add efficiency; also to measure how much they are contributing to their sustainability goals. So that's something important that comes with the evolution of our product.

Lucas: That's an important topic CeCe brings to the table. We are building the KPIs and the multipliers, for different countries, to convert energy consumption into carbon footprint impact. And on the agricultural operations, by measuring different variables in the soil, like organic material, we can multiply and calculate how much that farm is contributing to carbon retention and to reducing the carbon footprint.

So the focus is not only on the customer's main pain point, which is generating more profitability and cost reduction, but also on measuring the carbon footprint impact and how they can lower it to reach a neutral carbon footprint.
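The conversion Lucas describes, energy consumption times a country-specific multiplier, is a simple calculation. Here is a minimal sketch; the factor values are purely illustrative placeholders, not real national figures or Webee's KPIs:

```python
# Hypothetical grid emission factors in kg CO2e per kWh.
# Illustrative values only; real factors come from national energy inventories.
EMISSION_FACTORS = {
    "US": 0.38,
    "AR": 0.30,
}

def carbon_footprint_kg(energy_kwh: float, country: str) -> float:
    """Estimate CO2-equivalent emissions by multiplying metered energy
    consumption by the country-specific grid emission factor."""
    return energy_kwh * EMISSION_FACTORS[country]
```

For example, a plant metering 1,000 kWh against a 0.38 kg/kWh factor would report roughly 380 kg CO2e, which is the kind of number that can then be tracked against a 2040 or 2050 carbon-neutral target.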

Erik: Well, I think we've covered a lot of territory today. Anything that we didn't touch on yet, that would be important to cover today?

Cecilia: Yeah, for us it's a very exciting moment as we continue to evolve our solutions and learn much more about these industries. It's an exciting place to be, because we can see firsthand the value that we're adding. From the company perspective, last year we were recognized as the best software-as-a-service company in the US through a very large competition called the Female Founders Competition, run by M12, which is Microsoft's venture fund, Melinda Gates' Pivotal Ventures, and Mayfield Fund.

That was among 1,500 applicants, and for us it was a huge validation of the direction of our technology, how we are approaching the market, the value we're adding to it, and the evolution of our toolset. So it's an exciting moment for us as we continue to grow in the market. We are also working in a very collaborative way with other players in the ecosystem, like Semtech and the LoRaWAN community, which are key players for us in enabling those deployments.

We really think that the way for IoT to take off and democratize is very much an ecosystem approach, where we collaborate on getting solutions out the door, with the purpose of helping clients cut through the noise and understand what the best solution is for them at each step of their progress from the technological standpoint. So it's a super exciting time for us.

Lucas: And I wanted to say: stay tuned for our new launches. One of them is a smart irrigation system, all built with LoRaWAN, to help farmers water efficiently and save money, but also improve the crops. That will be amazing for farmers around the world.

Cecilia: I would say the most satisfying thing is when a client texts you to let you know that an alert just helped them fix a problem. Or, in the case of farming, an alert that there is fungus on the plants that is about to ruin their production. That kind of satisfaction is, for us, the best reward for all the hard work behind the technology.

Erik: It sounds like you are really on the right track. Based on my understanding of where the market is moving, you seem very well positioned for the coming decade. For listeners who are interested in learning more about what you're doing, what's the best way for them to, on the one hand, stay updated on your new releases, and on the other hand, get in touch with you or your team?

Cecilia: Absolutely. They can go to our website, which is Webee.io, that's W-E-B-E-E dot I-O. They can also follow us on Twitter and on LinkedIn, where we're really active updating and sharing news, and there's also a way to subscribe to the news on the website. It would be great to have people join us and to share our new solutions as well.

Erik: Well, thank you both for taking the time today. I really appreciate the conversation.

Cecilia: Thank you.

Lucas: Thank you very much, a pleasure.

Erik: Thanks for tuning in to another edition of the Industrial IoT Spotlight. Don't forget to follow us on Twitter at IotoneHQ, and to check out our database of case studies on IoTONE.com. If you have unique insight or a project deployment story to share, we'd love to feature you on a future edition. Write us at erik.walenza@IoTone.com.
