Ep. 124
Enable scale with reactive microservice frameworks
Brad Murdoch, Executive Vice President, Lightbend
Friday, April 22, 2022

In this episode, we discuss challenges related to building real-time cloud native systems that are able to scale globally, across millions of devices. We also explore the importance of reactive microservice frameworks for providing the on-premise flexibility that enterprises often require.

Our guest today is Brad Murdoch, Executive Vice President at Lightbend. Lightbend equips development teams to build microservices that are resilient to failure, scale effortlessly and instantaneously process data for in-the-moment critical business decisions.

IoT ONE is an IoT focused research and advisory firm. We provide research to enable you to grow in the digital age. Our services include market research, competitor information, customer research, market entry, partner scouting, and innovation programs. For more information, please visit iotone.com

Transcript.

Erik: Welcome to the Industrial IoT Spotlight, your number one spot for insight from industrial IoT thought leaders who are transforming businesses today with your host, Erik Walenza.

Welcome back to the Industrial IoT Spotlight podcast. I'm your host, Erik Walenza, CEO of IoT ONE, the consultancy that helps companies create value from data to accelerate growth. And our guest today is Brad Murdoch, Executive Vice President at Lightbend. Lightbend equips development teams to build microservices that are resilient to failure, scale effortlessly, and instantaneously process data for in-the-moment critical business decisions.

In this talk, we discuss challenges related to building real-time cloud native systems that are able to scale globally across millions of devices. We also explore the importance of reactive microservice frameworks for providing the on-premise flexibility that enterprises often require.

If you find these conversations valuable, please leave us a comment and a five-star review. And if you'd like to share your company's story or recommend a speaker, please email us at team@IoTone.com. Finally, if you have an IoT research, strategy, or training initiative that you'd like to discuss, you can email me directly at erik.walenza@IoTone.com. Thank you.

Brad, thank you so much for taking time to speak with us today.

Brad: It's a pleasure, Erik. Thanks for having me on.

Erik: You seem to have a very good track record of being a Chief Marketing Officer of a company that gets acquired. So what's the backstory there?

Brad: I have a history in the technology industry, where I have a technical background. I've got a computer science degree. But very early in my career, it was pointed out to me by my manager that I was actually better at talking about software than I was at writing it. So I've really built a career around helping technology companies establish their value proposition to their customers. And so, over time, I've worked for large companies and smaller companies. And in many cases, the story ends up being that the smaller company has value to larger companies, and then an acquisition happens. So it's just part of our industry; that's a normal pattern really for a lot of early stage companies, they either go public or they get acquired.

Erik: And then most recently, you've been a board member of Permission. You're also on the board of Spiritus Partners. But now you're working with Lightbend. How did you land there? What was it about the company that convinced you that this would be the next place to place your bets? And what was it that excited you about the company?

Brad: There are a great number of things. So going back to the early 2000s, I was actually in the IoT industry; I just didn't know that I was, because that wasn't a term that was used at that time. I worked for a company called Wind River Systems. And Wind River was one of the biggest companies in embedded software. They were independent, they eventually got acquired by Intel, and they are now owned by private equity.

But at that time, the view was really that the future of the world was about connected devices. And the idea that the increase in power in those devices would enable increased intelligence and the ability for all these devices to operate in concert was always one that was very interesting to me. And I very much enjoyed my time at Wind River. But after that, I went back into the more traditional enterprise B2B business.

And the first of those companies after my stay at Wind River was a company called JBoss, a software company that got acquired by Red Hat. And it's still the core of a lot of their large scale enterprise software business. And through that experience, I really developed a lot of excitement around open source and the power of community in terms of building very sophisticated systems that could deliver business-changing outcomes.

And when I saw the opportunity at Lightbend, I saw a lot of similarities, in that the core technology that Lightbend has is called Akka, an open source framework. And it has amazing use cases where the concepts are all around distributed computing, asynchronous processing, the ability to have super scalability, super reliability, super high resilience. And many of the use cases were parallel to what I had been seeing as the future in terms of what we now call IoT.

And it was very exciting to me that there was the potential for this next generation of systems that could handle the types of massive amounts of data that we're now seeing generated by all of these connected devices. I've been at [inaudible 06:28] for six years now. And I feel like we're really just coming to the inflection point in the market where the requirements of large-scale distributed computing and high-performance data management at scale are becoming more and more an everyday concern for the Global 2000 enterprises and the smaller companies that are building solutions for those companies to consume.

Erik: You are in the business of building infrastructure that companies can use to build data-driven systems on top of, and there's some terminology here that I think would be quite useful to define, at least at a high level, for our listeners. So if I'm just reading these three sentences from the company introduction, it's: “Lightbend is leading the enterprise transformation towards real-time cloud native applications. Our mission is to help our clients become real-time enterprises with systems that combine the scalability and resilience of microservice architecture with the real-time value of streaming data. We're the authors of the Reactive Manifesto and a founding member of the Reactive Streams initiative.”

And so I think if we can just quickly walk through a couple of those terms, that'll help frame for people what it is that you actually do. And maybe starting with this concept of real time and cloud native? I mean, these are concepts that everybody's heard a thousand times. But still, when we say real-time, what exactly do we mean here? And why is this important?

Brad: I referenced my time at Wind River because that was really what made me understand the potential for real-time processing. So real-time means different things to different people, and for good reason. For some systems, maybe in automotive where you're doing real-time safety controls in the car, something like that, real-time has to be in the microseconds. But for most enterprises, real-time can be milliseconds.

And we also have some examples of digital transformation initiatives where companies are moving from processes that took days or hours into something that takes seconds. And so, something where you get a result in a couple of seconds compared to taking two days, that's real-time to them. But to someone that was building a safety system, two seconds is not real-time.

So for us, what we provide is a core distributed systems architecture that allows people to build systems that are operating with responses typically in the millisecond range. So we're talking very, very high performance memory-based responses to data requests, as opposed to having to go to a backing store or persistent store. So calling out to a database will take a certain amount of time. With an Akka-based system, typically, the data is already resident in memory automatically for you. And so this is one of the ways that we can achieve the superfast performance that Akka-based systems deliver.
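
To make that concrete, here is a minimal sketch in Scala (Akka's home language) of an Akka Typed actor that keeps a device's last reading in memory and answers queries without a database round trip. The DeviceState protocol and all of the names are hypothetical, invented for illustration rather than taken from Lightbend's products.

```scala
import akka.actor.typed.{ActorRef, ActorSystem, Behavior}
import akka.actor.typed.scaladsl.Behaviors

// Hypothetical protocol, for illustration only
object DeviceState {
  sealed trait Command
  final case class RecordReading(temperature: Double) extends Command
  final case class GetLatest(replyTo: ActorRef[Double]) extends Command

  def apply(deviceId: String): Behavior[Command] = running(deviceId, latest = 0.0)

  private def running(deviceId: String, latest: Double): Behavior[Command] =
    Behaviors.receiveMessage {
      case RecordReading(temperature) =>
        // New telemetry replaces the in-memory value; no call out to a database
        running(deviceId, temperature)
      case GetLatest(replyTo) =>
        // Queries are answered straight from memory
        replyTo ! latest
        Behaviors.same
    }
}

object Example extends App {
  val system = ActorSystem(DeviceState("coffee-machine-42"), "DeviceStateExample")
  system ! DeviceState.RecordReading(92.5)
}
```

Because each such actor processes one message at a time and owns its own state, a read never has to wait on a round trip to a persistent store, which is the effect Brad is describing.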

Erik: And then the second term here is cloud native. And I think a lot of people have somehow struggled with this challenge of having some enterprise software that's been on the market for 20 years, and is now maybe being out-competed by some startup that has a cloud native solution, and is trying to make that transition. And that's an extremely difficult transition. I think people have a general sense that there is a difference, and this is a pain. But what does it actually mean to be cloud native, as opposed to maybe a traditional enterprise software, in terms of the performance of the system, or how the system is managed, or the cost structure of the system? Why does cloud native matter here?

Brad: One of the things that our industry does very well is coin terms and then adopt them if they look like they're interesting to people. And so we're seeing cloud native as a term come up in all sorts of areas which, I have to say, I'm not sure really qualify. So if we think about what the cloud means to people in general, we're talking about large scale, distributed infrastructure that is effectively operating as a single giant supercomputer for you.

And the beauty of cloud services that are provided by the hyperscalers is that you, as someone that is using one of the services, do not need to worry about any of the things behind the scenes that make it all work together. You don't need to know which servers your software is actually running on. There may be geo-restrictions where you need to know it's running in a particular region or a particular availability zone or whatever. But in general, you're not restricting things to a particular dedicated environment.

And in order to make software that is truly cloud native, you really need to have software that understands that the power of the cloud comes from its large scale, distributed, scalable nature. Moving an existing application from your own datacenter, putting it into a container or Docker containers, and then running it in the cloud is not actually taking full advantage of the cloud at all. That's not cloud native. That would be something that was written in maybe enterprise Java or Spring or something like that, and run in your own data center.

And just putting it in a container and running it in the cloud is not going to take full advantage of all the incredible services that the hyperscalers have available for you. It's not to say there's not good business reasons for doing it. I'm not implying that in any way. But just that it was not designed with an understanding of large scale distributed computing that you can take advantage of.

And Akka, which is Lightbend's core intellectual property, was designed for large scale distributed computing before it was even called the cloud. And that's one of the reasons that I'm very excited about what we've been doing and what we're going to be doing. For many, many years, the software has been built to handle very large distributed systems at scale, which is what the cloud gives you. Writing an application or microservices using our technology allows you to take advantage of all of the good things that the cloud can bring, automatically.

Erik: And then the next term is microservices. Why are microservices really changing how we can deliver software in the IoT context?

Brad: So let's take, for example, a coffee company. If a coffee company has a system that is monitoring all of the equipment in all of their stores, then there are going to be times of day where the systems are very lightly used. Overnight, the stores are all closed, so the load on the system would not be very high. When the doors open at 6am in the morning and everybody starts going to get their coffee, all the stores get opened, all the systems come up live. And any system that would be monitoring things like all the equipment in those stores is suddenly going to have a much, much, much increased demand, maybe as much as 100 or 1,000 times as much as they had overnight.

So in the old days before microservices, what you would do in terms of developing an application was you had to design it so that it was always ready for that peak, which meant that you were in a position where you were really wasting an amazing amount of resource because it was sitting there idle for a large part of the time.

With a microservices design, what a good enterprise architect would do would be to structure the software that was processing this to be microservice-based. And the scalability that would come from the system would mean that the microservice that was running the software monitoring, let's say, a coffee machine or a refrigerator would immediately understand that it needed to scale up to 1,000 of those same microservices, and that would happen automatically.

So if you design your system appropriately, these microservices can be discrete pieces of logic that handle a single function. And then if you need to scale a single function up or scale a function down, the system can handle that so that you're getting the optimum usage of your resources and the infrastructure underneath. And that's a really good example of what Akka, Lightbend's product, does very, very well: it supports this automatic handling of the requirement to scale up your microservices, and scale them down, on demand.
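
As a rough sketch of that scale-on-demand idea, here is what it might look like with Akka Cluster Sharding in Scala. The MachineMonitor entity and the IDs are hypothetical, not taken from any real deployment; the point is that one entity per coffee machine is created lazily when its first message arrives and can be passivated when idle, so the overnight footprint stays small and the morning surge fans out across the cluster automatically.

```scala
import akka.actor.typed.{ActorSystem, Behavior}
import akka.actor.typed.scaladsl.Behaviors
import akka.cluster.sharding.typed.scaladsl.{ClusterSharding, Entity, EntityTypeKey}

// Hypothetical per-machine monitor; one entity instance per physical coffee machine
object MachineMonitor {
  sealed trait Command
  final case class Telemetry(status: String) extends Command

  val TypeKey: EntityTypeKey[Command] = EntityTypeKey[Command]("MachineMonitor")

  def apply(machineId: String): Behavior[Command] =
    Behaviors.receiveMessage { case Telemetry(status) =>
      // Real monitoring logic (thresholds, alerts) would go here
      Behaviors.same
    }
}

object ShardingSetup {
  // Assumes a running Akka Cluster (akka-cluster-sharding-typed on the classpath;
  // cluster configuration is omitted here)
  def init(system: ActorSystem[_]): Unit = {
    val sharding = ClusterSharding(system)

    // Entities are created lazily on their first message and can be passivated when idle,
    // so the overnight load is tiny and the 6am surge spreads across the cluster
    sharding.init(Entity(MachineMonitor.TypeKey)(ctx => MachineMonitor(ctx.entityId)))

    // Messages are addressed by entity ID; the cluster routes them to wherever that entity lives
    sharding.entityRefFor(MachineMonitor.TypeKey, "store-17-espresso-2") ! MachineMonitor.Telemetry("OK")
  }
}
```

In practice this would typically sit on top of something like Kubernetes as well, so that the nodes themselves can be added and removed as the overall load changes.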

Erik: And then there's streaming data. So obviously, IoT use cases can have a lot of different data sources. They can have SAP data, they can have consumer data, they can have historical operations data. But streaming data seems to be the new value-add that really makes an IoT system unique. So how does managing streaming data differ from managing the other data sources that we've been controlling for decades now, to some extent?

Brad: Streaming data is something that we're seeing across many industries. But with IoT use cases, it's just so much more important. Because in many cases, what we're talking about is having devices that are issuing telemetry status constantly. And in many of these cases, the data being emitted is not something that you would ever need to store.

So for example, let's say that you have a device where you have a health check and you might have a heartbeat and that data would be providing current temperature, current geolocation, things like that, then that data would be sent to a service that was managing that particular device. And if we're talking about that device, maybe having hundreds, thousands, millions of those devices all emitting that data at the same time, then the central service needs to be able to process all that data.

And it's like a stream. If you think about a stream as a river, you've got a river of data coming through. And each part of that river, each emission from each device, needs to be processed. Even if that data is just saying, I'm fine, I'm fine, I'm fine, and you get a million I'm fines, but then there's one that says, I'm offline, or I can no longer see this, or there's some anomaly or whatever. That particular change in the stream of data is something that would trigger an event, and then an action that would need to be taken or flagged to an operator or something like that. But 99.9999% of the data just flows through the system and in many cases doesn't even need to be stored.

And so that's really what we're talking about when we're talking about streaming data. In the IoT context, it's typically that devices of all sorts have sensors and are emitting data at an incredible rate. That data needs to flow through a system and be processed. Some of it needs to be stored. Some of it needs to be actioned. But many of those things are just okay, I've read it, everything's good. Let it go, I don't need to do anything more.
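
To illustrate that flow, here is a small Akka Streams sketch in Scala. The Heartbeat type and the in-memory list are stand-ins for a live feed (MQTT, Kafka, or similar) and are invented for this example; the "I'm fine" messages are simply let go, and only the anomalies become alerts.

```scala
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Sink, Source}

// Hypothetical heartbeat message; a real feed would come from MQTT, Kafka, or similar
final case class Heartbeat(deviceId: String, status: String)

object HeartbeatFilter extends App {
  implicit val system: ActorSystem = ActorSystem("telemetry")

  val heartbeats = Source(List(
    Heartbeat("dev-001", "OK"),
    Heartbeat("dev-002", "OK"),
    Heartbeat("dev-003", "OFFLINE"),
    Heartbeat("dev-004", "OK")
  ))

  heartbeats
    .filter(_.status != "OK") // the million "I'm fine" messages are simply let go, never stored
    .runWith(Sink.foreach(hb => println(s"ALERT: ${hb.deviceId} reported ${hb.status}")))
}
```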

Erik: And let's try to not get too technical right now. But we've just covered a lot of technical trends in terms of how people are managing data today in order to create value in new ways. Where does Lightbend fit into this equation?

Brad: All of the sophisticated systems that people need in order to see value from their IoT investments require software development. It seems a little bit obvious maybe to say that. But we're talking about a combination of hardware and software that together will deliver the value to the end customer. And so that software needs to be developed. And the software for IoT has elements that need to run in the device, and then there's all the rest of the software that needs to process what happens.

So a classic example might be a digital twin, where you're building a software model of a physical device. And that is very complex software. And so what Lightbend does is provide the core architecture for building distributed systems, for building the server side, if you like, the cloud side of IoT systems. And we make it easy for developers to build that by not having to worry about all of the really hard things related to large-scale distributed systems that process streaming data at scale.

So all these things that we were talking about, those are what Akka handles. So the offering that we have is for developers. It's for people that are going to be building IoT systems, and it makes it as easy as possible for them to create the intelligence that makes an IoT system work effectively.

Erik: Where does your architecture end? So when your customers release their product to the world, are they running their product on your platform? When it comes to managing the devices in their network, are they managing those devices on your platform? Or are they managing the software, and maybe those devices are managed through a separate platform? What are the boundaries of your platform?

Brad: So we have two products: we have something called Akka Platform, and we have something called Akka Serverless. Akka Platform is something that our customers or systems integrator partners will take and run in their own environment, and they can deploy it anywhere they want. And we've got many examples of people running that in public cloud, private cloud, data centers; we even have some cruise lines that will run the platform on their ships.

So you can run the Akka Platform-based services that people develop basically anywhere at all. However, we also have an offering that we're extremely excited about called Akka Serverless. And Akka Serverless basically allows developers to have no worries at all about security, transport, databases, frameworks, because all of that is taken care of by our platform as a service. Now, this does run in the cloud. And this is what we see as really the future, where cloud-based services can provide the vast majority of the capability and the developers really don't need to worry about just about anything.

So with Akka Platform, you still have the requirement to complete your own system. You still need to have all of the DevOps and many of the things related to managing databases, and all of these things are things you still need to be concerned about. And in many of our use cases, our customers do that. But our new offering, Akka Serverless, basically allows all of that to be taken care of as a managed service by Lightbend. So the short answer is Platform can run anywhere, and Serverless runs in the cloud. And that is something that we see as increasingly interesting to most of our customers, actually.

Erik: You work with a lot of financial firms and so forth, and I suppose there, maybe it's fairly straightforward. But you're also working, for example, with John Deere, or with [inaudible 26:04], or a cargo tracking company. So then you get into more complicated scenarios where you have maybe teams that are responsible for engineering the product, which is a hardware product, and then teams building applications. So you have different teams that have different priorities and very different skill sets, who I guess all would have a stake in what's happening with your platform. So for the industrial IoT domain, who's typically going to be the buyer, who's going to be the owner of the platform in terms of determining how it's used, and then who are going to be the other users among the different stakeholders across the organization?

Brad: So software does not live in isolation, and as you pointed out, there are many parts to the solution. Let me give you an example that you referenced: John Deere. So John Deere has a service that they call Precision Agriculture, and basically, it's about improving crop yields. They have built a system that is an IoT platform, it's in the cloud, and it provides automatic guidance for planting and seeding and crop care. And it applies artificial intelligence in order to help guide the equipment.

So if you buy a combine harvester or tractor from John Deere, it's chock full of sensors and it's connected. You can subscribe to their Precision Agriculture service and it will take all of the data related to the weather, the information from the seed manufacturer’s historical data, and do the automatic guidance of the equipment, again, in real time.

And so that solution requires many different moving parts. But the core brains of it are something that John Deere developed actually in conjunction with a systems integrator, but there's typically an enterprise architect that will be in the position of designing the overall system that will then work with the teams that will put the actual software into the equipment itself. And they will work with the other parts of the business where they would integrate into things like customer management systems and billing systems and so on.

But the core business logic related to how we can apply this artificial intelligence and these machine learning models that we have in order to guide the equipment and help the farmers get these crop yields, it's whoever is designing that who is going to be using Akka to build that system. It needs to be able to scale. It needs to be fully distributed. It needs to be real time. It needs to process all the streaming data that we were talking about earlier. And so that enterprise architect is typically the person that would choose Akka.

Now, an enterprise architect is typically in the technical organization, and they don't necessarily have budget. And so the business case related to this is very interesting. In John Deere's case, it basically opened up a whole new revenue stream for them by being able to sell this Precision Agriculture service. It's bundled with the equipment when you buy it initially, but then you have to subscribe on an ongoing basis. So it opened up a whole new business for them. And obviously, that's not something an enterprise architect decides; that's a business owner that is looking at this opportunity. And so the economic buyer for many of the systems that people build with our technology is actually a business owner or a head of application development that has very sophisticated challenges. But the technical owner, the people that will choose our software, are typically enterprise architects, advanced developers, CTOs, and so on.

Erik: Maybe we can get into the question of how companies assess application development platforms. And we can also look at the value proposition of Lightbend: where you are superior, what factors you compete on. Because if I'm looking at anything from sensors to machine learning algorithms, you can just have them compete to some extent, and you can say, okay, this one seems to perform better in these circumstances. But with an application development platform, until you actually get in and use it, it's incredibly difficult to understand how one differs from another in terms of actual performance.

So what are the variables that you typically look at when you say, here's how we compare against maybe a handful of other platforms on the market? And then how do you actually go through that piloting process to show customers the difference before they make that investment?

Brad: One of the reasons that I'm very excited about what we're doing here at Lightbend is that we've got a lot of experience in working with large scale distributed systems, where the systems that people build with our technology are processing large amounts of data in real time. And the requirements that we see coming from enterprises that are looking to build IoT systems are ones where scalability, performance, and reliability are just essential. It's not optional that your system is up. You can't be down for maintenance for a 30-minute window when you're running most IoT systems. They need to be 24 by 7, they need to be super reliable, and they need to be able to scale.

We have many cases where the reason that people end up at Lightbend is that they have performance requirements that they know they can't meet any other way. And so we're not trying to suggest that the Lightbend technology is right for every single application in the world. That's not the type of development platform we are. But if you have requirements for large-scale, highly performant systems, then that requires data processing in real time or close to it.

One of our big customers did an extensive evaluation of every technology out there. They are building the next generation software-defined vehicle; they're in the automotive space. And they legitimately looked at just about everything out there, and they came to the conclusion that there wasn't another technology that could deliver the scale they were looking for, where they're talking about tens of millions of vehicles streaming live data concurrently. There was nothing else that could scale and handle it the way that Akka does.

And so not everybody has such extreme use cases, and they don't need to have such extreme use cases. It's just that if you have a requirement to build an IoT system, especially when you mention machine learning, if you want to be able to apply machine learning algorithms in real time as opposed to training models offline, then you need a highly scalable, highly performant system to do that. So typically, people find their way to Lightbend because they have high performance requirements, because they have high scalability requirements, because they have high reliability requirements. And in that environment, we compete extremely favorably with any other technology that's out there.

Erik: Then maybe it would be useful to walk through a few use cases, because I think that would also help people frame what kind of scenarios have these requirements and are therefore a good fit. So we've already looked at the John Deere one. Are there a couple more in the IoT space, or maybe the industrial enterprise space, that you have in mind?

Brad: So let me give you a couple more. Hewlett Packard Enterprise is obviously one of the largest technology companies. And one of the key product lines for them is called InfoSight. And InfoSight, they refer to it as AI for the data center. It's basically intelligence that is built into the HPE equipment. So every piece of HPE equipment that goes into a customer data center is packed full of sensors. And all of those sensors emit data constantly. And so this allows all of that data to be processed using an IoT platform that HPE has built on top of our technology.

So what we're talking about are literally billions of sensor events being processed every single day. And when those events are being processed, again, what they are looking to do is to apply their ML models that they have built and to train those models. But they're looking for anomaly detection. So for example, in a storage array, you might see an anomaly based on a particular disk drive having a sensor event that suggested maybe it's about to have a failure event. And so the InfoSight platform can alert the customer, can alert the operations team to swap out that particular drive before it fails. So that kind of capability is only possible by having an IoT platform that is managing all of those devices and that is able to apply machine learning to it.

We've got a lot of examples of digital twins; I mentioned digital twins a little bit earlier. We can talk about an example of a maker of very innovative mesh WiFi for consumers that is owned by one of the world's biggest brands. And they started off defining their business with an understanding that a key to their success was going to be having the ability to model the real world devices in software. And so they built digital twins for their mesh WiFi devices using Akka. And that is something that has allowed them to scale massively.

I'm actually a user of their equipment myself, and I can tell you, compared to your typical WiFi setup at home, this is so much more sophisticated. The data that you have available on your mobile phone in order to control and see what's happening in your environment is very sophisticated. And they have everything modeled in the cloud in a digital twin type of environment, so that if you need to make a change to the capability, for example, of the software that's running in the device, you just make that change in the digital twin and everything happens automatically. And so this scales to literally millions of devices. So we've got lots and lots of examples.
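
As a loose illustration of that digital twin pattern, here is a sketch using Akka Persistence Typed in Scala. The WifiNodeTwin commands, events, and state are made up for this example and are not the vendor's actual model; the idea is that each twin is an event-sourced entity whose current state lives in memory, and a change made to the twin is recorded as an event that the rest of the system, including the physical device, can react to.

```scala
import akka.actor.typed.Behavior
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.{Effect, EventSourcedBehavior}

// Hypothetical digital twin of a mesh WiFi node; not the vendor's actual model
object WifiNodeTwin {
  sealed trait Command
  final case class SetChannel(channel: Int) extends Command

  sealed trait Event
  final case class ChannelChanged(channel: Int) extends Event

  final case class State(channel: Int)

  def apply(nodeId: String): Behavior[Command] =
    EventSourcedBehavior[Command, Event, State](
      persistenceId = PersistenceId.ofUniqueId(s"wifi-node-$nodeId"),
      emptyState = State(channel = 1),
      commandHandler = (_, cmd) =>
        cmd match {
          // A change made on the twin is persisted as an event; in a real system the same
          // event would drive pushing the new configuration down to the physical device
          case SetChannel(ch) => Effect.persist(ChannelChanged(ch))
        },
      eventHandler = (state, evt) =>
        evt match {
          case ChannelChanged(ch) => state.copy(channel = ch)
        }
    )
}
```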

Erik: I think in terms of the 5G topic, we're reaching that point of fairly widespread deployment; we're getting past the early pilot phase. And I imagine in a situation like this, it might start to become interesting to look at mesh networks and so forth. Have you seen customers exploring new value propositions or new architectures using 5G on your system, or is it something that you think is still a year or two away? How has that impacted use cases today?

Brad: So the short answer is absolutely we have. We've seen that change over the last three years or so, and it's just accelerating. So 5G is now getting increased deployment, and we're now seeing the applications be built on top of it. Things that were previously only possible for devices that were connected through standard networking equipment can now be fully mobile. And the processing of data that you can do in a 5G network is obviously much more than in previous generations of 3G and 4G LTE technologies.

So the sophistication of capability that you can have now means the software can become much more powerful. And so there are many, many use cases where the fact that you can have more power at the edge drives the opportunity to build more sophisticated systems that can take advantage of the higher connectivity and the ability to transfer a lot more data between your central system, whether it's in the cloud or in your data center, and the mobile devices themselves. One of the reasons we're definitely seeing an increase in interest in building this class of more sophisticated IoT systems is that the connectivity and the ability to move data much more rapidly mean the sophistication of those systems can increase.

Erik: If we just quickly look at the business model, what would be the typical structure of this? Are you looking at some kind of fixed setup fee and then variables based on usage? Just talk me through this.

Brad: So for Akka Platform, it's licensed on a subscription basis. Our customers will get access not just to all of the software, but also to our expertise to help them with that software. It's done on an annual subscription basis. Our cloud service, though, Akka Serverless, is a typical platform as a service. So while we do have options for dedicated environments and so on, the typical environment is just cloud pay-as-you-go: you pay for what you use. And so we've got both models, an annual subscription model or a cloud service where you just pay for what you consume.

Erik: Anything I didn't ask you that I should have, Brad?

Brad: No, I don't think so. This has been fun.

Erik: I think this is really a useful overview, also to help people understand what this whole class of software looks like. And it sounds like you have a great solution. I'm sure you have bright days ahead as 5G ramps up and IoT scales overall. Maybe we can have a call in a couple of years; I'd love to see where you guys are by then.

Brad: I think that sounds great. And I very much appreciate your time, Erik.

Erik: Thanks for tuning in to another edition of the IoT spotlight podcast. If you find these conversations valuable, please leave us a comment and a five-star review. And if you'd like to share your company's story or recommend a speaker, please email us at team@IoTone.com. Finally, if you have an IoT research, strategy, or training initiative that you'd like to discuss, you can email me directly at erik.walenza@IoTone.com. Thank you.
