Ep. 040: Extracting value from data analytics
an Interview with Ed Kuzemchak of Software Design Solutions
Friday, October 12, 2018

Ed Kuzemchak of Software Design Solutions tells us what IoT really is and is not, and how to extract value from it. We discuss how Ed started his business on the premise of data analytics, and his view on the differences between consumer and industrial IoT from a data point of view. We also discuss where we are today in leveraging and extracting value from big data, and his 3 step process for path to product.

Ed Kuzemchak is the Founder of Software Design Solutions. Software Design Solutions provides consulting and embedded software development services.

Transcript

Erik: Welcome back to the IoT Spotlight podcast. I am joined today by Ed Kuzemchak. Ed is the Chief Technology Officer and the Director of IoT and Embedded Systems Engineering at Software Design Solutions, which is an Applied Visions company. Ed founded Software Design Solutions in 2003. And today, we're going to be discussing his company, their technologies, and their way of doing business, so we'll get into a couple of specific use cases. And we're also going to talk about some topics that Ed personally is interested in around big data and analytics, some of the challenges facing companies in the adoption of IoT, and also securing IoT devices.

So first, Ed, thank you so much for joining us for the discussion today.

Ed: Erik, thank you for having me on.

Erik: I gave a quick introduction to yourself, but can you give us a little bit more breadth on your background prior to founding Software Design Solutions?

Ed: Sure. So I started in the embedded systems industry in 1988. I spent some time at a defense contractor, Raytheon Company. And from there, I went to a small startup that was spun out of Carnegie Mellon University here in Pittsburgh, which was later acquired by Texas Instruments. I spent a number of years at Texas Instruments working on digital signal processing, deep embedded stuff, working with a lot of Fortune 100 companies doing audio signal processing and video signal processing. And that's really where I got a feel for what kinds of things people really care about in the deep embedded, hard real-time environment.

And then I founded Software Design Solutions in 2003, focusing on industrial control, medical systems, transportation systems, and a little bit of defense systems. And a lot of what we do, since the beginning really, is now called the Internet of Things. But back at the time, that term didn't exist; it was called machine-to-machine communications, or perhaps it was called industrial control, or it might be called systems control. And it all had to do with a host computer of some kind connecting to a piece of machinery, retrieving data from it, processing that information, and making that piece of machinery more efficient.

I mean, really, if you start to look at it, that sounds a lot like what the Internet of Things is doing today, just adding this cloud layer. And so as the Internet of Things got a little more baked a few years ago, we started looking for the proper partnership for Software Design Solutions, because we didn't have a strong cloud angle. And at the same time, Applied Visions, our sister company now, was looking to enter the embedded market and form an embedded Internet of Things company. And so that really brought the two companies together, where Applied Visions is really focused on cloud and mobile applications and large-scale deployments, and Software Design Solutions continues to focus on the embedded side: sensors and gateway communications and sending data up to the cloud, where our sister company will tend to take over at the cloud level and do the big data analytics.

Erik: I have to admit, despite having the word IoT in the name and logo of our company, I still find the term a little bit flexible. It can mean a lot of different things. But would you define it then as the combination of embedded solutions with cloud technology as the fundamentals of an IoT solution, and then maybe with machine learning or other application-specific technologies integrated into solutions on an as-needed basis?

Ed: I agree with you there, Erik, that IoT means a lot of different things to a lot of different people. And just like Web 2.0 started to become the label for anything anyone wanted to call new about 10 years ago, IoT is also being painted with that same brush, where folks are saying, oh, if I can connect this thing to the internet, somehow I can call it an IoT device.

I think that IoT specifically means that you are getting some value out of the fact that you're retrieving this data and processing it in a different way. It's not just taking the data that you always could have processed locally and now processing it up in the cloud; that doesn't really add a whole lot of value. But if you can now take that data and get additional value out of the fact that you can do much more processing on it than you had capability for before, because you have much more processing horsepower on the cloud, or you can join that data with other data, I think that's the real definition of IoT: collecting multiple pieces of data from multiple sources and doing that analytics. It doesn't have to be up on the cloud, but the cloud is just the convenient place to do it at this time.

Erik: And if you look a little bit literally at the name, the Internet of Things, really the internet, the value out there is that ability to integrate data from multiple sources, and then to also access applications from, again, multiple sources as needed to process that data. And then to be able to provide those results to basically whoever needs them in whichever location, as long as they're on an internet-enabled device.

Certainly for IoT, I would say in most cases that's not realized yet. We're often looking really at more M-to-M solutions, and we call them IoT. But if we look at situations where you're working with a customer, and you're really bringing in data from multiple sources outside of the organization, or across department lines, accessing third-party software, are there a lot of cases where this is the reality today? Or do you also, in your engagements with customers, typically work at more of an M-to-M or a siloed approach, with maybe a more complete IoT solution being some phase two that may be one year or three years or five years in the future, but not really realized today?

Ed: I think it depends a little bit on the customer and the solution. But let me give a couple of broad examples. First of all, in the industrial space, and what I mean by industrial is a manufacturing facility, and I'll even loop transportation into industrial as well, these kinds of systems have had M-to-M capability for quite some time. Now, maybe there wasn't a whole lot of two-way transfer of information, and there wasn't a whole lot of aggregation of information. But industrial machinery has had some amount of sensors added to it for simple monitoring. I mean, that's what SCADA systems were.

And from there, I think those types of customers are ready to take the next step and start to integrate information from multiple pieces of machinery, integrate information from multiple manufacturing facilities, so that you can start to get dashboards that view data across many different locations and compare that information for trends. I think that those customers are ready to take that next step. I think that it's not a given that it always will happen, because you have a lot of things to consider: you have a lot of connectivity things to consider, you have a lot of security things to consider.

I think that in the consumer space, it's much different. They leap all in one shot to, oh, we've got to just have this data immediately up on the cloud. And often, that data is not useful at all locally. If you go buy yourself a piece of consumer electronics for your home that claims to be internet-connected, often you will find that you don't get any use out of that information at the local display. Or even if you connect to it with your laptop, the only display is up on the cloud, and you are immediately pushing all that data right up to the cloud.

I think that's one of the differences between industrial and consumer IoT: consumer is taking this one leap all the way to the cloud because they want your data, whereas industrial already has local data reading happening. Local processing has been happening for many years. They're trying to go across many different pieces of machinery and across many different installations and utilize the comparison of that information from one machine to another.

Erik: With industrial, the product you're offering is solving a problem, and that's what people are paying for. And that's how you generate your revenue: by providing the solution that solves that problem. And in consumer, certainly, the people are generally the product. So the solution is more of the Trojan horse to provide enough value to get that data, but then that data becomes the product to another set of customers.

Let's maybe take a step back. Who are your customers today? I'm sure this has changed somewhat since the acquisition in 2016. And when I ask who, in addition to what the industries are, it's very interesting to understand who they are within the company. So who would be the decision makers, the influencers, and the people that you're really engaging with on a regular basis?

Ed: So our customers vary quite a bit in terms of their industries. But generally, in the transportation industries, we have customers in logistics and transportation, and we have customers in the safety portions of transportation; in the industrial space, we have customers in oil and gas, we have customers in manufacturing efficiency, and we have customers in building efficiency. And so these kinds of customers are all looking to increase the efficiency of their system, increase the safety of their system. And so the types of contacts inside those individual customers are usually at the director of engineering level. Sometimes our first contact is at the C-level, maybe a CTO or a CEO.

But primarily, I mean, we're working with these directors of engineering. Because one of the interesting things about being in our business and our business model is we're a software consulting house. We're not a body shop. We're not placing a person at a desk for 40 hours a week and telling the customer here's your person. We're bringing projects into SDS, working them, completing them and delivering them back to the customer, sometimes completely independently, sometimes hand-in-hand with the customer's own team.

The latter is the more usual case, where we're providing some expertise in embedded systems or in internet connectivity or in communications that the customer specifically doesn't have. The customer side of that equation is that they are providing domain expertise. I just named a whole bunch of different industries, whether it is transportation safety, or building efficiency, or oil and gas metering; those are very different industries. We don't claim to have domain experience in each one of those industries. What we do have is general embedded systems, real-time, communication, and security expertise. If we leverage that along with the customer's domain expertise (our customer in transportation safety understands all the rules and regulations about transportation safety), we can provide the broad-based knowledge of how to build an efficient embedded system and deliver that system on time and within budget.

Erik: So there's a good number of IoT application development platforms whose value proposition, to an extent, is to allow companies that have deep vertical or domain expertise to use the platform to address some of these other areas they have to cover: how do we connect devices on the edge? How do we share information across systems?

I think the proposition here is to be able to build these IoT solutions themselves, without either developing a large in-house team that understands multiple levels of programming, and cybersecurity, and all these other topics that are going to be required, or outsourcing to or working with an external partner. And I know this is, to an extent, maybe a competing approach: you can work with people, or hypothetically you could work with a platform.

Let's say you're not exactly an objective observer here, because you certainly have a business at play in this space, but what is your opinion? Because I'm sure you do keep a careful eye on these application development platforms, maybe even just to use them to increase your internal efficiency in terms of your workforce. How do you look at these platforms? Do you think that they really provide value today? Are they a threat to your business? Or are they more like, potentially, an efficiency improvement tool that your team might use to help you execute projects and focus more on the top challenges that really require a human mind?

Ed: I have been attending the IoT conferences for three, four years now, probably longer than that, actually, and I see all of these different platforms out there. And I think that they have a place and a purpose in building demonstrations. They have a place and a purpose in building proofs of concept. But consider these platforms much like the business intelligence dashboard builder platforms that were out there 15-20 years ago for building business applications. Those didn't put the custom business application developers out of business, and these won't either.

What I mean by that is it was very quick to put up a dashboard that gives you this average and this trend and this little graphic of your sales across the regions. But when it came time to build a production system that is specific to the customer's needs, you really had to go to a custom application.

The other side of that is, I think that from a data analytics perspective, those platforms are getting very mature. I do think that up at the cloud level, building these dashboards and those kinds of things to view some of that data, they can go quite far, until you get to the specific thing the customer absolutely needs to see that isn't in the platform. Now, down at the sensor level, the problem with using a generic approach is that it doesn't cost-reduce well. And what I mean by that is, in the end, these kinds of platforms often have to get the cost of the resulting hardware that needs to go on the system down. And sometimes you're talking about situations where $0.10 or $0.20 means a lot in the difference between one platform spin and another platform that you could potentially use.

And so having a platform where you can very quickly provide a prototype and do some sensing, but it requires a $40 or $50 Raspberry Pi to do the sensing, and that Raspberry Pi has to be plugged into a wall, that's very different from the customer's end desire that says, oh, I need this thing to run on a coin cell for 10 years, and it can't cost more than $4. That is where our expertise comes in. Because we can build those kinds of systems where we've power-tuned the end result. We have hardware design staff on board here who can do the hardware design and get that bill of materials cost down.

And I think those two different approaches have a purpose throughout the lifecycle. We very often get involved with customers at the proof of concept kind of level where the engineering manager is presenting the idea to the C level or to their customers. And we'll use an off the shelf piece of hardware and provide a quick prototype so that we can see the concept of the IoT system, then go forward with okay, now we need to take this thing from a $70 bill of materials down to a $7 bill of materials. And here's how we would get there.

Erik: And for platforms like a ThingWorx or Ventec, these types of platforms, which are software platforms intended to connect disparate systems, do you also see this playing a similar role on the platform side, where they're good for either supporting a team or for mocking up systems quickly, but not necessarily for scaling a highly cost-efficient, optimized system? Or how would you look at this type of system? Maybe I can also ask, do you work with these? Do you use ThingWorx or Ventec, or any of these other software platforms on a regular basis as the basis of a solution that you're building for a client, as opposed to building something more from the ground up using different functional pieces?

Ed: So I've taken a pretty close look at ThingWorx, and we've looked it over; we haven't used it in a production system or even a proof of concept at this point. I think that system has some good points to it. For us, with any of the prototype systems where we've tried these drag-and-drop builders, often the heaviness of the platform is what got in the way of us using it even for a prototype. Now, what I mean by that is the amount of stuff that we needed to have in place just to get it to work.

And we don't want to confuse the customer about what they're going to need in the end by saying, oh, we have all this stuff to build the prototype. And they're like, well, do I need all that? Well, no, you don't; you need this small $7 part. Particularly in industrial, a lot of our end sensors need to end up very inexpensive because there are going to be so many of them deployed.

Erik: At least from our standpoint, we divide customers into three categories. One would be the pure IoT technology providers; really, their core business is around providing some technology into an IoT solution. The second would be companies like a traditional German machine builder, for example, that's very much in the M-to-M space, maybe providing some hardware and software solutions. But they're now looking at how they can transform their existing product portfolio into a more integrated or connected solution, perhaps add intelligence, perhaps have a stickier relationship with their customer by putting some data analytics or other applications on top of the hardware that they're providing.

And then the third category would be a Volvo, a company that's completely out of the IoT domain, but whose industry is being impacted, whether in their vehicles, how they're manufacturing, or their supply chain. Do you work with all three of these? Maybe you don't even see the industry or the market in terms of these three categories, but who would be some examples of very typical customers for you?

Ed: Certainly, I think the last two are very typical customers for us. The first one we do a little bit of. But let me give you some examples of what the last two would be. So first of all, we work with customers who are building equipment; that would be your second example, which is customers who are in our space: customers in the oil and gas industry, customers in the rail transportation industry, customers in the defense industry. And these customers are building pieces of equipment, and they need to provide internet connectivity in these systems to enhance the capability of those systems.

In the third category, these are the end users of that equipment. And that might be someone building a particular end device. We have customers in the retail industry doing that, providing inventory management systems. We have customers in the security industry. We have customers in those kinds of end industries that aren't building any equipment, but are just bringing together a system of, hopefully, if they can possibly do it, off-the-shelf components, and fielding a system that way.

The majority of our long-term customers are in that second category, which is the building of equipment and making that equipment more efficient for their end customer. The end customer in, say, the oil and gas industry is interested in the data. They're not interested in how the data is being gathered and sent and collected. But they are interested, obviously, in the data that they can receive from their system to make sure that their oil and gas distribution system is performing properly, that it is performing safely, and that they're getting the best amount of efficiency that they can out of it.

Erik: The oil and gas example is maybe a great example, because you have a big opportunity, I think, in the space to achieve efficiencies or to alter how processes are done. You also have a lot of challenges: challenges related to safety, related to remote operating environments, very high cost of downtime, etc. If you want to use a recent example, or maybe just a typical example, then let's dive into some of the main challenges that you commonly face that some of our listeners might expect to face if they're also going to be deploying a solution of a similar type.

Ed: We'll take oil and gas as the example. In any kind of industry where you have remote systems, you run into a lot of interesting issues with communications. Think about the difference between that and a consumer environment: in a consumer environment, you can expect that the consumer in their home has high-quality WiFi that is always there. And by the way, if your automated doorbell or the camera monitoring your pet goes offline for a few seconds, the world's not going to come to an end.

But in a safety-critical environment, you have to have these systems up. And so what they have to be able to do is run disconnected; they have to be able to continue to be autonomous. But at the same time, we want to have the ability to send data whenever communication is available. And that communication might have limited availability because of the terrain that you're in; you might have limited communication in terms of the amount of bandwidth that you can use.

There are a lot of great communication systems being developed out there with LoRa and Sigfox and all of these LPWAN kinds of communications, these low-power wide-area networks. But at the same time, there are still many parts of this earth where those systems are not going to reach, and sometimes the only thing you have is satellite communication, which is very slow and very expensive. And so you have to be able to have the system continue to run completely disconnected and independent, just like it did long before the Internet of Things came around. Then, as communication is available, with however much bandwidth you have and however much reliability you have, send that data up to the cloud so that it can be processed.
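As a rough illustration of that run-disconnected, send-when-you-can pattern, here is a minimal store-and-forward sketch in Python. The `send` callable, queue size, and error handling are hypothetical stand-ins for whatever a real uplink would provide, not a description of any SDS system:

```python
import collections

class StoreAndForwardBuffer:
    """Buffer sensor readings locally; push them upstream only when
    a (possibly intermittent) link is available. The system keeps
    running even when fully disconnected."""

    def __init__(self, send, max_size=10000):
        # 'send' is any callable that raises ConnectionError on link failure.
        self.send = send
        # With maxlen set, the oldest readings are dropped once the buffer fills.
        self.queue = collections.deque(maxlen=max_size)

    def record(self, reading):
        # Local operation: always succeeds, regardless of connectivity.
        self.queue.append(reading)

    def flush(self):
        """Try to drain the queue in order; stop at the first link failure.
        Returns the number of readings actually delivered."""
        sent = 0
        while self.queue:
            try:
                self.send(self.queue[0])
            except ConnectionError:
                break  # link went down; keep the remaining readings for later
            self.queue.popleft()
            sent += 1
        return sent
```

A caller would invoke `record()` from the sensing loop and `flush()` opportunistically, whenever the satellite or LPWAN link reports itself up.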

I would say there's a lot of discussion around how much data should be sent up to the cloud and how much should be processed locally. Two weeks ago, I did a presentation at a machine learning DevCon about processing data locally without sending it all up to the cloud: doing machine learning not at the edge, but in what we call the fog. Which is to say, do your processing, do your machine learning locally, because you can't afford to send all that data up to the cloud, and then send the results up to the cloud for further processing, further coordination, and interpretation with other data.

But there are a lot of industrial systems, and these remote systems are a very good example, where you just don't have the reliable high-speed communication up to the cloud to send all your data up there and run all your big algorithms on the cloud. You might have to run a fair amount of them locally and send the results up to the cloud. And I think we're starting to see that happen more and more, once people get past their first cloud connectivity thoughts of, oh, I'm just going to send everything up to the cloud and do the processing up there. They realize they can't get it there. They realize cloud processing isn't as cheap as they thought it was going to be. And they realize they don't need all that data up on the cloud.

I gave an example that if you're sensing a temperature every half second, that's something on the order of 170,000 samples a day. And so you don't want to be sending all that up to the cloud. You don't need to send that up to the cloud. You can do a lot of interpretation locally and send results up to the cloud.
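To put numbers on that example: two samples a second works out to 172,800 readings a day. A hedged sketch of the "interpret locally, send results" idea might reduce those raw readings to hourly summaries, so the cloud receives 24 records a day instead of roughly 170,000. The summary fields below are illustrative, not taken from any particular deployment:

```python
SAMPLE_PERIOD_S = 0.5  # one temperature reading every half second

# Raw volume if everything were shipped to the cloud: 172,800 samples/day.
samples_per_day = int(24 * 3600 / SAMPLE_PERIOD_S)

def hourly_summaries(readings, samples_per_hour=int(3600 / SAMPLE_PERIOD_S)):
    """Collapse raw temperature samples into one summary record per hour.
    Each record keeps min/max/mean, which is often all the cloud-side
    analytics needs for trending."""
    out = []
    for i in range(0, len(readings), samples_per_hour):
        chunk = readings[i:i + samples_per_hour]
        if not chunk:
            continue
        out.append({
            "min": min(chunk),
            "max": max(chunk),
            "mean": sum(chunk) / len(chunk),
        })
    return out
```

The same shape works for any downsampling policy; the point is that the reduction runs on the local microprocessor, and only the cooked records cross the expensive link.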

Erik: Even in the examples that you've given, it sounds like the assumption is that the data can be pulled out, and sending it up to the cloud is a bit more of a cost consideration; it's around being able to process that data somehow in a cost-efficient manner. I suppose those applications are more around business intelligence: being able to access this data and then, sometime in the future, make business decisions based on that data, or maintenance decisions. Are you seeing more applications now where there is the need for real-time or semi-real-time decision making in these operational environments, an oil and gas field, for example, where that processing power is being brought down into the fog? Are you seeing that already? Or is that still technically challenging to get working right now?

Ed: Well, I think it's kind of the other way around. That processing had to have been done before anyway, and so it was always done locally. And so there was always some little real-time system, and maybe it was a little microprocessor doing that real-time work. And then the Internet of Things came along, and people said, oh, we'll just start pushing a lot of this data up to the cloud and doing that work there.

And so I think what happened here was there was this disconnect: we had to have real-time processing. You have to be able to close your real-time loops in microsecond kinds of timeframes, whether it's a petroleum meter that needs to stay on board, or a pump inside of a cooling tower of an electrical power plant that needs to stay on board. But now you have to start segmenting out what amount of that processing doesn't need to be real-time, and what amount of that data can be used valuably if we can combine it with other data and start to correlate it together.

So, if we have a vibration monitor on a pump or a temperature monitor on a bearing, it's important to be sensing that locally. But you really would like to be able to correlate those two values together. And instead of doing that correlation locally, which was our only option before, that kind of information can be sent up to the cloud. We like to think of these three tiers of local processing, fog-based processing, and cloud-based processing in terms of their latency, where local processing can happen in microseconds, fog-based processing can happen in milliseconds, and cloud processing you really have to think of as happening in seconds.

And it's okay to have your correlation of your vibrations and your temperatures, and potentially other things like the history of that machine, happening up at the cloud and taking a second or two. There's value in being able to gather all that information about this machine, its history, and other machines in other plants around the world, and bring all that data together. And maybe you're even running some deep learning on that up at the cloud; there's real value there. But the actual closed loop of running that machine in real time has to stay down at the machine.

And that's where SDS has been working in that environment for 14 years now, building those embedded real-time systems. And so that's how we can come into this with the understanding that we're going to continue doing that real-time work down on the machine, we're going to start to send valuable pieces of information up to the next level at the fog, potentially doing some work there, and then take cooked results from that and send them up to the cloud for correlation and any kind of expert system.
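As a toy illustration of that fog tier, one could imagine a gateway correlating a vibration stream with a bearing-temperature stream and forwarding only the cooked result upstream. The threshold, field names, and use of plain Pearson correlation are assumptions made for the sketch, not a description of SDS's actual processing:

```python
import math
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient for two equal-length series.
    (Assumes neither series is constant, which would make the
    denominator zero.)"""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def fog_correlate(vibration, bearing_temp, threshold=0.8):
    """Fog-tier step (millisecond budget): correlate two local sensor
    windows and produce a small cooked record for the cloud, instead
    of shipping both raw streams."""
    r = pearson(vibration, bearing_temp)
    return {"correlation": r, "flag": r > threshold}
```

The real-time control loop never waits on this; the gateway runs it on buffered windows and sends the resulting record on whatever schedule the uplink allows.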

Erik: A lot of these real-time systems, my understanding is that by necessity they're more or less automated logic. So it's just a block of logic that is programmed in, and is therefore very stable, but also lacks access to the types of machine learning that are now coming onto the market and are really exciting in terms of their potential to add intelligence and improve decision making in a fairly automated way, instead of hard-coding decisions into a process and then maybe updating that on a monthly or biannual or, in any case, periodic basis.

Maybe we can take this in a couple of steps. One would be: where do you see we are today, in general, in terms of being able to leverage this "big data" that people have been pooling for years? Where are we in terms of being able to actually leverage this and extract value from it using machine learning or other methods? And then two: as we increase our ability to actually use this data to create insight, will that continue to primarily be done on the cloud? Or are we already seeing the ability to move some of this more robust processing down towards the edge?

I know we're certainly seeing companies come out with chips that are more specially designed to enable this, but these seem to be still somewhat in the early stages of a direction, as opposed to mass adoption. Where do you see us, then, in terms of maturity, and also in terms of where this capability stands in the stack?

Ed: It's certainly the case that, up to this point, and I would say this point is maybe the last year or so, sensors have been fairly primitive. And in industrial, that may continue to be the case for a little while, because those sensors that are used in industrial have a lot of environmental requirements placed on them. And what I tend to call the edge device in an industrial setting is something like a temperature sensor or a vibration sensor. And those are generally non-programmable; they are very fixed-function. Then the next level up is the little aggregator of that data.

Now, I think that previously, those have been very simple little microprocessors aggregating the data, maybe doing some simple filtering, and sending the data along to a SCADA system. And the IoT answer to that was: fine, send it to the SCADA system for real-time control, but then also send it up to the cloud for longer-term storage and analytics. But I think it is true that there's been an awful lot of work done recently on pushing some of the processing that you might do up at the cloud down to the edge sensors.

I think that in some domains that might make sense; you are certainly increasing the cost of that sensor significantly. And it'll probably be quite a while before you have a processor capable of doing a convolutional neural net, for example, in a vibration sensor that can stand 200 degrees Fahrenheit and a bunch of vibration.

And I think that one of the very important examples, of course, is the autonomous car, where the sensors on an autonomous car have to be very smart. All the processing on an autonomous car has to happen on the car. It has to happen in real time. And any amount of intelligence that you can push outward, to the sensors, to the LIDAR, to the radar that's on that car, means that less processing has to happen on the GPUs and those kinds of things that are also on board the car.

I mentioned in my talk two weeks ago that the autonomous car is a perfect example of fog-based processing, where you have very smart sensors, you have very high-performance processing on board locally at the car, and very little happens up in the cloud. Think about it: these autonomous cars are cloud-connected, but they're not relying on the cloud to do any of their driving. They're getting traffic updates, and software updates and tweaks, and that kind of thing over the cloud. But obviously, the cloud does not have to be connected for the car to drive.

Erik: That's an excellent example of a confluence of requirements and a form factor that allows us to actually afford to put those requirements into a product that people are willing to pay for. One of the barriers that we're certainly going to encounter once we start seeing fleets of connected vehicles operating in our streets is security. Even if the processing is primarily done locally on the vehicle, this connectivity to the cloud will certainly allow a lot of opportunities for malicious actors to influence the system. In terms of being able to secure these, whether it's an industrial system or a vehicle on a city street, these systems that are basically combining the capabilities of the internet with real heavy assets that can do serious physical damage, where do you see we are today in terms of being able to provide a robust level of trustworthiness in these systems?

Ed: I've watched us evolve over the last several years, and I think we're making progress in terms of security. One of the big things that I think has improved: two or three years ago, when I would attend these IoT conferences, the general feeling being presented was, security is very important, we don't really have all the answers yet about how we're going to secure these things, but security is very important.

Now, what I'm seeing at the last several conferences I've gone to is that all of the chip designers are very serious about providing the capabilities, because it really has to start at the hardware level. Security is a weakest-link problem. For a good deal of time, we had a lot of the knowledge from securing everything from servers to laptops to phones. But we had this basic problem that, in the end, an IoT system was a very low-budget, low-powered piece of hardware that was never meant to be plugged into the internet and opened up in the way that it has been.

Now you see all the hardware vendors, whether it's the STs or the Arms or the Texas Instruments of the world, providing hardware security baked into the processor, everything from trusted platform modules to secure boot. And once you have that basic hardware layer of security, we have the building blocks to build a secure system on top of it. Now, it doesn't automatically get you a secure system. You still have to do all the right things. In other words, just because you buy a processor with secure boot, you still have to actually enable it and sign your application to make sure that the application running there is the actual application you meant to have running there.
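Ed's point, that the hardware feature alone isn't enough and you still have to sign your image and enable the check, can be sketched in miniature. This is a hypothetical illustration, not any vendor's API: real secure boot verifies an asymmetric signature (RSA or ECDSA) against a public key fused into the silicon, but the underlying "measure the image, refuse to run anything that doesn't match" logic looks roughly like this:

```python
import hashlib

# Hypothetical sketch of the check behind secure boot. A trusted SHA-256
# digest, provisioned at build time, stands in here for the hardware root
# of trust that a production part would hold in fuses or boot ROM.

GENUINE_IMAGE = b"application-image-v1"                     # the firmware we built and signed
TRUSTED_DIGEST = hashlib.sha256(GENUINE_IMAGE).hexdigest()  # provisioned into the device

def boot_allowed(image: bytes) -> bool:
    """Measure the candidate image; refuse anything that doesn't match."""
    return hashlib.sha256(image).hexdigest() == TRUSTED_DIGEST

assert boot_allowed(GENUINE_IMAGE)          # the intended application is allowed to run
assert not boot_allowed(b"tampered-image")  # a modified image is rejected
```

The lesson from the conversation carries over: even with this check available in hardware, nothing is protected until you actually provision the trusted key or digest and sign every image you ship.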

So I think a couple of things have happened. First of all, the hardware vendors have really come along and provided the capability in the hardware. And customers are very attuned to this. Of course, it only took five or six very serious events like the Mirai botnet to bring this into the news. Customers are starting to make security a solid requirement in their systems, saying, look, this has to be there. And they are now realizing, oh, you're going to bring this system in and put it on our network; let's get our IT people involved. I think that's a double-edged sword, because securing an IoT device is not like securing a server room, and sometimes that disconnect happens with the IoT teams.

When they come in, they're like, well, we don't really know how to specifically define the requirements if it's not going to be a big Cisco router. We understand how to secure those, but we don't understand how to secure this microprocessor that you're going to put on our network. And so there's sometimes an education process and a shared learning that has to go on there.

Erik: Is there one example of a project that you could walk us through from, let's say, inception to completion? Who did you begin the discussion with, how did you spec out what was being built, what were the challenges in communicating the process internally, and so forth? I think it would be interesting for people who are looking at building their initial solutions in this space to understand a bit more what the process will actually look like when they pull the trigger and decide to devote resources to developing a similar solution.

Ed: So, I'll take an example in the retail space. We were approached by a company looking to build a system that would do automated inventory management in a retail store. The company had a unique sensing technology and was building the hardware for it. We got involved to help them select the proper hardware: hardware that would not only do the sensing they needed, but also give them some headroom for capabilities they would want to add later, as well as the level of security they would need in the system and the capability that would be needed later when the system started to scale.

So like many startup-type projects, it started with simple prototypes. We built simple prototypes around off-the-shelf pieces of hardware, going back and forth with this company as they developed their sensor technology and we developed the software around it: some image processing, some specific lightweight machine learning that needed to be done, putting that proof of concept together.

As that was put in front of their stakeholders, we got a lot of feedback about what was valuable and what wasn't valuable in the system, and spun that back around through several proof-of-concept turns, all the while focusing on where we would be going with this capability and what was possible to achieve in a reasonable amount of time. It's always important to have working stuff that you can put in front of someone.

We aren't really fans of cooking up a great set of slides without having something we can show actually running on a piece of hardware. And so we took this to one of the biggest retail trade shows with that company and their industry partner, a very large company in the semiconductor space that was supporting them.

And from there, we went to the next level of solving the rest of the problems, which means now we have to cost-reduce the system. And those are the steps you take: you build the prototype and prove out the technology. Not that you're not worried about cost in that prototype, but you know it's not going to be the end solution. I mean, it's okay for these proofs of concept to cost 10, 20, even 100 times what the end system would cost, as long as you know you can get to the final cost in your system, because it takes a lot of work to cost-reduce a system down. But as long as you know you can get there, you're okay.

So we're now at a point where this system is being rolled out for trial use in retail stores, and it has been quite successful in its initial rollouts. Our final phase is doing the work for scalability. It wasn't that we didn't look at scalability; we just didn't address it right away. So now we are taking the system and saying, okay, we're going to be in potentially hundreds or thousands of stores, processing millions of pieces of information a day, and now we need to build a scalable system. At the individual sensor level, that's really no different; it's now a cloud architecture problem.

Erik: If we were to look, on an order-of-magnitude basis, at the cost involved in a solution similar to the one you just described, to get a prototype out, to get a working cost-reduced product out, and then to get a proper plan to scale the solution across an organization, where should a company be thinking in terms of budget? And I'm not asking about your fees as a service provider, but generally: if a company is looking at getting a prototype, getting a proper product, and scaling it, can you give me a rough order of magnitude of where those three decision points might reside in the budget?

Ed: I can give you some idea. Our feeling is that a proof of concept should be pretty quick, and what I mean by pretty quick is a month or two, often using off-the-shelf pieces. During that time, there's also an awful lot of work being done helping the customer understand what they really want to build as the actual prototype, which is the next step.

And that's a month or two of an architect, potentially some hardware design involvement with the customer and, of course, some software hands as well. But I think that really has more to do with the customer's attention span and the customer's need for speed in the prototype. If someone decides they want to do something in the Internet of Things, they don't want to launch a year-long project before they see anything; they want to start seeing results right away.

So I usually try to lead someone down a three-step path: a very quick proof of concept, say two months, and then a much more thought-out prototype, which isn't the cost-reduced version, but is the version you can start to use to actually define the rest of your system around. That is often a combination of off-the-shelf hardware and custom hardware, unless the customer already has some custom hardware they can use. But if they're starting from scratch, that next level, the prototype, which might be six months, is going to be a significant hardware design, though not necessarily the final cost-reduced system. Because that's an expensive process, and it's a process where you're starting to set in stone a lot of decisions that are harder to undo later.

Erik: You're kind of running the business that I wish I was running. We take a lot of projects into the ideation stage, and then we basically hand them off to companies like yours, or in some cases to a technology provider that wants to work hand-in-hand and help develop something. Are there any other points about this business you're in that you'd like to touch on, that you think would be particularly valuable to our listeners who might also be potential partners or potential customers of yours?

Ed: Well, I just want to mention that I think the types of services that you provide, Erik, are key to this. Someone needs to be out there helping people understand what's possible, and conversely what's not possible, with IoT, and driving that excitement so people can say, oh, there really is something in my industry that I can do with IoT. And once those ideas get going inside a company, certainly working with folks like you, they can start to form the business case. Then of course, absolutely, bring them to us, and we'll get them started down the path to a product.

Erik: Certainly, it's an ecosystem; we all have our roles to play here, and no one company, even the Siemens and GEs of the world, is going it alone, so we certainly won't either. Ed, for somebody who's listening who could be a potential customer, a potential partner, or just somebody who wants to have a conversation and learn more about the space, what's the best way to get in touch with you or your team?

Ed: So the easiest way is come visit our website at softwaredesignsolutions.com. There, you can read a number of our blogs. We'll have a link eventually to this podcast, of course. And there, you can see what kinds of projects that we're involved in, you can read about our services. From there, you can contact our team. We'll get right back with you and we'll start a conversation.

Erik: So we will put that certainly in the show notes. Ed, thank you again for taking the time. I really appreciate it.

Ed: Thank you, Erik. It's been very good talking with you.
