EP062 - Advanced machine vision and deep learning systems - Iain Smith, Managing Director, Fisher Smith

Apr 07, 2020

In this episode, we discuss the use of deep learning to accomplish tasks that are not possible with traditional rule-based systems. We use two cases to illustrate how deep learning can solve recognition problems that defeat traditional approaches, sometimes within hours.

This is part 2 of 2 with Iain Smith on machine vision.

Iain Smith is Managing Director and Co-Founder at Fisher Smith. Fisher Smith designs and supplies machine vision systems for automatic inspection and identification of manufactured parts on industrial production lines. https://fishersmith.co.uk

_________

Automated Transcript

[Intro]

Welcome to the Industrial IoT Spotlight, your number one spot for insight from Industrial IoT thought leaders who are transforming businesses today, with your host, Erik Walenza.

Welcome back to the Industrial IoT Spotlight podcast. I'm your host, Erik Walenza, CEO of IoT ONE. Our guest today will be Iain Smith, managing director and co-founder of Fisher Smith. Fisher Smith designs and supplies machine vision systems, and this is our second discussion with Iain on the topic of machine vision. In this talk, we focused on the use of deep learning algorithms to accomplish tasks that are challenging or impossible with traditional rules-based systems. We also walked through two case studies: the first illustrates how deep learning can solve some recognition problems in a matter of hours, while the second illustrates a difficult problem that would have been impossible with rule-based systems and is pushing the bounds of deep learning capabilities. If you find these conversations valuable, please leave us a comment and a five-star review. And if you'd like to share your company's story or recommend a speaker, please email us at team@iotone.com.

[Erik]

Thank you, Iain. Welcome back. And thanks for joining us again.

[Iain]

A pleasure to be speaking with you again, Erik.

[Erik]

This is our second podcast; I think chronologically these will play right after each other. But in case anybody is just tuning in and didn't catch our previous episode, let's do the 60-second background on who you are and what your company Fisher Smith does, and then we can dive into the topics we're focusing on today.

[Iain]

Okay, yeah, no problem. So I've been working in the machine vision industry for about 20 years now, having done a degree in engineering maths in the UK and then pretty much gone straight into working for a machine builder of vision inspection machines. For the last 15 or 16 years I've been running Fisher Smith, focused very much on just the vision aspects of industrial machine vision. That tends to be predominantly inspection, quality control and robot guidance tasks. We work with a range of vision suppliers and add value to that equipment by integrating it, writing the software, doing the front end, deploying, and getting the systems actually working on the factory floor to solve the application. We're usually doing that through a chain of customers. Predominantly those customers are automation companies, people who are making the machines, doing the robots, the conveyors, the material handling, the moving of parts, the assembly. We come in as a specialist supplier to do the vision aspects of that production line. So that's a very quick overview of what we are and where we sit in the market: a small, specialised team just focused on machine vision in industrial applications.

[Erik]

Great. And today we want to do more of a deep dive on deep learning and its impact. In our last podcast we covered more of what we could call traditional solutions, even though some of those are still on the cutting edge of machine vision technology, but we didn't go very deep into deep learning. As a starting point, it would be great to understand your definition, because we have this term deep learning, we have machine learning, and then we have AI as an umbrella concept. Are there technical differences between these terms? Are these different categories in a hierarchy? How do you look at these? And then, can you define what deep learning actually means in the context of machine vision?

[Iain]

Yeah, so I guess all of those terms are often misused, and they get swapped and interchanged quite a lot. AI really sits above all of these as a more general concept of computer-based intelligence, and the general sense of AI is often a sort of human level of awareness and intelligence. What we're really looking at, only ever looking at, is a specific, focused, targeted AI for a particular function. That's where the separation from AI to deep learning starts to happen. So we're really looking at deep learning in an industrial vision context to mean teaching a neural network on images in particular, and looking for particular characteristics in those images. We're not using the neural networks for general data processing or speech recognition or any of the other data sources that can happily go into a deep learning network. We're really just focusing on the image processing aspects, and even within image processing, focusing that down again into specifically industrial applications.

[Erik]

Let's go into how this differs from a traditional process. I suppose with a traditional process you're looking at, say, a shape with a diameter of two millimeters, and if it's out of range by some fraction of a millimeter, it's a fault; or maybe we're looking for black, and if there's a white pigment, it's a fault. How does deep learning differ from the programmed approach that you might have taken before, or might still take in most cases today?

[Iain]

Yeah, it's a very different concept really. Traditional machine vision and deep learning really complement each other in a lot of aspects. There are some areas where they overlap, where you think we could do it one way or we could do it the traditional way, but often the two are separate. Like you say, the traditional methods tend to be rules-based or logic-based, where you're saying: I'm going to count this many dark pixels or blue pixels in an image, or I'm going to make a measurement, which is generally finding an area of contrast or a feature in one part of an image and a different area of contrast or feature in another part, and then measuring between them. For some of those techniques, classic machine vision is still the right way to do it. But where deep learning changes things is that it can cope with different ways of teaching to start with.

And then it can cope with different scenarios far better. The difficulty is finding where that separation is, to work out which one is best to deploy. For instance, take fault detection on a bland background, say a grey conveyor belt or a grey surface, where you're looking for scratches on a piece of matte or painted material. The background is consistently one colour, a grey or a blue, and you know the defects show up as black marks. That's fairly easy to set up with a traditional machine vision approach: you can say, okay, I'm going to ignore pixels that are blue or grey, and I'm going to look for anything that's different, black pixels, and then you can start counting them and saying this many black pixels is unacceptable to the customer; this is a fault, we reject the part. That's the traditional, logical, rules-based approach, and it works really well.
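
As an illustration of that rules-based check, here is a minimal sketch using OpenCV; the file name, grey threshold and pixel limit are illustrative assumptions rather than values from any real project:

```python
import cv2

# Minimal rules-based check: count dark pixels on an otherwise uniform
# grey background and reject the part if the count exceeds a limit.
# The threshold (60) and limit (500) are illustrative assumptions.
img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

# Everything darker than the threshold becomes white in the defect mask.
_, defect_mask = cv2.threshold(img, 60, 255, cv2.THRESH_BINARY_INV)
dark_pixels = cv2.countNonZero(defect_mask)

MAX_DARK_PIXELS = 500  # "this many black pixels is unacceptable"
print("REJECT" if dark_pixels > MAX_DARK_PIXELS else "PASS", dark_pixels)
```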

But as soon as that background or the object is not a nice, consistent, even surface, if it has multiple colours, or a surface texture with dark areas and light areas, or it's a fabric, or something with a complex shape and colour like the grain of wood, and you're looking for scratches going across it, the traditional techniques completely fall down. If you're looking at a piece of wood for a scratch, the wood has all these grains, lines and contours running through it already. So how do you quantify a scratch with a rules-based approach? You can't say it's long and thin and dark, because all the other grains in the wood are already long and thin and dark.

You may be able to say it's going horizontally and all the other grains are going vertically, but that may only work in some instances. It may be that some of the blemishes are lighter, some of them are darker, and some are actually a very similar colour. This is often the frustration with traditional vision techniques: a human, with a bit of training, can look at that object and say, no, we don't like that blemish on this piece of wood or this piece of fabric, it's wrong. But trying to codify that, trying to put logical rules around it, to say, well, is it darker? Not all the time. Is it lighter? Sometimes, but not always. Is it the same or a different shape to what's already there in otherwise good product? It's almost impossible to apply a traditional machine vision approach to that.

And that's where deep learning really wins: you can give it lots and lots of samples of what is good, with all the variations that come through in good product, and then you can give it samples that are bad, and it is able, with suitable levels of training, to separate those different classes and start to find faults that would have been almost impossible to find with traditional machine vision techniques. I've touched on defects; that tends to be the quality control aspect of what we do, and that's where a lot of our projects end up going. But really deep learning wins in a few areas. One is fault finding. Another is object detection: if you don't have a well-defined shape or a strong contrast, you can use deep learning to match, find and locate features in an image.

And we can also use it for, and this is I guess the core and classical use case for deep learning, classification: separating apples from oranges from pears from bananas and saying, yes, this is definitely of this type, that is definitely of that type. We often look at this now as a layered approach when we build up a deep learning application. We're looking for a defect type, and once we've found that defect, we use deep learning as a secondary operation on it: now we've extracted the defect, we'll classify it. This is a scratch, this is an oil mark, this is a fingerprint, this is a piece of cardboard, and so on. Then you can start to get proper separation and use that feedback to give the customer valuable information about their process: where are the defects coming from, what are their high-value problems? With traditional techniques some of that is possible, but if you're just looking at "I found an area of dark pixels, it's bad", then trying to put lots of rules around it, saying if it's long and thin in this direction it's a scratch, but if it's long, thin and curved it's maybe a bit of fabric or a hair, and trying to actually separate those might be very difficult in a rules-based way. Whereas deep learning is really set up to do all of that.
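
A sketch of that layered approach, in which one model extracts defect regions and a second classifies each region. The model files, class names and the detector's output format (a batch of fixed-size crops) are hypothetical assumptions, not any vendor's API:

```python
import torch

# Two-stage "layered" inspection: stage 1 flags defect crops,
# stage 2 classifies each crop into a defect type.
CLASSES = ["scratch", "oil_mark", "fingerprint", "cardboard"]

detector = torch.jit.load("defect_detector.pt").eval()     # assumed trained models
classifier = torch.jit.load("defect_classifier.pt").eval()

@torch.no_grad()
def inspect(image_tensor):
    crops = detector(image_tensor.unsqueeze(0))    # stage 1: find defect crops
    labels = [CLASSES[int(classifier(c.unsqueeze(0)).argmax())] for c in crops]
    return labels   # e.g. ["scratch", "fingerprint"], fed back to process control
```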

[Erik]

Okay, interesting. So in a deep learning environment it's easier to categorize, because the algorithm can tell you that the shapes are somehow similar. But one of the complaints about deep learning, and one of the challenges, is that it's also somewhat of a black box, right? It can tell you these are similar, but it doesn't necessarily explain why they're similar. How do you get around this understanding barrier?

[Iain]

This is one of the key hurdles we have when explaining this, or even getting to the point of signing a machine or a system off with the customer, when you've got to say to them: well, we've put all of your good images in this side of the black box, some magic has happened in the middle, and at the other end it says this one's good and that one's bad. Some people can and do understand what's going on inside that neural network, but the reality is that you don't necessarily know which features the deep learning has chosen in the image to separate one class from another, good from bad, apples from oranges. And it might be something you don't expect. This is something we've got to be quite careful with, because you can inadvertently teach something other than the particular fault type you think you've taught.

What the deep learning has actually picked up on is not the fault itself, but maybe the proximity of that fault to the edge of the part. If, in all of your training images, the scratch is next to the edge of the part, the deep learning may have detected some aspects of the scratch, a little bit of contrast change, texture, whatever it may be, but it may also have picked up the edge of the part as a significant feature that always appears near it. Then when the same scratch, otherwise identical, appears in the middle of the part, the deep learning misses it completely. You have to keep that understanding: we can't assume it has found the scratch or the blemish and understands what the blemish is. There are other factors in play, and in the way the neural networks are trained, the image is broken up, manipulated, changed and transformed in various ways, all mathematically sound concepts but quite abstract, and it may be finding data in there to separate the classes that you can't visibly tell, that you can't obviously see in what's in front of you.
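
One crude way to probe this kind of black-box behaviour is an occlusion sweep, sketched below under the assumption of a model that returns a single defect score: slide a grey patch over the image and watch the score. If the score only reacts when the patch covers the part edge rather than the scratch, the network may have latched onto the wrong feature. Patch size and stride are illustrative:

```python
import torch

# Occlusion sensitivity probe: how much does the defect score drop
# when each region of the image is hidden behind a grey patch?
@torch.no_grad()
def occlusion_map(model, img, patch=16, stride=8):
    _, h, w = img.shape
    base = model(img.unsqueeze(0)).item()   # score on the untouched image
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = torch.zeros(rows, cols)
    for i in range(rows):
        for j in range(cols):
            masked = img.clone()
            y, x = i * stride, j * stride
            masked[:, y:y + patch, x:x + patch] = 0.5   # grey out one patch
            heat[i, j] = base - model(masked.unsqueeze(0)).item()
    return heat   # high values mark regions the decision actually depends on
```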

[Erik]

Yeah. There was a study out recently, I think it was Google, using deep learning to look at lung cancer, and they were comparing the results against the specialists who typically assess cancer in the lung. I think the result was that there was a class of cases where the humans could easily identify a cancer and the machines were missing it, and others where the machine would identify it with very high accuracy and the humans completely missed it. So they're using different processes for identification in some cases. And this, I imagine, can be very frustrating for somebody who's trying to understand the quality control process.

[Iain]

Yes. Traditional vision techniques have been around for a good number of years now, so they're starting to become generally understood in industry. People who may be purchasing a project, or who have brought a number of projects with machine vision on them into their business, have an understanding of what they want the system to achieve and how they want us to present our methodology, so that they can understand what's happening and sign it off: they understand that it's a robust method, they've got control, if they need to adjust something they can see what's happening, and they can see that the features they've asked us to detect, we are detecting; when those features change, the values change, and we can prove we are meeting their specification. With deep learning, it's very much a results-based sign-off, because you're asking the customer: give us the good ones, give us the bad ones, we'll put them through and give you the result that says these are good and these are bad, but we can't tell you how it's made that choice, because that has all happened inside the deep learning. We've just got to prove it by challenging it, basically.

[Erik]

I think the other challenge I've encountered in this space is with training, right? Take the example you gave earlier, with scratches on the edge versus the inside. The challenge there seems to be that there were not enough samples in the training data of scratches on the inside, so we didn't have sufficient data for that case to train the model. And in a lot of cases, let's say if quality control is terrible, you have a ton of examples of faults and some good training data. But in a situation where quality control is already fairly good, you might not have many examples of actual faults. How do you go about understanding how to collect the data, and when there is sufficient or insufficient data to train a deep learning model?

[Iain]

A very good point. And maybe just a little aside before I answer: this is one of the big separations between traditional machine vision and deep learning. With traditional machine vision, you can basically take a single image, set your rules and your logic and your tests up on that image, and say: I'm measuring this, or I want to find this; if this pattern changes more than 10% from what I've taught, I'm going to reject it; if the number of pixels I'm counting over here exceeds a certain value, I'm going to reject it. You can set those rules up on one image and then start challenging it with other images. With deep learning, you need a quantity of images to start with, and that quantity is a variable thing. Depending on how subtle the faults are, you may be able to get away with teaching on tens of images as a starting point.

But if you want something really robust, then more is more: if you can provide more samples of all the different classes you're trying to put into that deep learning model, you're going to get better, more robust results, but you also need to spend the time to actually separate them as well. So if you don't have enough samples, which is your question, if I've only got one image of a single fault, then the chances of reliably finding that fault in all the locations it might appear are limited. There are techniques to help. The software we've used most for this, a Cognex product called ViDi, has the option to introduce variations into your training images. So you can basically say, for all of the trained images,

I'm going to rotate them all by plus or minus 90 degrees in two-degree steps, and I'm going to make them all slightly brighter and slightly darker, to artificially introduce variations we might expect, as a way of multiplying up the training set. So if I've only got a handful of images of faults, I can add some of these manipulations, in position, size, scale, skew, brightness and contrast, to try and make my training set more robust by adding those manipulations in on day one. It can certainly help, but it's no substitute: if you really don't have the data to start with, it's going to be hard to work with. And this probably also leads into a separation, certainly in the sort of commercial software we tend to use, between two different techniques in the deep learning.
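
A sketch of that training-set multiplication using torchvision transforms; the file name, parameter ranges and variant count are illustrative assumptions in the same spirit as the rotations and brightness shifts described:

```python
from PIL import Image
from torchvision import transforms

# Artificially multiply a small training set: random small rotations
# plus brightness/contrast jitter and slight position/scale changes.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=90),                 # +/- 90 degrees
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # slightly lighter/darker
    transforms.RandomAffine(degrees=0, translate=(0.05, 0.05), scale=(0.9, 1.1)),
])

original = Image.open("fault_example.png")
variants = [augment(original) for _ in range(20)]  # 20 synthetic variants per image
```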

One is where you specifically teach the faults. With the Cognex product we refer to that as supervised training, where you are specifically drawing onto each one of those faults: this is where the fault is, you're highlighting it on each failure image, this is the fault, this is a fault of this type. You have to manually mark all those images, specifically where the fault is, and that then allows you to be much more targeted about the fault type or feature you're trying to find and highlight. The downside is that you do need lots of different variations. If you're looking at scratches, you want a good training set where those scratches are different sizes, different lengths, different curvatures, in all the different positions where you'd expect to find them. Because with that method, what you're teaching in takes into account all sorts of things, including the position, the size, the length, and the pixels that surround the area you've marked; they all go into the mixing pot when the neural network is trained, in deciding how to separate that class, that fault, from everything else.

The other way, useful if you haven't got very many reject images, is what we call unsupervised training, or novelty detection, where you just train it on good images. You say all of these images are good product, these are all acceptable; you train those up and then ask it to find things that are different. That's a good approach if your faults work with that technique (some do, some don't), but if they do, you're only selecting good ones, and it highlights areas on the images which do not conform to what the good ones have in them. So it allows you to detect faults that you may not have seen before, or that are different or varying.
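
A minimal sketch of novelty detection in that spirit, assuming an autoencoder trained only on images of good parts: it reconstructs good parts well and anything unfamiliar badly, so high reconstruction error flags a novelty. The architecture and threshold are assumptions; in practice the threshold would be tuned on held-out good images:

```python
import torch
import torch.nn as nn

# Autoencoder trained only on "good" images; reconstruction error
# then acts as a novelty score at runtime.
class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),
                                 nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

@torch.no_grad()
def is_novel(model, image, threshold=0.01):
    error = ((model(image.unsqueeze(0)) - image) ** 2).mean().item()
    return error > threshold  # high error = unlike any good training image
```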

We often combine the two techniques as well, because you might generally want defect detection that looks for differences from the good product, but there may also be some very specific, quite fine or subtle features that that technique doesn't reliably separate. It might find them when they're more obvious, but when they're more subtle we need to go to the more direct, supervised method of training, where you're highlighting the faults manually as part of your training. Then we combine the two, with the two deep learning models running either sequentially or in parallel, to give you the best of both worlds, really.

[Erik]

Yeah, I was just going to ask about this, and I think you've answered that question. Then I guess a follow-up would be: for this training, is there any kind of off-the-shelf software where somebody can purchase it, plug it in, upload images and output a functional model? Or does pretty much all of this require a data scientist right now to get into the data set and create a functional model?

[Iain]

You can go either way. At one level, anyone can register for an account with Google, Amazon or Microsoft, get access to their virtual machines, and use open-source deep learning frameworks, things like TensorFlow, that you can start using for absolutely minimal cost, and go and train a deep learning network. There's obviously an overhead, in that you've got to understand approximately what you're doing, but there are tutorials out there. So at one level you can go off and do it at minimal cost; you don't need any of your own hardware, you can rely on uploading images to the cloud and using virtual machines to do all that training. What we're really focused on is commercially available products. Although the data scientist side of things is very interesting for us, it's not very commercially viable: if we can deploy a solution in a relatively short space of time, that allows us to solve a customer's problem, deploy a solution, get it working on the factory floor,

and then we can move on to the next project. We don't want to be tied up for days or months training up neural networks. So to that end we do focus, and really what I've talked about today is based, on these commercially available products. To mention a couple: one, which I've already mentioned, is the Cognex ViDi software, and we've also used MVTec, a German company, and their HALCON deep learning library. These are commercially available deep learning products where you have a software user interface that allows you to do the training: you can import images, you can mark on the images the features you're interested in, then you can train that model locally on local hardware, and then you can deploy it, again within the framework of those software environments.

And this is really where we see, for direct industrial deployments, the most sensible route to market for us. It may not be the most powerful or the most flexible; obviously companies like Google have their self-driving cars spotting street signs, shop fronts, other cars, number plates, all of this within very complex and powerful neural networks. But we usually don't have the luxury, the benefit, of using the cloud as a computing platform for this, for a couple of reasons. One is a commercial reason: these are often specific and proprietary processes and products that companies are using, and they're generally cautious about sending all of that image data and their quality control data up to the cloud to be stored somewhere else. But there are also practical aspects. The training is one element of it, but when we're talking about deploying a deep learning system, this is often on a production machine.

We might be checking multiple parts a second, in which case consider the practicality of taking an image with a camera on a production line, uploading that image to some cloud service, doing the deep learning on a virtual machine, getting the result and sending it back down to the machine to pass or fail the part. It might be quick enough, but we might not be able to guarantee the reliability or the latency of that; it's going to depend on local network conditions, local internet conditions, other bottlenecks. Generally, it would be too risky and too difficult to look at that. So we're usually looking at what you'd describe as edge inference, where you're relying on a local computer to do the deep learning runtime, the inference, on a computer local to the camera, and to give the results back there.

That then requires that the computer has the capability to do it, and this is where you're generally looking at GPU acceleration. For most of the products we're working with, the commercially available products, that GPU acceleration is Nvidia-based, because of the Nvidia CUDA libraries that are leveraged in the software. Basically, the higher the spec of graphics card you work with, the faster your deep learning will run. That's how we see the deployment side. On the training side, all of the same things apply: what we're generally doing when we're training is using a local computer with graphics acceleration. For instance, we've got basically a gaming-spec laptop, and that is capable of training these deep learning models within minutes if it's a small image set and small image size, or maybe hours; we might sometimes leave it running overnight to do a more complicated set of training.
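
A minimal sketch of that edge-inference setup: the trained model runs on a PC next to the camera, on an Nvidia GPU via CUDA when one is present, so no image ever leaves the machine. The model file name is a hypothetical assumption:

```python
import torch

# Edge inference: load the trained model onto the local GPU if available,
# and decide pass/fail next to the camera, with no cloud round trip.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.jit.load("inspection_model.pt").to(device).eval()

@torch.no_grad()
def infer(frame):                                # frame: tensor from the camera
    return model(frame.unsqueeze(0).to(device))  # result returned locally
```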

The training is generally more of an offline process, so using a cloud-based service for that is more viable, but it's also a reasonable overhead to do it locally on local hardware. What we are starting to see is companies like Cognex exploring whether a cloud-based service for this will work. But again, there are two things. One is that we might have gigabytes of images to upload; the more images you can take and supply to the deep learning, generally the better and more robust your model is. And certainly here in the UK we have asymmetric internet connections, so the upload speed might only be 10 or 20% of your download speed. To upload tens or hundreds of gigabytes of images to a cloud service might take you a couple of days, so actually running things locally on a local computer can be beneficial from a practical point of view. And then there's also the fact that, certainly, the software we're using is licensed.

[Erik]

So rather than the Google and Amazon route, where it's largely open source, if you're using proprietary or licensed software, perhaps with a physical USB dongle for a local laptop or a local PC, how does that licensing work?

[Iain]

We are seeing some movement, certainly with some of the dominant companies like Cognex, who are starting to explore whether that is a model that we, as partners, as integrators, could use as a benefit of our partnership: rather than maintaining our own local infrastructure, we rely on our supplier's infrastructure to do that training. And then, further on, would customers want to do that? Some customers absolutely wouldn't want their data to be stored in a cloud somewhere; they'd be much happier to have it locally, even if that meant investing in some decent PC hardware to run and train it.

[Erik]

Yeah, and that's more or less what we see across other IoT use cases as well: there's a general move towards the cloud, so there's a trajectory in that direction, but it's slow and very cautious. And I guess it's not so much about the cost model; that's why people are moving in that direction, but it's really about risk perception, so any use cases deemed high-risk for putting IP at risk are going to move very slowly. Can you walk us through a situation, maybe with a customer in mind that you've worked with (you don't have to mention the customer's name), from the initial conversations through the decision process of what the right approach is for that particular situation, and then how you select the right technology? What's your evaluation process? I think it's quite useful to have an end-to-end perspective of how this would be deployed.

[Iain]

I can probably talk you through a couple of use cases that we've come up with. One from a little while ago was for some plastic lenses. These are solid lenses used, I think, in some sort of smoke or fire detector system, and they were being inspected manually. The way they inspected them was to have a human look through them at a grid underneath, a matte white piece of paper with black lines on it. A human is able to look through and see if the lens is correctly formed: you see the grid distorted in the classic fishbowl or pincushion type of distortion, but if there is a blemish in the lens, you get an anomaly, and the lines won't be evenly distorted.

As you look through the lens, they will be uneven and varying. We looked at that and tried to do it with a traditional machine vision approach, and I wouldn't say it was impossible, but it was very difficult to do robustly, partly because the product had a little bit of variability which was acceptable. It wasn't a precision product; it had to be uniform, but that uniformity could vary a little, as long as the distortion was even. So if you tried to set up a rules-based system to find the lines, which were black on white and nice and straightforward, you could do that, but the spacing between them would change all the time, not by very much, but unevenly, depending on how that distortion occurred. And then you start doing that in two axes.

And then what happens when you get something like a bubble or a dark mark or a blemish in there; can you detect that? Because now you need to be looking for black pixels in between the grid, which wasn't impossible, but it was very, very difficult to do robustly. Whereas we were able to basically reproduce that by taking the images and putting them into, it was Cognex ViDi in this instance, and with actually only a few samples, we're talking maybe ten samples of good product. We could then put one through with a bubble in it, one with an uneven curvature on the lens, one with a scratch or a spot in it, and straight away it would pick up those features regardless of the other distortions in the grid. It could pick up those blemishes very quickly.

And actually, because these parts took quite a long while to make, we had 20 or 30 seconds per part to do the inspection. Once it had been trained, we worked out that we could run it without GPU acceleration, on a CPU. It wasn't very quick, you were talking 10 or 15 seconds to detect, but we had that amount of time; it was acceptable to the customer, and it meant we could go with almost a standard machine vision hardware setup for the deployment, having done the deep learning training separately with graphics acceleration. And I didn't touch on this earlier, but one of the benefits for us of using something like ViDi or HALCON is that, when you're deploying to the customer, you need the ability to take images from the cameras.

Those products have the traditional vision tools in there and the interfaces to cameras and things like that, so we could use the same software environment to grab the image, put it through the deep learning, and do a little bit of post-processing on the result with standard vision tools: for example, putting user-editable thresholds on the size of the defects being found, basically taking the defects and then doing standard blob and pixel-counting analysis on them, which then becomes runtime user-editable. The customer can trim their quality levels without having to retrain the deep learning, and then we wrap all of that up in a user interface. So that was one area we were looking at.
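
A sketch of that post-processing step, assuming the deep learning outputs a binary defect mask: standard blob analysis applies a runtime, user-editable size threshold, so quality levels can be trimmed without retraining. The mask source and the 40-pixel limit are illustrative assumptions:

```python
import cv2

# Load the model's binary defect mask (0/255) and count blobs that
# exceed a user-editable minimum area.
mask = cv2.imread("defect_mask.png", cv2.IMREAD_GRAYSCALE)

def count_defects(mask, min_area_px):
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    # Label 0 is the background; keep blobs at or above the size threshold.
    return sum(1 for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area_px)

print("REJECT" if count_defects(mask, min_area_px=40) else "PASS")
```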

[Erik]

Could you give me a ballpark for the timeline to develop this, and also for the budget, the full budget including hardware and software, for a deployment?

[Iain]

Yeah, sure. In terms of the timescale for evaluating it, once we'd got samples from the customer, I think we had the deep learning model trained in an hour or so; once we had images, the training took probably less than an hour, and then we were able to start testing and effectively go back and demonstrate it to the customer within a few days. So that aspect was quick, because it just worked: the training worked very quickly, we got a good result straight away, and we were able to move to deployment. The overall cost, ballpark, was around 30,000 euros, which covered the hardware (a reasonably high-resolution camera, the lighting, the grid we were using as part of the imaging solution, and an industrial computer to do the processing), the software licence, and our time to put a little user interface together, do all the communication, and handle the sign-off and installation aspects of the integration. That compares reasonably favourably to traditional machine vision: if we'd done it with a smart camera of the same sort of resolution, we might have been at 75 or 80% of the cost, in the same ballpark. That's not always the case, because as soon as you start looking at faster, more high-end systems, where we need bigger graphics cards, bigger hardware, and more involved image processing and deep learning time, those costs can escalate quite a lot.

[Erik]

This was for one production line, if I'm right. How would this scale? Let's say it's 30,000 euros to develop the solution initially, and the customer says: we have five other factories, or five other production lines that are exactly the same, and we want to deploy this. Would it also be 30,000 each? Or are you looking at 70%, 50%? What would the cost look like to scale the exact same solution?

[Iain]

On that particular one you'd be looking at around 70% for repeats, because a lot of the overhead, us writing the user interface, doing the development work and the training of the deep learning, has all been done already. So then it's the hardware costs, the licensing for the software, and there are still some deployment charges to actually get the kit on site, set up and working. But we're not having to create a user interface or anything like that, because that can be copied over from the previous one. On larger deployments that difference could be greater; we could be down to 50 or 60% repeat costs. A lot of what we do very rarely goes into the tens and hundreds of repeats; it tends to be a single production line, or five, maybe ten at the most.

If you went to really large scale, and I'm just picking names out of the air here, a Samsung or a Sony with maybe hundreds of production lines making consumer electronics, then maybe you wouldn't necessarily look at some of these commercial off-the-shelf products. You might consider starting from scratch, because although the development overhead is much, much higher, the deployment costs could potentially be much, much lower. Having said that, if we said to our suppliers we've got a hundred-off or a thousand-off deployment, they'd be very keen to discuss commercial terms, so I'm sure costs would come right down at those sorts of volumes. Another little example, one we're actively working on at the moment: we've got a manual assembly process where we're trying to detect a particular feature, where the customer bolts five or six hundred of these particular items onto a framework they're building.

What they're trying to do is reduce the amount of human inspection required to validate that every single one of those has been correctly placed and is in the right position. And rather than a fully automated production line, where we can put the camera directly over the object we're looking at, control the lighting, and control the testing, this is a large item with multiple people climbing ladders to bolt things on and moving around it. So we're restricted in what we can do; we haven't got the same control over our environment. We have to set the cameras back from the object so people can move around in front of them, and we can't shine stupidly bright lights at the surface to even out the illumination; we've got to rely on the ambient lighting. All of that makes detecting these objects very tricky, because they appear at all sorts of different angles and heights.

We've got a high-resolution colour camera looking at the scene, or actually several of them looking all the way around, and we're trying to find all of these different objects: are they there? The objects themselves, as they're bolted in, can twist and deform; they hold on to other parts of the build, and depending on how hard they're tightened and what they're gripping, they deform slightly and look a bit bigger or smaller. They come in basically one colour and a couple of different sizes, but the colour is not a controlled aspect of the build: as long as it's functional and the colour is approximately right, it's acceptable. So we've got all of these variations: lighting, shadowing, multiple positions, poses and angles that these appear at, and variation in the actual colour and size of the objects.

And we're trying to locate all of them around this surface. We're using deep learning at the moment to teach: this is what this feature looks like. We're now up to training on, I think, two or three thousand images, where each one of those images might contain multiple instances, and it's a very time-consuming task. This is the other thing I haven't really touched on: all these deep learning techniques rely on a ground truth, a human to say this is good, this is bad, or this is this type of feature and that is that type. All those images need to be labelled correctly, as accurately as possible and consistently, to allow the deep learning to say: I know these are this class and those are that class. Without that human interaction at the start of training, it doesn't know what it is classifying.
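
To make the ground-truth overhead concrete, a sketch of what that labelling boils down to: for every image, a human draws a box around every instance, and the boxes are saved alongside the image. The file names, label and coordinates here are hypothetical:

```python
import json

# One annotation record per training image: a human-drawn bounding box
# for every instance of the feature visible in that image.
annotation = {
    "image": "frame_0412.png",
    "boxes": [
        {"label": "fixing", "x": 312, "y": 180, "w": 46, "h": 40},
        {"label": "fixing", "x": 505, "y": 233, "w": 44, "h": 38},
    ],
}

with open("frame_0412.json", "w") as f:
    json.dump(annotation, f, indent=2)  # repeated for thousands of images
```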

So that has been quite a big overhead in terms of time. It's not necessarily particularly high-skilled, but it does require somebody to sit there and say: I'm going to draw a box around this one, now I'm going to draw a box around this one. And what we're working out at the moment is the best technique: how do we best encapsulate these objects? Sometimes they're fully visible, sometimes partially visible; sometimes we see more of the side of one than the front. It's all the same object, but how do we go about teaching that? There's no right answer to this; it becomes a bit of trial and error, trying one technique: okay, we're going to train all of these but only focus on a certain aspect of the object.

And if we can't see that aspect, we don't train it. Then we train another version of the same model where we include much more of the surroundings and say: this is it, but it's surrounded by this; we use bigger areas to define it. We're having to spend quite a lot of time training a deep learning model and testing it, and if we're not getting very good results, okay, to retrain it we've got to go over all of those 2,000 images, redraw every single box, retrain it, and see: does that give a different, better result? Is it more consistent? Is it more reliable? Do we need to start separating these into different classes, one where we're looking straight onto the object and one where we see it from the side?

Rather than putting them all into one class and saying these are all the same, when they do look quite different from the side and from the front, do we separate them and say: this is the object from the front, this is the object from the side? This is where we're finding that deep learning, although it's enabling us to do something we really would have struggled with before using traditional techniques, is not for free; there is a significant overhead cost. And we're even seeing that the manufacturers and suppliers of the software are, I'd say struggling, but they're finding they're getting lots of enquiries: we'd really like you to evaluate whether your product can work on our issue, on our project, can you evaluate it for us? That evaluation time is time-consuming and expensive and uses a lot of resource at the software manufacturers. So it's very powerful, but there is a considerable overhead in terms of the time that goes in at the training level and what humans need to add.

[Erik]

And what do you think the end result will be in this situation? Do you think you'll be able to say, out of these, let's say, 500 instances, these 300 are definitely a pass and these 20 are definitely a fail, but the remainder need a human to go and look at them in a second check? Or do you think you'll actually reach sufficient accuracy that a human doesn't have to be involved at the end?

[Iain]

I think on this particular application, if we get to an 80 or 90% success rate and the human has to intervene for the remaining few, that will be acceptable for this customer and this particular use case. Clearly, if you were looking at 100% quality inspection on a production line, those sorts of numbers would not inspire confidence. But with this, because it's a manual operation anyway, they're used to spending a lot of time inspecting and rechecking it, because it's a very high-quality component that comes out at the end, and it's not a fast process, so some amount of human intervention is acceptable. There's also the fact that compromises have been made on this system: we know we're only going to see some aspects of the build, and some of these objects will naturally be obscured by a later part of the build, or even by something at the same stage. So there may be bits we physically cannot see, and as good as deep learning is, it can't see a fault if the image doesn't present it. So there's an understanding of that with this particular product.

[Erik]

Sure. One last question. We didn't get into this previously, but so far we've been talking about machine vision with camera systems. I guess you could also use infrared, or some other sensor, in quite a similar way. Have you ever found that to be particularly useful, or do you find a more standard camera is generally the most effective solution?

[Iain]

So far we have only looked at this with standard cameras, but certainly the Cognex product is capable of working with multichannel images. Colour is fine, black and white is fine, and there's no reason why that data couldn't be, for instance, 3D data. Infrared or thermal imaging presents, when you get the image, basically the same format as a standard colour or black-and-white camera anyway, so those techniques absolutely would work with this. 3D, again, would work very happily; it might need to be what we call a range image, a two-and-a-half-D image, where you've got an X-Y image with the height as the colour of the pixels, if you like, rather than a full point cloud. But the techniques work with any image format.
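
A sketch of that two-and-a-half-D idea: a 3D point cloud is flattened onto an X-Y pixel grid whose value is the height, so 2D tools can consume it. The random cloud and grid size are stand-ins for a real scanner's output:

```python
import numpy as np

# Flatten a point cloud into a range image: one height value per pixel.
points = np.random.rand(10_000, 3)         # stand-in columns: x, y, z in [0, 1)
H = W = 256
range_image = np.zeros((H, W), dtype=np.float32)

ix = (points[:, 0] * (W - 1)).astype(int)  # quantise x and y onto the grid
iy = (points[:, 1] * (H - 1)).astype(int)
np.maximum.at(range_image, (iy, ix), points[:, 2])  # keep highest z per pixel
```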

But what I haven't really touched on is the fact that in these commercial products we're using, the neural network has been pre-trained and is already biased towards industrial types of images. So if we supply standard images, black and white or colour, of industrial types of inspection, the deep learning is already pre-set; it doesn't have to work as hard to home in on the features you're looking at. That's where you'd be starting from if you went to an Amazon or somebody, took a fresh off-the-peg neural network, and trained it from scratch on images: you wouldn't have that narrowing down at source. The software products we're using aren't trying to answer "is it, I don't know, a dog or a monkey or a cat", which is an unlikely question for an industrial inspection. But if we're looking at a printed circuit board, let's find the ends of a chip, let's check the solder on each one of the pins is correctly formed, then those sorts of images train through the deep learning networks we're looking at much faster, because the network is already predisposed to work well with that image type.
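
A sketch of that "don't start from scratch" idea in open-source terms: keep a pre-trained feature extractor frozen and retrain only a small pass/fail head on your own inspection images. This uses an ImageNet backbone as a stand-in; the commercial products ship their own industrial pre-training:

```python
import torch.nn as nn
from torchvision import models

# Transfer learning: reuse pre-trained features, train only a new head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                    # freeze learned features
model.fc = nn.Linear(model.fc.in_features, 2)      # new trainable head: pass / fail
```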

[Erik]

Great. Well, thank you once again for taking an hour of your time and walking through this; I really appreciate the deep dive. Any last thoughts? I think we covered a lot of territory today.

[Iain]

I think we have covered quite a lot, and that's probably most of what I would want to cover. There are lots of other use cases; deep learning, even in machine vision, is not new, but what we're now getting is commercially viable, off-the-shelf products that we can deploy in a reasonably short period of time. So this is now becoming really commercially viable and more and more accepted, and it's opening up avenues to us that previously were shut. It has been a bit of a wow factor: when we started seeing, in particular, Cognex ViDi, because the training user interface is so nicely presented, you're able to very quickly get somewhere with it. And it really sent us looking back through our back catalogue of applications where we'd previously said either we can't do it, or we really don't think this is going to be robust if we deploy it, it's right on the borderline, we think we can find this type of fault but not that.

You start looking back at those and thinking: actually, this could really be viable now. And we're seeing some of the software companies focusing in on certain use cases. One of the big ones that certainly Cognex and MVTec have picked up on is text reading. MVTec's HALCON has a ready-trained font for industrial text recognition, OCR and OCV, that has been trained with a neural network, with deep learning techniques, and is then given to you as a runtime to just use. Its capability, if you're looking at industrial markings generally, so not necessarily handwriting, covers almost any industrial font you get on a label, anything printed or marked on something for traceability, or even things like food and pharma packaging where you've got date codes and lot codes.

These have been trained with a massive bank of text images behind them, and they're really robust: without any teaching, you just say, read this line of text, and it reads it back to you very reliably. We're seeing that as one of the key benefits of models that can be trained in advance: the end user doesn't have to do the training, they can benefit straight away from a ready-made deep learning model that reads text. And certainly I know some of the companies are looking at other use cases, other things to focus in on. Maybe for logistics: this is a box; if you can say, okay, that's the box, where do we look for labels on it, then that speeds everything up. Or number plate recognition, finding a white rectangle or a yellow rectangle.

Certainly maybe slightly niche use cases, but areas where the training aspect can be done before the product is sold, and then you're ready to go with a pre-trained deep learning model straight out of the box to solve certain tasks. So I think we'll see more of that coming through as we go forward.
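
As a rough illustration of using such a pre-trained text reader, here is a sketch with the open-source Tesseract engine as a stand-in; the ready-trained deep learning fonts described above ship with commercial packages and are used the same way, point the reader at a line of text and get the string back. The file name is assumed:

```python
import pytesseract
from PIL import Image

# Read a printed date or lot code from a label image; no training step,
# just a ready-made model applied at runtime.
label = Image.open("date_code.png")
text = pytesseract.image_to_string(label, config="--psm 7")  # single text line
print(text.strip())
```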

[Erik]

Very cool, exciting times. It's going to be, I think, pretty incredible. For you, I guess, maybe it hasn't happened so quickly, but for a lot of people it seems like this has come from nowhere, and all of a sudden we're moving towards pretty cost-effective solutions. So thanks for walking us through it and giving us an update on where we are; I really appreciate your time.

[Iain]

Yeah, no problem. Pleasure.

[Outro]

Thanks for tuning in to another edition of the Industrial IoT Spotlight. Don't forget to follow us on Twitter at IoTONEHQ, and to check out our database of case studies on iotone.com/casestudies. If you have unique insight or a project deployment story to share, we'd love to feature you on a future edition. Write us at team@iotone.com.
