Using the Intel Deep Learning SDK to Improve Image Recognition Models
In this case study, the challenge explored involves LeNet*, one of the most prominent image recognition topologies for handwritten digit recognition. We examine how the training tool can be used to visually set up, tune, and train a model on the Mixed National Institute of Standards and Technology (MNIST) dataset using Caffe* optimized for Intel® architecture. Data scientists are the intended audience.
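For reference, LeNet's topology as commonly defined for Caffe looks like the sketch below. It is modeled on the lenet_train_test.prototxt shipped with Caffe's MNIST example; the data source path and batch size shown here are illustrative placeholders, not values from this case study.

```protobuf
name: "LeNet"
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  data_param { source: "mnist_train_lmdb" backend: LMDB batch_size: 64 }
  transform_param { scale: 0.00390625 }  # normalize pixel values to [0, 1]
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param { num_output: 20 kernel_size: 5 stride: 1 }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param { pool: MAX kernel_size: 2 stride: 2 }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  convolution_param { num_output: 50 kernel_size: 5 stride: 1 }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param { pool: MAX kernel_size: 2 stride: 2 }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  inner_product_param { num_output: 500 }
}
layer { name: "relu1" type: "ReLU" bottom: "ip1" top: "ip1" }
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  inner_product_param { num_output: 10 }  # one output per digit class (0-9)
}
layer { name: "loss" type: "SoftmaxWithLoss" bottom: "ip2" bottom: "label" top: "loss" }
```

The training tool builds and edits this kind of topology definition visually, so the prototxt does not need to be written by hand.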
Intel designs, manufactures, and sells integrated digital technology platforms worldwide. The company's platforms are used in various computing applications, including notebooks, desktops, servers, tablets, smartphones, wireless and wired connectivity products, wearables, transportation systems, and retail devices. It offers microprocessors that process system data and control other devices in the system; chipsets, which send data between the microprocessor and input, display, and storage devices such as keyboards, mice, monitors, hard drives or solid-state drives, and optical disc drives; system-on-chip products that integrate its central processing units with other system components onto a single chip; and wired network connectivity products.
Featured Subsidiaries/Business Units:
- Intel Inside
- Intel Data Center Manager (DCM)
- Saffron Technology
- Wind River
Construction & Buildings
Data scientists seeking to explore image recognition topologies.
One of the main advantages of using the Intel Deep Learning SDK to train a model is its ease of use. As a data scientist, your focus can remain on easily preparing training data, using existing topologies where possible, designing new models if required, and training models with automated experiments and advanced visualizations. The training tool provides all of these benefits while also simplifying the installation of popular deep learning frameworks.
- SOLUTION MATURITY
Cutting Edge (technology has been on the market for < 2 years)
- OPERATIONAL IMPACT
The first positive impact of this case study is an enhanced understanding of how the human visual system and convolutional neural networks work, along with substantial exposure to LeNet*.
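To make the convolution idea concrete, here is a minimal sketch in plain Python (not part of the case study's toolchain) of the local, sliding-window filtering that a convolutional layer such as LeNet's conv1 applies to an image:

```python
# Minimal sketch of a 2-D convolution (valid padding, stride 1),
# the core operation a CNN's convolutional layers perform.
def conv2d(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # Each output value depends only on a small local patch,
            # mimicking a neuron's receptive field in the visual system.
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A 4x4 "image" containing a vertical edge, filtered by a 2x2 edge detector.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [
    [1, -1],
    [1, -1],
]
result = conv2d(image, kernel)
print(result)  # the filter responds only at the column where the edge sits
```

In LeNet, banks of such filters are learned from the data rather than hand-designed, and their outputs are downsampled by pooling layers before the fully connected stages.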
Gaining insight into the MNIST dataset is the second positive impact of this case study. To increase the variation in the data, the final MNIST collection uses 30,000 images from each source dataset for training and 5,000 images from each for testing.
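The totals implied by that split can be checked with simple arithmetic. In this sketch, the assumption that the two source datasets are NIST's SD-1 and SD-3 (from which MNIST was assembled) comes from general knowledge of MNIST, not from this case study:

```python
# Totals implied by the split described above: 30k training and 5k
# test images drawn from each of the two source datasets
# (assumed to be NIST's SD-1 and SD-3).
SOURCES = 2
TRAIN_PER_SOURCE = 30_000
TEST_PER_SOURCE = 5_000

train_total = SOURCES * TRAIN_PER_SOURCE  # 60,000: the standard MNIST training set size
test_total = SOURCES * TEST_PER_SOURCE    # 10,000: the standard MNIST test set size

print(f"training images: {train_total}")
print(f"test images: {test_total}")
```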
Using the Intel® Deep Learning SDK to train the model is the third positive impact of this case study. As noted above, the SDK's ease of use lets a data scientist focus on preparing training data, reusing existing topologies where possible, designing new models if required, and training with automated experiments and advanced visualizations.
- USE CASES
Autonomous Transport Systems
Autonomous transport systems provide unmanned, autonomous transfer of equipment, baggage, people, information, or resources from point to point with minimal intervention. They can include the full range of transport vehicles, including trucks, buses, trains, metros, ships, and airplanes. They are most commonly deployed in controlled industrial zones but are expected to soon be deployed in public areas with varying degrees of autonomy. We differentiate autonomous transport systems from autonomous vehicles: whereas autonomous vehicles serve individual passengers (who may or may not own the vehicle), autonomous transport systems are interconnected fleets of vehicles owned by a business to service a particular need systematically. When discussing autonomous transport systems, the focus is on the interaction among vehicles in a sophisticated system that interfaces with ERP, MES, and other enterprise data management systems. The autonomy of the vehicle is one component of a larger interconnected system of autonomous and semi-autonomous activity aimed at achieving business or organizational objectives, such as delivering the mail or moving soil from a mine to a processing facility.

Fog Computing
Fog computing refers to a decentralized computing structure in which resources, including data and applications, are placed in logical locations between the data source and the cloud; it is also known as fogging or fog networking. The goal is to bring basic analytic services to the network edge, improving performance by positioning computing resources closer to where they are needed, thereby reducing the distance data must travel on the network and improving overall network efficiency and performance. Fog computing can also be deployed for security reasons, as it can segment bandwidth traffic and introduce additional firewalls to a network for higher security.