DistBelief
- Formal
  - Google has developed a software framework called DistBelief that can utilize computing clusters with thousands of machines to train large models.
- Practical
  - Google's internal deep learning infrastructure DistBelief, developed in 2011, allowed Googlers to build larger neural networks and scale training to thousands of cores in Google's datacenters. Google used it to demonstrate that concepts like “cat” can be learned from unlabeled YouTube images, to improve speech recognition in the Google app by 25%, and to build image search in Google Photos. DistBelief also trained the Inception model that won ImageNet’s Large Scale Visual Recognition Challenge in 2014, and drove Google’s experiments in automated image captioning as well as DeepDream. While DistBelief was very successful, it had limitations: it was narrowly targeted at neural networks, it was difficult to configure, and it was tightly coupled to Google’s internal infrastructure, making it nearly impossible to share research code externally. Google’s second-generation machine learning system is called TensorFlow.