Conversations about analytics, artificial intelligence (AI) and big data are no longer complete without the term “deep learning” – a powerful phrase that is increasingly becoming part of the business vocabulary as its life-changing advantages are recognised.
Bernard Marr, best-selling author and keynote speaker on business, technology and big data, says this is with good reason, describing deep learning as “an approach to AI, which is showing great promise when it comes to developing the autonomous, self-teaching systems which are revolutionising many industries. Deep learning is used by Google in its voice and image recognition algorithms, by Netflix and Amazon to decide what you want to watch or buy next, and by researchers at MIT to predict the future.”
A deep learning website, simply registered as deeplearning.net, comes with the tagline ‘moving beyond shallow machine learning since 2006!’. On its home page it states that “Deep learning is a new area of Machine Learning research, which has been introduced with the objective of moving Machine Learning closer to one of its original goals: Artificial Intelligence.”
This statement echoes veteran tech journalist Mike Copeland, who, in a multi-part series published by deep learning specialist NVIDIA, explains the fundamentals of deep learning. He says: “Deep learning has enabled many practical applications of machine learning and, by extension, the overall field of AI. Deep learning breaks down tasks in ways that make all kinds of machine assists seem possible, even likely. Driverless cars, better preventive healthcare, even better movie recommendations are all here today or on the horizon. AI is the present and the future. With deep learning’s help, AI may even get to that science-fiction state we’ve so long imagined.”
A deep learning framework is only as powerful, in performance and scalability, as the smart offloading technologies backing it up. “Mellanox is enabling deep learning with powerful data-centric offload architecture that has been employed by the world’s most advanced machine learning platforms,” says Anton Jacobsz, managing director of value-added distributor Networks Unlimited, a distribution partner of Mellanox.
Mellanox Technologies announced in June 2017 that leading deep learning frameworks such as TensorFlow, Caffe2, Microsoft Cognitive Toolkit and Baidu PaddlePaddle now leverage Mellanox’s smart offloading capabilities. Mellanox RDMA (remote direct memory access) and In-Network Computing offloads, together with NVIDIA GPUDirect, are key technologies enabling users to maximise application performance and system efficiency.
TensorFlow is an open source software library originally developed by researchers and engineers in Google’s Machine Intelligence research group. With RDMA technology in place of traditional TCP, data exchange between TensorFlow nodes was accelerated by 2X, enabling faster image processing.
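The performance gain comes largely from RDMA’s zero-copy semantics: the network adapter reads and writes application buffers directly, instead of staging data through intermediate copies the way a traditional TCP stack does. Purely as an illustrative analogy (not TensorFlow’s actual transport code), Python’s built-in memoryview shows the difference between copying a buffer and referencing it in place:

```python
# Illustrative analogy only: contrasts copy-on-read (TCP-style) with
# zero-copy access (RDMA-style) using Python's memoryview.

payload = bytearray(b"tensor-shard-0123456789" * 1000)

# TCP-style: constructing bytes from a slice materialises a new copy.
copied = bytes(payload[:1024])

# RDMA-style: a memoryview references the same underlying buffer,
# so no bytes are duplicated.
view = memoryview(payload)[:1024]

# Mutating the source is visible through the view (shared memory),
# while the copy keeps the old value.
payload[0] = ord("X")
print(view[0] == ord("X"))    # True: the view sees the change
print(copied[0] == ord("X"))  # False: the copy is stale
```

In a real RDMA transport, the zero-copy reference is held by the network adapter rather than by the application, but the principle is the same: less copying means lower latency and less CPU time per message.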
Baidu’s PaddlePaddle (Parallel Distributed Deep Learning) is a flexible and scalable deep learning platform. PaddlePaddle supports a wide range of neural network architectures and optimisation algorithms, making it possible to harness many CPUs and GPUs to accelerate training. PaddlePaddle leverages RDMA to achieve high throughput and performance, and takes advantage of the more advanced acceleration capabilities of the combined NVIDIA and Mellanox architectures to accelerate deep learning training by 2X.
“Advanced deep neural networks depend upon the capabilities of smart interconnect to scale to multiple nodes, and move data as fast as possible, which speeds up algorithms and reduces training time,” said Gilad Shainer, vice president of marketing at Mellanox Technologies, during the announcement. “By leveraging Mellanox technology and solutions, clusters of machines are now able to learn at a speed, accuracy and scale that push the boundaries of the most demanding cognitive computing applications.”
The announcement was also accompanied by a statement from Duncan Poole, director of platform alliances at NVIDIA: “Developers of deep learning applications can take advantage of optimised frameworks and NVIDIA’s upcoming NCCL 2.0 library, which implements native support for InfiniBand verbs and automatically selects GPUDirect RDMA for multi-node or NVIDIA NVLink when available for intra-node communications. NVIDIA NVLink is available in Pascal-based Tesla P100 systems, including the NVIDIA DGX-1 AI supercomputer, which has four Mellanox ConnectX-4 100 Gb/s adapters. This allows developers to focus on creating new algorithms and software capabilities, rather than performance tuning low-level communication collectives.”
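A collectives library such as NCCL exists to run operations like all-reduce, in which every GPU ends up holding the element-wise sum of all GPUs’ gradient buffers, as fast as the interconnect allows. As a toy, single-process simulation of the ring all-reduce idea (hypothetical teaching code, not NCCL’s implementation, which overlaps these steps on GPU and interconnect hardware), the algorithm can be sketched over plain Python lists:

```python
# Toy, single-process simulation of a ring all-reduce: after the
# collective, every "rank" holds the element-wise sum of all ranks'
# input vectors. Real libraries run the phases below in parallel
# across devices; here the ring is stepped sequentially for clarity.

def ring_all_reduce(buffers):
    n = len(buffers)              # number of ranks in the ring
    chunk = len(buffers[0]) // n  # assumes length divisible by n

    # Phase 1: reduce-scatter. At each step, rank r passes one chunk
    # to its right-hand neighbour, which adds it to its own copy.
    for step in range(n - 1):
        # Snapshot outgoing chunks first, so every rank sends its
        # pre-step data (simulating simultaneous transfers).
        msgs = []
        for r in range(n):
            c = (r - step) % n    # chunk index rank r sends
            msgs.append((c, buffers[r][c * chunk:(c + 1) * chunk]))
        for r in range(n):
            c, data = msgs[(r - 1) % n]  # receive from left neighbour
            for i, v in enumerate(data):
                buffers[r][c * chunk + i] += v
    # Now rank r holds the fully reduced chunk (r + 1) % n.

    # Phase 2: all-gather. The reduced chunks circulate around the
    # ring until every rank has all of them.
    for step in range(n - 1):
        msgs = []
        for r in range(n):
            c = (r + 1 - step) % n
            msgs.append((c, buffers[r][c * chunk:(c + 1) * chunk]))
        for r in range(n):
            c, data = msgs[(r - 1) % n]
            buffers[r][c * chunk:(c + 1) * chunk] = data
    return buffers

print(ring_all_reduce([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
# every rank ends with [12, 15, 18]
```

The appeal of the ring schedule is that each rank transfers roughly 2 x (n - 1) / n of the buffer regardless of the number of ranks, so the bandwidth of the links (NVLink within a node, InfiniBand between nodes) rather than the rank count dominates the cost.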
In conclusion, deep learning is helping to solve numerous big data challenges. It is perhaps best summed up by deep learning researcher Silvio Savarese, an associate professor of computer science at Stanford University and director of the school’s SAIL-Toyota Centre for AI Research, who says the following:
“Everything is powered by deep learning. We can do things we’ve never done before.”
By Staff Writer