Deep Learning Platforms

Artificial neural networks (ANNs) have become very popular among data scientists in recent years. Although ANNs have existed since the 1940s, their current popularity is due to the emergence of modern architectures such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs). CNNs and RNNs have shown exceptional superiority over other Machine Learning algorithms in computer vision, speech recognition, acoustic modeling, language modeling, and natural language processing (NLP). Machine Learning algorithms based on ANNs are collectively referred to as Deep Learning.

Requirements for Deep Learning Platforms

Developing deep learning models from scratch is not an easy task: complex deep learning architectures can require weeks of coding before a working model is ready. Given that similar ANNs can be applied to various tasks and data types, the need for a simple way to implement deep learning models is quite natural. This need is addressed by specially designed deep learning platforms, or frameworks.

So, what should these systems be able to do? Here are the main requirements:

  • First, the platform’s interface and libraries should allow users to build ANN models without getting into the details of the underlying algorithms. In other words, it should provide a wide range of ready-made algorithms that can be applied with minimal effort;
  • Second, it should be flexible enough to handle all data types: numeric, text, audio, visual, or any combination of these. This makes the platform suitable for many different kinds of tasks;
  • Third, it should offer good performance: well-optimized algorithms reduce the computation time needed to build a model;
  • Fourth, computation time can be reduced further by distributing the work. One of the most efficient ways to speed up training is to parallelize computations, so a good deep learning framework should have a built-in parallelization mechanism;
  • Fifth, it should have qualified and substantial community support. The reasons are obvious: the community is the driving force behind a platform’s development and improvement. To foster such a community, a platform should be open source and easy to integrate with other tools.

Of course, deep learning platforms meet these requirements to different degrees. We’ll consider the most popular platforms in more detail below.

TensorFlow

TensorFlow was developed by the Google Brain team. It is currently the most popular deep learning platform, with the largest number of contributors and the greatest number of related articles on Medium and arXiv.

What are the merits of TensorFlow? First and foremost, it satisfies all the requirements of a good framework. One of its biggest advantages is that it supports multiple languages (C++, R, Python, Java, and others). TensorFlow is open source, has excellent support, and ships with many pre-written ANN algorithms. It also supports computation on multiple CPUs and GPUs. It is important to note, however, that TensorFlow requires solid coding skills and careful attention to ANN architecture.
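
To give a feel for this lower-level coding style, here is a minimal sketch (assuming TensorFlow 2.x with eager execution and the GradientTape API) that fits a single linear layer by hand; the toy data and hyperparameters are purely illustrative.

```python
import tensorflow as tf

# Toy data: y = 3x + 2 plus a little noise.
x = tf.random.normal([256, 1])
y = 3.0 * x + 2.0 + tf.random.normal([256, 1], stddev=0.1)

# Trainable parameters of a single linear layer.
W = tf.Variable(tf.random.normal([1, 1]))
b = tf.Variable(tf.zeros([1]))
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

for step in range(200):
    with tf.GradientTape() as tape:
        y_pred = tf.matmul(x, W) + b                    # forward pass
        loss = tf.reduce_mean(tf.square(y_pred - y))    # mean squared error
    grads = tape.gradient(loss, [W, b])                 # automatic differentiation
    optimizer.apply_gradients(zip(grads, [W, b]))       # gradient descent step

print(W.numpy(), b.numpy())  # should end up close to 3 and 2
```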

For the moment, however, TensorFlow remains the champion among deep learning platforms.

Keras

Keras is the second most popular platform. Its biggest difference from TensorFlow (as well as from Theano, PyTorch, and MXNET) is that it is a high-level ANN API, i.e. it works at a different level of abstraction. While lower-level APIs operate with mathematical operations and ANN primitives, Keras operates with ANN abstractions. This makes Keras user-friendly and therefore the best option for people who are new to deep learning. Keras is also a good platform for running quick experiments with ANN models. At the same time, it gives access to the lower-level frameworks it runs on and integrates with common machine learning packages, such as scikit-learn.
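
For comparison, here is a minimal Keras sketch (tf.keras assumed; the layer sizes and toy data are illustrative) showing how a small classifier is assembled from high-level layer abstractions rather than from individual tensor operations.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# A tiny binary classifier built from layer abstractions.
model = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(20,)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy data, only to make the example runnable end to end.
X = np.random.rand(500, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("float32")

model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```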

Keras is the second most-cited platform in arXiv articles, and prominent scientific organizations such as CERN and NASA have used it.

PyTorch

PyTorch is a low-level deep learning framework whose coding style places it somewhere between Keras and TensorFlow. It was developed by Facebook’s AI research group.

The main advantage of PyTorch is its ability to work with dynamic computation graphs: the graph is built on the fly, so the network’s architecture can change at runtime, which also makes PyTorch memory efficient. The platform also supports data parallelism, which contributes to its strong GPU acceleration.
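
The sketch below illustrates what a dynamic graph means in practice: the forward pass contains ordinary Python control flow, so the graph is rebuilt on every call (the module and layer sizes are illustrative assumptions, not a prescribed architecture).

```python
import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(10, 10)
        self.out = nn.Linear(10, 1)

    def forward(self, x):
        h = torch.relu(self.hidden(x))
        # Ordinary Python control flow: the number of extra hidden passes is
        # decided at runtime, so each call can build a different graph.
        for _ in range(torch.randint(1, 4, (1,)).item()):
            h = torch.relu(self.hidden(h))
        return self.out(h)

net = DynamicNet()
x = torch.randn(8, 10)
loss = net(x).sum()
loss.backward()  # gradients follow whatever graph this particular call produced
```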

Caffe

Caffe was developed primarily to provide deep learning for image classification, and this is its biggest advantage: it can learn from images at high processing speed. Caffe is also open source.
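
As a rough illustration of a typical Caffe workflow, the sketch below uses the pycaffe Python bindings to run a pretrained classification model. The file names ('deploy.prototxt', 'model.caffemodel', 'cat.jpg') and the output blob name 'prob' are placeholders that depend on the particular model definition.

```python
import caffe

caffe.set_mode_cpu()
# Load a pretrained network defined by a .prototxt file and a .caffemodel weights file.
net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

# Preprocess an input image into the shape the network expects.
image = caffe.io.load_image('cat.jpg')
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))  # HWC -> CHW
net.blobs['data'].data[...] = transformer.preprocess('data', image)

# Run a forward pass and read the class probabilities from the output blob.
output = net.forward()
print(output['prob'].argmax())  # index of the most likely class (blob name depends on the model)
```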

Theano

Theano is the fifth most popular deep learning platform. It is a Python library that defines mathematical operations on multidimensional arrays and compiles them into optimized code. Theano is mostly used for scientific research applications.
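
A minimal sketch of that workflow (array shapes chosen arbitrarily for illustration): symbolic expressions over multidimensional arrays are declared first and then compiled into an optimized callable function.

```python
import numpy as np
import theano
import theano.tensor as T

# Declare symbolic multidimensional inputs and build an expression graph.
x = T.dmatrix('x')
y = T.dmatrix('y')
z = T.dot(x, y) + T.exp(x).sum()

# Compile the graph into an optimized callable function.
f = theano.function([x, y], z)

a = np.ones((2, 3))
b = np.ones((3, 2))
print(f(a, b))
```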

Conclusions

There are plenty of other platforms, such as MXNET, CNTK, DeepLearning4J, FastAI, Chainer, and others. Which deep learning platform you choose mostly depends on the tasks you need to solve, the size of the community you can turn to with questions, and your level of coding expertise. However, deep learning is a rapidly developing field; new frameworks appear constantly, and some of them may come to dominate in the near future.
