Artificial neural networks (ANNs) have become very popular among data scientists. Although ANNs have been known since the 1940s, their current popularity is due to the emergence of modern architectures such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs). CNNs and RNNs have shown exceptional superiority over other machine learning algorithms in computer vision, speech recognition, acoustic modeling, language modeling, and natural language processing (NLP). Machine learning algorithms based on ANNs are collectively referred to as deep learning.

Developing deep learning models from scratch is not an easy task: complex architectures can require weeks of coding before a working model is ready. Since similar ANNs can be applied to various tasks and data types, the need for a simple way to implement deep learning models seems quite natural. This need is met by specially designed deep learning platforms, or frameworks.

So, what should they be able to do? Well,

  • First, the platform’s interface and libraries should make it possible to build ANN models without getting into the details of the underlying algorithms. In other words, it should offer a wide range of ready-made algorithms that can be applied with minimal effort;
  • Second, it should be flexible enough to handle all data types: numeric, text, audio, visual, or their combinations. This makes the platform suitable for different kinds of tasks;
  • The third requirement is good performance: well-optimized algorithms reduce the computation time needed to create a model;
  • Fourth, computation time can also be reduced by decreasing the number of computations, and one of the most efficient ways to do so is to parallelize them. A good deep learning framework has a built-in parallelization mechanism;
  • Fifth, it should have a large and qualified community. The reasons for this requirement are obvious: the community is the driving force behind the development and improvement of a platform. To foster such a community, the platform should be highly integrable and open source.

Of course, deep learning platforms meet the above requirements to different degrees. Here, we’ll consider the most popular platforms in detail.


TensorFlow was developed by the Google Brain team. It is currently the most popular deep learning platform, with the largest number of contributors and the greatest number of articles on Medium and arXiv.

What is good about TensorFlow? It satisfies all the requirements of a good framework. One of its biggest advantages is support for multiple languages (C++, R, Python, Java, and others). TensorFlow is open source, has excellent support, and ships with many pre-written ANN algorithms. It also supports computation on multiple CPUs and GPUs. However, TensorFlow requires coding skills and careful attention to the ANN architecture.
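As a minimal sketch of what TensorFlow code looks like (assuming TensorFlow 2.x is installed; the tensor values are arbitrary), here is a small computation that TensorFlow places on a GPU automatically when one is available:

```python
import tensorflow as tf

# Two constant tensors; TensorFlow decides the device placement
# (CPU or GPU) for the computation automatically.
m = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
v = tf.constant([[1.0],
                 [1.0]])

# Matrix-vector product: [[1+2], [3+4]] = [[3.], [7.]]
result = tf.matmul(m, v)
print(result.numpy())
```

The same low-level operations compose into full ANN models, which is where the coding skills and architectural attention mentioned above come into play.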

For the moment, TensorFlow remains the champion among deep learning platforms.


Keras is the second most popular platform. Its biggest difference from TensorFlow (as well as from Theano, PyTorch, and MXNET) is that it is a high-level ANN API; that is, Keras works at a different level of abstraction. While lower-level APIs operate with mathematical operations and ANN primitives, Keras operates with ANN abstractions. This makes Keras user-friendly and the best choice for people who are new to deep learning. Keras is also well suited for fast experiments with ANN models. At the same time, it gives access to the lower-level frameworks and can integrate with common machine learning packages such as scikit-learn.
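To illustrate the level of abstraction, here is a minimal sketch of a Keras model (assuming the Keras API bundled with TensorFlow; the layer sizes and activations are arbitrary choices for the example). Note that the code talks about whole layers, not individual tensor operations:

```python
from tensorflow import keras

# A tiny binary classifier defined layer by layer; Keras hides the
# underlying graph of mathematical operations.
model = keras.Sequential([
    keras.layers.Input(shape=(10,)),            # 10 input features
    keras.layers.Dense(32, activation="relu"),  # hidden layer
    keras.layers.Dense(1, activation="sigmoid") # output probability
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```

After `compile`, the model is ready for `model.fit(...)` on training data, which is why Keras is so convenient for quick experiments.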

Keras is second in citations in arXiv articles. Large scientific organizations, such as CERN and NASA, have used Keras.


PyTorch is a low-level deep learning framework; in terms of coding style, it lies between Keras and TensorFlow. It was developed by Facebook’s research group.

The advantage of PyTorch is its ability to work with dynamic computation graphs: the architecture can change during runtime, which makes PyTorch memory efficient. The platform also supports data parallelism, which underlies its strong GPU acceleration.
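A minimal sketch of what a dynamic graph means in practice (assuming PyTorch is installed; the tensor values are arbitrary): the graph is built as operations execute, so ordinary Python control flow can change the architecture on every forward pass.

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

# The graph is recorded while the code runs, so an if-statement can
# select a different architecture on each forward pass.
if x.sum() > 0:
    y = (x * 2).sum()   # this branch is recorded in the graph...
else:
    y = (x ** 2).sum()  # ...while the untaken one leaves no trace

y.backward()
print(x.grad)  # gradients of the branch that actually ran: [2., 2., 2.]
```

Only the operations that actually executed are stored for the backward pass, which is part of why the dynamic approach is memory efficient.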


Caffe was developed primarily to provide deep learning for image classification, and this is its biggest advantage: it can learn from images with high processing speed. Caffe is also open source.


Theano is the fifth most popular deep learning platform. It is a Python library that performs mathematical operations on multidimensional arrays and optimizes code compilation. Theano is mostly used in scientific research applications.


There are plenty of other platforms, such as MXNET, CNTK, DeepLearning4J, FastAI, Chainer, and more. The choice of a deep learning platform mostly depends on the tasks you need to solve, the size of the community you can turn to with questions, and your level of coding skill. However, deep learning is a rapidly developing field: new frameworks appear constantly, and some of them may come to dominate in the near future.