
Playlist "FrOSCon 2019"

Introduction to Deep Learning

Haitham Bjanthalah

Deep learning has been riding a wave of hype for the last few years, and it is being adopted by small startups and big enterprises alike. As an enthusiastic developer, you might be interested in getting in touch with deep learning to see what the hype is all about, and if you're not, you should be! Deep learning can be a new way of looking at problems and of developing innovative ways to solve them. By the end of this talk, you will have a good understanding of what is out there in the deep learning world, including frameworks, languages, popular existing deep learning networks, cloud providers, and more.


This talk is in three parts. First, we will present the most common deep learning frameworks, their advantages and disadvantages, and the languages they support. Then we will talk about cloud providers that support deep learning. In the last section, we will share our experience with deep learning on the cloud.


For many frameworks, Python seems to be the preferred language, largely due to its ease of use and extensive community support. However, most frameworks also support other languages such as C++, R, and Java. For the most part, the combination of preferred language and intended use case narrows down the options for a deep learning framework.
Currently, there are many deep learning frameworks; the most common ones are TensorFlow, Keras, PyTorch, Caffe, and Theano. Some of them excel at image classification challenges, while others are better suited to natural language processing or sentiment analysis. Your choice of framework should depend on many factors, including the intended application, the preferred language, and the availability of good documentation and community support. Frameworks usually default to running on CPUs and need extra setup to run on GPUs, but these steps are straightforward enough that they should not be a deciding factor. GPUs have a huge advantage over CPUs in training and inference speed thanks to their parallelization capabilities.
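
As a small, hedged illustration of the GPU point above (this is not code from the talk), the following sketch checks whether TensorFlow can see a GPU and builds a tiny Keras model that is placed on it automatically when one is available; it assumes a recent TensorFlow 2.x installation with the CUDA drivers already set up.

```python
# Minimal sketch (illustrative, not from the talk): check GPU visibility in
# TensorFlow and build a tiny Keras model. Assumes TensorFlow 2.x with a
# working CUDA/cuDNN setup when a GPU is expected.
import tensorflow as tf

# An empty list here means TensorFlow will fall back to the CPU.
gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)

# A small fully connected model; TensorFlow places its operations on the
# first visible GPU automatically, otherwise they run on the CPU.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```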


These days there are countless cloud providers offering computing power, and many of them have responded to the growing interest in this branch of AI by providing VMs specially tailored for deep learning. All the big players (Google Cloud, AWS, Azure) provide GPU-enabled machines with deep learning frameworks pre-configured. However, there are also smaller cloud providers that offer interesting models in order to survive the competition.
When deciding which provider to go with, you should consider many factors, including the pricing model, the actual computing power delivered, the available regions, and the existence of pre-configured VMs for deep learning. For example, AWS has a good pricing model and a great choice of deep learning frameworks, but for a while it was impossible to provision a new machine because all of the resources were already booked. It is also worth mentioning that some cloud providers offer trial periods or sign-up credit for testing their services, so always be on the lookout for those!
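
To give a feel for how little ceremony provisioning such a machine involves, here is a hedged sketch that launches a single GPU instance on AWS using the boto3 SDK; the AMI ID, key pair name, and region are hypothetical placeholders, not values from the talk.

```python
# Illustrative sketch: provision one GPU instance on AWS EC2 with boto3.
# The AMI ID and key pair name are placeholders; in practice you would pick
# a Deep Learning AMI for your region so the frameworks come pre-configured.
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")  # placeholder region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: a Deep Learning AMI ID
    InstanceType="p2.xlarge",         # entry-level GPU instance type
    KeyName="my-key-pair",            # placeholder: your SSH key pair
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])
```

Of course, a successful API call still depends on GPU capacity actually being available in the chosen region, which is exactly the availability problem mentioned above.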


At Viaboxx we are interested in trying out new technologies, so we got started with deep learning by building and running a deep learning showcase on a cloud provider. We used Keras with a TensorFlow backend for the source code and Kaggle as the source of data. For the VM that performs the training, we compared different cloud providers and chose a smaller one, Paperspace. Although the experience was not free of hiccups, we ended up with a good understanding of the technologies and stacks involved.
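
To give an idea of what such a showcase can look like in code, here is a minimal, hypothetical sketch of a Keras-on-TensorFlow training script for a tabular Kaggle-style dataset; the file name, column names, and network size are illustrative placeholders rather than the actual Viaboxx showcase code.

```python
# Illustrative sketch of a Keras (TensorFlow backend) training script for a
# tabular Kaggle-style dataset. File name, columns, and architecture are
# placeholders, not the actual showcase code.
import pandas as pd
import tensorflow as tf

# Hypothetical CSV downloaded from Kaggle, with integer class labels in the
# "label" column and numeric features in the remaining columns.
data = pd.read_csv("train.csv")
features = data.drop(columns=["label"]).values
labels = data["label"].values

# Small fully connected classifier (10 output classes, as an example).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(features.shape[1],)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training runs on the GPU automatically when the VM (e.g. a Paperspace
# instance) exposes one; otherwise Keras falls back to the CPU.
model.fit(features, labels, epochs=10, batch_size=64, validation_split=0.1)
```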