Deep Learning

Welcome to the course web page for the Deep Learning Course at Chalmers University of Technology.

This course will unfortunately not be given in the academic year 2016-2017.

It was given May-June 2016.

The page contains outdated information, but please feel free to look at what we did during the course!

Student projects

The book of abstracts for the student projects is now available. Download book-of-abstracts-deeplearningchalmers-2016 now.


Artificial neural networks are models inspired by the mammalian brain. Many artificial neurons communicating in a network, much like neurons in the brain, make up a powerful model that has been shown to approximate virtually any continuous function. During the 1990s, interest in artificial neural networks declined: processing power and data availability were limited, and other techniques often worked better. In the 2000s, the availability of big data and the advent of faster computers (and graphics processing units able to perform fast mathematical computations) made it possible to train significantly larger models. These models gave rise to the term Deep Learning, and they have outperformed the previous state of the art in tasks such as image recognition and natural language processing.
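To make the idea concrete, here is a minimal sketch (not part of the course material; all weights and function names are made up for illustration) of how a few artificial neurons combine weighted inputs through a nonlinearity to form a small feed-forward network:

```python
import math

def neuron(inputs, weights, bias):
    # A single artificial neuron: weighted sum of inputs plus a bias,
    # passed through a nonlinear activation (tanh here).
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)) + bias)

def forward(x, hidden_layer, output_weights, output_bias):
    # hidden_layer: list of (weights, bias) pairs, one per hidden neuron.
    hidden = [neuron(x, w, b) for w, b in hidden_layer]
    # A linear output neuron combines the hidden activations.
    return sum(w * h for w, h in zip(output_weights, hidden)) + output_bias

# A tiny hand-set network: two inputs, two hidden neurons, one output.
hidden_layer = [([1.0, -1.0], 0.0), ([0.5, 0.5], -0.2)]
y = forward([0.3, 0.7], hidden_layer, [1.0, 1.0], 0.0)
```

Stacking more such layers, and learning the weights from data instead of setting them by hand, is what gives deep networks their expressive power.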

Course content

The course is made up of multiple modules. Completing a module involves watching video lectures on a specific topic, solving quizzes, and participating in discussion sessions twice a week.

The content is accessible through the learning platform Scalable Learning.

Important: If you have not received an enrollment key for Scalable Learning through email, please notify one of the teaching assistants.

Course outline

  • Preliminaries
  • Recurrent Neural Networks
  • Convolutional Neural Networks
  • Unsupervised and semi-supervised methods (Auto-Encoders, Ladder Networks)
  • Regularization (Norm penalty, Dropout, Batch normalization)
  • Project presentations
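As a taste of the regularization module, here is a hypothetical sketch (not course material) of inverted dropout, one of the listed techniques: during training each activation is randomly zeroed with probability `p_drop` and the survivors are rescaled, so that the expected activation matches test time.

```python
import random

def dropout(activations, p_drop=0.5, train=True, rng=random.Random(0)):
    # Inverted dropout: at training time, zero each activation with
    # probability p_drop and scale survivors by 1 / (1 - p_drop) so the
    # expected value is unchanged; at test time, pass values through.
    if not train:
        return list(activations)
    keep = 1.0 - p_drop
    return [a / keep if rng.random() < keep else 0.0 for a in activations]
```

The rescaling is the detail that makes the same network usable at test time without any extra correction.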

Video lectures

The following selection of video lectures was used in this course.

  • Hugo Larochelle, Introduction (Lec1, Lec2:1-2:11)
  • Andrej Karpathy, Convnets
  • Justin Johnson, Spatial localization and detection
  • Andrej Karpathy, Understanding and visualizing Convnets
  • Richard Socher, Deep learning for NLP
  • Alex Graves, Recurrent neural networks
  • Yoshua Bengio, Deep generative models
  • Geoffrey Hinton, Regularization (9A, 9B, 9C, 9E, 10E)
  • Nando de Freitas, Deep reinforcement learning
  • Andrej Karpathy, Recurrent Neural Networks, Image Captioning, LSTM

Practical Information

The course will be given in LP4, 2016 (2016-03-21 – 2016-06-04). The workload will correspond to 7.5 hec.

This is a PhD course targeting students who are interested in machine learning and artificial neural networks. Some mathematical maturity is expected, and basic courses in linear algebra and machine learning (equivalent to TDA 231 Algorithms for Machine Learning & Inference or FFR135 Artificial Neural Networks) are required before taking this course.

The course will be carried out as a flipped classroom, which means that every participant has the mandatory homework of watching video lectures at home before attending the course sessions. At the sessions, we will provide material to support discussion of the content.


The examination is split into two parts: 1) participation in discussion sessions, and 2) completion and presentation of a small project. The project should be completed in groups of two students. For the grade 3 (G), you have to participate in at least 80% of the discussion sessions and present a project of sufficient quality in front of your peers at the end of the course. A short report on the project should also be submitted. Students who present projects that show a deeper understanding of the subject matter can obtain the grade 4 or 5 (VG).

  1. The student should have watched the video lectures of the corresponding module by the day before the discussion session. Videos are accessible through Scalable Learning (see below).
  2. Students should actively participate in at least 80% of the discussion sessions.
  3. At the end of the course, each project group will have to present their project in front of their peers. A short report on the project should also be submitted.


  • Groups of two
  • Cover a couple of topics from the course for a passing grade
  • More topics for higher grades
  • Project presentation:
    • 20 min including questions
    • Tuesday, May 31st, 13:15-15:00 and Thursday, June 2nd, 13:15-15:00.
  • Project report:
    • Due May 31st
    • 4 pages short paper, including references
  • Planning report:
    • Due: April 14th
    • Short: 1 page
  • Inspiration:

Project proposals

  • Sentiment classification
    Raphael Konstantinou, Sebastian Eriksson

  • Facebook feed categorization
    Josefin Ondrus, Gabriel Andersson

  • Playing Solitaire using CNN and Deep Reinforcement Learning
    Robert Nyquist, Daniel Pettersson

  • Multi-word expression detection
    David Alfter, Luis Nieto Piña

  • Deep reinforcement learning for autonomous highway driving
Carl-Johan Hoel

  • Classification of human facial expression
    Arvid Nilsson, Jonas Karlsson

  • Recognition of Handwritten Mathematical Expressions
    Christian Ågren, Alexander Ågren

  • Distracted Driver Detection
    Florian Schäfer, Mats Uddgård

  • Spotting distracted drivers
    Emilio Jorge, Benjamin Lindberg

  • Image captioning
    Luca Caltagirone, Sina Torabi


Registration is now closed. MSc students who filled out the registration form have received a notification of their enrollment status. All enrolled students have received an email with instructions on how to follow the course.


Please join our group for announcements and discussions.

Teaching assistants (PhD students behind the initiative)

  • Olof Mogren (mogren at
  • Mikael Kågebäck (kageback at
  • Fredrik Johansson (frejohk at

Course responsible: Devdatt Dubhashi.


Discussion sessions

  • Tuesday, 13:15 – 15:00 (ML13)
  • Thursday, 13:15 – 15:00 (ML1)