Assignments

General Information

There are two lab assignments; this year we have renamed their subparts using letters, as the numbering was confusing last year. You should work in pairs (except for Lab C, see below). If you have a good reason for doing the assignments by yourself, please contact Nick, Mary or John. You need to pass all lab assignments in order to pass the course.

The assignments should be handed in using the Fire system.

The final deadline for all labs is May 23 at midnight. That means that you should aim to have all labs finished (and preferably also approved!) by then. Each lab should, in addition, be submitted at least once before the initial deadline for that lab.

Rules - IMPORTANT

Please read these early and carefully!
  1. Deadlines are hard. If you cannot make a deadline for some reason, contact us before that deadline, tell us your reason, and include a realistic proposal for a new personal deadline. You may then get an extension.
  2. Your last attempt has to be submitted before the final deadline. If you fail to do this, your submission will be rejected.
  3. Cheating is taken very seriously. Before you start working on the assignments, please read the note on cheating.

Submission

Clean Code

Before you submit your code, Clean It Up! Submitting clean code is really important, and simply the polite thing to do. After you feel you are done, spend some time cleaning your code: make it simpler, remove unnecessary things, and so on. We will reject your solution if it is not clean.

What to include

Your submission needs to include the following information:

Before you submit, please read the note on cheating.

You are supposed to submit your solution using the Fire system.

Note: You do NOT have to use zip/gzip/tar/bzip on your files! Just upload each file that is part of your solution. Don't forget to submit when you have uploaded all the files.

Benchmarking Requirements (Haskell part)

  1. Include Threadscope plots in the report and draw conclusions from them.
  2. Draw conclusions from the runtime statistics (+RTS -s).
  3. Play with different granularities (for example by introducing a threshold or depth parameter).
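To illustrate the third point, here is a minimal sketch (not part of any lab; the function and threshold are made up for illustration) of how a threshold parameter can control granularity: below the threshold we stop sparking and run sequentially, so tiny tasks never become sparks.

```haskell
import Control.Parallel (par, pseq)

-- Sequential reference version.
sfib :: Int -> Integer
sfib n | n < 2     = fromIntegral n
       | otherwise = sfib (n - 1) + sfib (n - 2)

-- Parallel version with a threshold: arguments below the threshold
-- are computed sequentially, keeping each spark reasonably large.
pfib :: Int -> Int -> Integer
pfib threshold n
  | n < threshold = sfib n
  | otherwise     = x `par` (y `pseq` (x + y))
  where
    x = pfib threshold (n - 1)
    y = pfib threshold (n - 2)
```

Varying the threshold (and watching the spark counts in +RTS -s and the Threadscope timeline) shows the trade-off between too many tiny sparks and too little parallelism.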

Tools

The stuDAT (Linux) computers have a recent Haskell Platform installed. We will also be using Amazon's Cloud (EC2) to provide you with access to parallel machines. More on that shortly.

The GHC user guide contains a chapter about Using GHC (command-line arguments and the like).

Threadscope is also available on the (Linux) stuDAT computers. (Type threadscope at the prompt.)

To get the Haskell Platform for your own laptop, go to the Haskell Platform page on haskell.org (which is a great source of Haskell info).

The Threadscope page includes information about how to use the tool, as well as how to install it.

Installing on Linux laptops seems straightforward. There are binary releases for Windows and Mac, thank goodness.

The slides on Threadscope and GHC events from last year's lecture 3, given by Andres Löh of Well-Typed, are available.

You should plan to get used to using Threadscope during the first week of the course.

Exercise 1 - Getting Started

Study the use of Threadscope. Write a simple merge sort program in Haskell and parallelise it.

Details.
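As a rough starting point only (your own solution may well differ), a merge sort parallelised with par/pseq and a depth bound could look like this:

```haskell
import Control.Parallel (par, pseq)

-- Plain merge of two sorted lists.
merge :: Ord a => [a] -> [a] -> [a]
merge [] ys = ys
merge xs [] = xs
merge (x:xs) (y:ys)
  | x <= y    = x : merge xs (y:ys)
  | otherwise = y : merge (x:xs) ys

-- Merge sort that sparks the two recursive calls in parallel down to
-- a given depth, then continues sequentially.  Note: `par` only
-- evaluates the sparked list to weak head normal form here; a more
-- serious version would force it fully (e.g. with deepseq).
msort :: Ord a => Int -> [a] -> [a]
msort _ []  = []
msort _ [x] = [x]
msort d xs
  | d <= 0    = merge (msort 0 as) (msort 0 bs)
  | otherwise = left `par` (right `pseq` merge left right)
  where
    (as, bs) = splitAt (length xs `div` 2) xs
    left  = msort (d - 1) as
    right = msort (d - 1) bs
```

The depth bound plays the same role as the granularity threshold discussed under the benchmarking requirements.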

Lab A

Parallelising scan and a Fast Fourier Transform algorithm in Haskell. The lab description also contains pointers to interesting papers and slides.

Details
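To give the flavour of the problem (a sketch only; the lab description is authoritative, and the operator must be associative for the combining step to be valid), an inclusive scan can be parallelised by divide and conquer: scan each half, then apply the last element of the left result to every element of the right.

```haskell
import Control.Parallel (par, pseq)

-- Divide-and-conquer inclusive scan (compare scanl1).  Requires f to
-- be associative.  As with the merge sort sketch, `par` only forces
-- the sparked list to weak head normal form; a real solution would
-- force it fully.
pscan :: (a -> a -> a) -> Int -> [a] -> [a]
pscan _ _ []  = []
pscan _ _ [x] = [x]
pscan f d xs
  | d <= 0    = scanl1 f xs
  | otherwise = ls `par` (rs `pseq` (ls ++ map (f (last ls)) rs))
  where
    (as, bs) = splitAt (length xs `div` 2) xs
    ls = pscan f (d - 1) as
    rs = pscan f (d - 1) bs
```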

Lab B

GPU Programming in CUDA and Obsidian. To install Obsidian:
cabal update
cabal install obsidian

Details [pdf]

The files referred to in the instructions for Lab B are inc.cu and LabB.hs. (The latter was slightly revised on April 19, so you may need to fetch the new one.)

Some additional information about Lab B.

The deadline for resubmission has now been set to Friday May 10 at midnight.

References

CUDA programming manual

Markus Billeter, Ola Olsson, Ulf Assarson. Efficient Stream Compaction on Wide SIMD Many-Core Architectures. High Performance Graphics 2009.
[pdf]

Koen Claessen, Mary Sheeran, and Bo Joel Svensson. Expressive Array Constructs in an Embedded GPU Kernel Programming Language.
In Proceedings of the 7th workshop on Declarative aspects and applications of multicore programming, DAMP ’12, 2012.
[pdf]

Alex Cole, Alistair McEwan, Geoffrey Mainland. Beauty And The Beast: Exploiting GPUs In Haskell.
Communicating Process Architectures, 2012
[available at hgpu.org]

Ola Olsson, Markus Billeter, Ulf Assarson. Clustered Deferred and Forward Shading.
High Performance Graphics (HPG), 2012.
[pdf]

Lab C

NESL-style programming and cost models. Repa programming. Writing a tutorial. You need to get into groups of four for this lab.

Details

Lab D

Distributed Erlang and Map-Reduce.

Details
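The lab itself is done in distributed Erlang; purely to illustrate the map-reduce data flow, here is a sequential Haskell sketch (the names are made up for illustration, not taken from the lab):

```haskell
import qualified Data.Map.Strict as M

-- Toy map-reduce: run a mapper producing key/value pairs, group the
-- pairs by key, then reduce each group.  The reducer should not rely
-- on the order of values within a group.
mapReduce :: Ord k
          => (a -> [(k, v)])   -- mapper
          -> (k -> [v] -> r)   -- reducer
          -> [a] -> [(k, r)]
mapReduce mapper reducer xs =
    [ (k, reducer k vs) | (k, vs) <- M.toList grouped ]
  where
    grouped = M.fromListWith (++) [ (k, [v]) | x <- xs, (k, v) <- mapper x ]

-- Example use: word count.
wordCount :: String -> [(String, Int)]
wordCount = mapReduce (\w -> [(w, 1)]) (\_ vs -> sum vs) . words
```

In the lab, the mapper and reducer calls are instead farmed out to Erlang processes on several nodes; the grouping step corresponds to shuffling pairs to the node responsible for each key.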