Parallel Processing with HPC

Accelerate your HPC jobs with parallel processing, OpenMP threading, and Message Passing Interface (MPI) routines

Parallel processing is a way to scale applications to handle large datasets and complex problems. The course begins with a review of parallel job submission on Spartan and the parallel extensions available in some applications, then moves to a consideration of HPC system architecture and the various limitations and bottlenecks in parallel processing.
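As an illustration of what parallel job submission looks like, Spartan uses the Slurm scheduler; a minimal multi-core batch script might look like the sketch below. The job name, resource values, and program name are illustrative assumptions, not Spartan-specific settings — check Spartan's documentation for current partition and module details.

```shell
#!/bin/bash
# Illustrative Slurm batch script for a shared-memory (multi-threaded) job.
# All values here are examples; consult the Spartan documentation for
# the partitions, modules, and limits that apply to your account.
#SBATCH --job-name=omp-example
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --time=00:10:00

# Match the OpenMP thread count to the cores Slurm allocated.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# Run the (hypothetical) compiled program.
./my_openmp_program
```

A distributed-memory (MPI) job would instead request multiple tasks with `--ntasks` and launch the program with `srun` or `mpiexec`.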

This is followed by an introduction to the two core approaches in parallel programming: shared memory and distributed memory, using OpenMP threads and MPI message-passing routines. A number of programming examples and opportunities for development are provided.

This course is FREE. To attend you must be from an AAF institution (this includes the University of Melbourne and surrounding institutes).

Register for our upcoming workshops here.

Don't see a workshop for your tool of interest? Sign up for tool specific updates.

Want to hear more from Research Platform Services? Subscribe to our quarterly newsletter.

Duration: 5 hours

Format: 1 day intensive

Frequency: Quarterly

What you need: Please bring your laptop and a desire to learn the basics of parallel processing.

Lev, Lead Trainer in HPC

Lev Lafayette, Lead Trainer for everything related to High Performance Computing, Spartan, Linux shell scripting, and parallel programming