Parallel Computing
- TASK PARALLELISM
- 1.1 Task Creation and Termination (Async, Finish)
- 1.2 Tasks in Java's Fork/Join Framework
- 1.3 Computation Graphs, Work, Span
- 1.4 Multiprocessor Scheduling, Parallel Speedup
- 1.5 Amdahl's Law
- Mini Project 1: Reciprocal-Array-Sum using the Java Fork/Join Framework
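The fork/join pattern covered in this module can be sketched with Java's `RecursiveTask`. The class name and threshold below are illustrative choices, not the mini project's reference solution:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Divide-and-conquer reciprocal-array-sum using the fork/join framework.
class ReciprocalSum extends RecursiveTask<Double> {
    private static final int THRESHOLD = 1_000;   // illustrative cutoff
    private final double[] a;
    private final int lo, hi;

    ReciprocalSum(double[] a, int lo, int hi) {
        this.a = a; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Double compute() {
        if (hi - lo <= THRESHOLD) {               // small enough: sum sequentially
            double sum = 0.0;
            for (int i = lo; i < hi; i++) sum += 1.0 / a[i];
            return sum;
        }
        int mid = (lo + hi) / 2;
        ReciprocalSum left = new ReciprocalSum(a, lo, mid);
        ReciprocalSum right = new ReciprocalSum(a, mid, hi);
        left.fork();                              // run left half asynchronously
        double r = right.compute();               // compute right half in this thread
        return r + left.join();                   // wait for the forked half
    }

    static double sum(double[] a) {
        return ForkJoinPool.commonPool().invoke(new ReciprocalSum(a, 0, a.length));
    }
}
```

Forking one half and computing the other in the current thread (rather than forking both) avoids idling the calling worker.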
- FUNCTIONAL PARALLELISM
- 2.1 Futures: Tasks with Return Values
- 2.2 Futures in Java's Fork/Join Framework
- 2.3 Memoization
- 2.4 Java Streams
- 2.5 Data Races and Determinism
- Mini Project 2: Analyzing Student Statistics Using Java Parallel Streams
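The Java Streams topic above can be sketched as a parallel aggregation. The `Student` record and its fields are invented for illustration; the mini project's actual class differs:

```java
import java.util.List;

// Hypothetical student record; field names are illustrative only.
record Student(String name, double grade, boolean active) {}

class StudentStats {
    // Average grade of active students, computed with a parallel stream.
    static double averageActiveGrade(List<Student> students) {
        return students.parallelStream()
                       .filter(Student::active)
                       .mapToDouble(Student::grade)
                       .average()
                       .orElse(0.0);
    }
}
```

Because the pipeline uses only stateless, side-effect-free operations, switching between `stream()` and `parallelStream()` cannot change the result, which ties into the determinism discussion in 2.5.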
- LOOP PARALLELISM
- 3.1 Parallel Loops
- 3.2 Parallel Matrix Multiplication
- 3.3 Barriers in Parallel Loops
- 3.4 Parallel One-Dimensional Iterative Averaging
- 3.5 Iteration Grouping/Chunking in Parallel Loops
- Mini Project 3: Parallelizing Matrix-Matrix Multiply Using Loop Parallelism
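A minimal sketch of the loop-parallel matrix multiply from this module, parallelizing the outer `i` loop with an `IntStream` (one of several reasonable choices; the course also uses its own `forall` constructs):

```java
import java.util.stream.IntStream;

class MatMul {
    // Classic triple-loop matrix multiply with the outer loop run in parallel.
    // Each parallel iteration writes a disjoint row of C, so there is no race.
    static double[][] multiply(double[][] A, double[][] B) {
        int n = A.length, m = B[0].length, k = B.length;
        double[][] C = new double[n][m];
        IntStream.range(0, n).parallel().forEach(i -> {
            for (int j = 0; j < m; j++) {
                double sum = 0.0;
                for (int t = 0; t < k; t++) sum += A[i][t] * B[t][j];
                C[i][j] = sum;
            }
        });
        return C;
    }
}
```

The runtime chunks the index range across worker threads, which is the iteration-grouping idea from 3.5.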
- DATA FLOW SYNCHRONIZATION AND PIPELINING
- 4.1 Split-phase Barriers with Java Phasers
- 4.2 Point-to-Point Synchronization with Phasers
- 4.3 One-Dimensional Iterative Averaging with Phasers
- 4.4 Pipeline Parallelism
- 4.5 Data Flow Parallelism
- Mini Project 4: Using Phasers to Optimize Data-Parallel Applications
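The barrier-per-iteration structure from 4.1–4.3 can be sketched with `java.util.concurrent.Phaser`. The ping-pong buffering and strided chunking below are illustrative choices, not the course's code:

```java
import java.util.concurrent.Phaser;

// Jacobi-style one-dimensional iterative averaging with a Phaser barrier.
class IterativeAveraging {
    static void run(double[] x, int iterations, int nThreads) {
        double[][] bufs = { x, x.clone() };     // ping-pong buffers
        Phaser ph = new Phaser(nThreads);       // one registered party per thread
        Thread[] threads = new Thread[nThreads];
        for (int t = 0; t < nThreads; t++) {
            final int id = t;
            threads[t] = new Thread(() -> {
                for (int it = 0; it < iterations; it++) {
                    double[] in = bufs[it % 2], out = bufs[(it + 1) % 2];
                    // each thread updates a strided slice of the interior points
                    for (int i = 1 + id; i < in.length - 1; i += nThreads)
                        out[i] = (in[i - 1] + in[i + 1]) / 2.0;
                    ph.arriveAndAwaitAdvance(); // barrier between iterations
                }
            });
            threads[t].start();
        }
        for (Thread th : threads) {
            try { th.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
        if (iterations % 2 == 1)                // result ended up in the spare buffer
            System.arraycopy(bufs[1], 0, x, 0, x.length);
    }
}
```

A split-phase version would replace `arriveAndAwaitAdvance()` with separate `arrive()` and `awaitAdvance()` calls, letting each thread do local work between the two.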
Concurrent Programming
- THREADS AND LOCKS
- 1.1 Threads
- 1.2 Structured Locks
- 1.3 Unstructured Locks
- 1.4 Liveness
- 1.5 Dining Philosophers
- Mini Project 1: Locking and Synchronization
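The contrast between structured locks (1.2) and unstructured locks (1.3) can be sketched on a shared counter. Note that the two methods below guard `value` with *different* locks, so real code should pick one mechanism per shared field rather than mixing them on one instance:

```java
import java.util.concurrent.locks.ReentrantLock;

class Counter {
    private int value = 0;
    private final ReentrantLock lock = new ReentrantLock();

    // Structured lock: acquisition and release are tied to the method's scope.
    synchronized void incSync() { value++; }

    // Unstructured lock: explicit lock()/unlock(), paired in finally.
    void incLock() {
        lock.lock();
        try {
            value++;
        } finally {
            lock.unlock();      // released even if the critical section throws
        }
    }

    synchronized int get() { return value; }
}
```

Unstructured locks trade the safety of block scoping for flexibility, e.g. hand-over-hand locking or `tryLock` with timeouts, which connects to the liveness discussion in 1.4.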
- CRITICAL SECTIONS AND ISOLATION
- 2.1 Critical Sections
- 2.2 Object-Based Isolation (Monitors)
- 2.3 Concurrent Spanning Tree Algorithm
- 2.4 Atomic Variables
- 2.5 Read-Write Isolation
- Mini Project 2: Global and Object-Based Isolation
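The atomic-variables topic (2.4) can be sketched in a few lines; the class name here is illustrative:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Lock-free counter: getAndIncrement is a single atomic read-modify-write,
// so no explicit critical section is needed.
class AtomicCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    int next() { return value.getAndIncrement(); }  // atomic, returns old value
    int current() { return value.get(); }
}
```

An atomic increment behaves as if the read-increment-write were wrapped in object-based isolation on the counter, but is typically implemented with hardware compare-and-swap rather than a lock.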
- ACTORS
- 3.1 Actors
- 3.2 Actor Examples
- 3.3 Sieve of Eratosthenes Algorithm
- 3.4 Producer-Consumer Problem
- 3.5 Bounded Buffer Problem
- Mini Project 3: Sieve of Eratosthenes Using Actor Parallelism
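The actor library used in this module is not standard Java, but the bounded-buffer and producer-consumer ideas from 3.4–3.5 (and an actor's mailbox) can be sketched with a blocking queue; names and the capacity are illustrative:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Bounded-buffer producer/consumer: put() blocks when the buffer is full,
// take() blocks when it is empty, so no explicit condition variables are needed.
class BoundedBufferDemo {
    static int consumeSum(int n) throws InterruptedException {
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(4); // capacity bound
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= n; i++) buffer.put(i);  // blocks when full
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        int sum = 0;
        for (int i = 0; i < n; i++) sum += buffer.take();    // blocks when empty
        producer.join();
        return sum;
    }
}
```

An actor runtime layers one such mailbox queue per actor and a single message-processing loop on top, which is what guarantees each actor handles one message at a time.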
- CONCURRENT DATA STRUCTURES
- 4.1 Optimistic Concurrency
- 4.2 Concurrent Queue
- 4.3 Linearizability
- 4.4 Concurrent Hash Map
- 4.5 Concurrent Minimum Spanning Tree Algorithm
- Mini Project 4: Parallelization of Boruvka's Minimum Spanning Tree Algorithm
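The concurrent hash map topic (4.4) can be sketched with a thread-safe word count; the helper class is illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Stream;

class WordCount {
    // Thread-safe word count: ConcurrentHashMap.merge performs the
    // read-modify-write for each key atomically, so parallel updates
    // to the same word cannot be lost.
    static Map<String, Integer> count(Stream<String> words) {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        words.parallel().forEach(w -> counts.merge(w, 1, Integer::sum));
        return counts;
    }
}
```

Per-key atomic operations like `merge` and `compute` are one face of the optimistic-concurrency idea from 4.1: contention is handled per bucket rather than by locking the whole map.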
Distributed Computing
- DISTRIBUTED MAP REDUCE
- 1.1 Introduction to Map-Reduce
- 1.2 Hadoop Framework
- 1.3 Spark Framework
- 1.4 TF-IDF Example
- 1.5 PageRank Example
- Mini Project 1: PageRank with Spark
- CLIENT-SERVER PROGRAMMING
- 2.1 Introduction to Sockets
- 2.2 Serialization/Deserialization
- 2.3 Remote Method Invocation
- 2.4 Multicast Sockets
- 2.5 Publish-Subscribe Model
- Mini Project 2: File Server
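The serialization/deserialization step (2.2) can be sketched by round-tripping a `Serializable` object through a byte stream; this is the same mechanism used to ship objects over sockets and in RMI, though the mini project's actual protocol may differ:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

class SerDemo {
    // Encode an object graph into bytes suitable for a socket or a file.
    static byte[] serialize(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray();
    }

    // Rebuild the object graph from the received bytes.
    static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }
}
```

In a client-server setting the `ObjectOutputStream`/`ObjectInputStream` pair would wrap the socket's output and input streams directly instead of byte arrays.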
- MESSAGE PASSING
- 3.1 Single Program Multiple Data (SPMD) model
- 3.2 Point-to-Point Communication
- 3.3 Message Ordering and Deadlock
- 3.4 Non-Blocking Communications
- 3.5 Collective Communication
- Mini Project 3: Matrix Multiply in MPI
- COMBINING DISTRIBUTION AND MULTITHREADING
- 4.1 Processes and Threads
- 4.2 Multithreaded Servers
- 4.3 MPI and Threading
- 4.4 Distributed Actors
- 4.5 Distributed Reactive Programming
- Mini Project 4: Multi-Threaded File Server