rust-lang / book

The Rust Programming Language

Home Page: https://doc.rust-lang.org/book/

Ch 20.2. cloning receiver seems antithetical to Ch 16

shaurya947 opened this issue

  • I have searched open and closed issues and pull requests for duplicates, using these search terms:
    • "clone receiver"
    • "ch20"
    • "std::mpsc"
  • I have checked the latest main branch to see if this has already been fixed, in this file:
    • src/ch20-02-multithreaded.md

URL to the section(s) of the book with this problem: https://doc.rust-lang.org/book/ch20-02-multithreaded.html#sending-requests-to-threads-via-channels

Description of the problem:
Chapter 16 introduces two ways of communicating between threads: message passing and memory sharing. The message passing approach is demonstrated via the use of std::sync::mpsc::channel. The multiple producer, single consumer nature is reiterated several times, especially in the last example of 16.2. where we clone the transmitter and move the clones into several threads. At this point, the reader is left wondering about the possibility of having multiple receiving ends instead.
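
For reference, the shape of that last 16.2. example is roughly the following (thread count and messages are just placeholders): only the transmitter gets cloned, and there is exactly one receiver.

use std::{sync::mpsc, thread};

fn main() {
    let (tx, rx) = mpsc::channel();

    // Multiple producers: each spawned thread gets its own clone of tx.
    for id in 0..3 {
        let tx = tx.clone();
        thread::spawn(move || {
            tx.send(format!("hi from producer {id}")).unwrap();
        });
    }
    // Drop the original transmitter so the channel closes once the clones are done.
    drop(tx);

    // ...but only a single consumer: rx cannot be cloned.
    for received in rx {
        println!("Got: {received}");
    }
}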

16.3. then introduces Arc<Mutex<T>> as a general-purpose tool for safely sharing memory between threads. The examples in 16.3. only demonstrate updating a counter variable, but a reader who is actively connecting the dots between chapters and sub-chapters will finish 16.3. with the impression that Arc<Mutex<T>> is probably the way to go when spmc or mpmc channel behavior is needed: T becomes a queue of some sort (such as VecDeque), producers push to the back, and consumers pop from the front.
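
(The 16.3. examples look roughly like this condensed sketch: a shared counter behind Arc<Mutex<T>>, nothing channel-like.)

use std::{
    sync::{Arc, Mutex},
    thread,
};

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];

    for _ in 0..10 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            // Each thread locks the mutex and bumps the shared counter.
            *counter.lock().unwrap() += 1;
        }));
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Result: {}", *counter.lock().unwrap());
}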

Given all of the above, it seems antithetical for Ch 20.2. to go ahead and effectively clone the receiver anyway by wrapping it in an Arc<Mutex<T>>.
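
To make the tension concrete, the pattern in Ch 20.2. boils down to something like the following condensed sketch (not the book's exact listing): the single receiver is wrapped in Arc<Mutex<..>> and every worker gets a handle to it.

use std::{
    sync::{mpsc, Arc, Mutex},
    thread,
    time::Duration,
};

type Job = Box<dyn FnOnce() + Send + 'static>;

fn main() {
    let (sender, receiver) = mpsc::channel::<Job>();
    // One receiver, shared by all workers via Arc<Mutex<..>>.
    let receiver = Arc::new(Mutex::new(receiver));

    for id in 0..4 {
        let receiver = Arc::clone(&receiver);
        thread::spawn(move || loop {
            // Lock, pull one job, release the lock (the temporary guard is
            // dropped at the end of this statement), then run the job.
            let job = receiver.lock().unwrap().recv().unwrap();
            println!("Worker {id} got a job; executing.");
            job();
        });
    }

    sender.send(Box::new(|| println!("hello from a job"))).unwrap();

    // Give a worker a chance to pick the job up before main exits.
    thread::sleep(Duration::from_millis(100));
}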

Suggested fix:

  • If the current way Ch 20.2. is written—sharing ownership of the receiving end of an mpsc channel—is acceptable (and recommended), then 16.3. should have an additional example at the end showing this exact concept (perhaps building on the last example of 16.2.). To that end, we should also reword/tweak parts of 16.2. to indicate that we will address the multiple producer, multiple consumer approach in the next section.
  • On the other hand, if we wish to leave Chapter 16 as-is, we should consider redoing the code in Ch 20.2. to instead use an Arc<Mutex<VecDeque<Job>>> to demonstrate the spmc case. Here is my attempt at redoing Listing 20-20:
use std::{
    collections::VecDeque,
    sync::{Arc, Mutex},
    thread,
    time::Duration,
};

type Job = Box<dyn FnOnce() + Send + 'static>;

pub struct ThreadPool {
    workers: Vec<Worker>,
    queue: Arc<Mutex<VecDeque<Job>>>,
}

impl ThreadPool {
    /// Create a new ThreadPool.
    ///
    /// The size is the number of threads in the pool.
    ///
    /// # Panics
    ///
    /// The `new` function will panic if the size is zero.
    pub fn new(size: usize) -> ThreadPool {
        assert!(size > 0);

        let mut workers = Vec::with_capacity(size);
        let queue = Arc::new(Mutex::new(VecDeque::new()));

        for id in 0..size {
            workers.push(Worker::new(id, Arc::clone(&queue)));
        }

        ThreadPool { workers, queue }
    }

    pub fn execute<F>(&self, f: F)
    where
        F: FnOnce() + Send + 'static,
    {
        self.queue.lock().unwrap().push_back(Box::new(f));
    }
}

struct Worker {
    id: usize,
    thread: thread::JoinHandle<()>,
}

// --snip--

impl Worker {
    fn new(id: usize, queue: Arc<Mutex<VecDeque<Job>>>) -> Worker {
        let thread = thread::spawn(move || loop {
            let mut job = None;

            // Scope the lock so the guard is dropped before the job runs,
            // leaving the queue free for the other workers in the meantime.
            {
                let mut lock = queue.try_lock();
                if let Ok(ref mut queue) = lock {
                    job = queue.pop_front();
                }
            }

            if let Some(job) = job {
                println!("Worker {id} got a job; executing.");
                job();
            } else {
                // Queue was empty (or the lock was contended): back off
                // briefly instead of busy-waiting.
                thread::sleep(Duration::from_millis(100));
            }
        });

        Worker { id, thread }
    }
}
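
For completeness, a minimal way to exercise the pool (assuming it lives in the same module as the code above; the job bodies are just placeholders):

fn main() {
    let pool = ThreadPool::new(4);

    for i in 0..8 {
        pool.execute(move || {
            println!("running job {i}");
        });
    }

    // The sketch above never joins its workers, so give them a moment
    // to drain the queue before main exits.
    thread::sleep(Duration::from_secs(1));
}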