dgilland / pydash

The kitchen sink of Python utility libraries for doing "stuff" in a functional way. Based on the Lo-Dash JavaScript library.

Home Page: http://pydash.readthedocs.io

Support for multithreading.

Drvanon opened this issue

Chains offer an amazing opportunity for parallelization: unless a call to "thru" is encountered (or a function otherwise accepts the whole input), every per-element call can run in parallel. Right now, when I execute the following:

>>> py_(range(5)).map(time.sleep).for_each(lambda _: datetime.now()).value()
[datetime.datetime(2023, 8, 16, 13, 22, 11, 875152),
 datetime.datetime(2023, 8, 16, 13, 22, 11, 875152),
 datetime.datetime(2023, 8, 16, 13, 22, 11, 875152),
 datetime.datetime(2023, 8, 16, 13, 22, 11, 875152),
 datetime.datetime(2023, 8, 16, 13, 22, 11, 875152)]

Whereas I believe that the following output should also be quite possible:

>>> py_(range(5)).map(time.sleep).for_each(lambda _: datetime.now()).value()
[datetime.datetime(2023, 8, 16, 13, 22, 11, 875152),
 datetime.datetime(2023, 8, 16, 13, 22, 12, 875152),
 datetime.datetime(2023, 8, 16, 13, 22, 13, 875152),
 datetime.datetime(2023, 8, 16, 13, 22, 14, 875152),
 datetime.datetime(2023, 8, 16, 13, 22, 15, 875152)]
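
For illustration, here is a minimal stdlib sketch with concurrent.futures (not pydash API) showing how overlapping the sleeps would produce timestamps roughly one second apart:

import time
from datetime import datetime
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=5) as executor:
    # All five sleeps start concurrently; executor.map yields results
    # lazily in input order, so datetime.now() runs as each sleep
    # finishes: ~1 second apart, ~4 seconds of wall time instead of
    # the 10 seconds the sequential chain takes.
    print([datetime.now() for _ in executor.map(time.sleep, range(5))])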

I was thinking about this earlier today. Though it is possible (and would be significantly beneficial) to identify chain sections that do not depend on each other and execute those parts in parallel, doing so might be very challenging. What might be simpler is to provide a parallel API for the map functions, where the pydash.collections.itermap function is replaced with a threaded_map function.

Maybe something like:

import multiprocessing

import pydash as pyd

# Note: on platforms that use the "spawn" start method, creating the
# pool needs an `if __name__ == "__main__":` guard.
pool = multiprocessing.Pool()

def pooled_iter_map(collection, iteratee):
    # Pool.imap takes the function first and yields results lazily
    # while preserving input order.
    return pool.imap(iteratee, collection)

def pooled_map(collection, iteratee):
    return list(pooled_iter_map(collection, iteratee))

def pooled_flat_map(collection, iteratee=None):
    return pyd.flatten(pooled_iter_map(collection, iteratee or pyd.identity))
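
Until something like that lands in the library, a pooled map could already be spliced into a chain with thru(). This is hypothetical usage built on the sketch above, not an existing pydash API:

import time
from datetime import datetime

from pydash import py_

# Hypothetical usage of pooled_map from the sketch above. Note that
# multiprocessing.Pool must pickle the iteratee: time.sleep works
# because it is a top-level builtin, but a lambda iteratee would fail.
# multiprocessing.pool.ThreadPool has the same interface, accepts
# lambdas, and matches the multithreading angle of this issue.
result = (
    py_(range(5))
    .thru(lambda c: pooled_map(c, time.sleep))
    .map(lambda _: datetime.now())
    .value()
)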