one argument process

Notice that the args parameter should be iterable (a tuple of indices, for example). But we can use a little trick to handle single, non-iterable arguments.
```python
def func(resolver):
    return resolver.getFormatedVocaloidDataInDict()
```
We put a "," after the non-iterable arg so it becomes a one-element tuple.
```python
results = [pool.apply_async(func, args=(resolverX,)) for resolverX in [resolver for i in range(num_cores)]]
```
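The snippets above assume a resolver object created elsewhere. As a self-contained sketch of the same trailing-comma trick (the square function here is just a made-up example):

```python
import multiprocessing as mp

def square(x):  # made-up single-argument function
    return x * x

if __name__ == '__main__':
    pool = mp.Pool(mp.cpu_count())
    # x is not iterable, so wrap it in a one-element tuple with a trailing ","
    results = [pool.apply_async(square, args=(x,)) for x in range(8)]
    print([r.get() for r in results])  # [0, 1, 4, 9, 16, 25, 36, 49]
    pool.close()
    pool.join()
```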
template

```python
import multiprocessing as mp

num_cores = int(mp.cpu_count())
pool = mp.Pool(num_cores)

processes = [pool.apply_async(func0, args=(arg0,)) for arg0 in argList]
processes.append(pool.apply_async(func1, args=(argX,)))

results = [p.get() for p in processes]
```
func0 and func1 can be lambda functions (note that the standard multiprocessing module cannot pickle lambdas, so this only works with a workaround such as the multiprocess package mentioned in the questions below).
Building the processes list only submits tasks to the pool; apply_async returns immediately without blocking, so the cost is really small.
If func has a return value, we use .get() to retrieve it (it also blocks until the task finishes); if there is no return value, .get() can be skipped, and when using mp.Process directly, calling .start() is enough (see the sketch below).
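A minimal sketch of the no-return case (log_value is a hypothetical side-effect-only function):

```python
import multiprocessing as mp

def log_value(x):  # hypothetical function with no return value
    print("processed", x)

if __name__ == '__main__':
    pool = mp.Pool(mp.cpu_count())
    for x in range(4):
        pool.apply_async(log_value, args=(x,))  # nothing to .get()
    pool.close()
    pool.join()  # wait for the submitted tasks to finish

    # with mp.Process directly, .start() is enough (plus .join() to wait)
    p = mp.Process(target=log_value, args=(99,))
    p.start()
    p.join()
```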
example

```python
import math
import datetime
import multiprocessing as mp

def func(name, resolver):
    return {name: resolver.getFormatedVocaloidDataInDict()}

if __name__ == '__main__':
    start_t = datetime.datetime.now()

    num_cores = int(mp.cpu_count())
    print("This machine has: " + str(num_cores) + " cores")
    pool = mp.Pool(num_cores)

    # resolver is assumed to be created elsewhere
    results = [pool.apply_async(func, args=(name, resolverX)) for name, resolverX in [(i, resolver) for i in range(num_cores)]]
    results = [p.get() for p in results]

    end_t = datetime.datetime.now()
    elapsed_sec = (end_t - start_t).total_seconds()
    print("Multiprocessing took: " + "{:.2f}".format(elapsed_sec) + " seconds")
```
questions

jupyter notebook does not support the multiprocessing package -> try multiprocess instead of multiprocessing (see the first sketch after this list) https://stackoverflow.com/questions/48846085/python-multiprocessing-within-jupyter-notebook
progress bar -> https://www.jianshu.com/p/1d6e0d07eb4a (a tqdm-based sketch follows this list)
gpu acceleration -> https://www.cnblogs.com/noluye/p/11517489.html
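For the Jupyter issue: multiprocess is a fork of multiprocessing with the same API but it serializes with dill, so functions defined in a notebook cell can be sent to workers. A minimal sketch, assuming multiprocess is installed (pip install multiprocess):

```python
import multiprocess as mp  # drop-in replacement for multiprocessing

def double(x):  # hypothetical function defined in a notebook cell
    return x * 2

pool = mp.Pool(mp.cpu_count())
results = [pool.apply_async(double, args=(x,)) for x in range(4)]
print([r.get() for r in results])  # [0, 2, 4, 6]
pool.close()
pool.join()
```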
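For the progress bar: the linked article may use a different method; one common pattern (an assumption, not taken from the article) is wrapping imap_unordered with tqdm:

```python
import multiprocessing as mp
from tqdm import tqdm  # assumes tqdm is installed (pip install tqdm)

def work(x):
    return x * x

if __name__ == '__main__':
    args = list(range(100))
    pool = mp.Pool(mp.cpu_count())
    # the bar advances each time a worker finishes a task
    results = list(tqdm(pool.imap_unordered(work, args), total=len(args)))
    pool.close()
    pool.join()
```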