Sending over the same socket with multiprocessing.pool.map

Looking at your use-case, you have two time-intensive tasks:

  • packing/serializing the data
  • sending the data

Packing on your machine is a CPU-intensive task: it would probably not profit much (if at all) from multithreading, because the GIL keeps Python threads from executing bytecode in parallel. Packing in multiple processes would probably speed up the packing part, since multiple cores can be leveraged, but on the other hand you'll have to copy the data to a new space in main memory, because processes don't share memory. You should test whether multiprocessing makes sense there; if not, try shared memory, which would eliminate the speed loss from copying the data and still let you pack on multiple cores (but adds a lot of complexity to your code). For packing in general I'd also recommend looking at protobuf or flatbuffers.
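
As a rough starting point, here is a minimal sketch of packing on multiple cores with `multiprocessing.Pool`; `pickle` and the dummy chunks are stand-ins for whatever serializer and data you actually use:

```python
import pickle
from multiprocessing import Pool

def pack(chunk):
    # CPU-bound step: pickle stands in for your real serializer.
    return pickle.dumps(chunk)

if __name__ == "__main__":
    chunks = [list(range(100_000)) for _ in range(8)]  # dummy data
    with Pool() as pool:
        # Note the copy cost mentioned above: pool.map itself has to
        # serialize each chunk to move it into a worker process
        # before pack() even runs.
        packed = pool.map(pack, chunks)
```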

Sending the data, on the other hand, profits from concurrency not because the CPU needs much time, but because of network latency and waiting for acknowledgement packets. Waiting on a reply isn't sped up by multiple cores, so a significant speedup can be achieved with threads or asyncio.
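
A minimal threaded-send sketch, assuming a listener at a made-up address; each payload gets its own connection here, which sidesteps interleaved writes on a shared socket:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def send_packet(payload, addr=("127.0.0.1", 9000)):
    # I/O-bound step: the threads spend most of their time waiting
    # on the network, which is exactly where they help.
    with socket.create_connection(addr) as conn:
        conn.sendall(payload)

payloads = [b"packed-data-%d" % i for i in range(10)]  # pre-packed stand-ins
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(send_packet, payloads))  # consume results so errors surface
```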

I'd suggest you test whether packing on multiple cores with the multiprocessing library has the desired effect. If so, you'll have to index or timestamp your packets so they can be realigned on the other side. There is no mechanism to "make sure they are sent in order", simply because such a mechanism would throw away most of the time you saved through concurrency. So don't synchronize where you don't have to, since then you could skip working asynchronously altogether.
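
If you go that route, indexing can be as simple as prefixing each payload with a sequence number and buffering out-of-order pieces on the receiving side; the wire format below is made up for illustration:

```python
import struct

HEADER = struct.Struct(">QI")  # (sequence number, payload length)

def frame(seq, payload):
    # Sender side: prefix each packed payload with its index.
    return HEADER.pack(seq, len(payload)) + payload

# Receiver side: park out-of-order pieces until their turn comes up.
pending, expected = {}, 0

def realign(seq, payload, deliver):
    global expected
    pending[seq] = payload
    while expected in pending:
        deliver(pending.pop(expected))
        expected += 1
```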

However, if packing in multiple processes only yields a negligible speedup (and this is what I suspect), I'd recommend packing/serializing the data in one thread (the main thread) and then sending each piece on its own thread, or using asyncio. For a how-to on that, please refer to this answer. Either way you'll have to expect data arriving out of order, so index or timestamp your packets.
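
A minimal asyncio variant of that, assuming a hypothetical host/port with something listening there; packing happens sequentially in the main thread and only the sends overlap:

```python
import asyncio
import pickle

async def ship(payload, host, port):
    # Waiting on connect/drain is where asyncio overlaps the work.
    _, writer = await asyncio.open_connection(host, port)
    writer.write(payload)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main(datasets, host="127.0.0.1", port=9000):
    payloads = [pickle.dumps(d) for d in datasets]  # pack on the main thread
    await asyncio.gather(*(ship(p, host, port) for p in payloads))

# asyncio.run(main(my_datasets))  # my_datasets: your actual data
```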

HTH

If for some reason you absolutely have to pack in multiple processes and send the data in order, you'll have to look at shared memory and set things up so that the main process creates one process per set of data and shares the memory of each dataset with the correct child. Each child then creates a shared memory object to write the packed data to, and that packed data is shared back with the parent process. The parent loops over the shared memory objects the children write to and only sends a piece of data if it is the first, or if the previous piece is marked as sent. Sending the data in this case should NOT happen via threads or anything asynchronous, as then the correct order would again not be guaranteed... That said, better don't use this solution (extremely complex, minimal gain); go with either of the two approaches above.
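
For completeness only, here is a rough sketch of that scheme using `multiprocessing.shared_memory` (Python 3.8+). `MAX_PACKED` is a made-up upper bound on the packed size, and needing such a bound is exactly the kind of complexity that makes this approach unattractive:

```python
import pickle
import socket
import struct
from multiprocessing import Event, Process
from multiprocessing.shared_memory import SharedMemory

MAX_PACKED = 1 << 20  # assumed cap: every packed dataset must fit in here

def pack_worker(dataset, shm_name, done):
    # Child: pack into the parent's shared block, then signal completion.
    shm = SharedMemory(name=shm_name)
    payload = pickle.dumps(dataset)
    assert len(payload) <= MAX_PACKED
    shm.buf[:8] = struct.pack(">Q", len(payload))   # length header
    shm.buf[8:8 + len(payload)] = payload
    shm.close()
    done.set()

def send_in_order(datasets, sock):
    blocks, flags, procs = [], [], []
    for ds in datasets:
        shm = SharedMemory(create=True, size=8 + MAX_PACKED)
        done = Event()
        procs.append(Process(target=pack_worker, args=(ds, shm.name, done)))
        procs[-1].start()
        blocks.append(shm)
        flags.append(done)
    # The parent walks the blocks in order, so piece i only goes out
    # after piece i-1 has been sent; that is the whole ordering trick.
    for shm, done in zip(blocks, flags):
        done.wait()
        (length,) = struct.unpack(">Q", bytes(shm.buf[:8]))
        view = shm.buf[8:8 + length]
        sock.sendall(view)
        view.release()
        shm.close()
        shm.unlink()
    for p in procs:
        p.join()
```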


  1. The socket will be shared by the processes, and processes are controlled by the operating system scheduler, which gives you no control over their execution order. So the processes appear to run in a random order (this is not the full truth; look up OS scheduling algorithms), and you can guarantee neither the order of execution nor the order of packet delivery.
  2. From a network perspective, when you send data over a shared socket you typically don't wait for a response (if you use the TCP protocol), so the sends/deliveries appear simultaneous to us, and the same goes for the responses.

To make sure you have in-order delivery of packets, you need to ensure that the other end receives each packet you send, so you are limited to synchronized connections (send a packet only after the previous one was sent and you have made sure it was received). In your use case I would suggest a pool of processes that generate pickled objects and put them on a queue (the producers), plus one consumer that takes those objects off the queue and sends them over the network, as sketched below.
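
A minimal sketch of that producer/consumer setup; the address and dummy data are made up, and a listener is assumed at that address. Note the queue hands objects over in completion order, so prepend an index (as in the framing example above) if the original dataset order matters:

```python
import pickle
import socket
from multiprocessing import Process, Queue

def producer(dataset, q):
    # Producer: pack one dataset and hand the bytes to the queue.
    q.put(pickle.dumps(dataset))

def consumer(q, n_items, addr=("127.0.0.1", 9000)):
    # Single consumer: the only writer on the socket, so one packet is
    # fully sent before the next one starts (no interleaving).
    with socket.create_connection(addr) as sock:
        for _ in range(n_items):
            sock.sendall(q.get())

if __name__ == "__main__":
    datasets = [list(range(10_000)) for _ in range(4)]  # dummy data
    q = Queue()
    workers = [Process(target=producer, args=(d, q)) for d in datasets]
    for w in workers:
        w.start()
    consumer(q, len(datasets))
    for w in workers:
        w.join()
```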