multiprocessing.Pool.imap_unordered with fixed queue size or buffer?

Since processing is fast, but writing is slow, it sounds like your problem is I/O-bound. Therefore there might not be much to be gained from using multiprocessing.

However, it is possible to peel off chunks of data, process the chunk, and wait until that data has been written before peeling off another chunk:

from multiprocessing import Pool
import itertools as IT

if __name__ == "__main__":
    data = records(100)   # `records`, `process` and `writer` are your existing functions
    with Pool(2) as pool:
        chunksize = ...   # how many records to hold in memory per outer iteration
        for chunk in iter(lambda: list(IT.islice(data, chunksize)), []):
            # each chunk is fully written before the next one is sliced off
            writer(pool.imap_unordered(process, chunk, chunksize=5))
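
Here iter(callable, sentinel) keeps calling the lambda until it returns the sentinel [], i.e. until data is exhausted, and the call to writer blocks until the whole chunk has been written, which is what caps memory use. Below is a minimal self-contained sketch of the same pattern, with hypothetical stand-ins for records, process and writer (your real functions will differ):

import itertools as IT
from multiprocessing import Pool

def records(n):
    # hypothetical producer: yields n records lazily
    return iter(range(n))

def process(record):
    # hypothetical fast per-record computation
    return record * record

def writer(results):
    # hypothetical slow writer; iterating here drains the imap_unordered iterator
    for r in results:
        print(r)

if __name__ == "__main__":
    data = records(100)
    chunksize = 20  # records held in memory per outer iteration (tune to taste)
    with Pool(2) as pool:
        for chunk in iter(lambda: list(IT.islice(data, chunksize)), []):
            writer(pool.imap_unordered(process, chunk, chunksize=5))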

While working on the same problem, I found that an effective way to keep the pool from buffering an unbounded number of items is to feed it from a generator gated by a semaphore:

from multiprocessing import Pool, Semaphore

def produce(semaphore, from_file):
    with open(from_file) as reader:
        for line in reader:
            # Reduce Semaphore by 1 or wait if 0
            semaphore.acquire()
            # Now deliver an item to the caller (pool)
            yield line

def process(item):
    result = (first_function(item),
              second_function(item),
              third_function(item))
    return result

def consume(semaphore, result):
    database_con.cur.execute("INSERT INTO ResultTable VALUES (?,?,?)", result)
    # Result is consumed, semaphore may now be increased by 1
    semaphore.release()

def main():
    global database_con            # the database connection set-up is assumed to happen elsewhere
    semaphore_1 = Semaphore(1024)  # at most 1024 produced-but-unconsumed items at any time
    with Pool(2) as pool:
        for result in pool.imap_unordered(process, produce(semaphore_1, "workfile.txt"), chunksize=128):
            consume(semaphore_1, result)

if __name__ == "__main__":
    main()
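
The same throttling idea in a self-contained form, with the database replaced by a plain file so the sketch runs on its own (the bound of 1024, the file names and the doubling in process are placeholders, not part of the original answer). One caveat: keep the semaphore's initial value comfortably above the chunksize passed to imap_unordered, since the pool's task feeder assembles a full chunk of inputs before dispatching it; if it cannot acquire enough permits while no results are pending, producer and consumer can end up waiting on each other.

from multiprocessing import Pool, Semaphore

MAX_IN_FLIGHT = 1024   # placeholder; keep well above the imap_unordered chunksize

def produce(semaphore, items):
    for item in items:
        semaphore.acquire()        # block once MAX_IN_FLIGHT items are awaiting consumption
        yield item

def process(item):
    return item * 2                # placeholder for the real per-item work

def consume(semaphore, result, out):
    out.write("%s\n" % result)     # placeholder slow consumer (stands in for the INSERT)
    semaphore.release()            # free one slot so the producer can continue

def main():
    semaphore = Semaphore(MAX_IN_FLIGHT)
    with Pool(2) as pool, open("results.txt", "w") as out:
        results = pool.imap_unordered(process, produce(semaphore, range(100_000)), chunksize=128)
        for result in results:
            consume(semaphore, result, out)

if __name__ == "__main__":
    main()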

See also:

K Hong - Multithreading - Semaphore objects & thread pool

Lecture from Chris Terman - MIT 6.004 L21: Semaphores