How can I notify an async routine from a sync routine?

My first reaction would be: move to a single concurrency model. Either use threads throughout, or coroutines throughout (with limited use of a threadpool for things that can't yet be done with asyncio).

There is no good reason for your project to try to mix the two models. I suspect you only started using asyncio because of the Python websockets library, after having already settled on threads. The rest of your project could also be built on coroutines (e.g. using aiomysql to handle the database connections, etc.).

However, you can still combine the two models, but you need to study the asyncio documentation on how to use it in combination with threads. Specifically, to send information from a thread to your coroutines, you need to make use of these two functions:

  • asyncio.run_coroutine_threadsafe(coro, loop) lets you add a coroutine to a running loop, and monitor that coroutine with a Future object if you need to return anything or need to be able to cancel the routine.
  • loop.call_soon_threadsafe(callback, *args) lets you call synchronous functions in the same thread as the loop. This is useful for callbacks that are called from another thread (e.g. you could have a coroutine await on an asyncio.Future() object and have a callback function set a result on that future object, so passing a result to the coroutine).
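The Future-plus-callback pattern from the second bullet can be sketched in isolation; this is a minimal, self-contained illustration (the function and variable names are mine):

```python
import asyncio
import threading

def produce_result(loop, fut, value):
    # Runs in a worker thread; hand the result to the loop's thread safely.
    # Calling fut.set_result() directly from here would not be thread-safe.
    loop.call_soon_threadsafe(fut.set_result, value)

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    threading.Thread(target=produce_result, args=(loop, fut, 42)).start()
    # The coroutine suspends here until the thread delivers the value.
    return await fut

result = asyncio.run(main())
print(result)  # 42
```

The same shape works for any one-shot handoff from a thread into a coroutine: the coroutine awaits the future, and the thread schedules `set_result` on the loop.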

In your case, if you want to send data out to all current websocket connections, I'd use:

  • a mapping of queues, keyed by the active ws_serve tasks. Each ws_serve task adds its own queue to the mapping and cleans up after itself; the task then picks up items to send from its own queue.
  • a coroutine that adds information to all the queues when executed.
  • Other threads can use asyncio.run_coroutine_threadsafe() to execute the coroutine that adds to the queues.

There is no need for locking here; coroutines have far fewer concurrency issues. A coroutine altering a dictionary is not a problem as long as there are no await points during the manipulation (including while iterating over all the queues).

If you encapsulate the queues dictionary in a context manager, you can more easily ensure that queues are cleaned up properly:

# asyncio section, no thread access
import asyncio
import threading
import time
from contextlib import AbstractContextManager

import websockets


class WSSendQueues(AbstractContextManager):
    def __init__(self):
        self._queues = {}

    async def send_to_all(self, item):
        for queue in self._queues.values():
            queue.put_nowait(item)

    def __enter__(self):
        task = asyncio.current_task()
        self._queues[task] = queue = asyncio.Queue()
        return queue

    def __exit__(self, exc_type, exc_value, traceback):
        task = asyncio.current_task()
        self._queues.pop(task, None)

# global instance of the queues manager
# this has a coroutine `send_to_all()`
ws_queues = WSSendQueues()

async def ws_serve(websocket, path):
    with ws_queues as queue:
        listen_pair = await websocket.recv()

        while True:
            to_send = await queue.get()  # blocks until something is available
            try:
                await websocket.send(to_send)
            finally:
                # let the queue know we handled the item
                queue.task_done()

def run_websockets_server(loop):
    start_server = websockets.serve(ws_serve, ws_interface, ws_port)

    loop.run_until_complete(start_server)
    loop.run_forever()

# reference to the asyncio loop *used for the main thread*
main_thread_loop = asyncio.get_event_loop()

# threads section, need access to the main_thread_loop to schedule
# coroutines

def client_listener():
    while True:
        # create the coroutine. THIS DOESN'T RUN IT YET.
        coro = ws_queues.send_to_all((p1_user, p2_user, time.time()))

        # and schedule it to run on the loop. From here on the
        # websockets will automatically receive the data on their respective queues.
        asyncio.run_coroutine_threadsafe(coro, main_thread_loop)


# starting the threads and event loop
t = threading.Thread(target=client_listener)
t.start()

run_websockets_server(main_thread_loop)

Your code doesn't handle shut-down yet, but I did prepare the above to allow for shutting down the asyncio websockets gracefully.

You'd start by no longer adding to the queues, i.e. by shutting down the threads that feed them. Then you'd await all the Queue.join() coroutines so you know every socket has finished sending its data. Add a timeout to this; there is no point in waiting forever. You could make this a coroutine on the context manager:

async def join(self, timeout=None):
    """Wait for all the websocket queues to be empty

    If timeout is not None, limit the amount of time to wait.
    """
    tasks = [asyncio.create_task(q.join()) for q in self._queues.values()]
    if not tasks:
        # no connections left, nothing to wait for
        return
    done, pending = await asyncio.wait(tasks, timeout=timeout)
    # cancel any remaining joins
    for task in pending:
        task.cancel()

Once you've awaited the queues (preferably with a time limit), you'd shut down the websockets server and close the loop. All of this is, of course, done from a coroutine you schedule on the main thread.
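The timed-join behaviour can be exercised on its own. Here is a self-contained sketch (the queue and function names are made up) where one queue drains immediately and another never finishes, so its join() gets cancelled after the timeout:

```python
import asyncio

async def drain(queues, timeout=None):
    # Same pattern as the join() coroutine above: wait for every queue,
    # but never longer than `timeout` seconds.
    tasks = [asyncio.create_task(q.join()) for q in queues]
    done, pending = await asyncio.wait(tasks, timeout=timeout)
    for task in pending:
        task.cancel()  # give up on queues still busy after the timeout
    return len(pending)  # how many queues failed to drain in time

async def main():
    fast, slow = asyncio.Queue(), asyncio.Queue()
    fast.put_nowait("x")
    slow.put_nowait("y")
    fast.get_nowait()
    fast.task_done()  # fast is fully handled; its join() returns at once
    # slow never gets task_done(), so its join() blocks until cancelled
    return await drain([fast, slow], timeout=0.1)

stuck = asyncio.run(main())
print(stuck)  # 1
```

In the real shutdown path you'd follow the drain with `server.close()` and `await server.wait_closed()` on the object returned by `websockets.serve()`.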