Multi-threaded algorithm for solving sudoku?

The idea behind multi-threading is taking advantage of having several CPUs, allowing you to make several calculations simultaneously. Of course each thread is going to need its own stack, but that's usually not a problem.

Mostly, what you want to do is divide the solution space into several sub-spaces which are as independent as possible (so you don't waste too many resources on thread-creation overhead), and yet still "fit" your algorithm (so you actually benefit from having multiple threads).


Pretty simple really. The basic concept is that in your backtracking solution you would branch when there was a choice. You tried one branch, backtracked and then tried the other choice.

Now, spawn a thread for each choice and try them both simultaneously. Only spawn a new thread if there are fewer than some number of threads already in the system (that would be your input argument); otherwise just use your existing single-threaded solution. For added efficiency, get these worker threads from a thread pool.

This is in many ways a divide-and-conquer technique: you are using the choices as an opportunity to split the search space in half and allocate one half to each thread. Most likely one half is harder than the other, meaning thread lifetimes will vary, but that is what makes the optimisation interesting.

The easy way to handle the obvious synchronisation issues is to copy the current board state and pass it into each instance of your function, so it is a function argument. This copying means you don't have to worry about any shared concurrency. If your single-threaded solution used a global or member variable to store the board state, you will need a copy of this either on the stack (easy) or per thread (harder). All your function needs to return is a board state and the number of moves taken to reach it.

Each routine that invokes several threads to do work should invoke n-1 threads when there are n pieces of work, do the nth piece of work itself, and then wait on a synchronisation object until all the other threads are finished. You then evaluate their results: you have n board states, so return the one with the least number of moves.
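As a rough illustration of the fork-on-choice idea described above, here is a minimal Java sketch (class and field names are mine, not from any particular solver): the board is copied at each fork so threads share nothing, and an atomic counter caps the number of live threads at the program's thread-count argument, falling back to the sequential path when the budget is spent.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;

class ForkSolver {
    static final AtomicInteger live = new AtomicInteger(1);      // main thread counts as 1
    static final AtomicReference<int[][]> solution = new AtomicReference<>();
    static int maxThreads = 4;                                   // the program's input argument

    // true if v can legally go in (r, c): no duplicate in row, column or 3x3 block
    static boolean valid(int[][] b, int r, int c, int v) {
        for (int i = 0; i < 9; i++)
            if (b[r][i] == v || b[i][c] == v
                    || b[r / 3 * 3 + i / 3][c / 3 * 3 + i % 3] == v)
                return false;
        return true;
    }

    static int[][] copy(int[][] b) {
        int[][] n = new int[9][9];
        for (int i = 0; i < 9; i++) n[i] = b[i].clone();
        return n;
    }

    static void solve(int[][] b) {
        if (solution.get() != null) return;                      // someone already won
        int r = -1, c = -1;
        outer:
        for (int i = 0; i < 9; i++)
            for (int j = 0; j < 9; j++)
                if (b[i][j] == 0) { r = i; c = j; break outer; }
        if (r == -1) { solution.compareAndSet(null, b); return; } // no empty cell: solved

        List<Thread> spawned = new ArrayList<>();
        for (int v = 1; v <= 9; v++) {
            if (solution.get() != null) break;
            if (!valid(b, r, c, v)) continue;
            int[][] next = copy(b);                              // each branch gets its own board
            next[r][c] = v;
            if (live.incrementAndGet() <= maxThreads) {          // fork only while under budget
                Thread t = new Thread(() -> { solve(next); live.decrementAndGet(); });
                spawned.add(t);
                t.start();
            } else {
                live.decrementAndGet();
                solve(next);                                     // existing single-threaded path
            }
        }
        for (Thread t : spawned) {
            try { t.join(); } catch (InterruptedException ignored) { }
        }
    }
}
```

Because every fork copies the board, there is no shared mutable state apart from the two atomics, which is what makes the approach safe without explicit locks.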


Multi-threading is useful in any situation where a single thread has to wait for a resource and you can run another thread in the meantime. This includes a thread waiting for an I/O request or database access while another thread continues with CPU work.

Multi-threading is also useful if the individual threads can be farmed out to different CPUs (or cores), as they then run truly concurrently, although they'll generally have to share data so there'll still be some contention.

I can't see any reason why a multi-threaded Sudoku solver would be more efficient than a single-threaded one, simply because there's no waiting for resources. Everything will be done in memory.

But I remember some of the homework I did at Uni, and it was similarly useless (Fortran code to see how deep a tunnel got when you dug down at 30 degrees for one mile then 15 degrees for another mile - yes, I'm pretty old :-). The point is to show you can do it, not that it's useful.

On to the algorithm.

I wrote a single-threaded solver which basically ran a series of rules in each pass to try to populate another square. A sample rule was: if row 1 has only one square free, the number is evident from all the other numbers in row 1.

There were similar rules for all rows, all columns, all 3x3 mini-grids. There were also rules which checked row/column intersects (e.g. if a given square could only contain 3 or 4 due to the row and 4 or 7 due to the column, then it was 4). There were more complex rules I won't detail here but they're basically the same way you solve it manually.
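A minimal Java sketch of one such rule (a hypothetical helper, not the original implementation): if a row has exactly one free square, fill it with the missing digit, using the fact that the nine digits sum to 45.

```java
class RowRule {
    // If row r of the board has exactly one empty cell (0), fill it with the
    // one digit missing from that row. Returns true if a value was placed.
    static boolean applyRow(int[][] board, int r) {
        int emptyCol = -1, sum = 0;
        for (int c = 0; c < 9; c++) {
            if (board[r][c] == 0) {
                if (emptyCol != -1) return false;  // more than one gap: rule doesn't apply
                emptyCol = c;
            } else {
                sum += board[r][c];
            }
        }
        if (emptyCol == -1) return false;          // row is already full
        board[r][emptyCol] = 45 - sum;             // digits 1..9 sum to 45
        return true;
    }
}
```

The column and 3x3-block versions are the same shape, just iterating over a different unit.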

I suspect you have similar rules in your implementation (since other than brute force, I can think of no other way to solve it, and if you've used brute force, there's no hope for you :-).

What I would suggest is to allocate each rule to a thread and have them share the grid. Each thread would apply its own rule and only that rule.

Update:

Jon, based on your edit:

[edit] I forgot to mention, the number of threads to be used is specified as an argument to the program, so as far as I can tell it's not related to the state of the puzzle in any way...

Also, there may not be a unique solution - a valid input may be a totally empty board. I have to report min(1000, number of solutions) and display one of them (if it exists)

It looks like your teacher doesn't want you to split based on the rules but instead on the fork-points (where multiple rules could apply).

By that I mean, at any point in the solution, if there are two or more possible moves forward, you should allocate each possibility to a separate thread (still using your rules for efficiency, but concurrently checking each possibility). This would give you better concurrency (assuming threads can run on separate CPUs/cores), since there will be no contention for the board; each thread gets its own copy.

In addition, since you're limiting the number of threads, you'll have to work some thread-pool magic to achieve this.

What I would suggest is to have a work queue and N threads. The work queue is initially empty when your main thread starts all the worker threads. Then the main thread puts the beginning puzzle state into the work queue.

The worker threads simply wait for a state to be placed on the work queue, and one of them grabs it for processing. The worker thread is your single-threaded solver with one small modification: when there are X possibilities to move forward (X > 1), your worker puts X-1 of those back onto the work queue and continues to process the remaining possibility.

So, let's say there's only one solution (true Sudoku :-). The first worker thread will whittle away at the solution without finding any forks, and that will be exactly as in your current situation.

But with two possibilities at move 27 (say, 3 or 4 could go into the top left cell), your thread will create another board with the first possibility (put 3 into that cell) and place that in the work queue. Then it would put 4 in its own copy and continue.

Another thread will pick up the board with 3 in that cell and carry on. That way, you have two threads running concurrently handling the two possibilities.

When any thread decides that its board is insoluble, it throws it away and goes back to the work queue for more work.

When any thread decides that its board is solved, it notifies the main thread, which can store it, over-writing any previous solution (last-found wins), or throw it away if it's already got one (first-found wins); the worker thread then goes back to the work queue for more work. In either case, the main thread should increment its count of solutions found.

When all the threads are idle and the work queue is empty, main either will or won't have a solution. It will also have a count of solutions.

Keep in mind that all communications between workers and main thread will need to be mutexed (I'm assuming you know this based on information in your question).
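The whole work-queue design described above can be sketched in Java roughly as follows. This is a hedged illustration with names of my own choosing: communication goes through a `BlockingQueue` and atomic counters instead of explicit mutexes, an "outstanding work" counter detects the "queue empty and all workers idle" condition, and the min(1000, …) cap from the assignment is omitted for brevity.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

class QueueSolver {
    static final int N = 4;                                  // the thread-count argument
    static final BlockingQueue<int[][]> queue = new LinkedBlockingQueue<>();
    static final AtomicInteger outstanding = new AtomicInteger(0);
    static final AtomicInteger solutions = new AtomicInteger(0);
    static final int[][] POISON = new int[0][0];             // shutdown marker

    static boolean valid(int[][] b, int r, int c, int v) {
        for (int i = 0; i < 9; i++)
            if (b[r][i] == v || b[i][c] == v
                    || b[r / 3 * 3 + i / 3][c / 3 * 3 + i % 3] == v)
                return false;
        return true;
    }

    static int[][] withMove(int[][] b, int r, int c, int v) {
        int[][] n = new int[9][9];
        for (int i = 0; i < 9; i++) n[i] = b[i].clone();
        n[r][c] = v;
        return n;
    }

    static void push(int[][] b) { outstanding.incrementAndGet(); queue.add(b); }

    static void finishOne() {
        // last piece of work retired: wake every worker with a poison pill
        if (outstanding.decrementAndGet() == 0)
            for (int i = 0; i < N; i++) queue.add(POISON);
    }

    static void work() throws InterruptedException {
        while (true) {
            int[][] b = queue.take();
            if (b == POISON) return;
            while (true) {
                int r = -1, c = -1;
                outer:
                for (int i = 0; i < 9; i++)
                    for (int j = 0; j < 9; j++)
                        if (b[i][j] == 0) { r = i; c = j; break outer; }
                if (r == -1) { solutions.incrementAndGet(); break; }  // solved
                int keep = 0;
                for (int v = 1; v <= 9; v++) {
                    if (!valid(b, r, c, v)) continue;
                    if (keep == 0) keep = v;                 // keep one move for ourselves
                    else push(withMove(b, r, c, v));         // the other X-1 go back on the queue
                }
                if (keep == 0) break;                        // insoluble: throw the board away
                b = withMove(b, r, c, keep);
            }
            finishOne();
        }
    }

    static int countSolutions(int[][] start) {
        push(start);
        Thread[] workers = new Thread[N];
        for (int i = 0; i < N; i++) {
            workers[i] = new Thread(() -> {
                try { work(); } catch (InterruptedException ignored) { }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            try { t.join(); } catch (InterruptedException ignored) { }
        }
        return solutions.get();
    }
}
```

Note that a child board is always pushed (incrementing `outstanding`) before its parent is retired, so the counter can never hit zero while work remains. Beware that counting all solutions of a very open board (e.g. a nearly empty one) will explode; a real submission would stop at the 1000-solution cap.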


Here is a greedy brute-force single-threaded solver:

  1. Select next empty cell. If no more empty cells, victory!
  2. Possible cell value = 1.
  3. Check for an invalid partial solution (duplicates in the row, column or 3x3 block).
  4. If the partial solution is invalid, increment the cell value and return to step 3; if the value passes 9, clear the cell and backtrack to the previous one. Otherwise, go to step 1.
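The outline above might look like this in Java (a sketch with illustrative names; the recursion's unwinding supplies the backtracking that happens when a cell's value passes 9):

```java
class BruteForce {
    static boolean valid(int[][] b, int r, int c, int v) {
        for (int i = 0; i < 9; i++)
            if (b[r][i] == v || b[i][c] == v
                    || b[r / 3 * 3 + i / 3][c / 3 * 3 + i % 3] == v)
                return false;
        return true;
    }

    static boolean solve(int[][] b) {
        for (int i = 0; i < 81; i++) {
            int r = i / 9, c = i % 9;
            if (b[r][c] != 0) continue;           // step 1: find the next empty cell
            for (int v = 1; v <= 9; v++) {        // step 2: candidate values 1..9
                if (!valid(b, r, c, v)) continue; // step 3: reject duplicates
                b[r][c] = v;
                if (solve(b)) return true;        // step 4 (valid): on to the next cell
                b[r][c] = 0;                      // undo and try the next value
            }
            return false;                         // value passed 9: backtrack
        }
        return true;                              // no empty cells: victory!
    }
}
```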

If you look at the above outline, the combination of steps 2 and 3 is an obvious candidate for multithreading. More ambitious solutions involve creating a recursive exploration that spawns tasks that are submitted to a thread pool.

EDIT to respond to this point: "I don't understand how you can find different solutions to the same problem at the same time without maintaining multiple copies of the puzzle."

You can't. That's the whole point. However, a concrete 9-thread example might make the benefits more clear:

  1. Start with an example problem.
  2. Find the first empty cell.
  3. Create 9 threads, where each thread has its own copy of the problem with a different candidate value (1 through 9) in that empty cell.
  4. Within each thread, run your original single-threaded algorithm on this thread-local modified copy of the problem.
  5. If one of the threads finds an answer, stop all the other threads.

As you can imagine, each thread is now running a slightly smaller problem space and each thread has the potential to run on its own CPU core. With a single-threaded algorithm alone, you can't reap the benefits of a multi-core machine.
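Those five steps could be sketched in Java with a 9-thread pool and `invokeAny`, which returns the first task to complete successfully and attempts to cancel the rest (one caveat for this hedged sketch: a compute-bound solver won't notice the cancellation unless it polls the interrupt flag, so losing threads may keep running in the background):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class NineWaySolver {
    static boolean valid(int[][] b, int r, int c, int v) {
        for (int i = 0; i < 9; i++)
            if (b[r][i] == v || b[i][c] == v
                    || b[r / 3 * 3 + i / 3][c / 3 * 3 + i % 3] == v)
                return false;
        return true;
    }

    static boolean solve(int[][] b) {             // plain single-threaded backtracker
        for (int i = 0; i < 81; i++) {
            int r = i / 9, c = i % 9;
            if (b[r][c] != 0) continue;
            for (int v = 1; v <= 9; v++) {
                if (!valid(b, r, c, v)) continue;
                b[r][c] = v;
                if (solve(b)) return true;
                b[r][c] = 0;
            }
            return false;
        }
        return true;
    }

    static int[][] parallelSolve(int[][] board) {
        int rr = -1, cc = -1;                     // step 2: find the first empty cell
        outer:
        for (int i = 0; i < 9; i++)
            for (int j = 0; j < 9; j++)
                if (board[i][j] == 0) { rr = i; cc = j; break outer; }
        if (rr == -1) return board;               // already solved
        final int r = rr, c = cc;
        List<Callable<int[][]>> tasks = new ArrayList<>();
        for (int v = 1; v <= 9; v++) {            // step 3: nine thread-local copies
            final int fv = v;
            tasks.add(() -> {
                int[][] copy = new int[9][9];
                for (int i = 0; i < 9; i++) copy[i] = board[i].clone();
                if (!valid(copy, r, c, fv))
                    throw new IllegalStateException("dead branch");
                copy[r][c] = fv;
                if (!solve(copy))                 // step 4: run the original algorithm
                    throw new IllegalStateException("insoluble branch");
                return copy;
            });
        }
        ExecutorService pool = Executors.newFixedThreadPool(9);
        try {
            return pool.invokeAny(tasks);         // step 5: first success wins
        } catch (Exception e) {
            return null;                          // every branch failed: no solution
        } finally {
            pool.shutdownNow();                   // ask the losing threads to stop
        }
    }
}
```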