Are individual queries faster than joins?

Are individual queries faster than joins, or: should I try to squeeze every piece of information I want on the client side into one SELECT statement, or just use as many queries as seems convenient?

In any performance scenario, you have to test and measure the solutions to see which is faster.

That said, it's almost always the case that a joined result set from a properly tuned database will be faster and scale better than returning the source rows to the client and then joining them there. This is particularly true when the input sets are large and the result set is small. Think about the following query in the context of both strategies: join two tables that are 5 GB each, with a result set of 100 rows. That's an extreme, but you see my point.
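As a minimal sketch of that extreme (the orders/customers tables and column names here are made up for illustration, not taken from your schema), the joined form lets the database collapse two large inputs down to the small result before anything crosses the network:

    -- Hypothetical tables: orders and customers, each several GB on disk.
    -- With suitable indexes the engine can produce the ~100-row result
    -- without ever shipping the large inputs to the client.
    SELECT c.customer_name, o.order_id, o.order_total
    FROM   orders    AS o
    JOIN   customers AS c ON c.customer_id = o.customer_id
    WHERE  o.order_date >= '2024-01-01'
      AND  o.order_total > 10000;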

I have noticed that when I have to get information from multiple tables, it is "often" faster to get this information via multiple queries on individual tables (maybe containing a simple inner join) and patch the data together on the client side than to try to write a (complex) joined query where I can get all the data in one query.

It's highly likely that the database schema or indexes could be improved to better serve the queries you're throwing at it.

A joined query always has to return more data than the individual queries that retrieve the same amount of information.

Usually this is not the case. Most of the time, even if the input sets are large, the result set will be much smaller than the sum of the inputs.

Depending on the application, very large query result sets being returned to the client are an immediate red flag: what is the client doing with such a large set of data that can't be done closer to the database? Displaying 1,000,000 rows to a user is highly suspect to say the least. Network bandwidth is also a finite resource.

Since the database has to cobble together the data, for large datasets one can assume that the database has to do more work on a single joined query than on the individual ones, since (at least) it has to return more data to the client.

Not necessarily. If the data is indexed correctly, the join can be performed efficiently at the database without needing to scan a large quantity of data. Moreover, relational database engines are specially optimized at a low level for joining; client stacks are not.
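For instance (still using the hypothetical orders/customers tables from the earlier sketch), indexes on the join and filter columns are what allow the engine to avoid scanning everything; the exact indexes you need depend on your schema and workload:

    -- Support the join on customer_id and the filter on order_date/order_total.
    CREATE INDEX ix_orders_customer_id ON orders (customer_id);
    CREATE INDEX ix_orders_date_total  ON orders (order_date, order_total);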

Would it follow from this that, when I observe that splitting a query into multiple client-side queries yields better performance, this is just the way to go, or would it rather mean that I messed up the joined query?

Since you said you're inexperienced when it comes to databases, I would suggest learning more about database design and performance tuning. I'm pretty sure that's where the problem lies here. Inefficiently-written SQL queries are possible, too, but with a simple schema that's less likely to be a problem.

Now, that's not to say there aren't other ways to improve performance. There are scenarios where you might choose to scan a medium-to-large set of data and return it to the client if the intention is to use some sort of caching mechanism. Caching can be great, but it introduces complexity in your design. Caching may not even be appropriate for your application.

One thing that hasn't been mentioned anywhere is maintaining consistency in the data that's returned from the database. If separate queries are used, you're more likely (because other transactions can commit between them) to get inconsistent data back, unless a form of snapshot isolation is used for every set of queries.
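As a rough example of what that can look like, in PostgreSQL you could wrap the separate statements in one transaction at REPEATABLE READ (snapshot) isolation; other engines have equivalent settings under different names, and the tables here are again hypothetical:

    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;

    -- Both statements read from the same snapshot, so the rows patched
    -- together on the client cannot mix two different database states.
    SELECT order_id, customer_id, order_total FROM orders WHERE order_date >= '2024-01-01';
    SELECT customer_id, customer_name FROM customers WHERE region = 'EU';

    COMMIT;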


Of course, I didn't measure any performance with these

You put together some good sample code. Did you look at the timing in SQL Fiddle? Even some brief, unscientific performance testing will show that query three in your demonstration takes about the same amount of time to run as either query one or two separately. Combined, queries one and two take about twice as long as three, and that is before any client-side join is performed.

As you increase the amount of data, the speeds of queries one and two would diverge, but the database join would still be faster.

You should also consider what would happen if the inner join is eliminating rows, as in the sketch below.
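To make that concrete with a hedged sketch (made-up tables again, not your actual fiddle), compare shipping both inputs to the client against letting the inner join discard the non-matching rows at the database:

    -- Approach 1: two separate queries, joined on the client. Every returned
    -- row crosses the network, including orders whose customers are filtered
    -- out and customers that have no orders at all.
    SELECT order_id, customer_id, order_total FROM orders;
    SELECT customer_id, customer_name FROM customers WHERE region = 'EU';

    -- Approach 2: one joined query. The inner join eliminates the
    -- non-matching rows before anything is sent back.
    SELECT o.order_id, c.customer_name, o.order_total
    FROM   orders    AS o
    JOIN   customers AS c ON c.customer_id = o.customer_id
    WHERE  c.region = 'EU';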


The query optimiser should be considered, too. Its role is to take your declarative SQL and translate it into procedural steps. To find the most efficient combination of procedural steps, it will examine combinations of index usage, sorts, caching of intermediate result sets and all sorts of other things. The number of permutations can get exceedingly large even with what look like quite simple queries.
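You can look at the plan the optimiser settled on rather than guessing; for example, PostgreSQL exposes it through EXPLAIN ANALYZE (other engines have their own plan viewers), shown here against the same hypothetical tables:

    -- Show the chosen procedural plan, with estimated and actual row counts.
    EXPLAIN ANALYZE
    SELECT o.order_id, c.customer_name, o.order_total
    FROM   orders    AS o
    JOIN   customers AS c ON c.customer_id = o.customer_id
    WHERE  o.order_date >= '2024-01-01';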

Much of the calculation done to find the best plan is driven by the distribution of data within the tables. These distributions are sampled and stored as statistics objects. If these are wrong, they lead the optimiser to make poor choices. Poor choices early in the plan lead to even poorer choices later on in a snowball effect.

It's not unknown for a medium-sized query returning modest amounts of data to take minutes to run. Correct indexing and good statistics can then reduce this to milliseconds.
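Refreshing those statistics is usually a one-line maintenance command; the exact syntax depends on the engine (this is the PostgreSQL form, with SQL Server's equivalent shown as a comment):

    -- Resample the data distribution so the optimiser's cost estimates
    -- reflect what is actually in the table.
    ANALYZE orders;
    -- SQL Server equivalent: UPDATE STATISTICS dbo.orders;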