How large should MySQL's innodb_buffer_pool_size be?

Your innodb_buffer_pool_size is enormous. You have it set to 20971520000, which is roughly 19.53 GB. If you only have 5GB of InnoDB data and indexes, then the buffer pool should only be about 8GB. Even this may be too high.

Here is what you should do. First, run this query:

SELECT CEILING(Total_InnoDB_Bytes*1.6/POWER(1024,3)) RIBPS FROM
(SELECT SUM(data_length+index_length) Total_InnoDB_Bytes
FROM information_schema.tables WHERE engine='InnoDB') A;

This will give you the RIBPS, the Recommended InnoDB Buffer Pool Size, based on all your InnoDB data and indexes plus an additional 60% of headroom.

For example:

mysql>     SELECT CEILING(Total_InnoDB_Bytes*1.6/POWER(1024,3)) RIBPS FROM
    ->     (SELECT SUM(data_length+index_length) Total_InnoDB_Bytes
    ->     FROM information_schema.tables WHERE engine='InnoDB') A;
+-------+
| RIBPS |
+-------+
|     8 |
+-------+
1 row in set (4.31 sec)

mysql>

With this output, you would set the following in /etc/my.cnf:

[mysqld]
innodb_buffer_pool_size=8G

Next, restart MySQL: service mysql restart
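
If you want to confirm the new setting took hold after the restart, a quick sanity check (a minimal sketch; the 8G value assumes the RIBPS example above) is:

-- Should report 8589934592 bytes (8G) if the restart picked up the new value
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';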

After the restart, run MySQL for a week or two. Then, run this query:

SELECT (PagesData*PageSize)/POWER(1024,3) DataGB FROM
(SELECT variable_value PagesData
FROM information_schema.global_status
WHERE variable_name='Innodb_buffer_pool_pages_data') A,
(SELECT variable_value PageSize
FROM information_schema.global_status
WHERE variable_name='Innodb_page_size') B;

This tells you how many GB of memory are actually in use by InnoDB data in the InnoDB Buffer Pool at this moment.
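
If you want to see that usage next to the pool's total size in a single pass, here is a sketch along the same lines; it only adds the Innodb_buffer_pool_pages_total counter to the query above:

SELECT (PagesData*PageSize)/POWER(1024,3) DataGB,
(PagesTotal*PageSize)/POWER(1024,3) PoolGB FROM
(SELECT variable_value PagesData
FROM information_schema.global_status
WHERE variable_name='Innodb_buffer_pool_pages_data') A,
(SELECT variable_value PagesTotal
FROM information_schema.global_status
WHERE variable_name='Innodb_buffer_pool_pages_total') B,
(SELECT variable_value PageSize
FROM information_schema.global_status
WHERE variable_name='Innodb_page_size') C;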

I have written about this before: What to set innodb_buffer_pool and why..?

You could just run this DataGB query right now rather than reconfiguring, restarting and waiting a week.

This DataGB value more closely resembles how big the InnoDB Buffer Pool should be, plus the percentage specified by innodb_change_buffer_max_size. I am sure this will be far less than the 20000M you have reserved right now. The RAM you save can be used for tuning other things, such as the following (their current values can be checked with the query after this list):

  • join_buffer_size
  • sort_buffer_size
  • read_buffer_size
  • read_rnd_buffer_size
  • max_connections
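
For reference, a quick way to see what those are currently set to (a minimal sketch using plain SHOW VARIABLES):

SHOW VARIABLES WHERE Variable_name IN
('join_buffer_size','sort_buffer_size','read_buffer_size',
'read_rnd_buffer_size','max_connections');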

CAVEAT #1

This is very important to note: at times, InnoDB may require an additional 10% over the value you set for innodb_buffer_pool_size. Here is what the MySQL documentation says about this:

The larger you set this value, the less disk I/O is needed to access data in tables. On a dedicated database server, you may set this to up to 80% of the machine physical memory size. Be prepared to scale back this value if these other issues occur:

Competition for physical memory might cause paging in the operating system.

InnoDB reserves additional memory for buffers and control structures, so that the total allocated space is approximately 10% greater than the specified size.

The address space must be contiguous, which can be an issue on Windows systems with DLLs that load at specific addresses.

The time to initialize the buffer pool is roughly proportional to its size. On large installations, this initialization time may be significant. For example, on a modern Linux x86_64 server, initialization of a 10GB buffer pool takes approximately 6 seconds. See Section 8.9.1, “The InnoDB Buffer Pool”.
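
As a rough worked example of that ~10% overhead, you can estimate the total allocation straight from the running value (a sketch; treat the result as an approximation, not an exact figure):

-- Approximate total memory InnoDB will allocate, given the ~10% overhead noted above
SELECT @@innodb_buffer_pool_size * 1.10 / POWER(1024,3) AS ApproxAllocatedGB;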

CAVEAT #2

I see the following values in your my.cnf:

| innodb_io_capacity                                | 200 |
| innodb_read_io_threads                            | 4   |
| innodb_thread_concurrency                         | 4   |
| innodb_write_io_threads                           | 4   |

These numbers will prevent InnoDB from taking full advantage of multiple cores.

Please set the following:

[mysqld]
innodb_io_capacity = 2000
innodb_read_io_threads = 64
innodb_thread_concurrency = 0
innodb_write_io_threads = 64
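
If a restart is not possible right away, note that innodb_io_capacity and innodb_thread_concurrency are dynamic and can be applied at runtime (a sketch; the two io_threads settings are not dynamic and still require the my.cnf change plus a restart):

SET GLOBAL innodb_io_capacity = 2000;
SET GLOBAL innodb_thread_concurrency = 0;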

I have written about this before on DBA StackExchange:

  • May 26, 2011: About single threaded versus multithreaded databases performance
  • Sep 12, 2011: Possible to make MySQL use more than one core?
  • Sep 20, 2011: Multi cores and MySQL Performance

I just answered a question like this on Server Fault using a more concise formula:

SELECT CONCAT(CEILING(RIBPS/POWER(1024,pw)),SUBSTR(' KMGT',pw+1,1))
Recommended_InnoDB_Buffer_Pool_Size FROM
(
    SELECT RIBPS,FLOOR(LOG(RIBPS)/LOG(1024)) pw
    FROM
    (
        SELECT SUM(data_length+index_length)*1.1*growth RIBPS
        FROM information_schema.tables AAA,
        (SELECT 1.25 growth) BBB
        WHERE ENGINE='InnoDB'
    ) AA
) A;
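
If the unit-suffix trick in that formula looks opaque, this small standalone sketch (the sample byte counts are just illustrations) shows how FLOOR(LOG(n)/LOG(1024)) picks the power of 1024 and SUBSTR(' KMGT',pw+1,1) turns it into a suffix:

SELECT n Bytes, FLOOR(LOG(n)/LOG(1024)) pw,
SUBSTR(' KMGT',FLOOR(LOG(n)/LOG(1024))+1,1) Unit
FROM (SELECT 900 n UNION SELECT 5*POWER(1024,2) UNION SELECT 12*POWER(1024,3)) t;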

Something like this? Using SHOW VARIABLES and SHOW GLOBAL STATUS:

Expression: innodb_buffer_pool_size / _ram
Meaning: % of RAM used for InnoDB buffer_pool
Recommended range: 60~80%

Expression: Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests
Meaning: Read requests that had to hit disk
Recommended range: 0-2%
What to do if out of range: Increase innodb_buffer_pool_size if you have enough RAM.

Expression: Innodb_pages_read / Innodb_buffer_pool_read_requests
Meaning: Read requests that had to hit disk
Recommended range: 0-2%
What to do if out of range: Increase innodb_buffer_pool_size if you have enough RAM.

Expression: Innodb_pages_written / Innodb_buffer_pool_write_requests
Meaning: Write requests that had to hit disk
Recommended range: 0-15%
What to do if out of range: Check innodb_buffer_pool_size

Expression: Innodb_buffer_pool_reads / Uptime
Meaning: Reads
Recommended range: 0-100/sec.
What to do if out of range: Increase innodb_buffer_pool_size?

Expression: (Innodb_buffer_pool_reads + Innodb_buffer_pool_pages_flushed) / Uptime
Meaning: InnoDB I/O
Recommended range: 0-100/sec.
What to do if out of range: Increase innodb_buffer_pool_size?

Expression: Innodb_buffer_pool_pages_flushed / Uptime
Meaning: Writes (flushes)
Recommended range: 0-100/sec.
What to do if out of range: Increase innodb_buffer_pool_size?

Expression: Innodb_buffer_pool_wait_free / Uptime
Meaning: Counter for when there are no free pages in buffer_pool. That is, all pages are dirty.
Recommended range: 0-1/sec.
What to do if out of range: First be sure innodb_buffer_pool_size is set reasonably; if still trouble, decrease innodb_max_dirty_pages_pct
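
As one concrete example, the first read-miss ratio in the list above can be computed directly from the same counters (a sketch using information_schema.global_status as in the earlier queries; on MySQL 5.7+ these counters live in performance_schema.global_status instead):

SELECT (SELECT variable_value FROM information_schema.global_status
WHERE variable_name='Innodb_buffer_pool_reads') /
(SELECT variable_value FROM information_schema.global_status
WHERE variable_name='Innodb_buffer_pool_read_requests') * 100 PctReadsFromDisk;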


Your title asks about innodb_buffer_pool_size, but I suspect that is not the real problem. (Rolando covered above why you already have it set big enough, even too big.)

"I've set max_connections = 800, which sometimes saturates and grinds the server to a halt."

That is unclear. 800 users in "Sleep" mode have virtually zero impact on the system; 800 active threads would be a disaster. How many threads are "running"?
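
A quick way to see how many of those 800 connections are actually doing work (Threads_running counts non-sleeping threads):

SHOW GLOBAL STATUS LIKE 'Threads_running';
SHOW GLOBAL STATUS LIKE 'Threads_connected';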

Are the threads blocking each other? See SHOW ENGINE INNODB STATUS for some clues on deadlocks, etc.

Are any queries showing up in the slowlog? Let's optimize them.
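
If the slow log is not already enabled, it can be turned on at runtime (a sketch; the 2-second threshold is just an assumption, pick whatever fits your workload):

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;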

What version are you using? XtraDB (a drop-in replacement for InnoDB) does a better job of using multiple cores; MySQL 5.6.7 does an even better job.

innodb_buffer_pool_instances -- change this to 8 (assuming a 20G buffer_pool); it will cut back slightly on mutex contention.
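
To see your version and the current instance count (a minimal sketch; innodb_buffer_pool_instances itself is not dynamic, so changing it means editing my.cnf and restarting):

SELECT VERSION();
SHOW VARIABLES LIKE 'innodb_buffer_pool_instances';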

Are you I/O bound or are you CPU bound? The solutions are radically different, depending on your answer.

SSD -- It might be better if all the log files were on non-SSD drives.