In Linux, what is the difference between "buffers" and "cache" reported by the free command?

Solution 1:

The "cached" total will also include some other memory allocations, such as any tmpfs filesytems. To see this in effect try:

mkdir t
mount -t tmpfs none t
dd if=/dev/zero of=t/zero.file bs=10240 count=10240
sync; echo 3 > /proc/sys/vm/drop_caches; free -m
umount t
sync; echo 3 > /proc/sys/vm/drop_caches; free -m

and you will see the "cached" value drop by the 100MB that you copied to the RAM-based filesystem (assuming there was enough free RAM; you might find some of it ended up in swap if the machine is already over-committed in terms of memory use). The "sync; echo 3 > /proc/sys/vm/drop_caches" before each call to free should write anything pending in all write buffers (the sync) and clear all cached/buffered disk blocks from memory, so free will only be reporting other allocations in the "cached" value.
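
For reference, the three drop_caches values do slightly different things (they are documented in the kernel's Documentation/sysctl/vm.txt); a quick sketch, all run as root:

sync                                  # flush dirty pages to disk first, so they can be dropped
echo 1 > /proc/sys/vm/drop_caches     # free the page cache only
echo 2 > /proc/sys/vm/drop_caches     # free reclaimable slab objects (dentries and inodes)
echo 3 > /proc/sys/vm/drop_caches     # free both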

The RAM used by virtual machines (such as those running under VMware) may also be counted in free's "cached" value, as will RAM used by currently open memory-mapped files (this will vary depending on the hypervisor/version you are using, and possibly between kernel versions too).
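
To get a rough idea of how much of that "cached" figure relates to memory-mapped files or shared memory on your own machine, you can compare the related /proc/meminfo counters (a quick sketch; which fields are present varies a little between kernel versions):

# Cached: page cache (excluding SwapCached); Mapped: file pages mapped into processes;
# Shmem: tmpfs and shared memory (newer kernels only)
grep -E '^(Cached|Mapped|Shmem):' /proc/meminfo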

So it isn't as simple as "buffers counts pending file/network writes and cached counts recently read/written blocks held in RAM to save future physical reads", though for most purposes this simpler description will do.

Solution 2:

Tricky question. When you calculate free memory, you actually need to add up both buffers and cache (a rough sketch of that calculation follows after the link below). This is what I could find:

A buffer is something that has yet to be "written" to disk. A cache is something that has been "read" from the disk and stored for later use.

http://visualbasic.ittoolbox.com/documents/difference-between-buffer-and-cache-12135
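
A rough sketch of that "add them up" calculation, reading the counters straight from /proc/meminfo (values are in kB):

awk '/^(MemFree|Buffers|Cached):/ {sum += $2} END {print sum " kB effectively available"}' /proc/meminfo

This is roughly the number that older versions of free report in the "free" column of their "-/+ buffers/cache" line.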


Solution 3:

I was looking for a clearer description of buffers, and I found one in "Professional Linux® Kernel Architecture" (2008):

Chapter 16: Page and Buffer Cache

Interaction

Setting up a link between pages and buffers serves little purpose if there are no benefits for other parts of the kernel. As already noted, some transfer operations to and from block devices may need to be performed in units whose size depends on the block size of the underlying devices, whereas many parts of the kernel prefer to carry out I/O operations with page granularity as this makes things much easier — especially in terms of memory management. In this scenario, buffers act as intermediaries between the two worlds.
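
To see the two granularities the quote is contrasting on a running system, a minimal sketch (the device name is only an example; blockdev typically needs root):

getconf PAGESIZE              # page granularity used by the memory-management side, typically 4096
blockdev --getbsz /dev/sda    # block size of the underlying block device, e.g. 512, 1024 or 4096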


Solution 4:

Explained by Red Hat:

Cache Pages:

A cache is the part of memory which transparently stores data so that future requests for that data can be served faster. This memory is utilized by the kernel to cache disk data and improve I/O performance.

The Linux kernel is built in such a way that it will use as much RAM as it can to cache information from your local and remote filesystems and disks. As time passes and various reads and writes are performed on the system, the kernel tries to keep data stored in memory for the various processes running on the system, or data of relevant processes that will be used in the near future. The cache is not reclaimed when a process stops or exits; however, when other processes require more memory than is freely available, the kernel runs heuristics to reclaim memory by evicting cached data and allocating that memory to the new processes.
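
On reasonably recent kernels (3.14 and later), /proc/meminfo also exposes the kernel's own estimate of how much memory could be made available through this kind of reclaim, which is a better number to look at than MemFree alone:

grep -E '^(MemFree|MemAvailable):' /proc/meminfo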

When any kind of file/data is requested, the kernel will look for a copy of the part of the file the user is acting on and, if no such copy exists, it will allocate one new page of cache memory and fill it with the appropriate contents read in from the disk.

The data stored within a cache might be values that have been computed earlier or duplicates of original values that are stored elsewhere on the disk. When some data is requested, the cache is first checked to see whether it contains that data. The data can be retrieved more quickly from the cache than from its original source.
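
A simple way to watch this happen; /some/large/file is just a placeholder for any file of a few hundred MB that is not already cached:

grep ^Cached: /proc/meminfo               # note the value
time cat /some/large/file > /dev/null     # first read comes from the disk
grep ^Cached: /proc/meminfo               # grows by roughly the size of the file
time cat /some/large/file > /dev/null     # second read is served from the page cache and is much faster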

SysV shared memory segments are also accounted as cache, though they do not represent any data on the disks. One can check the size of the shared memory segments using the ipcs -m command and checking the bytes column.
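
A small sketch of that check using the util-linux tools (note that a segment's pages are allocated lazily, so the memory counters only grow once something actually touches them):

ipcmk -M $((64*1024*1024))   # create a 64 MB SysV shared memory segment; prints its id
ipcs -m                      # list segments; the bytes column shows their size
grep ^Shmem: /proc/meminfo   # shared memory accounting (included in free's "cached")
ipcrm -m <shmid>             # remove the segment again, using the id printed by ipcmk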

Buffers :

Buffers are the disk block representation of the data that is stored under the page cache. Buffers contain the metadata of the files/data which reside under the page cache. Example: when there is a request for data which is present in the page cache, the kernel first checks the buffers, which contain the metadata pointing to the actual files/data in the page cache. Once the actual block address of the file is known from the metadata, it is picked up by the kernel for processing.
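
One way to see the two counters move independently; a sketch that only reads, but still needs root for the raw device (and /dev/sda plus /some/large/file are only example names):

free -m                                         # note the buffers and cached columns
dd if=/dev/sda of=/dev/null bs=1M count=200     # read straight from the block device: "buffers" grows
free -m
dd if=/some/large/file of=/dev/null bs=1M       # read through the filesystem: "cached" grows instead
free -m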


Solution 5:

Freeing buffer/cache

Warning: this explains a drastic method that is not recommended on a production server! You have been warned; don't blame me if something goes wrong.

To understand what is going on, you can force your system to delegate as much memory as possible to the cache, then drop the cached file:

Preamble

Before doing the test, you can open another window and run:

$ vmstat -n 1
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  1  39132  59740  39892 1038820    0    0     1     0    3    3  5 13 81  1
 1  0  39132  59140  40076 1038812    0    0   184     0 10566 2157 27 15 48 11
...

to follow the evolution of swap usage in real time.

Note: you need at least as much free disk space in the current directory as you have mem+swap; the loop below computes that size from /proc/meminfo (the factor of 2 converts kB into the 512-byte blocks that dd counts by default).

The demo
$ free
             total       used       free     shared    buffers     cached
Mem:       2064396    2004320      60076          0      90740     945964
-/+ buffers/cache:     967616    1096780
Swap:      3145720      38812    3106908

$ tot=0
$ while read -a line;do
      [[ "${line%:}" =~ ^(Swap|Mem)Total$ ]] && ((tot+=2*${line[1]}))
    done </proc/meminfo
$ echo $tot
10420232

$ dd if=/dev/zero of=veryBigFile count=$tot
10420232+0 records in
10420232+0 records out
5335158784 bytes (5.3 GB) copied, 109.526 s, 48.7 MB/s

$ cat >/dev/null veryBigFile

$ free
             total       used       free     shared    buffers     cached
Mem:       2064396    2010160      54236          0      41568    1039636
-/+ buffers/cache:     928956    1135440
Swap:      3145720      39132    3106588

$ rm veryBigFile 

$ free
             total       used       free     shared    buffers     cached
Mem:       2064396    1005104    1059292          0      41840      48124
-/+ buffers/cache:     915140    1149256
Swap:      3145720      39132    3106588

Note: the host on which I ran this is heavily used, so the effect will be more significant on a really quiet machine.