
Recently I started monitoring a Node.js app that we have been developing at work. After a while, I found that its memory usage % was growing slowly, like 20% in 3 days. The memory usage was measured with the following Node.js code:

```js
const os = require("os");

const total = os.totalmem();
const free = os.freemem();
const usage = ((total - free) / total) * 100; // percent of RAM in use
```

So, the numbers are basically from the OS, which was Alpine Linux on Docker in this case. Luckily I also had memory usages of the application processes recorded, but they were not increasing.
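As an aside, here is a minimal sketch of how that kind of periodic recording might look; it assumes nothing about the actual metrics pipeline (the 60-second interval and console.log target are placeholders). It samples the same OS-level percentage alongside process.memoryUsage().rss so that OS memory and process memory can be compared over time.

```js
const os = require("os");

// Sample OS-level and process-level memory on an interval (illustrative only).
function sample() {
  const total = os.totalmem();
  const free = os.freemem();
  const osUsagePercent = ((total - free) / total) * 100;
  const processRssMb = process.memoryUsage().rss / 1024 / 1024;

  console.log(`os: ${osUsagePercent.toFixed(1)}%, process rss: ${processRssMb.toFixed(1)} MB`);
}

setInterval(sample, 60 * 1000);
```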
Then why is the OS memory usage increasing?

Buffers and cached memory

I used the top command with Shift+M (sort by memory usage) and compared processes on a long-running server and ones on a newly deployed server. The only difference was that buffers and cached Mem were high on the long-running one. After some research, or googling, I concluded that it was not a problem. Most of buffers and cached Mem are given up when application processes claim more memory. Actually, the free -m command provides a row for used and free that takes buffers and cached into consideration.
So, what are they actually? According to the manual of /proc/meminfo, which is a pseudo file and the data source of free, top and friends:

```
Buffers %lu
    Relatively temporary storage for raw disk blocks that
    shouldn't get tremendously large (20MB or so).

Cached %lu
    In-memory cache for files read from the disk (the page cache).
```

I am still not sure what exactly Buffers contains, but it contains metadata of files, etc. Cached contains cached file contents, which are called page cache. The OS keeps the page cache while RAM has enough free space. That was why the memory usage was increasing even when processes were not leaking memory. If you are interested, "What is the difference between Buffers and Cached columns in /proc/meminfo output?" on Quora has more details about Buffers and Cached.
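Since free, top and friends all read /proc/meminfo, the same adjustment can be made directly. Here is a rough sketch, not code from the original setup, that parses /proc/meminfo and subtracts Buffers and Cached from the used figure; it assumes a Linux host where /proc/meminfo is readable.

```js
const fs = require("fs");

// Parse /proc/meminfo into a map of field name -> bytes.
function readMeminfo() {
  const fields = {};
  for (const line of fs.readFileSync("/proc/meminfo", "utf8").split("\n")) {
    const match = line.match(/^(\w+):\s+(\d+) kB/);
    if (match) {
      fields[match[1]] = Number(match[2]) * 1024; // values are reported in kB
    }
  }
  return fields;
}

const m = readMeminfo();
// Treat Buffers and Cached as reclaimable rather than "used".
const usedExcludingCache = m.MemTotal - m.MemFree - m.Buffers - m.Cached;
console.log(`used excluding buffers/cache: ${(usedExcludingCache / 1024 / 1024).toFixed(1)} MB`);
```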
So, should we use free + buffers + cached? /proc/meminfo has an even better metric called MemAvailable.

```
MemAvailable %lu (since Linux 3.14)
    An estimate of how much memory is available for
    starting new applications, without swapping.
```

Its background is explained well in the commit that introduced it to the Linux kernel, but essentially it excludes non-freeable page cache and includes reclaimable slab memory. The current implementation in Linux v4.12-rc2 still looks almost the same. Some implementations of free -m have an available column:

```
total  used  free  shared  buff/cache  available
```

It is also available as an AWS CloudWatch metric via the --mem-avail flag of the CloudWatch monitoring scripts.
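To close the loop with the measurement code at the top of the post, here is a hedged sketch (not the original monitoring code) that computes a usage percentage from MemAvailable instead of os.freemem(); it assumes Linux 3.14 or later and falls back to MemFree when the field is missing.

```js
const fs = require("fs");

// Usage % based on MemAvailable (Linux 3.14+), falling back to MemFree.
function memUsagePercent() {
  const meminfo = fs.readFileSync("/proc/meminfo", "utf8");
  const kb = (name) => {
    const match = meminfo.match(new RegExp(`^${name}:\\s+(\\d+) kB`, "m"));
    return match ? Number(match[1]) : null;
  };

  const total = kb("MemTotal");
  const available = kb("MemAvailable") ?? kb("MemFree");
  return ((total - available) / total) * 100;
}

console.log(`memory usage: ${memUsagePercent().toFixed(1)}%`);
```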
