When is swap used?
It is normal for Linux systems to use some swap even while there is still RAM free. The Linux kernel moves to swap memory pages that are very seldom used. Swap usage becomes a problem only when there is not enough RAM available and the kernel is forced to continuously move pages between swap and RAM just to keep applications running. For example, my Ubuntu system starts swapping well before the RAM is filled up.
This is done to improve performance and responsiveness. Performance is increased because sometimes RAM is better used for disk cache than to store program memory, so it is better to swap out a program that has been inactive for a while and keep often-used files in cache instead. Responsiveness is improved by swapping pages out while the system is idle, rather than when memory is full and some running program is requesting more RAM to complete a task.
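You can see how much swap is in use alongside free RAM by reading /proc/meminfo. A minimal sketch, assuming a Linux /proc filesystem (the helper name is my own):

```python
# Parse /proc/meminfo (Linux-specific) to compare free RAM with swap usage.
def read_meminfo(path="/proc/meminfo"):
    info = {}
    with open(path) as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key.strip()] = int(rest.split()[0])  # values are in kB
    return info

info = read_meminfo()
# Some swap may be in use even while MemFree is far from zero.
swap_used_kb = info.get("SwapTotal", 0) - info.get("SwapFree", 0)
print("RAM free:  %d kB" % info["MemFree"])
print("Swap used: %d kB" % swap_used_kb)
```

The same numbers are what the free command reports, just without the formatting.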
Swapping does slow the system down, of course, but the alternative to swapping isn't not swapping; it's having more RAM or using less RAM. Let's say you have a process, such as a web browser, that requests a chunk of memory.
The process would be allocated that amount of virtual memory; however, the allocation is not necessarily memory-mapped, that is, backed by physical memory. Memory management uses demand paging (together with "copy on write" for shared pages): only when your process actually uses the memory allocated from virtual memory, that is, writes to it, is it mapped to physical memory.
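This lazy mapping can be observed directly. A minimal sketch, assuming Linux (it reserves an anonymous mapping and watches VmRSS in /proc/self/status grow only once pages are touched; the page-step of 4096 bytes is an assumption about the page size, and smaller pages only mean extra touches):

```python
import mmap

def vm_rss_kb():
    # Resident set size of this process, from /proc/self/status (Linux-specific).
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # kB

size = 64 * 1024 * 1024
buf = mmap.mmap(-1, size)     # reserve 64 MB of anonymous virtual memory
rss_before = vm_rss_kb()      # physical pages are not allocated yet

for off in range(0, size, 4096):
    buf[off] = 0x78           # first write to each page maps it to physical RAM

rss_after = vm_rss_kb()
print("RSS grew by ~%d kB after touching the pages" % (rss_after - rss_before))
```

Virtual size (VSZ) jumps at the mmap call; resident size (RSS) only jumps once the pages are written.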
This helps the kernel work efficiently in a multi-process environment. A lot of the memory used by processes is shared; the glibc library, for example, is used by almost all processes.
What is the point of keeping multiple copies of glibc in memory when every process could access the same memory location and do its job? Such frequently used resources are kept in cache so that, when processes demand them, they can all be pointed at the same memory location.
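Those shared, file-backed mappings are visible per process. A small sketch, assuming Linux: /proc/self/maps lists every mapping of the current process, and the file-backed entries (shared libraries and the like) are the ones that can be shared with other processes.

```python
# List the file-backed shared-object mappings of this process (Linux-specific).
with open("/proc/self/maps") as f:
    mappings = f.readlines()

# Lines with a pathname in the last column are file-backed; ".so" picks out
# shared libraries such as libc, which other processes map at no extra RAM cost.
libs = sorted({line.split()[-1] for line in mappings if ".so" in line})
for lib in libs:
    print(lib)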
This speeds processes up, as re-reading glibc and the like from disk is avoided. The above was for shared libraries; something similar is also true of file reading. If you read a large file for the first time, it takes a long time; try the same read again and it is much faster, because the data was cached in memory and the blocks did not all have to be re-read. Writes are buffered as well: a process asks the kernel to do the job, so, on behalf of the process, the kernel writes the data to its buffer and tells the process that the write is done.
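The buffered-write behaviour can be sketched in a few lines. In this hedged example (the file is a throwaway temp file), os.write returns as soon as the data is in the kernel's page cache, and os.fsync is the explicit request to push it to disk:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    # write() returns once the data is copied into the kernel's buffer (page
    # cache); at this point it may not be on disk yet.
    os.write(fd, b"hello, page cache")
    # fsync() blocks until the kernel has flushed this file's data to disk.
    os.fsync(fd)
finally:
    os.close(fd)

with open(path, "rb") as f:
    data = f.read()
os.remove(path)
print(data)
```

Without the fsync, the data would still reach the disk eventually, just at a time of the kernel's choosing.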
Asynchronously, the kernel keeps syncing this buffered data to disk. The process thus relies on the kernel to choose a suitable time to sync the data, and can carry on working in the meantime; some open-source utilities, such as libaio, build on this model. There are also ways to request an explicit sync on file descriptors opened in your process's context, forcing the kernel to flush to disk the writes you have made. Now consider an example: you start a process, say a web browser, with a fairly large binary.
However, the complete binary is not needed instantly; the process keeps moving from function to function in its code.
As said earlier, the full binary size is consumed as virtual memory, but not all of it is mapped to physical memory (the resident set size, RSS, will be smaller; see the output of top).
When code execution reaches a point for which memory is not actually physically mapped, a page fault is issued. The kernel maps that memory to a physical page and associates the page with your process.
Such a page fault is called a "minor page fault". In line with the details above, let's consider a scenario in which a good amount of memory has become memory-mapped.
Now a process starts up that requires memory. As discussed above, the kernel will have to do some memory mapping, but there is not enough physical RAM available to map it. The kernel will first look into its cache, which will contain some old memory pages that are not being used.
It will flush those pages out to a separate partition, called swap, free up the pages, and map the freed pages to the incoming request. As disk writes are much slower than solid-state RAM, this process takes a lot of time, and hence a slowdown is seen. Now suppose you see a lot of free memory available in the system, and yet you also see a lot of swap-out happening.
A probable cause is memory fragmentation. Consider a process that demands 50 MB of contiguous memory from the kernel: even when plenty of memory is free in total, there may be no single contiguous region that large, so pages have to be reclaimed or swapped out to satisfy the request.
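Heavy swap traffic of this kind shows up in the kernel's counters. A small sketch, assuming Linux: /proc/vmstat exposes cumulative swap-in and swap-out page counts since boot, which is a more direct signal than eyeballing free memory.

```python
# Read the kernel's cumulative swap-in/swap-out page counters (Linux-specific).
counters = {}
with open("/proc/vmstat") as f:
    for line in f:
        key, _, value = line.partition(" ")
        counters[key] = int(value)

pswpin = counters.get("pswpin", 0)    # pages swapped in since boot
pswpout = counters.get("pswpout", 0)  # pages swapped out since boot
print("pages swapped in: %d, out: %d" % (pswpin, pswpout))
```

Sampling these counters twice, a few seconds apart, and differencing them is essentially what vmstat's si/so columns do.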
The swappiness parameter controls the kernel's tendency to move pages out of physical memory onto the swap disk. Because disks are much slower than RAM, moving processes out of memory too aggressively can lead to slower response times for the system and applications.
Reducing the default value of swappiness will probably improve overall performance for a typical Ubuntu desktop installation. Note: Ubuntu server installations have different performance requirements than desktop systems, and the default value of 60 is likely more suitable there.
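Before changing anything, you can inspect the current value. A trivial sketch, assuming Linux (equivalent to running cat /proc/sys/vm/swappiness or sysctl vm.swappiness):

```python
# Read the current vm.swappiness value from /proc (Linux-specific).
with open("/proc/sys/vm/swappiness") as f:
    swappiness = int(f.read())
print("vm.swappiness =", swappiness)
```

On most Ubuntu installations this prints 60, the default mentioned above.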
To change the swappiness value: a temporary change (lost on reboot) to a swappiness value of 10 can be made with sudo sysctl vm.swappiness=10. To make the change permanent, edit the configuration file /etc/sysctl.conf with your favorite editor. Search for vm.swappiness and change its value as needed; if no vm.swappiness entry exists, add one, e.g. vm.swappiness=10.

Firstly, it depends on what your definition of "free" is. It is actually not as simple as it sounds in Linux, or any modern operating system. Each application can use some of your memory, and Linux uses all otherwise unoccupied memory, except for the last few MB, as "cache". This includes the page cache, inode caches, etc.
This is a good thing - it helps speed things up a lot. Both writing to disk and reading from disk can be sped up immensely by cache. Ideally, you have enough memory for all your applications, and you still have several hundred MB left for cache. In this situation, as long as your applications don't increase their memory use and the system isn't struggling to get enough space for cache, there is no need for any swap.
Once applications claim more RAM, it simply goes into some of the space that was used by cache, shrinking the cache. De-allocating cache is cheap and easy enough that it is simply done in real time: everything that sits in the cache is either just a second copy of something that is already on disk, so it can be deallocated instantly, or something that we would have had to flush to disk within the next few seconds anyway.
This is not a situation specific to Linux - all modern operating systems work this way. Different operating systems may just report free RAM differently: some include cache as part of what they consider "free" and some do not. When you talk about free RAM, it is much more meaningful to include cache, because it practically is free - it's available should any application request it. On Linux, the free command reports it both ways: the first line counts cache in the used column, while the second line counts cache and buffers as free.
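The kernel itself publishes both views of "free". A minimal sketch, assuming Linux: MemFree in /proc/meminfo counts only completely unused pages, while MemAvailable (present on kernels 3.14 and newer) estimates how much memory applications could actually claim, counting reclaimable cache as well.

```python
# Compare the strict and practical notions of "free" memory (Linux-specific).
fields = {}
with open("/proc/meminfo") as f:
    for line in f:
        key, _, rest = line.partition(":")
        fields[key] = int(rest.split()[0])  # kB

print("MemFree:      %d kB" % fields["MemFree"])                 # truly unused
print("MemAvailable: %d kB" % fields.get("MemAvailable", 0))     # incl. reclaimable cache
print("Cached:       %d kB" % fields["Cached"])                  # page cache
```

The gap between MemFree and MemAvailable is, roughly, the cache that could be given back to applications on demand.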
Once you have used up enough memory that there is not enough left for a smooth-running cache, Linux may decide to move some unused application memory from RAM to swap. It doesn't do this at a definite cut-off; it's not as though you reach a certain percentage of allocation and then Linux starts swapping. It has a rather "fuzzy" algorithm that takes a lot of things into account, best described as "how much pressure is there for memory allocation".
If there is a lot of "pressure" to allocate new memory, then it will increase the chances some will be swapped to make more room. If there is less "pressure" then it will decrease these chances. Your system has a "swappiness" setting which helps you tweak how this "pressure" is calculated. It's normally not recommended to alter this at all, and I would not recommend you alter it. Swapping is overall a very good thing - although there are a few edge cases where it harms performance, if you look at overall system performance it's a net benefit for a wide range of tasks.
If you reduce the swappiness, you let the amount of cache memory shrink a little bit more than it would otherwise, even when it may really be useful.
Whether this is a good enough trade-off for whatever problem you're having with swapping is up to you; you should just know what you're doing, that's all. There is one well-known situation in which swap really harms perceived performance on a desktop system: how slowly applications respond to user input again after being left idle for a long time while an I/O-heavy background process, such as an overnight backup, runs.
This is a very visible sluggishness, but not enough to justify turning off swap altogether, and it is very hard to prevent in any operating system.
This is not a situation that's limited to Linux, either. When choosing what is to be swapped to disk, the system tries to pick memory that is not actually being used - not read from or written to. It has a pretty simple algorithm for calculating this that chooses well most of the time.

There is also a small memory cache on the CPU itself, where a small piece of the running program is stored for fast access; search for branch-prediction algorithms if you want to know more.
Sometimes there are level-2 caches between the CPU and the main memory. The next level is the main memory (RAM). The last level, and the slowest of all, is the disk; sometimes you can even use USB sticks as extra memory.
Since the default config on many Linux distros is to overcommit memory, the "worst" that will happen is that the OOM killer will start sniping processes, most likely starting with the DBMS; and with tables "stored in RAM", that's probably not a good thing. To stop processes from using swap: install more RAM.
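Which process the OOM killer picks is not arbitrary: the kernel tracks a badness score per process, and the highest score is killed first. A small sketch, assuming Linux, reading this process's own score:

```python
# Inspect this process's OOM-killer badness score (Linux-specific).
with open("/proc/self/oom_score") as f:
    score = int(f.read())          # higher score = more likely to be killed
with open("/proc/self/oom_score_adj") as f:
    adj = int(f.read())            # user-settable bias, -1000 .. 1000
print("oom_score=%d oom_score_adj=%d" % (score, adj))
```

A large memory hog like a DBMS tends to accumulate a high score, which is why it is often the first casualty; writing -1000 to oom_score_adj (as root) exempts a process entirely.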
If you don't want any swapping, stop using a page file, but expect other problems. Swapping itself is not inherently slow: it is moving a block of data from the hard disk into physical memory, a one-time read, which is normally fast enough.
If you have sufficient RAM for your system, you can actually go without swap memory, so it is up to you whether you need it. This is possible with VZ because the RAM is not completely partitioned; common libraries are shared across VMs, or something to that effect.
So, 8 x 64 MB is not necessarily 512 MB. By swapping out inactive programs, you have more memory for file caching, and that speeds things up. His customers or web-app users would not be very pleased if the OOM killer took down the back-end over something as simple as a nightly backup operation.