on the hardships of living with minimal amounts of RAM (4GB or so) [Sep. 24th, 2011|01:40 am]

I just got 200MB/s read/write rate from my swap device on my laptop. Fast laptop eh? OK, so I'm cheating by using the compcache/zram module from the staging tree.

When I bought my two laptops, I was upgrading from 256MB to 4GB. I thought that would be enough to last me for years. The video card on that first laptop came with more memory than the system memory of the machine I was upgrading from. Alas, I forgot to factor in Opera and Firefox (we're now in the era when Emacs is officially lightweight). And being laptops with the particular chipsets they have, 4GB is it, I'm afraid.

Then there's the fact that Linux's VMM, for me, has never really handled the case of a machine running with a working set not all that much smaller than physical RAM. If I add up resident process sizes plus cache plus buffers plus slab plus anything else I can find, I always come up about 25% short of what's actually in the machine, and it's been that way ever since those 256MB days (about half the RAM went "missing" on the 128MB machine before that). Even when your working set, including any reasonable allowance for what ought to be cacheable, falls far short of RAM, it still manages to swap excessively, killing interactive performance (yes, I've tried /proc/sys/vm/swappiness). When I come in in the morning, it has paged everything out to make the overnight backups marginally faster (not that I cared about that - I was asleep). Then it pages everything back in again at 3MB/s, despite the disk being capable of 80MB/s. Pity it's not smart enough to realise that I need the entire contiguous block of swapped pages back in, so it might as well dump the whole wasted cache and read swap back in contiguously at 80MB/s rather than seeking everywhere and getting nowhere.
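(For the record, swappiness is just a sysctl, so fiddling with it is a one-liner; the value 10 below is purely illustrative, not a recommendation, and as noted it hasn't fixed the behaviour for me anyway:)
# check the current value
cat /proc/sys/vm/swappiness
# bias the VM away from swapping anonymous pages out (illustrative value only)
sysctl -w vm.swappiness=10
# or equivalently:
echo 10 > /proc/sys/vm/swappiness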

What I really wanted was compressed RAM. Reading from disk with lots of seeks is a heck of a lot slower than decompressing pages in RAM. I vaguely recall such an idea exists if you're running inside VMware or the like. But this is a real desktop, not a VM: I want to display to my physical screen without having to virtualise my X11 display.

But the zram module might be what I want. It's pretty easy to set up these days (in the early days it required a backing swap device and was kinda fiddly). Here's the hack I've got in rc.local, along with a reminder mailed to myself at each reboot that I've still got this configured:
# (the zram module needs to be loaded before /sys/block/zram0 exists)
# mail myself a reminder at boot that zram swap is still configured
echo 'rc.local setting zramfs to 3G in size - with a 32% compression ratio (zram_stats), that means we take up 980M for the ramfs swap' | mail -s 'zram' tconnors
# set the (uncompressed) size of the zram device to 3GB, then format and enable it as high-priority swap
echo $((3*1024*1024*1024)) > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon -p 5 /dev/zram0

It seems to present as a block device defaulting to 25% of RAM size (but I've chosen 3GB above), and as you write to that device, compressed versions of the pages end up in physical memory. Eventually you'd run out of physical memory, so hopefully you have a second swap device (of lower priority) configured where it can page out for real. In my case, I'm using the Debian swapspace utility. Be warned: if you plan to hibernate your laptop, don't forget to have a real swap partition handy :)
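(The priority ordering is just swapon's -p flag, or pri= in fstab; a quick sketch, with /dev/sda5 standing in as a placeholder for whatever real swap partition you keep around:)
# zram gets the higher priority, so it fills up first
swapon -p 5 /dev/zram0
# a real on-disk partition (/dev/sda5 here is hypothetical) sits behind it at lower priority
swapon -p 1 /dev/sda5
# or the equivalent /etc/fstab line for the disk partition:
#   /dev/sda5  none  swap  sw,pri=1  0  0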

zram_stats tells me I'm currently swapping 570MB compressed down to 170MB, for a compression ratio of 28%. That 170MB has to be subtracted from the memory the machine has, so it appears to really only have 3.8GB or so. No huge drawback. At that compression ratio, if I were to swap another 3GB out, the physical RAM stolen by zram would only be 1GB. My machine would then appear to have 3GB of physical RAM, 3GB of blindingly fast swap, and a dynamic amount (via swapspace) of slow disk-based swap. I'd be swapping more because I had 1GB less than I originally had, but at least I'd be swapping so quickly I ought not notice it ('course, I haven't benchmarked this). And I'd be able to have 2GB more in my working set before paging really starts to become unbearable.
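(If you don't have the zram_stats script handy, the same numbers can, as far as I can tell, be pulled straight out of sysfs; a rough sketch, assuming the orig_data_size and compr_data_size counters the staging driver exposes, so check the attribute names on your kernel before trusting it:)
# uncompressed bytes currently stored in zram (assumed sysfs attribute name)
orig=$(cat /sys/block/zram0/orig_data_size)
# compressed bytes actually occupying physical RAM (assumed sysfs attribute name)
compr=$(cat /sys/block/zram0/compr_data_size)
# integer percentage: compressed size as a fraction of the uncompressed size
echo "compression ratio: $((compr * 100 / orig))%"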

So, with an uptime of 4 hours, I haven't even swapped to disk yet (I know this because swapspace hasn't allocated any swapfiles yet). The machine hasn't thrashed itself half to death yet. That must be a recent record for me.
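(The quick way to double-check is /proc/swaps: while swapspace is idle, only the zram device should be listed.)
# only /dev/zram0 should show up here until swapspace starts allocating swapfiles
cat /proc/swaps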


Yes, the module is in the staging tree. It's already deadlocked on me once, getting things stuck in the D state. And the machine has locked up for unexplained reasons a couple of other times recently (with X running the screensaver at the time, so no traces, and no idea whether it's general kernel 3.0 flakiness or zram in particular; until tonight I had forgotten that I'd even configured zram previously, back in the 2.6.32 days).

What I really *really* want, since I can't add more RAM to the machines, is a volatile-RAM eSATA device used purely as a swap device and reinitialised on each boot (i.e. battery backup is just pointless complexity and expense, and for the amount of writes involved in a swap device, SSD is slow, fragile and prone to bad behaviour when you can't use TRIM). There are the Gigabyte i-RAM and GC-RAMDISK and similar devices, but they're kinda pricey, even without the RAM included in the price. Why is SSD so much cheaper than plain old simple RAM these days? I thought the complexity involved would very much make it go the other way around.

What I really *really* **really** want is for software to be less bloaty crap.