Just yesterday people were telling us that Apple revolutionized memory and is only installing such low amounts of RAM because, hey, you've got that super-fast SSD right there; it's basically RAM by a different name.
Well, whoops. I guess there are some side effects.
Rules of software engineering, and engineering in general: you can't break Einstein. RAM is generally slow to begin with, but disk is slower. There should be plenty of RAM, enough to use it as a disk cache, not vice versa. (edit: omitted the last sentence)
Presumably people getting 16GB of RAM are also running more memory intensive workloads than those not shelling out for the upgrade, so it could be a non-factor.
Yeah, although in this case it takes me back to ~10 years ago. I didn't think 8GB was a lot back then either, and we typically tried for at least 16.
It's pretty disappointing how little advancement has been made in RAM in recent years. It seems like everything should come with at least 16GB now, with 32GB readily available, but I guess no one is working on bringing down the cost of desktop/laptop memory.
Wow, those W520s were heavy. I got a few dumped on me at an old job; they were going to throw them away. I put a few together and sold them for $400 each in 2018. Great returns; I didn't even think they would bring $200.
I remember having 8 memory slots in my Cyrix 40MHz 486 clone in 1994. I had 4MB of memory in four slots. I was thrilled when a friend gave me four 256KB memory SIMMs for an extra 1MB of RAM, which significantly sped up my computer :)
I had a 75 MB (yes, MB) drive in the first computer I bought. It was top of the line at the time. I remember telling my wife that I couldn’t imagine ever filling it up. Ha! Now I feel like a pauper if my phone has a 64GB drive ;-)
At the time it would have served pretty much all normal users just fine, so I don't see what's wrong with the statement. It's like telling a home user today that 32GB should be enough for just about anything a normal user would do with their home PC.
I wouldn't advise anyone to get 128GB (or more) "just in case". If they have a special use case, then sure; otherwise, no.
I think I forget sometimes because I'm working with this stuff daily now, but yeah the evolution of computing technology is just astounding in scale and pace.
I've noticed kernel_task's disk usage is heavily skewed towards writes (e.g. 7GB read vs. 170GB written over a couple of hours). Assuming it is indeed swapping pages to disk and reads/writes are counted properly, either the dynamic pager or some application/system process is making extremely unfortunate life decisions. Lots of dirty pages are never read back, like there is a huge cache of... something.
In general you will write pages out to swap before evicting them from memory. The goal is to never have to wait for a page to finish being flushed to disk before you can allocate a new one. This means you will sometimes access a page that has been written out to disk but is still resident in memory; if you then modify it, it may be written out again without ever having been read back. So it isn't unreasonable for swap writes to be higher than swap reads.

However, those numbers seem extreme; probably some bad tuning, or just neglecting to account for the full cost of writing to disk.
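If you want to sanity-check numbers like that yourself, one way is to sample the cumulative swap-in/swap-out counters and diff successive readings. A minimal sketch in Python using the third-party psutil library (assuming it's installed; exact counter semantics vary a bit by OS):

    import time
    import psutil  # third-party: pip install psutil

    # swap_memory() exposes occupancy (used) plus cumulative bytes
    # swapped in (sin) and out (sout); diffing samples gives activity.
    prev = psutil.swap_memory()
    while True:
        time.sleep(5)
        cur = psutil.swap_memory()
        print(f"used: {cur.used / 2**30:5.1f} GiB  "
              f"in: {(cur.sin - prev.sin) / 2**20:8.1f} MiB  "
              f"out: {(cur.sout - prev.sout) / 2**20:8.1f} MiB")
        prev = cur

This also makes the occupancy-vs-activity distinction below concrete: used can sit high and harmless while in/out stay near zero.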
One thing: on any machine, client or server, that I've used in the past 20 years, swap is the first thing I disable.
If there is a need for swap, install more RAM.
> Terabytes of usage must be a bug.
Garbage-collected setups with relatively large heaps are extra horrid when it comes to memory usage patterns while running a full GC... and, correspondingly, swap.
Swap has value. E.g. it lets you have a tmpfs that's larger than your physical RAM. Or you can process memory dumps of a machine larger than yours. Or you can just freeze a process group and have it dumped to disk so other processes can use the RAM in the meantime. It can definitely help avoid the OOM killer.
Swap occupancy and swap activity are not the same. The former is fine, the latter should be kept small.
> Or you can process memory dumps of a machine larger than yours.
You should be able to do that in software anyway - instead of loading the dump into memory entirely, all it has to do is memory-map the file.
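For illustration, a minimal sketch of that approach in Python (the filename and the byte-counting "analysis" are just stand-ins):

    import mmap

    # Map the dump read-only; the OS faults pages in on demand, so
    # resident memory stays bounded even if the dump exceeds RAM.
    with open("memory.dump", "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            zeros = 0
            for off in range(0, len(mm), 1 << 20):  # 1 MiB windows
                zeros += mm[off:off + (1 << 20)].count(0)
            print(f"zero bytes: {zeros}")

The catch, as the reply below notes, is that real analyzers often build large in-memory indexes on top of the raw dump, and those don't map away so easily.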
About the need to freeze a process (group): I don't quite see how that's useful on a server. On a desktop machine I have never run into a case where closing the application would not suffice. Is there an example?

Lastly, using swap pretty much means no disk cache.
> You should be able to do that in software anyway - instead of loading the dump into memory entirely, all it has to do is memory-map the file.
Should, perhaps. But in practice I have had analyzers gobble up more memory than available.
> About the need to freeze a process (group): I don't quite see how that's useful on a server. On a desktop machine I have never run into a case where closing the application would not suffice. Is there an example?
A long-running render job, preempted by a higher-priority one. Technically they're resumable, so we could stop the current job and resume it later from a snapshot file, but that isn't implemented in the current workers. So as long as the machine has enough RAM+swap, it's easier to freeze the current job and start the higher-priority one.
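For what it's worth, the freeze itself can be as simple as SIGSTOP/SIGCONT on the job's process group; once stopped, its pages go cold and become prime swap-out candidates under memory pressure. A hypothetical sketch (the pgid handling is made up, not our actual worker code):

    import os
    import signal

    def freeze(pgid: int) -> None:
        # Stop every process in the group; cold pages can then be
        # swapped out when something else needs the RAM.
        os.killpg(pgid, signal.SIGSTOP)

    def thaw(pgid: int) -> None:
        # Resume the group; pages fault back in from swap as touched.
        os.killpg(pgid, signal.SIGCONT)

    # Hypothetical usage:
    # freeze(render_job_pgid)
    # ... run the higher-priority job ...
    # thaw(render_job_pgid)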
> Lastly, using swap pretty much means no disk cache.
I don't know all the tunables of the swap subsystem well enough, but I have seen swap being used before running out of physical RAM. I assume some I/O displaced inactive applications.
Of course, that's the point. If you can't get enough memory in a machine, you don't buy it. Even my old Haswell (2013) Acer laptop has 16GB; the Skylake Lenovo laptop I use for work has 32GB.
You're not wrong, but I have to wonder how much phone RAM is just the new megapixels, i.e. bigger number = more impressive spec sheet.
iOS is obviously a lot stricter about background activity than Android, but it manages to work great on 3-6GB.
I can comfortably do development work on my laptop with 16GB, and until last year I was managing mostly okay on a five-year-old machine with 8GB.
When you factor in the sort of stuff people actually do on a phone, surely 12-16GB is a massive waste? You can make the argument that it will become more useful as the phone ages, but by that point it will have probably stopped receiving software updates.
Arm and x86 have a similar number of instructions. The tradeoff is that with x86 you end up with more compact code because the instructions are variable-length, but on Arm the actual decoding is simpler, so you have more die area for I$ (instruction cache) or anything else you want.
No, the RAM is separate chips/dies, albeit incorporated into the same package as the CPU. In pictures of the M1[1], the RAM is the plastic-covered chips next to the main heat spreader for the SoC.
It's completely baffling how a little meme like this, which is in no way accurate, survives and thrives not just at large but specifically on HN. Can we track the meme's point of origin?
It's not that far off base, given that most people don't understand the difference between on-package, where a component is bundled with the CPU in the same package, and on-die, where a feature is actually part of the same silicon die as the CPU.
There's really no meme to trace here; just a popular misunderstanding of semiconductor terminology.
The RAM is on-chip but not on-die... but most people (even developers) don't know the difference between a chip, a die, and a core to begin with. It's a pretty minor error, although such errors are a good sign that someone either doesn't know what they're talking about or is careless with terminology.
Apple themselves are responsible for it; all their promotional material for the chip has shown unified memory right alongside the on-die modules.
Speaking for myself, this is the first time I've heard of multiple modules in the same chip. Hearing about it now, and knowing generally how dies[1] are manufactured and packaged into ICs, it's an obvious thing to do, but yeah, it never crossed my mind.
It's cool, hey? How AMD (and others) are now building their CPUs is a good example: there are often several CPU "chiplets" plus one I/O chiplet on the same package. [0] is the first good search result.
Given what they've done with the M1, I highly suspect Apple will do something like that for their higher end machines.
What makes you think HN is that different from anywhere else?
And besides, memes persist much more readily wherever tribalism comes into play, e.g. politics, Apple vs. [insert your OS here], any Tesla thread, etc.
But at the same time, 8GB of RAM isn't a lot, especially when multitasking.

I know there was a lot of talk about Apple's directly soldered RAM being super efficient. But if you've got 30 tabs open, music playing, and Photoshop running, no amount of optimisation is going to prevent swap usage.

Still, though: terabytes of usage must be a bug.