
Such high swap usage sounds like a bug.

But at the same time, 8GB of RAM isn't a lot, especially when multitasking.

I know there was a lot of talk about Apple's directly soldered RAM being super efficient.

But if you've got 30 tabs open, are playing music, and are using Photoshop, no amount of optimisation is going to prevent swap usage.

Still, though, terabytes of usage must be a bug.



Just yesterday people were telling us that Apple revolutionized memory and is only installing such low amounts of RAM because, hey, you've got that super-fast SSD right there - it's basically RAM by a different name.

Well, whoops. I guess there are some side effects.


>Apple revolutionized memory

Rules of software engineering, and engineering in general: you can't break Einstein. RAM is generally slow to begin with, but the disk is slower. There should be plenty of RAM to use as disk cache, not the other way around. (edit: I'd omitted the last sentence)


It's especially funny because the M1 Macs' SSDs seem to perform at around the same level as the current competition, from what I've seen.


Apparently the issue affects the 16 GB models as well. In that case, the SSD wear likely would not be related to the limited RAM in the 8 GB model.


Presumably people getting 16GB of RAM are also running more memory-intensive workloads than those not shelling out for the upgrade, so it could be a non-factor.


16GB RAM M1 user here; I haven't seen a KB of swap, since I don't do memory-intensive stuff. I just wanted to future-proof.


"8GB of ram isn't a lot."

You are not wrong in 2021, but this comment took me back to the late 1900s-early 2000s, when a few hundred MB of RAM was premium :). How far we've come!!


Yeah, although in this case it takes me back to ~10 years ago. I didn't think 8GB was a lot back then either, and we typically tried for at least 16.

It's pretty disappointing how little advancement has been made in RAM in recent years. It seems like everything should come with at least 16GB now, with 32GB readily available, but I guess no one is working on bringing down the cost of desktop/laptop memory.


In 2011 I bought a laptop with 16GB ram for $2000 (Lenovo W520).

2015, MBP13, 8GB ram I bought for $1900.

2018, MBP13, 16GB, $2200.

2020, MBP13, 32GB, $2500.

Pretty sad improvements for 9 years. But the form factor did improve...


Wow, those W520s were heavy. I got a few dumped on me at an old job - they were going to throw them away. I put a few together that I sold for $400 each in 2018; great returns. I didn't even think they would bring $200.


I mean, you're correct, in an off-color way. I'm typing this from a ThinkPad with 128GB of RAM that I bought for <$2k (total, machine + aftermarket RAM).

So it sounds like Apple has made very little progress in this arena.


I'm typing this comment on a 2011 MBA with 4GB of RAM.

Running Linux on it. Native apps are okay. Firefox is okay unless I overdo it with tabs. Forget about Electron apps.


RAM prices have come down significantly. I just helped a friend get 16GB of memory for ~$50. It wasn't super nice fancy RGB LED RAM, but it works.

You can still get by on 8GB if you're a lighter user. It's unpleasant for me, but my grandma literally only uses a browser so it works for her.


An Acer laptop from 2010 has 8GB (2 cores/4 threads, 3.2GHz) and is still in use, but it's just office applications. That's all (and no swap, of course).


For laptops, it's power, isn't it? Not to mention initializing so much RAM takes a long time.


Maybe, but now we've got phones with 12 GB. It doesn't seem like the difference can be particularly significant.


I remember having 8 memory slots for my Cyrix 40MHz 486 clone in 1994. I had 4MB of memory in four slots. I was thrilled when a friend gave me four 256KB memory SIMMs for an extra 1MB of RAM, which significantly sped up my computer :)


"late 1900"... you mean like 1994 where 8MB RAM was the dream?


oops haha yea I meant 1990s


Why would you need more than 128MB? What are you gonna do with it?

Hahahahahahahahahaha....


I had a 75 MB (yes, MB) drive in the first computer I bought. It was top of the line at the time. I remember telling my wife that I couldn’t imagine ever filling it up. Ha! Now I feel like a pauper if my phone has a 64GB drive ;-)


"640kB should be enough for anyone" - nobody ever, but misattributed to Bill Gates.

https://www.computerworld.com/article/2534312/the--640k--quo...


At the time it would have been plenty for pretty much all normal users, so I don’t see what’s wrong with the statement. It’s like telling a home user today that 32GB should be enough for just about anything a normal user would do with their home PC.

I wouldn’t advise anyone to get 128GB (or more) “just in case”. If they have a special use case, then sure; otherwise no.


Late 1900s - a few hundred MB of HDD would have been premium. I remember my father paying a lot to upgrade from 1MB to 4MB.


I think I forget sometimes because I'm working with this stuff daily now, but yeah the evolution of computing technology is just astounding in scale and pace.


> How far we've come!!

Patting yourself on the back for gratuitously wasting resources really, really grates on me.

Sure, I can buy a car that gets 10 miles per gallon for my weekly grocery shopping. I can even afford it; it won't break the bank.

But actually being proud of it? Like it's some sort of sign of social and technological progress?

Whew.


I've noticed kernel_task's disk usage is heavily skewed towards writes (e.g. 7GB read for 170GB written over a couple of hours). Assuming it is indeed swapping pages to disk and reads/writes are counted properly, either the dynamic pager or some application/system process is making extremely unfortunate life decisions. Lots of dirty pages are never read back, as if there is a huge cache of... something.
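
For anyone who wants to watch this themselves, here's a rough sketch using the third-party psutil library - note these are system-wide cumulative counters, so it won't isolate kernel_task specifically:

    import time
    import psutil  # third-party: pip install psutil

    # Sample the cumulative swap and disk counters a minute apart.
    swap0, disk0 = psutil.swap_memory(), psutil.disk_io_counters()
    time.sleep(60)
    swap1, disk1 = psutil.swap_memory(), psutil.disk_io_counters()

    print("swapped in :", swap1.sin - swap0.sin, "bytes")
    print("swapped out:", swap1.sout - swap0.sout, "bytes")
    print("disk writes:", disk1.write_bytes - disk0.write_bytes, "bytes")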


In general you will write out pages to swap before evicting them from memory. The goal is to never have to wait for a page to finish being flushed to disk in order to allocate a new one. This means you will sometimes access a page that has been written out to disk but is still available in memory. If you modify it, it may be written out again before ever being read back. So it isn't unreasonable for swap writes to be higher than swap reads.

However, those numbers seem extreme - probably some bad tuning, or just neglecting to account for the full cost of writing to disk.
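
Here's a toy model of that policy in Python - emphatically not macOS's actual pager, just the write-ahead idea:

    # Toy model: flush dirty pages in the background so that eviction
    # (and therefore allocation) never has to wait on a disk write.
    class Page:
        def __init__(self):
            self.dirty = True     # modified since last flush
            self.on_swap = False  # a copy already exists in swap

    def background_flush(resident, swap_writes):
        # Pre-clean: write dirty pages out while they stay resident.
        for page in resident:
            if page.dirty:
                swap_writes.append(page)  # counted as a swap *write*
                page.dirty = False
                page.on_swap = True

    def evict(resident):
        # Pages that are clean and already on swap are dropped with
        # zero I/O. If such a page is dirtied again it gets written
        # again; if it's never read back, the write was pure overhead
        # -- which is how swap writes can dwarf swap reads.
        return [p for p in resident if p.dirty or not p.on_swap]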


One thing: on any machine, client or server, that I've used in the past 20 years, swap is the first thing to disable.

If there is a need for swap, install more RAM.

> Terabytes of usage must be a bug.

Garbage-collection setups with relatively large heaps are extra horrid when it comes to memory usage patterns while running a full GC... and, correspondingly, swap.


Swap has value. E.g. it lets you have a tmpfs that's larger than your physical RAM. Or you can process memory dumps of a machine larger than yours. Or you can just freeze a process group and have it dumped to disk so other processes can use the RAM in the meantime. It can definitely help avoid the OOM killer.

Swap occupancy and swap activity are not the same. The former is fine, the latter should be kept small.


> Or you can process memory dumps of a machine larger than yours.

You should be able to do that in software anyway - instead of loading it into memory entirely, all it has to do is memory-map the dump file.
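
A minimal sketch of that in Python, with a hypothetical dump file name - the OS pages regions in and out on demand instead of holding the whole file in RAM:

    import mmap

    # "core.dump" is a made-up file name for illustration.
    with open("core.dump", "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            # Scan the whole dump without ever read()ing it wholesale;
            # only the pages actually touched are faulted into memory.
            offset = mm.find(b"\x7fELF")
            print("first ELF header at offset", offset)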

About the need to freeze a process (group): I don't quite see how that's useful on a server. On a desktop machine I have never run into a case where closing the application would not suffice. Is there an example?

Lastly - using swap pretty much means no disk cache.


> You should be able to do that in software anyway - instead of loading it into memory entirely, all it has to do is memory-map the dump file.

Should, perhaps. But in practice I have had analyzers gobble up more memory than available.

> About the need to freeze a process (group): I don't quite see how that's useful on a server. On a desktop machine I have never run into a case where closing the application would not suffice. Is there an example?

A long-running render job, preempted by a higher-priority one. Technically they're resumable, so we could stop one and resume it later from a snapshot file, but that isn't implemented in the current workers. So as long as the machine has enough RAM+swap, it's easier to freeze the current job and start the higher-priority one.
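
The freeze itself is just a stop signal - a minimal sketch with a hypothetical PID (SIGSTOP doesn't push pages to swap by itself; it leaves them idle so the pager can evict them under memory pressure):

    import os
    import signal

    render_pid = 12345  # hypothetical PID of the low-priority job

    os.kill(render_pid, signal.SIGSTOP)  # freeze: pages go idle, become evictable
    # ... run the high-priority job to completion here ...
    os.kill(render_pid, signal.SIGCONT)  # thaw: pages fault back in as needed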

> Lastly - using swap pretty much means no disk cache.

I don't know all the tunables of the swap subsystem well enough, but I have seen swap being used before running out of physical RAM. I assume some IO displaced inactive applications.


>analyzers gobble up more memory than available.

I can run 128GB if need be. But yeah, if the software is pitiful and poorly implemented, running on swap is a one-time option, I guess.

>so we could stop one and resume it later from a snapshot file, but that isn't implemented in the current workers.

Indeed, this seems like a poor implementation, lacking a 'save' function. I have not run into similar cases.


The most RAM you can get on these machines is 16GB.


Of course - that's the point. If you cannot get enough memory, you don't buy them. Even the old Haswell (2013) Acer laptop has 16GB; the Skylake Lenovo laptop I use for work has 32GB.


> 8GB of RAM isn't a lot.

Actually yes it is.


> 8GB of RAM isn't a lot.

True - my 3-year-old phone has 8GB of RAM. On a laptop, 8GB is just anemic.


You're not wrong, but I have to wonder how much phone RAM is just the new megapixels, i.e. bigger number = more impressive spec sheet.

iOS is obviously a lot stricter on background activity than Android but manages to work great on 3-6GB.

I can comfortably do development work on my laptop with 16GB, and until last year I was managing mostly okay on a 5-year-old machine with 8GB.

When you factor in the sort of stuff people actually do on a phone, surely 12-16GB is a massive waste? You can make the argument that it will become more useful as the phone ages, but by that point it will have probably stopped receiving software updates.


The value is in having multiple apps stored in memory. I think the average person multitasks more on their phone than on a PC.

Things like having a lot of tabs open, messaging applications, music streaming, etc.

It's not required. But instant switching between apps is a nice user experience.

In the same way a 120Hz monitor is a nicer experience but 60Hz is perfectly reasonable.

I don't think it's fair to compare iOS to Android. Android is quite a bit heavier.

I remember reading a quote from someone at Nvidia saying that hardware is much easier to change than software.

And at the end of the day, as consumers we should demand more for our money. Phones aren't getting cheaper.


Remember that it is RISC too, so memory usage would be much higher than on x86.


x86 instructions are a bit of a mess, yes, but on ARM you can end up with more of them.

The tricky bit here is actually the SIMD instructions, since they can be very long, and the compiler will often go absolutely bonkers on fairly short code.


ARM and x86 have a similar number of instructions. The tradeoff is that with x86 you end up with more compact code because the instructions are variable-length, but on ARM the actual decoding is easier, so you have more space for I$ (or anything else you want).
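
A rough illustration of the density difference, using the encodings I'd expect from a compiler at -O2 for int add_one(int x) { return x + 1; } - worth double-checking with objdump, since actual output varies:

    # Instruction sizes in bytes (assumed compiler output, see above).
    x86_64  = {"lea eax,[rdi+1]": 3, "ret": 1}  # variable-length encoding
    aarch64 = {"add w0,w0,#1": 4, "ret": 4}     # fixed 4-byte encoding

    print("x86-64 :", sum(x86_64.values()), "bytes")   # 4
    print("AArch64:", sum(aarch64.values()), "bytes")  # 8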


That's what I meant


Why?


At a very basic level: RISC instructions take up more space because they do less individually.

That said, very little of most RAM usage is the actual instructions running. It's mostly data.


We're no longer talking about "directly soldered RAM"; in the M1 Macs, the RAM is now part of the M1 chip itself. It's on the CPU die.


No, the RAM is separate chips/dies, albeit incorporated into the same module as the CPU. In pictures of the M1[1], the RAM is the plastic-covered chips next to the main heat spreader for the SoC.

[1] Such as the image on the Wikipedia article https://en.wikipedia.org/wiki/Apple_M1


It's completely baffling how a little meme like this, which is in no way accurate, survives and thrives not just at large but specifically on HN. Can we track the meme's point of origin?


It's not that far off-base, given that most people don't understand the difference between on-package, where a component is bundled with the CPU in the same package, and on-die, where a feature is actually part of the same litho process as the CPU.

There's really no meme to trace here; just a popular misunderstanding of semiconductor terminology.


The RAM is on-chip but not on-die... but most people (even developers) don't know the difference between a chip, a die, and a core to begin with. It's a pretty minor error, although such errors are a good sign that someone either doesn't know what they're talking about or is careless with terminology.


I’m not sure the RAM can be defined as on-chip, but it’s in the same package.

A chip can contain multiple dies.

Usually, chips were just discrete packages - basically a die that is packaged for integration onto a PCB.

A die is just the bare silicon that has an IC etched into it.

With interposers and modern multi-chip packages, things are a bit more complicated.

Since the RAM dies are packaged, they can themselves be defined as chips too, while the CPU die needs to be integrated onto the substrate first.

At this point it’s a question of whether the CPU+RAM combo can be defined as a chip on its own or is a hybrid/compound package; I would go with the latter.

If the memory and CPU dies/chips had been stacked, like, say, the Raspberry Pi’s, I would call it RAM-on-chip, though.


Why be like that?

Maybe it's a good sign someone is half-interested and open to learning more.


If it's fair to refer to the M1 as an SoC, then it's defensible to say the RAM is on the chip.


> Can we track the meme's point of origin?

Apple themselves are responsible for it; all their promotional material for the chip has included unified memory right alongside all the on-die modules:

https://www.apple.com/v/mac/m1/a/images/overview/chip_memory...


Speaking for myself, this is the first time I've heard of multiple modules in the same chip. Hearing about it now, and knowing generally how dies[1] are manufactured and packaged into ICs, it's an obvious thing to do, but yeah, it never crossed my mind.

[1] dice?


It's cool, hey? How AMD (and others) are now doing their CPUs is a good example: there are often several CPU "chiplets" and then one IO chiplet in the same package. [0] is the first good search result.

Given what they've done with the M1, I highly suspect Apple will do something like that for their higher-end machines.

[0] https://www.anandtech.com/show/16148/amd-ryzen-5000-and-zen-...


Congratulations, you're one of today's lucky 10,000 :)

https://xkcd.com/1053/

I'm mildly annoyed by the tendency of some people to be critical while they're correcting (teaching!) someone.


What makes you think HN is that different from anywhere else?

And besides, memes persist much more readily wherever tribalism comes into play, e.g. politics, Apple vs. [insert your OS here], any Tesla thread, etc.


No, the RAM is co-located in the M1's flip-chip package, but it's not physically on the same die.

In the photo below, you can see the inside of the BGA flip-chip showing the M1 SoC itself alongside two Hynix LPDDR4 devices:

https://www.eetimes.com/wp-content/uploads/Apple-M1-1.jpg


You've been downvoted for technical inaccuracy.

I'll upvote you until someone opens the chip package and replaces or upgrades the RAM.


Now you're just daring someone like Louis Rossmann to give it a go. :)


Ha! Good one ;)

I'd watch that.



