EDIT: Here's some more RE work on the matter. It includes some symbol-remapping information that was extracted from the prefix trie the backdoor used to hide strings. It looks like it tried to hide itself from RE/analysis, too.
The backdoor pulls this from the certificate received from a remote attacker and attempts to decrypt it with ChaCha20; if decryption succeeds, the result is passed to `system()`, which is essentially a simple wrapper that executes a line of shell script as whichever user the process is currently running as.
If I'm understanding things correctly, this is worse than a public key bypass (which I, and I think a number of others, presumed it might be) - a public key bypass would, in theory, only give you access as the user you're logging in with, and presumably hardened SSH configurations would disallow root access.
However, since this is an RCE in the context of the sshd process itself, sshd running as root means the payload itself runs as root.
Wild. This is about as bad as a widespread RCE can realistically get.
> However, since this is an RCE in the context of the sshd process itself, sshd running as root means the payload itself runs as root.
With the right sandboxing techniques, SELinux and mitigations could prevent the attacker from doing anything with root permissions. However, applying a sandbox to an SSH daemon effectively is very difficult.
You could refactor sshd so that most network payload processing is delegated to sandboxed sub-processes; an RCE there would have fewer capabilities to exploit directly. But I think you would have to assume an RCE can cause the sub-process to produce wrong answers. So if those answers are authorization decisions, you can transitively turn the wrong answers into RCE in the normal login or remote command execution context.
But, the normal login or remote command execution is at least audited. And it might have other enforcement of which accounts or programs are permitted. A configuration disallowing root could not be bypassed by the sub-process.
You could also decide to run all user logins/commands under some more confined SE-Linux process context. Then, the actual user sessions would be sandboxed compared to the real local root user. Of course, going too far with this may interfere with the desired use cases for SSH.
That just raises the hurdle for the attacker. The attacker in this case has full control to replace any function within ssh with their own version, and the master process of sshd will always need the ability to fork and still be root on the child process before dropping privileges. I don't see any way around that. They only needed to override one function this time, but if you raise the bar they would just override more functions and still succeed.
In highly safety-critical systems you have software (and hardware) diversity, where multiple pieces of software, developed independently, have to vote on the result. Maybe highly critical pieces of Linux like the login process should be designed the same way, so that two binaries without common dependencies would need to accept the login for the user to get privileges.
Exactly how to do it (especially transparently for the user), I have no idea though. Maybe sending ssh login requests to two different sshd implementations and if they don’t do the same things (same system calls), they are both killed.
Or some kind of two step login process where the first login only gives access to the sandbox of the second login process.
But in general I assume the Linux attack surface is too big to do software diversity for all of it.
Or better, just make an sshd without any dependencies: statically compile it, and get rid of libssl, libsystemd, and even libpam and libc's nsswitch. (I actually do this for some of my systems.)
> The attacker in this case has full control to replace any function within ssh with their own version
Not true. They have this ability only for binaries that are linked to liblzma. If sshd were to be decomposed into multiple processes, not all of them would (hopefully) depend on all the libraries that the original sshd depended on.
Well, sshd doesn't depend on liblzma in the first place, but Debian and RedHat thought it would be a good idea to tie it into libsystemd for logging purposes, and patched in support. It's still pretty bad to have systemd compromised, even if ssh weren't, though. Maybe the army of pitchforks should be marching on the systemd camp. It's definitely not OpenBSD's choice of architecture, here.
It wouldn't matter in this case, since the exploit could simply rewrite the function that calls out to the unprivileged process. If you already have malicious code in your privileged parent process there's no way to recover from that.
Tell us all, please, how the starting vector of this attack would affect a statically compiled dropbear binary, even with systemd's libsystemd pwnage? I am very curious about your reasoning.
The fact that the whole reason this library is even being pulled into the sshd daemon process is some stupid stuff like readiness notification - which is itself utterly broken in systemd, by design (and thus forever unfixable) - makes this even more tragic.
Don't put your head in the sand just because of the controversial nature of the topic. Systemd was VERY accommodating in this whole fiasco.
Saddest part of all this is that we know how to do better - at least since Bernstein; the OpenBSD and supervision-community (runit/s6) folks solved it. Yet somehow we see the same mistakes repeated again and again.
I.e., you fork and run a little helper to write - or directly write - a single byte(!) to notify the supervisor over a supervisor-provided fd. That even lets you privilege-separate your notifier or do all the cute SELinux magic you need.
But that would be too simple, I guess, so instead we link something like 10 completely unrelated libraries - liblzma being one of them - into sshd, one of the most crucial processes on the machine. To notify the supervisor that it's ready. Sounds about right, Linux distros (and very specific ones at that).
Sshd should be sacred: it needs nothing more than libc and some base crypto libs (I don't remember whether it still needs <any>ssl, even).
Another great spot to break sshd is PAM, which has no place being there either. Unfortunately it's a hard dependency on most Linux distros.
Maybe sshd should adopt the kernel's taint approach: as soon as any weird libraries (i.e., everything that isn't libc or the crypto libs) are detected in the sshd process, it should consider itself tainted. Maybe even seppuku itself.
The exploit would probably have been doable somehow without systemd, but it would have been much, much harder.
Don't try to obfuscate that very fact from the discussion.
The sd-notify protocol is literally "Read socket address from environment variable, write a value to that socket". There's no need to link in libsystemd to achieve this. It's unreasonable to blame systemd for projects that choose to do so. And, in fact, upstream systemd has already changed the behaviour of libsystemd so it only dlopen()s dependencies if the consumer actually calls the relevant entry points - which would render this attack irrelevant.
> Another great spot to break sshd is PAM, which has no place being there either. Unfortunately it's a hard dependency on most Linux distros.
There are many things to hate about PAM (it should clearly be a system daemon with all of the modules running out of process), but there's literally no universe where you get to claim that sshd should have nothing to do with PAM - unless you want to plug every single possible authentication mechanism into sshd upstream you're going to end up with something functionally identical.
That's an easy thing to say after the fact indeed but yes. In fact after such a disastrous backdoor I wouldn't be surprised if OpenSSH moved all code calling external libraries to unprivileged processes to make sure such an attack can never have such a dramatic effect (an auth bypass would still likely be possible, but that's still way better than a root RCE…).
At this point “All libraries could be malicious” is a threat model that must be considered for something as security critical as OpenSSH.
I don't think that's a threat model that OpenSSH should waste too much time on. Ultimately this is malicious code in the build machine compiling a critical system library. That's not reasonable to defend against.
Keep in mind that upstream didn't even link to liblzma. Debian patched it to do so. OpenSSH should defend against that too?
Any one of us, if we sat on the OpenSSH team, would flip the middle finger. What code is the project supposed to write when nothing in mainline dynamically loaded liblzma? It was brought in by a patch they have no realistic control over.
This is a Linux problem, and the problem is systemd, which is who brought the lib into memory and init'd it.
I think the criticisms of systemd are valid but also tangential. I think Poettering himself is on one of the HN threads saying they didn't need to link to his library to accomplish what they sought to do. Lzma is also linked into a bunch of other critical stuff, including but not limited to distro package managers and the kernel itself, so if they didn't have sshd to compromise, they could have chosen another target.
So no - as Poettering himself acknowledged, sshd would not have been hit by this bug were it not for this systemd integration.
I really don't care about "Oh, someone could have written another compromise!". What allowed for this compromise was systemd's inability to reliably do its job as an init system, necessitating a patch.
And Redhat, Fedora, Debian, Ubuntu, and endless other distros took this route because something was required, and here we are. Something that would not be required if systemd could actually perform its job as an init system without endless workarounds.
Also see my other reply in this thread, re Redhat's patch.
I just went and read https://bugzilla.redhat.com/show_bug.cgi?id=1381997 and it actually seems to me that the sshd behavior is wrong here. I agree with the s6 school of thought, i.e. that PID files are an abomination and that there should always be a chain of supervision. systemd is capable of doing that just fine. The described sshd behavior (re-execing in the existing daemon and then forking) can only work on a dumb init system that doesn't track child processes. PID files are always a race condition and should never be part of any service detection.
That said, there are dozens of ways to fix this and it really seems like RedHat chose the worst one. They could have patched sshd in the other various ways listed in that ticket, or even just patch it to exit on SIGHUP and let systemd re-launch it.
I'm not the type to go out of my way to defend systemd and their design choices. I'm just saying the severity of this scenario of a tainted library transcends some of the legit design criticisms. If you can trojan liblzma you can probably do some serious damage without systemd or sshd.
Of course you can trojan other ways, but that can only be said, in this thread, in defense of systemd.
After all, what you're saying is and has always been the case! It's like saying "Well, Ford had a design flaw in this Pinto, and sure 20 people died, but... like, cars have design flaws from time to time, so an accident like this would've happened eventually anyhow! Oh well!"
It doesn't jibe in this context.
Directly speaking to this point, patched ssh was chosen for a reason. It was the lowest hanging fruit, with the greatest reward. Your speculation about other targets isn't unwarranted, but at the same time, entirely unvalidated.
Why avoid this? Well, it adds more systemd-specific bits and a new build dependency to something that always worked well under other inits, without any problems, for years.
They chose the worst solution to a problem that had multiple better solutions, because a pre-existing patch was the easiest path forward. That’s exactly what I’m talking about.
It is possible to prevent libraries from patching functions in other libraries; make those VM regions unwritable, don't let anyone make them writable, and adopt PAC or similar hardware protection so the kernel can't overwrite them either.
That's already done, but in this case the attack happened in a glibc ifunc, and those run before the patching protection (full RELRO) is enabled - since an ifunc resolver has to write its result into the GOT/PLT.
Sounds like libraries should only get to patch themselves.
(Some difficulty with this one though. For instance you probably have to ban running arbitrary code at load time, but you should do this anyway because it will stop people from writing C++.)
If you're running in the binary you can call mprotect(2), and even if that is blocked you can cause all kinds of mischief. The original motivation for rings of protection on i286 was so that libraries could run in a different ring from the binary (usually library in ring 2 and program in ring 3), using a call gate (a very controlled type of call) to dispatch calls from the binary to the library, which stops the binary from modifying the library and IIRC libraries from touching each other. But x86-64 got rid of the middle rings.
> If you're running in the binary you can call mprotect(2)
Darwin doesn't let you make library regions writable after dyld is finished with them. (Especially iOS where codesigning also prevents almost all other ways to get around this.)
Something like OpenBSD pledge() can also revoke access to it in general.
> But x86-64 got rid of the middle rings.
x86 is a particularly insecure architecture but there's no need for things to be that way. That's why I mentioned PAC, which prevents other processes (including the kernel) from forging pointers even if they can write to another process's memory.
Because it's a general purpose computer. Duh. The aim is to be able to perform arbitrary computations - and overwriting crypto functions in sshd is a valid computation, as far as the machine is concerned.
I don't think you should connect your general purpose computer to the internet then. Or keep any valuable data on it. Otherwise other people are going to get to perform computations on it.
You can definitely prevent a lot of file/executable accesses via SELinux by running sshd in the default sshd_t or even customizing your own sshd domain and preventing sshd from being able to run binaries in its own domain without a transition. What you cannot prevent though is certain things that sshd _requires_ to function like certain capabilities and networking access.
By default sshd has access to all files in /home/$user/.ssh/, but that could be prevented by giving private keys a new unique file context, etc.
SELinux would not prevent all attacks, but it can mitigate quite a few as part of a larger security posture
libselinux is the userspace tooling for SELinux. It is irrelevant to this specific discussion: the backdoor does not target SELinux in any way, and sshd does not have the capabilities required to make use of the libselinux tooling anyway.
libselinux is just an unwitting vector to link liblzma with openssh
Even though sshd must run as root (in the usual case), it doesn't need unfettered access to kernel memory, most of the filesystem, most other processes, etc. However, you could only really sandbox sshd-as-root. In order for sshd to do its job, it does need to be able to masquerade as arbitrary non-root users. That's still pretty bad but generally not "undetectably alter the operating system or firmware" bad.
>Even though sshd must run as root (in the usual case), it doesn't need unfettered access to kernel memory, most of the filesystem, most other processes, etc
This is sort of overlooking the problem. While true, the processes spawned by sshd do need to be able to do all these things and so even if you did sandbox it, preserving functionality would all but guarantee an escape is trivial (...just spawn bash?).
SELinux context is passed down to child processes. If sshd is running as confined root (system_u:system_r:sshd_t or similar), then the bash spawned by RCE will be too. Even if sshd is allowed to masquerade as an unconfined non-root user, that user will (regardless of SELinux) be unable to read or write /dev/kmem, ignore standard file permissions, etc.
That's my point though--users expect to be able to do those things over ssh. Sandboxing sshd is hard because its child processes are expected to be able to do anything that an admin sitting at the console could do, up to and including reading/writing kernel memory.
I'm assuming SSH root login is disabled and sudo requires separate authentication to elevate, but yeah, if there's a way to elevate yourself to unconfined root trivially after logging in, this doesn't buy you anything.
Now, sandboxing sudo (in the general case) with SELinux probably isn't possible.
This does not matter either. The attack came in by being loaded via liblzma. It installed a hook and then sat around waiting for sshd to load so it could learn the symbols, then proceeded to swap in the jumps.
sshd is a sitting duck. Bifurcating sshd into a multimodule scheme won't work because some part of it still has to be loaded by systemd.
This is a web-of-trust issue. In the .NET world, where reflection attacks happen to commercial software that features dynamically loaded assemblies, the only solution they could come up with was to sign all the things, then box up anything that doesn't have a signing mechanism and sign that too - even signing plain old zip files.
Some day we will all have to have keys. To keep the anonymous people from leaving, they can get an anon key, but anons with keys will never get on the chain where the big distros would trust their commits - not until someone who forked over their passport and photos got a trustable key to sign off on those commits, so that the distro builders can then greenlight pulling them in.
Then, I guess to keep the anons hopeful that they are still in the SDLC somewhere, their commits can go into the completely untrusted-unstable-crazytown release that no institution in its right mind would ever put into production.
I’ll admit to not being an expert in SELinux, but it seems like an impossibly leaky proposition. Root can modify systemd startup files, so just do that in a malicious way and reboot the system; that context won’t be propagated. And if you somehow prohibit root from doing that by SELinux policy, then you end up with a system that can’t actually be administered.
[edit: sibling sweetjuly said it better than I could. I doubt that this much more than a fig leaf on any real world system given what sshd is required to have to do.]
SELinux domains are decoupled from Linux users. If sshd does not have SELinux permissions to edit those files, it will simply be denied - even if sshd is run as root.
Which amounts to the un-administerable system I mentioned. If it’s not possible to modify systemd config files using ssh, what happens when you need to edit them?
Really what they're proposing here is a non-modifiable system, where the root is read-only and no user can modify anything important.
Which is nice and all, but that implies a "parent" system that creates and deploys those systems. Which people likely want remote access to.. Probably by sshd...
You can limit the exposure of the system from RCE in sshd with SELinux without preventing legitimate users from administering the system.
Granted that SELinux is overly complicated and has some questionable design decisions from a usability standpoint but it's not as limited or inflexible as many seem to think.
It really can stop a system service running as "root" from doing things a real administrator doesn't want it to do. You can couple it with other mechanisms to achieve defense in depth. While any system is only as strong as its weakest link, you can use SELinux to harden sshd so even with exploits in the wild it's not the weakest link vis-a-vis an attacker getting full unconfined root access. This may or may not be worth your time depending on what that box is doing and how connected to the rest of your infrastructure it is.
There seems to be a pervasive misunderstanding of the difference between standard UNIX/Linux discretionary access control and SELinux-style mandatory access control. The latter cannot be fooled into acting as a confused deputy anywhere near as easily as the former. The quality of the SELinux policy on a particular system plays a big part in how effective it is in practice but a good policy will be far harder to circumvent than anything the conventional permissions model is capable of.
Moreover, while immutability is obviously an even stronger level of protection, it is not necessary to make the system immutable to accomplish what I've described here while still allowing legitimately and separately authenticated users to fully administer the system.
Most people turn SELinux off anyway, so they have no clue how it operates.
DACs (discretionary, unix perms) are DACs and MACs (mandatory, SELinux) are MACs. They are mandatory - it's in their name.
Think of SELinux as a completely orthogonal access control system that can overturn any DAC decision - which it in fact does. The SELinux policy language is much more featureful than the DAC model; for instance, it can express domain transitions.
Nobody here has inspected the sshd_t policy, but I believe exec transitions for arbitrary binaries should be forbidden (I hope).
That should in essence thwart arbitrary exec from remote key payload.
If actual shellcode would be sent though (e.g. doing filesystem open/write/close), that is a little bit different.
It's possible to spawn sshd as an unprivileged or partially-capabilitized process. Such a sandbox isn't the default deployment, but it's done often enough, and it would work as designed to prevent privilege elevation above the sshd process.
SELinux does not rely on the usual UID/GID to determine what a process can do. System services, even when running as "root", are running as confined users in SELinux. Confined root cannot do anything which SELinux policy does not allow it to do. This means you can let sshd create new sessions for non-root users while still blocking it from doing the other things which unconfined root would be able to do. This is still a lot of power but it's not the godlike access which a person logged in as (unconfined) root has.
Doesn't matter. A malicious sshd able to run commands as arbitrary users can just run malicious commands as those users.
We'd need something more like a cryptographically attested setreuid() and execve() combination that would run only commands signed with the private key of the intended user. You'd want to use a shared clock or something to protect against replay attacks
Yes, this won't directly protect against an attacker whose goal is to create a botnet, mine some crypto on your dime, etc. However, it will protect against corruption of the O/S itself and, in tandem with other controls, can limit the abilities an attacker has, and ensure things like auditing are still enforced (which can be tied to monitoring, and also used for forensics).
Whether it's worth it or not depends on circumstances. In many cloud environments, nuking the VM instance and starting over is probably easier than fiddling with SELinux.
even easier is to STOP HOSTING SSHD ON IPV4 ON CLEARNET
at minimum, ipv6 only if you absolutely must do it (it absolutely cuts the scans way down)
better is to only host it on vpn
even better is to only activate it with a portknocker, over vpn
even better-better is to set up a private ipv6 peer-to-peer cloud and socat/relay to the private ipv6 network (yggdrasil comes to mind, but there's other solutions to darknet)
your sshd you need for server maintenance/scp/git/rsync should never be hosted on ipv4 clearnet, where a Chinese bot will find it 3 secs after the route is established after boot.
How about making ssh as secure as (or more secure than) the VPN you'd put it behind? Considering the amount of vulnerabilities in corporate VPNs, I'd even put my money on OpenSSH today.
It's not like this is SSH's fault anyway, a supply chain attack could just as well backdoor some Fortinet appliance.
Defence in depth. Which of your layers is "more secure" isn't important if none are "perfectly secure", so having an extra (independent) layer such as a VPN is a very good idea.
You have to decide when to stop stacking, otherwise you'd end up gating access behind multiple VPNs (and actually increasing your susceptibility to hypothetical supply-chain attacks that directly include a RAT).
I'd stop at SSH, since I don't see a conceptual difference to how a VPN handles security (unless you also need to internally expose other ports).
OpenSSH has a much smaller attack surface, is thoroughly vetted by the best brains on the planet, and is privilege separated and sandboxed. What VPN software comes even close to that?
The only software remotely in the same league is a stripped down Wireguard. There is a reason the attacker decided to attack liblzma instead of OpenSSH.
I imagine it stops some non-targeted attempts that simply probe the entire v4 range, which is not feasible with v6. But yeah, not really buying you much, especially if there is any publicly listed service on that IP.
If you have password authentication disabled then it shouldn't matter how many thousands of times a day people are scanning and probing sshd. Port knockers, fail2ban, and things of that nature are just security by obscurity that don't materially increase your security posture. If sshd is written correctly and securely it doesn't matter if people are trying to probe your system, if it's not written correctly and securely you're SOL no matter what.
Plausibly by having the set-user-ID capability but not the others an attacker might need.
But in the more common case it just doesn't: you have an sshd running on a dedicated port for the sole purpose of running some service or another under a specific sandboxed UID. That's basically the github business model, for example.
I need full filesystem access, VIM, ls, cd, grep, awk, df, du at the very least. Sometimes perl, find, ncdu, and other utilities are necessary as well. Are you suggesting that each tool have its own SSH process wrapping it?
Maybe write a shell to coordinate between them? It should support piping and output redirection, please.
Sigh. I'm not saying there's a sandboxed sshd setup that has equivalent functionality to the default one in your distro. I'm not even saying that there's one appropriate for your app.
I'm saying, as a response to the point above, that sandboxing sshd is absolutely a valid defense-in-depth technique for privilege isolation, that it would work against attacks like this one to prevent whole-system exploitation, and that it's very commonly deployed in practice (c.f. running a git/ssh server a-la github).
Git’s use of the ssh protocol as a transport is a niche use case that ignores the actual problem. No one is seriously arguing that you can’t sandbox that constrained scenario but it’s not really relevant since it’s not the main purpose of the secure shell daemon.
It's part of a test program used for feature detection (of a sandboxing functionality), and causes a syntax error. That in turn causes the test program to fail to compile, which makes the configure script assume that the sandboxing function is unavailable, and disables support for it.
You are looking at a CMake file, not C. The C code is in a string being passed to a function called `check_c_source_compiles()`, and this dot makes that code fail to compile when it should have compiled -- which sets a boolean incorrectly, which presumably makes the build do something it should not do.
This is something that should have unit/integration tests inside the tooling itself, yeah. If your assertion is that X function is called / in the environment X then the function should return Y then that should be a test especially when it’s load-bearing for security.
And tooling is no exception either. You should have tests that your tooling does the things it says on the tin and that things happen when flags are set and things don’t happen when they’re not set, and that the tooling sets the flags in the way you expect.
These aren’t even controversial statements in the JVM world etc. C tooling is just largely still living in the 70s, apart from abortive attempts to build the Jenga tower even taller like autotools/autoconf/cmake/etc. (incomprehensible; may god have mercy on your build). At least hand-written makefiles are comprehensible, tbh.
As far as I can tell, the check is to see if a certain program compiles, and if so, disable something. The dot makes it so that it always fails to compile and thus always disables that something.
> if a certain program compiles, and if so, disable something.
Tiny correction: [...] enable something.
The idea is: If that certain program does not compile it is because something is not available on the system and therefore needs to be disabled.
That dot undermines that logic. The program fails because of a syntax error caused by the dot and not because something is missing.
It is easy to overlook because that dot is tiny and there are many such tests.
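To make the mechanism concrete, here is a simplified reconstruction of what such a sabotaged check looks like - this is not the exact xz-utils diff, just an illustration of the pattern: a lone "." inside the feature-test source guarantees a compile failure, so the build always concludes the feature (here, Landlock) is unavailable.

```cmake
# Simplified illustration of a sabotaged feature test (not the real diff).
# The stray "." makes the test program fail to compile, so the build
# silently concludes Landlock is unavailable and disables the sandbox.
check_c_source_compiles("
#include <linux/landlock.h>
#include <sys/prctl.h>
.
int main(void)
{
    prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
    return 0;
}
" HAVE_LINUX_LANDLOCK)
if(HAVE_LINUX_LANDLOCK)
    add_compile_definitions(HAVE_LINUX_LANDLOCK)
endif()
```

The failure mode is silent by design: to `check_c_source_compiles()`, "syntax error" and "header genuinely missing" are indistinguishable.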
I had a similar problem with unit testing a library. Expected failures need to be tested as well. As an example, imagine writing a matrix inversion library. You need to verify that you get something like a division-by-zero error if you invert the zero matrix. You write a unit test for that, and by mistake you insert a syntax error. Then you run the unit test and it fails as expected - but not in the correct way.
It's subtle. It fails as expected, but for unexpected, wrong causes.
The desire for "does this compile on this platform" checks comes from an era where there was pretty much no way to check the error. Somebody runs it on HP-UX with the "HP-UX Ansi C Compiler" they licensed from HP and the error it spits out isn't going to look like anything you recognize.
That one's a separate attack vector, which seemingly went unused in the sshd attack. It only disables sandboxing of the xzdec(1) utility, which plays no part in the sshd attack.
I guess xzdec was supposed to sandbox itself where possible so they disabled the sandbox feature check in the build system so that future payload exploits passed to xzdec wouldn’t have to escape the sandbox in order to do anything useful?
Yes, but don't forget that there are different kinds of sandboxes. SELinux never needs the cooperation of any program running on the system in order to correctly sandbox things. No change to Xz could ever make SELinux less effective.
But don't forget that xz is also used as part of dpkg for unpacking packages.
The whole purpose of dpkg is to update critical system packages. Any SELinux policy that protects from a backdoored dpkg/xz installing a rootkit during the next kernel security update; will also prevent installing real kernel security updates.
The particular way of attack in this OpenSSH backdoor can maybe be prevented; but we've got to realize that the attacker already had full root permissions and there's no way of protecting from that.
SELinux policies are much more subtle than that. You don’t restrict what xz or liblzma can do, you restrict what the whole process can do. That process is either sshd or dpkg, and you can give them completely different access to the system, so that if dpkg tries to launch an interactive shell it fails, while sshd fails if it tries to overwrite a system file such as /bin/login or whatever. Neither would ordinarily do that, but the payload delivered via the back door might attempt it and wouldn’t succeed. And you would get a report stating what had happened, so if you’re paying attention the back door starts to become obvious.
Also I think dpkg switched to Zstd, didn’t it? Or am I misremembering?
But you’re not wrong; ultimately both sshd and dpkg are critical infrastructure. SELinux can prevent them from doing completely wrong things, but obviously it wouldn’t be useful for it to prevent them from doing their jobs. And those jobs are security critical already. SELinux is not a panacea, merely defense in depth.
But that's a check for a Linux feature. So the more interesting question would be, what in the Linux world might be building xz-utils with cmake, I guess using ExternalProject_Add or something similar.
sshd is probably the softest target on most systems. It is generally expected (and set up by default) that people can use it to gain a root shell with unrestricted access.
sshd.service will typically score 9.6/10 from "systemd-analyze security sshd.service", where 10 is the worst score. When systemd starts a service, it can set up a (usually) restricted namespace and apply seccomp filters before the process is executed; this is done by systemd itself via unit directives (systemd-nspawn is the separate container tool). seccomp filters are inherited by child processes, which can then only further restrict privileges, never expand the inherited ones. openssh-portable on Linux does apply seccomp filters to its child processes, but that is useless in this attack scenario: sshd itself is backdoored via the xz library, and the backdoored library can simply disable or change those filters before they are ever applied.
sshd is particularly challenging to sandbox because if you were to restrict its namespace and apply strict seccomp filters from the service manager, a user gaining a root shell via sshd (or wanting to sudo/su to root) might then be prevented from remotely debugging applications, accessing certain filesystems, interacting with network interfaces, etc., depending on the level of sandboxing applied. This choice is highly user-dependent, and there are probably only limited sane defaults for someone who has already decided they want to run sshd. For example, sane defaults could include dedicated services with sandboxing tailored just for read-only sftp filesystem access, a separate service for read/write sftp access, one for tunneling, one for unprivileged remote shell access, etc.
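For reference, the kind of hardening that score measures comes from unit directives like the following. The directive names are real systemd options, but this selection is only a sketch, not a recommended sshd policy; some options (NoNewPrivileges in particular) conflict with sshd's job of setting up arbitrary user sessions, which is exactly the trade-off described above:

```ini
# Hypothetical drop-in: /etc/systemd/system/sshd.service.d/harden.conf
[Service]
ProtectKernelModules=yes
ProtectKernelTunables=yes
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
SystemCallArchitectures=native
SystemCallFilter=@system-service
# NoNewPrivileges=yes would improve the score further, but it breaks
# setuid/PAM and therefore normal logins.
```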
Doesn't matter. This is a supply chain attack, not a vulnerability arising from a bug. All sandboxing the certificate parsing code would have done is make the author of the backdoor do a little bit more work to hijack the necessarily un-sandboxed supervisor process.
Applying the usual exploit mitigations to supply chain attacks won't do much good.
What will? Kill distribution tarballs. Make every binary bit for bit reproducible from a known git hash. Minimize dependencies. Run whole programs with minimal privileges.
Oh, and finally support SHA2 in git to forever forestall some kind of preimage attack against a git commit hash.
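The "kill distribution tarballs" point can be made concrete: if a release tarball is generated straight from the tagged git tree, anyone can regenerate it and compare bytes, so a tarball-only backdoor (as in xz, where the malicious build-to-host.m4 existed only in the tarball) has nowhere to hide. A toy demonstration, with a made-up repository name and contents:

```shell
# Toy demo: a tarball produced by `git archive` from a commit is
# reproducible, so downstream can verify it byte-for-byte.
set -e
work=$(mktemp -d)
cd "$work"
git init -q demo
cd demo
echo 'configure.ac contents' > configure.ac
git add configure.ac
git -c user.email=dev@example.com -c user.name=Dev commit -qm 'release 1.0'
# "Upstream" publishes this tarball:
git archive --format=tar --prefix=demo-1.0/ HEAD > ../demo-1.0.tar
# A distro (or anyone) regenerates it from the same commit and compares:
theirs=$(sha256sum ../demo-1.0.tar | cut -d' ' -f1)
ours=$(git archive --format=tar --prefix=demo-1.0/ HEAD | sha256sum | cut -d' ' -f1)
[ "$theirs" = "$ours" ] && echo "tarball matches git tree"
```

Anything in the tarball that is not in the tree (generated configure scripts included) would change the hash and demand an explanation.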
... and stop adding random patches to upstream software, especially when we're talking about security-critical stuff that must absolutely not be released without a very thorough security review.
Right, though if I'm understanding correctly, this is targeting openssl, not just sshd. So there's a larger set of circumstances where this could have been exploited. I'm not sure if it's yet been confirmed that this is confined only to sshd.
The exploit, as currently found, seems to target OpenSSH specifically. It's possible that everything involving xz has been compromised, but I haven't read any reports that there is a path to malware execution outside of OpenSSH.
> Initially starting sshd outside of systemd did not show the slowdown, despite the backdoor briefly getting invoked. This appears to be part of some countermeasures to make analysis harder.
> a) TERM environment variable is not set
> b) argv[0] needs to be /usr/sbin/sshd
> c) LD_DEBUG, LD_PROFILE are not set
> d) LANG needs to be set
> e) Some debugging environments, like rr, appear to be detected. Plain gdb appears to be detected in some situations, but not others
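For illustration, the activation conditions quoted above can be paraphrased as a single predicate. The real checks live in the injected object code, not in shell; this merely restates the reported logic, and the debugger detection in (e) is omitted:

```shell
# Paraphrase of the backdoor's reported environment gating (a-d).
should_activate() {
    [ -z "${TERM+set}" ]       || return 1  # a) TERM must be unset
    [ "$1" = /usr/sbin/sshd ]  || return 1  # b) argv[0] must be sshd's path
    [ -z "${LD_DEBUG+set}" ]   || return 1  # c) no dynamic-linker debugging
    [ -z "${LD_PROFILE+set}" ] || return 1  #    ...or profiling
    [ -n "${LANG+set}" ]       || return 1  # d) LANG must be set
    return 0
}
unset TERM LD_DEBUG LD_PROFILE
LANG=C
should_activate /usr/sbin/sshd && echo "would activate"
should_activate /bin/true      || echo "skips non-sshd invocation"
```

The TERM check explains the observed behavior: sshd started interactively for analysis has TERM set, so the backdoor stays dormant, while the same binary started by systemd does not.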
Would that help? sshd, by design, opens shells. The backdoor payload was basically "open a shell", that is, the very thing sshd has to do.
The pledge/unveil system is pretty great, but my understanding is that it doesn't do anything the Linux equivalents (seccomp, I think) cannot do. It is just a simplified/saner interface to the same problem of "how can a program tell the kernel what its scope is?" The main advantage pledge/unveil bring to the table is that they are easy to use and cannot be turned off; optional security isn't.
By design, OpenSSH will start an interactive shell with either the capabilities to escalate to root or direct root permissions. I don't think pledge/unveil will work any better than seccomp already does.
I do like the pledge/unveil API, but I don't think it would've made much of a difference.
There's a reasonably high chance this was to target a specific machine, or perhaps a specific organization's set of machines. After that it could probably be sold off once whatever they were using it for was finished.
I doubt we'll ever know the intention unless the ABC's throw us a bone and tell us the results of their investigation (assuming they're not the ones behind it).
Classic example of this being Stuxnet, a worm that exploited four(!) different 0-days and infected hundreds of thousands of computers with the ultimate goal of destroying centrifuges associated with Iran’s nuclear program.
Government organizations have many different teams. One might develop vulnerabilities while another runs operations with oversight for approving use of exploits and picking targets. Think bureaucracy with different project teams and some multi-layered management coordinating strategy at some level.
There aren’t a billion computers running ssh servers, and the ones that are should not be exposed to the general internet. This is a stark reminder of why defense in depth matters.
https://gist.github.com/smx-smx/a6112d54777845d389bd7126d6e9...
Full list of decoded strings here:
https://gist.github.com/q3k/af3d93b6a1f399de28fe194add452d01
--
For someone unfamiliar with openssl's internals (like me): The N value, I presume, is pulled from the `n` field of `rsa_st`:
https://github.com/openssl/openssl/blob/56e63f570bd5a479439b...
Which is a `BIGNUM`:
https://github.com/openssl/openssl/blob/56e63f570bd5a479439b...
Which appears to be a variable length type.
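What the `system()` hand-off means in practice fits in a few lines of shell: the payload string is run by `/bin/sh -c` and simply inherits the invoking process's uid. The payload and the uid comparison here are stand-ins for demonstration, not the real backdoor protocol:

```shell
# system(cmd) in C is roughly fork + exec of `/bin/sh -c cmd`, with no
# privilege change: the payload runs as whatever uid the caller has.
payload='id -u'               # stand-in for the attacker's decrypted payload
got=$(/bin/sh -c "$payload")  # what system() effectively does with it
want=$(id -u)                 # the invoking process's uid; for sshd, root
[ "$got" = "$want" ] && echo "payload ran as uid $got"
```

Which is why the distinction above matters: an auth bypass yields a session subject to sshd's login policy, while this runs before any of that, in the daemon's own security context.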