New fuzzing tool finds USB bugs in Linux, Windows, macOS, and FreeBSD (zdnet.com)
201 points by doener on May 28, 2020 | 164 comments


I don't think many people realize how complicated USB is. The USB 2.0 spec is like 700 pages long. The USB 3.0 spec is 500 pages long and the intro paragraph is like "readers are expected to be familiar with the USB 2.0 spec before proceeding".

I work in embedded systems and I absolutely despair when we need to fix a USB issue. USB is without a doubt the deepest rabbit hole I ever went down (and I never did find the bottom, because the issue ended up being due to Synopsys's USB core and we didn't have access to the Verilog - and apparently neither did they! They weren't using VCS back in 2016 apparently).


Too soon! I’m currently debugging an issue where a USB 3.2 Gen 2 scientific camera is unable to negotiate a connection with Linux. Under closer inspection it seems the USB 3.2 Gen 2 port on this relatively modern Intel NUC is being identified by the kernel as being USB 2.0. No such issues under Windows; but we’re not deploying that to the field.

So I go a bit deeper: the chip is actually a Thunderbolt chip (aka USB 4.0), which is able to emulate all of the USB specs. For reasons unknown, the two controller chips on this NUC present as four controller chips; each chip presents as one USB 2 root hub and one USB 3 root hub. One of the chips is only available via an internal header on the board. All of the externally available USB ports (3x USB 3 ports, 1x USB 2 port, 1x Thunderbolt port) result in connectivity to the USB 2 root hub.

Every fiber in my being wants to work out wtf is happening here; but I’m resisting because fortunately we have a vendor that can replace this NUC with one from 2016 instead of this one from 2018.

I did, of course, peek at the spec and experienced the bone-shuddering realization of how complex this whole system is. No wonder Windows BSODd when Gates first demoed USB on stage ...


Tell me about it. I thought Bluetooth was complicated, until I finally caved and got a wireless headset. Being Linux-only, I found out that controlling simple things like the LEDs on the headset required really weird manual commands. I eventually gave up and found a package via yay that did the hard work for me, but I was awakened to the weirdness and complexity of the USB spec.


>The USB 2.0 spec is like 700 pages long. The USB 3.0 spec is 500 pages long and the intro paragraph is like "readers are expected to be familiar with the USB 2.0 spec before proceeding".

JFYI, the UEFI specification was some 2000+ pages last time I checked.


USB has nothing on bluetooth.


If you think USB is complicated, try FireWire. So many modes!


Maybe I'm just weird but I find this stuff kinda fascinating


I think this needs some important context, because otherwise it may be read as "Linux is so insecure compared to the other OSes".

From the screenshot it seems all of the bugs were found with KASAN and most of them were overread bugs. Likely for the other OSes they were only looking for crashes and probably often missed these classes of bugs that KASAN can uncover.

This essentially means they found more bugs in Linux because Linux has better tools to uncover subtle memory safety bugs.


>This essentially means they found more bugs in Linux because Linux has better tools to uncover subtle memory safety bugs.

Driver Verifier has been part of Windows for 20+ years and detects memory overflows, underflows, use-after-free, and a whole bunch of other bad driver behavior specific to the Windows driver model.

I don't see any evidence in the paper that the authors knew about it, so it's possible more bugs can be found in Windows that way using their fuzzing tools.


Instrumentation is huge too...

I had a bug with the webcam on my Razer Blade which I was able to fix on Linux because it was open-source. It was a pretty simple fix and I submitted it as a patch to Linux.

I had a bug with USB devices on my Mac and I had to go back and forth with Apple support for 2 weeks and ended up returning the device.


KASAN is a kernel address sanitizer. Is your claim that the other OSes, as tested, do not do address sanitization?


I think their claim is that the researchers can run a Linux install with KASAN and see the results, but they are unable to use an equivalent of that on macOS or Windows, as Apple and Microsoft do not allow an end-user to perform that kind of instrumentation on their own device.


You and the person I replied to above are essentially correct:

>Fuzzing drivers on [FreeBSD, MacOS, and Windows] is more challenging than the Linux kernel due to the lack of support infrastructure. These OSes support neither KASAN, other sanitizers, nor coverage-based collection of executions. The lack of a memory-based sanitizer means our fuzzer only discovers bugs that trigger exceptions, and misses all bugs that silently corrupt memory. Because we cannot collect coverage information, our fuzzer cannot detect seeds that trigger new inputs.

(https://nebelwelt.net/files/20SEC3.pdf)

The researchers employed a partial workaround for the problem, but it is pretty obvious to me that the partial workaround does not level the playing field:

>To alleviate the second concern, the lack of coverage-guided optimization, we experiment with cross-pollination. To seed our dumb fuzzer, we reuse the inputs generated during our Linux kernel fuzzing campaign.


They are only partially correct about FreeBSD: in FreeBSD 12 there is no coverage sanitizer, as it was added in 13 and never merged back to 12.

Support for KASAN, and the other sanitizers is in development, however I'm currently too busy working on other things to have time to finish it.


XNU has definitely supported KASAN on macOS for quite some years. (However, KASAN kernels aren't shipped by default; you can build one from source.)

However, no idea if Kernel Debug Kit ships with prebuilt kasan kernel and drivers.


I see a kernel.kasan inside the latest KDK. Doubt this extends to drivers, as all I can find for those are debug symbols.


No matter what the outcome is, remember: We should keep telling ourselves that it is possible for someone, somewhere unaided by typechecking, memory safety constraints, static analysis, and fuzz testing to write safe C.

We haven't found that person yet, but if we stop believing that person exists, then why are we still writing in these languages?


I'm just so thankful we have so many people in the LISP, Rust and Haskell communities ready to step up to the plate and replace over 50 years of C/C++ OS development with their grand, impervious and indestructible vision of what a safe, secure OS should be.

All they'd have to do is provide us a stable, secure foundation and a hypervisor that can run legacy OSes. Oh, and reimplement/replace the tens of thousands of APIs provided by Windows/UNIX/Linux which are in use by millions of programmers every day.

Also they'd need to be able to support all those other secure languages that have been written in C/C++ over the years. I guess they will be able to rewrite those as well in short order on top of their new "secure" foundation.

I'm sure it will happen any minute now.


Those folks are actually chomping at the bit to be taken seriously and have been for a very long time! Their proposition is simple: better tools can make us better developers.

JavaScript developers have told me on a few occasions that TypeScript is unnecessary, they can just write good JS and they don't need types! And it doesn't even catch every bug, so why bother?

I think I'm just tired of being told that while I can't write safe C, it's because I'm not smart enough (which may be true, I guess!), but also that the reason everyone else can't write safe C is because they, too, happen to also be not smart enough: https://news.ycombinator.com/item?id=23289693

> I design high-scale database kernels, large code bases, written in modern C++. I can’t remember the last time we had a memory safety issue. It isn’t as though we don’t have plenty of bugs during development, just not those kinds of bugs. Competent idiomatic code simply doesn’t leave much room for those kinds of bugs to occur. If you “constantly struggle with memory safety issues”, you are doing something fundamentally wrong. That’s not something you can blame on the language.

>

> I am always baffled by the people that supposedly write modern C++ professionally and constantly have memory safety issues. Most serious projects won’t hire you if you aren’t capable of writing memory safe code in your sleep, it is a basic skill.


> Those folks are actually chomping at the bit to be taken seriously and have been for a very long time!

The problem is, to be taken seriously, eventually they will need to demonstrate that they can solve the problem, rather than just talk about it.

I don't dispute the need for OS/systems development to move towards more secure languages, that much is clear. It's just none of the "visionaries" that talk about it so much have a clear roadmap that wouldn't put millions of working programmers out of jobs, let alone much in the way of working prototype code.

As such, it seems to be somewhat of a programming language enthusiast's pipe dream. 99.5% of people who seem to be working on hobby OS projects choose C/C++, because that's where most of the accessible prior art is, and because it gives them a snowball's chance in hell of being able to port a wide range of pre-existing software and applications to their platform.


What are you even talking about? First of all, the people working on the language are obviously not going to be the people implementing the OS. There are only so many hours in a day, and languages as large as Rust or Haskell are a full-time job to maintain.

Rust has a bunch of hobby OS projects, and a very serious OS project called Redox OS[0]. Redox has a libc that's complete enough to allow it to run bash and other existing programs. It has a complete graphics stack, including its own compositor and UI toolkit.

Meanwhile, Microsoft is actively testing Rust as a language, both for new components and rewriting old code that would traditionally be written in C/C++[1]. They're doing this because memory safety bugs account for 70% of their critical bugs[2]. This is a major player in the industry, actively looking at a memory-safe language because they find C or C++ to be inadequate. If that's not validation enough, I don't know what is.

[0]: https://www.redox-os.org/

[1]: https://msrc-blog.microsoft.com/2019/11/07/using-rust-in-win...

[2]: https://www.zdnet.com/article/microsoft-70-percent-of-all-se...


I never said it had to be the language developers themselves who implement a modern OS; my implied meaning was the vocal proponents of the languages. I think that was fairly clear, and I don't know why you've jumped to that assumption.

Everybody who reads HN regularly with an interest in OS design and security and application security has heard of [0], and [2]. [1] simply says that Microsoft is "exploring" Rust, for an "experimental" rewrite of a low level component. That's it. No shipping code yet. I agree it's a good step forward, but let's not overstate the impact just yet. If you consider that "validation enough", that's fine, but many others will disagree at this early stage, especially in the context of displacing an entire ecosystem of C/C++ based systems and languages.

Lastly, Redox is a great example of what I'm talking about, people actually putting their money where their mouth is, and I think it's great to see. It also has 0% adoption outside of a small group of developers right now, and clearly has a long way to go. However, by shipping a libc in order to support C programs, they risk exposing application security issues in exactly the same manner as the systems they are trying to replace, unless they are prepared to develop additional proven mitigations.

Kudos to them for trying, but it remains to be seen whether they even get enough momentum to threaten, let alone displace, the incumbents. Surely history has shown that the platforms with the "killer applications" are those most likely to succeed, and that many systems more elegant and better designed than UNIX/Windows have lost out to pragmatism on the part of customers/consumers.


I went and looked; unfortunately the OP's database product is not publicly available. It is easy to claim you are secure when no one sees the code. I'd happily do a deep dive on it for $10k per memory violation if they are so certain.


“Those folks are actually chomping at the bit to be taken seriously and have been for a very long time”

They can step forward now and deliver stuff.


It will naturally take time, but it's not like nothing's happening. Redox is growing and supporting libc-based apps. There were people looking at porting KVM to it, but I haven't seen any recent mentions - either way, the momentum is there.

Of course it will not be quick. But I don't see the point of ridiculing that effort either. It's not like Linux took over from Unix immediately either (even though the interface then was tiny in comparison).


I'm not ridiculing Redox at all; if you read further down the thread I give credit to them where credit is due, although I think they will have an uphill battle unless they receive some major corporate backing at some point. I save my ridicule for the frothing-at-the-mouth C/C++ haters jeering from the sidelines who seem to think that battle is already won.

Linux grew rapidly in part because there were already a huge number of C programmers on the Internet, as well as UNIX admins, and Linux was reimplementing a fairly well known set of APIs. Rust/Redox don't have quite the same head start. I wish them every success though.



The interesting bit to me is that the FreeBSD bug took 2 weeks of fuzzing to find, but the Mac/Windows bugs were found in one day.

USBFuzz found three bugs (two resulting in unplanned restarts and one resulting in a system freeze) on macOS, and two bugs on Windows (resulting in a Blue Screen of Death, confirmed on both Windows 8 and Windows 10) during the first day of evaluation. Additionally, one bug was found in a USB Bluetooth dongle driver on FreeBSD in two weeks.


FreeBSD is definitely the system that will make it through the apocalypse. A friend once told me he had a damaged RAM module; he tried installing Windows and various Linux distributions, and nothing worked. But it was possible to install FreeBSD and it just worked...


What mechanism is at work here? Do FreeBSD hackers write code that doesn't depend on RAM working?


FreeBSD used to be known as the most sensitive. In the 2.x days (late 1990s) FreeBSD would fail to load on systems with bad memory that Linux or Windows had no problems with.

In the end it is a good thing if the OS refuses to run, as no good can come of working with bad hardware.


Now I'm wondering how FreeBSD handles damaged RAM differently, and the speed hit it takes for doing so.


To my knowledge, FreeBSD doesn't have anything specific here, it was probably just luck in where the broken addresses fell and how they were used. These days FreeBSD and Linux both have provisions to avoid addresses you know are bad, if you tell them at boot (and if you don't need those addresses to boot).

I'm pretty sure I've read that Solaris has provisions to stop using bad addresses detected at runtime, but it needs ECC or similar to detect the badness.


Still probably should get that RAM replaced though.


(Honest, naïve question, I lack awareness of the Linux versioning scheme) Isn't Linux way into 5.x land for quite some time? Is the testing of 4.20-rc2 some kind of proxy to test there were no backports of the very latest mainline?

----

Researchers said they tested USBFuzz on:

- 9 recent versions of the Linux kernel: v4.14.81, v4.15, v4.16, v4.17, v4.18.19, v4.19, v4.19.1, v4.19.2, and v4.20-rc2 (the latest version at the time of evaluation)

- FreeBSD 12 (the latest release)

- MacOS 10.15 Catalina (the latest release)

- Windows (both version 8 and 10, with most recent security updates installed)


Since v3.0, Linux has treated the first _2_ numbers of its version as something that is incremented sequentially (in the v2.6 era, the _3_rd number was used; prior to v2.6 there was an even/odd stable/unstable split).

For example, we have normal releases v4.19 then v4.20 then v5.0 then v5.1. There will never be a v4.21.

Stable releases use the third number. v4.14.81 is a stable release in the v4.14 series.

So the v4.20-rc2 statement just dates the work:

    $ git show v4.20-rc2
    tag v4.20-rc2
    Tagger: Linus Torvalds <torvalds@linux-foundation.org>
    Date:   Sun Nov 11 17:12:54 2018 -0600


Woah, so they did this a year and a half ago despite releasing this year (note that the paper has a 2020 reference in it). Weird!

I guess responsible disclosure timelines on this sort of work must really suck.


I don't know about computer security, but I've published peer reviewed research papers in other fields and a year and a half from data collection to publication doesn't seem unusual to me. If anything it's pretty fast.


The linux kernel has several LTS releases for people who want to stay on the same version for stability reasons but also need security updates. I suspect from a security perspective testing whatever the latest 4.19.X version is would be roughly equivalent to 5.X.

(Except, perhaps, for any USB-related features introduced in 5.X)


>"At its core, USBFuzz uses a software-emulated USB device to provide random device data to drivers (when they perform IO operations)," the researchers said.

The key to sniffing out all of these "bugs" (AKA, "security concerns") is to be able to emulate, emulate, emulate, everything as plug-in modules -- even down to hardware itself.

Virtual machine software does this -- but bugs (AKA, "security concerns") have been found in virtual machines too, so a "welded together at the seams" virtual machine is not the answer. Rather, a modular, plug-and-play, open-interface one is -- where the data moving into and out of interfaces can be monitored, logged, recorded, played back, analyzed, and modified easily to implement a test condition or conditions, as need be.

A software emulated USB device -- is a good step towards this vision.

It's sort of like if we rebuilt a virtual machine from the ground up, starting with the CPU emulation, and then said, OK, do I want to emulate this piece of hardware in the virtual machine software itself, or do I want to proxy it out, maybe via a serial path, ethernet/socket/ tcp/ip connection, or shared memory proxy to another plug-and-playable (and separately testable) module...

I think we need to rethink virtual machines as they exist today. Coding an interface in C for an existing virtual machine and saying that the job is done is not enough; virtual machines must become vastly more modular/proxyable than they are today. The bus of such a machine must become virtual and proxyable as well, such that 3rd party software, modular plug-ins, could observe it in realtime, as should memory, emulated VGA card, etc... any point there's a connection to hardware, real or virtual is a proxy point that must be modular and auditable by 3rd party plug-in programs...

THAT's how you write the virtual machines/systems of the future.

Which also become the debugging systems of the future...

In fact, such a "modular virtual machine" -- could be used by AI-driven software to run unit tests, but run them with different combinations of VGA cards, emulated CPUs, emulated BIOSes, buses, controller chips, what-have-you...

Oh... and there should be a way to interface the VM with actual hardware chips... in other words, I have a hardware chip that is in question... emulate the entire rest of the system, but proxy a physical connection to that chip on a breakout board... etc.


Such a system wouldn’t be a VM but an emulator. They already exist and are routinely used. There are also specialized low level emulators, too, and even logic analyzers at the electronics level.

But all that does not really help fuzz a kernel quickly, which is much better done with a normal VM plus an emulated device, as the researchers did.


And the underlying hardware should be designed to virtualize its PCIe registers; many accelerators do this. It should be possible both to transparently plumb the virtualized register sets into a guest and to interpose on those register sets to build a dynamic out-of-band firewall for that same hardware.


I am glad I am using Qubes OS, which isolates usb devices from the rest of the system.


macOS is also moving into this direction: https://developer.apple.com/system-extensions/

"DriverKit provides a fully modernized replacement for IOKit to create device drivers. System extensions and drivers built with DriverKit run in user space, where they can’t compromise the security or stability of macOS."


Sadly, DriverKit doesn't actually address all the reasons why you'd want a kernel extension.


Do you have an example?


Little Snitch, the most critical piece of Mac software after the kernel.


Little Snitch will be fine. They will use the new API.

https://blog.obdev.at/little-snitch-and-the-deprecation-of-k...


Tbh, is that not viable through packet fence? I also use little snitch and had not considered this.


Me too. However, I can't run it on one of my computers because I need to do development on it which requires the GPU.

I really wish there was a reasonable solution for using the GPU with Qubes. If that existed I wouldn't have to use any other distribution, ever.


Have you tried GPU passthrough? Will need two GPUs for that though.


Seems like a pretty ridiculous hoop to have to jump through.


What do you mean? The two GPUs, or the part where you need to give the VM a GPU if you want that VM to have a GPU?


You do realize you are talking to a person running Qubes?


I can confirm that Qubes requires additional mental, time and computational resources to run. Many people do not have those and for them it would indeed be ridiculous to use Qubes. Its advantages are not important enough to everyone.


How is Qubes? Was thinking about a Purism laptop w/ Qubes as a way of indulging my paranoia.

It's a cool idea and I've been looking at it for a while, but it seems onerous to isolate everything.


Librem 15 is exactly what I am using with Qubes. It is my daily driver. Isolation not only helps against threats but also helps to organize my work and life. Unbelievably easy backups help too.


I always assume plugging in an untrusted USB to be security-suicide. At least on Windows it can run arbitrary code, by design. Given that, these bugs don't really affect my threat model at all.



I'm pretty sure I don't have an evil maid.

I do wince as my mate, who works as a binman, tests found devices on a Windows laptop.

So far all he has got is about a terabyte of free storage and no problems.


> and no problems

...that he knows about.


I wonder if any USB device actually relies on these bugs being there to operate correctly.


Hard to imagine a device that relies upon use-after-free defects for its correct functioning.


"I first heard about this from one of the developers of the hit game SimCity, who told me that there was a critical bug in his application: it used memory right after freeing it, a major no-no that happened to work OK on DOS but would not work under Windows where memory that is freed is likely to be snatched up by another running application right away. The testers on the Windows team were going through various popular applications, testing them to make sure they worked OK, but SimCity kept crashing. They reported this to the Windows developers, who disassembled SimCity, stepped through it in a debugger, found the bug, and added special code that checked if SimCity was running, and if it did, ran the memory allocator in a special mode in which you could still use memory after freeing it."

https://www.joelonsoftware.com/2004/06/13/how-microsoft-lost...


Me: laughs in firmware engineer


Not sure about hardware devices, but I have encountered software that has relied on using freed memory to function as intended.


Obligatory xkcd: https://xkcd.com/1172/


And the winners are:

- 2x double-free

- 8x NULL pointer dereference

- 6x general protection

- 6x slab-out-of-bounds access

- 14x use-after-free access

Naturally it was only due to the current shortage of those mythical C developers that never make memory corruption mistakes.


In many ways I'm most interested in the Windows bugs: Windows 8+ has a model-checked USB stack written in P (https://github.com/p-org/P). Or maybe that's just the USB 3.0 stack? Either way, would be interested in whether they're in the integration code or bugs in the P compiler.


Hypothesis: the more fuzzing tools we develop, the fewer infallible C programmers remain.


I'm reminded of this apocryphal story of a revered machinist from Tampere. The city is on an isthmus between two lakes. Their shop was right next to the rapids between the lakes. They were famous, having never made any mistakes.

During some construction event, the city closed off the rapids and no water was flowing there; it was completely dry. There was a massive amount of partially machined objects on the bottom. The machinist had been tossing them out of the window into the water every time they made a mistake.


So the machinist had been using rigorous testing, catching all defects prior to release, rather than using a correct-by-construction methodology. Either approach seems fine.


Neither is perfect. Tests are only as good as your imagination of what to test. Proof by construction is only as good as your ability to have the right axioms


> Tests are only as good as your imagination of what to test.

A lot of work has been done on code-coverage, boundary-analysis, etc. Testing can never be complete, though.

> Proof by construction is only as good as your ability to have the right axioms

That's not a good summary of the challenges that formal methods face with respect to correctness. Defects can creep in at various points.

See p. 46 of the PDF linked from https://www.adacore.com/tokeneer


Well, apparently it didn’t catch the defect that mattered. Typical engineer trying to justify an engineering mistake!


He never had problem with use-after-free, because he was leaking memory.


I’m not a Rust programmer but usually one interjects to say that Rust would have prevented [some of] these issues.

Does Rust in fact prevent most of these at compile time? Any that it does not?


- 2x double-free

Yes, it would be very weird to manage to do this in Rust. It would require screwing up badly in unsafe code. I guess the most likely way to fuck up unsafe code in this way would be to screw up a custom data structure like a custom reference-counted pointer (but the obvious solution is to just use the standard ones).

- 8x NULL pointer dereference

Yes, rust tells you when pointers might be Null (which is represented by saying the pointer is Option<PointerType> instead of just PointerType), and doesn't let you use the pointer in a way that implicitly assumes it isn't null.
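
A minimal sketch of that point (the function and names here are invented for illustration): "maybe null" is spelled Option in the type itself, so the compiler refuses to let you touch the value without handling the None case first.

```rust
// A lookup that may fail: the "null" case is part of the return type.
fn device_name(id: u32) -> Option<&'static str> {
    match id {
        1 => Some("keyboard"),
        2 => Some("webcam"),
        _ => None, // the explicit "null pointer" case
    }
}

fn main() {
    // `device_name(3).len()` would be a compile error -- you cannot
    // use the value while it might be None. You must unwrap it:
    match device_name(3) {
        Some(name) => println!("found {}", name),
        None => println!("no such device"), // the compiler forces this arm
    }
}
```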

- 6x general protection

These are generally memory errors, and are certainly undefined behavior, so yes.

- 6x slab-out-of-bounds access

"Yes" with a small caveat: depending on how you are using the slab, it is likely that Rust doesn't force you to explicitly think about the out-of-bounds case and just silently turns the security vulnerability into a runtime error, which might be handled farther up the stack or might cause the kernel to halt.
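
A small standalone (userspace, not kernel) illustration of that caveat: checked access with `.get()` surfaces the out-of-bounds case as an explicit Option, while plain indexing turns it into a deterministic panic rather than a silent overread of adjacent memory.

```rust
fn main() {
    // A 4-byte "slab"; index 9 is out of bounds.
    let slab: Vec<u8> = vec![10, 20, 30, 40];

    // Checked access: the OOB case is an explicit None, no overread.
    assert_eq!(slab.get(2), Some(&30));
    assert_eq!(slab.get(9), None);

    // Plain indexing still bounds-checks at runtime and panics instead
    // of reading past the buffer; in a kernel, that panic must be
    // handled farther up the stack or the system halts.
    let oob = std::panic::catch_unwind(|| slab[9]);
    assert!(oob.is_err());
}
```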

- 14x use-after-free access

Yes, like double frees. The range of possible fuckups in unsafe code that cause this is probably slightly larger than the equivalent range for double frees.


So the same argument that is made against the "mythical C programmer that never makes memory corruption mistakes" is also valid for the mythical Rust developer that never makes mistakes in unsafe code... It's two sides of the same coin.


But overall the two situations aren't nearly equivalent. Writing C, you're constantly at risk of making these mistakes. Writing Rust, you can keep the vast majority of your code safe and more closely examine the smaller unsafe area for memory errors. (In driver development, you may have to use more "unsafe" than for application development, but this doesn't negate the benefits. Also the compiler output is much more helpful.)


> Writing C, you're constantly at risk of making these mistakes. Writing Rust, you can keep the vast majority of your code safe and more closely examine the smaller unsafe area for memory errors.

I think this is a fallacy...The majority of C code doesn't manipulate pointers either. The point is, the moment that you have _any_ unsafe code (C or Rust), it's a question of time before you will have some bugs, especially if you have a very large number of people working on the same code base...you may be extra careful, maybe the next guy is not...Rust is not going to magically solve these problems for unsafe code...


I think you guys are debating whether perfect is the enemy of the good.

Rust will not make your code magically bugfree (this is basically impossible by definition), but it will undoubtedly reduce the number of bugs in a very significant way and nudge you towards better code.


Exactly, it's a difference in degree, not in kind. Rust isn't perfect with regard to memory errors, it's just massively better than C. I don't think pml1 will be convinced, but I've been working on a C project lately and I've never appreciated Rust's memory virtues so much as when using valgrind to debug my string and hashmap implementations. I chased a memory leak for two evenings that came down to a free() call being a few lines off in one function. A stupidly simple error, but my eyes just glazed over the code from having read it so many times. In Rust, it never would've happened because the borrow checker would've dropped the memory at the proper time.
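
A toy demonstration of that point (the `Tracked` type and counter are invented for illustration): the compiler emits the equivalent of the free() call at end of scope, so it can't end up "a few lines off".

```rust
use std::cell::Cell;

// Counts how many values have been "freed" (dropped) on this thread.
thread_local! {
    static FREED: Cell<u32> = Cell::new(0);
}

struct Tracked;

impl Drop for Tracked {
    // Stand-in for free(): the compiler calls this automatically,
    // exactly once, when the value goes out of scope.
    fn drop(&mut self) {
        FREED.with(|f| f.set(f.get() + 1));
    }
}

fn main() {
    {
        let _a = Tracked;
        let _b = Tracked;
        // Still alive inside the scope: nothing freed yet.
        assert_eq!(FREED.with(|f| f.get()), 0);
    } // both dropped here -- no way to misplace the "free" call
    assert_eq!(FREED.with(|f| f.get()), 2);
}
```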


> The majority of C code doesn't manipulate pointers either

But it's not just pointers. C is unsafe at every turn. Assignments can give undefined behaviour, as can the arithmetic operators, as can varargs... the list goes on.

    int i;
    int j = i; // undefined behavior

    int m = 0;
    int n = 42 / m; // undefined behavior

    int x = INT_MIN;
    int y = -1;
    int z = x / y; // undefined behavior (assuming two's complement)
 
    printf("%d"); // undefined behaviour

> Rust is not going to magically solve these problems for unsafe code...

It doesn't need to. By providing a language where only a tiny fraction of a well-designed codebase needs to use the unsafe features, the number of safety-related bugs can be vastly reduced.

Rust does not aim for perfection. If you want that, your best bet is formal methods.


Except in Rust unsafe code is very much not the norm, and much easier to keep an eye on. Anyone writing Rust worth their salt should be paying five times as close attention to every line of unsafe code for exactly this reason.


Except unsafe blocks are rare and heavily marked as "this is where the unsafeness is".
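
A tiny sketch of what that marking looks like in practice (the unchecked read here is gratuitous, purely for illustration): the unsafe island is one greppable line, and the invariant a reviewer must verify sits right next to it, while callers only see a safe API.

```rust
// Safe wrapper: callers can't misuse this, no matter what they pass.
fn sum(slice: &[u32]) -> u32 {
    let mut total = 0;
    for i in 0..slice.len() {
        // The one line a reviewer must scrutinize. Invariant to check:
        // i < slice.len(), guaranteed by the loop bound above.
        total += unsafe { *slice.get_unchecked(i) };
    }
    total
}

fn main() {
    assert_eq!(sum(&[1, 2, 3]), 6);
    assert_eq!(sum(&[]), 0);
}
```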


It won't be rare in driver code.


I’ve written bare-metal code in Rust that does networking and had only a handful of `unsafe` usages, and none after startup/init was completed. It’s actually really incredible how far “simple” shared-read-vs-exclusive-write borrow checking and a very strict (but capable/likely Turing complete) type system can take you.


It absolutely will, particularly in USB code. USB devices other than host controllers don't have memory mapped registers for instance, it's basically a network protocol.


> These are generally memory errors, and are certainly undefined behavior, so yes.

USB registers might be in DMA buffers or another memory location that might get mapped/unmapped at any point. Not everything fits a simplistic linear memory model.


This is an interesting point. Rust's borrow checker or bounds checks probably also can't know about a page table change happening underneath it. Anyone want to correct me on that?


There's no fundamental reason why you couldn't write an abstraction around the page table that the rust borrow checker understands.

The rust borrow checker works on the principle of ensuring that if you have a unique pointer (&mut) to something, nothing else can access it. If you have a shared pointer (&) to something, nothing else is mutating it except where internal mutability is explicitly marked (UnsafeCell, and abstractions using UnsafeCell such as Cell, RefCell, Mutex, RwLock, and so on).

To mutate a page table entry you would need an &mut reference to it; to access a page you would need an & reference to the page table entry. From the &PageTableEntry you would get a &[u8] pointing to the data, and the borrow checker would guarantee that you drop the &[u8] before the &PageTableEntry, and the &PageTableEntry before anything mutates the PageTableEntry.


This isn't exactly what you're asking about, but I really really love https://os.phil-opp.com/paging-implementation/

Shows you how you might implement paging, and how much unsafe you need to do so.


It's pretty easy to wrap those constructs in RAII wrappers to replace or augment the normal reference counting that C code would be using to keep those buffers mapped, along with associating the lifetime of the relevant buffers with that refcnt.

So it won't be perfect, but you can add safety versus what you get in C. You can even add safety versus what you'd get in C++ because of the lifetimes you can associate.


I know it's possible to reference count a page table mapping, in any language. My question is whether anybody has really attempted it in a Rust kernel, to make the sort of automatic safety measures we know Rust for mean anything at all. It seems like if you really want correctness, every allocation must bump such a refcount, which is very expensive.


So the general trick with reference counted pointers in Rust is that you don't have to touch the reference count when creating a new pointer, as long as you already have a pointer that you know lives longer than your new pointer, and the Rust type system will check that you didn't make a mistake when you thought you did.

I.e. say I have a `x: Rc<[u8]>`, that is a ref counted pointer to some memory, and a length of that memory. I can do `let y: &[u8] = &x;`. `y` is now a not-ref counted pointer to the same memory (with the same length), that's guaranteed to be dropped before `x` is so the memory won't be freed from under it. I can also do `let z: &u8 = &x[5]`. `z` is now a pointer to a byte in `x`. Like `y` it's not ref counted and the compiler will force us to drop it before we drop `x`.

You can make a whole allocator in this fashion (people have, even in the standard library I believe). If you get really clever you can probably even make an allocator in this fashion where the allocator doesn't use any unsafe code, you can easily make one where the users of the allocator don't need any unsafe code.


I don't think Rust really makes NULL dereferences any better. In practice, a NULL dereference in C is almost always "just" a crash that can't be turned into something worse (unlike the rest of these kinds of bugs), and Rust makes it really easy to call "unwrap", which if your Option is None... crashes.


Explicit `unwrap` (rust) is really orders of magnitude better than implicit `unwrap` (c dereference if you're lucky and the compiler hasn't optimized based on the assumption that the pointer is non-null). There aren't actually many explicit unwrap's in practical code, and they're something you notice when auditing or reviewing the code. The change in type means both the caller and the callee almost always agree on whether or not the contract is that "this pointer can be null" or "this pointer isn't null".


> In practice, a NULL dereference in C is almost always "just" a crash that can't be turned into something worse

No, in C a NULL pointer dereference is not a crash, it's undefined behavior (which yes, can manifest as a crash), which is much more unpredictable.


Indeed.

For anyone still believing a NULL dereference (or any of the other UB that is in the C spec) is just a segfault, "What Every C Programmer Should Know About Undefined Behavior" is still required reading:

http://blog.llvm.org/2011/05/what-every-c-programmer-should-...

Here's how triggering UB ends up as a vulnerability:

https://lwn.net/Articles/342330/


> Here's how triggering UB ends up as a vulnerability

To be fair, dereferencing NULL ended up as a vulnerability due to

1. the optimiser removing the subsequent NULL check, and

2. the zero page being mapped.

While 1. may happen, depending on how sophisticated the compiler is, 2. was already a security issue and is 'almost always' not the case.


I would argue that Rust removes a class of bugs. Not because the language is better, but because the compiler just does not allow it. It will not let you build your executable if that type of bug exists. For example you could still do something like a SQL injection attack using something compiled with Rust. The C compilers should be doing the same as Rust (some are, but usually at the warn level). Static analysis is not a new thing.

Usually the best thing any C/C++ developer can do is crank the warning levels as high as they can go, then set the project to error out on warnings. A telltale smell for a project is seeing that it has turned warnings off; usually the reason given is 'too noisy'. I always tell my junior devs the same thing: 'the compiler is trying to tell you something, all you have to do is listen'.


The next level after fixing all compiler warnings is running under Valgrind's different modes to catch many more memory and threading bugs.


Yes it does. Not most of these, but all of them.


No, that's untrue. For instance, Rust doesn't prevent an out-of-bounds access at compile time; it just converts an out-of-bounds access into a panic at run time. That is, it reliably aborts the program (or the thread), instead of reading or writing out of the bounds of the object.

The same for "NULL pointer dereference"; the Rust equivalent (without using raw pointers, which require "unsafe") is calling .unwrap() on an Option<T> which has a None, which once again is not prevented at compile time, only converted into a panic at run time.

Edit: note, however, that idiomatic Rust can help prevent these bugs, by using iterators instead of indexed access, and match/if-let statements instead of unwrap/expect.


There's quite a huge difference between undefined behavior (dereferencing NULL or indexing out of bounds) which can lead to remote code execution and other super nasty bugs, and runtime checks which panic and abort the process reliably and in a well-defined way.


really ? You've checked all the code in question and you are a 100% sure that you would have required _zero_ unsafe code ?

Drivers by their very nature require a lot of unsafe pointer passing...Having worked on a lot of embedded Linux driver code, I'm not convinced that you could do without a ton of unsafe code...which basically negates all of Rust's safety guarantees...The current architecture just doesn't lend itself well to Rust (in my opinion), so you would have to basically rewrite very large parts of the plumbing, which by its very nature would introduce a ton of new bugs...


Rust has the concept of "interior unsafe".

The idea is that you build an abstraction and wrap away the unsafety. That abstraction has to be built defensively, which is enabled by the type system enforcing very powerful invariants across the program. You can safely encode the way your structures may or may not be used across threads, enforce exclusive vs shared access, control mutability.

Unsafe is not a "do whatever" card, you use it with the rest of the language as one more tool that helps build safe programs.


For the most part rust handles pointer chasing in safe code just fine.

Drivers are a fundamentally unsafe concept, like an ffi call, you're talking to hardware the compiler doesn't understand and assuming it's going to do what you want. So there will of course be some unsafe. In a well implemented driver it will also be very limited in scope and relatively easy to check.

I can't vouch for the code quality (I just don't know how good it is, I had no hand in writing it, and it's not used in a production system), but I believe you can find a usb driver implementation written in rust here: https://gitlab.redox-os.org/redox-os/drivers/-/tree/master/x...


>So there will of course be some unsafe. In a well implemented driver it will also be very limited in scope and relatively easy to check.

I'm sure that is what the C developer thought as well...I'm not trying to be snarky, but that same arguments that are made against C code holds equally true for unsafe code...

I don't think it is a reasonable position to suggest that unsafe Rust code is somehow safer than C code...


The point isn't that unsafe rust code is safer than C code, but that there is many orders of magnitude less unsafe rust code than c code in a driver of the same size.


Unsafe rust is safer than C for two reasons.

You can limit what comes in from safe rust.

Unsafe Rust still has more compiler checks than C. Unsafe Rust is not as permissive as C. It's still Rust, with certain extra things permitted.

It's unsafe{}, not nosafe{}.


Rust didn't exist when these kernels were written. A mature kernel written in Rust still doesn't exist.

Modern toolchains can effectively warn about memory/pointer issues. I can't wait for a C++2x proposal to add Rust-like memory semantics to the language - primarily to shut Rust fanboys up. Also, Rust does not warn about threading race conditions; you'd need a reference-capabilities language (Pony without ORCA) for that.


Rust does warn about threading race conditions. Kinda magic but it does. It prevents them.


Could you elaborate? AFAIK, Rust entirely prevents data races between threads, but it doesn't do anything special at all for other kinds of race conditions.


RIIR spam is not useful. People on HN are already aware of Rust.

Some are aware that many other langs are memory safe and yet that's not reason enough to rewrite a kernel.


You're the first one in this thread to bring up the idea of rewriting the linux kernel in rust... the rest of us are just having a productive discussion on the degree to which a language solves a problem.


That would be me, unfortunately I don't have time currently to work on the kernel.


>Naturally it was only due to the current shortage of those mythical C developers that never make memory corruption mistakes.

the repetition of this cutesy line lately almost feels like there is some programming language astro-turfing campaign going on somewhere that I haven't become fully aware of yet.

i'm not saying that it's wrong -- safe languages produce safe(r) products -- but to be upset about the unsafe nature of C is, in my opinion, to be upset about the very thing that makes C a powerful language.

The host of foot-gun abstractions and abilities that come along with C are exactly the kinds of reasons why someone would choose C for a project in the first place: flexibility and a sort of 'freedom of expression' that is unmatched anywhere else except maybe assembly language.


"A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to--they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980 language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law."

-- C. A. R. Hoare, Turing award lecture in 1981,

http://www.labouseur.com/projects/codeReckon/papers/The-Empe...


You have no proof that writing a Linux-like kernel in Rust would result in fewer bugs, or even that it is feasible at all (what with writing even linked lists being difficult).


>or even that it is feasible at all (what with writing even linked lists being difficult).

You're right, a Rust OS kernel probably wouldn't use linked lists but I'm not sure why that is a barrier to making a kernel. The redox[0] kernel is written entirely in Rust.

[0] https://www.redox-os.org/


That is why I said "Linux-like". Redox has a microkernel design, which presumably makes writing the drivers in Rust much easier at the expense of some throughput and latency.


I mean it is obviously possible. Rust can do anything that C can do, literally.

Your question is mixing safe and unsafe Rust as if they're one and the same. What I think you meant to ask is "Can Linux be implemented in safe Rust?" And my answer would be "likely not, but the goal is to limit the number/size of unsafe sections, so that they can be QA-ed harder and bugs discovered."

When you write C/C++, 100% of that codebase is unsafe. When you write Rust, even a kernel, should be 80/20 safe/unsafe or less. That's where the improvements come from.


You are oversimplifying. Both of the languages being Turing-complete does not imply Rust being as good a choice for talking to hardware as C is. What I am getting at is: the imaginary Rust replacement for Linux may have ten times as much code as Linux does and be unmaintainable.

Rust's safe mode eliminates the possibility for some classes of bugs, but this still does not imply the hypothetical Rust kernel would be more reliable than Linux. What I am getting at is: Rust is a quite different language than C, and the hypothetical Linux replacement in Rust could be buggier than Linux.

Regarding the above, my message is not that I know that Rust is worse than C for some applications; but rather that Rust fans and C haters are not even addressing those points. pjmlp deftly avoided saying anything explicitly, but still he should for the sake of the discussion and decency at least try to address those points while making such distasteful comments as "Naturally it was only due to the current shortage of those mythical C developers that never make memory corruption mistakes.".


> When you write C/C++, 100% of that codebase is unsafe.

Don't you think that statement is a bit over-the-top ? Large parts of C/C++ codebases are just as safe as the equivalent Rust code, as it doesn't do any pointer/memory manipulation.

For example, how is making a os system call in Rust any safer than the equivalent call in C ?


If you mean literally writing

    unsafe {
        asm!(
            "mov eax, 1", // system call number (sys_exit)
            "int 0x80",   // call kernel
        );
    }
Obviously both are equally unsafe, also obviously both are extremely rare and not "large parts" of any program whatsoever (unless you're writing your program directly in assembly).

If you mean using the wrapper libraries that typically wrap system calls. Rust removes all sorts of possible fuckups. For instance in C one might write

    ssize_t bytes = read(some_file, buf, nbyte);
and if buf is null or a dangling pointer or nbyte is greater than the size of the allocation of buf you get undefined behavior. Or if afterwards you write

    printf("%s", buf);
and you forgot that read doesn't necessarily return a null terminated string you get undefined behavior. And so on and so forth.

In rust you would write something like

    let mut buf = vec![0u8; nbyte];
    let bytes = some_file.read(&mut buf)?;
and buf can't possibly be null or dangling and you can't possibly have messed up the length of the array because the abstractions checked that for you.

Afterwards if we were to write

    println!("{}", buf);
Well it will fail to compile... because buf isn't a string... and doesn't otherwise implement Display. But if we were to try to match the C code exactly and write one of

    stdout().write_all(&buf)?;
    println!("{}", str::from_utf8(&buf)?);
the second one would return an error if it wasn't a utf8 string, but neither would ever result in undefined behavior/security vulnerabilities.

(And yes, both of those are probably bugs in most programs, since you probably want to print buf[.. bytes] not buf. But bugs aren't security vulnerabilities. Also in real code I expect you would use read_to_string() if you were going to print it, which eliminates this potential bug too)


Linked lists are difficult, but can be abstracted behind a library.

Here's a no_std intrusive linked list library I've used in kernel environments. https://docs.rs/intrusive-collections/0.9.0/intrusive_collec...

It's not like you reimplement linked lists in every driver in the Linux kernel either; you include linux/list.h like everyone else.


But "use rust" seems to be obvious enough as a solution that you simply assumed that's the direction GP is going? :)

Honest question though: shouldn't it be possible to only write single kernel modules (like drivers) in rust, without switching out the entire OS at once?


Answering myself here: of course someone else already did that. :)

https://github.com/fishinabarrel/linux-kernel-module-rust


No I don't, but there are surely proofs in Ada, PL/I, PL/S, NEWP.


Well... I'm going to guess that the largest kernel written in PL/I was at least one order of magnitude smaller than Linux, and probably two orders of magnitude smaller. I'm also going to guess that it was never subject to a 2020 state-of-the-art fuzzer. (I'm even going to guess that it never had USB support.) So, while you may have a point, your comparison is hardly apples-to-apples.


I bet IBM i does have a couple of USB ports.

And Unisys sells ClearPath MCP to three letter agencies over UNIX for a reason.


It's worth noting that many of these issues likely wouldn't have been possible with the use of RAII semantics.


So you're saying if the code was higher quality it would have fewer bugs? I mean, well, yeah.

The problem with solutions like RAII is that it doubles down on programmer infallibility, from "good programmers don't write bugs" to "good programmers correctly use RAII."

You haven't really solved the actual problem unless you have tooling available that won't let non-RAII be checked in at all. Does that exist?


> So you're saying if the code was higher quality it would have fewer bugs? I mean, well, yeah.

That's not what I am saying, at all. You cannot do RAII in C.

My point was that I believe that most of the bugs under discussion here would not have happened if a language that provides RAII capabilities had been used.

> The problem with solutions like RAII is that it doubles down on programmer infallibility, from "good programmers don't write bugs" to "good programmers correctly use RAII."

> You haven't really solved the actual problem unless you have tooling available that won't let non-RAII be checked in at all. Does that exist?

I don't understand what you are trying to say honestly.

If you think that RAII requires the same level of attention and care as manual malloc()/free()/new/delete, then this makes me wonder if you even know what the point of RAII is. Given the same "programmer quality" (for the lack of a better term), the use of RAII will certainly reduce the occurrence of memory mismanagement bugs.

For the tooling, you can use clang-tidy in C++ to enforce the use of smart pointers, make_unique, etc. E.g., see:

https://www.bfilipek.com/2017/12/why-uniqueptr.html

(I am sure there are other static checkers around that can spot the use of manual memory management functions, it's an easy thing to check for).

So, please don't put words in my mouth. The claim was never that RAII gives you the mathematical certainty of the absence of certain classes of bugs. Rather the claim is that in practice the use of RAII would have prevented most of the bugs presented here.


However RAII does not prevent use-after-move, which is how use-after-free happens with std::unique_ptr.

And sadly, most recent C++ surveys place the use of static analysis tooling at around 50%.


I think part of the problem is the attitude that if your hardware is compromised, so is the software, so you don’t need to account for the more insane inputs. After all, it doesn’t make much sense to have a device send special packets to cause a DoS when it can, for instance, just blow up the system by causing an electrical overload.

Still better to have these issues fixed.


Naturally. Where are they? They should be the ones developing operating systems, not these charlatans.


Too busy bragging about their pet language on HN, instead of off proving their point with it.


> Naturally it was only due to the current shortage of those mythical C developers that never make memory corruption mistakes.

Ah - you'd prefer to use the that mythical equivalent kernel written in Rust instead?


http://www.usbmadesimple.co.uk/

(tl;dr: USB is anything but simple.)


Pretty sure the NSA has been using similar tools for at least 10 years now.


Do Bluetooth next! :)


The FreeBSD vuln was in the drivers for a usb Bluetooth dongle, so there is apparently some coverage.


None of this is directed at OP.

Not to be that guy, but I posted this many hours ago.[1] I don't care about the karma, but this is a repost.

@dang, can anything be done to help HN not work like this in the future? Why have karma for posts but then allow reposts like this? It disincentivises first posts in favor of gaming the viewers of HN while trying not to get scooped. It sucks. None of this is directed at OP.

[1] https://news.ycombinator.com/item?id=23329790


He's expressed interest in the past about fixing this; he's not sure how to handle it yet:

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

That said, two things:

You should send a message to hn@ycombinator.com if you have something to say to moderation. They're very responsive, and you'll get a better response rate than posting in random HN threads that might not even be seen. It also breaks the Guidelines:

Please don't post on HN to ask or tell us something. Send it to hn@ycombinator.com.

https://news.ycombinator.com/newsguidelines.html

You also shouldn't take this personally. The poster likely didn't see your post, very likely wasn't 'gaming' anything (they actually have some really interesting articles in their submission history that don't seem to have been posted before, and they're a journalist; it was likely they just read it and posted it), and it wouldn't be practical to 'ban reposts': HN already has a feature that upvotes a previous submission if you post within a given range from it.

I know it's not a great feeling to post something that you think is interesting and to see it get to the front page later via someone else (I can point to a bunch of my own posts where this happened), but it getting to the front page at all is something positive. It's not a race, so incentivizing quick-draw on articles from popular outlets (like zdnet, which has millions of readers) above all else probably wouldn't be useful, either.


I don’t care to do that. It’s a problem with basic functionality. There is site behavior which disallows reposts, but only sometimes. The sometimes part is the bug, from my point of view. I honestly don’t care enough to do anything but say it shouldn’t happen when it happens to me, as long as it happens to me. Bug reports should be public for non-security related bugs.

This is a bug. This thread is my report. I can be reached on this thread.

Edit: I have emailed and asked that replies are in-thread or linked to in this thread. I hope we can get some clarity into whether this is a "won't fix" situation or more of a subjective call on a per-case basis, or something else entirely.

I don't want to make waves, just trying to surf.



I do have one unanswered question: Why did this particular repost get posted instead of upvoting my post?


Because your post fell below the bar for 'significant attention' as explained in the comments I linked to.


Maybe if I had gotten some of the karma from the repost it would have cleared the bar. This argument is circular, and I know you have to make a judgment call, but this feels like the wrong way to go about a fair, transparent system. Perhaps that is not a goal of HN, but from my few interactions with you on this site, I think you probably care a whole lot more about HN than I do, and I care too! Thanks for all you do.

Edit:

Why not do it like a lottery with more than one winning ticket sold? Share the karma pool among the OP and the reposters. Just an idea.

Or, let the reposter keep the karma, but put a byline saying something like 'first posted by user123 on jan 2, 1969' with a link? I care far more about attribution than karma. I care about making good posts, but if someone else gets the credit, why would I bother? If no one will see it, the intrinsic value to me is something I already had. To share is to be seen. If a tree falls in the forest with no one around to hear it and all that.


Yes, we're planning to implement some form of karma sharing. That information was in kick's reply to you upthread (https://news.ycombinator.com/item?id=23342187 - see the first link there).

The reason for wanting to do that is incentivize finding good stories that haven't been posted yet: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


This is the first time I’ve heard the words “karma sharing” from a representative of HN. I didn’t mean to miss what ‘kick was getting at, but I’m glad for the discussion. I also apologize if I came off as aggressive. I am working to be a better person online and offline, and I appreciate the dialogue. I also feel that this thread is a good thing for clarity around these issues on the site. I wish my phrasing and attitude came off better, but we are what we do, and so I’ll try to do better.


Maybe it's not that important.


If it’s not important then why karma?

I couldn’t agree more, by the way. It isn’t important. That doesn’t mean it doesn’t matter.


The votes help rank the submissions and the comments - finding interesting things for HN users to read and talk about is the purpose of the site. Who found a thing first (especially something that was going to end up on the front page one way or the other) is not really that interesting to most people. If anything, an FP-reaching post's contribution to a user's overall karma should probably be capped at around some small factor of a median comment.


I guess I just see breaking news differently. I don’t think news reaching a site when it does is predictable or deterministic. It’s not a foregone conclusion. Many posts which are on-topic or timely or relevant will never make it to FP. That is by nature of the FP. It can only hold so many posts. I agree that it doesn’t matter in aggregate, but this site is niche for a reason. It’s because people care to post quality, timely, relevant content here that the FP can be something to strive to be on. It’s a worthy goal to share content that is deemed worthy to make it to FP. It had an impact. Reposts are just another way to spread the impact around so we all benefit.

I hope that makes sense. I don’t disagree about karma caps. Reddit handles aspects of this karma perverse incentives problem differently for text posts earning no karma, while link posts do. If I got that right. In any case it’s a thorny problem and I think that what exists at HN now works pretty well. I’m sure it can be better, and I can’t think of a better team to think about how and why.


It may be helpful for you to consider the highest karma-earners on the site [1], and how they got there.

None of them got there by being the first to get breaking stories onto the site. They got there by regularly posting interesting comments and articles, consistently over several years.

Though if you look at their submission histories, plenty of what they've posted has received few/no upvotes. Some users will even post the same article every few months for a year or more, before it ever gets voted onto the front page.

Consistent contribution over the long term is what's key.

There are plenty of HN contributors who aren't well known at all for posting comments, but who have huge karma counts just for consistently posting interesting articles, and much of it is years-old, or about non-current topics like history, philosophy, literature, architecture etc.

Someone like that won't be bothered if other people sometimes get the karma for a story that they had submitted earlier. They will just keep posting several interesting articles every week or even every day, knowing that overall their contribution is valuable and that karma allocation will balance out fairly over the long term.

[1] https://news.ycombinator.com/leaders


'breaking news' is pretty much antithetical to HN. It just happens to have 'news' in the name, sort of like the town of Newport News. For the purposes of HN, a deeper, more detailed story three days 'late' is better than a superficial 'breaking' story.


Some information only matters to you if you know right now. Timeliness and relevancy are important to the user. The existence of the New page on HN and users who visit it prove that the search for the right content at the right time is one worth it to those to visit New.


HN is just not that kind of site, really. The existence of the new page does not 'prove' anything more than the name. Newly submitted stuff has to go somewhere, it doesn't mean it urgently needs to reach users. The FAQ, the guidelines, zillions of dang posts talk about the sort of things that make good HN posts and 'newsiness' is not high among them.


I’m not saying it proves anything. I’m saying HN the site is different things to different people. It’s hard to say what people intend when they upvote something, but I think they mean that it helped them and think it belongs on HN. By virtue of upvoting, they signal its relevance to them and their vote of confidence in its relevance to the wider HN community. I don’t have any burden to prove here or any angle to promote. Users vote, and HN is the result. HN wouldn’t be HN without its values (thank you) or its users (thank you all).


Users vote, and HN is the result

I think you're cycling through a sort of greatest hits of various common misconceptions about the way HN works and you're better off just reading dang's moderator comments for a little bit - this stuff comes up all the time and he addresses it with better detail, accuracy and nuance than I can. You'll be in a better position to then make your critique rather than debating it with less familiarity and with some internet rando who's lecturing you about how you have it all wrong.



