
Still, you can achieve significant speed-ups with WebAssembly in some use cases.

For example, I have a hash function library (https://github.com/Daninet/hash-wasm) where I was able to achieve a 14x speedup on SHA-1 and a 5x speedup on MD5 compared to the best JS implementations.

You can run the benchmarks on your computer here: https://daninet.github.io/hash-wasm-benchmark/



That's exactly the kind of thing I think WASM is good at - small, computationally expensive libraries that are easy to just plug in.

I'm more of a web developer, and every time I think "hmm, could I use this to build a webapp?", I quickly shrug it off because it would create a big headache and JS execution is rarely the bottleneck (and when it is, it's more likely developer error and inefficiencies than the language / interpreter).


It's very similar to the Python/C distinction. Python will often drop into C for the use-cases you're describing. However, unlike WASM, Python/C is the wild west:

- The whole CPython interpreter is the "C-extension interface", which means the CPython interpreter can hardly change or be optimized without breaking something in the ecosystem (and for the same compatibility reason it's virtually impossible for alternative optimized interpreters to make headway); and because the interpreter is so poorly optimized, the ecosystem depends on C extensions for performance. WASM presumably won't have this problem.

- Without the abysmal build ecosystem that C and C++ projects tend to bring with them, building and deploying WASM applications will likely be pleasant and easy after a few years. Of course, if your WASM is generated from C/C++ then that's a real bummer, but fortunately this should be a much smaller fraction of the ecosystem than it is with C/Python.


Yeah, and most of the time when "JavaScript is slow" it is because of DOM manipulation or network latency, and WASM can't even do those things.


Network roundtrips are unavoidable, but WASM could be used to parse a server response and generate custom HTML to use in replacing some portion of the DOM. It would likely be a lot faster than trying to do the same in pure JS, and it would obviate the use of over-complicated hacks like virtual DOM and the like.


No, parsing the response is usually way too fast to make a difference. Generating an HTML string is also usually pretty fast. The slowness happens when you ask the browser to parse that HTML string and generate the appropriate DOM; WASM is not going to get you out of that.


> The slowness happens when you ask the browser to parse that HTML string and generate the appropriate DOM

If you do it right, that step only has to happen once for each user interaction. You can entirely dispense with the need to do multiple edits to the DOM via pure JS.


Multiple edits to the DOM are nowhere near as devastating to performance as they were a decade ago.

At the moment, the "virtual DOM" approach actually works against performance optimization.

JS frameworks like React, Vue, Angular, etc. effectively replicate a big portion of the browser's internal logic for nothing.


It’s not “at the moment” but “continuously from the creation of the virtual DOM concept” - often slower by multiple orders of magnitude.

The misrepresentation of the virtual DOM as a performance improvement came from two things: people comparing virtual DOM code to sloppy, unoptimized code that regenerated the DOM on every change, and React fans not wanting to believe their new favorite was a regression in any way (not to be confused with the actual React team, who certainly knew how to do real benchmarks and were quite open about the limitations).

There’s a line of argument that the extra overhead is worth it if the average developer writes more efficient code than they did with other approaches but I think that’s leaving a lot of room for alternatives which don’t have that much inefficiency baked into the design.


I think there's a bit more nuance to it. React (and other vdom implementations) try to be as efficient as possible when diffing / reconciling with the DOM. Sometimes this can result in improved performance, but there are also use cases where you'll want to provide it with hints (keys, when to be lazy, etc.). https://reactjs.org/docs/reconciliation.html

Above all I would pragmatically argue (subjectively) that the main advantage is enabling a more functional style of programs w/ terrific state management (like Elm). This can lead to fewer errors, easier debugging, and often better performance with less effort.


> I think there's a bit more nuance to it. React (and other vdom implementations) try to be as efficient as possible when diffing / reconciling with the DOM. Sometimes this can result in improved performance, but there are also use cases where you'll want to provide it with hints (keys, when to be lazy, etc.). https://reactjs.org/docs/reconciliation.html

The key part is remembering that every one of those techniques can be done with the normal DOM as well. This is just rediscovering Amdahl's law: there is no way for <virtual DOM> + <real DOM> to be smaller than <real DOM> in the general case. React has improved since the time I found a five-order-of-magnitude performance disadvantage (yes, after using keys), but the virtual DOM will always add a substantial amount of overhead to run all of that extra code, and the memory footprint is similarly non-trivial.

The better argument to make is your last one, namely that React improves your average code quality and makes it easier for you to focus on the algorithmic improvements which are probably more significant in many applications and could be harder depending on the style. For example, maybe on a large application you found that you were thrashing the DOM because different components were triggering update/measure/update/measure cycles forcing recalculation and switching to React was easier than using fastdom-style techniques to avoid that. Or simply that while it's easy to beat React's performance you found that your team saw enough additional bugs managing things like DOM references that the developer productivity was worth a modest performance impact. Those are all reasonable conclusions but it's important not to forget that there is a tradeoff being made and periodically assess whether you still agree with it.


I agree. I am curious, though, about how substantial the memory and diffing costs are. I don't mean that in an "I doubt it's a big deal" way; rather, I'm genuinely curious and haven't been able to find any literature on the actual overhead compared to straight-up DOM manipulation. I would imagine batching updates to be an advantage of the vdom, but only if it's still that much lighter weight (seeing as you can ignore a ton of stuff from the DOM).


> I would imagine batching updates to be an advantage of the vdom but only if it’s still that much lighter weight (seeing as you can ignore a ton of stuff from the DOM).

There are two separate issues here: one is how well you can avoid updating things which didn't change — for example, at one point I had a big table showing progress for a number of asynchronous operations (hashing + chunked uploads) and the approach I used was saving the appropriate td element in scope so the JavaScript was just doing elem.innerText = x, which is faster than anything which involves regenerating the DOM or updating any other property which the update didn't affect.
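A minimal sketch of that pattern (plain objects stand in for the real td nodes here so the idea runs anywhere; in the browser they'd come from document.createElement, and the names are made up for illustration):

```javascript
// Keep a direct reference to each cell; updating progress is then a
// single property write, with no HTML regeneration and no diffing.
const cells = new Map();

function addRow(id) {
  const td = { innerText: "" }; // stand-in for document.createElement("td")
  cells.set(id, td);
  return td;
}

function updateProgress(id, percent) {
  cells.get(id).innerText = `${percent}%`; // touch only the node that changed
}
```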

The other is how well you can order updates — the DOM doesn't have a batch update concept but what is really critical is not interleaving updates with DOM calls which require it to calculate the layout (e.g. measuring the width or height of an element which depends on what you just updated). You don't necessarily need to batch the updates together logically as long as those reads happen after the updates are completed. A virtual DOM can make that easy but there are other options for queuing them and perhaps doing something like tossing updates into a queue which something like requestAnimationFrame triggers.
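One way to sketch that queuing idea (all names here are made up; requestAnimationFrame is used when available, with setTimeout as a stand-in so the sketch also runs outside a browser):

```javascript
// Buffer DOM writes and flush them together, so layout reads
// (offsetWidth, getBoundingClientRect, ...) never interleave with writes.
const schedule = typeof requestAnimationFrame === "function"
  ? requestAnimationFrame
  : (fn) => setTimeout(fn, 0);

const writes = [];
let flushScheduled = false;

function queueWrite(fn) {
  writes.push(fn);
  if (!flushScheduled) {
    flushScheduled = true;
    schedule(flush);
  }
}

function flush() {
  flushScheduled = false;
  while (writes.length) writes.shift()(); // all writes run back to back
  // safe point for layout reads: everything above is already applied
}
```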


So you could probably describe a vdom as a smart queue. How smart it is depends on the diffing and how it pushes those changes, abstracting all of this from the developer. It's bound to be less efficient than an expert (like an expert writing assembly vs. C), but just like any other abstraction it has both pros and cons.

The question is whether the abstraction is worth the potential savings in complexity (which maybe is not the case, but I sure do love coding in Elm).


Also whether there are other abstractions which might help you work in a way which has different performance characteristics. For example, I've used re:dom (https://redom.js.org/) on projects in the past, LitElement/lit-html are fairly visible, and I know there are at least a couple JSX-without-vdom libraries as well.

There isn't a right answer here: it's always going to be a balance of the kind of work you do, the size and comfort zones of your team, and your user community.


Very interesting, thanks for pointing out re:dom. I took a look at their benchmarks, and some vdom implementations compare very well to re:dom. I was pleased to see Elm's performance. So it seems like it can be done well when you want it. https://rawgit.com/krausest/js-framework-benchmark/master/we...


Na, the slowness comes from asking the browser to do that thousands of times in a loop on every click :)


Frankly, that just seems more difficult, and it handles an issue I haven't run into in 5 years that couldn't be solved with JS performance optimizations.

Does WASM really make sense for something that isn't constantly doing high-performance calculations? Do I gain anything from using it in most SPAs?


Forcing the browser to continually parse HTML and generate a new DOM tree, recalculate layout, etc. shouldn't be faster than updating the specific nodes that need changes.


The first roundtrip is unavoidable. Making another handful of roundtrips every time the user scrolls the page is definitely avoidable.


What would it take to make DOM manipulation faster?


> DOM manipulation

Browser vendors have done a lot of work on that over the past decade or so. It's nowhere near as slow as it was in the early days.


Absolutely, it's been kind of incredible progress. But it's still going to be a bottleneck more often than JS execution (in my experience at least).

Not always; I have definitely run into applications where parsing large amounts of data in code is a bottleneck, especially when building large charts. But often it is.


Where in my case the "small, computationally expensive library" is a card game engine & its AI search.


I think WASM is also good at hiding the source code, which is the main reason why I don't like it...


My general worry is that the performance gains from using some WASM will just get eaten up by the overhead of jumping between JS and WASM and having to copy/convert data. You might be able to reduce the problem by porting more stuff from the JS side to the WASM side, but then you risk pulling in huge chunks of your app.


JS <--> WASM function calls are not an issue [1]; passing large amounts of data is, though.

1. https://hacks.mozilla.org/2018/10/calls-between-javascript-a...
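For what it's worth, a JS-to-WASM call is plain enough to demonstrate with a hand-assembled module; the bytes below encode a trivial (i32, i32) -> i32 add function (a sketch of the call boundary, not a benchmark):

```javascript
// A minimal wasm module, hand-encoded: it exports "add", summing two i32s.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type 0: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0; local.get 1; i32.add; end
]);

const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));
const sum = exports.add(2, 3); // an ordinary-looking JS -> WASM call
```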


Does anyone know if that is also the case on V8?


JS/WASM calls are fast in V8, and they still seem to be improved from time to time (e.g. see: https://v8.dev/blog/v8-release-90#webassembly). I'm not sure about any large-data optimizations (TBH I'm not sure what this is about, though, because usually one would use JS views into the WASM heap to avoid redundant copying).


That works if the data is already in the Wasm linear memory and you need to access it from JS. If you have strings (or whatever) in JS, you need to copy them into the linear memory for the Wasm module to use.
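A sketch of that copy (the memory and offset here are stand-ins; a real module would export its own memory and an allocator to choose the offset):

```javascript
// Getting a JS string into Wasm linear memory requires encoding + copying.
const memory = new WebAssembly.Memory({ initial: 1 }); // one 64 KiB page
const bytes = new TextEncoder().encode("hello wasm");  // UTF-8 encode (first copy)
const offset = 0;                                      // hypothetical allocation
new Uint8Array(memory.buffer).set(bytes, offset);      // copy into linear memory

// Reading the other way can be a view over the heap, not a copy:
const view = new Uint8Array(memory.buffer, offset, bytes.length);
const roundTripped = new TextDecoder().decode(view);
```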


Any real performance gains will easily be balanced out by websites doubling their size once again, WASM or not.


And there might be other benefits besides performance. I'd like to use WASM to reuse server-side code in languages like Rust or Go in the client, so you don't have to re-implement algorithms and tricky processing code in JavaScript.


I experimented with this some weeks ago and it is certainly possible.

I had a PoC where my server runs Rust and exposes a JSON REST API, using serde to serialize my Rust structs to JSON. On the web client I compiled Rust to wasm and used the reqwest crate (an HTTP client that uses Fetch in wasm) to talk to my server; the Rust structs are shared between server and client.

For me, the beauty of Rust in this setup is that cross-compiling / cross-platform support is built into the tooling (Cargo). For example, the reqwest crate compiles down to use the browser's Fetch API when running in wasm, and the same crate on the server uses a native implementation based on OpenSSL (or rustls).


I did something similar making a game. The game logic runs server-side; however, in order to hide latency, the clients also run a WASM copy locally. Then, once the server processes their moves, they check that everything was in sync and, if not, reload with the server state.

(In practice the validation is probably not necessary but doesn't hurt to have).


I even use druid for a simple browser gui on top of a rust json rest service. For an internal tool. Serde on both ends. Works great.


.NET does this with Bolero. I need to give it a go.


asm.js suffices for that.


Asm.js is a non-standard precursor to wasm.


Do you know a technique, based on asm.js, that lets you use Rust or Go in the browser?


A large part of that would be better support for integer math, and 64-bit in particular for SHA-1.


Why are 64-bit integers useful for SHA-1 which uses 32-bit words?


The full product of 32-bit multiply is 64-bit.


I don't think SHA-1 uses any multiplications, only bitwise operations (not, and, or, xor), addition and rotation.
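Indeed, the SHA-1 round primitives are only rotation, bitwise logic, and mod-2^32 addition; in JS they look like this (a sketch of the primitives, not a full implementation):

```javascript
// 32-bit left rotation, the core SHA-1 primitive; no multiplies involved.
const rotl = (x, n) => ((x << n) | (x >>> (32 - n))) >>> 0;

// Addition modulo 2^32 (">>> 0" wraps the result back to unsigned 32-bit).
const add32 = (a, b) => (a + b) >>> 0;

// SHA-1's per-round "f" functions: pure bitwise operations.
const f0 = (b, c, d) => ((b & c) | (~b & d)) >>> 0;          // rounds 0-19
const f1 = (b, c, d) => (b ^ c ^ d) >>> 0;                   // rounds 20-39, 60-79
const f2 = (b, c, d) => ((b & c) | (b & d) | (c & d)) >>> 0; // rounds 40-59
```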


Yeah, WebAssembly has i64/u64 types as first-class citizens, unlike JavaScript, which has to emulate them or use BigInt, which is drastically slower than native 64-bit types. That's why crypto algorithms got a lot of speed benefits. AssemblyScript also shows this. See these:

https://github.com/FriendlyCaptcha/friendly-pow
https://github.com/hugomrdias/rabin-wasm
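To illustrate the emulation cost: without native i64, a 64-bit add in JS has to be split into two 32-bit halves with manual carry handling, or use BigInt, which allocates on every operation. A sketch (function name is made up):

```javascript
// 64-bit addition emulated with two unsigned 32-bit halves, roughly what
// compiled-to-JS code had to do before wasm's native i64. Inputs are
// assumed to already be unsigned 32-bit values.
function add64(aHi, aLo, bHi, bLo) {
  const lo = (aLo + bLo) >>> 0;           // low word, wrapped to 32 bits
  const carry = lo < (aLo >>> 0) ? 1 : 0; // did the low word overflow?
  const hi = (aHi + bHi + carry) >>> 0;   // high word absorbs the carry
  return [hi, lo];
}

// The BigInt alternative is correct but heap-allocates per operation.
const viaBigInt = 0xFFFFFFFFn + 1n; // 0x100000000n
```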




