I have never understood why footnotes are added at the bottom when we have an interactive medium at hand. Why not leverage <details> and <summary> to show them in-place, without breaking the flow and without listing them all at the bottom?
See your comment as an example. Why list all links at the bottom rather than in-place?
On HN it makes sense to me, because a long link would interrupt the flow of text. So you do a footnote to make them go out of the way. Otherwise Wikipedia has a happy medium I think; footnotes are on the bottom where you'd expect, but they show on hover so you don't have to jump there and then back.
I find inline links incredibly disruptive to my reading flow, the change in color makes my eyes start jumping around in the text. Wikipedia especially is absolutely hopeless, to the point where I've built a mirror that removes all inline links. (A page looks like this: https://encyclopedia.marginalia.nu/wiki/Hyperlink )
Just make it the same color. For me, links on HN are underlined but the same color as regular text, or a slightly dimmer gray if already visited. It's one of the first things I do when I install a new browser, and if it's hard, that browser doesn't stay installed long.
Just found this FF add-on [1], yesterday, which removes all links from a page. Works reasonably well. Can also invoke reader view after removing links and get the benefits there.
There must be other reasons, if it doesn't make sense technologically, no? Instagram also has the tech to show more than 3 images in a row, and Twitter could allow longer texts if they wanted to.
I think endnotes are typically an awful idea, and the web can’t do footnotes. What you want is generally side notes. I think what I do on my website is a fairly good compromise for JavaScript-free operation, with a side column for notes on large enough screens, and the notes inlined on small screens. That wouldn’t be suitable for very long notes; such are often more suitable as appendixes, a variant on endnotes.
As for <details>, you run into the problem that it’s a block-level element; it’s dubious using it as an inline-level element, though it’ll probably work well enough (I say probably due to uncertainty about screen readers) despite being nominally invalid, given that it’s not an element that will automatically close a paragraph tag like <div> does.
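For illustration, a minimal sketch of the inline-footnote idea (the class name is invented; as noted above, nesting <details> in a <p> is nominally invalid, so test with a validator and a screen reader before relying on it):

```html
<p>
  The claim holds in most cases<details class="fn">
    <summary>1</summary>
    Except when the input is empty; see the appendix.
  </details> and generalizes easily.
</p>

<style>
  /* force the nominally block-level elements to render inline */
  details.fn, details.fn summary { display: inline; }
  details.fn summary {
    cursor: pointer;
    font-size: smaller;
    vertical-align: super; /* style the toggle like a footnote marker */
  }
</style>
```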
I think links at the bottom is generally foolish, taking more effort for both reader and writer, and never do it that way, interspersing them in the text, usually surrounded by angle brackets as has historically been the way of delimiting URLs in plain text.
On my website, I use margin notes for the desktop and "footnotes" for mobile; however, said footnotes are displayed just below the paragraph in which they appear.
This avoids requiring both long scrolling and interactivity.
For my site, there are other options that I'd like to explore: tooltips for smaller things (like definitions), maybe sidebar notes for large screen sizes, and inline notes (show/hide, inserting the note between that line and the next) for mobile.
You gotta dance with the browser that brought you though. Unless you plan to never publish you’ve gotta design the best interface you can with the constraints of your real users.
Ah, <details> and <summary>. The most glorious HTML5 elements of them all. Use and abuse these for all sorts of custom yet native capabilities, like <select>s with custom styles and proper keyboard/native controls.
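For anyone curious, the <select>-replacement hack looks roughly like this (class names invented; a production version needs ARIA roles and key handling on top of what the native element provides):

```html
<details class="dropdown">
  <summary>Choose a flavor</summary>
  <ul>
    <li><button type="button">Vanilla</button></li>
    <li><button type="button">Chocolate</button></li>
  </ul>
</details>

<style>
  .dropdown { position: relative; display: inline-block; }
  .dropdown ul {
    position: absolute;   /* the panel overlays the content below it */
    margin: 0; padding: 0.25em; list-style: none;
    border: 1px solid #ccc; background: #fff;
  }
</style>
```

You get open/close toggling and basic keyboard focus for free from the native element; the rest is styling.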
This honestly looks like a huge PITA compared to even the worst static site generator’s syntax. Is this actually supposed to be used as end-developer templates?
True, but keeping to the spirit of the comment I replied to, I’d prefer it to be HTML only. Something like <template src="header.html"> and it just uses a relative or absolute path.
Building a website in raw HTML/CSS is much easier than the equivalent in javascript-framework-du-jour. It's also much lighter on resources both on server and client. It's a win-win situation for everyone, especially for clients with less powerful computers.
In "modern" stacks it is preferable to do things like layouting of formulas millions of times on the clients, instead of once on the server. I guess that sort of thing needs JS.
As you have pointed out, but formulated more explicitly: client-side rendering should deal only with HTML/CSS, because that's what the browser is built and optimized for. Every line of script changing the DOM (the HTML structure) may trigger a redraw of the page, which means wasting a considerable amount of resources! But even if your script outputs HTML only once, you still have O(n) HTML templating instead of O(1) for n clients. Such a waste!
~~With progressive enhancement, you could arrange to display only the current subpage. When Javascript is off, the whole page would render as a long-ish HTML document. Which is indeed no issue at all with the sitemap in a sidebar.~~
One suggestion, set the overflow to "scroll" so the scrollbar is always visible. When I open a section it appears, adding like 10px on the right and all the content moves left.
I still remember the first time I used google maps. It was the first example I ever saw of what was then called AJAX, and it blew my mind. Without javascript, google maps would have been the same as mapquest (or any other mapping sites from that era): a full page refresh to move or zoom on the map. Javascript was the differentiator that made google maps the winner.
Did you manage to sell Romulus to any gov customers? I built and sold a similar solution on top of SAP Hybris, which we sold to several government departments around the world. It’s a very hard sell even with the world’s largest software sales organisation behind you.
The start was in political offices, which need CRMs and are motivated to move or they're fired. Constituent service satisfaction is one of the top indicators of being re-elected.
Moving into permanent government departments is more of a pain but we did see some success there.
Ultimately, though, the trough between early adopters and the mainstream is dishearteningly deep, and there aren't enough early adopters to build momentum. Not for us anyway.
This is something I've always wanted for virtual textbooks. A single .html file with all the JS, CSS, images as inline data blobs, etc. For most physics, math, CS, etc books, this should be possible. It would make it trivial to download the book for offline use and share it with others. Other than the raster images, the entire content of these types of books would probably fit in a couple MB of text. The use of #anchors or other smart URL manipulation would also make it easy to share internal hyperlinks if you want to tell a friend to look at chapter/section #foo_bar of the book.
An epub is indeed a zip file containing HTML, CSS, font, image, and metadata files.
I'd argue that that's not meaningfully different from OP's suggestion of an HTML file with all the content inlined, though. It's still a single file grouping everything required together and can be easily edited and read practically anywhere. It has a few advantages over the inlined-content HTML file, too:
- You can read the compressed file directly, an epub being typically half the size of the uncompressed files (going by a quick test of 30 randomly-selected files I had on hand).
- Storing the actual JPEG, PNG, OTF, etc files inside the zip is more efficient than inlining them as base64 and then making the browser decode them, in terms of both speed and filesize.
- While reading an epub, different sections can be different HTML files, and only one needs to be loaded into memory at a time. This can be irrelevant for smaller things but it can make a big difference sometimes--with pages that include many charts and tables, documentation for graphics libraries that include images and animations for each documented function, etc.
- Epub files have native support for highlighting and bookmarking, to keep your place in long documents and share the file with your highlights attached.
Keep in mind that single integrated book formats are highly viable now.
It's in the interest of publishers not to offer them, however.
A chief example that comes to mind is the Feynman Lectures on Physics series, which is available online, but only on a chapter-by-chapter basis in HTML format. (Quite beautifully formatted, FWIW.) If you want to glue those together into a single integrated whole, you'll have to do that yourself.
It makes sense from a DRM standpoint - PDFs and ePubs are easy to crack if you offer them at all, so the only way to fix this is to go web-only and assume nobody will want to dig into turning that into a sharable PDF. Bonus points if you make it a SaaS test platform and get universities to buy into it, forcing every student to purchase the 'book' to receive grades.
HTML can be responsive, like an electronic document should.
PDF was designed to faithfully represent paper, and it has all the fluidity and customizability of a stack of printed paper. It's also completely anti-semantic: it has no document structure besides pages, and each page just describes how to put ink onto paper.
I think that the principal application area of PDF is just that: to represent paper, for printing purposes. For everything else, it's not exactly great.
PDF has a limitation: it defines the page size and fixes the layout of everything to that page, and thus can't do responsive layout like HTML can (reorganizing stuff on the page to match the screen size).
I wouldn't call that a limitation. In fact that is one of the major selling points of the PDF format. The document looks as intended* by the creator on all platforms whether display or print media.
* Yes, I know, PDF doesn't -always- do this, but a well designed PDF generally does.
It is a limitation. Anyone who's ever tried to read a PDF journal article on their phone has experienced it. (and don't even get me started on Unicode copy/paste...)
HTML can be styled in a fixed layout if desired, and reflowed by a reader mode if needed. PDF can't be styled in a responsive way, and there's no (easily accessible) reader mode equivalent for PDFs.
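To make the first half of that concrete, here's one way a single stylesheet can give a fixed, page-like column that still reflows on narrow screens (the sizes are arbitrary):

```html
<style>
  /* fixed, print-like column on large screens */
  article { width: 21cm; margin: 0 auto; }

  /* reflow to the viewport when the fixed column wouldn't fit */
  @media (max-width: 22cm) {
    article { width: auto; padding: 0 1em; }
  }
</style>
```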
True, but the rigid layout is what makes PDF unsuitable for some use cases. Its main use is to represent a printed page of a very specific size. Anything outside that just requires too much flexibility.
The dimensioning problem isn't PDFs. The dimensioning problem is computer displays.
Get yourself an e-ink display of 10" or 13" (standard dimensions offered by the patent-monopoly vendor across multiple OEMs), and discover that online reading of PDFs is 1) quite pleasant (so long as the underlying PDF formatting itself is sane) and 2) vastly preferable to either HTML or "fluid" ePub or Mobi file formats.
Book formats developed over about 500 years, largely guided by the capabilities and limitations of human eyes and hands. Typical mass-market books range in size from roughly 6" to 12" diagonal measure. Yes, there are smaller and larger formats, but these are deviations from the norm and impose compromises for other concerns (portability for smaller formats, resolution for larger ones, typically pictorial or graphical in nature).
A 5" or 6" mobile device presents less display area than an index card. Laptop displays are too short to display a portrait-mode document one page at a time, and in almost all cases too small to present a 2-page up display.
When wedding PDFs with an appropriate display technology, the frustrations of fixed-proportion PDF display disappear.
This does rely on the PDF being dimensioned for a typical book size, though there's considerable flexibility here; any dimensions from ~6" to well over 12" will tend to be readable, and there's no need to precisely match device to document size.
I'm saying this as someone who's long railed against PDFs for documentation. My mind's been changed.
>Get yourself an e-ink display of 10" or 13" (standard dimensions offered by the patent-monopoly vendor across multiple OEMs), and discover that online reading of PDFs is 1) quite pleasant (so long as the underlying PDF formatting itself is sane) and 2) vastly preferable to either HTML or "fluid" ePub or Mobi file formats.
Notably this is not the standard size for most e-ink devices though, on which pdfs are a pain to read. Mobi/epub files, in contrast, are fantastic on my kindle (and on my phone, and on desktop).
If your argument has to boil down to "this file format is great if you just buy a specific device for viewing them, and eschew viewing them on any of the other devices you already own and use more frequently", I'd say your argument provides more evidence for the counterpoint than for the one you're arguing.
I'm happy you found a good way to consume a fundamentally outdated format, but PDFs are a bad format for the majority of use cases.
Oddly enough, my purchase decision was driven specifically by considerations of size, resolution, and suitedness to task.
Again: at 8", e-ink is pretty broadly useful. If you're frequently reading scanned-in journal articles, the 10" or 13" devices shine, though these can be accessed on smaller screens using in-page zoom-and-scroll. (Onyx BOOX has several settings for this in its NeoReader app.)
Note-taking, which was not a use I anticipated, also happens to be something these devices are really well suited to.
Yes, you can read on a smaller device if you must. However, you're making the same sacrifices for mobility that are present in pocket-sized printed books, and the format is best suited to largely unformatted text (e.g., prose). Diagrams, tables, and other layouts translate quite poorly, and this is intrinsic to the display itself.
The one task to which the tablet format seems best suited is precisely e-book reading. So I've ditched the "smartphone" (a pocket snoop) and settled on laptop / desktop (productivity) + tablet (ebooks), and dedicated devices for specific other applications, most especially capture (audio, images, video).
PDFs actually have some support for javascript! I don't know any serious use for it and some PDF viewers refuse to support it (ex: Apple's Preview app), but it does exist.
Interestingly, the support for mht/multipart is still there in Chrome and Firefox; or at least it was relatively recently. However it only works for files and not when accessing an mht file over http. The format itself is relatively simple.
That could easily be made into a PWA that people could save and view offline. I've been doing this with create-react-app and markdown files (muuuch bulkier solution), but it's an offline docs-like SPA people can click around with interactive examples and shareable links etc. Seems like offline availability was not a priority for other docs-making tools like Docusaurus etc (last time I checked).
Please, don't make me run React to read your document, and don't make my device parse Markdown and generate HTML on each visit. This is wasteful. I and the planet should not suffer because you decided to author your book using Markdown (which is a fine format, but my browser does not understand it natively). A virtual book is best served as plain old static HTML pages. Shareable links are indeed an impressive feature, but they have been a given on the Web since 1991; I think we don't need to be impressed by them in 2021.
You can add some JS here and there for the few really interactive elements of the document but my browser already has all the features to render documents and links perfectly fine. People have been able to "click around" since 1991 and we never needed to download, parse and execute 2MB of JS for this.
Your book is probably big, and I'm probably not reading it in one go, so if it includes images and videos, downloading it all is probably unnecessary and the book is probably best split in several HTML pages. If you want to allow me to consult it offline, that's very kind and noble. Just put a zip file somewhere I can download.
Sorry for the rant, but I'm a bit fed up with having to download and run megabytes of Javascript I can't control (or even read, because yay, bundles!!) to browse the web, just because.
> If you want to allow me to consult it offline, that's very kind and noble. Just put a zip file somewhere I can download.
To play devil's advocate: a majority of web traffic is on phones and tablets now, especially for long-form content, where you will frequently see people request a page on a desktop, then request it two minutes later from a phone or tablet where they can read it more comfortably.

99% of mobile users will be happier when a text-heavy site is a PWA that caches itself, rather than a static HTML site that asks them to download a zip file, install an app to work with zip files on their device, unzip it to a folder of hopefully-relevantly-named HTML files, and then browse those, in the process breaking link sharing, link navigation (depending on OS), cross-device reading and referencing of highlights/notes, site search, and so on. Not to mention the limitations imposed on file:/// URIs, like browser extensions not working on them by default, which is a real problem for users relying on them for accessibility (e.g. dyslexia compensation, screen reader integration, stylesheet overrides).

A lot of times that won't even be possible on dedicated reading devices; my ereader will cache PWAs but will not download arbitrary files. If you make your site a PWA, I can read it during my commute; if you make it static HTML with a zip file, I can't. These are features most users appreciate a lot more than not having to load a 60k JS bundle (current size of React gzipped).
That would be perfect for books, but I'd rather they took advantage of the medium and made the examples interactive: I'd rather see how the results change when I change the inputs than have it all in one file.
Well, TiddlyWiki[1] has been doing it for 17(!) years. It's a very mature, polished, and extensible engine for wikis, blogs, and personal knowledge bases.
The entire thing (including the editor!) is a single .html file. By default, even images are embedded.
For my ADHD Wiki[2], a resource that talks about ADHD with copious amounts of relatable memes intertwined with the text, I chose to just use images in the same directory instead of embedding them; so you might need to do some work to download that page (I think File -> Save as.. can give you a readable static version in some browsers).
Anyway, somewhat surprising that people are stumbling into how much you can do with just one well-crafted .html file. Look ma, no node.js, no (no)SQL database, no nothing except for one file for one website (and that file isn't even that large, given what TiddlyWiki allows you to do).
TiddlyWiki can be run on node.js, but I don't see much reason to. If I want to make changes, I use the built-in editor, and then the "Save..." button generates me the .html of the updated version. Save it over the old one, upload over ftp, done. No deployment process to speak of.
And, at that, the feature set rivals (and, at times, exceeds) that of, say, Wikipedia.
(And for math nerds: it supports LaTeX via a KaTeX plugin. Maybe you can't copy-paste your entire thesis, but it's pretty damn close to real-time full-featured LaTeX).
The author seems to be fascinated by the concept of single-HTML website which uses anchor links for internal navigation to reveal or hide content instantly without page reload.
That's exactly what TiddlyWiki does.
If you are JS-averse, you can generate a static HTML version of the wiki as well without JS in it [1]. It doesn't use CSS tricks though to show/hide parts.
I've found having it all in a single html file tends to make it easy to survive on a lot of different kinds of networks, and I like that the larger context of the site is automatically attached to any particular part of it (which is invaluable for my work). To my eyes, it's what a PDF dreams it could be.
Also, the ability to download & email the one file that is everything is invaluable.
Never have to worry about the editing environment when porting from one machine to another. Never have to worry about version compatibility and stuff like that.
HTML+JS are mature enough that we can reliably expect the software required to open and edit your notes (i.e., a web browser) to reliably exist in the future. Can't say the same about most other formats.
Even LaTeX, with its version-control-friendly text file format, very quickly runs into portability issues with package management. Someone gives you a .tex file - good luck trying to compile it without Internet connection.
All fair points in many contexts, but not all contexts. Some counters:
1. You can embed CSS easily, and images using data: URIs...
2. ... so you can download the whole thing as a single file and work offline.²
3. Non-standard clients are not my problem, file under “you can't please everyone, especially those who chose to be difficult to please!”. Accessibility is a concern though, I'd need to look into that before using such techniques without a reliable fallback.
4. This isn't practical for sites/pages needing large resources like high-res images, or that are large generally¹, but not all sites/pages need large resources or are large in general.
5. Not everything needs to be indexed well by search engines, I have things out there that are only relevant to those I refer to them, though I agree this could be a significant issue for many.
6. True. Though that breaks your second point, so you need to choose.
----
[1] you wouldn't want wikipedia done this way!
[2] also with external resources almost all browsers will happily save them when you do file|save, and to be pedantic the description given is “in a single html file” not “in a single file”
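Points 1 and 2 above in sketch form: a page whose stylesheet and image both travel in the one file (the base64 payload here is truncated placeholder text, not a real image):

```html
<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <style>
    /* embedded stylesheet: no separate .css request */
    body { max-width: 40em; margin: 0 auto; font-family: serif; }
  </style>
</head>
<body>
  <p>Everything this page needs travels in a single file.</p>
  <!-- image embedded as a data: URI instead of a separate request -->
  <img alt="logo" src="data:image/png;base64,iVBORw0KGg...">
</body>
</html>
```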
loading="lazy" is for images that are NOT embedded in the same file. So we either have entire site in a single HTML file or we have scalability for large stes. There's no solution that gives us both.
It depends on what your goal is. If you just want to make a single request to the web server, then loading="lazy" will not work, as you said. (Technically speaking, TCP sends multiple packets anyway, resulting in higher latency, so not sure if that is a great goal.)
But if you just want to be able to save the entire website with Ctrl + S, then it works fine.
As an aside, loading="lazy" is the way in which images are embedded in the website from TFA https://i.imgur.com/wIkaE5g.png which was the reason why I mentioned it, although it certainly does not fit all possible use cases.
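For reference, the attribute under discussion; the browser defers the second fetch until the image approaches the viewport (filenames invented):

```html
<!-- fetched eagerly, as images always have been -->
<img src="hero.png" alt="Hero image">

<!-- fetch deferred until the user scrolls near it; it needs a URL to
     defer, which is why it can't apply to data already inlined in the file -->
<img src="figure-42.png" alt="Figure 42" loading="lazy">
```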
And how is 20MB js SPA with 20 wss connections more scalable?
I've seen too many react/vue projects bundle everything into a single main.js file, even pages I never click, e.g. some crazy map or graph module. Is there some magic in webpack to make sure only the functions that are needed get loaded, instead of everything eagerly?
Or does json provide streamable parsing capabilities?
If you are interested in building a single-page site, I really doubt that scaling is an issue you'll have to contend with; it seems like a waste to even consider it for something so small.
If you get hung up scaling a single page, you have other problems
This is a web site as a single file, using CSS only.
It's a CSS equivalent to a single-page application (SPA), except that this is a single-page site. SPS, perhaps, or maybe a multi-paged file (MPF).
Strictly, it requires CSS features which weren't part of the original web standards, though the concept's likely been possible since the early 2000s, if not the late 1990s.
It does rely on CSS support within the browser, and some simple browsers (mostly terminal-mode clients) won't present the multi-page aspect. The site / page itself remains useful.
A site is a website is the thing behind a domain name. Many were, and are, a single file. It's the rest of y'all who are overcomplicating things and somehow perverted that into thinking it's necessary. How many layers of meta are we at right now for this article to be interesting?
Put your script tags at the bottom of the html page so that when the JavaScript is executed all the dom elements on the page can be referenced. Either that or bootstrap in a window onload callback. I remember picking up and being in awe of secrets of the JavaScript ninja
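The two placements described, sketched side by side (the element id is invented):

```html
<!-- Option 1: script at the bottom of <body>, after the elements it uses -->
<body>
  <p id="greeting"></p>
  <script>
    // #greeting is already parsed by the time this runs
    document.getElementById('greeting').textContent = 'Hello';
  </script>
</body>

<!-- Option 2: script anywhere, bootstrapped in an onload callback -->
<script>
  window.addEventListener('load', function () {
    document.getElementById('greeting').textContent = 'Hello';
  });
</script>
```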
Here's an even bigger secret, if you're doing server-side rendering, putting things on top might actually be better.
You're maybe parallelizing things. As the client is downloading and interpreting the css and JavaScript, the server is doing database calls and rendering the HTML.
So you actually are doing things on the server side when you're "blocking" on the client. You didn't get this for free. You had to worry about buffers, flushing, and plenty of testing.
I don't know if you can still squeeze out an actual speedup with this technique (there are many attributes you can use to customize things these days), but I used to use it all the time back in the days of platter drives.
It's not, but with a little more imagination you could certainly take the author's work to the next level. Inline CSS and images as blobs.
Then in practice the issue becomes a bloated file that isn't able to reuse internal components, at least not without introducing a layer of complexity such as a scripting language and browser APIs to go with it.
Perhaps if HTML had been designed with more component re-usability in mind, the landscape of the internet would look very different today. Then again, what sense would there be in having a single HTML file for a site like Wikipedia or Altavista? And imagining the web evolving without a scripting language would be naive.
Single file websites that could be served up on blob or network storage certainly appeal to me, especially today.
Sounds like content-addressing would address (pun intended) your needs. I mean it's quite possible to build a IPFS/Bittorrent/DAT-based browser. I'd be interested in such a modern p2p-friendly browser minus the gigantic JS crap and attack surface.
My God, you mean it's possible to build a site that doesn't require 10MB of compressed stupid ass frameworks and preprocessors to turn your code into, uh, slightly different code? You can just... write HTML and CSS yourself? By hand? The deuce you say. :-D
I think this is substantially cleverer than merely "not using frameworks". Nobody's surprised you can do this without a framework, but I suspect a good number would be surprised you can do it without Javascript.
This wasn't a foundational web standard. The :target pseudoclass was introduced in CSS3.
The article is not another pointless potshot at frameworks, it's showing a clever use of a new standard to allow multiple pages in the same HTML document without using Javascript.
If you want to target multiple platforms (mobile, web, TV) with similar code, that 10MB, which high-value users can load in two seconds on their latest iPhone, is worth it.
Well, there is an audience that likes to avoid websites with JS. I think their biggest motivations are tracking and bloat. So having a technique for a user experience without load times (after the initial load) and without JS is what makes this somewhat special.
On the other hand, there have been solutions for this case for ages (like using radio buttons), so using :target is just a somewhat cleaner approach from my point of view.
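For readers who haven't seen it, the :target technique in miniature: each "page" is a section shown only when its id matches the URL fragment. (The ids and the last-of-type fallback for the no-fragment case are one common arrangement, not the article's exact code.)

```html
<style>
  .page { display: none; }          /* hide all "pages"… */
  .page:target { display: block; }  /* …except the one in the URL hash */

  /* with no fragment at all, fall back to showing the last section */
  .page:last-of-type { display: block; }
  .page:target ~ .page:last-of-type { display: none; }
</style>

<nav>
  <a href="#about">About</a>
  <a href="#contact">Contact</a>
  <a href="#home">Home</a>
</nav>

<section id="about" class="page"><h1>About</h1></section>
<section id="contact" class="page"><h1>Contact</h1></section>
<section id="home" class="page"><h1>Home</h1></section>
```

Clicking a nav link changes only the fragment, so no page reload happens; the browser just re-evaluates which section matches :target.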
Nobody else is really doing this, or was doing this. But you can't tell that it's different, that's what's neat about it. It works just like a multi-page site but there's not even any JS requirement.
A decade ago we had this niche vBulletin forum where members would self-publish news articles about the hobby. vBulletin BBCode is pretty limited, but then the admin started to add more and more styling support via custom BBCode.
At one point we were able to create "interactive" content entirely in our heavily customized BBCode without a single line of JS, and this :target trick was one of the most used.
That sounds like a pretty cool BBCode hack (and ultra-permissive by the admin considering...admins), I'd like to see it in action in 10-years-ago form.
I did a similar thing on my forum around 2005 but we even let members put CSS in their posts via BBCode… although it was rewritten, scoped and restricted.
But people made big “clubs” and then created these really cool semi-interactive posts using just CSS. It was really awesome what people could do.
That's the thing, I think most designers were probably using javascript even if alongside tricks like this without thinking it through. The current period in web design is one of the biggest "it's nice to have javascript turned off" moments we've had, so it's relevant.
If you already knew about this, the post may seem like a joke; maybe you've made millions off this technique for years now. That's really special, but it's also pretty subjective, and it doesn't mean it's not meaningful that one of the top CSS sites shared this article this year.
Yeah, mostly it's that people overestimate how much (or what kind of) tech prowess they need to convey to close a deal or offer a service that people want to pay for.
The only slightly interesting part (which is why it's hosted on CSS-Tricks) is not even explicitly mentioned in the article at all: the fact that it also uses no JS and is all CSS based. Obviously that itself isn't crazy, though back in the day it would've been harder to do with old CSS; that is, having proper navigation/sections.
Multiple displayed pages, and potentially an entire website, are presented within a single source file.
There are no^W^W is one external dependency (the stylesheet https://john-doe.neocities.org/style.css). This could also be inlined, and is required for the concept to work. There are 75 directives and/or media queries.
The site can be browsed entirely offline once accessed.
Lovely article and such an interesting way of creating single page apps. I never knew about this usage of the :target selector.
I think a major issue with this approach is just practicality: I'd want to write my content in something like Markdown, and we'd still need JS to convert the Markdown to HTML.
I have also been interested in making SPAs, but my way is more traditional: using JavaScript for the stitching, but only vanilla JS. I don't use any frameworks or bundlers for my SPAs. In fact, right now I am working on a simple blog: http://rishav-sharan.com using just HTML, vanilla JS and Tailwindcss.
If anyone is interested in my approach, I have an article detailing things there, or you can just read through the source. It's un-minified and fairly readable.
> Please enable Javascript to view this site. This is a Single Page Application and it needs JS to render.
That's the entire problem with JS: I can't browse a page without a complex rendering engine (arguably full of security vulnerabilities) or even scrape it. Something like webmention/microformats (IndieWeb) federation becomes almost impossible with your setup.
Also worth noting, rendering the Markdown on the client is super inefficient. First, because client-side JS will always be far slower than native code server-side: I first need to download the entire JS, then run it in a super slow sandbox. Second, because there are economies of scale to be had: it may take a few milliseconds to build the markup, but every client has to do it. For n clients, that's O(n) complexity vs O(1) for server-side rendering. So many CPU cycles wasted :)
The author of the site linked in the article suggests a couple of Markdown options for this on their blog page [1]. In this, they link a port of their website as a Jekyll theme [2].
Just a heads up: I get the “you need javascript” warning when using Firefox for iPad. I never had that with any website before, and yet I can switch between the two color schemes, which requires JS (I think).
I'm also more in favor of compiling the MD to HTML once and then serving pure HTML via the CDN. You could still keep it to the two steps you outlined in your blog post by running the build step and deployment using GitHub Actions.
Information presentation has practical and usability aspects.
Sometimes you want as much information as possible on a page. Sometimes you want one and only one portion presented. Which you choose depends very much on the application, user community, and objectives.
I have no idea whether or not the lightness of websites measured in kB is going to drive a trend.
But there is definitely a trend that our collective cost models are downshifting from “ah fuck it’ll be faster by the time this ships” to “well right this minute we’re not zero-effort scaling on the backs of the deep-infrastructure people”.
Maybe this perspective is perverse, but I personally find it cool that there’s both money and hacker cred in counting bytes, even megabytes, after a long time of “well the hardware will be better in 6 months so why bother”.
This reminds me strongly of the Info file format (itself generated from a Texinfo source file), in which an entire multi-page document is presented in a single file. See:
The difference is that Info requires a dedicated reader (the info or pinfo command-line utilities, or of course, Emacs), whilst this format will work with any graphical Web client.
It doesn't quite work as planned with a terminal-based browser. I've opened https://css-tricks.com/a-whole-website-in-a-single-html-file... with w3m, and rather than seeing only one of the intended "pages" at a time, the entire "site" is presented. That said, degradation is graceful, and navigation works.
A lot mentioned already - I put up a blog article a few days ago (in German) [1] done in HTML/CSS, with the icons and font embedded as data URIs. It has 77 kB total and can be downloaded as a single file, preserving the layout and font (FreeSans). It covers the little-known relationship between the proposed end of lignite mining in the eastern Lusatia region (2030-2038) and freshwater production in Berlin (the end of groundwater pumping into the Spree river). Since that reminded me (Hacker News being located in California) of the recent CleanTechnica article "The Warning Shot the US Is Ignoring: Climate Change Impacts on California Central Valley", I thought I'd share it.
I built a mobile web app for a conference that way [1]. It works really well and is way more performant on older mobile devices than the previous version, which was built in React.
The difference is they don't use JavaScript, a point many commenters apparently missed. This is similar to how you can make drop-down header menus in CSS only - no JavaScript. Implementing CSS dropdown menus was one of my first big web dev tasks, since many users still had computers that slowed to a crawl when JavaScript and DOM manipulation were involved.
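For anyone who hasn't seen the pattern, a minimal sketch of a CSS-only dropdown looks something like this (class names and layout values are illustrative, not from any particular site):

```html
<!-- CSS-only dropdown sketch: the submenu is hidden until its parent
     list item is hovered. No JS and no DOM manipulation involved. -->
<style>
  nav li          { position: relative; display: inline-block; }
  nav li ul       { display: none; position: absolute; left: 0; }
  nav li:hover ul { display: block; }
</style>
<nav>
  <ul>
    <li>Products
      <ul>
        <li><a href="/widgets">Widgets</a></li>
        <li><a href="/gadgets">Gadgets</a></li>
      </ul>
    </li>
  </ul>
</nav>
```

The browser's own `:hover` matching does all the state handling that a script would otherwise do.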
this is awesome! nice job on the minimalist approach. it looks very clean.
I'm obsessed with offline-first/offline-only (optional) and have been trying to build all my products with the underlying philosophy of single-file tooling and being “infra-less” in mind; meaning it doesn't care where it lives and is highly portable by default.
Here's a note-taking app that is all in a single HTML file. Images are base64'd and data is kept in IndexedDB. https://github.com/bkeating/nakedNV
This is in the same vein as the "checkbox hack". It's a great rabbit hole to go down. It's fun to try and figure out how to accomplish normal UI functions with no JS and no page reloads. My personal favorite was making a popup Confirm/Cancel dialog where you could still see your content behind it. Cancel made it disappear; Confirm triggered a CSS url() call for the action.
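The checkbox hack mentioned here can be sketched roughly as below - a hidden checkbox holds the open/closed state, and `<label>` elements toggle it. This is an illustrative reconstruction, not the commenter's actual code (the `/delete?id=42` URL and class names are made up):

```html
<!-- Checkbox-hack dialog sketch: checking the hidden box reveals the
     dialog; the Cancel label unchecks it, hiding the dialog again. -->
<style>
  #dlg-toggle { display: none; }
  .dialog     { display: none; position: fixed; inset: 30% 25%;
                background: #fff; border: 1px solid #888; padding: 1em; }
  #dlg-toggle:checked ~ .dialog { display: block; }
</style>
<input type="checkbox" id="dlg-toggle">
<label for="dlg-toggle">Delete item…</label>
<div class="dialog">
  <p>Really delete?</p>
  <a href="/delete?id=42">Confirm</a>      <!-- triggers the real action -->
  <label for="dlg-toggle">Cancel</label>   <!-- unchecks, closing the dialog -->
</div>
```

Because the dialog is an overlay rather than a new page, the content behind it stays visible, as described above.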
Here's a todo app with no JavaScript (not my work):
This is awesome, but a little JavaScript extension here and there is improving the user experience A LOT! If the website is growing, a site search is something I would personally recommend to have.
Still static though. I built my blog with Hugo (a static site generator) and it is hardly noticeable that it is completely static.
What's old is new again! Chris Coyier (the owner of CSS-Tricks) made a site using a similar trick in 2010, and there is a screencast of the technique on the same site as the linked article[0]. He did use jQuery to hide and show sections, but I imagine you could implement a lot of the animations from back then as CSS these days.
No, that's not the main idea, in fact it's not even an accurate description - the entire website is not in a single file. The website loads one HTML file, one CSS file and several image files.
The whole point here is you can do instant navigation to multiple pages that are contained within a single file without using JavaScript.
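The core of the trick is the `:target` pseudo-class: each "page" is a `<section>` with an `id`, and only the section matching the current URL fragment is displayed. A minimal sketch (ids and the default-section handling are illustrative; the linked site's actual markup may differ):

```html
<!-- :target sketch: one HTML file, several "pages". Navigating to
     #about changes only the URL fragment, so no request is made. -->
<style>
  section        { display: none; }
  section:target { display: block; }
  /* Show the last section (home) when no fragment is set... */
  section:last-of-type { display: block; }
  /* ...but hide it again whenever some other section is targeted. */
  section:target ~ section:last-of-type { display: none; }
</style>
<nav>
  <a href="#about">About</a>
  <a href="#contact">Contact</a>
  <a href="#home">Home</a>
</nav>
<section id="about"><h1>About</h1></section>
<section id="contact"><h1>Contact</h1></section>
<section id="home"><h1>Home</h1></section>
```

Clicking a link updates `location.hash`, the `:target` match changes, and the CSS swaps which section is visible - instant "navigation" with zero JavaScript and zero round trips.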
Haha. This reminds me of the dot-com days. I was doing the front end as a mammoth servlet. The entire UI was more or less in one multi-megabyte Java file. The architect of our group started adding in changes - which, for a while, I kept my nose above water editing - but eventually he knew I'd drowned. When I finally gave up, he went over JSPs and the newfangled 'MVC' pattern and really worked through some UI patterns with me. It was very much a learning moment for me for design and maintenance.
Right? Everyone is fawning over this basic HTML as if it's some amazing trick. The foreshadowing was the people pointing out that progressive web apps aren't necessary for most use cases, but now I really see I can't trust anybody's opinion here, because this is normie levels of perception at this point.
On one hand I'm excited about the future and I want to build apps for humans in spaceships, but on the other I'm very disappointed with how hard it is to get to the actual piece of information you're looking for on some of the websites filled with all this fancy stuff.
Sites like this, on the other hand, just don't waste my time, and I like this idea of simplicity. I wish we could have that everywhere, so we can be more user-friendly all around.
I sometimes convert pages to a single HTML file with SingleFile[0], which encapsulates all external assets into a single page. I have a small collection of pages which I can browse offline, and I also have them for posterity in case they suffer from link rot.
One of the comments mentions a notable problem: Safari does not support lazy loading of images, so every one of them will be downloaded even if it is never seen.
Single, self-contained HTML files are my passion. I created a self-contained version of my comic https://comic.browserling.com/browserling-comic.html. All comics in one file, no external dependencies or anything.
As a semi-related subject, I like to use the SingleFile extension to save pages as one static HTML file for archiving purposes.
Got a ton of saved pages and articles from HN in my dropbox that I can read where I want later without worrying about dead links, ghost edits and other live annoying things.
The only thing stopping me is components; sometimes you just need to reuse a certain HTML structure. Currently I'm using a custom script to generate HTML from a hyperscript-like syntax. It'll probably be better than any native API that enables components, but I'd still like to see one.
I may be missing the point here (so please correct me if I am), but 'anchor' has been around for ages. I have seen several tutorial sites which use such elements, typically to add a hint to some discussion. Is there something novel I have missed?
I love the idea, but wouldn't use it for bigger pages: it means you cannot link to individual sections within one "page", because then :target no longer applies to the top-level <section>, right?
The site looks really nice, but with today's total lack of optimization when it comes to size and code quality, I don't see this becoming a trend.
This was absolutely not possible 20 years ago. The only pseudoclasses available at that point were the anchor pseudoclasses and :first-line/:first-letter.[1]
What's being shown off here is not having a single page of a website in just HTML/CSS, but having all pages.
Not if you wanted multiple pages without Javascript it wasn't. Which is what's interesting about the site being described, although the article does a weak job explaining it.
I'm not moving the goalposts, my goalposts are exactly where they were in my first comment: The concept demonstrated in the article was not possible twenty years ago.
In fact I was very specific about what concepts were not possible twenty years ago: "What's being shown off here is not having a single page of a website in just HTML/CSS, but having all pages."
You're the one who proceeded to ignore that and make an unrelated claim about something different that was possible twenty years ago.
You can have all the pages with DHTML and using style ID/classes to select what is visible, nothing new about it, other than the artificial limitation of not using DHTML.
Of course it's an artificial limitation, the whole thing is an exercise in artificial limitation. Putting all pages in the same document is an artificial limitation.
They're artificial limitations exercised in pursuit of demonstrating that, with new CSS tools, something surprising can be done. Something which, as I stated in my original comment, could not be done twenty years ago: putting multiple pages in the same HTML file without Javascript.
While it's not a single HTML file, the framework Remix has been wonderful to work with and makes building complex websites and web apps without any client side JS exceptionally easy.
The trick is kinda meh, I feel. Yes, you can make a pretty cool app in a single HTML file with some inline JavaScript to boot. Still possible today, just ever more out of fashion.
No JS here though, just CSS using :target. That's pretty cool if you don't want a JS dependency but want more control over interactivity and presentation.
You're making vague gestures about something that is a specific issue these days. JS delivers all kinds of straight up garbage, some of it meant to do very ugly things, it can be turned off, tada, I'm still on the internet.
If people (like me) want to offer resources requiring none of that, why not encourage them, insofar as it makes the internet more efficient, a more helpful resource, and just way cooler in terms of chilling with all the crazy wasteful and annoying JS activity.
It's valid, relevant, and worthwhile. There's really no need to talk about being scared, as if this is some issue that requires a good talking to from dad.
This is an unusually defensive response. There's a big gap between "all kinds of straight up garbage, some of it meant to do very ugly things" and the tiny amount of JS that would replace what this does. This doesn't even enable any interesting functionality. You could simply put multiple html files on the same web server rather than putting everything into one file. The visitor to the website would not know the difference (a point made in the linked article).
Did you read what you wrote? You assumed and implied that I was _scared of JS_ without asking whether this was the case, then called my response unusually defensive. Let's not sweep it under the table.
Your original reply was really uncalled for. Now you're back to some combination of hand-waves and missing the details (which self and others have shared up and down the thread already).
Really, a more respectful, thoughtful approach is merited.
It would be like telling someone who has been using an IDE: did you know you can compile your code from the command line? And having that person genuinely be blown away.
would it blow your mind to know that the vast majority of .net devs I've ever worked with don't know anything about compiling or running a project/solution other than hitting that big green 'play' button in visual studio?
SPAs display in a single HTML URL, but are themselves (typically) comprised of multiple elements, including many CSS, JS, and data elements which are fetched dynamically.
The example URL is a complete website within a single HTML document with no external dependencies and no further round-trip requests. It is a single-page site (SPS) or perhaps a multi-page file (MPF).
You can open that URL, disable networking, and browse the entire site to your heart's content in any browser supporting CSS.
If you open the file in a terminal browser (lynx, w3m, elinks[2], etc.), you'll see the full site presented at once, as a single page, without needing to specifically navigate between them (you can scroll the full site). Though the intra-site navigation itself still works --- it just doesn't reveal or hide sections.
Single-file provisioning is specifically the point of this demonstration - and doing it using only HTML and CSS.
The concept of an "SPA" refers to the appearance rather than provisioning of the app, and inherently relies on Javascript (or an equivalent scripting capability) to interactively rewrite the display. It's possible to single-file an SPA. The characteristic isn't central to the SPA concept, and in practice implementation is typically anything but.
SPAs are not accessible without Javascript, and don't render at all in a terminal / console-mode browser. (Ask me how I know this...)
This is not an app. It's a website, or at least, multiple web pages, provisioned from a single HTML file.
Yes, this instance has several external references. I've noted CSS in an earlier comment, you mention image assets. The concept could be further optimised for portability by incorporating those inline.
Note that optimisations are also trade-offs. Inlined assets would mean duplication for a larger site. Which of those trade-offs are preferable or unwanted really depends on the specific goals.
But as a demonstration of an idea, this really is pretty elegant.
I prefer the denotation single-file site (SFS). Perhaps alternatively multi-page file (MPF), as there might still be multiple such pages on a site, or a site comprised of a mix of MPFs, SPAs, and traditional static single-page URLs.
I have definitely built React SPAs with inlined JS and CSS as far back as 5 years ago, it even inlined some SVGs for icons; it was literally a single HTML page (unlike the discussed page, BTW, which loads the CSS and the images with separate network requests).
+ collapsible sections with `details` and `summary`[0]
+ footnotes, with navigation to/from via anchor tags. You can even apply CSS to the currently selected footnote.[1]
+ Semantic web that is compatible with everything and has sensible defaults so you can focus on what you're actually doing!
+ Tiny deploys and page loads. Single KBs (with brotli compression) for long blog posts. Just `scp` and Nginx keeps serving.
I can't think of anything else I want. And when I think of it, I can probably build it on top.
[0]: https://maddo.xxx
[1]: https://maddo.xxx/thoughts/an-introduction-to-product-strate...
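The first two items in the list above can be sketched together in a few lines; this is an illustrative reconstruction (ids and wording are made up, not taken from the linked site):

```html
<!-- Sketch: a collapsible aside via <details>/<summary>, plus a
     footnote highlighted with :target when its anchor is followed. -->
<style>
  #fn1:target { background: #ffffcc; }  /* highlight the jumped-to note */
</style>
<p>The claim needs context<a href="#fn1" id="ref1"><sup>1</sup></a>.</p>
<details>
  <summary>Further reading (click to expand)</summary>
  <p>A longer digression that stays out of the main flow until opened.</p>
</details>
<p id="fn1">1. The footnote text. <a href="#ref1">↩</a></p>
```

Both work with zero JavaScript: `<details>` gets its open/close behavior from the browser, and `:target` restyles whichever element the URL fragment points at.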