Entire website in a single HTML file (css-tricks.com)
615 points by jaytaylor on Dec 24, 2021 | hide | past | favorite | 314 comments


HTML and CSS alone are really powerful. I decided not to use any JS on my personal site, but you can still have:

+ collapsible sections with `details` and `summary`[0]

+ footnotes, with navigation to/from with anchor tags. You can even apply CSS on the currently selected footnote.[1]

+ Semantic web that is compatible with everything and has sensible defaults so you can focus on what you're actually doing!

+ Tiny deploys and page loads. Single KBs (with brotli compression) for long blog posts. Just `scp` and Nginx keeps serving.

I can't think of anything else I want. And when I think of it, I can probably build it on top.

[0]: https://maddo.xxx

[1]: https://maddo.xxx/thoughts/an-introduction-to-product-strate...
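For anyone curious, a minimal sketch of both tricks (the element names and highlight style are my own illustration, not taken from the linked site):

```html
<!-- Collapsible section, no JS required -->
<details>
  <summary>Read more</summary>
  <p>Hidden until the reader expands it.</p>
</details>

<!-- Footnote with to/from navigation; :target styles the selected note -->
<p>Some claim.<a href="#fn1" id="ref1"><sup>[1]</sup></a></p>
<p id="fn1"><a href="#ref1">^</a> The footnote text.</p>

<style>
  :target { background: #fffbcc; } /* highlight whichever anchor was jumped to */
</style>
```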


I have never understood why footnotes are added to the bottom when we have an interactive medium available at hand? Why not leverage <details> and <summary> to show them in-place without breaking the flow and without listing them all at bottom?

See your comment as an example. Why list all links at the bottom rather than in-place?


On HN it makes sense to me, because a long link would interrupt the flow of text. So you do a footnote to make them go out of the way. Otherwise Wikipedia has a happy medium I think; footnotes are on the bottom where you'd expect, but they show on hover so you don't have to jump there and then back.


If only the Web had a way to make text clickable without showing the whole target URL.


I find inline links incredibly disruptive to my reading flow, the change in color makes my eyes start jumping around in the text. Wikipedia especially is absolutely hopeless, to the point where I've built a mirror that removes all inline links. (A page looks like this: https://encyclopedia.marginalia.nu/wiki/Hyperlink )


Just make it the same color. For me links on HN are underlined but the same color as regular text or a slightly dimmer gray if already visited. It's one of the first things I do when I install a new browser and if it's hard that browser doesn't stay installed long.


Just found this FF add-on [1], yesterday, which removes all links from a page. Works reasonably well. Can also invoke reader view after removing links and get the benefits there.

[1] https://addons.mozilla.org/en-GB/firefox/addon/nolinks/


Wouldn't a user script work just as well for this purpose? `document.querySelectorAll('a').forEach(a => { a.outerHTML = a.innerHTML; });`


Sure, that works too. You still get the megabyte pageloads though, and you need to carry the script with you everywhere you go.


Looks good. A bit out of date though


Yeah, I think there is a new data dump available; I will rebuild the HTML files from that soon.


There must be other reasons, if it doesn't make sense technologically, no? Instagram also has the tech to show more than 3 images in a row, and Twitter could allow longer texts if they wanted to.


Sarcasm acknowledged

Hiding in the URL is a favorite feature for scams and pranks


The technology just isn't there yet.


The reason is that trolls like to post links to shock sites or a google search for something terrible. Even worse, you could get rickrolled.


Some day we will have native browser support for the full URL showing in the status field.


Oh no cried the scammers, how will we ever again trick people into clicking a link


window.status to the rescue! We must override the long scam URL there!


Just wanted to say, the best part was acknowledging the worst was being rickrolled. xD


I think endnotes are typically an awful idea, and the web can’t do footnotes. What you want is generally side notes. I think what I do on my website is a fairly good compromise for JavaScript-free operation, with a side column for notes on large enough screens, and the notes inlined on small screens. That wouldn’t be suitable for very long notes; such are often more suitable as appendixes, a variant on endnotes.

As for <details>, you run into the problem that it’s a block-level element; it’s dubious using it as an inline-level element, though it’ll probably work well enough (I say probably due to uncertainty about screen readers) despite being nominally invalid, given that it’s not an element that will automatically close a paragraph tag like <div> does.

I think links at the bottom is generally foolish, taking more effort for both reader and writer, and never do it that way, interspersing them in the text, usually surrounded by angle brackets as has historically been the way of delimiting URLs in plain text.


On my website, I use margin notes for the desktop and "footnotes" for mobile; however, said footnotes are displayed just below the paragraph in which they appear.

This avoids requiring both long scrolling and interactivity.


Do you think you could elaborate on how this works? It seems like a very ergonomic alternative to traditional footnotes.


Good question.

For my site, there are other options that I'd like to explore. Tooltips for smaller things (like definitions), maybe sidebar notes for large screen sizes, and inline notes (like show/hide, inserting it between that line and the next) for mobile.

I'll look into those.


I like how the site being discussed implements footnotes with a hidden checkbox followed by <small> tag: https://john-doe.neocities.org/
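As I read that page, the trick is roughly the following (the ids and selectors here are my own reconstruction, not copied from the site):

```html
<!-- clicking the label toggles the invisible checkbox, which reveals the note -->
<label for="fn1"><sup>[1]</sup></label>
<input type="checkbox" id="fn1">
<small>The footnote text, revealed in place.</small>

<style>
  #fn1 { display: none; }                   /* hide the checkbox itself */
  #fn1 + small { display: none; }           /* note hidden by default */
  #fn1:checked + small { display: inline; } /* shown once toggled */
</style>
```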


> Why not leverage <details> and <summary> to show them in-place

Searchability. You want footnote content to be Ctrl+F-able.


A smart browser could understand footnotes and show them in searches.


You gotta dance with the browser that brought you though. Unless you plan to never publish you’ve gotta design the best interface you can with the constraints of your real users.


Ah, <details> and <summary>. The most glorious HTML5 elements of them all. Use and abuse these for all sorts of custom yet native capabilities, like <select>s with custom styles and proper keyboard/native controls.
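A rough sketch of the custom-select idea (purely illustrative; closing the list after a choice, and proper ARIA roles, still take extra work):

```html
<details class="select">
  <summary>Pick a color</summary>
  <ul>
    <li><button type="button">Red</button></li>
    <li><button type="button">Green</button></li>
  </ul>
</details>

<style>
  .select { position: relative; width: 12em; }
  .select summary { border: 1px solid #999; padding: 0.25em; cursor: pointer; }
  .select ul { position: absolute; width: 100%; margin: 0; padding: 0;
               list-style: none; border: 1px solid #999; background: #fff; }
</style>
```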


I’d want templating if I’m going to make more than two pages on a site.


HTML has that built in: https://developer.mozilla.org/en-US/docs/Web/Web_Components/...

You'd need some JS to make use of them though.


This honestly looks like a huge PITA compared to even the worst static site generator’s syntax. Is this actually supposed to be used as end-developer templates?


True, but keeping to the spirit of the comment I replied to, I’d prefer it to be HTML only. Something like <template src="header.html"> and it just uses a relative or absolute path.


Static site generators like Jekyll and Hugo are perfect for this :)


You can run React (and everything else) off a CDN.


React runs in the browser. You probably mean serve the files off a CDN


Yeah that's what I meant.


It can also run on the server with Server Side Rendering[0]. I believe some CDNs offer support for this, too.

[0]: https://reactjs.org/docs/react-dom-server.html


> HTML and CSS alone are really powerful

https://html.energy/


> Written in pure, raw HTML. Can you feel the energy?

Has several CSS files (one 2.5 KB) and a 13 KB minified JS file (https://www.statcounter.com/counter/counter.js) that does... something related to user tracking.

I don't get it.


The Patreon page isn't HTML. Guess there are limits.


getting zombocom vibes


You like to live in hard mode. I like that.


Building a website in raw HTML/CSS is much easier than the equivalent in javascript-framework-du-jour. It's also much lighter on resources both on server and client. It's a win-win situation for everyone, especially for clients with less powerful computers.


Building a website without JS is not hard.


Depends on what a website is for you.


The same it is for everyone since the beginning of WWW: a static page with images. The web is uniquely unsuitable for anything else.


So forget e-commerce? Amazon didn't do too badly and CGI was hot on the heels of HTML so I think "everyone" is a little exaggerated.


> So forget e-commerce?

Says (s)he and immediately follows up with

> Amazon didn't do too badly


My point is that the WWW was a lot more than static HTML and CSS pretty soon after its inception.


You're confusing displaying different information at different URLs with being "more than static HTML and CSS".


This is a very old fashioned point of view I’m afraid, things have moved on.


[flagged]


I know enough not to feel the need to demonstrate to some random guy on a forum that I know more about a topic.


May I ask how to create a Mathematics Stack Exchange clone without JS?


PHP and cocaine.


In the past Latex symbols were rendered on server as an image.

It was not pretty, but it worked virtually anywhere.


What part of a Mathematics Stack Exchange clone would require JS?


In "modern" stacks it is preferable to do things like laying out formulas millions of times on the clients, instead of once on the server. I guess that sort of thing needs JS.


As you have pointed out, but formulated more explicitly: client-side rendering should deal only with HTML/CSS because that's what the browser is built and optimized for. Every line of script changing the DOM (the HTML structure) may trigger a redraw of the page, which means wasting a considerable amount of resources! But even if your script outputs HTML only once, you still have O(n) HTML templating instead of O(1) for n clients. Such a waste!


The MathJax part.


MathJax is somewhat convenient but doesn't seem necessary at all.


Completely agree


How old are you?


~~With progressive enhancement, you could arrange to display only the current subpage. When Javascript is off, the whole page would render as a long-ish HTML document. Which is indeed no issue at all with the sitemap in a sidebar.~~

Edit: I obviously didn't read the article...


One suggestion, set the overflow to "scroll" so the scrollbar is always visible. When I open a section it appears, adding like 10px on the right and all the content moves left.


Thanks for sharing, beautiful site. Great typography.


Can you filter some of the content based on, say, selected keywords? #physics for physics and … not sure one can even do two keywords.


#physics:target #section-1, #physics:target #section-2{display:block}

then wrap the sections in <div id="physics">
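Putting those two pieces together (the section contents are placeholders), a plain `<a href="#physics">` link in the nav would then reveal only the tagged sections:

```html
<nav><a href="#physics">#physics</a></nav>

<div id="physics">
  <section id="section-1">A post about mechanics…</section>
  <section id="section-2">A post about optics…</section>
</div>

<style>
  #physics section { display: none; }            /* everything hidden by default */
  #physics:target #section-1,
  #physics:target #section-2 { display: block; } /* shown while #physics is the URL fragment */
</style>
```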


nice. most sites that use javascript don't need to... https://i.imgur.com/qaJYWit.png


I’m not sure that Google Maps is the best example of something that should work well without JS.


True, but even Google Maps could work without Javascript... It may not be as easy to use without it though.


I still remember the first time I used google maps. It was the first example I ever saw of what was then called AJAX, and it blew my mind. Without javascript, google maps would have been the same as mapquest (or any other mapping sites from that era): a full page refresh to move or zoom on the map. Javascript was the differentiator that made google maps the winner.


I'm not saying the experience is as good, but sometimes you need or want to browse without Javascript (at least some people).


Did you manage to sell Romulus to any gov customers? I built and sold a similar solution on top of SAP Hybris which we sold to several government departments around the world. It’s a very hard sell even with the world’s largest software sales organisation behind you.


We did, yes.

The start was in political offices, which need CRMs and are motivated to move or they're fired. Constituent service satisfaction is one of the top indicators of being re-elected.

Moving into permanent government departments is more of a pain but we did see some success there.

Ultimately, though, the trough between early adopters and getting mainstream is dishearteningly deep and there aren't enough early adopters to build momentum. Not for us, anyway.


This is something I've always wanted for virtual textbooks. A single .html file with all the JS, CSS, images as inline data blobs, etc. For most physics, math, CS, etc books, this should be possible. It would make it trivial to download the book for offline use and share it with others. Other than the raster images, the entire content of these types of books would probably fit in a couple MB of text. The use of #anchors or other smart URL manipulation would also make it easy to share internal hyperlinks if you want to tell a friend to look at chapter/section #foo_bar of the book.
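A sketch of what that could look like (the base64 payload is truncated here; `#foo_bar` is the kind of anchor meant above):

```html
<!-- raster image inlined as a data: URI, so there is no separate file to fetch -->
<img alt="Figure 1.2: free-body diagram"
     src="data:image/png;base64,iVBORw0KGgo...">

<!-- deep-linkable section: share book.html#foo_bar with a friend -->
<h2 id="foo_bar">2.3 Forces</h2>
```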


I think you've just described an EPUB?


I thought epub files were basically zip archives of all the required files


ePub, as with PDF, is a container-file format. (A ZIP file with a specified structure.)

If you want to consider that single or multiple files ... quickly veers into semantics.


An epub is indeed a zip file containing HTML, CSS, font, image, and metadata files.

I'd argue that that's not meaningfully different to OP's suggestion of a HTML file with all the content inlined, though. It's still a single file grouping everything required together and can be easily edited and read practically anywhere. It has a few advantages over the inlined-content HTML file, too:

- You can read the compressed file directly, an epub being typically half the size of the uncompressed files (going by a quick test of 30 randomly-selected files I had on hand).

- Storing the actual JPEG, PNG, OTF, etc files inside the zip is more efficient than inlining them as base64 and then making the browser decode them, in terms of both speed and filesize.

- While reading an epub, different sections can be different HTML files, and only one needs to be loaded into memory at a time. This can be irrelevant for smaller things, but it can make a big difference sometimes--with pages that include many charts and tables, documentation for graphics libraries that includes images and animations for each documented function, etc.

- Epub files have native support for highlighting and bookmarking, to keep your place in long documents and share the file with your highlights attached.


You could make an ePub of a single HTML file with all the binary blobs embedded.


Keep in mind that single integrated book formats are highly viable now.

It's in the interest of publishers not to offer them, however.

A chief example that comes to mind is the Feynman Lectures in Physics series, which are available online but only in a chapter-by-chapter basis in HTML format. (Quite beautifully formatted, FWIW.) If you want to glue those together into a single integrated whole, you'll have to do that yourself.

PDFs and ePubs afford the single-file format.


It makes sense from a DRM standpoint - PDFs and ePub are easy to crack if you offer it at all, so the only way to fix this is to go web-only and assume nobody will want to dig into turning that into a sharable PDF. Bonus points if you make it a SaaS test platform and get universities to buy into it, forcing every student to purchase the 'book' to receive grades.


The multi-page format makes access all the more inconvenient.

Copyright status means that anyone who glues together the set will find themselves pursued for infringement.

In practice, the question's moot as the Feynman Lectures are available via LibGen, ZLib, and similar resources.


> Bonus points if you make it a SaaS test platform and get universities to buy into it

Surely this exists already? It's too good and obvious an idea not to be taken already.


Multiple times over, yes, which is what the poster was alluding to.


You're describing PDF files, but unfortunately Adobe didn't invest much time or money into making them interactive with a scripting language.


Emphatically no.

HTML can be responsive, like an electronic document should.

PDF was designed to faithfully represent paper, and it has all the fluidity and customizability of a stack of printed paper. It's also completely anti-semantic: it has no document structure beside pages, and each page just describes how to put ink onto paper.

I think that the principal application area of PDF is just that: to represent paper, for printing purposes. For everything else, it's not exactly great.


PDF has a limitation: it defines page size and the layout of everything fixed to this page, and thus can't do responsive layout like HTML can (reorganizing stuff in the page to match the screen size)


I wouldn't call that a limitation. In fact that is one of the major selling points of the PDF format. The document looks as intended* by the creator on all platforms whether display or print media.

* Yes, I know, PDF doesn't -always- do this, but a well designed PDF generally does.


It is a limitation. Anyone who's ever tried to read a PDF journal article on their phone has experienced it. (and don't even get me started on Unicode copy/paste...)

HTML can be styled in a fixed layout if desired, and reflowed by a reader mode if needed. PDF can't be styled in a responsive way, and there's no (easily accessible) reader mode equivalent for PDFs.

HTML is a far better document format than PDF.


> there's no (easily accessible) reader mode equivalent for PDFs.

It’s called Liquid Mode in Reader, and it is, in fact, easily accessible.

> HTML is a far better document format than PDF.

HTML is better for some things, PDF for others. That's why PDF is widely used on the web when HTML is available.


Liquid Mode only works well for a subset of PDF files. And sometimes it doesn't work at all.


Reader mode only works well for a subset of HTML files, too.


Yes, of course.


A significant advantage of the two column journal format (although still they end up feeling pretty clunky on a phone).


True, but the rigid layout is what makes PDF unsuitable for some use cases. Its main use is to represent a printed page of a very specific size. Anything outside that just requires too much flexibility.


Feature.

The dimensioning problem isn't PDFs. The dimensioning problem is computer displays.

Get yourself an e-ink display of 10" or 13" (standard dimensions offered by the patent-monopoly vendor across multiple OEMs), and discover that online reading of PDFs is 1) quite pleasant (so long as the underlying PDF formatting itself is sane) and 2) vastly preferable to either HTML or "fluid" ePub or Mobi file formats.

Book formats developed over about 500 years largely guided by the capabilities and limitations of human eyes and hands. Typical mass-market books range in size from roughly 6" to 12" diagonal measure. Yes, there are smaller and larger formats, these are deviations from the norm and impose compromises for other concerns (portability for smaller formats, resolution for larger ones, typically pictoral or graphical in nature).

A 5" or 6" mobile device presents less display area than an index card. Laptop displays are too short to display a portrait-mode document one page at a time, and in almost all cases too small to present a 2-page up display.

(You can verify this yourself trivially at the Internet Archive using its BookReader, e.g., https://archive.org/details/UnderstandingPhotoTypesetting/pa...)

When wedding PDFs with an appropriate display technology, the frustrations fixed-proportion PDF display disappear.

This does rely on the PDF being dimensioned for a typical book size, though there's considerable flexibility here, and any dimensions from ~6" to well over 12" will tend to be readable, there's no need for precisely matching device to document size.

I'm saying this as someone who's long railed against PDFs for documentation. My mind's been changed.


>Get yourself an e-ink display of 10" or 13" (standard dimensions offered by the patent-monopoly vendor across multiple OEMs), and discover that online reading of PDFs is 1) quite pleasant (so long as the underlying PDF formatting itself is sane) and 2) vastly preferable to either HTML or "fluid" ePub or Mobi file formats.

Notably this is not the standard size for most e-ink devices though, on which pdfs are a pain to read. Mobi/epub files, in contrast, are fantastic on my kindle (and on my phone, and on desktop).

If your argument has to boil down to "this file format is great if you just buy a specific device for viewing them, and eschew viewing them on any of the other devices you already own and use more frequently", I'd say your argument provides more evidence for the counterpoint than for the one you're arguing.

I'm happy you found a good way to consume a fundamentally outdated format, but PDFs are a bad format for the majority of use cases.


Oddly enough, my purchase decision was driven specifically by considerations of size, resolution, and suitedness to task.

Again: at 8", e-ink is pretty broadly useful. If you're frequently reading scanned-in journal articles, the 10" or 13" devices shine, though these can be accessed on smaller screens using in-page zoom-and-scroll. (Onyx BOOX has several settings for this in its NeoReader app.)

Note-taking, which was not a use I anticipated using, also happens to be really well-suited.

Yes, you can read on a smaller device if you must. However you're making the same sacrifices for mobility that are present in pocket-sized printed books, and the format is best suited to largely unformatted text (e.g., prose). Diagrams, tables, and other layout translate quite poorly, and this is intrinsic to the display itself.

The one task to which the tablet format seems best suited is precisely e-book reading. So I've ditched the "smartphone" (a pocket snoop) and settled on laptop / desktop (productivity) + tablet (ebooks), and dedicated devices for specific other applications, most especially capture (audio, images, video).

"The Case Against Tablets"

https://joindiaspora.com/posts/880e5c403edb013918e1002590d8e...


> but unfortunately Adobe didn't invest much time or money into making them interactive with a scripting language.

Yes, they did. (Non-Adobe readers often don't support JS, though some do, but Adobe definitely built the support for interactivity.)

Also 3D content and a lot of other things most people probably aren't aware of, because they are peripheral to the common use cases of PDF.


PDFs actually have some support for javascript! I don't know any serious use for it and some PDF viewers refuse to support it (ex: Apple's Preview app), but it does exist.


Form verification for instance, if I remember correctly.


PDF would have probably been better without it but Adobe couldn't resist adding new features so they could flog more copies of Acrobat.


The problem with PDFs has always been that the layman can't create them without some expensive Adobe program.


This is categorically false.

There have always been other tools for producing PDF (E.g. TeX) and there are also tons of free converters, print-to-PDF drivers, etc.


I have never seen anyone I would remotely describe as a layman using TeX.


Any modern office suite (MS Office, Libreoffice, Google Apps) has PDF export. Many image editing applications can do it too.


macOS hasn’t had this problem in 20 years.


Yes, it's literally just Print dialogue -> Save as pdf.

Also worth noting that the PDF preview on the Mac has very nice simple editing capabilities to combine pages, delete pages, crop etc.


> Adobe didn't invest much time or money into making them interactive with a scripting language.

yes they did. JavaScript, though I have no idea if there is a DOM or anything like it. pdf is kinda messed up in ways like that.


Think about how much worse PDF exploits would be if they did!



You can even embed images with a src converted to base64.

This method even works well with the back/forward browser buttons, something that a naive show/hide JavaScript solution wouldn’t.


Or even store the whole page as a self-extracting zip file. This page is a zip file: https://gildas-lormeau.github.io/


MHT did this 20 years ago and got all but abandoned (probably for good reasons).


Web bundles is supposed to replace it, cf. https://web.dev/web-bundles/


Interestingly, the support for mht/multipart is still there in Chrome and Firefox; or at least it was relatively recently. However it only works for files and not when accessing an mht file over http. The format itself is relatively simple.


Yeah it's pretty much the MIME email format as a container for a webpage.

A lot of the issues "solved" in modern frameworks could have been addressed by using this, but instead things went a different path.


That could easily be made into a PWA that people could save and view offline. I've been doing this with create-react-app and markdown files (muuuch bulkier solution) but it's an offline docs-like SPA people can click around with interactive examples and shareable links etc. Seems like offline availability was not a priority for other docs-making tools like Docusaurus etc (last time I checked)


Please, don't make me run React to read your document, and don't make my device parse Markdown and generate HTML on each visit. This is wasteful. I and the planet should not suffer because you decided to author your book using Markdown (which is a fine format, but my browser does not understand it natively). A virtual book is best served as plain old static HTML pages. Shareable links is indeed an impressive feature, but it has been a given on the Web since 1991, I think we don't need to be impressed by it in 2021.

You can add some JS here and there for the few really interactive elements of the document but my browser already has all the features to render documents and links perfectly fine. People have been able to "click around" since 1991 and we never needed to download, parse and execute 2MB of JS for this.

Your book is probably big, and I'm probably not reading it in one go, so if it includes images and videos, downloading it all is probably unnecessary and the book is probably best split in several HTML pages. If you want to allow me to consult it offline, that's very kind and noble. Just put a zip file somewhere I can download.

Sorry for the rant, but I'm a bit fed up with having to download and run megabytes of Javascript I can't control (or even read, because yay, bundles!!) to browse the web, just because.


> If you want to allow me to consult it offline, that's very kind and noble. Just put a zip file somewhere I can download.

To play devil's advocate: a majority of web traffic is on phones and tablets now, especially for long-form content where you will frequently see people request a page on a desktop, then request it two minutes later from a phone or tablet where they can read it more comfortably. 99% of mobile users will be happier when a text-heavy site is a PWA that caches itself, rather than a static HTML site that asks them to download a zip file, install an app to work with zip files on their device, unzip it to a folder of hopefully-relevantly-named HTML files, and then browse those, in the process breaking link sharing, link navigation (depending on OS), cross-device reading and referencing of highlights/notes, site search, and so on. Not to mention the limitations imposed on file:/// URIs, like browser extensions not working on them by default, which is a real problem for users relying on them for accessibility (e.g. dyslexia compensation, screen reader integration, stylesheet overrides). A lot of times that won't even be possible on a dedicated reading devices; my ereader will cache PWAs but will not download arbitrary files, if you make your site a PWA I can read it during my commute, if you make it static HTML with a zip file I can't. These are features most users appreciate a lot more than not having to load a 60k JS bundle (current size of React gzipped).


You want offline viewing?

Chrome: File > Save Page As... > Webpage, Complete

Safari: File > Save As... > Web Archive

My god, it's just text files and images, you don't need JavaScript.


And images can also be embedded. These days you can fake a real URL completely on the frontend side.


I absolutely hate it when people do this.


What do you mean by faking a real URL on frontend side? I'm genuinely interested.


Isn't that what .epub is for?


That would be perfect for books, but I'd rather they take advantage of the medium and make the examples interactive: I'd rather see how the results change when I change the inputs than have it all in one file.


I don't see why you couldn't handle images with base64 data urls...


Well, TiddlyWiki[1] has been doing it for 17(!) years. It's a very mature, polished, and extensible engine for wikis, blogs, and personal knowledge bases.

The entire thing (including the editor!) is a single .html file. By default, even images are embedded.

For my ADHD Wiki[2], a resource that talks about ADHD with copious amounts of relatable memes intertwined with the text, I chose to just use images in the same directory instead of embedding them; so you might need to do some work to download that page (I think File -> Save as.. can give you a readable static version on some browsers).

Anyway, somewhat surprising that people are stumbling into how much you can do with just one well-crafted .html file. Look ma, no node.js, no (no)SQL database, no nothing except for one file for one website (and that file isn't even that large, given what TiddlyWiki allows you to do).

TiddlyWiki can be run on node.js, but I don't see much reason to. If I want to make changes, I use the built-in editor, and then the "Save..." button generates me the .html of the updated version. Save it over the old one, upload over ftp, done. No deployment process to speak of.

And, at that, the feature set rivals (and, at times, exceeds) that of, say, Wikipedia.

(And for math nerds: it supports LaTeX via a KaTeX plugin. Maybe you can't copy-paste your entire thesis, but it's pretty damn close to real-time full-featured LaTeX).

[1]https://tiddlywiki.com/

[2]https://romankogan.net/adhd


Yes, TiddlyWiki is great, but isn't it full of JS? I mean, the somewhat special thing here, is that the page is built without JS.


And why does it matter?

The author seems to be fascinated by the concept of single-HTML website which uses anchor links for internal navigation to reveal or hide content instantly without page reload.

That's exactly what TiddlyWiki does.

If you are JS-averse, you can generate a static HTML version of the wiki as well without JS in it [1]. It doesn't use CSS tricks though to show/hide parts.

[1] https://tiddlywiki.com/static/Generating%2520Static%2520Site...


My TW: https://philosopher.life/

I've found having it all in a single html file tends to make it easy to survive on a lot of different kinds of networks, and I like that the larger context of the site is automatically attached to any particular part of it (which is invaluable for my work). To my eyes, it's what a PDF dreams it could be.


Also, the ability to download & email the one file that is everything is invaluable.

Never have to worry about the editing environment when porting from one machine to another. Never have to worry about version compatibility and stuff like that.

HTML+JS are mature enough that we can reliably expect the software required to open and edit your notes (i.e., a web browser) to reliably exist in the future. Can't say the same about most other formats.

Even LaTeX, with its version-control-friendly text file format, very quickly runs into portability issues with package management. Someone gives you a .tex file - good luck trying to compile it without Internet connection.

PS: awesome website and art, thanks for sharing!


So to sum up what I'm reading, it's a neat little demonstration of the :target pseudoclass.

But:

- No, the entire website isn't a single HTML file (it's also images and a CSS file)

- So no, you can't download a single file and have it work offline

- And no, this won't work well with screenreaders and other non-standard clients

- And no, this doesn't scale well to larger sites

- And no, this won't be indexed well by search engines

- And no, you don't need this to avoid Javascript navigation (just use multiple HTML files)


All fair points in many contexts, but not all contexts. Some counters:

1. You can embed CSS easily, and images using data: URIs...

2. ... so you can download the whole thing as a single file and work offline.²

3. Non-standard clients are not my problem, file under “you can't please everyone, especially those who chose to be difficult to please!”. Accessibility is a concern though, I'd need to look into that before using such techniques without a reliable fallback.

4. This isn't practical for sites/pages needing large resources like high-res images, or that are large generally¹, but not all sites/pages need large resources or are large in general.

5. Not everything needs to be indexed well by search engines, I have things out there that are only relevant to those I refer to them, though I agree this could be a significant issue for many.

6. True. Though that breaks your second point, so you need to choose.

----

[1] you wouldn't want wikipedia done this way!

[2] also with external resources almost all browsers will happily save them when you do file|save, and to be pedantic the description given is “in a single html file” not “in a single file”
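For point 1, concretely, the pattern is just this (a sketch; the GIF below is the classic 1×1 transparent placeholder):

```html
<!-- stylesheet inlined instead of <link rel="stylesheet" href="style.css"> -->
<style>
  body { max-width: 40em; margin: auto; }
</style>

<!-- image embedded as a base64 data: URI -->
<img alt="" width="1" height="1"
     src="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7">
```

With everything inlined this way, the one HTML file really is the whole deliverable.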


Out of curiosity, what aspect of the site would lead it to not being indexed well by search engines?

Edit: I just read tyingq's explanation. Is there anything else?


Just embed both CSS files and images in the HTML file.


And now it's even less scalable to larger sites because you're forcing the eager loading of resources on pages you'll never click on.


Easy solution. Use progressive compression for images and set loading="lazy" https://developer.mozilla.org/en-US/docs/Web/Performance/Laz...


loading="lazy" is for images that are NOT embedded in the same file. So we either have the entire site in a single HTML file or we have scalability for large sites. There's no solution that gives us both.


It depends on what your goal is. If you just want to make a single request to the web server, then loading="lazy" will not work, as you said. (Technically speaking, TCP sends multiple packets anyway, resulting in higher latency, so not sure if that is a great goal.)

But if you just want to be able to save the entire website with Ctrl + S, then it works fine.

As an aside, loading="lazy" is the way in which images are embedded in the website from TFA https://i.imgur.com/wIkaE5g.png which was the reason why I mentioned it, although it certainly does not fit all possible use cases.
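The distinction, sketched (the paths and the truncated base64 are placeholders):

```html
<!-- separate HTTP request, deferred until the image nears the viewport -->
<img src="photos/spring.jpg" loading="lazy" alt="Spring">

<!-- embedded: part of the HTML payload itself, so it always downloads
     with the page and survives Ctrl+S -->
<img src="data:image/png;base64,iVBORw0…" alt="Logo">
```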


> And now it's even less scalable to larger sites

And how is 20MB js SPA with 20 wss connections more scalable?

I've seen too many react/vue projects bundle everything into a single main.js file, even pages I never click on, e.g. some crazy map or graph module. Is there some magic in webpack to make sure the needed functions get executed in an "eager" fashion?

Or does json provide streamable parsing capabilities?


If you are interested in building a single page site, I really doubt that scaling is an issue you'll have to contend with, and it seems like a waste to even consider it for something so small.

If you get hung up scaling a single page, you have other problems


This isn't a single page site, it's a single HTML file site with multiple (logical) pages.


I see no problem for SEO. I see no problem for screen readers.

This is all classic and supported HTML, so those should work perfectly for this.


The problem for SEO is that either the hidden parts are ignored or the indexing won’t properly differentiate between the different "pages" (sections).


Normal websites used to be one file way back in the day. It was a "pro tip" to split out the JS & CSS into separate files.


Web pages used to be a single file.

This is a web site as a single file, using CSS only.

It's a CSS equivalent to a single-page application (SPA), except that this is a single-page site. SPS, perhaps, or maybe a multi-paged file (MPF).

Strictly, it requires CSS features which weren't present in the original specs, though the concept's likely been possible since the early 2000s, if not the late 1990s.

It does rely on CSS support within the browser, and some simple browsers (mostly terminal-mode clients) won't present the multi-page aspect. The site / page itself remains useful.


Hum... You may be thinking about a page being a file.

A site is a collection of those.


A site is a website is the thing behind a domain name. Many were, and are, a single file. It's the rest of y'all who are overcomplicating things and have somehow perverted that into thinking it's necessary. How many layers of meta are we at right now for this article to be interesting?


I thought it funny just how many of your respondents decided to charge the semantic hill, completely missing the point.

But you know this doesn't use JS, right? (joking)


It’s probably your use of the word “normal” as it was more common for multi-page sites to either have a file per page or to use cgi.


I remember when I made a website with frames and felt like a genius.


Yes! Reusable menus, finally!


Put your script tags at the bottom of the html page so that when the JavaScript is executed all the dom elements on the page can be referenced. Either that or bootstrap in a window onload callback. I remember picking up and being in awe of Secrets of the JavaScript Ninja.
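A sketch of the first option (the id and the text are illustrative):

```html
<body>
  <main id="app"></main>

  <!-- last in <body>, so #app already exists when this runs -->
  <script>
    document.getElementById("app").textContent = "ready";
  </script>
</body>
```

The onload-callback variant works from `<head>` too, and these days `<script src="app.js" defer>` gives the same after-parsing ordering without moving the tag at all.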


Here's an even bigger secret, if you're doing server-side rendering, putting things on top might actually be better.

You're maybe parallelizing things. As the client is downloading and interpreting the css and JavaScript, the server is doing database calls and rendering the HTML.

So you actually are doing things on the server side when you're "blocking" on the client. You didn't get this for free. You had to worry about buffers, flushing, and plenty of testing.

I don't know if you can still squeak out an actual speedup with this technique (there are many attributes you can use to customize things these days) but I used to use it all the time back in the days of platter drives.


Nobody knew about this trick back in the day, if it would have even worked. It uses no JS.


There is no JS here, and it's multiple "pages" in one file.


This is great. I just wish this clumsy wording was a bit tighter:

> the whole website is contained within a single HTML file.

Because, it's not. An HTML file, a CSS file, and some PNGs. What's interesting is there is no JavaScript.


It's not, but with a little more imagination you could certainly take the author's work to the next level. Inline CSS and images as blobs.

Then in practice the issue becomes a bloated file that isn't able to reuse internal components, at least not without introducing a layer of complexity such as a scripting language and browser APIs to go with it.

Perhaps if HTML had been designed with more component re-usability in mind, the landscape of the internet would look very different today. Then again, what sense would there be in having a single HTML file for a site like Wikipedia or Altavista? And imagining the web evolving without a scripting language would be naive.

Single file websites that could be served up on blob or network storage certainly appeal to me, especially today.


Sounds like content-addressing would address (pun intended) your needs. I mean it's quite possible to build a IPFS/Bittorrent/DAT-based browser. I'd be interested in such a modern p2p-friendly browser minus the gigantic JS crap and attack surface.


> it's quite possible to build a IPFS/Bittorrent/DAT-based browser

what would make it different from DAT's Beaker Browser?


A CSS file is unnecessary, and you could skip the images or embed them.


The question is, what is the USP here?

Having no JS or having everything in a single HTML file, and the title suggests the latter.

Also, you can embed the JavaScript too.


My God, you mean it's possible to build a site that doesn't require 10MB of compressed stupid ass frameworks and preprocessors to turn your code into, uh, slightly different code? You can just... write HTML and CSS yourself? By hand? The deuce you say. :-D


I think this is substantially cleverer than merely "not using frameworks". Nobody's surprised you can do this without a framework, but I suspect a good number would be surprised you can do it without Javascript.


They shouldn't be surprised, though. That's the point. The whole web dev community seems to have forgotten about the foundational web standards.


This wasn't a foundational web standard. The :target pseudoclass was introduced in CSS3.

The article is not another pointless potshot at frameworks, it's showing a clever use of a new standard to allow multiple pages in the same HTML document without using Javascript.
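For anyone who hasn't seen it, a minimal sketch of the pattern (the ids and content are illustrative, not the linked site's actual markup):

```html
<style>
  main section        { display: none; }   /* every "page" hidden by default */
  main section:target { display: block; }  /* show the one named in the URL hash */
  /* #home is the last section, so it acts as the default: any earlier
     targeted section hides it via the general sibling combinator */
  main #home                  { display: block; }
  main section:target ~ #home { display: none; }
</style>

<nav>
  <a href="#home">Home</a>
  <a href="#about">About</a>
  <a href="#contact">Contact</a>
</nav>

<main>
  <section id="about"><h1>About</h1></section>
  <section id="contact"><h1>Contact</h1></section>
  <section id="home"><h1>Home</h1></section>
</main>
```

Clicking a nav link only changes the fragment, so nothing is re-fetched; the browser just re-evaluates :target.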


The :target pseudo-class already appears in the CSS3 selectors working draft from 1999. So not exactly new... [1] https://www.w3.org/TR/1999/WD-CSS3-selectors-19990803#target...


If you want to target multiple platforms (mobile, web, TV) with similar code, that 10MB, which high-value users can load in two seconds on their latest iPhone, is worth it.


This looks like one of those sites I made as a kid. What is the significance of this?


Many people seeing this weren't alive when you were a kid.

I heavily encourage people to keep demonstrating how to do things competently using conventional systems.

I've gotten into many arguments over this stuff. Not that these simpler approaches don't work, but they lack the formality and theater.

Some people need to see stuff like this every day until they stop creating giant towers of spaghetti that don't do anything.


Well, there is an audience which likes to avoid websites with JS. I think their biggest motivation is tracking and bloat. So having a technique for a user experience without load times (after the initial load) without JS is what makes this somewhat special.

On the other hand, there have been solutions for this case for ages (like using radio buttons), so using :target is just a somewhat cleaner approach from my point of view.


Nobody else is really doing this, or was doing this. But you can't tell that it's different, that's what's neat about it. It works just like a multi-page site but there's not even any JS requirement.


A decade ago we had this niche vBulletin forum where members would self-publish news articles about the hobby. vBulletin BBCode is pretty limited, but then the admin started to add more and more styling support via custom BBCode.

At one point we were able to create "interactive" content entirely in our heavily customized BBCode without a single line of JS, and this :target trick was one of the most used.


That sounds like a pretty cool BBCode hack (and ultra-permissive by the admin considering...admins), I'd like to see it in action in 10-years-ago form.


I did a similar thing on my forum around 2005 but we even let members put CSS in their posts via BBCode… although it was rewritten, scoped and restricted.

But people made big “clubs” and then created these really cool semi-interactive posts using just CSS. It was really awesome what people can do.


Been doing this for over half a decade off of a template I boosted 8 years ago and have been modifying since.

Has made tens of millions in revenue.

Just assumed every developer or template seller in south asia was using this - or their clients. Probably more common than you think.


So you are using a single HTML file with no JS at all to deliver a multi-page site? Just to be clear.


To be clear, yes. With the section tag. Sometimes I even add javascript to the style section without any object orientation. Stop the presses.


That's the thing, I think most designers were probably using javascript even if alongside tricks like this without thinking it through. The current period in web design is one of the biggest "it's nice to have javascript turned off" moments we've had, so it's relevant.

If you already knew about this, the post seems like a joke, you've made millions off this for years now--that's really special, but that is also pretty subjective and doesn't mean it's not meaningful that one of the top CSS sites shared this article out this year.


Yeah, mostly it's that people overestimate how much (or what kind of) tech prowess they need to convey to close a deal or offer a service that people want to pay for.


Any examples you could point to?


The only slightly interesting part (which is why it's hosted on css-tricks) is not even explicitly mentioned in the article at all. It's the fact that it also uses no JS, and is all CSS based. Obviously that itself isn't crazy, though back in our days it would've been harder to do with old CSS. That is, having proper navigation/sections.


Multiple displayed pages, and potentially an entire website, are presented within a single source file.

There are no^W^W is one external dependency (the stylesheet https://john-doe.neocities.org/style.css). This could also be inlined, and is required for the concept to work. There are 75 directives and/or media queries.

The site can be browsed entirely offline once accessed.

See also: https://news.ycombinator.com/item?id=29670168


Lovely article and such an interesting way of creating single page apps. I never knew about this usage of the :target selector.

I think a major issue with this approach is just practicability - I'd want to write my content in something like markdown and we will anyway need JS to convert the markdown to HTML.

I have also been interested in making SPAs, but my way is more traditional: using JavaScript for the stitching, though only vanilla JS. I don't use any frameworks or bundlers for my SPAs. In fact, right now I am working on a simple blog: http://rishav-sharan.com using just HTML, vanilla JS and Tailwind CSS.

If anyone is interested in my approach, I have an article detailing things there, or you can just read through the source. It's un-minified and fairly readable.


> Please enable Javascript to view this site. This is a Single Page Application and it needs JS to render.

That's the entire problem with JS: i can't browse a page without a complex rendering engine (arguably full of security vulnerabilities) or even scrape it. Something like webmention/microformats (indieweb) federation becomes almost impossible with a setup like yours.

Also worth noting, rendering the Markdown on the client is super inefficient. First, because client-side JS will always be far slower than native code server-side: i first need to download the entire JS then run it in a super slow sandbox. Second, because there's economies of scale to be had: it may take a few milliseconds to build the markup, but every client has to do it. For n clients, that's O(n) complexity vs O(1) for server-side rendering. So many CPU cycles wasted :)


The author of the site linked in the article suggests a couple of Markdown options for this on their blog page [1]. In this, they link a port of their website as a Jekyll theme [2].

1. https://john-doe.neocities.org/#blog

2. https://github.com/bradleytaunt/john-doe-jekyll


There are plenty of non-JS converters from Markdown to HTML and static websites are a thing.


It is a static site. Only that I decided to hand roll my own implementation instead of using something like Jekyll.

And as it doesn't have any server for the markdown (they are just CDN files), I have to use JS to do the translation to HTML.


Why not render them on your own computer after you write them?


Simple and efficient. I can maybe set up a build step for the markdown. Thanks for the suggestion. Can't believe it never crossed my mind.


Just a heads up, I get the “you need javascript” warning when using Firefox for iPad. I never had that with any website before, and I can switch between the two color schemes, which requires JS (i think).

I’m also more in favor of compiling the MD to HTML once and then serving pure HTML via the CDN. You could still keep it to the two steps you outlined in your blog post by running the build step and deployment using GitHub Actions.


> we will anyway need JS to convert the markdown to HTML.

You could compile the markdown to HTML server-side …


This whole approach was an exercise in avoiding the server back-end.


TIL about the :target pseudoselector. I've actually written JS to implement this functionality because I didn't know about it. Embarrassed.


Uhm... you're not alone :s


Same here


Using url fragments seems to limit how well Google can index it.

site:john-doe.neocities.org in a google search only finds the main page and dist page.


This is like a chicken and an egg problem.

Google likely haven't designed their indexer to handle pages like this because not many people do it.

Many people don't do it, because Google won't index it properly.


I did find an old post from Google that suggests adding a ! after the #.

https://developers.google.com/search/blog/2009/10/proposal-f...

No idea if that still works.


I don't know why they hide stuff, just show the whole site as a single page you can scroll down.


Information presentation has practical and usability aspects.

Sometimes you want as much information as possible on a page. Sometimes you want one and only one portion presented. Which you choose depends very much on the application, user community, and objectives.


There’s only the main page, that’s the trick. Nothing else for Google to find.


Given one of the sections is a blog, that seems sub optimal.


I have no idea whether or not the lightness of websites measured in kB is going to drive a trend.

But there is definitely a trend that our collective cost models are downshifting from “ah fuck it’ll be faster by the time this ships” to “well right this minute we’re not zero-effort scaling on the backs of the deep-infrastructure people”.

Maybe this perspective is perverse, but I personally find it cool that there’s both money and hacker cred in counting bytes, even megabytes, after a long time of “well the hardware will be better in 6 months so why bother”.


I've never come across that "it'll be faster in 6 months" attitude, but definitely "works on my $2k machine so ship", which is probably equivalent.


See also https://portable.fyi/ built with portable-php[1][2], "a single HTML document from a collection of Markdown files."

1. https://github.com/cadars/portable-php 2. https://news.ycombinator.com/item?id=25770516


This reminds me strongly of the Info file format (itself generated from a TexInfo source file), in which an entire multi-page document is presented in a single file. See:

https://www.gnu.org/software/texinfo/manual/texinfo/html_nod...

The difference is that Info requires a dedicated reader (the info or pinfo command-line utilities, or of course, Emacs), whilst this format will work with any graphical Web client.

It doesn't quite work as planned with a terminal-based browser. I've opened https://css-tricks.com/a-whole-website-in-a-single-html-file... with w3m, and rather than seeing only one of the intended "pages" at a time, the entire "site" is presented. That said, degradation is graceful, and navigation works.

I'm quite impressed.


My gut instinct is that these CSS tricks may be bad for accessibility. Does anyone know how well a screen reader would deal with it?


I was just reading it in Links 2 and it is totally readable inline. It also gets a 100 from Google Lighthouse Mobile & Desktop Accessibility checks.


I recently read that display:none elements are hidden from screen readers.


They are. I recommend using `height:0` and `overflow: hidden` instead. Or the more advanced `.sr-only` class: https://kittygiraudel.com/snippets/sr-only-class/
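For reference, a common formulation of such a visually-hidden class (roughly the pattern the linked article discusses; exact declarations vary by version):

```css
.sr-only {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  border: 0;
}
```

Unlike display:none, the element stays in the accessibility tree, so screen readers still announce it.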


I wonder whether, upon clicking the links, there is any indication about the change in visible content.



And the linked site garnered some discussion as a "show hn":

"Show HN: A simple way to make HTML websites": https://news.ycombinator.com/item?id=25170078


Love this! Some time ago I created a tool to make similar one-page sites by parsing a markdown file: https://leoncvlt.github.io/imml/

It's also available as a library / on the command line: https://github.com/leoncvlt/imml


A lot has been mentioned already - a few days ago I put up a blog article done in HTML/CSS, with the icons and font embedded as data URIs (German) [1]. It weighs 77kB total and can be downloaded as a single file, preserving the layout and font (FreeSans). It covers the little-known way the proposed end of lignite mining in the eastern Lusatia region (2030-2038) affects freshwater production in Berlin (the end of groundwater pumping into the Spree river), which reminded me (as Hacker News is located in California) of the recent CleanTechnica article "The Warning Shot the US Is Ignoring: Climate Change Impacts on California Central Valley" [2], so I thought I'd share it.

[1] http://futureasapresent.org/trinkwasser.html

[2] https://cleantechnica.com/2021/12/18/climate-change-impacts-...


Is this a joke? Why is this a revelation?



Lol I know right?!


I built a mobile web app for a conference that way [1]. It works really well and is way more performant on older mobile devices than the previous version that was built in React.

[1]: https://app.ishl.eu/2018


I used a similar trick to write my resume site [1] with a "spatial" transition effect inside a single html file with no javascript.

It's a bit of a weird flex but was fun to do.

1: https://norilo.me/


I love the simplicity of this so much, I built a whole CMS using that, with markdown and a single HTML file! https://github.com/arnaudsm/raito


The difference is they don't use javascript, a point many commenters apparently missed. This is similar to how you can make drop-down header menus in CSS only - no javascript. Implementing CSS dropdown menus was one of my first big web dev tasks, since many users still had computers that slowed to a crawl when javascript and DOM manipulation were involved.


I agree, I just went one step further to ease content management.


this is awesome! nice job on the minimalist approach. it looks very clean.

im obsessed with offline-first/offline-only (optional) and have been trying to build all my products with the underlying philosophy of single-file tooling and “infra-less” in-mind; meaning it doesn’t care where it lives and highly portable by default.

here’s a note taking app that is all in a single html file. images are base64’d and data is kept in IndexedDB. https://github.com/bkeating/nakedNV


This is in the same vein as the "checkbox hack". It's a great rabbit hole to go down. It's fun to try and figure out how to accomplish normal UI functions with no JS and no page reloads. My personal favorite was making a popup Confirm/Cancel dialog where you could still see your content behind it. Cancel made it disappear; Confirm triggered a CSS url() call for the action.

Here's a todo app with no javascript (not my work):

https://www.mattzeunert.com/2017/10/30/javascript-free-todo-...
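The dialog variant described above can be sketched like this (class names and the action URL are made up for illustration; here Confirm is a plain link rather than the CSS url() trick the parent describes):

```html
<style>
  /* hide both the checkbox and the dialog until the box is checked */
  #confirm, .dialog { display: none; }
  #confirm:checked ~ .dialog { display: block; }
</style>

<label for="confirm">Delete item</label>
<input type="checkbox" id="confirm">

<div class="dialog">
  <p>Really delete?</p>
  <!-- the label unchecks the box, hiding the dialog again -->
  <label for="confirm">Cancel</label>
  <a href="/delete?id=42">Confirm</a>
</div>
```

The only requirement is that the dialog is a later sibling of the checkbox, so the `~` combinator can reach it.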


Here's a tool that helps build these by "compiling" all files into one html file

https://github.com/sean7601/compileJS


Nice! Would be awesome if it could optionally pull in the images as base64.


This is awesome, but a little JavaScript here and there improves the user experience A LOT! If the website is growing, a site search is something I would personally recommend having.

Still static though. I built my blog with Hugo (a static site generator) and it is hardly noticeable that it is completely static.

See https://pilabor.com/blog/2021/05/building-a-blog-with-hugo/ for details.


What's old is new again! Chris Coyier (the owner of CSS-Tricks) made a site using a similar trick in 2010 and there is a screencast of the technique on the same site as the linked article[0]. He did use jQuery to hide and show sections, but I imagine you could even implement a lot of the animations from back then as CSS these days.

0: https://css-tricks.com/video-screencasts/84-site-walkthrough...


Using jquery to hide and show sections is... not like OP at all. The whole point in OP is doing this without JS.


The code layout is so similar that these techniques are surely cousins (same ugly url and all). The only difference is the :target trick.

Further, the no-js portion isn’t even the main idea. “Entire website in a single file” is.


No, that's not the main idea, in fact it's not even an accurate description - the entire website is not in a single file. The website loads one HTML file, one CSS file and several image files.

The whole point here is you can do instant navigation to multiple pages that are contained within a single file without using JavaScript.


While cool, I think I prefer these older styled longform HTML pages that have links back to the main nav:

http://moonmusiq.com/


Haha. This reminds me of the dot com days. I was doing the front end as a mammoth servlet. The entire UI was more or less in one multi-megabyte Java file. The architect of our group started adding in changes - which, briefly, I kept my nose above water editing - but he knew I'd drowned. When I finally gave up he went over JSPs and the newfangled 'MVC' pattern and really worked through some UI patterns with me. It was very much a learning moment for me for design and maintenance.


Man, I'm old.


Right? Everyone is fawning over this basic HTML as if it's this amazing trick. The foreshadowing was the people pointing out that progressive web apps aren't necessary for most use cases, but now I really see I can't trust anybody's opinion here, because this is normie levels of perception at this point.


On one hand I'm excited about the future and I want to build apps for humans in spaceships, but on the other I'm very disappointed with how hard it is to get to the actual piece of information you're looking for on some of the websites filled with all this fancy stuff. Sites like this, on the other hand, just don't waste my time, and I like this idea of simplicity. I wish we could have that everywhere, so we can be more user friendly all around.


I sometimes convert pages to a single HTML file with SingleFile[0], which encapsulates all external assets into a single page. I have a small collection of pages which I can browse offline, and also have them for posterity in case they suffer from link rot.

[0] https://addons.mozilla.org/en-US/firefox/addon/single-file/


One of the comments mentions a notable problem: Safari does not support 'lazy' loading of images, so every one will be downloaded even if it is never seen.


While Safari doesn’t support lazy loading, it didn’t download the images until I clicked the link that contained them. Not sure exactly why, just weird.


Sounds like a safari problem to me (:


Since this hijacks the target anchor, is there a way to still scroll to a part of a page within the section? I think text fragments would work (https://wicg.github.io/scroll-to-text-fragment/#indicating-t...), but they're only supported in Chrome.

Update: maybe element.scrollIntoView()? I'll investigate sometime later


For those interested, I’ve created a Jekyll theme based on this very setup[0]

I also created a forked SSG based off of https://portable.fyi/ [1]

0: https://github.com/bradleytaunt/john-doe-jekyll

1: https://phpetite.org


Cool! CSS question: how does the page stop the browser from scrolling down to the #anchor? It remains at the top, which is great, but I wonder why?


At the time I used padding-top for each of the #elements, the header being absolutely positioned.

More recently I discovered that you can just use `scroll-margin-top: 100vh`, as seen here: https://cadars.github.io/portable-php/
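i.e. something along these lines (a sketch):

```css
/* Each targeted section sits far "above" its own scroll anchor, so
   following the #fragment link doesn't visibly scroll the page */
section {
  scroll-margin-top: 100vh;
}
```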


Interesting, thanks for the tip!


Single, self-contained HTML files are my passion. I created a self-contained version of my comic https://comic.browserling.com/browserling-comic.html. All comics in one file, no external dependencies or anything.


The inline example didn't even work for me. Highlights multiple sections. And the href for section-2 is wrong.


As a semi-related subject, I like to use the SingleFile extension to save pages as one static HTML file for archiving purposes.

Got a ton of saved pages and articles from HN in my dropbox that I can read where I want later without worrying about dead links, ghost edits and other live annoying things.


The only thing stopping me is components; sometimes you just need to reuse a certain HTML structure. Currently I'm using a custom script to generate HTML from a hyperscript-like syntax. It'll probably be better than any native API that enables components, but I'd still wish to see one.


I may be missing the point here (so please correct me if I am), but 'anchor' has been around for ages. I have seen several tutorial sites which use such elements to typically add a hint to some discussion. Is there something novel I have missed?


That reminds me of the old WAP pages with their distinct cards.

https://en.m.wikipedia.org/wiki/Wireless_Application_Protoco...


I love the idea, but wouldn't use it for bigger pages, because it means you cannot link to individual sections within one page, because then :target doesn't apply to the top-level <section> anymore, right?


This reminds me a bit of TiddlyWiki, minus the whole "you can edit the file using the file itself" part.

https://tiddlywiki.com


The site looks really nice, but with today's total lack of optimization when it comes to size and code quality, I don't see this becoming a trend.


I did a similar thing on my mini WIP portfolio site @ https://anmsh.net/


You can also inline images by putting base64 in img tags. Pandoc creates nice single file websites with flags to inline both CSS and images.


Not so good with the stylesheet disabled. It's better to have separate pages and still doesn't require any buildshit.


It is kind of ironic that 20 years later this needs to be re-discovered.


This was absolutely not possible 20 years ago. The only pseudoclasses available at that point were the anchor pseudoclasses and :first-line/:first-letter.[1]

What's being shown off here is not having a single page of a website in just HTML/CSS, but having all pages.

[1] https://www.w3.org/TR/REC-CSS1/#pseudo-classes-and-pseudo-el...


Having an entire Website in a single HTML page was definitely possible.


Not if you wanted multiple pages without Javascript it wasn't. Which is what's interesting about the site being described, although the article does a weak job explaining it.


That is moving the goalposts, with DHTML or not, it can be a single HTML page.


I'm not moving the goalposts, my goalposts are exactly where they were in my first comment: The concept demonstrated in the article was not possible twenty years ago.

In fact I was very specific about what concepts were not possible twenty years ago: "What's being shown off here is not having a single page of a website in just HTML/CSS, but having all pages."

You're the one who proceeded to ignore that and make an unrelated claim about something different that was possible twenty years ago.


You can have all the pages with DHTML and using style ID/classes to select what is visible, nothing new about it, other than the artificial limitation of not using DHTML.


Of course it's an artificial limitation, the whole thing is an exercise in artificial limitation. Putting all pages in the same document is an artificial limitation.

They're artificial limitations exercised in pursuit of demonstrating that, with new CSS tools, something surprising can be done. Something which, as I stated in my original comment, could not be done twenty years ago: putting multiple pages in the same HTML file without Javascript.


How about an entire website (HTML, CSS, JS + assets) in a single tweet?

https://twitter.com/rafalpast/status/1316836397903474688?s=2...


There's a bug in it. Section 2 doesn't work (missing dash)


This is a cursed way to structure a plaintext html website


While it's not a single HTML file, the framework Remix has been wonderful to work with and makes building complex websites and web apps without any client side JS exceptionally easy.


The trick is kinda meh, I feel. Yes, you can make a pretty cool app in a single HTML file with some inline JavaScript to boot. Still possible today, just ever more out of fashion.


No JS here though, just CSS using :target. That's pretty cool if you don't want a JS dependency but want more control over interactivity and presentation.
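The whole :target trick fits in a dozen lines (hypothetical ids and content; the default page goes last in source order so that a targeted earlier sibling can hide it):

```html
<nav>
  <a href="#home">Home</a> | <a href="#about">About</a>
</nav>

<section id="about"><h1>About</h1><p>About page.</p></section>
<!-- Default page last, so :target siblings above can hide it. -->
<section id="home"><h1>Home</h1><p>Home page.</p></section>

<style>
  section                { display: none;  } /* hide all "pages"          */
  section:target         { display: block; } /* show the one in the URL   */
  #home                  { display: block; } /* default when no target    */
  section:target ~ #home { display: none;  } /* ...unless another is open */
</style>
```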


Not to get too dramatic about this, but if you're that scared of any JS, why are you using the internet?


You're making vague gestures about something that is a specific issue these days. JS delivers all kinds of straight up garbage, some of it meant to do very ugly things, it can be turned off, tada, I'm still on the internet.

If people (like me) want to offer resources requiring none of that, why not encourage them? It makes the internet more efficient, a more helpful resource, and just way cooler than all the crazy, wasteful, and annoying JS activity.

It's valid, relevant, and worthwhile. There's really no need to talk about being scared, as if this is some issue that requires a good talking to from dad.


This is an unusually defensive response. There's a big gap between "all kinds of straight up garbage, some of it meant to do very ugly things" and the tiny amount of JS that would replace what this does. This doesn't even enable any interesting functionality. You could simply put multiple html files on the same web server rather than putting everything into one file. The visitor to the website would not know the difference (a point made in the linked article).


Did you read what you wrote? You assumed and implied that I was _scared of JS_ without asking whether this was the case, then called my response unusually defensive. Let's not sweep it under the table.

Your original reply was really uncalled for. Now you're back to some combination of hand-waves and missing the details (which self and others have shared up and down the thread already).

Really, a more respectful, thoughtful approach is merited.


No way to update the title without JavaScript.


Most websites would rather host their files on 10 different domains nowadays (because the cloud)


Back to old school.


Anyone here absolutely gobsmacked that this is the top news on HN?

The hell happened? Did I get cryogenically frozen and just wake up?


I had the same reaction.

It would be like telling someone who has been using an IDE: did you know you can compile your code from the command line? And having that person genuinely be blown away.


Would it blow your mind to know that the vast majority of .NET devs I've ever worked with don't know anything about compiling or running a project/solution other than hitting that big green 'play' button in Visual Studio?



Except instead of being something cool like "diet coke and mentos" this is something basic that webdevs should be familiar with.


Software development sure has come a long way, now we can change what the screen shows when you click a button!


We are getting trolled so hard. Some people are really in awe over this. Including the blogger.


Yeah I don't know what just happened. I came to the comments to see if I was missing something but no, that's it. Weird.


test


WTF - why can't I reply to comments?


As thread depth increases, the "reply" link is delayed for an increasing time. This is one of HN's behavioural hacks at cooling off flamewars.

You can click on a comment directly to reply to it immediately. Use this knowledge with care.


Weird - the reply link was missing for a time. Disregard.


Why the fuck is this top of HN.

I'm picturing way too many people viewing this today and thinking "Wow! Websites can be made without React? I will upvote!"

Are we at a point where university Web101 classes just jump straight into SPAs?


Yes. There are a lot of devs that don't understand the fundamentals of how the web works.

The trending topic on front-end Twitter last week was whether you should even bother learning CSS, or whether only knowing Tailwind is fine.

It's madness.


How is this news? React SPAs have been around for years, and SPAs existed long before React too.


This thing is fast, unlike anything built as an SPA nowadays that makes your laptop's fan spin so fast it wants to take off.


It's fully possible to build fast SPAs.

Though almost no one is able to do it.

Which is weird, as it's objectively easier than writing enterprise react crap.


The difference compared to SPAs is that this doesn't use javascript.


SPAs display at a single HTML URL, but are themselves (typically) composed of multiple elements, including many CSS, JS, and data resources which are fetched dynamically.

The example URL is a complete website within a single HTML document with no external dependencies and no further round-trip requests. It is a single-page site (SPS) or perhaps a multi-page file (MPF).

https://john-doe.neocities.org/

You can open that URL, disable networking, and browse the entire site to your heart's content in any browser supporting CSS.

If you open the file in a terminal browser (lynx, w3m, elinks[2], etc.), you'll see the full site presented at once, as a single page, without needing to specifically navigate between them (you can scroll the full site). Though the intra-site navigation itself still works --- it just doesn't reveal or hide sections.


I've built React SPAs with inlined JS, CSS and SVGs in 2017, which is why I'm still entirely clueless what's special about this.

BTW, the discussed page is not at all a single-page site; it makes separate network requests for CSS and PNG files.


The point of this demonstration is single-file provisioning, specifically using only HTML and CSS.

The concept of an "SPA" refers to the appearance rather than provisioning of the app, and inherently relies on Javascript (or an equivalent scripting capability) to interactively rewrite the display. It's possible to single-file an SPA. The characteristic isn't central to the SPA concept, and in practice implementation is typically anything but.

SPAs are not accessible without Javascript, and don't render at all in a terminal / console-mode browser. (Ask me how I know this...)

This is not an app. It's a website, or at least, multiple web pages, provisioned from a single HTML file.

Yes, this instance has several external references. I've noted CSS in an earlier comment, you mention image assets. The concept could be further optimised for portability by incorporating those inline.
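Inlining those is mechanical (hypothetical file names; the base64 payload is truncated here):

```html
<!-- Instead of <link rel="stylesheet" href="style.css">, embed it: -->
<style>
  body { font-family: sans-serif; }
</style>

<!-- Instead of <img src="logo.png">, embed the bytes as a data: URI,
     e.g. produced with `base64 -w0 logo.png`: -->
<img alt="logo" src="data:image/png;base64,iVBORw0KGgo...">
```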

Note that optimisations are also trade-offs. Inlined assets would mean duplication for a larger site. Which of those trade-offs are preferable or unwanted really depends on the specific goals.

But as a demonstration of an idea, this really is pretty elegant.


Yeah, but you missed the point.


No, not a SPA.

There is a concept of a single file app (SFA). You can share an html file, and that is the application.

It can have inlined micro js/css frameworks, or hand-rolled everything. An SFA should be human readable though- viewing the page source is useful.

I think it’s a concept worth exploring.


I prefer the denotation of a single file site (SFS). Perhaps alternatively a multi-paged file (MPF), as there might still be multiple such pages on a site, or a site comprised of a mix of MPFs, SPAs, and traditional static single-page URLs.


MP-SFS


But what is the benefit?


Showing the capabilities of the web browser as an application target, directly. You have HTML, CSS, and JS available.

No build tools required. Take this file and open it with a web browser. Then modify it, and refresh the browser.

Distribution and Development for applications is about as straightforward and accessible as it gets.


But what benefit is there for the end user?


Source code that is readable. Viewing the source on web pages in the 90s was how I learned to make my first web pages.


Lower CO2 emissions.


Also, those SPAs are loading the React library off of a CDN. It’s not inlined into the page.


I have definitely built React SPAs with inlined JS and CSS as far back as 5 years ago, it even inlined some SVGs for icons; it was literally a single HTML page (unlike the discussed page, BTW, which loads the CSS and the images with separate network requests).



