Hacker News
Xanadu Basics – Visible Connection (2018) [video] (youtube.com)
73 points by nathcd on June 24, 2019 | 25 comments


Some further reading:

A recent and great interview with Nelson - https://www.notion.so/tools-and-craft/03-ted-nelson

A long-form read by Wired from 1995 - https://www.wired.com/1995/06/xanadu/

Nelson's response to Wired - http://xanadu.com.au/ararat


Thanks, I read the Wired piece back then but never Nelson's response.


Also -- he has a recent response: https://www.youtube.com/watch?v=-_-5cGEU9S0


I get the same feelings reading his descriptions of Xanadu as I did reading descriptions of Google Wave. It all sounds as though it ought to be great and wonderful, but I can't actually visualise how it would be useful every day. What actual value would I get out of it on an ongoing basis?

I can see how perhaps you could construct interesting use cases, but I just don't see the value as a primary interface. It doesn't even sound all that hard to implement. I know there's a basic implementation, but his description of it is hedged around with caveats. Certainly with modern development environments, it should be many orders of magnitude easier than in the 60s or 70s to get this working, yet it's still struggling to get off the ground. Contrast with the web he is so critical of, which was useful and valuable the first day it was released.

If it takes 50 years and you still don't even have a compelling proof of concept, I can't help suspecting the grand vision is just far too elaborate and heavyweight to be workable in practice.


> It all sounds as though it ought to be great and wonderful, but I can't actually visualise how it would be useful every day. What actual value would I get out of it on an ongoing basis?

There are a couple of releases you can use, if you want to get a feel for it. The integrated editor we had planned out isn't implemented in any of them, unfortunately, so editing is still awkward.

Most of the big benefits will only come out of having a community using this stuff, and because Ted keeps tweaking formats & there's no compatibility between the released implementations, even those of us within the Xanadu group can't really experience that.

> It doesn't even sound all that hard to implement.

As a former implementer, yes: none of the core concepts are hard to implement.

Ted is a stickler for smooth UIs, & nearly 100% of the difficulty has been in trying to convince existing UI libraries to display his interesting visualization and input ideas in a performant way.

For instance, the XanaduSpace demo used OpenGL to display everything in 3d but internally a lot of stuff was hardcoded, & when I took over development of it, we found that we had a lot of overhead when editing text, so we switched from the OpenGL 1.1 API to the current API & idioms, which let us do faster rendering. However, we had to completely rewrite glyph rendering in order to fit more than about 20 pages of text in GPU RAM (and cards didn't expose standard API calls for freeing these allocations). Our super-elegant proposed method of storing text in GPU RAM ended up never quite working (and we got burnt out by our day jobs at the same time as trying to wrangle with OpenGL), so we never met the benchmark of displaying an entire copy of the King James Bible in real 3d. We plugged the same backend into a TK frontend, which was usable but ugly. Then we drifted away from the project. The code is still around, but I don't think anybody picked it up.

Currently-available web-based Xanadu systems (OpenXanadu and Xanadu Cambridge) are crippled by the same-origin policy. Xanadu doesn't want to host arbitrary user data locally (at least, not unless and until customers pay for storage). I wrote a caching proxy for OpenXanadu back in 2013 or 2014, but it wasn't used for basically the same reason: storing arbitrary user data costs us money & makes us liable for takedown requests & other such stuff. While desktop-based systems like XanaSpace could fetch from arbitrary remote hosts using arbitrary protocols (mostly HTTP, but I snuck ftp and gopher support in, and pushed for IPFS support as well), OpenXanadu is stuck fetching only from xanadu.com & a handful of sites that have whitelisted us (and attempts to get Project Gutenberg & the Internet Archive to whitelist us for SOP failed, I think). Again: trying to work around SOP was a lot harder than the actual implementation.
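The SOP-workaround approach can be pictured roughly like this: a tiny caching proxy that fetches an arbitrary remote document once, caches it on disk, and serves it back with a permissive CORS header so a browser-based client can read it. (This is a hypothetical illustration, not the actual OpenXanadu proxy; the endpoint shape and cache layout here are made up.)

```python
import hashlib, os, urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

CACHE_DIR = "proxy-cache"  # hypothetical cache location

def fetch_cached(url):
    """Fetch `url` once; later requests are served from the disk cache."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, hashlib.sha256(url.encode()).hexdigest())
    if not os.path.exists(path):
        with urllib.request.urlopen(url) as resp, open(path, "wb") as f:
            f.write(resp.read())
    with open(path, "rb") as f:
        return f.read()

class Proxy(BaseHTTPRequestHandler):
    # GET /?url=http://example.com/doc.txt
    def do_GET(self):
        target = parse_qs(urlparse(self.path).query).get("url", [None])[0]
        if not target:
            self.send_error(400, "missing url parameter")
            return
        body = fetch_cached(target)
        self.send_response(200)
        # the permissive header that sidesteps the same-origin policy
        self.send_header("Access-Control-Allow-Origin", "*")
        self.end_headers()
        self.wfile.write(body)

# To run: HTTPServer(("localhost", 8000), Proxy).serve_forever()
```

Of course, the code is the easy part; as the comment above notes, the reason it wasn't deployed was liability and cost, not implementation difficulty.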

I wrote an experimental span-based editor for XanaSpace, and I was asked to port it to javascript for use with OpenXanadu. The way it worked was that you selected text in a 'source document' panel (some text or stripped HTML fetched from the wild blue yonder), the text you highlighted would be colored & a copy would be inserted into a kind of visible pastebin, and you'd insert chunks from that bin into an accumulating new document in another panel. Then, in a box below, the EDL format for the document you were creating would be listed, which you could copy into a file to be loaded by OpenXanadu. Unfortunately, there was a hard-to-find off-by-one error in the code for ignoring color tags in the source document -- in other words, when highlighting, the character offsets would get less and less accurate the further along you went. It was an intermittent problem. I didn't end up fixing it before I left the project. The XanaSpace version worked fine.
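The idea behind that output can be sketched as follows: a new document is nothing but an ordered list of spans pointing into source documents, each identified by an address, a start offset, and a length. (The field names and exact syntax below are illustrative, not the real EDL grammar.)

```python
def make_edl(spans):
    """Build an EDL-style listing from (source_url, start, length) tuples.
    The line format here is a made-up approximation for illustration."""
    lines = ["edl"]
    for url, start, length in spans:
        lines.append(f"span: {url},start={start},length={length}")
    return "\n".join(lines)

print(make_edl([
    ("http://example.com/source1.txt", 120, 45),
    ("http://example.com/source2.txt", 0, 300),
]))
```

Since the document never copies the source bytes, an off-by-one in computing `start` (like the color-tag bug described above) silently corrupts every span after it.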

This editor was supposed to be a stopgap, & the eventual editor we planned to make would involve dragging selections directly out of existing documents, snapping them together to form new documents, & being able to then insert or delete text from within these composite documents directly. Existing GUI toolkits, to the extent that they allowed dragging of widgets, didn't show the dragged item as it crossed widget boundaries (and anyway, couldn't show beams between scrolling text things), so we were going to have to write our own text rendering/editing code (which isn't so bad -- we did it for XanaduSpace and again for XanaSpace) and make it handle mouse events reliably on arbitrary unicode text using proportional truetype fonts (substantially more complicated), make that handler differentiate click from select from drag, etc. It was gonna be complicated, & every little noodle of text needed to store its own arbitrarily-complex history, with the beams it was attached to animating along with the drag and such. We put it off, and it looks like everybody else also put it off. We knew that if it wasn't sufficiently pretty, it would never see the light of day (even as open source).

I prefer to develop in public, so after I left Xanadu I started developing my own (less pretty, but at least conceptually-pure) implementations. I also wrote down software-engineer-centric descriptions of concepts we used, so that other people can do the same. (They can be found here: https://hackernoon.com/an-engineers-guide-to-the-docuverse-d... ) None of this stuff is actually hard to implement, but for a backend guy like me, making it pretty was a lot harder than making it usable.


Pre-teen me was utterly fascinated with Xanadu, but that was before I learned about the computational and storage costs of data structures, and how large-scale human interaction with systems evolves. Now, I can't wrap my head around what Nelson promotes and what I extrapolate to a scaled-up global system. I'd really like to know from those more deeply immersed in the implementation of his vision how the following pieces are addressed, because I can't quite reconcile between the dribs and drabs I've pieced together of the internals and what I know of large-scale systems.

1. As near as I can tell, the ZigZag data structure at its core is a type of directed graph. Fast path traversal has always been a challenge for me with these at scale. When I want to pull a list of all referring/predecessor links, I'm either walking the graph, or maintaining a continuously-updated cross-reference lookup. I must be overlooking how he's solved this problem, because the kind of micro-payment environment he envisions seems to me to require knowing this information (is this what he calls a "cell"?) to compute the presented price of a piece of information derived from a foundation of other pieces of information.

2. How does transclusion work for non-text data like audio, video, rich media, chats, or in a generalized way, arbitrary data sets?

3. Was there ever a notion of transclusion quality or security? I don't see any acknowledgement in Nelson's design of the need to address bad actors abusing transclusion, giving rise to equivalents to spammers, trolls, phishers, etc. within the xanadocs environment.

4. What determines a cell? As near as I can tell (and my understanding is admittedly flawed), cells are decided by the author, but I must be wrong here. That seems a high cognitive load to make those decisions, and I don't see any support for crowd-defined cells, nor an API that would facilitate building such an interaction.

BTW, for those who lament the quiescence of the Gzz [1] project due to patent protection, it can pick up again now that the patent expired [2]. Whoever picks it up likely should read about the experiences of one of the original Gzz authors [3]. The most programmer-oriented write up of the data structure I could find [4] didn't erase my impression that Nelson's vision was fine for a small group, but doesn't address scaling up to the challenges we see on a global level.

It's still fascinating to think about, and I see value within a corporate environment, but I struggle to see it staying useful in the public realm due to the this-is-why-we-can't-have-nice-things principle.

[1] http://www.nongnu.org/gzz/

[2] https://en.wikipedia.org/wiki/ZigZag_(software)#History

[3] http://lambda-the-ultimate.org/node/233#comment-1715

[4] https://hackernoon.com/an-engineers-guide-to-the-docuverse-d...


I am the author of [4].

Regarding your first question:

ZigZag is not really intended for large shared multi-user stuff. It's not supposed to scale huge. We have completely different data structures for translit (i.e., the hypertext part).

ZigZag can be used for fast manipulation of spatial relations (basically, applying multiple kinds of projections to a multi-dimensional directed graph) & we use it under the hood for representing display layout in XanaSpace, but its main use is as a personal mind-mapping utility. It has only extremely tenuous connections to how we do hypertext, and none at all to micropayments.

The transcopyright system (i.e., the thing for ownership and micropayments) is based on transclusion. Basically, all that means is that we provide facilities to fetch things piecemeal, and we provide facilities for owners to restrict access to usable forms of things. (Specifically, we support applying a one-time pad, so that if you've got distributed fetching of the real content, you can still require folks to fetch the OTP from a trusted oracle & maybe pay for it too before they can actually decode the data.)
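As a rough illustration of the one-time-pad scheme just described (a sketch, not the real transcopyright machinery): the ciphertext can be mirrored and cached anywhere, while decoding requires the pad, which only the trusted oracle hands out.

```python
import secrets

def xor_bytes(data, pad):
    """XOR two equal-length byte strings together."""
    return bytes(a ^ b for a, b in zip(data, pad))

plaintext = b"a transcluded span of text"
pad = secrets.token_bytes(len(plaintext))   # held back by the oracle

ciphertext = xor_bytes(plaintext, pad)      # freely distributable
# ...a reader pays the oracle, receives `pad`, and decodes locally...
assert xor_bytes(ciphertext, pad) == plaintext
```

Since XOR with a random pad is information-theoretically opaque, distributing the ciphertext reveals nothing until the pad is purchased.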

Question 2:

We can always fall-back to byte offsets. We floated the idea of supporting more user-friendly offset formats for audio & video (specifically adopting ideas from the w3c media fragments spec), specifically because audio & video have complicated compression schemes & finding safe frame boundaries requires deep knowledge of the underlying format. We put that on hold in order to focus on getting text stuff really solid in the UI department.

Question 3:

People can create spammy documents. They can also transclude from or link to more popular documents in the hopes that their low-quality documents will get more exposure. But, links are not universally resident (which is to say, creating a link to a document doesn't mean that everybody viewing that document immediately sees it). Instead, links get distributed the same way as documents do: by word of mouth (as people recommend to their friends to view particular documents or load up particular sets of links). Links do double-duty as connections between documents and as formatting information (like themes or skins), and collections of links (ODLs) can be passed around separately from documents. (By 'passed around' I generally mean that their addresses get circulated, although we also had the notion of an archive format for passing around collections of documents and links that weren't to be made generally available.)

In other words, the spam potential of linking between a popular document and a low-quality document is akin to the spam potential of creating a low-quality website that is hidden from google.

Question 4:

A cell is a data structure consisting of a pair of associative arrays (pointers to adjacent cells, keyed by strings) & a 'value' (typically a string).

Cells are created by the author (or maybe by a program), which is fine, because cells never travel beyond the vicinity of a single machine. (We had a 'slice' mechanism where cells owned by different folks could join up together, but this was never intended to scale beyond small groups.)
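A minimal sketch of that structure in Python (illustrative only; see the ZZCell.py implementation linked below for the real thing): a value plus two maps from dimension name to neighboring cell.

```python
class Cell:
    """Toy ZigZag cell as described above: a 'value' (typically a
    string) and a pair of associative arrays mapping dimension names
    to the adjacent cell on each side."""
    def __init__(self, value=""):
        self.value = value
        self.neg = {}  # dimension name -> neighbor on the negative side
        self.pos = {}  # dimension name -> neighbor on the positive side

    def connect(self, dim, other):
        """Link self -> other along dimension `dim`."""
        self.pos[dim] = other
        other.neg[dim] = self

a, b, c = Cell("alpha"), Cell("beta"), Cell("gamma")
a.connect("d.list", b)
b.connect("d.list", c)

# walk the positive direction along d.list
cur, out = a, []
while cur:
    out.append(cur.value)
    cur = cur.pos.get("d.list")
print(out)  # ['alpha', 'beta', 'gamma']
```

Because each dimension is just a pair of dictionary entries per cell, the same cells can be threaded along any number of independent dimensions at once, which is the whole trick.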

Again: since ZigZag is not a part of the hypertext/transliterature system, the performance of ZigZag is a lot less important. (Within the translit realm, we had exotic high-performance data structures during the Autodesk / XOC era, which we've basically dropped in favor of simple stuff like spanpointers.)

If you're interested in ZigZag, I recommend reading my implementation (https://github.com/enkiv2/misc/blob/master/ds-lib/ZZCell.py). It's cleaner than GZZ & is almost identical to what's still used internally at Xanadu (last I've heard): it's a rewrite from memory of what we wrote for XanaSpace, and when Ted occasionally sends people to me for advice on how to implement new versions of ZigZag, I point them to it.


Started playing around with https://gitlab.com/krampus/wormwood -- it's a client that seems pretty good. It takes a bit of elbow grease to get going, but this should be fun to play with.

Setup is a bit of a pain and it could use a readme, but once you install the Perl modules, build the assets, and set up the server, it's pretty nice. It could probably use a Dockerfile as well.

Give it a poke! http://lain.gboards.ca/view.cgi?url=static/doc/doc.xan.org


Information (idea in mind) => write a document => write on a PC and print it out on paper => display on a screen

Two-dimensional paper or screens are never the best way of displaying information, but they are the cheap and easy way.

The core problem is how we organize and display information. I don't think the old screen will solve it well, but recently MR and AR have shown us a three-dimensional way; maybe we can find more dimensions in the future.


I see people write things like this from time to time, but I'm not convinced it will work out in practice. Human 3D vision is quite limited. We have distance perception, but it's quite crude, and we have no way to perceive through things, to see their actual depth or full dimensionality. We only get a view of the side facing us. In fact, fake 3D on two-dimensional displays provides a very convincing analogue.

The primary way we exchange information is text and for that 3D provides no advantage, in fact it makes it worse by being distracting. Visual representations are a niche for representing certain constrained forms of formatted data, in the form of charts. These can be very compelling for very structured and regularised information.

The old saying that a picture paints a thousand words is all very well, but if I give you an arbitrary thousand word text, you'll have a heck of a time representing that entirely visually.


We do have 3D senses, but the primary ones aren't vision. Proprioception tells us the position of our body in space and is relevant to the discussion because we can tag and encode information based on posture or movement, think hand-gestures, body language, right hand rule, counting on fingers, etc. Another is enabled by the existence of grid cells[1] in the brain. These allow us to be aware of (and imagine) our position in Euclidean space. This is where AR/MR could really shine by bringing the Memory Palace mnemonic to life and possibly extending it in all kinds of fantastic ways. People have exploited these senses for millennia as memory aids and ways of understanding and interpreting information, we shouldn't lose that tradition and should make every effort to enhance it with modern technology rather than replace it with a single interface metaphor.

[1] https://en.wikipedia.org/wiki/Grid_cell



Someone might care to suggest to Mr. Nelson the prospect of the visible connections afforded by YouTube playlists....



Lots of interesting things on Ted's twitter https://twitter.com/TheTedNelson


One of those interesting things is the ridiculous cult of personality that’s formed around him. Nelson is brilliant, but some of the crap his followers put out is just unreal.


What's with hiding complexity? Is it desirable to show connections? From the examples in the video, it seems the connections do not convey any additional meaning.

It seems to me that there are indeed systems where connections convey meaning. In those systems, the connected nodes are invisible (represented as dots) so as to show the topology of the connections.

Maybe it's not useful to show texts and connections at the same time.


The point is that humans navigate and orient themselves more easily in a persistent spatial environment.


One problem is CORS creating a firewall around every web page. Another problem is copyright: showing someone else's work on your web page. Fair use would have to be extended to cover showing parts of a document.

Wikipedia did something like this where you can see content when hovering over a link ...

But I think what the old guy is talking about is actually showing a visual graph of how documents and text are linked together. We humans are good at scanning text. So I think something like this would work well in a search engine, where instead of getting a list of links, you get a visual graph of interlinked documents, making it faster to explore different branches and find what you are looking for.


Ted Nelson is a visionary but a bad businessman/project leader.

The Curse Of Xanadu: https://www.wired.com/1995/06/xanadu/

He patented some of his ideas (like the ZigZag data structure) but failed to commercialize them despite multiple attempts. Then he killed an open source attempt to make it happen outside his control (GZigZag). Being very defensive and insisting on owning the project when one lacks the ability to lead is unfortunate. Nelson had very good ideas.

The idea that you could make a closed source hypertext project seemed possible at the time, before the web, but it has no chance of happening now.


Ted Nelson responded:

'Errors in "The Curse of Xanadu," by Gary Wolf'

http://xanadu.com.au/ararat

Look I'm as bummed as anyone that he never got Xanadu off the ground, but please don't link to that "hit piece". Ted Nelson is a visionary and deserves more respect.


With due respect, Nelson's response suggests an inability to distinguish the forest from the trees.

There is a difference between narrative and factual error. Nelson fails to draw that distinction and is stuck in minutiae.

There's also considerable prickliness, if not outright paranoia, shown.

(And yes, I've read the Wired piece as well.)


It's not true that he killed GZZ. He had filed the patent already, & the GZZ developer (who had previously been working closely with Ted) dropped the project when he heard that a patent existed. (As far as I am aware, all patents filed with respect to ZigZag or Xanadu are expiring within the next few years.)

Several Xanadu implementations are already released as open source (the first one in 1999).

Believe me: I've tried to convince Ted that the appropriate way to get stuff to have traction these days is to develop in public, rather than keeping stuff secret during development & only releasing source when it's been polished. It's not that he wants a closed & commercialized system (though he hasn't completely given up on having some stake in monetization), but that he wants enough control to avoid the kinds of runaway misunderstandings that produced the web. If I had seen a warped version of my own work ruin the world, I'd be conservative too.

Now that web standards are so complicated that writing a new web browser is impossible for even large & very rich companies (Microsoft is ditching its own engine in favor of Chromium, following dozens of others), a properly-designed (and thus, simple) hypertext system becomes a lot more sensible -- for more and more people, the web will seem less and less like a reasonable compromise (being never a particularly good system for hypertext, and a far worse application sandbox platform). Some folks have been jumping to gopher, but others will probably migrate to new systems that are along the lines of Xanadu.


Maybe the reason was his incompetence, indecisiveness, or something else, but he ended up putting people to work on a project without revealing relevant parts of his intentions and the underlying situation. Not communicating and stringing people along is not a good personality trait.

see: http://lambda-the-ultimate.org/node/233#comment-1715


When I was brought onto the project, he made sure I was OK with working on patent-encumbered designs & possibly closed-source code, specifically because of his experience with the GZZ developer. It's a misunderstanding he learned from.



