A neat thing in Warp (I don't know if it was also pre-Warp): if, at boot, it didn't have an OS/2 driver for the hard disk, it would create a v86 task that took over the state of the BIOS, and it would then use BIOS INT 13h calls to access the disk.
That meant you could install and use it on a PC with a disk controller that OS/2 did not recognize out of the box as long as that controller provided BIOS INT 13h support.
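For a sense of what that INT 13h path looks like from software, here's a minimal sketch, assuming a 16-bit DOS compiler like Borland Turbo C (whose biosdisk() helper issues INT 13h under the hood) -- purely illustrative, not OS/2's actual fallback code:

    /* Read the MBR via BIOS INT 13h -- the same BIOS service the
       OS/2 fallback v86 task relied on when it had no native driver. */
    #include <stdio.h>
    #include <bios.h>

    int main(void)
    {
        static unsigned char buf[512];
        /* cmd 2 = read, drive 0x80 = first hard disk,
           head 0, track (cylinder) 0, sector 1, 1 sector */
        int status = biosdisk(2, 0x80, 0, 0, 1, 1, buf);
        if (status != 0) {
            printf("INT 13h read failed, status 0x%02X\n", status);
            return 1;
        }
        printf("MBR signature: %02X %02X\n", buf[510], buf[511]); /* 55 AA */
        return 0;
    }

Any controller whose option ROM hooked INT 13h would answer that call, which is why the trick worked on otherwise-unsupported hardware.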
I think 386+ Windows did the same, all the way through at least Windows 95. I even remember some wild-goose chases trying to enable the coveted "32-bit disk access" (which IIRC meant using native protected-mode Windows drivers for disk access) instead of the dreaded legacy path that called into the real-mode BIOS.
If IBM had given OS/2 to their mainframe division to develop (and hired someone to design a better GUI, with much better icons - these things matter), I think it could have been a hit. I was using hypervisor stuff on VM/CMS in the early 1980s and it worked flawlessly (fun to run a vm inside a vm at the time). Of course, you need the hardware to support it, and IBM were too slow on using the latest Intel chips, and should probably have developed their own.
VM/CMS wasn't known for its great GUI, though. So the reason the OS/2 GUI looked bad might actually be that mainframe guys were too involved in its design. OS/2 had a few recognizable VM/CMS characteristics that I remember, including intricate, hierarchical error codes prefixing every error message (like "HPL1001A: File not found") and a fondness for upper-case in file names and configuration files.
Not to mention that OS/2 came with XEDIT and REXX, which were cornerstones of VM/CMS.
In retrospect OS/2 was an OS created for a certain kind of professional user just at the time when the nature of who actually used a computer underwent a fundamental shift.
If you were the kind of customer that had a mainframe in the head office, or maybe an AS/400 and some terminals, and your IT stack was tightly managed, OS/2 would have been quite familiar: Redbooks, prescriptive hardware compatibility, Rexx, manuals with every error code documented, and very IBM-ish approaches to development (eg System Object Model).
Sadly for IBM, they didn't realize that people wanted an OS that was simple to use and would work well enough on any cheapo PC to get things done, without needing a managed IT provider or some such.
I view NT as Microsoft understanding that while they would never want to be IBM, the Win31/Win95 platform was too fragile to be a basis for the future and that they better take on board some of the lessons of OSes like OS/2 and UNIX before their market window slammed shut.
Back in the day, large companies would often be "IBM shops," meaning every piece of technology, from the mainframe to every terminal, every PC, every monitor, and every printer, was made by IBM, and those shops would run OS/2 on the PCs. I worked at such a place for about a year in the ~1993 time frame. The only time non-IBM tech was approved for purchase was if IBM simply didn't offer the item needed.
That's a good observation. Yes, IBM has always seen the money in enterprise and tried to focus solely on that, while Microsoft has successfully served both the enterprise and consumer markets. Both Windows 3.1 and Windows for Workgroups 3.11 were great success stories, for example.
Even the PS/2 line could have been a great consumer product if IBM hadn't been so adamant about making it an enterprise product line.
The "support elements" on the mainframe, which are just a laptop connected to some internal buses and power controls, did run OS/2 for many years. You could IPL the machine right from the Warp desktop. It was neat.
They also continued VM with the VM/370, VM/ESA and z/VM operating systems, which are equally rock-solid. Aside from these, IBM did actually go big on Intel systems, but they got completely blindsided by clone manufacturers and entirely failed to see what cloud infrastructure would do to their impressive mainframe lineup.
I'm not sure how targeting a processor which was very expensive and available in low quantity from a single source at the time would have made OS/2 1.0 a success. 386 systems didn't start landing on regular users' desks until a couple of years later, when prices fell and the chips became available in quantity, including from non-Intel foundries. Even Compaq, who introduced the first 386 systems, sold many more 286-based systems for years after introducing the Deskpro 386. Tying OS/2 1.0 to the 386 at the time would have just condemned it to failure in a different manner. That doesn't even take into account that OS/2 was supposed to be the OS for the new PS/2 systems introduced at the same time, which consisted mainly of 286-based machines.
What if OS/2 v1 had been a 386 product?
I think it might have done quite well.
OTOH, and I loved OS/2 - I have spent more cash on OS/2 than all other PC software put together in my entire computing life; possibly more than on anything except Spectrum games, and maybe more even than that! - but even as a fan, it was a pig to install, a pig to network, a pig to install drivers, etc. etc.
When I tried the Windows 4 beta, I was dazzled. THIS is how it should be. It Just Worked, and setup & tweaking was a dream. Explorer, so elegant! Device Manager - I nearly wept for joy. No 2000-line CONFIG.SYS file! No separate windows for the directory tree and the directory contents!
WPS, elegant & sophisticated? My arse it was. Half-assed Mac ripoff.
And for all OS/2's alleged reliability, Fractint could kill it easily, the whole machine. Win95 was no better, and as the 32-bit apps & shonky drivers piled up, considerably worse. Then came the horrors of Win98. And SE. And ME.
But at first, even the beta of Windows 4 was about as good. And DOS drivers worked at a push. And DOS games and things. The long-filenames-on-FAT hack was a hack, but it worked. Make a long filename on HPFS, look for it from a DOS window or WinOS2 - gone! Invisible! You can't have it, mate, tough.
Then they hacked that to give us FAT32, and lo, it worked and was just like the old days. Incremental steps, no big bangs.
But when the state of the art was the horrors of Windows 3.x on DOS - even DR-DOS, optimised until it bled with QEMM - or the driver-less and app-less incompatible nightmare of NT 3.1 or 3.5 (if you could afford a £2500 PC to run it well) - OS/2 really actually was "a better DOS than DOS, a better Windows than Windows".
But I still wonder... If OS/2 1 had been a 386 OS, and had swept away Quarterdeck QEMM and DesqVIEW, killed the infant BSD4.4-Lite on 386, ensured that Windows 3.0 had been aborted... If it had used V86 mode to flawlessly multitask DOS apps, boot DOS and its drivers off a floppy for those troublesome programs for near-perfect compatibility...
Well... Program Manager and File Manager, which in the 1990s everyone thought were Windows 3.0 innovations but actually came from OS/2 1... They weren't so bad. I kinda liked them, actually. Had them tuned for a very efficient, convenient GUI. Loads of custom hotkeys for launching and switching apps, which always damned well worked, unlike on Explorer when if Windows was narked it would just ignore you, or launch 876 extra copies of your app then fall to its knees and die.
It coulda been a contender. In 1987, we knew no better. We might have gone for it.
But knowing what I do now about OS/2 2 compared to Windows 95... I am not sure that we were not a whole lot better off with what we got than what might have been.
I remember impotently screaming abuse at a Warp Connect box, just trying to get it on my LAN and on the Internet via dial-up at the same time. Either Win95 or NT 3 was vastly better than that.
OS/2 was, in a horrible way, more DOSsy than DOS. Everything was hand-configured in vast ASCII config files, which you had to hand-massage into perfection with excruciating care. Then, if you were particularly masochistic, you could optimise for performance. I never did get Warp 3 to drive the graphics card and the sound card of either of my two 486 laptops at the same time. One or the other, but not both. And one of them was a bloody IBM!
I would in an odd way have liked to see OS/2 thrive, but you know... Despite my irrational nostalgia for it, on the whole, when Windows 95 gave us plug-and-pray, I mean, plug-and-play, and power management and suspend/resume and so on, and then NT4 gave us a vaguely modern GUI... Then Windows 2000 brought it all together into a single whole, which if not exactly seamless by any means, did slap enough makeup on Frankenstein's Monster to make it look presentable...
Sorry to say it, but I think we were better off.
I know, heresy, praise for Microsoft from one of the "Linux Taleban". Shocking.
Of course, after that it all went a bit wrong. I know everyone loves XP in hindsight, but with all the bloat, I wasn't and am not so sure. Themes? Really? Do I need that? I know, I can oh-so-intuitively switch to Windows Classic in Display Preferences, then run SERVICES.MSC and stop the THEMES service and disable it... But I can't uninstall Movie Maker or IE or any of the other cruft, no way José. I can't move the hibernation file to another drive or partition.
Then came Vista and we learned to love XP.
Then came 7, and everyone loves Windows again, except for those of us who found it handy to run a command-line app full-screen occasionally.
"It coulda been a contender. In 1987, we knew no better. We might have gone for it."
Again, how could it have been a contender in 1987 targeting 386 systems, which for the most part didn't exist, and wouldn't exist in quantity at a reasonable price for a few more years?
First, in the early to mid 1980s, cheap PCs were all 8088/8086 machines (what we called XT-class hardware). That was the baseline even when you could buy faster, because faster was so expensive.
Then, from the mid-1980s, 80286 PCs came along (called AT-class kit). DOS couldn't really do anything much more on a 286 than it could on an 8086, so 286s were just used as faster XT-class boxes.
There were 3 OSes that could use the 286: Digital Research's Concurrent DOS, SCO Xenix, and IBM/MS OS/2 1.x.
Xenix was a hit.
The others not so much.
Then came the 386DX.
Via UMBs and improved DOS memory management, 386s could do way more in plain old DOS. There were memory managers like QEMM386 and 386Max.
For power users they could multitask. There was a brief flowering of multitasking DOS extensions and replacements: Concurrent DOS 386, DESQview, DESQview/X, PC-MOS 386, TSX-32, and others. Most aimed at multiuser stuff because 386DX kit was expensive.
This drove the market towards the 386. It could do more, even just for DOS and DOS users.
I made a good living optimising the memory management of DOS PCs.
Once it was established, the much cheaper 386SX came along and made 386 PCs cheap. The tech and the software and the skills were already there.
But OS/2 1.x couldn't do any of it. It could barely run DOS at all. If you were a DOS user OS/2 gave you nothing worth having.
NT was a hit by aiming at the PCs of the near future. That's what OS/2 should have been targeted at as well.
Then gradually, incrementally, Windows 3 took over because it worked on XTs and ATs but on 386s it gave you DOS multitasking and copy-and-paste between DOS apps. It gave you the good stuff you got from DESQview. AND you got new apps.
OS/2 1.x developed gradually. It got a GUI after one version. It got a decent GUI the version after that. Then it got smaller and faster.
NT was incremental too.
3.1 worked but was huge. 3.5 got VFAT and was smaller and faster with better networking. 3.51 got smaller and faster and stabler and started ditching the OS/2 legacy stuff.
The point being, OS/2 1.0 should have gone up against DESQview and Concurrent DOS as a DOS multitasker for rich power users, then OS/2 1.1 against Windows 2, then OS/2 1.3 would have taken the place of Windows 3.
Thanks for the recap, but it wasn't needed; I was involved in the industry throughout this history and used most of the OSes and other products you mentioned, and more, either personally, professionally, or both. You still didn't address why an operating system whose development started in 1985 should have initially targeted a processor which wasn't available in a production PC until late 1986 and wasn't available in large quantities at a reasonable price for a couple more years. Who exactly would they have been selling OS/2 to? How do you think the customers they were selling large quantities of 8086/8088 and 80286 systems to would have reacted to being told those were a dead end for their next-gen OS?
I started my first job in 1988, at an IBM (and Apple and Compaq) dealer.
We sold lots of PS/2s, and yes, they were mostly 286s -- and they mostly ran plain DOS, no Windows and definitely no OS/2. 386s were servers.
But even then there were a few executives or managers with 386s on their desktops.
By just a couple of years later, 386SX boxes were everywhere, including on laptops. I got 2 free Librex machines from my 3rd job, because Nippon Steel pulled out of laptops. I ran OS/2 2.0 on mine and while it wasn't _quick_ it was pretty good.
OS/2 1.x was comparable in price to DOS + DESQview386.
It didn't sell at all to any of my DOS-using customers in any of my jobs, because they needed their apps. It was expensive and it didn't deliver any benefit.
But DESQview did sell. Not a lot, but it was there. QEMM386 was everywhere, and DR-DOS/Novell DOS was quite a hit. MS-DOS 5 and 6 sold great.
DOS users would pay a bit extra for stuff that enhanced their use of DOS apps. If OS/2 1 had been a 386 OS, which it nearly was, I think it could have sold to DOS power users.
The offer would be: you get multitasking -- and a CD drive and networking without faffing around with memory management. Then on top of that you get a GUI, and you get it with half a dozen DOS apps at once, in 4MB of RAM, and every one of them gets 620kB of available memory.
I do reckon we could have sold that, yes.
It was close. MS had demo versions that have since leaked. MS had a nearly complete 32-bit OS/2 2.0 with Program Manager and File Manager, almost ready to roll in early 1990, the same year Windows 3 came out... and that was after wasting ~5 years on 286 OS/2.
For a whole new OS to succeed, aim at the hardware available to rich power users the year it launches and available to everyone 2-3 years later.
NT 3 needed a nearly £10K PC when it launched but just a few years later it could run well on a bog-standard office PC, like a Pentium 66, so long as you exercised a little care over the choice of components. I know this because I ran it myself in such an office, when everyone else was on WfWg 3.11.
Those of us who wanted a desktop Unix wanted it in part because it was a Unix, but in part because it was also a multi-user system. So the folks who created BSD Unix for the 386 would still have done it.
They absolutely could, yes. That is not what I'm arguing at all, so I am perfectly happy to agree.
IMHO what drove a lot of the development of FOSS xNix, especially Linux -- which has thereby created a lot of the tools and apps and things that make the BSDs useful today -- was the desire to have a proper memory-managed protected-mode OS for personal desktop use.
That's why Xfce, GNOME, KDE, X.org and XFree86 before it, and a tonne of other FOSS projects exist.
The fact that occasionally some of this is also useful on servers is, I reckon, because so many people wanted their own personal UNIX workstation, using COTS kit.
OS/2 looked more modern than Win3.1 because, well, it was newer, but also because of its heavier use of the "gray 3D slab" style, which signalled "modern" at the time (see OSF/Motif).
But the design language was less uniform in OS/2, despite it being based on the Presentation Manager standard like Win3.1. The generous use of colors in OS/2 made it look like a kindergarten scrapbook, rather than a serious desktop OS. Maybe they thought people would find it friendlier that way, and it might even have looked friendlier.
Yet, Win95 looked way more professional than OS/2 because the design language (fonts, conservative color use in UI elements and more incorporation of gray 3D style in the UI) was simply more elegant. That was mostly because of the influence of NeXTSTEP in the design of Win95 though, so thanks, Steve Jobs, I guess? :)
Not to mention completely innovative paradigms in Win95 like the "Start" menu and taskbar, which made it to 2024 and became the de facto standard of desktop UX.
Personally I would choose OS/2 over Win3.1 anytime. OS/2 was more "Mac like", for example, properties and preferences dialogs did not have the Apply/Cancel buttons, you had a Default button to reset to factory settings or you could simply close the dialog for your changes to be applied.
This said, I believe that what made a big difference in look and feel when Windows 95 was released was the introduction of TrueType fonts, which had much better rendering. OS/2 was using PostScript fonts, and introduced support for TrueType much later... if I remember correctly, thanks to the open-source efforts of the FreeType project.
> The generous use of colors in OS/2 made it look like a kindergarten scrapbook
Are we still talking about OS/2 2.x? I somehow remember it to be more "somber", and a very quick search seems to confirm that, though it's not conclusive.
Hmm, tough call. I remember it looking "more professional", in some ways even more logical, but overall I'd say Windows 3 (and yeah, especially 3.1/WfW 3.11) was more "pleasing". That was mostly the color scheme, I think.
There's no universe in which OS/2 was a hit. IBM was simply delusional. They wanted OS/2 to give them back the monopoly on computing hardware they once had in the early 1980s before the "PC clones" took over the marketplace. That was never going to happen.
You can pick from any number of business, technical or marketing decisions - and there are many to choose from - that contributed to its failure, but fundamentally, nothing was going to save OS/2 from its original sin.
I don't really doubt that that's true for business or marketing decisions, but what exactly were the technical reasons? I don't know too much about OS/2 internals, but what made it worse than, say, NT?
It didn't really cause issues for end users, but OS/2 was tied to x86: it actually uses more than two rings, and AIUI even Warp 4 still has 16-bit code in various places. One of the reasons for creating NT back when it was "NT OS/2" was as a "Portable OS/2", so that the OS didn't get left behind when one of the upcoming RISC architectures inevitably destroyed x86 (nobody then expected a clunky CISC like x86 to be on top 30+ years later, or that the nearest RISC challenger would be the chip from the Acorn Archimedes).
At the time of OS/2 vs NT, NT needed a huge amount of RAM to work, while OS/2, even though greedier than Windows 3 or Windows 95, was still far less demanding.
The system was very good: stable, snappy. The main issue was the lack of software.

Microsoft made deals with all the hardware vendors, so whenever people bought a personal computer they would find Windows pre-installed. At that time most people did not know what an operating system was; they thought of a PC as an appliance, and most believed Windows was the only option, that PC = Windows. Plus, computers and software were quite expensive, so why buy another operating system when your new PC comes with Windows for free?

There are articles around the internet claiming that Microsoft, to kill OS/2, made deals with PC makers (HP, Compaq, etc.) giving them OEM Windows licenses for free as long as they did not offer OS/2 as an alternative purchase choice to end users, which basically meant 99% of people did not even know OS/2 existed. You would then find OS/2 in airports, banks, public transport and so on (all big organizations that were IBM customers), but it was not known to regular users.
This said, the main weakness of OS/2 was the SIQ [1], the single input queue. The user interface had a single queue for dispatching UI events and, due to poor design, a single badly written UI program that got stuck while processing a call from the event loop would freeze the whole OS/2 user interface. This was improved in later releases but never completely solved, though admittedly in the last years of OS/2 this kind of issue became quite rare.
Perhaps this is true, but don't you think the dangerous competition for Windows was really the other DOS graphical interfaces? Things like GEM, DESQview, PC/GEOS (GeoWorks), DeskMate.
Especially GeoWorks ... makes you wonder why Microsoft won. If you ever tried installing and using OS/2 on a normal (for the time) PC, you don't wonder why it lost.
Win95 (and beyond, and somewhat before) DOS boxes also allowed to run DOOM and other protected mode software. But pretty much only if the software was written for "DPMI" ("DOS Protected Mode Interface")[1], which abstracted all of the protected mode "management" stuff away from the software, like switching into protected mode, allocating memory, etc.
If you ran DOOM, you might remember DOS/4GW. That was a so called "DOS extender", and the DPMI "server" when DOOM ran under DOS. When you ran DOOM under Windows, Windows itself became the DPMI server, and DOS/4GW acted as a thinner layer.
Protected mode software written that way did not run as a v86 task anymore, but closer to a regular Win32 task (it was after all real 32bit software).
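To make that concrete, here's a minimal sketch of a DPMI call, assuming DJGPP (a compiler whose runtime is itself a DPMI client) and its <dpmi.h> wrappers -- illustrative only, obviously not DOOM's actual allocator:

    /* Ask the DPMI host (DOS/4GW, CWSDPMI, Windows, ...) for extended
       memory. Under the hood this is INT 31h, AX=0501h. */
    #include <stdio.h>
    #include <dpmi.h>

    int main(void)
    {
        __dpmi_meminfo info;
        info.size = 1024UL * 1024UL;        /* request 1 MiB */
        if (__dpmi_allocate_memory(&info) == -1) {
            printf("DPMI allocation failed\n");
            return 1;
        }
        printf("linear address 0x%08lx, handle 0x%lx\n",
               info.address, info.handle);
        __dpmi_free_memory(info.handle);    /* INT 31h, AX=0502h */
        return 0;
    }

The same binary runs under bare DOS with a DPMI host loaded, in a Win95 DOS box, or under NTVDM, precisely because the program never touches the mode-switching plumbing itself.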
Thanks for the info. I grew up on the DOS PC and knew DOS/4GW was a 32-bit extender, though I was unaware of the DPMI abstraction and how it related to running DOS under Windows.
I think you're mistaking the Windows Subsystems for "virtualization". Windows has/had tons of subsystems beside the today well known WSL.
There were the MSFU, NTVDM (DOS) and even OS/2 subsystems. As Windows NT is based on the ideas and code of OS/2 (the chkdsk output was the same up to Windows 7, IIRC), both OSes support the subsystem feature. Printing is its own subsystem, btw.
This went as far as having the Win32 subsystem available on OS/2 and vice versa.
These subsystems are small layers that convert between the NT native system and the software that is run on top of them. The thing you call Windows is just the Win32/Win64 "subsystem" running on top of the NT-Kernel.
Here are a few links I could find regarding this topic. This stuff is ancient and Microsoft doesn't make a big fuss about it, because it's one of the core features that allows Microsoft to port Windows quickly to any platform and run software built for any architecture on top of the NT kernel.
There are enough "subsystems" out there, like Windows on Windows (WoW64) or even Windows on Arm and so on.
This hole is deeeeeeep. I recommend the sysinternals book.
I don't think they're mistaking anything. 386-era Windows and OS/2 versions used the CPU's v86 task support, which, while having some severe warts[1], can well and truly be called "virtualization".
When most people talk about VMs these days they mean something similar to the Popek and Goldberg definition, which wouldn’t be achieved on x86 until a decade later.
I don't see how the real mode emulation on the 80386 (VM86) fails to meet the standard of the Popek and Goldberg definition. IA-32, yes, that took a while, but VM86 allowed 8086 (real mode) tasks to run as if they were running authentically in real mode while the 386 was in protected mode, and had all the features P&G describe in their definition. It runs natively, it runs with equivalent performance, and there's a VMM trapping privileged instructions to either emulate or arbitrate system resources. It's the full deal!
I don't think P&G implies that the VM needs to run with exactly the same capabilities and characteristics as the host architecture. If that were true, then probably a large number of modern virtualized machines would suddenly not be P&G anymore, just because some obscure CPU (or other) feature might not be available in the VM, which is not a meaningful distinction.
OS/2 Warp had support for Windows 3.x virtualization, so little surprise for me. Also, IBM developed virtualization technology (like LPAR) a long time ago: https://en.wikipedia.org/wiki/Logical_partition
Yes, one of its marketing highlights was "a single Windows app won't crash the whole OS", as that was before Windows 95 came out. IIRC, some Windows apps even ran faster under OS/2.
The OS/2 Warp commercials blew my mind in the mid-90s, and they never even once showed the system. I was a Mac person at the time, and would have been all over OS/2 if I'd had the chance. But I was in a very small town, this was the world with a pre-functional-WWW, and it was just impossible to find anything but System 7 and Win 3.1 at the time.
> Windows 3 also uses virtualization for its DOS boxes, but this is "internal" to Windows. OS/2, on the other hand, exposes this entire functionality to the user.
What does this mean? It looks wrong. Of course Windows exposes virtualization to users; otherwise they would not be able to run anything in DOS boxes. That's the entire point, and that's why virtualization is used: the difference between a "DOS program" and a "real-mode operating system" is, due to how thin a layer DOS is, practically zero. So each DOS box is a full VM emulating everything from the VGA to the floppy, because your average DOS program is very likely going to access them directly.
The same test program happily runs in a Win95 DOS box.. or even a Windows 10 one. This is not a special OS/2 feature, it's a requirement for running a DOS box.
In OS/2, you can run any version of DOS, or even multiple different versions at the same time, and I think possibly any real-mode OS that doesn't do anything too crazy with the hardware.
In Windows, you are limited to the version of DOS that Windows is running on. Windows does not expose the ability to run any other version of DOS or other OS; nor does Windows API expose any of its virtualization functionality that would be useful in doing so.
And so you can on any other DOS box, including Windows ones. That is, again, a necessary requirement of a DOS box: you have to emulate a full (real mode) PC. Every DOS box is its separate VM, so it couldn't care less if you run different versions of DOS on different instances. For example, you can run FreeDOS, and even real mode Windows 3 itself on a 9x DOS box. You can do int13h disk access from a DOS box and completely wreck your disks. This is _required_ by any minimally effective DOS box, otherwise FDISK wouldn't work!
Keyboard, mouse, even sound have to be emulated as if they were real devices, too. Otherwise, your fancy "DOS" game (which happens to call practically no DOS interrupts) would not work.
As I was saying, there's practically no difference between a DOS program and a real mode operating system. How would the VM manager notice you weren't running (MS)DOS, much less care?
> Windows does not expose the ability to run any other version of DOS or other OS; nor does Windows API expose any of its virtualization functionality that would be useful in doing so.
You really do not need _any_ functionality to boot another OS from DOS. It's one int 19h away -- or copy the bootloader in memory and jump to it. It's a shorter program than the vga.com program used in this article.
In fact, the moment you run the author's vga.com on a DOS box, even from command.com, you are effectively no longer running DOS: you have already bootstrapped your own non-DOS operating system on a Windows DOS box.
If you want to be nitpicky, it's likely your "non-DOS" OS has to keep certain DOS structures in the usual places, especially if you want to use e.g. host filesystem-level access (not full-disk access), but this will most definitely also be the case for an OS/2 DOS box.
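(To show how short "one int 19h away" really is -- a hedged sketch, assuming Turbo C's <dos.h>; run it inside a DOS box at your own risk:)

    /* INT 19h invokes the BIOS bootstrap loader: it reloads the boot
       sector and jumps to it, abandoning whatever OS loaded us. */
    #include <dos.h>

    int main(void)
    {
        geninterrupt(0x19);   /* never returns if the re-boot takes */
        return 0;
    }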
The context was you asking what the OP meant by "internal" to Windows ("What does this mean?"), not what was technically possible.
In OS/2 it was a native, natural, advertised capability to run other versions of DOS, including file system access by including a supplied device driver.
In Windows, the supported, advertised, native method was to run the version you booted from. While it may have been possible to hack together running under some other version of DOS, it isn't what was expected, or exposed in the UI.
Understood, but I would still argue this is at the very least highly misleading -- the virtualization is there, and it is exposed to users (you can change all the parameters in the PIF file, for example). If it doesn't really advertise that you can use it to run a "different version of DOS or a different OS altogether", that's mostly just that, advertising, and likely done for monopolistic reasons.
Because the ability to boot another OS from a DOS command prompt is hardly hacking -- see LOADLIN. It is more or less the way DOS works.
Why is it "highly misleading" to characterize technically possible but undocumented and unsupported capabilities as "internal". Internal APIs are often undocumented and unsupported, how is this different?
I don't think it's just a monopoly thing. Does the PIF file under Windows (16 bit derivatives and Win9x) allow you to specify kernel, config.sys, autoexec.bat?(1) Without those capabilities I don't see the Windows DOS environment as comparable to OS/2 where you are basically booting DOS in a virtualized BIOS environment.
In Windows you might be able to modify the provided DOS environment after the fact, but as far as I know you are going to be starting from the DOS provided by Windows. The level of DOS integration was actually a benefit in later versions as portions of the DOS kernel got replaced with 32 bit counterparts.
I think LOADLIN worked more because of the lack of security in DOS. As far as I know it generally doesn't work in any of the virtualized DOS environments, so I (somewhat ironically) consider it at the very least highly misleading for LOADLIN to be characterized as "more or less the way DOS works".
It's sometimes possible with security vulnerabilities to red pill modern versions of Windows and insert some other kernel underneath, but I don't think that would typically be considered 'the way Windows works'.
(1) I know NT allows config.sys and autoexec.bat, but it's not "booting" a real DOS environment, but using the files as part of an emulated DOS environment.
> Why is it "highly misleading" to characterize technically possible but undocumented and unsupported capabilities as "internal". Internal APIs are often undocumented and unsupported, how is this different?
Because, again, the virtualization _is there_, exposed for all users to use, and the PIF does allow you to change parameters for the virtualization such as the amount of memory, and this is very well documented. My very original message says that you _require_ this virtualization to run any meaningful DOS program, so it is definitely there.
Sure, it's hardwired to boot a specific DOS version. And while true, this "limitation" is practically meaningless in the big context of being able to run DOS programs, since DOS programs tend to be _more complicated_ than real-mode kernels.
> The level of DOS integration was actually a benefit in later versions as portions of the DOS kernel got replaced with 32 bit counterparts.
This is irrelevant, in part because DOS programs _like to replace portions of the DOS kernel too_, which implies that whatever you do you must still support the ability to entirely replace whichever OS is being run in the VM.
> Does the PIF file under Windows (16 bit derivatives and Win9x) allow you to specify kernel, config.sys, autoexec.bat?
No, but it does allow the user to change parameters for the virtualizer, for example. And you can still trivially load device drivers (e.g. using devload) and/or TSRs, which again are going to override/replace parts of the loaded DOS, and this is the normal way of life for DOS.
> I think LOADLIN worked more because of the lack of security in DOS
You have to understand something: this is the way _every DOS program works_. They'll hook DOS interrupts, they'll hook hardware interrupts; they do whatever they want with the operating system. Some DOS office programs even literally swap out the DOS kernel to disk to gain a couple of extra KB of RAM.
You still seem to think that this is something rare, and thus make the wrong analogy with vulnerabilities in the Windows kernel. But this is just the wrong view. DOS is a _minuscule_ OS in comparison, so it is actually _more common_ for programs to replace, hook and extend it than not to. At least for every program larger than a hello world. It is, indeed, the way DOS works.
Whatever DOS game you can think of is almost 100% certain to be bypassing DOS _entirely_ and banging the hardware on its own.
There's a reason FreeDOS has to keep some undocumented DOS memory structures literally in the same memory addresses. Practically nothing would run otherwise.
This is the reason _all_ DOS boxes that want to have a modicum of compatibility with real DOS software have to basically be a full PC virtualizer, and you can't just ship say a modified version of DOS that redirects input/output to your new OS and can run DOS programs as native processes (e.g. like Wine does for Win32).
Loadlin does work in a DOS box, but Linux won't (it expects a 386). I have used it to boot other kernels from time to time. But otherwise it is just to show that once DOS loads your program you have full control of the computer, and that is no different from your average DOS program.
As some added context, that is probably[1] true for any Windows version that uses v86 (which implies at least a 386 and enhanced mode windows), not so much for any earlier, non-enhanced Windows, or any Windows running on anything less than a 386.
In those, a DOS box is relatively far from a "separate VM".
But the same would apply to OS/2.
[1] I say "probably" because I haven't verified the limitations that Window may apply on its v86 tasks. It's at least possible that there's some tight integration between the DOS version that Windows is "running on" (which remained a thing for any non-NT-based Windows, including 95,98,me) and the "DOS inside the DOS box". Which, yes, would limit what software you can run, but then so does for example the need of protected mode DOS software to use DPMI/VCPI to be able to run in a DOS box, already. Some games just would not run in DOS boxes, that's why you could still boot "DOS mode" in Win95 and later. It's also possible that there is tight integration between Windows and the DOS box in other ways that also adds limitations.
This is all kinda obvious if you're familiar with the 386 hardware and early PC history.
The OG 8086 was a chip from a different world than what a modern developer is used to (even a modern PC developer). The 8086 is definitely rooted in the tradition of earlier CPU's of the first days of home computing in the 1970's. While it had segments instead of banks, the 8086 was a near-contemporary and near-peer of classical bank-switched 16-bit CPU's of the 1970's, like the Zilog Z80, Motorola 68000, or MOS 6502.
After the IBM PC had already gotten popular, people wanted to bring protected mode OS's and software to the PC (especially as these were standard features on the "big boy computers," i.e. time-shared multiuser systems like mainframes and VAX's). So Intel made the 286 and IBM put it in the PC/AT and its successor, the PS/2.
The 286 suffered from a number of design issues. It was still 16-bit. And critically, you couldn't easily mix protected mode and real mode. This meant anyone who wanted to run a fancy new protected-mode OS had to sacrifice their entire existing userspace of DOS software. Interest in the feature was extremely tepid; most people simply used their 286 as a faster 8086.
For the next chip, the 386, Intel learned from those mistakes. They added all the infrastructure needed to virtualize a complete 16-bit real-mode system (I/O ports, memory ranges, interrupts). The 386 has three operating modes: Real mode, protected mode, and virtual 8086 mode. The latter is what they use for DOS emulation in OS/2 and Windows 95.
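Mechanically, a monitor enters v86 mode by executing an IRET whose saved EFLAGS image has the VM bit (bit 17) set; the frame also carries the guest's real-mode segment registers. A hedged sketch of just that stack frame (layout per the 386 manuals; the struct name and example values are mine, for illustration):

    #include <stdio.h>
    #include <stdint.h>

    /* Stack image popped by IRET when resuming a virtual-8086 task,
       lowest address first: EIP, CS, EFLAGS, ESP, SS, ES, DS, FS, GS. */
    struct v86_frame {
        uint32_t eip, cs;       /* real-mode CS:IP to resume at */
        uint32_t eflags;        /* must have VM (bit 17) set */
        uint32_t esp, ss;       /* real-mode stack */
        uint32_t es, ds, fs, gs;
    };

    #define EFLAGS_VM (1u << 17)
    #define EFLAGS_IF (1u << 9)

    int main(void)
    {
        /* e.g. resume guest code at 0x0070:0x0000, interrupts enabled;
           bit 1 of EFLAGS is architecturally always 1 */
        struct v86_frame f = { 0x0000, 0x0070,
                               EFLAGS_VM | EFLAGS_IF | 0x2,
                               0xFFFE, 0x0000, 0, 0, 0, 0 };
        printf("EFLAGS image for v86 entry: 0x%08lX\n",
               (unsigned long)f.eflags);
        return 0;
    }

Once the guest is running, privileged things it does (CLI, INT, port I/O, depending on IOPL and the I/O bitmap) fault into the monitor, which emulates or reflects them; that trap-and-emulate loop is what the OS/2 and Windows DOS boxes are built on.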
The OS/2 2.x DOS emulator was pretty amazing technology back in 1992. Equally amazing is that there was little technical reason it couldn't have been released much earlier; the 386 debuted in 1985!
It was around this time that OS developers basically decided "Segments were a mistake. Let's just give every process a flat address space and use the MMU to manage memory with pages rather than segments," a design decision doubtlessly helped along by the fact that the 386 MMU could manage memory with pages rather than segments. (This was, in hindsight, a solid design decision that has stood the test of time. C programmers these days have no idea what a far pointer even is, and it's probably better that way.)
Unfortunately the 386's protected mode had its own fatal flaw: it was only designed with the necessary traps to emulate a real-mode guest. Hardware support for emulating a protected-mode guest didn't appear until well after the turn of the millennium, with Intel Vanderpool and AMD Pacifica. (Not coincidentally, modern cloud computing and VPS hosting -- which we all take for granted today -- started to appear not long after. Before this time, "hosting" was understood to mean "rent a whole server" or "rent a UNIX / Linux shell account on a shared server.")
From hands-on experience, I've learned that v86 initially had some severe, ugly warts, which made writing a v86 "hypervisor" way more tedious, and make it run slower, than what seemed reasonable. These limitations were addressed in much later CPUs in the so called "VME" extensions, for which there was apparently quite some drama related to NDAs, and subsequent reverse engineering: https://www.rcollins.org/articles/vme1/
My (long abandoned) toy OS used virtual 8086 to provide "DOS services" support. The idea was that I could focus on the interesting parts of the OS, and outsource everything that I either hadn't gotten to yet, or that I didn't want to get to, to DOS.
That included several devices, the filesystem, an entire shell (command.com isn't great, but better than nothing), even networking through packet drivers if needed. The OS was a "modern", full protected-mode, truly multitasking OS, but one of its tasks was the DOS instance you started it from, transplanted as a v86 task, and my OS could just call into that (and vice versa, so you could run DOS programs that call into my OS). Initially, even allocating memory would just ask DOS to do the bookkeeping!
(I never got to implement VME since I made v86 work on itself, and I dropped this toy project once my work fully shifted to working on real OSes at that level anyway.)
> outsource everything that I either hadn't gotten to yet, or that I didn't want to get to, to DOS.
Good plan.
I think I wrote, circa 2000, that BeOS was the best PC OS I'd ever seen by far -- but the critical lack of apps was a deal-breaker. But if it had a DOS shell, I could still in 1999-2000 have done all my actual work in DOS apps, and just used native BeOS for Internet stuff. But it never got a DOSbox.
Some of this is quite revisionist. Intel didn't learn from 8086 mistakes when they designed the 386. They had an order of magnitude more transistors to play with, which allowed them to implement features that weren't achievable in the 8086's time frame (1976).
If they learned from mistakes it was from the iAPX432.
IIRC, hardware virtualization in x86 CPUs came after VMware had demonstrated the concept and proven the market, using a software emulation approach.
Hardware virtualization with feature parity came very late. Hardware virtualization limited to real mode VMs came with the 386 itself, as the commenter you replied to mentioned.
It was absolutely crucial to its success. The 286's protected mode was considered useless, more or less, because it could not coexist with DOS apps (not without severe hacks at least, stuff like resetting the entire CPU into real mode, something earlier OS/2 did by the way). Intel learned massively from that mistake and made real mode VMs a first class citizen in the 386, so I don't think it's fair to say that Intel didn't learn anything coming into the 386.
Now, as I've mentioned here elsewhere, v86 had some severe warts that made it a bit painful (and that were only corrected in the Pentium and late 486s), but it worked, and it was a cornerstone feature. Which part do you think was revisionist?
Surely 6800 rather than 68000. You can't put the 68000 in the same category as the 6502 and 8086! It just isn't fast enough!
(Also: the 24-bit address bus meant not much need for bank switching. The 16 MB limit wasn't a problem until well after 68000 systems had become outdated, or superseded by equivalents based on the (fully 32-bit) 68020 or later.)
> classical bank-switched 16-bit CPU's of the 1970's, like the Zilog Z80, Motorola 68000, or MOS 6502.
Hang on a cotton-pickin' minute.
This is mixing up 3 different things.
1. 6502 and Z80 were 8-bit, not 16-bit.
2. 68000 is a 1970s chip but it doesn't belong in there with those 2. Its peers were the 65816 and maybe the Z800. The one that would belong was the 6800 or 6809.
3. The 6502 and Z80 were both used in machines with bank-switching, but the CPUs themselves couldn't do it. They needed the chipset to do that, and the early 8-bit micros didn't have it, so their 16-bit address bus limited them to 2^16 bytes of RAM: 65,536 bytes, meaning 64kB.
I think part of the brilliance of the 8088/8086 was that it brought the bank-switching logic on board and into the instruction set, so it was in some ways a sort of 8-bit chip that could natively handle 1MB of memory space... clumsily, via 16 separate 64kB blocks, but what mattered is that it could do it at all.
1. I was using the modern definition of a processor's "bitness", i.e. the size of a memory address. (Modern x86s are considered "64-bit" even though you can use up to 512 bits at once with AVX instructions.) To be fair, you're absolutely correct: those chips were assigned their "bitness" by data/register width, not address size.
2. I'm not super familiar with Motorola; I'll defer to you on this one.
3. Yes, I know. My point is that a modern developer used to programming in a flat address space, upon learning about segments for the first time would say "Ewww, why did they do it that way? Madness!" But if you consider the history of bank-switched systems, segments make a bit more sense.
But the madness is lurking under the surface. "Why does the DOS memory allocator always give me the beginning of a segment no matter how much I allocate?" I once asked. "Isn't it incredibly wasteful to always give you 64k regardless of how many bytes you request?" Only much later did I find out that the x86 only supports 20-bit addresses and internally runs the calculation phys_address = segment*16 + offset. "Each segment is its own private 64k" was the natural (wrong) mental model I'd constructed; "segment n and n+1 overlap except in the first/last 16 bytes" is quite counterintuitive. So the "wasteful" allocations only "waste" at most 15 bytes.
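The arithmetic is easy to sanity-check in a few lines of C -- a toy demo of the 8086 rule described above, nothing more:

    #include <stdio.h>
    #include <stdint.h>

    /* 8086 real-mode address translation: phys = segment*16 + offset */
    static uint32_t phys(uint16_t seg, uint16_t off)
    {
        return (uint32_t)seg * 16u + off;
    }

    int main(void)
    {
        /* adjacent segment values overlap except for 16 bytes:
           both of these name the same physical byte, 0x12350 */
        printf("0x1234:0x0010 -> 0x%05lX\n", (unsigned long)phys(0x1234, 0x0010));
        printf("0x1235:0x0000 -> 0x%05lX\n", (unsigned long)phys(0x1235, 0x0000));

        /* top of the 20-bit space: on a real 8086 this address wraps
           around to 0x0FFEF (the A20 story) */
        printf("0xFFFF:0xFFFF -> 0x%06lX\n", (unsigned long)phys(0xFFFF, 0xFFFF));
        return 0;
    }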
In hindsight I tend to think that the admittedly strong demand for flat memory, and the reduction in use of CPU protection rings, segments and so on, was -- taking a high-level view -- a huge mistake from which we have yet to recover.
But yeah, from everything I read at the time, as PCs became serious tools able to run serious apps, programmers hated segments.
Outside of that world, though, OSes like SCO Xenix and MWC Coherent made good use of the 286. It was actually very capable.
Before that, in the early days of the 8088/8086, I think there was some value in the segment model for how it facilitated bringing across CP/M apps. When I got started in this industry, the 1980s DOS world was still dominated by modern versions of apps that originated on CP/M, which were in the late '80s just getting displaced by a new generation of much more powerful DOS apps designed for a PC with the full complement of 640kB of RAM and a hard disk.
So, things like:
* dBase II (CP/M) -> dBase III (PC and DOS)
* WordStar 2/3 (CP/M) -> WordStar 3/4, 2000, etc. (PC and DOS)
* SuperCalc (CP/M) -> SuperCalc 5 (PC and DOS) -- a late resurgence.
... were supplanted by tools like FoxPro or Clipper, WordPerfect, and Lotus 1-2-3.