Looking Glass: Run a Windows VM on Linux in a Window with Native Performance (hostfission.com)
126 points by cercatrova on April 18, 2020 | hide | past | favorite | 42 comments


This works well for the most part. However, there are some things worth considering.

For one, the frame relay program will need to start in your guest before Looking Glass can connect. Last I checked the best you can do is have it start at login, so you need autologin to have a somewhat seamless setup.

Secondly, the cursor is sent through using the SPICE protocol, which is convenient. Unfortunately Looking Glass has no support for also handling audio, and no intent to merge such functionality in the future. So if you want audio, you’ll have to set up Qemu to forward audio directly to the host.
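For reference, forwarding guest audio to the host looks roughly like this with QEMU's newer `-audiodev` syntax (QEMU 4.2+). This is a sketch, not the only way to do it: the `snd0` id is arbitrary, the PulseAudio backend is just one of several, and the `...` stands in for the rest of your VM arguments:

```shell
# Sketch: route guest audio to the host's PulseAudio server via an
# emulated Intel HDA device. "snd0" is an arbitrary audiodev id.
qemu-system-x86_64 \
  ... \
  -audiodev pa,id=snd0 \
  -device ich9-intel-hda \
  -device hda-output,audiodev=snd0
```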

Finally, GPU passthrough is the thing that sold me on NixOS. If you're a fairly experienced Linux user, expressing this complicated, multifaceted setup in a mostly reproducible way with a single configuration file is pretty pleasing. Most of my configuration was this:

https://gist.github.com/jchv/b0e4b39679e450536a17cc6a5d69169...

Really, all you need to do is substitute the right IDs and it should work practically anywhere. From there, set up a VM with the requirements met (UEFI, PCI-e device passthrough, Looking Glass device attached, and KVM FR server running) and run Looking Glass.
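For anyone wondering what "Looking Glass device attached" means concretely: on plain QEMU it's the IVSHMEM shared-memory device the guest's frame relay writes into. A sketch of the relevant arguments (the 32M size is only an example and depends on guest resolution; the Looking Glass docs have the sizing formula):

```shell
# IVSHMEM shared-memory device for Looking Glass. /dev/shm/looking-glass
# is the conventional path; size must be large enough for the framebuffer.
-object memory-backend-file,id=ivshmem,share=on,mem-path=/dev/shm/looking-glass,size=32M \
-device ivshmem-plain,memdev=ivshmem
```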

Ultimately, I dropped Looking Glass and GPU passthrough. It was a little cumbersome and performance wasn’t perfect. I ran GPU passthrough on a physical monitor for a little while, and then dropped it entirely after finding out that Steam Proton and Wine actually covered all of my bases anyways.


Huh, cool! I’d love to run Illustrator/Photoshop on my NixOS box. I presume you use looking-glass-client from nixpkgs. Any other parts of the config you can share/recommend?


Not that I can recall! Yeah, I was using that one. In fact, when I wanted to upgrade, I actually submitted PRs to Nixpkgs for it. In the meantime, though, you can of course just use overlays or overrides to get a more up to date looking-glass-client in your local setup.

Now, that does actually bring up at least one nice bit of configuration, which is how to configure your overlays to be consistent system-wide. It's covered on the wiki. I'll be honest, I still find overlays a little bit confusing, but I eventually got this setup working. It has a lot of advantages over other methods, my favorite of which is that all of my overlay stuff is consistent across the machine and stored in my Git-versioned NixOS configuration. You might already have this, but it's worth noting.
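For illustration, a system-wide overlay in configuration.nix looks roughly like this; the looking-glass-client override is a hypothetical example, and you'd point it at whatever newer source you actually want:

```nix
nixpkgs.overlays = [
  (self: super: {
    # hypothetical: swap in a newer looking-glass-client here
    looking-glass-client = super.looking-glass-client.overrideAttrs (old: {
      # src = ...; version = ...;
    });
  })
];
```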

https://nixos.wiki/wiki/Overlays#Using_nixpkgs.overlays_from...

Other than that and setting up the libvirt box, there wasn't really much else to it. I had it done in an hour or two. Pretty sure the only other NixOS configuration I had was setting my user to be in the `libvirtd` group.
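For completeness, those libvirt bits in NixOS are roughly the following (the username is a placeholder):

```nix
virtualisation.libvirtd.enable = true;
users.users.yourname.extraGroups = [ "libvirtd" ];
```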


I saw a recent thread about running an X server on Windows for Linux applications, and this is sort of the opposite. You can run a Windows VM with GPU passthrough on a Linux host, and if you want to use both side by side without switching windows you can use a VNC window or similar, but then you don't get hardware-accelerated output. Looking Glass solves this by copying the Windows GPU's frame buffer over to the Linux side so it can be read transparently. It's also open source.

Source: https://github.com/gnif/LookingGlass

Quickstart: https://looking-glass.hostfission.com/wiki/Installation

Reddit discussion: https://old.reddit.com/r/vfio

This only works with Windows on a Linux host. There have been rumors that Apple might implement their own version for running a Windows VM (or another VM) transparently on a macOS host, but those are based only on some VirtIO development that seems to be going on in macOS. Or maybe it's to virtualize iOS or iPadOS, who knows?

Personally I use Proxmox (a Linux hypervisor based on QEMU/KVM, with additional features and a web GUI for administration) to run Windows, macOS, and Linux simultaneously, as well as to host containers for side projects and other self-hosted services.


What's your experience with running macOS under ProxMox? Do you have a guide for that you can point to?


Nick Sherlock has great guides: https://www.nicksherlock.com/2019/10/installing-macos-catali...

I'd recommend trying OpenCore instead of Clover though. It's more stable even in a VM, and you don't need all the kexts: the hardware behavior those kexts emulate for a bare-metal machine is, in a VM, already emulated.

You can also check out /r/Hackintosh, they have some info on it.


Thanks for this.

> Personally I use Proxmox

How do you find it to manage? With ESXi it’s quite easy to manage with Fusion, so I keep going back. Are you using the browser for Proxmox and Apple Remote Desktop/VNC to get into the machine?


Proxmox is very easy to manage; it has a web GUI. As for macOS, I either use VNC (NoMachine) or I switch the inputs on my monitor. I have each GPU hooked up to my monitor, which has 4 display inputs. For keyboard and mouse I then have to use something like evdev passthrough or Barrier, but it's usable.
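The evdev option mentioned above can be done directly in QEMU with input-linux objects; a sketch, with placeholder device paths (find your own under /dev/input/by-id/). Pressing both Ctrl keys toggles the grab between host and guest:

```shell
# Pass the host's physical keyboard and mouse to the guest via evdev.
# Both paths below are placeholders for your actual devices.
-object input-linux,id=kbd0,evdev=/dev/input/by-id/YOUR-KEYBOARD-event-kbd,grab_all=on,repeat=on \
-object input-linux,id=mouse0,evdev=/dev/input/by-id/YOUR-MOUSE-event-mouse
```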


And pct can execute things inside containers so you can do everything from the host.
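For example (container ID 101 is a placeholder):

```shell
pct exec 101 -- systemctl restart nginx   # run one command inside CT 101
pct enter 101                             # or get an interactive shell in it
```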


Thanks - that way of passing through the graphics would never have occurred to me.


I use Proxmox on a server where it manages a herd of containers plus a few VMs for those times I need to run Windows. On a single DL380G7 Proxmox shepherds the following:

Containers:

- router (OpenWRT)

- mail

- web

- media (serves audio, video, books, photos etc)

- database

- authentication, central letsencrypt instance

- remote desktop/app session

- build services

- ghidra (NSA's decompiler)

- peer to peer apps

- 'cloud' (Nextcloud)

- comms (Jitsi meet, Nextcloud Talk + related services)

VM:

- Windows, several versions for experimenting

- ELSA (VAG online service manual)

All on a single box, easily managed through either the Proxmox web interface or - for those things which the web interface does not facilitate - through the CLI tools or by editing LXC or QEMU config files.

While there are some areas where I have had to intervene to make things behave the way I want them to behave (e.g. I don't want to use ZFS since I prefer LVM raid so I had to set up storage 'by hand', the snapshot backup system does not cope with FUSE filesystems so I have to use hook scripts to unmount those before backup and remount them afterwards, etc) all in all the experience with Proxmox has been mostly positive.
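To make the hook-script approach concrete, here is a minimal sketch; the FUSE mount point /mnt/fuse-media is hypothetical, and the real mount/unmount commands are left as comments. vzdump invokes the hook with the phase name as its first argument:

```shell
# Sketch of a vzdump hook script. The echo lines stand in for the real
# actions; in practice you'd run the commented fusermount/mount commands.
hook() {
  case "$1" in
    backup-start)
      # fusermount -u /mnt/fuse-media   # unmount the FUSE fs before backup
      echo "unmount" ;;
    backup-end)
      # mount /mnt/fuse-media           # remount once the backup is done
      echo "remount" ;;
    *) : ;;                             # ignore the other vzdump phases
  esac
}
hook "$@"
```

It gets wired up via the `script` option in /etc/vzdump.conf (or per-job), so every snapshot backup brackets itself with the unmount/remount.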


Impressive. Can you give a rough guideline about how much men and CPI you allocate to each task (and how much physical men and CPU are in the host)?


I assume that by 'men' you mean 'cores'? If so, the host has two 6-core/12-thread CPUs (X5675@3.07GHz) and 128GB of memory. The internal array is populated with 8x146GB, and the machine is hooked up to a NetApp DS4243 with 24x600GB (20 of which are in use, 4 cold spares). The machine has plenty of spare capacity, both CPU-wise as well as memory-wise. Storage is getting a bit short; eventually I'll start swapping drives in the DS4243 for higher-capacity versions.

The whole contraption is housed in a sound-proofed cabinet with the equipment in the top and produce drying racks in the bottom. A forced-draft fan all the way at the bottom of the rack draws in air through a large truck air filter in the top. This way the spare heat is used for a productive purpose instead of just being wasted.

Resources are used as needed, when a container seems to be running close to capacity I add cores, memory or storage. In the current configuration I assigned 2 cores/1GB to mail, 2 cores/1GB to auth, 8 cores/8GB to build, 6 cores/16GB to (data)base, 8 cores/16GB to serve (which runs a host of web services), 6 cores/8GB to session (which runs remote desktop/single app sessions through X2go), 4 cores/1GB to panopticon (CCTV), 2 cores/1GB to p2p, 4 cores/4GB to comms and 4 cores/512MB to router. Total load on the machine is negligible, memory pressure is... absent (currently ~8GB in use, ~84GB buffer/cache, ~36GB free).

A machine like this one can be had for not that much ex-lease. As long as you get a G7 or newer power usage is acceptable, G6 and under are power hogs.


Thanks. I was writing on mobile, and autocorrect corrected “mem” as in memory to men without me noticing.... sorry.


Slightly off-topic:

I'm getting ready to build a new Linux gaming PC. I'm considering installing Windows 10 in a guest VM so I can run a few modern games that do poorly under Steam / Proton.

Any suggestions for good sources of info regarding which software stacks (e.g., Ubuntu 19.10 + VMware) are likely to provide a good experience? Graphics performance matters.

I'm unwilling to run Windows 10 as the host OS because of personal feelings about telemetry.


My main desktop runs Linux, but I do have a Windows VM (on qemu/libvirt) that I use primarily for Fusion 360 and some gaming.

I had to set up GPU passthrough with a second GPU for this.

It is not that hard; you will, however, need to make sure your motherboard supports proper IOMMU grouping. That is the tricky part, since most motherboard manufacturers do not really provide this information.

[1] More info on ArchWiki: https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVM...
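The usual way to check the grouping (adapted from the kind of snippet on that ArchWiki page) is to walk /sys/kernel/iommu_groups; if it's empty, the IOMMU is off or unsupported:

```shell
# List each IOMMU group and the PCI devices in it, so you can verify
# your GPU sits in its own group. Prints nothing if the IOMMU is off.
list_iommu_groups() {
  base="${1:-/sys/kernel/iommu_groups}"
  for dev in "$base"/*/devices/*; do
    [ -e "$dev" ] || continue   # glob didn't match: no groups present
    group=$(basename "$(dirname "$(dirname "$dev")")")
    echo "group $group: $(basename "$dev")"
  done
}
list_iommu_groups
```

Feed each PCI address to `lspci -nns` to see what it is; for passthrough you want the GPU (and its HDMI audio function) isolated in their own group.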


So IIUC your approach, you're exposing the actual GPU to the guest OS via PCI-passthrough. And in practice that means you'll want two GPUs: one for the host and one for the guest.

But looking at the online docs for VMWare Workstation, it sounds like they take a different approach: the guest VM has a virtual device driver that lets both host and guest OSs use the same GPU at the same time.

I would think the VMWare Workstation-like approach is preferred, because it avoids the hassle of using two graphics cards. I know that Workstation is expensive ($250) and not OSS. Besides those two reasons, are there still reasons to prefer the PCI-passthrough approach?


> Besides those two reasons, are there still reasons to prefer the PCI-passthrough approach?

The software approach generally has lousy performance due to the extra overhead of forwarding rendering commands between the guest and host. In addition, since this approach requires writing a completely new graphics driver for the guest, there tend to be limitations with rendering API compatibility, as well as bugs. With the hardware approach, the guest can use the GPU vendor’s normal driver.


Just yesterday I finally got around to running actual tests of graphical performance between my host (Windows 7 Ultimate, NVIDIA GTX 950 - not top of the range :)) and guest (Windows 10 Pro running inside VMware Workstation 15). Used "PerformanceTest 10" from Passmark. Host scored 6015 and ran all benchmarks nicely. Guest scored 1645, only ran some benchmarks (due to lack of DX12 and compute capabilities), and even those were penalized.

Virtualbox guest didn't participate in the contest, as I find graphical performance of VBox guests noticeably worse.

I must add though, that for desktop tasks VMWare Workstation performance is absolutely adequate, able to run double 2K monitors wonderfully. For gaming setup I'm personally planning to play with PCI-passthrough. The only problem is not to get lost in the tinkering, before playing any actual games :)


I did not try VMWare Workstation, but I did try VMWare Player and VirtualBox to see if I can get away with running Fusion 360 on a plain VM, and both had extremely horrible performance.

I remember even SketchUp wouldn't start in VirtualBox. I was able to get SketchUp running in VMWare Player, but the performance was very bad.

edit: this was also about 1.5 years ago; not sure how much improvement VMware and VirtualBox have made since then in terms of virtualizing GPU calls.

That is why I ended up setting up a VM with GPU passthru. It works extremely well for my usecase. I was able to score a cheap GPU for my Linux host (since I do not do anything graphically intensive on Linux, I just need to drive 3 monitors), and was able to passthru my more expensive GPU to the Windows VM. I would assume that if you have integrated graphics on your CPU, you could use that for your host OS and get away with having only a single GPU.


With your setup, is there a convenient way to flip back and forth between which GPU is used by the Linux host vs. which GPU is used by the Win10 guest?

I'd still like to use Windows 10 only for those games that I can't make work on Linux.


I use a separate GPU for my linux host and the windows VM.

I am 99.9% sure you can not passthru a GPU to a VM that is being used by the host.

edit: I think I misunderstood your question. The GPU that gets passed thru to the VM never gets initialized by the host system during boot. So, if you want to change which GPU gets passed thru to the VM, you will have to reboot to allow your host to use the other GPU (and not init the one you want passed thru to the VM).
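Concretely, "never gets initialized by the host" usually means binding that GPU to the vfio-pci stub driver at boot. A sketch of /etc/modprobe.d/vfio.conf; the vendor:device IDs are placeholders, so substitute your own from `lspci -nn`:

```shell
# Claim the passthrough GPU (and its HDMI audio function) before the
# real driver can. The IDs below are placeholders for your own card.
options vfio-pci ids=10de:1b81,10de:10f0
softdep nvidia pre: vfio-pci
```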

I would suggest you look at a youtube video where they run thru setting this up, so you get a better idea of what this entails!


Thanks! I'm still deciding if it's worth the effort just to play certain games on Windows 10. The ease with which I can change the OS-GPU association may be the deciding factor.


I haven't tried it myself, but there's a bit of info on the /g/ wiki with a hardware database linked at the bottom.

https://wiki.installgentoo.com/index.php/PCI_passthrough


I've used PCI passthrough with VMWare ESXi. It's good when it works, but when it doesn't, you're out of luck. I doubt GP will have much luck with VMs once it gets to 3d acceleration.


GPU passthrough with qemu/kvm works spectacularly for 3D applications; I've been using it for years to play Windows-only games from my Linux desktop.


I run Proxmox (Debian) with an AMD GPU as well as Nvidia. I can pass both to Windows and they work fine, the Nvidia one is much more of a hassle though, you can run into Code 43 issues, meaning that Nvidia detects you're in a VM and refuses to start. On AMD this is much less of a problem.


As far as I can tell you need two GPUs and all this does is efficiently copy the second GPU (assigned to a VM) output to a window on the main GPU (assigned to the host).

This seems mostly useful for laptops with both integrated and discrete GPUs but no extra monitor output and for non-fullscreen 3D on desktops, since for fullscreen 3D on desktop you can just run a second cable to the monitor, which will be faster and allows the guest to drive monitor sync.


>This seems mostly useful for laptops with both integrated and discrete GPUs but no extra monitor output and for non-fullscreen 3D on desktops, since for fullscreen 3D on desktop you can just run a second cable to the monitor, which will be faster and allows the guest to drive monitor sync.

I've been doing it for years (albeit with QEMU), even with additional monitors available.

It's nice to keep your workflow where you want it. I use a unix-alike OS nearly always, but work will inevitably pull me into some Windows software that'll bitch about GPU acceleration (say, Autodesk Fusion) even if it doesn't need it for the task.

It's nice to keep that window where I want it. Sure, I lose sync/CEC stuff, but I can drag that window across my entire workspace of monitors, and that's pretty powerful when you're stuck in a VM for a single piece of software.

When I use a VM for gaming, that's when I run the GPU by itself to a separate monitor. Gaming VMs are run weirdly anyway -- they'd probably have access to an entire block device rather than some VDI equivalent, PCI access, etc. That's more like running two computers off of one mainboard at that point; there's not much point keeping the isolation sanitary.


Can you play multiplayer games and not get banned?


This depends on the game, but most work, and a VM is usually not a reason to ban players. Some anti-cheat systems, like the one from Valorant, do try to detect VMs and refuse to run on them, though.


I haven’t heard of this. Why is gaming from a VM blocked?



Looks like that's for running it on Linux via Proton, not for running it in a VM.


I hear MMOs don't like it because it makes it easier for multiboxers.


You can cheat by reading and modifying memory from a VM from the host instance, or something like that.


Yes, for muxless laptops this can be especially useful. However, it is also useful just for ergonomics, where I (who only has one monitor and has no interest in multiple) can make one section of a tiling window manager into Windows and leave it there, and interact with it when needed, without switching display outputs and also mouse/keyboard, which is the bigger hassle.


On a side note, this is an excellent landing page. My first thought when looking at the title was "I wonder if it does VGA pass-through." The second line of the landing page answered my question. Good job on the details so far! Can't wait to test it out.


Funny coincidence to have this article on the front page while fixing the last sentences in my blog post. This post might help some people in regard to audio passthrough with PulseAudio in QEMU/KVM, which had a lot of problems until recently. Check it out: https://bitkeks.eu/blog/2020/04/compile-qemu-42-on-debian-bu... Related Tweet: https://twitter.com/bitkeks/status/1251892611343880194


I'll give this a try. At my last job I needed Windows to run Spark AR, Facebook's tool for creating augmented reality filters for Instagram. I was using VMware, but it was glitchy.


If you are just running full-screen applications (like games or other software) and you have multiple screens, it is better to just use scripts to switch display inputs (via DDC/CI).
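A sketch of such a script using ddcutil, assuming your monitor speaks DDC/CI. The input codes below are the common MCCS values for VCP feature 0x60 (input source), but they vary per monitor, so check `ddcutil capabilities` first:

```shell
# Map a friendly name to a VCP 0x60 input-source code. The values are
# common defaults (HDMI-1 = 0x11, DisplayPort-1 = 0x0f) but not universal.
input_code() {
  case "$1" in
    hdmi1) echo 0x11 ;;
    dp1)   echo 0x0f ;;
    *)     return 1 ;;   # unknown input name
  esac
}
# Actual switch (commented out since it needs real hardware):
# ddcutil setvcp 0x60 "$(input_code hdmi1)"
```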

I wasn't really able to find a usecase for looking glass. (But it's an interesting idea.)


You can also use evdev or Barrier.



