airlied (airlied) wrote,

disconnected VM operation and 3D

So one of the stumbling blocks on my road to getting 3D emulation in a VM is how most people use qemu in deployed situations: either via libvirt or the GNOME Boxes frontend.

If you are using libvirt and have VMs running, they have no connection to the running user session or user X server; they run as the qemu user and are locked down in what they can access. You can restart your user session and the VM will keep trucking. All viewing of the VM is done using SPICE or VNC. GNOME Boxes is similar, except it runs things as the user, but it's still not tied to the user session AFAIK (though I haven't confirmed this).

So why does 3D make this difficult?

Well in order to have 3D we need to do two things:

a) talk to the graphics card to render stuff
b) for local users, show the user the rendered stuff without reading it back into system RAM and sticking it in a pipe like SPICE or VNC; remote users get readback and all the slowness it entails.

Now in order to do (a), we face a couple of scenarios we'd like to handle:

1. user using open source GPU drivers via mesa stack
2. user using closed source binary drivers like NVIDIA's, or worse, fglrx.

The normal way to access the graphics card is via OpenGL and its windowing APIs like GLX. However, this requires a connection to your X server: if your X server dies, your VM dies; if your session restarts, your VM dies.

For scenario 1, where we have open source KMS-based drivers, the upcoming render nodes support in the kernel will allow processes outside the X server's control to use the capabilities of the graphics card via the EGL API. This means we can render offscreen in a separate process, which mostly solves problem (a), how to talk to the graphics card at all.
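To make that concrete, here's a rough sketch of what talking to a render node via EGL could look like. The device path, the GBM-device-as-native-display trick, and surfaceless contexts (EGL_KHR_surfaceless_context) are all assumptions about how Mesa's support shapes up, not settled API:

    /* Minimal sketch: off-screen desktop GL via a DRM render node, no X.
     * Assumes Mesa accepts a GBM device as the EGL native display and
     * supports EGL_KHR_surfaceless_context; the node path is illustrative. */
    #include <stddef.h>
    #include <fcntl.h>
    #include <gbm.h>
    #include <EGL/egl.h>

    int main(void)
    {
        /* Render nodes sit outside the X server's DRM master entirely. */
        int fd = open("/dev/dri/renderD128", O_RDWR);
        if (fd < 0)
            return 1;

        struct gbm_device *gbm = gbm_create_device(fd);
        EGLDisplay dpy = eglGetDisplay((EGLNativeDisplayType)gbm);
        eglInitialize(dpy, NULL, NULL);

        /* Ask for desktop GL, not GLES; the guest expects full OpenGL. */
        eglBindAPI(EGL_OPENGL_API);

        static const EGLint attrs[] = {
            EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
            EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
            EGL_NONE
        };
        EGLConfig cfg;
        EGLint n;
        eglChooseConfig(dpy, attrs, &cfg, 1, &n);

        EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
        /* Surfaceless: no window, render into FBOs instead. */
        eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, ctx);
        /* ... set up FBOs and process guest rendering here ... */
        return 0;
    }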

Now for scenario 2: so far NVIDIA has essentially no EGL support for its desktop GPUs, so until they ship at least that, we are kinda out in the cold in terms of completely disconnecting the rendering process from the lifecycle of the running user X server.

This leaves problem (b): how do we get the stuff rendered using EGL back to the user session to display it? My initial hand-wave in this area involved EGL images and dma-buf, but I get the feeling on subsequent reads that this might not be sufficient for my requirements. It looks like something like the EGLStream extension might be more suitable; however, EGLStream suffers from only being implemented in the NVIDIA Tegra closed source drivers from what I can see. Another option floated was to somehow use an embedded wayland client/server somewhere in the mix. I really haven't figured out the architecture for this yet (i.e. which end has the compositor and which end is the client; perhaps we have both a wayland client and compositor in the qemu process, and then a remote client to display the compositor output, otherwise I wonder about lifetime and disconnect issues). So to properly solve the problem for open source drivers I need to either get EGLStream implemented in mesa, or figure out what the wayland hack looks like.
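For reference, the dma-buf direction would look roughly like this on the displaying side, assuming the EGL_EXT_image_dma_buf_import and OES_EGL_image extensions are available: the fd arrives over a Unix socket and gets wrapped in an EGLImage. The helper name is made up and this only covers a single-plane ARGB8888 buffer; the producer/consumer synchronization it doesn't address is exactly the part I suspect isn't sufficient, and what EGLStream is supposed to handle:

    /* Sketch of the display side: wrap a received dma-buf fd in an
     * EGLImage and bind it to a texture. import_dmabuf() is a made-up
     * helper name; assumes EGL_EXT_image_dma_buf_import is present. */
    #define EGL_EGLEXT_PROTOTYPES
    #define GL_GLEXT_PROTOTYPES
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    GLuint import_dmabuf(EGLDisplay dpy, int dmabuf_fd,
                         EGLint width, EGLint height, EGLint stride)
    {
        const EGLint attrs[] = {
            EGL_WIDTH, width,
            EGL_HEIGHT, height,
            EGL_LINUX_DRM_FOURCC_EXT, 0x34325241, /* DRM_FORMAT_ARGB8888 */
            EGL_DMA_BUF_PLANE0_FD_EXT, dmabuf_fd,
            EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
            EGL_DMA_BUF_PLANE0_PITCH_EXT, stride,
            EGL_NONE
        };
        EGLImageKHR img = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
                                            EGL_LINUX_DMA_BUF_EXT,
                                            NULL, attrs);

        /* The image now backs an ordinary texture we can composite. */
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, img);
        return tex;
    }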

Now I suppose I can assume that at some stage NVIDIA will ship EGL support with the necessary bits for wayland on desktop x86, and then I might not have to do anything special and it will all just work; however, I'm not really sure how to release anything in the stopgap zone.

So I suspect initially I'll have to live with tying the VM lifecycle to the logged-in user lifecycle, maybe putting the VM into suspend if the GPU goes away, but again figuring out how to integrate that with the libvirt/Boxes style interfaces is quite tricky. I've done most of my development using qemu's SDL and GTK+ support for directly running VMs without virt-manager etc. This just looks ugly, though I suppose you could have an SDL window outside the virt-manager screen and virt-manager could still use SPICE to show you the VM contents more slowly, but again that seems sucky. Another crazy idea I had was to have the remote viewer open a socket to the X server and pass it through another socket to the qemu process, which would build an X connection on top of the pre-opened socket, thereby avoiding qemu needing direct access to the local X server. Again this seems like it could be a largely ugly hack, though it might also work on the nvidia binary drivers as well.
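For what it's worth, the plumbing for that hack is pretty standard fd-passing; something like this on the qemu side, using SCM_RIGHTS and libxcb's xcb_connect_to_fd() (the function name here is made up and the auth handling is hand-waved away):

    /* Sketch of the qemu side of the socket hack: receive a pre-opened
     * X server fd via SCM_RIGHTS and build an xcb connection on it.
     * Passing NULL auth assumes the viewer already authenticated the fd. */
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <xcb/xcb.h>

    xcb_connection_t *x_conn_from_viewer(int unix_sock)
    {
        char dummy;
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        union {
            struct cmsghdr hdr;
            char buf[CMSG_SPACE(sizeof(int))];
        } cmsg;
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = cmsg.buf, .msg_controllen = sizeof(cmsg.buf),
        };

        if (recvmsg(unix_sock, &msg, 0) < 0)
            return NULL;

        /* Pull the passed fd out of the control message. */
        int xfd;
        memcpy(&xfd, CMSG_DATA(CMSG_FIRSTHDR(&msg)), sizeof(xfd));

        return xcb_connect_to_fd(xfd, NULL);
    }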

Also as a side note, I discovered SDL2 has both OpenGL and EGL support; however, it won't use EGL to give you desktop OpenGL, only GLES2, and it expects you to use GLX for OpenGL. This is kinda fail, since EGL with desktop OpenGL should work fine, so that might be another thing to fix!
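To illustrate, this is roughly the SDL2 code you'd expect to work; the attributes are real SDL2 API, but as things stand asking for a desktop profile pushes SDL back onto GLX, even though its EGL path could just call eglBindAPI(EGL_OPENGL_API):

    /* What you'd expect to work: a desktop GL core profile over EGL.
     * Today SDL2's EGL path only hands out GLES2 contexts, so this
     * silently routes through GLX instead. */
    #include <SDL2/SDL.h>

    int main(void)
    {
        SDL_Init(SDL_INIT_VIDEO);

        /* Asking for a desktop (non-ES) core profile... */
        SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK,
                            SDL_GL_CONTEXT_PROFILE_CORE);

        SDL_Window *win = SDL_CreateWindow("vm",
                                           SDL_WINDOWPOS_UNDEFINED,
                                           SDL_WINDOWPOS_UNDEFINED,
                                           640, 480, SDL_WINDOW_OPENGL);
        /* ...means SDL falls back to GLX rather than its EGL path. */
        SDL_GLContext ctx = SDL_GL_CreateContext(win);

        SDL_GL_DeleteContext(ctx);
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }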