hotplug demo video (teaching X.org new tricks).

So today I managed to see something on screen from my X.org hotplug work, so I present to you: live X.org hotplugging.

http://www.youtube.com/watch?v=g54y80blzRU

It's pretty much a laptop running the xf86-video-modesetting driver with my server, an xterm + metacity. I plug in a USB DisplayLink device, driven by a kernel DRM driver I wrote for it (there's a sneaky xrefresh in the background), and the USB device displays the xterm and metacity.

So what actually happens?

At startup the X server had loaded its drivers using udev and a new driver ABI. It exports one X protocol screen and plugs an internal DrvScreen into it.

When the hotplug happens, the server inits another DrvScreen for the new device and plugs it into the single protocol screen. It also copies all the driver-level pixmaps/pictures/GCs into the new driver. The single protocol screen at the bottom layer multiplexes across the plugged-in drvscreens.
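Very roughly, that split might look something like the sketch below. This is purely illustrative: the structure and function names are mine, not the real server code.

    /* Hypothetical sketch of the layering described above: one protocol-level
     * screen with a list of per-driver screens hanging off it. */
    typedef struct _DrvScreen DrvScreenRec, *DrvScreenPtr;

    struct _DrvScreen {
        DrvScreenPtr  next;         /* next drvscreen plugged into this protocol screen */
        void         *driver_priv;  /* per-driver state: modesetting, displaylink, ... */
        /* driver-level copies of pixmaps, pictures and GCs live behind here */
        void        (*copy_area)(DrvScreenPtr drv, int sx, int sy,
                                 int w, int h, int dx, int dy);
    };

    typedef struct {
        DrvScreenPtr  drvscreens;   /* every driver currently plugged in */
    } MuxScreenRec, *MuxScreenPtr;

    /* On hotplug: create a drvscreen for the new device, duplicate the existing
     * driver-level objects onto it, then link it into the protocol screen. */
    static void
    mux_add_drvscreen(MuxScreenPtr mux, DrvScreenPtr drv)
    {
        /* ... copy the pixmaps/pictures/GCs into the new driver here ... */
        drv->next       = mux->drvscreens;
        mux->drvscreens = drv;
    }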

This is like Xinerama pushed down a lot further in the stack: instead of doing the iteration at the protocol level, we do it down at the acceleration level. I also have randr code hooked up, so all the outputs appear no matter which GPU they are from.
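Reusing the made-up types from the sketch above, the fan-out at the acceleration level is just a loop over the plugged-in drvscreens; the hook names are again invented for illustration.

    /* Each rendering hook on the protocol screen replays the operation on every
     * drvscreen, so the iteration happens at the acceleration level rather than
     * per protocol screen the way Xinerama does it. */
    static void
    mux_copy_area(MuxScreenPtr mux, int sx, int sy, int w, int h, int dx, int dy)
    {
        DrvScreenPtr drv;

        for (drv = mux->drvscreens; drv; drv = drv->next)
            drv->copy_area(drv, sx, sy, w, h, dx, dy);
    }

The randr hookup works along the same lines: each drvscreen's outputs get walked and exposed on the one protocol screen.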

This isn't exactly what I want for USB hotplug: ideally we'd use the main GPU to render stuff and only scanout using the USB device, but this is step one. I also need the ability to add/remove drvscreens and all the associated data in order to support dynamic GPU switching.

The real solution is still a long way off, but this is a small light in a long tunnel. I've been hacking on this on and off for over a year now, so it's nice to see something visible for the first time.

Comments

This is awesome.

> ideally we'd use the main GPU to render stuff and only scanout using the USB device
I live in hope that, one day, my IGP only has to be used for scanout.

With xf86-video-modesetting + X changes, would (future) dynamic switching using a multi-GPU setup only require KMS drivers for the active and target GPUs?

KMS drivers modified to work with the new server. For switching etc. we also need more kernel-level object sharing via PRIME or dma-buf.
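Roughly, that kernel-level sharing boils down to exporting a GEM handle from one DRM device as a dma-buf file descriptor and importing it on the other. A minimal libdrm sketch, assuming the buffer already exists on the exporting GPU and leaving out real error handling:

    #include <stdint.h>
    #include <stdio.h>
    #include <xf86drm.h>

    /* Export a GEM handle on one device as a dma-buf fd, then import it as a
     * handle on a second device (the PRIME path mentioned above). */
    static int
    share_buffer(int export_fd, uint32_t handle, int import_fd, uint32_t *out_handle)
    {
        int prime_fd;

        if (drmPrimeHandleToFD(export_fd, handle, DRM_CLOEXEC, &prime_fd) < 0) {
            perror("drmPrimeHandleToFD");
            return -1;
        }

        if (drmPrimeFDToHandle(import_fd, prime_fd, out_handle) < 0) {
            perror("drmPrimeFDToHandle");
            return -1;
        }
        return 0;
    }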

So it's currently mirror-only? No extension of desktop on the fly yet? The latter would be great, as my small laptop can't fit a browser and some aterms on it (1366 wide, alas; I miss my 1450 wide displays...)

Nothing stops it from being non-mirror apart from fixing bugs in it and adding further randr support.

This post is both awesome and incredible and also something I have trepidation over-- I'm a huge fan of the dynamic input configuration work and the multi-resource tapping abilities Peter Hutterer introduced in XInput2, and one of the main reasons I think that work was so good was because it embraced the idea of multiple resources.

The hybrid graphics / dynamic GPU switching case is important. But this assumption that hotplugged devices are second class citizens, that work needs to happen on some main GPU, that strikes me as incredibly wrong, as faultily monolithic. USB3, Thunderbolt, external PCIe solutions, 40G ethernet... higher bandwidth is happening, and I still think good GPU resources might become a pluggable resource users commonly encounter.

If we can start plugging in high powered GPU bricks, will this infrastructure leverage that and let users hotplug and utilize those diverse resources?

MPX was brilliant because it acknowledged multiple resources. I'm no one, have no platform to be talking or addressing you from w/r/t this issue, but still I emotionally feel connected to this principle: if possible, keep in consideration the idea of pluggable high performance GPUs as you embark upon this incredibly awesome and important work.

Thanks David.
-mf / @rektide

The problem with plugging in multiple GPUs is that it's an impossibly difficult problem. To do things like run a compositor across 3-4 GPUs requires rewriting GL stacks and compositors to do things they just can't do at all today. The plan is to get a stack with all the pieces in place to do all the other use cases and provide a base to develop the other stuff on.

Consider you have an Intel GPU, you hotplug an NVIDIA GPU, and you want to extend the screen onto it. Now what does the GL stack report for that single screen? How does a compositor get rendered by both GPUs, etc.? The reason for only having master + offload GPUs is that the model is a lot easier to get working and is a lot more practical for now. Windows also doesn't try to work in this situation anymore: if you plug in two GPUs from different families (not just different manufacturers), it won't display on the second one. We can do a bit better, but doing homogeneous fullscreen-spanning GL without application help is always going to be close to impossible to get either right or pretty.

Multi-seat X is probably what I need to look into. There still isn't VGA arbiter code to run multiple video cards in parallel, on separate X instances, is there?

That sort of thing should be workable now; at least in F16 they are working towards plugging in a GPU/keyboard/monitor and having gdm spawn a whole new session. My work is more about a single server for a single user.

(Anonymous)
Xspice

Xspice is implemented as a video driver. IIUC this should already work with your demo? Mirror mode, two drivers, one for the hardware and another for Xspice, everything rendered twice. For your real solution I hope you keep this use case in mind.