
So one of the stumbling blocks on my road to getting 3D emulation in a VM is how most people use qemu in deployed situations either via libvirt or GNOME boxes frontends.

If you are using libvirt and have VMs running, they have no connection to the running user session or user X server; they run as the qemu user and are locked down in what they can access. You can restart your user session and the VM will keep trucking. All viewing of the VM is done using SPICE or VNC. GNOME Boxes is similar except it runs things as the user, but it is still not tied to the user session AFAIK (though I haven't confirmed that).

So why does 3D make this difficult?

Well in order to have 3D we need to do two things.

a) talk to the graphics card to render stuff
b) for local users, show the user the rendered stuff without reading it back into system RAM and sticking it in a pipe like SPICE or VNC; remote users get readback and all the slowness it entails.

Now in order to do (a), we face a couple of would-like-to-have scenarios:

1. user using open source GPU drivers via mesa stack
2. user using closed source binary drivers like NVIDIA or worse fglrx.

Normally you access the graphics card via OpenGL and its window-system APIs like GLX. However this requires a connection to your X server: if your X server dies your VM dies, and if your session restarts your VM dies.

For scenario 1, where we have open source KMS-based drivers, the upcoming render nodes support in the kernel will allow processes outside the X server's control to use the capabilities of the graphics card via the EGL API. This means we can render offscreen in a separate process. This mostly solves problem (a), how to talk to the graphics card at all.
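As a rough illustration of what that buys us, here's a minimal sketch (not virgil code, just my assumptions about the render-node path) of opening a render node with GBM and getting an EGL context current with no X server in sight, assuming Mesa accepts the GBM device as the native display and supports surfaceless contexts:

```c
/* Minimal sketch, not virgil code: off-screen EGL on a render node via
 * GBM, assuming Mesa's GBM platform and EGL_KHR_surfaceless_context.
 * Error handling omitted for brevity. */
#include <fcntl.h>
#include <unistd.h>
#include <gbm.h>
#include <EGL/egl.h>

int main(void)
{
    /* Render nodes need no DRM master and no X server connection. */
    int fd = open("/dev/dri/renderD128", O_RDWR);
    struct gbm_device *gbm = gbm_create_device(fd);

    /* Mesa has historically accepted a GBM device as the native display. */
    EGLDisplay dpy = eglGetDisplay((EGLNativeDisplayType)gbm);
    eglInitialize(dpy, NULL, NULL);

    static const EGLint cfg_attribs[] = {
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint n;
    eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &n);

    eglBindAPI(EGL_OPENGL_API);
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);

    /* With a surfaceless context we can go current without any window
     * and render into our own FBOs. */
    eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, ctx);

    /* ... create FBOs and render the guest command streams here ... */

    eglDestroyContext(dpy, ctx);
    eglTerminate(dpy);
    gbm_device_destroy(gbm);
    close(fd);
    return 0;
}
```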

Now for scenario 2: so far NVIDIA has pretty much no EGL support for its desktop GPUs, so in this case we are kinda out in the cold, at least until they ship EGL support, in terms of completely disconnecting the rendering process from the running user X server lifecycle.

This leaves problem (b): how do we get the stuff rendered using EGL back to the user session to display it? My initial hand-wave in this area involved EGL images and dma-buf, but I get the feeling on subsequent reads that this might not be sufficient for my requirements. It looks like something like the EGLStream extension might be more suitable; however EGLStream suffers from only being implemented in the NVIDIA Tegra closed source drivers from what I can see. Another option floated was to somehow use an embedded wayland client/server somewhere in the mix. I really haven't figured out the architecture for this yet (i.e. which end has the compositor and which end is the client; perhaps we have both a wayland client and compositor in the qemu process, and then a remote client to display the compositor output, otherwise I wonder about lifetime and disconnect issues). So to properly solve the problem for open source drivers I need to either get EGLStream implemented in mesa, or figure out what the wayland hack looks like.
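For what it's worth, the dma-buf hand-wave looks roughly like the sketch below on the display side: the renderer exports its colour buffer as a dma-buf fd, and the viewer imports it as an EGLImage via EGL_EXT_image_dma_buf_import and binds it to a texture. This is only my rough sketch of the idea; the fd, width, height and stride plumbing is assumed, and the synchronisation story is exactly the part I'm not convinced is sufficient:

```c
/* Sketch of the display-side import, assuming the renderer has already
 * exported a dma-buf fd; fd, width, height and stride are placeholders
 * coming from whatever transport carries them. */
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <drm_fourcc.h>

GLuint import_dmabuf_texture(EGLDisplay dpy, int fd,
                             int width, int height, int stride)
{
    const EGLint attribs[] = {
        EGL_WIDTH, width,
        EGL_HEIGHT, height,
        EGL_LINUX_DRM_FOURCC_EXT, DRM_FORMAT_ARGB8888,
        EGL_DMA_BUF_PLANE0_FD_EXT, fd,
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
        EGL_DMA_BUF_PLANE0_PITCH_EXT, stride,
        EGL_NONE
    };

    PFNEGLCREATEIMAGEKHRPROC create_image =
        (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
    PFNGLEGLIMAGETARGETTEXTURE2DOESPROC image_target_tex =
        (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
            eglGetProcAddress("glEGLImageTargetTexture2DOES");

    /* EGL_EXT_image_dma_buf_import: wrap the dma-buf in an EGLImage. */
    EGLImageKHR image = create_image(dpy, EGL_NO_CONTEXT,
                                     EGL_LINUX_DMA_BUF_EXT, NULL, attribs);

    /* Bind the image to a texture so the viewer can draw with it. */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    image_target_tex(GL_TEXTURE_2D, (GLeglImageOES)image);
    return tex;
}
```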

Now I suppose I can assume that at some stage nvidia will ship EGL support with the necessary bits for wayland on desktop x86, and I might not have to do anything special and it will all just work; however I'm not really sure how to release anything in the stopgap zone.

So I suspect initially I'll have to live with tying the VM lifecycle to the logged-in user lifecycle, maybe putting the VM into suspend if the GPU goes away, but again figuring out how to integrate that with the libvirt/boxes style interfaces is quite tricky. I've done most of my development using qemu's SDL and GTK+ support for directly running VMs without virt-manager etc. This just looks ugly, though I suppose you could have an SDL window outside the virt-manager screen and virt-manager could still use spice to show you the VM contents more slowly, but again it seems sucky. Another crazy idea I had was to have the remote viewer open a socket to the X server and pass it through another socket to the qemu process, which would build an X connection on top of the pre-opened socket, therefore avoiding qemu having to have direct access to the local X server. Again this seems like it could be a largely ugly hack, though it might also work on the nvidia binary drivers as well.
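Mechanically that hack isn't much code; something like the following sketch, where the viewer passes its connected X socket over a UNIX socket with SCM_RIGHTS and qemu adopts it with xcb_connect_to_fd. The function names around it are placeholders and auth handling is glossed over:

```c
/* Hypothetical sketch of the socket-passing hack: send_fd() runs in the
 * viewer, adopt_x_fd() runs in qemu. Auth handling is glossed over. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <xcb/xcb.h>

/* Viewer side: ship an already-connected X server fd to qemu. */
int send_fd(int unix_sock, int x_fd)
{
    char dummy = 'X';
    struct iovec iov = { &dummy, 1 };
    char ctrl[CMSG_SPACE(sizeof(int))];
    struct msghdr msg = {0};

    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof(ctrl);

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &x_fd, sizeof(int));

    return sendmsg(unix_sock, &msg, 0);
}

/* qemu side: build an X connection on top of the pre-opened socket. */
xcb_connection_t *adopt_x_fd(int x_fd)
{
    return xcb_connect_to_fd(x_fd, NULL);
}
```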

Also as a side note, I discovered SDL2 has OpenGL support and EGL support, however it won't use EGL to give you OpenGL, only GLES2; it expects you to use GLX for OpenGL. This is kinda fail since EGL with desktop OpenGL should work fine, so that might be another thing to fix!

Okay it's been a while, so where is virgil3d up to now, I hear you ask?

Initially I wrote a qemu device and a set of guest kernel drivers in order to construct a research platform on which to investigate and develop the virgil protocol, renderer and guest mesa drivers based on Gallium3D and TGSI. Once I got the 3D renderer and guest driver talking I mostly left the pile of hacks in qemu and kernel alone. So with this in mind I've split development into two streams moving forward:

1) the virgil3d renderer and 3D development:
This is about keeping development of the renderer and guest driver going, getting piglit tests passing and apps running. I've been mostly focused on this so far, and there have been some big issues to solve that have taken a lot of the time, but as of today I got xonotic to play inside the VM, and I've gotten the weston compositor to render the right way up, along with passing ~5100/5400 piglit gpu.tests.

The biggest issues in the renderer development have been
a) viewport setup - gallium and OpenGL have different viewport directions, and you can see lots of info on Y=0=TOP and Y=0=BOTTOM in the mesa state tracker. Essentially this was more than my feeble brain could process, so I spent 2 days with a whiteboard, and I think I solved it. This also has interactions with GL extensions like GL_ARB_fragment_coord_conventions, and with FBO vs standard GL backbuffer rendering.

b) Conditional rendering - due to the way the GL interface for this extension works I had to revisit my assumption that the renderer could be done with a single GL context; I had to rewrite things to use a GL context per guest context in order to give conditional rendering any chance of working. The main problem was that using multiple GL queries for one guest query didn't work at all with the conditional rendering interface provided by GL (see the sketch after this list).

c) point sprites - these involved doing shader rewrites to stick gl_PointCoord in the right places. Messy, but the renderer now has shader variants; however it needs some better reference counting and probably leaks like a sieve for long-running contexts.
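To show what I mean about the conditional rendering interface in (b), here's an illustrative fragment. It's not the virgil renderer's actual code; it assumes libepoxy for GL entry points, and the draw_* helpers are hypothetical stand-ins for the real rendering paths:

```c
/* Illustrative only, not the virgil renderer's actual code; assumes
 * libepoxy for GL entry points, and the draw_* helpers are hypothetical
 * stand-ins for the real rendering paths. */
#include <epoxy/gl.h>

void draw_occlusion_proxy(void);        /* hypothetical */
void draw_conditional_geometry(void);   /* hypothetical */

void guest_conditional_render(void)
{
    GLuint query;
    glGenQueries(1, &query);

    /* The query issued on behalf of the guest context. */
    glBeginQuery(GL_SAMPLES_PASSED, query);
    draw_occlusion_proxy();
    glEndQuery(GL_SAMPLES_PASSED);

    /* GL conditional rendering takes exactly one query object, so the
     * guest query has to be backed by a single GL query in the context
     * that renders for that guest; there's no way here to say "skip
     * only if all N of these queries returned zero samples". */
    glBeginConditionalRender(query, GL_QUERY_WAIT);
    draw_conditional_geometry();
    glEndConditionalRender();

    glDeleteQueries(1, &query);
}
```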

2) a new virtio-gpu device

The plan is to create a simple virtio-based GPU that can layer onto a PCI device like the other virtio devices, along with another layer for a virtio-vga device. This virtio-based GPU would provide a simple indirect multi-headed modesetting interface for use by any qemu guest, and allow the guest to upload/download data from the host-side scanouts. The idea would then be to give this device capabilities that the host can enable when it detects the 3D renderer is available and qemu is started correctly. The guest can then use the virtio GPU as a simple GPU with no 3D, and when things are ready the capability is signalled and it can enable 3D. This seems like the best upstreaming plan for this work, and I've written the guts of it.
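Conceptually the capability dance on the guest side would look something like this; to be clear, this is purely a hypothetical sketch, the feature bit name and structures are made up, and none of this is the final device interface:

```c
/* Purely hypothetical sketch of the capability idea; VIRTIO_GPU_F_3D
 * and struct vgpu_dev are made-up names, not the real device interface. */
#include <stdbool.h>
#include <stdint.h>

#define VIRTIO_GPU_F_3D (1ULL << 0)    /* hypothetical feature bit */

struct vgpu_dev {
    uint64_t host_features;            /* features offered by the host */
    bool have_3d;
};

void vgpu_probe(struct vgpu_dev *vdev)
{
    /* Always usable as a dumb multi-headed modesetting device. */
    vdev->have_3d = false;

    /* Only enable the 3D command submission path when the host says
     * the renderer is available and qemu was started correctly. */
    if (vdev->host_features & VIRTIO_GPU_F_3D)
        vdev->have_3d = true;
}
```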

In order to test the virtio-gpu stuff I've had to start looking at porting qemu to SDL 2.0, as SDL 1.2 can't do multi-window and can't do ARGB cursors, but SDL 2.0 can. So I'm hoping that with SDL 2.0 and virtio-gpu you can have the multiple outputs of the vgpu show up in multiple SDL windows.
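The SDL 2.0 side of that is straightforward; a minimal sketch of the one-window-per-scanout idea (window titles and sizes are just placeholders):

```c
/* Minimal sketch of the one-window-per-scanout idea with SDL 2.0;
 * window titles and sizes are just placeholders. */
#include <SDL.h>

int main(void)
{
    SDL_Init(SDL_INIT_VIDEO);

    /* SDL 1.2 only ever gives you a single window; SDL 2.0 can do one
     * window per virtio-gpu output. */
    SDL_Window *out0 = SDL_CreateWindow("vgpu output 0",
                                        SDL_WINDOWPOS_UNDEFINED,
                                        SDL_WINDOWPOS_UNDEFINED,
                                        1024, 768, SDL_WINDOW_OPENGL);
    SDL_Window *out1 = SDL_CreateWindow("vgpu output 1",
                                        SDL_WINDOWPOS_UNDEFINED,
                                        SDL_WINDOWPOS_UNDEFINED,
                                        1024, 768, SDL_WINDOW_OPENGL);

    /* ... present each host-side scanout into its own window ... */

    SDL_DestroyWindow(out1);
    SDL_DestroyWindow(out0);
    SDL_Quit();
    return 0;
}
```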

I'll be speaking about virgil3d at the KVM Forum in Edinburgh in a couple of weeks and also be attending Kernel Summit.

So X.org had a GSoC project to implement Xv support in glamor, but the candidate got a better offer to do something more interesting. I was a bit sleep deprived (sick kid) and didn't want to face my current virgl task, and since I'm interested in potentially using glamor for virgil, I took a learning day :-)

So I spent the day writing Xv support for glamor for no good reason,

http://cgit.freedesktop.org/~airlied/glamor
git://people.freedesktop.org/~airlied/glamor xv-support

git://people.freedesktop.org/~airlied/xf86-video-ati glamor-xv-support

contains the result of my day; the glamor repo may not be public yet, it's waiting on the fd.o cgit crawler.

Xv works for YV12 planar videos. I suspect that to support packed video I'd need a GL extension to expose the hw formats for doing packed video; this probably wouldn't be a major extension, and maybe someone might do it sometime.
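For the planar case, the heavy lifting is a fragment shader that samples the three planes and does the colour conversion. Here's a rough illustration of the kind of shader involved (BT.601 limited-range coefficients); it's not the actual glamor shader, just a sketch of the approach, and the brightness/contrast/hue/saturation controls would be folded in on top of this:

```c
/* Not the actual glamor shader, just a sketch of the approach: sample
 * the three YV12 planes and apply a BT.601 limited-range conversion. */
static const char *yv12_frag_src =
    "uniform sampler2D y_plane;\n"
    "uniform sampler2D u_plane;\n"
    "uniform sampler2D v_plane;\n"
    "varying vec2 texcoord;\n"
    "void main()\n"
    "{\n"
    "    float y = texture2D(y_plane, texcoord).r - 0.0625;\n"
    "    float u = texture2D(u_plane, texcoord).r - 0.5;\n"
    "    float v = texture2D(v_plane, texcoord).r - 0.5;\n"
    "    gl_FragColor = vec4(1.164 * y + 1.596 * v,\n"
    "                        1.164 * y - 0.391 * u - 0.813 * v,\n"
    "                        1.164 * y + 2.018 * u,\n"
    "                        1.0);\n"
    "}\n";
```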

The code supports brightness, contrast, hue and saturation controls, using the code ported from the radeon driver.

I've tested it with mplayer on an evergreen card of some variant, and it seems to work fine with the one video I used :-)

I've published on Google docs a bit more of a technical document on how virgil3d is designed.

https://docs.google.com/document/d/1cL2sEi5DAd-qY_OzyNiUyIbcU2ZvcI8lhv3PWzFoH_w/pub

I'm hoping to flesh it out a bit more, and of course I'll probably never keep it up to date, but it should be close enough :-)

I've also put up some build instructions here:
https://docs.google.com/document/d/1CNiN0rHdfh7cp9tQ3coebNEJtHJzm4OCWvF3qL4nucc/pub

They are messy and incomplete, and don't go packaging anything.

Virgil is a research project I've been working on at Red Hat for a few months now, and I think it's ready for at least announcing upstream and seeing if there is any developer interest in the community in trying to help out.

The project is to create a 3D-capable virtual GPU for qemu that can be used by Linux and eventually Windows guests to provide OpenGL/Direct3D support inside the guest. It uses an interface based on Gallium/TGSI along with virtio to communicate between guest and host, and its goal is to provide an OpenGL renderer along with a complete Linux driver stack for the guest.

The website is here with links to some videos:
http://virgil3d.github.io/

some badly formatted Questions/Answers (I fail at github):
http://virgil3d.github.io/questions.html

Just a note, and I can't stress this strongly enough: this isn't end user ready, not even close. It isn't even bleeding-edge user ready, or advanced tester ready; it's not ready for distro packaging, and there is no roadmap or commitment to finishing it. I don't need you to install it and run it on your machine and report bugs.

I'm announcing it because there may be other developers or other companies interested, and I'd like to allow them to get on board at the design/investigation stage, before I have to solidify the APIs etc. I also don't like single company projects, and if announcing early can help avoid that then so be it!

If you are a developer interested in working on an open source virtual 3D GPU, or you work for a company who is interested in developing something in this area, then get in touch with me, but if you just want to kick the tyres, I don't have time for this yet.

So I've been involved in a recent dispute on the wayland project, with a person I'd classify as a poisonous person. Basically a contributor who was doing more damage than good, and was causing unneeded disturbances. I won't comment any further on that here, but just setting the scene for writing this.

So every time something like this happens in a project, people emerge from the woodwork who claim that having public discussions about these sorts of things is bad for open source, or makes us look like a crowd of juvenile developers, and that you never see this sort of thing on closed source projects, or with open-source projects developed in-house and thrown over the wall. I've also recently seen this crop up when Linus flamed people, and everyone wondered why he didn't do it on some sort of private list or something.

Now I can only think these people are one of:

a) never worked in a company on a major closed source project.

b) if they have, it's been top-down development, where managers are telling them what to do, and maybe some architect dude has drawn a load of pretty pictures and docs. Of course the architect is never wrong, but it's above your pay grade to talk to someone of such authority, so when you find problems with the architecture you hack around them instead of growing a pair and standing your ground, or else you aren't good enough to notice anything wrong.

I've seen plenty of companies where developers leave due to in-fighting or transfer to a different department; this stuff never comes out and you are all none the wiser.

So open source doesn't have top-down development, it's all bottom-up; most contributors to major projects do so with some idea of what they want, but they aren't being driven by a management chain. However it means that there is generally nobody to force someone into their views, and when two people collide (or in this case, one person and everyone else), something has to give, and it's best for it to give in public, so nobody can say it was some sort of cabal or closed decision.

Now open source is about seeing the sausage-making process: you get to see all the bits of stuff you don't want to think about going into the sausages, you have to face a lot more truth, and you have to be willing to stand up against things without mummy manager to back you up. You can't have all the nice benefits of open-source development without the bad side, the public blowups and discussions; it just can't work like that. If we take all those discussions to private lists or emails, where do you draw the line? Are the people on that private list some sort of shadowy cabal overlords? Do you want an open-source development model that isn't public?

I'm sure people will say: why can't we all just get along, and why can't everyone act mature? Well, a) we are human, and b) there is no HR department frontend blocking people at the gate; there's no interview process to weed out undesirable traits before they join the project. So when someone submits patches that work you generally accept them as a contributor, and it can take a while before you realise they are doing more harm than good, at which point it's going to be public.

[update: Mir page removed most of the reasons Wayland wasn't suitable, so why did they not use wayland again?]
[update: still my opinion, really, nobody is making me say shit, lwn commenters really like to believe I've got a hand up my ass]

Okay, I'm going to write a short piece on why I believe Mir isn't a good idea. If you don't know what Mir is then don't bother reading the rest of this until you do.

So let's take a look at Mir from a cynical pov (I'm good at that): say this is nothing more than a shallow power play by Canonical to try and control parts of the graphics infrastructure on Linux. It must be really frustrating to have poured so much money into a company and not have 100% control over all the code the company produces, and to have the upstream community continually ignore your "leadership". This would leave you wanting to exert control where you can, and making decisions internally about which spaces you can do that in.

So in order to justify the claim that the community at large needs Mir over the current project in the space, Wayland, it is necessary to bash Wayland so that your community can learn the lines and repeat them, right or wrong, across the Internet. So you post a page like this
https://wiki.ubuntu.com/MirSpec
and a section called "Why Not Wayland / Weston?".

Now I've been reliably informed by people who know that nothing in that section makes any sense to anyone who has studied wayland for longer than 5 minutes at any point in the last year or two, especially the main points about the input handling. Nobody from Canonical has ever posted any questions to the wayland mailing lists or contacted Wayland developers asking for support for a different direction.

So maybe I'm being too cynical and Hanlon's razor applies, "Never attribute to malice that which is adequately explained by stupidity".

Now the question becomes: do you want to base the future of the Linux desktop and possibly mobile spaces on a display server written by people too stupid to understand the current open source project in the space?

The thing is, putting stuff on the screen really isn't the hard part of display servers; getting input to where it needs to go, and making it secure, is. Input methods are hard, input is hard, and guess what they haven't even contemplated implementing yet?

Valve? NVIDIA? AMD? I'd be treading carefully :-)

(all my own opinion, not speaking for my employer or anyone really). Probably should comment on the g+ threads or lwn or somewhere cool.

So I took some time today to try and code up a thing I call reverse optimus.

Optimus laptops come in a lot of flavours, but one annoying one is where the LVDS/eDP panel is only connected to the Intel GPU and the external outputs are only connected to the nvidia GPU.

Under Windows, the intel renders the compositor and the nvidia GPU is only used for offloads when no monitors are plugged in; but when a monitor is plugged in, the nvidia generally takes over the compositor rendering and just gives the Intel GPU a pixmap to put on the LVDS/eDP screen.

Now under Linux the first case mostly works OOTB on F18 with intel/nouveau, but switching compositors on the fly is going to take a lot more work, particularly from compositor writers, and I haven't seen much jumping up and down on the client side to lead the way.

So I hacked up a thing I called reverse optimus, it kinda sucks, but it might be a decent stop gap.

The intel still renders the compositor, however it can use the nvidia to output slaved pixmaps. This is totally the opposite of how the technology was meant to be used, and it introduces another copy. The intel driver now copies from its tiled rendering to a shared linear pixmap (just like with USB GPUs); however, since we don't want nouveau scanning out of system RAM, the nouveau driver then copies the rendering from the shared pixmap into an nvidia VRAM object. So we get a double copy and we chew lots of power, but hey, you can see stuff. Also the slave output stuff sucks for synchronisation so far, so you will also get tearing and other crappiness.
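In outline, the frame path looks like the sketch below; every type and function in it is a made-up placeholder to show the data flow, not a real driver API:

```c
/* Hand-wavy outline of the copy chain; every type and function here is
 * a made-up placeholder showing the data flow, not a real driver API. */
struct pixmap { int placeholder; };

static struct pixmap intel_tiled;    /* intel's normal tiled rendering  */
static struct pixmap shared_linear;  /* shared linear pixmap in sysram  */
static struct pixmap nvidia_vram;    /* scanout buffer in nvidia VRAM   */

static void blit(struct pixmap *dst, const struct pixmap *src)
{
    (void)dst; (void)src;            /* stand-in for the real copy */
}

void present_frame(void)
{
    /* 1. intel detiles into the shared linear pixmap, the same path
     *    the USB slave-output GPUs use. */
    blit(&shared_linear, &intel_tiled);

    /* 2. nouveau copies the shared pixmap into a VRAM object so the
     *    nvidia GPU never scans out of system RAM. */
    blit(&nvidia_vram, &shared_linear);

    /* 3. the nvidia GPU scans out nvidia_vram on the external output. */
}
```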

There is also a secondary problem with the output configuration. Some laptops (a Lenovo I have, at least) connect the DDC lines for outputs that are only wired to the nvidia GPU to the Intel GPU, so when I enable the nvidia as a slave I get some cases of double monitor reporting. Fixing this probably requires parsing the ACPI tables properly like Windows does. However I suppose having two outputs is better than none :-)

So I've gotten this working today with two intel/nvidia laptops, and I'm contemplating how to upstream it. So far I've just done some hackery to nouveau; that, along with some fixes in intel driver master and a patch to the X server (or the Fedora koji 1.13.1-2 server), makes it just work:

http://cgit.freedesktop.org/~airlied/xf86-video-nouveau/log/?h=rev-optimus

I really dislike this solution, but it seems that it might be the best stopgap until I can sort out the compositor side issues, (GL being the main problem).

update: I've pushed reverse-prime branches to my X server and -ati repo.

So I awake to find an announcement that the userspace drivers for the rPI have been released, lots of people cheering, but really what they've released is totally useless to anyone who uses or develops this stuff.

(libv commented on their thread: http://www.raspberrypi.org/archives/2221#comment-34981
maybe he'll follow up with a blog post at some point).

So to start, the GLES implementation lives on the GPU in firmware, which provides a high-level GLES RPC interface. The newly opened source code just does some marshalling and shoves it over the RPC.

Why is this bad?
You cannot make any improvements to their GLES implementation, you cannot add any new extensions, you can't fix any bugs, you can't do anything with it. You can't write a Mesa/Gallium driver for it. In other words you just can't.

Why is this not like other firmware (AMD/NVIDIA etc)?
The firmware we ship for AMD, and for nvidia via nouveau, isn't directly controlling the GPU shader cores. It mostly does ancillary tasks like power management and CPU offloading. There are some firmwares for video decoding that would start to fall into the same category as this stuff. So if you want to add a new 3D feature to the AMD GPU driver you can just write code to do it; not so with the rPI driver stack.

Will this mean the broadcom kernel driver will get merged?
No.

This is like Ethernet cards with TCP offload, where the full TCP/IP stack is run in the Ethernet card firmware. These cards seem like a good plan until you find bugs in their firmware stack, or find out their TCP/IP implementation isn't actually any good. The same problem will occur with this. I would take bets the GLES implementation sucks, because they all do, but the whole point of open sourcing something is to allow others to improve it, something that can't be done in this case.

So really Raspberry Pi and Broadcom get a big FAIL for even bothering to make a press release for this. If they'd just stuck the code out there and gotten on with things it would have been fine, nobody would have been any happier, but some idiot thought this crappy shim layer deserved a press release. Pointless. (And really phoronix, you suck even more than usual at journalism.)

Two videos on youtube:

Randr 1.5 GPU offload:
http://www.youtube.com/watch?v=XEUKuNTRp78

Randr 1.5 USB hotplug:
http://www.youtube.com/watch?v=yoUNsbFmxS0
