[update: the Mir page has since removed most of its reasons why Wayland wasn't suitable, so why aren't they using Wayland again?]
[update: still my opinion, really, nobody is making me say shit, lwn commenters really like to believe I've got a hand up my ass]
Okay, I'm going to write a short piece on why I believe Mir isn't a good idea. If you don't know what Mir is, then don't bother reading the rest of this until you do.
So let's take a look at Mir from a cynical point of view (I'm good at that): say this is nothing more than a shallow power play by Canonical to try and control parts of the graphics infrastructure on Linux. It must be really frustrating to have poured so much money into a company, yet not have 100% control over all the code the company produces, and to have the upstream community continually ignore your "leadership". That would leave you wanting to exert control where you can, and deciding internally which spaces you can do that in.
So in order to justify pushing Mir on the community at large over the current project in the space, Wayland, it becomes necessary to bash Wayland, so that your community can learn the lines and repeat them, right or wrong, across the Internet. So you post a page like this
and a section called "Why Not Wayland / Weston?".
Now I've been reliably informed by people who know that nothing in that section makes any sense to anyone who has studied Wayland for longer than five minutes at any point in the last year or two, especially the main points about input handling. Nobody from Canonical has ever posted questions to the Wayland mailing lists or contacted the Wayland developers about supporting a different direction.
So maybe I'm being too cynical and Hanlon's razor applies, "Never attribute to malice that which is adequately explained by stupidity".
Now the question becomes: do you want to base the future of the Linux desktop, and possibly mobile, on a display server written by people too stupid to understand the current open source project in the space?
The thing is, putting stuff on the screen really isn't the hard part of display servers; getting input to where it needs to go, and doing it securely, is. Input methods are hard, input is hard, and guess what they haven't even contemplated implementing yet?
Valve? NVIDIA? AMD? I'd be treading carefully :-)
(all my own opinion, not speaking for my employer or anyone really). Probably should comment on the g+ threads or lwn or somewhere cool.
So I took some time today to try and code up a thing I call reverse optimus.
Optimus laptops come in a lot of flavours, but one annoying one is where the LVDS/eDP panel is only connected to the Intel and the outputs are only connected to the nvidia GPU.
Under Windows, when no monitors are plugged in, the intel renders the compositor and the nvidia GPU is only used for offloads; when a monitor is plugged in, the nvidia generally takes over compositor rendering and just gives the intel GPU a pixmap to put on the LVDS/eDP screen.
Now under Linux the first case mostly works out of the box on F18 with intel/nouveau, but switching compositors on the fly is going to take a lot more work, particularly from compositor writers, and I haven't seen much jumping up and down on the client side to lead the way.
So I hacked up a thing I called reverse optimus, it kinda sucks, but it might be a decent stop gap.
The intel still renders the compositor, however it can use the nvidia to drive slaved output pixmaps. This is totally the opposite of how the technology was meant to be used, and it introduces another copy. The intel driver now copies from its tiled rendering to a shared linear pixmap (just like with USB GPUs); however, since we don't want nouveau scanning out of system RAM, the nouveau driver then copies the rendering from the shared pixmap into an nvidia VRAM object. So we get a double copy and we chew lots of power, but hey, you can see stuff. Also the slave output stuff sucks for synchronisation so far, so you'll also get tearing and other crappiness.
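The two copies can be sketched like this. This is a minimal sketch with made-up buffer names and plain arrays standing in for GEM objects; the real drivers obviously go through DRM buffers and damage tracking, not memcpy.

```c
#include <string.h>

/* Hypothetical sketch of the reverse-optimus copy chain described
 * above.  Buffer names and sizes are invented for illustration. */
#define FB_SIZE 4096

static char intel_tiled[FB_SIZE];   /* intel's tiled render target    */
static char shared_linear[FB_SIZE]; /* shared (USB-GPU-style) pixmap  */
static char nvidia_vram[FB_SIZE];   /* nouveau scanout buffer in VRAM */

/* Copy 1: the intel driver detiles its rendering into the shared
 * linear pixmap, just as it does for USB slave outputs. */
static void intel_damage_flush(void)
{
    memcpy(shared_linear, intel_tiled, FB_SIZE); /* detile (simplified) */
}

/* Copy 2: nouveau copies the shared pixmap into VRAM, because we
 * don't want the nvidia GPU scanning out of system RAM. */
static void nouveau_update_scanout(void)
{
    memcpy(nvidia_vram, shared_linear, FB_SIZE);
}

/* Every frame pays both copies, which is where the power goes. */
static void present_frame(void)
{
    intel_damage_flush();
    nouveau_update_scanout();
}
```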
There is also a secondary problem with output configuration. Some laptops (the Lenovo I have, at least) connect DDC lines to the intel GPU for outputs which are only wired to the nvidia GPU, so when I enable the nvidia as a slave I get some cases of double monitor reporting. Fixing that probably requires parsing ACPI tables properly, like Windows does. However I suppose having two outputs is better than none :-)
So I've gotten this working today on two intel/nvidia laptops, and I'm contemplating how to upstream it. So far I've just done some hackery to nouveau; that, along with some fixes in intel driver master and a patch to the X server (or the Fedora koji 1.13.1-2 server), makes it just work.
I really dislike this solution, but it seems it might be the best stopgap until I can sort out the compositor-side issues (GL being the main problem).
update: I've pushed reverse-prime branches to my X server and -ati repo.
So I awoke to find an announcement that the userspace drivers for the rPI have been released, and lots of people cheering, but really what they've released is totally useless to anyone who uses or develops this stuff.
(libv commented on their thread: http://www.raspberrypi.org/archives/2221#comment-34981
maybe he'll follow up with a blog post at some point).
So to start: the GLES implementation lives on the GPU, in firmware. It provides a high-level GLES RPC interface, and the newly opened source code just does some marshalling and shoves calls over the RPC.
Why is this bad?
You cannot make any improvements to their GLES implementation, you cannot add any new extensions, you can't fix any bugs, you can't do anything with it. You can't write a Mesa/Gallium driver for it. In other words you just can't.
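To make the point concrete, here is a hypothetical sketch of what such a shim amounts to. All the names here (rpc_msg, vchiq_send, GLCLEAR_ID) are made up, but the shape matches the description above: pack the GL call's arguments into a message, ship it to the firmware, done. Nothing in this layer can fix a bug in the actual GL implementation.

```c
#include <stdint.h>
#include <string.h>

/* Made-up message format and transport standing in for the real
 * firmware RPC channel. */
#define GLCLEAR_ID 0x2001

struct rpc_msg {
    uint32_t id;      /* which GL entry point */
    uint32_t args[4]; /* marshalled arguments */
};

static struct rpc_msg last_sent;

/* Stand-in for the real RPC send to the VideoCore firmware. */
static void vchiq_send(const struct rpc_msg *msg)
{
    last_sent = *msg;
}

/* The entire "driver" for one entry point: marshal and forward.
 * There is no GPU programming here at all. */
void glClear_shim(uint32_t mask)
{
    struct rpc_msg msg;

    memset(&msg, 0, sizeof(msg));
    msg.id = GLCLEAR_ID;
    msg.args[0] = mask;
    vchiq_send(&msg);
}
```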
Why is this not like other firmware (AMD/NVIDIA etc)?
The firmware we ship for AMD, and for nvidia via nouveau, isn't directly controlling the GPU shader cores. It mostly does ancillary tasks like power management and CPU offloading. Some firmwares for video decoding would start to fall into the same category as this stuff. So if you want to add a new 3D feature to the AMD GPU driver, you can just write the code to do it; not so with the rPI driver stack.
Will this mean the broadcom kernel driver will get merged?
This is like Ethernet cards with TCP offload, where the full TCP/IP stack runs on the card's firmware. Those cards seem like a good plan until you find bugs in their firmware or discover their TCP/IP implementation isn't actually any good. The same problem will occur here. I'd take bets the GLES implementation sucks, because they all do, but the whole point of open sourcing something is to allow others to improve it, something that can't be done in this case.
So really, Raspberry Pi and Broadcom get a big FAIL for even bothering to make a press release for this. If they'd just stuck the code out there and gone on with things it would have been fine, nobody would have been any happier, but some idiot thought this crappy shim layer deserved a press release. Pointless. (And really, phoronix, you suck even more than usual at journalism.)
So I've been slowly writing the hotplug support v3 in between all the real jobs I have to do.
[side note: when I started out on hotplug, one of my goals was to avoid changing the server API/ABI too much, so I could continue side-by-side testing.]
How did I get to v3?
v0.1: was called dynerama; it failed miserably and proved that using Xinerama as the plugging layer was a bad plan.
v1: was the first time I decided to use an impedance layer between some server objects and driver objects.
v2: was a major rebase of v1.
v2 was trucking along nicely, and I managed to get the design to the stage where PRIME offloading between intel/nouveau worked, USB device hotplug with udl worked, and GPU switching worked between two drivers. However, v2 duplicated a lot of code and invented a whole new set of API objects called DrvXRec: DrvScreenRec, DrvPixmapRec, DrvGCRec, etc. This led me to look at the pain of merging it into the drivers and the server, and my goal of avoiding API/ABI changes was getting in my way.
So before starting v3 I decided to rework some of the server "APIs".
The X server has two main bodies of code: the DIX (device independent X) and the DDX (device dependent X). In the X.org tree the DIX lives in the top-level dirs, and for the X.org server the DDX lives in hw/xfree86. The main object holding info about protocol screens and GPUs is called ScreenRec in the DIX and ScrnInfoRec in the DDX. These are stored in two arrays, screenInfo.screens in the DIX and xf86Screens in the DDX, and when code wants to convert between them it can do one of a few things:
a) lookup by index: both structs have an index value, so to go from ScrnInfo to Screen you look at screenInfo.screens[scrninfo->scrnIndex], and the other way is xf86Screens[screen->myNum]. This is the "I didn't try to make an API, I just exposed everything" approach.
b) ScrnInfo has a ScreenPtr in it, so code can do ScrnInfo->pScreen to get a pointer to the DIX struct. But this pointer is only initialised after a bunch of code has run, so you really can't guarantee it will be useful to you.
c) XF86SCRNINFO uses the DIX privates subsystem to look up the ScrnInfo in the Screen's privates. This is the least used and probably the slowest method.
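As a sketch, with heavily simplified mock structs standing in for the real ScreenRec/ScrnInfoRec, paths (a) and (b) look like this (path (c) goes through the DIX privates subsystem, which I've left out):

```c
#include <stddef.h>

/* Simplified mocks of the two screen structures, just to illustrate
 * the lookup paths; the real structs carry far more state. */
typedef struct _Screen ScreenRec, *ScreenPtr;
typedef struct _ScrnInfo ScrnInfoRec, *ScrnInfoPtr;

struct _Screen   { int myNum; };
struct _ScrnInfo { int scrnIndex; ScreenPtr pScreen; };

static ScreenPtr screens[16];       /* stands in for screenInfo.screens */
static ScrnInfoPtr xf86Screens[16]; /* the DDX global array */

/* (a) index lookup: both structs record their slot in the globals,
 * so any code can poke the arrays directly; no API at all. */
static ScreenPtr scrn_to_screen_by_index(ScrnInfoPtr s)
{
    return screens[s->scrnIndex];
}

static ScrnInfoPtr screen_to_scrn_by_index(ScreenPtr s)
{
    return xf86Screens[s->myNum];
}

/* (b) cached pointer: only valid after pScreen has been initialised,
 * which happens late in server startup. */
static ScreenPtr scrn_to_screen_by_pointer(ScrnInfoPtr s)
{
    return s->pScreen;
}
```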
Also, screenInfo.screens contains the protocol screens we expose to clients, so this array can't really change or move around. I'd like to add screenInfo.gpuscreens and xf86GPUScreens without drivers having to know which set of info they're working on; however (a) totally screws this idea, since the indices are always looked up directly in the global arrays.
Now, lots of the Screen/ScrnInfo APIs exposed to the drivers pass an int index as the first parameter, and the function in the driver then goes and looks up the global arrays.
So my first API change introduces some standard conversion functions, xf86ScreenToScrn and xf86ScrnToScreen, and converts a lot of the server to use them. Yay, an API. The second set of changes then converts all of the index-passing APIs to pass ScrnInfoPtr or ScreenPtr, so the drivers don't go poking into global arrays. Now this is a major API change, and it will involve slightly messy code in drivers that want to work with both servers, but I can't see a nicer way to do it. I've done a compat header file that should hopefully cover a lot of this stuff so we don't have to see it.
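The compat-header idea can be sketched roughly like this, again with mock types standing in for the server structures: on a server that has the new API, the real xf86ScreenToScrn()/xf86ScrnToScreen() functions get used; on an older server, macros fall back to the old index-based global-array lookups. The dixScreens array here is a stand-in for screenInfo.screens.

```c
/* Mock types standing in for the real ScreenRec/ScrnInfoRec. */
typedef struct { int myNum; } ScreenRec, *ScreenPtr;
typedef struct { int scrnIndex; } ScrnInfoRec, *ScrnInfoPtr;

static ScrnInfoPtr xf86Screens[8]; /* DDX global array */
static ScreenPtr dixScreens[8];    /* stands in for screenInfo.screens */

/* On old servers there is no conversion API, so a driver compat
 * header can supply macros built on the global arrays; a new server
 * would define XF86_HAS_SCRN_CONV and provide real functions. */
#ifndef XF86_HAS_SCRN_CONV
#define xf86ScreenToScrn(pScreen) xf86Screens[(pScreen)->myNum]
#define xf86ScrnToScreen(pScrn)   dixScreens[(pScrn)->scrnIndex]
#endif
```

The payoff is that driver code can be written once against the conversion calls and compile against both server generations.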
I've one other API introduction on the list: Glyph Pictures are another global array indexed by screen index, so I've added an accessor function so that drivers no longer use the index to get at the array contents directly.
Once this stuff lands in the server, a team of people will go forward and port the drivers to the new APIs (who am I kidding).
No new videos yet, need to fix some more rendering bugs so it looks nicer :)
So I've been working towards 3 setups:
a) intel rendering + nouveau offload
b) nouveau rendering + DVI output + intel LVDS output
c) hotplug USB with either intel or nvidia rendering.
Categorisation of devices roles:
I've identified 4 devices roles so far:
preferred master: the device is happy to be master
reluctant master: the device can be a master but would rather not be
offload slave: device can be used as an additional DRI2 renderer for a master
output slave: device can be used as an additional output for a master
For the 3 setups above:
a) intel would be preferred master, nvidia would be offload slave
b) nvidia would be preferred master, intel would be output slave
c) usb devices would be output slaves, however if no master exists, usb device would be reluctant master.
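A hypothetical encoding of those roles and the three setups; the enum and the pick_role() helper are illustration only, not the real server code:

```c
#include <stdbool.h>

/* The four device roles identified above. */
enum gpu_role {
    ROLE_PREFERRED_MASTER, /* happy to be master */
    ROLE_RELUCTANT_MASTER, /* can be master, would rather not */
    ROLE_OFFLOAD_SLAVE,    /* additional DRI2 renderer for a master */
    ROLE_OUTPUT_SLAVE,     /* additional output for a master */
};

enum gpu_kind { GPU_INTEL, GPU_NVIDIA, GPU_USB };

/* Assign roles matching setups (a), (b) and (c):
 * (a) intel render + nouveau offload  -> want_nvidia_render = false
 * (b) nouveau render + intel output   -> want_nvidia_render = true
 * (c) USB is an output slave, promoted to reluctant master only when
 *     no other master exists. */
static enum gpu_role pick_role(enum gpu_kind kind, bool master_exists,
                               bool want_nvidia_render)
{
    switch (kind) {
    case GPU_USB:
        return master_exists ? ROLE_OUTPUT_SLAVE : ROLE_RELUCTANT_MASTER;
    case GPU_NVIDIA:
        return want_nvidia_render ? ROLE_PREFERRED_MASTER
                                  : ROLE_OFFLOAD_SLAVE;
    case GPU_INTEL:
        return want_nvidia_render ? ROLE_OUTPUT_SLAVE
                                  : ROLE_PREFERRED_MASTER;
    }
    return ROLE_OUTPUT_SLAVE;
}
```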
I've rebased the prime work on top of the dma-buf upstream work, and worked through most of the lifetime problems. Some locking issues still exist, and I'll have to get back to them. But the code works and doesn't oops randomly which is good.
prime is the kernel magic needed for this work, as it allows sharing a buffer between two drm drivers: for (a) it shares the dri2 front pixmap between devices; for (b/c) it shares a pixmap that the rendering GPU copies dirty updates into and the output slaves use as their scanout pixmap.
So I've done nearly all the work to share between intel and nouveau and I've done the kernel driver work for udl, but I haven't done the last piece in userspace for (c), which is to use the shared pixmap as usb scanout via the modesetting driver.
Today I hacked in a switch on the first randr command, so I can start the X server with intel as master and nouveau in offload mode. I can run gears on intel or nouveau; then, after that randr command plus another to set a mode, the X server migrates everything to the nouveau driver, puts it in master mode, and places the intel driver into output slave mode. It seems to render my xterm + metacity content fine.
So the current short-term TODO is:
fix some issues with my nouveau/exa port rendering
fix some issues with xcompmgr
add usb output slave support.
work out how to control this stuff via the randr protocol. How much information do we need to expose to clients about GPUs, and how do we control them? There are open issues with atomicity of updates to avoid major ugliness. Switching from intel master to nvidia master + intel outputs means we have to reconfigure the intel output to point at the new pixmap, but the more steps we make clients do, the more ugliness and flashing we'll see on screen; however we probably want a lot of this to be client driven (i.e. gnome-settings-daemon).
Longer term TODO:
Get GLX_ARB_robustness done, now that Ian has done the context creation stuff, this should be a lot more trivial. (so trivial someone else could do it :)
So today I managed to see something on screen doing X.org hotplug work. So I present to you live X.org plugging.
It's pretty much a laptop running the xf86-video-modesetting driver, with my server, an xterm + metacity. Plug in a USB displaylink device, with a kernel drm driver I wrote for it (sneaky xrefresh in the background), and the USB device displays the xterm and metacity.
So what actually happens?
The X server at the start had loaded drivers using udev and a new driver ABI. It exports one X protocol screen and plugs an internal DrvScreen into it.
When the hotplug happens, the server inits another DrvScreen for the new device, and plugs it into the single protocol screen. It also copies all the driver level pixmaps/pictures/GCs into the new driver. The single protocol screen at the bottom layer multiplexes across the plugged in drvscreens.
This is like Xinerama pushed down a lot further in the stack, so instead of doing iterations at the protocol level, we do it down at the acceleration level. Also I have randr code hooked up so all the outputs appear no matter what GPU they are from.
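The fan-out can be sketched like this; DrvScreen, ProtocolScreen, and the single copy_area op here are made-up simplifications of the real structures, just to show iteration happening at the acceleration level rather than the protocol level:

```c
/* One protocol screen multiplexing an acceleration call across every
 * plugged-in driver screen (hypothetical simplified structures). */
#define MAX_DRVSCREENS 4

typedef struct DrvScreen {
    int copies_done; /* stand-in for real per-driver accel state */
    void (*copy_area)(struct DrvScreen *);
} DrvScreen;

typedef struct ProtocolScreen {
    DrvScreen *drv[MAX_DRVSCREENS];
    int ndrv;
} ProtocolScreen;

static void drv_copy_area(DrvScreen *d)
{
    d->copies_done++; /* a real driver would blit here */
}

/* Instead of iterating at the protocol level (Xinerama style), the
 * bottom layer fans the operation out to each driver screen. */
static void screen_copy_area(ProtocolScreen *ps)
{
    int i;

    for (i = 0; i < ps->ndrv; i++)
        ps->drv[i]->copy_area(ps->drv[i]);
}
```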
This isn't exactly what I want for USB hotplug, ideally we'd use the main GPU to render stuff and only scanout using the USB device, but this is step one. I also need the ability to add/remove drvscreens and all the associated data in order to support dynamic GPU switching.
The real solution is still a long way off, but this is a small light in a long tunnel; I've been hacking on this on and off for over a year now, so it's nice to see something visible for the first time.
If you've seen The Social Network, you know that when he launches "The Facebook" for the first time, he needs his friend's list of non-nerd contacts' email addresses; I get the feeling Google+ never got this step.
When I joined facebook it was because my mother sent me an invite.
The thing is finding out what my nerd friends are doing is easy, they are always on irc or mailing lists or twittering. I'm not sure what google+ adds to this.
To start off, I don't actually mind arrogant people, as long as they back their attitude up with some semblance of sanity; however, arrogance without ability pisses me off, and it seems that it's the number one trait required to be a maintainer of glibc.
Today someone pointed out F_SETOWN_EX to me which I can use to fix a problem in the X server, however trying to use gettid made me realise why these guys are considered such fuckwits.
WTF? glibc exposes three interfaces that need the result of gettid passed to them, and internally relies on gettid to implement raise properly, yet it won't expose the interface. Notice in the bug he never addresses the problems, just closes it with "CLOSED ARROGANCE".
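For reference, the workaround everyone ends up with: wrap the raw syscall yourself, then feed the tid to F_SETOWN_EX so the signal goes to one specific thread. A minimal Linux-only sketch; the function names are mine.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>
#include <sys/syscall.h>

/* glibc won't wrap gettid(), so call the syscall directly. */
static pid_t my_gettid(void)
{
    return (pid_t)syscall(SYS_gettid);
}

/* Direct SIGIO/SIGURG for this fd to the calling thread rather than
 * the whole process, which is exactly what F_SETOWN_EX is for (and
 * exactly why not exposing gettid is so daft). */
static int own_fd_on_this_thread(int fd)
{
    struct f_owner_ex owner;

    owner.type = F_OWNER_TID;
    owner.pid = my_gettid();
    return fcntl(fd, F_SETOWN_EX, &owner);
}
```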
Other recent glibc fuckwittery includes the whole overlapping-memcpy issue in F14 breaking Flash for everyone, and trying to remove the RPC implementation a week before the F15 final release freeze; if that package had gotten in, any package needing a rebuild, say for a security fix, would have ended up linked against a different RPC library than the original, which could have caused untold pain.
So really if glibc could DIACF already thanks.
This is a test post hidden in an informational post.
I've upgraded the planet generator software for planet.kernel.org and planet.freedesktop.org to the latest venus codebase.
This was mainly due to the older planet software running p.fd.o taking down the webserver once or twice this week.
Hopefully it doesn't do any damage.