This is a test post hidden in an informational post.
I've upgraded the planet generator software for planet.kernel.org and planet.freedesktop.org to the latest venus codebase.
This was mainly due to the older planet software running p.fd.o taking down the webserver once or twice this week.
Hopefully it doesn't do any damage.
First day of LCA 2011 at Kelvin Grove QUT Brisbane.
I'll be organising the Southern Plumbers mini conference today, and will be speaking on Thursday at 11:30 AM about graphics drivers (who'd have guessed).
So I have a T410s with an LVDS panel and switchable graphics between intel and nvidia. I've gotten the basic switching support just like we have on the intel/amd combination.
The code is a start towards generic nvidia/nvidia and intel/nvidia switching, but it's missing some bits. The MUX switch on some GPUs relies on passing a parameter to the WMI function that we aren't passing; luckily the Lenovo doesn't need this parameter at the moment, so it works fine. Other laptops in this range may require the parameter.
I'll try to get some more info on the magic value we need to pass on other systems to make it work better. Other systems like the Sony Vaios have a number of muxable outputs listed in a table, which the Intel ACPI code prints out at drm.debug=0x4. Again, the Lenovo doesn't have this table.
Once we've switched and powered off the Intel GPU, we get log spam from the IPS driver about the MCP limits.
nouveau gets a 1024x768 console since we can't detect the LVDS panel at startup.
Also, this only works with the open source drivers, i915 and nouveau.
Okay I sat down for a few hours last week with a switchable Intel/NVidia GPU laptop, and at least worked out some more info. I'm going to braindump it here.
Firstly, in intel_acpi.c jbarnes has worked out some of the values for outputs in the ACPI tables, especially around what is and isn't MUXed.
Now we had suspected that one of the nvidia GPU DSM methods (method 5) might actually do something, but a deeper investigation along with mjg59 made us realise it isn't the mux control, so we started looking elsewhere.
So we tracked down the MXDS method, which is attached to the ACPI outputs per GPU: there is one on the LVDS for the internal GPU and one on the LVDS for the external GPU. So it appears that this is the mux switching object. Looking further up, it appears that this is called via a WMI interface, so to do this all properly it looks like we need to write a WMI driver to call the mux switching, instead of just banging on ACPI methods.
So going forward, a WMI driver needs to be written that passes the ACPI output IDs to the WMI MXDS method, which should switch the mux.
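A minimal sketch of what the core of such a WMI driver might look like, using the kernel's existing wmi_evaluate_method() helper. Note the GUID and method ID below are placeholders, not real values; the real ones would have to be read out of the vendor's ACPI/WMI tables, and the input buffer format is an assumption:

```c
/*
 * Hypothetical sketch only: the GUID and method ID are placeholders,
 * and the argument encoding is a guess at what MXDS wants.
 */
#include <linux/kernel.h>
#include <linux/acpi.h>
#include <linux/slab.h>

#define MUX_WMI_GUID   "00000000-0000-0000-0000-000000000000" /* placeholder */
#define MUX_WMI_MXDS   0x01                                   /* placeholder */

static int mux_switch_to_output(u32 acpi_output_id)
{
	/* pass the ACPI output ID as the WMI method input buffer */
	struct acpi_buffer in = { sizeof(acpi_output_id), &acpi_output_id };
	struct acpi_buffer out = { ACPI_ALLOCATE_BUFFER, NULL };
	acpi_status status;

	if (!wmi_has_guid(MUX_WMI_GUID))
		return -ENODEV;

	status = wmi_evaluate_method(MUX_WMI_GUID, 0, MUX_WMI_MXDS, &in, &out);
	kfree(out.pointer);
	return ACPI_FAILURE(status) ? -EIO : 0;
}
```

The win of going through WMI rather than calling the ACPI objects directly is that the firmware gets to do whatever bookkeeping it expects around the switch.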
Now despite this we didn't actually get the switching to happen; nouveau never got an auxch connection to the panel on the laptop. So I'd really like to see someone take this knowledge and make it do something on an LVDS laptop; all I have locally is a big pile of hacks.
Thanks to Jesse Barnes for the laptop, and Matthew Garrett for the ACPI decoding.
[update: one commenter asked about Macs. They have a non-WMI, non-ACPI method of handling things; sidolin on #nouveau at one stage mostly reverse engineered the mux, which was just reading/writing some memory-mapped I/O ports to do the switch. Not sure how far he is from pushing some patches out.]
As part of LCA 2011, we are organising a mini-conference to provide a place for low-level system developers to gather and discuss interesting aspects of the kernel and lowlevel userspace. This miniconf is aimed at a similar audience to the Linux Plumbers Conference.
We'd really like to have talks that are entry points to open floor discussions and interactive sessions, though if a talk is interesting enough it will be considered depending on the quality of other submissions.
Areas of interest:
- Linux kernel
- X.org / Mesa
- toolchain/gdb (not deep compiler stuff, more system interactions)
- udev/u*/*Kit/hal replacements
The Southern Plumbers mini-conference is planned for Monday 24th Jan 2011.
Please send all submissions to email@example.com. Submissions will close on Friday, October 29th (when I leave for the real Linux Plumbers Conference), though I may accept submissions over the week of LPC as well if I can drum up interest.
CFP available at http://planet.kernel.org/splca/cfp.txt
Dave Airlie and Matthew Wilcox.
So to follow on from my post stating my position wrt kernel drivers for closed source userspace drivers, let's take a look at the embedded GPU industry and its relationship with the Linux kernel.
What does the embedded industry get from Linux?
They get a kernel which is royalty free, with 1000s of man-years of development experience and resources. Before Linux these vendors either sourced an OS on a royalty basis from some closed-shop, or rolled their own in-house one.
Now people might say "but the embedded GPU industry has to support Windows as well", but take one look at NVIDIA Tegra One and you can see the embedded Windows marketplace is less than important. NVIDIA Tegra Two is all about Linux, whereas they were pretty much only talking to MS on Tegra One.
So Linux is a great boon for this industry, and means they can produce higher quality products for a lower cost (or lower quality products at a lower cost in some cases). So really there are probably two games in town for these embedded vendors: selling into Apple, or selling into Linux-centric developments like Android, MeeGo and Linaro.
So what are they actually hiding in userspace?
The main thing they seem to be hiding is shader compilers and their GPU assembler code: the things that convert from GLES into the assembler code for their GPUs. This stuff isn't rocket science, but it probably is where most of their speed-ups and tricks are hidden.
So why do they think it's valuable?
I think all 3D IP vendors dream of becoming Imagination Technologies. They need to learn there is already one Imagination Technologies, and the only way to easily disrupt its revenue stream and sell into other SoCs is to be disruptive, not just follow the herd. They also probably had to spend a lot of money writing a decent GPU compiler from scratch, whereas most embedded firmware is a lot more trivial, so they probably think they need to directly recoup the costs of this development instead of giving it away. The thing is, they are hardware vendors; the software is a sunk cost, and opening it would actually make future maintenance easier. Hardware companies never do well at software, and they would be best to just open it and try to involve some community development around it.
Is this IP more valuable than what they receive from Linux?
This is the crux of my issue with these vendors: they are receiving the Linux kernel for free, but don't want to contribute anything back. They know they can't sell anywhere except into Linux-driven products, but they insist on keeping their development methodologies from the days of Windows and their own in-house OSes. Those days are gone, but they cling to the idea that for some reason they can produce a better GPU stack on their own than they could in collaboration with others, despite the fact that the kernel that forms the basis for their sales was developed in this fashion. They also all use gcc as the compiler for their CPUs, again proving the insanity.
Isn't it up to them what they do?
Totally, but it's also up to the Linux community to push back against them. The thing is, they'd never have opened any code if it wasn't for the GPL making them at least open the kernel portions; they don't care about freedom or the GPL, they care about their bottom line and doing the least amount of work to remain legal and make money. Now they are getting all this wonderful software for free, and Linux phone sales are driving their bottom line, but they still don't want to play the game by the rules of the kernel. They want to have their cake and eat it too (the cake is a lie). Hence they spend their time creating their own solutions in private, releasing what they have to in order to comply with the legalese, but never actually allowing people the freedom to use their devices.
So shouldn't we give a little?
The thing is, two major vendors have been pushing Imagination Technologies for years to open something. These guys are aiming to sell thousands to millions of devices, and we have gotten the ugliest kernel shim in the world in 4 years of trying. All the other vendors are only willing to give that little. I don't personally think any of them want to open this stuff, and they will hide behind IP excuses forever.
What will make them change their minds?
a) money, and lots of it. If Google or OLPC can demand open driver commitments (in contracts, not handwaving agreements), then I suspect these vendors will quickly realise the value of their IP is dwarfed by the value of sales. This probably means a major chance for one of the vendors to control a lot of the space in the Linux world.
b) a disruptive vendor: one vendor realises before the others that opening their IP will lead to more sales than keeping it closed, and also to the chance of more people optimising their technology and leveraging other work in the industry.
So are you saying they should drop all their in-house developed solutions?
No, I'm saying that the driver for their hardware is a single entity, and if the whole entity isn't open, then none of it is truly open. So if they don't want to release an open userspace, then they don't get to merge their open kernel bits to support the closed userspace. We have to keep the maintenance burden on them, so it keeps costing them money to track newer kernels, and they don't get community support from other vendors who have committed to doing things right.
So why should they re-write drivers?
This happens in Linux all the time, with nearly every new technology. Wireless, RAID and SATA, for example, have all had vendors trying to push complete stacks of their own writing; you'll notice over time that the drivers actually written to the current stacks work best, and the crazy vendor drivers are often horror shows.
What would be nice to happen?
It would be great if there was a hero with time/funding and involvement in the ARM GPU community to take over being maintainer of these solutions, from kernel all the way to userspace. Vendor driver writers could ask this person for advice, and they could have some sort of working group where they develop a stack based around current Linux technologies, like GEM/TTM/DRI2/Mesa/Gallium3D. If you take a look at the mesa stack lately, there has been a lot of work on making it work as an EGL/GLES stack as well as a classic GL stack. Then vendors would supply open drivers compliant with this stack, and just sell lots of chips.
What would be the most likely negative outcome?
We get what we have now: they maintain the 5-6 GPU stacks in their own worlds, never talk to each other, and it costs them more and more money going forward to maintain. Some hero reverse engineers one or two of the GPU architectures; maybe some hero writes an open driver stack from docs under NDA or with open docs.
I may update this post as I have more thoughts ;-)
[I posted this to lkml earlier - discussion should happen there, not in comments here, but it's nice to have somewhere easy to point people at].
Now this is just my opinion as maintainer of the drm, and doesn't reflect anyone or any official policy. I've also no idea if Linus agrees or not.
We are going to start to see a number of companies in the embedded space submitting 3D drivers for mobile devices to the kernel. I'd like to clarify my position once so they don't all come asking the same questions.
If you aren't going to create an open userspace driver (either MIT or LGPL) then don't waste time submitting a kernel driver to me.
My reasons are as follows. You can probably excuse some of these on a point-by-point basis, but you need to justify a closed userspace against all of these points.
a) licensing. Alan Cox pointed this out before: if you write a GPL kernel driver, then write a closed userspace on top, you open up a whole world of derived-work issues. Can the userspace operate on a non-GPL kernel without major modifications, etc.? This is a can of worms I'd rather not enter into, and there are a few workarounds.
b) verifying the sanity of the userspace API.
1. Security: GPUs can do a lot of damage if left at home alone. Since mostly you are submitting command streams unverified into the GPU, and you won't tell us what they mean, there is little way we can work out if the GPU is going to overwrite my passwd file to get 5 fps more in quake. Now newer GPUs have at least started having MMUs, but again we've no idea if that is the only way they work, without docs or a lot of work.
2. General API suitability and versioning. How do we check that the API is sane wrt userspace, if we can't verify the userspace? What happens if the API has lots of 32/64-bit compat issues or things like that, and when we fix them the binary userspace breaks? How do we know, how do we test, etc.? What happens if a security issue forces us to break the userspace API? How do we fix the userspace driver and test to confirm?
c) supplying docs in lieu of an open userspace
If you were to fully document the GPU so we could verify the security/API aspects, it leaves us in the position of writing our own driver. Now writing that driver on top of the current kernel driver would probably limit any innovation, and most people would want to write a new kernel driver from scratch. Then we end up with two drivers fighting: how do we pick which one to load at boot? Can we ever do a generic distro kernel for that device (assuming ARM ever solves that problem)?
I've also noticed a trend to just reinvent the whole wheel instead of writing a drm/kms driver and having that as the API; again, this is the stuff maintainer nightmares are made of.
d) you are placing the maintenance burden in the wrong place
So you've upstreamed the kernel bits, and kept the good userspace bits to yourselves, stroking them on your lap like some sort of Dr Evil. Now why should the upstream kernel maintainers take the burden when you won't actually give them the stuff to really make their hardware work? This goes for nvidia-type situations as well: the whole point is to place the maintainer burden at the feet of the people causing the problems, in an effort to make them change. Allowing even an hour of that burden to be transferred upstream means more profit for them, but nothing in return for us.
Ted with the smackdown:
when you can't fix bugs in your "enterprise distro", file them against someone else's community distro:
So we bought an Electrolux front-loader washing machine just over a year ago, all ready for the amount of washing a baby produces!!
A few days ago, the drum stopped spinning. I did all the filter cleaning etc. the manual recommended, but still no moving drum. So as I was ringing the warranty repair people, I decided to use a lot more force than I'd normally consider using on something under warranty. This was followed by the sound of a coin dropping somewhere inside, and the drum moving again. However, it still wouldn't move under power. So I called Electrolux, who gave me the "is Tuesday 2 weeks from now good for you?" response.
So I decided to have a look myself, took the lid off and, lo and behold, the drive belt had snapped in half. So today I bought a new drive belt from a spares place, and tonight I shall endeavour to resurrect the washer.
Now some net searching over the course of last night discovered this is a really common problem with these washing machines. When faced with the prospect of coins, the machine will just let a coin enter between the two drums; once there, it can jam the inner and outer drums together and cause the drive belt to snap or the motor to give up. Now I'm not the best person at taking money out of my jeans, and stuff goes into the wash sometimes before I notice. But really, who thinks a washing machine that can die when faced with one or two accidental coins is a) something you can sell, b) something you can apparently say isn't a warranty fault because there is a sticker on the thing saying not to put coins in it? I think if Electrolux came to fix it, it would cost nearly 1/4 the price of the washing machine if the warranty didn't cover it.
So I'm hoping to abuse my google juice to say to everyone "DON'T BUY ELECTROLUX FRONT LOAD WASHING MACHINES" because they obviously don't think about using them in the real world.
I've just done a 1.6.1 release of radeontool from my personal repo. It contains both radeontool and avivotool, and is probably full of ugly, but what's in distros now is older.
radeontool (and avivotool) are low-level tools to tweak registers and dump state on radeon GPUs; they can also parse parts of the BIOS data tables.
Tarballs are at
- r128 support from Jonathan Nieder
- some new legacy table decoding added