
airlied
embedded GPUs: what are they hiding?

So, to follow on from my post stating my position on kernel drivers for closed-source userspace drivers, let's take a look at the embedded GPU industry and its relationship with the Linux kernel.

What does the embedded industry get from Linux?

They get a kernel that is royalty free, with thousands of man-years of development experience and resources behind it. Before Linux, these vendors either sourced an OS on a royalty basis from some closed shop or rolled their own in-house.

Now people might say "but the embedded GPU industry has to support Windows as well", but take one look at the NVIDIA Tegra and you can see the embedded Windows marketplace is less than important: Tegra 2 is all about Linux, whereas on the first Tegra they were pretty much only talking to Microsoft.

So Linux is a great boon for this industry, and it means they can produce higher-quality products at a lower cost (or, in some cases, lower-quality products at a lower cost). So really there are probably only two games in town for these embedded vendors: selling into Apple, or selling into Linux-centric developments like Android, MeeGo and Linaro.

So what are they actually hiding in userspace?

The main thing they seem to be hiding is their shader compilers and GPU assembly: the code that converts GLES shaders into the assembly language of their GPUs. This stuff isn't rocket science, but it probably is where most of their speed-ups and tricks are hidden.
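To make the shader-compiler point concrete, here is a deliberately tiny sketch of the kind of work these closed compilers do: lowering a GLSL-style expression into register-level instructions, and fusing a multiply and an add into a single MAD instruction, the sort of optimisation trick vendors consider valuable. The instruction names and register scheme here are invented for illustration; no vendor's real ISA or compiler looks like this, and a production GLES compiler is vastly more complex.

```python
# Toy illustration of a shader compiler's job: lower a GLSL-style
# expression into pseudo GPU assembly. The MUL/ADD/MAD/MOV mnemonics
# and the register naming are invented for this sketch.

def compile_expr(dest, expr):
    """Lower expressions of the shape 'a * b + c' to pseudo assembly."""
    expr = expr.replace(" ", "")
    if "+" in expr:
        left, addend = expr.split("+", 1)
        if "*" in left:
            a, b = left.split("*", 1)
            # Peephole optimisation: fuse the multiply and the add into
            # one MAD (multiply-add) instruction, a classic trick hidden
            # inside vendor shader compilers.
            return [f"MAD {dest}, {a}, {b}, {addend}"]
        return [f"ADD {dest}, {left}, {addend}"]
    if "*" in expr:
        a, b = expr.split("*", 1)
        return [f"MUL {dest}, {a}, {b}"]
    return [f"MOV {dest}, {expr}"]

print(compile_expr("r0", "tex * light + ambient"))
# → ['MAD r0, tex, light, ambient']
```

The real value in a vendor's compiler is in register allocation, instruction scheduling and hardware-specific fusions like the MAD above, which is exactly why they treat it as their crown jewels.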

So why do they think it valuable?

I think all 3D IP vendors dream of becoming Imagination Technologies. They need to learn that there is already one Imagination Technologies, and the only way to easily disrupt its revenue stream and sell into other SoCs is to actually be disruptive, not just follow the herd. They also probably had to spend a lot of money writing a decent GPU compiler from scratch, whereas most embedded firmware is far more trivial, so they probably feel they need to recoup the cost of that development directly instead of giving it away. The thing is, they are hardware vendors; the software is a sunk cost, and opening it would actually make future maintenance easier. Hardware companies never do well at software, and they would be best served by opening it and trying to build some community development around it.

Is this IP more valuable than what they receive from Linux?

This is the crux of my issue with these vendors: they are receiving the Linux kernel for free, but they don't want to contribute anything back. They know they can't sell anywhere except into Linux-driven products, but they insist on keeping the development methodologies from the days of Windows and their own in-house OSes. Those days are gone, but they cling to the idea that they can somehow produce a better GPU stack on their own than they could in collaboration with others, despite the fact that the kernel that forms the basis of their sales was developed in exactly that collaborative fashion. They also all use gcc as the compiler for their CPUs, which only underlines the insanity.

Isn't it up to them what they do?

Totally, but it's also up to the Linux community to push back against them. The thing is, they'd never have opened any code if the GPL hadn't made them at least open the kernel portions. They don't care about freedom or the GPL; they care about their bottom line, and about doing the least amount of work to remain legal and make money. Now they are getting all this wonderful software for free, and Linux phone sales are driving their bottom line, but they still don't want to play by the rules of the kernel. They want to have their cake and eat it too (the cake is a lie). Hence they spend their time creating their own solutions in private, releasing what they must to comply with the legalese, but never actually allowing people the freedom to use their devices.

So shouldn't we give a little?

The thing is, two major vendors have been pushing Imagination Technologies for years to open something. These are companies aiming to sell thousands to millions of devices, and in four years of trying we have gotten the ugliest kernel shim in the world. All the other vendors are only willing to give that little. I don't personally think any of them want to open this stuff, and they will hide behind IP excuses forever.

What will make them change their minds?

a) Money, and lots of it. If Google or OLPC can demand open-driver commitments (in contracts, not hand-waving agreements), then I suspect these vendors will quickly realise that the value of their IP is dwarfed by the value of sales. This is probably a major chance for one of the vendors to control a lot of the space in the Linux world.

b) A disruptive vendor: one vendor realises before the others that opening their IP will lead to more sales than keeping it closed, and also to more people optimising their technology and leveraging other work in the industry.

So are you saying they should drop all their in-house developed solutions?

No, I'm saying that the driver for their hardware is a single entity, and if the whole entity isn't open, then none of it is truly open. So if they don't want to release an open userspace, then they don't get to merge the open kernel bits that exist only to support the closed userspace. We have to keep the maintenance burden on them, so that it keeps costing them money to track newer kernels, and so that they don't get community support from the other vendors who have committed to doing things right.

So why should they re-write drivers?

This happens in Linux all the time, with nearly every new technology. Wireless, RAID and SATA, for example, all had vendors trying to push complete stacks of their own writing. You'll notice that over time the drivers actually written to the current stacks work best, and the crazy vendor drivers are often horror shows.

What would be nice to happen?

It would be great if there were a hero with time, funding and involvement in the ARM GPU community to take over as maintainer of these solutions, from the kernel all the way to userspace. Vendor driver writers could ask this person for advice, and they could have some sort of working group where they develop a stack based around current Linux technologies like GEM/TTM/DRI2/Mesa/Gallium3D. If you look at the Mesa stack lately, there has been a lot of work on making it function as an EGL/GLES stack as well as a classic GL stack. Vendors would then supply open drivers compliant with this stack, and just sell lots of chips.

What is the most likely negative outcome?

We get what we have now: they maintain the 5-6 GPU stacks in their own separate worlds, never talk to each other, and it costs them more and more money to maintain going forward. Maybe some hero reverse-engineers one or two of the GPU architectures, or writes an open driver stack from docs under NDA or from open docs.

I may update this post as I have more thoughts ;-)

game theory?

I think (IMveryHO) that you are approaching this from the wrong side, even though I do agree with your stance and (most of) your motivation. To me the problem looks very much like game theory: the first company to open up their code gives a (perceived) advantage to all their competitors -- even though those competitors might not dare to look at it for fear of copyright infringement. It takes quite a leap of faith to open up internal processes; remember that "classic business" thrives on having a leading edge over your competitors. Even more so in a market that is not dominated by a single common enemy.

Probably the easiest way to get more out of these vendors is to take away the (perceived) value of their IP: once Gallium has its own high-performance shader compiler, they would only have to write a new backend for it instead of writing a whole compiler of their own (disclaimer: I have no knowledge of whether it already exists, how far along its development is, or how it performs). And yes, I fully realize that this puts the "burden" of making these companies play nice on the community, but it's still the easiest solution.

If only Tungsten would have ended up in a foundation instead of a vendor...

Re: game theory?

Pretty much; that is what I expect will happen. The Mesa/Gallium framework will improve so that an open GLES stack is easier than a closed one, they'll start to build closed drivers around that instead, and eventually someone will open one of those.

It won't help with any of the current embedded GPUs, though, since these companies never invest in stuff they've already sold.

Re: game theory?

In most cases this would be true, but in this specific case Qualcomm and its partners have already committed to using this GPU core in their current generation of Scorpion/Snapdragon products, including Chrome OS smartbooks and tablets (single and dual core) and Android superphones from multiple vendors. While it's true they could change GPU, or even adopt Imagination in the future, the fact that they own most of this IP and have contractual relationships with the owners of the rest (ATI?) makes that unlikely in the near future. Qualcomm (QuIC) has already contributed an open X driver for this class of hardware and undergone at least one transition from a proprietary kernel interface (PMEM on Android) to a DRI/GEM-based one, excepting the KGSL interface, which is used for creating 3D contexts rather than for framebuffer or mode management.

The kernel should expose abstracted APIs

One of the things I have always thought is that the kernel APIs should hide hardware differences as far as possible, so that the kernel is not just a remote pipe to a PCI bus. If you need userspace libraries like Gallium3D, then they should be shipped with the kernel.

Basically the kernel is GPL with exceptions for user-mode components. If the interface isn't a defined standard, then the exception can't really apply, and the userspace library must be a derivative work of the kernel.

What about Intel?

Hi Dave,

While as a FOSS developer I fully agree with you that drivers should be as open as the other code the vendors use, from a user or even vendor point of view we are not being as convincing as we could be.

Take, for instance, Intel's graphics drivers. While the hardware does not help much, Intel has some of the best engineers writing its drivers, yet the result is not on par with any other system and is getting worse and worse as time goes on. This is noticeable to everyone running laptops or netbooks with an i965/945; Phoronix and others have tried to measure it, yet nothing seems to get better.

Other efforts, like the Radeon and Nvidia open source communities, aren't doing better, but those are a more complex matter, since people there don't get all the docs, hardware and tools they need.

Thus I really think that if we want vendors to believe the story you told us, it is better to prove it with the easiest target, which is the Intel drivers. It would help if people could point at them and say: "It's X% faster than Windows or MacOS on the same hardware. It's on par with Y and Z, which are more expensive and more powerful hardware, and yet it provides more features, like proper XRandR 1.3."

Again, I fully agree with your points, and thank you for pushing for this change; we really need more of it!

Re: What about Intel?

The thing is, the number of developers is still very, very small. The number of developers working on Windows drivers for the big PC players dwarfs the number of Linux developers. Even at Intel, where there is a decent number of developers, they are only now getting around to writing a GLSL compiler, whereas I'm sure the Windows GLSL compiler team is 20-30 people and is finished 6 months before the GPU ships.

You can't just ramp up the same size team as long as Windows is involved.

However, in this case there is no Windows and no reason to write a closed driver; the only market these guys can sell into is Linux-based. They just seem to think it's the same place as the Windows market, and lots of vendors are clearly trying to position Linux as a replacement OS for Windows without making hardware vendors aware of their responsibilities beyond the strictly legal ones (yes, I'm looking at Google; OLPC, at least in its first version, seemed to do better).

Re: What about Intel?

I think the OP does have a point, though. We hit serious regressions with older intel chipsets - and by that I mean anything pre-i945, sometimes even the i945 itself - with the move to KMS. The response from ajax and everyone at Intel was pretty much 'eh, that's old, who cares'. I've more or less given up on getting anyone at Intel to care about how broken the intel driver currently is on i8xx chips. The 'not caring about anything they've already sold' mindset is alive and well when it comes to F/OSS drivers too...

Re: What about Intel?

It is true that over time the Intel driver got less stable for older hardware, which is unfortunate. But it's not true that they don't try to improve it; see http://bugs.freedesktop.org/show_bug.cgi?id=27187 for instance. My impression is that the (especially) old Intel video hardware is quite fragile and easy to crash when anything unexpected is asked of it, like anything in the new model of KMS et al. Given the limited resources and the somewhat dire state of the Intel driver, it's not surprising that Intel focuses on improving the driver for new hardware first. I think the main problem is that they've been working on a buggy base, both the hardware and the old drivers, which makes it hard to get it all stable and working, let alone performing well.

All in all, the open Intel drivers aren't the shiniest example of good open source drivers, because of the lack of stable 2D support, but that's in great part due to plain crappy hardware.

They probably should have left the driver for the old hardware alone, and only supported new hardware with KMS and the like. That way all the regressions would have been avoided.

Re: What about Intel?

The user experience with Intel on GNU/Linux is already much better than with Windows. Try using an EEE PC 701 or a Thinkpad X30 with Debian and with Windows. These might be older chipsets, but they prove the point.

GPLing the X server ?

Maybe the X community should take a clue from the kernel. The reason vendors release their kernel drivers is the GPL. Releasing new X work under the GPL would probably force the various embedded vendors to choose between re-inventing X and releasing their code. Politely asking didn't work and will not work. They have to be forced, either by very, very big customers (not happening) or by licensing.

Re: GPLing the X server ?

It has been suggested a number of times; it's just a major change, and most likely they'd simply fork the old version of X and ship the whole lot with their drivers (not a joke). We'd have major trouble getting new X features into those markets.

More Open == More Sales

The stupid thing is that these vendors don't realise that being closed is costing them sales. I work at an organisation with over 2000 boxes (including our servers). Because we have no idea how we may need to deploy them in the future, none of them contain any hardware that doesn't have mainline kernel support. At the end of the day, we aren't buying the units for their shader technology; we are buying them to use them, and if we can't use them, why would we buy them?

Make it happen

So, rather than waiting for one of the embedded GPU vendors to figure this out, how about telling them? Figure out which embedded GPU vendor seems like the best candidate, use your considerable connections to arrange a meeting with someone in a position to make a decision, and lay out the current and future state of the industry for them.

Re: Make it happen

Oh, there are people working on this behind the scenes; I'm just trying to make a public statement that people can be pointed at.

Keep it up Dave!!!

Lots of awesome. Just keep it up; I think you are 100% right in rejecting the open-kernel, closed-userspace "drivers".