Is it just me?

Story: Nvidia's Excellent Linux Adventure
Total Replies: 19
Koriel

Mar 16, 2012
11:02 AM EDT
I have been using Nvidia on Linux pretty much since they issued their first binary driver and, unlike the article's author, have never to this day had a problem with them, other than some installation issues. And Nvidia plays nice with most Wine-supported games.

I'm not sure what the move to join the Linux Foundation means for Nvidia, but since their binary drivers work well for me, I'm not sure I really care.

The only thing I can find to criticise Nvidia for is the lack of a screen-resizing tool similar to the one in their Windows driver, forcing me to do my xorg.conf tweaking by hand for my HDTV. It's an odd beast: one of the earliest HDTVs, it doesn't support 1:1 PC input, so I have overscan to compensate for.

The author should try using the AMD/ATI Catalyst drivers on Linux, and let's see how fast he goes running back to Nvidia, begging to have Jen-Hsun Huang's children.

Bob_Robertson

Mar 16, 2012
11:22 AM EDT
Koriel,

I'm convinced that "Linux Insider" is a false flag operation run by Microsoft.
PsynoKhi0

Mar 16, 2012
11:33 AM EDT
I guess it's a matter of personal experience. I've been using Radeon with fglrx myself pretty much since I started using Linux as my main OS some 5 years ago. I did hear horror stories from way back when ATI was an independent company. And yes, there are still quirks every now and then, though I'd argue the drivers' faults are blown way out of proportion. Are they still lagging behind nvidia's blob? Probably. However, it seems to me AMD is doing three times the work of the green team: bringing fglrx up to par on the consumer front after years of neglect under the previous company, adding new features, and releasing documentation as well as recruiting people to work on the FOSS radeon drivers.

The one time I had to use the nvidia blob, my desktop froze every 5 seconds whenever I had some Flash vid up. So much for "the best thing since sliced bread".

And, mostly due to other issues I have with nvidia as a company, my money goes to AMD, with no reason to switch for the foreseeable future.
cr

Mar 16, 2012
11:47 AM EDT
@bob: wasn't it Maureen O'Gara's old bully pulpit?
Koriel

Mar 16, 2012
12:45 PM EDT
As an ATI Radeon HD 5750 owner, I can tell you from personal experience that the Catalyst driver problems are not overblown; they are legion. Feel free to check out the Phoronix forums for the multitude of severely p**sed-off Linux owners of Evergreen-chipset Radeons.

A lot of them end up on eBay with warnings not to use them on Linux.

I relegated mine to a Windows box, as quite frankly it was close to useless on a Linux box.
kenholmz

Mar 16, 2012
1:00 PM EDT
I haven't been around for a while. I still think we should each use what works best for us. I, too, have used Nvidia for years but I don't disparage anyone who uses a different video card and drivers.

Objective evaluations are always welcome, as they may be of some help to others. My gosh-awful subjective experience with this hardware or that driver is probably not useful.

I see that hairyfeet is still opining (must be code for whining because that is apparently his raison d'être).
PsynoKhi0

Mar 16, 2012
1:34 PM EDT
"As an ATI Radeon HD5750 owner I can tell you from personal experience that the Catalyst driver problems are not overblown and are legion , feel free to check out the Phoronix forums for the multitude of severely p**sed off linux owners of Evergreen chipset based Radeons."

I'm registered on the Phoronix forums. I'm just not posting there much anymore, since repeating "AMD has a list of distributions they officially support; for others, either radeon is your best bet, or try fglrx but you're on your own" or "Try installing them correctly" over and over and over is tiresome.
Bob_Robertson

Mar 16, 2012
1:53 PM EDT
Psyno,

That's a fascinating list of caveats over something that really ought to be as simple as "the module is called x".
Koriel

Mar 16, 2012
2:32 PM EDT
@Psyno Agreed, AMD support is pretty much nonexistent, and as for the never-ending "repeat the install" advice, well, I'm pretty sure they are just trying to extract the urine from folks on that score. Not sure if Nvidia is any better, but the point is moot, because I have never had any reason to use Nvidia support: it's always worked.

But basically, I will not be purchasing another AMD card, as it also has a known bug on Windows: if you overclock by even just 1 MHz, then on a dual-monitor setup the second monitor flickers badly. This problem has existed since Catalyst 9.9 and still exists in the latest driver.

I know all hardware is going to have its teething problems, but it's been pretty much two years now since the release of the first HD card, and still nothing has been done about this issue on Windows. That is poor support, however you define it.

It's Nvidia all the way for me until someone better comes along.

tracyanne

Mar 16, 2012
4:28 PM EDT
Gee I've never had any problems with either. I feel left out.
BernardSwiss

Mar 16, 2012
7:43 PM EDT
My biggest (only) ongoing trouble with graphics drivers was with an Nvidia card (it locked up on anything that used OpenGL, including screensavers), whether I used the proprietary blob or the open driver, and heavy Flash was a no-no as well. Sometimes it locked so hard I couldn't even ssh in from another box. The logs made it very clear that the trouble was with the graphics driver.

The open ATI/AMD drivers have always worked quite well for me; even if I don't get heavy-gaming performance, they've been solid and smooth. I haven't used the non-open drivers since Debian Woody or Sarge (I don't recall which), since the open drivers have been plenty good enough.

- -

Change of topic:

"Hairyfeet" keeps on yammering -- endlessly -- about ABI vs API interfaces for graphics cards/drivers (I suspect it's at the root of his inability to adjust to doing things differently than Windows does it);

Does anybody have a good link to a comparison of the two approaches, and why one would prefer one to the other (particularly in reference to the "philosophical" Linux preference for an API-based approach)?

Khamul

Mar 16, 2012
8:32 PM EDT
Quoting: Does anybody have a good link to a comparison of the two approaches, and why one would prefer one to the other (particularly in reference to the "philosophical" Linux preference for an API-based approach)?

There are valid arguments both ways.

A stable ABI at the kernel boundary means that vendors can compile proprietary drivers and release the binaries: they don't have to open-source their driver, and they don't have to maintain separate builds for every kernel version that comes along. They make one version, and that's it, as long as the kernel continues to support that ABI. The Linux kernel changes rapidly, as does its internal API, so if you want a proprietary driver, you can do it, but you'll have one build for 3.0.0, another for 3.0.1, another for 3.0.2, etc. The maintenance is a nightmare. Or you can do what Nvidia does: ship an open-source "shim" that changes with the kernel, plus a binary blob that connects to that shim and doesn't change. If you want to support proprietary vendors, an unchanging ABI is essential.
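(A minimal sketch of that shim pattern in plain C; all names here are invented for illustration, and this is not Nvidia's actual interface. The open glue layer, rebuilt for every kernel, fills in a frozen, versioned table of function pointers; the closed blob only ever calls through that table.)

    #include <stdio.h>

    #define SHIM_ABI_VERSION 1        /* frozen once released */

    struct shim_ops {
        unsigned version;             /* blob checks this at load time */
        void *(*map_registers)(unsigned bar);
        int   (*enable_irq)(int line);
    };

    /* Open-source side: recompiled with each kernel, so it can chase
     * the kernel's changing internal API. */
    static void *map_registers_impl(unsigned bar) { (void)bar; return NULL; }
    static int   enable_irq_impl(int line)        { (void)line; return 0; }

    static const struct shim_ops ops = {
        .version       = SHIM_ABI_VERSION,
        .map_registers = map_registers_impl,
        .enable_irq    = enable_irq_impl,
    };

    /* Closed side: compiled once; keeps working as long as the table
     * layout (the ABI) never changes. */
    static int blob_init(const struct shim_ops *k)
    {
        if (k->version != SHIM_ABI_VERSION)
            return -1;                /* refuse to run on an unknown ABI */
        return k->enable_irq(10);
    }

    int main(void)
    {
        printf("blob_init: %d\n", blob_init(&ops));
        return 0;
    }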

Linux doesn't do this; it has an internal API, but it changes with the wind. Any time someone decides something needs to be done a little better, they go and change the API, meaning the function calls and their arguments are subject to change at any time. This is very useful in kernel development. For instance, the wireless driver API changes frequently, because wireless hardware changes frequently. The API keeps up with all the changes, and the kernel devs, whenever they make a change, update all the in-tree drivers to use the new API. Frequently these changes are minor: just an added argument to a function, for instance. But even a minor change will completely break a static ABI. Why is this useful? You don't have to plan for everything and the kitchen sink when you design the API; you make it up as you go along. Make one revision, work with it for a while, find out that something is missing or could be done better, then go back and update it, no problem. With a static ABI, once it's released it's set in stone and you have to support it forever. If you want to improve it, you have to release an updated version and then support two ABIs. It's a maintenance nightmare.
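(A toy userspace illustration of that "just an added argument" point; the function names are invented. The in-tree caller is fixed in the same commit that changes the prototype, but a binary built against the old prototype would still emit a one-argument call to a symbol whose signature has changed, and with no frozen ABI nothing catches that.)

    #include <stdint.h>
    #include <stdio.h>

    /* Revision 1: what last year's precompiled module was built against. */
    static int queue_packet_v1(void *pkt) { (void)pkt; return 0; }

    /* Revision 2: one added argument.  Every in-tree caller is updated
     * along with the change, so the whole tree keeps compiling. */
    static int queue_packet_v2(void *pkt, uint32_t flags)
    {
        (void)pkt;
        return flags ? 1 : 0;
    }

    int main(void)
    {
        char pkt[64] = {0};
        /* in-tree caller, rebuilt with the kernel: trivially updated */
        printf("v2 caller: %d\n", queue_packet_v2(pkt, 0));
        /* an out-of-tree binary would still be calling the v1 form */
        printf("v1 caller: %d\n", queue_packet_v1(pkt));
        return 0;
    }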

So the choice is: do you want the kernel devs to have a maintenance nightmare, or do you want the proprietary software vendors to have one? MS chose the ABI path for obvious reasons, but they have tons of developers to maintain their code, and even so, their software is bloated: all the extra code needed to support all those ABI versions adds up, and more code means more bugs. Linux chose the changing-API route; the kernel is by most accounts not bloated (there's a lot of code there, but most of it is device drivers supporting every device and CPU architecture known to man, so only a small fraction gets compiled into any particular kernel; the core code is actually pretty slim). On top of that, there are only so many Linux devs, and there isn't exactly a long line of developers wanting to spend their free time maintaining backwards-compatibility code just so a bunch of proprietary companies have an easier time, and there probably aren't many Linux companies wanting to employ people full-time to that end either.

Moreover, the whole backwards-compatibility thing doesn't always work out anyway. There are tons of older devices that don't work on Windows 7 because their drivers either won't install or just don't work, and the vendors never bothered making updated drivers for 7 (they want you to buy new devices, after all, or perhaps they went out of business). This doesn't happen with open-source drivers maintained in the Linux kernel tree; once they're in, they stay in almost forever, except for some seriously ancient devices that no one uses any more and that no one can be found to maintain (the floppy-tape driver comes to mind; I think that was dumped in the 2.6 kernel series). So if your hardware is more than a few years old, odds are it'll "just work" with Linux, and might not work at all in Windows 7.

For the most part, the Linux devs' policy of a changing API is working quite well: it keeps the kernel non-crufty, and it encourages vendors to release documentation or source code so their devices can be supported in the kernel. For devices that require kernel drivers (which, most notably, doesn't include printers and scanners), Linux support is generally excellent: just plug it in and it "just works". The glaring exception is video drivers, because those vendors refuse to release docs or source code (usually saying the code isn't all theirs anyway and contains parts from other vendors), and also because 3D video drivers are very large and complex compared to those for just about any other device, so it's very hard for open-source devs to reverse-engineer them. This is why Nouveau is so slllooowwww compared to the binary Nvidia drivers.
BernardSwiss

Mar 16, 2012
9:05 PM EDT
Thank you, Khamul,

That was an excellent summary, and about how I understood the matter (thank you for reassuring me :-) that I have a handle on it).

But what I'm really looking for is something that would be a good place to direct hairyfeet to have a look at (or, more realistically, to direct any of the numerous less-informed readers who might take hairyfeet's opinion as an essentially well-informed and reasonably authoritative, if not exactly impartial, summary of the topic).
Bob_Robertson

Mar 18, 2012
4:14 PM EDT
Khamul, Bernard,

One more thing to add to this. Khamul's discussion is valid inside the kernel, with kernel modules.

It is the strongest argument for "mainlining" hardware drivers, since then all the kernel devs keep your module updated at the same time all the other modules are updated. As Greg Kroah-Hartman said, "You can't keep up with this rate of change, so join it. Make it work for you." (paraphrased from his hour-long Google tech talk)

HOWEVER, OUTSIDE the kernel, the system calls are exceptionally stable. A user program compiled in 1999 is practically guaranteed to work exactly as it did in 1999, because of what has been described as "a hard shell surrounding a gooey center".
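That hard shell is easy to poke at from userland. A minimal sketch, calling the kernel directly instead of going through libc (syscall numbers are assigned per architecture and never reused, which is exactly why that 1999 binary still runs):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        /* getpid via the raw syscall interface; the number behind
         * SYS_getpid is part of the kernel's frozen userspace ABI */
        long pid = syscall(SYS_getpid);
        printf("pid via raw syscall: %ld\n", pid);
        return 0;
    }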

It is unfortunate that, over time, I have come to think that Nvidia's protestations that "we just can't open-source the driver" are just an excuse, so that they don't have to reveal to the world all the little tweaks they put in to overcome their own mistakes in hardware. Pure vanity.

Seriously, writing the driver is a COST to them. Pure overhead. Why would they not do everything possible to reduce that cost?
gus3

Mar 18, 2012
4:30 PM EDT
The driver is an investment. The ROI comes in the form of hardware sales.
Khamul

Mar 18, 2012
5:50 PM EDT
I don't know why anyone would take "hairyfeet"'s opinion as authoritative. He's just some yahoo on Slashdot. And he has a stupid name; "Khamul" is a much better name, as long as we're looking at people who don't even have real names and instead use silly handles. The main problem with him is that this author on LinuxInsider keeps quoting him for some reason, and I really do wonder if LI isn't a false-flag operation. There are tons of informative posters on Slashdot, so why does she keep quoting his dumb (and biased) posts?

@Bob: Yes, system calls outside the kernel are exceptionally stable. However, an exceptionally small number of programs actually use those system calls directly; very, very few programs are written to call the kernel themselves. One piece of software that does is the glibc library, so most POSIX-compatible programs link against that rather than calling the kernel directly. By doing this they get access to all the standard C library functions, but more importantly, they aren't Linux-specific; they can be compiled against any POSIX OS, including the *BSDs and even Windoze, without too many changes. Even then, many programs use still higher-level libraries, such as GTK and Qt, which themselves link against glibc. Those change much faster, though still fairly slowly.
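The layering is easy to show side by side. A small sketch (Linux with glibc assumed) doing the same write once through the portable libc wrapper and once against the raw kernel boundary:

    #define _GNU_SOURCE
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        const char *a = "via the glibc wrapper\n";
        write(STDOUT_FILENO, a, strlen(a));               /* portable POSIX */

        const char *b = "via the raw Linux syscall\n";
        syscall(SYS_write, STDOUT_FILENO, b, strlen(b));  /* Linux-only */
        return 0;
    }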

However, when talking about video drivers, all this is irrelevant; video drivers have to run in kernel-space for performance reasons, and also because they're directly accessing hardware.
gus3

Mar 18, 2012
9:42 PM EDT
In the case of running Linux-targeted programs on FreeBSD, it's no problem, since the "int 0x80" interface is made available through a Linux compatibility wrapper module. Ooooh, those wascally FweeBSD people! ;-)

But I beg to differ about video drivers having to run in kernel space. One needs only pages mapped to the hardware's memory, plus I/O privileges. Heck, the mapped pages don't even have to be identity-mapped (where the linear address equals the physical address). As I understand it, the X.org server does most of its graphics work with minimal kernel calls; for the most part, the kernel simply hands over pages and ports for the server to use as it will.

Even if I'm wrong about that last part, there's still no requirement for graphics drivers to run in kernel space.
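To make the "just hand over the pages" model concrete, here's a minimal sketch against the Linux framebuffer device (assumptions: /dev/fb0 exists, the mode is 32bpp, you have permission to open it, and you're on a text console rather than under X). After one open/ioctl/mmap, drawing is nothing but memory writes, with no further kernel calls:

    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);
        if (fd < 0) { perror("open /dev/fb0"); return 1; }

        struct fb_fix_screeninfo fi;
        if (ioctl(fd, FBIOGET_FSCREENINFO, &fi) < 0) { perror("ioctl"); return 1; }

        /* the kernel hands over the video memory pages... */
        uint32_t *px = mmap(NULL, fi.smem_len, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
        if (px == MAP_FAILED) { perror("mmap"); return 1; }

        /* ...and from here on, drawing is plain memory writes */
        for (size_t i = 0; i < fi.smem_len / 4; i++)
            px[i] = 0x00ff0000;       /* fill the screen red (XRGB) */

        munmap(px, fi.smem_len);
        close(fd);
        return 0;
    }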
jdixon

Mar 19, 2012
6:38 AM EDT
> ....there's still no requirement for graphics drivers to run in kernel space.

Correct. It's my understanding that non-kernel-space graphics drivers have less than optimal performance, however. :(

It seems that would be a good area for OS development to concentrate on, but instead everyone seems to take the easy way out and simply move the graphics drivers into kernel space.
Bob_Robertson

Mar 19, 2012
8:53 AM EDT
It seems this discussion has actually made progress.

Now I'm curious just how much of a performance hit a video driver would take by being moved to user space.
gus3

Mar 19, 2012
9:52 AM EDT
If you're talking about a stand-alone driver program, the hit would be pretty big. Instead of making a single call across the user-kernel boundary, there would be two: one into the kernel (perhaps a scheduling interrupt), followed by another transfer into the driver program's user space. Then another timer tick, or process yield, would go back into the kernel, and then into the client program to resume. (Edit: sounds a lot like an X server, doesn't it?)

OTOH, something like libvga puts the video hardware directly at the disposal of the client program, with just enough kernel support to be useful. So the performance becomes "however much you can cram through it".
