Are Geforce cards really license-restricted for Neural Net and/or datacenter uses?

Blockchain processing is the only permitted use for GeForce cards in the datacenter.

There is a chapter in the book “Life After Google” that indirectly explains why – AWS could charge $40k/mo for machine-learning GPU instances to do what roughly $19k of 1080 Ti graphics cards could handle, give or take (for a particular image-processing app I’m thinking of).
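Back-of-the-envelope, using the rough figures above (the poster’s numbers, not verified AWS pricing), the break-even point can be sketched as:

```python
# Break-even calculation using the rough figures quoted above
# (anecdotal numbers from the post, not verified AWS pricing).
aws_monthly_cost = 40_000    # USD per month for the ML instances
diy_hardware_cost = 19_000   # USD one-time, 1080 Ti build

breakeven_months = diy_hardware_cost / aws_monthly_cost
print(f"DIY hardware pays for itself in {breakeven_months:.2f} months")
# i.e. the one-time hardware cost is recouped in about two weeks of AWS billing
```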

Updated for thread split:

CNBC source:

GeForce and TITAN GPUs were never designed for data center deployments with the complex hardware, software, and thermal requirements for 24x7 operation … We recognize that researchers often adapt GeForce and TITAN products for non-commercial uses or other research uses that do not operate at data center scale. NVIDIA does not intend to prohibit such uses.

Bold added for emphasis.

WCCFTech source:

I’m not alone in thinking that ‘datacenter’ not being defined is problematic:

Almost any non-personal use can be construed as ‘datacenter’ use, which includes VMs being run by hosting providers

I have only one source, so this may be tin-foil hat, but the reason Stadia uses AMD was reportedly mainly around licensing. (Note: this could be partly because of the “datacenter” use restriction and partly because GRID licensing is expensiiiive… and SR-IOV is ‘standard’ on AMD in case Google plans to run multiple VMs off of one card. The source was not clear on that.)

Can you even use CUDA without the proprietary driver (e.g. with nouveau), even with reduced performance?

Not as far as I know:
https://devtalk.nvidia.com/default/topic/995606/linux/can-cuda-operate-with-open-source-nvidia-driver-or-only-with-nvidia-proprietary-driver-/

There was one developer who started on it, but as far as I know no progress was made:
https://lists.freedesktop.org/archives/nouveau/2015-July/021500.html

Why would nvidia care who is doing what?

In the book “Life After Google”, one of the chapters talks about Dreamscope (https://dreamscopeapp.com/) and how it ran on AWS’s machine-learning instances – IIRC costs crept up to $40k/mo on AWS, while GeForce 10-series cards worked just fine.

In fact, oversimplifying a bit, it seems this business was spun off by the Dreamscope devs out of their own need to stop paying so much for datacenter machine-learning hardware – https://lambdalabs.com/deep-learning/workstations/4-gpu

It doesn’t take a rocket scientist to figure out that one of the reasons for last year’s licensing change from Nvidia was the success of companies like Lambda Labs, and the perfect success case study, Dreamscope: https://dreamscopeapp.com/

And the Life After Google book has some anecdotal evidence that the team behind dreamscope, and other teams like that, have been successful enough with inexpensive gaming cards to make nvidia nervous enough to change the license.


No one owns hardware anymore, I guess.

Ok, so the Nvidia software for GeForce and Titan products is not authorized for data centers, except for blockchain processing.

See subsection 2.1.3 https://www.nvidia.com/content/DriverDownload-March2009/licence.php?lang=us&type=GeForce

No Datacenter Deployment. The SOFTWARE is not licensed for datacenter deployment, except that blockchain processing in a datacenter is permitted.

But the Nvidia spokesman quoted in that article leaves it open for individual/researcher use:

We recognize that researchers often adapt GeForce and Titan products for non-commercial uses or other research uses that do not operate at data center scale. Nvidia does not intend to prohibit such uses.

So, I don’t see how this relates to System76’s computers or clients, in the context of this video.

So to conclude:

  • The Thelio is a desktop computer; it won’t be in datacenters. The license restriction on GeForce/Titan cards doesn’t apply here.
  • System76’s servers (e.g. the Ibex Pro) with GPUs use Tesla cards.
  • The license doesn’t mention neural networks, even though that is the typical use case for GPU compute in AI.

The funny part is the reason given for this license restriction:

GeForce and Titan GPUs were never designed for data center deployments with the complex hardware, software, and thermal requirements for 24x7 operation, where there are often multi-stack racks.

But if that is their concern, why would blockchain processing be exempted? Wouldn’t the exact same concerns still apply?

AI isn’t blockchain, though, so it’s prohibited:
No Datacenter Deployment. The SOFTWARE is not licensed for datacenter deployment, except that blockchain processing in a datacenter is permitted.

The non-commercial wording worries me. Are they going to go after an individual researcher working in a commercial setting? Unlikely.

Would they go after a company that does a pilot project with a system like this, and then deploys the same thing for their org? Yes, probably.

So, I don’t see how this relates to System76’s computers or clients, in the context of this video.

My point is this: if I were working in a commercial context on ANY non-blockchain project, bought one of these systems with an Nvidia solution, and later wanted to scale the project up, I’d want to know that licensing limitations would keep me from deploying the same hardware at scale for around the same cost.

The context of Nvidia’s license change here was also shady, imho. It was an unannounced edit. That gives me some pause about relying on the current state of the license not to change further in the future.

But if that is their concern, why would blockchain processing be exempted? Wouldn’t the exact same concerns still apply?

Yes, exactly. It turns out that a lot of people were buying GeForce cards for their ML work and sticking them in racks of servers, because who cares if it’s not ECC? The whole point of the way things are done now is to run your enterprise on hardware as cheap as you can get away with and do the redundancy in software. That goes against the whole idea of buying $10k-per-unit graphics cards; hence the license change. But this is getting severely off-topic for a workstation review.


The license itself doesn’t mention non-commercial vs commercial, only datacenter. And to be specific, the restriction is on the software, not the hardware, although the distinction is moot. New motivation for renewed efforts on open source Nvidia drivers? One can dream.

This restriction is definitely irritating. Those who dislike Nvidia’s practices have one more reason to dislike them.

But unless this is a general discussion about Nvidia, I still don’t see the relevance of that comment to the System76 Thelios. It’s misleading. There’s no indication that the Thelios can’t be used for AI or neural nets, without concerns about violating Nvidia’s license, technically or otherwise.

As I said in my earlier post: if I were a commercial researcher and went to implement a larger-scale version of what I’d been working on, only to find I couldn’t use the same hardware at scale, I would be upset. This practice, in general, is a bit of a Trojan horse.

In my experience it’s pretty common for someone working in a commercial space to experiment on a nice beefy workstation like the Thelio for a proof of concept, then scale up. If you invest your time in an Nvidia solution and there is a chance you’ll want to scale up, you should be aware, should you not? Seems relevant to me, hence why I mentioned it.


Valid concerns about Nvidia, but that’s nothing new. This statement in the review, however:

technically it’s against NVidia’s license to use those graphics cards [2080s] for neural nets

is both generally false except for datacenters, and not applicable to the Thelio. Hence why I found it shocking and misleading.

Plus, if you were to scale up, would you be building a datacenter? Even having rackmount servers for your organization wouldn’t be running into the license issue. Or if you scale by deploying your application in a datacenter, would you necessarily be building your own servers there, or probably just renting what the datacenter provides? Quite an edge case, no?

At a minimum, the statement is lacking crucial context.

Splitting hairs. Nvidia doesn’t get a pass of omission here – it’s applicable to the Thelio user working in this space the same way someone in a kitchen might get burned if the stove maker has laid a user-unfriendly trap. Sure, not every user would care, but at least some would want to know.

The fact that there is an “except for…” merits having attention drawn to it.

What counts as “datacenter” use is not clearly defined by Nvidia, and the “non-commercial” quote is Nvidia clarifying how they would enforce it. It’s also not impossible that some user somewhere might use a Thelio in a datacenter, even though it’s not rack-mount. Nothing prevents that.

With the RTX generation there is a marked de-emphasis of GeForce cards for AI in Nvidia’s ML/DL marketing. It “seems” to me Nvidia is looking to segment the market such that the Titan RTX is for “serious” desktop users and datacenter GPUs are for everyone else, excluding GeForce entirely. But yes, technically we aren’t there yet: GeForce cards are only partially excluded by the license – the not-fully-defined “datacenter uses”…


∃ ≠ ∀

The existence of some restriction under some set of circumstances is not equivalent to a general restriction.

Quite.

Actually I have to retract/modify my previous argument:

this statement in the review:

technically it’s against NVidia’s license to use those graphics cards [2080s] for neural nets

is both generally false except for datacenters, and not applicable to the Thelio.

That statement is completely false, including for datacenters.

The restriction is on the deployment of Nvidia software for GeForce/Titan in datacenters. The use of neural nets is not restricted on those cards.

Bottom line: it is not against Nvidia’s license to use 2080s for neural nets.

Can you elaborate on how you came to that conclusion? Have you ever actually set up a GeForce card on Linux for machine learning? What software are you going to use on those cards to skirt the Nvidia license agreement?

I suppose that you could use the cards for general compute without agreeing to an nvidia license in the same way you could burn wood inside an electric oven for heat, and still manage to cook food…

So I think we’re just going to have to disagree here. I do not think you are correct at all, in reality, but time will tell.

Edit: I updated the OP where I split this threadjack with some more references and information of why I continue to believe the license is something to generally be aware of if you work in this space.


I absolutely think you have to be aware of licensing when it comes to Nvidia. This is the same company that tried to pull GPP, which undoubtedly violated laws both in the US and abroad, so you cannot underestimate their capacity for shenanigans. And they don’t even have to change the wording of the license… they can just change the definitions of the words in it to use as they see fit.

By noting the lack of any such legal restriction? We are talking about Nvidia’s license, as prompted by the quoted statement in the System76 Thelio review. As such, I also fail to see how this is considered a threadjack.

If you want to talk about practical restrictions or limitations, that’s another topic entirely. To play devil’s advocate, it could be argued that Nvidia’s solutions are the least restrictive on the actual use and development of AI/ML/NN. They provide the most expedient way available to date to do those tasks. For a non-ideological practitioner who wants to get stuff done, Nvidia is currently the way to go.

As great as it is to see progress with ROCm, it is not yet up to par for performance, ease of installation, support of hardware, availability of libraries, etc.

Again, this is about the current legal situation. Concerns about closed vs. open source, GPP, corporate strategies, vendor lock-in, etc. are all beside the point.

My point is simply that the statement in the review turns out to be factually false. It boggles my mind how this is even in contention.

The GPPstar was destroyed by the rebels, but now the Npire strikes back!


Absolutely brilliant.


Okay, in very basic terms.

To actually use these cards (GeForce 2080) for ML, you would install CUDA, and possibly cuDNN, on Linux, which currently requires the Nvidia proprietary driver. The most current Nvidia drivers have the datacenter restriction in their license. The license does not define what a datacenter is.
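A minimal sketch of that dependency (assuming a Linux host): CUDA user-space code talks to the GPU through libcuda.so, which ships only with the proprietary driver, so its absence means CUDA – and any CUDA-backed ML framework – cannot run:

```python
# Minimal sketch (assumes Linux): check whether the proprietary CUDA
# driver library is present. libcuda.so is installed by Nvidia's
# proprietary driver; nouveau does not provide it, so without it
# CUDA-based ML stacks cannot run.
import ctypes.util


def has_proprietary_cuda_driver() -> bool:
    """Return True if the CUDA driver library is findable on this system."""
    return ctypes.util.find_library("cuda") is not None


if __name__ == "__main__":
    print("libcuda found:", has_proprietary_cuda_driver())
```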

Do you dispute any of those specific statements, then? Those are the facts. Is there another way to do AI on these cards that I’m not aware of? e.g. without the nvidia proprietary driver?

I’ve updated the sources at the top with more references.

I am not the only one who has a problem with how ‘datacenter’ is (not) defined, and the quote/clarification from Nvidia in the CNBC article would seem to make it worse, not better, for commercial uses.

From my own experience (admittedly anecdotal), from experiences other industry contacts have shared with me, from what I have read in books like Life After Google, and from what is plainly written in the license: most people I know who would spend $8,000 on a workstation would be working in a commercial context, and they simply would not start a new project on GeForce cards because of the licensing ambiguities.

Is there some room for interpretation in the license? Sure, but that’s part of the game Nvidia is playing here. Is Lambda Labs still selling servers with GeForce cards? You bet.

I have a separate video coming up on machine learning, and in future videos I will try to use more specific and pedantic language when I mention that the Nvidia driver license currently restricts datacenter use – and possibly (depending on interpretation) commercial use or use at scale of GeForce cards – and that if you want to use the cards for CUDA, you’re SoL without the proprietary driver.


Looking forward to it. An important topic. But how does that justify stating that NN use on 2080s violates Nvidia’s license? It’s simply not true.

The word “commercial” does not appear even once in the EULA. Speculations do not belong in a factual reading and discussion of a legal contract.

So you’re saying I can in fact use a 2080 in a datacenter context for neural nets with Nvidia’s drivers? Gee, a lot of websites, CNBC, and officials from Nvidia “clarifying” the license terms got that one wrong, I guess…

I think he is taking the “if they don’t lock you out from doing it outright, then it’s fine” approach, which in theory, for a piece of hardware, should be true. But this is Nvidia, who locks all the cool toys behind a massive paywall.


Cherry-picking and then generalizing.

If they made the same statement, then yes. Must have slipped my eye.

Just “playing” the read-what-the-contract-actually-says, don’t-jump-to-clickbaity-false-conclusions game. Admittedly, not many players on those game servers.

The addition of the BSD forum section got me thinking… maybe a Legal section is due next? :face_with_monocle: Brought to you by Nvidia, Apple, …

Nvidia’s license vs. PR sounds like

  • we prohibit datacenter use
  • we might not prosecute you for it if you’re a non-profit

The more news I see out of Nvidia, the more I dislike their management and wish they had lost their lawsuit(s) with SGI.