
Hi all.

Although I really don't want to, I need to upgrade the graphics card on my main desktop. I currently have an AMD RX 6600, but some of the applications I want to work with, notably in AI and content creation, either don't work well on AMD cards or don't work at all.

For decades I've stuck to AMD cards -- partly because of their better track record with Linux drivers, but also because of my affinity for ATI in Markham. Alas, though, I must get something from Team Green now. I'm leaning towards an RTX 3060 12GB; that should be enough for me.

Here is my question. Even having picked that one chipset, there are a dozen variations that I see between Canada Computers and Amazon, varying in price by more than $150. The only differences I can tell are the number of fans and the output connectors (some have 2 HDMI and 2 DisplayPort, some 1 and 3). Are there any compelling reasons other than price that would drive a purchase? I don't care how many fans so long as the GPU is sufficiently cooled (I have no interest in overclocking).

Also, any GTA store recommendations besides CC are appreciated. Thanks!

-- Evan Leibovitch, Toronto Canada
@evanleibovitch / @el56

On 3/29/24 3:36 PM, Evan Leibovitch via talk wrote:
If I'd seen this a week earlier I would have proposed a trade -- I just took out an MSI GeForce RTX 2060 to replace it with an AMD RX 6600.

I can't help with your main question, unfortunately, but I can tell you why I chucked my Nvidia card: there seems to be a nasty interaction bug between the non-proprietary nouveau drivers and the current iteration of Mesa, which leads to all manner of complications -- mostly never coming out of suspend, but sometimes not even booting the console. (In each case, using ssh to get into the computer found everything else running perfectly.) This all happened under current Arch Linux.

Most likely you aren't interested in the nouveau drivers. I don't know how extensive the interaction bug with Mesa is, but you might check that out before you take the leap. I have never used the Nvidia proprietary drivers, so I have no views about whether they have problems.

-- Peter King  peter.king@utoronto.ca
Department of Philosophy
170 St. George Street #521
The University of Toronto
(416)-946-3170 ofc
Toronto, ON M5R 2M8 CANADA
http://individual.utoronto.ca/pking/

=========================================================================
GPG keyID 0x7587EC42 (2B14 A355 46BC 2A16 D0BC 36F5 1FE6 D32A 7587 EC42)
gpg --keyserver pgp.mit.edu --recv-keys 7587EC42

Thanks! Too bad about the trade....

-- Evan Leibovitch, Toronto Canada
@evanleibovitch / @el56

GPU-for-AI advice from the cheap seats (I've never done this).

AI stacks seem very fragile. Any time you make a substitution the whole thing might misbehave. You can either copy EXACTLY a working configuration from someone else or you can sign up for adventure. Most likely you will try some of both.

Nvidia cards come in at least three flavours: those for AI, those for pros (running things like AutoCAD), and those for gamers. They try hard to segment the market so they can charge as much as possible in each segment. Serious AI cards cost up to US$20,000, maybe more.

How do they keep gamer cards from being great AI cards? I don't know all the techniques, but they include:
- crippling the speed of 64-bit floating point
- limiting the amount of on-board memory (pretty important)
- (I think) limiting the ability to partition the card into virtual cards for multiple processes to run in parallel safely
- limiting how GPUs can scale up (interconnect)

This is on top of the crippling they did to hobble cryptocurrency mining. I don't know what that amounted to technically.

Advice for buying a card:
- How much do you want to spend? Not enough!
- You probably want as much on-board RAM as possible, because swapping stuff between the computer's RAM and the GPU's RAM is apparently a serious bottleneck. I don't have a cost/benefit curve, but I'm pretty sure it is like real RAM: performance falls off a cliff when you don't have enough.
- Fans probably matter because (1) good cooling should be quieter than bad cooling (not in data-centre cards: nobody cares about noise there), and (2) without good cooling, throttling will happen. Read good review sites to get opinions about card cooling issues.
- Guess: the version of PCIe used might matter. We're in a period of transition. (Remember: the motherboard and the GPU both have to support the PCIe version you target.)
- If I were at all adventurous, I'd look at Intel graphics cards. They are probably a bargain on a per-teraflop basis. Intel tries hard to push AI on Linux; they are pretty good open-source players and seem more competent than AMD.
- My impression is that AMD for AI on Linux smells a bit like a lost cause: they care about their commercial GPU-compute customers but not us. ROCm only really seems to work on their industrial GPU cards.
- The latest gen of Nvidia is apparently not much of a step up from the previous one. Check the performance, not just the model number.

Fun fact: Mac folks never cease to brag about how great Unified Memory is on the M1 etc. That seems silly when you realize that integrated GPUs have always had this. But there is apparently a case where the performance benefit is great:
- tonnes of memory on the Mac (very expensive), more than you can get on an Nvidia gamer GPU
- working with a model that doesn't fit in the Nvidia GPU but does fit in the Mac's RAM

The Mac's RAM has very fast access from both the GPU and the CPU. Much faster than the PCIe bus; in fact, much faster than an x86 can access bulk RAM.

Phoronix.com seems to focus on Linux and GPUs. https://ca.pcpartpicker.com/ might be helpful for finding deals.

Where to buy? Check out:
- Amazon (easy returns; sometimes good prices)
- Canada Computers (sometimes bad customer service, but I've not really encountered this)
- Best Buy (sometimes)
- NewEgg (sometimes)
- Memory Express
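[Editor's note: the "as much on-board RAM as possible" advice can be made concrete with back-of-the-envelope arithmetic. The function and the 20% overhead fudge factor below are my own rough assumptions, not measured figures.]

```python
def vram_needed_gib(params_billion, bytes_per_param=2.0, overhead=1.2):
    """Very rough VRAM footprint to hold a model's weights for inference:
    parameter count x bytes per parameter (fp16 = 2 bytes), plus ~20%
    headroom for activations and framework overhead (a guessed factor).
    """
    return params_billion * 1e9 * bytes_per_param * overhead / 2**30

# A 7B-parameter model at fp16 blows past a 12 GB card...
print(round(vram_needed_gib(7), 1))        # ~15.6 GiB
# ...but the same model quantized to 4 bits (0.5 bytes/param) fits easily.
print(round(vram_needed_gib(7, 0.5), 1))   # ~3.9 GiB
```

This is why the RTX 3060's 12 GB is attractive at its price point: it determines which models fit at all, which matters far more than a few percent of compute speed.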

On Sat, Mar 30, 2024 at 12:13:35PM -0400, D. Hugh Redelmeier via talk wrote:
Fun fact: Mac folks never cease to brag about how great Unified Memory is on the M1 etc. That seems silly when you realize that integrated GPUs have always had this. But there is apparently a case where the performance benefit is great:
Unified memory is a great idea when designed with the bandwidth it requires. The integrated video on PCs never was: it had no more bandwidth than the CPU without the integrated GPU had, so it was always costing you bandwidth your CPU needed.

What the Mac has done, and what SGI did, as well as some of the Xbox models, is design the system with lots of memory bandwidth -- more than the CPU itself could ever take advantage of. That means anything you put in RAM is also usable directly by the GPU, and neither is getting starved for bandwidth.

So unified memory with high bandwidth is good. Unified memory with low bandwidth is bad.

I remember a laptop my wife had with Intel integrated video where doubling the RAM made the machine way, way faster, because it allowed the memory controller to switch from single- to dual-channel access. That doubled the bandwidth, and suddenly the video wasn't starving the CPU as much anymore. It made way more difference than a bit more RAM normally should have.

-- Len Sorensen
participants (5)
- D. Hugh Redelmeier
- Evan Leibovitch
- Evan Leibovitch
- Lennart Sorensen
- Peter King