NVidia GTX 1080 announcement

Howdy, I only caught the last 30min of the live stream. Made me wonder... Is there a way to use its 2560 GPU cores as a substitute for the usual 4 CPU cores? -- William

| From: William Park <opengeometry@yahoo.ca>
| I only caught the last 30min of the live stream. Made me wonder... Is
| there a way to use its 2560 GPU cores as a substitute for the usual 4
| CPU cores?

Yes, kind of. High-end GPUs have evolved towards providing computing power for non-video problems, but they are kind of horrible and odd to program. That's what CUDA and OpenCL are all about: somewhat high-level languages that can be used to program these monsters. These GPUs have massive parallelism, awkward memory resources, few separate instruction streams, and idiosyncratic instruction sets.

Right now the hottest application seems to be deep neural nets. Neural nets tend to require lots of floating-point array work, something well suited to GPUs.

There is a meetup group in town that is about GPU computing:
<http://www.meetup.com/GPU-Programming-in-Toronto/>

I haven't read it, but this article might be useful:
<https://en.wikipedia.org/wiki/General-purpose_computing_on_graphics_processing_units>

PS: AMD is supposed to be close to announcing its new offerings. NVidia's and AMD's new generation is the first real progress in a few years. Previous announcements have been only incremental changes, since both NVidia and AMD were held up by process-shrink difficulties at the silicon fabs. I have a soft spot for AMD since it tries hard to be open-source and it is local.
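For a rough idea of what programming "these monsters" looks like, here is a minimal, hypothetical CUDA sketch (not anything from the announcement, just an illustration of the thread-per-element model, compiled with nvcc): a tiny kernel handles one array element, and the runtime fans it out across thousands of threads.

#include <cstdio>
#include <cuda_runtime.h>

// Each thread handles one array element; the GPU schedules thousands
// of these threads across its cores at once.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;               // ~1M floats (sizes are made up)
    size_t bytes = n * sizeof(float);

    float *x, *y;
    cudaMallocManaged(&x, bytes);        // unified memory keeps the sketch short
    cudaMallocManaged(&y, bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);         // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}

Every thread runs the same instruction stream on a different index, which is exactly the floating-point array work that neural nets need, and exactly what a web browser or compiler does not look like.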

On Fri, May 06, 2016 at 10:43:54PM -0400, William Park wrote:
> Howdy,
> I only caught the last 30min of the live stream. Made me wonder... Is
> there a way to use its 2560 GPU cores as a substitute for the usual 4
> CPU cores?
For general-purpose computing? No. For specialized stuff that can be run as thousands of parallel streams with lots of loops over massive amounts of data, especially floating-point data? Yes. So video compression and decompression, sure; 3D graphics, sure; folding@home, certainly; and various other tasks like that (the kind of per-pixel loop sketched below). But not your x86 code running your web browser or your compiler.

Reminds me of the days when Beowulf clusters were new and people would come on IRC and ask "How do I make a Beowulf cluster?", then be asked "You do realize it won't make your web browser go faster; it only works with software written to take advantage of it?", and they would go "Oh, never mind then".

-- Len Sorensen
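To make that concrete, here is a hypothetical per-pixel sketch in CUDA (the names and numbers are made up, just to show the shape of the work): one thread per pixel of a 1920x1080 frame, all doing the same independent arithmetic. Code with serial dependencies or heavy branching, like a browser or a compiler, has no mapping like this.

#include <cstdio>
#include <cuda_runtime.h>

// One thread per pixel: the kind of embarrassingly parallel work
// (graphics, video frames) that maps well onto a GPU.
__global__ void brighten(unsigned char *img, int width, int height, float gain)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height) {
        int idx = y * width + x;
        float v = img[idx] * gain;
        img[idx] = v > 255.0f ? 255 : (unsigned char)v;
    }
}

int main()
{
    const int width = 1920, height = 1080;
    unsigned char *img;
    cudaMallocManaged(&img, width * height);   // one byte per pixel
    for (int i = 0; i < width * height; ++i) img[i] = 100;

    dim3 threads(16, 16);                      // 256 threads per block, in a 2D tile
    dim3 blocks((width + 15) / 16, (height + 15) / 16);
    brighten<<<blocks, threads>>>(img, width, height, 1.5f);
    cudaDeviceSynchronize();

    printf("pixel[0] = %d\n", img[0]);         // expect 150
    cudaFree(img);
    return 0;
}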

On 05/06/2016 10:43 PM, William Park wrote:
> Howdy, I only caught the last 30min of the live stream. Made me
> wonder... Is there a way to use its 2560 GPU cores as a substitute for
> the usual 4 CPU cores?
http://www.gpudb.com/docs/5.1/

--
Daniel Villarreal
http://www.youcanlinux.org
youcanlinux@gmail.com
PGP key 2F6E 0DC3 85E2 5EC0 DA03 3F5B F251 8938 A83E 7B49
https://pgp.mit.edu/pks/lookup?op=get&search=0xF2518938A83E7B49
participants (4)
- D. Hugh Redelmeier
- Daniel Villarreal
- Lennart Sorensen
- William Park