Hacker News

Anyone using Titan X cards? How does it compare to a cloud solution like an EC2 GPU instance with around 1,000 CUDA cores per instance?


Been looking at Titan X the last few days. Here is one article I came across on the topic:

https://timdettmers.wordpress.com/2014/08/14/which-gpu-for-d...

tl;dr: GTX Titan X = 0.35 GTX 680 = 0.35 AWS GPU instance (g2.2 and g2.8) = 0.33 GTX 960

GTX Titan X = 0.66 GTX 980 = 0.6 GTX 970 = 0.5 GTX Titan = 0.40 GTX 580

Also: I was under the impression that single precision is fine for most deep learning applications, and that double precision doesn't even have good support in most libraries, but I guess it depends on the use case.
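For anyone curious why single precision is the default, here's a quick numpy sketch (the 4096x4096 layer size is just an arbitrary example, not from the article) showing the memory halving you get from float32:

```python
import numpy as np

# Hypothetical weight matrix for one large fully-connected layer.
weights64 = np.random.randn(4096, 4096)   # numpy defaults to float64
weights32 = weights64.astype(np.float32)  # single precision

print(weights64.nbytes)  # 134217728 bytes (128 MiB)
print(weights32.nbytes)  # 67108864 bytes  (64 MiB)
```

Halving every tensor also halves the bandwidth needed to move it, which is usually the bottleneck anyway.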


FLOP-wise that makes sense. But for deep learning, the big deal is the 12 GB of GPU-local memory, which has enormous bandwidth (and can store more of your dataset / parameters at once). The largest concern with GPU processing is keeping the GPU adequately fed with data, and avoiding round-trips of blobs of data through the CPU helps a lot.
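A rough back-of-envelope on why those round-trips hurt (bandwidth figures are my approximations from public specs, and the batch shape is an assumed ImageNet-style example):

```python
# Approximate peak bandwidths:
TITAN_X_MEM_BW = 336e9    # ~336 GB/s GDDR5 on the Titan X
PCIE3_X16_BW = 15.75e9    # ~16 GB/s theoretical PCIe 3.0 x16

# A 256-image float32 batch of 3x224x224 inputs.
batch_bytes = 256 * 3 * 224 * 224 * 4   # ~154 MB

pcie_ms = batch_bytes / PCIE3_X16_BW * 1e3
onchip_ms = batch_bytes / TITAN_X_MEM_BW * 1e3
print(f"over PCIe:       {pcie_ms:.2f} ms")   # ~9.8 ms
print(f"from GPU memory: {onchip_ms:.2f} ms") # ~0.46 ms
```

Roughly a 20x gap per batch, which is why data that stays resident on the card is such a win.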


Oh, I agree, and the article talks plenty about that topic as well. For me the temptation with the Titan X is primarily the "laziness" of a) not having to manually parallelize across AWS instances and b) not needing to squeeze models into 4-6 GB, rather than a speedup factor of 2-3.
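To make the "squeezing" point concrete, a sketch of the parameter-memory arithmetic (AlexNet-scale parameter count assumed as an example; activations and optimizer state are rough multipliers, not exact figures):

```python
# An AlexNet-scale network has on the order of 60M parameters.
params = 60_000_000
bytes_per_param = 4  # float32

param_mb = params * bytes_per_param / 1e6
print(param_mb)  # 240.0 MB for the weights alone

# Training also needs gradients and (with momentum) optimizer state,
# so roughly 3x that, plus per-batch activations on top. That's how a
# "small" model ends up pressed against a 4 GB card while fitting
# comfortably in 12 GB.
print(param_mb * 3)  # 720.0 MB before activations
```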





