Hardware Complex

April 23, 2009

NVIDIA’s GT300 – Parallel Pwnage

Filed under: Hardware, Tech News, Video Cards — paradigmshift @ 2:51 pm

Apparently, NVIDIA’s next core, the GT300, set to introduce DirectX 11 support into NVIDIA’s GPU lineup, will feature no fewer than 512 (!) processing cores arranged into sixteen 32-core clusters. In comparison, NVIDIA’s current single-GPU champion, the GTX 280/285, based on the GT200(b) core, features 240 stream processors in ten 24-core clusters. To put this into perspective: on the 65nm process, the GTX 280’s core was a gargantuan 1.4 billion transistors spread over a 576 mm² die. Here it is compared to a dual-core Intel Penryn (what the current crop of Core 2 Duos are based on):

It’s like comparing any of us to a professional porn star. Image courtesy of AnandTech.

The 55nm GT200b revision found in the GTX 285 shrunk this down to 470 mm². The GT300 will reportedly use a 40nm process for further size reduction and power savings, while allowing for higher clock speeds. However, it’s not just the number of processors that’s different: the GT200 and every NVIDIA GPU using the unified shader architecture since the G80 (e.g. the 8800 GTX) have used SIMD (Single-Instruction, Multiple-Data) units, while the processors in the GT300 will reportedly be MIMD (Multiple-Instruction, Multiple-Data).
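
Back-of-envelope on those die sizes: area ideally scales with the square of the process ratio, so a perfect 65nm-to-55nm shrink of the 576 mm² GT200 would land around 576 × (55/65)² ≈ 412 mm². It actually came out at 470 mm², because optical shrinks never hit the ideal. The same napkin math puts a 40nm shrink of that 470 mm² die at roughly 470 × (40/55)² ≈ 249 mm², before whatever GT300’s extra transistors add back. My arithmetic, obviously, not NVIDIA’s numbers.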

As for the SIMD-to-MIMD switch, what it might mean is this: while each stream multiprocessor in the previous architecture (each cluster is actually subdivided into these SMs, with their own caches) could only operate on a single instruction at a time, the stream multiprocessors in the GT300 will be much more versatile, able to work on different instructions from their caches asynchronously. Where the GT200 and its predecessors had granularity measured in clusters of stream processors, the GT300 will take it one step further and achieve granularity on a per-processor basis, so potentially every processor on the GT300 could be chewing on a different instruction if the situation called for it. Of course, I’m just pulling this out of my ass based on a few lines from an online rumor report.
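
To make the SIMD limitation concrete, here’s a toy CUDA kernel of my own (an illustration, not anything from the rumor report) where even and odd threads take different branches. On today’s SIMD hardware, a warp that diverges like this gets serialized: the hardware runs one path with half the lanes masked off, then the other. Per-processor MIMD could, in principle, run both at once:

```
#include <cuda_runtime.h>

// Toy example: even threads take one branch, odd threads the other.
// On SIMD hardware (G80/GT200), a warp hitting this split runs the
// sinf path and the sqrtf path back-to-back, masking off the inactive
// lanes each time. MIMD processors wouldn't have to serialize here.
__global__ void divergent(float *out, int n)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= n) return;

    if (tid % 2 == 0)
        out[tid] = sinf((float)tid);   // even lanes
    else
        out[tid] = sqrtf((float)tid);  // odd lanes
}

int main()
{
    const int n = 1024;
    float *d_out;
    cudaMalloc(&d_out, n * sizeof(float));
    divergent<<<n / 256, 256>>>(d_out, n);  // 4 blocks of 256 threads
    cudaThreadSynchronize();                // wait for the GPU to finish
    cudaFree(d_out);
    return 0;
}
```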

What this means for your gaming is even better load balancing between the different computations needed to render bleeding-edge graphics, such as pixel and vertex shading. With the advent of on-GPU physics processing through NVIDIA’s PhysX (bouncing boob physics), this becomes even more important: the GPU has to divide its attention between rendering graphics and calculating physics, for all the jiggling and flopping around you could desire at the fastest framerates.

What this might mean for general-purpose computing on GPUs and CUDA or OpenCL applications is finer control over how the GPU issues and executes threads. Currently, calling a CUDA function to run on the GPU means issuing it as a grid of thread blocks, which the GPU then schedules itself, giving the programmer only abstract thread IDs and block-level synchronization to work with. Since all the clusters across the GT300 will be identical, the programmer might now be able to work within a cluster, perhaps ordering different processors in the cluster to execute different functions (maybe through something like a processor ID), while the GPU still maintains control over which cluster that batch of instructions is issued to.
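
For reference, here’s what that abstraction looks like in today’s CUDA, using real, current API (the per-processor control above is pure speculation on my part). You launch a grid of blocks, the GPU picks which multiprocessor runs each block, and your only handles are abstract IDs and a barrier that spans a single block:

```
#include <cuda_runtime.h>

// Reverse each 256-element chunk of an array in place. The grid/block
// split is the only control the programmer has: blockIdx and threadIdx
// are abstract IDs, and __syncthreads() is a barrier across one block
// only. There is no "run this function on that processor".
__global__ void reverse_in_block(float *data)
{
    __shared__ float tile[256];
    int tid  = threadIdx.x;
    int base = blockIdx.x * blockDim.x;

    tile[tid] = data[base + tid];   // stage the block in shared memory
    __syncthreads();                // block-level barrier, nothing wider
    data[base + tid] = tile[blockDim.x - 1 - tid];
}

int main()
{
    const int n = 1 << 20;
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    reverse_in_block<<<n / 256, 256>>>(d_data);  // 4096 blocks of 256 threads
    cudaThreadSynchronize();
    cudaFree(d_data);
    return 0;
}
```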

Since this is all rumor, though, who knows what final product NVIDIA has prepared; it’s also rumored to appear in Q4 2009. This particular tidbit about MIMD processors on the GT300 comes from TechConnect Magazine.

April 7, 2009

Completely Necessary Overkill

Filed under: Hardware, Video Cards — paradigmshift @ 5:06 pm

The man building this must be going through the tech-geek equivalent of a mid-life crisis. He has linked 17 NVIDIA GeForce GTX 295 graphics cards in a single server rack, with all the hardware communicating with each other. The total of 23 he cites in the video comes from adding the two in his home computer, plus the four he is currently waiting on power supplies for. Watch the video and feel your e-peen shrink and retreat back into your geek gape-hole.

Unfortunately, seeing as NVIDIA’s display drivers only support up to Quad SLI (in other words, just two of these beasts) for rendering a game, this setup is really only useful for non-gaming applications such as Folding@Home and various CUDA implementations. So it still won’t run Crysis maxed out at triple-head 2560x3x1600 resolutions at acceptable framerates. Still, the sheer number of transistors here outclasses the computing power any single person has in their possession, but at $500 for a single GTX 295, it’s also out of the price range of most of us peons. Makes a great space heater, at the very least.
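
Incidentally, a rig like this doesn’t look like one giant GPU to software. CUDA just sees a pile of independent devices (each GTX 295 is two GPUs, so 17 cards would show up as 34 devices), and a Folding@Home-style app enumerates them and feeds each one its own work unit. A minimal sketch using the standard runtime API:

```
#include <cstdio>
#include <cuda_runtime.h>

// Enumerate every CUDA device in the machine. A multi-GPU app would
// then bind one host thread per device and launch kernels on each.
int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("%d CUDA device(s) found\n", count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("device %d: %s, %d multiprocessors\n",
               i, prop.name, prop.multiProcessorCount);
        // cudaSetDevice(i) here would bind this host thread to device i
        // before launching any kernels on it.
    }
    return 0;
}
```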

Mercifully for our geek pride, though, it turns out this is a hoax, according to the EVGA forums, and the rack is actually a Folding@Home cluster. Looks like the creator of the hoax has some problems he is trying to compensate for, although he goes about it in a rather unorthodox manner, unlike most of us, who just brag about non-existent hot-rod cars and the various fictional women we’ve conquered.

Video link courtesy of Engadget

April 2, 2009

4890 vs 275 (Obviously the Bigger Number is Better, Duh)

Filed under: Hardware, Video Cards — paradigmshift @ 5:06 pm

VS
Whoever wins, we lose a shitload of money buying new graphics cards

Today sees the hard launch of the ATI Radeon 4890 and the paper launch of the NVIDIA GTX 275, filling in the mid-range segment of the high-end GPU market, although calling these cards mid-range is like calling an Audi RS6 a “decent” performer compared to an Audi R8, or calling D-cups “moderately sized” because that girl can’t use her rack as a flotation device, unlike the other girl with E-cups. These cards will still own the shit out of most games at the resolutions most people run, even Crysis.

The ATI card is a re-engineering of the core found in the 4870 to allow for higher clocks, while the GTX 275 is just NVIDIA’s flagship single-card dual-GPU solution, the GTX 295, cut in half. Both will start at around $250, although manufacturers’ mail-in rebates will make you think you are only paying $220; most of the time, though, the bastards will either claim they never received your rebate request in the mail, “lose” it because they are currently “transitioning to a new rebate database system,” or, more likely, just use it as toilet paper.

Rebate Customer Service: Yes, I have your rebate request in my hand right now. Don’t worry, we were just about to “process” it. *flushes*

Anyways, now that the reviews are in, you can expect all the fanboys to come out of the woodwork and cherry-pick benchmark results to show how much better the product from their brand of choice is. If NVIDIA or ATI really wanted blowjobs, I’m sure they could do much better than nerds who get involved in internet dick-waving matches. The reviews will talk about the 4890’s high overclocking headroom, even though most people still think overclocking means setting your system time two minutes ahead, and others will talk about how NVIDIA supports PhysX, even though like one game uses it (Mirror’s Edge), and the chick in that game doesn’t even have the breast size to use the physics engine to its fullest extent.


Who cares about glass???

Physics Engines should be for boobs

Except these boobs

Anyways, here’s a link to a fuckton of reviews of both these cards, though even I couldn’t be bothered to sift through them and look at every last benchmark. Especially now, with AMD/ATI’s supposed DX11 part coming at the end of summer and NVIDIA’s part following soon after (probably to coincide with Windows 7’s release, and the inevitable DX11 or Windows 7 exclusive game that will get hacked within days to run on Windows XP). If you already have a 4870 or GTX 260 or something comparable, this shit is worthless, unless you leave your computer on 24/7 downloading porn or something; then the power savings of moving from the 4870 to the 4890 might be worth it (the 275 actually sucks up more power than the 260).

Very comprehensive list of reviews, courtesy of DailyTech

If you want to buy the 4890, here’s a good listing of products from Newegg.

You’ll have to wait for the GTX 275, because it’s a paper launch right now; the actual hard launch is around April 14th.
