Batman: Arkham City with hybrid PhysX on AMD

Batman: Arkham City is, just like Mafia II, an Nvidia-sponsored game with exclusive CUDA-based PhysX support:

AMD users don’t get the full particle, debris, fog and smoke effects. I bought this game, but the Nvidia-owners-only exclusivity pissed me off so much that I decided to get an additional Nvidia card for dedicated CUDA PhysX calculations, after watching a YouTube video of someone who did it:

I paid 30 Euro for a GT430/PCI-E on eBay, as that card with its 96 processor cores seemed sufficient for the game’s normal PhysX setting. I built it in next to the HD 6870, but had to switch my ASRock AOD790GX/128M motherboard from PCI-E 16/1 mode to PCI-E 8/8 Crossfire mode before the card was detected. The GT430 barely fit, since the HD 6870 is a 2-slot card, but it worked. After that I installed the latest Nvidia drivers (285.62, which come with PhysX 0621) and then applied the patch that makes this all possible…

http://www.ngohq.com/graphic-cards/17706-hybrid-physx-mod-v1-03-v1-05ff.html

… it made possible what Nvidia says is impossible: you can use an AMD card for 3D acceleration and an additional Nvidia card for the PhysX calculations. Nvidia uses PhysX as a unique selling point for their GeForce cards and pushes it into the market by paying developers to implement exclusive support. That’s not fair, and the poor adoption of the Bullet Physics Library shows how serious Nvidia is about it: they push this proprietary “standard” by any means necessary.

The setup with the GT430 in hybrid mode alongside the HD 6870 is maybe not the fastest, and my system is mid-range (Phenom II @ 3.2 GHz, HD 6870 1GB, 4GB RAM), but it’s enough to average 30 FPS in the in-game benchmark and play the game smoothly on DX11 with normal tessellation and normal PhysX settings (everything else set to very high). Before it was ~50 FPS without the PhysX effects, but in this game the performance hit is worth it IMHO. To run high PhysX settings you would need something like a GTX 460 BTW, and for more FPS in general something like a 6970 with 2GB.

How AMD copes with Nvidia “standards”

AMD has always had a difficult time when it comes to standards. Just recently they announced they are going to remove 3DNow! from future CPUs. AMD now recommends going for SSE (which Intel introduced, BTW)… so here it seems they lost the standards fight. For sure it’s not easy to establish standards against giants like Intel or Nvidia, but AMD found a way to cope with this! … and a smart one too.

As I described in my last post, Nvidia is trying to push PhysX & CUDA (their proprietary hardware acceleration technology for physics in games) with all the power they still have in the market. They team up with game software companies, support their game development and probably even pay them not to implement support for the technology of “the other guys” (that would be ATI – pardon, AMD… they are one company now and are phasing out the ATI brand).

More or less the same happened with 3D glasses technology… Nvidia pushed the market to support their 3D Vision standard, and some hardware manufacturers even followed them. It’s basically the only 3D shutter glasses set available right now… but here comes the trick: you can use it only with Nvidia graphics cards. Surprise 😉 You could probably hack them to work with AMD, but that’s a different story.

AMD could go ahead, invent their own standards and fight the standards war trying to establish them… but you need a lot of money for such an approach, and you are probably going to lose in the end (like with 3DNow!). Instead, AMD came up with a clever strategy: they try to win against Nvidia and their proprietary technology with open standards. AMD supports OpenCL (physics engine acceleration), OpenGL ES (web browser acceleration) and, as I just recently learned, Open Stereo 3D (for 3D shutter glasses). I just hope this works out for AMD!

http://www.tomshardware.com/news/3dnow-simd-extensions-phenom-sse,11128.html

http://www.electronista.com/articles/10/03/15/amd.announces.partners.on.open.stereo.3d.standard/

Mafia II and the Nvidia deal

Mafia II is an awesome game, there is no doubt about that, but it supports only Nvidia hardware for the physics. That sucks big time! OpenCL is a hardware acceleration standard supported by both Nvidia and ATI. Awesome… but 2K Games didn’t implement it. Instead they went for Nvidia-only PhysX support. I’m not sure how much Nvidia paid them, but that’s really stupid. Mafia II even comes bundled with Nvidia 4xx cards, so it’s clear there is some kind of deal: http://news.bigdownload.com/2010/08/28/get-mafia-ii-free-with-new-nvidia-graphics-card-purchase/

1. Such “It’s meant to be played with Nvidia” logos didn’t make me more loyal to Nvidia when I had Nvidia graphics cards.

2. Now that I’ve switched to ATI (and won’t ever go back to Nvidia BTW), these logos that come up when starting the game really piss me off.

3. OpenGL/OpenCL and DirectX/DirectCompute are standards that are supposed to make life easier for gamers, game publishers and hardware manufacturers. Nvidia-only PhysX is the opposite.

On the right you see the experience for ATI owners with PhysX off (running it on the CPU results in ~10 FPS, depending on the machine).

So in general Mafia II is really worth playing even for ATI users, but game publishers should stop putting those “It’s meant to be played with Nvidia” logos in their games, because they might just annoy too many people who have “the other” graphics card. Review @ GameSpot:

More about the “It’s meant to be played” program: http://www.nzone.com/object/nzone_twimtbp_gameslist.html

So this is OpenCL … “the way to go” FTW:
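Here is a minimal sketch of what vendor-neutral GPU physics could look like: an OpenCL C kernel that integrates particle motion on the GPU – the kind of debris/smoke effect that PhysX reserves for Nvidia owners. The kernel and all names in it (update_particles, dt and so on) are made up for illustration, not taken from any actual game or engine; the point is that this runs on any vendor’s OpenCL device:

```c
// Illustrative OpenCL C kernel: simple Euler integration of particles
// on the GPU. One work-item updates one particle. Just a sketch of
// vendor-neutral GPU physics, not code from any real game.
__kernel void update_particles(__global float4 *pos,
                               __global float4 *vel,
                               const float dt)
{
    size_t i = get_global_id(0);                   // particle index

    const float4 gravity = (float4)(0.0f, -9.81f, 0.0f, 0.0f);

    vel[i] += gravity * dt;                        // accelerate
    pos[i] += vel[i] * dt;                         // move

    if (pos[i].y < 0.0f) {                         // hit the ground?
        pos[i].y = 0.0f;
        vel[i].y *= -0.5f;                         // damped bounce
    }
}
```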

Other comments on this topic: http://ve3d.ign.com/articles/news/53989/Mafia-II-PC-PhysX-Trailer

DirectCompute

ATI Stream & Nvidia CUDA

[Chart: ATI Stream video transcoding time and platform cost comparison – lower is better]

There are many great things that can make your PC faster… but wait, there is already a second powerful processor in your machine that you might not be aware of: the GPU on your graphics card. ATI & Nvidia are going the right way here, because there are so many things the GPU can do. I’ve been encoding quite a lot of videos lately – just check the chart above. Lower is better, and notice the price of the AMD platform. This is what I call value. And physics or artificial intelligence in games – why do they have to occupy one of the cores of a multi-core CPU? The GPU can provide the computing power for that.
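You can actually see that “second processor” for yourself. Here is a minimal host-side sketch in plain C using only standard OpenCL 1.x API calls (clGetPlatformIDs, clGetDeviceIDs, clGetDeviceInfo) that lists the compute devices in your machine next to the CPU:

```c
/* Minimal sketch: enumerate OpenCL compute devices to show the GPU
 * sitting next to the CPU as a second processor.
 * Compile e.g. with: gcc list_devices.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[4];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(4, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; p++) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                       8, devices, &num_devices);

        for (cl_uint d = 0; d < num_devices; d++) {
            char name[256];
            cl_device_type type;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(name), name, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_TYPE,
                            sizeof(type), &type, NULL);
            printf("%s: %s\n",
                   (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "CPU", name);
        }
    }
    return 0;
}
```

On a machine like mine this would report the Radeon next to the Phenom II – and any OpenCL-capable program can dispatch work to either one.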

More Information:

http://www.trustedreviews.com/graphics/news/2008/11/13/AMD-Details-ATI-Stream/p1

http://ati.amd.com/technology/streamcomputing/consumer-entertainment.html

http://www.nvidia.com/object/cuda_home.html

Edit:

Q: Why are reviewers seeing so little GPU processing during transcoding?
A: The ATI Video Converter uses the GPU for only a portion of the video encoding. Specifically, the GPU currently offloads only the motion estimation portion of the encoding pipeline which is the most compute intensive part of the encoder. The GPU is particularly well-suited for this task. Given that a fixed workload is being offloaded to the GPU the load on the GPU may be relatively low or high based on the specific model of GPU.
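To illustrate why motion estimation is the heavyweight stage, below is a simplified block-matching sketch in plain C. This is not AMD’s actual encoder code – the block size, search range and function names are invented for the example. For every 16×16 macroblock the encoder scans a search window in the previous frame and computes a sum of absolute differences (SAD) per candidate position, which adds up to millions of independent pixel comparisons per frame:

```c
/* Simplified block-matching motion estimation sketch in plain C
 * (illustrative only, not AMD's encoder). The caller must ensure the
 * search window stays inside the reference frame. */
#include <stdlib.h>

#define BLOCK 16   /* macroblock size in pixels */
#define RANGE  8   /* search radius in pixels   */

/* Sum of absolute differences between two 16x16 blocks. */
static int sad_16x16(const unsigned char *cur, const unsigned char *ref,
                     int stride)
{
    int sad = 0;
    for (int y = 0; y < BLOCK; y++)
        for (int x = 0; x < BLOCK; x++)
            sad += abs(cur[y * stride + x] - ref[y * stride + x]);
    return sad;
}

/* Full search around block position (bx, by); returns the motion
 * vector of the best-matching block in the previous frame. */
static void motion_search(const unsigned char *cur, const unsigned char *ref,
                          int stride, int bx, int by, int *mvx, int *mvy)
{
    int best = sad_16x16(cur + by * stride + bx,
                         ref + by * stride + bx, stride);
    *mvx = 0; *mvy = 0;

    for (int dy = -RANGE; dy <= RANGE; dy++) {
        for (int dx = -RANGE; dx <= RANGE; dx++) {
            int cost = sad_16x16(cur + by * stride + bx,
                                 ref + (by + dy) * stride + (bx + dx),
                                 stride);
            if (cost < best) { best = cost; *mvx = dx; *mvy = dy; }
        }
    }
}
```

Since every macroblock’s search is independent of the others, this is exactly the kind of massively data-parallel work a GPU handles well – which would explain why the motion estimation stage is the part being offloaded.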

It seems that ATI’s Stream has been put in a somewhat bad light because it uses the GPU for only a small part of the transcoding work, but I guess those problems will be solved soon:

http://en.expreview.com/2008/12/23/amd-responds-to-avivo-video-converter-feedbacks.html#more-1706