
View Full Version : News: Nvidia Physx goes open-source



Christiaan van Beilen
03-12-2018, 16:31
Source: https://blogs.nvidia.com/blog/2018/12/03/physx-high-fidelity-open-source/

Gone are the days of exclusivity for PhysX, Nvidia's GPU hardware-accelerated physics simulation.

Of course AMD still needs to implement it and support it, but it is a great step.

Nvidia will also be releasing the PhysX 4.0 SDK on December 20th this year.


https://www.youtube.com/watch?v=K1rotbzekf0

Let us hope that all of this will aid Project CARS and other games in the near future. :)

Azure Flare
03-12-2018, 18:44
Because everyone is clamoring to use PhysX of course. :p

Christiaan van Beilen
03-12-2018, 19:33
Because everyone is clamoring to use PhysX of course. :p

I don't know. If we can use it to handle physics-related calculations more efficiently, maybe even AI responses, then I am all for it.

It was a long-standing argument that game devs could not implement PhysX, except for some graphics gimmicks, because AMD users would be at a disadvantage.
I am just glad that we can hopefully throw that argument out of the window soon.

Personally I love seeing realistic flowing water or cloth added to games. The fluttering clothes in Mafia II were cool at the time but quite demanding, just like RTX is now the demanding gimmick of this generation.
All these things make games look more lifelike, and that is awesome; if everyone in both the Nvidia and AMD camps can use it, that is even more awesome.

AMD will have to step up its game with graphics cards, though. I would love a 2080 Ti equivalent for a fraction of the price.

Sankyo
04-12-2018, 07:23
It was a long-standing argument that game devs could not implement PhysX, except for some graphics gimmicks, because AMD users would be at a disadvantage.

To be clear: using CPU-based PhysX (which is what pCARS does) was of course never exclusive; only using it on the GPU was.

Christiaan van Beilen
04-12-2018, 11:36
To be clear: using CPU-based PhysX (which is what pCARS does) was of course never exclusive; only using it on the GPU was.

Naturally I know this; I forgot to clarify it in the OP.
Also, the reason PhysX was heavy in Mafia II is that it is forcibly CPU-bound there, which as far as I know can only be worked around by dedicating a second GPU to the PhysX calculations.

The same is the case for pCARS. It is forcibly bound to the CPU if I remember correctly, and there isn't a command line to force the PhysX data to be sent elsewhere.
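For what it's worth, that choice lives in the engine's scene setup rather than in any user-facing switch. Here's a rough sketch of how the CPU and GPU paths are selected, using names from the public PhysX 4.0 SDK (treat it as an illustration against the SDK headers, not a verified build):

```cpp
#include <PxPhysicsAPI.h>  // umbrella header of the PhysX 4.0 SDK
using namespace physx;

static PxDefaultAllocator     gAllocator;
static PxDefaultErrorCallback gErrorCallback;

// Build a scene that runs PhysX on the CPU by default, and only
// opts into GPU rigid bodies when a CUDA context is available.
PxScene* createScene(PxPhysics* physics, PxCudaContextManager* cuda) {
    PxSceneDesc desc(physics->getTolerancesScale());
    desc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
    desc.filterShader  = PxDefaultSimulationFilterShader;
    desc.cpuDispatcher = PxDefaultCpuDispatcherCreate(2);  // CPU worker threads

    if (cuda) {  // GPU rigid bodies are opt-in, per scene
        desc.cudaContextManager = cuda;
        desc.flags |= PxSceneFlag::eENABLE_GPU_DYNAMICS;
        desc.broadPhaseType = PxBroadPhaseType::eGPU;
    }
    return physics->createScene(desc);
}
```

Passing a null `cuda` pointer gives the vendor-neutral CPU path; the GPU flags only take effect on hardware with CUDA support, which is why a game ships one or the other as an engine decision, not a command line.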

Sankyo
04-12-2018, 13:08
Naturally I know this; I forgot to clarify it in the OP.
Also, the reason PhysX was heavy in Mafia II is that it is forcibly CPU-bound there, which as far as I know can only be worked around by dedicating a second GPU to the PhysX calculations.

The same is the case for pCARS. It is forcibly bound to the CPU if I remember correctly, and there isn't a command line to force the PhysX data to be sent elsewhere.
CPU PhysX in pCARS takes on the order of 1-2% of CPU time IIRC, as it's not graphics-related, so putting it on the GPU won't gain much in this case.

Asturbo
04-12-2018, 13:15
Hope nVidia also supports FreeSync displays in the future.

Christiaan van Beilen
04-12-2018, 15:41
CPU PhysX in pCARS takes on the order of 1-2% of CPU time IIRC, as it's not graphics-related, so putting it on the GPU won't gain much in this case.

What about cases with an older CPU and a new GPU, which would create a CPU bottleneck? Any means of offloading the CPU is a win in that situation, so having optional control over this is desirable; in any game, by the way, not just pCARS.

MaximusN
04-12-2018, 16:53
What about cases with an older CPU and a new GPU, which would create a CPU bottleneck? Any means of offloading the CPU is a win in that situation, so having optional control over this is desirable; in any game, by the way, not just pCARS.
And if I'm not mistaken, graphics cards are (a lot) more efficient at it. And you can lower the graphics settings to offload the GPU; it's a lot harder to reduce CPU load. So having GPU PhysX is a very big pro IMHO.

Sankyo
05-12-2018, 07:50
What about cases with an older CPU and a new GPU, which would create a CPU bottleneck? Any means of offloading the CPU is a win in that situation, so having optional control over this is desirable; in any game, by the way, not just pCARS.
Maybe, since a new GPU on an older CPU probably won't be fully utilized anyway because the CPU cannot feed it.

Personally I think the relative load of PhysX won't change on an older CPU; hence, if the PhysX calculations are causing a high CPU load, then the rest of the physics model (i.e. STM) is stretching the CPU to its limits anyway, and offloading PhysX to the GPU won't help much, I think.
Plus, sending PhysX to the GPU and using its results is not free either, as the CPU will need to feed the GPU. Probably less CPU load, but not zero, I think.

MaximusN
05-12-2018, 10:16
Maybe, since a new GPU on an older CPU probably won't be fully utilized anyway because the CPU cannot feed it.

Personally I think the relative load of PhysX won't change on an older CPU; hence, if the PhysX calculations are causing a high CPU load, then the rest of the physics model (i.e. STM) is stretching the CPU to its limits anyway, and offloading PhysX to the GPU won't help much, I think.
Plus, sending PhysX to the GPU and using its results is not free either, as the CPU will need to feed the GPU. Probably less CPU load, but not zero, I think.
Sorry, but GPUs are way, way more efficient at doing PhysX than CPUs.

Look, I respect (or actually I don't) the choice that was made way back when to keep the red camp happy, but doing CPU PhysX is far from efficient. It's GPU tech that doesn't translate well to the CPU. Just like graphics.

And this step of making it open source will finally solve the 'keeping both camps happy' thing, so this is about the best news ever. Followed by ray tracing becoming mainstream in a couple of graphics-card generations. :p

Christiaan van Beilen
05-12-2018, 13:13
Maybe, since a new GPU on an older CPU probably won't be fully utilized anyway because the CPU cannot feed it.

Personally I think the relative load of PhysX won't change on an older CPU; hence, if the PhysX calculations are causing a high CPU load, then the rest of the physics model (i.e. STM) is stretching the CPU to its limits anyway, and offloading PhysX to the GPU won't help much, I think.
Plus, sending PhysX to the GPU and using its results is not free either, as the CPU will need to feed the GPU. Probably less CPU load, but not zero, I think.

Depending on the workload it might be insignificant, but I think it is still relevant.

If you look at the process: a CPU is more likely to be interrupted via IRQs, and the data is processed at the system clock rather than the GPU's internal clock, while a GPU can access system memory via DMA and largely bypass the CPU. Plus, GPU memory is faster and has higher bandwidth once the data is in there.

All in all I do think it will be significantly faster because of all of that, and probably more that I am now forgetting. System optimization in the 90s didn't work for nothing to offload the CPU and avoid interrupts (via DMA, for example) in order to optimize serial processing, as well as to improve data access and communication around the CPU.

Each processor has its own strengths, but PhysX was made for parallel computing since the days of Ageia, who made the first PhysX cards.

Sankyo
05-12-2018, 13:51
Sorry, but GPUs are way, way more efficient at doing PhysX than CPUs.
I'm not disputing that. I'm just saying that if the relative CPU load of PhysX is small, moving it to the GPU (and with that decreasing the GPU's graphics processing power as a possible unwanted side effect) may not give much benefit, even though the idea sounds nice.


Look, I respect (or actually I don't) the choice that was made way back when to keep the red camp happy, but doing CPU PhysX is far from efficient. It's GPU tech that doesn't translate well to the CPU. Just like graphics.
Well, there wasn't any choice here in the first place IMO, since you simply can't choose a hardware-specific implementation for generic code. Choosing GPU PhysX, and with that making pCARS an nVidia-only game, would have been commercial suicide.

What could have been a better option is to choose an open-source GPU physics engine (if one exists or existed at the time), or a less demanding CPU-based physics solver (if available; somehow I still have the feeling that PhysX is less efficient than, for example, Havok, but I'm most likely wrong about that).


And this step of making it open source will finally solve the 'keeping both camps happy' thing, so this is about the best news ever. Followed by ray tracing becoming mainstream in a couple of graphics-card generations. :p
Agreed with the first; still not convinced that ray tracing will be more than a marketing gimmick in the next 5 years or so...

Christiaan van Beilen
05-12-2018, 14:03
Ray tracing is indeed a gimmick. In ~4 years you might buy an RTX 4080 Ti, revisit some RTX titles of today, and maybe find some new enjoyment in playing them with everything on.

It does make the world a lot more realistic though, just like having PhysX calculate the natural motion of wind, water, anything made of cloth (clothes, but also flags, for example) and so on.

We are far from having this in VR, let alone a properly sharp VR experience. That will take another decade or two at the current painfully slow development rate.

MaximusN
05-12-2018, 14:21
Ray tracing is indeed a gimmick. In ~4 years you might buy an RTX 4080 Ti, revisit some RTX titles of today, and maybe find some new enjoyment in playing them with everything on.

It all depends on whether the next cards from both (or by then three) manufacturers, even the lower-end ones, have ray-tracing cores up to the task. If I understand it correctly, it makes the workload of graphics artists a lot easier. The downside is that they will still have to cater to cards without ray tracing too, but my guess is they will put less effort into the old-school lighting.

Speaking only for myself, I'd much rather have 1080p with real(ish) lighting than 4K with fake lighting, so I'll take the FPS hit like a champ. :) Just like any movie outscores a game in graphical quality even if the movie is at 720p and the game is at 4K, and that's almost all down to lighting...

Christiaan van Beilen
05-12-2018, 15:17
I think you are a bit too optimistic about the speed at which you expect the transition to happen.

I think ray tracing will only be a common good by the time Windows is turned up to 11, I mean, when we will have Windows 11.
Even then we might not have a full transition yet. Just think back to bilinear and trilinear filtering, and how long that legacy stuff remained in games while AA techniques became plentiful.

There will of course be a transition period that begins with the next generation of cards, if AMD indeed also comes up with something capable of ray tracing.
This will only be the start: gamers will increasingly have ray-tracing-capable cards, and once Steam shows that more than half of the user base has capable cards, developers will add the gimmick. Only once the user base is beyond three quarters will supporting it become a more serious task, but by then the higher-end cards will have become far more capable too.