Plus, if you add a dedicated physics processor, that's probably going to work a lot better the closer it is to the GPU.
Um, no.
There are two uses for physics in a game. One use is for something that is
entirely graphical: particle systems that don't affect gameplay (outside of obscuring the screen in some way), exploding bits of debris, and so on. That can be done entirely on the GPU, especially now that geometry shaders give GPUs write-back. You don't need a separate physics processor for it.
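As a rough illustration of that first case, here's a minimal sketch of a cosmetic particle update that lives entirely on the GPU. It's written as a CUDA kernel purely for convenience (the point above is about geometry-shader write-back, but the principle is the same), and all the names are made up: the particle state never leaves GPU memory, and gameplay code never has to look at it.

[code]
#include <cuda_runtime.h>

// Purely visual particle state. Nothing here feeds back into gameplay.
struct Particle {
    float3 position;
    float3 velocity;
    float  life;       // seconds remaining before the particle is recycled
};

__global__ void updateParticles(Particle* particles, int count, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count) return;

    Particle p = particles[i];

    // Simple ballistic motion plus gravity.
    p.velocity.y -= 9.81f * dt;
    p.position.x += p.velocity.x * dt;
    p.position.y += p.velocity.y * dt;
    p.position.z += p.velocity.z * dt;
    p.life       -= dt;

    particles[i] = p;   // written straight back to GPU memory, ready to render
}

// Host-side step: d_particles already lives in device memory.
void stepParticleSystem(Particle* d_particles, int count, float dt) {
    const int block = 256;
    const int grid  = (count + block - 1) / block;
    updateParticles<<<grid, block>>>(d_particles, count, dt);
}
[/code]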
The other kind of use for physics is for things that
do affect gameplay; HL2, for example. Unfortunately, this is not the kind of physics where you can just say, "Do this," and retrieve the answer afterwards. The game's code, possibly even the script, has to be involved. You need to be able to set rules that are entirely arbitrary. Collision detection needs to be forwarded to the AI and game systems so that they can assign damage, delete entities, and all of that good stuff. In short, this is not the kind of work that's good for handing off to a separate processor. There's a reason physics chips didn't take off.
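To make the contrast concrete, here's a minimal, made-up sketch of what that coupling looks like in practice. None of these names come from a real engine; the point is only that the physics step can detect that two things touched, but it has to hand the contact back to game code to decide what it means.

[code]
#include <cstdint>
#include <functional>
#include <utility>
#include <vector>

// Hypothetical game-side entity; real engines carry far more state.
struct Entity {
    std::uint32_t id;
    float         health;
    bool          alive = true;
};

struct ContactEvent {
    Entity* a;
    Entity* b;
    float   impulse;   // collision strength reported by the physics step
};

// The physics system forwards contacts to game code, which applies whatever
// arbitrary rules the designers want.
class PhysicsWorld {
public:
    using ContactCallback = std::function<void(const ContactEvent&)>;

    void setContactCallback(ContactCallback cb) { onContact_ = std::move(cb); }

    void step(float /*dt*/) {
        // ... integrate bodies and detect collisions (omitted) ...
        for (const ContactEvent& e : pendingContacts_) {
            if (onContact_) onContact_(e);   // hand control back to gameplay code
        }
        pendingContacts_.clear();
    }

private:
    ContactCallback           onContact_;
    std::vector<ContactEvent> pendingContacts_;
};

int main() {
    PhysicsWorld world;

    // Gameplay rule: hard impacts deal damage and may kill the entity.
    world.setContactCallback([](const ContactEvent& e) {
        if (e.impulse > 50.0f) {
            e.b->health -= e.impulse * 0.1f;
            if (e.b->health <= 0.0f) e.b->alive = false;   // AI/game systems react here
        }
    });

    world.step(1.0f / 60.0f);   // physics and game logic stay in lockstep every frame
}
[/code]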
It is much easier for a game developer to just use more CPU cores to do physics than to use an off-board physics chip.
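For comparison, spreading that same work across CPU cores is about as simple as it gets, and the results land in the same address space the game code is already using. A rough sketch, with illustrative names and a deliberately trivial integrator:

[code]
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Illustrative rigid-body state.
struct Body {
    float position[3];
    float velocity[3];
};

// Integrate one slice of the body array.
void integrateRange(std::vector<Body>& bodies, std::size_t begin, std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i)
        for (int axis = 0; axis < 3; ++axis)
            bodies[i].position[axis] += bodies[i].velocity[axis] * dt;
}

// Split the work across however many cores the machine already has. No
// off-board round trip: the results stay in ordinary memory, right where
// the AI and game systems expect them.
void integrateAll(std::vector<Body>& bodies, float dt) {
    const unsigned    cores = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (bodies.size() + cores - 1) / cores;

    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c) {
        const std::size_t begin = c * chunk;
        const std::size_t end   = std::min(bodies.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back(integrateRange, std::ref(bodies), begin, end, dt);
    }
    for (auto& w : workers) w.join();
}
[/code]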
I think what's going on is Intel screaming at NVIDIA that they're about to seriously enter the graphics market, and NVIDIA pretty much answering back, "Go ahead, we have years of R&D on you. Oh, and just like you're trying to integrate the GPU and CPU, we can do the same thing on our end." NVIDIA has the upper hand in terms of technology, and Intel in terms of the market; it should be interesting to see how it plays out over the next 5-10 years.
Yeah, the problem is that, while nVidia has years of R&D for graphics chips, they don't have something that Intel does: the x86 Instruction Set Architecture.
As terrible and annoying as x86 is, it is the closest thing to a lingua franca that assembly has. Millions of man-hours have been invested in compiler design for x86. People have millions of lines of code written in it. There are terabytes of x86 executables out there.
Using x86, or a derivative thereof, as a GPU's shading language may be somewhat less efficient than a specialized shading language, but it's x86. Every time something has gone up against x86 claiming better efficiency, it has lost. Why? Because the efficiency difference is never enough to trump the value of x86 and its backwards compatibility with code compiled 15 years ago.