
"Mantle": AMD's direct-access graphics API



nocturnal7x
09-26-2013, 12:31 PM
This has the potential to be amazing.

http://www.pcgamer.com/2013/09/26/amd-announce-mantle-a-direct-access-graphics-api-that-will-debut-in-battlefield-4/

Exciting times to be a PC gamer. I hope Nvidia works on something similar.

BlackOctagon
09-26-2013, 02:28 PM
Yep, this was pretty much the only great thing to come out of yesterday's preview. Combined with AMD's apparent takeover of the console market for the next few years, offering developers a low-level API to code directly against could prove a masterstroke, both for them as a company and for us as PC gamers.

If, that is, the performance gains are as significant as they appear on paper. Time will tell.

Shadman
09-26-2013, 09:45 PM
Yeah, theoretical. We will see.

nocturnal7x
11-07-2013, 06:47 PM
Star Citizen is going to support Mantle:

https://robertsspaceindustries.com/comm-link/transmission/13362-Star-Citizen-To-Include-Mantle-Support

SOBs, I'm building a new comp when this game comes out. I have a feeling I'll still be going green, but this sure ups the ante for AMD, if the shit works.

HyperMatrix
11-07-2013, 06:55 PM
Star Citizen is going to support Mantle:

https://robertsspaceindustries.com/comm-link/transmission/13362-Star-Citizen-To-Include-Mantle-Support

SOBs, I'm building a new comp when this game comes out. I have a feeling I'll still be going green, but this sure ups the ante for AMD, if the shit works.



As many have said, it comes down to a couple of things. From a theoretical standpoint, the more developers support it, the better its chance of succeeding. However, the real determining factor will be actual performance gains.

Keep in mind that back in the day this is exactly how games were developed. But as more and more cards came out, it became a pain for developers to adjust and code for every single architecture out there. That's how the OpenGL/DirectX stuff started: a developer would design for just one API, and that API would be responsible for working with all those cards. There was more overhead, but a ton of ease on the development side.

So the question now is whether development budgets are big enough, and performance gains noticeable enough, that developers would be willing to invest in optimizing for low-level access. And if so, on what cards? What happens when the next generation of cards comes out? What happens if Nvidia decides to do it too (though they already do a bit of it using NVAPI)? Perhaps it'll be adopted, but only for the "current generation" of cards for each game that comes out, and the rest will run through DirectX/OpenGL. But you just can't toss all of that out the window.


All this speculation, while very informed and accurate (sorry for tooting my own horn), could mean nothing if the performance gain is, say, just 10%. Or what if it's a 50% difference? It will be interesting to wait and see.

Shadman
11-07-2013, 08:25 PM
You say 10% is nothing, but weren't you the one parading around an 8% performance gain from overclocking something a while ago? (Incredibly vague, I know.)

HyperMatrix
11-07-2013, 09:19 PM
You say 10% is nothing, but weren't you the one parading around an 8% performance gain from overclocking something a while ago? (Incredibly vague, I know.)


Absolutely. For the end user it's always welcome. What I'm saying is that if it takes a lot of work for only 10% better performance, developers are less likely to invest the time and money into doing it when it will only benefit the percentage of users who have compatible cards. I'm not poo-pooing the benefit of the 10% for someone like you or me; I'm just pointing out the cost/benefit analysis that developers are likely to do before deciding whether to support it. Can indie developers do it? Or will it be something only bigger developers do? For example, I can see it being done by "engine" makers, like CryEngine, Unreal Engine, Unity, and Frostbite. Those are all big enough that they can do the engine optimizations themselves without the game developers having to worry too much. At that point you'll see benefits. But again, it all goes back to my previous statements and concerns. I'd like to hear a developer comment on the ease of developing for Mantle, along with the kinds of performance gains they saw.

winterhell
11-08-2013, 12:30 AM
The limiting factor in OpenGL and DirectX is more or less the number of draw calls you can make per second. Every time you bind a vertex array, texture, or shader parameter, or tell the GPU to draw, you are calling a function that has its own constant overhead.
You can render hundreds of thousands of objects per second, but at that rate there's no performance gain in going below, say, 200 triangles per object, because the time goes to draw-call stalls and context switches rather than actual rendering.
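
Rough sketch of what I mean (C++ against plain OpenGL; the Object struct and drawScene are made-up illustrations, not from any real engine). Every call in the loop crosses into the driver and burns a roughly constant amount of CPU time, no matter how small the mesh is:

    #include <GL/glew.h>   // or any other OpenGL loader
    #include <vector>

    struct Object {            // hypothetical per-object state
        GLuint  vao, texture, shader;
        GLint   mvpLocation;
        GLsizei indexCount;
        GLfloat mvp[16];
    };

    void drawScene(const std::vector<Object>& objects) {
        for (const Object& obj : objects) {
            glUseProgram(obj.shader);                   // driver call
            glBindVertexArray(obj.vao);                 // driver call
            glBindTexture(GL_TEXTURE_2D, obj.texture);  // driver call
            glUniformMatrix4fv(obj.mvpLocation, 1, GL_FALSE, obj.mvp);
            glDrawElements(GL_TRIANGLES, obj.indexCount,
                           GL_UNSIGNED_INT, nullptr);   // the actual draw
        }
    }

With a 200-triangle mesh, those five calls per object can easily cost the CPU more time than the GPU spends rasterizing the triangles.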

If you have bare access to the metal, you can probably cut the overhead and CPU usage for rendering by up to 10 times. But the gains get smaller and smaller with each hardware generation, not unlike increasing the frequency of the system RAM.
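
You don't strictly need Mantle to attack this either; when many objects share a mesh, standard OpenGL instancing already collapses thousands of tiny draws into one driver entry (sketch only; instanceAttribLocation, indexCount, and instanceCount are placeholders):

    // Per-instance data (e.g. transforms) comes from a vertex buffer whose
    // attribute advances once per instance instead of once per vertex:
    glVertexAttribDivisor(instanceAttribLocation, 1);

    // One call, many objects: the constant per-call overhead is paid once.
    glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
                            nullptr, instanceCount);

It only helps for identical meshes, though, which is exactly why a general fix has to come from the API itself.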

What GPU makers need is to implement something like nVidia's bindless textures extension, which has been available since the GTX 600 series. It solves many problems, often cuts the number of function calls in half or better, and works great with deferred rendering. You can have thinner and faster render targets and do more magic with less.
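
For the curious, the flow looks roughly like this (a sketch of the GL_NV_bindless_texture extension; check the extension spec before copying anything):

    // Done once per texture, instead of glBindTexture on every draw:
    GLuint64 handle = glGetTextureHandleNV(texture);  // 64-bit GPU handle
    glMakeTextureHandleResidentNV(handle);            // make it resident

    // The handle is then shipped to the shader as ordinary data, e.g. packed
    // into a uniform buffer next to the rest of the material parameters, so
    // draw time involves zero texture bind calls.

    // GLSL side (sketch):
    //   #extension GL_NV_bindless_texture : require
    //   ...a 64-bit handle read from a uniform buffer is converted back into
    //   a sampler2D in the shader, so one draw can read any resident texture.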

BlackOctagon
11-08-2013, 01:22 AM
As I understand it, AMD chose to go down this route at the repeated request of developers who were frustrated by the standard APIs of today. The benefits of something like Mantle to us (and to AMD) are likely to be in terms of performance. But for the developers who requested this it's about giving them something that's easier to work with when developing the PC versions of multi-platform games...even if the APIs of the PS4 and XB1 are only 'similar' to Mantle and not the same thing

Sent from dumbphone (pls excuse typos and dumbness)