PDA

View Full Version : Mythbusters for people with triangle heads Thread.



whitespider
12-21-2012, 01:04 AM
This thread is dedicated to complicated PC hardware questions that I don't entirely know the answers to.

I'll get the giant plastic ball rolling.



1. Do three graphics cards actually eliminate microstutter? Is that true for AMD only? Does AFR work better with three cards, and if so, why?

2. Does triple buffering work with SLI or CrossFire? And if not, why the hell is it an option in the control panels?

3. When I turn the High Precision Event Timer (HPET) off, it makes the perceived microstutter far, far less. Yet nobody is talking about it. WHY?

4. Why is multisample AA in DirectX 11 games (as an in-game option) so much more demanding than DX9 MSAA (as an in-game option)?

5. Has anyone noticed that TXAA, an Nvidia-only feature, makes games scale poorly in SLI? Why has Nvidia never corrected this? Surely SLI and TXAA should go hand in hand?


There are others; however, I have forgotten them. I'll add them later.

BlackOctagon
12-21-2012, 03:00 AM
6. How exactly does RadeonPro 'fix' CrossFire microstutter via a mere frame limiter?

Cpt.Teacup
12-21-2012, 03:36 AM
3. When I turn the High Precision Event Timer (HPET) off, it makes the perceived microstutter far, far less. Yet nobody is talking about it. WHY?

My best guess: people (a) don't know about it, (b) aren't knowledgeable enough to understand it, or (c) don't care.

winterhell
12-21-2012, 03:59 AM
Unless you've mentioned this elsewhere, it's worth noting that AFR introduces additional input lag of 1 or 2 frames (with 2 or 3 cards respectively). By its nature the per-GPU workload is somewhat reduced, hence the opportunity for higher fps: the number of vertices/polygons and commands sent to each GPU is halved with 2 cards compared to simultaneous (split-frame) rendering, though those things are becoming less and less of a bottleneck in modern games, as opposed to pixel shading power.
So when targeting 120fps, an additional 8-16ms of input lag may or may not be so bad if it allows you a more stable framerate experience.

Triple buffering also adds 1 frame of input lag, allowing the GPU to render the last frame while the CPU is computing the logic for the next one.
With regular double buffering, the CPU and GPU are often left waiting for the other to finish its work before they can continue.
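
To put rough numbers on that (just my own back-of-the-envelope sketch in Python, idealized queueing, not driver measurements):

# Idealized input-lag arithmetic for the figures above: AFR queues about
# one extra frame per extra GPU, and triple buffering queues one more.
def frame_time_ms(fps):
    return 1000.0 / fps

def afr_added_lag_ms(num_gpus, fps):
    # 2 cards -> 1 extra queued frame, 3 cards -> 2 extra frames
    return (num_gpus - 1) * frame_time_ms(fps)

fps = 120
for gpus in (2, 3):
    print(f"{gpus} GPUs @ {fps}fps: +{afr_added_lag_ms(gpus, fps):.1f}ms from AFR, "
          f"plus another {frame_time_ms(fps):.1f}ms if triple buffering is on")
# At 120fps this works out to roughly the 8-16ms range quoted above.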

whitespider
12-21-2012, 04:11 AM
Unless you've mentioned this elsewhere, it's worth noting that AFR introduces additional input lag of 1 or 2 frames (with 2 or 3 cards respectively). By its nature the per-GPU workload is somewhat reduced, hence the opportunity for higher fps: the number of vertices/polygons and commands sent to each GPU is halved with 2 cards compared to simultaneous (split-frame) rendering, though those things are becoming less and less of a bottleneck in modern games, as opposed to pixel shading power.
So when targeting 120fps, an additional 8-16ms of input lag may or may not be so bad if it allows you a more stable framerate experience.

Triple buffering also adds 1 frame of input lag, allowing the GPU to render the last frame while the CPU is computing the logic for the next one.
With regular double buffering, the CPU and GPU are often left waiting for the other to finish its work before they can continue.

Yes, but does triple buffering actually 'function' in multi-GPU mode? I have seen no evidence to say that it does, or does not.

whitespider
12-21-2012, 04:11 AM
Seriously? A double post. OK, I'll keep this one reserved for more mythbuster questions.

*Reserved - for the whale stalker*

jedi95
12-21-2012, 04:12 AM
5. Has anyone noticed that TXAA, an Nvidia-only feature, makes games scale poorly in SLI? Why has Nvidia never corrected this? Surely SLI and TXAA should go hand in hand?



My best guess on this one is that TXAA introduces some cross-frame dependency to handle the temporal component. Essentially you end up needing frame N-1 before you can perform TXAA on frame N. Since SLI involves rendering multiple frames in parallel, this necessarily sequential step reduces overall scaling.

Again, that's just my best guess, but it makes sense to me.
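
If it helps to picture it, here's a toy sketch of that dependency in Python (all names made up; threads stand in for GPUs, this is not actual driver code). The render work parallelizes, but the temporal resolve forms a chain that cannot:

import time
from concurrent.futures import ThreadPoolExecutor

def render(frame):
    # Independent per-frame work: AFR can spread this across GPUs.
    time.sleep(0.010)
    return f"color_buffer_{frame}"

def temporal_resolve(current, previous):
    # TXAA-style step: frame N needs frame N-1's result first,
    # so these calls can only run one after another.
    time.sleep(0.002)
    return f"resolved({current}, {previous})"

with ThreadPoolExecutor(max_workers=2) as pool:  # pretend 2-way SLI
    colors = list(pool.map(render, range(4)))    # this part scales

previous = None
for color in colors:                             # this part doesn't
    previous = temporal_resolve(color, previous)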

whitespider
12-21-2012, 04:15 AM
My best guess on this one is that TXAA introduces some cross-frame dependency to handle the temporal component. Essentially you end up needing frame N-1 before you can perform TXAA on frame N. Since SLI involves rendering multiple frames in parallel, this necessarily sequential step reduces overall scaling.

Again, that's just my best guess, but it makes sense to me.

An interesting counterargument to that is Black Ops 2. The standard Nvidia profile (even now) has reduced TXAA scaling, just like Assassin's Creed 3 and The Secret World. Yet changing the SLI flag to the Huxley one makes TXAA seemingly scale very well.

jedi95
12-21-2012, 04:26 AM
An interesting counterargument to that is Black Ops 2. The standard Nvidia profile (even now) has reduced TXAA scaling, just like Assassin's Creed 3 and The Secret World. Yet changing the SLI flag to the Huxley one makes TXAA seemingly scale very well.

In that case I'm either wrong in my statement above, or TXAA doesn't actually work when you change the SLI flag. Is there any way to tell for sure whether TXAA is being properly applied with that flag set?

whitespider
12-21-2012, 04:52 AM
In that case I'm either wrong in my statement above, or TXAA doesn't actually work when you change the SLI flag. Is there any way to tell for sure whether TXAA is being properly applied with that flag set?

Yeah, I played the entire game with it enabled and a high level of GPU usage. I kept switching it on and off to see the difference, and it was extremely apparent.

HyperMatrix
12-21-2012, 06:43 PM
Regarding your statement on HPET: I did before-and-after testing and found absolutely no difference using this tool:
http://www.thesycon.de/eng/latency_check.shtml

One thing that did make a significant difference, however, is an app you need to keep open and running in the background. Have a look here:
http://www.lucashale.com/timer-resolution/

whitespider
12-21-2012, 06:51 PM
Regarding your statement on HPET: I did before-and-after testing and found absolutely no difference using this tool:
http://www.thesycon.de/eng/latency_check.shtml

One thing that did make a significant difference, however, is an app you need to keep open and running in the background. Have a look here:
http://www.lucashale.com/timer-resolution/

I don't want to say something outright wrong, but the effect of disabling BIOS-level HPET is extremely apparent to me. I will test the timer tool you linked and see how it works as something additional.

HyperMatrix
12-21-2012, 07:20 PM
I don't want to say something outright wrong, but the effect of disabling BIOS-level HPET is extremely apparent to me. I will test the timer tool you linked and see how it works as something additional.

I've read somewhere that Asus doesn't actually let you disable HPET, though I'm not sure why the option would be in the BIOS if it doesn't actually work. Others were seeing differences with the tool when disabling HPET; the same cannot be said for me. I wonder if it's CPU/mobo dependent.

whitespider
01-08-2013, 07:09 PM
Some more contenders for people with triangle heads.


1. I just got 16GB of rather impressive RAM that sneaky recommended. And while Windows is more responsive, I have noticed something else. Some games tend to stream data through an extremely narrow thread or pipeline. Deus Ex: Human Revolution is a prime example of this, as is Saints Row: The Third. Both of these games show rather low CPU utilization during heavy streaming moments (i.e., lots of data). I figured this was simply a reflection of how multithreaded and well coded the graphics engine was, which is what I still actually believe.

However, with this rather overclockable RAM (overclocked, naturally) and extremely small page files across my 4 HDDs (16MB-100MB), the streaming in said streaming-limited games tends to be quite a bit more 'flowy'. Yes, 'flowy'. And when I look at my CPU usage, it is in the 60-70% range for Saints Row: The Third instead of the previous 45-65%, and slightly lower for Deus Ex: Human Revolution. Faster RAM, it seems, allows for better streaming, which in turn also allows the CPU to process just that little bit more data, letting framerates run a little higher. I was getting around 60fps with everything at ultra and FXAA on top of that. Now I am at 60-73-80fps with a bias towards 75fps.

This does not remove traditional stutter from games that are prone to it; rather, it simply allows more data chunks to be processed by the CPU while lessening the stuttering as well. So, then: faster RAM allows shitty ports to function slightly less shittily? Nice, eh?

BlackOctagon
01-09-2013, 03:54 AM
6. How exactly does RadeonPro 'fix' CrossFire microstutter via a mere frame limiter?

If anyone's curious, I found out the answer to this myself: it DOESN'T. The microstutter is still there. The reviewers who concluded that this solved microstutter were either blind, on drugs, or on the take (possibly a combination of the three).
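
For what it's worth, the core of any frame limiter is just pacing frame submission to an even cadence, something like this generic sketch (my illustration, not RadeonPro's actual code). It can smooth out when frames are handed to the driver, but it can't control how unevenly AFR presents them afterwards, which would fit what my eyes are telling me:

import time

TARGET_FPS = 60
INTERVAL = 1.0 / TARGET_FPS  # seconds per frame

deadline = time.perf_counter()
for frame in range(10):
    # ... the game would render and present the frame here ...
    deadline += INTERVAL
    remaining = deadline - time.perf_counter()
    if remaining > 0:
        time.sleep(remaining)  # wait so frames start at even intervals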

n0rp
01-29-2013, 12:59 PM
Nah, just believe the new graphs you find here... xD http://www.sweclockers.com/recension/16383-asus-rog-ares-ii/13#pagehead

BlackOctagon
01-29-2013, 04:32 PM
That's a beautiful graph, but my eyes are seeing what they're seeing. Believe me, I wish it weren't so.

winterhell
01-30-2013, 01:33 AM
http://www.sweclockers.com/image/red/2013/01/28/Fc3_dfc_aresii.png?t=original&k=c7c54828
Very convincing

n0rp
01-30-2013, 10:31 AM
A beautiful graph with a sky-high (100ms?) frame spike... facepalm emoticons? But that one is without RadeonPro's DFC enabled: http://www.sweclockers.com/image/red/2013/01/23/Fc3_aresii.png?t=original&k=5b2f3aa6

TheZone
01-30-2013, 03:26 PM
You can try this.

Shadman
01-30-2013, 03:45 PM
These are recorded with Fraps or other software-based recording though, right? The software may eliminate one step of the stutter process, but the only way to say it's 100% fixed and doesn't have any microstutter anymore is to record the output straight from the DVI cable, or with a high-speed camera pointed at the monitor.

Fimconte
02-02-2013, 04:16 PM
I don't want to say something outright wrong, but the effect of disabling BIOS-level HPET is extremely apparent to me. I will test the timer tool you linked and see how it works as something additional.

That is because if you have HPET enabled in the BIOS without forcing it to be the only timer, Windows will use TSC+HPET, which might result in worse performance than with HPET off in the BIOS, in which case Windows will use TSC+LAPIC.
It's worth noting that if you turn HPET off in the BIOS but still use "/set useplatformclock true", then Windows will use only the LAPIC.
Using one timer should be optimal in most cases, so you should experiment with useplatformclock and HPET on/off and see if there's any benefit to running pure HPET or LAPIC.
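
For anyone wanting to try this, the full commands as I understand them (standard bcdedit syntax; run from an elevated command prompt and reboot afterwards):

bcdedit /set useplatformclock true
bcdedit /deletevalue useplatformclock

The first forces Windows onto the platform clock (HPET if it's enabled in the BIOS, otherwise LAPIC, per the above); the second removes the override and returns Windows to its default timer selection.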

I prefer LatencyMon (http://www.resplendence.com/latencymon) over DPClat, since it gives more information on what is causing the spikes.
For me, a lack of USB 3.0 drivers was causing spikes for some reason; disabling the ports (I don't have any USB 3.0 devices atm) fixed it for me.


P.S. If you've only tried turning HPET on in the BIOS, try turning it on in Windows as well (http://www.neowin.net/forum/topic/1075781-tweak-enable-hpet-in-bios-and-os-for-better-performance-and-fps/).
It's also worth double-checking whether you have 32-bit or 64-bit HPET enabled in the BIOS.

whitespider
02-02-2013, 11:55 PM
That is because if you have HPET enabled in the BIOS without forcing it to be the only timer, Windows will use TSC+HPET, which might result in worse performance than with HPET off in the BIOS, in which case Windows will use TSC+LAPIC.
It's worth noting that if you turn HPET off in the BIOS but still use "/set useplatformclock true", then Windows will use only the LAPIC.
Using one timer should be optimal in most cases, so you should experiment with useplatformclock and HPET on/off and see if there's any benefit to running pure HPET or LAPIC.

I prefer LatencyMon (http://www.resplendence.com/latencymon) over DPClat, since it gives more information on what is causing the spikes.
For me, a lack of USB 3.0 drivers was causing spikes for some reason; disabling the ports (I don't have any USB 3.0 devices atm) fixed it for me.


P.S. If you've only tried turning HPET on in the BIOS, try turning it on in Windows as well (http://www.neowin.net/forum/topic/1075781-tweak-enable-hpet-in-bios-and-os-for-better-performance-and-fps/).
It's also worth double-checking whether you have 32-bit or 64-bit HPET enabled in the BIOS.

HPET is 'faster' in a raw sense, but disabling HPET completely and reverting to the older timers improves 'gaming frame perception' (also known as microstutter, or frame times) with SLI graphics cards. AFR (alternate frame rendering) seems to produce more even frames this way. I am someone who notices frametimes to the letter, and I can detect microstutter in every - and I mean every - setup/situation.

Which is not to say that my 670s give me a lot of microstutter; they simply don't. The frametimes are even, even if I enable HPET across the board. They are just that much better if I disable HPET. This does not seem to apply to single-GPU mode; SLI only.