please get rid of the scaleform ui menus!

Discussion in 'Player Support' started by BlackDove, May 15, 2014.

  1. Lavans

    Dragam - Yeah. I used to mod my HD5970's BIOS. From what I have read, modding a GTX 680 BIOS is basically the same in concept, minus the boost aspect (obviously). The only reason my 680 doesn't have a modded BIOS with custom voltages is because I don't want to hard mod the card.

    I would suspect that manufacturers give a little voltage overhead to ensure stability. It would be silly for a card to operate at the bare minimum required voltage, since that could introduce general instability.

    Obviously there could be a number of cards out there with a BIOS that does not guarantee stability. Oversights like that do happen, and games like PS2 can expose such issues. It's a shame, though, that there's no way for a user to grab a BIOS update without going out of their way.
  2. Dragam

    Lavans :

    Well, tbh there isn't much to gain by modding a 680 BIOS either... if you have very good cooling, it can get you perhaps an additional 100 MHz, but with average cooling it'll maybe get you 50 MHz more.

    I think they originally did give that overhead, but later driver performance optimizations have increased the power usage.
    If we take a game such as Crysis 3 as an example: when it came out more than a year ago, the power consumption was roughly 110% with OCs... now it's roughly 130% with OCs. That's about the same 20% increase I've seen in average fps, so the two seem to go hand in hand.

    So let's say that back then they had calculated, with the drivers of the time, that the card could run at 100% load with the boost active and the voltage at 1.150, while staying within the power limits... now, many drivers later, to stay within the power limits the boost deactivates, so the voltage will only be 1.087, the cards will have a hard time reaching 100% load, and if they do, they might become unstable.

    Ofc this is just speculation, but the above seems to me like the most plausible reason why many people have issues with PlanetSide 2 and GPU crashes (aside from driver issues).
  3. BlackDove

    Ok, don't respond to my post because you said that TDP is something that it's NOT.

    But at least don't claim things about GPU Boost that are also false.

    GPU Boost 1.0 uses extra available power headroom to overclock the GPU if the load demands it. It does this dynamically and adjusts in milliseconds, based on load and power headroom.

    http://www.geforce.com//hardware/technology/gpu-boost/technology
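
    Roughly, the idea looks like this (NOT Nvidia's actual algorithm - the clocks, thresholds and step size below are made up, it's only meant to show the load/headroom feedback loop):

    # Simplified illustration of the GPU Boost 1.0 behaviour described above.
    # All numbers are invented; the point is the control loop.
    BASE_CLOCK_MHZ = 1006     # hypothetical base clock
    MAX_BOOST_MHZ = 1110      # hypothetical top boost bin
    STEP_MHZ = 13             # one boost bin
    TDP_WATTS = 195           # rated board power

    def adjust_clock(current_mhz, board_power_w, gpu_load_pct):
        # Called every few milliseconds by the boost controller.
        if board_power_w < TDP_WATTS and gpu_load_pct > 90 and current_mhz < MAX_BOOST_MHZ:
            return current_mhz + STEP_MHZ   # spare power headroom -> clock up
        if board_power_w >= TDP_WATTS and current_mhz > BASE_CLOCK_MHZ:
            return current_mhz - STEP_MHZ   # over the power target -> clock down
        return current_mhz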

    Nvidia implemented boost because the previous limits for a GPU were set by "worst case" usage scenarios: POWER VIRUSES.

    AFTER Nvidia and AMD blocked the code of known power viruses from running, they implemented boost to allow clock rates to vary based on what's happening in the game.

    FurMark or OCCT cause new GPUs to automatically throttle, but a power virus menu would cause exactly what you're seeing here: the GPU exceeds its TDP and attempts to boost as much as possible, while not being utilized anywhere near 100%.

    While a properly cooled GPU with a good-quality PSU won't be likely to fail immediately, it DOES unnecessarily load the GPU, and everyone here should be aware that cooler electronics that aren't continuously loaded beyond 100% TDP LAST LONGER.

    Other people who don't have the best components might have a hardware failure, or at least terrible performance, if they try to run a game with a power virus UI as well.

    In any case, fixing the menus would allow better GPU utilization for ACTUAL WORK. Maybe they could make the graphics actually look good if they got rid of the overhead from the 2D elements as well.
  4. BlackDove

    Been getting some friends to test various things since starting this thread, across a variety of hardware.

    Seems most of the black screens coincide with being in a terminal for them.

    Would be great to get a dev to respond as to why the menus load powerful GPUs to beyond 100% TDP while at less than 50% utilization.
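
    If anyone else wants to check their own card, here's a minimal logging sketch (assumes an Nvidia GPU with nvidia-smi available): run it, sit in the menus for a minute, then compare against actual gameplay.

    import subprocess, time

    # Log power draw, GPU utilization and temperature once a second for a minute.
    for _ in range(60):
        out = subprocess.check_output([
            "nvidia-smi",
            "--query-gpu=power.draw,utilization.gpu,temperature.gpu",
            "--format=csv,noheader",
        ])
        print(time.strftime("%H:%M:%S"), out.decode().strip())
        time.sleep(1)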
  5. Zcuron

    Layman here; horrid things to follow:

    Insofar as I understand it, TDP is a figure that specifies how much heat will be produced, and to me it only makes sense if this is the "upper bound" (i.e. as much as you're likely to see), which would seemingly be at "100% normal usage".

    CPUs, as far as I know, have specialized parts which each only perform a certain function; in other words, the CPU isn't built to have every part of it at 100% use, but is designed with the idea that many things won't see simultaneous usage. I believe GPUs are similar.

    Combining the above, a TDP rating is based on how much power is flowing through the chip at what is considered to be "normal use".
    In other words, normal use is where %TDP and %USE are equal, whereas abnormal is where they are not. (be it higher or lower)
    The definition of "normal" is obviously up to the designer of the card, irrespective of what users or game makers think.

    You can exceed the TDP by:
    1. Increasing the voltage/clock, thereby increasing the power usage (more/faster "normal" use! - see the rough sketch after this list).
    2. Using parts in a way they're not meant/expected to be used ("power virus" is a loaded term I guess, but the function, if not the intention, is similar).
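
    Rough sketch of point 1 (dynamic power scales roughly with voltage squared times clock; the example voltages/clocks below are made up, not from any datasheet):

    # Relative power from a voltage + clock bump, using P ~ V^2 * f as a rough model.
    def relative_power(voltage, clock_mhz, base_voltage=1.087, base_clock_mhz=1006):
        return (voltage / base_voltage) ** 2 * (clock_mhz / base_clock_mhz)

    print(relative_power(1.150, 1110))   # ~1.24, i.e. roughly 24% more power than stock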

    I'll note that "insufficient cooling" is by definition always true if heat is the cause of a failure - it's more sensible to talk in terms of what things are designed for - which in terms of cooling is TDP, so if you exceed the TDP, you exceed the cooling. (though it likely has some margin of error)

    Lastly, to heat a room, would you use 95C water in a Thermos or a heater set to 40C?
    Point being, dissipation is all about moving heat away from the point of generation, but chips are designed with the idea that they'll never be used to 100% capacity, so if you put enough heat-generating points together, the heat generated in that small area will exceed the capacity of any feasible cooling solution.

    In other words, abnormal/unexpected use can cause damage regardless of what cooling you use.

    ===END===

    If any of the above is incorrect, I'd appreciate the kindness of being told so.
  6. Nasher

    Scaleform = Flash. Flash is bad enough on websites. Using it in a game is just a stupid idea :/
    • Up x 2
  7. BlackDove

    Correct. And something else a lot of people don't understand is how much of a modern CPU or GPU is dark silicon, both because of redundancy and because of power density.

    In addition to that, very little of the graphics card is the GPU chip itself. It's measured in square millimeters. The PCB has tons of things on it that are not designed to take the electrical or thermal loads that power virus code can put on them.
  8. warmachine1

    The transparent menu in BF4 costs me around 20 FPS, but the HUD makes no real difference.
    Must try this in PS2; I just need a place where I'm not locked at 60.
  9. Dragam

    Zcuron : I agree with most of what you say, but you have to realize that the "margin of error" regarding cooling, max TDP, and max allowed temperatures is very high.

    If we take my graphics cards as an example: at stock they use 180 watts and will reach 82c, but the max safe temp (as dictated by Nvidia) is 95c... giving them quite a bit of headroom.

    So even when I gave the GPUs unlimited power and they were at 140% TDP wattage (or roughly 250 watts), they were at 88c - still well within the max safe temp.

    Ofc the standard TIM used on even the best GPUs is junk, so with that replaced with proper TIM (such as the classic MX-2), the temperatures drop 5c... add a slightly more aggressive fan profile (in my case, I've set the fan to spin 5% faster than stock), and that's another 2c lower.
    Meaning that I now top out at 80-82c (depending on ambient temps), like the cards did at complete stock, while using 140% TDP, or roughly 70 watts more than the stock settings of the GPU.
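
    (Restating the arithmetic above, if anyone wants to plug in their own card's numbers:)

    stock_watts = 180
    oc_watts = stock_watts * 1.40        # 140% TDP = 252 W, i.e. "roughly 250 watts"
    extra_watts = oc_watts - stock_watts # ~70 W more than stock

    oc_temp_c = 88                       # at 140% TDP on the stock cooler/TIM
    after_tim_c = oc_temp_c - 5          # better TIM: -5c
    after_fan_c = after_tim_c - 2        # 5% faster fan: -2c -> 81c, back to stock-like temps
    print(oc_watts, extra_watts, after_fan_c)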

    Aka you can easily increase the TDP a lot, even with the stock coolers, though a bit of tweaking will do wonders.
  10. BlackDove

    Doesn't matter though. You can talk about the safety of keeping the silicon at 95C, but keeping a card at 140% TDP has two major problems.

    It's more likely that the capacitors and VRMs will fail first.

    The GPU is at 140% because the menus are loading it for no reason, meaning the GPU can't do actual work.
    • Up x 1
  11. Irathi


    To supplement what BlackDove said:

    Also, the degradation of the circuits accelerates with higher temperature. At a normal working temperature, say 80c, you have a given expected lifetime; at 90c it will be a lot shorter, and the 95c limit is set as a fail-safe because temperatures above it can start causing immediate damage, depending on the quality of the silicon and other components.

    Anything above 110-115c is usually considered deadly for even the most tolerant CPUs.

    But fried components aren't the only issue; with higher temperatures you also get an increased chance of instability. Instability can lead to crashes, blue screens of death, write errors to your hard drive, dead hard drives if the case ventilation is insufficient, and overheated PSUs frying, which can take your entire PC down with them.

    While we are on the PSU subject, a lot of mainstream gamers have PSUs with very little headroom. And a lot of those gamers also already OC their CPU, and perhaps the GPU as well, further reducing the headroom. At higher wattage draw the efficiency of the PSU also drops, meaning it will increase the temperature even further. I don't remember the exact numbers, but I think the best draw for a PSU is about 80% of its maximum capacity.
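
    To put some purely hypothetical numbers on that headroom point (these are not measurements, just an illustration of how fast the margin disappears):

    psu_capacity_w = 500
    cpu_w = 95 * 1.3      # overclocked CPU drawing ~30% over its rated TDP
    gpu_w = 195 * 1.4     # GPU pushed to 140% of its TDP by the menus/OC
    rest_w = 60           # board, drives, fans, etc.

    total_w = cpu_w + gpu_w + rest_w
    print(total_w, total_w / psu_capacity_w)   # ~457 W, ~91% of capacity - very little headroom left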

    If a 660 Ti with a Windforce (2x/3x) cooler is having trouble, then there is something seriously wrong with either the software (PlanetSide 2) or that card, because the Windforce is recognized as one of the top air coolers, rivaled only by one or two others like the Twin Frozr from MSI.

    In short: there shouldn't need to be a debate about whether this is a problem or not, because unnecessarily high temperatures will cause damage; it's only a question of how much and how fast.



    Edit: when people tell me their GPU is constantly in the 90s and their CPU is in the high 80s, it only tells me how little they know about the harm it could cause.
    • Up x 2
  12. Krayus_Korianis

    My GPU (ASUS GTX 660) doesn't get higher than 66 degrees Celsius. It sits right below my CPU, which is also around that temperature but a bit lower than 66 degrees. Obviously, someone's GPU isn't working right if it's hotter than 70, or they've got massive cooling problems. (I use fans, btw.)
  13. Irathi


    Just noticed I typed CPUs; I meant GPUs. Most CPUs can't handle temps anywhere near that high before they crash.
  14. Dragam

    BlackDove :

    That is very true!


    Irathi :

    You're absolutely right - if a chip has an expected life cycle of 600,000 hours at 60c, then it will be reduced to approximately 6,000 hours at 90c.
    But my example was the max temps (which even with the OCs are still very acceptable) - usually my GPUs hover around 65-68c in PS2, due to the nature of this game (the game bottlenecking on CPU core 1, rather than the GPUs).
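
    (Taking my own figures above at face value - they're rule-of-thumb numbers, not from a datasheet - the implied acceleration works out like this:)

    factor_total = 600_000 / 6_000       # 100x shorter life over a 30c rise
    per_10c = factor_total ** (10 / 30)  # ~4.6x shorter per extra 10c
    print(factor_total, round(per_10c, 1))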

    I also agree on the PSU part, and often think "REALLY?!" at the choices people make regarding PSUs... mine is a bit (actually a lot) overkill for my current setup, but it was bought for my previous quad-SLI build. I think 800 watts would suit my current needs quite a bit better.


    Krayus_Korianis :

    You've obviously not dealt with high-end GPUs before... they all exceed 80c at stock - only mid- and low-range GPUs stay below 70c at stock.

    http://www.guru3d.com/articles_pages/asus_geforce_gtx_780_strix_6_gb_graphics_card_review,8.html

    And bear in mind that the above test was done outside a computer case, giving the GPUs a lot more air to breathe.
  15. Kirppu1


    5 fps isn't a big difference. I (as an idiot) think they should use DirectCompute and tessellation - tessellation to improve LOD performance and DirectCompute to optimize the lights. Yes, it needs a ton of renderer work to be optimized, but these technologies have so much potential FPS gain.

    For example, Battlefield 3 runs smooth on my machine. 60 fps. No hitching. No drops. And that game looks a lot better than PlanetSide;
    in fact it sometimes has more to compute (particles, lights, ballistics, etc.) than PlanetSide. Yet my PC runs PlanetSide at something like 40-80 fps. Also notice the huge swings in fps.

    So what I am trying to say is:
    1. The renderer is poorly "optimized"; they just made the game look worse: https://forums.station.sony.com/ps2...at-think-the-graphics-didnt-get-worse.189266/

    2. The game does not use anything properly. The 64-bit client is a joke; the game doesn't use extensions that are 64-bit exclusive, nor multi-core CPUs, even though the PS4 could take advantage of these extensions.

    3. It's still on DX9. Why is that bad? http://www.slideshare.net/DICEStudio/directx-11-rendering-in-battlefield-3. And don't give me Bu****** about the PS4's OpenGL, because you can render with DX11 on PS4: http://www.geek.com/games/sony-iimprove-directx-11-for-the-ps4-blu-ray-1544364/

    So yes, they are either lazy or short on resources. Of course I read the tweet about DX11 that Smedley posted: https://twitter.com/j_smedley/status/428955184631279616, so I am optimistic about the DX11 thing.
  16. sean8102


    That article about the PS4 running DX11 is worded horribly. The PS4 does not run DX11 or any version of DirectX. DirectX is a closed-source API owned by Microsoft.

    What Sony did say is that they have a higher-level API programmers can use that supports all the same graphical features as DX 11.1, plus a few more.

    "Sony is building its CPU on what it's calling an extended DirectX 11.1+ feature set, including extra debugging support that is not available on PC platforms. This system will also give developers more direct access to the shader pipeline than they had on the PS3 or through DirectX itself. "This is access you're not used to getting on the PC, and as a result you can do a lot more cool things and have a lot more access to the power of the system," Norden said. A low-level API will also let coders talk directly with the hardware in a way that's "much lower-level than DirectX and OpenGL," but still not quite at the driver level."

    http://arstechnica.com/gaming/2013/...4s-hardware-power-controller-features-at-gdc/

    I guarantee you PlanetSide 2 on PS4 uses that low-level API. A lower-level API means less overhead and basically lets them squeeze more out of the hardware. I would imagine the API is quite similar to AMD's Mantle, but I'm not sure about that.

    Bottom line, though: what APIs the PS4 uses doesn't matter at all when it comes to the PC version of PlanetSide 2. The PS4 version will have its own code base and use whatever the PS4 uses to render, the PC version will continue to use DirectX, and if a Mac version ever came out it would use OpenGL.
  17. Gammit

    For what it's worth, no computer I've used has ever benefited from turning off the UI when playing this game. Nor do I see any noticeable fluctuation in performance or temperatures when using the menu screens.
  18. BlackDove

    What kind have you used and what do you check with?
  19. Octiceps

    You must be the only one.
  20. Kirppu1


    Ok, as you can see I really haven't researched the PS4, but I'd like to see the game use something like DirectCompute; OpenCL would be nice.