Please get rid of the Scaleform UI menus!

Discussion in 'Player Support' started by BlackDove, May 15, 2014.

  1. Lavans

    5 minutes at the loadout terminal? Ok.

    This is with a reference GTX680, and the temps look perfectly normal to me.


    Seems like there's something wrong with your setup if that's the case.
  2. BlackDove

    Wrong? I said it refers to the memory controller... did you misread it?

    And by the way, GPUs don't use VRAM... haven't for years. They use SGRAM, lol.
  3. Lavans


    Really?
    http://www.guru3d.com/news_story/6gb_version_of_the_geforce_gtx_780_ti_planned.html

    The more you talk, the more misinformed you seem. First you reference MCU as if it has any bearing on GPU usage, TDP, or temperatures (which it doesn't)... and now you say that video cards don't have VRAM. Fail.
  4. LibertyRevolution


    I have an EVGA GTX570 Super Clocked, so it generally runs hot. Its max operating temp is 97°C.

    I don't think it's a problem with my setup. I regularly dust my components, and I have a case with five 120mm fans and a mesh front.


    Short of liquid cooling, I'm not sure what more I could do. ;)
  5. Lavans


    That does not mean the quality of the integrated components on the PCB is desirable; they may be low-yield, but passable by manufacturer standards. It also does not mean the GPU die is making proper contact with the cooler for thermal transfer. A lot more goes into component heat than airflow and cleanliness.

    My first reference HD5970 used to overheat on a regular basis (going well above 100°C), even after I disassembled the cooler and applied high-quality thermal paste. After RMAing it, my replacement reference HD5970 never exceeded 90°C and usually lingered in the low to mid 80s.
  6. Smagjus

    Actually, BlackDove is somewhat correct. The term VRAM originally described a technology that differs from what we currently use. However, since the term was already well established, it is now commonly used as a term for whatever memory a GPU uses. It is not technically correct, though.
  7. Lavans


    It's ambiguous at best. Nvidia and AMD market their Quadro, Tesla, and FireGL cards with SGRAM. However, gaming-level GPUs are still marketed as using VRAM, and there is no documentation from official sources that describes consumer-level GPUs as having SGRAM. It seems odd to me that a Tesla K40 is advertised with 12GB of SGRAM, but a GTX780 Ti is advertised with 3GB of VRAM. Advertising aside, if consumer-level GPUs did use SGRAM (which is arguably superior), then why hasn't AMD or Nvidia marketed it as such? Surely, advertising a much-improved memory technology would stir a huge ruckus within the PC community.
  8. BlackDove

    Actually, they DON'T use VRAM. VRAM is a totally different technology from the SGRAM used in modern GPUs.

    http://en.wikipedia.org/wiki/VRAM

    That's dual-ported RAM, totally different from the GDDR3 or GDDR5 SGRAM used on all modern GPUs.

    GDDR5, like GDDR3, is all SGRAM: http://en.wikipedia.org/wiki/GDDR5

    http://en.wikipedia.org/wiki/Dynamic_random-access_memory#Synchronous_graphics_RAM_.28SGRAM.29

    Is the manufacturer of the chips official enough?

    http://www.micron.com/-/media/Docum...Note/DRAM/tned01_gddr5_sgram_introduction.pdf

    The Quadro, FireGL, and other accelerator cards that are so expensive happen to use ECC GDDR5 SGRAM. The ECC makes it much more expensive. It's ALL SGRAM though.

    The term SGRAM is probably too technical for BS marketing. They rebrand chips and say a 280X is a new GPU, but it's actually the same chip as a 7970. People believe it despite it being inaccurate. Why? Lol, misinformed.

    You asked me what MCU referenced, and for the third time: it's referring to the memory controller.

    Your screenshots indicate that your memory controller's USAGE nearly doubled from the map screen to the menu, along with power, clocks, etc. That's why I brought it up, along with the other factors that DO indicate the load on the GPU. Memory controller utilization isn't an indicator of workload?
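    The two counters being argued over here can be read directly and compared. A minimal sketch, assuming an NVIDIA GPU with `nvidia-smi` on the PATH; the query fields `utilization.gpu` and `utilization.memory` are standard `nvidia-smi` fields, with `utilization.memory` reporting how busy the memory controller is, a separate counter from shader/core utilization:

```python
# Sketch: reading GPU core load and memory-controller (MCU) load as
# separate counters, via nvidia-smi's CSV query output.
import subprocess

QUERY = "utilization.gpu,utilization.memory,temperature.gpu,power.draw"

def parse_smi_line(line):
    """Parse one CSV line produced with --format=csv,noheader,nounits."""
    gpu, mem, temp, power = (field.strip() for field in line.split(","))
    return {
        "gpu_util_pct": int(gpu),    # shader/core busy time
        "mem_ctrl_pct": int(mem),    # memory-controller busy time
        "temp_c": int(temp),
        "power_w": float(power),
    }

def sample_gpus():
    """Query nvidia-smi once; returns one dict per installed GPU."""
    out = subprocess.check_output(
        ["nvidia-smi", f"--query-gpu={QUERY}",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return [parse_smi_line(line) for line in out.strip().splitlines()]
```

    Logging `gpu_util_pct` against `mem_ctrl_pct` on the map screen versus in the menu would show whether the two counters actually move independently, which is the point of contention in the posts above.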
  9. Lavans

    Using Wikipedia as a cited source? I love how none of those wiki pages link to a cited source stating that GDDR5 is exclusively SGRAM.

    This PDF does not confirm what type of memory gaming-level AMD and Nvidia GPUs use, such as the Rx series and GTX series cards. All it does is confirm what I already said: GDDR5 SGRAM is a thing, at least with FireGL, Quadro, and Tesla GPUs.

    Tell you what: if you can find an official statement from AMD or Nvidia that outlines their gaming GPUs using SGRAM, then I'll tip my hat and say you are correct. To be clear, an official statement means a statement found on their site.

    You never said what aspect of the memory controller it is referencing. The correct answer is "memory controller usage". Saying "it's referring to the memory controller" is an ambiguous answer at best.


    Saying that MCU is synonymous with GPU workload is like saying that system RAM usage is synonymous with CPU workload. You can have high MCU with low GPU usage, and vice versa. One does not denote the other.

    In the case of the screenshots, a tech-savvy individual can deduce this: there are no 3D elements with textures being drawn with the map loaded up, and the 2D elements are extremely easy for the GPU to render. Combine that with the low power state of the GPU and you have a very clear indication that 2D elements are not acting as a power virus, despite your frivolous and unfounded claims. If 2D elements did act as a power virus, then my GPU would be fully maxed out and at full power when looking at the map.
  10. Dragam

    Lavans: Regarding your video, I'd say those are very good temperatures for a reference 680 at close to maximum load... even if your fan profile is a bit aggressive :)

    But I wonder, how hot does it get if you use the standard fan profile?

    Regarding my own cards, this is after having played an entire alert.

    http://i.imgur.com/GrqOAVi.jpg

    As you can see, the max GPU load is 93%, but the max TDP is well beyond 100%, despite using standard clocks (my 2nd card is more power-hungry than the 1st).
  11. BlackDove

    OK, here's a basic lesson on RAM. JEDEC sets the industry-wide standards that chipmakers use to make compatible products. If you don't know what JEDEC is, Google it.

    http://www.jedec.org/about-jedec/member-list Notice AMD and Nvidia in the list, along with ALL the suppliers of their memory and basically every chipmaker in the world.

    That's what you don't seem to get. The JEDEC standard for GDDR RAM, which is Graphics Double Data Rate RAM, means that when you see "GDDR3" or "GDDR5", IT'S ALL... let me repeat... ALL SGRAM.

    VRAM hasn't even been common in the last DECADE. There is literally no such thing as GDDR3 or GDDR5 VRAM! Every place on Nvidia's or AMD's websites that says GDDR3 or GDDR5 isn't referencing some magic VRAM from the 80s or 90s that isn't even used anymore.

    Here is the MOST OFFICIAL info you can get: the JEDEC standard for GDDR5 SGRAM.

    http://www.jedec.org/sites/default/files/docs/JESD212.pdf

    See? It's ALL single-ported SGRAM. Same for GDDR3, GDDR4, etc.
  12. Lavans


    With driver defaults, it bounces between 77-78°C. Quite normal for a GTX680 :)
    http://i.imgur.com/Fh7g2o4.jpg
  13. Eyeklops

    Deus Ex's UI doesn't have to draw a mini-map with 48+ red dots (plus vehicles and friendly dots) on it.

    IMO, GPU overheating is usually due to a poor PC build. Stuffing a high-power GPU into a crap case with crap cooling and trying to run one of the most GPU/CPU-taxing games in existence is just asking for trouble. The GPU "overclock wars" between card vendors don't help either, as most of those vendor "custom" heatsinks are only marginally better than the reference cooler at best (they are usually out to reduce noise, not significantly increase cooling performance).

    I run a bit of overkill on my GTX690 setup and never go above 60°C on the GPUs, ever, but most Nvidia GPUs are stable to around 75-80°C when not seriously overclocked.
  14. Lavans


    What? No official statements from AMD/Nvidia?
    Get back on topic :)
  15. Dragam

    Lavans: indeed quite normal, and what my cards used to be at with default settings, before I replaced the TIM :)

    But I don't understand all those people worrying that their cards go above 70°C... it's perfectly safe... although I probably wouldn't be OK with my cards reaching 95°C as the stock 290X does :O
  16. Lavans


    IMO, no modern card should reach 95°C. 95°C being a normal temperature hasn't been a thing since 2010 at the very least.

    However, people worrying over their temps being in the mid to high 80s are usually people who are uneducated in the matter and accept bad information thrown around forums, rather than looking at reviews from hardware sites like Anandtech, Guru3D, and Tom's Hardware.
  17. zaedas


    Worrying about my temps going over 80? No.
    Preferring to have my temps under 60 under load? Yes.

    I take pride in having a system with efficient cooling, and cooler can't be anything but better. The only reasons for a card to reach 80+ degrees Celsius at 100% load would be that a) you have some pretty nasty ambient temps, b) you are running a killer setup with 2-, 3-, or 4-way SLI (so there isn't enough fresh air / space to blow out the hot air), or c) your card has a poor cooling solution.

    Anyway, to go back to the 95 degrees Celsius: we DO have a modern card running at that temp with the stock cooler. I present to you: the AMD R9 290X!!!! (100% load)
  18. Dragam

    Lavans: I entirely agree. I don't understand how it wasn't completely apparent to AMD that the cooling solution on the 290 and 290X is insufficient.

    And yeah, the reviews of the 680 actually show it going into the low 80s, yet there are tons of forum posts from people worrying about their temps being in the 70s.

    I suspect a large part of those people come from low/mid-end cards, which tend to run a lot cooler than high-end cards, and then get worried about the temps of their new high-end card.
  19. Lavans


    I know. I never said modern cards running at 95°C don't exist. I simply stated there is no reason for it :)
    It's quite ridiculous that AMD couldn't pair a $600 product with a better cooling unit.


    Right? Lol!
    I still remember going from an X1600 XT (a fantastic card for its time) to dual HD4870s. The difference in temperature was astounding, even with the HD4870's cooler being massive by comparison. Those were the days :D
  20. BlackDove

    Wow. You actually ignore JEDEC, which literally DEFINES THE STANDARD, including every technical detail of every piece of GDDR RAM manufactured by every RAM maker in the world!?

    Here's some news for you: Nvidia and AMD don't manufacture GPU chips or RAM chips, so they aren't the last word on RAM.

    TSMC manufactures the GPUs that Nvidia and AMD design, and companies like Hynix, Micron, and Samsung make the actual RAM chips that add-in board makers assemble into a finished graphics card.

    You want something hilarious to show how wrong you are? Remember how I said VRAM hasn't been common in over a DECADE?

    http://www.nvidia.com/object/IO_20020114_4543.html

    There's a nice press release from 1998, around the time SGRAM replaced VRAM as a standard, discussing their Riva GPUs, lol. I guess your info about GPUs using VRAM was even more outdated than I thought, lol.

    Get back on topic? OK, but first I needed to correct some technical inaccuracies that you were posting in my thread and disrupting it with.

    I don't think you ACTUALLY believe that GPUs have used VRAM since the early 2000s. Rather than admit that your sources of information are geared towards gamers and overclockers (Anandtech, Tom's Hardware, etc.) and not engineers (JEDEC and RAM manufacturers' official technical documents), and that by definition ALL GDDR RAM IS SGRAM and you didn't realize that, you tried to use excuses, ad hominem attacks, and condescending smileys to distract from the fact that you were factually incorrect.

    Why do they just call it GDDR RAM now and leave out the SGRAM? Probably the same reason they don't list SP or DP FLOPS on their consumer parts: the market they're sold to doesn't need to be that TECHNICAL.

    Here are some CUDA devs discussing the difference between the ECC GDDR5 SGRAM on a Quadro and the non-ECC GDDR5 SGRAM on a GTX 460. Engineers and devs are that technical.

    https://devtalk.nvidia.com/default/topic/473934/gtx-4xx-series-and-gddr5-edc-memory-checking/

    So yeah, go read the JEDEC standards and get back on topic.
