Performance when using NVIDIA SLI

Discussion in 'Player Support' started by indian spice, Nov 24, 2014.

  1. indian spice

    Hello everyone,

    I have finally saved up enough to get a new graphics solution for my computer, and am thinking about either getting two 970s or one 980. I would rather get the two 970s because of other, non-gaming purposes.

    I used to play this game with 2 460's back before the big Performance Update, which completely ruined SLI. There was terrible stuttering, a problem that I did not have in other games like Crysis 2. I traded my 460s for a 480, which I still use today. Has SLI compatibility been fixed since then for any of you multi-gpu people?

    The main game I play is Planetside 2, and I'm hoping SLI would give a performance boost rather than a loss, because that will influence which solution I get.

    Thanks.
  2. rockhead101

    I've got 2 770s and the game is now unplayable.
  3. f0d

    Aww, that's a shame, as I was thinking of doing the same as the OP after returning from a PS2 break of about a year.

    Thankfully I'm getting 100 fps on my single 670, so I can just run a single 970 and it will still be fine.
  4. BlackDove

    Just curious what you're using them for other than games, since the small chips have almost no DP performance.
  5. Rikkit

    Hmm lol, I run with a
    GTX 760
    i5
    16 GB RAM
    I have no problems playing this game on high (~60 fps in big fights)
  6. f0d

    I want to be able to use the new supersampling-type feature in the NVIDIA driver called "Dynamic Super Resolution" and run my games at a simulated 4K res.

    I already tried it and it looks nice; it's just that the framerate sucked.
  7. HotPepperDelivery

    Well, Dynamic Super Resolution is just a feature NVIDIA invested in to one-up AMD. Since it was released less than a year ago, I wouldn't expect it to run well. Unless NVIDIA decides to get their *** in gear, you'll have to wait for their optimization.
  8. f0d

    Dynamic Super Resolution actually runs really well, but if you want to play at 4K you need the grunt to be able to do it.

    My current GTX 670 runs it at 4K, albeit at low (30 fps) framerates, so the 2x 970s I'm getting should run it smooth at 3840x2400 resolution.
    Even one would probably run it smooth, but it would be even better if SLI worked with the game.
  9. indian spice


    To answer your question: for testing parallel processing stuff. No mining, I missed that boat. Small chips? I see your point; some of the specs for the 970 and 980 do not compare with those of the 780 Ti, but Nvidia said (sorry, no link) that the change in design reflects the future of gaming...something about textures...

    I am sad to hear that SLI is still broken, especially in a game that is/was branded by Nvidia. Remember when we had PhysX? I am in no rush to upgrade though; I still get 50-60 fps on med-high settings with my GTX 480 overclocked.
  10. BlackDove

    No, it reflects their alternating release schedule. Before the 600 series, the big chip with compute features (the 100- or 110-numbered die) and the small chip (the 104 or 114, which has no compute functionality and only single-precision performance) came out at the same time. With the 600 series they switched to releasing the small chip (GK104) as one series' top-end card (680), then releasing the big chip (GK110) as the next top-end card (780 Ti).

    They nerf DP performance on ALL consumer cards (GeForce) and save it for their Quadro and accelerator cards. Titan was the first consumer card where they left DP performance intact, but it was only so cheap ($1000 for Titan vs. $5000 for a K6000) because it had just 6GB of non-ECC RAM.

    If you're doing serious parallel work you should get a big chip card. A 580 is better for some compute tasks than a 680, for example. I'd recommend considering that before buying a couple of small chip cards. The GM200 based 1000 series will be much more impressive than GM204.

    The 900 series is only a little faster in games than the 780 was, and the actual GPU is smaller, with a lower transistor count. For compute, GK104 and GM204 are useless compared to GK110 or even GF110. Nvidia counts on most gamers not knowing or caring about compute performance.
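    To put some rough numbers on the big-chip/small-chip DP gap being described above, here is a back-of-the-envelope sketch. The clocks and DP:SP ratios are approximate public spec-sheet figures (1/32 for GM204, 1/24 for the nerfed GeForce GK110, 1/3 for Titan with its DP mode enabled), not measurements, and peak FLOPS is just a theoretical ceiling:

```python
# Theoretical peak: 2 FLOPs per CUDA core per cycle (fused multiply-add),
# with DP capped at an artificial fraction of SP on consumer cards.
def peak_flops(cores, clock_ghz, dp_ratio):
    sp = 2 * cores * clock_ghz * 1e9  # single-precision FLOPS
    dp = sp * dp_ratio                # double-precision FLOPS
    return sp, dp

cards = [
    # (name, CUDA cores, approx. base clock in GHz, DP:SP ratio)
    ("GTX 980 (GM204)",        2048, 1.126, 1 / 32),
    ("GTX 780 Ti (GK110)",     2880, 0.875, 1 / 24),
    ("Titan (GK110, DP mode)", 2688, 0.837, 1 / 3),
]
for name, cores, clock, ratio in cards:
    sp, dp = peak_flops(cores, clock, ratio)
    print(f"{name}: {sp / 1e12:.2f} TFLOPS SP, {dp / 1e12:.2f} TFLOPS DP")
```

    The point of the exercise: the 980 and 780 Ti land in the same SP ballpark, but Titan's uncapped ratio gives it several times their DP throughput.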
  11. indian spice

    Good points, but I still want to play games. I would like to test stuff with CUDA, but I can use my college's high-performance computing lab for the real work.

    Back to multi-card setups. I can be sure that the announced 980ti and the rumored 990 will be out of my budget.
    Is there anyone from the red (AMD) side who has a Crossfire setup working for them? I would probably wait for their 300 series cards first.
  12. BlackDove

    Crossfire, including XDMA Crossfire, does not work with DX9 games at all.

    Two 970s or 390s would be more expensive than a GM200 based card and would likely perform worse in games and other applications. You could potentially do better with two 780s or 780 Tis, assuming you don't need the double precision. Do you need double precision for anything?
  13. indian spice


    Not really. Looks like I'll have to seriously consider getting a 780.
  14. Aldaris

    The above quote is based on absolutely no solid information, seeing as we have no confirmed benchmarks on either the 300 series or any GM200 based cards, and absolutely no pricing information at all.
  15. Jac70

    I just did some testing: I ran the game at renderQuality 2.0, which I believe is equivalent to running at 4K (correct me if I am wrong). Even on a 1080p monitor it looks really crisp, noticeably better than running standard renderQuality. I got 35-40 fps in the WG. Two 970s would then be ideal if they scaled to near 50% extra performance, so it's a shame that SLI in this game is not well implemented.
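    For anyone wondering why renderQuality 2.0 is so expensive, a quick sketch of the arithmetic. It assumes the setting scales each axis of the internal render resolution (which is how these supersampling sliders usually work, and matches 2.0 on 1080p coming out to a 4K-UHD pixel count):

```python
# Estimate the internal render resolution and relative shading cost for a
# renderQuality-style supersampling factor, assuming per-axis scaling.
def internal_resolution(width, height, render_quality):
    return round(width * render_quality), round(height * render_quality)

def shading_cost(render_quality):
    # Pixels shaded relative to native resolution (grows with area).
    return render_quality ** 2

print(internal_resolution(1920, 1080, 2.0))  # (3840, 2160)
print(shading_cost(2.0))   # 4.0 -- roughly four times the GPU work
print(shading_cost(1.41))  # ~2.0 -- about double native, per the post below
```

    So going from 1.41 to 2.0 roughly doubles the pixel load again, which is consistent with a single card handling 1.41 fine but struggling at 2.0.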

    I had considered going SLI myself but the only reason would be to go from 1.41 to 2.0 renderQuality in this game because a single 970 handles the former just fine.

    As for NVIDIA DSR: has anyone tested whether it has a performance benefit over using the renderQuality setting to achieve the same effect?
  16. BlackDove

    Actually some GM200 benchmarks have leaked.

    Nvidia and AMD are also pretty predictable. Nvidia released the GK104 based 680 for about $500. Then they released the 780 for $650 and quickly dropped the price when they released 780ti.

    GM200 will probably launch as Titan X first, followed by the 1080 (or whatever they name it). Given the architectural changes, performance should be at least 50% greater than full GK110.

    The 980 is actually the successor to the 680, which is why it barely outperforms full GK110.

    Pricing should be about the same as the GK generation was: $1000 for Titan X and $650 for the 1080. They might rename the 980 to the 1070 or 1060 Ti and drop the price, like they renamed the 680 to the 770.

    We also know that GM200 will have a 384-bit bus and be very compute focused, since it's the big chip. Even the core count has leaked lol.
  17. BlackDove

    If you don't use double precision you are probably better off with GM204, so 980s or 970s, if you just need a lot of SP FLOPS.

    As for Aldaris saying we don't know how a GPU that's in production but not yet released performs: we already have architectural and performance info on things like Skylake, Knights Landing, and Nvidia GPUs that won't be released until 2016 (GP100). It's not exact, but it's pretty close.
  18. Aldaris

    Alleged leaks and benchmarks and, hell, even specs are not the same as concrete proof and real benchmarks/spec sheets. They're completely unreliable until the release dates and names have been confirmed by the makers, and someone from a reputable website physically holds the card in their hands and posts benchmarks. Read any website posting benchmarks for the 300 series or GM200 and they all come with the same warning and the same style of language for a reason: because it's not confirmed. It can be faked, and specs can change at the last minute, especially clock speeds.

    Pricing is impossible to guess at as well. You're literally trying to use a crystal ball, especially as both sides are going to release at the same time. You'll see a pricing war like none other.

    Architectural info, yes. Performance info? Erm, not at all. You can speculate based on that architecture, but there's no actual benchmark info out there. Prove your point and post benchmarks and performance info on Pascal GPUs, then.
  19. BlackDove

    http://blogs.nvidia.com/blog/2014/03/25/gpu-roadmap-pascal/

    No, there aren't exact performance numbers out, but saying we don't know anything about architectures that are in development just means you're not paying attention. Same thing with pricing: Nvidia isn't going to stop pricing things competitively. They've actually released much more info about GP100 than GM200 because they're having so many yield problems with the new processes. They might even skip 20nm altogether and go straight to 16. If GM200 is on 28nm, it's not going to be nearly as impressive compared to GK110 as GK110 was to GF110.

    GP100 will also be released to compete with Knights Landing (14nm, 72 Atom cores with stacked DDR4), which is estimated at 3 TFLOPS DP / 6 TFLOPS SP, so it will be somewhere around that.
  20. Aldaris

    Not once did I say anything about architecture being released; I said concrete specs and benchmarks. You're the one who stated performance info exists, and yet the link I asked you to provide shows no such thing.

    Not the same thing with pricing. Nvidia never price competitively. They charge a massive premium on everything. It's AMD who go for aggressive price cuts, such as the recent 290X cuts leaving them in a competitive position. There's a reason Nvidia are always more expensive than AMD. But again, to reiterate, you cannot possibly state what pricing they'll use. At all. You have no idea of the profit margins, and you have no idea of the competition.

    Quite simply, your advice was based on speculation and what appears, from all your posts, to be Nvidia fanboyism. Until we get finalised specs from both families and real world benchmarks (not speculation based on architecture, as you are doing), I wouldn't be suggesting any upcoming GPU over another based on such dubious, unconfirmed aspects as performance and price, as you did here: "Two 970s or 390s would be more expensive than a GM200 based card and likely perform worse in games or other applications."