41 Comments

  • Stochastic - Monday, July 30, 2018 - link

    It would be a huge PR blunder to tease this and NOT announce 11-series GPUs. This is as close to a confirmation of what we all know is coming as we're going to get.
  • imaheadcase - Monday, July 30, 2018 - link

    With all the leaks out about its specs they might as well.
  • Yojimbo - Monday, July 30, 2018 - link

    I dunno, I doubt it would be a PR blunder. Look at Nintendo and the Switch. It took forever for them to say when it would be released; they kept holding events and never talked about the Switch (which didn't even have that name yet) at them. Then the Switch was finally announced, later than everyone was hoping, and it seemed to sell just fine. I haven't noticed any public backlash over it.
  • edzieba - Tuesday, July 31, 2018 - link

    Bingo. The only people this would be a 'PR disaster' for are those who take rumours as Cast Iron Double Confirmed Absolute Fact, then complain when they turn out not to be. That's an unfortunately large population in terms of internet whinging volume, but very small in terms of actual buyers.
  • Samus - Tuesday, July 31, 2018 - link

    Yep, one of the many reasons people hate Apple is actually something that's out of Apple's control: rumors.
  • haukionkannel - Tuesday, July 31, 2018 - link

    It could be another game-related announcement, like a new web-based game streaming service, etc...
  • CaedenV - Tuesday, July 31, 2018 - link

    But they are just now announcing new GTX 10-series chips... I am not sure the 11-series is really in the cards until next spring at the earliest.
    Maybe an announcement and a soft launch, but I'm not expecting any real products for a while yet.
  • PeachNCream - Tuesday, July 31, 2018 - link

    Eh, whatever. Discrete graphics cards have remained above MSRP for pretty much the entire duration of the current generation's retail lifespan. In addition to that, the top end has grown significantly in price as the number of competitors has decreased over the last couple of decades. Factor in the dramatic increase in TDP that makes double slot coolers with multiple fans or system blowers plus independent, dedicated power delivery from the PSU a necessity and the graphics picture gets even less rosy. Technological advancement in graphics has included so much brute force that isn't replicated in other major components that it's just a bleak and depressing element of the computing industry. NVIDIA could announce they were selling gold foil wrapped, chocolate chicken butts and it would matter just as little to me as a new generation of graphics processors.
  • webdoctors - Tuesday, July 31, 2018 - link

    There's a lot to complain about, but this part is COMPLETELY WRONG:

    "Factor in the dramatic increase in TDP that makes double slot coolers with multiple fans or system blowers plus independent, dedicated power delivery from the PSU a necessity and the graphics picture gets even less rosy. Technological advancement in graphics has included so much brute force that isn't replicated in other major components"

    The scaling of GPU performance relative to TDP or silicon area dwarfs what you've seen in CPUs, RAM, power supplies, or case fans. What other components are you comparing against?

    How much faster are CPUs today compared to Sandy Bridge from 2011? 50%? GPUs are several times faster...
  • PeachNCream - Tuesday, July 31, 2018 - link

    Most AGP and 32-bit PCI graphics cards were powered entirely by the slot in which they resided, and many didn't even require heatsinks, like the S3 ViRGE DX or the Tseng Labs ET6100. I think the first video card I owned that even had a heatsink on the graphics chip was a Diamond Stealth V550, and that, unsurprisingly, was powered by an NVIDIA chip (the Riva TNT). So yes, graphics cards are now an obscenity when it comes to heat, noise, and power if you want anything high end. At the same time, AMD and Intel have both kept the TDP of their processors down to reasonable levels (around 35-65W for mainstream desktop hardware), so I see no reason why a GPU has to be such a power hog.
  • MajGenRelativity - Tuesday, July 31, 2018 - link

    Yes, AMD and Intel have kept TDP down, but the performance increase hasn't matched what GPUs have delivered. As for GPU TDP growth, it's held steady for the past several years. Let's take the GTX 580 vs the GTX 1080, for example. The GTX 580 cost slightly less but had a higher TDP (244W vs 180W for the 1080). Now compare the performance of the 580 vs the 1080. Next, take the i7-2600K vs the i7-8700K. Both have the same TDP, and the 2600K cost $30 less. Compare the performance of the 8700K vs the 2600K. Does the 8700K deliver the same performance boost over the 2600K as the 1080 does over the 580? And the 1080 did all this while REDUCING the TDP (rough numbers are sketched below).
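
    A rough perf-per-watt sketch of the comparison above. The TDP figures come from the comment; the performance multipliers (~3x for the 1080 over the 580, ~1.8x for the 8700K over the 2600K) are ballpark assumptions used purely for illustration, not benchmark results:

    # Perf-per-watt sketch; performance multipliers are assumptions, not measurements.
    def perf_per_watt(relative_perf, tdp_watts):
        return relative_perf / tdp_watts

    # GPUs: GTX 580 (244 W) as the 1.0 baseline; GTX 1080 (180 W) assumed ~3x faster.
    gtx_580 = perf_per_watt(1.0, 244)
    gtx_1080 = perf_per_watt(3.0, 180)

    # CPUs: i7-2600K (95 W) as the 1.0 baseline; i7-8700K (95 W) assumed ~1.8x faster.
    i7_2600k = perf_per_watt(1.0, 95)
    i7_8700k = perf_per_watt(1.8, 95)

    print(f"GPU perf/W gain: {gtx_1080 / gtx_580:.1f}x")   # ~4.1x under these assumptions
    print(f"CPU perf/W gain: {i7_8700k / i7_2600k:.1f}x")  # ~1.8x under these assumptions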
  • PeachNCream - Tuesday, July 31, 2018 - link

    Those examples are limited to a relatively narrow number of years. I agree that if you limit your time horizon to 2010 through the present day, you'd see a somewhat more favorable set of circumstances, but by 2010 dGPU manufacturers were already firmly entrenched in solving the technical hurdles of greater performance by throwing more electrical power and larger heatsinks at the problem. The move off 28nm helped the current generation, so that one-off gain paints a slightly less painful picture, but it was largely squandered on ramping up clocks rather than improving efficiency while merely holding the line on already fat TDP numbers. There's no reason why, in NVIDIA's current product stack, the only GPU that doesn't require direct power from the PSU is the lowly 1030. Don't get caught in the trap of being satisfied with 75+ watts just because it's been that way for a few years. It seems like that's a common stumbling block: the average enthusiast's "draw distance" is limited to a handful of years when glancing back at the past.
  • MajGenRelativity - Tuesday, July 31, 2018 - link

    I feel like you don't understand how efficiency works. Efficiency can be loosely defined as the amount of "work" (work in the layperson's sense of "stuff done," not the technical definition) per unit of energy expended, e.g. per joule. Increasing the clockspeeds, and the "stuff done," for the same TDP is an increase in efficiency. 14/16nm did in fact increase efficiency. You're confusing increasing efficiency with lowering TDP; they're not the same thing. If I made a product that performed 1/10 as well as a 1080 while having 1/9 the TDP, you might be happy, but it would actually have lower efficiency (a quick worked version of this appears at the end of this comment).

    As for going only back to 2010, we can cast our eyes all the way back to the 8800 Ultra in 2007, which had a 171W TDP. Yes, we can go further back, but there really isn't a point. The electronics industry has changed tremendously since before then, and GPUs have changed a lot as well.

    Also, the GTX 1050 and 1050Ti have a 75W TDP, and can come without power connectors.
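
    A quick worked version of that 1/10-performance example, treating the 1080 as a normalized performance of 1.0 at its 180 W TDP (both the normalization and the hypothetical card are illustrative assumptions):

    # Efficiency = work (normalized performance) per watt.
    gtx_1080_perf, gtx_1080_tdp = 1.0, 180.0
    hypo_perf, hypo_tdp = gtx_1080_perf / 10, gtx_1080_tdp / 9  # 0.1 perf at 20 W

    gtx_1080_eff = gtx_1080_perf / gtx_1080_tdp  # ~0.0056 perf per watt
    hypo_eff = hypo_perf / hypo_tdp              # ~0.0050 perf per watt

    print(hypo_eff < gtx_1080_eff)  # True: lower TDP, but also lower efficiency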
  • PeachNCream - Tuesday, July 31, 2018 - link

    That's just the thing, more "work" isn't being accomplished regardless of the improvements. Taken from the perspective of the home graphics card's purpose, entertaining the person at the keyboard, there's no more or less entertainment happening just because a modern graphics card requires more power. That's been the case, I'd argue, since DOS 6.22 was the primary PC operating system. From an abstract perspective, a game back then was just as much of an amusing time sink as a game now. TES: Arena could eat someone's free time and offer a compelling, amusing thing to do just as well as Fallout 4 can now. The wrinkles in that thinking come with the fact that a 486-class desktop ran full-tilt with a roughly 100W PSU (I owned a Packard Bell that was packing a 60W PSU) and that same output in a modern PC is pressed rather hard to supply a similarly smooth, seamless gaming experience. Therefore, while there are more transistors flipping on and off more quickly now, at least where gaming and gaming graphics are concerned, there's no additional work accomplishment for all the effort and the whole thing makes about as much sense as a bunny with a pancake on its head.
  • MajGenRelativity - Tuesday, July 31, 2018 - link

    So, more transistors flipping more quickly are magically supposed to consume less power? You're not defining work correctly at all. More calculations are happening at a greater speed, so more work is being done.
  • PeachNCream - Tuesday, July 31, 2018 - link

    The end result of those transistors flipping on and off is entertainment. That part isn't changing since the human looking at the screen or wearing the VR glasses is still getting the same benefit for the increasing input costs of power and money.
  • MajGenRelativity - Tuesday, July 31, 2018 - link

    You're saying that a game today is exactly the same level of complexity as the 1980-1990 era you keep comparing it to? There has been absolutely ZERO increase of performance and/or realism? You played 4K/60Hz games in the 1990s?
  • PeachNCream - Tuesday, July 31, 2018 - link

    I'm giving up after this. The end user gets x number of hours of entertainment regardless of the lighting effects, resolution, presence or absence of MSAA, or anything else the additional compute power offers. The end user got those hours of entertainment in 1993 and the end user gets them now in 2018. While there is certainly a difference in how things look, the outcome is identical. End user = amused.
  • MajGenRelativity - Tuesday, July 31, 2018 - link

    If we're just looking for x number of hours of entertainment, why use computers at all? Read a book, play cards, do whatever. Anyways, I enjoyed the debate. Have a pleasant day.
  • matthewsage - Tuesday, July 31, 2018 - link

    No, it's not just the "lowly 1030" that doesn't require direct power from the PSU. There are several GTX 1050 and GTX 1050 Ti graphics cards that are content with drawing power from the PCIe slot. Both are pretty capable cards.
  • PeachNCream - Tuesday, July 31, 2018 - link

    The 1050 isn't an across-the-board power-connector-less GPU; it's a bit of a toss-up whether or not the OEM includes one. Since it's on the threshold, it isn't something I was willing to include. As I was typing that statement about the 1030, it crossed my mind to make that comment, but I felt like I'd be preempting your point and trying to defeat it in advance, which would have been rather rude, so I didn't take that approach.
  • MajGenRelativity - Tuesday, July 31, 2018 - link

    If it consumes 75W, which is supported by the PCIe slot, and comes without a power connector, then I don't see how it's on the threshold. It's up to OEMs if they want to overclock and increase the power consumption for that, same as for every card.
  • PeachNCream - Tuesday, July 31, 2018 - link

    Not all PCIe slots support the 75W specification. For instance, Dell OptiPlex boxes, good candidates for low-end graphics card upgrades when they enter the gray market as refurbs or second-hand systems, have had PCIe x16 slots that can't handle more than 37W. Furthermore, 1050s at stock clocks can ship with an external power connector. Since that's outside of our control, it really isn't fair to say the 1050 is free from direct-from-PSU power. Conversely, there are no 1030 cards that ship with such a connector; in fact, it'd be laughable if they did, given the TDP is around 30W.

    At this point, I think we're beating a dead horse. I'm not looking forward to the 11-series launch because I have set my expectations based on a lot of the things I've already explained, and no amount of back-and-forth between us is going to change that. NVIDIA might be able to get me a bit excited if the company can deliver a credible product stack at a reasonable price range, but to say I'm skeptical of that possibility is an understatement.
  • MajGenRelativity - Tuesday, July 31, 2018 - link

    I mean, I'm fine with dropping the conversation. That being said, any PCIe slot that doesn't support the spec is out of spec, and shouldn't be counted.
  • PeachNCream - Tuesday, July 31, 2018 - link

    I'm fine with not counting the slots that don't support the spec, but it's equally fair to discount the cards that straddle the line and may or may not feature an external power connector.
    :D
  • MajGenRelativity - Tuesday, July 31, 2018 - link

    It's definitely an ambiguous situation. I can agree on that.
  • MajGenRelativity - Tuesday, July 31, 2018 - link

    As a footnote, Intel's and AMD's TDPs have crept up for the high end too. The i9-7980XE has a TDP of 165W (greater than that of a GTX 1070), with practical power consumption higher than that (especially if overclocked). AMD's Threadripper 1 lineup has a TDP of 180W (equal to a 1080), with their Threadripper 2 lineup being rumored all over the place, but going up to 250W (equal to a 1080Ti). Lastly, the AMD FX-9590 had a 220W TDP.
  • PeachNCream - Tuesday, July 31, 2018 - link

    Yup, those TDP numbers for TR and the i9 are just as shamefully pathetic, and you're absolutely right to call out both Intel and AMD for it. The difference on the CPU side is that there's still a wide range of choices with reasonable TDP numbers, and the more costly, hotter-running parts add little to no performance advantage in a significant number of modern computing tasks, including the enthusiast focus of gaining FPS in a benchmark. There are marginal differences between a 65W TDP CPU running 4 cores and a 180W model with 16 cores (for now; I admit that can certainly change, and said change will result in a valid set of complaints about the stupidity of future CPU products too). The probability of that changing anytime soon is low, since PC gaming is beholden to what gets ported over from consoles, or what also has to run on consoles in the case of parallel development, so CPU power as a performance factor and as a TDP concern is much reduced.
  • MajGenRelativity - Tuesday, July 31, 2018 - link

    Nevertheless, they're still the "high end". You cannot criticize the GPU high end, while completely ignoring the CPU high end. It's unfair to compare the power consumption of a $700 GPU to a $100 CPU. They're not even aimed at the same markets.
  • PeachNCream - Tuesday, July 31, 2018 - link

    I didn't leave out the criticism. In fact, I agree with you completely that the upper end of the price spectrum when it comes to modern CPUs deserves every bit of the criticism you've leveled against it. In fact, I'll even go as far as saying that since there's no benefit in a large number of use cases for high TDP processors where a home PC is concerned, those chips make even less sense than whatever NVIDIA is selling in the XX80-class ranges.
  • MajGenRelativity - Tuesday, July 31, 2018 - link

    Alright. Then we should leave the high ends of both markets completely out of the discussion and stick to 1050/1060-level GPUs and i3/i5-level CPUs. In that case, the TDPs are far more comparable: 95W for the highest-powered i5 vs 120W for the 1060, and 65W for an i3 vs 75W for the 1050 Ti.
  • philehidiot - Tuesday, July 31, 2018 - link

    Yeah, I remember the days of single-slot cards. Then double-slot coolers came along and started blocking your PCI slots. Once mobo manufacturers started shifting things around, it became the accepted norm, and we now even see the occasional triple-slot card.

    I think the consumer is partly to blame in their ready acceptance. The whole "wow, my card is so powerful it needs a double sized cooler and two extra power cables" is almost a boast for many people as gaming hardware seems to edge towards implying excess (just look at the shape and size of "gaming" cases with their windows to show off your amazing, glow in the dark hardware). It's like a car having a giant spoiler and flared wheel arches. You might also stick a giant engine in there knowing full well it's overpowered, just for kicks.
  • PeachNCream - Tuesday, July 31, 2018 - link

    I think you're right in that we consumers ought to be shouldering much of the blame for the state that the PC enthusiast market is in today. What's produced is what sells and one article or another here on Anandtech pointed out that products with RGB offer better sales and higher margins, so we shouldn't be surprised. I've actually heard people brag about the number of power connectors and the size of the cooler on their graphics card and you're on point about it.

    I guess at this point, though, unless NVIDIA's 11-series is going to move the TDP numbers somewhat lower, I just can't find a reason to feel excited about it. I'm expecting another ho-hum product cycle where the only two horses in the race offer an assortment of cards that spatter performance charts with a few improvements at the cost of too much power and too much space, at a price I'm not interested in paying.
  • tarqsharq - Tuesday, July 31, 2018 - link

    Are you faulting AMD and Nvidia for "brute forcing" a rendering task that can be run in parallel at an enormous scale in order to give you fast frame rates?

    The only reason CPUs haven't followed suit is that game engines, and most of the program tasks end users run, are largely single-thread limited.

    If games were easy to program to run on 16 threads, and were faster because of it, we would see massive CPUs with massive TDPs being highly sought after by the enthusiast gaming community, instead of games happily running on 4-core SMT-enabled CPUs.
  • PeachNCream - Tuesday, July 31, 2018 - link

    We do see many-threaded games and applications, just not in the PC space yet. That's already happening in the mobile sector, though. The paradigms are different, as are the platform constraints, but the programmatic problems are similar enough, and lots of common programs, including games, are taking advantage of lots of threads. The same holds true with consoles, which are the source of a lot of PC games through porting, so I don't think it's a programming problem. In fact, I'm not really sure why we haven't seen greater parallelism at the CPU level in the home computing space. If I were to hazard a guess, I'd say it's because of the effectively "infinite" electrical energy available: we can afford to ramp single-threaded performance despite the inefficiency, because we don't have to worry as much about cooling, and battery life isn't a concern at all.
  • MajGenRelativity - Tuesday, July 31, 2018 - link

    Have you ever tried to program a multi-threaded application? I've read in multiple articles that multi-threading is an order of magnitude more difficult than single-threading. To be upfront and honest, I have had very limited experience with multi-threading, but I've seen plenty of articles on how difficult it is to do stably and performantly.
  • PeachNCream - Tuesday, July 31, 2018 - link

    I have and it's not. We have tools now that make it nearly or completely trivial depending on the language you're using to write code.
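
    For context, this is the sort of high-level tooling presumably being referred to: a minimal sketch using Python's concurrent.futures to spread a CPU-bound function across cores. Whether this kind of embarrassingly parallel pattern carries over to game-engine workloads is exactly what's disputed in the replies below, and the function here is a made-up stand-in:

    # Minimal sketch: high-level parallelism via a process pool.
    # 'expensive' is a hypothetical stand-in for a CPU-bound task.
    from concurrent.futures import ProcessPoolExecutor

    def expensive(n):
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        inputs = [10_000_000, 20_000_000, 30_000_000, 40_000_000]
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(expensive, inputs))  # distributed across cores
        print(results)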
  • MajGenRelativity - Tuesday, July 31, 2018 - link

    Ok
  • CiccioB - Tuesday, July 31, 2018 - link

    You're joking.
    Multi-threading is something that is completely outside the optimization capabilities of any tool, in any language.
    It requires the human brain to think in a concurrent mode (which it is not well suited for) and to manage all the overheads that arise along the way.
    Thinking up and creating a multi-threaded algorithm is not the same as simply checking a checkbox somewhere in the compiler options or in the framework.
    You have been talking all this time about things you clearly don't have a clue about.
    Keep on playing with the Commodore 64, which uses very little power and is plenty fast enough for your childish thoughts and problems.
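
    To make the hazard concrete, here is a minimal sketch (purely illustrative, not anyone's actual workload) of the kind of bug that high-level tools don't remove on their own: a shared counter updated from several threads without synchronization.

    # Unsynchronized read-modify-write on shared state: a classic data race.
    # The += is several interpreter steps, so updates from different threads
    # can interleave and be lost; the final count is timing-dependent.
    import threading

    counter = 0
    lock = threading.Lock()

    def unsafe_worker(n):
        global counter
        for _ in range(n):
            counter += 1            # racy: load, add, store can interleave

    def safe_worker(n):
        global counter
        for _ in range(n):
            with lock:              # serializing the update fixes it
                counter += 1

    threads = [threading.Thread(target=unsafe_worker, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # may come up short of 400000; swap in safe_worker to fix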
  • CaedenV - Tuesday, July 31, 2018 - link

    I am not really expecting 11-series chips to come out yet, but if they do, I am really, REALLY hoping that they are a big step forward on the gaming front and a huge step backwards for crypto mining.
    I would love to be able to sell my GTX 1080 for nearly full price to a crypto miner and upgrade to something faster that can actually play 4K games without issue. The 1080 is almost there... but frames often dip below 30fps, which becomes problematic at times. A good 20% boost at 4K would be a very welcome improvement.
  • CiccioB - Tuesday, July 31, 2018 - link

    That is impossible. Mining and gaming use the same compute capabilities, and making the GPU faster in one of those markets will automatically make it faster in the other as well.
    You could hope for some artificial crippling in the drivers, but that would be stupid from NVIDIA's point of view, since it wants to sell as many GPUs as possible at the highest price possible. That's the aim of any publicly traded company, like all those whose stock trades on Wall Street.
