Message boards : Graphics cards (GPUs) : GF106 and GF108 Fermi cards
GF106 and GF108 Fermi cards are due to turn up from September 13th.
ID: 18214 | Rating: 0
GF106 and GF108 will be mainstream chips. That means nVidia will have to make them as small and cheap as possible. The basis will obviously be the Fermi architecture, as any further changes would cost them money (development time etc.).
ID: 18230 | Rating: 0
I don't understand why they release their high end first, then work downwards. If software maturity is what they want, can't they wait with this instead, since hardware maturity is what most people would prefer to spend their money on? Or is this exactly what they want to avoid, when they're dealing with those who can best afford to buy new stuff?
ID: 18232 | Rating: 0
MrS, what you say is certainly more attractive; a 288-shader GTS450 at the reported frequencies would be very interesting; it could be within 5% of GTX460 performance.
ID: 18233 | Rating: 0
I know of several manufacturers of common household appliances that deliberately build failure modes into their products, but with built-in reset switches! So when an appliance fails (overheats, or has been used X number of times) the engineer just turns up and presses a reset button on your appliance, and of course charges you £70 (€100) – yes, for a muppet to flick a switch. What brands?
ID: 18234 | Rating: 0
Italian
ID: 18235 | Rating: 0
I've had experiences too with consumer products (that I also can't remember), but they were generic no-name stuff, like a voltage regulator that promised more consistent, clean, & efficient power but actually wasted much more electricity, didn't work, & was a total scam.
ID: 18236 | Rating: 0
"I don't understand why they release their high end first, then work downwards. If software maturity is what they want, can't they wait with this instead, since hardware maturity is what most people would prefer to spend their money on? Or is this exactly what they want to avoid, when they're dealing with those who can best afford to buy new stuff?"

There's a simple answer: it's the traditional approach. That doesn't necessarily make it a good one, though. A better argument would be this: if you're introducing a new architecture you want people to know that it's good. You want to do it with a bang that's heard around the world, or at least the gamer world. And that's easiest when you're setting benchmark records. That's probably why they stick to "high end first".

ATI also introduced RV870 first and subsequently worked their way down. There was comparatively little delay between launches, so one could say they got about as close to a full-scale launch of the entire product line as it gets (without stockpiling new chips for months). nVidia may have wanted to do the same, but couldn't, as GF100 was late and required a redesign to become more economical. Now that they've got the redesign they'll probably take this architecture and use it for the smaller chips as soon as possible.

Is there anything here which you'd want them to do in a better / different way? Well, they could have designed GF100 to be competitive from the beginning.. but I suspect they tried just that ;)

MrS
____________
Scanning for our furry friends since Jan 2002
ID: 18237 | Rating: 0
"There's a simple answer: it's the traditional approach. That doesn't necessarily make it a good one, though. A better argument would be this: if you're introducing a new architecture you want people to know that it's good. You want to do it with a bang that's heard around the world, or at least the gamer world. And that's easiest when you're setting benchmark records. That's probably why they stick to "high end first"."

I do understand the shock effect that's desirable when launching the fastest first. I'm just saying that there are women who look F.I.N.E. fine, but turn out to be a great disappointment & a total waste of time; there was nothing at all that wasn't skin deep. You saw everything there was to see & there weren't any surprises at all. But there's also the type that's slightly boring, slightly not. The type that doesn't turn heads, but makes you look twice. There was more than meets the eye, & the more you looked, the more there was to look at. Mid-range is what I'd hoped would be the start, as you can build upwards & you can build downwards too. It won't shock you, but it won't be something to ignore either, & she may have sisters or friends you wouldn't likely meet if you weren't introduced.

The Internet has so many sites dedicated to reviewing products. There are benchmarks, OCing, pros & cons, comments, etc, etc. If Ati or Nvidia had to do all that themselves, they'd have to spend much more money on marketing than they already do. Bad reviews also have an impact on sales, & the constant "they're working on the problem, tweaking & fixing" isn't a great way to start. If everybody knows there will be hiccups, why not take that into consideration instead of using that knowledge as an excuse to have a bad start? Everybody knows not to expect too much from the Special Olympics, but if you're watching the Olympics, you know the slap in the face will be with the gloves off.

It's not cute when the biggest & baddest disappoint much more than everything else that's crippled from the start. They ALSO know that every time they try something new, they'll have issues with yields, so do they want to start with the best of the best, when they have such a hard time getting it?
____________
ID: 18239 | Rating: 0
There's a very simple reason: sales.
ID: 18241 | Rating: 0
So it's the "empty the fridge before stocking up again" argument?
ID: 18242 | Rating: 0
"So it's the "empty the fridge before stocking up again" argument?"

I don't think so. The HD57x0 pretty much obsoleted the HD4870 - 4830, except for double precision. Yet ATI tried really hard to launch them as soon as possible after the first Cypress cards. One could argue that it was the launch of Win 7 and DX11 which made them hurry - but then something is always around the corner, be it "back to school", Christmas or whatever.

I'd rather argue that time to market is really critical in the GPU business. Besides solid execution you'll want to be first with new features (checkbox or useful - doesn't matter) and first to exploit manufacturing advances. That's what made ATI and nVidia survive the 3D accelerator pioneer days, whereas many others failed. One can argue that progress has slowed down in recent years (probably for good) and that the rules are being rewritten: the balance shifts from "delivering products at breathtaking schedules" to "delivering really good products". But we're not completely there yet. Speed still matters and the old spirit is still alive.

There's also the chicken-and-egg problem: you can't optimize software without the hardware. And if you've got the chip design set in stone (software development won't start prior to this), you can just as well make many chips and start to sell them. Honestly, I don't see much of an advantage in rolling the chips out in a different way. Sure, they have to be good and the software has to be good enough to avoid bad press. But apart from that I can't see any tangible benefits from changing the current way (high end first, then quickly work down the lineup). Sure, testing a new process with an insanely large chip is a bad idea - but that's a separate topic. ATI's HD4770 demonstrated how to do it.

"If software maturity is what's wanted, but many are ready to get the newest thing out there just because it's new, milk the cow, work on the software, improve on the card, release it with better software support & hardware architecture, & milk the cow again."

Who said they'd want software maturity? Sure, they need it at some point.. but not at the price of shipping later. And I doubt offering midrange first and high end later would yield much more profit. They'd get bad press for the "disappointing new chips" despite their advantages, and people looking for high end would hesitate to buy, especially since the bigger chips would probably already be in the rumor mill. And people who want to buy mid range will buy anyway.

And if you've got a "buy everything because it's new" enthusiast, you'd better give him something for his money. New features, better power consumption and lower noise don't cut it - his games will still look and feel the same. He'll lose his enthusiasm if he doesn't get some perceived benefit from the purchase. And benchmark records are probably the easiest benefit to assure him "man, this was really worth it!"

MrS
____________
Scanning for our furry friends since Jan 2002
ID: 18243 | Rating: 0
Thanks MrS, I really needed someone to explain that question to me so I didn't keep asking myself, "WHY?". It's just that Nvidia were trying so many new tricks & pulling so very many fast ones lately. So I just wondered if they were open to new, untried ideas. I don't recall that they were so keen on relabeling in the past, or on making something that made almost everybody say "oh, this will never work" - & they got away with both things quite well, or much better than everybody else thought they would. So I was beginning to think they were either the gambling types, or all too happy to play Russian roulette because they were adrenalin junkies.
ID: 18244 | Rating: 0
Glad I could be of some assistance :)
ID: 18245 | Rating: 0
The GF106 based cards will be called GTS 450, GTS 445 and GTS 440.
ID: 18247 | Rating: 0
Interesting! The rumored GF106 PCB features 6 memory chips - that's a 192 bit bus nowadays. That would probably suit a 250+ shader GF106 better than 128 bit would. 128 bit would work, but would be borderline, close to really limiting performance. Especially since nVidia may not yet have fixed their GDDR5 controller, so they can't clock their memory as high as ATI can.
ID: 18254 | Rating: 0
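The "6 memory chips = 192 bit" inference above rests on each GDDR5 device exposing a 32-bit interface, so bus width is just 32 bits per chip. A minimal sketch of that arithmetic (the function name is illustrative, not from any vendor tool):

```python
# Each GDDR5 device exposes a 32-bit interface, so a board's total memory
# bus width is 32 bits multiplied by the number of memory chips on the PCB.
GDDR5_DEVICE_WIDTH_BITS = 32

def bus_width_bits(num_chips: int) -> int:
    """Total memory bus width for a board with `num_chips` GDDR5 devices."""
    return num_chips * GDDR5_DEVICE_WIDTH_BITS

print(bus_width_bits(6))  # 6 chips -> 192-bit bus, as rumored for GF106
print(bus_width_bits(4))  # 4 chips -> 128-bit bus
```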
251 is also very close to 256, just in case.
ID: 18255 | Rating: 0
Or the PCB was meant for a 192 bit GF104 and they mistakenly attributed it to GF106. Could have been some experimental design which didn't make it out of the door.
ID: 18260 | Rating: 0
New cards have been released by NVidia today.
ID: 18652 | Rating: 0
So it's 192 shaders, or 4 blocks of 48, for the GF106. Too bad the 200+ rumors didn't turn out to be true - if they had, I'd have bet on 6 shader blocks rather than 5 ;)
ID: 18661 | Rating: 0
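The block arithmetic above follows from the consumer Fermi derivatives (GF104/GF106/GF108) carrying 48 CUDA cores per shader multiprocessor, so shader counts come in multiples of 48. A minimal sketch, with the helper name being illustrative:

```python
# GF104/GF106/GF108 Fermi SMs each carry 48 CUDA cores ("shaders"),
# so a chip's shader count is 48 times its number of enabled SMs.
CORES_PER_SM = 48

def shader_count(enabled_sms: int) -> int:
    """Shader count for a GF10x-class part with `enabled_sms` SMs enabled."""
    return enabled_sms * CORES_PER_SM

print(shader_count(4))  # GF106 / GTS 450: 4 SMs -> 192 shaders
print(shader_count(7))  # cut-down GF104 / GTX 460: 7 SMs -> 336 shaders
```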
One of the three sets of memory controller/ROP pairs is disabled.
ID: 18662 | Rating: 0
"One of the three sets of memory controller/ROP pairs is disabled."

That's why I find it strange that the PCB is ready for the 3rd controller. The only way it makes sense is that a GTS455 is coming soon. Which is almost necessary due to the competition from the HD5770 anyway :p

Enabling the 3rd controller will probably not cause any wonders to happen, but will help a bit here and there, especially with FSAA. The added 128 kB of L2 shouldn't hurt either. Furthermore, nVidia could easily up the clock speed ~10% for such a card. But if the past is any indication they'll probably call it "GTS460" to make it sell better due to the good reputation of the GTX460 :D

BTW: where's an HD5840, right in between the 5830 and 5850? That would at least be something placed against the GTX460.

MrS
____________
Scanning for our furry friends since Jan 2002
ID: 18672 | Rating: 0
Some nice pics here,
ID: 18702 | Rating: 0