[Official Thread] ATI R600
-
BarboneNet wrote:
yeah... but when the R600s come out, they'll be the ones costing a fortune and the Nvidia cards much, much less... let's hope Ati's delay shows in the final performance...
Anyway, until DX10 comes out I don't feel the need.

Bye ;)
-
Now the 360's GPU is one impressive piece of work, and I'll say from the get-go that it's much more advanced than the PS3's GPU, so I'm not sure where to begin, but I'll start with what Microsoft said about it. Microsoft said Xenos was clocked at 500MHz and that it had 48-way parallel floating-point dynamically-scheduled shader pipelines (48 unified shader units or pipelines), along with a polygon performance of 500 million triangles a second.

Before going any further, I'll clarify this 500-million-triangles-a-second claim. Can the 360's GPU actually achieve this? Yes it can, BUT there would be no pixels or color at all. It's the triangle setup rate for the GPU, and it isn't surprising that it has such a high triangle setup rate, since it has 48 shader units capable of performing vertex operations, whereas all other released GPUs can only dedicate 8 shader units to vertex operations. The PS3 GPU's triangle setup rate at 550MHz is 275 million a second, and at 500MHz it would be 250 million a second. This is just the setup rate: do NOT expect to see games with such an excessive number of polygons, because it won't happen.
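(As a side note, here is a minimal Python sketch of that setup-rate arithmetic; the one-triangle-per-clock rate for Xenos and half-triangle-per-clock rate for RSX are assumptions inferred from the figures quoted above, not something the article states.)

```python
# Triangle setup rates implied by the figures quoted above.
# Assumption: Xenos sets up 1 triangle per clock, RSX 1 per 2 clocks
# (inferred from the 500M @ 500MHz and 275M @ 550MHz figures).
def setup_rate_millions(clock_mhz: float, tris_per_clock: float) -> float:
    """Peak triangle setup rate in millions of triangles per second."""
    return clock_mhz * tris_per_clock

print(setup_rate_millions(500, 1.0))  # Xenos @ 500MHz -> 500.0
print(setup_rate_millions(550, 0.5))  # RSX   @ 550MHz -> 275.0
print(setup_rate_millions(500, 0.5))  # RSX   @ 500MHz -> 250.0
```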
Microsoft also says it can achieve a pixel fillrate of 16 gigasamples per second. This GPU inside the Xbox 360 is literally an early ATI R600, which, when released by ATI for the PC, will be a DirectX 10 GPU. Xenos manages to meet many of the requirements that would qualify it as a DirectX 10 GPU in a lot of areas, but falls short of the requirements in others. What I found interesting is that back in 2005 Microsoft said the 360's GPU could perform 48 billion shader operations per second. However, Bob Feldstein, VP of engineering for ATI, made it very clear that the 360's GPU can perform 2 of those shader operations per cycle, so the 360's GPU is actually capable of 96 billion shader operations per second. To quote ATI on the 360's GPU: "On chip, the shaders are organized in three SIMD engines with 16 processors per unit, for a total of 48 shaders. Each of these shaders is comprised of four ALUs that can execute a single operation per cycle, so that each shader unit can execute four floating-point ops per cycle."
48 shader units × 4 ops per cycle = 192 shader ops per clock
192 shader ops per clock × 500MHz = 96 billion shader ops per second
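(The same arithmetic as a minimal Python sketch, using only the figures from the ATI quote above:)

```python
# Shader throughput from the figures in the ATI quote above.
SHADER_UNITS = 48        # three SIMD engines x 16 processors each
OPS_PER_UNIT = 4         # four ALUs, one op each per cycle
CLOCK_HZ = 500_000_000   # Xenos clock: 500MHz

ops_per_clock = SHADER_UNITS * OPS_PER_UNIT    # 192
ops_per_second = ops_per_clock * CLOCK_HZ      # 96,000,000,000
print(f"{ops_per_clock} ops/clock, {ops_per_second / 1e9:.0f} billion ops/s")
```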
(Did anyone notice that each shader unit on the 360's GPU doesn't perform as many ops per pipe as the RSX? The 360's GPU makes up for it with a superior architecture: many more pipes that operate more efficiently, along with more bandwidth.) Did Microsoft just make a mistake, or did they purposely misrepresent their GPU to lead Sony on?

The 360's GPU is revolutionary in the sense that it's the first GPU to use a unified shader architecture. According to developers, this is as big a change as when the vertex shader was first introduced, and even then the inclusion of the vertex shader was merely an add-on, not a major change like this. The 360's GPU also has a daughter die right there on the chip containing 10MB of EDRAM. This EDRAM has a framebuffer bandwidth of 256GB/s, which is more than 5 times what the RSX or any PC GPU has for its framebuffer (even higher than G80's). Thanks to the efficiency of the 360 GPU's unified shader architecture and this 10MB of EDRAM, the GPU is able to achieve 4x FSAA at no performance cost. ATI and Microsoft's goal was to eliminate memory bandwidth as a bottleneck, and they seem to have succeeded. PC gamers will have noticed that performance drops when they turn on features such as AA or HDR; that's because those features eat bandwidth, so the efficiency of the GPU's operation decreases as they are turned on. On the 360, HDR plus 4xAA simultaneously is like nothing to the GPU with proper use of the EDRAM. The EDRAM contains a 3D logic unit with 192 floating-point processors inside, and that logic unit can exchange data with the 10MB of RAM at 2 terabits a second. Things such as antialiasing, computing Z depths or occlusion culling can happen on the EDRAM without impacting the GPU's workload.

Xenos writes to this EDRAM for its framebuffer and is connected to it via a 32GB/s link (a number that comes extremely close to the theoretical peak because the EDRAM sits right there on the 360 GPU's daughter die). Don't forget the EDRAM itself has a bandwidth of 256GB/s: dividing that 256GB/s by the 32GB/s of the link from Xenos to the EDRAM shows that Xenos can multiply its effective framebuffer bandwidth by a factor of 8 when processing pixels that make use of the EDRAM, which includes HDR, AA and other things. This leads to a maximum of 32 × 8 = 256GB/s which, to say the least, is a very effective way of dealing with bandwidth-intensive tasks. For this to be possible, developers need to set up their rendering engine to take advantage of both the EDRAM and the onboard 3D logic. If anyone is confused about why the 32GB/s is multiplied by 8: once data travels over the 32GB/s bus, it can be processed 8 times by the EDRAM logic against the EDRAM memory at a rate of 256GB/s, so for every 32GB/s sent over, 256GB/s gets processed. This leaves the RSX at a bandwidth disadvantage compared to Xenos. Needless to say, the 360 not only has an overabundance of video memory bandwidth, it also has impressive memory-saving features. For example, 720p with 4x FSAA on a traditional architecture would require 28MB of memory; on the 360, only 16MB is required. There are also features in the 360's Direct3D API that let developers fit two 128x128 textures into the space required for one, for example. So even with all that memory and memory bandwidth, they are still very mindful of how it's used.
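(A minimal Python sketch of the arithmetic in that last stretch. The 28MB derivation, 4 bytes of color plus 4 bytes of Z per sample, is my assumption about where the figure comes from; the rest uses only numbers quoted above.)

```python
# EDRAM effective-bandwidth multiplier quoted in the article.
bus_gb_s = 32          # Xenos -> EDRAM daughter-die link, GB/s
edram_gb_s = 256       # bandwidth of the EDRAM itself, GB/s
print(edram_gb_s / bus_gb_s)   # 8.0 -> the "x8" factor
print(2e12 / 8 / 1e9)          # 2 Tbit/s logic<->EDRAM = 250 GB/s, ~ the 256GB/s quoted

# Framebuffer size for 720p with 4x multisampling on a traditional GPU.
# Assumption: 4 bytes color + 4 bytes Z/stencil per sample.
width, height, samples, bytes_per_sample = 1280, 720, 4, 8
size_mb = width * height * samples * bytes_per_sample / 2**20
print(f"{size_mb:.1f} MB")     # ~28.1 MB, matching the article's 28MB figure
```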
I wasn't too clear earlier on the difference between the RSX's dedicated pixel and vertex shader pipelines and the 360's unified shader architecture. The 360's GPU has 48 unified pipelines capable of accepting either pixel or vertex shader operations, whereas with the older dedicated pixel/vertex pipeline architecture that RSX uses, in a vertex-heavy situation most of the 24 pixel pipes go idle instead of helping out with vertex work. On the flip side, in a pixel-heavy situation those 8 vertex shader pipelines sit idle and don't help out the pixel pipes (because they aren't able to). With the 360's unified architecture, in a vertex-heavy situation, for example, none of the pipes go idle: all 48 unified pipelines can help with either pixel or vertex shader operations as needed, so efficiency is greatly improved and so is overall performance. When pipelines are forced to go idle because they lack the capability to help another set of pipelines accomplish their task, it's detrimental to performance. This inefficient mode of operation is how all current GPUs work, including the PS3's RSX: the pipelines go idle because the pixel pipes aren't able to help the vertex pipes accomplish a task, or vice versa. What's even more impressive about this GPU is that it determines by itself how many pipelines to dedicate to vertex or pixel shader operations at any given moment; a programmer is NOT needed to handle any of this, the GPU takes care of it all in the quickest, most efficient way possible. 1080p is not a smart resolution to target in any form this generation, but if 360 developers wanted to get serious about 1080p, then thanks to Xenos they could actually outperform the PS3 at 1080p. (The less efficient GPU always shows its weaknesses against the competition at higher resolutions, so the best way for RSX to be competitive is to stick to 720p.) In vertex-shader-limited situations the 360's GPU will literally be 6 times faster than RSX. With a unified shader architecture things are much more efficient than previous architectures allowed (which is extremely important). The 360's GPU, for example, is 95-99% efficient with 4xAA enabled. With a traditional architecture there are design-related roadblocks that prevent such efficiency. To avoid those roadblocks, which held back previous hardware, the 360 GPU design team created a complex system of hardware threading inside the chip itself. In this case, each thread is a program associated with the shader arrays. The Xbox 360 GPU can manage and maintain state information on 64 separate threads in hardware. There's a thread buffer inside the chip, and the GPU can switch between threads instantaneously in order to keep the shader arrays busy at all times.
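(A toy model of why idle pipes hurt; the 24+8 split matches the RSX figures above, but the 8000/6000 workload numbers are invented purely for illustration and are not from the article.)

```python
# Toy comparison of dedicated vs unified shader pipelines.
def frame_cycles_dedicated(vertex_work, pixel_work, v_pipes=8, p_pipes=24):
    # Each pipe type can only chew through its own kind of work,
    # so the frame takes as long as the slower of the two queues.
    return max(vertex_work / v_pipes, pixel_work / p_pipes)

def frame_cycles_unified(vertex_work, pixel_work, pipes=48):
    # Unified pipes share the total work; nothing sits idle.
    return (vertex_work + pixel_work) / pipes

# A vertex-heavy frame: 8000 vertex ops, 6000 pixel ops (made-up numbers).
v, p = 8000, 6000
print(frame_cycles_dedicated(v, p))  # 1000.0 cycles: pixel pipes mostly idle
print(frame_cycles_unified(v, p))    # ~291.7 cycles: every pipe stays busy
```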
Want to know why Xenos doesn't need as much raw horsepower to outperform something like the X1900XTX or the 7900GTX? It makes up for not having as much raw horsepower by actually being efficient enough to fully achieve its advertised performance numbers, which is an impressive feat. The X1900XTX has a peak pixel fillrate of 10.4 gigasamples a second, while the 7900GTX has a peak pixel fillrate of 15.6 gigasamples a second. Neither of them is actually able to achieve and sustain those peak numbers, though, because they are not efficient enough, but they get away with it here since they can also bank on all that raw power. The performance winner between the 7900GTX and the X1900XTX is actually the X1900XTX, despite its lower pixel fillrate (especially at higher resolutions), because it has twice as many pixel pipes and is the more efficient of the two. It's just a testament to how important efficiency is. So how exactly can the mere 360 GPU stand up to both of those with only a 128-bit memory interface and 500MHz? With 4x FSAA enabled, the 360 GPU achieves AND sustains its peak fillrate of 16 gigasamples per second, thanks to the combination of the unified shader architecture and the excessive amount of bandwidth, which gives it the kind of efficiency that lets it outperform GPUs with far more raw horsepower. I guess it also helps that it's the single most advanced GPU currently available for purchase anyway.
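(A rough sanity check on those fillrate figures, assuming the usual formula of output units × clock; the unit counts and 650MHz clocks below are my guesses at how the quoted numbers were derived, not something the article states.)

```python
# Peak fillrate = output units x clock (x AA samples where the hardware
# can write multiple samples per clock). Unit counts and clocks below
# are assumptions, chosen to reproduce the article's quoted figures.
def fillrate_gsamples(units, clock_mhz, samples_per_unit=1):
    return units * clock_mhz * 1e6 * samples_per_unit / 1e9

print(fillrate_gsamples(16, 650))    # X1900XTX -> 10.4 Gsamples/s
print(fillrate_gsamples(24, 650))    # 7900GTX  -> 15.6 Gsamples/s
print(fillrate_gsamples(8, 500, 4))  # Xenos, 8 ROPs x 4xAA -> 16 Gsamples/s
```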
Things get even better when you factor in Xenos' MEMEXPORT ability, which allows it to enable "streamout" and opens the door for Xenos to achieve DX10-class functionality. A shame Microsoft chose to disable Xenos' other 16 pipelines to improve yields and keep costs down. Not many are even aware that the 360's GPU has the exact same number of pipelines as ATI's unreleased R600, but to keep costs down and make the GPU easier to manufacture, Microsoft chose to disable one of the shader arrays containing 16 pipelines. What MEMEXPORT does is expand the graphics pipeline in a more general-purpose and programmable manner. I'll borrow a quote from Dave Baumann, since he explains it rather well: "With the capability to fetch from anywhere in memory, perform arbitrary ALU operations and write the results back to memory, in conjunction with the raw floating point performance of the large shader ALU array, the MEMEXPORT facility does have the capability to achieve a wide range of fairly complex and general purpose operations; basically any operation that can be mapped to a wide SIMD array can be fairly efficiently achieved and in comparison to previous graphics pipelines it is achieved in fewer cycles and with lower latencies. For instance, this is probably the first time that general purpose physics calculation would be achievable, with a reasonable degree of success, on a graphics processor and is a big step towards the graphics processor becoming much more like a vector co-processor to the CPU."
Even with all of this information, there is still a lot more about this GPU that ATI simply isn't revealing, and considering they'll be reusing technology from this GPU's design in their future PC products, can you really blame them?
LINK (1 page): GotFrag DPAD - DPAD Home News Story - End all arguments: PS3 vs 360
About the video card: the RSX is so slow it's a full generation behind!!!
Basically it says the X360 GPU is more efficient than the PS3's, because it has a unified architecture.
Beyond its 48 pipelines, R500 was supposed to have another 16, but MS disabled them because of the cost involved, so it's like a cut-down R600, but still an R600, going by the rumors (R600).
Then there's the MEMEXPORT feature, which should open up DX10-like functionality.
It also says that in vertex operations R500 is 6 times faster than RSX.
According to the article, R500 manages to be fully DX10 in a great many areas, while in others it doesn't.
Thanks to the EDRAM and the unified architecture, R500 can do 4x FSAA at no performance cost.
Basically, for example, 720p + 4x FSAA would take 28 MB of memory on a normal architecture, whereas on the X360 it takes only 16MB; on top of that, the EDRAM bus is 256GB/s, higher even than G80's.
The X360 GPU is 95-99% efficient with 4x FSAA.
Basically RSX would be competitive only at 720p and not at 1080p; developers pursue that resolution on the X360 because Xenos has the unified architecture and extremely high efficiency at high resolutions.


Bye ;)
-
for now we know the first samples are 30cm long and draw much more power than G80: we're talking 300W at full load at stock vs G80's 245W... then we'll see the final ones; anyway I think they'll be equal or slightly above. In my opinion Nvidia has made an excellent GPU; who knows, maybe there'll be a surprise like Intel/AMD... bye, Deos

-
this SIMD unit business is getting reeeeally interesting :D:D:D:D
-
Just look at the card ATI is showing off!


Bye ;)
-
holy cow, it's HUGE!!!
cooling that beast properly will be quite a feat... maybe ati will even bring out some kind of IHS for its GPU!!!
let's hope not, though

but when is it coming out???
bye
CazzZ!!!
-
Cazzeggiatore wrote:
holy cow, it's HUGE!!! cooling that beast properly will be quite a feat... maybe ati will even bring out some kind of IHS for its GPU!!!
let's hope not, though

but when is it coming out???
bye
CazzZ!!!
well come on, from the photos it's not that big compared to the x1900gt... cooling it, though, will really be a whole other story :eek:

-
and who knows whether it will be a worthy competitor to the 8800...
Marco
-
Deos wrote:
Today Vr-Zone showed a photo of what should be the X2900XTX OEM; it's still barely visible, but it already gives an idea of how big it is :p:p
Let's hope we see the retail version as soon as possible; it should be only 9" instead of the OEM's 12"
The launch will be in Amsterdam (maybe we can even squeeze in a trip); it's not yet known whether before or after CeBIT.
Deos
I mean... either show it or don't... what's the point of showing it blurry?

it won't fit inside my case...


-
man, what jerks

they want to keep us seething right to the end

oh well... at least the wait will be worth it... or so I hope...
bye
cazzZ!!!
-
nooooooooooooooo this is called SPAAAAAM!!! cowards!!! DAMN THEEEEM!!!
damn, what a beast; they could put it next to an 8800, at least then we'd get some idea of scale...
Marco
-
Deos wrote:
well, that's the OEM version... the card's PCB is only 9" (about 24cm); everything beyond that is the fan...
keep in mind the whole thing is 34cm, i.e. 7cm longer than the 8800GTX
but I suspect we'll never see that card... we'll only see the smaller version, which will be more or less like the 8800GTS
the nice thing is that, from what people say... a system with an x2900xtx stays under 280W of power draw... so less than the gtx

Deos
Yes, exactly!
Bye ;)
-
wait... so now it draws less than a gtx?
hahaha the roles have reversed... let's see what comes of it

-
pirella wrote:
wait... so now it draws less than a gtx? hahaha the roles have reversed... let's see what comes of it

So it seems!
Word is that Nvidia users are literally furious about the terrible driver support for Windows Vista with the G80 video card, so much so that they're banding together to take legal action!
Bye ;)
-
dj883u2 wrote:
So it seems! Word is that Nvidia users are literally furious about the terrible driver support for Windows Vista with the G80 video card, so much so that they're banding together to take legal action!
Bye ;)
on Tuesday my 8800gtx arrives... this week I'll try vista too... I've been reading around, and it really is scandalous

Ati is ati... there's nothing you can do about it...
-
Deos wrote:
pire, forget vista if the 8800 is on its way... you'd only risk losing your mind over it
you'd just end up wanting to smash everything... the 8800 is a fine card, but vista doesn't digest it...
in any case, I agree:
ATI is ati... and the R600 drivers for vista have been ready for a while... in my opinion they did well to wait...
nvidia shot itself in the foot this way
Deos
well, everyone did cry scandal because Ati wasn't ready, but Nvidia's move of launching DX10 cards right away, before Vista was out, is actually proving a bit of a mistake: excellent cards on the previous OS, no doubt, but at least make decent drivers for the new OS... Ati is right to have waited until Vista was out; at least it will have decent drivers
-
what good are drivers for a video card when there's no video card?

-
Kioji wrote:
what good are drivers for a video card when there's no video card?
that's very true and very fair as well

-
pirella wrote:
on Tuesday my 8800gtx arrives... this week I'll try vista too... I've been reading around, and it really is scandalous
Ati is ati... there's nothing you can do about it...
Yes, personally I've always had a great experience with ATI.
Even though the temptation to try the G80 was strong, in the end I resisted!

I'm slowly detoxing!...

Bye ;)
-
Kioji wrote:
what good are drivers for a video card when there's no video card?
Power is nothing without control!...

Bye ;)