All Channels > EVE General Discussion > Carbon UI
 
This thread is older than 90 days and has been locked due to inactivity.


 
Author Topic

minx trader
Posted - 2011.05.18 14:59:00 - [1]
 

Will this only be optimised for Nvidia cards, or will ATI/AMD cards be OK?

Landon Donaldson
The Roche Limit
Posted - 2011.05.18 15:01:00 - [2]
 

From what I got at the keynote speech at Fanfest, it's that they are only working closely with Nvidia. I doubt very much that means they will exclude other cards from development, though; after all, it would not make much business sense to do that.

Ehrys Marakai
Caldari
Evolution
The Initiative.
Posted - 2011.05.18 15:06:00 - [3]
 

Originally by: Landon Donaldson
From what I got at the keynote speech at Fanfest, it's that they are only working closely with Nvidia. I doubt very much that means they will exclude other cards from development, though; after all, it would not make much business sense to do that.


It *is* possible that it will have "special" optimisations for nVidia cards, though, which could therefore fall into the realm of optimised for nVidia and not for others.

Availability means something different =)

Landon Donaldson
The Roche Limit
Posted - 2011.05.18 15:19:00 - [4]
 

Originally by: Ehrys Marakai
Originally by: Landon Donaldson
From what I got at the keynote speech at Fanfest, it's that they are only working closely with Nvidia. I doubt very much that means they will exclude other cards from development, though; after all, it would not make much business sense to do that.


It *is* possible that it will have "special" optimisations for nVidia cards, though, which could therefore fall into the realm of optimised for nVidia and not for others.

Availability means something different =)


You got news I do not know of? Pray tell. I have an Nvidia; I always swore by ATI cards in the past, but after having three blow up on me in less than a year I won't be going back in a hurry.
Would be nice if they did, as you say, have some "special optimisations"; after all, everyone likes to feel special :-)

Kidzukurenai Datael
Imperial Collective
Celestial Shadows
Posted - 2011.05.18 15:42:00 - [5]
 

Originally by: Landon Donaldson
I always swore by ATI cards in the past, but after having three blow up on me in less than a year I won't be going back in a hurry.


Crazily, I was thinking the exact opposite to you; I've also had three Nvidia cards die on me in the last year or so and was considering going over to ATI... Seems they're much of a muchness...

Ehrys Marakai
Caldari
Evolution
The Initiative.
Posted - 2011.05.18 16:01:00 - [6]
 

Originally by: Landon Donaldson
Originally by: Ehrys Marakai
Originally by: Landon Donaldson
From what I got at the keynote speech at Fanfest, it's that they are only working closely with Nvidia. I doubt very much that means they will exclude other cards from development, though; after all, it would not make much business sense to do that.


It *is* possible that it will have "special" optimisations for nVidia cards, though, which could therefore fall into the realm of optimised for nVidia and not for others.

Availability means something different =)


You got news I do not know of? Pray tell. I have an Nvidia; I always swore by ATI cards in the past, but after having three blow up on me in less than a year I won't be going back in a hurry.
Would be nice if they did, as you say, have some "special optimisations"; after all, everyone likes to feel special :-)


I was referring more to the possibility raised in the OP's post, i.e. whether or not it is possible. I have no information other than that, and I'm sorry if I implied otherwise ^^

In other news, regarding the ATI vs nVidia debate: on price-to-performance ATI is better than nVidia, but I've always been impressed by the "refined" top-end cards that nVidia make. That, and the total mash-up that ATI continually make of their drivers, keeps me with nVidia =p

As a developer, I've always preferred writing for nVidia cards.
I'll give you an example of a difference between ATI and nVidia at the programming level, which was valid as of 2010: ATI cards struggle displaying textures that are not in power of 2 dimensions. There is an extra scaling step to convert non ^2 to ^2 dimensions, effectively slowing the process down ;)
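
Roughly, the "extra step" I mean looks like this in Python; just a toy sketch with made-up numbers, not anything CCP or the drivers actually do verbatim:

    def next_pow2(n):
        # smallest power of two that is >= n
        p = 1
        while p < n:
            p *= 2
        return p

    width, height = 1000, 750                           # a non-power-of-2 texture (made-up size)
    new_w, new_h = next_pow2(width), next_pow2(height)  # 1024, 1024
    print(new_w, new_h)
    # rescaling the bitmap from 1000x750 up to 1024x1024 is the extra work that costs time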

Landon Donaldson
The Roche Limit
Posted - 2011.05.18 16:06:00 - [7]
 

Edited by: Landon Donaldson on 18/05/2011 16:10:24
Originally by: Ehrys Marakai

ATI cards struggle displaying textures that are not in power of 2 dimensions. There is an extra scaling step to convert non ^2 to ^2 dimensions, effectively slowing the process down ;)


Yowza! I am not a clever programmer or a super cool dude that knows about these things. I am a "plug that thing in there and magical things appear on your screen that make you go 'ooohhh shiney'" type of person. So that last paragraph you wrote does not even look like English to me :D

Landon Donaldson
The Roche Limit
Posted - 2011.05.18 16:08:00 - [8]
 

Edited by: Landon Donaldson on 18/05/2011 16:09:57
Originally by: Kidzukurenai Datael
Originally by: Landon Donaldson
I always swore by ATI cards in the past, but after having three blow up on me in less than a year I won't be going back in a hurry.


Crazily, I was thinking the exact opposite to you; I've also had three Nvidia cards die on me in the last year or so and was considering going over to ATI... Seems they're much of a muchness...


Ha ha, I seem to have got the "made on a Friday" ones three times in a row.

Spurty
Caldari
V0LTA
VOLTA Corp
Posted - 2011.05.18 16:12:00 - [9]
 

Originally by: Landon Donaldson
So that last paragraph you wrote does not even look like English to me :D


The case is building up against you as well.

Just wanted you to be kept up to date with proceedings.

Even the BETA drivers for nVidia are head and shoulders more stable than AMD's (ATi diaf mate).

I get the distinct feeling that AMD is run by drunks.

brutoid
Caldari
Posted - 2011.05.18 16:26:00 - [10]
 

Originally by: Spurty


Even the BETA drivers for nVidia are head and shoulders more stable than AMD's (ATi diaf mate).

I get the distinct feeling that AMD is run by drunks.



http://www.youtube.com/watch?v=sRo-1VFMcbc ;)

minx trader
Posted - 2011.05.18 16:57:00 - [11]
 

Well, I have just bought the ATI Radeon HD 6850 Toxic, so I hope this new Carbon UI will be compatible; not planning on upgrading for a few years yet.

Xercodo
Amarr
Xovoni Directorate
Posted - 2011.05.18 17:27:00 - [12]
 

Originally by: Landon Donaldson
Edited by: Landon Donaldson on 18/05/2011 16:10:24
Originally by: Ehrys Marakai

ATI cards struggle displaying textures that are not in power of 2 dimensions. There is an extra scaling step to convert non ^2 to ^2 dimensions, effectively slowing the process down ;)


Yowza! I am not a clever programmer or a super cool dude that knows about these things. I am a "plug that thing in there and magical things appear on your screen that make you go 'ooohhh shiney'" type of person. So that last paragraph you wrote does not even look like English to me :D


Simple breakdown:

All things in computing are based on binary, 1s and 0s.
The binary system is based on powers of two:
1 = 1
10 = 2
100 = 4
1000 = 8
-skip a few-
100 0000 0000 = 1024
(this is why for a while a standard screen res was 1024x768, and why there are two different sizes when you ask Windows how big a file is; when they say it's 1GB they actually mean 1024MB, or something along those lines)

The problem stated is that ATI cards don't play nicely with textures that aren't a perfect power of 2, like 1000x1000.
The developer needs to run extra algorithms to convert their non-standard 1000x1000 to 1024x1024, and that extra step slows down the performance of a game.
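
If you want to poke at it yourself, here's a tiny Python snippet showing the same maths; purely illustrative, nothing EVE-specific:

    # powers of two, and why 1000 isn't one of them but 1024 is
    for exp in range(11):
        print(exp, 2 ** exp, bin(2 ** exp))

    def is_pow2(n):
        # a power of two has exactly one bit set, so n & (n - 1) is zero
        return n > 0 and (n & (n - 1)) == 0

    print(is_pow2(1000), is_pow2(1024))   # False True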

Landon Donaldson
The Roche Limit
Posted - 2011.05.18 17:30:00 - [13]
 

Originally by: Xercodo
Originally by: Landon Donaldson
Edited by: Landon Donaldson on 18/05/2011 16:10:24
Originally by: Ehrys Marakai

ATI cards struggle displaying textures that are not in power of 2 dimensions. There is an extra scaling step to convert non ^2 to ^2 dimensions, effectively slowing the process down ;)


Yowza! I am not a clever programmer or a super cool dude that knows about these things. I am a "plug that thing in there and magical things appear on your screen that make you go 'ooohhh shiney'" type of person. So that last paragraph you wrote does not even look like English to me :D


Simple breakdown:

All things in computing are based on binary, 1s and 0s.
The binary system is based on powers of two:
1 = 1
10 = 2
100 = 4
1000 = 8
-skip a few-
100 0000 0000 = 1024
(this is why for a while a standard screen res was 1024x768, and why there are two different sizes when you ask Windows how big a file is; when they say it's 1GB they actually mean 1024MB, or something along those lines)

The problem stated is that ATI cards don't play nicely with textures that aren't a perfect power of 2, like 1000x1000.
The developer needs to run extra algorithms to convert their non-standard 1000x1000 to 1024x1024, and that extra step slows down the performance of a game.


My brain now has a few more connections because of you, thanks :D

GateScout
Posted - 2011.05.18 18:26:00 - [14]
 

Edited by: GateScout on 18/05/2011 18:26:24
Originally by: minx trader
Will this only be optimised for Nvidia cards, or will ATI/AMD cards be OK?

Is there any indication that this tech (whatever it is) will be optimized for a specific GPU? ...or are you just guessing?

Selune Virra
Posted - 2011.05.18 19:27:00 - [15]
 

Edited by: Selune Virra on 18/05/2011 19:28:52
Originally by: Landon Donaldson

My brain now has a few more connections because of you, thanks :D


I can help you remove those pesky extra connections... just fly a pod to deep null. YARRRR!!

As for the "OMG!! They're screwing ATi!!", it's probably the same thing that EA and some other companies have -- the nVidia "The Way It's Meant to Be Played" optimizations/enhancements that their games have.

Ehrys Marakai
Caldari
Evolution
The Initiative.
Posted - 2011.05.18 21:05:00 - [16]
 

Originally by: Selune Virra
Edited by: Selune Virra on 18/05/2011 19:28:52
Originally by: Landon Donaldson

My brain now has a few more connections because of you, thanks :D


I can help you remove those pesky extra connections... just fly a pod to deep null. YARRRR!!

As for the "OMG!! They're screwing ATi!!", it's probably the same thing that EA and some other companies have -- the nVidia "The Way It's Meant to Be Played" optimizations/enhancements that their games have.


Indeed, this is what I was getting at. Availability and optimisation are two different things. Just because it is, say, optimised for nVidia doesn't mean it won't be available for ATI card users =) (vice versa is also true).

And I will confirm again that I have no prior knowledge of any optimisations, or in fact of anything about the Carbon UI, be they real, virtual or inter-dimensional :D

Karnitha
Posted - 2011.05.18 22:50:00 - [17]
 

Originally by: Xercodo

ATI cards struggle displaying textures that are not in power of 2 dimensions. There is an extra scaling step to convert non ^2 to ^2 dimensions, effectively slowing the process down ;)

(this is why for a while a standard screen res was 1024x768, and why there are two different sizes when you ask Windows how big a file is; when they say it's 1GB they actually mean 1024MB, or something along those lines)

The problem stated is that ATI cards don't play nicely with textures that aren't a perfect power of 2, like 1000x1000.
The developer needs to run extra algorithms to convert their non-standard 1000x1000 to 1024x1024, and that extra step slows down the performance of a game.


Let's start with the simple one: the reason Windows shows two different numbers for file size. The file is stored on a filesystem which uses fixed allocation units (think: clusters). The actual file size is the length of the actual file data; the size on disk is the block count, meaning that any partial blocks will be counted as full. It has nothing to do with MB/MiB accounting.
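
To make that concrete, a quick Python sketch; the 4 KiB cluster size here is just an assumption (the real allocation unit depends on how the volume was formatted):

    import math

    CLUSTER = 4096   # assumed allocation unit in bytes

    def size_on_disk(file_size, cluster=CLUSTER):
        # a partial cluster still occupies a whole cluster on disk
        return math.ceil(file_size / cluster) * cluster

    print(size_on_disk(10000))   # 12288 -> 10,000 bytes of data takes 3 clusters on disk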

Then let's go to 'power of two'. Let's list some: PoT: {1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048 ...}
Notice that 768 is missing from that? The reason 1024 is used has nothing to do with PoT really, more that it results in nice numbers, as you want everything to be a multiple of 8, since bit depth and such are going to be using them; it just makes sure you're not wasting memory.

Regarding textures specifically, if we're using standard industry APIs and not doing things manually: 1000x1000 px would be a bad texture no matter what. Also note that textures are never required to be square; 512x64 is perfectly fine. There's a very simple reason: when you use block compression (DXTn/BCn) you're going to want to avoid any non-power-of-two, as it makes decoding slow since you need to use an edge case (and it's pretty crap). As such, graphics cards try to always make sure a texture is PoT divisible; internal formats are less of a problem. What the driver will try doing, if the application requires it, is pad out the image to the correct dimension and then adjust the mapping co-ord calls to the new relative position; this avoids resizing the bitmap, preserving the information. Cards have done this since the TNT2 days (though usually disabled by default); it's nothing new.
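
In very rough Python pseudo-code, that pad-and-remap trick is something like this; illustrative only, obviously not what a driver actually runs:

    def next_pow2(n):
        p = 1
        while p < n:
            p *= 2
        return p

    def pad_texture(pixels, w, h):
        # pixels: h rows of w RGBA tuples; copy them into a power-of-two canvas
        pw, ph = next_pow2(w), next_pow2(h)
        padded = [[(0, 0, 0, 0)] * pw for _ in range(ph)]
        for y in range(h):
            for x in range(w):
                padded[y][x] = pixels[y][x]
        # the app's (u, v) coords then get scaled by these factors,
        # so the original image is preserved rather than resized
        return padded, w / pw, h / ph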

Essentially, using non-PoT dimensions means that you're being inefficient and wasting memory; the texture file size might be fractionally lower, but once it's in use by the hardware it'll lose that anyway. There are always ways to force the use of custom dimensions, but those are generally avoided for game use since they are very application specific.

Now let's get to this 'AMD' problem. The problem is that the driver is pretty smart these days. As with anything that is 'smart', it's usually annoying and trouble. If it sees you using very inefficient textures it will try using its "surface format optimization" functions, which are part of Catalyst AI, to automagically speed things up by using block compression or auto-generating mipmaps. However, sometimes games do this stuff in software too, leading to the driver and the game both trying to fix problems at the same time: see Doom 3 and auto-mips making map loads very slow. Other times it can lead to incorrectly formatted cubemaps (OpenGL style vs D3D). Most of the problems are related to bad hinting in OpenGL renderers, because the OpenGL docs are quite vague and lead to nVidia offering a lot of defaults which should not be, or AMD following the spec to the letter and limiting things a bit too much; both sides are not blameless. Another recent example is Brink, which uses non-PoT textures as well as rendering to texture in a way that results in high-res temp textures getting mipmaps generated, slowing things down dramatically.

This can all be solved by just disabling CatAI, either on new cards by setting it to max (iirc, tho it drops IQ), or by the normal reg

Ehrys Marakai
Caldari
Evolution
The Initiative.
Posted - 2011.05.19 08:42:00 - [18]
 

Originally by: Karnitha

Then let's go to 'power of two'. Let's list some: PoT: {1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048 ...}
Notice that 768 is missing from that? The reason 1024 is used has nothing to do with PoT really, more that it results in nice numbers, as you want everything to be a multiple of 8, since bit depth and such are going to be using them; it just makes sure you're not wasting memory.


Actually it has nothing to do with bit depth, although you are right about it saving memory space. The horizontal resolution is stored in a single byte, which isn't sufficient to store a number greater than 255. So you subtract 248 and divide by 8 (if I remember correctly), so 1024 is stored in a single byte as:
((1024 - 248) / 8) = 97
1366 is not divisible by 8 and so is not a "true native" resolution:
((1366 - 248) / 8) = 139.75 <-- the fractional part is lost
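
As a quick sanity check of that formula (again from memory, so treat the constants as approximate), in Python:

    def encode_hres(pixels):
        # pack the horizontal resolution into one byte: (pixels - 248) / 8
        return (pixels - 248) / 8

    def decode_hres(value):
        return value * 8 + 248

    print(encode_hres(1024))   # 97.0  -> fits in a byte
    print(decode_hres(97))     # 1024
    print(encode_hres(1366))   # 139.75 -> the .75 can't be stored, so not a "true native" width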

Originally by: Karnitha

Regarding textures specifically, if we're using standard industry APIs and not doing things manually: 1000x1000 px would be a bad texture no matter what. Also note that textures are never required to be square; 512x64 is perfectly fine. There's a very simple reason: when you use block compression (DXTn/BCn) you're going to want to avoid any non-power-of-two, as it makes decoding slow since you need to use an edge case (and it's pretty crap). As such, graphics cards try to always make sure a texture is PoT divisible; internal formats are less of a problem. What the driver will try doing, if the application requires it, is pad out the image to the correct dimension and then adjust the mapping co-ord calls to the new relative position; this avoids resizing the bitmap, preserving the information. Cards have done this since the TNT2 days (though usually disabled by default); it's nothing new.

Essentially, using non-PoT dimensions means that you're being inefficient and wasting memory; the texture file size might be fractionally lower, but once it's in use by the hardware it'll lose that anyway. There are always ways to force the use of custom dimensions, but those are generally avoided for game use since they are very application specific.

Now let's get to this 'AMD' problem. The problem is that the driver is pretty smart these days. As with anything that is 'smart', it's usually annoying and trouble. If it sees you using very inefficient textures it will try using its "surface format optimization" functions, which are part of Catalyst AI, to automagically speed things up by using block compression or auto-generating mipmaps. However, sometimes games do this stuff in software too, leading to the driver and the game both trying to fix problems at the same time: see Doom 3 and auto-mips making map loads very slow. Other times it can lead to incorrectly formatted cubemaps (OpenGL style vs D3D). Most of the problems are related to bad hinting in OpenGL renderers, because the OpenGL docs are quite vague and lead to nVidia offering a lot of defaults which should not be, or AMD following the spec to the letter and limiting things a bit too much; both sides are not blameless. Another recent example is Brink, which uses non-PoT textures as well as rendering to texture in a way that results in high-res temp textures getting mipmaps generated, slowing things down dramatically.

This can all be solved by just disabling CatAI, either on new cards by setting it to max (iirc, tho it drops IQ), or by the normal reg

Agree with this. AMD and nVidia are as bad as each other; however, I wanted a simple example that didn't require 8 pages of explanation. I do have a test application around here somewhere that shows a lot of the differences between AMD and nVidia (that were valid in 2010), and that was one of them: I load a 52x52 texture, which is displayed perfectly on an nVidia card (iirc a GTX 240) but fails to render at all on the ATI card (I believe it was a 4300 series).

Ris Dnalor
Minmatar
Fleet of Doom
Posted - 2011.05.19 09:13:00 - [19]
 

Edited by: Ris Dnalor on 19/05/2011 09:56:33
I'd be happy if my freakin' avatar would stop having a seizure while I'm trying to customize him. I really wonder who thought having the thing move about all willy-nilly during creation was a good idea?

Maybe if I downed the proper mixture of boosters before recustomizing, it would calm his nerves a bit...

Grimpak
Gallente
Midnight Elites
Echelon Rising
Posted - 2011.05.19 09:48:00 - [20]
 

Originally by: Spurty
Originally by: Landon Donaldson
So that last paragraph you wrote does not even look like English to me :D


The case is building up against you as well.

Just wanted you to be kept up to date with proceedings.

Even the BETA drivers for nVidia are head and shoulders more stable than AMD's (ATi diaf mate).

I get the distinct feeling that AMD is run by drunks.

tbh I have seen driver problems from BOTH sides, ATI and NV. Sometimes NV issues were actually worse, and to be fair about the "but omg NV deploys new drivers each month!!!11one": the timeframe between the previous and the new NV drivers is... hmm... 2-3 months?

ATI (AMD) actually has a driver release each month.


Not saying that ATI is bug-free, but NV isn't pristine either.

David Grogan
Gallente
The Motley Crew Reborn
Warped Aggression
Posted - 2011.05.19 10:13:00 - [21]
 

Originally by: Kidzukurenai Datael
Originally by: Landon Donaldson
I always swore by ATI cards in the past, but after having three blow up on me in less than a year I won't be going back in a hurry.


Crazily, I was thinking the exact opposite to you; I've also had three Nvidia cards die on me in the last year or so and was considering going over to ATI... Seems they're much of a muchness...


nVidia cards are great once you remove the stock heatsink and fan and put a decent Zalman heatsink and fan on them.

ATI/AMD cards are better bang for the buck, though.

Gothikia
Regeneration
Posted - 2011.05.19 10:16:00 - [22]
 

The question I have for the devs about the Carbon UI is the following: what makes it so special, and what changes have you made specifically that will enable a faster user interface, not just in terms of performance but in the ability to rapidly create new UI features?

Or is this just a bunch of performance tweaks, code being moved around, and marketing fluff?

Not-Apsalar
Posted - 2011.05.19 14:07:00 - [23]
 

Originally by: GateScout
Edited by: GateScout on 18/05/2011 18:26:24
Originally by: minx trader
Will this only be optimised for Nvidia cards, or will ATI/AMD cards be OK?

Is there any indication that this tech (whatever it is) will be optimized for a specific GPU? ...or are you just guessing?


Optimization is normal, but there is no indication of any features that will only be supported by one card. This is why the industry has APIs. One API might run a little faster on one card, but they all tell you what APIs are supported and you choose the card accordingly. Probably the only way you'd get into "special" features being supported would be particular features tied to displays (like Eyefinity), or something tied to multiple-videocard performance, since both major companies use different implementations. Even then, it's not visual things. You've either got the DX and shader support or you don't.

Waaaaaagggh
Posted - 2011.05.19 14:10:00 - [24]
 

Is this going to be the UI that was supposed to be released with Incursion? The one with the nice "Windows"-style task bar and all that?

*fingers crossed*

Gnulpie
Minmatar
Miner Tech
Posted - 2011.05.19 14:15:00 - [25]
 

All that is nonsense.

Of course the Carbon UI will run on any gfx card that meets the specification for EVE. Same as with the current UI.

The speed? Best turn all brackets off :P


 
