EVE Information Portal
New Dev Blog: TQ Level Up
 

Hawk TT
Caldari
Bulgarian Experienced Crackers
Posted - 2010.06.21 01:06:00 - [211]
 

Edited by: Hawk TT on 21/06/2010 01:06:24
Originally by: Jim Luc

I like this. I don't quite understand it, but I like this. Precise collision detection and ship model disintegration and debris would be so cool!

So, in layman's terms, how will this improve the Eve experience? If it won't fix lag, what improvements are going to be made to fix it? More hamsters?


Actually, GPGPU off-loading could fix a great portion of the lag, if such tech could be implemented for the SOL servers responsible for the space simulation in EVE.

Let me explain:
Each solar system in EVE is "simulated" and "hosted" by one "SOL Server". BTW, a "SOL Server" is a software process, not a physical server (machine).
Each "SOL Server" runs on one CPU core, because of some limitations of Stackless Python (the scripting language used for the server-side programming).
There are multiple "SOL Servers" per physical node, and they share memory / I/O / network resources, but no "SOL Server" can use multiple CPU cores!
As a result, the computing power available to a single "SOL Server" (i.e. a single EVE solar system) is limited to the computing power of only one CPU core!
That's why CCP are not using quad- and hexa-core CPUs with lower frequencies (2.0-2.66GHz). They prefer to use dual-core CPUs with the maximum frequency (3.33GHz) they can get.
All space simulation for a solar system is performed by a single CPU core - this means that all interactions between the ships loaded on the grid are calculated by a single CPU core!
There are hundreds of thousands of dynamic parallel calculations performed by the CPU core, like:
a) damage calcs depending on active modules, weapons & ammo, ships' speeds & trajectories, EW modules, gang bonuses etc.
b) collision detections of some sort (not very precise)
c) etc.
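
To picture how that adds up, here is a toy single-threaded tick loop (purely my own illustration, nothing to do with CCP's actual code): every pairwise interaction on the grid is evaluated one after another on the same core, so doubling the ships roughly quadruples the work one core must finish before the next tick.

# Toy single-core "grid tick": all pairwise ship interactions are evaluated
# serially. The names and the damage formula are made up for illustration only.
import random, time

ships = [{"hp": 1000.0, "dps": random.uniform(50, 400)} for _ in range(1000)]

def tick(grid):
    for attacker in grid:
        for target in grid:
            if attacker is target:
                continue
            target["hp"] -= attacker["dps"] * 0.01   # stand-in damage formula

start = time.time()
tick(ships)
print("one tick, one core, %d ships: %.2f s" % (len(ships), time.time() - start))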

There are also lots of I/O tasks performed - taking input from EVE clients is easy, sending out the current status of the simulation (e.g. a battle) to all players in the grid is another story. I don't want to go into more detail about why StacklessIO helped with the lag, but because the I/O became asynchronous, the other Python processes got "unlocked" (search for "Python GIL" in Google ;-), so more CPU resources have been freed for doing the math for the simulation.
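
A rough illustration of that point (mine, not CCP's StacklessIO): while the interpreter sits inside a blocking sendall(), nothing else can run; a select()-style loop only writes to sockets that are ready right now, so the simulation work in between never has to wait on one slow client.

# Blocking vs. non-blocking broadcast - a sketch only, not real server code.
import select

def broadcast_blocking(clients, payload):
    for sock in clients:
        sock.sendall(payload)          # one slow client stalls the whole loop

def broadcast_nonblocking(clients, payload):
    pending = set(clients)
    while pending:
        _, writable, _ = select.select([], list(pending), [], 0.01)
        for sock in writable:
            sock.send(payload)         # only sockets that can accept data now
            pending.discard(sock)
        # ...between select() calls the process is free to do simulation work...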

So far, so good - one big problem remains: modern CPUs have multiple computing cores, each core usually has some integer and some floating-point (FP) math units, but generally one CPU core can execute no more than 3-4 instructions per clock cycle. In the last 5-6 years we have seen an increase in the number of cores per CPU, but the maximum frequency has stalled between 3 and 3.5GHz. Far from the Intel promises that their CPUs would reach 10GHz by 2010 ;-)

On the other hand, GPGPUs are different beasts. They run @ 800-900MHz, but they have hundreds of INT/FP units capable of executing hundreds of instructions per cycle.
Another thing is that GPGPUs are hooked up to much, much faster memory with lower latency than the host CPUs have.

The iDataPlex M3 server-module I was writing about could have 2 x NVidia M2050 GPGPU modules.
Each Nvidia M2050 GPGPU has 448 Stream Processors and has the raw computing power of about 10 of the CPUs used by CCP (for double-precision FP).

Ok, THAT'S GREAT, BUT...there is always BUT ;-)
RAW power means nothing without the right software to unleash it.

That's why I was asking the questions above - there must be tons of problems when you try to off-load some work from the CPU to the GPGPU: moving data between CPU and GPGPU is slow (due to the PCIe interface), the GPGPU has 3-6GB of memory (vs. 16/32GB for the CPU), syncing processes running on the CPU and GPGPU could be a nightmare, etc.

So, in summary - GPGPU is still a novel and very promising technology. It has had great success in offline scientific simulations, video & 3D acceleration, etc.

It would be great if CCP manages to implement GPGPU technology in an MMORPG; this would really push the technology beyond the edge ;-)


Pyro Ninja
Gallente
Viral Industry
Posted - 2010.06.21 01:23:00 - [212]
 

So with all the new space can we have our space mines back? cuz those were really fun back in the day YARRRR!!

Caladain Barton
Navy of Xoc
The Remnant Legion
Posted - 2010.06.21 01:55:00 - [213]
 

Edited by: Caladain Barton on 21/06/2010 02:08:07
Edited by: Caladain Barton on 21/06/2010 02:04:43
Originally by: Hawk TT
Edited by: Hawk TT on 21/06/2010 01:06:24
Each "SOL Server" runs on one CPU core, because of some limitations of Stackless Python (scripting language used for the server side programming).

Not correct. Stackless Python does multi-core parallel programming easily.

Hell, you can do multi-core, multi-proc parallel programming pretty dang spiffily with Stackless Python... it's right there in the language. It's just that it's *harder* than single-threaded programming. Not much harder, mind, but a bit. (edit: It's harder if you don't wrap the complexity. If you're doing raw parallel as a cowboy code job (no planning, etc.) then it's a nightmare.)

(second edit: I'm referring to handling your own task switching in a real-time OS with Python, where you control process/thread creation, placement, and communication, rather than letting the kernel (Linux or Windows, CCP?) handle it for you. You guys are on a real-time system, right?)

The real tricky part is that the core of the node code is all single-threaded. It means they'd have to rewrite a good chunk of it to work right.

OR they can put Farmville-in-space and all the other mini-game/sov add-ons onto another core in the server and leave the main nodes running the Apocrypha core code.

CCP is not the first to do this. They are not the first to hit these walls. :-)

*IF* they have some wonky script-launching stuff going on that's preventing them from doing multi-threading over multiple cores, then they need to fix the wonkiness. Anything can be done... software is one step removed from the pure thought-stuff of the poet. It's all imaginary bits to be twiddled.

Oh, and this is a submarine game, not a spaceship game. It's all fluid dynamics, not Newtonian physics (which is harder to model). The calculations needed have been well modelled and optimized since the end of World War 2, thanks to a truckload of papers published by the United States Navy.

Hawk TT
Caldari
Bulgarian Experienced Crackers
Posted - 2010.06.21 08:32:00 - [214]
 

Edited by: Hawk TT on 21/06/2010 08:34:56
Originally by: Caladain Barton
Edited by: Caladain Barton on 21/06/2010 02:08:07
Edited by: Caladain Barton on 21/06/2010 02:04:43
Originally by: Hawk TT
Edited by: Hawk TT on 21/06/2010 01:06:24
Each "SOL Server" runs on one CPU core, because of some limitations of Stackless Python (scripting language used for the server side programming).

Not correct. Stackless Python does multi-core parallel programming easily.




Actually, you are not correct ;-) Don't take it personally, but there is a difference between "multi-threaded" and "multi-core".

Stackless Python supports so-called "tasklets" - basically a tasklet is a "micro-thread", but not at the system/OS level. A tasklet is a very lightweight and portable "micro-thread", so you could have tens of thousands of tasklets running in parallel in one OS-level thread. Try having tens of thousands of Windows/Linux/Unix threads running in parallel and you will need terabytes of system memory, and 90% of the CPU time will be consumed by scheduling and idling ;-)
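
For anyone curious, a minimal sketch of the idea (my toy example, assuming the stackless module is importable; it is obviously nothing like CCP's code): ten thousand tasklets happily coexist inside one OS thread, and therefore on one CPU core.

# Thousands of cooperative tasklets inside ONE OS thread / ONE core.
import stackless

def worker(worker_id):
    for tick in range(3):
        # do a tiny piece of work, then hand control back to the scheduler
        stackless.schedule()

for i in range(10000):
    stackless.tasklet(worker)(i)   # creating a tasklet costs almost nothing

stackless.run()                    # round-robin scheduling, single OS thread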

Stackless Python is an interpreter and runs on a VM (just like Java). The VM runs in a single OS-level thread, i.e. runs only on one CPU core.
Anyway, why "tasklets" and not "Windows OS threads"? Because "Windows OS threads" are hugely more "expensive" - each new thread consumes orders of magnitude more memory and puts orders of magnitude more pressure on the OS kernel scheduler, etc. Also, creating an OS thread is a much slower process than creating a "tasklet" in Stackless Python.
EVE needs to do thousands of tiny calculations in parallel, so running each "tiny calculation" in an OS-level thread is not feasible - it's insane. It's just that the Windows OS kernel is not designed for doing massively parallel computing of this kind, and the same goes for Linux and Unix.

Actually there are two major problems with Stackless Python:
a) The first is the "Global Interpreter Lock" (GIL) of Stackless Python - you could have 10,000 concurrent tasklets in one VM (i.e. one OS thread), but some operations hold the GIL, and until the tasklet holding it releases it, all other tasklets have to wait. A great part of these "blocking" operations are related to synchronous I/O transactions. That's why the new StacklessIO developed by CCP uses an asynchronous I/O model, thus avoiding lots of GIL stalls and freeing up lots of CPU cycles.
b) The second is the problem of having one Stackless VM per CPU core - spreading the Stackless VM's load across multiple OS threads and CPU cores means a different architecture for the VM and lots of other problems, and their resolution is beyond CCP's control.
For example, efficiently syncing or exchanging data between tasklets running in different OS-level threads / CPU cores is a serious challenge, because it involves severe overhead and performance penalties. It also means developing a new Stackless VM, a probable goal of the long-term project supported by Google, which is now delayed and potentially cancelled...

Hawk TT
Caldari
Bulgarian Experienced Crackers
Posted - 2010.06.21 08:39:00 - [215]
 

So, what options does CCP have?
a) Giving up on Stackless Python - it would mean rewriting millions of lines of code in... what? C++? Very unlikely, even if they had the R&D budget of Google or Micro$oft.
b) Throwing faster CPU cores at the problem - this would provide limited scalability, because modern CPUs go multi-core, they don't go to 10GHz, and the x86 architecture has very limited potential to improve in terms of IPC (instructions per cycle) performance.
c) Throwing faster node interconnects like InfiniBand + some HPC management stuff at it, so they could start moving SOL Servers on the fly (and not only during downtime) and on demand, so at least they could dedicate CPU cores to systems with large fleet battles. Jita has a dedicated CPU core and node, but Jita is predictable. Fleet battles are not so predictable, but at least CCP could gather statistics over time and could dedicate CPU cores to specific "hot" or "violent" 0.0 systems. I guess that CCP Yokai will announce improvements in this direction soon.
d) Removing (detaching) secondary services from the SOL Servers - Market, People & Places, Mail etc. - these are I/O-intensive services and they put extra load on the SOL nodes - CPU cycles, I/O, networking etc. I guess CCP has already accomplished the "detaching" of Market & Mail...
e) Last, but not least - employing GPGPUs on the server nodes that could off-load thousands of "tiny calculations" from the CPUs. x86 CPU cores are intrinsically serial in nature; GPGPUs are intrinsically parallel. The question is whether it is feasible to "export" some of the Stackless math functions to the GPGPU and whether the penalties and the overhead of moving data between CPU and GPGPU would be offset by the better raw performance of the GPGPU. It seems to me that it's worth experimenting - the current SOL nodes use the old Intel FSB architecture, and the effective memory bandwidth / latency is pretty comparable to PCIe 2.0 performance. Once the data and the math code are in the GPGPU's GDDR5 buffer, the calculations could be done in a massively parallel manner and the host CPU would have free cycles to do other stuff...
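
To make option (e) concrete, here is the kind of batch, array-style math that maps well onto a GPGPU. The sketch below uses NumPy plus the PyCUDA gpuarray wrapper purely as an illustration of the pattern (my choice of libraries, not anything CCP has announced): ship one batch of per-ship numbers across PCIe, do the element-wise math on the card, and copy only the results back.

# Hypothetical GPU off-load of a batch of per-ship damage numbers.
import numpy as np
import pycuda.autoinit              # sets up a CUDA context (needs an NVIDIA card)
import pycuda.gpuarray as gpuarray

n_ships = 100000
base_damage = np.random.rand(n_ships).astype(np.float32)
resists     = np.random.rand(n_ships).astype(np.float32)

d_damage = gpuarray.to_gpu(base_damage)        # one PCIe transfer for the batch
d_resist = gpuarray.to_gpu(resists)
d_applied = d_damage - d_damage * d_resist     # element-wise math runs on the GPU

applied = d_applied.get()                      # copy results back to the host
print(applied[:5])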


Warnings
Crimson Empire.
Nulli Secunda
Posted - 2010.06.21 12:14:00 - [216]
 

GPUs are great for physics calculations. You can see them as math co-processors, and the latency between CPU and GPU is small enough for that kind of calc.

Rewriting the whole EVE codebase in C++ could be a great idea. That would make it easier to get Mac/Linux/Windows versions without any changes, and writing multithreaded code is easier in C++.

Also, OpenGL 4.0 is multi-OS and an open standard, great for EVE.

And writing non-multithreaded code and running it on VMs is suicide in the long term.

And I seriously hope CCP doesn't plan to add more systems just to create less lag and fewer problems with heavily loaded systems; that would be a big mistake.

EVE is the best PvP game I have ever seen. If you kill PvP, you kill the game. And blobs will never stop with the current system.

The northern and southern coalitions will never stop, and even if one dies, a new one forms and fights. It's a perpetual thing.

Just do not create new systems. Why? Because new systems would create a lot of systems with nobody in them, or with one or two people farming 24/7.

In NPC regions, PvP is always there to stop guys from stealing your PvE, but in conquerable space, systems are always or almost always empty.

Why? No missions, and no officers.

Change the NPC system and you'll do a lot for EVE.

I think CCP must change the NPC system, and the whole 0.0 carebear system.

Wormholes are not a good solution for an alliance; you can't disappear for a week or two and skip CTAs.

The best solution for PvE in 0.0:

- Create agent upgrades for conquerable systems.
- Replace belt PvE with a new kind, with smarter and heavier NPCs. Killing a BS is far too easy for a Golem or a CNR. Stop belt farming: create NPCs that probe, sit in safe spots, warp out when in danger, call primaries, and can probe you down and kill you. Pod you too; it's 0.0, it's not friendly space.

Make it so you need to be 3-4 people to find NPCs and kill them. And put a stop to what's called farming. Farming means doing the same thing over and over, and that's not acceptable. We are not bots!!! Do the same for miners: they should need to probe out asteroid belts.

- Add an upgrade for each system so that if 10 players ask for help in the same place, and they have standings and the sov, some NPCs come to help.

Do your best ^^


Guttripper
Caldari
State War Academy
Posted - 2010.06.21 13:00:00 - [217]
 

Hopefully the new server cluster will not ask:

"Would you like to play a game?"

TheBlueMonkey
Gallente
Fags R Us
Posted - 2010.06.21 13:19:00 - [218]
 

shiny :)

I've skipped pages so forgive me if this has been asked before.

I've seen some of the cool things that VMware can do, namely live migration.
The demo I saw had them start a video on one virtual server and then pull the power on the physical node it was sitting on (an extreme example, I know), but what happened was that the VM manager switched the virtual server to a new node instantly, with no loss or judder in the video stream.

It was rather impressive. The demo then went on to show the other options where the VM manager looked after all the virtual servers:
when load increased on one node, the machine causing the CPU usage got moved to its own node; when a number of machines dropped in usage, they all got moved onto one node and the spare nodes were shut down to save heat and power.

cool stuff like that

Umm... I should phrase that as a question, shouldn't I?
Is that something you've looked into, and is it something that could be used as a means of helping with lag during fleet fights?

Most 0.0 systems remain rather quiet most of the time, so a number of them would share one node; then, when a large fleet approaches the constellation, the systems would get moved between nodes as people move around.
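
Something like this toy rebalancing policy is what I have in mind (entirely hypothetical, just to make the idea concrete): quiet systems get packed together, and any system whose load crosses a threshold is promoted to a node of its own.

# Hypothetical greedy rebalancer: pack quiet systems, isolate hot ones.
def rebalance(system_load, shared_capacity=100, hot_threshold=60):
    placement = {}        # system name -> node id
    free = []             # remaining capacity of each shared node
    for system, load in sorted(system_load.items(), key=lambda kv: -kv[1]):
        if load >= hot_threshold:             # big fleet: dedicated node
            placement[system] = len(free)
            free.append(0)
            continue
        for node_id, room in enumerate(free): # otherwise pack onto a shared node
            if room >= load:
                placement[system] = node_id
                free[node_id] -= load
                break
        else:
            placement[system] = len(free)
            free.append(shared_capacity - load)
    return placement

print(rebalance({"Jita": 95, "6JN-": 70, "Quiet-1": 5, "Quiet-2": 3}))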

Kimbeau Surveryor
Gallente
Posted - 2010.06.21 14:24:00 - [219]
 

Originally by: smaster
Originally by: CCP Yokai
Edited by: CCP Yokai on 16/06/2010 14:26:51
Do some research and see how many servers something like a game about secondary life needs to operate at that level.


The last time I was playing Second Life, there was no lag and no users yelling at their computers, hammering their keyboards in rage.

No lag in SL? Which planet are you on?!? :-D

Hawk TT
Caldari
Bulgarian Experienced Crackers
Posted - 2010.06.21 14:27:00 - [220]
 

Originally by: TheBlueMonkey
shiny :)

I've skipped pages so forgive me if this has been asked before.

I've seem some of the cool things that vmware can do, namely live switching.



Yep, VMware vSphere can do funky stuff like live migration of VMs between hosts. But even though it has a bare-metal hypervisor, its performance is less than the native performance of the bare metal. CCP & EVE need every bit of performance and every possibility to reduce latency; any additional layer would add latency and would "eat" into the raw performance of the server node. Anyway, vSphere cannot map a virtual CPU (vCPU) to more than one physical CPU core, so it can't help with the single-OS-threaded nature of the SOL Servers Rolling Eyes

Malorek
Minmatar
Nexus Advanced Technologies
Posted - 2010.06.21 15:18:00 - [221]
 

Anyone know approx. how much this server setup costs in U.S. $?

I am just curious.

cmaxx
Posted - 2010.06.21 16:28:00 - [222]
 

Hey CCP Yokai. I'm really interested in what you're doing and I wish you guys all the best on Wed.

I am interested, and a little surprised, by the comment that you've not saturated 1 GbE yet - cos that's not what I'd be paying attention to. :)

From my perspective, looking at the saturation of a network when considering whether to move up to the next speed grade means missing a big opportunity for latency reduction/management.

Consider: a server like a SQL Server or a SOL server won't be able to do anything at all with a packet until the very last byte has been received off the wire. Only then can the payload checksum be verified and the data within it passed up through the network stack, across the kernel boundary to the process. (aside: If it's a fragmented packet it'd be even worse.)

Now, consider the respective times for the *last* byte of a packet to be delivered over 1 GbE and 10 GbE.

The 10GbE connexion cuts that last-byte latency 10-fold. The first bit of the first byte should get there about as fast in both cases, but it's the last bit of the last byte that lets work begin.

Consider also that servers receive packets in streams and bursts, from lots of clients... so that 10-fold reduction in the time-to-last-byte for one packet also shrinks the queuing delay of every packet waiting behind it, and the absolute saving compounds out until you reach the end of an average burst pattern.
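
A quick back-of-the-envelope check of that last-byte argument (my numbers, a standard full-sized Ethernet frame assumed):

# Serialization delay for a full-sized frame at each link speed.
FRAME_BITS = 1500 * 8

for gbps in (1, 10):
    last_byte_us = FRAME_BITS / (gbps * 1e9) * 1e6
    print("%2d GbE: last byte arrives after ~%.1f microseconds" % (gbps, last_byte_us))

# 1 GbE: ~12 microseconds per frame; 10 GbE: ~1.2 microseconds - and for a burst
# of N queued frames the absolute saving is N times larger.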

And you see the benefit in both directions, from client to server, and from server back to client.

For my part, I think it's really worth bumping the network speed grade when the latency seen at end-user clients has a significant component made up from the sequential and/or combinatoric latency terms within the server cluster, which is my best guess for how TQ ends up working today.

10GbE port costs are now quite affordable, as are client card costs, especially if you get a bulk discount.

Now Cisco's latest hardware may be quite good, but being a corporate behemoth and the market leader, they're frankly a little complacent, a little internally disorganized and no longer the technical be-all-and-end-all they once were. I have a little insight here.

With TQ being such a small (numerically) cluster and having important latency-driven behaviour within the server cluster architecture, I'd consider looking for switches from someone who has crafted their 10GbE hardware for the perfect combination of aggressive performance (even sub-microsecond) and high reliability... the sort of thing people would use for high-frequency trading infrastructure in global financial exchanges and for serious scientific computing. Someone like Arista: http://www.aristanetworks.com/ . Turns out they're also very price-competitive. Potential for wins all round.

Enable Chimney etc. on Windows to get the best possible assist from the hardware and it'll push the bottlenecks right back to the software, CPUs and memory subsystems again.

But I guess you've spent your budget, so.. maybe in another year or so. :P

geouss
Posted - 2010.06.21 17:46:00 - [223]
 

Hi,
I have a question on the new changes.

Is the bulk data going to be adjusted as well? There have been a few problems with it at the moment, although it only affects a few players.

Caladain Barton
Navy of Xoc
The Remnant Legion
Posted - 2010.06.21 23:36:00 - [224]
 

Edited by: Caladain Barton on 21/06/2010 23:40:52
Originally by: Hawk TT
Edited by: Hawk TT on 21/06/2010 08:34:56
Originally by: Caladain Barton
Edited by: Caladain Barton on 21/06/2010 02:08:07
Edited by: Caladain Barton on 21/06/2010 02:04:43
Originally by: Hawk TT
Edited by: Hawk TT on 21/06/2010 01:06:24
Each "SOL Server" runs on one CPU core, because of some limitations of Stackless Python (scripting language used for the server side programming).

Not correct. Stackless Python does multi-core parallel programming easily.




Actually, you are not correct ;-) Don't take it personally, but there is a difference between "multi-threaded" and "multi-core".




Of course. Because I'm not currently neck-deep in it day-to-day?

You are absolutely correct that running a single Python interpreter instance produces, at most, microthreads. Spawn as many microthreads as you like, but you're trapped on a single core. BUT, it is trivial to get multiple instances of that interpreter running and talking to one another. Python is a great glue language, and it's one of the reasons I love it and why we work in it. The overhead bites you when you have 128 physical cores in a box, but the performance gains are not trivial and well ahead of the curve.

The real key is that you treat the entire system as a real-time system. All communication between the OS-level Python threads is established during initialization of the system. You want to keep OS-level processes alive. The cost of creating, spamming, and killing OS threads/processes is too much and negates the purpose... let them idle if they are not doing anything. The microthreads are still there, but now you shift them to cross-communicate between CPU cores, keeping all the heavy lifting on the local core as much as possible. *If you spam processes, you will chew huge amounts of memory. The processes are merely there to facilitate the transfer of data from one core/proc to another. Data sync is, as with all parallel programming, challenging, but not exactly new ground.*
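
A bare-bones sketch of that pattern with the standard multiprocessing module (my illustration, obviously not CCP's code): one long-lived interpreter process per core, spawned once at start-up and fed work over queues, so each core gets its own interpreter and its own GIL.

# Long-lived worker processes, one per core, created once and kept alive.
import multiprocessing as mp

def core_worker(inbox, outbox):
    for task in iter(inbox.get, None):   # run until a None sentinel arrives
        outbox.put(task * task)          # stand-in for real grid math

if __name__ == "__main__":
    inbox, outbox = mp.Queue(), mp.Queue()
    workers = [mp.Process(target=core_worker, args=(inbox, outbox)) for _ in range(4)]
    for w in workers:
        w.start()                        # spawn once; idle workers are cheap
    for task in range(100):
        inbox.put(task)
    results = [outbox.get() for _ in range(100)]
    print("got", len(results), "results back")
    for w in workers:
        inbox.put(None)                  # one sentinel per worker
    for w in workers:
        w.join()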

Don't want to deal with all that? It wouldn't take more than a couple of weeks to rewrite the parts of the interpreter that need to support it.

Don't like Python? You can do the same in Java.

Really, the key is to spread the SOL node calculations across multiple cores. The rest is a mix of non-trivial and trivial details, but nothing is out of reach.

Zendoren
Aktaeon Industries
The Black Armada
Posted - 2010.06.22 02:14:00 - [225]
 

Thought I should take some time and contribute something a little useful to this thread.

In other words, pictures of hardware! =P

~ IBM HS21 ~
http://www.youtube.com/watch?v=qQq3ZeDaBRk (Video) [This is the MX version]
http://www.flickr.com/photos/e53/2367109255/

~ IBM X3850 M2 ~ (Was hard finding pics on this little bugger)

http://www-01.ibm.com/redbooks/community/pages/viewpage.action?pageId=2788036

If any information or pictures are not correct feel free to correct me! However, this is intended to hold us over until CCP Yokai posts another devblog!

=P

Mjolinnar
Posted - 2010.06.22 03:31:00 - [226]
 

From my POV, I believe that nearly everyone has missed the real issue here..........
In that dead sexy, cold, so very cold server room pic, I saw NOWHERE to chill the BEER!
We can't just have the beer on the floor, that's an OH&S hazard.

Why is this so? Don't you guys drink beer?

Mojo! Laughing

boliano
Posted - 2010.06.22 04:08:00 - [227]
 

Originally by: CCP Yokai
I'll set all my accounts to a 6 hour skill just to be sure ;)



All accounts? Come on, tell us how many accounts a CCP employee has.

Sed Man
Gallente
Havoc Violence and Chaos
Posted - 2010.06.22 05:08:00 - [228]
 

Originally by: Laendra
Impressive Very Happy


I actually expected a lot more, at least expected to see a few JS12/JS22... I don't quite understand why using Windows to drive this system is more effective than porting to AIX or another mainstream midrange OS... maybe in the future...

Sed Man
Gallente
Havoc Violence and Chaos
Posted - 2010.06.22 05:17:00 - [229]
 

Originally by: Hawk TT
So, what options does CCP have?
a) Giving up on Stackless Python - it would mean rewriting millions of lines of code in... what? C++? Very unlikely, even if they had the R&D budget of Google or Micro$oft.
b) Throwing faster CPU cores at the problem - this would provide limited scalability, because modern CPUs go multi-core, they don't go to 10GHz, and the x86 architecture has very limited potential to improve in terms of IPC (instructions per cycle) performance.
c) Throwing faster node interconnects like InfiniBand + some HPC management stuff at it, so they could start moving SOL Servers on the fly (and not only during downtime) and on demand, so at least they could dedicate CPU cores to systems with large fleet battles. Jita has a dedicated CPU core and node, but Jita is predictable. Fleet battles are not so predictable, but at least CCP could gather statistics over time and could dedicate CPU cores to specific "hot" or "violent" 0.0 systems. I guess that CCP Yokai will announce improvements in this direction soon.
d) Removing (detaching) secondary services from the SOL Servers - Market, People & Places, Mail etc. - these are I/O-intensive services and they put extra load on the SOL nodes - CPU cycles, I/O, networking etc. I guess CCP has already accomplished the "detaching" of Market & Mail...
e) Last, but not least - employing GPGPUs on the server nodes that could off-load thousands of "tiny calculations" from the CPUs. x86 CPU cores are intrinsically serial in nature; GPGPUs are intrinsically parallel. The question is whether it is feasible to "export" some of the Stackless math functions to the GPGPU and whether the penalties and the overhead of moving data between CPU and GPGPU would be offset by the better raw performance of the GPGPU. It seems to me that it's worth experimenting - the current SOL nodes use the old Intel FSB architecture, and the effective memory bandwidth / latency is pretty comparable to PCIe 2.0 performance. Once the data and the math code are in the GPGPU's GDDR5 buffer, the calculations could be done in a massively parallel manner and the host CPU would have free cycles to do other stuff...




This is why I wonder why not an IBM pSeries box... a 595 or even an older 690... LPARs can dynamically share CPU from the pool and capacity can move to where it's needed... can't do that with the Wintel cluster.... a new 595 with POWER CPUs (8-core) will run the whole thing, and all comms over the virtual switches are at InfiniBand/memory speed... and with VIO you can chuck many HBAs and MPIO up to the LPAR.... many advantages over blades/blade centres... and if you have two of these 595s and the right software/firmware you can dynamically move all the LPARs from one chassis to another... anyhow... I'm sure CCP has it all under control... even though it is on Windows...

amarri victari
Posted - 2010.06.22 09:10:00 - [230]
 

Ignore my children, Fallout - for they will always have their opinions and ideas - it's the way of intorwebspaceshipz; it comes from spending too long in a pod Rolling Eyes.

I seriously appreciate, and look forward to the changes you are implementing.

Certainly a step in the right direction, and I agree that starting from the ground up makes perfect sense.

My thanks to CCP

.......now, about the ship I lost in the last lag..... Very Happy

Murauke
EvE Cookie Collective
Posted - 2010.06.22 09:31:00 - [231]
 

Will this sort out the black-screening, lagging, desyncing and whatever else is making this game's performance so bad in heavily loaded systems? We all know what I mean here - will it prevent situations like Y-2 and 6JN? And will it help blob warfare, so that the decision TO fight is based not on server performance but on the skill and ship setups of the fleets?

Lady Tramp
Amarr
Void Angels
Wildly Inappropriate.
Posted - 2010.06.22 09:50:00 - [232]
 

Originally by: CCP Yokai
I'll set all my accounts to a 6 hour skill just to be sure ;)


I think we need proof of that one. Screenie and API keys :)

Gillbird
Caldari
Posted - 2010.06.22 10:59:00 - [233]
 

Edited by: Gillbird on 22/06/2010 11:04:40
Originally by: Dhalya
And finally you are routing your heavy IP traffic through your 7609/7613 Cisco, utilizing the 720Mbps modules (route processors) & DFCs, with multiple WAN links.
Although the size of your WAN links as a whole can be a deadly factor in the lag issue, your running hardware is most probably your main issue (apart from coding issues).



Is that pic in the dev blog actually from your running system? IMHO a 20-Gbps fabric is quite tight for your whole system if you have connected all your servers to your router modules - or are you using a blade rack with a blade switch that has something like 2 x 1Gb uplinks? Then you have tight uplinks ;)

It could be that you cannot tell us everything, but it would be nice to know what kind of system you have in there.


Murauke
EvE Cookie Collective
Posted - 2010.06.22 11:14:00 - [234]
 

Childish side note: why are you doing this between 9 and 3 when England are playing at 3.30? :)

Gedrick frogue
Gallente
ORIGIN SYSTEMS
Shadows of Light
Posted - 2010.06.22 12:10:00 - [235]
 

Yarr for the extra-cold hamsters, but I do hope they have been fitted out with some cozy slippers to keep their feet warm; after all, hamsters with frostbite aren't going to perform well Very Happy

Thaddeus Veselic
Posted - 2010.06.22 12:12:00 - [236]
 

I'm just curious to know what vendor/model your SSD SAN is?

Gillbird
Caldari
Posted - 2010.06.22 12:47:00 - [237]
 

Edited by: Gillbird on 22/06/2010 12:48:17
Edited by: Gillbird on 22/06/2010 12:47:46
Originally by: Thaddeus Veselic
I'm just curious to know what vendor/model your SSD SAN is?

http://www.eveonline.com/ingameboard.asp?a=topic&threadID=1000604&page=5#136

Aineko Macx
Posted - 2010.06.22 12:58:00 - [238]
 

Originally by: Hawk TT
Actually, GPGPU off-loading could fix a great portion of the lag, if such tech could be implemented for the SOL servers responsible for the space simulation in EVE.

The possibilities of (GP)GPU computing are often overestimated. "Branchy" OO game code is typically NOT well suited to GPU architectures. What often (but far from always) does run well on GPUs are signal-processing-type workloads (lots of SIMD).
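
A tiny illustration of the difference (my own example): the branchy, per-object version below takes a different path for every ship, while the array version expresses the same logic as uniform element-wise math, which is the shape of work a GPU (or even SSE on the CPU) actually likes.

# Branchy, per-object style: fine on a CPU core, poor fit for SIMD/GPU lanes.
import numpy as np

def apply_damage_branchy(ships, damage):
    for s in ships:
        if s["shield"] > 0:
            s["shield"] = max(0.0, s["shield"] - damage)
        else:
            s["armor"] = max(0.0, s["armor"] - damage)

# Same logic as uniform array math: every element follows the same instructions.
def apply_damage_vectorized(shield, armor, damage):
    has_shield = shield > 0
    new_shield = np.where(has_shield, np.maximum(shield - damage, 0.0), shield)
    new_armor  = np.where(has_shield, armor, np.maximum(armor - damage, 0.0))
    return new_shield, new_armor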

As for the upgrades, the "lag" improvement should be minimal, even considering that people are calling "lag" what are in fact at least four different issues (grid load, traffic controls, client-side performance issues, actual lag). CCP stated that with Dominion the CPU usage on the servers actually went down, so they couldn't (and by the looks of it still can't, even 6 months later) determine what was causing all those issues in things that worked just fine before.

llllSeraphimllll
Posted - 2010.06.22 16:43:00 - [239]
 

Can anyone confirm whether the skill queue will still function as normal? Or will we have to put on "one" long skill to see us through?

Thanks for all the info Yokai, it's very interesting!

Dzajic
Gallente
Posted - 2010.06.22 21:58:00 - [240]
 

Since I was away from EVE at the time (and am not a 0.0 denizen), I only recently read that it was Dominion that brought the new sov mechanics and "star system upgrade POS modules" to the game. Maybe some of those are the culprits eating all the CPU time (or DB access).

It is a serious problem with your game, CCP, and while mass tests are still going on, there haven't been any dev blogs adding a single word beyond that one old "we are aware of the issue".

As a lot of folks have said, throwing a hardware upgrade at a software issue - and one that isn't innate to the system (OK, lag and grid load and all that are innate to the system, but the post-Dominion decrease in performance is... post-Dominion) - is not the right way.

Hopefully all the sweet little furry hamsters will wake up healthy and clean tomorrow. But the fact remains that until you fix whatever Dominion broke, there is a danger that the next large battle will cause some other coalition to lose 20 supercaps to not being able to control their ships for dozens of minutes or more.

What would happen if alliance leaders finally blew their lids and joined together to give a nice little interview to massively.com and tentonhammer about how EVE is broken and unplayable? That wouldn't be nice for anyone, but there is a limit to what paying customers can tolerate.

