EVE General Discussion
EVE-Files' harddrive went poof again
 
This thread is older than 90 days and has been locked due to inactivity.


 
Pages: [1] 2

Author Topic

Chribba
Otherworld Enterprises
Otherworld Empire
Posted - 2007.12.28 11:20:00 - [1]
 

The database drive of EVE-Files went poof again, am on my way over to replace the disk and restore the database(s) but decided to have EVE-Files (and other sites) to remain offline until it's all been fixed. Sorry for the trouble everyone.

/c

Pilk
Evolution
IT Alliance
Posted - 2007.12.28 11:28:00 - [2]
 

We still love you.

I just wish you'd love(ship) me back. Crying or Very sad

--P

Wiggy69
5punkorp
Posted - 2007.12.28 11:28:00 - [3]
 

I'm sure we'll cope. You provide a fantastic service for nothing, so it's inevitable that things will break at one time or another.

GO CHRIBBA! TO THE CHRIBBAMOBILE! Cool

Sadayiel
Caldari
Inner Conflict
Posted - 2007.12.28 11:36:00 - [4]
 

AWWW poor chribba, it seems that now piwats even gank his harddrives Sad

well, be happy, we'll keep some veldspar for you hun Wink

Lazuran
Gallente
Aliastra
Posted - 2007.12.28 12:10:00 - [5]
 

RAID-1 ftw. ...

Dave White
Interlude.
Posted - 2007.12.28 12:25:00 - [6]
 

Originally by: Lazuran
RAID-1 ftw. ...



Was thinking this Smile

Cergorach
Amarr
The Helix Foundation
Posted - 2007.12.28 12:31:00 - [7]
 

Originally by: Lazuran
RAID-1 ftw. ...


RAID-6 is even safer (you can lose two drives in the array without a problem), but it is a bit on the expensive side (not many RAID cards support it, and those that do are expensive).

Iva Soreass
Friar's Club
Posted - 2007.12.28 12:43:00 - [8]
 

GL Mr C hope you can sort it.

Brackun
State Protectorate
Posted - 2007.12.28 12:45:00 - [9]
 

I would've thought RAID-5 would be better, it's what most NAS boxes seem to default to anyway - and they're dedicated storage devices.

Blafbeest
Gallente
North Eastern Swat
Pandemic Legion
Posted - 2007.12.28 12:46:00 - [10]
 

its my fault, can't stop downloading so i blew ur hdd sorry ><

Lazuran
Gallente
Aliastra
Posted - 2007.12.28 12:51:00 - [11]
 

Originally by: Brackun
I would've thought RAID-5 would be better, it's what most NAS boxes seem to default to anyway - and they're dedicated storage devices.


RAID-5 needs 3 drives or more and is kinda expensive to do (parity calculations). It "wastes" only 1 drive, so it's a good default for NAS.

RAID-6 needs 4 drives or more and is even more expensive to do (more parity calculations). It is also rather "new" (as in it hasn't been widely available for very long).

RAID-1 is extremely cheap to implement and requires 2 drives; most current PCs support it out of the box at the BIOS level.
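
As a rough sketch of the capacity/fault-tolerance arithmetic behind those trade-offs (the drive counts and sizes below are made-up examples, not anyone's actual hardware):

# Usable capacity and fault tolerance for the RAID levels discussed above.
# Drive counts and sizes are illustrative only.
def raid_summary(level, drives, size_gb):
    """Return (usable_gb, failures_survived) for a simple RAID array."""
    if level == 1:                        # mirroring: capacity of a single drive
        return size_gb, drives - 1
    if level == 5:                        # striping + one drive's worth of parity
        return (drives - 1) * size_gb, 1
    if level == 6:                        # striping + two drives' worth of parity
        return (drives - 2) * size_gb, 2
    raise ValueError("unsupported RAID level")

for level, drives in [(1, 2), (5, 3), (6, 4)]:
    usable, survived = raid_summary(level, drives, size_gb=500)
    print(f"RAID-{level}: {drives} x 500 GB -> {usable} GB usable, "
          f"survives {survived} failed drive(s)")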

Chribba
Otherworld Enterprises
Otherworld Empire
Posted - 2007.12.28 13:03:00 - [12]
 

anyone know if RAMdrives will fail after a while too? since the problem with db-drives is the massive write/reads, making a RAMdrive would go poof on the RAM after a while too or anyone know?

Mobius Dukat
Merch Industrial
GoonSwarm
Posted - 2007.12.28 13:10:00 - [13]
 

Edited by: Mobius Dukat on 28/12/2007 13:11:37
Originally by: Chribba
anyone know if RAMdrives will fail after a while too? since the problem with db-drives is the massive write/reads, making a RAMdrive would go poof on the RAM after a while too or anyone know?



Going by the basic principle that RAM drives are just RAM sticks in a box:

No. They have no moving parts and very little heat build-up...

Ask yourself "how often does a RAM stick fail in your server?" It isn't often, I'll bet :)

However, lose power to the server and you lose your memory-resident database [unless you have one of those s****y alternate power supply drives].

You'd have to have some process to back up said memory-resident database every few minutes. But that said, if they work as they do in principle, then unless you shut the machine down it shouldn't be a problem :)
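
A minimal sketch of such a periodic-snapshot process, assuming a SQLite database kept on a RAM disk (the paths and interval are hypothetical examples, not EVE-Files' actual setup):

# Copy a RAM-disk-hosted SQLite database to persistent storage every few
# minutes, so a power loss only costs the changes since the last snapshot.
import sqlite3
import time

RAM_DB = "/mnt/ramdisk/evefiles.db"      # hypothetical database on volatile storage
DISK_COPY = "/var/backups/evefiles.db"   # hypothetical copy on a persistent disk
INTERVAL_SECONDS = 300                   # snapshot every five minutes

def snapshot():
    # sqlite3's backup API copies a consistent snapshot even while the
    # source database is in use.
    src = sqlite3.connect(RAM_DB)
    dst = sqlite3.connect(DISK_COPY)
    try:
        src.backup(dst)
    finally:
        src.close()
        dst.close()

if __name__ == "__main__":
    while True:
        snapshot()
        time.sleep(INTERVAL_SECONDS)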

Franga
NQX Innovations
Posted - 2007.12.28 13:12:00 - [14]
 

Originally by: Chribba
anyone know if RAMdrives will fail after a while too? since the problem with db-drives is the massive write/reads, making a RAMdrive would go poof on the RAM after a while too or anyone know?


Are you talking about the 'solid-state' drives?

Chribba
Otherworld Enterprises
Otherworld Empire
Posted - 2007.12.28 13:17:00 - [15]
 

Originally by: Franga
Originally by: Chribba
anyone know if RAMdrives will fail after a while too? since the problem with db-drives is the massive write/reads, making a RAMdrive would go poof on the RAM after a while too or anyone know?


Are you talking about the 'solid-state' drives?
I am talking about either SSDs or RAMdrives, e.g. Gigabyte's or virtual ones. IIRC SSDs are not that good for databases.

Lazuran
Gallente
Aliastra
Posted - 2007.12.28 13:18:00 - [16]
 

Edited by: Lazuran on 28/12/2007 13:19:27
Originally by: Chribba
anyone know if RAMdrives will fail after a while too? since the problem with db-drives is the massive write/reads, making a RAMdrive would go poof on the RAM after a while too or anyone know?


If you mean those Gigabyte i-ram drives, the answer is probably no: no mechanical parts and RAM can be rewritten for a long time (no limits known to me). But i-ram is rather small.

If you mean SSDs, they have a finite lifetime and I doubt the manufacturers' MTBF claims of 2m hours (vs. 500k hours for mechanical drives). These must be extrapolated failure rates for short operation intervals, i.e. 2000 drives running for 1000 hours => ~1 broken = 2m hours MTBF (simplified). However, Flash memory has a finite lifespan (measured in the number of times a block can be written), so such an extrapolation is not correct. It's also probably based on assumptions of writes/hour for laptops and such, so not nearly what you'd be doing for database work.

Your best tradeoff for cost/space/life is probably a RAID-1 of two 32GB ATA/SATA SSDs. They are very fast for DB work, but make sure you do some ageing on one of the two SSDs before you set up the RAID, or they might both fail at the same time (identical writes, finite number of total writes ...).
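
The extrapolation described above, spelled out (using the simplified numbers from the post, not real test data):

# Simplified MTBF extrapolation: run many drives for a short interval,
# count failures, and divide total drive-hours by the failure count.
def extrapolated_mtbf_hours(drives, hours_each, failures):
    return drives * hours_each / failures

# 2000 drives for 1000 hours with ~1 failure -> ~2,000,000 hours "MTBF",
# even though no individual drive ran anywhere near that long.
print(extrapolated_mtbf_hours(drives=2000, hours_each=1000, failures=1))  # 2000000.0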


Kastar
Paragon Horizons
THORN Alliance
Posted - 2007.12.28 13:36:00 - [17]
 

Thank for your commitment Chribba Razz

Franga
NQX Innovations
Posted - 2007.12.28 13:40:00 - [18]
 

Originally by: Chribba
Originally by: Franga
Originally by: Chribba
anyone know if RAMdrives will fail after a while too? since the problem with db-drives is the massive write/reads, making a RAMdrive would go poof on the RAM after a while too or anyone know?


Are you talking about the 'solid-state' drives?
I am talking about either SSDs or RAMdrives, e.g. Gigabyte's or virtual ones. IIRC SSDs are not that good for databases.


Okay then, in that case I echo Mobius Dukat's post regarding them. Please note, however, that I have never used them and have no first-hand experience; any of my knowledge on this subject comes from magazines and/or reviews I have read.

Meirre K'Tun
The Littlest Hobos
Ushra'Khan
Posted - 2007.12.28 13:57:00 - [19]
 

aren't ram drives rather small compared to normal ones?

Lanu
0utbreak
Outbreak.
Posted - 2007.12.28 14:16:00 - [20]
 

Chribba <3

Dark Shikari
Caldari
Deep Core Mining Inc.
Posted - 2007.12.28 14:26:00 - [21]
 

Originally by: Lazuran
Edited by: Lazuran on 28/12/2007 13:19:27
Originally by: Chribba
anyone know if RAMdrives will fail after a while too? since the problem with db-drives is the massive write/reads, making a RAMdrive would go poof on the RAM after a while too or anyone know?


If you mean those Gigabyte i-ram drives, the answer is probably no: no mechanical parts and RAM can be rewritten for a long time (no limits known to me). But i-ram is rather small.

If you mean SSDs, they have a finite lifetime and I doubt the manufacturers' MTBF claims of 2m hours (vs. 500k hours for mechanical drives). These must be extrapolated failure rates for short operation intervals, i.e. 2000 drives running for 1000 hours => ~1 broken = 2m hours MTBF (simplified). However, Flash memory has a finite lifespan (measured in the number of times a block can be written), so such an extrapolation is not correct. It's also probably based on assumptions of writes/hour for laptops and such, so not nearly what you'd be doing for database work.
I'm pretty sure that SSD tech has gotten to the point where solid state drives last longer overall than ordinary hard drives. The "finite lifespan" crap is basically nothing more than FUD at this point, I think.

Lazuran
Gallente
Aliastra
Posted - 2007.12.28 14:39:00 - [22]
 

Edited by: Lazuran on 28/12/2007 14:41:37
Originally by: Dark Shikari
Originally by: Lazuran
Edited by: Lazuran on 28/12/2007 13:19:27
Originally by: Chribba
anyone know if RAMdrives will fail after a while too? since the problem with db-drives is the massive write/reads, making a RAMdrive would go poof on the RAM after a while too or anyone know?


If you mean those Gigabyte i-ram drives, the answer is probably no: no mechanical parts and RAM can be rewritten for a long time (no limits known to me). But i-ram is rather small.

If you mean SSDs, they have a finite lifetime and I doubt the manufacturers' MTBF claims of 2m hours (vs. 500k hours for mechanical drives). These must be extrapolated failure rates for short operation intervals, i.e. 2000 drives running for 1000 hours => ~1 broken = 2m hours MTBF (simplified). However, Flash memory has a finite lifespan (measured in the number of times a block can be written), so such an extrapolation is not correct. It's also probably based on assumptions of writes/hour for laptops and such, so not nearly what you'd be doing for database work.
I'm pretty sure that SSD tech has gotten to the point where solid state drives last longer overall than ordinary hard drives. The "finite lifespan" crap is basically nothing more than FUD at this point, I think.


That is basically what manufacturers claim, as well as those reviewers who look at the MTBF numbers without asking themselves whether any of these drives have been running for 2 million hours yet. ;-)

It is a fact that Flash-based SSD has a finite number of write cycles for each flash element. This limit has seen a dramatic increase in the past few years, but still varies greatly among the manufacturers. These drives now remap overly used blocks to increase the durability and use other neat tricks, but in the end the MTBF is a fictional number based on assumptions made by the manufacturer.

Now when Samsung claims 2m hours MTBF, but their product is aimed at the laptop market rather than the server market, they will assume typical laptop usage, which is several orders of magnitude less write-intensive than DB server usage. So you cannot compare a SCSI disk for servers with MTBF of 800k with a SSD for laptops with MTBF of 2m and conclude that the latter is 2.5 times more durable. It isn't. And a lot of reviewers make the mistake of assuming that MTBF is based on the assumption of continuous use, it isn't.

Anandtech writes about this:
This means an average user can expect to use the drive for about 10 years under normal usage conditions or around five years in a 100% power-on state with an active/idle duty cycle at 90%. These numbers are subject to change depending upon the data management software algorithms and actual user patterns.

This is not higher than current server disks, where typical usage patterns are much more write intensive...

PS. the finite lifespan is of course a technological limit and cannot be removed ... current flash SSD's offer around 100k write/erase cycles per sector apparently.
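
For a sense of scale, a back-of-the-envelope endurance estimate based on that 100k-cycle figure (it assumes perfect wear levelling, and the capacity and write rates are made-up examples):

# Back-of-the-envelope flash endurance under perfect wear levelling:
# total writable data = capacity * write/erase cycles per cell,
# lifetime = total writable data / daily write volume.
def years_until_worn_out(capacity_gb, cycles, gb_written_per_day):
    total_writable_gb = capacity_gb * cycles
    return total_writable_gb / gb_written_per_day / 365

# 32 GB drive, 100k cycles, a write-heavy database pushing 500 GB/day:
print(round(years_until_worn_out(32, 100_000, 500), 1))    # ~17.5 years
# The same drive under a pathological 5 TB/day workload:
print(round(years_until_worn_out(32, 100_000, 5_000), 1))  # ~1.8 years

Real drives do not achieve perfect wear levelling, so actual lifetimes land somewhere below these figures.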





Raaki
The Arrow Project
Morsus Mihi
Posted - 2007.12.28 14:53:00 - [23]
 

Everything breaks if you wait long enough.

It's redundancy you want, not something that is claimed to be unbreakable.

Bish Ounen
Gallente
Best Path Inc.
Cult of War
Posted - 2007.12.28 15:26:00 - [24]
 

Edited by: Bish Ounen on 28/12/2007 15:26:17
Chribba,

I would check to see what kind of drives your DB servers are using. Ideally you should be running a RAID 5 array on SCSI drives, NOT SATA drives!

Many lower-end datacenters have been swapping out for SATA RAID systems due to the significant cost saving over SCSI, and the much improved lifespan of the new SATA drives over older IDE and EIDE drives (now commonly called PATA). The problem with moving to a SATA drive array is that while those drives are fine for raw data storage, where the amount of read-write operations is low, on a DB server with a HIGH amount of read-writes they can burn out very, very fast.

SCSI was designed for the extreme environment of a database server and the millions upon millions of read-write operations per day that a busy DB server can perform. Using a RAID5 array in SCSI is the very best long-term fault-tolerant setup you can get. It costs more, but it's better than losing the DB drives every 6 months.

-----------------------------------

Shar'Tuk TheHated
Posted - 2007.12.28 15:27:00 - [25]
 

<3 Arrow Chribba

Regat Kozovv
Caldari
Alcothology
Posted - 2007.12.28 15:34:00 - [26]
 

Originally by: Bish Ounen
Edited by: Bish Ounen on 28/12/2007 15:26:17
Chribba,

I would check to see what kind of drives your DB servers are using. Ideally you should be running a RAID 5 array on SCSI drives, NOT SATA drives!

Many lower-end datacenters have been swapping out for SATA RAID systems due to the significant cost saving over SCSI, and the much improved lifespan of the new SATA drives over older IDE and EIDE drives (now commonly called PATA). The problem with moving to a SATA drive array is that while those drives are fine for raw data storage, where the amount of read-write operations is low, on a DB server with a HIGH amount of read-writes they can burn out very, very fast.

SCSI was designed for the extreme environment of a database server and the millions upon millions of read-write operations per day that a busy DB server can perform. Using a RAID5 array in SCSI is the very best long-term fault-tolerant setup you can get. It costs more, but it's better than losing the DB drives every 6 months.

-----------------------------------


SCSI is an interface; it does not necessarily indicate an enterprise-class drive. However, I see your point, and I think anyone would be hard pressed to find a SCSI drive not designed for a server environment. In any case, SCSI is largely being supplanted by SAS anyways...

That being said, there are SATA drives designed for the enterprise, and I'm not necessarily referring to 10k Raptor drives. They are still much more cost effective than SCSI/SAS, and unless you really need the high performance, you can get away with SATA just fine. The key really is in how the disks are arranged. Most good enclosures should let you hotswap SATA disks in RAID 5 with no issues.

I'm sure Chribba has a decent solution in place. However, if that is not the case (and if he's having to take down the server to replace the drive, I suspect it might be), then perhaps we should all chip in for some newer hardware. That stuff ain't cheap! =)

Bish Ounen
Gallente
Best Path Inc.
Cult of War
Posted - 2007.12.28 16:07:00 - [27]
 

Originally by: Regat Kozovv


SCSI is an interface; it does not necessarily indicate an enterprise-class drive. However, I see your point, and I think anyone would be hard pressed to find a SCSI drive not designed for a server environment. In any case, SCSI is largely being supplanted by SAS anyways...

That being said, there are SATA drives designed for the enterprise, and I'm not necessarily referring to 10k Raptor drives. They are still much more cost effective than SCSI/SAS, and unless you really need the high performance, you can get away with SATA just fine. The key really is in how the disks are arranged. Most good enclosures should let you hotswap SATA disks in RAID 5 with no issues.

I'm sure Chribba has a decent solution in place. However, if that is not the case (and if he's having to take down the server to replace the drive, I suspect it might be), then perhaps we should all chip in for some newer hardware. That stuff ain't cheap! =)


Heh, yeah, I know it's an interface. (Small Computer Systems Interface, to be precise) But I don't know of any modern SCSI uses outside of hard drives. I suppose I should have been a bit more specific though.

You are also correct about SAS supplanting most SCSI implementations, much like SATA supplanted PATA. But it's still basically the same thing. A SCSI drive, just with an updated data transfer interface.

I do disagree with you on one point though. While SATA drives have improved greatly in the past few years, a SCSI/SAS drive will still outperform/outlast a SATA drive any day. The manufacturing tolerances and quality control are just simply higher for SCSI/SAS, even compared to the so-called "enterprise" SATA drives.

Ultimately, I wouldn't trust ANY SATA implementation on a DB server that my job depended on keeping running. For a general storage NAS, Yes. For a high-transaction DB server? No way in Hades.

On your other comment, I too wonder about his implementation if he has to take the entire server down to fix the DB. Unless he's just talking about the web front-end being offlined while he restores the DB from backup. That doesn't involve physically shutting down a server and popping open the chassis. Honestly, I would be VERY surprised if his DB runs on the same machine as his webserver.

I'm thinking Chribba needs to run a donation drive so he can rent out a slot at a proper datacenter with a fully redundant fiber-channel SAN and full virtualization of all OSes. That should keep him up and running almost indefinitely.

-----------------------------------

Dark Shikari
Caldari
Deep Core Mining Inc.
Posted - 2007.12.28 16:16:00 - [28]
 

Originally by: Lazuran
Originally by: Dark Shikari
Originally by: Lazuran
Edited by: Lazuran on 28/12/2007 13:19:27
Originally by: Chribba
anyone know if RAMdrives will fail after a while too? since the problem with db-drives is the massive write/reads, making a RAMdrive would go poof on the RAM after a while too or anyone know?


If you mean those Gigabyte i-ram drives, the answer is probably no: no mechanical parts and RAM can be rewritten for a long time (no limits known to me). But i-ram is rather small.

If you mean SSDs, they have a finite lifetime and I doubt the manufacturers' MTBF claims of 2m hours (vs. 500k hours for mechanical drives). These must be extrapolated failure rates for short operation intervals, i.e. 2000 drives running for 1000 hours => ~1 broken = 2m hours MTBF (simplified). However, Flash memory has a finite lifespan (measured in the number of times a block can be written), so such an extrapolation is not correct. It's also probably based on assumptions of writes/hour for laptops and such, so not nearly what you'd be doing for database work.
I'm pretty sure that SSD tech has gotten to the point where solid state drives last longer overall than ordinary hard drives. The "finite lifespan" crap is basically nothing more than FUD at this point, I think.


That is basically what manufacturers claim, as well as those reviewers who look at the MTBF numbers without asking themselves whether any of these drives have been running for 2 million hours yet. ;-)

It is a fact that Flash-based SSD has a finite number of write cycles for each flash element. This limit has seen a dramatic increase in the past few years, but still varies greatly among the manufacturers. These drives now remap overly used blocks to increase the durability and use other neat tricks, but in the end the MTBF is a fictional number based on assumptions made by the manufacturer.

Now when Samsung claims 2m hours MTBF, but their product is aimed at the laptop market rather than the server market, they will assume typical laptop usage, which is several orders of magnitude less write-intensive than DB server usage. So you cannot compare a SCSI disk for servers with MTBF of 800k with a SSD for laptops with MTBF of 2m and conclude that the latter is 2.5 times more durable. It isn't. And a lot of reviewers make the mistake of assuming that MTBF is based on the assumption of continuous use, it isn't.

Anandtech writes about this:
This means an average user can expect to use the drive for about 10 years under normal usage conditions or around five years in a 100% power-on state with an active/idle duty cycle at 90%. These numbers are subject to change depending upon the data management software algorithms and actual user patterns.

This is not higher than current server disks, where typical usage patterns are much more write intensive...

PS. the finite lifespan is of course a technological limit and cannot be removed ... current flash SSD's offer around 100k write/erase cycles per sector apparently.
Better said, current consumer-grade SSD drives last longer than consumer-grade hard drives. A good modern flash drive lasts two years under continuous read-write. An ordinary consumer-grade hard disk probably wouldn't stand a year under that.

Talking Elmo
Posted - 2007.12.28 16:16:00 - [29]
 

RAID 0+1 for DBs, fellas; 5 and 6 are way too slow for writes.

And SATA drives for a DB server are just fine.
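
The usual back-of-the-envelope reason: each random write to a parity RAID costs extra I/Os (a sketch using the commonly quoted write-penalty factors; the drive count and per-drive IOPS are made-up examples):

# Commonly quoted per-write I/O penalties: RAID-10 mirrors each write (2 I/Os),
# RAID-5 reads and rewrites data + parity (4 I/Os), RAID-6 updates two parity
# blocks (6 I/Os).
WRITE_PENALTY = {"RAID-10": 2, "RAID-5": 4, "RAID-6": 6}

def write_iops(drives, iops_per_drive, penalty):
    # Raw random IOPS of the array divided by the per-write penalty.
    return drives * iops_per_drive // penalty

for level, penalty in WRITE_PENALTY.items():
    # Four drives at ~150 random IOPS each (a typical 7200 rpm figure).
    print(level, write_iops(4, 150, penalty), "random write IOPS (approx)")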

Regat Kozovv
Caldari
Alcothology
Posted - 2007.12.28 16:26:00 - [30]
 

Edited by: Regat Kozovv on 28/12/2007 16:26:13
Originally by: Bish Ounen


Heh, yeah, I know it's an interface. (Small Computer Systems Interface, to be precise) But I don't know of any modern SCSI uses outside of hard drives. I suppose I should have been a bit more specific though.

You are also correct about SAS supplanting most SCSI implementations, much like SATA supplanted PATA. But it's still basically the same thing. A SCSI drive, just with an updated data transfer interface.

I do disagree with you on one point though. While SATA drives have improved greatly in the past few years, a SCSI/SAS drive will still outperform/outlast a SATA drive any day. The manufacturing tolerances and quality control are just simply higher for SCSI/SAS, even compared to the so-called "enterprise" SATA drives.

Ultimately, I wouldn't trust ANY SATA implementation on a DB server that my job depended on keeping running. For a general storage NAS, Yes. For a high-transaction DB server? No way in Hades.

On your other comment, I too wonder about his implementation if he has to take the entire server down to fix the DB. Unless he's just talking about the web front-end being offlined while he restores the DB from backup. That doesn't involve physically shutting down a server and popping open the chassis. Honestly, I would be VERY surprised if his DB runs on the same machine as his webserver.

I'm thinking Chribba needs to run a donation drive so he can rent out a slot at a proper datacenter with a fully redundant fiber-channel SAN and full virtualization of all OSes. That should keep him up and running almost indefinitely.

-----------------------------------


I should have been a little clearer: when I said SCSI is just an interface, I really meant that no one makes consumer-grade SCSI drives. =)

And yes, SCSI/SAS will outperform SATA any day. However, for Chribba's budget, I do wonder if SATA would be more cost effective. Fiber Channel SANs and proper virtualization are great and would definitely be recommended, but who has the resources to do that?
I guess we need Chribba to post his specs.

On a related note, I was reading this just the other day:
http://www.techreport.com/discussions.x/13849
Maybe we should buy him one! =)

