#1
Confirmed User
Join Date: May 2002
Location: Toronto
Posts: 8,475
Why RAID 5 stops working in 2009 - GOOD INFO!

I'm sure lots of you have RAID 5 setups. I know we just went through a RAID 5 nightmare on a 4TB array: lost a drive, then the hardware RAID controller wouldn't accept a replacement disk and recommended we "delete the volume"...

This led to huge downtime, a massive rsync, a rebuild of the whole array, then a copy back. No fucking fun.

http://blogs.zdnet.com/storage/?p=162
#2
Confirmed User
Join Date: Oct 2002
Location: Toronto, ON
Posts: 5,247
Seems like scaremongering to me. Modern RAID controllers have background consistency checks to actively prevent that sort of scenario from happening. If you use RAID 6, the odds of that ever happening are basically zero.

__________________
ICQ: 91139591
#3
Confirmed User
Join Date: May 2001
Location: ICQ: 25285313
Posts: 993
Doesn't sound like you ran into the problem described in the article.

The "mathematical certainty of a read failure during rebuild" has been well known for some time. This is why nearly every modern RAID controller supports RAID 6. You'll see it during an actual rebuild, say halfway through, where a *second* disk drops offline, trashing the array completely. Also, background consistency checks do not help in this scenario. The read failure rate given in the article is for fully operational disks - i.e. it's completely normal for them to throw a bad bit here and there.

The article does go a bit overboard though. For one, very few arrays are going to be 100% maxed out on usage, so your chances of hitting an error are substantially lower. We still use RAID 5 for arrays of 6 drives or fewer, and have yet to see a dual disk failure as described (although we've had two complete disk failures, which are unrelated to the problem discussed).

All in all though, remember backups! RAID is in no way, whatsoever, not even close, a substitute for a proper backup strategy. If server availability is extremely important, have a warm spare handy that is synced on a regular basis, since restores from backup can take quite some time depending on your content set.

-Phil

__________________
Quality affordable hosting.
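
For anyone who wants to see the numbers behind the "mathematical certainty" Phil is talking about, here's a rough back-of-the-envelope sketch in Python. It assumes the roughly one unrecoverable read error per 10^14 bits read that the ZDNet article uses for consumer SATA drives; the helper name and the array sizes are just illustrations, not figures from this thread.

Code:

# Rough odds of hitting at least one unrecoverable read error (URE) while
# re-reading an entire array during a RAID 5 rebuild. The 1-per-1e14-bits
# rate is the assumed consumer SATA spec; array sizes are examples only.

def rebuild_ure_probability(array_tb, bits_per_ure=1e14):
    """Chance of at least one URE when every bit of the array must be read."""
    bits_to_read = array_tb * 1e12 * 8        # TB -> bits
    p_clean_bit = 1.0 - 1.0 / bits_per_ure    # a single bit reads back fine
    return 1.0 - p_clean_bit ** bits_to_read  # 1 - P(every bit reads fine)

for tb in (2, 4, 8, 12):
    print(f"{tb:>2} TB rebuild -> ~{rebuild_ure_probability(tb):.0%} chance of a URE")

At that assumed error rate a 12TB array has well over a 50% chance of hitting at least one URE during a full rebuild, which is exactly the degraded-array scenario Phil describes.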
#4
Registered User
Join Date: Jul 2003
Location: Encrypted. Access denied.
Posts: 31,779
Where I live RAID kills roaches.
#5
Confirmed User
Join Date: Feb 2003
Location: Here There and Everywhere
Posts: 5,477

__________________
Free to Play MMOs and MMORPGs
#6
Confirmed User
Join Date: Jun 2003
Location: cyberspace
Posts: 8,022
Quote:

RAID 5 sucks, it all broke down on me too in the past.
#7
Too lazy to set a custom title
Join Date: Mar 2002
Location: Australia
Posts: 17,393
The article headline is 100% sensationalist and makes it sound like there's a fundamental bug in the RAID 5 algorithm that will cause it to fail in 2009. LOL

Another thing the author skipped over (which Phil21 points out) is the importance of backups. I had a lot of problems with RAID 5 on my Windows PC (which I later determined were probably due to dodgy SATA cables) and had to rebuild several times. So what's the first thing you do when your controller says the array is degraded? You don't rebuild, you BACK UP first. I always had at least one full backup, so an immediate incremental backup of the degraded array only took about 10-20 mins. At that point my data was a lot safer, and I could then think about rebuilding the array.

I do agree that increasing capacities will present some serious maintenance problems in the coming years. Maximum drive capacities have increased by a factor of more than 10 in the past 5 years, but raw read and write speeds haven't kept up with that improvement... which means it takes longer and longer to copy or rebuild your data safely.
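
To put some rough numbers on that last point, here's a quick sketch; the drive capacities and sustained transfer rates are ballpark figures picked for illustration, not measurements from anyone in this thread.

Code:

# Minimum time to stream every byte of a drive once - the hard floor on how
# long a full copy or array rebuild can take. Capacities and sustained rates
# below are ballpark examples only.

def full_pass_hours(capacity_gb, mb_per_sec):
    """Hours to read (or write) the whole drive end-to-end at a sustained rate."""
    return capacity_gb * 1024 / mb_per_sec / 3600

examples = [
    ("80 GB drive, ~50 MB/s sustained", 80, 50),
    ("1 TB drive, ~80 MB/s sustained", 1000, 80),
]
for label, gb, rate in examples:
    print(f"{label}: at least {full_pass_hours(gb, rate):.1f} hours")

Even though the newer drive is faster in absolute terms, capacity has grown so much faster that the minimum window for a safe full copy or rebuild is several times longer, and that's before any other I/O on the box slows it down.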