Calgarypuck Forums - The Unofficial Calgary Flames Fan Community > Tech Talk > SSD Drive for Server
Old 05-18-2013, 09:56 AM   #1
Azure
Had an idea!
 
Azure's Avatar
 
Join Date: Oct 2005
Exp:
SSD Drive for Server

Any recommendations for what kind of SSD to use for a server that centralizes and serves up a few thousand different files to about 10 workstations? We are currently using Windows Server 2008 for AD and as a file server, and since upgrading our drawing software we find it a bit slow loading files and opening the program. Support for the design program recommends Windows Server 2012, as well as an SSD.

Looking to upgrade to both.
Azure is offline   Reply With Quote
Old 05-18-2013, 11:00 AM   #2
photon
The new goggles also do nothing.
 
photon's Avatar
 
Join Date: Oct 2001
Location: Calgary
Exp:
Default

Anandtech recently wrote about their server infrastructure upgrades; they used Intel X25-M G2 160GB drives, partitioned down to 120GB (increasing the spare area and improving drive longevity). Not new by any means, but still far, far faster than an HDD.

http://www.anandtech.com/show/6824/i...d-architecture

Another article where they compare the various Intel drives in a server environment:

http://www.anandtech.com/show/5518/a...-of-intel-ssds

I think the number of writes in the situation you're describing is going to be relatively low, so any modern quality SSD should be OK, but sticking with Intel wouldn't be a bad idea.

The server provider we use at the company I work for uses Intel SSDs, but commodity ones (520s or something); they also keep a close eye on them and replace them whenever they wear out.

EDIT: Actually they use Intel for some, but they also use Samsung 830's. http://steadfast.net/blog/index.php/...msung-830-ssds

You sure it's being limited by drive speed?
__________________
Uncertainty is an uncomfortable position.
But certainty is an absurd one.
photon is offline   Reply With Quote
Old 05-18-2013, 11:15 AM   #3
photon
The new goggles also do nothing.
 
photon's Avatar
 
Join Date: Oct 2001
Location: Calgary
Exp:
Default

I forgot to mention the Intel S3700

http://www.storagereview.com/intel_s...ise_ssd_review

Seagate just released some drives as well; they use a known controller, but I'd still be a bit leery of using something brand spanking new.

http://www.storagereview.com/seagate...ise_ssd_review

There's an enterprise version of the M500, which probably has the best capacity for a given price:

http://www.storagereview.com/micron_...ise_ssd_review
__________________
Uncertainty is an uncomfortable position.
But certainty is an absurd one.
photon is offline   Reply With Quote
Old 05-18-2013, 01:52 PM   #4
Azure
Had an idea!
 
Azure's Avatar
 
Join Date: Oct 2005
Exp:
Default

Our network is wired with Cat 6 cable, and all the computers have gigabit adapters. Most of the workstations have above average hardware, but no SSDs.

The design program is pretty responsive once opened. Seems like getting the initial approval from the network key program on the server is a bit slow as well.
Azure is offline   Reply With Quote
Old 05-18-2013, 04:05 PM   #5
Top Shelf
Powerplay Quarterback
 
Top Shelf's Avatar
 
Join Date: Sep 2005
Exp:
Default

Maybe a bit of overkill for a small file server, but I've become a big fan of Nimble Storage. It's a SAN that's backed by flash drives for the cache.

http://www.nimblestorage.com/

They scale down, but maybe an idea for the future?
Top Shelf is offline   Reply With Quote
Old 05-19-2013, 09:46 AM   #6
FlamingLonghorn
First Line Centre
 
Join Date: Mar 2002
Location: Austin, Tx
Exp:
Default

Who is the server manufacturer? Surely they sell their own supported SSDs? I would recommend going that route for two reasons: 1) it's validated and supported with their hardware (normally this isn't an issue, but if it is, you're SOL), and 2) a third-party drive gives them an excuse to blame something other than their own product if there's ever an issue with the server - they may tell you they can only help troubleshoot if you remove the third-party part and reproduce the issue.
FlamingLonghorn is offline   Reply With Quote
Old 05-23-2013, 09:49 AM   #7
Azure
Had an idea!
 
Azure's Avatar
 
Join Date: Oct 2005
Exp:
Default

What is the expected lifespan of SSDs, and how do you monitor them? I read some reviews, but they get really technical.

We want to use the Intel 520s and replace them as needed. If they last longer than a year, that's a huge plus, as I see prices going down over the next year.
Azure is offline   Reply With Quote
Old 05-23-2013, 10:15 AM   #8
photon
The new goggles also do nothing.
 
photon's Avatar
 
Join Date: Oct 2001
Location: Calgary
Exp:
Default

Lifespan depends on how much writing is done to the drive and what percentage of the drive is set aside as spare blocks.
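
As a rough back-of-the-envelope, lifespan works out to an endurance-versus-write-volume ratio; a minimal sketch (the endurance rating and write-amplification factor below are placeholders, not specs for any particular drive):

[CODE]
# Rough SSD lifetime estimate from write volume and an endurance rating.
# RATED_ENDURANCE_TB and WRITE_AMPLIFICATION are placeholder assumptions;
# check the manufacturer's datasheet for real figures.
RATED_ENDURANCE_TB = 36.0    # hypothetical total-bytes-written rating
DAILY_WRITES_GB = 12.0       # e.g. rewriting the whole working set every day
WRITE_AMPLIFICATION = 2.0    # assumed controller/filesystem overhead factor

def years_until_worn(rated_tb, daily_gb, amplification):
    """Years before the rated write endurance would be exhausted."""
    effective_daily_tb = daily_gb * amplification / 1000.0
    return rated_tb / (effective_daily_tb * 365.0)

print(f"~{years_until_worn(RATED_ENDURANCE_TB, DAILY_WRITES_GB, WRITE_AMPLIFICATION):.1f} years")
[/CODE]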

To monitor them, there should be utilities that report the SMART parameters.

Wear_Leveling_Count, Reallocated_Sector_Ct, and Used_Reserve_Block_Ct_Total are the ones a guy I work with says to watch to see if the drive is starting to approach its limit. Really, unless you are writing a significant percentage of the drive's total capacity each day, or keeping the drives at 99% capacity, I think you'll upgrade long before you wear the drives down.
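
For example, if smartmontools is available, a small script can poll exactly those attributes (a sketch; adjust the device path for your OS, and note that attribute names vary by vendor and firmware):

[CODE]
import subprocess

# Attributes to watch; exact names differ between vendors and firmware revisions.
WATCH = ("Wear_Leveling_Count", "Reallocated_Sector_Ct", "Used_Reserve_Block_Ct_Total")

def smart_attributes(device="/dev/sda"):
    """Return {attribute_name: raw_value} as reported by `smartctl -A`."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    values = {}
    for line in out.splitlines():
        fields = line.split()
        # Attribute rows: ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        if len(fields) >= 10 and fields[0].isdigit():
            values[fields[1]] = fields[9]
    return values

attrs = smart_attributes()
for name in WATCH:
    print(name, attrs.get(name, "not reported"))
[/CODE]
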
__________________
Uncertainty is an uncomfortable position.
But certainty is an absurd one.
photon is offline   Reply With Quote
Old 05-23-2013, 02:12 PM   #9
Azure
Had an idea!
 
Azure's Avatar
 
Join Date: Oct 2005
Exp:
Default

The files that will be accessed on a daily basis are about 12 GB in total, so it seems we shouldn't have much of a problem.
Azure is offline   Reply With Quote
Old 05-23-2013, 11:17 PM   #10
Vulcan
Franchise Player
 
Vulcan's Avatar
 
Join Date: Dec 2003
Location: Sunshine Coast
Exp:
Default

Quote:
Originally Posted by Azure View Post
What is the expected life span of SSD drives, and how do you monitor them? I read some reviews, but they go really technical.

We want to use the Intel 520s, and replace them as needed. If they last longer than a year, that is a huge plus as I see prices going down over the next year.
Intel has their SSD Toolbox, which optimizes your drive once a week, gives a SMART summary of 'drive health' and 'estimated life remaining', and lets you run a diagnostic scan, etc.

http://www.intel.com/design/flash/nand/managessd.htm
Vulcan is offline   Reply With Quote
Old 05-26-2013, 08:03 PM   #11
sclitheroe
#1 Goaltender
 
Join Date: Sep 2005
Exp:
Default

Quote:
Originally Posted by photon View Post
Anandtech recently talked about their server infrastructure upgrades and they used Intel X25-M G2 160GB's, partitioned down to 120GB (increasing the spare space and increasing drive longevity). Not new by any means but still far far faster than a HDD.
Over-provisioning shouldn't be required in any scenario where you can use TRIM. I believe the only time you can't really use TRIM would be in a hardware RAID configuration (pretty sure Windows RAID via Disk Management will pass TRIM), since the OS has no knowledge of the logical-to-physical sector mapping with hardware RAID.

Edit: and even leaving TRIM out of it, I personally think _manually_ over-provisioning is a crock/lies at best - once you've written to every unallocated sector, you're going to run into the same write-amplification problems you'd have had with a full-size partition. Unless I'm missing something, it seems like it would look good on short-term benchmarks but do little long term. Which is why, of course, TRIM was so important in the first place.
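
A quick way to confirm Windows is actually issuing TRIM is fsutil's DisableDeleteNotify flag (0 means TRIM requests are being sent); a minimal sketch wrapping that query:

[CODE]
import subprocess

def trim_enabled():
    """True if Windows reports DisableDeleteNotify = 0 (i.e. TRIM is on)."""
    out = subprocess.run(
        ["fsutil", "behavior", "query", "DisableDeleteNotify"],
        capture_output=True, text=True, check=True).stdout
    # Typical output: "DisableDeleteNotify = 0" (newer builds list it per filesystem).
    return "= 0" in out

print("TRIM enabled" if trim_enabled() else "TRIM disabled")
[/CODE]
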
__________________
-Scott

Last edited by sclitheroe; 05-26-2013 at 08:06 PM.
sclitheroe is offline   Reply With Quote
Old 05-27-2013, 01:16 AM   #12
photon
The new goggles also do nothing.
 
photon's Avatar
 
Join Date: Oct 2001
Location: Calgary
Exp:
Default

Enterprise drives do tend to have a higher percentage of over-provisioning than consumer drives, and everything I've read tends to support that over-provisioning does have an impact on performance.

Here's a 520 review comparing the same drive with regular and enterprise-level over-provisioning: http://www.storagereview.com/intel_s...erprise_review

Not every drive and workload benefits of course, but I don't think it's lies.
__________________
Uncertainty is an uncomfortable position.
But certainty is an absurd one.
photon is offline   Reply With Quote
Old 05-27-2013, 09:54 AM   #13
sclitheroe
#1 Goaltender
 
Join Date: Sep 2005
Exp:
Default

Quote:
Originally Posted by photon View Post
Enterprise drives do tend to have a higher percentage of over-provisioning than consumer drives, and everything I've read tends to support that over-provisioning does have an impact on performance.

Here's a 520 review with the same drive with the regular and a enterprise level over-provisioning: http://www.storagereview.com/intel_s...erprise_review

Not every drive and workload benefits of course, but I don't think it's lies.
That's one of the articles I read, and I still don't think over-provisioning makes a big difference long-term - the goal of over-provisioning is to have blocks of flash storage that are already zeroed out, negating the erase-before-write cycle needed when updating an otherwise allocated or previously used block.

So once you've filled the over-provisioned space (used each block once), you are back to having to erase blocks in advance to be able to rapidly re-use them, and to doing block relocations to handle wear levelling, the same as any other block on the device. You can do this in the background, but that's exactly what TRIM enables - and the benefit of TRIM is that it can deal with zeroing all the unallocated blocks on the drive, rather than a fixed percentage subset. In effect you've got a bigger pool of non-allocated space than you would with a hard over-provisioning limit.

I can see a clear benefit to over-provisioning in a hardware RAID configuration, where the drive cannot accept TRIM requests, but at that point, I'd probably be looking at enterprise storage solutions where a lot of the magic required to keep performance high is done without reliance on information from the operating system.

I should benchmark my system before/after a manual over-provision, as well as a couple months from now, and report back
__________________
-Scott
sclitheroe is offline   Reply With Quote
Old 05-28-2013, 10:26 AM   #14
mykalberta
Franchise Player
 
mykalberta's Avatar
 
Join Date: Aug 2005
Location: Calgary
Exp:
Default

Quote:
Originally Posted by Azure View Post
Our network is wired with Cat 6 cable, and all the computers have gigabit adapters. Most of the workstations have above average hardware, but no SSDs.

The design program is pretty responsive once opened. Seems like getting the initial approval from the network key program on the server is a bit slow as well.
Are all the switches gigabit switches? Does the server have the NICs teamed?
__________________
MYK - Supports Arizona to democratically pass laws for the state of Arizona
Rudy was the only hope in 08
2011 Election: Cons 40% - Nanos 38% Ekos 34%
mykalberta is offline   Reply With Quote
Old 05-28-2013, 10:49 AM   #15
Bob
Franchise Player
 
Join Date: Jan 2007
Exp:
Default

Quote:
Originally Posted by photon View Post
Enterprise drives do tend to have a higher percentage of over-provisioning than consumer drives, and everything I've read tends to support that over-provisioning does have an impact on performance.

Here's a 520 review with the same drive with the regular and a enterprise level over-provisioning: http://www.storagereview.com/intel_s...erprise_review

Not every drive and workload benefits of course, but I don't think it's lies.
Over-provisioning was tested on Anandtech with respect to I/O latency. Basically, good modern consumer-level drives perform close to the consistency of the Intel S3700 simply by leaving around 25% spare area (i.e. partitioning to use only 75% of the space after a secure erase), and that's partly what the S3700 is doing anyway: it has 264 GiB of NAND but only exposes 186 GiB (30% spare). Using the same method, an over-provisioned Corsair Neutron (25% spare) was consistently as fast as or faster than an S3700 in I/O latency - though actual throughput in MB/s remains largely unchanged.
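
The spare-area percentages quoted there are just the ratio of hidden NAND to raw capacity; a minimal sketch of the arithmetic using the figures above:

[CODE]
def spare_area_pct(raw_gib, exposed_gib):
    """Percentage of raw NAND held back as spare area."""
    return 100.0 * (raw_gib - exposed_gib) / raw_gib

# Intel S3700, per the figures above: 264 GiB of NAND, 186 GiB exposed.
print(f"S3700 factory spare area: {spare_area_pct(264, 186):.0f}%")  # ~30%

# Consumer drive partitioned to use only 75% of its space after a secure erase.
print(f"Manual 75% partition:     {spare_area_pct(100, 75):.0f}%")   # 25%
[/CODE]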
Bob is offline   Reply With Quote
Old 05-28-2013, 10:59 AM   #16
photon
The new goggles also do nothing.
 
photon's Avatar
 
Join Date: Oct 2001
Location: Calgary
Exp:
Default

I think sclitheroe's point, though, is that once each block of the over-provisioned area has been "touched", the benefits will go away or be reduced.

Everything I've read (articles and white papers from engineers at SSD manufacturers) talks about over-provisioning benefits, though, so for now I'll go with that; I don't know of anything that specifically refutes it.
__________________
Uncertainty is an uncomfortable position.
But certainty is an absurd one.
photon is offline   Reply With Quote
Old 05-28-2013, 11:28 AM   #17
Azure
Had an idea!
 
Azure's Avatar
 
Join Date: Oct 2005
Exp:
Default

Quote:
Originally Posted by mykalberta View Post
Are all the switches gb switches? Does the server have the NICS teamed?
The switches are all managed gigabit switches. NICs are not teamed, but we might look into that with Windows Server 2012, as it seems to be easier to set up.
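
For reference, Server 2012's built-in teaming is driven by the New-NetLbfoTeam PowerShell cmdlet; a minimal sketch invoking it from a script (the team and adapter names are placeholders - substitute whatever Get-NetAdapter reports on the server):

[CODE]
import subprocess

# Placeholder names: replace "FileServerTeam", "NIC1" and "NIC2" with the
# team name you want and the adapter names shown by Get-NetAdapter.
TEAM_CMD = ("New-NetLbfoTeam -Name 'FileServerTeam' "
            "-TeamMembers 'NIC1','NIC2' "
            "-TeamingMode SwitchIndependent -Confirm:$false")

# Requires an elevated prompt on the Server 2012 box.
subprocess.run(["powershell", "-NoProfile", "-Command", TEAM_CMD], check=True)
[/CODE]

The same one-liner can simply be run in an elevated PowerShell prompt; the script wrapper is only there to keep the example self-contained.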
Azure is offline   Reply With Quote
Old 05-28-2013, 01:25 PM   #18
Hack&Lube
Atomic Nerd
 
Join Date: Jul 2004
Location: Calgary
Exp:
Default

If access to your drawings is mostly read-only and writes are rare, your SSDs will last a long time.

By "network key" are you are thinking the response from the license on the server is the issue? Is it a file on the server or a USB license dongle?
Hack&Lube is offline   Reply With Quote
Old 05-28-2013, 02:20 PM   #19
Azure
Had an idea!
 
Azure's Avatar
 
Join Date: Oct 2005
Exp:
Default

Writes are being made, as the drawings are modified and saved to the server.

The network key is a program that replaces the USB license dongle. Instead of having 10 hard keys, we have 10 network licenses, so if someone isn't using their license, it is released and someone else can use it.
Azure is offline   Reply With Quote
Old 05-28-2013, 03:39 PM   #20
sclitheroe
#1 Goaltender
 
Join Date: Sep 2005
Exp:
Default

Quote:
Originally Posted by photon View Post
I think sclitheroe's point though is that once each block of the overprovisioned area has been "touched" that the benefits will go away or be reduced.
.... until the drive has had time to garbage collect when I/O demands from the operating system are lower.

The conclusion I've come to is that over-provisioning could be a means of further protecting against I/O degradation during brief periods of overwhelming write activity. But if you sustain write loads that never give the controller a chance to garbage collect, you'll run into the same performance hits sooner or later.

It's a moot point anyways if you keep a decent amount of free space available on a large single partition - the drive knows nothing about what is partitioned and what isn't. Make sure you've always got 10-15% free space (which is a good idea anyways) and you are golden.
__________________
-Scott
sclitheroe is offline   Reply With Quote