SSD Drives over 5 years old | Page 2 | FerrariChat

SSD Drives over 5 years old

Discussion in 'Technology' started by Innovativethinker, Aug 9, 2019.


  1. vraa

    vraa F1 Rookie
    Rossa Subscribed

    Oct 31, 2003
    3,492
    Texas
    Full Name:
    Mr. A
    Dead due to no power, or dead because the controller died, or dead because of too many bytes written? What brand and make?
     
  2. TestShoot

    TestShoot F1 World Champ
    Silver Subscribed

    Sep 1, 2003
    12,025
    Beverly Hills
Samsungs. They are recognized, but then nothing happens and they vanish from the system; probably the controller board. Tried on Kali, W10, and OSX.
     
  3. Whisky

    Whisky Two Time F1 World Champ
    Silver Subscribed

    Jan 27, 2006
    25,287
    Upper Great Plains
    Full Name:
    The original Fernando
    I was the 'first guy on the block' with a 1GB HD, Western Digital, cost me $400, around 1990-91.
    I just bought another 64GB Sandisk USB stick - about the size of my thumbnail - $10
     
  4. Innovativethinker

    Innovativethinker F1 Veteran
    Silver Subscribed

    Aug 8, 2009
    8,596
    So Cal
    Full Name:
    Mark Smith
Oops:

    Hewlett Packard Enterprise (HPE) has warned that certain SSD drives could fail catastrophically if buyers don't take action soon. Due to a firmware bug, the products in question will be bricked exactly 40,000 hours (four years, 206 days and 16 hours) after the SSD has entered service. "After the SSD failure occurs, neither the SSD nor the data can be recovered," the company warned in a customer service bulletin.

    https://www.engadget.com/2020-03-25-hpe-ssd-bricked-firmware-flaw.html
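
    A quick sanity check of HPE's arithmetic (a minimal Python sketch; the 365-day year with no leap days is my assumption, but it matches their stated figure):

    ```python
    # HPE's bulletin: affected SSDs brick at exactly 40,000 power-on hours.
    # Break that down assuming 365-day years (no leap days).
    FAILURE_HOURS = 40_000

    years, rem = divmod(FAILURE_HOURS, 365 * 24)  # whole years, leftover hours
    days, hours = divmod(rem, 24)                 # remaining days and hours
    print(f"{years} years, {days} days, {hours} hours")
    # -> 4 years, 206 days, 16 hours
    ```

    The breakdown lands exactly on the "four years, 206 days and 16 hours" in the bulletin.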
     
  5. losgatos789

    losgatos789 Formula Junior
    Owner Rossa Subscribed

    May 13, 2008
    464
    Silicon Valley
    Full Name:
    Stu
    #30 losgatos789, May 22, 2020
    Last edited: May 22, 2020
    Late response, but thought I would provide what I experienced (this past decade) with the different enterprise SSD vendors...YMMV since my data is 5 years old.

    From 2013 to 2016 I was CTO of a data science driven social media company: 9 petabytes of storage, growing 20% per year. In my last year there it was growing much faster, since we began performing NVIDIA GPU machine vision identification on pictures submitted by customers. We ingested approx 1.2 to 1.8B social conversations per day. System configuration:

    Massive data API pipes inbound --> West Coast and East Coast data centers --> each data center with 6 Kafka servers --> 8 Apache Spark (real-time) pipes with data transforms and "lite" real-time data compute (in order to prioritize certain customer processing asks) --> two different compute and store platforms (one for high-speed Apache Solr Cloud indexing into the Hadoop store; the other for real-time ML/algo data science driven predict and recommend).

    The Apache Solr compute tier had 90 servers per data center and 4 petabytes of SSD. Ultra high-speed indexing was critical for delivering recommendations to a user's browser in under 7 seconds.

    We ALWAYS had a pallet of spare SSDs on hand to replace unexpected dead ones and SSDs alerting of imminent failure. The only thing that saved us was the fact we had 90 servers: 45 primary and 45 as hot backups. Cloudera, with whom we had a service contract (many of my former colleagues from Yahoo, who had built Hadoop, were there), said we were probably one of the largest Solr Cloud environments they had ever seen, and they had the lion's share of supporting big data open-source stacks at the time.
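
    The 45-primary / 45-hot-backup split can be sketched roughly like this (illustrative Python; the node names and health check are hypothetical, not the actual setup):

    ```python
    # Hypothetical sketch of primary/hot-backup routing across 45 server
    # pairs: traffic goes to the primary unless its health check fails.
    PRIMARIES = [f"solr-primary-{i:02d}" for i in range(45)]
    BACKUPS = [f"solr-backup-{i:02d}" for i in range(45)]

    def route(index, is_healthy):
        """Pick the node for shard `index`: primary if healthy, else its backup."""
        primary = PRIMARIES[index]
        return primary if is_healthy(primary) else BACKUPS[index]

    down = {"solr-primary-07"}  # e.g. a dead SSD takes primary 7 offline
    print(route(7, lambda node: node not in down))  # -> solr-backup-07
    print(route(8, lambda node: node not in down))  # -> solr-primary-08
    ```

    The point is only the topology: every primary has a dedicated hot spare, so an SSD death degrades one pair rather than the cluster.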

    (As a side note: if you are familiar with graph databases, we also had the largest graph database in the world running on Titan Graph, excepting three-letter orgs, with approx 1.2T edges. Even FB didn't have a graph db this size. If you use that graph db, know that we contributed a LOT of open source to help make Titan scale to that size.)

    We used every SSD vendor available to see if we could get longer SSD life spans in order to drive down failures for an average SSD/vendor. Bottom line, pros and cons with all vendors.

    HPE, Dell, IBM, and others I have forgotten were all tried. Missing from this list: Seagate, which lost the SSD war because it doubled down on spinning disk during that time; Western Digital, which never attempted an enterprise SSD contender and has stayed in the consumer market to this day, competing on price and driving prices down for its target customer market; and Samsung, which was not competitive during that time frame.

    Needless to say, we were doing billions of writes to these drives and could easily see issues even after a couple of months of usage. Customers paid a LOT of money for this level of performance; we had different service price offerings that could run on non-SSD Solr Cloud servers.

    We had begun researching a TensorFlow-based model to approximate read-ahead indexing results, to reduce SSD need and/or use distributed Apache Ignite. One of my data scientists submitted a white paper to a big NVIDIA customer conference in San Jose; it was accepted and he presented. NVIDIA asked him to present in Barcelona as well, and we flew him to that too. Then the company was sold... but that's a different story.



     
    Joker, vraa and TestShoot like this.
  6. TestShoot

    TestShoot F1 World Champ
    Silver Subscribed

    Sep 1, 2003
    12,025
    Beverly Hills
    Man, SSDs in a data center are asking for trouble. I can't imagine that the m.2 stuff will fare any better. I just cooked some older SSDs that were NIB doing TensorFlow stuff on my Jetson Nanos. Usually I have great luck with SSDs in normal-ish duty cycles, even in basic web hosting, but the best drives are being turned into crap, like the Sandisk flash drives that for me have a 50% failure rate out of the package.

    While staying back in the platter space sounds dumb, we kind of see how crap SSDs are under heavy load. I had to retask some ancient Cobalt CacheRaqs and run Squid to offload some buffers, but nothing compared to your use case. I adopted those "hybrid" drives back in the day; interesting, but not great.

    In the meantime I have some UW SCSI I need to find a way to get data off of. Time to build a retrobox to hear that lovely percolator sound.
     
    vraa likes this.
  7. ruimpinho

    ruimpinho Rookie

    May 2, 2010
    5
    I have a retina MacBook Pro from 2015. Its SSD is still running fine. I had a 2013 as well, running like a champ, until it was stolen last year.
    Still, I make monthly backups to an external HD just to be safe.
     

Share This Page