Can your cloud backup provider fail?

We see a similar failure in StorageCraft’s design. The fact that an admin decommissioning a single server caused them to lose all metadata suggests a lack of geo-redundancy and fault tolerance in their backup storage architecture. A single server held the only copy of the data that would have allowed them to piece together all of their customers’ backups. Again, a fire, electrical short, or flood could just as easily have wiped out this company. In the end, however, it was a single human error. As a reminder – human error and malfeasance are the number one and two reasons why we back up in the first place.

As a backup professional with decades of experience, I cringe at incidents like these. The 3-2-1 backup rule exists for a reason – 3 copies of your data, on 2 different media, with 1 copy off-site. Any responsible backup provider should be architecting its cloud with multiple layers of redundancy, geo-replication, and fault isolation. Anything less puts customer data at unacceptable risk. The loss of a single copy of any data in any backup environment should not result in the loss of all copies.
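
To make the 3-2-1 rule concrete, here is a minimal sketch in Python of how a provider might sanity-check that a dataset’s copies actually satisfy the rule before declaring it protected. The `BackupCopy` record and the check itself are hypothetical illustrations, not any vendor’s actual API.

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    location: str   # e.g. "primary-dc", "cloud-region-b" (hypothetical names)
    media: str      # e.g. "disk", "tape", "object-storage"
    offsite: bool   # True if the copy is stored outside the primary site

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    """Return True if the copies meet the 3-2-1 rule:
    at least 3 copies, on at least 2 distinct media types,
    with at least 1 copy held off-site."""
    return (
        len(copies) >= 3
        and len({c.media for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

# Example: a primary disk copy, a local object-storage copy,
# and an off-site tape copy together satisfy the rule.
copies = [
    BackupCopy("primary-dc", "disk", offsite=False),
    BackupCopy("primary-dc", "object-storage", offsite=False),
    BackupCopy("cloud-region-b", "tape", offsite=True),
]
print(satisfies_3_2_1(copies))  # True
```

Judged by the incident as described, StorageCraft’s metadata would have failed a check like this immediately: one copy, one medium, nothing off-site.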

When something like this happens, you also have to look at how the company handled it. Carbonite’s response was to sue its storage vendor, pointing fingers instead of taking accountability. They saw nothing wrong with their own design; in their view, the vendor’s storage array was what caused them to lose customer data. (The lawsuit was settled out of court with no public record of what happened.) Carbonite’s CEO also tried to publicly downplay the incident, saying it was only backup data, not production data, that had been lost. That point was probably lost on the 54 companies that did lose production data, because they needed a restore that would have been possible only with the backup data.


