Fault tolerance has been at the forefront of data protection since the dawn of computing. To this day, admins continue to struggle with efficient and reliable methods of maintaining the consistency of stored data, whether kept locally or remotely on a server (or cloud storage pool), and keep searching for the best way to recover from a failure, regardless of how disastrous that failure might be.

Some of the methods still in use are considered ancient by today's standards. But why replace something that continues to work?
Petros Koutoupis is the self-appointed BDFL of the RapidDisk project. He has spent most of his career in software development in the data storage industry. He is deeply involved in open-source software development and for years has written code for the Linux kernel and various open-source device drivers and applications in both the embedded and server spaces.