Storage@home
Storage@home was a distributed data store project designed to store massive amounts of scientific data across a large number of volunteer machines.[2] The project was developed by members of the Folding@home team at Stanford University between roughly 2007 and 2011.[3]
Developer(s) | Stanford University / Adam Beberg
---|---
Initial release | 2009-09-15
Stable release | 1.05 / 2009-12-02
Operating system | Microsoft Windows, Mac OS X, Linux[1]
Platform | x86
Available in | English
Type | Distributed storage
License | Proprietary
Website | en
Function
Scientists such as those running Folding@home deal with massive amounts of data that must be stored and backed up, which is very expensive.[2] Traditionally, methods such as storing the data on RAID servers are used, but at this scale these become impractical for research budgets.[3] Pande's research group already managed hundreds of terabytes of scientific data.[2] Professor Vijay Pande and student Adam Beberg drew on their experience with Folding@home when they began work on Storage@home.[3] The design was based on the Cosm distributed file system and on the workload and analysis needs of Folding@home results.[3]

While Folding@home volunteers could easily participate in Storage@home, building a robust network required much more disk space per user than Folding@home. Each volunteer donated 10 GB of storage space, which held encrypted files, and earned points as a reward for reliable storage.[3] Each file saved on the system was replicated four times and spread across ten geographically distant hosts.[3][4] Redundancy also spanned different operating systems and time zones. If the servers detected the disappearance of an individual contributor, the data blocks held by that user were automatically duplicated to other hosts. Ideally, users would participate for a minimum of six months and would alert the Storage@home servers before changes on their end, such as a planned move of a machine or a bandwidth downgrade.

Data stored on Storage@home was maintained through redundancy and monitoring, with repairs made as needed.[3] Through careful application of redundancy, encryption, digital signatures, and automated monitoring and correction, large quantities of data could be reliably and easily retrieved.[2][3] This ensured a robust network that would lose as little data as possible.[4]
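The replication behavior described above (four copies of each file, placed on geographically diverse hosts, with automatic re-duplication when a contributor disappears) can be sketched in a few lines of Python. This is an illustrative model only: the actual Storage@home implementation was proprietary, and the function names, host records, and zone layout here are assumptions made for the example.

```python
import random

# Hypothetical sketch of the placement/repair behavior the article describes.
# Not the real Storage@home code: hosts, zones, and function names are invented.

REPLICA_COUNT = 4  # each file was replicated four times


def place_replicas(hosts, rng):
    """Choose REPLICA_COUNT hosts for a file, one per geographic zone,
    modeling the goal of spreading copies across distant locations."""
    by_zone = {}
    for host in hosts:
        by_zone.setdefault(host["zone"], []).append(host["name"])
    zones = rng.sample(sorted(by_zone), k=REPLICA_COUNT)  # distinct zones
    return [rng.choice(by_zone[zone]) for zone in zones]


def repair(placement, lost_host, hosts, rng):
    """Re-duplicate blocks held by a vanished contributor onto a fresh host,
    modeling the automatic repair the servers performed."""
    candidates = [h["name"] for h in hosts
                  if h["name"] != lost_host and h["name"] not in placement]
    return [rng.choice(candidates) if name == lost_host else name
            for name in placement]
```

For example, with ten hosts spread over five zones, `place_replicas` returns four distinct hosts in four different zones, and `repair` replaces a lost host's copy while keeping the replica count at four.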
The Storage Resource Broker is the storage project most similar to Storage@home.[3]
Status
Storage@home first became available on September 15, 2009, in a testing phase. This initial release monitored availability and other basic statistics on the user's machine, data intended to inform the design of a robust and capable system for storing massive amounts of scientific data.[5] However, the project became inactive later that year, despite initial plans for further development.[6] On April 11, 2011, Pande stated that his group had no active plans for Storage@home.[7]
References
- "Storage@home Installation". Folding@Home web site. September 12, 2009. Retrieved October 28, 2016.
- "General Information about Storage@home". 2009. Retrieved September 17, 2011.
- Adam L. Beberg and Vijay S. Pande (2007). "Storage@home: Petascale Distributed Storage" (PDF). 2007 IEEE International Parallel and Distributed Processing Symposium. pp. 1–6. CiteSeerX 10.1.1.421.567. doi:10.1109/IPDPS.2007.370672. ISBN 978-1-4244-0909-9. S2CID 12487615.
- "The plan for splitting up data in Storage@home". 2009. Retrieved September 17, 2011.
- Vijay Pande (September 15, 2009). "First stage of Storage@home roll out". Retrieved December 14, 2011.
- "Storage@home FAQ". September 12, 2009. Archived from the original on August 15, 2011. Retrieved October 29, 2016.
- Vijay Pande (April 11, 2011). "Re: Storage@Home". Retrieved October 29, 2016.