
Ramo Helps Academics Store Massive Datasets
When Dr. Drummond Fielding, a theoretical astrophysicist at Cornell University, set out to simulate the behavior of interstellar gas on one of the world’s largest supercomputers, he faced a critical problem common to many researchers: where to store the results.
A complete run of the simulation can easily exceed 5 petabytes of output, far more than an academic budget can realistically retain long term. But the alternative, deleting the data, carries an even steeper price: re-running these simulations can cost hundreds of thousands of dollars in electricity alone, sometimes up to a million.
For years, this dilemma has forced researchers to pick and choose what to keep, distilling complex results down to tiny curated subsets. The result is a frustrating cycle: insights get published, but the raw data often vanishes, making independent verification or reanalysis nearly impossible.
Enter Ramo Cloud, which connects storage clients to a decentralized storage network powered by open protocols like Filecoin. Instead of relying on a single provider, Ramo Cloud draws on a network of independent hardware operators around the world to offer affordable, reliable storage at scale. Developed by web3mine, Ramo Cloud lets data-rich clients upload, store, and manage massive datasets seamlessly and affordably across a decentralized infrastructure network, with no vendor lock-in.
Discovering Ramo Cloud Storage: Verify, don’t trust
When Fielding learned about Ramo Cloud Storage, he quickly recognized its potential and made it part of his workflow. He now archives the full raw outputs affordably with Ramo Cloud Storage and keeps only a smaller subset locally for further study.
What makes the decentralized storage behind Ramo Cloud compelling for researchers isn't just the cost-effectiveness; it's the ability to share, verify, and preserve data in ways traditional cloud platforms can't. Once a dataset is uploaded, the network regularly produces independently verifiable cryptographic proofs that the stored data remains intact and unaltered. Reproducibility and data integrity are built in, not bolted on.
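To see the core idea in miniature, here is a deliberately simplified, hypothetical Python sketch of a challenge-response integrity proof built on a Merkle tree. This is not Filecoin's actual protocol (Filecoin uses Proof-of-Replication and Proof-of-Spacetime with zk-SNARKs); the chunk size, function names, and challenge logic are all illustrative assumptions. What it demonstrates is the principle: a client who keeps only a 32-byte commitment can repeatedly challenge a provider to prove it still holds any piece of the original data.

```python
# Simplified proof-of-storage illustration, NOT Filecoin's real protocol.
import hashlib
import os

CHUNK_SIZE = 1024  # bytes per leaf; an arbitrary choice for this demo

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(chunks):
    """Build a Merkle tree bottom-up; return the root and all layers."""
    layer = [h(c) for c in chunks]
    layers = [layer]
    while len(layer) > 1:
        if len(layer) % 2:  # duplicate the last node on odd-sized layers
            layer = layer + [layer[-1]]
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        layers.append(layer)
    return layer[0], layers

def prove(index, layers):
    """Provider side: collect the sibling hashes for one challenged chunk."""
    path = []
    for layer in layers[:-1]:
        if len(layer) % 2:
            layer = layer + [layer[-1]]
        path.append((layer[index ^ 1], index % 2))  # (sibling, is-right-child)
        index //= 2
    return path

def verify(root, chunk, path):
    """Client side: recompute the root from the chunk plus the sibling path."""
    node = h(chunk)
    for sibling, node_is_right in path:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root

# The client splits the dataset into chunks and keeps only the 32-byte root.
data = os.urandom(CHUNK_SIZE * 8)  # stand-in for a simulation output
chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
root, layers = build_tree(chunks)

# Later, the client picks a chunk index as a challenge; only a provider that
# still holds the real data can answer with a chunk and path that verify.
challenge = 5
assert verify(root, chunks[challenge], prove(challenge, layers))
print("proof for chunk", challenge, "verified against the stored root")
```

Real proof-of-storage systems randomize and repeat such challenges over time, so a provider that has discarded the data will, with overwhelming probability, eventually fail one.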
Fielding plans to use Ramo Cloud for future simulations as well. And he believes this problem isn't his alone: labs around the world face the same dilemma, with valuable data to keep, limited storage, and no sustainable backup solution.
"I can think of a dozen university supercomputers right now where researchers would love to put their data somewhere safe," - Drummond Fielding
What’s next: Scaling at the speed of the dataverse
Ramo Cloud continues to grow its pipeline of high-value data, helping clients turn datasets into durable, verifiable digital assets. And as Vukasin Vukoje points out, "this isn't theoretical: We're already helping clients in AI and beyond store far more data than their budgets would have allowed on conventional clouds—without increasing their spend." As AI scales and science expands, one thing is clear: being able to store everything might be just the right level of ambition.
If you are looking to store massive datasets, visit use.ramo.computer to get started. For hardware operators with idle storage capacity, visit provide.ramo.io to learn how you, too, can provide resources on the Ramo network and help shape the future of the cloud.