We'll admit it - some of us at The Computing Center are science and space geeks. So this article about how Mars exploration will essentially require taking mainframe-level computer systems along with human spacefarers caught our eye. Also, look at the author's title - definitely cool! If you're like us - read on!
At the International Astronautical Congress in September, Elon Musk announced a vision to build a base on the moon in addition to his famous plans to build a permanent human colony on Mars. The announcement came with images of rockets, landing pads, refueling tanks and structures for human habitation. It’s an inspiring vision — but it can be easy to forget the individual steps it’ll take to realize the dream.
As Musk makes clear, long before SpaceX sends humans to the moon or Mars, they’ll have to send unmanned missions to establish the early infrastructure. In addition to propellant plants and solar panels, the early missions will almost certainly require systems for receiving, storing, analyzing and transmitting huge volumes of data. And with no humans on site to intervene, those systems will have to be incredibly robust, highly automated, adaptive, self-monitoring and self-healing.
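To make "self-monitoring and self-healing" concrete, here is a minimal sketch of the supervisor pattern such a system might use: a loop that probes its own health and spends a bounded restart budget recovering from faults before escalating. Every name here (`SelfHealingService`, `health_check`, `restart`) is an illustrative assumption, not part of any real mission software.

```python
class SelfHealingService:
    """Toy sketch of a self-monitoring, self-healing supervisor loop.

    Hypothetical names throughout; a real system would probe telemetry,
    storage, and communication links instead of a boolean flag.
    """

    def __init__(self, max_restarts=3):
        self.max_restarts = max_restarts  # recovery budget before escalating
        self.restarts = 0
        self.healthy = True

    def health_check(self):
        # Stand-in for real probes (disk, memory, link status, etc.).
        return self.healthy

    def restart(self):
        # Recovery action: reset state and record the attempt.
        self.restarts += 1
        self.healthy = True

    def supervise(self, ticks):
        """Run the monitor loop for a fixed number of ticks.

        Raises if the restart budget is exhausted; otherwise returns
        how many self-healing restarts were needed.
        """
        for _ in range(ticks):
            if not self.health_check():
                if self.restarts >= self.max_restarts:
                    raise RuntimeError("unrecoverable: restart budget exhausted")
                self.restart()
        return self.restarts


# Usage: inject one fault and watch the supervisor heal it unattended.
svc = SelfHealingService()
svc.healthy = False          # simulated fault, no human on site
restarts_used = svc.supervise(ticks=5)
```

The point of the bounded budget is the "no humans to intervene" constraint: the system must distinguish faults it can repair itself from ones that require escalation, rather than restarting forever.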
Those adjectives will sound familiar to anyone who’s worked with mainframe computer systems. In fact, Don Haderle, a scholar of the mainframe, recently suggested that they’ll become the obvious solution for the moon, Mars, and beyond — where the time and energy costs of, say, adding a new application will need to be near zero — not just zero-cost when engineers add the application but zero-cost in terms of the ongoing need for security, monitoring, repair and more.
That observation gets to a persistent confusion about mainframe expense: People often believe they can adopt systems that cost a tenth as much as mainframes, but as Haderle notes, “You might be able to do something for a tenth of the cost for the hardware, but that doesn’t consider the overall system cost. You have to look at the manageability of the operation: security, privacy, fault failure, backup, recovery and governance.”
You might be thinking: “Fair enough if you’re on the moon or Mars, but server rooms on Earth aren’t exactly harsh environments.” That depends on what you mean. As B.C. Gobin, Jing Cao and others have pointed out, the mainframes designed and built by IBM are already the secret and silent heroes of the digital age, processing every kind of transaction and query. And, as we all know, the amount and complexity of data generated by those transactions and queries is increasing by leaps and bounds.
That growth alone would constitute a harsh environment, but it’s compounded by the urgent need to analyze data in real time. Machine learning and other analytics hold incredible promise, but that promise relies on powerful, reliable and well-governed infrastructure. More and more data architects are recognizing that machine learning at scale can’t simply be bolted on to existing systems as an afterthought. Truly operationalizing machine learning requires purpose-built systems capable of ingesting, preparing, building, deploying, monitoring and continuously improving the data and models.
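The pipeline stages named above can be sketched end to end. This is a deliberately tiny illustration, assuming dict records and a one-parameter least-squares model; the stage names mirror the list in the paragraph, but the functions themselves are invented for this example.

```python
def ingest(raw_rows):
    """Ingest: accept raw records (plain dicts stand in for a data feed)."""
    return list(raw_rows)

def prepare(rows):
    """Prepare: drop incomplete records before they reach training."""
    return [r for r in rows if r.get("x") is not None and r.get("y") is not None]

def build(rows):
    """Build: fit a trivial one-parameter model y = w * x by least squares."""
    num = sum(r["x"] * r["y"] for r in rows)
    den = sum(r["x"] ** 2 for r in rows)
    return {"w": num / den}

def deploy(model):
    """Deploy: wrap the fitted model as a callable scoring function."""
    return lambda x: model["w"] * x

def monitor(score, rows, tolerance):
    """Monitor: True while worst-case error stays within tolerance;
    False signals the 'continuously improving' loop to retrain."""
    worst = max(abs(score(r["x"]) - r["y"]) for r in rows)
    return worst <= tolerance


# Usage: run every stage as one pipeline.
raw = [{"x": 1, "y": 2}, {"x": 2, "y": 4}, {"x": 3, "y": None}]
clean = prepare(ingest(raw))
score = deploy(build(clean))
in_spec = monitor(score, clean, tolerance=1e-9)
```

The "bolted on as an afterthought" failure mode is precisely when only `build` exists: without the surrounding ingest, prepare, deploy and monitor stages, there is no governed path from raw data to a model that can be trusted, watched and retrained in production.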
At the same time, hackers are finding new ways to organize attacks. And the attacks themselves vary wildly in kind — from stealing and manipulating data to destroying the infrastructure itself. Even our climate is a factor. It might not be as rough as on Mars or the moon, but fires, floods, hurricanes and storm surges all pose real — and growing — threats to infrastructure. More than ever, robust fault failure, backup and recovery can spell the difference between tranquility and collapse.
Harsh environments indeed. And as our digital environments get harsher over time — on Earth or beyond — it’s clearer than ever that the IBM mainframe is here to stay.