Developing a program to run for 50+ years

There are existing systems (e.g. COBOL programs in financial institutions) that have been running for decades, but were never designed for that purpose and are in dire need of replacement.

Imagine that you wanted to write a program to perform some sort of ‘mission-critical’ task. The task does not involve interaction with the physical world, just the digital/online one, so it could be a long-term computation, status monitoring, collecting repo statistics, a game server, data stream filtering, anything that can be accomplished by a computer with an Internet connection.

What sort of things would you consider if you wanted the program to run efficiently for decades (at least 50 years)?

To narrow the scope a bit, assume that:

  • The Internet connection and power supply are reliable.
  • You would replace hardware every 5–7 years to benefit from more speed, memory and/or storage.
  • You want (the pretty-much inevitable) changes to underlying systems to impact your program as little as possible.
  • You want to spend as little time ‘maintaining’ the whole system as possible.

On a 50-year timeline I could see programming language, operating system, and perhaps instruction set architecture being valid factors worthy of consideration, but there are, no doubt, many more.

What would you consider?

There is plenty of closed-source software that has outlived its intended support; as long as CPUs support virtualization, there is still the option of keeping it running in containers or a mini-root.

In past work I’ve managed to shoe-horn apps that were originally created on PalmOS onto WinMo and then Linux (ARM)… software-wise they involved databases, and the main issue was adapting to the larger storage support of WinMo 5–6.5; it wasn’t too hard to rework things to run on an ARM-based Linux that then relied upon an SQL-like database. On the ARM side of things, going from OMAP to XScale to Broadcom (Pi) were small steps, and going platform-neutral in that period allowed tweaking application performance to run within a browser.

From a data-analytics processing side of things, it’s fairly easy to create a portable platform. As long as there is C/C++, PHP/SQL and Python, the main limitation is how much Linux changes if you’re aiming to support software on as many CPU/OS platforms as possible. The only limitation with “online” stuff is that nobody knows how browsers may change; Microsoft ditched developing their browser, which rendered things better, so we’re stuck with Firefox and Google’s Chrome, and lesser-known browsers are at the mercy of which new web standards gain traction.
(I’m speaking from a web development/mobile OS side of things; making your work as platform-neutral as possible is important for hardware mobility, since you don’t want to be locked down to AMD, Intel, Freescale, Broadcom, TI, etc.)

A timeless UI, reliable support, and stability, from an end-user perspective

Seriously, though, it’s hard to imagine many use cases where a program needs to last that long. I think even space probes get reprogrammed in flight to handle changing requirements or environments. Business or government programs probably have to be updated even more often.

Once something is built for a specific task, reprogramming it becomes dicey, as you never quite know whether you’ll hit the memory/processor limits. There’s gear like overly custom CNC and automation equipment still chugging along on a Pentium 75–133 MHz with 16–32 MB of memory; the old Y2K fear didn’t hit as much ‘old tech’ as many worried it would.

Some news for you, then: all your tax payment information is stored on an AS/400 mainframe.

Like when you go to pay property tax at your local county office. See if you can catch a glance at their terminal.

The stuff that runs the world is already 50 years old. So I speculate that this is for the next 50.

Beyond tax payment info, some libraries are still using databases that rely upon ancient Wyse terminals in some parts of the US of A… nothing like that amber or green CRT.

Hmmm… a couple of issues with hardware replacement. All silicon chips degrade over time and suffer electron leakage. The circuit boards they live on corrode over time, as do components like caps… so you would be upgrading hardware. That NEW hardware may mean you can’t run your old programs, as instructions get migrated out and replaced by more advanced routines on newer chips if chip manufacturers are innovating.
You will then have to update the program to account for the changes, and hope your data stays consistent over time.
Or you can buy a 50-year supply of hardware and hope it lasts that long, remembering that spinning rust degrades over decades and flash even quicker.
The silicon chips should be fine, but conventional storage and the motherboards to run the hardware on will get rare quickly due to oxidation and accidental wastage.

Yeah, 50 years is quite a challenge.

The last option, which is the most expensive, is custom hardware: you can then balance the electrical needs of the CPU against the damage electrons cause over time and engineer it to account for electromigration by building the circuits with bigger walls across a larger die. Then program it in assembly/machine code, give it an atomic battery, set it running, and fire it into space, where oxidation can’t degrade the circuits.
That gives you your possible 50 years of uninterrupted operation.
Then it’s just a case of gathering the data you want it to process, and that can be done on any UI you choose on the ground.

PS… I bet Elon has a better idea :smiley:

So, given that virtualisation is an option, you’re not so much concerned about the hardware side of things, and would consider something like POSIX C for the core language?
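
Something along these lines is roughly what I have in mind: a core that touches nothing beyond ISO C and POSIX.1, so the same source should keep building for a very long time (just a sketch; the heartbeat file and the interval are placeholders):

```c
/* Long-lived worker written against ISO C and POSIX.1 only, so the same
 * source should keep building on any POSIX system for a long time. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

static volatile sig_atomic_t running = 1;

static void handle_term(int sig)
{
    (void)sig;
    running = 0;                      /* ask the main loop to wind down */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = handle_term;
    sigaction(SIGTERM, &sa, NULL);    /* clean shutdown on SIGTERM/SIGINT */
    sigaction(SIGINT, &sa, NULL);

    while (running) {
        /* placeholder for the real work: append a heartbeat record */
        FILE *log = fopen("heartbeat.log", "a");
        if (log != NULL) {
            fprintf(log, "%ld alive\n", (long)time(NULL));
            fclose(log);
        }
        sleep(60);                    /* POSIX sleep; a signal cuts it short */
    }
    return 0;
}
```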

Text?

Ah, I wasn’t suggesting that the program couldn’t be updated. Sorry if that wasn’t clear. Yes, over the course of 50 years some changes to it would likely be required. If 100% uptime is the goal, however, then does that favour languages like Erlang where code can be hot-swapped, or are there other approaches worth considering?
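
For what it’s worth, one non-Erlang approach I can imagine (purely a sketch; the worker.so module and its step() entry point are hypothetical names) is keeping the resident process trivial and loading the replaceable logic from a shared object, so new code can be picked up without the process ever exiting:

```c
/* Minimal hot-reload loop: the resident process never restarts; the
 * replaceable logic lives in a shared object that is re-opened on demand.
 * Build with -ldl on Linux. */
#include <dlfcn.h>
#include <stdio.h>
#include <unistd.h>

typedef void (*step_fn)(void);

int main(void)
{
    for (;;) {
        void *mod = dlopen("./worker.so", RTLD_NOW);   /* hypothetical module */
        if (mod == NULL) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            sleep(5);
            continue;
        }

        step_fn step = (step_fn)dlsym(mod, "step");    /* hypothetical entry point */
        if (step != NULL) {
            for (int i = 0; i < 60; i++) {             /* run this version for a while */
                step();
                sleep(1);
            }
        }

        dlclose(mod);   /* the next dlopen() picks up a freshly built worker.so */
    }
}
```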

Correct. We have what we have now by ‘accident’ and the result is ‘poor’. I’m thinking about how we can avoid that situation in the future, by building ‘resilient’ (not even sure that is the right word) programs.

Yes, hardware would likely be upgraded on a 5–7 year basis, as mentioned. The evolution of the ISA would mean that binaries would eventually become incompatible and execution would fail. Is that a vote for RISC, which has far fewer instructions and is less prone to creep?

FPGAs?

I’d be using a safe, higher-level language (maybe Ada - if it’s good enough for the DoD, it’s good enough for me - but depending on the specific task I’d consider something optimized for it in terms of code simplicity rather than performance) and rely mostly on hardware advancements to get performance.

Going the other way and writing low-level code for a specific architecture to get performance will give you a bunch of technical debt to carry that will limit your options in the future.

That’s not the sexy or macho “real programmer” answer, but in terms of reliability and longevity, I think it’s the correct trade-off.

The banking-software-in-COBOL example is kinda the epitome of this. COBOL is high-level, optimised for the things they’re using it for, and compilers can be developed to get it to run faster on newer hardware - it’s hardware-agnostic.

Speed is mostly irrelevant for this kind of long-term application, as over the lifetime of your app any initial performance concerns will be addressed by hardware. On the flip side, maintenance issues and stability/bug problems will not.

And if performance is truly a problem (that you can’t solve fast enough with hardware), the 90/10 rule generally applies. Find the hotspot and optimise THAT with something low level, not the whole app.
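
As a sketch of what I mean (the checksum functions and the USE_TUNED_CHECKSUM flag are made-up names), keep the portable version of the hot routine around forever and only ever swap in a tuned one behind a single small seam:

```c
/* Keep the hot routine behind one tiny seam; only this piece ever gets a
 * low-level, platform-specific rewrite, and the portable version remains
 * as the reference and fallback. */
#include <stddef.h>
#include <stdint.h>

/* simple, obviously-correct baseline, kept forever */
uint64_t checksum_portable(const uint8_t *buf, size_t len)
{
    uint64_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum = sum * 31 + buf[i];
    return sum;
}

#ifdef USE_TUNED_CHECKSUM
/* hand-tuned implementation, provided only on platforms where it exists */
uint64_t checksum_tuned(const uint8_t *buf, size_t len);
#define checksum checksum_tuned
#else
#define checksum checksum_portable
#endif
```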

edit:
oh, and also…

  • have a design spec for the app - so that there’s an “it’s functionally complete” point
  • resist the urge to add features “just because” (to prevent introduction of new bugs).

keep it simple!

50 years sounds like a long time, but I already have code I wrote in production at work that’s been running for 17 years (I left the company, came back, and it’s still running, lol. I actually made a bugfix last year for a known non-critical bug that I’d noted in a comment in the code back in 2004 :smiley: ).

Small time stuff, but hey…

From an analytics side of things, I’ve mostly moved everything closer to web-based with Python; hardware and OS mobility tends to be fairly common for those who grew up during the ’90s antitrust hell of Microsoft. When it comes to OS mobility, I once tried to get a personal program to work on Haiku (BeOS-based), and the lack of certain things made it impossible, as it would have been too time-consuming to reinvent the wheel. Audio/multimedia-wise, Haiku continues the nice factor of BeOS in terms of a very responsive OS, yet it has too little hardware support (I tried some bootloader experiments on non-Raspberry Pi ARM hardware with the daily images and managed to get it semi-working).

In terms of the oldest personal projects that haven’t broken yet, it ties with an application of mine for data logging with a PalmOS device and MacOS (PowerPC G3 era), somewhere between 2001 and 2002. I wouldn’t be surprised if, were I to recompile the source code of the MacOS application, it would work on a modern PowerPC Linux box, but it’s hard to justify that kind of expense.

I mean terminal, so yes, basically text.