Mine is in the picture: 1544 days and counting!
It’s an EC2 nano instance that’s used only as a monitor for a few services that are running inside my VPN. It has served me well over all these years!
EDIT: before everyone starts screaming about “security”:
It’s not internet facing and no port is opened, all it does is fire up a notification if/when something doesn’t reply.
Even in the unlikely scenario that someone gains access to it, that means my VPN is already compromised, and I’ve got bigger problems to worry about.
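A monitor like that can be tiny. Here’s a minimal Python sketch of the idea, assuming plain TCP reachability checks; the hosts, ports, and the print-as-notification stand-in are all placeholders, not the actual setup:

```python
import socket

# Services to watch inside the VPN -- these addresses are made up.
SERVICES = [("10.0.0.5", 22), ("10.0.0.6", 443)]

def is_up(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_all(services):
    """Return the (host, port) pairs that did not reply."""
    return [(h, p) for h, p in services if not is_up(h, p)]

if __name__ == "__main__":
    for host, port in check_all(SERVICES):
        # Swap print for whatever notification hook you actually use.
        print(f"ALERT: {host}:{port} did not reply")
```

Run it from cron every few minutes and the box never needs an open inbound port, which matches the setup described above.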
Not so high because of frequent updates and reboots for security
I was about to say the same. My nodes have weekly updates, but that’s fine. Just k8s things.
So you never apply patches or updates? That seems like an odd thing to be proud of, but different strokes for different folks, I guess.
It’s not internet facing and no port is opened, all it does is fire up a notification if/when something doesn’t reply.
Even in the unlikely scenario that someone gains access to it (nobody has in the last ~4 years), that means my VPN is already compromised, and I’ve got bigger problems to worry about.
Makes sense, but even then I would just run automatic updates every few months, just to keep best practice. Nonetheless, cool uptime. Now do 10 years :)
Well now it’s becoming kind of a challenge: will AWS terminate/migrate the instance at some point, or will I be forced to reboot?
My fridge has been on since it was plugged in. It’s also offline.
You have no power outages? Or you UPS your fridge? That’s commitment.
Power outages aren’t that common, so it’s quite possible
Unless you live somewhere with above-ground powerlines and wind. Biannual occurrence for me.
I remember this story from about twenty years back hitting the news:
https://www.theregister.com/2001/04/12/missing_novell_server_discovered_after/
Missing Novell server discovered after four years
In the kind of tale any aspiring BOFH would be able to dine out on for months, the University of North Carolina has finally located one of its most reliable servers - which nobody had seen for FOUR years.
One of the university’s Novell servers had been doing the business for years and nobody stopped to wonder where it was - until some bright spark realised an audit of the campus network was well overdue.
According to a report by Techweb it was only then that those campus techies realised they couldn’t find the server. Attempts to follow network cabling to find the missing box led to the discovery that maintenance workers had sealed the server behind a wall.
Personally, I shut down my server at midnight to let it relax for a bit. #MentallySupportingOurHomeServers But yes, I still agree with the comments above: even if the server is not directly connected to the Internet, upgrading is mandatory nowadays. Bots are everywhere, especially with all of these AI tools.
PBX admins are laughing in the background as their uptime is almost 4k days, running CentOS 5
About 6 years of uptime on one machine before we shut it down and relocated.
I think I got up to 300 or so days on my old Athlon XP Gentoo server. I have “upgraded” since then, and my current server can’t go more than 2 days. I have an Arduino connected to the motherboard’s reset button pin that resets it whenever the bash script that communicates with the Arduino stops running, but even that somehow still crashes at least once a week and needs manual intervention.
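For what it’s worth, the heartbeat half of a setup like that can be sketched in a few lines. This is Python rather than the bash script described above, purely for illustration, and the serial device path, heartbeat byte, and interval are all assumptions; the Arduino side would pull the reset pin low whenever the bytes stop arriving:

```python
import os
import time

def send_heartbeat(device="/dev/ttyUSB0"):
    """Write a single heartbeat byte to the Arduino's serial device."""
    fd = os.open(device, os.O_WRONLY | os.O_NOCTTY)
    try:
        os.write(fd, b"H")
    finally:
        os.close(fd)

def heartbeat_loop(device="/dev/ttyUSB0", interval=10.0, iterations=None):
    """Send heartbeats forever, or `iterations` times (handy for testing).

    If this loop dies, the heartbeats stop and the Arduino hits reset
    after some multiple of `interval`.
    """
    sent = 0
    while iterations is None or sent < iterations:
        send_heartbeat(device)
        sent += 1
        if iterations is None or sent < iterations:
            time.sleep(interval)
```

The irony, of course, is that the watchdog script itself then becomes the thing that needs watching, which matches the “still crashes once a week” experience above.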
Broke 14 months on my unraid server before rebooting for an update. But it’s been running 24/7 for about 6 years now with maybe 3-4 reboots.
Currently 60 days
When the power goes out my NAS shuts down, and also for updates.
I think my first raspberry pi got to 500-ish days before we had a power outage
A month probably. I have to reboot it mainly for updates
I don’t know, it depends on the patches really. I have automatic updates, so I guess a few months would probably be the longest between kernel patches.
Many years ago, working for a monitoring software company, someone found a bug in the uptime monitoring rules where they reset after a year.
It was patched, and when I upgraded one client, their whole Solaris plant immediately went red and alerted. They told me to double the threshold to two years, and some stuff was still alerting.
They just said they’d try to get around to rebooting it, but it was all stable.
Everywhere else I’ve worked enforces regular reboots.
My father ran an HP-UX server that did inventory management (not internet connected) that had an uptime greater than 10 years before it was migrated.