I have local incremental backups and rsync to the remote. Doesn’t Syncthing have incremental also? You have a good point about syncing a destroyed disk to your offsite backup. I know S3 has some sort of protection, but I haven’t played with it.
I have Tailscale mostly set up. What’s the issue with USB drives? I’ve got a Raspberry Pi on the other end with a read-only SD card so it won’t go bad.
This reminds me that I need to set up alerting and monitoring. ;-)
I’ll have to check this out.
I attended some LUGs before COVID and could see something like this being facilitated there. It also reminds me of the Reddit meetups that I never took part in.
That’s something that I hadn’t considered!
I wasn’t aware of the untrusted setting. That sounds like a good option.
Yes. It’s the “put a copy somewhere else” part that I’m trying to solve for without a lot of cost and effort. So far, having a remote copy at a relative’s house has been good for being off-site and for cost, but the time spent supporting it has been less than ideal: the Pi will sometimes become unresponsive for unknown reasons, and getting the family member to reboot it “is too hard”.
Take some time and really analyze your threat model. There are different solutions for each scenario. For example, protecting against a friend swiping the drives may be as simple as LUKS on the drive and a USB key holding the unlock keys. Another poster suggested leaving the backup computer wide open but encrypting the files that you back up, symmetrically or asymmetrically based on your needs (a sketch of the symmetric approach is below). If you’re hiding it from the government, check your local laws: you may be guilty until proven innocent, in which case you need “plausible deniability” about what’s on the drive, and that’s a different solution. Are you dealing with a well-funded nation-state adversary? Maybe keying in the password by hand isn’t such a bad idea.
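Here’s a minimal sketch of that “encrypt it before it leaves the house” idea, assuming Python with the `cryptography` package. The key path and archive name are hypothetical, and a real setup would chunk large files and guard the key more carefully; the point is just that the remote disk never sees plaintext.

```python
# Minimal sketch: encrypt a backup archive locally before shipping it to an
# untrusted remote, so the remote disk can stay "wide open".
# Assumes `pip install cryptography`; paths below are hypothetical.
from cryptography.fernet import Fernet

KEY_FILE = "/mnt/usbkey/backup.key"     # keep this on the USB key, not on the server
ARCHIVE = "/tmp/backup-2024-01-01.tar"  # hypothetical archive produced by tar/rsync

def load_or_create_key(path: str) -> bytes:
    """Read the symmetric key, or create one the first time."""
    try:
        with open(path, "rb") as f:
            return f.read()
    except FileNotFoundError:
        key = Fernet.generate_key()      # symmetric key; protect it like a password
        with open(path, "wb") as f:
            f.write(key)
        return key

def encrypt_archive(archive: str, key: bytes) -> str:
    """Write an encrypted copy next to the archive and return its path."""
    f = Fernet(key)
    with open(archive, "rb") as src:
        token = f.encrypt(src.read())    # fine for modest archives; chunk huge ones
    out = archive + ".enc"
    with open(out, "wb") as dst:
        dst.write(token)
    return out

if __name__ == "__main__":
    key = load_or_create_key(KEY_FILE)
    print("Encrypted copy written to", encrypt_archive(ARCHIVE, key))
```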
I’m using LUKS with Mandos on a Raspberry Pi. I back up to a Pi at a friend’s house over Tailscale where the disk is wide open, but Duplicity encrypts the backup files. My threat model is a run-of-the-mill thief swiping the computers and script kiddies hacking in.
You’re doing God’s work!
Over my career, it’s been sad to see how the technical communications groups are the first to get cut because “developers should document their own code”. No, most can’t. The lack of good documentation also leads to churn in other areas. It’s difficult to measure, but for those in the know, it’s painfully obvious.
I had one from Sony a long time ago. It even had a cable you could attach between two of 'em (600 CDs!) so that one unit could seamlessly start playing a track while the other loaded the next song. I dropped it during a move, and the next time I opened the door, it spit gears at me. I had intended to fix it someday, but then I started watching Hoarders and decided it wasn’t worth it.
Can you elaborate on the scenario this is solving for? Isn’t software RAID a performance hit?
Kubernetes is abbreviated K8s (because there are 8 letters between the “k” and the “s”). K3s is a “lite” version. Generally speaking, Kubernetes manages your containers. You basically tell K8s what the state should be, and it does what it needs to do to get the environment as you’ve declared. It’ll check and start or restart services, and start containers on a node that can run them (like ensuring enough RAM is available). There’s a lot more to it, but that’s the general idea.
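To make the “declare the state and let it reconcile” idea concrete, here’s a rough sketch using the official `kubernetes` Python client. It assumes a working kubeconfig, and the names, image, and numbers are placeholders rather than anyone’s actual setup.

```python
# Rough sketch of declarative desired state with the official `kubernetes`
# Python client (pip install kubernetes). Assumes a working kubeconfig;
# the deployment name, image, and resource numbers are placeholders.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="demo-web"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # desired state: two copies, restarted/rescheduled if they die
        selector=client.V1LabelSelector(match_labels={"app": "demo-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-web"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="web",
                    image="nginx:1.25",
                    resources=client.V1ResourceRequirements(
                        # only schedule onto nodes with this much RAM/CPU to spare
                        requests={"cpu": "100m", "memory": "128Mi"},
                    ),
                )
            ]),
        ),
    ),
)

# Submit the desired state; the control loop handles starting, restarting,
# and placing the containers on nodes that can run them.
apps.create_namespaced_deployment(namespace="default", body=deployment)
```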
Helm is one of the reasons I became interested in Kubernetes. I really like the idea of a package where all I have to do is provide my preferences in a values file. Before Swarm was mature, I was managing my containers with complicated shell scripts to bring things up in the right order, and it became fragile and unmaintainable.
One line from your comment struck a chord: the part about maintenance and upgrades. I feel like I get stuff set up and working, go about my life, and then a failure happens at the most inopportune moment. Mostly the failures come when I have a few hours free and decide to upgrade the OS, and then everything breaks, the dependencies fall apart, and some feature is no longer supported. That’s where I started looking to K8s, so I can just roll back until I have time to deal with it.
For me, I find that I learn more effectively when I have a goal. Sure, it’s great to follow somebody’s “Hello World” web site tutorial, but the real learning comes when I start to extend it to include CI/CD for example.
As far as a use case, I’d say that learning IS the use case.
This is tangential to your question, but I’ve been playing with Kubernetes and its ability to ration resources like CPU and RAM. I’m guessing that Docker has a similar facility. Doing this, I hope, will let me have Plex transcode videos in the background without affecting the responsiveness of a web app I’m using, and will kill and restart that one app I wrote that has a memory leak I can’t find.
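Docker does have a similar facility. Here’s a quick sketch using the Docker SDK for Python; the images, names, and limits are made up, but the `mem_limit` / `nano_cpus` knobs are the per-container caps I mean, and a container that blows past its memory limit gets killed and restarted by its restart policy.

```python
# Quick sketch of per-container CPU/RAM caps with the Docker SDK for Python
# (pip install docker). Images, names, and numbers are illustrative only.
import docker

d = docker.from_env()

# Cap a transcoding container so it can't starve everything else.
d.containers.run(
    "linuxserver/plex",
    detach=True,
    name="plex",
    mem_limit="2g",           # hard RAM ceiling
    nano_cpus=2_000_000_000,  # roughly 2 CPUs' worth of time
)

# Keep the leaky homegrown app on a short leash and let Docker restart it
# after it gets OOM-killed for exceeding its memory limit.
d.containers.run(
    "my/leaky-app:latest",    # hypothetical image
    detach=True,
    name="leaky-app",
    mem_limit="256m",
    restart_policy={"Name": "on-failure", "MaximumRetryCount": 5},
)
```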
I don’t really need it online all the time, but I don’t expect that I’ll find time to do it all at once and I thought swapping may be a way to break up the job into interruptible segments.
I suspect I’ll have an issue after reading many of the comments. This is also an older Dell server. My only real advantage is that it’s currently hosting 2 VMs and one is just a test server, so I don’t mind losing the data.
Perhaps I’ve been naive.