Good news: DNS over TCP in musl has been fixed since v1.2.4, released in May 2023: https://www.openwall.com/lists/musl/2023/05/02/1
So if you use Alpine >= 3.18, you should no longer have this issue.
It looks like you are trying to reinvent parts of Kubernetes.
I would recommend giving it a try; it’s easy to spin up with k3s, even on a single node!
Set imagePullPolicy to Always in your Deployments (a Deployment is more or less the k8s version of a compose file) and use the latest tag; then every time you restart a deployment you get the latest version, with auto rollback. Pin the tag to a static version and it doesn’t update as long as you don’t change it.
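For example, a minimal Deployment along those lines might look like this (the app name, labels and image are placeholders for your own service):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:latest  # pin to a fixed tag to freeze the version
          imagePullPolicy: Always                   # re-pull the image on every (re)start
```

Running `kubectl rollout restart deployment/myapp` then recreates the pods and pulls whatever latest currently points to.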
For GitOps, add fluxcd.io and you’re set; it doesn’t even require a CI workflow.
For the data copy, k8s provides Volume Snapshots https://kubernetes.io/docs/concepts/storage/volume-snapshots/
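As a rough sketch, assuming your CSI driver supports snapshots (the snapshot class and PVC names below are placeholders):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-snapshot
spec:
  volumeSnapshotClassName: csi-snapclass     # placeholder: your VolumeSnapshotClass
  source:
    persistentVolumeClaimName: data-pvc      # placeholder: the PVC you want to copy
```

You can then restore it by creating a new PVC whose dataSource points at the snapshot.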
Syncthing is also an option.
10 W is roughly 87.6 kWh/year (10 W × 8,760 h). Depending on your electricity cost, it would take 1 to 5 years to recoup the price of a PicoPSU, and that’s assuming you even manage to save 10 W, which is not a certainty.
If you really care about those 10 W, selling the OptiPlex and getting a second G3 would be a better option, I think.
The documentation clearly states that idle VMs on the free tier can be reclaimed: https://docs.oracle.com/en-us/iaas/Content/FreeTier/freetier_topic-Always_Free_Resources.htm#freetier_topic_Always_Free_Resources_Infrastructure
Idle Always Free compute instances may be reclaimed by Oracle. Oracle will deem virtual machine and bare metal compute instances as idle if, during a 7-day period, the following are true:
- CPU utilization for the 95th percentile is less than 15%
- Network utilization is less than 15%
- Memory utilization is less than 15% (applies to A1 shapes only)
So don’t create a 4-core, 32 GB RAM VM just to run a VPN, and you should be fine :)
K8s really shines when you start hosting more stuff, even on a single node. I definitely recommend giving k3s a try. I wouldn’t recommend it for only a couple of services though.
Is it overkill? Yes, applying docker-compose manually also works. But then you still have to make your reverse proxy, your certificate and all your services work together. You can write Ansible for it, but then you end up with a lot of custom code to maintain and you still don’t get all the nice features.
For me the killer feature was Flux. Your code, configs and even secrets live in git and get auto-deployed and auto-healed. It also has other features, such as operators that fetch Helm charts from other repos and apply your config to them.
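To give an idea, a minimal Flux setup is just two objects: a GitRepository pointing at your repo and a Kustomization that applies a path from it (URL, branch, path and intervals below are placeholders):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: my-cluster
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/my-cluster   # placeholder: your gitops repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: my-cluster
  path: ./apps        # placeholder: directory in the repo containing your manifests
  prune: true         # remove resources from the cluster when they are deleted from git
```

Push a change to that path and Flux reconciles it into the cluster on the next interval; delete a manifest and prune cleans it up.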
The OpenAI Cookbook, while mostly focused on OpenAI LLMs, provides lots of useful information on improving result reliability by tweaking your prompts, and a lot more, such as code samples: https://github.com/openai/openai-cookbook
About LangChain, I’ll go a bit against the flow and suggest against it if you want to actually understand what is happening. It adds so much abstraction that it hides the prompts and prevents you from easily adapting its behavior. This discussion on Hacker News goes into more detail: https://news.ycombinator.com/item?id=36645575 Having recently dived into this topic and been bitten by LangChain’s shortcomings, I can only agree with the comments.
Great post, thanks for sharing 👍
I would suggest giving Ansible a try; it makes it really easy to deploy a new service with all the required users and config.
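As a rough sketch of what that could look like (the host group, service name and paths are placeholders):

```yaml
# deploy-myservice.yml: run with ansible-playbook -i inventory deploy-myservice.yml
- hosts: homeserver            # placeholder: your inventory group
  become: true
  tasks:
    - name: Create a dedicated system user for the service
      ansible.builtin.user:
        name: myservice
        system: true

    - name: Create the config directory
      ansible.builtin.file:
        path: /etc/myservice
        state: directory
        owner: myservice
        mode: "0750"

    - name: Deploy the config file from a template
      ansible.builtin.template:
        src: myservice.conf.j2
        dest: /etc/myservice/myservice.conf
        owner: myservice
        mode: "0640"
```

Re-running the playbook is idempotent, so the same file also documents how the service was set up.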