This is how I would describe my experience. Sometimes it’s crunch time and most of the time it’s fuck around time. After crunch time I always throw a tantrum about how if we only bothered with planning we could largely avoid it.
If it ends up being ruled that training an LLM is fair use so long as the LLM doesn’t reproduce the works it is trained on verbatim, then licensing becomes irrelevant.
If something like that were to work, a lot of effort would need to be put into minimizing the UI friction. I could see something like: uploaders add topic tags to their videos, and an AI runs in the background to generate and apply new tags based on the content (most people would not understand how to properly tag content). An AI would also be used to build a graph of related tags, where similar or closely related tags are nodes joined by an edge.

Then, on first login the user is prompted to pick some tags to start with. Over time, the client uses the adjacent tag graph to fine-tune the user's tags, on device. The idea here is that we could get a decent algorithm that can recommend new stuff based on what the user watches, but keep the processing of user-specific data local.

The client would also have an option the user could enable to contribute their client's tag information back to the global tag graph, improving it for everybody. This data could also be combined with other users' data at the instance level to somewhat anonymize it, assuming it is a large multi-user instance. If you were to host a single-user instance, you'd probably not want to contribute to the global tag graph unless you're ok with your tag preferences being public.
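To make the idea concrete, here's a minimal sketch of how that could work. Everything here (the `TagGraph` class, `recommend`, `update_prefs`, the half-credit weighting for adjacent tags) is a hypothetical illustration of the scheme described above, not any real PeerTube or fediverse API:

```python
from collections import defaultdict

class TagGraph:
    """Global graph of related tags: nodes are tags, edges join similar tags."""
    def __init__(self):
        self.adjacent = defaultdict(set)

    def link(self, a, b):
        # Join two closely related tags with an edge.
        self.adjacent[a].add(b)
        self.adjacent[b].add(a)

def recommend(videos, prefs, graph):
    """Rank videos by direct matches against the user's tag weights,
    plus weaker credit for tags adjacent to preferred tags."""
    def score(tags):
        s = 0.0
        for t in tags:
            s += prefs.get(t, 0.0)
            # Half credit for neighbours in the global tag graph,
            # so related-but-new topics still surface.
            s += 0.5 * sum(prefs.get(n, 0.0) for n in graph.adjacent[t])
        return s
    return sorted(videos, key=lambda v: score(videos[v]), reverse=True)

def update_prefs(prefs, watched_tags, rate=0.1):
    # On-device fine-tuning: nudge tag weights toward what the user
    # actually watched. This dict never has to leave the client.
    for t in watched_tags:
        prefs[t] = prefs.get(t, 0.0) + rate
```

The key property is that `prefs` lives entirely on the client; only the tag graph (and, optionally, opted-in aggregated tag data) is shared.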
It’s a bit tricky but I think a privacy preserving algorithm is possible. Simply put, the more data available, the better an algorithm can be.
I think the easy discoverability on these platforms is part of what makes them so popular. Using TikTok or similar, a user typically wants to be shown new things; that sense of novelty keeps users constantly engaged. Having to do this manually would be a huge negative.
The algorithms are what make these services. Most interactions aren't searching for and selecting something specific or intentional, they're just opening a fire hose and expecting the algorithm to pick content they find entertaining. That requires the algorithm to have a lot of information, both about the specific user and about similar users.
Most people shouldn't self host. It's a hobby for people who want to do it, and there are benefits, but spending 3 hours on a weekend fixing stuff is not how most people wish to spend their time. Furthermore, it's not a good use of most people's time: we split labor into specialties, and forcing people to do work outside their specialty causes pointless inefficiency. I agree with other commenters that a better approach would be to have more small businesses hosting federated together, and anyone not inclined to self host should just purchase service through one of those many small providers instead.
As far as I know, only one SSD model meets my criteria (the Samsung 870 QVO 8TB), and at $520 right now I've decided it's best to wait. I'd like my setup to be quieter, but not badly enough to spend $1k on it (I need two).
How noisy are these? I have a pair of shucked WD drives that should be equivalent to Reds, and they're pretty noisy in my otherwise quiet home office. Given they're only 8TB, upgrading them to SSDs for full silence is something I'm considering as soon as pricing and availability permit.
Blocking a large messaging platform because a minority of people are using it for piracy, of all things, seems extremely disproportionate.
Out of curiosity, what are some use cases that would fit these criteria? VMs and containers are very capable, and it's much easier to debug a failed VM than a failed piece of hardware.
It’s just another form of notification fatigue.
I think because of federation, even if lemmy blows up, smaller instances and communities will still be able to exist. Over on Mastodon, the big instances have grown up quite a bit, but there are also many thriving smaller communities that either aren't federated with the big instances, or federate very selectively to curate the community they want.
As a person who has been managing Linux servers for about a decade now, trust me that a few hours or days of learning docker now will save you weeks if not months in the future. Docker makes managing servers and dealing with updates trivial and predictable. Setting everything up in docker compose makes it easy to recover if something fails, and it's self-documenting because you can quickly see exactly how your applications are configured and running.
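To illustrate the self-documenting point, here's a hypothetical minimal compose file (the image, port, and volume are placeholders, not anything from the comment above). Everything about how the service runs is visible in one place, and `docker compose up -d` recreates it identically after a failure:

```yaml
# docker-compose.yml — one file captures the whole deployment
services:
  web:
    image: nginx:alpine            # pin whatever image/version you actually run
    ports:
      - "8080:80"                  # host:container
    volumes:
      - ./site:/usr/share/nginx/html:ro
    restart: unless-stopped        # come back automatically after reboots/crashes
```

Keep this file in version control and you also get a history of every configuration change for free.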