Just some Internet guy

He/him/them 🏳️‍🌈

  • 0 Posts
  • 218 Comments
Joined 1 year ago
Cake day: June 25th, 2023


  • Max-P@lemmy.max-p.me to Gaming@lemmy.ml · *Permanently Deleted* · edited 4 days ago

    I would literally donate money directly to Valve if I could for all the good selfless work they’re doing.

    Their sponsoring of DXVK and of Proton’s development, their contributions making the AMD drivers even more awesome, gamescope; they’ve been driving all the HDR and VR work on Linux, and now they’re also getting even more hands-on with Wayland through frog-protocols.

    Meanwhile the others are either doing nothing at all except selling the games, or actively sabotaging Linux gaming and furthering Microsoft’s monopoly like Epic Games is doing with their intrusive anti-cheat.

    Being on Steam is strongly pro-consumer, and the first thing a developer does by not publishing on Steam is make me much less likely to buy their game, because at least on Steam I know I won’t get ripped off.

    Couldn’t care less about whiny developers complaining they make slightly fewer millions in sales for overpriced AAA games, and who still impose their own launcher and shit because they only treat Steam like a store and nothing else. I pick what’s good for the players, not the developers. If they’re unhappy, there are dozens of indie developers in line to pick up the slack, willing to make games I’m willing to pay for.

    EDIT: And a couple hours later, Valve delivers once again: https://lists.archlinux.org/archives/list/arch-dev-public@lists.archlinux.org/thread/RIZSKIBDSLY4S5J2E2STNP5DH4XZGJMR/?sort=date



  • Yep, and I’d guess there’s probably a huge component of “it must be as easy as possible” because the primary target is selfhosters that don’t really even want to learn how to set up Docker containers properly.

    The AIO Docker image is an abomination. The other ones are slightly more sane but they still fundamentally mix code and data in the same folder so it’s not trivial to just replace the app.

    In Docker, the auto-updater should be completely neutered; updating the app from inside the running container is the wrong way to update it. You replace the image instead.

    The packages in the Arch repo are legit saner than the Docker version.


  • I’ve heard very good things about resold HGST Helium enterprise drives, and they can be found fairly cheap for what they are on eBay.

    I’m looking for something from 4TB upwards. I think I remember that drives with very high capacity are more likely to fail sooner - is that correct?

    4TB isn’t even close to “very high capacity” these days; there are 32TB HDDs out there. Just avoid the shingled (SMR) archival drives. I believe the reputation of high-capacity drives failing sooner is really about the maturity of the technology rather than the capacity itself: 4TB drives made today are much better than the very first 4TB drives made a long time ago, when they were pushing the limits of the technology.

    Backblaze has pretty good drive reviews as well, with real world failure rate data and all.





  • And all that forever, too. The developers don’t pay a dime beyond Steam’s cut to keep the game alive, downloadable, and playable. Even Steam keys: you can sell as many as you want outside of Steam, for free.

    The devs can just raise the price by 30% if they feel they really need the money. I’ll pay the extra to have it on Steam and just work out of the box in Proton. Unlike Apple, it’s not a monopoly, nothing stopping anyone from just distributing on their own.


  • Epic is anti-consumer and also anti-Linux: they make no effort to support other platforms, and the app is shit.

    Meanwhile, Steam:

    • Actively working with the FOSS community to help preserve old games
      • Kernel improvements for better graphics performance
      • Lots of VR and HDR work
      • Many contributions to the open-source AMD drivers
    • Has been supporting Linux gaming for a decade with no signs of backing down
    • They have a portable Linux gaming console experience, and it’s intentionally left wide open for users to mess with
      • They’ve taken several community features and built them into the OS
    • Their DRM is weak and unintrusive
    • Their anticheat is unintrusive
    • The sales are pretty good
    • They have tons of features for users:
      • Family sharing
      • Remote Play Together
      • Remote Play
      • Streaming
      • Community forums for every game
      • Mod workshop
      • Matchmaking
      • Steam Chat / Voice Chat / Streaming

    The only appealing thing about EGS is that it takes a lower cut from developers, who just pocket the difference; it doesn’t even result in lower prices for users. As a Linux user, praise our Lord GabeN for all the good Valve has done for gamers. Even most developers are quite happy with the services they get back for that 30% cut.

    I’d say the dislike is mainly that for the users, EGS doesn’t bring in anything new or interesting or useful that Steam didn’t already do well, and goes directly against a lot of the good Steam has been doing. It’s just a store that makes big developers slightly more happy.




  • I have both. I find that YouTube Music has a much better algorithm, but the app really does suck, although at least it doesn’t crash for me. Spotify’s app is a lot more polished (although lately it too has started to enshittify), but its music discovery is a bit lacking. Audio quality is better on Spotify; YTM just sounds compressed to be as loud as possible.


  • I believe you, but I also very much believe that there are security vendors out there demonizing LE and free stuff in general. The “more expensive equals better and more serious” thinking is unfortunately still quite present, especially in big corps. Big corps also seem to like the concept of proving yourself with a high price of entry; they just can’t believe a tiny company could possibly have a better product.

    That doesn’t make it any less ridiculous, but I believe it. I’ve definitely heard my share of “we must use $sketchyVendor because $dubiousReason”. I’ve had to install ClamAV on readonly diskless VMs at work because otherwise customers refuse to sign because “we have no security systems”. Everything has to be TLS encrypted, even if it goes to localhost. Box checkers vs common sense.


  • IMO that’s more of a problem with the industry not really caring to support lower specs, or generally not seeing the Deck as a real console or platform to target. People still make Switch games, and the damn thing was already outdated at launch, and they even underclocked it for good measure.

    At 800p you’ve got to start thinking: is most of the detail those games compute even actually visible on the screen? How many PCs does that make obsolete? If the Deck can’t run it at 800p, even at 1080p you’re gonna need what, an RTX 2060 for the lowest settings on a PC?

    Some of the example titles don’t even sound like the kind of titles made to showcase what your 4090 can do; logically, you’d want as many people as possible to be able to play them.



  • Neither do Google Trust Services or DigiCert. They’re all HTTP validation on Cloudflare, and we have Fortune 100 companies served with LetsEncrypt certs.

    I haven’t seen an EV cert in years, browsers stopped caring ages ago. It’s all been domain validated.

    LetsEncrypt publicly logs every certificate it issues to Certificate Transparency logs; that’s a lot more openness than what regular CAs used to offer.

    I guess one more to the pile of why everyone hates Zscaler.


  • Because it’s too flexible, and it assumes everyone has the source code to glue it all together. There are endless choices you can make to have a functional system.

    • Before you even compile the kernel, you have to provide a C compiler. That can be GCC or LLVM/clang.
    • Before you even build the kernel, you have to pick a CPU architecture and subsystems to enable.
    • Before you can even boot the kernel in any useful manner, you need to select a partition table format, one or more filesystems to put on the drive, all with varying amounts of features, but are at least mostly all POSIX compliant. Or a ramdisk.
    • Even just starting at the very core of userspace, the C standard library, you have glibc, musl, uClibc. That can only be dealt with at compile time.
    • Then on top of that, for the core utilities, you have the GNU coreutils, uutils, busybox, toybox, the BSD coreutils.
    • Great, we can start booting now. Wait, now there’s the choice of init system: systemd, sysvinit, OpenRC, runit, upstart, dinit, and a lot more. Good, we’re booted.
    • Now we need a login prompt, which can be agetty, greetd, mingetty, GDM, SDDM, LightDM. You’ve entered your password: that may or may not trigger a PAM session, which can verify your password from just about anywhere (locally, Kerberos, LDAP), start a D-Bus session, register a session with logind, that can trigger decryption and mounting of a drive, which itself could be local or remote or removable.
    • We’re logged in! Now we need a shell. There’s bash, dash, zsh, ash with their own small differences, and that’s just the POSIX compatible ones. There’s also fish, nu, ksh, csh and more.
    • We have a prompt! Now we should probably install some software. Is it gonna be apt, yum/dnf, zypper, pacman, apk, xbps, emerge, port? What are the package names? Depends on the distro!
    • We have a way to install software, now we need a network to get it. How’s the network configured? ifupdown, systemd-networkd, NetworkManager, Connman, dhclient, dhcpcd, netplan, netctl. If you have WiFi, there’s iwd and wpa_supplicant.
    • Let’s get a graphical session. Xorg or Wayland based? ALSA, PulseAudio or PipeWire? Window manager or desktop environment?
    • You want to mount a drive. systemd can do that, udev can do that, fstab can do that.

    That’s just the basics to make it to a desktop. Now there’s some stuff to help that a lot, like Flatpak, which aims to provide a known base system for apps to target. The portals help get access to resources with varying backends. PipeWire supports pretty much every audio protocol in existence, so that’s alright. Flatpak is a pretty good standard/ABI to target. For server software we have similar things in the form of Docker and Podman. But all of these solutions are basically “let’s just ship the distro with the software”.

    The only really standard interface is the Linux kernel’s public interface. If you’re writing a driver, you’d better be ready to maintain it, because stuff moves around a lot internally; the kernel doesn’t take care not to break out-of-tree modules. Go makes use of the stable kernel syscall interface and skips libc entirely, so Go binaries are usually fairly portable as long as the kernel is somewhat sane.

    The only real standard you can target is POSIX, which is fine if you’re writing CLI or server software, but if you want to write GUIs, you just have to make choices. Most Linux stuff runs fine on FreeBSD too; they have Wayland, PipeWire and Mesa there as well, so technically at this point you’re not even targeting Linux per se, more like generally POSIX-y systems plus a stack of very commonly used software.

    On Windows and Mac, you have what Microsoft/Apple provides and if you want anything else you bring it yourself. However, technically you can install PulseAudio on those, install an X server (Xming, Xquartz), run most DEs in there, run browsers and quite a bit of Linux-y stuff, natively on Windows and Mac in their respective binary formats.

    The thing with FOSS is there isn’t a single standard it targets; we just port everything to everything as needed. The closest thing we have to a standard is targeting specific versions of specific distros, usually Debian/Ubuntu or RHEL and derivatives, because that’s what the enterprise customers that pay for the development tend to run. That’s why DaVinci Resolve is a pain to run on anything other than Rocky Linux. Thankfully, it’s also just software and dependencies, so if you give it everything it uses from Rocky, it’ll work just fine on other distros. And that’s why source code is important: you can make everything work with everything, given enough time and patience. That’s what powers the ecosystem.



  • That’s more the general DevOps/server-admin learning curve being steep than anything specific to Vaultwarden, to be fair.

    It looks a bit complicated at first, as Docker isn’t a trivial abstraction, but it’s well worth it once it’s all set up and going. Each container is always the same, and always independent. Vaultwarden per se isn’t too bad to run without a container, but the same Docker setup can be used for, say, Jitsi, which is an absolute mess of components (some Java stuff and all) to install and make work. But with Docker? Just docker compose up -d, wait a minute or two, and it’s good to go; you just need to point your reverse proxy at it.
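    To give an idea of how little that takes, here’s a minimal compose sketch. The image name is the commonly used one; the paths, port, and restart policy are just example choices to adjust to your setup:

```yaml
# docker-compose.yml -- minimal Vaultwarden sketch (paths/ports are examples)
services:
  vaultwarden:
    image: vaultwarden/server:latest
    restart: unless-stopped
    volumes:
      - ./vw-data:/data          # all persistent state lives in this one folder
    ports:
      - "127.0.0.1:8080:80"      # bind to localhost only; the reverse proxy fronts it
```

    Then docker compose up -d and point the reverse proxy at localhost:8080.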

    Why do you need a reverse proxy? Because it’s a centralized location where everything comes in: instead of having 10 different apps with their own certificates and ports, you have one proxy, one port, and a handful of certificates all managed together, so you don’t have to figure out how to make all those apps play together nicely. Caddy is fine; you don’t need NGINX if you use Caddy. There’s also Traefik, which lands in between Caddy and NGINX in ease of use, and there’s HAProxy. They all do the same fundamental thing: traffic comes in as HTTPS, the proxy reads the Host header from the request and sends it to the right container as plain HTTP. It doesn’t have to work that way specifically, but that’s the most common use case in self-hosting.
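    With Caddy, that “route by Host header, terminate TLS” job is a few lines per app. The domains and upstream ports below are placeholders; Caddy obtains and renews the certificates automatically:

```
# Caddyfile -- example domains and ports, adjust to your apps
vault.example.com {
    reverse_proxy 127.0.0.1:8080
}

cloud.example.com {
    reverse_proxy 127.0.0.1:9000
}
```

    Each new app is just another site block pointing at its container’s local port.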

    As for your backups, if you used a Docker compose file, the volume data should be in the same directory. But the app is probably using some sort of database, so you might want to look into periodic data exports instead: databases don’t like being backed up live, since the file is always being updated, so you can’t get a consistent snapshot of it in one go.
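    For the common case where the app sits on SQLite (Vaultwarden’s default), the safe pattern is SQLite’s own .backup command rather than cp on the live file. A self-contained sketch with throwaway paths standing in for the real database:

```shell
# Create a stand-in for the app's live database (paths are illustrative).
sqlite3 /tmp/demo-live.sqlite3 "CREATE TABLE vault(id INTEGER, secret TEXT); INSERT INTO vault VALUES (1, 'hunter2');"

# .backup takes a consistent snapshot even while the app has the file open,
# unlike cp, which can catch the file mid-write.
sqlite3 /tmp/demo-live.sqlite3 ".backup /tmp/demo-backup.sqlite3"

# Verify the snapshot is a valid, complete database.
sqlite3 /tmp/demo-backup.sqlite3 "SELECT COUNT(*) FROM vault;"
```

    For Postgres or MySQL containers, the equivalent is running pg_dump or mysqldump on a schedule and backing up the dumps.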

    But yeah, try to think of it as an infrastructure investment that makes deploying more apps in the future a breeze. Want to add a NextCloud? Add another docker compose file and start it, Caddy picks it up automagically and boom, it’s live and good to go!

    Moving services to a new server is pretty easy as well. Copy over your configs, composes, and volumes if applicable, start them all, and they should come back in exactly the same state they were in on the other box. No services to install and configure, no repos to add, no distro to maintain: it’s all built into the container by someone else, so you don’t have to worry about any of it. Each update of the app brings with it the whole matching updated OS, with the right packages in the right versions.

    As a DevOps engineer, I love the whole thing because I can have a Kubernetes cluster running on a whole rack and say “here are the apps I want you to run”, and it figures itself out, automatically balances the load, and if a server goes down the containers respawn on another one and keep going as if nothing happened. We don’t have to manually log into any of those servers to install services to run an app. More upfront work, minimal work afterwards.