• 0 Posts
  • 101 Comments
Joined 1 year ago
Cake day: July 4th, 2023



  • It’s not so much their death, but that they won’t create more art*. Just a short while ago I was watching the Invincible special, which had Lance Reddick in it. I don’t mourn him personally, because I never knew him, but he played many roles I loved, and now that he’s dead, that will never happen again, and it makes my life just a little bit emptier.

    Edit: * actually, not just art. Politicians like Nelson Mandela could improve the world a little just by speaking. Genius scientists keep doing science. All of that stops, and it’s sad.





  • I think ARM is still more efficient. It’s specifically the Pis (or their chips) that aren’t that efficient, by my understanding. AFAIK Intel also isn’t as close in power draw under load.

    The S740 is actually passively cooled, and it sits in my cupboard together with a Pi4 backup server, a router, and a modem ;) The S740 became very popular with German selfhosters (used prices actually went from 40€ to 80€ for just the base model because of high demand :D), so there’s a page with power measurements; it’s in German, but pretty self-explanatory for the most part.



  • Another option: Used ThinClients.

    I run Proxmox with Home Assistant, Jellyfin, and some other services on a refurbished Futro S740. It has a J4105 CPU and 8GB RAM (16GB isn’t officially supported, but is reported to work), and I use two M.2 SSDs (which required an adapter from AliExpress) for storage.
    It could also support two proper SATA drives with adapters (power and connection issues might start if you use three or more HDDs), but that always depends on the ThinClient in question.

    A good bit more powerful than a Pi4, but it can be found cheaper, with roughly the same idle power draw.

    Source: CPU-Monkey



  • From what I’ve heard (I’ve only tried Llama1 64B myself, at a snail’s pace on CPU, and it was much worse than GPT-3.5), Llama2 70B is supposedly able to keep up with GPT-3.5. But you need a high-end GPU for it to be even remotely usable.

    Honestly, I don’t need real AI for my voice control; user-definable commands are completely sufficient for me. Unfortunately, what’s in HA isn’t yet on the same level Rhasspy was at before its active development stopped.
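    For context, a user-defined voice command in Home Assistant can look roughly like this — a hedged sketch, with made-up intent and entity names; the exact schema may differ between HA versions, so check the current docs:

    ```yaml
    # configuration.yaml — minimal sketch of a custom voice command
    # (TurnOnDeskLamp and light.desk_lamp are hypothetical names)
    conversation:
      intents:
        TurnOnDeskLamp:
          - "turn on the desk lamp"
          - "desk lamp on"

    intent_script:
      TurnOnDeskLamp:
        action:
          service: light.turn_on
          target:
            entity_id: light.desk_lamp
        speech:
          text: "Desk lamp is on."
    ```

    The idea being: no LLM needed, just a fixed sentence mapped to a fixed action.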




  • Personally, the load put on the major servers by every additional instance that subscribes to everything is why I think people should back off from creating more than the ~1500 instances the Lemmy network already has.

    Unless you use one of those anti-social seeding apps, you probably won’t subscribe to everything. My instance is subscribed to exactly the communities I want.

    And Lemmy isn’t mature/resilient enough for me to want to invest my time in some random instance.