Still an understatement; it deserves that and more.
I don’t even like turn-based games. I don’t like most high fantasy. But holy moly, what a ride BG3 is.
I’m just gonna be pissed off if their mixed support of modding (due to WotC) kills the modding community. If Skyrim and Rimworld can have a whole universe of fan content, BG3 should too.
The problem is that splitting models up over a network, even over LAN, is not super efficient. The entire set of weights has to be run through for every token, i.e. every half word or so.
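Some napkin math on the network side (all numbers here are made-up assumptions for illustration, not measurements of Petals itself):

```python
# Back-of-envelope sketch of why pipeline-splitting a model over a network hurts.
# Every number below is an illustrative assumption.

hidden_size = 8192     # assumption: Llama-2-70B-class hidden dimension
act_bytes   = 2        # fp16 activations
n_hosts     = 8        # assumption: model split across 8 peers
hop_ms      = 15.0     # assumption: one-way latency between peers
bw_MB_per_s = 12.5     # assumption: ~100 Mbit/s effective throughput

# Per generated token, each pipeline boundary only ships one hidden-state vector...
payload_MB  = hidden_size * act_bytes / 1e6
transfer_ms = payload_MB / bw_MB_per_s * 1000

# ...so it's the per-hop latency that stacks up for every single token.
overhead_ms = n_hosts * (hop_ms + transfer_ms)
print(f"~{payload_MB * 1000:.0f} kB per hop, ~{overhead_ms:.0f} ms of network overhead per token")
```

With those guesses you’re capped at maybe 7-8 tokens/s from network overhead alone, before any host even touches its slice of the weights.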
And the other problem is that Petals just can’t keep up with the crazy dev pace of the LLM community. Honestly they should dump it and fork or contribute to llama.cpp or exllama, as TBH no one wants to split up Llama 2 (or even Llama 3) 70B and be a generation or two behind, running a base instruct model instead of a finetune.
Even the horde has very few hosts relative to users, even though hosting a small model on a 6GB GPU would get you lots of karma.
The diffusion community is very different, as the output is one image and even the largest open models are much smaller. LoRA usage is also standardized there, while it is not in LLM land.
Top 50% of the population still.
After all, they wrote a review.
Oh, and as for benchmarks, check the Hugging Face Open LLM Leaderboard. The new one.
But take it with a LARGE grain of salt. Some models game their scores in different ways.
There are more niche benchmarks floating around, such as RULER for long-context performance. Amazon ran a good array of models through it when testing their Mistral finetune: https://huggingface.co/aws-prototyping/MegaBeam-Mistral-7B-512k
Honestly I would get away from ollama. I don’t like it for a number of reasons, including:
Suboptimal quants
Suboptimal settings
Limited model selection (as opposed to just browsing Hugging Face)
Sometimes suboptimal performance compared to kobold.cpp, especially if you are quantizing cache, double especially if you are not on a Mac
Frankly a lot of attention squatting/riding off llama.cpp’s development without contributing a ton back.
Rumblings of a closed source project.
I could go on and on, including some behavior I just didn’t like from the devs, but I think I’ll stop, as it’s really not that bad.
A+ feature, ready to monetize. Thumbs up emoji
Honestly I am not sold on Petals; it leaves so many technical innovations behind and it’s just not really taking off like it needs to.
IMO a much cooler project is the AI Horde: A swarm of hosts, but no splitting. Already with a boatload of actual users.
And (no offense) but there are much better models to use than Llama 8B through ollama, and which ones completely depends on how much RAM your Mac has. They get better and better the more you have, all the way out to 192GB (where you can squeeze in the very amazing DeepSeek Coder V2).
RAM capacity and bandwidth.
Those are basically the only two things that matter for local LLM performance, since the entire model has to be read from memory for every token (aka every half word or so). And for the same money, a “higher end” M2 (like an M2 Max or Ultra) will just have more of both than an equivalently priced M3 or (probably) M4.
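If you want the napkin-math version of that rule of thumb (model size and bandwidth figures below are rough assumptions):

```python
# Every generated token streams the whole model out of memory, so memory
# bandwidth sets the ceiling on tokens/s. Figures are rough assumptions.

model_size_GB = 40   # assumption: a ~70B model at ~4-bit quantization

for name, bw_GB_per_s in [
    ("M2 Max (~400 GB/s)", 400),
    ("M2 Ultra (~800 GB/s)", 800),
    ("Dual-channel DDR5 desktop (~80 GB/s)", 80),
]:
    ceiling = bw_GB_per_s / model_size_GB   # ignores compute, cache, and overhead
    print(f"{name}: <= ~{ceiling:.0f} tokens/s")
```

Real-world numbers come in lower once compute and overhead kick in, but the ranking doesn’t change.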
but what am I realistically looking at being able to run locally that won’t go above like 60-75% usage so I can still eventually get a couple game servers, network storage, and Jellyfin working?
Honestly, not much. Llama 8B, but very slowly, or maybe DeepSeek V2 chat with prompt processing on the 270 via Vulkan but mostly running on CPU. And I guess just limit it to 6 threads? I’d host it with kobold.cpp’s Vulkan backend, or maybe the llama.cpp server if there will be multiple users.
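For what it’s worth, the llama.cpp server speaks an OpenAI-compatible API, so wiring up multiple users or other services is trivial. A minimal sketch, assuming the default host/port:

```python
# Minimal client for a local llama.cpp server's OpenAI-compatible endpoint.
# Host, port, and prompt are assumptions; point it at wherever the server runs.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "local",  # the server answers with whatever model it was launched with
        "messages": [{"role": "user", "content": "Summarize what Jellyfin does in one sentence."}],
        "max_tokens": 64,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```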
You can try them to see if they feel OK, but LLMs are just not something that likes old hardware. An RTX 3060 (or a Mac, or a 12GB+ AMD GPU) is considered the bare minimum in the community, with a 3090 or 7900 XTX as the standard.
OK, so the reaction here seems pretty positive.
But when I bring this up in other threads (or even on Reddit in the few subreddits I still use) the reaction is overwhelmingly negative. Like, I briefly mentioned fixing the video quality issues of an old show in another fandom with diffusion models, and I felt like I was going to get banned and doxxed.
I see it a lot here too, in any thread about OpenAI or whatever.
Agreed. This is how a lot of people use them, I sometimes use it as a pseudo therapist too.
Obviously there’s a risk of it going off the rails, but I think if you’re cognizant enough to research the LLM, pick it, and figure out how to run it and change sampling settings, it gives you an “awareness” of how it can go wrong and just how fallible it is.
What RAM capacity?
Honestly, if LLMs are your focus, you should just upgrade to a used M2 Max (or Ultra) when the M4 comes out, lol. Basically the only things that matter are RAM capacity and bandwidth, and the M2 is just going to be faster and better than a similarly priced M4.
Or better yet, upgrade to an AMD Strix Halo. That will buy you into Linux and (via AMD ROCm) much of the CUDA ecosystem, which is going to open a lot of doors and save headaches (while admittedly creating other headaches).
I don’t know how we’d patch that back in. Even collecting the training data is tricky.
You can just take encyclopedia articles and news articles, then train it back in. It’s easy! And it’s not expensive, maybe $100 even if it’s a really big model and you’re uncensoring a ton of topics.
People uncensor models all the time; it’s an active avenue of research in the LLM community. And in fact, there are many quite good Chinese models (like Qwen2) that have been “uncensored” by the community.
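If anyone wants to see what “train it back in” looks like in practice, here’s a minimal continued-pretraining sketch with Hugging Face Transformers. The model ID, data file, and hyperparameters are all placeholders, not a recipe for any specific model:

```python
# Sketch of "training knowledge back in": continued pretraining on plain-text
# articles covering the censored topics. Everything named here is a placeholder.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "some-org/some-censored-model"   # placeholder model id
tok = AutoTokenizer.from_pretrained(base)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token       # causal LMs often lack a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Placeholder dataset: one article per line, scraped from encyclopedias/news
ds = load_dataset("text", data_files={"train": "articles.txt"})["train"]
ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=2048),
            remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="retrained",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        learning_rate=2e-5,
        bf16=True,
    ),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```

In practice people usually do this as a LoRA/QLoRA run on rented GPUs, which is how it stays in that ~$100 range.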
ROCm
exllama, llama.cpp, vllm/aphrodite, (I think) sglang, they all support it now.
Practically that just means “open weights” lol. Easier to just do that than track all the sources.
Not that I disagree.
But one sticking point is allowing commercial use, as many companies do like noncommercial licenses so they can make money off them.
For inference? AMD is more finicky to setup but totally fine once you do. 7900 XTX prices can be very good.
I feel like 3090s have bottomed out, as they are just getting more rare now, and 4090s are so freaking expensive to start with I’m not sure how much they’ll come down.
Another feature you might not be aware of, that people use now, is quantized KV cache. With it, I can run a 19GB 35B model and still fit 131K context into vram, with basically no quality loss.
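To put rough numbers on it (the layer/head counts below are assumptions for a GQA 35B-class model, not any particular checkpoint):

```python
# Rough KV-cache sizing to show why quantizing it matters at long context.
# Hyperparameters are assumptions for a GQA 35B-class model.

n_layers, n_kv_heads, head_dim = 40, 8, 128
context = 131072

def kv_cache_gb(bytes_per_elem):
    # One K and one V entry per layer, per kv-head, per position
    return 2 * n_layers * n_kv_heads * head_dim * context * bytes_per_elem / 1e9

print(f"fp16 cache: {kv_cache_gb(2):.1f} GB")    # ~21 GB
print(f"q8 cache:   {kv_cache_gb(1):.1f} GB")    # ~11 GB
print(f"q4 cache:   {kv_cache_gb(0.5):.1f} GB")  # ~5 GB
```

Roughly a 4x shrink going from fp16 to a 4-bit cache; the exact numbers depend on the model’s layer and KV-head configuration.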
Oh and I forgot to mention, instead of a 5090, buy an AMD Strix Halo if it’s any good.
I cannot emphasize enough how awesome 128GB on a fast APU would be. That opens up (admittedly slow, but usable) inference of “huge” models like Mistral Large, and very fast inference of large MoE models like 8x22B.
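Napkin math on why MoE is the sweet spot for a box like that, using Mixtral 8x22B’s published ~141B total / ~39B active parameter counts and a guessed bandwidth figure for a Strix Halo-class APU:

```python
# Why big-RAM APUs favor MoE: every expert has to sit in memory, but each token
# only streams the active ones. Bandwidth is an assumption for a Strix Halo-class
# part; parameter counts are Mixtral 8x22B's published figures.

total_params_b  = 141e9   # all experts, must fit in RAM
active_params_b = 39e9    # read per token
bytes_per_param = 0.56    # ~4.5-bit quantization
bw_GB_per_s     = 250     # assumption: 256-bit LPDDR5X-class bandwidth

resident_GB   = total_params_b * bytes_per_param / 1e9   # ~79 GB -- fits in 128 GB
moe_ceiling   = bw_GB_per_s / (active_params_b * bytes_per_param / 1e9)
dense_ceiling = bw_GB_per_s / resident_GB
print(f"~{resident_GB:.0f} GB resident, ~{moe_ceiling:.0f} tok/s MoE ceiling "
      f"vs ~{dense_ceiling:.0f} tok/s for a dense model the same size")
```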
I hate turn based combat too, but it was super enjoyable in coop. And it’s quite good for being turn based.
It’s also real-time outside of combat, FYI.
For solo, I’d probably get the mod that automates your companions, and reduce the difficulty to your taste to compensate.