
  • So strictly speaking I don’t know yet if we are struggling for damage output, because that fight has more body checks (5+) than DPS checks. The times I’ve gotten to the end, the previous body check has been so scuffed that people have 1 or 2 levels of exhaustion from deaths, which really puts a damper on the damage output. Finding groups at (insert whatever random time I pick up party finder and decide my evening shall be raiding) was largely my challenge. This is also my first tier running raids on-content.

    Week 1 I only put in one day, so it counts and doesn’t count at the same time for me. But I did get past baits.

    What I ran into mostly in weeks 3-6 was picking the wrong time (Monday night on Crystal) to try to find groups. There was a notable vacuum of parties advertising at my prog point - it felt like (mild exaggeration) everyone was either fresh, at enrage, or a reclear.

    So to make sure I still got practice in, I’d take an instance that was a fresh prog and help them - but we’d time out / disband right as we got to the mechanic I was on. As a result, I got really good at phase 1 with minimal “forward” progress.

    Pulling stuff earlier in the week, even on Crystal, there was a bit more variety if I looked closer to the beginning of reset. Over the last two weeks (weeks 7-8 if I counted right) I got clones > mouser 2, and these past three days I saw raining cats and enrage.

    So, other notes:

    • for one, the more mechs I can get through, the more PFs are available. This problem will, by its own nature, solve itself.
    • I know I could visit Aether or Primal, but as small as Crystal’s raiding scene is, if we all collectively decide it’s dead then it WILL die, and I’m determined to do my part to prevent that from happening.
    • my FC has also been probing their way into on-content EX and savage, so with that I’ve also gotten more confident at leading/teaching, and starting next tier I plan to be part of the solution and actively put up a PF if I don’t see one rather than passively participating. I’m not going to sit and bemoan a lower-traffic datacenter on social media without trying to fix it myself. There is a limit to how well that will work, because if I’m on too late at night, too many PFs may “vampire” each other for members when there aren’t quite enough people online. That’s something I have a partial feel for now, and I’ll get better with practice.
    • “just join a static”: something I have considered, but I need to make sure I don’t overpromise my energy. Sometimes I’ll just run one instance; other times I’ll be at it for 6h, long enough that one of the other FC officers logs out, does something else, logs back in later, and sees where I am to say “wait you’re STILL raiding??” Statics, by nature, have a fixed schedule, and it would be quite rude to them to just not have energy available at their time. Scheduling 8 people is and always will be the hardest part of any raid.

    EDIT: I counted horribly wrong; I’ll fix it later.




  • Endwalker issues for me were mostly the insane login queues.

    Dawntrail went much smoother, and you can actually see places where they beefed up capacity, even within a server. Field areas and large cities had more instances than they did in Endwalker - up to 6, as opposed to the previous 3-4.

    Aether I think kind of caught fire, but on Crystal it was pretty chill except for that one time the entire datacenter got dumped back to the login screen. I also heard stories of people getting momentarily trapped on Dynamis if they visited during peak, since the server also validates load when you’re coming home.





  • A-to-B made more sense in a world where devices couldn’t take on either role via negotiation. When I got my Android phone, its data transfer method was to plug my iPhone’s charge port into my Android’s charge port, and the Android then initiated the connection as the host device.

    The true crime is not that the cable is bidirectional; the true crime is that there is little to no proper distinction and error checking between USB, Thunderbolt, and DisplayPort modes, and they are simply carried on the same connector. I have no issues with the port supporting tunneled connections - that is in fact how docking stations work - just with the minimal labeling we get on modern devices.

    I’d be fine with a type-A to type-A cable if both devices had a reasonable chance at operating as either the initiator or the target - but that type of behavior starts with USB-OTG and continues in type-C.
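
    If you’re curious what roles a given port actually negotiated, Linux exposes this through the Type-C class in sysfs. A minimal sketch, assuming a Linux machine whose Type-C controller driver populates /sys/class/typec (not every kernel/hardware combo does):

```python
#!/usr/bin/env python3
"""Show the negotiated roles of any USB Type-C ports on this machine.

Assumes a Linux system whose Type-C controller driver populates
/sys/class/typec; without that support the directory simply won't exist.
"""
from pathlib import Path

TYPEC = Path("/sys/class/typec")


def read_attr(port: Path, name: str) -> str:
    """Return an attribute such as 'host [device]'; the bracketed value is active."""
    try:
        return (port / name).read_text().strip()
    except OSError:
        return "n/a"


if not TYPEC.exists():
    print("no /sys/class/typec here - kernel or hardware lacks Type-C class support")
else:
    for port in sorted(TYPEC.glob("port*")):
        if "-" in port.name:  # skip partner/cable entries like port0-partner
            continue
        print(f"{port.name}: data_role={read_attr(port, 'data_role')}, "
              f"power_role={read_attr(port, 'power_role')}")
```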


  • Others have some good information here - all I’d like to add to the root is that Windows and macOS have a built-in DNS cache, and it’s pretty straightforward to add a DNS cache to systemd distros (if it’s not already installed or in use) using systemd-resolved, or dnsmasq if you really dislike systemd. Some distros enable this at install time.

    Systems that utilize a DNS cache will keep copies of DNS query results for a period of time, making the application-level name lookup speed essentially 0ms for a cached result. Cold results obviously incur the latency of the DNS server itself.
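
    A quick way to see the effect from an application’s point of view is to time the same lookup twice; a rough sketch (example.com is just a stand-in, and the cold number depends entirely on your resolver chain):

```python
#!/usr/bin/env python3
"""Rough demo of resolver caching: time the same lookup twice.

The first call usually pays the round trip to a DNS server; the second
is typically answered by whatever cache sits below getaddrinfo
(systemd-resolved, dnsmasq, the OS cache) if one is actually running.
"""
import socket
import time

HOST = "example.com"  # any name you haven't looked up recently


def timed_lookup_ms(host: str) -> float:
    """Resolve host once and return how long it took, in milliseconds."""
    start = time.perf_counter()
    socket.getaddrinfo(host, 443)
    return (time.perf_counter() - start) * 1000


print(f"cold lookup: {timed_lookup_ms(HOST):6.1f} ms")
print(f"warm lookup: {timed_lookup_ms(HOST):6.1f} ms")
```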




  • TLDR: probably a lot of people keep using the thing they already know, as long as it works well enough not to be a bother.

    Many, many years ago when I learned, I think the only ones I found were Apache and IIS. I had a Mac at the time, which came preinstalled with Apache2, so I learned Apache2 and got okay at it. While by release dates Nginx and HAProxy most definitely existed, I don’t think I came across either in my research. I don’t have any notes from the time because I didn’t take any - I was in high school.

    When I started doing Linux things, I kept using Apache for a while because I knew it. Then I found Nginx and learned it in a snap, because its config is more natural-language and hierarchical than Apache’s XML-ish monstrosity. For the next decade I kept using Nginx whenever I needed a web server fast, because I knew it would work with minimal tinkering.

    Now, as of a few years ago, I knew that HAProxy, Caddy, and Traefik all existed. I even tried out Caddy on my homelab reverse proxy server (which has about a dozen applications routed through it), and the first few sites were easy - just let the automatic Let’s Encrypt do its job - but once I got to the sites that needed manual TLS (I have both an internal CA and Cloudflare’s origin HTTPS cert) and other special config, Caddy started becoming as cumbersome as my Nginx conf.d directory. At the time, I also didn’t have a way to get software updates easily on my then-CentOS 7 server, so Caddy was okay-enough, but it was back to Nginx with me because it was comparatively easier to manage.

    HAProxy is something I’ve added to my repertoire more recently. It took me quite a while and lots of trial and error to figure out the config syntax, which is quite different from anything I’d used before (except maybe kinda like Squid, which I had learned not a year prior…), but once it clicked, it clicked. Now I have an internal high-availability (+keepalived) load balancer that can handle any number of backend servers, do wildcard TLS termination, and validate backend TLS certs. I even got LDAP and LDAPS load balancing to AD working on it for services like Gitea that don’t behave well when there’s more than one LDAPS backend server.
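
    On the “validate backend TLS certs” point - HAProxy handles that for me now, but while setting it up it was handy to check each backend by hand first. A rough Python sketch of that sanity check (the hostnames and CA bundle path are made-up placeholders, substitute your own):

```python
#!/usr/bin/env python3
"""Hand-check that each LDAPS backend presents a certificate our CA trusts.

Debugging aid only - not how the load balancer itself works. It opens a
TLS connection to each backend and lets Python's ssl module perform the
same kind of verification the proxy is configured to do. Hostnames and
the CA bundle path below are made-up placeholders.
"""
import socket
import ssl

CA_BUNDLE = "/etc/ssl/internal-ca.pem"    # hypothetical internal CA file
BACKENDS = [
    ("ldap1.example.internal", 636),      # hypothetical backend hosts
    ("ldap2.example.internal", 636),
]

ctx = ssl.create_default_context(cafile=CA_BUNDLE)

for host, port in BACKENDS:
    try:
        with socket.create_connection((host, port), timeout=5) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                cert = tls.getpeercert()
                print(f"{host}:{port} OK, cert expires {cert['notAfter']}")
    except (OSError, ssl.SSLError) as exc:
        print(f"{host}:{port} FAILED: {exc}")
```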

    So, at some point I’ll get around to converting that everything reverse proxy to HAProxy. But I’ll probably need to deploy another VM or two because the existing one also has a static web server and I’ve been meaning to break up that server’s roles anyways (long ago, it was my everything server before I used VMs).







  • On/off:
    I have 5 main chassis excluding desktops. The prod cluster is all flash, the standalone host has one flash array and one spinning-rust array, and the NAS is all spinning rust. I have a big enough server disk array that spinning it up is an actual power sink, and the Dell firmware takes a looong time to get all the drives up on reboot.

    TLDR: Not off as a matter of day/night, off as a matter of summer/winter for heat.

    Winter: all on

    Summer:

    • prod cluster on (3x vSAN - it gets really angry if it doesn’t have cluster consistency)
    • NAS on
    • standalone server off, except to test ESXi patches or when a vCenter reboot causes it to be WoL’d (vpxd sends a wake to all standby hosts on program init - see the sketch after this list for what that wake looks like on the wire)
    • main desktop on
    • alt desktops off
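
    The Wake-on-LAN side of that is simple enough to reproduce by hand; a minimal sketch of a standard magic packet (the MAC address is a made-up example):

```python
#!/usr/bin/env python3
"""Send a Wake-on-LAN magic packet by hand.

This is all a WoL wake is at the wire level: six 0xFF bytes followed by
the target NIC's MAC address repeated 16 times, sent to the broadcast
address. The MAC below is a made-up example.
"""
import socket

MAC = "aa:bb:cc:dd:ee:ff"  # hypothetical standby host's NIC

payload = b"\xff" * 6 + bytes.fromhex(MAC.replace(":", "")) * 16

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(payload, ("255.255.255.255", 9))  # port 9 (discard) by convention

print(f"magic packet sent for {MAC}")
```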

    VMs are a different story. Normally I just turn them on and off as needed regardless of season, though I will typically turn off more of my “optional” VMs to reduce summer workload, in addition to powering off the one server. The rough goal is to reduce thermal load so as not to kill my AC as quickly, since it’s probably running above its duty cycle to keep up. Physically speaking, these are virtualized workloads, so this on/off churn doesn’t cycle the disk arrays.

    Because all four of my main servers run the same hypervisor (for now, VMware ESXi), VMs can move within the prod cluster to balance load autonomously, and I can move VMs on or off the standalone host by drag-and-drop. When the standalone host is off, I usually turn its VMs off and move them onto the prod cluster so I don’t get daily “backup failure” emails from the NAS.

    UPS: Power in my area is pretty stable, but has a few phase hiccups in the summer. (I know it’s a phase hiccup because I mapped out which wall plugs are on which phase, confirmed with a multimeter that I’m on two legs of a 3-phase grid hand-off, and watched which devices blip off during an event.) For something like a light that will just flicker, or a laptop/phone charger that has high capacitance, such blips are a non-issue. Smaller ones can even be eaten by the massive power supplies my Dell servers have. But my Cisco switches are a bit sensitive to it and tend to sing me the song of their people when the power flickers - aka fan speed 100% boot-up whining. Larger blips will also boop the Dell servers, but I don’t usually see breaks of more than 3-5 minutes.
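
    If anyone wants to do the same multimeter check, the giveaway is the leg-to-leg voltage; a quick sketch of the arithmetic (assuming a nominal 120V leg-to-neutral North American service, adjust for your region):

```python
#!/usr/bin/env python3
"""Why a multimeter can tell split-phase from two legs of three-phase.

Assumes a nominal 120 V leg-to-neutral service (North America). The
leg-to-leg reading differs because of the angle between the legs:
180 degrees apart for split-phase, 120 degrees apart on a 3-phase wye.
"""
import math

LEG_TO_NEUTRAL = 120.0  # volts, nominal

split_phase = 2 * LEG_TO_NEUTRAL                 # legs 180 deg apart -> ~240 V
three_phase_wye = LEG_TO_NEUTRAL * math.sqrt(3)  # legs 120 deg apart -> ~208 V

print(f"split-phase leg-to-leg:  {split_phase:.0f} V")
print(f"3-phase wye leg-to-leg:  {three_phase_wye:.0f} V")
```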

    Current UPS setup is:

    • rack split into A/B power feeds, with servers plugged into both and every other one flipped A or B as its primary
    • single plug devices (like NAS) plugged into just one
    • “common purpose” devices on the same power feed (ex: my primary firewall, primary switches, and my NAS for backups are on feed A, but my backup disks and my secondary switches are on feed B)
    • one 1500VA UPS per feed (two total) - aggregate usage is 600-800W
    • one 1500VA desktop UPS handling my main tower, one monitor, and my PS5 (which gets unreasonably upset about losing power, so it gets the battery backup)

    With all that setup, the gauges on the front of the 3 UPSes all show roughly 15-20m of runtime in summer and 20-25m in winter. I know one may be lower than displayed because its battery is older, but even if it fails and dumps its redundant load onto the newer main UPS, I’ll still have 7-10m of battery in the worst case, and that’s all I really need to weather most power-related issues at my location.
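
    Those gauge readings line up with a back-of-the-napkin estimate. A rough sketch - the usable battery energy and inverter efficiency are assumptions for a generic, somewhat aged 1500VA unit, not measured values:

```python
#!/usr/bin/env python3
"""Back-of-the-napkin UPS runtime estimate.

Every constant here is an assumption for a generic, somewhat aged
1500 VA line-interactive unit - not a spec for any particular model.
The point is only that the math lands in the same ballpark as the
front-panel gauges.
"""


def runtime_minutes(usable_wh: float, load_w: float, inverter_eff: float = 0.85) -> float:
    """Estimated runtime from usable battery energy, derated for the inverter."""
    return usable_wh * inverter_eff / load_w * 60


USABLE_WH = 150.0            # assumed usable energy of an aging 24 V pack
PER_FEED_LOADS = (300, 400)  # half of the 600-800 W aggregate, per feed

for load in PER_FEED_LOADS:
    print(f"{load:>3} W per feed -> ~{runtime_minutes(USABLE_WH, load):.0f} min")
```

    That works out to roughly 19-26 minutes per feed, which is the same ballpark the front panels report and comfortably above the 7-10m worst case above.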