  • I broadly agree that “cloud” has an awful lot of marketing fluff to it, as with many previous buzzwords in information technology.

    However, I also think that there was legitimately a shift from a point in time where one got a physical box assigned to them to the point where VPSes started being a thing to something like AWS. A user really did become increasingly-decoupled from the actual physical hardware.

    With a physical server, I care about the actual physical aspects of the machine.

    With a VPS, I still have “a VPS”. It’s virtualized, yeah, but I don’t normally deal with it dynamically.

    With something like AWS, I’m thinking more in terms of spinning up and spinning down instances when needed.

    I think that it’s reasonable to want to describe that increasing abstraction in some way.

    Is it a fundamental game-changer? In general, I don’t think so. But was there a shift? Yeah, I think so.

    And there might legitimately be some companies for which that is a game-changer, where the cost-efficiencies of being able to scale up dynamically to handle peak load on a service are so important that it permits their service to be viable at all.
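
    To make “spinning up and spinning down instances” concrete, here’s a minimal sketch of that kind of dynamic scaling using boto3; the AMI ID, instance type, and the scaling trigger are all hypothetical placeholders, not a real setup:

    ```python
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    def scale_up(count):
        # Launch instances from a prebuilt image when load spikes.
        resp = ec2.run_instances(
            ImageId="ami-0123456789abcdef0",  # hypothetical AMI
            InstanceType="t3.micro",
            MinCount=count,
            MaxCount=count,
        )
        return [i["InstanceId"] for i in resp["Instances"]]

    def scale_down(instance_ids):
        # Tear the same instances back down once peak load passes.
        ec2.terminate_instances(InstanceIds=instance_ids)

    ids = scale_up(3)  # e.g. triggered by a queue-depth or CPU alarm
    # ... serve the traffic spike ...
    scale_down(ids)
    ```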


  • I mean, scrolling down that list, those all make sense.

    I’m not arguing that Google should have kept them going.

    But I think that it might be fair to say that Google did start a number of projects and then cancel them – even if sensibly – and that, for people who came to rely on them, that’s frustrating.

    In some cases, like with Google Labs stuff, it was very explicit that anything there was experimental and not something that Google was committing to. If one relied on it, well, that’s kind of their fault.


  • Re: “Why is DNS still hard to learn?” (Selfhosted@lemmy.world)

    Yeah, I don’t think I really agree with the author as to the difficulty of dig. Maybe it could be better, but as protocols and tools go, I’d say that dig and DNS are an example of a tool doing a pretty good job of covering its protocol. Maybe not DNSSEC (dunno how dig does there), and knowing to use +norecurse is maybe not immediately obvious, but I can list a lot of network protocols for which I wish there were an equivalent of dig.
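
    For instance, the +norecurse behavior (clearing the Recursion Desired bit so the server answers only from what it’s authoritative for or has cached) is easy to reproduce by hand. A minimal sketch using the third-party dnspython package, querying a root server, which should return a referral rather than an answer:

    ```python
    import dns.flags
    import dns.message
    import dns.query  # pip install dnspython

    # Build a plain A query and clear the Recursion Desired bit,
    # which is what dig's +norecurse flag does.
    q = dns.message.make_query("example.com", "A")
    q.flags &= ~dns.flags.RD

    # Ask a.root-servers.net; roots don't recurse, so we expect a
    # referral toward the .com servers rather than a final answer.
    resp = dns.query.udp(q, "198.41.0.4", timeout=5)
    print(resp)
    ```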

    However, a lot of what the author seems to be complaining about is not really stuff at the network level, but stuff happening at the host level. And it is true that there are a lot of parts in there if one considers name resolution as a whole, not just DNS, and there’s no one tool that can look at the whole process.

    If I’m doing a resolution with Firefox, I’ve got a browser cache for name resolutions, independent of the OS. I may be doing DNS over HTTPS, and that may always happen or be a fallback. I may have a caching nameserver at my OS level. There’s the /etc/hosts file. There’s configuration in /etc/resolv.conf. There’s NIS/yp. Windows has its own name resolution stuff hooked into the Windows domains machinery, with several mechanisms for doing resolution: via broadcasts when there’s no domain controller, or via a DC when one is present. Apple has Bonjour, and more-generally there’s zeroconf. The order of all this isn’t immediately clear to someone, and there’s no tool that can monitor the whole process end to end – these are indeed independent systems that kind of grew organically.
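
    You can see the host-level layering from a program’s point of view, too. A tiny sketch: Python’s socket.getaddrinfo() walks the OS’s whole resolution stack (/etc/hosts, nsswitch ordering, any local caching daemon), which is exactly why its answer can differ from a raw DNS query:

    ```python
    import socket

    # getaddrinfo() goes through the OS resolution stack, not just DNS:
    # "localhost" typically comes straight out of /etc/hosts and never
    # touches the network at all.
    addrs = {sockaddr[0] for *_, sockaddr in socket.getaddrinfo("localhost", None)}
    print(addrs)  # e.g. {'127.0.0.1', '::1'}
    ```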

    Maybe it’d be nice to have an API to let external software initiate name resolutions via the browser and get information about what’s going on, and then have a single “name resolution diagnostic” tool that could span multiple of these name resolution systems, describe what’s happening, and help highlight problems. I can say that gethostbyname() could also use a diagnostic call to extract more information about what a resolution attempt did and why it failed; libc doesn’t expose a lot of useful diagnostic information to the application, even though libc does know what it’s doing during a resolution attempt.


  • make dig’s output a little more friendly. If I were better at C programming, I might try to write a dig pull request that adds a +human flag to dig that formats the long form output in a more structured and readable way, maybe something like this:

    Okay, fair enough.

    One quick note on dig: newer versions of dig do have a +yaml output format which feels a little clearer to me, though it’s too verbose for my taste (a pretty simple DNS response doesn’t fit on my screen).

    Man, that is like the opposite approach to what you want. If the YAML output is easier to read, that’s incidental; it’s intended to be machine-readable, a stable output format.
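
    That said, machine-readability is handy in its own right: you can load the output with any YAML parser instead of screen-scraping dig’s classic format. A quick sketch (assumes dig is on PATH and PyYAML is installed; I wouldn’t lean on the exact schema without checking your dig version):

    ```python
    import subprocess

    import yaml  # pip install pyyaml

    # dig +yaml prints the whole exchange as a YAML document.
    out = subprocess.run(
        ["dig", "+yaml", "example.com", "A"],
        capture_output=True, text=True, check=True,
    ).stdout

    messages = yaml.safe_load(out)  # a list of message objects
    for m in messages:
        print(m.get("type"), list(m.get("message", {}).keys()))
    ```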


    Duplicity uses the rsync algorithm internally for efficient transport. I have used that. I’m presently using rdiff-backup, driven by backupninja out of a cron job, to back up to a local hard drive; it does incremental backups (which would address @Nr97JcmjjiXZud’s concern). That also uses the rsync algorithm. There’s also rsbackup, which likewise builds on rsync, though I have not used it.

    Two caveats I’d note that may or may not be a concern for one’s specific use case (which apply to rdiff-backup, and I believe both also apply to the other two rsync-based solutions above, though it’s been a while since I’ve looked at them, so don’t quote me on that):

    • One property that a backup system can have is making backups immutable – so that only the backup system has the ability to purge old backups. That could be useful if, for example, the system with the data one is preserving is broken into – you may not want someone who compromises the backed-up system to be able to wipe the old backups. Rdiff-backup expects to be able to connect to the backup system and write to it. Unless the backup server is itself doing some additional layer of backups, that may be a concern for you.

    • Rdiff-backup doesn’t do dedup of data. That is, if you have a 1GB file named “A” and one byte in that file changes, it will only send over a small delta and will efficiently store that delta. But if you have another 1GB file named “B” that is identical to “A” in content, rdiff-backup won’t detect that and only use 1GB of storage – it will require 2GB and store the identical files separately. That’s not a huge concern for me, since I’m backing up a one-user system and I don’t have a lot of duplicate data stored, but for someone else’s use case, that may be important. Possibly more-importantly to OP, since this is offsite and bandwidth may be a constraining factor, the 1GB file will be retransferred. I think that this also applies to renames, though I could be wrong there (i.e. you’d get that for free with dedup; I don’t think that it looks at inode numbers or something to specially try to detect renames).
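
    To make the dedup caveat concrete, here’s a minimal sketch of content-level duplicate detection – hash files by content, and identical files land in the same bucket, which is what a dedup-aware store exploits and rdiff-backup doesn’t (the /path/to/data root is a hypothetical placeholder):

    ```python
    import hashlib
    from collections import defaultdict
    from pathlib import Path

    def file_digest(path, chunk_size=1 << 20):
        # Hash in chunks so a 1GB file doesn't need 1GB of RAM.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while block := f.read(chunk_size):
                h.update(block)
        return h.hexdigest()

    def find_duplicates(root):
        buckets = defaultdict(list)
        for p in Path(root).rglob("*"):
            if p.is_file():
                buckets[file_digest(p)].append(p)
        return {d: ps for d, ps in buckets.items() if len(ps) > 1}

    # The identical "A" and "B" from the example above would land in
    # the same bucket; a dedup-aware backup stores that content once.
    for digest, paths in find_duplicates("/path/to/data").items():
        print(digest[:12], [str(p) for p in paths])
    ```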


  • For example, I might self host a server just for my account but I read all my content from lemmy.world. Am I not using their bandwidth and their resources anyway?

    Well, it’d use your CPU to generate the webpages that you view. But, yeah, it’d need to transfer anything that you subscribe to over to your system via federation (though the federation stuff may be “lower priority” – I don’t know how lemmy and kbin currently prioritize transferring data to federated servers versus serving users who are directly browsing them, but at least in theory, the directly-browsing user has to get higher priority for the site to be usable).

    But what would be more ideal – and people are going to have to find out what the scaling issues are with hard measurements, but this is probably a pretty reasonable guess – is to have a number of instances, with multiple users on each. Then, once lemmy.world transfers a given post or comment once via federation, that other instance stores it and can serve up the webpages and content to all of the users registered on that other instance.

    If you spread out the communities, too, then it also spreads out the bandwidth required to propagate each post.

    As it stands, at least on kbin (and I assume lemmy), images don’t transfer via federation, though, so they’re an exception – if you’re attaching a bunch of images to your comments, only one instance is serving them. My guess is that that may wind up producing scaling problems too, and I am not at all sure that all lemmy or kbin servers are going to be able to do image-hosting, at least in this fashion.
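
    The arithmetic behind that guess, with made-up numbers just to show the shape of it – federation pushes each post once per remote instance, not once per reader, so consolidating users onto shared instances divides the origin’s outbound transfer:

    ```python
    # All numbers are illustrative, not measurements.
    post_size_kb = 50
    readers = 10_000

    # Everyone self-hosts: one federated copy per reader.
    solo_kb = post_size_kb * readers

    # 100 users per instance: one federated copy per instance.
    shared_kb = post_size_kb * (readers // 100)

    print(f"{solo_kb / 1024:.0f} MB vs {shared_kb / 1024:.1f} MB outbound")
    # -> 488 MB vs 4.9 MB outbound
    ```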


    I can’t speak as to why other people use their alternatives, but if you use mpv with yt-dlp like the guy above (which I do) – and it isn’t really a full replacement for YouTube, just for part of it – then you can use stuff like deblocking, interpolation, and deinterlacing filters, hardware decoding, etc. It lets me use my own keybindings to move around and such, and seeking happens instantly, without rebuffering time.

    It also means that your bandwidth isn’t a constraint on the resolution you use, since you aren’t streaming the content as you watch, though it also means that you need to wait for the thing to download before you watch it.

    There, one is talking about the difference between streaming and watching a local video, and the fact that mpv is a considerably more-powerful and better-performing video player than YouTube’s client.

    I generally do it when I run into a long video or a series of videos that I know I’m probably going to want to watch.

    EDIT: It also looks, from this test video, like YouTube’s web client doesn’t have functioning vsync on my system, so I get tearing, whereas mpv does not have that issue. That being said, I’m using a new video card, and it’s possible that there’s a way to eliminate that in-browser, and it’s possible that someone else’s system may not run into that – I’m not using a compositor, which is somewhat unusual these days.
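
    For anyone who wants to try that workflow, a minimal sketch using yt-dlp’s Python API and then handing the finished file to mpv; the URL is a placeholder, and it assumes yt-dlp and mpv are both installed:

    ```python
    import subprocess

    from yt_dlp import YoutubeDL  # pip install yt-dlp

    URL = "https://www.youtube.com/watch?v=..."  # placeholder

    # "best" picks a single pre-muxed file, so the filename we compute
    # below matches what lands on disk (merged formats can end up with
    # a different extension).
    opts = {"format": "best", "outtmpl": "%(title)s.%(ext)s"}

    with YoutubeDL(opts) as ydl:
        info = ydl.extract_info(URL, download=True)
        filename = ydl.prepare_filename(info)

    # Play the local file: mpv's own filters, keybindings, and instant
    # seeking, with no rebuffering.
    subprocess.run(["mpv", filename], check=True)
    ```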


  • We can cool OURSELVES by letting a regular fan blow on us = WE are the moist filter, and the evaporation of our sweat cools us.

    https://en.wikipedia.org/wiki/Evaporative_cooler

    You do indeed do the same thing.

    However, I have a small evaporative cooler which can evaporate 5 gallons of water a day. You aren’t gonna do that yourself, and it’d drench you in sweat.
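
    Quick back-of-envelope on what five gallons a day buys you, using the latent heat of vaporization of water (roughly 2.26 MJ/kg near room temperature):

    ```python
    # Cooling power from evaporating 5 US gallons of water per day.
    GALLONS_PER_DAY = 5
    KG_PER_GALLON = 3.785         # 1 US gallon of water ~ 3.785 kg
    LATENT_HEAT_MJ_PER_KG = 2.26  # near room temperature

    energy_mj = GALLONS_PER_DAY * KG_PER_GALLON * LATENT_HEAT_MJ_PER_KG
    watts = energy_mj * 1e6 / 86_400  # spread evenly over 24 hours

    print(f"{energy_mj:.1f} MJ/day ~ {watts:.0f} W of average cooling")
    # -> 42.8 MJ/day ~ 495 W (roughly 1,700 BTU/h)
    ```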

    They’re a couple of times more energy-efficient than air conditioners, though they’re more maintenance-heavy and require a dry climate. They also (normally) require outside air coming in, which is nice in that it keeps carbon dioxide levels down, but it means pollen and whatever else comes in too, unless you filter it.

    sauna

    If you’re using an evaporative cooler correctly, you have to keep (dry) outside air coming in so that it doesn’t just act like a giant humidifier.

    FWIW, you can actually use what’s called an “indirect” evaporative cooler. That has outside air come in, go through an evaporative cooler to cool it, then sends it through a heat exchanger that dumps heat from the inside air into that cooled stream, then sends the moist air outside without increasing inside humidity.

    You can even extend that by using the cooled, humid air as the input to the “outside” side of an air conditioner’s heat exchanger – the one that dumps heat to the outdoors. That is basically an indirect evaporative cooler plus a heat pump, a “hybrid” air conditioner, which will boost the air conditioner’s efficiency.

    Unfortunately, I don’t see much by way of small indirect evaporative coolers or small “hybrid” air conditioners on the market, though it’s not technically complicated to build one. It seems to mostly be done in large commercial installations.


    Just looking at the start and end figures there, the number did something like double in inflation-adjusted terms, but in the US, new-build house sizes also roughly doubled over the same period (that figure not being specific to rentals – dunno if one can get a rental-specific number), and I’d expect costs to be something like linear in the size of the house. So my off-the-cuff take is that it’s probably about reasonable.
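
    The arithmetic, with made-up round numbers purely to illustrate the reasoning – if real price and size both doubled, real price per square foot is flat:

    ```python
    # Illustrative numbers only, not real data.
    price_then, sqft_then = 100_000, 1_000
    price_now, sqft_now = 200_000, 2_000

    print(price_then / sqft_then)  # 100.0 per sq ft then
    print(price_now / sqft_now)    # 100.0 per sq ft now
    ```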

    That being said, Jimmy McMillan was specifically talking about rent in New York City when he did the “Rent is Too Damn High” thing, not rent across the US, and that is going to have a variety of other factors going on, including restrictions on construction, rent control, other regulations that specifically impact New York City, and, I would guess, transportation accessibility from outside New York City, which lets outside housing compete for people who work there. It’s very possible that New York City has local factors dominating and behaves differently from the country as a whole.


  • I’m not actually sure that the war is as fantastic as it might seem for defense contractors, because a lot of the hardware is older stuff that I suspect would have fallen out of inventories at some point not that far down the line. Yeah, some will be replaced by new hardware and wouldn’t otherwise have been, but I bet that some won’t.

    GMLRS rockets are being produced new. That’s going to be good news for Lockheed Martin.

    But the HAWKs that found a new life as an affordable way to shoot down the Iranian Shahed-136s are quite old, and had already been pulled out of service. I doubt that their consumption creates a new hole that will be filled by something else.

    Javelins were consumed shooting mostly the Soviet-era tanks that they were originally designed to shoot. I don’t know if there will ever be such a large mass of tanks assembled again. Russia may rebuild its tank force to some degree after the war, but I doubt to the same level. If there isn’t a large stockpile of tanks, I suppose one doesn’t need as many anti-tank weapons.


    Well, you’ve posted two things, both memes. Given that one was talking about people not posting and the other about niche communities not getting traffic – and I assume that memes@lemmy.ml isn’t niche – I assume that you haven’t posted to whichever community you’d like to see more traffic in. You could do so. Each new post also helps make the community more visible, at least on kbin, since it can show up in the random threads section on kbin instances.