If there were changes in 2020 to 2024 inclusive, then yes, I’d write it as 2020-2024. But if not inclusive, then I’d write 2021-2023.
I’m not any type of lawyer, especially not a copyright lawyer, but I’ve been informed that the point of having the copyright date is to mark when the work (book, website, photo, etc.) was produced and when it was last edited. Both aspects are important, since the original date is when the copyright clock starts counting, and having it further in the past is useful to prove infringement that occurs later.
Likewise, each update to the work imbues a new copyright on just the updated parts, which starts its own clock, and is again useful to prosecute infringement.
As a result, updating the copyright date is not an exercise of writing today’s year. But rather, it’s adding years to a list, compressing as needed, but never removing any years. For example, if a work was created in 2012 and updated in 2013, 2015, 2016, 2017, and 2022, the copyright date could look like:
© 2012, 2013, 2015-2017, 2022
To be clear, I’m not terribly concerned with whether large, institutional copyright holders are able to effectively litigate their IP holdings. Rather, this is advice for small producers of works, like freelancers or folks hosting their own blog. In the age of AI, copyright abuse against small players is now rampant, and a copyright date that is always the current year is ammunition for an AI company’s lawyer to argue that they didn’t plagiarize your work, because your work has a date that came after when they trained their models.
Not that the copyright date is wholly dispositive, but it makes clear from the get-go when a work came under copyright protection.
At least on my machine, that link doesn’t work unless I explicitly change it to HTTP (no S).
It’s for this reason I sometimes spell out the Bytes or bits. E.g.: 88 Gbits/s or 1.44 MBytes
It’s also especially useful for endianness and bit ordering: MSByte vs MSbit
The knot is non-SI but perfectly metric and actually makes sense, as a nautical mile is one minute of arc along a meridian
I do admire the nautical mile for being based on something which has proven to be continually relevant (maritime navigation) as well as being brought forward to new, related fields (aeronautical navigation). And I am aware that it was redefined in SI units, so there’s no incompatibility. I’m mostly poking fun at the kN abbreviation; I agree that no one is confusing kilonewtons with knots, not unless there’s a hurricane putting a torque on a broadcasting tower…
No standard abbreviation exists for nautical miles
We can invent one: kn-h. It’s knot-hours, which is technically correct but horrific to look at. It’s like the time I came across hp-h (horsepower-hour) to measure gasoline energy. :(
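To spell out why it’s technically correct: a knot is one nautical mile per hour, so 1 kn × 1 h = 1 nautical mile, in exactly the same way that 1 kW × 1 h = 1 kWh.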
if you take all those colonial unit
In defense of American national pride, I have to point out that many of these came from the Brits. Though we’re guilty of perpetuating them, even after the British have given up on them haha
An inch is 25mm, and a foot an even 1/3rd of a metre while a yard is exactly one metre.
I’m a dual-capable American who can use either SI or US Customary – it’s the occupational hazard of being an engineer lol – but I went into a cold sweat thinking about all the awful things that would happen with a 25 mm inch, and even worse things with 3 ft to the meter. Like, that’s not even a multiple of 2, 5, or 10! At least let it be 40 inches to the meter. /s
There’s also other SI-adjacent strangeness such as the hectare
I like to explain to other Americans that metric is easy, using the hectare as an example. What’s a hectare? It’s about 2.47 acres. Or, more relatably, it’s the average size of a Walmart supercenter, at about 107,000 sq ft.
1 hectare == 1 Walmart
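For anyone who wants the actual conversion behind the joke: 1 hectare = 100 m × 100 m = 10,000 m², which works out to about 107,639 sq ft, or about 2.47 acres.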
I’m surprised there aren’t more suggestions which use intentionally-similar abbreviations. The American customary system is rich with abbreviations which are deceptively similar, and I think the American computer memory units should match; confusion is the name of the game. Some examples from existing units:
I’m afraid I have no suggestions for DoT servers.
One tip for your debugging that might be useful is to use dig to directly query DNS servers, to help identify where a DNS issue may lie. For example, your earlier test on mobile happened to be using Google’s DNS server on legacy IP (8.8.8.8). If you ran the following on your desktop, I would imagine that you would see the AAAA record:
dig @8.8.8.8 mydomain.example.com AAAA
If this succeeds, you know that Google’s DNS server is a viable choice for resolving your AAAA record. You can then test your local network’s DNS server, to see if it’ll provide the AAAA record. And then you can test your local machine’s DNS server (eg systemd-resolved). Somewhere, something is not returning your AAAA record, and you can slowly smoke it out. Good luck!
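A few concrete commands for those follow-up checks, with placeholder values (mydomain.example.com stands in for your hostname, 192.168.1.1 for your router or local network’s DNS server, and resolvectl assumes your desktop runs systemd-resolved):

dig @192.168.1.1 mydomain.example.com AAAA
dig mydomain.example.com AAAA
resolvectl query mydomain.example.com

The first queries your local network’s DNS server directly, the second uses whatever resolver is listed in /etc/resolv.conf, and the third asks systemd-resolved itself. Wherever the AAAA record stops showing up is where to focus.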
If I understand correctly, you’re now able to verify the AAAA on mobile. But you’re still not able to connect to the web server from your mobile phone. Do I have that right?
I believe in a different comment here, you said that your mobile network doesn’t support IPv6, nor does your local WiFi network. In that case, it seems like your phone is performing DNS lookups just fine, but has no way to connect to an IPv6 destination.
If your desktop does have IPv6 connectivity but has DNS resolution issues, then I would now look into resolving that. To be clear, was your desktop a Linux/Unix system?
If you describe what you configured using DNS and what tests you’ve performed, people in this community could help debug that issue as well.
An AAAA record mapping a hostname to an IPv6 address should be fairly trouble-free. If you create a new record, the “dig” command should be able to query it immediately, as the DNS servers will go through to the authoritative server, which has the new record. But if you modified an existing record, then the old record’s TTL value might cause the old value to remain in DNS caches for a while.
When in doubt, you can also aim “dig” at the authoritative name server directly, to rule out an issue with your local DNS server or with your ISP’s DNS server.
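In case it’s useful, here’s the two-step version of that with dig, using placeholders (example.com is your zone, and ns1.example.net stands in for whatever name server the first command actually returns):

dig example.com NS +short
dig @ns1.example.net mydomain.example.com AAAA

The first command lists the authoritative name servers for the zone; the second queries one of them directly, bypassing your local resolver and your ISP’s caches entirely.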
Could you let us know what the DNS issue was?
FYI, the Intel code used to be here (https://github.com/intel/thunderbolt-utils) but apparently was archived a week ago. So instead, the video creator posted the fork here: https://github.com/rxrbln/thunderbolt-utils
Thank you for reminding me of this: https://youtube.com/shorts/XqNrO33bxmw
Do you recommend dns.sb?
Oh wow, that might be the shortest-representation IPv6 DNS server I’ve seen to date: 2620:fe::9
It is quite the mouthful, but I really hope people aren’t having to manually type in DNS servers regularly, whether v4 or v6. Whatever your choice of DNS server, it should be a set-it-and-forget-it affair, so the one-off lookup time becomes negligible.
For the modern IP (aka IPv6) folks: 2606:4700:4700::1111
Other brands of IPv6 DNS servers are available.
If the server is sent a signal to shut down due to a grid outage, who is telling it the grid was restored?
Ah, I see I forgot to explain a crucial step. When the UPS detects that grid power is lost, it sends a notification to the OS. In your case, it is received by apcupsd. What happens now is a two-step process: 1) the UPS is instructed to power down after a fixed time period – one longer than it would take for the OS to shut down – and 2) the OS is instructed to shut down. Here is one example of how someone has configured their machine like this. The UPS will stay off until grid power is restored.
In this way, the server will indeed lose power, shortly after the OS has already shut down. You should be able to configure the relevant delay parameters in apcupsd to preserve however much battery state you need to survive multiple grid events.
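As a rough sketch of where those knobs live in apcupsd (the values here are purely illustrative, and the cable/type settings depend on your particular UPS):

# /etc/apcupsd/apcupsd.conf
UPSCABLE usb
UPSTYPE usb
TIMEOUT 120        # begin the OS shutdown after 2 minutes on battery
BATTERYLEVEL 20    # ...or when the battery charge falls to 20%
MINUTES 5          # ...or when estimated runtime drops to 5 minutes

The “tell the UPS to cut its output” half typically happens at the very end of the shutdown sequence, via the halt/shutdown hook calling apcupsd --killpower once the filesystems are unmounted or read-only.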
The reason the UPS is configured with a fixed time limit – as opposed to, say, waiting until power draw drops below some number of watts – is that it’s easy and cheap to implement, and it’s deterministic. Think about what would happen if an NFS mount or something got stuck during shutdown, running down the battery and ending in exactly the unexpected power loss the UPS was meant to avoid. Maybe all the local filesystems were properly unmounted in time, but when booting up later and mounting the filesystems, a second grid fault on a depleted battery could result in data loss. Here, the risk of accidentally cutting off the shutdown procedure is balanced against the risk of another fault on power-up.
Answering the question directly, your intuition is right that you’ll want to limit the ways that your machine can be exploited. Since this is a Dell machine, I would think iDRAC is well suited to be the control mechanism here. iDRAC can accept SNMP commands and some newer versions can receive REST API calls.
But stepping back for a moment, is there any reason why you cannot configure the “AC Power Recovery” option in the system setup to boot the machine when power is restored? The default behavior is to remain as it was but you can configure it to always boot up.
From your description, it sounds like your APC unit notifies the server that the grid is down, which results in the OS shutting down. Presumably, the APC unit will eventually run down its battery and then the r320 will be without AC power. When the grid comes back up, the r320 will receive AC power and can then react by booting up, if so configured. Is this not feasible?
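If it’s easier to flip that setting remotely than through the BIOS setup screens, the standard IPMI power restore policy covers the same ground, and iDRAC speaks IPMI. A sketch, assuming IPMI-over-LAN is enabled on the iDRAC and using a placeholder address and credentials:

ipmitool -I lanplus -H 192.0.2.10 -U root -P yourpassword chassis policy always-on

That tells the chassis to power on whenever AC returns; “previous” instead restores whatever power state the machine was in before the outage, which matches the default behavior described above.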
I know this is c/programmerhumor but I’ll take a stab at the question. If I may broaden the question to collectively include software engineers, programmers, and (from the mainframe era) operators – but will still use “programmers” for brevity – then we can find examples of all sorts of other roles being taken over by computers or subsumed as part of a different worker’s job description. So it shouldn’t really be surprising that the job of programmer would also be partially offloaded.
The classic example of computer-induced obsolescence is the job of typist, where a large organization would employ staff to operate typewriters to convert hand-written memos into typed documents. Helped by the availability of word processors – no, not the software, but the standalone appliance – and then the personal computer, the expectation shifted to knowledge workers typing their own documents.
If we look at some of the earliest analog computers, built to compute differential equations such as for weather and flow analysis, a small team of people would be needed to operate them and interpret the results for the research staff. But nowadays, researchers are expected to crunch their own numbers, possibly aided by a statistician or data analyst, but they’re still working in R or Python themselves, as opposed to handing the job to a dedicated person or team that sets up the analysis program.
In that sense, the job of setting up tasks to run on a computer – that is, the old definition of “programming” the machine – has moved to the users. But alleviating the burden on programmers isn’t always going to be viewed as obsolescence. Otherwise, we’d say that tab-complete is making human typing obsolete lol