• 0 Posts
  • 13 Comments
Joined 3 months ago
Cake day: June 12th, 2024






  • I thought I explained how to handle the dynamically inserted ads, but I’ll elaborate a little here.

    If your Listenarr instance is part of a broader network of other instances, each may receive a unique file with different ads inserted, but the ads are typically inserted at the same cut locations in the program timeline. Listenarr would calculate the hash of the entire file, but also of sub-spans of various lengths.

    If the hash of the full file is the same among instances, you know everyone is getting the same file, and any time references suggested for metadata will apply to everyone.

    If the full file hash is different, Listenarr starts slicing it up and generating hashes of subsections to help identify where common and variant sections are. Common sections will usually be the actual content, variants are likely tailored ads. The broader the Listenarr network, the greater the sample size for hashes, which will help automate identification. In fact, the more granular and specific the targeting of inserted ads, the easier it will be to identify them.

    Once you have the file sections sufficiently hashed, tagged, and identified, you can easily stitch together a sanitised media stream into a file any podcast app can ingest.
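A minimal sketch of the hashing idea above, assuming fixed-size sub-spans for simplicity (a real implementation would likely try several span lengths, or rolling hashes, to cope with ads that shift byte offsets):

```python
import hashlib

CHUNK = 4096  # bytes per sub-span; a real system would try several lengths

def span_hashes(data: bytes) -> list[str]:
    """Hash fixed-size sub-spans so variant (ad) regions stand out."""
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

def variant_spans(files: list[bytes]) -> list[int]:
    """Indices of sub-spans that differ between any two instances' downloads."""
    per_file = [span_hashes(f) for f in files]
    length = min(len(h) for h in per_file)
    return [i for i in range(length)
            if len({h[i] for h in per_file}) > 1]

# Two synthetic "downloads" sharing content but with a different middle chunk:
content = b"A" * CHUNK + b"AD-1" * 1024 + b"B" * CHUNK
other   = b"A" * CHUNK + b"AD-2" * 1024 + b"B" * CHUNK
print(variant_spans([content, other]))  # → [1]
```

The common spans (indices 0 and 2) are the actual content; the variant span is the candidate ad region to cut.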

    You could shove this function into a podcast player, but then you’d need to replicate all the existing permutations of player applications.

    The beauty of the current podcast environment is it’s just RSS feeds that point to audio files in a standard way. This permits handling by a shim proxy in the middle of the transaction between the publisher and the player.
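A sketch of such a shim, assuming a hypothetical `listenarr.example` endpoint that serves the cleaned audio: the proxy only has to rewrite each enclosure URL in the feed, and any standard player then fetches episodes through it.

```python
import xml.etree.ElementTree as ET
from urllib.parse import quote

# Hypothetical sanitising endpoint; not a real service
PROXY = "https://listenarr.example/clean?src="

def rewrite_feed(rss_xml: str) -> str:
    """Point each episode's enclosure at the shim instead of the publisher."""
    root = ET.fromstring(rss_xml)
    for enc in root.iter("enclosure"):
        enc.set("url", PROXY + quote(enc.get("url"), safe=""))
    return ET.tostring(root, encoding="unicode")

feed = """<rss><channel><item>
  <enclosure url="https://cdn.example/ep1.mp3" type="audio/mpeg"/>
</item></channel></rss>"""
print(rewrite_feed(feed))
```

Everything else in the feed (titles, descriptions, chapters) passes through untouched, which is why no player changes are needed.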

    This could also be a way to better incorporate media into the fediverse. For example, the generated chapters and transcripts could be directly referenced in Lemmy and Mastodon posts.




  • I create unique email addresses for every organisation and service I deal with, including an obfuscated date, so when an address is compromised I can nuke it with a hard rejection, and regenerate as needed. This all feeds into a catchall mailbox, with server-side sieve rules to filter the stuff I actually care about.
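A sketch of generating such addresses (the domain and salt here are made up; the date is obfuscated by hashing it with a private salt rather than embedding it verbatim):

```python
import hashlib
from datetime import date

DOMAIN = "example.net"   # assumed catch-all domain
SECRET = "private-salt"  # hypothetical salt so the date can't be read off the tag

def alias(org: str, joined: date) -> str:
    """Unique per-organisation address; the join date is folded into an opaque tag."""
    tag = hashlib.sha256(f"{org}:{joined.isoformat()}:{SECRET}".encode()).hexdigest()[:8]
    return f"{org}.{tag}@{DOMAIN}"

# One opaque address per organisation; hard-reject it server-side once it leaks
print(alias("acme", date(2024, 6, 12)))
```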

    Besides some account-specific and burnt addresses, I don’t actively track any of the addresses I’ve created; it would be in the hundreds by now.

    One benefit of this crazy setup is there’s one less common identifier to match across disparate data stores.

    Finally, no one should do any of the above. From experience I consider it pathological.




  • Configuring multiple v4 addresses on an interface is a kludge, typically only used on hosts which apply inter-network routing logic. In v6, multi-addressing is an explicit, primary function of the standard specifications.

    With v4, you would use either RFC1918 and NAT, or plumb a public address to the host.

    With v6 you should use a ULA and an address with a public prefix, and selectively bind ports/services to the appropriate address.

    For example, the file sharing and administration daemons on my NAS are bound only to its ULA. I don’t need to worry about accidentally exposing them publicly by fat fingering my firewall config, because the ULA will never route beyond my gateway.
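A small illustration of that selection logic, using Python's stdlib `ipaddress` module (the addresses are made up): only the ULA qualifies as a bind address for internal-only daemons.

```python
import ipaddress

def is_ula(addr: str) -> bool:
    """True when addr falls in fc00::/7, the Unique Local Address block (RFC 4193)."""
    return ipaddress.ip_address(addr) in ipaddress.ip_network("fc00::/7")

# A hypothetical NAS holding both a ULA and a globally routable address:
nas_addrs = ["fd12:3456:789a::10", "2001:db8::10"]
admin_binds = [a for a in nas_addrs if is_ula(a)]
print(admin_binds)  # → ['fd12:3456:789a::10']
```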


  • I use ULA prefixes to ensure the management interfaces of my devices don’t leak via public routes.

    It’s one of the unique parts of the standard IPv6 stack never properly back-ported to IPv4: an interface on any host can be configured with multiple addresses, which permits functional isolation using the default routing logic.

    IPv6 is far from perfect, but the majority of the arguments I’ve seen against deploying it are a mixture of laziness, wilful ignorance, and terminal incuriosity.