• CarbonatedPastaSauce@lemmy.world · 30 points · 6 days ago

    There are lots of reasons to use really low TTLs, but most are a temporary need. Most of the time I’ve had to set low TTLs on records, it was for hardware migration projects where services were getting new IP addresses. In a well-managed shop this should always be temporary: the TTL gets set low the day before the change, then set back to a normal value the day after. I feel the author is correct that permanently setting low TTLs just covers up a lack of proper planning and change management.
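
    Something like this for the migration case (a hypothetical zone file, made-up names and documentation addresses):

        ; day before the cutover: drop the TTL so resolvers re-check quickly
        www.example.com.    300     IN  A   192.0.2.10
        ; day after the cutover: back to a normal value
        www.example.com.    3600    IN  A   198.51.100.20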

    The only thing off the top of my head that I can think of that absolutely requires a permanently low TTL is DNS-based global load balancing for high-uptime applications. But I’m sure there are other uses. I agree that the vast majority of things do not need a low TTL on their DNS records.

    • CompactFlax@discuss.tchncs.de · 3 points · 6 days ago

      I have a connection with fairly high latency, and using Pi-hole with an anycast upstream resolver is noticeably slow. Records fall out of the Pi-hole cache so freaking fast with these low TTLs. I’ve set up Unbound with aggressive cache prefetching, and if I recall correctly Pi-hole has a toggle to serve expired records. Serving expired records in Unbound, before Pi-hole, breaks stuff that rotates IPs fast.
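
      Roughly this sort of thing in unbound.conf (a sketch of the idea, not exact values):

          server:
              prefetch: yes        # start refreshing popular cache entries before they expire
              prefetch-key: yes    # pre-fetch DNSKEYs too, for DNSSEC validation
              serve-expired: no    # answering with expired records here is what broke fast-rotating stuff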

  • The_Decryptor@aussie.zone · 10 points · 6 days ago

    Set that minimum TTL to something between 40 minutes (2400 seconds) and 1 hour; this is a perfectly reasonable range.

    Sounds good, let’s give that a try and see what breaks.
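
    For reference, “that” would be something like this in an Unbound resolver (a sketch; cache-min-ttl is the clamp):

        server:
            cache-min-ttl: 2400    # ignore TTLs shorter than 40 minutes
            cache-max-ttl: 86400   # optional upper bound of a day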

      • The_Decryptor@aussie.zone · 2 points · 1 day ago

        I’ve got some numbers; it took longer than I’d have liked because of ISP issues. Each period is about a day, give or take.

        With the default TTLs, my Unbound server saw 54,087 total requests: 17,022 got a cache hit and 37,065 a cache miss, so a 31.5% cache hit rate.

        With clamping it saw 56,258 requests: 30,761 hits and 25,497 misses, a 54.7% cache hit rate.

        And the important thing, and the most “unscientific” one: I didn’t encounter any issues with stale DNS results. Everything still seemed to work, and I didn’t get random error pages while browsing or anything like that.

        I’m kind of surprised the total query counts were so close; I would have assumed a longer TTL would also cause clients to cache results for longer and make fewer requests (though e.g. Firefox actually caps TTLs at 600 seconds or so). My working idea is that for things like YouTube video serving, instead of using static hostnames and rotating out IPs, they’re doing the opposite: keeping the addresses fixed but changing the domain names, effectively cache-busting DNS.

    • L3s@lemmy.world (mod) · 18 points · 6 days ago

      That’s our automod; we keep an eye out for blogs. Every now and then we get spammed with personal blogs about off-topic things.

  • zeezee@slrpnk.net · 2 points · 5 days ago

    tl;dr:

    Set that minimum TTL to something between 40 minutes (2400 seconds) and 1 hour; this is a perfectly reasonable range.

  • MonkderVierte@lemmy.zip · 2 points · 5 days ago

    Btw, is there a way to tweak Firefox so it always uses the cache and only updates on a manual page reload?

    • chaospatterns@lemmy.world · 1 point · 5 days ago

      Are you trying to make an offline website? If so, you could look into using a Service Worker, which would give you full control over when the content gets refreshed.
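
      Roughly the cache-first pattern, as a JavaScript sketch (the file name, cache name and URL list are all made up):

          // sw.js: serve from the cache first, fall back to the network
          const CACHE = 'offline-v1';

          self.addEventListener('install', (event) => {
            // pre-cache the pages you want available offline
            event.waitUntil(
              caches.open(CACHE).then((cache) => cache.addAll(['/', '/index.html']))
            );
          });

          self.addEventListener('fetch', (event) => {
            event.respondWith(
              caches.match(event.request).then((cached) =>
                cached ||
                fetch(event.request).then((resp) => {
                  // store a copy of fresh responses for next time
                  const copy = resp.clone();
                  caches.open(CACHE).then((cache) => cache.put(event.request, copy));
                  return resp;
                })
              )
            );
          });

      The page registers it once with navigator.serviceWorker.register('/sw.js'), and after that the content only changes when you decide to update the cache.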

      • MonkderVierte@lemmy.zip · 2 points · 5 days ago (edited)

        Laptop, mobile, bad line; it’s annoying if the page (which should already be in the cache since I opened it hours ago) says “No internet :(” just because it got unloaded.

        And yes, “save webpage” solves that, but

        1. I have to think of it beforehand
        2. the site is already there, in the freaking cache.

        In short, I want to use Firefox as the document viewer and downloader it is, instead of a webapp platform or whatever it wants to be.