cross-posted from: https://lemmy.pierre-couy.fr/post/584644

While monitoring my Pi-Hole logs today, I noticed a bunch of queries for XXXXXX.bodis.com, where XXXXXX are numbers. I saw a few variations for the numbers, each one being queried several times.

Digging further, I found out these queries were caused by CNAME records on domains that look like they used to point to Lemmy/Kbin instances.

From what I understand, domain owners can register a CNAME record pointing to XXXXXX.bodis.com and earn some money from the traffic it receives. I guess that each number variation is a domain owner ID in Bodis' database. I saw between 5 and 10 different number variations, each one pointed to by a bunch of old Lemmy domains.
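
If you want to check your own logs for this, here is a minimal sketch that greps Pi-hole/dnsmasq-style reply lines for *.bodis.com CNAME targets and collects the numeric IDs. The log lines, IDs, and IP below are made up for illustration:

```python
import re

# Match any "<digits>.bodis.com" name appearing in a log line,
# capturing the numeric part (presumably the domain owner ID).
BODIS_RE = re.compile(r"\b(\d+)\.bodis\.com\b")

def bodis_ids(log_lines):
    """Return the set of numeric Bodis IDs seen in the log lines."""
    ids = set()
    for line in log_lines:
        m = BODIS_RE.search(line)
        if m:
            ids.add(m.group(1))
    return ids

# Hypothetical dnsmasq-style log excerpt (IDs and IP are placeholders)
sample = [
    "Jun 2 10:00:01 dnsmasq[123]: reply old-lemmy.example is 185221.bodis.com",
    "Jun 2 10:00:02 dnsmasq[123]: reply 185221.bodis.com is 203.0.113.10",
    "Jun 2 10:05:10 dnsmasq[123]: reply dead-kbin.example is 190774.bodis.com",
]
print(sorted(bodis_ids(sample)))  # -> ['185221', '190774']
```

Counting how many distinct old domains point at each ID would give a rough idea of how concentrated the snatching is.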

This probably means that among the actors who snatch expired domains, several have taken a specific interest in expired domains of old Lemmy instances. Another hypothesis is that a lot of domains were registered for hosting Lemmy during the Reddit API debacle (about 1 year ago) and have started expiring recently.

Have any other instance admins noticed the same thing? Is either of my two hypotheses more plausible than the other? Should we worry about this trend?

Anyway, I hope this at least serves as a reminder to not let our domains expire ;)

      • Ghoelian@lemmy.dbzer0.com · 4 months ago

        According to this service, that domain never had any subdomains, so it looks like there's just nothing there at the moment.

        Not sure how reliable it is, but it did correctly identify all of my own subdomains for a website that no one ever goes to.

        • pcouy@lemmy.pierre-couy.fr (OP) · 4 months ago

          These services usually use passive DNS replication (running public recursive DNS resolvers and logging every lookup that returns a record), certificate transparency logs (where certificate authorities publish the domain names for which they issue certificates), or both.
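
          As an illustration of the certificate transparency side: crt.sh, a public CT search engine, can return JSON for a query like https://crt.sh/?q=%25.example.com&output=json (the interface may have changed since I last used it). The sketch below only shows the offline parsing step, against a made-up sample response:

```python
import json

def ct_subdomains(crtsh_json, domain):
    """Extract the unique host names under `domain` from a crt.sh-style
    JSON response, where each entry's "name_value" field holds
    newline-separated certificate names."""
    names = set()
    for entry in json.loads(crtsh_json):
        for name in entry["name_value"].splitlines():
            name = name.lstrip("*.")  # a wildcard entry covers the base zone
            if name == domain or name.endswith("." + domain):
                names.add(name)
    return names

# Made-up sample response for illustration
sample = json.dumps([
    {"name_value": "lemmy.example.com\nwww.example.com"},
    {"name_value": "*.example.com"},
    {"name_value": "unrelated.org"},
])
print(sorted(ct_subdomains(sample, "example.com")))
# -> ['example.com', 'lemmy.example.com', 'www.example.com']
```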

          • Ghoelian@lemmy.dbzer0.com · 3 months ago (edited)

            Ahh, I guess they probably got my subdomains from Let's Encrypt then; I use it for pretty much all my websites.

            Edit: Just checked, and yup, all my old subdomains are there from Let's Encrypt.

            • pcouy@lemmy.pierre-couy.fr (OP) · 3 months ago

              What I did was use a wildcard subdomain and certificate. This way, only pierre-couy.fr and *.pierre-couy.fr ever show up in the transparency logs. Since I'm using Pi-hole with carefully chosen upstream DNS servers, passive DNS replication services do not seem to pick up my subdomains (even subdomains I share with some relatives who probably use their ISP's default DNS do not show up).

              This obviously only works if all your subdomains go to the same IP. I've achieved something similar to Cloudflare Tunnels using a combination of nginx and WireGuard on a cheap VPS (I want to write a tutorial about this when I find some time). One side benefit of this setup is that I usually don't need to fiddle with my DNS zone to set up a new subdomain: all I need to do is add a new nginx config file with a server section.

              Some scanners will still try to brute-force subdomains. I simply block any IP that hits my VPS with a Host header containing a subdomain I did not configure.
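
              A sketch of that setup (all names, paths, and addresses here are hypothetical placeholders, not my actual config): with a wildcard certificate, each service is one server block, and a catch-all default server drops requests whose Host header matches nothing configured:

```nginx
# Hypothetical per-service block: since the DNS zone only contains a
# wildcard record, adding a service is just adding a file like this one.
server {
    listen 443 ssl;
    server_name newapp.pierre-couy.fr;           # hypothetical subdomain
    ssl_certificate     /etc/ssl/wildcard.crt;   # placeholder wildcard cert paths
    ssl_certificate_key /etc/ssl/wildcard.key;
    location / {
        proxy_pass http://10.0.0.2:8080;         # backend reached over the WireGuard tunnel
    }
}

# Catch-all: any Host header that matches no configured server_name
# lands here, and 444 closes the connection without a response.
server {
    listen 443 ssl default_server;
    server_name _;
    ssl_certificate     /etc/ssl/wildcard.crt;
    ssl_certificate_key /etc/ssl/wildcard.key;
    return 444;
}
```

              A scanner that guesses a non-existent subdomain then never gets a response, and its IP shows up in the access log where it can be picked up for blocking (e.g. with fail2ban).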

        • pcouy@lemmy.pierre-couy.fr (OP) · 4 months ago

          The fact that it was not bought as soon as the domain expired makes me believe this instance went down before the trend started.

          • Zagorath@aussie.zone · 4 months ago

            I’m actually not really clear on what the status of that instance is. Like, for me, when I browse to https://pathfinder.social, I actually see what looks like an empty Lemmy instance running 0.18.2. Some communities show the same for me, while others show a generic error message. So I don’t know whether it’s running in some failed state due to caching, or deregistered, or what.

            • pcouy@lemmy.pierre-couy.fr (OP) · 4 months ago

              That’s really, really weird: I cannot resolve the domain to an IP, even after trying a bunch of different DNS servers. If you’re on Linux, can you run nslookup pathfinder.social and paste the output here?

              • Zagorath@aussie.zone · 4 months ago (edited)

                If you’re on Linux

                I’m not, but I do have WSL installed. It returned “Can’t find pathfinder.social: No answer”

                Out of interest, I tried the same command in Microsoft PowerShell, and I get:

                Server:  dns9.quad9.net
                Address:  9.9.9.9
                
                Name:    pathfinder.social
                

                That’s the full output. No actual list of returned addresses.

                I’m guessing my system just has pathfinder.social cached.

                • pcouy@lemmy.pierre-couy.fr (OP) · 4 months ago

                  Yeah, this probably has to do with the cache. You can try opening dev tools (F12 in most browsers), going to the Network tab, and browsing to pathfinder.social. You should see all requests going out, including “fake requests” for content that you already have locally cached.

                  • Zagorath@aussie.zone · 4 months ago

                    Oh neat, I’d never thought of that before. Woulda been handy back last time I was working on a PWA!

                    200 OK (from service worker)

                    So yeah, getting it from the cache.