I think people are misunderstanding. This isn't CT logs; it's a wildcard certificate, so it wouldn't leak the "nas" part. It's Sentry catching client-side traces and calling home with them, then picking out the hostname from the request that sent them (i.e., "nas.nothing-special.whatever.example.com") and trying to poll it for whatever reason, which goes to a separate server that is catching the wildcard domain and rejecting it.
Sounds like a great way to get sentry to fire off arbitrary requests to IPs you don’t own.
Sure hope nobody does that targeting IPs (like that blacklist in masscan) that will auto-report you to your ISP/ASN/whatever for your abusive traffic. Repeatedly.
Is "clown GCP Host" a technical term I am unaware of, or is the author just voicing their discontent?
Seems to me that the problem is the NAS's web interface using Sentry for logging/monitoring, and part of what was logged were internal hostnames, which might be named in a way that carries sensitive info (e.g., the corp-and-other-corp-merger example they gave). So it wouldn't matter that it's inaccessible on a private network; the name itself is sensitive information.
In that case, I would personally replace the operating system of the NAS with one that is free/open source, that I trust, and that does not phone home. I suppose some form of ad blocking à la Pi-hole, or some other DNS configuration that blocks Sentry calls, would work too, but I would just go with an operating system I trust.
I bought a Synology NAS and I have already regretted it 3-4 times. Apart from the software made available by the community, there is very little one can do with this thing.
Using LE to apply SSL to services? Complicated. Non-standard paths, custom distro, everything hidden (you can’t figure out where to place the SSL cert or how to restart the service, etc.). Of course you will figure it out if you spend 50 hours… but why?
Don’t get me started on the old rsync version, or the lack of Midnight Commander and other utils.
I should have gone with something that runs proper Linux or BSD.
Isn't the article overemphasising the leakage of internal URLs a bit?
Internal hostnames leaking is real, but in practice it’s just one tiny slice of a much larger problem: names and metadata leak everywhere - logs, traces, code, monitoring tools etc etc.
Reverse-address lookup servers routinely see escaped attempts to resolve ULA and RFC 1918 space. If you can tie the resolver to other valid data, you know inside state.
Public services see one-way traffic (no TCP return flow possible) from almost any source IP. If you can tie that to other corroborated data, same thing: you see packets from "inside" all the time.
Darknet collection during the final /8 run-down captured audio in UDP.

Firewalls? ACLs? Pah. Humbug.
Bad post. No information on how to resolve this situation.
Coming from someone who worked at FAANG, this is a subpar post.
You block it inside your network by having the domain name not leak in the first place. Run a resolver inside your network, and have DHCP responses point clients' DNS at your resolver. The resolver only queries upstream if the name is not in your local network's domain(s). Then block all outbound DNS from your local network using your preferred firewall.

No leaks.

It took me 30 seconds to write this. Basic stuff.
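A minimal sketch of that split-horizon setup, assuming Unbound as the internal resolver; the zone name and all addresses here are made-up placeholders, not anything from the article:

```
# /etc/unbound/unbound.conf (fragment)
server:
    interface: 192.168.1.1
    access-control: 192.168.1.0/24 allow
    # Answer the internal zone locally and never forward it upstream;
    # names under it that aren't defined here get NXDOMAIN.
    local-zone: "nothing-special.whatever.example.com." static
    local-data: "nas.nothing-special.whatever.example.com. A 192.168.1.10"

forward-zone:
    # Everything else goes upstream
    name: "."
    forward-addr: 9.9.9.9
```

Pair it with a firewall rule that drops outbound port-53 traffic from anything other than the resolver itself, and local names never leave the network via DNS (though that does nothing about an app shipping the hostname inside a TLS session).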
Blocking DNS leaks from the local network will not prevent Sentry from sending them to the cloud. Blocking Sentry from reaching the cloud (as said in the post) will.
Wow, just skip the "bad post", "took me 30 seconds", "Basic stuff" parts already, especially if you come up with something like this right after and when you are completely missing the point and don't seem to realize it even after several people point it out.
Show some humility.
What's more, one doesn't really read Rachel for her potential technical solutions but because one likes her storytelling.
> Around this time, you realize that the web interface for this thing has some stuff that phones home, and part of what it does is to send stack traces back to sentry.io. Yep, your browser is calling back to them, and it's telling them the hostname you use for your internal storage box. Then for some reason, they're making a TLS connection back to it, but they don't ever request anything. Curious, right?
Unless you actively block all potential trackers (good luck with that one lol), you're not going to prevent leaks if the web UI contains code that actively submits details like hostnames over an encrypted channel.
I suppose it's a good thing you only wasted 30 seconds on this.
Not sure why they made the connection to sentry.io and not to CT logs. My first thought was that "*.some-subdomain." got added to the CT logs and someone is scanning *. with well-known hosts, of which "nas" would be one. Curious if they have more insights into sentry.io leaking, and where it leaks to...
But she mentioned: 1) it isn't in DNS, only /etc/hosts, and 2) they are making a connection to it. So they'd need to get the IP address to connect to from somewhere as well.
> You're able to see this because you set up a wildcard DNS entry for the whole ".nothing-special.whatever.example.com" space pointing at a machine you control just in case something leaks. And, well, something *did* leak.
They don't need the IP address itself, it sounds like they're not even connecting to the same host.
Unless she hosts her own cert authority or is using a self-signed cert, the wildcard cert she mentions is visible to the public on sites such as https://crt.sh/.
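For anyone curious what's already public for their own domain, crt.sh exposes a JSON endpoint you can query. A quick sketch (Python; `example.com` is a placeholder, and `%.` is crt.sh's any-subdomain wildcard syntax):

```python
import json
import urllib.parse
import urllib.request

def ct_search_url(domain: str) -> str:
    """Build a crt.sh query URL covering every cert logged under a domain."""
    q = urllib.parse.quote(f"%.{domain}")  # '%' must be percent-encoded itself
    return f"https://crt.sh/?q={q}&output=json"

def logged_names(domain: str) -> set[str]:
    """Fetch the CT-logged names; a wildcard cert shows up as '*.domain'."""
    with urllib.request.urlopen(ct_search_url(domain)) as resp:
        entries = json.load(resp)
    names: set[str] = set()
    for entry in entries:
        # crt.sh packs multiple SAN entries into one newline-separated field
        names.update(entry["name_value"].splitlines())
    return names

# Usage (makes a real HTTP request):
#   logged_names("example.com")
```

If all you see there is the wildcard entry, the individual hostnames didn't come from CT, which is the point being made above.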
Oh god, this sucks. I've been setting up lots of services on my NAS pointing to my own domains recently. Can't even name the domains on my own damn server with an expectation of privacy now.
The (somewhat affordable) productized NASes all suffer from big tech diseases.
I think a lot of people underestimate how easy a "NAS" can be made if you take a standard PC, install some form of desktop Linux, and hit "share" on a folder. Something like TrueNAS or one of its forks may also be an option if you're into that kind of stuff.
If you want the fancy Docker management web UI stuff with as little maintenance as possible, you may still be in the NAS market, but for a lot of people NAS just means "a big hard drive all of my devices can access". From what I can tell, the best middle point between "what the box from the store offers" and "how to build one yourself" is a (paid-for) NAS OS like HexOS, where analytics, tracking, and data sales are not used to cover for race-to-the-bottom pricing.
From what I understand, sentry.io is like a tracing and logging service, used by many organizations.
This helps you (=NAS developer) to centralize logs and trace a request through all your application layers (client->server->db and back), so you can identify performance bottlenecks and measure usage patterns.
This is what you can find behind the 'anonymized diagnostics' and 'telemetry' settings you are asked to enable/consent.
For a WebUI it is implemented via javascript, which runs on the client's machine and hooks into the clicks, API calls and page content. It then sends statistics and logs back to, in this case, sentry.io. Your browser just sees javascript, so don't blame them. Privacy Badger might block it.
It is as nefarious as the developer of the application wants to use it. Normally you would use it to centralize logging, find performance issues, and get a basic idea on what features users actually use, so you can debug more easily. But you can also use it to track users.
And don't forget, sentry.io is a cloud solution. If you post it on machines outside your control, expect it to be public. Sentry has a self-hosted solution, btw.
Haha, this obtuse way of speech is such a classic FAANG move. I wonder if it’s because of internal corporate style comms. Patio11 also talks like this. Maybe because Stripe is pretty much a private FAANG.
This is actually a really interesting way to attack a sensitive network: it lets you map its internal layout. Getting access is obviously the main challenge, but once you're in, you need to know where to go and what to look for. If you already have that knowledge when planning the attack, you've got the upper hand. So while it may seem like "OK, so they have a hostname they can't access, why do I care?", if you're doing high-end security at the sysadmin level, this is the sort of small nitpicking it takes to be the best.
>Hope you didn't name it anything sensitive, like "mycorp-and-othercorp-planned-merger-storage", or something.
So no one competent is going to do this: domain names are not encrypted by HTTPS, so any sensitive info goes in the URL path instead.
I think being controlling of domain names is a sign of a good sysadmin, it's also a bit schizophrenic, but you gotta be a little schizophrenic to be the type of sysadmin that never gets hacked.
That said, domains not leaking is one of those "clean sheet" features that you go for for no reason at all, and it feels nice, but if you don't get it, it's not consequential at all. It's like driving at exactly 50 mph, or having a green streak on GitHub. You are never going to rely on that secrecy, if only because some ISP might see it, but it's 100% achievable that no one will start pinging your internal host and polluting your hosts (if you do domain name filtering).
So what I'm saying is, I appreciate this type of effort, but it's a bit dramatic. Definitely uninstall whatever junk leaked your domain though, but it's really nothing.
Obl. nitpick: you mean paranoia, presumably. Schizophrenia is a dissociative/psychotic disorder, paranoia is the irrational belief that you’re being persecuted/watched/etc.
Btw, in this case it can’t be paranoia since the belief was not irrational - the author was being watched.
>Btw, in this case it can’t be paranoia since the belief was not irrational - the author was being watched.
Yes, but I mean being overly cautious in the threat model. For example, birds may be watching through my window, it's true and I might catch a bird watching my house, but it's paranoid in the sense that it's too tight of a threat model.
This too is not ideal. It gets saved in the browser history, and if the URL is sent by message (email or IM), the provider may visit it.
> Definitely uninstall whatever junk leaked your domain though, but it's really nothing.
We are used to the tracking being everywhere but it is scandalous and should be considered as such. Not the subdomain leak part, that's just how Rachel noticed, but the non advertised tracking from an appliance chosen to be connected privately.
>This too is not ideal. It gets saved in the browser history, and if the url is sent by message (email or IM), the provider may visit it.
Sure. POST for extra security.
> Not the subdomain leak part, that's just how Rachel noticed, but the non advertised tracking from an appliance chosen to be connected privately.
If this were a completely local product, like say a USB stick, sure. But this is a Network Attached Storage product, and the user explicitly chose to use network functions (domains, HTTP); it's not the same category of issue.
This highlights a huge problem with Let's Encrypt and CT logs, which is that the Internet is a bad place, with bad people looking to take advantage of you. If you use Let's Encrypt for SSL certs (which you should), that hostname gets published to the world, and the server immediately gets pummeled by requests for all sorts of fresh-install pages, like wp-admin or phpMyAdmin, from attackers.
It's not just Let's Encrypt, right? CT is a requirement for all Certificate Authorities nowadays. You can just look at the certificate of www.google.com and see that it has been published to two CT logs (Google's and Sectigo's)
No, it's made by systems made by people, systems which might have grown and mutated so many times that the original purpose and ethics might be unrecognizable to the system designers. This can be decades in the case of tech like SMTP, HTTP, JS, but now it can be days in the era of Moltbots and vibecoding.
I like only getting *.domain for this reason. No expectation of hiding the domain but if they want to figure out where other things are hosted they'll have to guess.
That’s really not a great fix. If those hostnames leak, they leak forever. I’d be surprised if AV solutions and/or Windows aren’t logging these things.
Let's Encrypt has nothing to do with this problem (of Certificate Transparency logs leaking domain names).
CA/B Forum policy requires every CA to publish every issued certificate in the CT logs.
So if you want a TLS certificate that's trusted by browsers, the domain name has to be published to the world, and it doesn't matter where you got your certificate, you are going to start getting requests from automated vulnerability scanners looking to exploit poorly configured or un-updated software.
Wildcards are used to work around this, since what gets published is *.example.com instead of nas.example.com, super-secret-docs.example.com, etc — but as this article shows, there are other ways that your domain name can leak.
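As an aside on why a wildcard only hides so much: the `*` covers exactly one left-most label, nothing deeper and not the bare apex. A rough sketch of the matching rule (in the spirit of RFC 6125, not a full validator):

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Check a '*.example.com'-style pattern against a hostname.
    The '*' stands for exactly one left-most label."""
    if not pattern.startswith("*."):
        return pattern.lower() == hostname.lower()
    suffix = pattern[2:].lower()
    parts = hostname.lower().split(".", 1)
    # require exactly one label in front of the fixed suffix
    return len(parts) == 2 and parts[1] == suffix

# '*.example.com' covers nas.example.com ...
assert wildcard_matches("*.example.com", "nas.example.com")
# ... but not deeper names or the apex itself
assert not wildcard_matches("*.example.com", "a.b.example.com")
assert not wildcard_matches("*.example.com", "example.com")
```

So one `*.example.com` cert keeps every first-level internal name out of the CT logs, which is exactly why the leak in the article had to come from somewhere else.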
So yes, you should use Let's Encrypt, since paying for a cert from some other CA does nothing useful.
Statistically, the amount of parasite scanning on LE-"secured" domains is way higher compared to purchased certificates. And yes, this is without voluntary publishing on LE's side.

I'm not entirely sure what LE does differently, but we observed this very clearly in the past.
Clown is Rachel's word for (Big Tech's) cloud.
was (and she worked at Google too)
> "clowntown" and "clowny" are words you see there.
Didn't know this, interesting!
There are guides on how to mainline Synology NASes to run up-to-date Debian on them: https://forum.doozan.com/list.php
Mind elaborating on this? SIP traffic from which year?
awright. settle down boy. grow some UNIX beard and I'll show some of my humility.
Sure, you can get _pretty_ close to it.
Here are the major DNS-level blocklists:

Most popular / recommended:
- OISD - https://oisd.nl - Big, Small, and NSFW variants; widely considered the best single all-in-one list
- Hagezi - https://github.com/hagezi/dns-blocklists - Light, Normal, Pro, Pro++, and Ultimate tiers; excellent maintenance and low false positives
- Steven Black's Hosts - https://github.com/StevenBlack/hosts - unified hosts with optional extensions (gambling, porn, social, etc.)
- 1Hosts - https://github.com/badmojr/1Hosts - Lite, Pro, and Xtra variants

Privacy-specific:
- AdGuard DNS Filter - https://github.com/AdguardTeam/AdGuardSDNSFilter
- EasyPrivacy (converted for DNS) - via Firebog or converted lists
- NextDNS CNAME Cloaking List - built into NextDNS

Curated collections:
- Firebog Tick Lists - https://firebog.net - green ticks = safe, blue = may have false positives; organized by category (advertising, tracking, malicious, etc.)
- Developer Dan - https://github.com/lightswitch05/hosts

Malware/security:
- URLhaus - https://urlhaus.abuse.ch
- Phishing Army - https://phishing.army
- NoTrack Malware Blocklist
- FireHOL IP Lists - https://github.com/firehol/blocklist-ipsets - aggregates 100+ IP blocklists into categorized sets (abuse.ch, DShield, Spamhaus, blocklist.de, Emerging Threats, etc.); categories: abuse, anonymizers, attacks, malware, reputation, spam
- IPsum - https://github.com/stamparm/ipsum - daily-updated threat intelligence feed aggregating 30+ blacklists; levels 1-8 based on how many lists an IP appears on
- Spamhaus - https://www.spamhaus.org - DROP (Don't Route Or Peer), EDROP (Extended DROP); SBL, XBL, PBL for email
- Emerging Threats - https://rules.emergingthreats.net - open rulesets for Snort/Suricata; compromised IPs, malware C2, botnets
- abuse.ch projects: Feodo Tracker (banking trojan C2s), SSL Blacklist (malicious SSL certificates), URLhaus (malware distribution URLs), ThreatFox (IOC-sharing platform), MalwareBazaar (malware samples)
- AlienVault OTX - https://otx.alienvault.com - community threat intelligence
- Blocklist.de - https://www.blocklist.de - IPs reported for attacks (SSH, mail, FTP, etc.)
- CINSscore - https://cinsscore.com - Collective Intelligence Network Security
- Binary Defense - https://www.binarydefense.com/banlist.txt
- Talos Intelligence - https://talosintelligence.com
- GreenSnow - https://blocklist.greensnow.co
- DShield - https://www.dshield.org - SANS Internet Storm Center
- Bruteforce Blocker - https://danger.rulez.sk/index.php/bruteforceblocker
Should I continue to spend another 5 seconds of my life answering your silly questions?
> [ .... ] over an encrypted channel
This is a different issue. Related, but different. You need DPI gear, and the solution is as old as time.
Scanning wildcards for well-known subdomains seems both quite specific and rather costly for unclear benefits.
You never could. A hostname or a domain is bound to leave your box; it's meant to. All it takes is sending an email with a local email client.

(Not saying otherwise - the NAS leak still sucks.)
I agree the web UI should never be monitored using Sentry. I can see why they would want it, but at the very least it should be opt-in.
also
> you notice that you've started getting requests coming to your server on the "outside world" with that same hostname.
If Firefox also leaks this, I wonder if this is something mass-surveillance related.
(Judging from the down votes I misunderstood something)
FWIW - it’s made of people
You meant you shouldn't, right? Partly for exactly the reasons you stated later in the same sentence.