I used to see Pi-hole mentioned alongside talk of blocking ads on Pandora and the like, and I felt it was almost being used to steal content. I get that NOBODY enjoys ads, but advocating skipping ads on otherwise-free content REALLY bothered me, and I figured Pi-hole wasn’t my style.
Now I’m the total opposite; I wish I’d started using it when I first read about it around 2014, or maybe a bit later. I can’t (or don’t want to) imagine all the wasted network energy spent sending data to Google, Microsoft, Netflix, Amazon, Roku, and hundreds of other companies. Wow.
This is the thing with ads on the internet: they use energy and network bandwidth, plus your local device’s compute, RAM, and storage, as well as remote number-crunching compute, RAM, and storage (mostly remote, though you usually don’t get to see that part). Some, but not all, of it is paid for by companies who want to sell you a product or put an idea in front of you. Often that money also pays for the “property”/“publisher”/“sell side” business, like Pandora or Twitter or Google Search, among others.
The better the “targeting” and ad-selection algorithm, the more value businesses like Pandora or news websites get per ad, and the more revenue per piece of published content.
The trouble is privacy, and that ad-free alternatives are hard to set up in an easily monetized way; some websites and services still show you ads even after you pay, with no ad-free option at all.
I like to see all the domains and such in the list, go back a few days, check what still works, etc.
I like being able to have adlists for specific devices, because some domains operate on multiple devices but for different reasons; Microsoft domains in particular. So I can block those on the desktop and let one or two through on a different system.
Later I’ll share some of my shorter blocklists and custom blacklist entries for common mobile operating systems.
Blocking 2,000,000 domains is too much; I don’t see the advantage of blocking websites you’ll never visit.
One example for me is blocking mobile advertising, since a lot of it can carry malware or just phone home to Google and others. Better to buy the app and block the adware anyway.
I do not agree… It is not too much; in my opinion it is just right. It’s about quality, not just quantity. Blocking at the DNS level has little to do with which websites you visit: the point is to block the addresses you are not even aware of. I have no problem reaching 20% blocked queries with my lists.
You can think of it in a similar way to virus-signature databases. A large database is not a problem; 99% of the signatures in it will never apply in your case. But that does not mean such a database should be limited to only the group of signatures that has some probability of occurring for you.
Likewise with adlists… if you use only good-quality adlists, the quantity will be balanced with the quality in good proportion. Nobody is saying that you should go for quantity and use 50 million records from any larger adlist. Excess with poor quality will of course lead to false positives.
I personally have no false positives. I do not pick adlists on the basis of as many domains as possible, only on adlist quality and reputation.
I have been doing this for years and I am happy with it, but if you prefer another approach, good for you.
A quicker thing to do would be to install and run Wireshark, one of the most respected network-analysis tools out there, and it’s free and open-source. You can’t do much better than that.
You’ll likely (very likely) want to temporarily filter out local chatter like ARP, so go into the display filters and only show the following ports:
53 (DNS, the phone-book of the Internet)
80 (non-encrypted HTTP traffic)
443 (encrypted HTTPS)
and any others that may be relevant to you. That way you won’t see all the ARP chatter (“Who has 192.168.1.x? Tell 192.168.1.y”, etc.).
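In Wireshark’s display-filter syntax, restricting the view to those three ports looks something like this (assuming DNS runs over the standard port on both UDP and TCP):

```
udp.port == 53 || tcp.port == 53 || tcp.port == 80 || tcp.port == 443
```

Alternatively, recent Wireshark versions let you filter by protocol dissector instead of port, e.g. `dns || http || tls`.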
Then, when you do get into Pi-hole, instead of IP addresses you’ll see the website names, and you can block what you don’t want.
As a side note, I would like to recommend running your own DNS server, Unbound. It’s amazing, I believe it has built-in security, and it goes directly to the main DNS record-keepers of the planet, with no intermediate third parties. I think that’s how it works.
If you run unbound without any specified upstreams, then yes, it defaults to the root servers and recursively resolves addresses from the authoritative servers (a.root-servers.net, b., c., and so on). But even if you use DNSSEC, you’ve only assured yourself that the results are valid; you have not done anything to ensure privacy (your queries are authenticated, but not encrypted, so your local ISP, and any intermediate nodes, can watch everything you do).
So, what to do if you want unbound to ensure privacy? You need to set up DoT (DNS over TLS) and use a different upstream than the root servers. I recommend Quad9 as the upstream; their whole model is built around privacy and proving that they mean what they say (they moved from the US to Switzerland because the Swiss privacy laws are MORE restrictive and demanding, just to prove a point). Quad9 also has several services (via different IPs): the standard 9.9.9.9 one I show below does Pi-hole-like filtering of known malicious sites (see their descriptions at quad9.net), but you can also get unfiltered/unsecured results (9.9.9.10) and so on.
Here’s my config for using their servers in unbound:
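In outline, a Quad9 forward-zone over DoT in unbound looks something like this (the `tls-cert-bundle` path below is the Debian/Ubuntu default and is an assumption; adjust it for your system):

```
server:
    # Required so unbound can verify the upstream's TLS certificate
    tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt

forward-zone:
    name: "."                     # forward all queries
    forward-tls-upstream: yes     # use DNS over TLS (port 853)
    # Quad9 filtered, DNSSEC-validating service (primary and secondary)
    forward-addr: 9.9.9.9@853#dns.quad9.net
    forward-addr: 149.112.112.112@853#dns.quad9.net
```

The `@853#dns.quad9.net` suffix tells unbound which port to use and which name to expect on the upstream’s certificate.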
Why should I avoid using the root servers? I’d think that would be the most accurate path to take. It’s why I set up unbound; I figured it would be the safest way to get to the correct place. So would going through Quad9 mask my queries, or attach some layer of cryptography to my DNS requests?
I read that Quad9 openly recommends DNS caching and probably other cool things, and now I learn that they adhere to strict standards. Seems like a cool organization/business.
The DNS root level is the highest in the DNS hierarchy tree because it is the first step in resolving a domain name. The root servers are the authoritative nameservers for the root zone: they answer requests for records in the root zone itself, and answer all other requests with lists of the authoritative nameservers for the appropriate TLD (top-level domain). The root zone contains the global list of top-level domains, including the following:
Organizational hierarchy such as .com, .net, .org, .edu.
Geographic hierarchy such as .ca, .uk, .fr, .pe.
Currently, there are 13 root name server identities specified, with logical names of the form letter.root-servers.net, where letter ranges from A to M; they are operated by organizations such as Verisign, the University of Maryland, NASA, and ICANN (the Internet Corporation for Assigned Names and Numbers).
Avoid the root servers??? The hierarchy works a little differently, and I would not frame it as a matter of avoiding or not avoiding, because it sits above the level a user needs to worry about.
The root servers, like most authoritative servers, are configured not to do recursive resolution, which is what your ISP’s DNS servers are set up for.
There’s a good reason for that: back in 2001, when O’Reilly published DNS and BIND, 4th Edition, they were already handling thousands of queries per second. I can’t imagine they’re handling less traffic now.
So, they’ll instead cheerfully tell you which DNS servers to ask next, expecting you to iteratively ask one DNS server after another down the chain, until you get an answer from the one that actually holds the information, or can tell you definitively that no such thing exists. This is what your ISP’s servers are doing behind the scenes when your PC asks them, “What’s the IP address of www.microsoft.com?”
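That referral chain can be sketched as a toy resolver over made-up zone data (the server names and the IP below are purely illustrative, not real DNS records):

```python
# Toy model of iterative DNS resolution: each "server" either answers
# authoritatively or refers the resolver to a more specific set of
# nameservers, just as the root refers you toward the TLD servers.
ZONES = {
    # root servers: don't know www.microsoft.com, but know who runs .com
    "root": {"referral": "com-servers"},
    # .com TLD servers: refer to microsoft.com's authoritative servers
    "com-servers": {"referral": "ms-servers"},
    # authoritative servers: actually hold the record (documentation-range IP)
    "ms-servers": {"answer": "203.0.113.7"},
}

def resolve(name: str, server: str = "root") -> str:
    """Follow referrals from the starting server until one answers."""
    while True:
        zone = ZONES[server]
        if "answer" in zone:
            return zone["answer"]       # authoritative answer found
        server = zone["referral"]       # ask the next server in the chain

print(resolve("www.microsoft.com."))    # three hops: root -> .com -> authoritative
```

A real recursive resolver does the same walk over the network (and caches each referral), which is why your ISP’s server can answer in one round trip the second time you ask.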