How safe is TOR really?

In The Art of Invisibility, Kevin Mitnick goes over how TOR alone isn’t enough, and honestly, even with all the gymnastics he says you have to do at the end of the book to maintain an anonymous persona, you still aren’t truly anonymous.

Stuff like TOR is cool, but for a while now it hasn’t been about the data, it has been all about the metadata.

3 Likes

I spent about three weeks digging into this rabbit hole once.

At the end of the tunnel I came out going:

“Fuck it. None of this is worth the amount of effort and hassle.”

It’s honestly kinda terrible just how easy it is to get burnt out on the topic; says a lot about the state of things too

1 Like

I think what is feasible, and the effort required, is highly dependent on what you want to do; even with perfect technological anonymity, at no “cost” at all, what is the end goal:

  1. Are you trying to create an entire persona?
    Ex: an undercover agent/spy
  2. or are you having one conversation only?
    Ex: an anonymous source
  3. or are you making a one-off submission?
    Ex: a whistleblower, or more maliciously: a doxxer or someone dumping a trove of stolen Twitch data
  4. or are you just browsing, but not writing or creating/submitting anything?
    Ex: reading news in some authoritarian state

I suppose an awful lot of variance falls into the persona category I have concocted; pretending to be a normal forum user might require much less time than emulating a prolific user of social media.

2 Likes

You can use a VPN, then run a VM with another VPN, and another, then TOR. Depends on your care factor.

Me, I google boobs all the time. Oops, bombs.

1 Like

People argue that VPN-to-VPN-to-Tor is worse than just using Tor, precisely because of traffic-flow analysis, plus the fact that you can’t be anonymous when using a VSP (VPN service provider). The best it gets is Mullvad paid for with crypto, but they still see your IP address, so a powerful adversary will get you nonetheless.

But if you just want to hide something that you wouldn’t want your coworkers to know (idk, you’re into BDSM or ASMR or other four-letter, dubious-sounding activities), you don’t need n VPNs and Tor. In fact, just Tor or just a VPN would suffice. It depends on your threat model and what you are willing to do.

1 Like

It comes down to: does the VPN really not save logs?

Many VPNs don’t save logs until they do, as happened with ProtonMail in Switzerland (the French LEO asked Interpol to ask the Swiss LEO to force ProtonMail to log a user’s IP). This can happen to any VPN company in the 14 Eyes, which includes Mullvad. And you can’t know for sure that it won’t happen to one outside the 14 Eyes either, like BoxPN.

So the conclusion is that you cannot trust people; you can only trust math and sane designs. And even then, you still have to worry about implementations and vulnerabilities.

I have not seen this mentioned here, so I will throw it out. Back in the late 2000s and early 2010s I was involved in a project known as garlic routing.

The basic concept is that while an onion has many layers, you can still be identified by that one onion. Garlic routing takes a slightly different approach. A head of garlic has many cloves; while your traffic is still attached to the garlic, it can be broken up into smaller pieces that detach and reattach to different cloves along the way, making you look like normal traffic. The only problem with this is that it is slower than onion routing, and while you can define how many times your traffic can detach and reattach, exit nodes can override that and force more, but never fewer.

Just like Tor, it uses its own protocol, and because there were far fewer users of the system, it was extremely slow. The other issue, good or bad, was that it could never drop out to access the regular internet. You would never be able to use it to check your Gmail.
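To make the clove-and-garlic idea a bit more concrete, here is a toy Python sketch (purely illustrative, not I2P’s actual message format; the fixed GARLIC_SIZE and the JSON framing are made up) of bundling several users’ messages into one fixed-size, padded blob:

```python
# Toy sketch of the clove/garlic idea (illustration only, not I2P's real
# format): small messages ("cloves") from different users are bundled and
# padded into one fixed-size "garlic", so an observer can't tell how many
# real messages it carries or how big they are.
import os
import json
import base64

GARLIC_SIZE = 1024  # hypothetical fixed bundle size in bytes

def bundle_cloves(cloves: list[bytes]) -> bytes:
    """Pack cloves into one garlic, padding with random junk to a fixed size."""
    body = json.dumps([base64.b64encode(c).decode() for c in cloves]).encode()
    if len(body) > GARLIC_SIZE - 4:
        raise ValueError("too many cloves for one garlic")
    padding = os.urandom(GARLIC_SIZE - 4 - len(body))
    return len(body).to_bytes(4, "big") + body + padding

def unbundle(garlic: bytes) -> list[bytes]:
    """Strip the padding and recover the original cloves."""
    length = int.from_bytes(garlic[:4], "big")
    body = garlic[4:4 + length]
    return [base64.b64decode(c) for c in json.loads(body)]

if __name__ == "__main__":
    garlic = bundle_cloves([b"alice->example.org GET /", b"bob->news.site GET /feed"])
    print(len(garlic))       # always 1024, regardless of contents
    print(unbundle(garlic))  # both cloves come back out
```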

4 Likes

Based!

1 Like

From my understanding, the Navy didn’t develop this awesome routing protocol (TOR) alongside DARPA and say “Oh shit, we don’t know what to do with this, let’s just give it to MIT”. The release to the public was always part of the plan and critical to how TOR works. If the only people using TOR were alphabet-agency spooks, diplomats, spec-ops, etc., then it would be entirely useless. Anybody looking for endpoint connections on the network would immediately know they’re looking at a person of interest.

By letting the TOR network be open for anyone to use, it provides security through obscurity for the state actors using the network. It could be a US agent connecting, or a criminal, or a paranoid schmuck surfing eBay; there’s no way to tell them apart. That is the layer of anonymity that the US Govt wanted when they released TOR to the public.

2 Likes

This. Same with p2p file sharing protocols. Anonymity comes from blending into the background noise. Using a unique protocol — specifically designed to avoid detection — that is used by only a tiny number of people is like painting a massive bullseye on your chest and waving a red flag.

“Obscurity” got a bad name from the “Security vs Obscurity” debate, but it really does have a place in the toolkit of folks that care about privacy.

4 Likes

As with most things, everything is good in moderation, and the truth is “a little of column A and a little of column B”.

It’s like saying:

Bank Manager: “We have the most secure bank in the world! We’re located at ***** ; just try and break in!”

a short time later

Same Bank Manager: “…I can’t believe they managed it…”

lol the new CoD DRM comes to mind where they did the same thing, effectively.

You should always strive to implement good, multi-layered security, but why on earth would you then go and fucking brag about it? Your security layers aren’t something you want people testing.

1 Like

“nvidia unhackable driver”

I would argue you would want people testing your security, to make sure it actually works. That’s why Tor is public.

To stay on topic: TBH, most of Tor’s “issues” aren’t really issues with Tor the protocol (onion routing), but user errors and modern bloated-web issues. If you are using sites you trust (basically like the alphabet boys do), it should be fine, but when you don’t trust the sites you browse and you keep JS enabled, you always risk hitting a honeypot or a malicious website and getting in trouble for no good reason other than curiosity.

Obscurity vs obfuscation

It has a bad name for a good reason. Just because you run Telnet on port 22 (the SSH port) or an unencrypted website on port 443 doesn’t make those protocols secure or even hidden; anyone can still see the unencrypted traffic. And arguably this is somewhat better than what some people do, running insecure protocols like RDP on an ephemeral port (this gets discovered within minutes, and you can see the failed login attempts in the logs). But still, it’s not secure.
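The “discovered within minutes” part is because service scanners don’t care which port you picked; they look at what the service actually says. A rough Python sketch of the idea (the target address is a placeholder; real tools like nmap -sV automate this far more thoroughly):

```python
# Crude banner grab: connect to an arbitrary port and read whatever the
# service volunteers. SSH answers "SSH-2.0-...", Telnet sends IAC negotiation
# bytes, and so on, regardless of which port the daemon listens on.
import socket

def grab_banner(host: str, port: int, timeout: float = 3.0) -> bytes:
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        try:
            return sock.recv(256)
        except socket.timeout:
            return b""

if __name__ == "__main__":
    # Placeholder target address; the port number hides nothing about
    # what is actually running behind it.
    print(grab_banner("192.0.2.10", 22))
```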

I think the term you are looking for is obfuscation. Obscurity and obfuscation are sometimes used interchangeably, but they are not quite the same (and no, I’m not “akshually”-ing you). In a dictionary, obfuscation means “hiding the truth behind complicated-sounding words” (basically what politicians and lawyers do), while obscurity means “hard to see or rare.” In IT, obscurity has the meaning of “hiding in plain sight,” like the aforementioned usage of insecure protocols on the well-known ports of secure protocols or on ephemeral ports, while obfuscation means “blending with something else.”

To give some examples, there are privacy tools that rely on obfuscation, like AdNauseam and TrackMeNot. AdNauseam is an ad clicker, as opposed to a mere ad blocker: it hides ads from a webpage, but it also registers a click on every ad it blocks. The content doesn’t get loaded on your machine, but the ad servers see a registered click. That way your real preferences get hidden in plain sight, and profiling you becomes pointless, because you “like everything” and “click on everything.” TrackMeNot works in a similar fashion: the obfuscation comes from sending random search queries to major search engines at random intervals (like searching for “dog food” on your behalf when you don’t have a dog). That way your real queries are hidden in a sea of useless garbage queries, and there is not much anyone can do to profile you.

Over the years there might have been updates to search engines and ad networks to prevent this kind of tampering; I’m not sure if those browser extensions got updated to counteract them (simple things like requiring a click on a result link would show which queries were real and which were fake, so TrackMeNot would also need to send a fake click on links, just as an example; I’m not sure what is happening behind those, because I’m not using them anymore).
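To give a feel for the TrackMeNot approach, here is a toy Python sketch (my own illustration, not the extension’s actual code; the word list, search endpoint, and timings are invented) that fires off random decoy searches at random intervals:

```python
# Toy TrackMeNot-style noise generator (illustration only): issue random
# decoy queries at random intervals so real searches drown in garbage.
import random
import time
import urllib.parse
import urllib.request

DECOY_TERMS = ["dog food", "lawnmower repair", "cheap flights", "knitting patterns"]
SEARCH_URL = "https://duckduckgo.com/html/?q={}"  # example endpoint

def send_decoy_query() -> None:
    query = random.choice(DECOY_TERMS)
    url = SEARCH_URL.format(urllib.parse.quote_plus(query))
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        resp.read()  # fetch and discard; only the server-side log entry matters
    print("decoy query sent:", query)

if __name__ == "__main__":
    for _ in range(3):  # a handful of decoys for the demo
        send_decoy_query()
        time.sleep(random.uniform(30, 300))  # random gap between decoys
```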

Onion vs garlic obfuscation

This gets into the obfuscation used by onion routing and garlic routing. Both work by creating virtual tunnels that your traffic has to go through. In onion land, obfuscation comes from the fact that all users’ traffic goes through certain nodes and gets mixed together, so you don’t know which request comes from where. With Tor, it gets a little sketchy when a powerful actor can launch many nodes and monitor the traffic, because onion routing is vulnerable to timing analysis (basically monitoring the metadata, so one can guess with pretty good accuracy that a certain computer made a connection to another computer when you control part of the tunnel, and especially when you also control a honeypot). Garlic routing obfuscates traffic by combining multiple users’ packets (“cloves”) into a single bigger packet (“garlic”). When there is not enough traffic to fill a whole “garlic,” from what I recall, junk padding is added to hide the real size, making timing and traffic analysis much harder, if not impossible. A bonus point for garlic is that every user is a node (router) and the tunnels are short-lived, so traffic can go anywhere and you can’t guess which user goes where, or whether one goes anywhere at all and isn’t just relaying traffic from other users.
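To illustrate just the layering part of onion routing, here is a toy Python sketch (it assumes the third-party cryptography package, uses pre-shared Fernet keys, and skips circuit building entirely; real Tor negotiates ephemeral keys per hop):

```python
# Toy onion wrapping: encrypt the payload once per hop, innermost layer for
# the exit, outermost for the guard, so each relay can peel exactly one
# layer and learn only the next hop. Real Tor negotiates ephemeral keys per
# circuit; this only shows the layering idea.
from cryptography.fernet import Fernet

hops = ["guard", "middle", "exit"]
keys = {hop: Fernet.generate_key() for hop in hops}

def wrap(payload: bytes) -> bytes:
    # Wrap for the exit first, then middle, then guard (outermost layer).
    for hop in reversed(hops):
        payload = Fernet(keys[hop]).encrypt(payload)
    return payload

def relay(onion: bytes) -> bytes:
    # Each hop peels one layer; only the exit ever sees the cleartext.
    for hop in hops:
        onion = Fernet(keys[hop]).decrypt(onion)
    return onion

if __name__ == "__main__":
    packet = wrap(b"GET / HTTP/1.1")
    print(relay(packet))  # b'GET / HTTP/1.1'
```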

Garlic routing suffers from the same web 2.0 issues that onion routing does, so a user can be deanonymized through malicious fingerprinting code (JS) and through user habit and speech (unless you only live inside the darkweb and you don’t have a persona on the clearnet - or if you did have one in the past, you used different enough speech and patterns to not be recognizable, not to mention not using the same usernames).

Anyway, the original point, which I deviated from, was that obscurity is garbage when it comes to security; obfuscation has its benefits in regard to privacy; but privacy cannot exist without security, and people could use a good amount of both.

1 Like

There are very practical and life-saving reasons to use TOR, such as an independent reporter in an oppressive country whose regime wouldn’t think twice about launching a missile at that reporter’s location.

TOR has its uses. It adds a layer of security, but how you use it is the key to making it secure.

If you want to use it, you have to determine who you are trying to protect yourself against: a full-on government-sponsored agency, or just hiding your Emoji Movie torrent from your ISP (or anything in between). Then use it accordingly to achieve the level of anonymity you need.

BTW: Please don’t use TOR for torrents. It just slows down the network for everyone else. I was just making a point.

1 Like

True, but that is not the context in which the term is being used now, so that old argument is not relevant.

The context we were talking about was “hiding in plain sight” so obscurity is precisely the right word to use. Obfuscation is different, and not what we were talking about.

If your activities are responsible for 10% of the total traffic transferred via a protocol like TOR to/from a particular server, it is relatively easy to work out who you are. If your activities are responsible for 0.0000001% of the total, it is orders of magnitude harder.

You do not need to manipulate (i.e. obfuscate) your connection or requests to benefit from the security that being 1-in-a-billion provides. You are inherently a smaller, less-noticeable and more difficult-to-isolate target. You are almost indistinguishable from background noise.

When you are 1-in-10, the concept of background noise barely even exists.

Increasing the number of TOR users would increase the background noise level, and thus raise the bar for detection for everyone.
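To put some toy numbers on the 1-in-10 vs 1-in-a-billion point, here is a back-of-the-envelope Python sketch (the 0.1% false-positive rate for a traffic-correlation test is invented purely for the sake of the example):

```python
# Toy base-rate illustration: even a fairly accurate traffic-correlation
# test drowns in false positives once the crowd of users is large enough.
FALSE_POSITIVE_RATE = 0.001  # hypothetical: wrongly flags 0.1% of innocent users

for users in (10, 1_000, 1_000_000_000):
    expected_false_matches = (users - 1) * FALSE_POSITIVE_RATE
    print(f"{users:>13,} users -> ~{expected_false_matches:,.0f} innocent users "
          f"would also match the suspect's traffic pattern")
```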

“Security vs Obscurity” is a dated and dangerously narrow argument that rail-roads people into simplistic thinking — “obscurity bad”. The real world is much more nuanced than that. Context is everything.

Again, different terms. You are not obscuring your traffic: ISPs and any powerful opponent can see that you are generating Tor traffic. What you are doing by using Tor is obfuscating your traffic, because they cannot know which traffic is yours. They can see “what” (Tor traffic), but they cannot see “which.” That’s obfuscation, not obscurity.

Again, context is everything. We’re not talking about the choice of using TOR or some other protocol. Yes, the choice of protocol is an obfuscation technique; we are not talking about that. We are talking about increasing the number of TOR users (technically, the volume of data generated by those users). So all things in the universe stay the same EXCEPT the number of users (the volume of data). Will you be more secure if you are 1-in-10 or 1-in-a-billion? It’s a no-brainer.

Evolution has proven time and time again that ‘flocking’ is a viable security and survival strategy. There has always been, is, and will always be ‘safety in numbers’ when predators are out to get you.

I don’t understand what you are trying to say.

Obviously 1-in-10 < 1-in-1b. If you are a lone Tor user, your traffic is identifiable. Combine it with 9 other users and you all obfuscate your traffic. Combine that with billions of users and you still obfuscate traffic, just on a larger scale.

But that traffic is still seen as Tor traffic, not as https traffic or ssh traffic. What’s not known is where it goes, because that’s the point of obfuscation.

I said that obfuscation and obscurity are usually used interchangeably, but they are not the same thing. That was my point. I don’t understand your point though.

My point is that statements like “obscurity is garbage when it comes to security” are flat-out wrong and should cease to be spread, as they are too simplistic to have any meaningful value. It is true that obscurity does not provide any meaningful increase in security in a handful of cherry-picked cases. Increasing the number of TOR users is not one of them. More users == more security for all, thanks to the obscurity that being part of a larger group provides.

Well, I didn’t say that Tor is garbage; I said obscurity doesn’t mean security, explicitly explained what I meant by obscurity, and explicitly mentioned why obfuscation is not bad. Then you used the two terms interchangeably, saying that I said obfuscation is garbage, which I did not.

1 Like