
Yes. BUT a hypothetical TLA knows that. They don’t even need to torture you. Do you have loved ones? Friends? Family? They don’t even need to torture them either; they just need to promise you that they’ll make their lives as miserable as possible. Your own pain threshold is something completely different from inflicting something like that on others. Of course they still can’t force you to give out a password - it’s just that, for most people, potential jail time is the better option.

So, theoretically you can’t be forced to give out a password, but practically everybody’s vulnerable. Kind of reminds one of Mob tactics, doesn’t it?

Most people aren’t willing to go that far.

They do, but not because of some moral values.
Having people know about their doings works against them in many ways. A reputation for being immoral (or corrupt, etc.) makes it way harder for them to stay unnoticed, recruit new people (this is a real issue, BTW!), get new budget requests through, etc.

Now, getting the press to be on your side, that is the real issue here. And you can’t plan on that, which kind of makes my point moot.

Criminals don’t have it easier in general. TLAs have the infrastructure, the knowledge, the will, and the history.
I meant specifically the give-me-your-password-at-gunpoint thing.

It’s not. If somebody has physical access to your machine, you’ve lost. No anti-tamper hardware is going to protect you effectively.
Booby traps can only work if your adversary does not know about them.
Same goes for self-destroying hard drives, etc. - Did you know you can buy NAND flash with built-in gunpowder? It still won’t protect you, though.

The sophistication of possible attacks is just too high if you assume a TLA really wants to get you. Crypto keys in RAM? Secure enclave? They’ll bore a hole in the lid of your RAM/CPU, pour liquid nitrogen in, pull the power, and analyze the keys with a scanning electron microscope. Seriously, attacks like that have been demonstrated.

Not an FBI agent, sadly (or any glowie, for that matter). They would get paid to write this stuff :stuck_out_tongue:
Just don’t trust me more than I trust you.

No. You have to interact with a lot of systems that are potentially compromised. Ever written an email or a forum post, or gotten money from an ATM? There are so many computers in these kinds of systems that I can’t tell you whether one of them might be compromised.
Of course, potentially is the key here. If you find TLA malware (you’ve got a good eye… :open_mouth: ), of course you should get rid of the device, but probably not before talking to a lawyer.

My point is: before you use a device for anything, but especially for compromising things, make sure you know what the impact of a compromise would be. If all you do is watch pr0n and Netflix (the lonely version of Netflix and chill), you don’t need to worry much.
If you’re a crypto gazillionaire, think twice before accessing your funds, even on a device you mostly trust.

The problem really is that there is no such thing as perfect security.
The person who can and has read and deeply understood every line of code, every transistor on a chip, and every system that interacts with any of that doesn’t exist. If you claim to be that person, you’re either some hyper-intelligent super-AI, or you’re plain wrong.
If you want to do secure things on a computer, NEVER ASSUME PERFECT SECURITY.

I mean legit we JUST started this…

True, but irrelevant. The point of this thread is to canvass ideas for improving opsec. Improving. Not perfecting. Improving. It matters not one iota that neither X nor Y nor Z are perfect. What matters is that they are all better than nothing.

It is up to the individual to consider their own circumstances and decide what, if anything, from this thread is of value. That’s not your call to make.

Shooting down ideas on the basis that they are “not perfect” is counter-productive. Even if you are right. A defensive layer that is only 24% effective is still better than no layer at all. It is entirely possible to add enough layers to opsec for an individual to be confident that what they have in place has raised a sufficiently high barrier to withstand the threat(s) they are personally concerned about. That’s what SECURE means in this context. It doesn’t mean 100%. No-one cares about 100%. “Perfect security” is not relevant to the discussion.

2 Likes

“Thin clients” booting over wireless.

Have a laptop with WiFi and a lot of RAM, but no onboard storage (HDD or SSD). Configure the BIOS/UEFI to retrieve the boot image wirelessly (using TFTP or similar). You end up with a long boot time, but once booted your entire operating environment resides in RAM.
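For the curious, the server side of that can be sketched in a few lines. This is only a rough illustration, assuming the third-party Python tftpy package and a made-up /srv/tftp directory holding the kernel/initramfs; a real setup also needs DHCP options (or an iPXE stage) pointing the laptop at this server.

```python
# Minimal TFTP boot-image server sketch (assumes `pip install tftpy`).
# /srv/tftp is a placeholder directory containing the kernel/initramfs.
import tftpy

server = tftpy.TftpServer("/srv/tftp")
server.listen("0.0.0.0", 69)  # port 69 requires root privileges
```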

The laptop’s battery acts as a UPS, so no unexpected power interruptions (and subsequent data loss) should occur.

All you need to do to completely wipe your system is power it off or pop the battery out.

If you use your laptop on a table most of the time, you can deliberately break the battery retention clip, so that unless it is picked up in a very specific way, the battery will simply fall out. Thus if someone attempts to steal/seize your laptop, it gets auto-wiped the moment it gets lifted off the table.

Since booting is over WiFi, the actual server that holds your operating system, and all of your data, is located somewhere else… where “somewhere” could be, in effect, anywhere — a different room, underground, a different building, a different town, city or even country.

A non-booting laptop with no internal storage will take a sufficiently large amount of time to examine and understand that, if and when the means of operation is finally deduced, a deadman’s switch (or similar protocol) has already activated to safeguard all remote data.

For bonus points, actually have internal storage but configure your system to never mount it. Take a payload and encrypt it in the strongest way possible. Then encrypt that with something weaker. Then something weaker again. Repeat a half-dozen times so you end up with something like a Russian nesting doll of encrypted/encoded layers. Have that file be the only thing stored on the drive. Since the weakest layer will be on the outside (perhaps BASE-64, ROT-13, or similar) anyone examining the drive should be able to peel the outer layer relatively easily. The next one should be doable, but a little trickier. The typical mentality of people examining such drives, driven by easy and early success, should guarantee that they will spend inordinate amounts of time and effort trying to unwrap the whole thing.
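If anyone wants to play with the nesting-doll idea, here is a toy sketch of the layering, assuming the third-party cryptography package for the innermost (strong) layer; the file names are placeholders and the whole thing is a decoy, not a protection scheme.

```python
# Toy "nesting doll" layering sketch - the outer layers peel off easily,
# the innermost one never will (the key isn't even kept; it's a decoy).
import base64, codecs
from cryptography.fernet import Fernet  # assumes `pip install cryptography`

payload = open("payload.bin", "rb").read()      # hypothetical payload file

# Layer 1 (innermost, strongest): authenticated symmetric encryption.
key = Fernet.generate_key()                     # deliberately thrown away
blob = Fernet(key).encrypt(payload)

# Layer 2: a deliberately weak single-byte XOR "cipher".
blob = bytes(b ^ 0x42 for b in blob)

# Layer 3: ROT13 over an ASCII (Base64) representation.
blob = codecs.encode(base64.b64encode(blob).decode(), "rot_13").encode()

# Layer 4 (outermost, weakest): plain Base64, trivially peeled.
blob = base64.b64encode(blob)

open("decoy.bin", "wb").write(blob)
```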

Of course, your ‘payload’ would probably have to be a copy of…

this

Rick Astley - Never Gonna Give You Up (Official Music Video) - YouTube

May as well mess with ‘their’ heads for a change.

1 Like

The best opsec (legit guide 2021):

  1. Don’t use electricity - at all
  2. Grow your own food, only using open-source soil and water (compiled from source)
  3. Be born && die before computers / databases / the internet were even invented

E.Z., follow this guide and you will be fine :^)

2 Likes

I posted about such things because those are the protections that interest me, more of them have general applicability, and they are things one could actually fix rather than merely mitigate.

Having multiple completely independent setups of devices in Faraday cages, with (if you want to take a quasi-psychological approach) entirely different keyboards and distinct surrounding decor, so that you can’t even unconsciously mistake one setup for the other and enter information (passwords, site visits, etc.) in the wrong domain, is perhaps the ideal. But at some point it really just becomes a game of maximising separation to impractical degrees.

I think it is far more beneficial to primarily try to fix surveillance/tracking with technical fixes that more people can easily benefit from. Surveillance, be it foreign, private, or public-sector, will only become easier and more automated over time, and I worry that mainly focusing on buying more computers will not really improve the situation as a whole, whereas technical approaches can create a market for computers which are themselves safer.

As I see it, privacy/security/open-source enthusiasts will always be better protected, but work on technical solutions can act as a proving ground for privacy technologies that less niche users may be able to use. For example, ad-blockers, the Signal protocol, and full-disk encryption come to mind.

That said, I am all for hearing interesting ideas for improving the human component, but maybe just ease up a little on the fatalism?

I think relative benefits could be interesting to discuss, as long as it is a discussion and not just a continuous cycle of “this sucks”, “no, this sucks”, “go live in the woods”.

Tam, ta ra ra, ta tam and he’s done it. I don’t like shilling for channels other than L1T, on their own forum no less, but this is good:

It’s not a step-by-step tutorial, but more of an overall “things to do.” It has links to some good resources, like the NSA Linux hardening docs.

Some of the things in the video are debatable, like flatpak, VMs, Whonix and Qubes, but overall it’s ok.

1 Like

Instead of breaking battery retention clips, maybe use this tool and keep a USB device in a loose port, tethered to your wrist or belt?
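Something like this could sit behind that tether - a rough sketch assuming Linux and the third-party pyudev package; a real version would match on the specific device’s serial instead of reacting to any USB removal.

```python
# Power off the moment a USB device is yanked out (on a RAM-only system,
# that wipes everything). Assumes Linux + `pip install pyudev`; run as root.
import subprocess
import pyudev

context = pyudev.Context()
monitor = pyudev.Monitor.from_netlink(context)
monitor.filter_by(subsystem="usb")

for device in iter(monitor.poll, None):
    if device.action == "remove":
        # TODO: check vendor/serial attributes before pulling the plug.
        subprocess.run(["systemctl", "poweroff", "-i"])
```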

So basically we need to live in Minecraft? :stuck_out_tongue_winking_eye:

1 Like

In the future we will all live in something like The Matrix, but it’s just Minecraft :joy:

1 Like

Any guide to “Paranoid schizophrenic” OpSec without using/considering Qubes instead of any other OS is fundamentally not paranoid enough.

Also, as above, all the security measures in the world won’t work very well against handcuffs and a lead pipe or psychedelic drugs.

Unless you can rapidly distance yourself from the “secure machine” in a way that feasibly isolates you from any connection to it, the TLA can just beat the shit out of you to get into it.

So really, if you want to be properly secure, you probably need to (in addition to running a secure endpoint) run/store your stuff on somebody else’s machine that they can’t physically confiscate. Let’s say a cloud provider in Russia, or somewhere else that will tell the relevant TLAs you’re worried about to go take a hike when asked to hand over your stuff or presented with a warrant.

Store the encryption key(s) on something easily destroyed in an emergency. Timebomb that remote VM or whatever to self-destruct if it isn’t tended to every 24-48 hours (or 6, or 12 or whatever), so that even if somebody gets your keys, and even if they work out what you were connecting to remotely, and even if they convince hostile third party country X to cooperate, by the time all that is lined up the machine has nuked itself.
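The “tend it or it nukes itself” part is basically a dead man’s switch. A minimal sketch, assuming it runs from cron on the remote VM and that every legitimate login touches a check-in file; all paths here are made up.

```python
# Dead man's switch sketch: if nobody has checked in for 48 hours,
# overwrite and delete the volume key, then power the VM off.
import os, subprocess, time

CHECKIN_FILE = "/var/lib/dms/last_checkin"   # `touch` this on every login
KEY_FILE = "/root/secrets/volume.key"        # hypothetical key location
MAX_AGE = 48 * 3600                          # seconds

if time.time() - os.path.getmtime(CHECKIN_FILE) > MAX_AGE:
    if os.path.exists(KEY_FILE):
        size = os.path.getsize(KEY_FILE)
        with open(KEY_FILE, "r+b") as f:     # crude overwrite-then-delete;
            f.write(os.urandom(size))        # not a guaranteed secure erase on SSDs
            f.flush()
            os.fsync(f.fileno())
        os.remove(KEY_FILE)
    subprocess.run(["systemctl", "poweroff"])
```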

3 Likes

I would argue that having physically separate hardware and using different networks (so as to avoid inter-device communication on a LAN, for example, and to keep potentially compromised devices from doing shady stuff like trying to break into your switch or router and scanning traffic) is inherently more paranoid, because you’re so paranoid that you don’t even trust an OS to securely sandbox different programs.

1 Like

Qubes isn’t just about sandboxing programs from one another.

It’s about sandboxing your drivers and input devices from your data too.

So even if you use separate hardware for separate purposes, Qubes still has value. Your network, firewall code, and IO drivers run in different VMs from your data.

If you’re properly paranoid, you don’t trust hardware. Qubes is at least open, so it’s more trustworthy than firmware blobs.

3 Likes

Cue Beavis and Butt-Head reference

1 Like

This sounds cool, but it’s a potential liability, i.e. getting hit with a (somewhat bogus) tampering / destruction-of-evidence charge, even if there’s no indication the PC contains evidence of any wrongdoing. It’s one thing to refuse to unlock your device (5th Amendment and stuff, while it still lasts - like the part where “nor be deprived of life, liberty, or property, without due process of law; nor shall private property be taken for public use, without just compensation” is completely ignored nowadays) and a completely different thing to destroy stuff.

I’m not against it, especially since the 5A goes completely out the window with the “nor shall be compelled in any criminal case to be a witness against himself” part as well. It could be done somewhat smartly. For example: have 2 profiles, one hidden profile for things you don’t want found and one for normal use. When you enter a panic code at the normal profile’s login, the hidden profile gets wiped. But your normal profile needs to be genuinely used so it isn’t obviously just a setup - any investigator with more than 2 brain cells can figure out a profile is just a cover-up, like with those phones where, when you unlock them, you are greeted with lots of fake programs, like a fake Candy Crush that is a hidden Tor Browser or something.
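As a rough illustration of the two-profile idea (not how any existing login stack actually does it), a login hook could look something like this; the code and paths are made up, and rmtree is nowhere near a secure erase on SSDs.

```python
# Toy panic-code hook: entering the panic phrase silently removes the
# hidden profile before the normal session continues. Everything here
# (phrase, path) is a placeholder.
import getpass, hmac, shutil

PANIC_CODE = "correct horse battery staple"    # hypothetical panic phrase
HIDDEN_PROFILE = "/home/user/.hidden_profile"  # hypothetical hidden profile

entered = getpass.getpass("Passphrase: ")
if hmac.compare_digest(entered, PANIC_CODE):
    shutil.rmtree(HIDDEN_PROFILE, ignore_errors=True)
# ...then hand off to the normal login flow either way.
```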

This would be a nice feature for QubesOS.

I see no other possible real-life scenario where you’d have the option to enter a panic code. Like, what, a ransom or kidnapping? Statistically, that’s not likely to happen to most people. And while such usage isn’t out of the question, the LEO case is the obvious one. To be honest, even in that case I personally see it as completely justified not to let people into your private files (what happened to “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated”? - I’m not a constitution apologist or anything, I’m just pointing out the obvious: pieces of paper won’t restrict governments from being abusive).

I would, however, maybe choose a different route. Alpine Linux can be installed in diskless mode, which is basically a glorified live session running from a ramdisk, but with the ability to save some of your current configuration using lbu. Save whatever you need minimally, like a browser and whatever else. If you are forced to unlock your device, refuse to do it. If they then try restarting your device to install a rootkit, or to boot into another distro and read your data if it’s not encrypted (you technically don’t need encryption in this case, but I recommend it nonetheless - encrypted RAM FTW, continue reading), they’ve just destroyed the data themselves, because obviously nothing was stored on disk; everything was in RAM. You didn’t tamper with evidence, but you did refuse to unlock. Again, that’s currently legal, but who knows for how long - and it’s probably not legal to refuse to comply with LEO orders in other, more backward countries. “Human rights” my ass, UN, you f***ing hypocrites.

I believe some VPN services claim to use these kinds of live sessions for their limited logs (no VPN collects zero logs) and for the VPN servers themselves, so that if someone tries to seize and shut them down, they are rendered useless - no decryption key helps if there’s no data left to decrypt in the first place. Technically you can get around this by freezing the contents of the RAM with liquid nitrogen or something, but that’s really expensive to pull off, especially if there are many servers to seize.

2 Likes

Why not set up a legit user, with a script that silently nukes it (or even just decryption keys) upon login?

1 Like
What I would do if I were The Man

[Heavily edited. My paranoid ramblings gave the impression of an actual rumour, rather than hypothetical ramblings.]

If I were a TLA, I would mandate that every VPN in my territory allow the TLA to run its own session loggers. Then the VPN host literally isn’t keeping any logs, and the TLA can fetch what it wants, when it wants. And session data is metadata, ruled by law not to be covered under privacy rights.

1 Like

First time hearing about this. It could be an exploit of the “military technology” rules around encryption - encryption hasn’t been purely military for almost a century at this point, but some of that law still remains and may be used to exploit stuff like that (talking specifically about “exporting encryption”, which you need a license for).

Is this even constitutional? 4th Amendment, hello? Not that it matters; the government is above the law (the law that was supposedly put in place to restrict it). And it has to be, otherwise it can’t function - different rules for rulers. You can’t reasonably expect the government to limit itself, or especially to apply the same rules to itself that it applies to its subjects.

Just use i2p.

2 Likes

Hmm, I should have framed it as more hypothetical. Not real.

But it is constitutional: only the contents of calls/letters are protected, not the metadata. Same with normal ISPs.

But I need to stop posting on this thread and move on.

sorry

1 Like

The first thing the digital forensics folks do with seized devices is create an image of the storage device. Having done that, they then perform the bulk of their forensics work on read-only mounted copies of the image.
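For anyone unfamiliar, the imaging step boils down to a bit-for-bit copy plus a hash so later working copies can be verified against the original. A rough sketch (device and file names are placeholders, and real forensic tools do a lot more):

```python
# Bit-for-bit image of a block device plus a SHA-256 of the original.
# /dev/sdX is a placeholder; reading it requires root.
import hashlib

SRC = "/dev/sdX"
DST = "acquired.img"

h = hashlib.sha256()
with open(SRC, "rb") as src, open(DST, "wb") as dst:
    while chunk := src.read(4 * 1024 * 1024):
        h.update(chunk)
        dst.write(chunk)

print("image hash:", h.hexdigest())
```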

If you give them a self-destruct code, that will only be used on a read-write mounted copy of the image. Apart from forcing them to discard the self-destructed image, it doesn’t really slow them down.

It does, however, let them then charge you with destroying evidence, obstruction, or whatever other garbage they want to make up. On that basis, I would advise against giving up a code that destroys data.

Having said that, if you have your self-destruct code written down on a piece of paper in your wallet, or scrawled on the inside of the battery bay cover on your laptop, then their discovery and use of said code is without your knowledge or consent, so any/all negative repercussions can be avoided.