A person on Slashdot asked a very interesting question regarding the news that Whatscrapp is limiting frequently forwarded messages to basically one person or group at a time. The question they asked was: “How do they know a message is highly forwarded or not unless they have hacked WhatsApp to no longer be secure?”
Assuming they can’t read your messages (lol), or won’t ever push you an update that enables said feature, I am guessing forwarded messages get hashed and flagged before they reach the software on your phone, which then prevents the childlike mind from being an adult and deciding for themselves whether to forward it to whomever they want. But… how could you get matching hashes from encrypted messages? Is your phone hashing all messages in unencrypted form and reporting back to the command-and-control servers, which then tell “your” app to set a flag on the post limiting your forward capability?
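Something like this, maybe, on the client side (pure speculation, every name here is made up, not their real API):

```python
import hashlib

FORWARD_LIMIT_FLAG = "frequently_forwarded"   # hypothetical flag name

def report_forward(plaintext: str, server) -> bool:
    """Hash the plaintext locally and ask the server whether this message
    has been flagged as frequently forwarded. Only the hash leaves the phone."""
    digest = hashlib.sha256(plaintext.encode("utf-8")).hexdigest()
    response = server.record_forward(digest)    # hypothetical server endpoint
    return response.get(FORWARD_LIMIT_FLAG, False)

def forward_message(plaintext: str, recipients: list, server, send) -> None:
    """Client-side enforcement: if the hash is flagged, only forward to one chat."""
    limited = report_forward(plaintext, server)
    for chat in (recipients[:1] if limited else recipients):
        send(chat, plaintext)
```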
I’m sure people can figure out how to ask others to stop forwarding stuff to them, or just block them. Sure, people abuse things, but do we all need to be treated like children with this authoritarian tech?
Seems like WhatsApp is going the same way as WeChat, which automatically blocks thought-crime messages to groups living on Animal Farm.
I’ll stick to Signal and promote it where I can but the normie drones seem thoroughly at home on Mark (insert another word for rooster) Zuckerberg’s lap.
“… no longer secure?” …
Anyone have any other ideas of how they are doing this? Seen any other technical analysis of how dear leaders are doing this?
Maybe my first thought about hashing is correct, if WhatsApp has end-to-end-to-end encryption.
They could give each unique message a UUID and pass that back to the backend, so that the message contents aren’t shared. If the message is forwarded, the UUID isn’t changed.
Not that I trust Facebook to not have broken encryption.
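Roughly like this, assuming the app tags every new message with an opaque ID and keeps it when forwarding (the function names and threshold are made up):

```python
import uuid
from collections import defaultdict

# Server side: count forwards per message ID without ever seeing the content.
forward_counts = defaultdict(int)
VIRAL_THRESHOLD = 5  # hypothetical cutoff for "frequently forwarded"

def new_message(body: str) -> dict:
    """Client: a brand-new message gets a fresh UUID."""
    return {"id": str(uuid.uuid4()), "body": body}

def forward(message: dict) -> dict:
    """Client: forwarding keeps the original UUID, so the server can
    correlate copies without reading the (encrypted) body."""
    return {"id": message["id"], "body": message["body"]}

def server_should_limit(message_id: str) -> bool:
    """Server: bump the counter for this UUID and report whether it
    has crossed the viral threshold."""
    forward_counts[message_id] += 1
    return forward_counts[message_id] >= VIRAL_THRESHOLD
```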
Anyone who thinks whatsapp is secure is fooling themselves.
Or they could just SHA256 the payload and keep a running log keyed by hash; any hash that shows up often enough to hit a specific threshold gets flagged as popular.
Edit: that article basically described my original post but better…
Have a server-side DB of “viral” message signatures to block. Require the client to first encrypt the message body and send it to the server; hash the message and run the hash through a bloom filter. If the filter says it’s present, accept the message and log it, but refuse to forward it.
If it’s not in the filter, accept the message and look the hash up in an LRU cache. Each time a hash signature hits its respective bucket, increment a counter on that bucket. Once the bucket hits the threshold to be “viral”, add it to the bloom filter; otherwise forward the message.
Depending on how it’s done (for example, using the algorithm above), the data can be kept encrypted the entire time, and it would keep people from spamming or “shitting up” the application’s throughput and availability. I’m not sure what the problem would be with that.
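Very roughly, the flow above could look like this: a toy bloom filter, a bounded dict standing in for the LRU, and made-up thresholds. One caveat: hashing the ciphertext only correlates forwards if each forwarded copy carries the same encrypted blob, which may not hold when every recipient’s copy is encrypted under a different key/IV.

```python
import hashlib
from collections import OrderedDict

class TinyBloom:
    """Toy bloom filter: k hash positions over an m-bit array."""
    def __init__(self, m: int = 1 << 20, k: int = 4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

viral = TinyBloom()
counts = OrderedDict()      # LRU of hash -> forward count
MAX_TRACKED = 100_000       # made-up LRU capacity
VIRAL_THRESHOLD = 5         # made-up "viral" cutoff

def handle_forward(ciphertext: bytes) -> bool:
    """Return True if the forward should be delivered, False if blocked.
    The server only ever sees the ciphertext and its hash."""
    sig = hashlib.sha256(ciphertext).digest()
    if sig in viral:
        return False                      # already flagged: accept, log, don't forward
    count = counts.pop(sig, 0) + 1        # pop + reinsert keeps the entry "recent"
    counts[sig] = count
    if len(counts) > MAX_TRACKED:
        counts.popitem(last=False)        # evict the least recently seen hash
    if count >= VIRAL_THRESHOLD:
        viral.add(sig)
    return True
```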
An obvious workaround, at least for text and images, is to copy, paste, and alter the message to create a new hash, then forward from there. Given how obvious that is, I would think the WhatsApp malware copies more than just the message or image in the background while you’re in the app: probably data related to censorship flags, so new message bodies can be tied to previous flags without having the same hash. Just thinking out loud.
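For what it’s worth, even a one-character edit gives a completely different digest, which is why naive hash matching is trivial to dodge:

```python
import hashlib

original = "Forward this to everyone you know!"
altered = "Forward this to everyone you know!!"   # one extra character

print(hashlib.sha256(original.encode()).hexdigest())
print(hashlib.sha256(altered.encode()).hexdigest())
# The two digests share nothing in common, so any per-hash counter
# starts again from zero for the altered copy.
```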
They probably also rely on people’s laziness: if they get confronted by a message telling them they cannot pass it on to more than one person, they’ll likely just not bother after that. After all, it takes literally SEVERAL SECONDS to go into a new chat and share again.
Sounds silly, but people’s attention span for loading stuff is apparently less than two seconds now.
Nice idea, but try convincing all your regular contacts to pay for what is free on X number of other apps.
I know why you should, and I have friends who would understand but also say “fuck that.” I mean, in this day and age, how long can it be expected to exist?
But that just undoes all the good intentions. Email is at the mercy of the hosting provider, which for the VAST majority of people means insecure or mining/monitoring, which leads back to “pay this unknown company to message me, and maybe your money disappears, maybe it does not.” I may not have many friends, but the few I do have I want to keep and keep in contact with. They are not all as tech savvy or as wary.
It is a nice idea, and I am sure it works in certain circles, but yeah… not really a majority option. And I still wonder what happens when the server costs outrun what people are paying?
Mother fuck, EVERYONE needs to listen from 6:59 onward. He said it with a British accent, so my yelling and screeching wouldn’t even compare, but Jesus Christ, everything * 9,001.
I’m not disagreeing, but it uses PKI, and it’s pretty hard to break PKI. It’s also not generally worth Facebook’s effort; I don’t see how they would profit much from the messages themselves. Now, farming your interests on their main app, sure.
Here’s my schtick… It uses Curve25519 for key exchange, AES-256 in CBC mode for message encryption, and HMAC-SHA256 for message authenticity and integrity. If they had managed to break that, the government would not be so keen on wanting to break it; it would have already been broken.
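For reference, here’s a toy demo of just those primitives using the Python cryptography package. It’s only the building blocks (no ratchet, prekeys, or session handling), not WhatsApp’s actual protocol, and the HKDF info label is made up:

```python
import os
from cryptography.hazmat.primitives import hashes, hmac, padding
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# 1. Each side holds a Curve25519 key pair and derives the same shared secret.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()
shared = alice_priv.exchange(bob_priv.public_key())  # Bob computes the same value

# 2. Stretch the shared secret into a 32-byte AES key and a 32-byte MAC key.
keys = HKDF(algorithm=hashes.SHA256(), length=64, salt=None, info=b"demo").derive(shared)
enc_key, mac_key = keys[:32], keys[32:]

# 3. Encrypt with AES-256 in CBC mode (PKCS7 padding, random IV).
iv = os.urandom(16)
padder = padding.PKCS7(128).padder()
padded = padder.update(b"hello, bob") + padder.finalize()
enc = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).encryptor()
ciphertext = enc.update(padded) + enc.finalize()

# 4. Authenticate IV + ciphertext with HMAC-SHA256.
tag = hmac.HMAC(mac_key, hashes.SHA256())
tag.update(iv + ciphertext)
mac = tag.finalize()

print(len(ciphertext), len(mac))  # the server only ever sees these opaque bytes
```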
Also, “trust nothing” is the best security motto. I laugh when people say “I trust this app.” No, don’t be slow. Only trust it with what it needs to know, and treat that as if it were public.
Buddy, Signal is no more secure. It goes through a central server, and you don’t control the whole trust chain. Realistically the most secure messenger would be none at all, followed closely by a completely decentralized end-to-end encrypted system. Signal does client-side scanning in the background; it just doesn’t use it for much of anything.