It was eye-opening to me when I first got a real firewall (pfSense, ca. 2014) and realized just how often I was being probed and how many random connection attempts there were.
I knew it was a possibility, but I didn’t realize how incessant it was. The log entries show a never-ending stream of attempted attacks and port scans.
I wish I had that level of discipline.
Smartphones just make life so much easier.
I do the next best thing. Whenever I get a new phone, I go through every single setting in the settings menus and disable almost everything that can be disabled. If it says AI, sync, or cloud, or anything like that (or sounds like it might be using code words to refer to that), it gets disabled.
Then I make it connect to my own VPN, one of the many VMs (well, this one is actually an LXC container, but close enough) that run on my server. All traffic goes through my home network.
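(The post doesn’t say which VPN software the home server runs; as one common way to wire this up, a WireGuard client profile on the phone might look like the sketch below. The keys, addresses, and endpoint hostname are placeholders, not the author’s actual setup.)

```ini
# Hypothetical WireGuard client config on the phone (wg0.conf).
[Interface]
PrivateKey = <phone-private-key>
Address = 10.0.0.2/32
DNS = 10.0.0.1                 # point the phone's DNS at the home Pi-hole

[Peer]
PublicKey = <server-public-key>
Endpoint = home.example.net:51820
AllowedIPs = 0.0.0.0/0, ::/0   # route ALL traffic through the tunnel
PersistentKeepalive = 25       # keep NAT mappings alive for a roaming phone
```

The `AllowedIPs = 0.0.0.0/0, ::/0` line is what makes every packet, not just home-network traffic, go through the tunnel.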
My home network then has a Pi-hole instance (running in another LXC container on the server) with serious rule sets for blocking out as much nonsense as possible at the DNS level.
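(Pi-hole’s subscribed blocklists do the heavy lifting, but it also honors extra dnsmasq configuration dropped into /etc/dnsmasq.d/. As a hedged illustration — the file name and domains are examples, not the author’s actual rules — a couple of manual DNS-level blocks look like this:)

```ini
# /etc/dnsmasq.d/99-extra-blocks.conf (example file name)
# Answer 0.0.0.0 for these domains and all their subdomains,
# sinking them at the DNS level before any device can connect.
address=/doubleclick.net/0.0.0.0
address=/adservice.google.com/0.0.0.0
```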
And then after that, all local network traffic exits via a trusted third-party VPN (Mullvad, as from what I can tell, they are the real deal and actually practice what they preach when it comes to privacy).
And if this breaks a site (or app) then so be it. I guess I’m just not using that site or app.
I use Firefox, but I have gone through every single setting to try to cut out as much of the spyware-style crap and capabilities as possible. I also use mobile Firefox on my phone, instead of the built-in garbage.
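(On desktop, that kind of settings pass can be persisted in a user.js file in the Firefox profile directory. The exact preferences the author toggles aren’t stated; the following is a sketch of a few real telemetry- and data-reporting prefs that are commonly disabled:)

```javascript
// Sketch of a Firefox user.js hardening pass (not the author's exact list).
user_pref("toolkit.telemetry.enabled", false);                   // core telemetry
user_pref("toolkit.telemetry.unified", false);
user_pref("datareporting.healthreport.uploadEnabled", false);    // health report uploads
user_pref("datareporting.policy.dataSubmissionEnabled", false);  // all data submission
user_pref("app.shield.optoutstudies.enabled", false);            // Shield studies
user_pref("privacy.trackingprotection.enabled", true);           // built-in tracker blocking
```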
I know some stuff still slips by, and I hate that, but I hope that I am at least limiting it as much as I possibly can.
I also do my best to be as unattractive to advertisers as possible. When the random ad that I don’t succeed in blocking pops up, I usually play a game of “can I successfully avoid even noticing what this ad is for?” Often I am successful.
For the ads I do see, and know what they are for, I consider having seen them to be an insult, with some brand trying to corrupt and influence me. I don’t keep an active boycott list (that would be a little much), but I do casually try to give brands whose ads I see as little of my money as humanly possible.
It’s not that I am against advertising per se (though there are exceptions, as some ads can be manipulative) but I want to disincentivize them having any interest at all in my data. I want them to view me as a drain on their advertising budget rather than as an opportunity, with the hope that I get targeted less.
I do, however, realize that ad blocking harms content creators. I try to make up for this through small regular sustaining donations via Patreon to my favorite sites, if they are set up that way.
It is my dream that some day we can use the big regulatory sledgehammer to completely ban any and all collection of, transaction in (buying or selling), or use for any purpose (including both monetization and non-profit-driven use) of any user data whatsoever. User data should exist on a site or server for the sole purpose of directly serving the user it describes, and it should be illegal to use it for any other purpose.
In other words: sure, Facebook, if Sally wants to share a picture with Uncle Jack, you can store that picture on your servers, but only for the purpose of serving it to Uncle Jack. It may not be used, shared, or analyzed in any way other than is consistent with Sally’s intent of sharing it with Uncle Jack. And you are responsible for keeping it secure such that no one else can access it for any purpose unintended by Sally either.
And I suggest violators go to Federal “pound me in the ass” prison, not just get some corporate slap on the wrist.
Big social media giants will survive. If profile-based, data-driven ads are no longer an option, then there will be a return to contextual ads, which are innocuous from a spying perspective. Other consumer goods companies that build spying into their products, and the data brokers that facilitate this, however, will be hurt, and I don’t have a problem with that. They deserve anything and everything bad that can possibly happen to them.
I’d even take it further.
I propose that any content a user posts online can only be used for the purpose that user reasonably intended.
If I post a forum post helping someone restore booting ability to their PC, then that is the reason it was posted. It was not posted so it can be analyzed by others, monetized somehow, or used to train AI. That kind of scraping, or any other use unintended by me, should be illegal. At least without my express written consent (and no, not in a EULA that is required for me to sign up for a forum; it has to be a separate thing that I can opt into).
I propose every single person who ever goes on the internet have a perpetual license to every single bit of data they produce while they are online.
Sure, traditional fair use exceptions are fine (newsworthy content, parody/comedy, etc.), but not for-profit uses (except maybe market research, if you are a business looking to make your product better and want to learn what people are saying), and absolutely not training some faceless corporation’s for-profit AI.
I absolutely resent that AI companies are training on the open internet. I even resent that Reddit thinks it has the right to sell access to its users’ content. No. That should be the call of each and every individual user. They should have to contact each user individually and request permission if they want to use their posts for anything at all.
Sam Altman seems to think this would kill the AI race. Heck, he thinks it would kill the AI race if he can’t just steal whatever copyrighted content he desires. (That man is the living antichrist.)
And quite frankly, I’m fine with that. We don’t need it.
AI has some pretty cool applications, but most of those are in small, highly targeted models trained on special purpose datasets fully owned by the company that is doing the training. And that’s probably the way AI should remain.
If there is any justice whatsoever in the world, these large do-everything AI language models will die altogether. And I hope they do.
