Got this email today. Neither of their alternatives offers local-to-local backups, which is a huge deal for those of us in the bushes of Canadistan with sub-megabit upload speeds and draconian data caps. Looks like I’ll be hunting for a DIY solution, and I’m probably not the only one…
Looks like a spot in the market just opened up.
Find a laptop and a multi-drive cage and make that shit burn!
I guess this is the perfect time for me to talk about what I’ve been working on for the past couple weeks.
I’ve been frustrated with CrashPlan not being open source, so in a bout of autism, I decided to make an open source clone of CrashPlan in Python. I guess it’s not such a stupid thing anymore, since CrashPlan going enterprise means we’re going to need an alternative.
It’s really early days (think 5 hours spent) but it’s got potential, I think.
If people are interested, I can put this up on github.
decides to close the source and monetize /s
@SgtAwesomesauce
puts on cryptocurrency hat
Open source + cryptocurrency donations = best of both worlds? If it works, I’d happily throw digital moniez your way on a regular basis.
I have no problem with this, but at this point, it’s really early days. I’ll throw it up on github when I get a chance (lots of meetings today) and start creating issues, so if people want to contribute code, that would be awesome!
I’m trying to figure out Twisted Perspective Brokers so I can start doing rudimentary network operations.
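Twisted PB itself isn’t shown here, but for anyone wondering what the remote-method-call idea looks like, here’s a rough stdlib stand-in using Python’s `xmlrpc` (the `BackupPeer` class and its methods are made up for illustration, not part of the actual project):

```python
# NOT Twisted Perspective Broker -- a stdlib (xmlrpc) stand-in showing the
# same basic idea PB provides: the client calls a method that actually runs
# on the server. Class and method names here are illustrative only.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

class BackupPeer:
    """Methods the remote side is allowed to call."""
    def ping(self):
        return "pong"

    def store_chunk(self, name, data):
        # A real implementation would write the chunk to disk; we just ack.
        return f"stored {len(data)} bytes as {name}"

# Bind to an ephemeral port and serve in a background thread.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_instance(BackupPeer())
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client invokes remote methods as if they were local.
client = ServerProxy(f"http://127.0.0.1:{port}")
pong = client.ping()
ack = client.store_chunk("f.bin", "abc")
server.shutdown()
print(pong, "/", ack)
```

PB adds object references, callbacks, and Deferreds on top of this basic shape, which is what makes it attractive for a long-running backup daemon.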
I’ve always planned to release any software I write on my own as Open Source. Not because I don’t believe in muh capitalism but because I like the idea that not everything serves to create a profit-driven system. I might split the difference and offer a managed service for people (kinda like the crashplan cloud) where they can back up to the target, but that’s at least 6 months away.
Rotate through some hard drives. It’s not that hard.
A local sync system that does occasional (something like weekly) complete backups and regular deltas is going to beat whatever rate you’d actually rotate through real drives manually, especially because it never gets lazy or forgets, and it wastes less of your time. Software solutions also normally let you back up to cold disks, and often let you designate a few levels of importance: super-critical files get backed up to locations over the internet, the bulk of your data is only kept on a few computers within your building, and replaceable multimedia (games, movies, etc. that you can just redownload) isn’t backed up at all.
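The full-plus-delta idea boils down to “copy everything once, then only copy what changed.” A minimal sketch, assuming content hashes as the change detector (real tools track mtimes/inodes and do block-level deltas, which is far more efficient):

```python
# Minimal full/delta backup sketch: copy everything the first run, then only
# files whose content hash changed. Hashing every file is slow on big trees;
# real backup tools use mtime/size checks and block-level deltas instead.
import hashlib
import shutil
from pathlib import Path

def file_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup(src: Path, dest: Path, manifest: dict) -> list:
    """Copy new/changed files from src to dest.

    manifest maps relative path -> last-seen hash; pass {} for a full backup.
    Returns the list of relative paths copied this run.
    """
    copied = []
    dest.mkdir(parents=True, exist_ok=True)
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        rel = str(f.relative_to(src))
        h = file_hash(f)
        if manifest.get(rel) != h:  # new or modified since last run
            target = dest / rel
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)
            manifest[rel] = h
            copied.append(rel)
    return copied
```

Run it with an empty manifest for the weekly full, then reuse the same manifest for the daily deltas; the importance tiers would just be different `(src, dest)` pairs with different schedules.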
I personally do just back up to some external drives “often enough”, but I’m keenly aware this is not an equivalent service to these sorts of software, and at times “often enough” wouldn’t have been if an unexpected disk failure did hit mid project.
Also, I’m extremely happy nobody has suggested just using RAID yet… that normally happens on forums, and it isn’t in any way a backup solution, so yay for the high average intelligence level on level1tech.
I’ll simply state that even though it’s a bummer CrashPlan has been abandoned, we now have a forum member who is going to be making some awesome stuff, which just proves we have an active community willing to put forth the effort to fill necessary gaps in the tech world!
Thank you @SgtAwesomesauce!
I got a special unlimited plan with SpiderOak, but their standard plans are not too bad. They all allow unlimited computers, encrypt on the client side with a key you control, and work on Windows/Linux/MacOS with a GUI or command line.
I used to use rotating HDDs and a NAS, but it’s not enough. Offsite backup is by far the best option, with local copies of really critical stuff. It’s automatic, which is great because I’m lazy and will forget to use any kind of manual system.
Always happy to help. If others want to take part, there will come a time when I am going to be actively searching for contributors, probably in about 3 weeks when it gets sorted.
I’m going to do the best I can, but I am not a programmer, and most importantly, I have never done any programming that involves custom network communication. HTTPS and REST aren’t going to work for this because they’re too inefficient, and there aren’t many off-the-shelf protocols that fit this use case.
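For context on what “custom network communication” tends to look like at the bottom: a common low-overhead building block is length-prefixed framing over a raw socket. This is a generic sketch of that pattern, not the protocol this project actually uses:

```python
# Length-prefixed framing over a raw socket: each message is a 4-byte
# big-endian length followed by the payload. Generic sketch only -- not
# the project's actual wire protocol.
import socket
import struct

def send_msg(sock: socket.socket, payload: bytes) -> None:
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """recv() may return partial data, so loop until n bytes arrive."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)
```

Compared to HTTP, there are no headers to parse on every chunk, which matters when you’re shipping millions of small file deltas.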
Let me give a brief status update: I’ve got some basic code done and I’m happy with it. (actually, quite surprised with myself TBH) From here, I’ve got to start integrating networking. That’s going to be extremely complicated since the last time I tried to get this done, I didn’t even know where to start.
None of my code is commented in any meaningful way, but I’m going to be working on the comments tonight and aiming to get some basic network code done this weekend.
My company is going through a major migration of services right now and I’m an integral player in it, so I’ve been completely unable to work on personal projects during work hours, despite working from home. I expect this to change in about 3 weeks, so this is just bad timing, but hopefully, I’ll have this all sorted soon.
As soon as I get a bit of working network code, I’ll be putting it up on github. The problem is that the code is changing way too much for me to be comfortable putting it out there, since I don’t want people to think I don’t know what I’m doing. (I don’t)
Dunno if this would be suitable, but I’m throwing it out there as something to consider.
So I have my Linux PC, which syncs selected directories in /home to a Nextcloud server running on a Raspberry Pi. The backend storage for the Nextcloud server is a mount point on a local FreeNAS. The Nextcloud server syncs all files to an AWS S3 bucket via a script at 3:00 am each day.
My phone is set to do the auto picture sync thing with the Nextcloud server as well.
This way you can have any number of local PCs (Windows / Mac / Linux) sync up to the Nextcloud server / FreeNAS combo and then safely store those files in AWS each day.
You could even create accounts for friends and family to use, but that would impact your data cap, of course.
My bill this month will be $0.09 which includes a bunch of pictures stored in Glacier.
Of course the FreeNAS server cost me some money but you could just throw a 1TB USB drive on the Pi to use for storage instead.
And if you have a FreeNAS you don’t actually need the Pi as you could run the Nextcloud server as a virtual server inside the FreeNAS if you like.
Or you could just create the Nextcloud server on an EC2 instance with S3 storage but then it is not local.
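The 3:00 am script itself isn’t shown in the post; assuming it’s driven by cron and the AWS CLI, it might look something like this (bucket name, paths, and script name are placeholders, not the poster’s actual setup):

```sh
# crontab entry: run the sync at 03:00 every day
0 3 * * * /usr/local/bin/nextcloud-s3-sync.sh

# /usr/local/bin/nextcloud-s3-sync.sh -- one-way upload of the Nextcloud
# data directory to S3 (paths and bucket are placeholders)
#!/bin/sh
aws s3 sync /mnt/freenas/nextcloud-data s3://my-backup-bucket/nextcloud
```

`aws s3 sync` only uploads new and changed files, which keeps the nightly run cheap on both bandwidth and the data cap.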
Why not just use nextcloud in a FreeNAS jail?
How much data? I have ~8TB to back up.
Also, is this the AWS intro pricing?
Yes, I could have used a FreeNAS jail or VM but the wife got me a Pi for Xmas and it seemed like a fun thing to do with it.
I am storing 11GB of data in Glacier and about 0.5GB in S3. So a very small amount.
My S3 bill says “$0.024 per GB - first 50 TB / month of storage used”, so it would be $192 per month for 8TB of storage if my sums are correct. Data transfer in is free, but transfer out costs money. Gotta love the vendor lock-in policy.
Check out https://calculator.s3.amazonaws.com/index.html
I am not aware of any intro pricing other than getting the first 5GB free.
If your data is not going to change much, you could consider Glacier, which is $0.004 per GB / month.
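To sanity-check the arithmetic for the ~8TB figure mentioned above, using the two quoted rates (real bills also add request and data-transfer charges):

```python
# Rough monthly storage cost at the quoted rates, treating 8 TB as 8000 GB
# (AWS prices in decimal units). Request and transfer charges not included.
S3_RATE = 0.024       # $/GB/month, first-50-TB tier (quoted above)
GLACIER_RATE = 0.004  # $/GB/month (quoted above)
data_gb = 8 * 1000    # ~8 TB

s3_cost = data_gb * S3_RATE
glacier_cost = data_gb * GLACIER_RATE
print(f"S3:      ${s3_cost:.2f}/month")       # S3:      $192.00/month
print(f"Glacier: ${glacier_cost:.2f}/month")  # Glacier: $32.00/month
```

So the $192/month figure checks out, and Glacier would bring the same 8TB down to about $32/month, at the cost of slow, paid retrieval.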
Am 100% interested in using it. I don’t know Python, though, so I can’t contribute, but I would if I could.
shit
I’m working on it right now and when it’s a bit more finished, I’ll share it. I’m one of those people who are embarrassed to release something too shitty.
To diverge from Sgt’s idea for a second:
Are there any home-priced replacements that run on a server OS (08R2)?
CP did it, BB doesn’t, Carbonite doesn’t. An equivalent on S3’s slow tier is 11+ USD/mo. Mozy is stupid money. Right now I’m tempted to go to fucking tapes and leave them with family instead.
This isn’t my topic, feel free to diverge all you want.
Not that I’m aware, I’ve been following the /r/datahoarder and /r/homelab threads relatively closely and no one has any alternatives that fit this use case. The problem is that data is expensive to host. We sort of knew that CP’s unlimited was too good to last. I’m just surprised it’s taken them this long to kill it off and I’m really disappointed they’re not just setting caps or increasing the price.
An option would be to run a W7 VM on the server and mount the shares (I think) or to do that with an OSX machine and mount CIFS shares in /etc/fstab
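For reference, a CIFS mount in /etc/fstab might look like this (server name, share, mount point, and credentials path are all placeholders):

```
# /etc/fstab -- example CIFS mount; hostnames and paths are placeholders
//fileserver/backup  /mnt/backup  cifs  credentials=/etc/smb-creds,_netdev,iocharset=utf8  0  0
```

The `credentials=` file keeps the username/password out of fstab, and `_netdev` delays the mount until the network is up.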
I tried that with BB about 2-3 years ago; their client detected it as an SMB mount, not a local disk, and refused to play ball. Haven’t attempted it with Carbonite.
Oh, interesting.
I wonder if you could symlink to an SMB share to get around it.