A website I built to download video/audio from the internet, with a free-to-use API

Hello everyone!

Last week I decided to work on my web development skills, so I made a website that I would personally use as well.

I called it SASRip because…well, it’s me, SAS, and the website is for ripping media off the internet.
I’m sorry, I have the creativity of a beached whale.

The website has no ads, cookies, or anything like that; as I call it, no bullsh*t.
The API is free and open for anyone to use as they please, but a heads-up would be nice.

I also have a Firefox add-on and a WebKit (Chrome, Opera, etc.) extension in the works; if anyone wants to beta test those, that would be awesome!

Anyway, the project is very much a WIP, and I would love it if you guys and gals could give it a try and let me know what you think. I will hand out Internet cookies to people who find bugs, and I might dedicate a page to them, along with anyone who donates to help with server costs.


P.S. If you have a project idea you need a programmer for, feel free to contact me!


The project is intended to be open source; I will release the source somewhere (most likely GitHub) as soon as I get around to commenting the code and figuring out the proper namespaces for some classes.

Hey, cool project. Just a thought: since you’re using youtube-dl as the backend, why not return the resource URL instead of downloading the file and then sending it again? The -g flag gets just the URL. Saves a ton of resources and bandwidth.

Also, I may have crashed your service by trying to pass in garbage; you should validate the user input on the back end if you aren’t doing so already. Sorry about that :^)

Hey, the service seems to be working fine, but could you still give me an example of how you broke it? (There is both client- and server-side validation.)
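For anyone wondering what the server-side check can look like: before a URL ever reaches youtube-dl, a cheap sanity check can reject obvious garbage outright. A minimal sketch; the function name and the exact rules are mine, not SASRip’s actual code:

```python
from urllib.parse import urlparse

def is_probably_valid_url(raw: str) -> bool:
    """Reject obvious garbage before it ever reaches the downloader backend."""
    try:
        parts = urlparse(raw.strip())
    except ValueError:
        return False
    # Require an http(s) scheme and a non-empty host.
    return parts.scheme in ("http", "https") and bool(parts.netloc)

print(is_probably_valid_url("https://www.youtube.com/watch?v=abc"))  # True
print(is_probably_valid_url("a bunch of gibberish"))                  # False
```

This is deliberately loose — it only filters out input that can’t possibly be a fetchable URL; the real extractor still decides whether the site is supported.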

As for why I don’t return resource URLs: the best quality is not always a single resource.

Here are a couple of examples:

  • YouTube will separate most 720p+ videos into two files, video and audio, so that you can request a different video quality without having to re-download the same audio. (Some low-quality ones come as a single file.)

  • All videos hosted on Reddit come without sound; the video and audio, again, are separate files.
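To make the “two files” point concrete, here’s a toy sketch of why just returning one URL isn’t enough. The format dicts are simplified stand-ins for what youtube-dl’s extractors return, and the function roughly mimics its bestvideo+bestaudio selection; none of this is SASRip’s actual code:

```python
# With DASH-style hosting, the best quality is two resources, not one.
formats = [
    {"format_id": "18",  "vcodec": "avc1", "acodec": "mp4a", "height": 360},   # muxed, low quality
    {"format_id": "137", "vcodec": "avc1", "acodec": "none", "height": 1080},  # video only
    {"format_id": "140", "vcodec": "none", "acodec": "mp4a", "abr": 128},      # audio only
]

def best_pair(fmts):
    """Pick the best video-only and audio-only streams (roughly 'bestvideo+bestaudio')."""
    video = max((f for f in fmts if f["acodec"] == "none"),
                key=lambda f: f["height"], default=None)
    audio = max((f for f in fmts if f["vcodec"] == "none"),
                key=lambda f: f["abr"], default=None)
    if video and audio:
        # Two files: they must be downloaded and muxed before the user gets one video.
        return video["format_id"], audio["format_id"]
    # Fall back to the best muxed stream; a single URL is enough here.
    best_muxed = max(fmts, key=lambda f: f.get("height", 0))
    return (best_muxed["format_id"],)

print(best_pair(formats))  # ('137', '140') — two separate resources
```

So for anything above the low-quality muxed streams, handing back one URL would give the user either a silent video or audio with no picture.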

My previous weekend project was a Firefox add-on named Reddit Video Downloader, which did exactly what you suggested: it just returned the resource URL instead of doing anything with a back-end. But people complained, even though I left a message explaining why there is no sound and how they can use youtube-dl if they need it. People don’t read; they just complain.

So after getting some e-mails and one-star reviews, I decided this should be my project, as I would use it too.

Ah, I see. I thought I broke it, as I could not fetch anything afterwards.

I did it by changing the input type in Firefox from url to text and typing a bunch of gibberish.

You seem to have stumbled onto something, but I think the issue is URL encoding; let me check it out real quick.

It was the percent sign; issue resolved!

Keepvid burned to ashes when they decided to monetize their users instead of offering the service.

I would really appreciate a Keepvid alternative, but you’d need an in-browser stream muxer that doesn’t rely on FFmpeg, since most stuff uses MPEG-DASH.

If you can find an open-source JavaScript/Rust video muxer that allows the final muxing to happen on the end user’s machine, without grabbing FFmpeg from somewhere shady, that would be great.

Same for PBS. A guy on the youtube-dl subreddit made a post about it.

I am sure if I do that, people will start complaining about downloads taking too long or something.

Stitching things together client-side is a good idea, though; perhaps that will be a different project.

I am guessing a lot of websites do this nowadays. It makes sense in a way, but like many things it has downsides, one being that simply downloading a single file isn’t enough anymore.
