pfSense - Internet very slow while downloading

So I recently built a pfSense router. I tried to set up traffic shaping using the wizard, but it didn't seem to do anything. I also couldn't really find the options that I was looking for.
Basically, I want websites (= small HTTP traffic) to have a much higher priority than HTTP downloads/uploads (= big HTTP requests/responses).

Have any of you got that to work? Or do you at least have an idea of how I could solve this problem?

Traffic shaping can be tricky, as you cannot totally control what comes in. As far as I know, pfSense does not analyse the packets and throttle the rate at which clients request data, if that makes sense.

One of the most important things is to ensure the ACK packets get the highest priority; otherwise your clients cannot get out the confirmations that they received data, and the server stops sending.
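Under the hood this is PF/ALTQ queueing. Conceptually, ACK prioritisation looks something like the sketch below (the interface name, bandwidth, and queue names are placeholders, not what the wizard actually writes):

    # Sketch of PF/ALTQ priority queueing; em0 and 10Mb are placeholders
    altq on em0 priq bandwidth 10Mb queue { qACK, qDefault }
    queue qACK priority 7
    queue qDefault priority 1 priq(default)
    # the second queue name in "queue (...)" catches empty TCP ACKs
    pass out on em0 proto tcp from any to any keep state queue (qDefault, qACK)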

I usually just run the wizard for multiple LAN/WAN, even though I only have one of each, and my advice to you is to do the same. That gives you a good basic setup, and then you can tweak the rules a bit if you want.

Alas, I do not know how to shape based on the size of the requested file, or whether it is even possible.

I'm not sure you can shape downloads separately from web traffic, as there's no way for the firewall to know the difference. What will help, though, is prioritising DNS and ICMP above everything else; that will at least make web browsing more responsive.
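At the PF level those priority rules amount to something like this (a sketch; qHigh is a placeholder for whatever queue name your shaper config defines):

    # Sketch: match DNS and ICMP into a high-priority queue
    pass out quick proto udp from any to any port 53 keep state queue qHigh
    pass out quick proto icmp from any to any keep state queue qHigh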

Once you have the shaper configured, you also have to configure the firewall to put the traffic into queues; it won't do anything until then. You can check if it's working by going to the Status (or Diagnostics) tab and looking for the Queues menu.
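You can also watch the queues from a shell (Diagnostics > Command Prompt, or SSH); the packet counters should move when traffic is being assigned:

    pfctl -vsq    # show queue statistics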

ICMP and DNS are already prioritized. I did that using the wizard, and I assume firewall rules were created accordingly.

One more idea, though: would it maybe be possible to set up a Squid proxy to help with that? Maybe even with HTTPS decryption, so that it can properly see all HTTP requests?

I'm not sure if it does; have a look at the floating rules tab and check, and also check the queues status page.

Squid will, if anything, slow down your browsing. It's not made to improve performance; it's made to save bandwidth. And if you're the only user, or there are only a small number of users, you're not going to get enough cache hits to make it worthwhile.

Well, Squid is simply an HTTP proxy, isn't it? Caching is only one of its use cases.
And although I haven't used Squid yet, I don't think it would introduce any noticeable latency. I say that because when I run a debugging proxy like Fiddler on my machine (which parses every single HTTP request and response, including decrypting HTTPS), it doesn't cause any noticeable lag, and it's pretty light on the CPU, too.
But the real question is how prioritization like that could be accomplished in Squid. To differentiate between high-priority and low-priority traffic, Squid would only have to parse the Content-Length header of every HTTP request and response. But that's about all I know.
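From what I've read while looking into this, Squid's delay pools are assigned when the request comes in, before the response headers exist, so the reply's Content-Length probably can't be used to pick a pool. Matching likely downloads by URL seems to be the usual approximation; an untested sketch (the regex and rates are made up):

    # Sketch: throttle likely download URLs with a delay pool
    acl bigdownloads urlpath_regex -i \.(iso|zip|exe|mp4|mkv)$
    delay_pools 1
    delay_class 1 1
    # one aggregate bucket: ~500 KB/s sustained, 1 MB burst
    delay_parameters 1 500000/1000000
    delay_access 1 allow bigdownloads
    delay_access 1 deny all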

The floating rules were created automatically btw.

The other problem I ran into is that Squid doesn't appear to be able to do anything with HTTPS. Considering about half the websites out there use SSL these days, that quickly limits the usefulness of something like a Squid proxy.

I imagine that if Squid can't do it, there will be a package that simply decrypts the HTTPS traffic. Usually for something like this you'll have to manually install a CA certificate on every computer and browser whose traffic you want to be able to decrypt.
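In Squid that feature is called ssl-bump. Assuming a build with SSL support and a CA you generated yourself, the squid.conf side looks roughly like this (3.x-era syntax; the cert path is a placeholder):

    # Sketch of ssl-bump interception with on-the-fly certificate generation
    http_port 3128 ssl-bump generate-host-certificates=on cert=/etc/squid/myCA.pem
    ssl_bump server-first all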

For caching it will, as you will be doing multiple cache checks before any data is actually transferred. But if you're not caching, then you probably won't notice any difference. I'm not sure you can get Squid to do QoS, especially on download traffic, as you would already have had to download the data in order for it to be processed. The same goes for the traffic shaper, but with TCP traffic you can tell the sending server to slow down by dropping packets, so in a way you have some QoS control over incoming traffic. It works best for upload traffic, though, where packets are queued and things with a higher priority get to jump the line.

Anyway, the squid3-dev package can do HTTPS, but the last time I tried using it, it didn't work properly because its internal certs were out of date, which caused a lot of HTTPS sites not to work through the proxy. I don't know if that's been fixed since then, but it would only be a matter of time before the new certs become out of date and you have the same problem again. Plus you lose the ability to manually verify a server certificate, as you are relying on the proxy to decide whether the server cert is legit or not.

Are you talking from experience about the caching delay? Because I'm pretty sure I could write an HTTP caching mechanism that does not introduce noticeable delays. If you implement it correctly, the basic information about which responses have been cached will be in RAM.

I'm also pretty sure that Squid can parse the headers, especially Content-Length, before receiving the body; otherwise Squid would run into memory problems extremely quickly in some scenarios.

It is probably possible to generate your own certificates and tell Squid to use those.

The problem is that you have a browser cache below the Squid cache, and every time you want to get something from a web server you have to check each cache, and each cache has to check for a new version. This adds latency, not because of processing or memory speed, but because each cache has to ask the web server if there is a newer version; so whatever your latency to the web server is, you're effectively doubling it. It is a noticeable delay. This is why you should generally not cache something that maintains its own cache, if latency is something you care about (i.e. you shouldn't cache the file system used by a virtual machine, or a database, stuff like that). The benefit of using a web cache is that if you have a lot of users you can reduce the network load by having a local cache, but performance will usually be worse.
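To make that concrete, this is the chain of revalidations I mean when both cached copies have expired (If-None-Match is the standard conditional-request header):

    browser -> Squid:   GET /app.js  If-None-Match: "v1"   (browser copy expired)
    Squid -> origin:    GET /app.js  If-None-Match: "v1"   (Squid copy expired too)
    origin -> Squid:    304 Not Modified
    Squid -> browser:   304 Not Modified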

I don't know if Squid (or some other kind of web proxy) can be used for QoS in the way you are looking for.

The problem is specifically with the squid3-dev package for pfSense: it uses an internal set of CA certs and not the ones used by the system. If there is a way to update them, I wasn't able to figure it out. If you run Squid on Linux, it will just use the system certs, which should always be up to date, but the pfSense package doesn't do this.

What? Are you talking from experience or not? Because you only have to check for a new version once the cached response has exceeded its max-age. You definitely wouldn't do that with every request.
I also don't understand what you mean by "doubling the latency". Once a response is too old, you simply have to make a request to the origin server to get a new one. How would that be doubling the latency?

Yes, this is from personal experience. If you don't have enough users to keep the cache fresh and the hit ratio high, then cached entries will be expiring all the time, resulting in a lot of checks to the web server for newer versions. The increase in latency comes from having multiple caches doing the same thing: you have your browser cache, which has to check, and then when it needs to get the new data it asks the Squid cache, which has to go through the same process.

It's not a massive difference, but it is noticeable. For a single user (or a small group of users), a cache will hurt performance in most cases. But you're talking about using the proxy for QoS, so it's not the same thing; you wouldn't need to use the cache for that, assuming it's possible at all.

First of all, all communication between local clients and pfSense should take a millisecond or less, unless the request/response body is really big or cached on a very slow hard drive.
So once your browser cache is outdated, it will send a new request to the server. This request simply goes through pfSense, and since Squid doesn't have a newer version either, it just passes through the proxy, too. So it is still just one request that gets sent to the server. When the response hits pfSense, it passes through Squid, which should cache it and, at the same time, let it be routed back to you.
Even if you hadn't set up the Squid cache, that request would have had to be sent to the server anyway.

It doesn't make sense that the latency would be doubled here. The only thing that adds latency is that the traffic is sent through the Squid proxy, and I really can't imagine that being anywhere near noticeable unless it's poorly configured.

But yes, I mainly want to use Squid for some kind of QoS. Maybe someone else has an idea of how it could be done.

Kay.