pfSense Squid Cache Proxy with JBOD?

Good day everybody!
I've been reading up on this and have seen that JBOD is great for the local Squid cache.
I currently have pfSense with Squid Cache on a single 1TB drive, but I have two additional 250GB drives completely available that could be used to spread the cache across more platters.
My question is: can this be done within pfSense somehow, and can someone point me to some info on how to set it up? So far I have seen how to move the Squid cache to a different disk, which I know will give a performance boost, but I am unsure how to use both of the 250GB drives and a portion of the 1TB drive as effectively three cache disks. If I understand what I have read correctly, that would in fact be the best setup.
I do NOT have a RAID controller, but from what I have read it doesn't sound like I need one (???); rather it seems like just a configuration issue in Squid...?
The hardware is just sitting around, so if I can use it to make the cache a bit faster I might as well.
I realize that if one disk goes down then Squid turns off and I will have to either modify the cache or replace the disk; that's OK with me.

Any suggestions or how-to info?
Sorry if this is posted elsewhere, I did a quick check and didn't notice it yet...
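
From what I've read, I think the end result in squid.conf is just multiple cache_dir lines, one per mount point, something like the following (the paths and sizes are only my guess at how I'd lay it out):

cache_dir ufs /var/squid/cache1 200000 16 256
cache_dir ufs /var/squid/cache2 200000 16 256
cache_dir ufs /var/squid/cache3 200000 16 256

What I don't know is how to get the extra disks partitioned and mounted in the first place.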

just add them to the zfs pool.

http://prefetch.net/blog/index.php/2006/12/26/adding-disks-to-zfs-pools/
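
if the box were already on ZFS it's basically a one-liner; the pool and device names below are just placeholders:

zpool add tank da1 da2
zpool status tank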

I'm using UFS; this box doesn't have a helluva lot of RAM, only 4GB, so I steered clear of ZFS.

You should only need about 1GB of RAM per TB of storage with ZFS normally. Are you saturating the box as is?

Regarding UFS, I think the procedure is to partition and format the new device, then mount it someplace; growfs only grows an existing filesystem into a larger underlying partition, so it won't spread one filesystem across the new disks by itself.

Haven't used UFS in a (long) while though, so I'd check out the doc and don't take my word for it:

https://www.freebsd.org/doc/handbook/disks-adding.html

this guide may help as well:
http://www.timharrison.me.uk/guides/pfsense-firewall/adding-a-second-disk-to-pfsense
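
Roughly, following the handbook, bringing one of the 250GB disks online under UFS looks something like this (da1 and the mount point are placeholders for your actual device and Squid cache location):

gpart create -s gpt da1
gpart add -t freebsd-ufs -a 1m da1
newfs -U /dev/da1p1
mkdir -p /var/squid/cache2
mount /dev/da1p1 /var/squid/cache2
echo '/dev/da1p1 /var/squid/cache2 ufs rw 2 2' >> /etc/fstab

Then repeat for the other disk and point the Squid cache at the new mount points.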

You'll save yourself a lot of pain later if you just reconfigure with ZFS though; the overhead isn't that bad and it is a lot more flexible for things like this.

just set up a pool across the three disks as a plain stripe (the no-redundancy option) and you're good to go.
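
Done by hand it would be something along these lines; the pool name, device names, and the spare partition on the 1TB disk (da0p4 here) are all placeholders:

zpool create -m /var/squid/cache cachepool da1 da2 da0p4
zpool list cachepool

Then point Squid's cache_dir at /var/squid/cache and let ZFS spread the data across the vdevs.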

Normally LVM would be the go-to for something like this, but it's not supported on FreeBSD/pfSense, and a plain LVM concatenation wouldn't actually help much here anyway if read speed is the priority.

You could use software RAID 0 (GEOM stripe in BSD terms). There is probably not a way to set it up in the GUI, but there is a guide in the FreeBSD handbook.

I think normally the usable size equals the smallest disk times the number of disks, so with the two 250GB drives plus the 1TB you'd get roughly 3 × 250GB = 750GB and sacrifice about half the total capacity of the three, but it should be faster.
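
A rough sketch of the handbook procedure, with placeholder device names (using a spare partition, da0p4, for the 1TB disk's share):

kldload geom_stripe
gstripe label -v st0 /dev/da1 /dev/da2 /dev/da0p4
newfs -U /dev/stripe/st0
mount /dev/stripe/st0 /var/squid/cache
echo 'geom_stripe_load="YES"' >> /boot/loader.conf

The last line just makes sure the stripe gets assembled again after a reboot.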

you know ZFS solves everything LVM does here much more cleanly, right?

losetup /dev/loop1 my_disk0.img                    # expose the disk image as a block device
pvcreate /dev/loop1                                # initialize it as an LVM physical volume
vgextend vgmy_disk /dev/loop1                      # add it to the volume group
lvextend -L+10G /dev/vgmy_disk/my_logical_volume   # grow the logical volume by 10G
e2fsck -fy /dev/vgmy_disk/my_logical_volume        # check the filesystem before resizing
resize2fs /dev/vgmy_disk/my_logical_volume         # grow the ext filesystem to fill the LV

vs.

zfs set quota=20G my_disk

"Best" is relative here.

Also, the ZFS equivalent of JBOD (a pool of single-disk vdevs of different sizes) doesn't sacrifice capacity.

I suppose based on my experience I only tend to recommend ZFS when the data-integrity aspect is required, mainly because its performance on the same disks often seems worse than other solutions for data not in the ARC. Since Squid does its own in-memory caching, the ARC doesn't really add much value here.

However, I have not used the striped vdev configuration or run this specific workload, so maybe I'm off base here. I do agree that ease of setup and management is a useful feature.

AAAA damn, I thought the ZFS memory requirement was higher, so I will have to look into using that instead.
RAID 0 isn't recommended for these from my understanding, so I will stay away from that.

Thanks y'all, I'll check that out and maybe reinstall today.

Sorry for the stupid question, I should have just read about ZFS more carefully :(