Hi all,
Thought I’d share another quick repo I’ve put together as a result of toying with Plex today on my Threadripper box.
I wanted to test out performance where the NFS share is served from the FreeNAS rig, mounted on the Threadripper box, and passed through to Plex sitting inside a Docker container.
Config is super simple, and this time around I went with a vanilla docker cli config, but I've linked it in the README should anyone here want to contribute by switching it over to a docker-compose YAML definition, which is always less of an eyesore for sure.
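For anyone curious before clicking through, here's a minimal sketch of the docker cli approach. The FreeNAS IP, export path, and volume names below are hypothetical placeholders; the actual config lives in the repo:

```shell
# Hypothetical FreeNAS address and export path; adjust for your setup.
# Create a Docker volume backed by the NFS export on the FreeNAS rig.
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.50,nfsvers=4 \
  --opt device=:/mnt/tank/media \
  plex-media

# Run Plex from the official image, mounting the NFS-backed volume read-only.
docker run -d --name plex \
  --network host \
  -e TZ="UTC" \
  -v plex-config:/config \
  -v plex-media:/data:ro \
  plexinc/pms-docker
```

The nice part of the named-volume approach is that the NFS mount is handled by Docker itself, so nothing needs to be mounted on the host first.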
CC @SgtAwesomesauce more Docker fun…
Oh, it goes without saying, I didn't want to use the FreeNAS Plex plugin… I prefer isolated systems.
I’m actually getting ready to move my FreeNAS server over to ZFSOnLinux, so this is perfect for me. Thanks!
@bsodmike: making my life easier, one github repo at a time.
Ha, my pleasure!
What I plan to do is fire up another VM on bhyve
and have that load up my media volume over NFS. I essentially PoC'd the same setup, just using the TR box instead. In any case, once I push the Docker image to AWS ECR, the container is just one bash script away from moving across any instance of my choice, really.
Once you start playing and enjoying the convenience of pushing/pulling to a private registry (AWS ECR offers this at a low cost), it quickly becomes really powerful.
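The push side is roughly the script below, assuming AWS CLI v2; the account ID, region, and repo name are hypothetical placeholders:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical account/region/repo; substitute your own values.
REGION="us-east-1"
ACCOUNT_ID="123456789012"
REPO="plex-nfs"
TAG="latest"
REGISTRY="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"

# Authenticate Docker against the private ECR registry.
aws ecr get-login-password --region "${REGION}" \
  | docker login --username AWS --password-stdin "${REGISTRY}"

# Tag the local image and push it up.
docker tag "${REPO}:${TAG}" "${REGISTRY}/${REPO}:${TAG}"
docker push "${REGISTRY}/${REPO}:${TAG}"
```

On the target instance it's the same login followed by a `docker pull`, which is the "one bash script away" part.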
Sort of OT…
A while back I was watching an AWS keynote where Riot Games detailed their ‘magic sauce’ and how they leverage Docker. Just based on that, I set about building something very similar, distilling the essence of it:
Advantages
- Environment is guaranteed at run time (the image ships with its dependencies)
- Able to easily move containers around using private registry
Hard stuff
- Tackling container security. I personally prefer taking a HashiCorp Vault-based approach like Kickstarter has; this is my favourite talk on the subject.
Harder stuff, when in a ‘cluster’
- May need further magic a la Kubernetes etc. Basically, you need some internal fabric to auto-route traffic to the ‘right’ container(s) inside the cluster.
- When in a cluster, you’d want each container piping logs to an ELK stack and/or a Grafana/Kibana stack for real-time metrics. Chances are you’d need both.
- Side-loading data into the container at run time; I’m looking at using S3 as an ingress source.
- There are still some aspects I haven’t looked at 100% of course.
The points above aren’t exhaustive by any means…
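For the S3 side-loading point, the shape I'm considering is an entrypoint that syncs data down before the main process starts. Bucket and path names here are hypothetical, and this assumes the container has IAM credentials granting `s3:GetObject` on the bucket:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical bucket/paths; the container's IAM role (or mounted
# credentials) must allow reading from this bucket.
S3_SOURCE="s3://my-app-assets/config/"
DEST="/app/config"

# Pull the latest data into the container at run time.
mkdir -p "${DEST}"
aws s3 sync "${S3_SOURCE}" "${DEST}"

# Hand off to the container's main process (whatever CMD was given).
exec "$@"
```

Wiring that in as `ENTRYPOINT` keeps the image itself generic, with the data injected per-environment at start-up.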
Would be keen to hear what’s involved and how you plan to go about it - looks like it’s not ‘as straightforward’ for the uninitiated, and my ZFS use has been rather limited till now. Mostly tinkering with zpool
at the cli is about as far as I’ve gone…
You have to rebuild the zpool. BSD ZFS and Linux ZFS are a bit different, and usually the BSD version is ahead by a couple feature flags.
I’ve picked up 2 of those WD Easystore 8TB drives (the ones that people shuck to get Reds) and I’ll be backing up my data there and rebuilding the array after installing Fedora.
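The rough shape of that backup-and-rebuild, sketched with hypothetical pool, dataset, and device names (not my exact commands):

```shell
# On the BSD side: build a backup pool on the shucked drives,
# snapshot everything, and replicate it over.
zpool create backup mirror /dev/ada4 /dev/ada5
zfs snapshot -r tank/media@migrate
zfs send -R tank/media@migrate | zfs recv -F backup/media

# After installing Fedora with ZFS on Linux: recreate the main pool
# and restore from the backup pool.
zpool create tank mirror /dev/sda /dev/sdb
zfs send -R backup/media@migrate | zfs recv -F tank/media
```

The one gotcha is feature flags: the rebuilt pool should only enable flags both implementations support if you ever want to import it back on BSD.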