Need some suggestions for a NAS build


Aaaand that’s the hardware. Next week is going to be a software week, getting up and running with TrueNAS Scale.



Well, it’s now built; now I only have to get up and running with TrueNAS Scale… Not the easiest to use, and I really need a replacement for the DSM File UI…

Well, turns out TrueNAS Scale doesn’t let me use the iGPU… Also tried TrueNAS Core, which doesn’t seem to find the NIC, soooo… It seems like I’ll be using Ubuntu and doing storage on my own… I also thought about using CephFS… maybe I’ll give that a try; I can mount it on Windows, Linux and Kubernetes, and it has a proper API, which I should be able to use, I think…
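
For reference, a minimal sketch of what the Linux side of a CephFS mount would look like with the kernel client; the monitor address, mount point and keyring path are placeholders (Windows would go through ceph-dokan and Kubernetes through the Ceph CSI driver):

```bash
# Minimal CephFS kernel-client mount sketch; host and paths are placeholders.
sudo apt install ceph-common            # provides mount.ceph and the client tooling
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph mon1.lan:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret
df -h /mnt/cephfs                       # should show the CephFS capacity if the mount worked
```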

Realtek LAN… Use the kmod package, although I would highly recommend getting another NIC. There have been reports of data corruption and/or crashes on both FreeBSD and Linux with older kernels etc., so you probably want to look at something relatively recent.
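
For anyone following along, on plain FreeBSD the vendor-driver route looks roughly like this; on TrueNAS Core you would set the loader tunable through the web UI rather than editing loader.conf, so treat it as a sketch, not a tested recipe:

```bash
# Install the vendor Realtek driver so it loads instead of the in-kernel re(4) driver.
pkg install realtek-re-kmod
# The package's post-install message lists the exact loader.conf lines; roughly:
echo 'if_re_load="YES"' >> /boot/loader.conf
# Reboot, then confirm which module claimed the NIC:
kldstat | grep if_re
```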

Well, it doesn’t solve the core problem I think: with Core I’d have to use a VM for Docker, hope that I can pass the GPU through to it, and get the storage into the VM (likely via NFS, because apparently it can’t map storage directly).
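
If it does come to the VM-plus-NFS route, the guest side is at least simple; a sketch with placeholder addresses and dataset paths:

```bash
# Inside the (Ubuntu) VM: mount a TrueNAS NFS export; IP and paths are placeholders.
sudo apt install nfs-common
sudo mkdir -p /mnt/media
sudo mount -t nfs 192.168.1.10:/mnt/tank/media /mnt/media
# Persist it across reboots:
echo '192.168.1.10:/mnt/tank/media /mnt/media nfs defaults,_netdev 0 0' | sudo tee -a /etc/fstab
```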

Also, at least Ubuntu and TrueNAS Scale use 5.15, and the NIC was completely fine. I know it’s not the greatest, but it did the job and did it well on TrueNAS Scale.

The easiest thing would probably be to get TrueNAS Scale to find the iGPU and give me the /dev/dri/renderD128 device; then I could just use that and everything would be fine. But even with manually installed drivers (which I know you’re not supposed to do), I couldn’t get the DRM device to work.

You’re using pretty much bleeding-edge hardware; there simply isn’t much support for it in older LTS releases.

I’m pretty sure that when I try Ubuntu with the same kernel version it’ll work; VA-API is a pretty old API and the i915 driver should handle it well. I think it’s down to TrueNAS doing something with their kernel and/or drivers, I mean I didn’t even get a /dev/dri directory… It’s not like it wasn’t working or had an error, it just straight up ignored it. Not even a log entry about it, and as far as I can tell it wasn’t even properly in lshw, it just showed up as a basic VGA adapter and KMS just straight up didn’t seem to work…
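
For what it’s worth, these are the generic checks I’d use to see whether i915 actually claimed the iGPU (nothing TrueNAS-specific about them):

```bash
ls -l /dev/dri/                              # expect card0 and renderD128 if the driver bound
lspci -nnk | grep -iA3 'vga\|display'        # "Kernel driver in use:" should say i915
sudo dmesg | grep -i i915                    # probe messages, if the driver even tried
```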

Edit: it seems you are right and I am wrong; that kernel version seems to be old enough that I need to add a kernel parameter… (potentially, I still need to confirm that)
Edit 2: So yes, basically the kernel was, as expected, new enough, but the iGPU just wasn’t enabled by default; i915.force_probe=4680 fixed that… Jellyfin (so basically ffmpeg) is working perfectly now.
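
For anyone landing here later: on a stock Debian/Ubuntu-style install the parameter would be added via GRUB roughly as below (TrueNAS SCALE manages its boot arguments its own way, so this is only an illustration), and vainfo/ffmpeg can confirm the result; the input file name is just an example:

```bash
# Add the parameter to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet i915.force_probe=4680"
sudo update-grub && sudo reboot

# After reboot, verify the render node and VA-API (vainfo is in libva-utils):
ls -l /dev/dri/renderD128
vainfo --display drm --device /dev/dri/renderD128

# Quick transcode smoke test, roughly what Jellyfin/ffmpeg does under the hood:
ffmpeg -vaapi_device /dev/dri/renderD128 -i test.mkv \
       -vf 'format=nv12,hwupload' -c:v h264_vaapi test-out.mkv
```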

Is there any good software besides Nextcloud to access your files from a browser? With Synology you could always browse, download and upload files via the web GUI and also generate share links. I know Nextcloud has many of the same features, but it has its own storage backend, and it seems like it can’t use my regular home folder or any other dataset.


A bit bare bones; I’ll have to think of something regarding authentication, and sharing doesn’t seem to be a thing, but it’s a start for sure.

In the readme, it says you can change the authentication method of the File Browser server.
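
If I read the docs right, the switch happens through File Browser’s own CLI against its config database; something along these lines (the database path and header name are just examples):

```bash
# Point the CLI at the same database the server uses (path is an example).
filebrowser -d /srv/filebrowser.db config set --auth.method=proxy --auth.header=X-Auth-User
# ...or turn File Browser's own login off entirely and leave auth to the reverse proxy:
filebrowser -d /srv/filebrowser.db config set --auth.method=noauth
```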

I believe that to prevent the command runner from executing, you need to set the users’ shell to /bin/false or /bin/nologin, the latter being preferred. If it is only for yourself, it should be fine.
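
Concretely, that would just be something like this (username is an example; the nologin path varies by distro):

```bash
sudo usermod -s /usr/sbin/nologin alice    # Debian/Ubuntu path
# sudo usermod -s /sbin/nologin alice      # RHEL-family path
getent passwd alice                        # confirm the shell field changed
```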

Yeah I’ve seen that, would’ve been great though if I could have done it via pem pam, because I don’t have SSO currently (still in progress, I’ll work on it some day…). I’ll probably check if I can use the webserver auth with pem pam…


If it uses Apache or nginx in the backend, it should be doable to use server-side authentication and use pem.

Meant to say pam not pem, but should still be possible I think

Funnily enough I was also thinking of pam and wrote pem, probably because I saw it above.
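
Rough sketch of what I have in mind, assuming the Debian packaging of nginx’s PAM module and File Browser listening locally on port 8080 (untested; the header name is whatever File Browser is configured to expect):

```bash
sudo apt install libnginx-mod-http-auth-pam
# PAM service the module will use; plain local Unix accounts here.
printf 'auth required pam_unix.so\naccount required pam_unix.so\n' | sudo tee /etc/pam.d/nginx
# pam_unix needs to read /etc/shadow, so the nginx worker user usually has to join the shadow group:
sudo usermod -aG shadow www-data
# Relevant part of the nginx site config:
#   location / {
#       auth_pam              "File Browser";
#       auth_pam_service_name "nginx";
#       proxy_set_header      X-Auth-User $remote_user;
#       proxy_pass            http://127.0.0.1:8080;
#   }
sudo nginx -t && sudo systemctl reload nginx
```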

Am I correct in the assumption that a cache drive also improves directory browsing (if the directories are cached, of course)? Just switched TrueNAS itself from the 1TB NVMe to a 250GB SATA SSD and added the NVMe as a cache drive.

No. Caching the data itself won’t help you browse directories faster. For that you need a metadata special device on faster storage media (on SSDs). That way, the directory tree and other small but important stuff will reside on SSDs.

Say you’ve got a ZFS metadata special device, data on HDDs but cached in ARC, i.e. RAM (you don’t need L2ARC if your files are small enough to live in RAM, and even then, unless you’ve got a buttload of users, access times would still be decent, making the experience OK). You will be browsing the directory structure on an SSD array and getting the data from RAM. Without the special device on flash, you would be browsing the data on spinning rust and getting the files from ARC (if cached).
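
In command form, with placeholder pool and device names, adding such a special vdev looks roughly like this (note that a special vdev becomes pool-critical, so it should be mirrored, and on raidz pools it generally can’t be removed again):

```bash
sudo zpool add tank special mirror \
    /dev/disk/by-id/nvme-SSD_A /dev/disk/by-id/nvme-SSD_B
# Optionally also steer small records (here anything <= 64K) onto the special vdev:
sudo zfs set special_small_blocks=64K tank/share
zpool list -v tank        # the special mirror now shows up as its own vdev
```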


So directories will not land in L2ARC/RAM? Normally they should behave like files and be cached if often accessed, or do they receive special treatment? If so, maybe I should swap the L2ARC for a mirrored special metadata device, although I’d of course need another 980 NVMe…

I believe I mentioned it around the start of the thread. Directories themselves are not stored in ARC or L2ARC. You don’t know if and what you will browse, so the file system just scans/browses through the data on the array; it doesn’t cache anything like that. If an access matches a file that is already in ARC, it serves you the cached file, but browsing the directory structure will be as fast as the spinning rust can fetch the data.
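
If you do end up swapping, the mechanics are straightforward, since cache devices can be detached at any time; a sketch with placeholder device names (the second NVMe is hypothetical, and existing metadata only migrates to the special vdev as it gets rewritten):

```bash
zpool status tank                                    # find the cache device's exact name
sudo zpool remove tank nvme0n1                       # L2ARC devices can be removed live
sudo zpool add tank special mirror nvme0n1 nvme1n1   # re-add as a mirrored special vdev
```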
