So this all started with me wanting to set up a NAS for home use, and after some discussion and tons of help from people here on this forum I concluded that I wanted to use the guides made by Alex Kretzschmar and his Perfect Media Server project. You can find the original thread here: (solved) Noob asking for some help: Setting up a Linux NAS - #21 by Rainmaker91
Suffice it to say, I am still very much a noob, though I am slowly learning things as I go along. The main issue is that I still have no idea how things work at a basic level, and I am fairly certain that is what is hindering me at this point. So far I have managed to get through the Bare Metal Manual Install guide well enough with the help of some tutorials on https://www.digitalocean.com combined with plain old trial and error.
Right now though I am stuck at the point where I am told to mount the drives in fstab. I have so far tried opening the file using the following lines in the terminal:
$ sudo su
$ gedit /etc/fstab
Which did in fact open the file for me. At that point I copied the examples he listed there and swapped out the drive IDs for the ones in my own setup. This all seemed to sort of work: when I ran the mount -a command, one of the drives actually showed as mounted in gnome-disks, which was fun to see. Except it was the only one of the three drives that showed as mounted, and another drive just sat there making a constant ticking noise (as if the arm were attempting to read something but not actually getting anywhere). Suffice it to say I got worried and tried to make it stop, which I didn't know how to do. So I decided to reinstall Ubuntu yet again and start from scratch. Now the drive seems to work fine again, but I have yet to actually restart the PMS setup, so who knows.
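For anyone following along, the fstab entries I was copying looked roughly like this. This is only a sketch: the by-id names and mount points below are placeholders, not my actual drives.

```
# <device>                                <mount point>  <type>  <options>  <dump>  <pass>
/dev/disk/by-id/ata-EXAMPLE_DISK1-part1   /mnt/disk1     ext4    defaults   0       2
/dev/disk/by-id/ata-EXAMPLE_DISK2-part1   /mnt/disk2     ext4    defaults   0       2
/dev/disk/by-id/ata-EXAMPLE_DISK3-part1   /mnt/disk3     ext4    defaults   0       2
```

After saving, sudo mount -a should mount everything listed, as long as the mount point directories exist first (e.g. sudo mkdir -p /mnt/disk1).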
With that in mind, I figure I should ask how to proceed since I don’t want the exact same issue to occur again.
Oh and for those who want to follow the guide, you can find it here:
So I think you may have two separate issues here. One being with fstab and the other being a drive failure. I’m going to focus on the ticking noise with my response as I think it needs to be addressed first before moving on with troubleshooting your mount points (and others are probably more qualified to help with that too).
If it’s a rhythmic ticking noise it’s almost certainly a failed drive. Here’s a link on how to check the SMART status of the drive and run some self-tests from the command line using smartctl. Or if you find it easier, you can attach the drive to a workstation and use GSmartControl, which is a GUI front end to smartctl.
If using the command line, you can match the S/N printed on the drive with the output of
sudo smartctl -i /dev/sdX (with sdX being the device name).
Edit: You can get a list of device names with
sudo fdisk -l
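Putting those together, a typical sequence might look like this (sdX is a placeholder for the device name, and smartctl comes with the smartmontools package):

```
sudo fdisk -l                     # list devices and their names
sudo smartctl -i /dev/sdX         # identity info, including the serial number
sudo smartctl -H /dev/sdX         # overall SMART health assessment
sudo smartctl -t short /dev/sdX   # start a short self-test (a couple of minutes)
sudo smartctl -a /dev/sdX         # full attributes plus the self-test log afterwards
```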
Honestly I think I was just being dumb; I figured things out after a while. I had to go back and mount all the drives properly, not just sdc like I did the first time. As for the drive issue, I can now hear it much more clearly and it’s more akin to the drives working their butts off, which I take to be part of the process of allocating the space from the drives into the mounted areas. In this case the following:
(I couldn’t be bothered to copy the actual IDs in here since they’re on the server and not the desktop where I’m writing this, so the drive names are just substitutes.)
Sorry for the confusion here, but this is really just what happens when a noob attempts these things, I suppose. I also removed an old 2 TB drive that has issues running its SMART check, which seems to be a common issue with WD drives. I’ll connect it to my Windows machine and see if the WD software can fix it, as many claim it can.
Edit: As far as listing the drives goes, I used the following command. Though I don’t know if it’s even remotely the same since I specifically was after the drive ID:
Normally UUIDs are used in /etc/fstab, as listed by sudo blkid. The drive IDs found in /dev/disk/by-id seem too reliant on the whims of the manufacturers, and problematic with hardware failures.
I much prefer labelling partitions reasonably and using those labels in /etc/fstab. I can read and remember them, and being shorter they make it possible to line up the columns, which makes a lot of errors obvious at a glance.
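For what it’s worth, on ext4 a label can be set with e2label (other filesystems have their own tools, e.g. xfs_admin -L for XFS). The device /dev/sdb1 and the label data1 below are placeholders:

```
sudo e2label /dev/sdb1 data1   # set a short label on an ext4 partition
sudo blkid /dev/sdb1           # confirm: output should now include LABEL="data1"

# corresponding /etc/fstab line, using the label instead of a UUID:
# LABEL=data1   /mnt/data1   ext4   defaults   0   2
```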
After quite a few reinstalls I ended up using UUIDs instead of the disk IDs. I am unsure how I would go about labelling the partitions so that I can use them like you mentioned, but for now UUIDs seem like a better solution than what I was doing before.