I have a set of services on a small machine that have to be started only if the NAS share is mounted. This means that when the machine boots up it has to mount the NAS first and then start the other services.
Is there another/better solution than writing a script that checks if the NAS is mounted? Is there a good general way to deal with this kind of thing ("do this, then that, only if...") at the system-services level?
Btw, I'm on Ubuntu Server on this machine.
I'm curious to hear your suggestions.
If it's in the fstab, the boot should fail if it can't mount the NAS first, or there should be a way to make it fail if it doesn't. I don't know if it's different for network-mounted devices, but I imagine there's a way to configure that.
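For what it's worth, fstab options control exactly this for network filesystems. A sketch of an NFS entry, with host and paths as placeholders:

```
# /etc/fstab -- example NFS entry (host/paths are placeholders)
# _netdev: wait for the network before mounting
# nofail:  let the boot continue even if the mount fails
#          (without it, a failed mount can hold up or fail the boot)
nas.example.com:/export/share  /mnt/nas  nfs  _netdev,nofail  0  0
```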
On the other hand, it could be as simple as
```
if mount | grep -q '/mountpoint' ; then
    # it's mounted, start the service ...
fi
```
Good info about the fstab, but it still would not shut down the services cleanly if the NAS goes down during operation. Also it would be neat to boot anyway and show a message on a web interface (via lighttpd, for example).
I thought about the shell script version too, but I am not sure how to implement the shutdown on NAS failure/mount unavailability.
You should be able to do this with systemd (if you're on a version of Ubuntu which uses it). Add the NAS mount as a dependency of the services. That way it won't start them until the NAS share is mounted.
I can't help you with the specifics though as I don't understand systemd well enough for that.
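For reference, such a dependency can be expressed as a systemd drop-in. A sketch, assuming the share is mounted at /mnt/nas and the service is called myservice (both placeholders):

```ini
# /etc/systemd/system/myservice.service.d/nas.conf
[Unit]
# Require and order after the mount unit systemd generates for this path
RequiresMountsFor=/mnt/nas
# Stop the service again if the mount goes away
# (systemd names the unit after the path: /mnt/nas -> mnt-nas.mount)
BindsTo=mnt-nas.mount
After=mnt-nas.mount
```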
Any good references for systemd? I am on 16.04 so systemd is an option for sure.
Not that I can think of off the top of my head, but if you find a good one let me know. I really need to find a way to get the network to start before the cryptdisks service but so far nothing I've tried has worked.
What protocol are you using for the mounts? A network device just vanishing is going to be dealt with differently depending on that. It's very difficult to do this cleanly without something like Gluster or DRBD, but I'm assuming you want to write data to the mount. If it's read-only (enforce this in the mount options too) it's less of an issue.
In general, if your storage may randomly become unavailable a synchronizer is always going to work better (owncloud, syncthing etc)
My approach to the mount is to have a marker file, e.g. ~/remote/.not-mounted, and a script that:
- checks for 'remote' and creates it if it's not there
- checks for the .not-mounted marker and creates it if it's not there
- checks for .not-mounted and mounts if it finds it
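The steps above can be sketched like this. The marker lives on the local filesystem, so once the share is mounted over the directory the marker is hidden: seeing it means "not mounted". Paths and the sshfs host are assumptions, not from the thread:

```shell
#!/bin/sh
# Marker-file mount check (a sketch; paths and host are placeholders).

needs_mount() {
    # success (0) when the marker is visible, i.e. nothing is mounted here
    [ -e "$1/.not-mounted" ]
}

DIR="${1:-$HOME/remote}"
mkdir -p "$DIR"                                    # create 'remote' if not there
mountpoint -q "$DIR" || touch "$DIR/.not-mounted"  # create marker if not mounted
if needs_mount "$DIR" ; then
    echo "share not mounted at $DIR"
    # sshfs user@nas:/export "$DIR"                # host/remote path placeholders
fi
```

The `mountpoint -q` guard keeps the script from accidentally creating the marker on the remote share when it is already mounted.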
This was for sshfs, and I kept having issues when the laptop went into standby, but I moved to syncthing rather than fix it.
However, in general, have a second script running as a service that uses ping, netcat, nmap or something to verify the host, and you could use Zenity to create an info box or a dialog to deal with it. At least initially, "I think it's gone, do you want me to get on with these kill commands?" is advisable.
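A minimal sketch of such a watchdog check, using netcat; the host, port and the stop command are assumptions, not from the thread:

```shell
#!/bin/sh
# Watchdog sketch: verify the NAS answers before deciding the share is alive.

nas_alive() {
    # succeed if a TCP connection to host $1 port $2 works within 5 seconds
    nc -z -w 5 "$1" "$2"
}

if ! nas_alive "nas.example.com" 2049 ; then       # 2049 is the standard NFS port
    echo "NAS unreachable, stopping dependent services"
    # systemctl stop myservice.service             # service name is a placeholder
fi
```

Run it from cron or a systemd timer; on a headless server, a log line or web-interface flag replaces the Zenity dialog.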
I'm using NFS for the file mounts; the data lives on a FreeNAS box.
The idea would be to send a notification that something is wrong if the storage is not available, and to stop the services to avoid further problems.
Good comment about the synchronizer. In my case it would be impractical, as I use the share to write downloads to it, which are then used by another program on another server. Using a synchronizer would mean a lot of data duplication.
Thanks for the input @Dexter_Kane
NFS, unless it's improved in the last few years, does not deal with disconnects very well.
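Client-side mount options are the usual lever for disconnect behavior; a sketch of an fstab entry (host and paths are placeholders, and note that soft mounts can lose in-flight writes):

```
# soft: return an I/O error after the retries below instead of hanging
#       forever (hard is the safer default when writing data)
# timeo=150: 15s per attempt (the unit is tenths of a second), retrans=3 retries
nas:/export  /mnt/nas  nfs  soft,timeo=150,retrans=3,_netdev  0  0
```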
rsync has a `--remove-source-files` option and you can use `inotify-tools` to trigger it, but there was a weakness: if a file was added during the rsync run, it would not get pushed up to the server until another one was added. You could use `netcat` to check for an open port, or interpret the rsync exit codes.
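A sketch of that inotify-triggered push; SRC and DEST are placeholders, and it requires the inotify-tools and rsync packages:

```shell
#!/bin/sh
# Push files to the NAS as they finish downloading, then delete local copies.

SRC="$HOME/downloads/"
DEST="user@nas:/export/downloads/"

sync_loop() {
    # block until a file finishes being written under SRC, then push it
    while inotifywait -r -e close_write "$SRC" ; do
        rsync -a --remove-source-files "$SRC" "$DEST"
        # weakness noted above: files created while rsync runs wait for
        # the next inotify event before they get pushed
    done
}

# sync_loop    # uncomment to run; loops forever
```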
@Buckshee Wow, this is what I call news hot off the press.
How do I get that version of systemd now? XD (I guess I'd have to build it from source)
I was surprised too
Upstart has had `mountall` for a while.