I am currently setting up a FreeBSD-based cloud server for my friends and me, so that we can push our ZFS datasets and general backups off site. Each user basically gets one dataset with a quota to work in.
I would like to restrict the visibility of the datasets, so that each user can only see their own dataset and no others. Despite having quite a bit of experience with ZFS, I don’t recall a way to achieve this.
Does anybody have an idea as to how to realize this? Any help would be greatly appreciated.
Thanks in advance.
So rather than just using permissions on the mount points and sharing a flat remote folder per user, you would rather have each user able to send their own dataset to a central pool?
Sounds fun, but I don’t have a scoobie how one would do that; I’d also be interested in hearing whether it is practicable.
Would you like to add the zfs tag to the thread too? In case it helps pull in some of the amazing Big Brains here…
What will they be using to access these filesystems? What operations will they be doing on it?
For example, a directory can be traversable without being readable. You can give it --x permissions, and they would have to guess the name of a subdirectory to be able to cd into it or access files through it…
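To illustrate the traverse-only trick, here is a tiny sketch; the /tmp/demo path and the alice directory are made-up examples, not anything from this thread:

```shell
# Made-up layout: one subdirectory per user under a shared parent.
mkdir -p /tmp/demo/backups/alice
# rwx--x--x: others can traverse the parent but cannot list its contents,
# so they would have to guess a subdirectory's name to reach it.
chmod 711 /tmp/demo/backups
# Only the owner can enter their own tree at all.
chmod 700 /tmp/demo/backups/alice
```

With that in place, `ls /tmp/demo/backups` fails for other users, while `cd /tmp/demo/backups/alice` still works for alice herself.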
There’s also jail, chroot, and friends, where you can install another instance of FreeBSD, with just the bare minimum utilities the users are expected to need, into a directory, and run server software pretending that directory is the root of the filesystem. Various software has different levels of support for this setup. At worst, you give each jail its own network IP and keep a network daemon permanently running in it, wasting a bit of RAM most of the time. At best, you run one server on the host, and when a user authenticates to it, the server forks and transfers itself into the jail, then stops once the user disconnects.
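A very rough sketch of a per-user jail definition, in case it helps; every name and path below is an assumption for illustration only:

```
# /etc/jail.conf sketch; jail name, hostname, and paths are made up
alice {
    path = "/jails/alice";
    host.hostname = "alice.backup.example";
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
    allow.mount;
    allow.mount.zfs;      # needed if the jail should mount ZFS datasets itself
    enforce_statfs = 1;   # hide mount points that live outside the jail
}
```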
So, what software/protocols do you expect them to use?
I want to give the users (who are all sysadmins themselves, and I trust each and every one of them) the ability to manage their ZFS datasets themselves and to send and receive replication streams. I can allow and unallow specific ZFS functions per dataset, so that they can only mess with their own dataset. But what I apparently can’t do is set ownership on the datasets themselves, so that each user can only see their own dataset and those below it.
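For reference, the delegation part I mean looks something like this on FreeBSD; the pool, dataset, and user names here are made up for illustration:

```shell
# Per-user dataset with a quota; names are assumptions
zfs create -o quota=500G tank/backups/alice
# Delegate day-to-day operations to that user, on their subtree only
zfs allow -u alice create,destroy,mount,snapshot,hold,send,receive tank/backups/alice
# Inspect what has been delegated
zfs allow tank/backups/alice
# Note: non-root mounting on FreeBSD also needs sysctl vfs.usermount=1
```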
I thought I did add the zfs tag. Could you double check? I’m fairly new here and I am still getting used to the UI.
Anyway, thanks for your input!
Some are just rsyncing via SSH, but others (including me) are replicating their ZFS datasets. It’s basically just a backup server, nothing fancy, just off site on a rented datacenter server.
Good point. Permissions are already set to 0700.
Basically, I am very picky. Many of my own directories are datasets, so half of my directory structure is mapped in datasets. Nobody can see the files inside the datasets (they are not even mounted), but they can see the names of all datasets, and that’s what I want to limit.
That’s actually a really good idea. I could set up jails for everyone and pass each user’s ZFS dataset into their jail. They could play around inside their jail all day long and would not even see the outside datasets. I hope I’ve got that right; please correct me if I haven’t. But this sparks some ideas.
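Handing a dataset to a jail can be sketched roughly like this; the jail and dataset names are assumptions:

```shell
# Mark the dataset as manageable from inside a jail
zfs set jailed=on tank/backups/alice
# Attach it to the running jail named "alice"
zfs jail alice tank/backups/alice
# Inside the jail, alice can now mount and manage tank/backups/alice,
# provided the jail was started with allow.mount.zfs and enforce_statfs=1.
```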
Thanks for that! I really appreciate it!
Until now everybody is connecting via SSH and either just plain copying, rsyncing or ZFSing their stuff into their dataset. Nothing else until now.
Thanks again for your input!
In both of these cases you’re using SSH to authenticate the client and spawn some kind of command.
SSH has historically had good support for being locked down so that only certain commands and utilities can be run; rsync itself also has some options for chroot-type environments.
Theoretically you could set up a user key that can only ever be used to run a subset of zfs receive commands (see restrict in the sshd man pages).
Something similar can be set up for rsync.
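As a sketch, a locked-down authorized_keys entry could look like the following; the dataset name is an assumption, and the key material is elided:

```
# ~/.ssh/authorized_keys on the server, one line per client key
restrict,command="zfs receive -u tank/backups/alice/incoming" ssh-ed25519 AAAA… alice@laptop
```

With `command=`, whatever the client asks to run is ignored and the fixed command runs instead (the requested line is still exposed in SSH_ORIGINAL_COMMAND); since a zfs stream arrives over stdin, a fixed receive command like this still works.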
The interesting thing about these is that, if you’re not careful, you could theoretically still allow the user to rsync over a new authorized_keys file without the restrictions… but otherwise, maybe you don’t even need jails?
For example, sftp has its own SSH subsystem that can prevent the user from going outside a subdirectory over the SFTP protocol; this works without having to create a jail, and so on.
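For the sftp/rsync users, the stock sshd chroot support looks roughly like this; the group and path names are assumptions:

```
# /etc/ssh/sshd_config fragment
Subsystem sftp internal-sftp
Match Group backuponly
    ChrootDirectory /backups/%u   # must be root-owned, not group/world-writable
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```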
Wow, I didn’t know that was possible. You never stop learning, I guess. I will definitely look into that; it sounds very promising. But I guess it makes sense, because SSH is basically a shell itself.
I guess it is possible to realize that solution with different command subsets for different users?
Anyway, thanks for the input!
Basically, yes, you can. For example, you can set ForceCommand to be a short Python program that safely parses SSH_ORIGINAL_COMMAND from the environment (e.g. using shlex.split) and then decides whether to return some kind of error if it doesn’t like what it sees, or to run the command in a jail/chroot/container or what have you…
… or maybe just handle it directly.
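A minimal sketch of such a gatekeeper, assuming (purely for illustration) that the only thing users may run is zfs receive/recv:

```python
#!/usr/bin/env python3
"""ForceCommand gatekeeper sketch: allow only whitelisted command prefixes."""
import os
import shlex
import subprocess
import sys

# Hypothetical policy: users may only receive ZFS streams.
ALLOWED_PREFIXES = [["zfs", "receive"], ["zfs", "recv"]]

def is_allowed(original_command: str) -> bool:
    """Tokenize the client's requested command safely and check its prefix."""
    try:
        argv = shlex.split(original_command)
    except ValueError:  # unbalanced quotes and similar junk
        return False
    return any(argv[:len(prefix)] == prefix for prefix in ALLOWED_PREFIXES)

# sshd puts the client's requested command line in SSH_ORIGINAL_COMMAND.
if __name__ == "__main__" and "SSH_ORIGINAL_COMMAND" in os.environ:
    requested = os.environ["SSH_ORIGINAL_COMMAND"]
    if not is_allowed(requested):
        sys.exit("command not allowed")
    sys.exit(subprocess.call(shlex.split(requested)))
```

You would then point ForceCommand (or a per-key command= option) at a script like this for the relevant users, and extend the whitelist per user as needed.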
There’s a set of scripts called gitolite, for example, that leans heavily on this mechanism to provide git hosting for multiple users, without those users needing a proper uid/user created on the system, and without them having access to anything else on it. In that setup, identities correspond to the different keys people use, and gitolite determines which identities can read or write which repos.
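For a flavour of what that mapping looks like, a gitolite config is just a few lines per repo; the repo and user names below are made up:

```
# conf/gitolite.conf sketch
repo backup-alice
    RW+ = alice

repo backup-bob
    RW+ = bob
```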
Let us know what you come up with in the end.
I still don’t think it will allow admin commands to a child dataset while blocking other parts of the pool.
I just don’t think the back end of it is set up that way.
Data side, yeah.
And it has a permission system, but if you can manipulate the child, I think it lets you do the rest of the pool.
In the same way, locking the shell to a few commands/locations would restrict the mount points, hence the data, but not the pool-wide stuff or other people’s child datasets?
Wait, I am thinking of ZFS on Linux.
ZFS does allow permissions to be delegated to a user for child and root datasets.
But Linux requires zfs commands to be run as root/with sudo.
If you use sudo to run the command, you already elevated your permission.
I think that is where I was mixed up.
Sorry for mis-information.
If the host is *BSD or Free/TrueNAS, then dataset allow permissions might provide the access requested.
Sorry, I overlooked this before.
That actually sounds kinda perfect. I will definitely try some approach in this direction first.
I work with a gitolite based git-server every day and did not realise this.
Thanks for your input, it has been very valuable!
Yeah, the FBSD ZFS does things a little differently. I already have some permissions set and it works great. Users without root rights can actually run the commands enabled for them without a problem.
No problem, every bit of input is useful. Thanks!