Absolutely.
I did this about a year ago when I wanted to replace my mirrored boot drives with a smaller pair of drives. (ZFS allows growing pools, but not shrinking pools).
I made a copy of all of /etc and of /var/lib/pve-cluster/config.db, exported the ZFS pools, reinstalled, re-imported the pools, and then overwrote the default config with the one from the old install. The system came back exactly where it had been.
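Roughly, the sequence looked like the below. The paths are the Proxmox defaults, but treat this as a sketch of the procedure rather than a tested script; "tank" and /mnt/usb are placeholders for your own pool name and backup location.

```shell
# Run as root. "tank" is an example pool name, /mnt/usb an example
# backup destination; substitute your own.

# 1. Back up the config to somewhere that survives the reinstall
tar czf /mnt/usb/etc-backup.tar.gz /etc
cp /var/lib/pve-cluster/config.db /mnt/usb/config.db.bak

# 2. Export the data pool(s) so they can be re-imported cleanly
zpool export tank

# 3. Reinstall Proxmox on the new drives, then re-import the pool(s)
zpool import tank

# 4. Restore the old config. config.db backs the /etc/pve cluster
#    filesystem, so the pve-cluster service must be stopped while
#    the file is replaced.
systemctl stop pve-cluster
cp /mnt/usb/config.db.bak /var/lib/pve-cluster/config.db
systemctl start pve-cluster
```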
From the command line, just run “zpool export poolname”.
This will unmount/remove the pool from the system and prepare it to be imported on another system.
You can then connect the drives to the other system (or, as in my case, the same system) and run “zpool import poolname”.
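Concretely, the export/import cycle looks like this ("tank" is an example pool name):

```shell
# On the old system: unmount all datasets and release the pool
zpool export tank

# On the new system: run "zpool import" with no arguments first;
# it only lists the pools that are available for import
zpool import

# Then actually import the one you want
zpool import tank
```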
If a host becomes unbootable or otherwise unusable before the pool is exported, ZFS will complain that the pool is in use on another system when you try to import it elsewhere. You can usually still import a pool that was not properly exported by using the “-f” option to force the import.
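For example (again with "tank" as a placeholder pool name):

```shell
# A pool that was never exported will refuse a normal import with
# something like:
#   cannot import 'tank': pool was previously in use from another system
# If you are certain no other host is still using it, force it:
zpool import -f tank
```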
All of that said, Proxmox can be a little tricky when it comes to exporting ZFS storage. It has a bunch of services running to provide system statistics, and these will often make the ZFS pool appear busy, and unable to be exported.
If you run into this problem, you will need to find the offending service and manually shut it down using systemctl, before you can export the pool.
Unfortunately I can’t remember which service it was that caused the problem. It was an annoyance for me last time I needed to do this, but eventually after shutting them down I was able to export the pools.
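If you want to hunt for the culprit yourself, something along these lines should narrow it down. The pool name, mountpoint, and the exact list of Proxmox services are assumptions here; which services actually hold the pool open may vary by version.

```shell
# If "zpool export tank" fails with "pool is busy", see which
# processes are using the mountpoint ("/tank" is an example):
fuser -vm /tank

# Stopping the Proxmox stats/management daemons often clears the
# lock (adjust this list to whatever fuser actually reports):
systemctl stop pvestatd pvedaemon pveproxy
zpool export tank
```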
After importing the pool(s) on the new Proxmox install, if you are not restoring the old Proxmox configuration, you may have to manually re-add the pools as storages in the web interface before the management interface will see them.
I can’t remember this part exactly.
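If the web UI route is fiddly, the same thing can probably be done from the CLI with “pvesm”. The storage name, pool name, and content types below are examples; check “man pvesm” on your version for the exact options.

```shell
# Register an existing ZFS pool as a Proxmox storage
# ("tank-storage" and "tank" are placeholder names)
pvesm add zfspool tank-storage --pool tank --content images,rootdir

# Verify the new storage shows up
pvesm status
```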
A slight aside, but you may want to consider the below.
If the system is part of a Proxmox cluster and you don’t duplicate the config, it will no longer be part of that cluster after the reinstall.
You can duplicate the complete working configuration of that proxmox node by copying /var/lib/pve-cluster/config.db (and probably /etc too) to the new install, but then you might just be duplicating whatever networking issue you are trying to get rid of.
Unfortunately I did this quite a while ago, and while I am intimately familiar with managing ZFS pools from the command line, I can no longer remember the Proxmox particulars perfectly, as I do that so infrequently.
Hopefully this will at least be enough to point you in the right direction.
Too bad you didn’t back up your config before messing with it.
Actually, I just had a thought.
Do you boot this Proxmox system off of ZFS? If so, did you ever snapshot the root pool (“rpool” by default)?
If you did, you can probably retrieve the old network configuration from that snapshot.
You can browse the contents of a ZFS snapshot (read-only) without restoring it by navigating to <mountpoint>/.zfs/snapshot/<snapshot-name>.
From there you can look at the old content of files, and even copy the files out of the old snapshot without restoring the whole thing and overwriting the things you don’t want to change.
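To see what snapshots you actually have, and what is in them ("rpool" is the Proxmox default root pool name; adjust if yours differs):

```shell
# List all snapshots on the root pool, recursively
zfs list -t snapshot -r rpool

# Browse one of them read-only via the hidden .zfs directory.
# For the root filesystem this lives directly under /; the .zfs
# directory is hidden by default but can be entered by name.
ls /.zfs/snapshot/
```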
Maybe looking at “/.zfs/snapshot/<snapshot-name>/etc/network/interfaces” will give you some hints as to how you used to have it working?
Heck, with a little luck you could just overwrite your current “/etc/network/interfaces” with the one from “/.zfs/snapshot/<snapshot-name>/etc/network/interfaces” and, network-wise, be right back where you started after a reboot. (Depending on what you changed, you may also have to restore /etc/hosts, /etc/hostname, and /etc/resolv.conf.)
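The restore might look something like this. The snapshot name “pre-change” is a placeholder; substitute one from “zfs list -t snapshot”.

```shell
# Path to the snapshot of the root filesystem ("pre-change" is a
# placeholder; pick a real snapshot name from "zfs list -t snapshot")
SNAP=/.zfs/snapshot/pre-change

# Keep a copy of the current (broken) file, just in case
cp /etc/network/interfaces /etc/network/interfaces.broken

# Pull the old network config out of the snapshot
cp "$SNAP/etc/network/interfaces" /etc/network/interfaces

# Depending on what else you changed, these may need restoring too:
# cp "$SNAP/etc/hosts" /etc/hosts
# cp "$SNAP/etc/hostname" /etc/hostname
# cp "$SNAP/etc/resolv.conf" /etc/resolv.conf

reboot
```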
Best of luck!