I tend to deal with servers rather than workstations, but the preferences are mostly the same.
Debian, all the way down. I know it, I love it, and only two things about it piss me off enough to kick puppies. It also eliminates the entire licensing and true-up process from the scenario.
Initial boot using PXE and the netinstall image, configuration via debian-installer preseed, stored in source control.
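To give a sense of the shape, a trimmed-down preseed fragment looks something like this (the provisioning URL is a placeholder, not a real endpoint):

    # Trimmed preseed fragment; values are illustrative.
    d-i debian-installer/locale string en_US.UTF-8
    d-i netcfg/choose_interface select auto
    d-i mirror/http/hostname string deb.debian.org
    d-i mirror/http/directory string /debian
    d-i partman-auto/method string regular
    d-i pkgsel/include string openssh-server ca-certificates
    # Fetch the provisioning script so it's on the box at first boot.
    d-i preseed/late_command string \
        wget -O /target/usr/local/sbin/provision.sh http://deploy.example.internal/provision.sh; \
        chmod +x /target/usr/local/sbin/provision.sh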
I don’t use config management. I’ve got extensive experience with Salt, CFEngine, Ansible, Chef, and Puppet, and every one of them has problems that eventually become workflow bottlenecks or scaling challenges. When you’ve got a few hundred servers, they work fine. When your rolling restarts upgrade 20k servers at each invocation, the probability of network flapping or drive failure approaches 1, and a lot of these tools don’t handle that well. Puppet was particularly bad at it, and the master nodes scale poorly in the first place.
There’s also the tendency among tech people to flat-out disable these tools. We tried using Puppet on the company-issued laptops at my last company, and every developer had a nightmare story about the company’s printer drivers (or something) hosing the versions of Python they needed. It’s always easier to disable the thing that makes changes to your system than to fix the problems with it.
Bash scripts everywhere! Seriously. The same approach works for bare metal, nearly every cloud provider lets you run an arbitrary script at provisioning time, and you can build minimal containers this way quite easily. Simple tools work surprisingly well when you’ve eliminated complexity elsewhere.
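A provisioning script doesn’t have to be clever. Something in this shape covers most services; the package list, config path, and service name are placeholders rather than anything we actually run:

    #!/usr/bin/env bash
    # Provisioning sketch: install packages, drop configs from the repo,
    # start the service. Names below are illustrative.
    set -euo pipefail
    export DEBIAN_FRONTEND=noninteractive

    apt-get update
    apt-get install -y --no-install-recommends nginx prometheus-node-exporter

    # Config files live in the same repo as this script, so the entire state
    # of the box is one checkout plus one run.
    install -m 0644 files/nginx.conf /etc/nginx/nginx.conf
    systemctl enable --now nginx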
Taking inspiration from the “immutable infrastructure” camp, we shoot any system in the head that isn’t operating in spec, and “spec” includes software versions. When file state is important (as it sometimes is), we abstract that away and solve it at the storage level (usually Ceph). Wiping the local disk shouldn’t ever be scary; anything important should be on redundant, remote storage.
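In practice, “solve it at the storage level” mostly means the provisioning script maps a Ceph RBD image and mounts it wherever the service keeps its data. The pool, image, and client names below are invented, and this assumes the box already has a ceph.conf and keyring:

    # Map a block device backed by the Ceph cluster and mount it where the
    # service expects its data, so wiping the local disk loses nothing.
    rbd map appdata/orders-db --id provisioner
    mkdir -p /srv/orders
    mount /dev/rbd/appdata/orders-db /srv/orders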
Because of this, we don’t need to ensure that our systems upgrade properly; we never upgrade. Every system is in a fresh state every time the configuration script runs, and we run automated tests before putting the system back in rotation. The declarative benefits of config management aren’t super important under this paradigm. It also gives developers leeway to bring their systems out of spec, as long as those deviations aren’t actually harmful. If they do something that violates policy (like opening disallowed ports), the monitoring tests pull the box from rotation. In the case of laptops and desktops, we just drop access to all but a limited subset of read-only systems on the internal network.
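The tests themselves are deliberately dumb. A sketch of the idea, with a made-up pinned version and port list; a non-zero exit is what pulls the box from rotation:

    #!/usr/bin/env bash
    # Spec-check sketch: verify package versions and listening ports.
    set -euo pipefail

    want="1.22.1-9"
    have="$(dpkg-query -W -f='${Version}' nginx)"
    if [ "$have" != "$want" ]; then
        echo "nginx version drift: have $have, want $want" >&2
        exit 1
    fi

    # Anything listening on a disallowed port gets the same treatment.
    listening="$(ss -tln | awk 'NR>1 {print $4}')"
    if echo "$listening" | grep -qE ':(23|5900)$'; then
        echo "disallowed listening port detected" >&2
        exit 1
    fi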
This same process works pretty well with graphical systems too. Ultimately, video drivers and Xorg are just packages that need to get installed. Giving devs the leeway to adapt their system also eliminates the preferred desktop environment debate.
As for centralized logins: we don’t do those either. I’ve had to fuss with far too many LDAP servers to put my trust in them, and even redundant setups feel a lot like single points of failure. The user database is stored in Vault, and our provisioning scripts build the appropriate local configuration. This is immensely helpful on the rare occasions someone actually needs to get on the box rather than reprovision it: you can still authenticate when you don’t have a working network, and shared passwords are never needed.
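The provisioning step for that is just a loop over whatever’s in Vault. The secret/users path and the field name below are illustrative rather than our real schema, and this assumes the vault CLI is already authenticated:

    #!/usr/bin/env bash
    # Build local accounts from the user database in Vault at provision time.
    set -euo pipefail

    for user in $(vault kv list -format=json secret/users | jq -r '.[]'); do
        # Skip creation if the account already exists.
        id "$user" >/dev/null 2>&1 || useradd --create-home --shell /bin/bash "$user"

        # Bake the public key onto the box so logins keep working even when
        # the network (and Vault) are unreachable later.
        install -d -m 0700 -o "$user" -g "$user" "/home/$user/.ssh"
        vault kv get -field=ssh_public_key "secret/users/$user" \
            > "/home/$user/.ssh/authorized_keys"
        chown "$user:$user" "/home/$user/.ssh/authorized_keys"
        chmod 0600 "/home/$user/.ssh/authorized_keys"
    done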
The only change I might make to this process, if I were dealing only with desktops, would be OSTree and Fedora Atomic. That stack has a lot of promise but is still rough around the edges. When it matures, it might be a viable replacement for the server-centric setup I use, while still keeping the immutable-like nature of the infrastructure.
It hasn’t existed very long, so it hasn’t been a major consideration in anything I’ve built out, but I’d give it a shot for future deployments.