It really depends on your environment. For a typical office worker, especially a cloud-connected one, 100 megabit switching is fine.
Also, if you’ve had an existing VoIP deployment with PoE handsets for a long time, Cisco have been gouging you on $ per port for gigabit PoE.
Gigabit is/was cheap
100 megabit PoE is/was cheap
Gigabit PoE, below a certain deployment size, was expensive (at least where I am it was something like $30-40/port for 100 megabit PoE vs. $100/port for gigabit PoE, roughly, from memory, for a 48-port layer 2 switch; prices in AUD).
Once you get up into the chassis switches that changes somewhat and the step to gigabit PoE isn’t anywhere near as big, but for a small office deployment, saving 50% on the workgroup switch is a significant saving.
Hence, there’s still quite a lot of 100 megabit PoE out there in the wild. My smaller regional offices are mostly 100 megabit PoE at the desktop.
If you’re not doing PoE, and/or not running Cisco gear, then you’re probably not facing that sort of dilemma, but if you’re an all-Cisco network (for various reasons), it was a bit of a shit sandwich…
edit:
You’d be amazed how little traffic a typical “office” worker generates. For some real-world “typical office user” stats (from my HQ, which is gigabit PoE at the desktop)…
I’ve got a bunch of 4506s (Sup7-LE) in the office here with 2x 10 GbE LACP port channels back to a central 4507 (Sup7-E with 4x 12 port 10 gig SFP line cards and 1x 48 port gigabit). 240 ports in each 4506.
About half the ports on each 4506 are in use (we deploy 2 ports per desk in case someone needs to share a desk, has a visitor, whatever), so in theory I have 120 gigabit-connected users per switch, oversubscribing a 20 gigabit uplink at 6:1 contention or thereabouts.
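For anyone who wants to sanity-check the contention ratio, it's just aggregate access bandwidth over uplink bandwidth. A quick sketch (variable names are mine, not from any config):

```python
# Back-of-the-envelope oversubscription for one 4506, using the
# figures above: 120 active gigabit ports, 2x 10 GbE LACP uplink.
access_ports = 120       # ports actually in use (half of 240)
port_speed_gbps = 1      # gigabit to the desktop
uplink_gbps = 2 * 10     # 2x 10 GbE port channel

contention = (access_ports * port_speed_gbps) / uplink_gbps
print(f"{contention:.0f}:1")  # → 6:1
```

Deploy all 240 ports and the same uplink lands at 12:1, which is still comfortable for office traffic.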
Average usage on the uplinks is generally under 5%… 95th percentile traffic, for example, on the 4506 uplink that I am connected to (which has IT/power users on it) is 32 megabit. It peaks to 150 megabit (on a 15-minute average).
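If you've never looked at how a monitoring tool arrives at a 95th-percentile figure, it's simple: sort the interval samples and throw away the top 5%, so brief peaks don't dominate. A minimal sketch, with made-up sample data that isn't from my actual graphs:

```python
def percentile_95(samples_mbit):
    """Return the 95th percentile of a list of traffic samples (Mb/s).

    Sort ascending, then take the sample below which 95% of
    readings fall - i.e. the top 5% of peaks are discarded.
    """
    ordered = sorted(samples_mbit)
    idx = int(0.95 * len(ordered)) - 1
    return ordered[max(idx, 0)]

# e.g. twenty 15-minute samples: mostly idle, one big peak
samples = [2, 3, 5, 4, 6, 3, 2, 8, 30, 32, 5, 4, 3, 150, 6, 7, 5, 4, 3, 2]
print(percentile_95(samples))  # → 32 (the lone 150 Mb spike is ignored)
```

That's why the 95th percentile sits at 32 megabit even though the link peaks much higher.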
Typical office users just don’t do a lot of traffic, and in a lot of cases gigabit is a waste. I’m not deploying 100 megabit anywhere anymore, but could still get away with it if I had to.
For servers, stuff doing NFS or iSCSI, etc., 10 gig or faster is really what you want. But that’s as much to do with command latency as throughput.
/tangent