The fundamental problem is that USB is prone to congestion. USB works like old-school Ethernet hubs: every packet in is rebroadcast to every other device on the same bus, and every USB hub/daisy chain hanging off a given controller just makes the bus bigger and bigger, like stacking hubs on hubs on hubs. Latency gets worse and worse because each device has to wait for its turn to talk on the bus, and only one device can talk at any given time.
So, in reality, you’d need to create a custom PCB with somewhere between 25 and 100 USB controllers for the 100 USB devices you want. Benchmark where the performance degradation really hits you: 4 devices per controller? 1? 7? Once you know how many devices a single controller can handle in your use case, you can go about designing (or having designed) a custom PCB with enough hardware for what you’re trying to do.
Keep in mind, USB controllers are not cheap. Somewhere between $1 and $3 each. For every controller, you’ll also need a physical USB port (they can’t be shared because, as mentioned, bus architecture). Those are also around $1 each.
The next challenge is how to get all that data off the board. 60 MB/s * 1000 = 60,000 MB/s. That’s 60 GB/s, which is about 480 Gb/s on the wire, so you’d need several 100Gb fiber links (or a 400GbE-class uplink) to carry it. However, the physical space needed for 500 USB ports is so massive that you’d probably end up making many smaller devices and networking them together rather than having a single monolithic board with 500 ports on it (I’m assuming you can at least share 2 cards per controller without congestion). So you’ll probably want to design some sort of fiber channel/fiber Ethernet solution into the board as well, so that you can plug all these devices into a switch. Plain Ethernet only really makes sense if you are building individual boards that only soak up data from 20 cards each or less (60 * 20 = 1,200 MB/s, which is right around the limit for 10Gb Ethernet).
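To make the back-of-the-envelope numbers concrete (using the 60 MB/s per-reader figure from above):

```python
PER_DEVICE_MB_S = 60    # per-reader throughput from the question
DEVICES = 1000

total_mb_s = PER_DEVICE_MB_S * DEVICES  # 60,000 MB/s
total_gb_s = total_mb_s / 1000          # 60 GB/s
total_gbit_s = total_gb_s * 8           # 480 Gb/s on the wire

# A 10Gb Ethernet link moves 10,000 Mb/s = 1,250 MB/s (theoretical),
# so the number of readers one 10GbE uplink can soak up:
per_10gbe = (10_000 / 8) // PER_DEVICE_MB_S  # 20 devices
```

That 20-devices-per-10GbE figure is where the “boards of 20 cards or fewer” rule of thumb comes from; it leaves essentially zero headroom, so 15-16 per board would be a safer real-world target.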
Either way, you’re going to need some sort of infrastructure-level switch to handle all that data, with a backplane capable of carrying the full 60 GB/s (~480 Gb/s) aggregate load.
The final piece is the custom firmware to drive all those controllers and keep them working, aggregate the data together, and ship it off to the network interface for handling. Plus whatever software you use on the server side to ingest 60 GB/s of data and do something with it (this is no trivial task).
This is…a monstrously complex project to do as requested, as you can see. The other option: buy a LOT of cheap mini-PCs or Raspberry Pis and treat them as $5 microSD card readers. You’ll end up spending a lot on power supplies and networking gear for the hundreds of devices, but that would work too.
If you do go the 100-Raspberry-Pi route, get benchtop power supplies and use them to create a common power bus that drives maybe 20-30 Pis each, rather than getting a wall wart for each one. It will be MASSIVELY more efficient.
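Rough math for sizing those bench supplies (the 5V/3A per-Pi figure is the recommended supply rating for a Pi 4; adjust for your model and peripherals):

```python
VOLTS = 5.0
AMPS_PER_PI = 3.0     # Pi 4 recommended supply rating (assumption)
PIS_PER_SUPPLY = 25   # middle of the 20-30 range above
TOTAL_PIS = 100

watts_per_supply = VOLTS * AMPS_PER_PI * PIS_PER_SUPPLY  # 375 W
supplies_needed = TOTAL_PIS / PIS_PER_SUPPLY             # 4 supplies
```

So roughly four 5V/400W-class supplies cover the fleet; in practice you’d also budget for voltage drop across a long 5V bus, since Pis are picky about undervoltage.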
Wi-Fi with so many devices so close together won’t really work, so expect to use physical networking of some kind, meaning an Ethernet HAT for each Pi.