Hello Level1 Forums!! Happy to be back, and happy to have the content I really enjoyed available again! I had to step away for a while, because I was beginning to feel…doesn’t matter. Just glad to be back.
I have little (read: no) experience with servers and server-grade hardware. However, I have the opportunity to work with someone who has been networking and installing systems for some time, and I'm learning more than ever. But, I digress…
Anyway, I have found some external PCIe enclosures that seem like, technically, they'd get the job done, but I haven't found any reviews online, and am a little wary of jumping on this idea. Does anybody have experience with any external enclosures, either commercial or self-fabricated? Any other suggestions/ideas?
You may have bought the wrong rackmount, then. You'd have been better off ordering a 4U Supermicro case and a board that has the extra PCIe slots.
For a "total uptime" viewpoint, I'd be weary of using enclosures that weren't certified to work specifically with the server itself.
But you don't even know how you are going to implement the solution the client is asking for. Seems like an impulse buy. If you have a more senior level tech in your organization who is familiar with project management (from an IT standpoint) I'd let him/her take on the project and show you their thought process.
Is there a reason the client needs GPUs and not a virtualized video card to display the workstation desktop? Is the client accessing this from inside the LAN, or remotely?
You fucked up, man. You guys bought a 2U rackmount when you need to install 4 GPUs? That isn't happening. If you can return it, do that. If not, sell it and buy something more appropriate, like an HP DL580.
If you don't need a rackmount, go for something like an HP Z800. You can fit 4 GPUs in there; however, the PSU will limit how powerful a GPU you can put in.
Your best bet would probably be to just build your own server, with something like one of those fancy X99 motherboards that support 4 dual-slot GPUs, and get an appropriately sized power supply (maybe even set up redundancy, since it's a workstation for multiple people).
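If you go that route, rough PSU math is worth doing up front. Here's a quick back-of-envelope sketch; all the wattage figures are assumptions, not specs for any particular parts:

```python
# Back-of-envelope PSU sizing for a 4-GPU build.
# All wattages are rough assumptions -- check the specs of your actual parts.
gpu_tdp_w = 250            # per dual-slot GPU (assumed)
gpu_count = 4
cpu_tdp_w = 140            # typical LGA2011-3 CPU (assumed)
rest_of_system_w = 100     # board, RAM, drives, fans (assumed)

peak_draw_w = gpu_count * gpu_tdp_w + cpu_tdp_w + rest_of_system_w
headroom = 1.25            # ~25% margin so the PSU isn't running flat out

print(f"Estimated peak draw: {peak_draw_w} W")
print(f"Suggested PSU rating: {peak_draw_w * headroom:.0f} W or more")
```

Obviously swap in the TDPs of whatever cards you actually end up speccing.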
There are a lot of other things you could have bought or done which would have better suited the task...
I should've given more information... We don't NEED to install 4 GPUs, but we were playing with the idea of being able to offer that as an option within the constraints of the customer's overall budget. There is an x16 PCIe slot (that currently has a riser in it) that we had no intention of using, but the idea came to us to see if this might be possible.
What I really wanted to know was whether anyone had experience with the external PCIe enclosures, or with using GPUs in this manner within a server...
Thanks for the reply, though, and I agree that if we needed the GPUs we should've chosen a different setup!
I didn't see this response before I replied to the one below, but I think that my reply would be about the same. The only (server-side) requirement of the customer is to 'get rid of the individual towers because I'm tired of upgrading four systems at a time'. We already have the server software, virtualized systems, and RAID storage set up on the unit, and were able to find like-new peripherals (customer's request) at a much lower price-point than we had initially hoped. Therefore we have some room in the budget to play with...
They will be accessing it primarily via the LAN, and (very) occasionally remotely.
Worried about having to upgrade systems, so he installs an eight-year-old server. I don't know what's going on with your thought process, but I'd say you didn't think this through.
Frankly, systems have plateaued. Get them towers and just update them or swap out drives every so often. I understand why you're doing what you're doing, but it's going to be no less work, and you're going to have all the negatives of a virtualized desktop environment... something the users haven't even complained about yet.
Being in a rural area, and being the 'only shop in town', we aren't able to be in every place that calls at once, and we serve a fairly wide area. That being said, when one of our customers' machines goes down and they have to wait (hours, sometimes) for us to respond onsite, they have lost quite a bit of productivity. Now, you might say that we need to hire an extra technician, and that wouldn't be wholly incorrect, but in a rural area business isn't exactly explosive a lot of the time. Travel takes more time than anything, and is usually the reason we are out of pocket.
Another reason for us setting it up in this way is that we can remote into the server using a VPN. That way, we can provide a solution (even if only temporary), with a much faster response time. Also, we can help to manage the software that is installed on the VMs, keeping the clutter away. I have seen that the common user will click on, and install, most things that momentarily seem appealing, and such has been the case here. We have provided training to the employees, but over time it is forgotten/ignored, and there are always the new employees...
They have been using virtualization for a little over a month now, off of an older server that we had lying around the shop, and things have been fairly smooth... another reason this decision was ultimately made.
That's nobody's fault. If they need immediate response, they need to contract with you to be onsite 24/7 or hire someone themselves. If a customer is unreasonable, you need to adjust their expectations to be reasonable, or you might as well fire the customer, because they'll never be happy with your service.
You don't need virtualization to do remote assistance.
For this and many other reasons, users aren't qualified to have administrative privileges.
It sounds to me that you are trying to insulate your customer from the full cost of managing their IT environment. You're working extra hard to prevent them from spending more. That's admirable, but ultimately that only hurts you.
They aren't unreasonable. We would like to be able to provide faster service at times, though.
There are security concerns that kept the customer from allowing remote access to the storage of the individual towers. We thought about setting up a NAS, so the storage could be encrypted and external to the OSes, but that did not satisfy the entirety of the request.
I agree with the last comment...you are 100% correct. The only thing I'll say about that, is we strive to provide an attainable and sustainable service in a small community that is one of the poorest (no exaggeration) in the nation. Sometimes it's ridiculously difficult and is not successful from our perspective, but it helps the community, which will ultimately reflect positively on us.
As to your OP, I have a PowerEdge 2950 III that I got a GPU into. It required cutting the back of the x8 slot off of the riser so the x16 GPU would fit, finding a card that has a single-slot bracket, is short enough to fit, and doesn't require PCIe power, and removing the Dell DRAC card, which I have since lost... :(
You can do it, but be prepared for some obstacles and mods. I tried sticking the GPU directly into the x16 slot where the riser goes, and the server threw an error and wouldn't boot, so if your idea involves ribbon cables directly to that, you may have to mod some board firmware or something.
FWIW, I went with an HD 7750, though some newer cards like the 1050 Ti may not require PCIe power. As far as the external enclosures go, I have no experience with them, although I did thoroughly consider one when the 2013 (trash can) Mac Pro was announced. I decided it was too much of a hassle and needlessly expensive. There are rack solutions for external PCIe cards if you can get a Thunderbolt card into the server, but I wouldn't expect much.
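If it helps, the "doesn't require PCIe power" constraint basically comes down to the ~75 W a PCIe x16 slot can supply on its own. A quick sketch with approximate board-power numbers (ballpark figures, not vendor specs):

```python
# Quick sanity check: can a card run on slot power alone?
# A PCIe x16 slot is specced for up to 75 W; anything over that needs
# a 6/8-pin connector, which the 2950's riser can't feed.
SLOT_POWER_LIMIT_W = 75

# Approximate board-power figures (rough, not vendor specs).
cards = {
    "Radeon HD 7750": 55,
    "GeForce GTX 1050 Ti": 75,
    "GeForce GTX 1060": 120,
}

for name, board_power_w in cards.items():
    fits = board_power_w <= SLOT_POWER_LIMIT_W
    verdict = "slot power only" if fits else "needs a PCIe power connector"
    print(f"{name}: ~{board_power_w} W -> {verdict}")
```

Anything over that budget would need a 6/8-pin feed the 2950 doesn't have to give, hence the short list of viable cards.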
I hadn't thought about the board throwing an error, but I did see that the slots on the card interface were horizontal, and kind of figured we'd have to use the riser regardless... but that is a bummer.
I was actually kind of looking forward to the obstacles and mods in this instance, lol.
Thanks for your response, and for the Thunderbolt-card idea; I will look into it and see if it might be a possibility in our case!