So I just got an LSI 9207-8i off eBay. I've installed it in my system and connected my drives. I can boot into the LSI config utility BIOS and see the 4 drives I have attached (I have 8, but I'm just testing with 4 right now). It shows the drives' capacity and everything looks good, but since this is an IT-mode card there are no options to create a RAID array (I've had some experience with other LSI cards in RAID mode).

The issue is that when I boot into a live Ubuntu desktop, the drives don't show up in GParted or `fdisk -l`. I'm just using that for testing and plan to use Proxmox in this build, but I want to be sure everything is working first. Is there something I need to do in the LSI BIOS to enable those drives? Do I need to format or verify them in the LSI BIOS first? Or does desktop Ubuntu just not have the drivers for this?

I found another post that was similar, but that poster's drives weren't even showing up in the LSI BIOS (mine are) and they gave up at the end.

Any help would be appreciated.
Hello.
I'd think Ubuntu should at least in principle be able to see the drives - if they're not showing up in `lsblk`, then the live boot really isn't seeing them.
If not, check whether it can at least see the LSI card:
```
lspci -vnnk   # -v verbose, -nn vendor/device IDs and names, -k kernel driver/module in use
```
Installing Proxmox shouldn't take long - I'd try that and see whether it can see the drives, especially if it's going to be the target platform (if you're planning passthrough, you don't need to set the drives up straight away, after all).
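The 9207-8i should be handled by the mpt3sas kernel module (mpt2sas on older kernels) - a rough sketch of what I'd check from the live session, assuming nothing exotic about your setup:

```
lsmod | grep -i mpt    # is the mpt3sas / mpt2sas module loaded?
dmesg | grep -i mpt    # did the HBA initialise, and did it report attached disks?
lsblk                  # any sdX devices beyond the live USB / boot disk?
```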
I ended up installing Proxmox and TrueNAS Scale and neither had any issues seeing the drives, so I guess I'm good. Now I can't decide which OS to go with. I'm tempted to just do a plain Linux server install and add whatever I need, but I feel like I'd save a lot of time going with the convenience of Proxmox or TrueNAS.

The main thing is that I'd like to get full performance out of my 8 HDDs, even if I have to do a RAID 0 type of setup. I feel like if I just go with Ubuntu Server I can set up LVM to stripe across all the drives, and keep a small mirrored LV for more important data. I haven't used ZFS much before, but with TrueNAS Scale even a striped pool doesn't seem to get me the sequential read and write speeds I'd expect (maybe 40 MB/s, when shouldn't I be able to get close to 1 GB/s?). That was a quick test over WiFi, but I think my speed test from the same device to the internet is faster, so I don't think WiFi is the bottleneck. I've heard ZFS needs RAM and CPU to perform well, so I still need to upgrade both...

Ideally I'd have two systems - one just to store the data I need and one to play with until I figure out what to do - but I'm losing my unlimited Google Drive in May, so I kind of want something figured out before then haha
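Roughly what I have in mind for the LVM route, from memory - device names and sizes below are just placeholders:

```
# one volume group over the 8 data disks (placeholder device names)
pvcreate /dev/sd[b-i]
vgcreate tank /dev/sd[b-i]

# big striped LV across all 8 drives for bulk/scratch data (RAID 0, no redundancy)
lvcreate --type striped -i 8 -I 256k -L 10T -n scratch tank

# small mirrored LV for the data I actually care about
lvcreate --type raid1 -m 1 -L 500G -n keep tank
```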
OK, so I figured out that my previous speed issue was WiFi being a stupid bottleneck. I plugged straight into the router with ethernet and maxed out that gigabit connection at about 120 MB/s write speeds. Next I'd like to test my 10Gb NICs, but I'm having an issue where the TrueNAS Scale web UI freezes and I can't access the SMB share. I can still ping the box and the CLI on the monitor is still responsive, but I'm getting error messages on the screen. Any ideas? It seems like an issue with one of my drives, but I'm sort of thinking it's the boot drive, since why would an issue with a data drive cause the whole UI to crap out?
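For the 10Gb test itself, my plan is to take the disks out of the equation first with iperf3, something like this (IP is a placeholder):

```
# on the TrueNAS box
iperf3 -s

# on the client, pointed at the NAS, 4 parallel streams
iperf3 -c 192.168.1.50 -P 4
```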
With the setup on the LSI card and Proxmox, I'd almost assumed you were already thinking of a Proxmox host with the LSI card passed through to TrueNAS running as a VM - though there's a bit of bias there, as that's a scenario I considered at my last rebuild. Everything else you'd want to run is then available as another VM.
ZFS will use whatever RAM it's allowed to, but it doesn't strictly require a lot of it - the focus is data integrity, not speed. If you're seriously considering RAID 0 across 8 drives, then yes, I'm sure you can go faster, but I can't imagine that's worth the trouble of restoring after a failure.
If it's always the same drive address in the error, it could be a cable/connection problem or an issue with the drive itself - is it the same model as the other drives, is it spinning down or going to sleep? You'll likely need to work out which physical drive that address refers to, then perhaps swap it out for now and see whether the error occurs with any other.
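If I remember right, something along these lines will list model and serial numbers so you can match the complaining address to a physical drive (the second one needs smartmontools; the device name is a placeholder):

```
lsblk -o NAME,MODEL,SERIAL,SIZE   # match kernel device names to physical drives
smartctl -i /dev/sdX              # confirm model/serial for one drive (placeholder name)
```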
Yeah, I'm not sure which drive it is. The error doesn't seem to name sda (boot drive) or sdb through sdi (data drives), but like I said, I don't think the UI would hang if a data drive went down, so that makes me think my installation drive is having issues (it's an old laptop drive, since this whole build is all used parts haha). I have a small SSD I can use, so I'll try that and see if it makes a difference.
Oh, and as far as RAID 0 vs RAID 6 (RAIDZ2/RAIDZ3 or whatever) goes, I want to be able to max out my 10Gb connections in a test run first, even if I don't end up using that RAID configuration in the long run.
OK, so I swapped out my HDD boot drive for an SSD and I still get the same issue. I need to do some more testing, but I'm thinking it might be my LSI 9207-8i card. I've heard (and noticed) that the card gets pretty hot. I have a fan mounted to the case side panel pulling fresh air in over the card, but maybe I need more airflow, since I have 3 10Gb NICs in there as well and the space between the cards is small. Do you think this could be caused by the card overheating?
Heat isn't the worst idea - cards meant for rack servers can depend on the chassis forcing air over them. I've had this with 10Gb cards (and then cable-tied fans to them).
I don't know that the LSI gets that hot, however.
If it's a random location in the error each time (not sd 2:0:1:0 every time - I'm presuming that's the drive's SCSI address, not the PCI address of the card), it's more likely to be the card.
If it's always the same address erroring, you may have to keep testing until you can isolate the drive (or possibly its cable).
OK, so I ended up putting another small fan right over the LSI card and that seemed to fix the issue. I still get a freeze-up every now and then, and I'm going through the process of removing faulty drives... These drives seem to have about 30k-40k hours on them. Is that a lot? From what I've read online it doesn't seem like too much, but I don't know - I've had 3 drives out of 8 cause issues. I took two out, and a 3rd one I left in but just didn't include in any vdevs.
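In case anyone's curious, the hours show up in the drives' SMART data - something like this should pull them per drive (example device name, SATA-style attribute names):

```
# power-on hours and reallocation counts for one drive (example device name)
smartctl -A /dev/sdb | grep -Ei 'power_on|realloc|pending'
```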
Also, is there a way I can check what device has that 2:0:1:0 address, so I can see whether it's a drive or a PCIe device?
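My first guess for checking that is something like the below, but I haven't confirmed it on this box yet:

```
lsblk -S                                             # block devices with their SCSI H:C:T:L addresses
ls -l /sys/class/scsi_device/2:0:1:0/device/block/   # should name the sdX node behind that address, if it's a disk
```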
So, an update to this: most of those drives have died or are limping along. I got them for free from work, and I guess they were on their last legs. I went back to the 2TB drives I was originally using, and yeah, I can see them in the LSI card's BIOS config tool and even format them there, but once I boot into TrueNAS Scale I get no drives. `lspci -vnnk` does show the LSI card, which makes sense because the work drives were showing up before, but `lsblk` only returns the boot drive.
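Next checks I'm planning to run, in case it helps anyone later (off the top of my head, so the grep pattern may need tweaking):

```
dmesg | grep -iE 'mpt|sas'   # did the HBA driver come up, and did it attach any targets?
cat /proc/scsi/scsi          # what SCSI devices does the kernel currently know about?
```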