The one I had would frequently drop the drives, wreaking havoc on my (software) RAID5. I later found out that it was splitting 2 ports into 4 in a way that completely broke spec.
I don’t want to speak to your specific use case, as it’s outside of my wheelhouse. My main point was that SATA cards are a problem.
As for LSI SAS cards, there are a lot of details that probably don’t (but could) matter to you. PCIe generation, connectors, lanes, etc. There are threads on various other homelab forums, TrueNAS, Unraid, etc. Some models (like the 9212-4i4e, meaning it has 4 internal and 4 external lanes) have native SATA ports that are convenient, but most will have a SAS connector or two. You’d need a matching (forward) breakout cable to connect to SATA. Note that there are several common connectors, with internal and external versions of each.
You can use the external connectors (e.g. SFF-8088) as long as you have a matching (e.g. SFF-8088 SAS-SATA) breakout cable, and are willing to route the cable accordingly. Internal connectors are simpler, but might be in lower supply.
If you just need a simple controller card to handle a few drives without major speed concerns, and it won’t host the boot drive, here are the things you need to watch for:
Also, make sure you can point a fan at it. They’re designed for rackmount server chassis, so desktop-style cases don’t usually have the airflow needed.
To anyone reading, do NOT get a PCIe SATA card. Everything on the market is absolute crap that will make your life miserable.
Instead, get a used PCIe SAS card, preferably based on LSI. These should run about $50, and you may (depending on the model) need a $20 cable to connect it to SATA devices.
I did this back in the days of Smoothwall, ~20 years ago. I used an old, dedicated PC, with 2 PCI NICs.
It was complicated, and took a long time to set up properly. It was loud and used a lot of power, and didn’t give me much beyond the standard $50 routers of the day (and is easily eclipsed by the standard $80 routers of today). But it ran reliably for a number of years without any interaction.
I also didn’t learn anything useful that I could ever apply to something else, so ended up just being a waste of time. 2/10, spend your time on something more useful.
The big caveat is that the BIOS must allow it, and most released versions do not.
What is your use case? I ask because ESXi is free again, but it’s probably not a useful skill to learn these days. At least not as much as the competition.
Similarly, 2.5" mechanical drives only make sense for certain use cases. Otherwise I’d get SSDS or a 3.5" DAS.
They all have to work (at least to an extent) using only x1. It’s part of the PCIe spec.
Missing pins are actually extremely common. If your board has a slot that’s x16 (electrically x8), which is very common for a second video card, take a closer look. Half the pins in the slot aren’t connected. It has the full slot to make you feel better about it, and it provides some mounting stability, but it’s electrically the same as an x8 that’s open.
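If you want to see what your card actually negotiated, here’s a rough sketch, assuming a Linux box with lspci installed; the 01:00.0 address is just a placeholder (find yours with plain `lspci`), and you’ll likely need root to read the capability blocks:

```python
# Sketch only: compare the maximum link width a device advertises (LnkCap)
# with the width it actually negotiated (LnkSta).
import re
import subprocess

def pcie_link_widths(device: str) -> dict:
    """Parse LnkCap/LnkSta widths from `lspci -vv` for one PCIe device."""
    out = subprocess.run(
        ["lspci", "-s", device, "-vv"],
        capture_output=True, text=True, check=True,
    ).stdout
    widths = {}
    for key in ("LnkCap", "LnkSta"):
        m = re.search(rf"{key}:.*?Width x(\d+)", out)
        if m:
            widths[key] = int(m.group(1))
    return widths

if __name__ == "__main__":
    # A physically x16 slot wired as x8 will show Width x8 in LnkSta.
    print(pcie_link_widths("01:00.0"))  # placeholder bus address
```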
USB the protocol, or just using a USB cable? If it’s not using the protocol, USB cables are just a cheap way of getting cabling built to a certain spec.
This can get a bit complicated with federation. This community is hosted on LW. I am accessing it via sopuli.xyz, and you via feddit.uk. All (presumably) have full bidirectional federation with each other.
When I hit send, this message will go to Sopuli’s outbox, which will then sync to LW for this community. At that point, this post will live on LW’s servers. Anyone accessing LW directly can see it, even if Sopuli were to go down. Later (probably less than a minute), LW will sync to Feddit.uk, at which point you will be able to see it.
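If it helps, here’s a toy sketch of that flow. It is not real ActivityPub code, just the shape of the sync path described above, using the instance names from this thread:

```python
# Toy model only; real federation uses ActivityPub over HTTP, not these classes.

class Instance:
    def __init__(self, name):
        self.name = name
        self.posts = []        # copies local users can read, even if other instances go down
        self.subscribers = []  # instances federating a community hosted here

    def receive(self, post):
        if post in self.posts:      # ignore duplicate deliveries
            return
        self.posts.append(post)
        for sub in self.subscribers:
            sub.receive(post)       # the hosting instance fans the post out

lw = Instance("LW")                 # hosts this community
sopuli = Instance("sopuli.xyz")     # my home instance
feddit = Instance("feddit.uk")      # your home instance
lw.subscribers = [sopuli, feddit]

# Hitting "send": Sopuli's outbox delivers to LW, which stores the post and
# then syncs it onward to every subscribed instance, including feddit.uk.
lw.receive("this comment")
assert "this comment" in sopuli.posts and "this comment" in feddit.posts
```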
Note that this is for text posts only. There have been some changes around images and video, both for bandwidth and liability reasons.
I post this every time someone is looking for “free speech”, and it has been completely relevant every single time.
This is almost always relevant as soon as someone complains about free speech, censorship, etc:
Correct me if I’m wrong, but aren’t most of them available as containers? Why would you need any hypervisor support beyond containers?
Also, be sure to run extensive burn-in tests before deploying for production use. I had an entire batch from GoHardDrive fail on me during that testing, so my data was never in danger.
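For anyone wondering what burn-in can look like in practice, here’s a minimal sketch (not a full qualification procedure), assuming Linux with badblocks and smartctl installed. It’s destructive, so only point it at a drive with nothing on it; /dev/sdX is a placeholder:

```python
# Minimal burn-in sketch. Assumes Linux, root, badblocks + smartctl installed.
# badblocks -w is DESTRUCTIVE to everything on the target drive.
import subprocess

def burn_in(drive: str) -> None:
    # Full destructive write/verify pass over the whole drive.
    subprocess.run(["badblocks", "-wsv", drive], check=True)
    # Kick off a long SMART self-test; review results later with `smartctl -a`.
    subprocess.run(["smartctl", "-t", "long", drive], check=True)

if __name__ == "__main__":
    burn_in("/dev/sdX")  # placeholder; replace with the actual device
```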
Thank you for the extra context. It’s a relief to know you don’t just have a bunch of USB “backup” drives connected.
To break this down to its simplest elements, you basically have a bunch of small DASes connected to a USB host controller. The rest could be achieved using another interface, such as SATA, SAS, or others. USB makes certain compromises that you really don’t want applied to a member of a RAID, which is why you’re getting warnings from people about data loss. SATA/SAS don’t have this issue.
You should not have to replace the cable ever, especially if it does not move. Combined with the counterfeit card, it sounds like you had a bad parts supplier. But yes, parts can sometimes fail, and replacements on SAS are inconvenient. You also (probably) have to find a way to cool the card, which might be an ugly solution.
I eventually went with a proper server DAS (EMC KTN-STL3, IIRC), connected via external SAS cable. It works like a charm, although it is extremely loud and sucks down 250 W at idle. I don’t blame anyone for refusing this as a solution.
I wrote, rewrote, and eventually deleted large sections of this response as I thought through it. It really seems like your main reason for going USB is that specific enclosure. There should really be an equivalent with SAS/SATA connectors, but I can’t find one. DAS enclosures pretty much suck, and cooling is a big part of it.
So, when it all comes down to it, you would need a DAS with good, quiet airflow, and SATA connectors. Presumably this enclosure would also need to be self-powered. It would need either 4 bays to match what you have, or 16 to cover everything you would need. This is a simple idea, and all of the pieces already exist in other products.
But I’ve never seen it all combined. It seems the data hoarder community jumps from internal bays (I’ve seen up to 15 in a reasonable consumer config) straight to rackmount server gear.
Your setup isn’t terrible, but it isn’t what it could/should be. All things being equal, you really should switch the drives over to SATA/SAS. But that depends on finding a good DAS first. If you ever find one, I’d be thrilled to switch to it as well.
You currently have 16 disks connected via USB, in a ZFS array?
I highly recommend reimagining your path forward. Define your needs (sounds like a high-capacity storage server to me), define your constraints (e.g. cost), then develop a solution to best meet them.
Even if you are trying to build one on the cheap with a high Wife Acceptance Factor, there are better ways to do so than attaching 16+ USB disks to a thin client.
I think you mean LGA (Land Grid Array), meaning the pins are on the motherboard. Ball Grid Array (BGA) is used for embedded, non-removable CPUs.
The only thing I’ll add is that RAID is redundancy. Its purpose is to prevent downtime, not data loss.
If you aren’t concerned with downtime, RAID is the wrong solution.
You’re overlooking a very common reason that people set up a homelab - practice for their careers. Many colleges offer a more legitimate setup for the same purpose, and a similar design. But if you’re choosing to learn AD from a free/cheap book instead of a multi-thousand-dollar course, you still need a lab to absorb the information and really understand it.
Granted, AD is of limited value to learn these days, but it’s still a backbone for countless other tools that are highly relevant.
Careful with this: since MLC just means multi-level, I’ve seen drives marketed as “3-bit MLC”, i.e. TLC.
Kind of. They will be multiples of 4. Let’s say you got a gigantic 8i8e card, albeit unlikely. That would (probably) have 2 internal and 2 external SAS connectors. Your standard breakout cables will split each one into 4 SATA cables (up to 16 SATA ports if you used all 4 SAS ports and breakout cables), each running at full (SAS) speed.
But what if you were running an enterprise file server with a hundred drives, as many of these once were? You can’t cram dozens of these cards into a server; there aren’t enough PCIe slots/lanes. Well, there are SAS expanders, which basically act as splitters. They share those 4 lanes, potentially creating a bottleneck. But this is where SAS and SATA speeds differ: these are SAS lanes, which are (probably) double what SATA can do. So with expanders, you could attach 8 SATA drives to every 4 SAS lanes and still run at full speed. And if you need capacity more than speed, expanders let you split those 4 lanes across 24 drives. These are typically built into the drive backplane/DAS.
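Here’s the back-of-the-envelope math, assuming SAS3 lanes at 12 Gb/s, SATA III drives capped at 6 Gb/s, and a generous ~2 Gb/s sustained for a spinning disk:

```python
# Rough bandwidth check; the interface speeds below are assumptions (SAS3 / SATA III).
SAS_LANE_GBPS = 12.0          # one SAS3 lane
SATA_GBPS = 6.0               # SATA III interface limit per drive
HDD_SUSTAINED_GBPS = 2.0      # generous estimate for a fast spinning disk

lanes = 4
uplink = lanes * SAS_LANE_GBPS            # 48 Gb/s behind one SAS connector

# 8 SATA drives at full interface speed still fit in 4 SAS3 lanes:
print(uplink >= 8 * SATA_GBPS)            # True (48 >= 48)

# For capacity builds, 24 spinning disks behind the same 4 lanes are fine,
# as long as sustained throughput (not burst) is what you care about:
print(uplink >= 24 * HDD_SUSTAINED_GBPS)  # True (48 >= 48)
```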
As for the fan, just about anything will do. The chip/heatsink gets hot, but is limited to the ~75 watts provided by the PCIe bus. I just have an old 80 or 90mm fan pointing at it.