I’m looking to replace my SFF J5040 Wyse machine. It’s still plenty fast enough, but storage has become an issue because of its limited USB endpoint availability (a limit of roughly 50 endpoints).

I know that just switching to a newer Intel system could give me double the endpoints because of the dual xHCI controller setup, but I was thinking that if I’m going to replace it, I’d like to not limit myself.

As such, even though Ryzen is far faster than I need, it does now support USB4. Does anyone know whether the switch to USB4 would give the system a larger address range and allow more than 127 USB devices, or is that limitation still in place, in which case I might as well not waste my money?

  • breakingcups@lemmy.world · 13 days ago

    I have to ask, even though I’m afraid of the answer, exactly how many USB storage devices are you (planning on) hooking up to that poor machine?

    • frazorth@feddit.uk (OP) · 13 days ago

      It’s currently got 16 disks and a Zigbee dongle. So not a lot, from my point of view.

      However, it’s also got internal hubs splitting the front and back ports, I think the Bluetooth is hooked up over USB on the board, and there are a few other things that show up as devices. What that means is that trying to connect one more disk, to swap one out of my ZFS pool, fails to enumerate on the USB bus. I don’t think the number of items is unreasonable, but this little box wasn’t quite designed for this use case.

      [Edit] As mentioned in the other thread, these only have 50 endpoints because of Intel, and each device uses at least 2 endpoints, so only about 25 devices in total can be plugged in.
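
      If anyone wants to see how close their own box is, something along these lines (a rough sketch; it assumes the usual Linux sysfs layout and that bNumEndpoints is reported in hex) totals up the endpoints each interface claims, per bus:

      ```python
      #!/usr/bin/env python3
      """Rough count of USB endpoints in use, grouped by bus (Linux sysfs sketch)."""
      import glob
      import os
      from collections import defaultdict

      totals = defaultdict(int)

      # Interface directories look like "1-1.2:1.0"; each carries a bNumEndpoints attribute.
      for iface in glob.glob("/sys/bus/usb/devices/*:*"):
          ep_file = os.path.join(iface, "bNumEndpoints")
          if not os.path.isfile(ep_file):
              continue
          bus = os.path.basename(iface).split("-")[0]  # leading number is the bus
          with open(ep_file) as f:
              # Assumption: sysfs prints this attribute as hex (values are tiny either way).
              totals[bus] += int(f.read().strip(), 16)

      for bus, count in sorted(totals.items()):
          print(f"bus {bus}: ~{count} endpoints in use (plus one control endpoint per device)")
      ```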

      • Nollij@sopuli.xyz · 13 days ago

        You currently have 16 disks connected via USB, in a ZFS array?

        I highly recommend reimagining your path forward. Define your needs (sounds like a high-capacity storage server to me), define your constraints (e.g. cost), then develop a solution to best meet them.

        Even if you are trying to build one on the cheap with a high Wife Acceptance Factor, there are better ways to do so than attaching 16+ USB disks to a thin client.

      • Possibly linux@lemmy.zip · 13 days ago

        Oh my, do not do this. USB can be very fragile and your array might just implode one day.

        Go get yourself 2-3 servers and then load them each with some drives. From there, set up Ceph and profit.

          • Possibly linux@lemmy.zip · 13 days ago

            Your setup is definitely “non-standard”.

            I would strongly look into proper SAS or SATA. USB is not designed for what you are doing. Also, writing a long comment about how you have been in tech for decades does not make your setup any less crazy. I cannot overstate how much better a standard setup would be.

  • schizo@forum.uncomfortable.business · 13 days ago

    There was a recent video from everyone’s favorite YouTube Canadians that tested how many USB devices you can jam onto a single controller.

    The takeaway they had was that modern AMD doesn’t seem to give a shit and will actually let you exceed the spec until it all crashes and dies, and Intel restricts it to where it’s guaranteed to work.

    Different design philosophies, but as long as ‘might explode and die for no clear reason at some point once you have enough stuff connected’ is an acceptable outcome, AMD is the way to go.

    • frazorth@feddit.uk (OP) · 13 days ago

      Nice!

      The 127 limit isn’t even that hard to hit: the spec allows 127 devices per controller, yet Intel appears to have done the dirty and restricted whole systems to roughly 127 endpoints, and even then you need two controllers to get that high. 😡

      If AMD lets us use multiple controllers to go higher then that’s awesome, but it’s also just following the spec.

      [Edit] I’ll have to see if I can find the video.

  • Possibly linux@lemmy.zip · 13 days ago

    You are doing this all wrong.

    If you are trying to use more than 50 devices you need to rethink your setup. USB isn’t designed for that.

    • frazorth@feddit.uk (OP) · 13 days ago

      Citation please? The spec is fine with it; it’s just Intel that put in artificial limitations.

      • Possibly linux@lemmy.zip · 13 days ago

        I never said it wasn’t possible. It just is not a good idea.

        You want proper SAS, as that is an interface designed to talk to drives. USB is not optimized for that and comes with some overhead. The biggest issue with USB is that the addressing is easy to screw up: devices may disconnect and reconnect at random, which is a normal part of USB. However, this will cause ZFS to panic, as it sees the disconnect as a hardware problem.
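
        If you run ZFS over USB anyway, at least watch for exactly this failure mode. A minimal sketch (assuming zpool is on the PATH; wire in your own alerting where the print is):

        ```python
        #!/usr/bin/env python3
        """Poll `zpool status -x` and complain when a pool stops being healthy (minimal sketch)."""
        import subprocess
        import time

        POLL_SECONDS = 300  # assumption: checking every five minutes is often enough

        def pools_healthy() -> bool:
            # `zpool status -x` prints "all pools are healthy" when nothing is wrong.
            out = subprocess.run(["zpool", "status", "-x"],
                                 capture_output=True, text=True).stdout
            return "all pools are healthy" in out

        while True:
            if not pools_healthy():
                # Replace with real alerting (mail, ntfy, etc.).
                print("WARNING: a pool is degraded; a USB disk may have dropped off the bus")
            time.sleep(POLL_SECONDS)
        ```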

      • ForgotAboutDre@lemmy.world · 13 days ago

        A glass may have a spec that allows it to be filled to the brim. Doesn’t mean it’s a good idea, especially when you want to run up stairs with it.

        You’re going to spill water everywhere.

  • catloaf@lemm.ee · 13 days ago

    A quick Google does not produce any results showing an increased device limit.

    I’m curious, what are you doing to reach 127 device endpoints, especially on a thin client?

    • frazorth@feddit.uk (OP) · 13 days ago

      I’m not. Intel does not let you get that high.

      https://community.intel.com/t5/Embedded-Intel-Core-Processors/Hardware-limitations-on-USB-endpoints-XHCI/td-p/264556

      From Intel.

      First Limitation

      Fundamentally, the customer is correct. You will never be able to achieve 127 actual USB devices attached to a Host Controller (i.e. Intel system). As the customer pointed out, each USB device (USB key, keyboard, mouse, etc.) is typically counted as two endpoints (two logical USB units), and each USB hub, multiplier, or repeater is counted as another 4+ endpoints. So, it comes to about 50+ devices per each Host Controller.

      With these systems it was only about 25 devices in total (fewer if you use hubs), so they introduced two controllers to allow 100+ endpoints.

      It’s because of this reason that Intel added a second USB Host Controller in the majority of its Core platform chipsets. If you look at the Features page of the 8 Series Chipset (when partnered with the 4th Gen Intel Core processor), page 39, http://www.intel.com/content/www/us/en/chipsets/8-series-chipset-pch-datasheet.html, it states having “Two EHCI Host Controllers, supporting up to fourteen external USB 2.0 ports”. So, with two Host Controllers, each at 50+ devices, you have a potential of connecting up to 100+ USB devices per 4th Gen Core platform system.

      And that’s not considering the internal hubs that split the back and front ports.
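
      Putting those numbers together, the back-of-the-envelope maths comes out roughly like this (a sketch using the ~50-endpoint-per-controller cap described above and Intel’s 2-per-device / 4-per-hub estimates; real devices can claim more):

      ```python
      # Rough endpoint budget, using the figures quoted above.
      ENDPOINTS_PER_CONTROLLER = 50  # the ~50-endpoint cap per controller
      ENDPOINTS_PER_DEVICE = 2       # a typical simple device
      ENDPOINTS_PER_HUB = 4          # hubs eat "4+ endpoints" each

      def max_devices(controllers: int, hubs: int) -> int:
          """How many simple devices fit once the hubs have taken their share."""
          budget = controllers * ENDPOINTS_PER_CONTROLLER - hubs * ENDPOINTS_PER_HUB
          return max(budget // ENDPOINTS_PER_DEVICE, 0)

      print(max_devices(controllers=1, hubs=0))  # ~25 devices on one controller
      print(max_devices(controllers=2, hubs=4))  # ~42 once hubs start eating the budget
      ```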

      • stuner@lemmy.world · 12 days ago

        This seems to be a limitation of Intel host controllers. The USB 2.0 specification (including 12 Mbps Full Speed) allows for up to 127 devices. Each of those devices can have up to 16 IN and 16 OUT endpoints (cf. https://www.usbmadesimple.co.uk/ums_3.htm). Depending on how you count, that would be a maximum of roughly 2k to 4k endpoints in total. I guess Intel thought it wasn’t worthwhile to support that many endpoints.

        Some quick searching turned up this post that claims that USB3 controllers often support up to 254 endpoints (in total). https://www.cambrionix.com/a-quick-guide-to-usb-endpoint-limitations/ Other posters have also said that AMD appears to have higher limits. You could also consider adding more USB root hubs to your system (with PCIe cards).
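
        To spell the arithmetic out (these are just the spec figures above, nothing measured):

        ```python
        # Theoretical USB 2.0 maximums per host controller, from the spec figures above.
        MAX_DEVICES = 127      # 7-bit address space, minus the default address 0
        ENDPOINT_NUMBERS = 16  # endpoint numbers per device, each usable as IN and OUT

        # Counting IN and OUT separately...
        print(MAX_DEVICES * ENDPOINT_NUMBERS * 2)  # 4064 -> the ~4k figure
        # ...or counting each endpoint number once:
        print(MAX_DEVICES * ENDPOINT_NUMBERS)      # 2032 -> the ~2k figure
        ```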

        • frazorth@feddit.uk (OP) · 12 days ago

          Yep, completely agree that it’s an Intel limitation.

          I didn’t see that about USB3 (my Intel system provides USB 3 and still has the 50-endpoint/25-device limit); I’ll take a look. However, it sounds like AMD is just generally better.

          It’ll be a shame to lose Quick Sync, but it’ll probably be worth it.

          However, this reinforces my thinking about USB4: it’s a Thunderbolt derivative, and as mentioned in your link, Thunderbolt doesn’t have these same limitations.

  • frazorth@feddit.uk (OP) · 13 days ago

    It’s unfortunate that the conversation has been derailed by people advising me on “better” implementations, so I should probably summarise the journey that got me to this point.

    I started home labbing and self-hosting over 25 years ago, with a large HDD connected to my home PC, HDMI to the TV, and ripping DVDs. The disk was probably PATA back then. Yeah, a single disk, and it probably died at some point, making me realise I needed backups in future. I replaced it with a dedicated server I built in an ITX case: a four-disk Chenbro ES34069, back in the good old Athlon days. Each one of those disks was SATA, directly connected to the motherboard. And it did the job, except for getting extremely hot, and I had a number of disk failures over the years.

    Looking back, I can’t guarantee that it was the heat from the system that caused premature failures rather than 2008-era disks just not being as reliable as they are now. But it was hot, that can’t have helped, and I had a number of disks fail in various ways over the years.

    I learnt about RAID and mdadm on Linux, started making arrays, and generally stopped losing data.

    An upgrade was required because I find storage requirements always outpace what I have, so it was ultimately replaced with a full-tower system, a Zalman MS800. At this point I tried to go the whole way: SAS controller cards! Silverstone SAS/SATA hot-swap drive bays! RAID disks!

    This is when I learnt that I don’t like large RAID arrays: scrubs took several days, and extending, upgrading, or replacing a disk took forever. So I started using 4-disk arrays in parallel with mergerFS overlaid. Honestly, that was the best discovery, and it’s not lost me anything in over a decade.

    But SAS became a hassle. I had a controller card fail at one point, so I picked up a replacement, which turned out to be a fake, and I’ve had to replace an SFF-8087 cable a couple of times. It’s hot, too: I’ve gone through a few replacement drive bays, and they’ve all had small, cheap fans on the back that overheat, make noise, and don’t do a great job of shifting heat when they’re part of the same case as the rest of the computer. So I’ve investigated alternatives.

    The current solution has evolved into four TerraMaster D4-300 external storage devices. These are probably the best I’ve come across: they fully expose the disks to the host, including SMART data and even the full, correct serial numbers. Speed is absolutely not an issue, and I find the benefits of SAS have been completely overstated when you’re dealing with 5/10 Gbps USB connections; I can do a full scrub of a 4-disk RAIDZ overnight. They have large fans on the back, and I’ve never had disks this cool with internal storage; SMART is much happier. You are allowed to have your own opinions, but honestly I’ve been using this setup for a long time and I have no regrets. It is much simpler, and the limitation is very much the speed of the storage device rather than the type of cable.
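
    As a rough illustration of the kind of monitoring these enclosures make possible (a sketch; it assumes smartctl 7+ for JSON output, and the device names are placeholders you’d adjust):

    ```python
    #!/usr/bin/env python3
    """Print the current SMART temperature of a few USB-attached disks (rough sketch)."""
    import json
    import subprocess

    DISKS = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]  # placeholders, adjust to suit

    for disk in DISKS:
        # -j asks smartctl (7.0+) for JSON; some USB bridges may also want "-d sat".
        result = subprocess.run(["smartctl", "-j", "-A", disk],
                                capture_output=True, text=True)
        data = json.loads(result.stdout)
        temp = data.get("temperature", {}).get("current")
        if temp is None:
            print(f"{disk}: no temperature reported")
        else:
            print(f"{disk}: {temp} °C")
    ```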

    The only issue is older Intel gimping USB enumeration to only allow 50 endpoints, which is about 25 devices (fewer if you have hubs), and even on newer systems they only doubled it.

    Let me know why you think my setup is wrong, and I’ll explain how I’ve probably tried your way and didn’t care for it.

    • SpikesOtherDog@ani.social · 13 days ago

      Those 2008 disks weren’t great, but airflow advice at that time wasn’t great either.

      I can understand how you got where you are, but my concern is that your RAIDZ pools will slow down as the shared bus fills up. You might be able to mitigate this by distributing your pools across different root hubs.
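
      Something along these lines (a sketch assuming the usual Linux sysfs layout) shows which USB bus each disk hangs off, so you can check whether your pools actually share a root hub:

      ```python
      #!/usr/bin/env python3
      """Group block devices by the USB bus they sit behind (Linux sysfs sketch)."""
      import glob
      import os
      import re

      for blk in sorted(glob.glob("/sys/block/sd*")):
          # For a USB disk the resolved device path runs through .../usbN/... before the block node.
          real = os.path.realpath(blk)
          match = re.search(r"/usb(\d+)/", real)
          bus = f"usb{match.group(1)}" if match else "not USB (SATA/NVMe/virtual/etc.)"
          print(f"{os.path.basename(blk)}: {bus}")
      ```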

      I get the point that external devices suffer less heat entrapment, but laying out the components in your case to optimize airflow will keep your drives cooler and extend their life. Also, I personally have a lot of risk in my house due to children and pets and can’t imagine leaving my devices out in the open like that.

      What size drives and what quantity are you considering?

    • Khanzarate@lemmy.world · 13 days ago

      Just wanna throw in a voice saying your setup sounds completely fine to me. Maybe it’s a bit odd but it also sounds like how I’d do it if I had storage needs that large.

      My storage needs are currently met by a 2.5" SSD connected to a Raspberry Pi, shared with Samba over WiFi, though, so I’m pretty sure every storage nerd in here is gonna tell me my opinion doesn’t count; take it with a grain of salt.

    • Nollij@sopuli.xyz · 12 days ago

      Thank you for the extra context. It’s relieving to know you don’t just have a bunch of USB “backup” drives connected.

      To break this down to its simplest elements, you basically have a bunch of small DASes connected to a USB host controller. The rest could be achieved using another interface, such as SATA, SAS, or others. USB has certain compromises that you really don’t want happening to a member of a RAID, which is why you’re getting warnings from people about data loss. SATA/SAS don’t have this issue.

      You should not have to replace the cable ever, especially if it does not move. Combined with the counterfeit card, it sounds like you had a bad parts supplier. But yes, parts can sometimes fail, and replacements on SAS are inconvenient. You also (probably) have to find a way to cool the card, which might be an ugly solution.

      I eventually went with a proper server DAS (EMC KTN-STL3, IIRC), connected via an external SAS cable. It works like a charm, although it is extremely loud and sucks down 250 W at idle. I don’t blame anyone for refusing this as a solution.

      I wrote, rewrote, and eventually deleted large sections of this response as I thought through it. It really seems like your main reason for going USB is that specific enclosure. There should really be an equivalent with SAS/SATA connectors, but I can’t find one. DAS enclosures pretty much suck, and cooling is a big part of it.

      So, when it all comes down to it, you would need a DAS with good, quiet airflow, and SATA connectors. Presumably this enclosure would also need to be self-powered. It would need either 4 bays to match what you have, or 16 to cover everything you would need. This is a simple idea, and all of the pieces already exist in other products.

      But I’ve never seen it all combined. It seems the data hoarder community jumps from internal bays (I’ve seen up to 15 in a reasonable consumer config) straight to rackmount server gear.

      Your setup isn’t terrible, but it isn’t what it could/should be. All things being equal, you really should switch the drives over to SATA/SAS. But that depends on finding a good DAS first. If you ever find one, I’d be thrilled to switch to it as well.

    • johntash@eviltoast.org · 12 days ago

      What kind of speeds do you get, and how much storage approx?

      I’ve gone the route of RAIDing USB drives before, but 5/10/40 Gbps ports didn’t exist then, so it was always slow and not worth it.

      Sounds like you basically have a DAS that connects over USB; that’s pretty cool if it works well.

      • johntash@eviltoast.org · 12 days ago

        Oh, also: drives have become a lot better at handling heat, or at least are more reliable IMO. If you can, try to stick to drives rated to go in a NAS. I used to have drives failing all the time, but not so much over the last few years.

  • Ajen@sh.itjust.works · 13 days ago

    If you have a spare PCIe slot you can get a card that adds a USB controller. Some M.2 slots also expose PCIe lanes you could use with an adapter.
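
    A quick way to see how many separate controllers a system already has (a sketch; it assumes pciutils is installed, and 0c03 is the PCI class code for USB controllers):

    ```python
    #!/usr/bin/env python3
    """List the USB host controllers lspci can see (sketch; pciutils assumed)."""
    import subprocess

    # PCI class 0c03 covers UHCI/OHCI/EHCI/xHCI host controllers.
    out = subprocess.run(["lspci", "-d", "::0c03"],
                         capture_output=True, text=True, check=True).stdout

    controllers = [line for line in out.splitlines() if line.strip()]
    print(f"{len(controllers)} USB controller(s) found:")
    for line in controllers:
        print("  " + line)
    ```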

  • SpikesOtherDog@ani.social · 13 days ago

    I know it’s not your question, but you might get much better results from a SAS card and SATA SSDs. You can slowly upgrade an ATX system to meet increased power and space needs. I have done the same using TrueNAS and it’s working great.

    Continuing to add USB devices does increase the serial traffic per bus, since the bus is shared. Increased traffic increases errors, lowering your bandwidth further.