• 0 Posts
  • 13 Comments
Joined 1 year ago
Cake day: June 15th, 2023

  • user134450@feddit.de to Linux@lemmy.world · Migrating to ZFS
    10 months ago

    Datasheet for one of the drive models: apparently these have a dual SAS interface, so what you are seeing could be completely normal. I don't have any experience with this type of setup though.

    Btw, you can uniquely identify partitions by using something like lsblk -o+PARTUUID,FSTYPE. The PARTUUID should never repeat in the output, even if the partition table was somehow used as a template (though "dd"ing from disk to disk will duplicate those, of course).

    Also check out the "SERIAL" column of lsblk to uniquely identify the drives themselves.
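
    For illustration, a rough sketch of what that could look like (the serials and UUIDs below are made up, not taken from the thread):

    # append PARTUUID and FSTYPE alongside the drive serials
    sudo lsblk -o NAME,SERIAL,PARTUUID,FSTYPE
    # NAME     SERIAL      PARTUUID                              FSTYPE
    # sda      EXAMPLE001
    # └─sda1               1c9f2a3b-0000-0000-0000-000000000001  zfs_member
    # sdb      EXAMPLE002
    # └─sdb1               1c9f2a3b-0000-0000-0000-000000000002  zfs_member
    # if two partitions ever show the same PARTUUID, they were cloned (e.g. via dd)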



  • user134450@feddit.de to Linux@lemmy.world · Migrating to ZFS
    10 months ago

    Could you run something like sudo lsblk -o+MODEL and note down the model of the drives? I kind of suspect that the HBA you are using is still doing some abstraction and is not in IT mode. The duplication could come from connecting two SAS cables to the same backplane, thus creating a sort of double image of the enclosure. This is usually handled and hidden by the HBA, though, if it is configured correctly.

    Please also check that you are in fact using the correct ports on the enclosure. If you are not building a SAN, only the "A" ports are supposed to be used and the "B" ports should be left unused.
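
    A hedged sketch of what such a double image might look like (device names, model, and serial here are placeholders): if the same serial number shows up under two different device nodes, the same physical drive is being presented twice, once per SAS path.

    # show drive model and serial number in addition to the default columns
    sudo lsblk -o+MODEL,SERIAL
    # NAME  ...  MODEL             SERIAL
    # sdc   ...  EXAMPLE-SAS-DISK  ABC123   <- physical drive, first SAS path
    # sdq   ...  EXAMPLE-SAS-DISK  ABC123   <- same drive again via the second path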



  • user134450@feddit.de to Linux@lemmy.world · Migrating to ZFS
    10 months ago

    Initiator-target (IT) mode enables creating a JBOD with ZFS vdevs on it. You can have the vdevs in raidz configuration (which gives you the same drive redundancy as a hardware RAID, with raidz1 performing similarly to RAID5).

    ZFS is commonly used with a JBOD configuration on a RAID controller, but you can also use any other kind of controller as long as the individual drives can be written to directly. Examples would be NVMe drives attached directly to the PCIe bus or normal SATA controllers. This is more a performance optimization than a compatibility issue.
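
    As a rough illustration only (pool name and device paths are placeholders, not from the thread), a raidz1 vdev over whole disks in such a JBOD could be created along these lines:

    # build a pool with one raidz1 vdev over four whole disks,
    # addressing the drives by their stable /dev/disk/by-id names
    sudo zpool create tank raidz1 \
        /dev/disk/by-id/scsi-EXAMPLE1 \
        /dev/disk/by-id/scsi-EXAMPLE2 \
        /dev/disk/by-id/scsi-EXAMPLE3 \
        /dev/disk/by-id/scsi-EXAMPLE4
    sudo zpool status tank   # check the vdev layout and drive state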