At the moment my NAS is set up as a Proxmox VM, with a hardware RAID card handling six 2TB disks. My VMs run on NVMe drives; the NAS VM handles data storage, with the RAIDed volume passed directly through to it in Proxmox and formatted as one large ext4 partition. It holds mostly photos, personal docs and a few films, and only I really use it. My desktop and laptop mount it over NFS, and I have restic backups running weekly to two external HDDs. It all works pretty well and has for years.
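
Roughly what the backup side looks like, for context (the repo path, source path and retention numbers below are placeholders rather than my real ones):

```bash
# Weekly job (cron / systemd timer); assumes RESTIC_PASSWORD or --password-file is set
restic -r /mnt/backup-hdd1/restic-repo backup /srv/nas

# Thin out old snapshots so the external disks don't fill up
restic -r /mnt/backup-hdd1/restic-repo forget --keep-weekly 8 --keep-monthly 12 --prune
```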

I am now getting ZFS curious. I know I’ll need to flash the HBA to IT mode, or get another one. I’m guessing it’s best to create the zpool in Proxmox and pass that through to the NAS VM? Or would it be better to pass the individual disks through to the VM and manage the zpool from there?
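
To make the two options concrete, this is roughly what I’m picturing; the pool name, disk IDs and VM ID are just placeholders:

```bash
# Option A: build the pool on the Proxmox host, then hand a dataset/zvol to the NAS VM
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3 \
  /dev/disk/by-id/ata-DISK4 /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6

# Option B: pass the disks (or the whole IT-mode HBA) through and create the pool inside the VM
qm set 100 -scsi1 /dev/disk/by-id/ata-DISK1   # repeat per disk, or...
qm set 100 -hostpci0 0000:01:00.0             # ...PCIe passthrough of the HBA itself
```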

  • minnix@lemux.minnix.dev · 5 hours ago

    ZFS is great, but to take advantage of its strengths you need the right drives. Consumer drives get eaten alive, as @scrubbles@poptalk.scrubbles.tech mentioned, and your IO delay will be unbearable. I use Intel enterprise SSDs and have no issues.

    • RaccoonBall@lemm.ee · edited · 32 minutes ago

      Complete nonsense. Enterprise drives are better for reliability if you plan on a ton of writes, but ZFS absolutely does not require them in any way.

      Next you’ll say it needs ECC RAM

    • blackstrat@lemmy.fwgx.uk (OP) · 2 hours ago

      Could this be because it’s RAIDZ-2/3? Those write parity as well as data, on top of the usual ZFS checksums. I’m running RAID5 at the moment on my HBA card and my limit is definitely the 1Gbit network for file transfers, not the disks. And it’s only me that uses this thing; it sits totally idle 90+% of the time.
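
      Back of the envelope: 1Gbit/s is roughly 125MB/s raw and more like 110MB/s over NFS, which six spinning disks in any parity layout should saturate easily. If I do move to ZFS I’d sanity-check the bottleneck with something like this (pool name and paths are placeholders):

      ```bash
      # Watch per-vdev throughput while copying a big file over NFS
      zpool iostat -v tank 5

      # Rough local sequential write test, bypassing the network entirely
      fio --name=seqwrite --directory=/tank/scratch --rw=write --bs=1M --size=4G --direct=1
      ```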

    • Scrubbles@poptalk.scrubbles.tech · 4 hours ago

      No idea why you’re getting downvoted; it’s absolutely correct, and it’s called out in the official Proxmox docs and forums. Proxmox logs and journals directly to the ZFS array regularly, to the point of drive-destroying amounts of writes.
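
      You can see it on your own hardware by watching the drives’ write counters for a day or two, something like this (the device path is an example, and the SMART attribute names vary by vendor):

      ```bash
      # Lifetime writes / wear figures reported by the SSD itself
      smartctl -a /dev/sda | grep -iE 'written|wear|percentage used'

      # Ongoing write throughput per device
      iostat -d 5
      ```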

      • ShortN0te@lemmy.ml · 3 hours ago

        What exactly are you referring to? The ZIL? ARC? L2ARC? And which docs? I haven’t found that called out in the official docs.

      • blackstrat@lemmy.fwgx.uk (OP) · 3 hours ago

        I’m not intending to run Proxmox on it. I have that running on an SSD, or maybe it’s an NVMe, I forget. This will just be for data storage, mainly photos, which one VM will manage and share out over NFS to other machines.
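
        i.e. the NAS VM would just export the dataset over NFS, something along these lines (the path and subnet are placeholders):

        ```bash
        # /etc/exports on the NAS VM would contain a line like:
        #   /srv/photos  192.168.1.0/24(rw,sync,no_subtree_check)
        # then reload the export table:
        exportfs -ra
        ```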

        • Scrubbles@poptalk.scrubbles.tech · 2 hours ago

          Ah, I’ll clarify that I set mine up next to the system drive in Proxmox, through the Proxmox ZFS helper tool. There was probably something in there that configured it in a weird way.

        • minnix@lemux.minnix.dev · 3 hours ago

          Yes, I’m specifically referring to the ZFS pool containing your VMs/LXCs: enterprise SSDs for that. Get them on eBay. Just search the Proxmox forums for enterprise vs consumer SSDs to see the problem with consumer hardware for ZFS. For Proxmox itself you want something like an NVMe with DRAM, deliberately under-provisioned so the drive controller has an unused-space buffer to use for wear leveling.
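
          By under-provisioning I just mean leaving a chunk of the drive unpartitioned so the controller always has spare area; roughly like this (the device and sizes are examples, and the blkdiscard wipes the drive):

          ```bash
          # Mark all the NAND as free first (DESTROYS everything on the drive)
          blkdiscard /dev/nvme0n1

          # Then partition only ~80% of it and never touch the rest
          sgdisk -n 1:0:+410G /dev/nvme0n1
          ```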