I spent a few days comparing various hypervisors under the same workload and on the same hardware. This is a very specific workload, and results might differ when testing other workloads.

I wanted to share it here because many of us run very modest hardware and getting the most out of it is probably something others are interested in, too. I also wanted to share it because maybe someone finds a flaw in the configurations I ran, which might boost things up.

If you do not want to go to the post / read all of that, the very quick summary is that XCP-ng was the quickest and KVM the slowest. There is also a summary at the bottom of the post with some graphs, if that interests you. For everyone else who reads the whole post, I hope it gives some useful insights for your self-hosting endeavours.

  • undu@discuss.tchncs.de · 20 points · 3 days ago

    XCP-ng might have the edge over bare metal because Windows by default uses Virtualization-Based Security (VBS). Under XCP-ng it can’t use that, since nested virtualization can’t be enabled.

    Disclaimer: I’m a maintainer of the control plane used by xcp-ng

    • buedi@feddit.org (OP) · 9 points · 3 days ago

      Oooh, that explains it! I wondered what was going on. Thank you very much. And thank you for working on XCP-ng, it is a fantastic platform :-)

  • 0x0@programming.dev · 12 points · 3 days ago

    Not being an expert, it seems as though your setup is Windows-centric, whereas KVM tends to shine on Linux.

    • buedi@feddit.org (OP) · 10 points · 3 days ago

      Yes, it is Windows-centric because that is what the workload I need to run is based on. It would be cool to see a similar comparison with a Linux workload that puts strain on CPU, memory and disk.

  • ikidd@lemmy.world · 1 point · 2 days ago

    Interesting article, but man, the random capitalization thing going on there is distracting as hell.

  • Appoxo@lemmy.dbzer0.com · 8 points · 3 days ago

    What I am missing is ESXi/vSphere. It would be quite important for the few people that have access to the eval resources to set it up.
    Same for the BSD options. I think bhyve?

    • buedi@feddit.org (OP) · 7 points · 3 days ago

      Sure, ESXi would have been interesting. I thought about it, but I did not test it because it is not interesting to me anymore from a business perspective. And I am not keen on using it in my homelab, so I left it out and used that time to do something relaxing. It’s my holiday right now :-)

      • Appoxo@lemmy.dbzer0.com · 4 points · edited · 3 days ago

        You ask for a “why deploy this [software]” in this community?

        Anyway…Simply: Why not? =)

        • Possibly linux@lemmy.zip · 2 points · 3 days ago

          Proxmox does clustering and should have most of the same features. While you are welcome to run whatever you want, I think vSphere is getting a bit pricey.

          • Zeoic@lemmy.world · 3 points · 2 days ago

            Not even just pricey, but unpurchasable in many cases. Broadcom is really fucking it up

            • Possibly linux@lemmy.zip · 2 up / 3 down · 2 days ago

              They are just making a living.

              In fact, their customers were stealing from them previously.

              – unnamed VMware rep
  • node815@lemmy.world · 6 points · 3 days ago

    I discovered a few months ago that XCP-NG does not support NFS shares, which was a huge dealbreaker for me. Additionally, my notes from my last test indicated that I could not mount existing drives without erasing them. I’m aware that I could have spun up a TrueNAS or other file-sharing server to bypass this, but maybe not if the system won’t mount the drives in the first place so it can pass them to the TrueNAS. I also had issues with their Xen Orchestra, which I will talk about below shortly. At the time they also used an out-of-date CentOS build which, unless I’m missing something, is no longer supported under that branding.

    The one test I did with a KVM setup was my Home Assistant installation. I have that running in Proxmox, and comparatively it did seem to run faster than my Proxmox instance does. But that may be attributed to Home Assistant being the sole KVM guest on the system, with no other services running (aside from XCP-NG’s).

    Their Xen Orchestra was a bit frustrating for me to install as well, and having some of the services locked behind a 14-day trial was a drawback for me. I believe they are working on the front-end GUI to negate the need for this, but the last time I tried to get things to work, it didn’t let me access it.

    • turnip@sh.itjust.works · 1 point · 13 hours ago

      What do you use NFS for? Isn’t NFS relatively obsolete by now?

      Assume I don’t know much about file shares.

      • node815@lemmy.world · 1 point · 12 hours ago

        NFSv4? I don’t think it’s obsolete.

        I use it for my desktop computers to connect to the server. All of my systems run Linux, so that’s my primary use. They back up to the server nightly.

    • buedi@feddit.org (OP) · 3 points · 3 days ago

      I had a rough start with XCP-ng too. One issue I had was the NIC in my OptiPlex, which worked… but was super slow. So the initial installation of the XO VM (to manage XCP-ng) took over an hour. After switching to a USB NIC with another Realtek chip, networking was no issue anymore.

      For management, Xen Orchestra can be self-built, and it is quite easy and works mostly without any additional knowledge / work if you know the right tools. Tom Lawrence posted a video I followed, and building my own XO is now quite easy and quick (sorry for it being a YT link): https://www.youtube.com/watch?v=fuS7tSOxcSo

  • Voroxpete@sh.itjust.works · 5 points · edited · 3 days ago

    What are your disk settings for the KVM environments? We use KVM at work and found that the default configuration loses you a lot of performance on disk operations.

    Switching from SATA to SCSI driver, and then enabling queues (set the number equal to your number of cores) dramatically speeds up all disk operations, large and small.

    On mobile right now but I’ll try to add some links to the KVM docs later.
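
    In the meantime, here is a rough sketch from memory of what I mean, based on the libvirt domain XML format (the queue count of 4 and the disk path are just placeholders, not our exact config):

      <!-- virtio-scsi controller with multiqueue; set queues to the VM's vCPU count -->
      <controller type='scsi' index='0' model='virtio-scsi'>
        <driver queues='4'/>
      </controller>
      <!-- attach the disk to the SCSI bus instead of SATA -->
      <disk type='file' device='disk'>
        <driver name='qemu' type='qcow2' cache='none'/>
        <source file='/path/to/disk.qcow2'/>
        <target dev='sda' bus='scsi'/>
      </disk>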

    • buedi@feddit.org (OP) · 4 points · edited · 3 days ago

      That’s a very good question. The test system is running Apache CloudStack with KVM at the moment, and I have yet to figure out how to see which disk / controller mode the VM is using. I will dig a bit to see if I can find out. If it is not SCSI, it would be interesting to re-run the tests.

      Edit: I did a ‘virsh dumpxml <vmname>’ and the disk part looks like this:

        <devices>
          <emulator>/usr/bin/qemu-system-x86_64</emulator>
          <disk type='file' device='disk'>
            <driver name='qemu' type='qcow2' cache='none'/>
            <source file='/mnt/0b89f7ac-67a7-3790-9f49-ad66af4319c5/8d68ee83-940d-4b68-8b28-3cc952b45cb6' index='2'/>
            <backingStore/>
            <target dev='sda' bus='sata'/>
            <serial>8d68ee83940d4b688b28</serial>
            <alias name='sata0-0-0'/>
            <address type='drive' controller='0' bus='0' target='0' unit='0'/>
          </disk>
      

      It is SATA… now I need to figure out how to change that configuration ;-)
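
      If I read the libvirt docs right, the change would roughly be pointing the disk at a SCSI bus and adding a virtio-scsi controller, something like this (untested sketch based on the dump above; CloudStack generates this XML itself, so it probably has to be set through the offering / template settings rather than a direct virsh edit):

        <!-- added: virtio-scsi controller -->
        <controller type='scsi' index='0' model='virtio-scsi'/>
        <!-- changed on the existing disk element: bus='sata' becomes bus='scsi' -->
        <target dev='sda' bus='scsi'/>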

  • hempster@lemm.ee · 4 points · edited · 3 days ago

    Kinda surprised to see XCP topping the charts; time to benchmark my server currently running Proxmox, I guess.

    • buedi@feddit.org (OP) · 3 points · 3 days ago

      It would be cool to see how Linux-centric workloads behave on those hypervisors. Juuust in case you plan to invest some time into that ;-)