This hasn’t happened to me yet, but I was just thinking about it. Let’s say you have a server with an iGPU, and you use GPU passthrough to let VMs use it. Then one day the host’s SSH server breaks: maybe you did something stupid, or there was a bad update. Are you fucked? How could you possibly recover, with no display and no SSH? The only thing I can think of is setting up serial access for emergencies like this, but I rarely hear about serial access nowadays, so I wonder if there’s some other solution here.

  • MNByChoice@midwest.social · 15 days ago

    Serial is still a thing.
    Get a cheap video card.
    Or a USB-to-VGA adapter.
    A server-class system with a BMC.
    A live CD.

    There are options.

    • berylenara@sh.itjust.works (OP) · 15 days ago (edited)

      > Serial is still a thing.

      Good to know 👍

      > Get a cheap video card.

      I’d be tempted to just pass it through as well 😅

      > Live CD.

      Doesn’t work if you have an encrypted disk (never mind, I was wrong about this).

      > Or a USB-to-VGA adapter.

      > A server-class system with a BMC.

      Interesting ideas, I’ll look into them. Thanks!

      • eldavi@lemmy.ml · 15 days ago

        > Doesn’t work if you have an encrypted disk

        Is this because you are unable to provide the encryption password?

  • Max-P@lemmy.max-p.me · 15 days ago

    I just have a boot entry that doesn’t do the passthrough, doesn’t bind to vfio-pci and doesn’t start the VMs on boot so I can inspect and troubleshoot.

    • berylenara@sh.itjust.works (OP) · 15 days ago

      That sounds brilliant. Have any resources to learn how to do something like this? I’ve never created custom boot entries before

      • Max-P@lemmy.max-p.me · 15 days ago

        I use systemd-boot so it was pretty easy, and it should be similar in GRUB:

        title My boot entry that starts the VM
        linux /vmlinuz-linux-zen
        initrd /amd-ucode.img
        initrd /initramfs-linux-zen.img
        options quiet splash root=ZSystem/linux/archlinux rw pcie_aspm=off iommu=on systemd.unit=qemu-vms.target
        

        What you want is that part: systemd.unit=qemu-vms.target which tells systemd which target to boot to. I launch my VMs with scripts so I have the qemu-vms.target and it depends on the VMs I want to autostart. A target is a set of services to run for a desired system state, the default usually being graphical or multi-user, but really it can be anything, and use whatever set of services you want: start network, don’t start network, mount drives, don’t mount drives, entirely up to you.

        https://man.archlinux.org/man/systemd.target.5.en

        You can also see if there’s a predefined rescue target that fits your need and just goes to a local console: https://man.archlinux.org/man/systemd.special.7.en
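A target like the qemu-vms.target described above is just a unit file plus services that want it. A minimal sketch; the service name and start script (vm-win10.service, /usr/local/bin/start-win10.sh) are hypothetical placeholders for your own VM setup:

```ini
# /etc/systemd/system/qemu-vms.target (hypothetical)
[Unit]
Description=Multi-user system with passthrough VMs
Requires=multi-user.target
After=multi-user.target
AllowIsolate=yes

# /etc/systemd/system/vm-win10.service (hypothetical VM service)
[Unit]
Description=Windows VM with GPU passthrough

[Service]
Type=simple
ExecStart=/usr/local/bin/start-win10.sh

[Install]
WantedBy=qemu-vms.target
```

After `systemctl enable vm-win10.service`, booting the entry with systemd.unit=qemu-vms.target pulls the VM in, while the other boot entry (without that option) boots the default target and leaves the GPU alone.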

    • berylenara@sh.itjust.works (OP) · 15 days ago (edited)

      As mentioned in another reply, this doesn’t work if you have an encrypted disk. The price of security, I suppose.

      Edit: never mind, I thought that secure boot and disk encryption would prevent you from mounting the disk on another system, but that appears to be wrong.

  • I passthrough a GPU (no iGPU on this mobo).
    It only hijacks the GPU when I start the VM, for which I haven’t configured autostart.
    Before the VM is started it’s showing the host prompt. It doesn’t return to the prompt if the VM is shutdown or crashed, but a reboot would, hence not autostarting that VM.
    If it got borked too badly, putting in a temporary GPU might be easier.

    Also, don’t break your SSH in the first place. That’s pretty easy with key-based (PKI) auth.
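Key-based auth here means something like the following in sshd_config (these are standard OpenSSH options; the exact set is a sketch, adjust to taste):

```
# /etc/ssh/sshd_config -- disable passwords, keys only
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
PermitRootLogin prohibit-password
```

Validate with `sshd -t` and keep an existing session open while you restart sshd, so a bad config doesn’t lock you out.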

    • berylenara@sh.itjust.works (OP) · 15 days ago

      > It only hijacks the GPU when I start the VM

      How did you do this? All the tutorials I read hijack the GPU at startup. Do you have to manually detach the GPU from the host before assigning it to the VM?

      • Interesting.
        I’m not doing anything special that wasn’t in one of the popular tutorials, and I thought that’s how it was supposed to work, although it might very well be a “bug” that it behaves this way.

        I don’t know enough about this, but the drivers are blacklisted on the host at boot, yet the console is still displayed through the GPU’s HDMI output at that time, which might depend on the specific GPU (a Vega 64 in my case).

        The host doesn’t have a graphical desktop environment, just the shell.

        • berylenara@sh.itjust.works (OP) · 13 days ago

          > the drivers are blacklisted on the host at boot

          This is the problem I was alluding to, though I’m surprised you can still see the console despite the driver being blacklisted. I’ve heard of people using scripts to manually detach the GPU and attach it to a VM, but it sounds like you don’t need that, which is interesting.
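For reference, those detach scripts generally just rebind the PCI device between drivers via sysfs. A rough sketch; the PCI addresses are placeholders (get yours from `lspci -nn`), and the commands are printed via a dry-run helper rather than executed, since the real thing needs root and the actual hardware:

```shell
#!/bin/sh
# Hypothetical PCI addresses for the GPU and its HDMI audio function.
GPU=0000:01:00.0
AUDIO=0000:01:00.1

# Dry-run helper: print each command instead of running it.
# Drop the echo (run as root) to actually rebind the devices.
run() { echo "+ $*"; }

for dev in "$GPU" "$AUDIO"; do
    # Unbind from the current host driver (amdgpu, nouveau, ...).
    run sh -c "echo $dev > /sys/bus/pci/devices/$dev/driver/unbind"
    # Tell the kernel to prefer vfio-pci for this device...
    run sh -c "echo vfio-pci > /sys/bus/pci/devices/$dev/driver_override"
    # ...and re-probe so vfio-pci picks it up.
    run sh -c "echo $dev > /sys/bus/pci/drivers_probe"
done
```

Reattaching to the host is the same dance with driver_override cleared; with libvirt, `virsh nodedev-detach` and `virsh nodedev-reattach` wrap this for you.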

  • qjkxbmwvz@startrek.website · 15 days ago

    For very simple tasks you can usually log in blind and run commands. I’ve done this for things like rebooting or bringing up a network interface. It’s maybe not the smartest approach, but basically: type root, then the root password, then dhclient eth0 or whatever magic you need. No display required, unless you make a typo…

    In your specific case, you could have a shell script that stops the VMs and disables passthrough, so you just log in blind and invoke that script. Bonus points if you create a dedicated user with that script set as their login shell (or just put it in the appropriate rc dotfile).
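A sketch of such a script, assuming libvirt; the script name, VM name, and PCI device are all hypothetical placeholders, and the commands are echoed rather than executed so the sketch is safe to read and run as-is:

```shell
#!/bin/sh
# Hypothetical blind-rescue script, e.g. /usr/local/bin/unstick.
# Usage when things break: log in blind as root, type `unstick`,
# wait a moment, then retry SSH.
run() { echo "+ $*"; }   # dry-run helper; drop the echo for real use

rescue() {
    run virsh shutdown win10                      # stop the VM holding the GPU
    run virsh nodedev-reattach pci_0000_01_00_0   # hand the GPU back to the host
    run systemctl restart sshd                    # and try to revive SSH
}

rescue
```

For the dedicated-user variant: `useradd -m -s /usr/local/bin/unstick rescue` makes the script that user’s login shell, so logging in as rescue (even blind) just runs it.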

  • NeoNachtwaechter@lemmy.world · 15 days ago

    Proxmox on the host. It uses a web server for the admin stuff.

    Nothing else runs on the host → nothing else can break on the host.

    • berylenara@sh.itjust.works (OP) · 15 days ago

      If you want to lock the web server and SSH down behind a VPN, though, that’s where you can fuck up and lock yourself out.

  • mvirts@lemmy.world · 15 days ago

    Live boot, plug in a display?

    Maybe I’m missing something here, but won’t booting from live media run a normal environment?

    If you don’t have a live boot option you can also pull the disk and fix it on another machine, or put a different boot disk in the system entirely.

    You can probably also disable the hardware virtualization extensions in the BIOS to break the VM so it doesn’t steal the graphics card.

    • berylenara@sh.itjust.works (OP) · 15 days ago (edited)

      A rescue ISO doesn’t work if you have an encrypted disk. I thought everybody encrypted their disks nowadays.

      > If you don’t have a live boot option you can also pull the disk and fix it on another machine, or put a different boot disk in the system entirely.

      This is an interesting idea though: as long as the other machine has a different GPU, the system shouldn’t hijack it on startup.

      > You can probably also disable hardware virtualization extensions in the bios to break the VM so it doesn’t steal the graphics card.

      AFAIK GPU passthrough is usually configured to detach the GPU from the host automatically on startup, so even if all the VMs were broken, the GPU would still be detached. However, as another commenter pointed out, it’s possible to detach it manually instead, which might be safer against accidental lockouts.

      • Max-P@lemmy.max-p.me · 15 days ago

        How’s the disk encrypted? I’ve never heard of anyone setting up an encrypted drive such that you can’t manually mount it with the password. It’s possible, but you’d have to go out of your way to do that and encrypt the drive with only a TPM-managed key. It’s kind of a bad idea, because if you lock yourself out, your data’s gone.

      • mvirts@lemmy.world · 15 days ago

        😅 Nah, for me encryption is a bigger risk than theft.

        That said, you should be able to decrypt your disks with the right key even on a live boot. Even if the secrets are in the TPM, you should be able to use whatever your normal system uses to decrypt the disks.

        If you don’t enter a password to boot, the keys are available somewhere. If you do, the password can decrypt the keys, AFAIK.

        Again, I don’t do this; it’s just what I’ve picked up here and there, so take it with a grain of salt. I may be wrong.
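That matches how a standard passphrase-based LUKS setup works: from any live environment you can open the volume and chroot in. Roughly (the device paths and mapper name are placeholders, and the commands are printed via a dry-run helper rather than executed, since they need root and the actual disk):

```shell
#!/bin/sh
run() { echo "+ $*"; }   # dry-run helper; drop the echo for real use

run cryptsetup open /dev/sda2 cryptroot   # prompts for the LUKS passphrase
run mount /dev/mapper/cryptroot /mnt      # mount the decrypted root
run mount /dev/sda1 /mnt/boot             # boot/EFI partition, if separate
run chroot /mnt /bin/sh                   # fix sshd, bootloader, whatever broke
```

TPM-sealed keys are the exception: those only unseal on the original machine, which is why keeping a fallback passphrase keyslot matters.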

        • berylenara@sh.itjust.works (OP) · 15 days ago

          Actually, that might work. I thought that secure boot and disk encryption would prevent mounting the disk on a different system, but now I can’t think of any reason why they would. Good idea.