Fresh Proxmox install, having a dreadful time. Trying not to be dramatic, but this is much worse than I imagined. I’m trying to migrate services from my NAS (currently running in Docker) to this machine.

How should Jellyfin be set up, LXC or VM? I don’t have a preference, but I do plan on using several Docker containers (assuming I can get this working within 28 days) in case that makes a difference. I tried WunderTech’s setup guide, which uses one LXC for Docker containers and a separate LXC for Jellyfin. However, that guide isn’t working for me: curl doesn’t work on my machine, most install scripts fail, nano edits crash, and mounts are inconsistent.

My Synology NAS is mounted to the host, but adding mount points to the LXC doesn’t actually connect the data. For example, if my NAS’s media is in /data/media/movies or /data/media/shows and the host’s SMB mount is /data/, choosing the LXC mount point /data/media should work, right?

Is there a way to pass the iGPU through to an LXC or VM without editing a .conf in nano? When I tried to make the suggested edits, the LXC froze for over 30 minutes and seemingly nothing happened, as the edits don’t persist.

Any suggestions for resource allocation? I’ve been looking for guides or a formula for how much CPU, RAM, and disk to give an LXC or VM, to no avail.

If you suggest command lines, please keep them simple as I have to manually type them in.

Here’s the hardware: Intel i5-13500, 64GB Crucial DDR5-4800, ASRock B760M Pro RS, 1TB WD SN850X NVMe.

  • LazerDickMcCheese@sh.itjust.works (OP) · 2 days ago

    > Do they show up as resources? I add my mount points at the CLI personally, this is the best way imo: pct set 100 -mp0 /mnt/pve/NAS/media,mp=/media

    I’d love to check that, but you lost me…

    So the NAS was added like you suggested; I can see the NAS’s storage listed next to local storage. How do I tell an LXC or VM to use it, though?

    • curbstickle@anarchist.nexus · 2 days ago (edited)

      This line right here shares it with the LXC, I’ll break it down for you:

      pct set 100 -mp0 /mnt/pve/NAS/media,mp=/media

      pct is the Proxmox container command; you’re telling it to set a mount point (mp0, mp1, mp2, etc.). The path on the host is /mnt/pve/yourmountname. The path inside the container is on the right, mp=/your/path/. So inside the container, if you ran an ls command in the directory /your/path/, it would list the files in /mnt/pve/yourmountname.

      The yourmountname part is the name of the storage you added. You can open the shell at the host level in the GUI, go to /mnt/pve/, then enter ls, and you will see the name of your mount.

      So much like I was mentioning with the GPU, what you’re doing here is sharing resources with the container, rather than needing to mount the share again inside your container. Which you could do, but I wouldn’t recommend it.
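
      By the way, all that command really does is write one line into the container’s config file - for container 100 that’s /etc/pve/lxc/100.conf - and it ends up looking like mp0: /mnt/pve/NAS/media,mp=/media. So if you ever want to double-check a container’s mount points from the host shell, you can just cat that file.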

      Any other questions I’ll be happy to help as best as I can.

      Edit: forgot to mention, if you go to the container’s Resources section, you’ll see “Mount Point 0” and the mount point you made listed there.

      • LazerDickMcCheese@sh.itjust.works (OP) · 1 day ago

        Are there different rules for a VM with that command? I made a second NAS share as NFS (SMB has been failing, I’m desperate, and I don’t know the practical differences between the protocols). Proxmox accepted the NFS share, but it’s showing as “unknown.” Regardless, I wanted to see if I could make it work anyway, so I tried ‘pct set 102 -mp1 /mnt/pve/NAS2/volume2/docker,mp=/docker’

        102 being a VM I set up for Docker duties, specifically transferring Docker data that’s currently in use, to avoid a lapse in service or loss of user data.

        Am I doing this in a stupid way? It kinda feels like it

        • curbstickle@anarchist.nexus · 1 day ago

          For the record, I prefer NFS

          And now I think we may have the answer…

          OK, so that command is for LXCs, not for VMs. If you’re doing a full VM, we’d mount NFS directly inside the VM.

          Did you make an LXC or a VM for 102?

          If it’s an LXC, we can work out the command and figure out what’s going on.

          If it’s a VM, we’ll get it mounted with the NFS utils, but how is going to depend on what distribution you’ve got running on there (different package names and package managers).
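
          (If you’re not sure which one you made, two quick commands in the host shell will settle it: pct list shows your containers and qm list shows your VMs, so 102 will show up in exactly one of those lists.)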

          • LazerDickMcCheese@sh.itjust.works (OP) · 1 day ago

            Ah, that distinction makes sense…I should’ve thought of that

            So for the record, my Jellyfin-lxc is 101 (SMB mount, problematic) and my catch-all Docker VM is 102 (haven’t really connected anything, and I don’t care how it’s done as long as performance is fine)

            • curbstickle@anarchist.nexus · 1 day ago (edited)

              OK, we can remove it as an SMB mount, but fair warning: it takes a few bits of CLI to do this thoroughly.

              • Shut down 101 and 102
              • In the web GUI, go to the JF container, go to Resources, and remove that mount point. Take note of where you mounted it! We’re going to mount it back in the same spot.
              • Go to Datacenter → Storage in the web GUI, select the SMB mount of the NAS, and select Edit - then uncheck Enable.
              • With it selected, go ahead and click Remove.
              • For both 101 and 102, let’s make sure they aren’t set to start at boot for now. Go to each of them, and under the Options section you’ll see “Start at Boot”. If it says Yes, change it to No (click Edit or double-click and remove the check from the box).
              • Reboot your server
              • Let’s check that the mount unit is gone: go to the host, then Shell, and enter systemctl list-units "*.mount"
              • If you don’t see mnt-pve-thenameofthatshareyoujustremoved.mount, it’s removed.

              That said - I like to be sure, so let’s do a few more things.

              • umount -R /mnt/pve/thatshare - Totally fine if this throws an error
              • Let’s check the mounts list: cat /proc/mounts - a whooole bunch of stuff will pop up. Do you see your network share listed there? If so, umount it again - /proc/mounts is a read-only view generated by the kernel, so you can’t edit it directly. If something keeps bringing the mount back, check /etc/fstab for a leftover line instead: nano /etc/fstab, delete the line if it’s there, then ctrl+x and y to save.
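
              One more place worth checking, since storage you add through the GUI is defined in its own config file: cat /etc/pve/storage.cfg from the host shell and make sure there’s no leftover cifs block for that share. If one is still there, that’s what keeps recreating the mount at boot.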

              OK, you should be all clear. Let’s reboot one more time just to clear things out if you had to make any further changes. If not, let’s re-add the share.

              Go ahead and add the NAS back using NFS in the Storage section, like you did previously. You can mount it to that same directory you were using before. Once it’s there, go back into the Shell, and let’s do this again: ls -la /mnt/pve/thenameofyourmount/

              Is your data showing up? If so, great! If not, let’s find out what’s going on.
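
              (One quick check if that directory comes up empty: showmount -e followed by your NAS’s IP, run from the host shell, lists what the Synology is actually exporting over NFS - that tells you whether the problem is on the Proxmox side or the NAS side. The IP here is just whatever address your NAS has on your network.)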

              Now let’s add your container mount back. You’ll need to re-add that mount point with: pct set 101 -mp0 /mnt/pve/NAS/media,mp=/media (101 being your JF container - use however you had it mounted before, from that second step).

              Now start the container, and go to the console for the container. ls -la /whereveryoumountedit - if it looks good, your JF container is all set and now working with NFS! Go back to the options section, and enable “Start at Boot” if you’d like it to.
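
              (You can also confirm it from the host side with pct config 101, which prints the container’s config - the mp0 line you just set should show up there.)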

              Onto the VM: what distribution is installed there? Debian, Fedora, etc.?

              • LazerDickMcCheese@sh.itjust.works (OP) · 23 hours ago

                Well, now the jelly LXC is failing to boot: “run_buffer: 571 Script exited with status 2 / lxc_init: 845 failed to run lxc.hook.pre-start for container '101'”

                But the mount seems stable now. And the VM is Debian 12

                • curbstickle@anarchist.nexus · 23 hours ago (edited)

                  That usually means something has changed with the storage; I’d bet there’s a lingering reference to the old mount in the container’s .conf.

                  The easiest fix? Just delete the container and start clean. That’s what’s nice about containers, by the way! The harder route would be mounting the container’s filesystem and taking a look at some logs. Which route do you want to go?
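
                  (If you want to peek at logs before deciding, the usual trick is starting the container in the foreground with debug logging: lxc-start -n 101 -F -l DEBUG -o /tmp/lxc-101.log, then read through /tmp/lxc-101.log - it will usually name the exact mount or hook the pre-start script choked on. The log path is just an example; put it wherever you like.)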

                  For the VM, it’s really easy. Go to the VM and open up the console. If you’re logging in as root, the commands below work as-is; if you’re logging in as a user, we’ll need to add a sudo in there (and maybe install some packages / add the user to the sudoers group).

                  1. Update your packages - apt update && apt upgrade
                  2. Install the NFS tools - apt install nfs-common
                  3. Create the directory where you’re going to mount it - mkdir /mnt/NameYourMount
                  4. Let’s mount it to test - mount -t nfs 192.168.1.100:/share/dir /mnt/NameYourMount (swapping in your NAS’s IP and export path)
                  5. List out the files and make sure it’s working - ls -la /mnt/NameYourMount. If you have an issue here, pause and come back and we’ll see what’s going on.
                  6. If it looks good, let’s make it permanent - nano /etc/fstab
                  7. Add this line, edited as appropriate: 192.168.1.100:/share/dir /mnt/NameYourMount nfs defaults,x-systemd.automount,x-systemd.requires=network-online.target 0 0
                  8. Save and close - ctrl+x then y
                  9. Reboot your VM, then log in again and ls -la /mnt/NameYourMount to confirm you’re all set
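
                  (If you’d rather skip the reboot in step 9, you can test the fstab entry directly: systemctl daemon-reload so systemd picks up the new line, then mount /mnt/NameYourMount - if that mounts cleanly, the boot-time mount should too. The reboot is still the most honest end-to-end test, though.)
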
                    • LazerDickMcCheese@sh.itjust.works (OP) · 21 hours ago

                      I solved the LXC boot error; there was a typo in the mount (my keyboard sometimes double-presses letters, which makes command lines rough).

                    So just to recap where I am: main NAS data share is looking good, jelly’s LXC seems fine (minus transcoding, “fatal player error”), my “docker” VM seems good as well. Truly, you’re saving the day here, and I can’t thank you enough.

                      What I can’t make sense of is that I made two NAS shares: “A” (main, which has been fixed) and “B” (the Docker configs currently in use). “B” is correctly connected to the Docker VM now, but “B” is refusing to connect to the Proxmox host, which I think I need in order to move Jellyfin’s user data and config. Before I go down the path of trying to force the NFS or SMB connection, is there an easier way?