Is this set within a config file for ntfy? When I access it through localhost it tells me it needs to be accessed over HTTPS, which I set up through Cloudflare Tunnels, and the error went away.
There was no config file generated for my ntfy Docker container.
Edit: I have fixed this error but notifications still do not work.
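For anyone else hitting this: the ntfy Docker image reads its config from /etc/ntfy/server.yml and does not generate one for you; you have to mount it in yourself. A minimal sketch of what I mean, with placeholder paths and hostname rather than my actual setup:

services:
  ntfy:
    image: binwiederhier/ntfy
    command: serve
    volumes:
      # server.yml goes in this directory; the image will not create it
      - /DATA/AppData/ntfy/config:/etc/ntfy
      - /DATA/AppData/ntfy/cache:/var/cache/ntfy
    ports:
      - "8080:80"

with /DATA/AppData/ntfy/config/server.yml containing at least:

# server.yml
base-url: https://ntfy.example.com   # the public HTTPS URL served by the tunnel
behind-proxy: true                   # ntfy sits behind the Cloudflare tunnel/proxy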
I was able to get it working with this Docker Compose!! Thank you!
I can now sign into Element X and SchildiChat Next! The only problem is that none of my chats are showing up, and when I create a new room it disappears as soon as I back out to the chats tab.
Any idea how to fix it? Do I need a separate subdomain for the sliding sync proxy?
I am using Cloudflare Tunnels as of right now. I would be very appreciative if I could take a look at your settings!
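In case it's useful, this is roughly how I understand the matrix-org sliding-sync proxy gets wired up; hostnames, the secret, and the Postgres URL below are placeholders, so treat it as a sketch rather than my working config. It normally gets its own hostname, so yes, a separate subdomain (or a separate Cloudflare Tunnel public hostname) is the usual setup:

services:
  sliding-sync:
    image: ghcr.io/matrix-org/sliding-sync:latest
    environment:
      # the homeserver the proxy talks to
      - SYNCV3_SERVER=https://matrix.example.com
      # random secret the proxy uses to encrypt its state
      - SYNCV3_SECRET=CHANGE_ME
      # Postgres database for the proxy
      - SYNCV3_DB=postgres://syncv3:CHANGE_ME@db/syncv3?sslmode=disable
      - SYNCV3_BINDADDR=0.0.0.0:8009
    ports:
      - "8009:8009"

The homeserver's .well-known/matrix/client then needs an “org.matrix.msc3575.proxy” entry whose url points at that hostname so Element X can find it. Newer Synapse versions also support sliding sync natively, in which case the proxy isn't needed.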
THANK YOU ALL!
It was a problem with my Docker Compose file! I didn't list the needed devices from the Jellyfin documentation. I thought the container was detecting the GPU, but it wasn't. Running docker exec <container-name> nvidia-smi is your friend!
Edit: so now it doesn't kick me out saying the playback failed, but it's just a black screen with 4K media.
Edit 2: my bad, I forgot to enable some transcoding settings in Jellyfin lol
This is what that compose looks like now:
services:
  jellyfin:
    image: jellyfin/jellyfin
    user: 1000:1000
    network_mode: 'host'
    volumes:
      - /DATA/AppData/jellyfin/config:/config
      - /DATA/AppData/jellyfin/cache:/cache
      - /DATA/AppData/jellyfin/media:/media
      - /mnt/drive1/media:/mnt/drive1/media
      - /mnt/drive2/Jellyfin:/mnt/drive2/Jellyfin
      - /mnt/drive3:/mnt/drive3
      - /mnt/drive4/media:/mnt/drive4/media
      - /mnt/drive5/jellyfin:/mnt/drive5/jellyfin
      - /mnt/drive6/jellyfin:/mnt/drive6/jellyfin
    runtime: nvidia
    # NVIDIA device nodes passed through to the container
    devices:
      - /dev/nvidia-caps:/dev/nvidia-caps
      - /dev/nvidia0:/dev/nvidia0
      - /dev/nvidiactl:/dev/nvidiactl
      - /dev/nvidia-modeset:/dev/nvidia-modeset
      - /dev/nvidia-uvm:/dev/nvidia-uvm
      - /dev/nvidia-uvm-tools:/dev/nvidia-uvm-tools
    deploy:
      resources:
        reservations:
          devices:
            # CDI-style GPU reservation (with driver: nvidia you would use
            # count: all and capabilities: [gpu] instead of device_ids)
            - driver: cdi
              device_ids:
                - nvidia.com/gpu=all
Edit: when I try to compose up it says “yaml: line 30: mapping values are not allowed in this context”. When I remove lines 30 and 31 the output is “validating /DATA/AppData/jellyfin/docker-compose.yml: services.jellyfin.deploy.resources.reservations.devices.1 must be a mapping”.
I tried this and it says:
OCI runtime exec failed: unable to start container process: exec: “nvidia-smi”: executable file not found in $PATH: unknown
I ran it as two commands instead of one before and still got that error message.
However, I tried again with a different Jellyfin image and the command seems to have run fine.
Here is a pic of my nvidia-smi output:
I followed this guide and seemed to get it working.
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
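If I remember right, the last step of that guide is a sample workload that verifies the toolkit is wired into Docker, independent of any Jellyfin image:

# should print the same table as running nvidia-smi on the host
sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi

If that works but the Jellyfin container still can't see the GPU, the problem is in the compose file rather than in the toolkit install.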
However, Jellyfin transcoding still doesn't work. I have tried adding the “NVIDIA_VISIBLE_DEVICES=all” environment variable; it still didn't work.
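Assuming that variable is NVIDIA_VISIBLE_DEVICES (which is what the NVIDIA container runtime reads), in compose form it sits under the jellyfin service like this:

    environment:
      # which GPUs the NVIDIA runtime exposes to the container
      - NVIDIA_VISIBLE_DEVICES=all
      # 'all' includes the video capability needed for NVENC/NVDEC
      - NVIDIA_DRIVER_CAPABILITIES=all

It only does anything when the container actually starts under the nvidia runtime, though.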
I tried using the Docker Compose from here
But when I try and run this command: “docker exec -it jellyfin ldconfig sudo systemctl restart docker”
It says the container is restarting and to try again when the container has started.
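For what it's worth, I believe that is meant to be two separate commands, one inside the container and one on the host, not a single line:

# inside the running Jellyfin container: refresh the linker cache so the NVIDIA libraries are picked up
docker exec -it jellyfin ldconfig

# on the host: restart the Docker daemon
sudo systemctl restart docker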
So I seem to have gotten it up and running with the guide. The only issue is that it seems really slow (compared to Conduit, conduwuit, Dendrite, and Synapse on SQLite). Also, when I pick up calls in Element X on GrapheneOS I cannot hear the other user (this is on the same network, both devices running GrapheneOS; I did configure my coturn server but didn't try out-of-network calls). When I was trying to sign up it would error on me; to bypass the error I just kept clicking sign up/sign in and it worked on the second or third try. Device verification seems to work, but you have to be really slow about working through the steps.
Any ideas on how to fix this?
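For anyone else debugging the call-audio side: the usual suspects are a TURN shared secret that doesn't match the homeserver config, and the UDP relay port range being blocked by the VPS firewall. A minimal turnserver.conf sketch (realm, secret, and port range are placeholder values, not my actual config):

# /etc/turnserver.conf
listening-port=3478
tls-listening-port=5349
realm=turn.example.com
# shared-secret auth; the same secret goes in the homeserver's TURN settings
use-auth-secret
static-auth-secret=CHANGE_ME
# relayed media uses this UDP range, so it must be open on the VPS firewall
min-port=49152
max-port=65535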
I am not dead set on using Ansible; that is just the top recommendation I was receiving, purely based on how much documentation there is for it. That, and it sets up Element chat for you, which is what I really needed, plus a Postgres database. This seems a lot closer to my tempo and I will totally try to get it running! Thank you!
My bad, I had replied to two people who had asked for it. I didn't know how to format it, so I had fixed it once, but I will delete it and try to add it to the main post.
EDIT: hopefully that is better to look at!
Please explain how I am spamming when my last post in this community was 3 days ago?
I find this resource very helpful, I am trying to learn.
My bad, it is now fixed.
No, I was deploying regular Conduit. Now I am trying conduwuit, and when I try to connect it says it doesn't support sliding sync. I cannot seem to find it referenced in the config file either.
EDIT: nvm, I just read that it is not implemented yet in conduwuit. 🥲 Kind of a dealbreaker because I am trying to get Element X working for group calling.
My bad, sorry, I should have thought about making an official Matrix account and testing there. Based on what I can tell, my ntfy container is working, because notifications work flawlessly with an official Matrix account.
That leaves me with two ideas so far: either there is something wrong with my Matrix Dendrite container, or with my VPS coturn server (which I forgot to mention). It looks like traffic is coming through just fine on my coturn server, though. I am curious if this is a firewall issue with the coturn server; that would make the most sense given that Element Call is also not working in Element X.
It's weird, though, because calling works on separate networks just fine, so I had assumed that my coturn server just worked. Odd.
Edit: I think it is the TURN server, even though my calls are going through. I went to my Matrix URL with “/_matrix/client/r0/voip/turnServer” to diagnose WebRTC and it says “errcode: ‘M_MISSING_TOKEN’”, “error: ‘Missing access token’”.
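Worth noting for anyone following along: that endpoint requires an access token, so opening it in a browser without one always returns M_MISSING_TOKEN, whether or not TURN is set up. A quick authenticated check (homeserver URL and token are placeholders) looks something like:

curl -s -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  "https://matrix.example.com/_matrix/client/r0/voip/turnServer"

If the homeserver is handing out TURN credentials, the response is a JSON object with uris, username, password, and ttl.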