I just use the Voyager app, which has a great UI, with no need to visit the website at all.
They/them
It’s making me log in to see that page
The Linux app SpeechNote has a bunch of links to models of both varieties, in various languages, and supports training on a specific voice.
I just use Auxio on Android or GNOME Music on Linux to listen to my downloaded files, and sync them via Syncthing.
I just realised I have 3.9K comments in one year.
It apparently has an Android version, but, strangely for an open source app, it’s not on F-Droid?
I use Seal on Android and yt-dlp-gui on Linux because they’re native apps using native theming/design languages, but it’s always cool to have another option!
I don’t have an answer for you, but I’m also interested in this and would like to see the responses
Cool, thanks!
I already use Jan, and I like that it’s a native app rather than a webui; I don’t really like webuis. I wasn’t saying there weren’t any local model apps, just that there are far fewer of them than glorified ChatGPT clients.
And if they were going to make theirs cross-platform, it would in fact be the first FOSS local model app for Android. (Layla Lite exists but is not FOSS.)
If you want to make it more unique than ‘just another ChatGPT client’, you could try adding local model support, not sure how difficult that would be.
I didn’t know, so I asked Microsoft Copilot:
Tubular, which is a fork of NewPipe, does implement SponsorBlock. It also supports ad-free versions of platforms like SoundCloud, Bandcamp, and PeerTube. This means that you should be able to use SponsorBlock with PeerTube videos through Tubular. However, it’s important to note that there has been a discussion about the lack of PeerTube support in the SponsorBlock repository itself, which suggests that while Tubular supports PeerTube, there might be some limitations or ongoing development regarding full integration. For the most up-to-date information, you may want to check the latest updates on Tubular’s GitHub page or the SponsorBlock issue tracker.
Idk, all I know is that I don’t have NewPipe installed, and when someone shared a PeerTube link on Lemmy the other day, Tubular opened it with no problems.
I use Tubular, which integrates SponsorBlock and comment replies.
Any tips on how to get Stable Diffusion to do that? I’m running it through Krita’s AI Image Generation plugin, and with my 6GB VRAM and 16GB RAM, VRAM is quite limited when I want to inpaint larger images; I keep getting ‘out of VRAM’ errors. How do I make it switch to RAM when VRAM is full? Or with Jan, for that matter, how can I get it to use RAM and VRAM together so I can run models larger than 7B?
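For reference, this is roughly what I’m hoping for, sketched with the llama-cpp-python bindings (I’m assuming Jan uses a llama.cpp-style backend here; the model path and layer count are just guesses for my card):

```python
from llama_cpp import Llama

# Partial offload: n_gpu_layers controls how many transformer layers
# are placed in VRAM; the remaining layers stay in system RAM and run
# on the CPU. 20 is a guess for a 6GB card; tune it until the model fits.
llm = Llama(
    model_path="./models/llama-2-13b.Q4_K_M.gguf",  # hypothetical path
    n_gpu_layers=20,  # -1 offloads all layers; 0 is CPU-only
    n_ctx=2048,
)

out = llm("Q: What does partial GPU offload do? A:", max_tokens=64)
print(out["choices"][0]["text"])
```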
I have an Asus laptop with a GTX 1660 Ti with 6GB of VRAM. I use Jan for LLMs (only 7B models or smaller fit on my hardware) and Krita with the AI Image Generation plugin for image generation. Most things work in it, except it fails with an ‘out of VRAM’ error if I try to inpaint an area larger than about 1/8 of my canvas size.
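For anyone wondering why 7B is the ceiling on 6GB, here’s my rough back-of-the-envelope math (the 20% overhead figure for the KV cache and activations is just my own assumption):

```python
# Rough memory estimate: parameters x bytes per parameter, plus overhead
# for the KV cache and activations (assumed ~20% here).
def estimate_vram_gb(params_billions, bytes_per_param, overhead=0.2):
    return params_billions * bytes_per_param * (1 + overhead)

print(estimate_vram_gb(7, 0.5))   # 7B at 4-bit:  ~4.2 GB -> fits in 6GB
print(estimate_vram_gb(13, 0.5))  # 13B at 4-bit: ~7.8 GB -> too big
print(estimate_vram_gb(7, 2.0))   # 7B at fp16:  ~16.8 GB -> CPU/RAM only
```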
Is that 128GB of VRAM? Because normal RAM doesn’t matter unless you want to run the model on the CPU, which is much slower.
It’s a community
I need the JS and HTML as well as the CSS for the frontend, but I can’t easily see where that’s located
Ooh, what’s mine?