

It literally says in the link. Go to the link and it’s the title.


Really wish they would take security vulnerabilities seriously 😞
Because they are significant and broad-reaching.
How does the organization work out?
We have dozens of workflows for our monorepo CI/CD stuff. GitHub’s flat workflow directory structure is incredibly annoying.
GitLab is a single file?? (Or am I misinformed?) How does that work out?
A lot of that pain can be reduced by writing and running your code locally before pushing it to a CI environment. Generally with our automation we write a CLI, and GitHub Actions is just an execution environment that calls the CLI.
And if what you’re trying to do must execute inside an action, you can run workflows locally with Docker!
GitHub Actions mostly.
The rest is usually plumbing and code to support it. The actions are just the automated execution environment.
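As a sketch of that pattern (the workflow and the `./tools/mycli` name here are made up for illustration, not from any real project):

```yaml
# .github/workflows/ci.yml -- a minimal sketch; "./tools/mycli" is a
# hypothetical CLI that holds all of the real automation logic.
name: ci
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # The action is just an execution environment: it runs the same
      # CLI command you can run locally before ever pushing.
      - run: ./tools/mycli build --verify
```

For the “run workflows locally with Docker” part, tools like nektos/act do exactly that, executing your workflows in Docker containers on your own machine.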


Not a single one of the robot vacuums that I’ve bought in the last 2 years seems to be able to function without internet access.
It’s asinine.
Also, they break down so freaking fast it’s not even funny. Even worse when the part that’s broken is non-replaceable and it’s like a $3 part.
Development time and user support?
These are two pretty obvious reasons. It takes time, and time is a limited resource. Therefore, time should be spent on solving impactful problems. Lemmy account login is extremely low impact; it’s not a bad thing, it’s just not something that improves Immich for a large portion of its user base.
Another thing is user support. The many instances are self-hosted for the most part, they will go offline, and in some cases they will go away forever. That means users asking for support for this login type, and asking for additional features to make up for this baked-in instability.
Essentially, it’s low-impact work that may drive a higher volume of support effort.
It’s the same reason some niche projects stop supporting Linux. Low user volume and disproportionately high “neediness” of those users.


Does it support multi-tenancy?
For instance, being a backup and media manager solution for multiple people in my family hosted on one server.
The same with a few friends that want to get out from under Google’s thumb.
I mean, yeah, probably all of these things.
But I wasn’t faculty. I was a student.
The dorm itself did not have internet and had no plans of running ethernet or providing internet to students.
I got internet through a wireless access point for a WISP, positioned very carefully in a window, and distributed that to rooms near me.
It’s old magic too, and in a pinch it works reasonably well.
We used this as a networking option in an old dorm I was in, which didn’t have ethernet and where the concrete walls between rooms made Wi-Fi unusable.
Blocks of rooms were not separated onto different circuits, which made this possible.


Not exactly ideal archival software…
It doesn’t store files in a human-readable way, and it requires a separate DB and application to interpret your stored data, without giving you control over how it stores that data.


There’s a big difference between desktop environment needs and headless server needs.
Anything with user interaction will require an enormous number of additional services, which consumes resources.
For example, I expect simple headless software to run in 256–512 MB of RAM.


Are you really so naive that you believe that a VPN subscription is more difficult or a higher bar than actually getting up and moving?
Potentially meaning you need to find new jobs, new friends, new support structures, etc.


Samesies
These are all holes in the Swiss cheese model.
Just because you and I cannot immediately think of ways to exploit these vulnerabilities doesn’t mean they don’t exist or are not already in use (including other endpoints or vulnerabilities not listed).
This is one of the biggest mindset gaps in technology, and it tends to result in an internet full of exploitable services and devices, which are more often than not used as proxies for crime or traffic rather than being directly exploited.
Meaning that unless you have incredibly robust network traffic analysis, you won’t notice a thing.
There are so many Sonarr and similar instances out there with minor vulnerabilities being exploited in the wild because of the same “Well, what can someone do with these vulnerabilities anyway?” mindset. Turns out all it takes is a common deployment misconfiguration in several seedbox providers to turn one into an RCE, which wouldn’t have been possible if the vulnerability had been patched.
Which is just the holes in the Swiss cheese model lining up. Something as simple as allowing an admin user access to their own password when they are logged in enables an entirely separate class of attacks. It gets excused with “If they’re already logged in, they know the password.” Well, not if there’s another vulnerability in authentication…
See how that works?
Please see: https://github.com/jellyfin/jellyfin/issues/5415
Someone doesn’t necessarily have to brute-force a login if they know about pre-existing vulnerabilities that may be exploited in unexpected ways.
Fail2ban isn’t going to help you when Jellyfin has vulnerable endpoints that need no authentication at all.
Jellyfin has a whole host of unresolved and unmitigated security vulnerabilities that make exposing it to the internet a pretty poor choice.
Yeah, it should balloon out to 15 TB or more, I think.