Well yeah… those are the only ones that matter.
Nope. I don’t talk about myself like that.
Well yeah… those are the only ones that matter.
Correct. It can’t go anywhere because you won’t acknowledge that a dockerfile is literally just a script. Arguing that one type of script is somehow different from any other type of script is silly from the get-go, so I’m not sure what point you’re trying to prove. Also, on “one-off”: I literally linked you to a whole repository of those supposedly “one-off” LXC containers, and you still say dumb shit like this…
You can version LXC containers.
Every instruction (https://docs.docker.com/build/concepts/dockerfile/) is essentially just a thin wrapper around something you’d run in a Linux shell anyway. So I’m really not understanding why you think there’s any difference here. It’s a dumbed-down bash for one specific purpose.
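To make that concrete, here’s roughly how the common dockerfile instructions map onto plain shell (the package and paths are made-up examples, not from any real image):

```
# Dockerfile: FROM alpine                        -> start from an Alpine rootfs (what an LXC template gives you too)
# Dockerfile: RUN apk add --no-cache nginx       -> just a shell command run inside that rootfs
apk add --no-cache nginx
# Dockerfile: COPY app/ /srv/app/                -> just a file copy
cp -r app/ /srv/app/
# Dockerfile: ENV LISTEN_PORT=8080               -> just an environment variable
export LISTEN_PORT=8080
# Dockerfile: CMD ["nginx", "-g", "daemon off;"] -> just the command you start at the end
nginx -g 'daemon off;'
```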
I’m starting to think you just don’t know anything about docker. Or LXCs for that matter.
If so, that’s pathetic and weird.
Pathetic and weird is complaining about downvotes when they don’t even tally up anywhere. So not only were they meaningless to begin with, they’re not even as useful as they are on Reddit.
I did downvote, not because you disagreed with me, but because
The issue with LXC is that it doesn’t set the software up for you.
is factually wrong in this context. You can absolutely distribute software in an LXC. I even pointed you directly at one such repository of hundreds of images that do exactly that. And they’re repeatable and troubleshoot-able all the same. The script that a dev would publish would be doing literally the same exact thing as a dockerfile.
A dockerfile is just a glorified script. Treating it as if it’s something different is intellectually dishonest. Anything in a Docker container can be edited/modified the same as in an LXC. docker exec -it <container> /bin/bash
puts a user in the same position as being in an LXC container. Once again: aside from some additional networking stuff, Docker was originally built on top of LXC and is more or less functionally the same. Even in their own literature they only claim to have enhanced LXC by adding management on top of it… (https://www.docker.com/blog/lxc-vs-docker/) Except Proxmox can manage an LXC just fine… LXD as well.
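For the “same position” point, getting an interactive shell is the same one-liner either way (container names/IDs here are placeholders):

```
# Docker
docker exec -it mycontainer /bin/bash
# LXC on Proxmox
pct enter 101
# LXC via LXD/Incus
lxc exec mycontainer -- /bin/bash
# plain LXC tooling
lxc-attach -n mycontainer
```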
As far as CI/CD stuff… It works on LXC containers as well… Here’s an example from 3 years ago that I found literally in 10 seconds searching for LXC ci/cd https://gitlab.com/oronomo/docker-distrobuilder.
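The guts of a pipeline like that boil down to a couple of distrobuilder calls; this is just a sketch (alpine.yaml stands in for whatever image definition you feed it, and the linked repo presumably wraps this in a container image for CI runners):

```
# one way to install distrobuilder on the runner
snap install distrobuilder --classic
# build an LXC image from a YAML definition -> spits out meta.tar.xz + rootfs.tar.xz
distrobuilder build-lxc alpine.yaml
# or build an image that LXD/Incus can import
distrobuilder build-lxd alpine.yaml
```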
Also, you can even take a Docker image or any other OCI-compliant image and import it directly into a native LXC. https://www.buzzwrd.me/index.php/2021/03/10/creating-lxc-containers-from-docker-and-oci-images/ (see the “Create LXC containers using docker images” section).
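On a box with the lxc “oci” template plus skopeo and umoci installed, that looks something like this (the image name is just an example):

```
# pull an OCI/Docker image straight into a native LXC container
lxc-create -n myalpine -t oci -- --url docker://docker.io/library/alpine:latest
lxc-start -n myalpine
lxc-attach -n myalpine
```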
like pretty much the entire development community does?
This is also a bullshit appeal/fallacy. The VAST majority of the development community doesn’t use ANY form of containerization. It’s only the subset working on cloud platforms that’s now pushed into it… It’s primarily your exposure to self-hosted communities that makes you believe this, but it’s far (really far) from true. Most developers I work with professionally have no idea what docker is, other than maybe having heard of it somewhere. It’s people like me who take their shit, publish it into a container, and show it to them; that’s how they come to understand and learn more about it. And even in that environment, production tends not to be on Docker at all (usually Kubernetes, OpenShift, Rancher, or other platforms that don’t use the Docker runtime), but that choice is solely up to the container publisher.
I didn’t like docker for the longest time
Good for you? I see docker as a useful tool for some specific stuff, but there are very few cases, if any, where I would take Docker over an LXC setup, even in production. I don’t hate or love docker (or LXC for that matter). However… I find I get better performance, lower overhead, and better maintainability with LXC, so that’s what I use. I don’t delude myself that LXCs are somehow not containers… or that Docker does anything fundamentally different from any other container platform.
They did… That’s why there are timestamps in the description.
Docker doesn’t set up anything for you either without a dockerfile (which is literally just a list of commands to set up the docker container).
There’s no reason a script can’t be used in the exact same way for an LXC container. To that point, there’s already a repo of stuff that does exactly that, which I’ve linked above.
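As in, the entire “dockerfile” for an LXC app can just be a short script. A sketch of that on Proxmox (the CT ID, template filename, storage name, and package are all placeholders):

```
#!/bin/sh
# the "FROM" step: create a container from a base OS template
pveam update
pveam download local alpine-3.19-default_20240207_amd64.tar.xz
pct create 101 local:vztmpl/alpine-3.19-default_20240207_amd64.tar.xz \
  --hostname myapp --storage local-lvm --unprivileged 1
pct start 101
# the "RUN" steps: commands executed inside the container
pct exec 101 -- apk add --no-cache nginx
# the "COPY" step: push config files into the container
pct push 101 ./nginx.conf /etc/nginx/nginx.conf
```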
Edit:
A docker is a distribution method for the software, not the operating system
And yet most docker containers’ first line is something like “FROM alpine”… much the same as an LXC would start. Last I checked, Alpine is an OS…
Keep in mind that docker used to be based on LXC… and they fulfill virtually the same niche, apart from Docker having more obfuscated shit for networking (specifically inter-container networking).
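Same starting point either way; only the tooling differs (the LXD/Incus remote and image alias here are just an example, depending on what remotes your setup has):

```
# Docker: first line of the dockerfile
#   FROM alpine:3.19
# LXD/Incus equivalent of that starting point
lxc launch images:alpine/3.19 mycontainer
```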
It’s funny, because I view LXCs the same way. They’re more practical than both VMs and Docker containers to me. Outside of community-scripts, though, it seems most people don’t like LXCs nearly as much as I do…
those tremendous amounts are not that big because with PeerTube you share the bandwidth with other instances
I have 8 Gbps, and I’m perfectly willing to federate with Futo’s instance and take some (if not all) of the load for this video. But I don’t think they peer with that many other people. At ~15 Mbps per stream, that’s roughly 530 people watching simultaneously.
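Napkin math on that, assuming ~15 Mbps per viewer:

```
# 8 Gbps uplink divided by ~15 Mbps per stream
echo $((8000 / 15))   # ~533 simultaneous viewers, ignoring overhead
```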
I agree I’d like to see it on peertube for sure.
You underestimate how much knowledge it actually takes to do self-hosting stuff, and how much it takes to truly explain things. This material is clearly aimed at people with very little prerequisite knowledge, and it’s only with prerequisite knowledge that you can skip a lot of content. This is the exact same complaint I got when I was teaching certain 100-level courses at a major university… 135 hours of coursework just to get students to baseline competence on a number of introductory IT topics. 14 hours for basic self-hosting knowledge is likely not sufficient either (which is probably why they deliberately narrow the options and go straight to one specific piece of software)… It takes time to explain everything that goes into what you need to know for self-hosting.
The fact that you’re already aware of what Docker is means that this video and wiki were already a “miss” for you.
Most of the hardware itself was free (business decommissions) or auction wins (4.5TB of RAM out of a $600 auction; selling some of the servers paid off the whole auction). So quite cheap in that respect. And it’s not strictly private use: lots of functions in there keep my business going / make it easier to track taxes / auto-bill clients / handle email / etc. Though typically only operationally, not as an income generator itself (e.g., not hosting other companies’ stuff so much).
And yeah, if energy was 10x more expensive (I think it was you or someone else that said $0.60 per kWh?) I’d probably rethink my situation/stance a bit.
But 3kW service is awfully low. Standard around here is 200-amp service to a house (at 120V, so 24kVA), or 100-amp (12kVA) if gas utilities handle the range/heating, and A/C is pretty much mandatory where I live. My PV setup is rated 15.9kW, though it tops out at 11kW on the best days. I can’t imagine living off of 3kW; my desktop uses 1/10th of that, and my idle usage minus the servers is ~2kW. I can see why you’re squeezing watts. Some googling shows Italy is a country that does this… You’re probably in a similar situation, where A/C isn’t really common and heating and cooking aren’t electric. Most of the year I’m not even allowed to make a fire, so I’m forced to rely on electricity.
But no SUV here… 1 hybrid sedan for this family of 4. Gas costs too much and we drive too little.
And here I am with a 5-server cluster, 2x custom servers running OPNsense for redundancy (an 8Gbps internet connection needs real horsepower for IDS/firewall/routing), and a 36-bay TrueNAS storage node… that’s getting upgraded to a 72-bay version for more drives (34 additional drives ready for install RIGHT NOW)… I see your 50 and 38 W… and raise you
This
2200-ish watts? Oh… and cooling the servers to keep them at about 75 degrees intake temp.
So really closer to 3400 watts.
Taking your number of 6 watts saved per drive, that would only save me 180W currently and 432W after I install the additional 32 drives next week. I’d still be in 3kW territory.
…
I also have solar…
I generate (orange) enough to export (purple) a little during the day… but that’s about it… Battery (light green) usage just kills peak hours.
The electrical usage costs me about $100-110 a month after solar ($0.06 per kWh), probably closer to $150 if solar weren’t eating up a bunch of it. That’s less than subscriptions to all the shit I’m hosting for myself, by a long, long shot, and that’s before counting the family and other users.
Nextcloud - 5TB, google drive is $10/mo for 2TB
MSTY - AI stuff, another $10/mo subscription if you want google gemini. $20 for ChatGPT.
Minecraft - private, $5 a month minimum. Probably closer to $10 for reasonable specs to do anything with the kiddos.
Email - 1TB across all users right now. ~$5/mo minimum for just me, though I’m over the size limits of many platforms since I have everything going back to 2006 or so, so probably closer to $8-10 for just me.
Private search aggregator - apparently a paid service now with the likes of kagi. $10
Home Assistant - $6.50/mo through Nabu Casa.
$46-66 on this stuff alone…
Frigate… 8 cameras with Coral TPUs for inferencing. God knows what that would cost hosted. I keep 30 days of 24/7 footage, 6 months of detected events, and 1 year of snapshots; I’m at 50TB of usage there. This probably could/should be cut down significantly, at least halved, but even 25TB is a fuck-ton of money per month on any VPS/hosted system. Ring’s plan, at $20/mo, covers about half of what I’m doing. No idea what other services would end up costing. Not even sure how Ring and others make money at that price when storage is otherwise expensive.
Paperless-ngx, lubelog, grocy, and gramps for organization/documentation would need a VPS… or migrating to a non-hosted solution (so they can’t really be shared easily, short of a Google Docs sort of thing).
Self-hosted things like lemmy, mastodon, matrix, peertube, etc… VPS costs would be something substantial as well. And business operation stuff like my invoices, jump hosts, secure vms, etc…
And lastly, the cost of owning my own data… where no company can spy on me. Or monetize me in ads. Invidious, my own dns with custom rules for me vs the kids, etc…etc…etc… Priceless.
Then multiply the parts of the list for other users on my system (wife, both kids, father, etc…)
And of course the massive porn collection… Gotta have that at a moment’s notice.
No, that’s an entire external service + a script.
Requires running https://github.com/Cloudbox/autoscan and that custom script.
At that point I might as well tell plex to rescan the library every x hours itself.
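Which is one cron line against Plex’s own library-refresh URL (the token, host, and interval are placeholders for whatever fits your setup):

```
# crontab: ask Plex to rescan all libraries every 6 hours
PLEX_TOKEN=put-your-token-here
0 */6 * * * curl -s "http://127.0.0.1:32400/library/sections/all/refresh?X-Plex-Token=$PLEX_TOKEN" >/dev/null
```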
Edit: I forgot to add this even though I meant to
Autoscan, A-Train and Bernard are no longer actively maintained.
And that GitHub repo… it’s no longer maintained.
It does file-name management for other players, such as plex, emby, jellyfin, or kodi. That’s about all I see that’s special here.
Edit: seems to also do some metadata magic for plex at the very least to make it somewhat usable.
Well… plex support in the sense that it can chuck files into a structure that plex understands. It doesn’t seem to notify plex to rescan libraries…
Same on the official lemmy web UI.
Does no one care about power consumption?
It takes several SSDs to match the capacity of a single HDD.
I run 62 16TB HDDs. To match that capacity in SSDs I’d need 2-4x the bays, and I don’t know of any cheap systems that can hold ~250 SSD bays.
So even if an SSD only draws 1-3W all day… at 2-4x the drive count you’re already back to roughly what the HDDs draw anyway. You’re not going to hit any ROI metric here.
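Rough numbers, assuming hypothetical 8TB SSDs at ~3W apiece and ~6W per HDD (round figures for illustration, not my actual drive specs):

```
echo $((62 * 16))   # 992 TB of raw HDD capacity
echo $((992 / 8))   # 124 SSD bays needed at 8 TB apiece (2x the drive count)
echo $((62 * 6))    # ~372 W for the HDDs at ~6 W each
echo $((124 * 3))   # ~372 W for the SSDs at ~3 W each -- basically a wash
```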
You said you’re using OPNsense for routing… Just keep it up to date and you’ll be fine.
If you’re worried about your AP, I think you can set Omada APs to restart nightly… though I could be misremembering.
Every network manufacturer has had some CVE for something.
Use anything… Mailcow or otherwise. Just don’t open ports on your firewall/router that let the outside world connect back to you.
No, there isn’t anymore. yt-dlp, which all those syncing tools rely on, is basically fucked at this point. YouTube has made it fucking impossible to grab content off their platform and it’s really damn annoying. Even from my home IP address, I’ve earned what seems to be a permanent ban from YouTube.
Every video shows either this…
Or I log in, and it only shows me the first 60 seconds of content before it buffer-loops forever.
But I wouldn’t want to sync the content from youtube anyway… Youtube compresses the shit out of everything.
I get your point. It’s not hard for them to post the same video content to another platform; many just don’t see the value in it. I agree that FUTO, at least, should see the point of putting it up… Hell, I’m even willing to share the bandwidth load (with my own instance that’s currently up and running). It is what it is.