I currently run Nextcloud inside a Debian 11 LXC container on Proxmox, together with Apache, MariaDB, and PHP. I followed this guide. Once Apache and PHP were running, the rest of the process was straightforward.
I take my shitposts very seriously.
You mean a service that translates between ActivityPub and another API? I’m pretty sure that’s just a bridge.
As for the challenges:
Take a look at this list: https://networkupstools.org/stable-hcl.html
I use an older APC Back-UPS 500 to power my homelab and all network devices. So far it’s saved me from 3 power outages, and can last about 30 minutes with a 50W power draw. It doesn’t have data connections of its own (newer devices do), so I had to improvise with an ESP32 board that reports if it detects a voltage on the beeper, plus some cron jobs on Proxmox.
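On the Proxmox side, the cron part can be as simple as a script that polls the board. This is only a rough sketch, not my exact setup: it assumes the ESP32 exposes a plain HTTP endpoint that returns 1 while the beeper voltage is present (the hostname, path, and response format here are made up).

```sh
#!/bin/sh
# Hypothetical poller, run from cron every minute.
# Assumes the ESP32 answers "1" at /alarm while the beeper voltage is present.
STATE=$(curl -sf http://esp32.lan/alarm || echo 0)
if [ "$STATE" = "1" ]; then
    logger "UPS alarm detected, shutting the node down"
    /sbin/shutdown -h +5 "UPS on battery, shutting down in 5 minutes"
fi
```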
I simply use Nextcloud to sync the vault directory. It has clients for both desktop and mobile and works perfectly fine. I use it to sync basically everything between my work, home, laptop, and mobile.
The only drawback is that I don’t know whether Obsidian automatically reloads a file when it changes on disk; if it doesn’t, and you leave the file open in the editor, you might accidentally overwrite the newer version with stale data.
“Archiving legally purchased content as an insurance against corporate-sanctioned theft”?
This. I’ve had issues at work while imaging classroom computers where some would finish in ~30 minutes and a few would need hours. All of the computers used Cat6 cables. This being a classroom, and students being absolute wankbags, they kept yanking the computers and kicking the cables, so the wires came loose from the plugs. I later used ethtool to debug the slow computers – the switch would only allow 10baseT link modes.
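If you want to check for the same problem, it’s roughly this (the interface name is just an example); a healthy gigabit link reports 1000Mb/s, while the damaged ones only negotiated 10baseT:

```sh
# Show negotiated speed, duplex, and the advertised/supported link modes
ethtool enp3s0
# Or just the interesting bits
ethtool enp3s0 | grep -E 'Speed|Duplex'
```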
> I just simply set up a script to export my Trilium notes
>
> edit the notes with an external editor, and then you can just re-import the note

Those two lines right there.
I value interoperability between software. Using a container format to store plaintext files and metadata introduces an XKCD 927 situation where it’s just another reinvention of the wheel that requires additional software support or a whole other workflow for no real benefit. Why is it necessary, for example, to store plaintext data and the related hierarchical structure in a container format when the same feature is already present in the filesystem with files and directories? It adds unnecessary complexity, roadblocks, and points of failure.
I’m using QOwnNotes at the moment. If I want to edit a note, for example, using neovim through SSH, all I need to do is navigate to the markdown file and open it. No scripts, no export/import. Only text files, and that is all it ever needs to be.
They all offer more or less the same network services with different UIs.
OpenWRT is specifically designed to work as a lightweight system running on consumer-grade routers. If you want to go this route, you’ll have to check the project’s Table of Hardware to see whether your device is supported.
OPNsense and pfSense are FreeBSD-based firewall/router distributions that you can run on general-purpose computers or in VMs acting as network gateways. All three are free/gratis, but you have to make an account and go through the store page to download pfSense.
I personally use OPNsense in a VM.
If you really, really, really don’t want to buy a keyboard and monitor, you can buy a USB KVM console, but it’ll likely cost more.
I’m in the same position, and it feels so damn powerful. I’ve convinced an entire university to ditch Ubuntu in favor of Linux Mint, and I’m also advocating for replacing our aging VMware servers (with a soon-to-expire license) with Proxmox.
Damn, I had no idea netcat had a hardware implementation
I haven’t tried it, but you might be able to set up a Samba share that points to /var/www/nextcloud-data/USER/files; just make sure it uses the www-data user.
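Untested, but a share definition along these lines is roughly what I’d try (USER is the Nextcloud username from the path above, and youruser is a placeholder for whatever account should be allowed to connect):

```ini
[nextcloud-files]
    # the Nextcloud data directory mentioned above
    path = /var/www/nextcloud-data/USER/files
    read only = no
    # have Samba read and write files as the web server user
    force user = www-data
    force group = www-data
    # limit who may connect (replace with your own account)
    valid users = youruser
```

Keep in mind that Nextcloud won’t notice files added this way until you rescan them, e.g. with occ files:scan.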
Might be “exploitation” instead of “abuse”.
At some point, you have to compromise.
Debian, all the way. I’ve got both Ubuntu (set up by my predecessor) and Debian servers at work, and as far as maintenance and administration go, they’re more or less identical. The one thing that sometimes catches me off guard is that sudo is not installed by default, so you have to su - into a root session.
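If you do want sudo, it’s a one-time fix on a fresh install (replace youruser with your actual account):

```sh
su -                        # become root; Debian ships without sudo
apt install sudo
usermod -aG sudo youruser   # add your account to the sudo group
exit
# then log out and back in so the new group membership takes effect
```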
Wireguard
You mean Wireshark? It’s possible. You might even capture the DHCP exchange.
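If you go that route, a capture filter on the DHCP ports keeps the noise down; something like this with tshark (the interface name is an example), or the same filter string in the Wireshark GUI:

```sh
# Watch for the DHCP exchange (DISCOVER/OFFER/REQUEST/ACK) on the wired interface
tshark -i eth0 -f "udp port 67 or udp port 68"
```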
The two best programs for the job are nmap and arp-scan.
Nmap is like ping on steroids. You can use it for network discovery, port scanning, fingerprinting, and basic pentesting. As long as the pi can talk to the computer, nmap will sniff it out.
ARP-scan works on the data link layer to identify hosts using ARP. It should be able to return the IP address of every ethernet device on the same layer-2 segment, even if they’ve ended up in a different IP subnet. It took me a little over two minutes to scan a /16 range with one retry and a 0.1 second timeout.
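As a rough example of both approaches (the interface and address ranges are placeholders, adjust them to your network):

```sh
# nmap ping scan: discover hosts without port scanning
sudo nmap -sn 192.168.1.0/24

# arp-scan the local layer-2 segment, regardless of IP subnetting
sudo arp-scan --interface=eth0 --localnet

# the /16 sweep mentioned above: one retry, 100 ms timeout
sudo arp-scan --interface=eth0 --retry=1 --timeout=100 172.16.0.0/16
```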
If you are really concerned about the pi’s address, you should run a local DHCP server on the laptop. dnsmasq works on Linux and Mac, but I have no idea what to use on Windows (other than a VM bridged to the ethernet interface).
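A minimal DHCP-only dnsmasq config looks something like this (the interface name and address range are just examples; give the laptop’s interface a static address in the same range first):

```
# /etc/dnsmasq.d/pi-dhcp.conf
# serve DHCP only on the laptop's wired interface
interface=eth0
bind-interfaces
# disable the DNS server entirely, DHCP only
port=0
# hand out addresses from this range with a 12-hour lease
dhcp-range=10.0.0.50,10.0.0.150,12h
```

The pi’s lease then usually shows up in /var/lib/misc/dnsmasq.leases (the exact location varies by distro).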
What does oVirt offer that Proxmox doesn’t? I’m asking because I want to move an ESXi server to another hypervisor. I’m 90% sure it’ll be Proxmox, but I’d like to know my options.
Not true. It was most likely a spam filter. Images and longer messages work fine.