I’m reading this scratching my head going “If your unit tests need a database they ain’t a unit test”.
It’s the multiple volumes that are throwing it.
You want to mount the drive as a single volume, something like /media/HDD1:/media, and configure Radarr to use /media/movies and /media/downloads as its storage locations.
Hardlinks only work within the same filesystem. Technically both paths are on one, but the environment inside the container has no way of knowing that.
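Rough compose sketch of what I mean (the host path and image are examples, adjust to taste):

```yaml
# One bind mount covering both paths, so hardlinks work inside the container.
services:
  radarr:
    image: lscr.io/linuxserver/radarr
    volumes:
      - /media/HDD1:/media   # movies AND downloads both live under this single mount

# Then inside Radarr: root folder = /media/movies,
# download client path = /media/downloads.
```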
Easily doable in docker using the network_mode: "service:VPN_CONTAINER"
configuration (assuming your VPN is running as a container)
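Something like this, sketched with gluetun as the VPN container (names and ports are examples):

```yaml
# A container sharing a VPN container's network namespace.
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    ports:
      - "7878:7878"   # ports must be published on the VPN container, not the app
  radarr:
    image: lscr.io/linuxserver/radarr
    network_mode: "service:gluetun"   # all of radarr's traffic goes via gluetun
    depends_on:
      - gluetun
```

Note that any port publishing has to happen on the VPN container, since the app container has no network stack of its own in this mode.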
It’s unfortunate that (at least on the Bluesky side) an attempt at following a person doesn’t result in them getting a DM asking if that’s OK.
Which means following a person on Bluesky is not possible unless they’ve already opted in.
All I want to do is follow a couple of authors or content creators but none of them know what bridgy.fed is :(
I’ve not used dockge so it may be great but at least for this case portainer puts all the stack (docker-compose) files on disk. It’s very easy to grab them if the app is unavailable.
I use a single Portainer service to manage 5 servers, 3 local and 2 VPS. I didn’t have to relearn anything beyond my management tool of choice (compose, swarm, k8s etc)
“…prohibits repair stores from repairing components on the mainboard. Instead, the entire component must be replaced…”
A flagrant disregard for the costs of e-waste on the environment. What a surprise.
Privately operated ICBMs. I can’t see how that’ll fly but I look forward to finding out.
Documentation people don’t read
Too bad people don’t read that advice
Sure, I get it, this stuff should be accessible for all. Easy to use with sane defaults and all that. But at the end of the day anyone wanting to use this stuff is exposing potential/actual vulnerabilities to the internet (via the OS, the software stack, the configuration, … ad nauseam), and the management and ultimate responsibility for that falls on their shoulders.
If they’re not doing the absolute minimum of R’ingTFM for something as complex as Docker then what else has been missed?
People expect that, like most other services, Docker binds to ports/addresses behind the firewall.
Unless you tell it otherwise that’s exactly what it does. If you don’t bind ports good luck accessing your NAT’d 172.17.0.x:3001 service from the internet. Podman has the exact same functionality.
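And if you only need local access, you can bind explicitly to loopback so Docker never punches the port through to the outside (service name and image here are made up):

```yaml
# Bind the published port to 127.0.0.1 only — not reachable from other hosts.
services:
  someapp:
    image: example/someapp
    ports:
      - "127.0.0.1:3001:3001"
```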
But… You literally have ports rules in there. Rules that expose ports.
You don’t get to grumble that docker is doing something when you’re telling it to do it
Docker’s manipulation of nftables is pretty well defined in their documentation. If you dig deep, everything is tagged and NAT’d through to the Docker internal networks.
As to the usage of the docker socket that is widely advised against unless you really know what you’re doing.
So to be clear, you want traffic coming out of your VPS to have a source address that is your home IP?
No that’s not how I read it at all. He wants his VPS to act as a NAT router for email that routes traffic through a wireguard tunnel to the mail server on his home network. His mail server would act as if it was port forwarded using his home router, only it won’t be his home IP, it’ll be the VPS’s
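On the VPS side that’s roughly a DNAT + masquerade setup; a sketch assuming eth0 is the public interface, wg0 the tunnel, and 10.0.0.2 the home mail server’s WireGuard address (all placeholders):

```shell
# Forward inbound SMTP on the VPS's public IP down the tunnel.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 25 -j DNAT --to-destination 10.0.0.2
# Rewrite the source so replies route back through the tunnel.
iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
# Enable forwarding between the interfaces.
sysctl -w net.ipv4.ip_forward=1
```

One caveat with masquerading: the home mail server sees connections coming from the tunnel address, not the original client IP, which can matter for spam filtering.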
Flash drive hidden under the carpet and connected via a USB extension, holding the decryption keys - threat model is a robber making off with the hard drives and gear, where the data just needs to be useless or inaccessible to others.
This is a pretty clever solution. Most thieves won’t follow a cable that for all intents and purposes looks like a network cable, especially if it disappears into a wall plate or something.
If you’ve got a good network path NFS mounts work great. Don’t forget to also back up your compose files. Then bringing a machine back up is just a case of running them.
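As a sketch, assuming a NAS exporting /export/appdata and stacks kept one-per-directory under /srv/stacks (paths are examples):

```shell
# /etc/fstab entry for the NFS mount:
#   nas:/export/appdata  /mnt/appdata  nfs  defaults,_netdev  0  0

# Restoring a machine is then just re-running every stack:
for d in /srv/stacks/*/; do
  docker compose --project-directory "$d" up -d
done
```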
Reads nice but your docs are 404’ing so I can’t investigate much :D
EDIT. Found it. You’ve got a ‘.com’ instead of a ‘.io’.
Mastodon doesn’t just use storage for local image uploads. It pulls, thumbnails and saves images from any incoming posts, including the thumbnails you might see on website links (pulled from the opengraph data most websites implement)
It’s possible to set a pretty short timeout for that data though.
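For example, pruning with tootctl (run as the mastodon user from the app directory; the retention days are just examples):

```shell
# Remove cached remote media older than a week.
RAILS_ENV=production bin/tootctl media remove --days=7
# Remove cached link-preview cards older than two weeks.
RAILS_ENV=production bin/tootctl preview_cards remove --days=14
```

Stick those in a cron job and the media cache stays a reasonable size.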
I looked into Proxmox briefly but then figured that since 99% of my workload was going to be docker containers and I’d need just a single VM for them it made no sense to run it.
So that’s what I did. Ubuntu + Portainer and a shed load of stacks.
Hugo can be as simple as installing it, configuring a site with some yaml that points at a readily available theme, and writing your markdown content.
It gets admittedly more complex if you’re wanting to write your own theme though.
But I think this realistically applies to most all static site generators.
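The simple path is basically this (theme name/URL are placeholders; `hugo new content` is the newer syntax, older versions use `hugo new posts/...`):

```shell
hugo new site mysite && cd mysite
git init
git submodule add https://github.com/<user>/<theme>.git themes/<theme>
echo 'theme: <theme>' >> hugo.yaml          # the "some yaml" bit
hugo new content posts/hello-world.md       # write your markdown here
hugo server                                 # preview at localhost:1313
```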