Browse r/selfhosted on any given day and you will find two kinds of posts: developers celebrating how they have taken back control of their data, and developers who spent their entire weekend debugging why their Nextcloud instance is throwing 502 errors. The self-hosting community is full of genuine wins and genuine cautionary tales, and the difference between the two almost always comes down to one thing: knowing which services actually reward the effort.
This is not a debate about whether self-hosting is inherently good or bad. It is about building a rational self-hosting strategy that recognises where your time, risk tolerance, and technical overhead actually sit. As we explored in our piece on VPS vs self-hosting, the real question is not "cloud or self-hosted?" but "what belongs where?" Getting that boundary right is the most valuable infrastructure decision most developers will make in 2026.
Not all self-hosting is equal. Some services are lean, well-documented, and largely run themselves once configured. Others demand constant attention: dependency updates, certificate renewals, storage management, and performance tuning. The r/selfhosted community has collectively stress-tested most of the popular options, and a clear picture has emerged of what crosses the threshold.
Before pulling any service off a managed platform, ask yourself the following:

- How privacy-sensitive is the data the service holds?
- How complex are updates, and will you realistically keep up with them?
- Is the managed alternative genuinely overpriced for what it delivers?
- What happens, and to whom, if the service goes down?

Services that score well on privacy sensitivity and low update complexity, and whose managed alternatives are genuinely overpriced, make strong candidates. Services where downtime carries real consequences for real users are a different matter entirely.
Vaultwarden, the lightweight Bitwarden-compatible server, is consistently cited by the r/selfhosted community as the single best self-hosting win. It is a small Rust binary, runs comfortably on a Raspberry Pi or a cheap VPS, requires almost no ongoing maintenance, and gives you complete ownership of your credential vault. Updates are infrequent and clean. The privacy upside is real and immediate.
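To show how little there is to it, a typical Vaultwarden deployment is a single container. The sketch below assumes Docker Compose with a TLS-terminating reverse proxy in front; the domain and volume path are placeholders:

```yaml
# docker-compose.yml — minimal Vaultwarden sketch; domain and paths are placeholders
services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: unless-stopped
    environment:
      # Disable open registration once your own account exists
      SIGNUPS_ALLOWED: "false"
      DOMAIN: "https://vault.example.com"
    volumes:
      - ./vw-data:/data        # the entire vault lives here; back this directory up
    ports:
      - "127.0.0.1:8080:80"    # bind locally; expose only via a TLS reverse proxy
```

The whole state of the service is the `/data` volume, which is what makes backups and migrations so painless.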
For teams managing infrastructure secrets rather than personal passwords, tools like HashiCorp Vault are worth self-hosting if your organisation already has the DevOps maturity to run them properly. If you do not, the managed tier is the right call.
Obsidian syncs locally. Joplin has a clean self-hosted server. Logseq stores to a plain-text folder. These tools are designed with local-first or self-hostable architectures, which means self-hosting is the design intent, not an afterthought. For personal or team knowledge management, the operational overhead is low and the control you gain over your organisation's data is worth it.
Jellyfin, Plex, and Home Assistant are genuinely excellent self-hosted applications. They are designed for home infrastructure, have active communities, and the managed alternatives are either non-existent or severely limited. If your use case is personal media or home automation, self-hosting is the obvious and correct choice. These applications reward the investment and rarely become the operational headaches that production workloads do.
The headline experience of self-hosting is control. The hidden cost is the maintenance tax, and it is one of the most underestimated self-hosting challenges developers encounter in practice.
Every self-hosted service is a background responsibility. Container updates, certificate renewals, backup verification, storage monitoring, and security patches: none of these are difficult individually, but collectively they create a non-trivial operational burden. Ask yourself honestly how consistently you will run the following across your entire stack:
```shell
docker compose pull && docker compose up -d
```
Most self-hosters admit their update cadence slips within a few months. This is less of an issue for personal or hobbyist infrastructure, where the learning process is genuinely part of the value. Running your own stack is one of the best ways to build real operational intuition, as we discuss in our guide on how junior DevOps engineers build hands-on experience. But when uptime matters to other people, that calculus changes completely.
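To make the maintenance tax concrete, here is what a minimal automated routine might look like as cron entries. The schedules, paths, and backup destination are illustrative assumptions, and the tasks that cannot be automated, like reading changelogs and verifying restores, still fall to you:

```
# Illustrative crontab — schedules and paths are placeholders
# Weekly pull-and-restart of all containers (blind updates can still break things)
0 4 * * 1   cd /opt/stack && docker compose pull && docker compose up -d

# Nightly backup of application data, then prune archives older than 30 days
30 2 * * *  tar czf /backups/stack-$(date +\%F).tar.gz /opt/stack/data
45 2 * * *  find /backups -name 'stack-*.tar.gz' -mtime +30 -delete
```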
When you self-host a service exposed to the internet, you own the entire security surface. Misconfigured reverse proxies, unpatched CVEs, weak authentication defaults, and exposed admin panels are common failure modes. The r/selfhosted community regularly sees posts about compromised instances, and the root cause is almost always a service that was set up and then forgotten.
For services handling customer records, financial information, or anything subject to data protection regulation, this responsibility is substantial. Managed platforms earn their keep partly by handling CVE response, compliance patching, and infrastructure hardening on your behalf.
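As one example of the surface you own, even a basic reverse proxy needs deliberate hardening. The fragment below sketches common Nginx measures; the hostname, certificate paths, upstream port, and allowed subnet are all placeholder assumptions:

```nginx
# Sketch of a hardened reverse proxy block — hostname, paths, and subnet are placeholders
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    # Keep the admin panel off the public internet
    location /admin {
        allow 192.168.1.0/24;   # LAN only
        deny all;
        proxy_pass http://127.0.0.1:8080;
    }

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Every one of these lines is a decision a managed platform would otherwise make for you, and getting any of them wrong is exactly how the "set up and then forgotten" compromises happen.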
The honest answer, informed by both community experience and real operational data, is: most production application workloads.
Running your portfolio or a small personal project on a self-hosted VPS is entirely reasonable. Running a production SaaS application with real users on self-managed infrastructure is a fundamentally different proposition. You become responsible for uptime, scaling, DDoS mitigation, SSL management, and incident response, on top of everything else you are building.
This is where the managed vs self-hosted distinction becomes a business decision rather than a technical preference. The developer time spent keeping infrastructure running is time not spent on the actual product. As we have written about in our analysis of why developers are moving away from expensive platforms, the issue is rarely that managed infrastructure is wrong. It is that many developers either overpay for complexity they do not need, or underestimate the real cost of running their own.
PostgreSQL and MySQL are excellent databases. Running them reliably at production scale, with proper backups, failover, and performance tuning, is a specialised discipline. Managed database services handle automated backups, point-in-time recovery, connection pooling, and replication out of the box. For most teams, this is worth the cost.
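To illustrate what "proper backups" actually involves when you run the database yourself, a nightly dump alone is not enough; it has to be verified. The cron fragment below is a sketch with a placeholder database name and paths, and it is roughly the floor of what a managed service runs on your behalf, before replication and failover:

```
# Illustrative crontab for a self-managed PostgreSQL instance — names and paths are placeholders
# Nightly custom-format dump (supports selective, parallel restores)
0 1 * * *   pg_dump --format=custom --file=/backups/appdb-$(date +\%F).dump appdb
# Verify the archive is at least readable
30 1 * * *  pg_restore --list /backups/appdb-$(date +\%F).dump > /dev/null
# A real setup also restore-tests into a scratch database and ships archives off-host
```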
The self-hosted AI space is one of the fastest-moving areas in 2026, but as we covered in detail in self-hosted AI apps in 2026, building is straightforward and deploying reliably is genuinely hard. GPU availability, model serving infrastructure, latency requirements, and memory constraints make production AI workloads a poor fit for casual self-hosting in most cases.
This is where the most important deployment decisions happen. There is a significant gap between "running everything on a VPS I maintain myself" and "paying enterprise prices for AWS or GCP." That gap is where a PaaS like Code Capsules becomes the right choice.
Knowing when to use a PaaS rather than self-hosting comes down to a few clear signals:

- Real users depend on the service staying up, and you are the only one on call.
- Your update and backup cadence has already slipped on the infrastructure you run today.
- Time spent keeping servers alive is measurably displacing time spent on the product.
- Your budget sits between a cheap VPS and enterprise cloud, with no one to manage either full-time.
Code Capsules is built precisely for this position: the middle ground between self-hosted DIY infrastructure and enterprise cloud overhead. Deploy your backend, frontend, and database from a single dashboard connected to your Git repository, without managing servers, configuring reverse proxies, or writing infrastructure-as-code from scratch.
```shell
# Push to your connected repo branch and Code Capsules handles the rest
git add .
git commit -m "feat: add user authentication"
git push origin main
# Deployment triggers automatically, no server config required
```
No Dockerfile required for standard runtimes. No Nginx configuration. No certificate management. Your organisation's engineering time stays focused on building, not maintaining infrastructure.
Use this as a starting point for your own self-hosting strategy:
| Service Type | Self-Host? | Reason |
|---|---|---|
| Password manager (Vaultwarden) | Yes | Low maintenance, high privacy value, minimal downtime risk |
| Personal media server (Jellyfin) | Yes | Designed for self-hosting; no meaningful managed alternative |
| Home automation (Home Assistant) | Yes | Local-first architecture, active community, low stakes |
| Production web application | Use PaaS | Uptime and operational overhead outweigh the cost of a managed platform |
| Production database | Use managed | Backup, failover, and tuning require specialised operations |
| AI inference workloads | Use managed | GPU infrastructure and reliability complexity are prohibitive |
| Internal dev tooling | Either | Depends on team size, existing tooling, and risk appetite |
The self-hosting sweet spot is real, but it is narrower than the community sometimes suggests. Personal tools, media servers, and privacy-sensitive applications where managed alternatives are expensive or inadequate make excellent candidates for self-hosting. Production application workloads, databases at scale, and anything where downtime carries real cost belong on infrastructure that handles operations for you.
The goal is not to self-host everything or to outsource everything. It is to make deliberate deployment decisions based on actual trade-offs rather than ideology or inertia. Recognise what you are good at running, be honest about what will drain your time, and put each service in its appropriate home.
If your production workloads belong on managed infrastructure but you would rather not pay AWS prices for AWS complexity, Code Capsules is built for exactly that position. Deploy your apps, backends, and databases simply, without managing servers, and keep your engineering effort where it actually matters.
Get started on Code Capsules and deploy your first app in minutes. No infrastructure configuration required.