A post hit r/devops this week that made me stop scrolling. A software engineer — full-stack, startup, 30,000 daily active users — described a trajectory that's becoming painfully common.
They started on Railway. It was brilliant. Push code, it deploys. No YAML files, no SSH keys, no 3 AM pager alerts. Then the app grew. Railway's pricing stopped making sense. So they did what thousands of developers do: they migrated to EC2.
Now they're setting up blue-green deployments, configuring GitHub Actions pipelines, researching Prometheus and Grafana, sizing instances, and asking strangers on Reddit whether 2 GB of RAM is enough for their monitoring stack. They went from shipping features to babysitting servers.
This is the PaaS trap, and almost every growing startup falls into it.
Act 1: The Honeymoon. You pick Railway, Render, or Heroku. Deployment is a git push. You're productive. Life is good.
Act 2: The Bill. Your app gets traction. Costs balloon. Railway charges per resource, per minute. Render's free tier vanishes. Heroku — well, Heroku's basically in maintenance mode at this point. You do the maths and realise you're paying three to five times what bare metal would cost.
Act 3: The Migration. You spin up EC2 instances. You learn Docker properly. You write CI/CD pipelines. You configure load balancers, set up SSL certificates, manage security groups, create IAM roles. You become a part-time DevOps engineer whether you wanted to or not.
The Reddit post that inspired this article is textbook Act 3. The developer is clearly sharp — they built blue-green deployments from scratch. But now they're asking about instance sizing for Prometheus, wondering about disk space for Grafana, and trying to figure out observability for a system they built because the old platform was too expensive.
They're not solving customer problems any more. They're solving infrastructure problems.
Let's be honest about what pushes people away from entry-level PaaS platforms. It's not that these services are bad — they're genuinely good at what they do. The problem is that "what they do" has a ceiling.
Railway's pricing is usage-based, which sounds fair until your app actually gets used. A Node.js backend handling 30k daily active users can easily rack up hundreds of dollars a month on Railway. The same workload on a properly sized EC2 instance might cost $50-80.
Render has similar dynamics. Their free tier is generous for hobby projects, but production workloads hit paid tiers fast. And once you need background workers, Redis, multiple services — the bill compounds.
You can't fine-tune kernel parameters on Railway. You can't adjust connection pool limits at the OS level on Render. You can't run a sidecar container for log aggregation on Heroku.
These aren't hypothetical complaints. At 30k DAU, you start needing connection pooling for your database, custom health check endpoints, specific resource limits per container, and probably a caching layer. Managed PaaS platforms give you toggles. You need knobs.
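The toggles-versus-knobs difference is easy to see in code. Here's a minimal, dependency-free sketch of a tunable connection pool in TypeScript — a stand-in for the `max` option on something like node-postgres's `Pool`, or an OS-level limit, neither of which hobby-tier PaaS platforms typically expose. The class and numbers are illustrative, not any particular library's API.

```typescript
// A minimal connection-pool sketch. The point is the knob (`max`), which
// caps concurrent checkouts the way pg.Pool's `max` option does.
// Illustrative only — a real pool also needs idle timeouts and error handling.
class TinyPool {
  private inUse = 0;
  private waiters: Array<() => void> = [];
  private max: number;

  constructor(max: number) {
    this.max = max;
  }

  // Claim a slot, or queue until release() hands one over.
  async acquire(): Promise<void> {
    if (this.inUse < this.max) {
      this.inUse++;
      return;
    }
    await new Promise<void>((resolve) => this.waiters.push(resolve));
    // The releasing caller handed its slot straight to us; inUse is unchanged.
  }

  // Free a slot, or pass it directly to the longest-waiting caller.
  release(): void {
    const next = this.waiters.shift();
    if (next) next();
    else this.inUse--;
  }

  get active(): number {
    return this.inUse;
  }
}
```

The interesting part at 30k DAU isn't the pool itself — it's the number you feed into `max`, which depends on your database's connection ceiling. That's exactly the kind of setting a managed platform decides for you.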
Each platform has its own configuration format, its own CLI, its own deployment model. Moving off Heroku taught the industry this lesson — migration is never just "point it somewhere else."
So you move to EC2 (or DigitalOcean, or Hetzner, or Linode). And you get everything you wanted: full control, lower costs, total flexibility.
You also get everything you didn't want.
When that EC2 instance runs out of disk space at 2 AM, that's your problem. When the security group misconfiguration lets someone port-scan your box, that's your problem. When Ubuntu pushes a kernel update that breaks your Docker socket, guess whose Saturday that ruins.
The Reddit poster mentioned they successfully set up CI/CD and blue-green deployments. That's impressive — but it's also weeks of work that could have gone into product features. And blue-green deployment is just the beginning. What about certificate renewal, OS and security patching, database backups (and restore drills), log aggregation, and alerting?
Each of these is a project in itself. Each one has failure modes. Each one needs monitoring — which brings us back to the Prometheus question.
The original poster asked about running Prometheus and Grafana on a 2-core, 2 GB instance. The honest answer: it'll work for a while, then it won't. Prometheus is a memory hog. At 30k DAU with proper instrumentation, you'll be retaining gigabytes of time-series data. Grafana itself is lightweight, but the dashboards everyone builds tend to run expensive queries.
So now you need a dedicated monitoring instance. Which needs its own monitoring (who watches the watchers?). Which needs its own backups. Which needs its own security hardening. The infrastructure to monitor your infrastructure starts costing more than the infrastructure itself.
Another r/devops thread from this week — "How do you handle AWS cost optimisation?" — catalogued the waste patterns: unattached EBS volumes, ancient snapshots, dev databases running 24/7, NAT gateways in environments that don't need them. The poster had audited 50+ AWS accounts and consistently found 20-30% waste.
This isn't because people are careless. It's because AWS gives you 200+ services and charges for resources the moment they exist, whether you're using them or not. Managing AWS costs is literally a career specialisation.
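For what it's worth, the first two waste patterns from that audit take one command each to find — assuming the AWS CLI with credentials configured; the queries are illustrative and region flags are omitted:

```
# Unattached EBS volumes: status "available" means mounted nowhere, billed anyway
aws ec2 describe-volumes \
  --filters Name=status,Values=available \
  --query 'Volumes[].{ID:VolumeId,Size:Size,Created:CreateTime}'

# Snapshots you own, oldest first — the ancient ones are the usual suspects
aws ec2 describe-snapshots --owner-ids self \
  --query 'sort_by(Snapshots,&StartTime)[].{ID:SnapshotId,Started:StartTime}'
```

That's the easy 20%. The rest — idle NAT gateways, always-on dev databases — takes actual investigation, which is the point.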
Here's the thing: the choice between "simple PaaS that doesn't scale" and "DIY everything on VMs" is a false binary. It feels like those are the only options because the platforms that sit in the middle don't get talked about enough.
Code Capsules is built for exactly the gap that Reddit poster fell into.
It's a PaaS — you push code, it deploys. But it runs on dedicated infrastructure, not shared containers with opaque pricing. You get the git push deployment experience without Railway's per-minute billing. You get Docker support without having to manage Docker hosts. You get CI/CD without writing GitHub Actions YAML.
More specifically, here's what changes for someone in that poster's situation:
Connect your repo. Push code. It builds and deploys. Blue-green deployments aren't something you build — they're something the platform handles. That's weeks of engineering effort you get back immediately.
Need more capacity? Adjust the capsule size. Need a background worker? Add another capsule. Need a database? Provision one from the dashboard. You're not sizing EC2 instances or calculating EBS IOPS.
Logs, metrics, and health checks come out of the box. You don't need a dedicated Prometheus instance eating 2 GB of RAM on a separate server. You don't need to configure Grafana dashboards. The platform shows you what's happening.
Flat pricing based on the resources you provision, not metered per-request or per-minute billing that spikes with usage. For a startup at 30k DAU, predictable infrastructure costs mean you can actually plan a budget — something that's surprisingly difficult with both Railway's usage pricing and AWS's 47 different billing dimensions.
Code Capsules runs standard Docker containers. Your app doesn't get locked into proprietary buildpacks or custom runtimes. If you ever do need to move to bare metal or a hyperscaler, your containers work anywhere.
Not every project needs a middle-ground PaaS. If you're building a side project with zero traffic, Railway's free tier is fine. If you're a company with a dedicated platform engineering team and 500 microservices, you probably want Kubernetes on your own infrastructure.
The sweet spot — where the pain is worst and the solution matters most — is startups and growing products between roughly 1,000 and 100,000 daily users. You've outgrown hobby-tier hosting. You don't have budget or headcount for a DevOps hire. And your developers' time is worth more than the $200/month difference between a PaaS and DIY.
That Reddit poster? They're a software engineer. Their job is building the product that has 30k daily active users. Every hour they spend tuning Prometheus or debugging GitHub Actions is an hour not spent on features, performance, or user experience. That's not a good trade.
If I were replying to that thread, here's what I'd say:
You've already proven you can handle DevOps. Blue-green deployments with zero downtime isn't trivial. You clearly know your way around infrastructure. The question isn't whether you can manage EC2 — it's whether you should.
The Prometheus/Grafana setup will work, then it won't. Start with 2 cores and 4 GB minimum if you want breathing room. Set retention to 15 days. Use node_exporter and your application metrics. But know that you're signing up for ongoing maintenance of a monitoring stack.
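Concretely, that starting point might look like the following — the retention flags are Prometheus's standard storage options, but the size cap and scrape targets here are placeholders to adapt:

```shell
# Cap TSDB retention by age and by disk so it can't eat the instance
prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.retention.time=15d \
  --storage.tsdb.retention.size=6GB

# prometheus.yml scrape targets (hosts/ports are placeholders):
# scrape_configs:
#   - job_name: node
#     static_configs: [{ targets: ["localhost:9100"] }]   # node_exporter
#   - job_name: app
#     static_configs: [{ targets: ["localhost:3000"] }]   # your /metrics endpoint
```

The size cap matters as much as the time cap: a cardinality spike from a bad label can blow through 15 days' worth of headroom in an afternoon.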
Consider whether the cost savings actually exist. Yes, EC2 is cheaper per-compute than Railway. But add your time: the hours setting up CI/CD, configuring monitoring, hardening security, managing updates. At a startup engineer's hourly rate, the "savings" often evaporate within a month or two.
Look at platforms that grow with you. You don't have to choose between Railway's simplicity and EC2's complexity. Platforms like Code Capsules give you production-grade infrastructure with the deployment experience you liked about Railway — minus the per-minute billing that pushed you away.
The dev community has this weird assumption that "real" engineering means managing your own servers. That using a PaaS is somehow less legitimate than SSHing into a box and configuring nginx by hand.
That's nonsense.
The best engineers optimise for leverage. They build systems that let them move fast. They automate away toil. And when a managed platform can handle their deployment pipeline, monitoring, and scaling — they use it, because that frees them to work on the thing that actually matters: the product.
The Reddit poster's app has 30,000 people using it every day. That's an achievement. The last thing they should be doing is sizing Prometheus instances.
Ship the product. Let the platform handle the plumbing.