What I Learned Installing Umami: Database Decisions and Docker
Seven lessons from installing Umami analytics: database compatibility, EOL software risks, Docker isolation, and why documentation is as valuable as code.
Introduction
In my previous post, I documented installing Umami analytics on my blog. The installation worked, analytics are live, and everything's running smoothly.
But the journey there taught me several valuable lessons that apply far beyond just Umami - lessons about database compatibility, end-of-life software, Docker deployments, and infrastructure planning.
This post captures those lessons while they're fresh.
📝 This is part of the Blog Infrastructure series - documenting how I built this platform to share my homelab journey.
Other posts in this series:
- Ghost on Red Hat
- Configuring Ghost
- The Umami Journey
- What I Learned Installing Umami: Database Decisions and Docker (you are here)
Lesson 1: Use the Database the Platform Expects
The mistake: Trying to force Umami onto MySQL/MariaDB when it was built for PostgreSQL.
What happened: 68 type compatibility errors. Every UUID, timestamp, and integer type needed manual conversion.
Why it was wrong:
- Fighting against the platform's design
- Every update would require re-converting types
- Ongoing maintenance burden
- Time wasted on a fundamentally flawed approach
The lesson: When documentation says "built for PostgreSQL" (or any specific database), they mean it. The entire schema, migrations, and architecture assume that database's features.
How to recognize this pattern:
- Check what database the project uses in their examples
- Look at the schema files - do they use database-specific types?
- Is there official support for multiple databases, or just community workarounds?
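One quick way to run that check is to grep the project's schema files for database-specific column types before committing to a different database. A minimal sketch — the sample Prisma schema below is illustrative, not Umami's actual schema:

```sh
# Sketch: scan a schema file for PostgreSQL-specific types.
# The sample schema is a made-up stand-in for the real project's schema.
cat > /tmp/schema.prisma <<'EOF'
model Website {
  id        String   @id @db.Uuid
  createdAt DateTime @db.Timestamptz(6)
}
EOF

# Postgres-only types like uuid, timestamptz, and jsonb are red flags
# for MySQL/MariaDB compatibility.
if grep -Eq '@db\.(Uuid|Timestamptz|JsonB)' /tmp/schema.prisma; then
  echo "PostgreSQL-specific types found - expect conversion pain elsewhere"
fi
```

If that grep comes back non-empty, assume the project means it when it says "built for PostgreSQL."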
What I should have done: Accepted PostgreSQL from the start instead of trying to fit it into my existing infrastructure.
This applies beyond Umami: Many modern applications are built for specific databases:
- Ruby on Rails apps often assume PostgreSQL
- Laravel can use MySQL or PostgreSQL, but has different features per database
- Django supports multiple databases but some features are PostgreSQL-only
Don't fight the architecture. Work with it.
Lesson 2: EOL Software is a Liability
The situation: My database server runs PostgreSQL 12, which reached end-of-life in November 2024.
The impact:
- No security updates
- No bug fixes
- Package repositories removed (`contrib` package unavailable)
- Blocked my Umami installation (needed the `pgcrypto` extension)
The lesson: End-of-life software isn't just about missing features - it actively blocks new deployments.
Why this matters:
- Security risk - No patches for vulnerabilities
- Integration problems - New software expects current versions
- Support dead ends - Community moves on, documentation becomes outdated
- Technical debt accumulates - The longer you wait, the harder the upgrade
Action item: I need to upgrade data.luwte.net to PostgreSQL 16. This is now a priority, not "someday."
How to prevent this:
- Track EOL dates for all infrastructure components
- Plan upgrades 6 months before EOL
- Test new versions in non-production first
- Document the current state before upgrading
Resources for tracking EOL:
- https://endoflife.date/ - Comprehensive EOL database
- Subscribe to vendor security announcements
- Set calendar reminders for major software versions
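The reminder can even be scripted — compare today's date against a component's EOL date and warn inside the planning window. A sketch assuming GNU `date` (the `-d` flag), using PostgreSQL 12's actual EOL date:

```sh
# Sketch: warn when a tracked component is past (or within 6 months of) EOL.
# Requires GNU date; EOL dates can be pulled from https://endoflife.date/
EOL_DATE="2024-11-14"   # PostgreSQL 12 end-of-life

now=$(date +%s)
eol=$(date -d "$EOL_DATE" +%s)
days_left=$(( (eol - now) / 86400 ))

if [ "$days_left" -lt 0 ]; then
  echo "postgresql 12: EOL has passed - upgrade now"
elif [ "$days_left" -lt 180 ]; then
  echo "postgresql 12: EOL in $days_left days - plan the upgrade"
fi
```

Dropped into cron, a loop over your component list turns "track EOL dates" from a good intention into an automatic nag.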
Lesson 3: Docker for Database Isolation
The decision: Deploy PostgreSQL 16 in Docker instead of installing it directly on a server.
Why Docker was the right choice:
Isolation:
- PostgreSQL runs in its own container
- Doesn't interfere with other services
- Easy to remove or recreate without affecting the host
Version management:
- Pin to a specific version (`postgres:16-alpine`)
- Upgrade by changing image tag
- Roll back if needed
Portability:
- Same container runs anywhere
- Easy to move to another host
- Kubernetes migration path is clear
Simple deployment:
- Docker Compose file is the complete documentation
- `docker compose up -d` and it's running
- No distribution-specific package issues
The pattern I'll use:
```yaml
services:
  database:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      # The postgres image refuses to start without a superuser password;
      # supply the value via an .env file, not in the compose file itself
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - /persistent/storage:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
```
When to use Docker for databases:
- ✅ Development and testing environments
- ✅ Small to medium production deployments
- ✅ When you need multiple database versions
- ✅ As a stepping stone to Kubernetes
When NOT to use Docker for databases:
- ❌ High-performance requirements (though gap is closing)
- ❌ When you don't understand Docker networking/storage
- ❌ If your team is unfamiliar with containers
My future plan: Everything new goes in Docker first. Later, migrate to Kubernetes on the TuringPi cluster.
Lesson 4: Clear Code Organization Matters
The detail that saved time later: Adding clear comment blocks around the Umami tracking script in Ghost.
What I did:
```html
<!-- ==================== Umami Analytics ==================== -->
<script defer src="https://analytics.vluwte.nl/script.js"
  data-website-id="074b5ca4-08fd-4e46-b292-5e0412c8863d"></script>
<!-- ==================== End Umami Analytics ==================== -->
```
Why this matters:
- 6 months from now, I'll know exactly what this code does
- Easy to find when I need to modify it
- No guessing which script belongs to which service
- If something breaks, I know where to look
The pattern:
```html
<!-- ==================== Service Name ==================== -->
<code here>
<!-- ==================== End Service Name ==================== -->
```
Apply this everywhere:
- Configuration files
- Scripts
- Code injection
- Docker Compose files
- Any file that gets modified over time
It's not over-engineering - it's making your future self's life easier.
Lesson 5: npm Dependency Management is Messy
The reality: Modern JavaScript projects have complex dependency trees.
What I encountered:
- React version conflicts (Umami uses React 19, some deps want React 18)
- Missing peer dependencies (`prop-types`)
- Security vulnerabilities in transitive dependencies
The solution: the `--legacy-peer-deps` flag

```sh
npm install --legacy-peer-deps
```
What this does:
- Ignores peer dependency version conflicts
- Allows installation to proceed
- Accepts the warnings
Why this was necessary:
- Umami is on React 19 (cutting edge)
- Some dependencies haven't updated yet
- The alternative is waiting or forking packages
The vulnerabilities:
- 8 high severity issues reported
- Mostly in development dependencies
- Acceptable risk for self-hosted analytics with controlled inputs
When to worry about npm vulnerabilities:
- ✅ Production applications handling sensitive data
- ✅ Public-facing services with untrusted input
- ✅ If `npm audit` shows runtime vulnerabilities
When not to worry:
- ❌ Development-only dependencies
- ❌ Build-time tools
- ❌ Self-hosted services with controlled access
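To separate runtime findings from dev-only noise, recent npm versions accept `npm audit --omit=dev` to restrict the report to production dependencies, and `npm audit --json` emits a machine-readable report. A sketch of filtering a saved report — the sample JSON mirrors the `metadata.vulnerabilities` shape npm emits, and `jq` is assumed to be installed:

```sh
# Sketch: pull severity counts out of a saved `npm audit --json` report.
# The sample report is illustrative; a real one comes from:
#   npm audit --json > /tmp/audit.json
cat > /tmp/audit.json <<'EOF'
{ "metadata": { "vulnerabilities":
  { "info": 0, "low": 1, "moderate": 2, "high": 8, "critical": 0 } } }
EOF

high=$(jq '.metadata.vulnerabilities.high' /tmp/audit.json)
echo "high severity findings: $high"   # → high severity findings: 8
```

A count like this is easy to wire into a periodic check, which matches the "monitor, don't panic" approach below.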
My approach:
- Monitor for Umami updates
- Run `npm audit` occasionally
- Update when major versions release
- Don't panic over every warning
Lesson 6: Privacy-Friendly Analytics Work
The myth: "You need Google Analytics to understand your audience."
The reality: Umami provides everything I need:
- Page views
- Visitor counts
- Referrer sources
- Popular content
- Real-time data
What I'm NOT missing:
- User tracking across sites
- Personal information collection
- Marketing attribution
- Demographic profiling
The benefits of privacy-first:
- No cookie consent needed - GDPR compliant by design
- Faster page loads - Lightweight script
- Reader trust - No creepy tracking
- Data ownership - Everything on my servers
The philosophical alignment:
- Building a self-hosting blog
- Teaching others to own their infrastructure
- Using Google Analytics would contradict that message
This works for:
- Personal blogs
- Technical documentation
- Portfolio sites
- Any site where you care about content, not marketing
This doesn't work for:
- E-commerce (need conversion tracking)
- Ad-supported sites (advertisers want detailed analytics)
- Marketing-heavy businesses (need attribution)
For my use case - a technical blog about self-hosting - Umami is perfect.
Lesson 7: Infrastructure as Code (Accidentally)
What happened: By documenting everything in files, I accidentally created infrastructure as code.
The artifacts:
- Docker Compose file (PostgreSQL deployment)
- Systemd service file (Umami service)
- Apache VirtualHost config (reverse proxy)
- `.env` file (application configuration)
Why this matters:
- I can recreate this setup in minutes
- Migration to Kubernetes is documented in the code
- Nothing is tribal knowledge
- Disaster recovery is just restoring files + data
The pattern I'm following:
- Write configuration files first
- Test the deployment
- Document the process
- Store files in version control (next step for me)
My next improvement:
- Create a Git repository for all config files
- Version control changes
- Document dependencies between services
- Automate with Ansible (future goal)
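That first improvement — putting the config files under version control — is only a few commands. A sketch with hypothetical paths and placeholder content:

```sh
# Sketch: snapshot infrastructure config files into a Git repository.
# Directory and file names are hypothetical placeholders.
mkdir -p /tmp/infra-config && cd /tmp/infra-config
git init -q

# In practice: copy in docker-compose.yml, systemd units, vhost configs, .env
# templates (with secrets stripped). A placeholder stands in here:
echo "services: {}" > docker-compose.yml

git add .
git -c user.name="demo" -c user.email="demo@example.com" \
    commit -q -m "Snapshot: PostgreSQL + Umami configuration"
```

From there, every config change gets a commit message explaining why — which doubles as the decision log this series is built on.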
Future Migration Path
This Umami deployment isn't the final destination - it's a stepping stone to running everything on my TuringPi Kubernetes cluster.
Current state:
- PostgreSQL in Docker (docker.luwte.net)
- Umami as systemd service (sulu.luwte.net)
- Configuration in `.env` file
Migration plan:
- Export PostgreSQL database
- Deploy PostgreSQL as StatefulSet in Kubernetes
- Import database
- Deploy Umami as Kubernetes Deployment
- Configure Ingress for https://analytics.vluwte.nl
- Update Ghost to point to new URL (or keep same URL via DNS)
- Verify data continuity
- Decommission old services
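The StatefulSet step can be sketched as follows — the names, the secret, and the storage size are illustrative assumptions, not the final manifest:

```yaml
# Sketch: PostgreSQL as a Kubernetes StatefulSet (illustrative values)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16-alpine   # same pinned image as the Docker setup
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:          # secret created separately
                  name: postgres-secret
                  key: password
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

The `volumeClaimTemplates` section replaces the bind-mounted `/persistent/storage` volume: Kubernetes provisions and reattaches the data volume for the pod.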
Why this is easier than it sounds:
- PostgreSQL already in Docker (container experience)
- Configuration already in files (portable)
- Data export/import is standard PostgreSQL
- Kubernetes is just orchestrating containers
This is exactly the kind of hands-on learning I want to document.
The Meta-Lesson: Documentation is the Product
The biggest lesson: The documentation I'm creating is as valuable as the infrastructure itself.
Why:
- Future me will need this when things break
- Other people can learn from my mistakes
- The process of writing clarifies my thinking
- It creates accountability (can't cut corners when documenting)
What I'm documenting:
- Every command I run
- Every configuration file I create
- Every error I encounter and how I fixed it
- Every decision and why I made it
The format that works:
- Real commands, not sanitized examples
- Actual errors, not hypothetical problems
- Honest about what didn't work
- Time estimates (6-7 hours for Umami)
This blog is my documentation system. And it's helping me build better infrastructure because I have to explain my choices.
Applying These Lessons
For my next infrastructure project:
- Research the platform's database expectations first
- Don't assume I can use my existing database
- Check schema files for database-specific types
- Start with the recommended database
- Check EOL dates before choosing versions
- Use supported software
- Plan upgrade paths
- Don't deploy EOL versions
- Default to Docker for new services
- Isolation
- Portability
- Kubernetes migration path
- Document as I build
- Comment blocks everywhere
- Clear naming
- Explain decisions
- Accept npm complexity
- Use `--legacy-peer-deps` when needed
- Monitor but don't panic over every vulnerability
- Update when major versions release
- Prioritize privacy and ownership
- Self-host when possible
- Own my data
- Respect readers
Conclusion
Installing Umami took 6-7 hours. Most of that time was learning lessons about database compatibility, EOL software, and Docker deployments.
Was the troubleshooting wasted time? No.
These lessons apply to every future infrastructure project:
- Respect the platform's architecture
- Keep software current
- Use containers for isolation
- Document everything
- Own your data
The analytics are working. But more importantly, I understand why they're working - and how to do it better next time.
← Previous: The Umami Journey
→ Next: Upgrading PostgreSQL
Questions or suggestions? Leave a comment below or reach out at igor@vluwte.nl.