Adding Analytics to My Blog: The Umami Journey
Privacy-friendly analytics without Google: how I deployed Umami with PostgreSQL in Docker after learning hard lessons about database compatibility.
Introduction
After setting up my Ghost blog and getting it discoverable via Google Search Console, I needed analytics to understand if anyone was actually reading what I write. But I had specific requirements:
- Privacy-friendly - No tracking cookies, GDPR compliant
- Self-hosted - My data, my infrastructure
- Lightweight - Minimal resource usage
- No Google - I want to own my analytics, not feed data to Google
Enter Umami - a simple, fast, privacy-focused analytics platform. Getting it running turned into an educational journey through database compatibility issues and Docker deployments.
This is the story of that installation, including the challenges I encountered and how I solved them.
📝 This is part of the Blog Infrastructure series - documenting how I built this platform to share my homelab journey.
Other posts in this series:
- Setting Up Ghost on Red Hat
- Configuring Ghost for Technical Blogging
- Adding Analytics to My Blog: The Umami Journey (you are here)
- What I learned Installing Umami
Why Not Google Analytics?
The obvious choice for blog analytics is Google Analytics. It's free, comprehensive, and widely used. But it has significant downsides:
Privacy concerns:
- Tracking cookies require GDPR consent banners
- Data stored on Google servers
- Users tracked across the web
- Potential privacy law violations
Overkill for my needs:
- I don't need detailed user profiles
- Marketing features are irrelevant
- Complexity I won't use
Philosophical reasons:
- I'm building a self-hosted homelab
- Sending data to Google contradicts that goal
- I want to control my own analytics
For a technical blog about self-hosting, using Google Analytics would be hypocritical.
Choosing Umami
I evaluated several self-hosted analytics options:
Matomo (formerly Piwik):
- Pros: Feature-rich, mature project
- Cons: Heavy resource usage, complex setup
Plausible:
- Pros: Beautiful UI, simple
- Cons: Opinionated (some features require paid version)
Umami:
- Pros: Lightweight, privacy-focused, clean interface, truly open source
- Cons: Fewer features than Matomo
Why Umami won:
- Simple installation
- No cookies needed (GDPR friendly)
- Beautiful, minimal dashboard
- Perfect for my needs: page views, referrers, popular content
- Active development
For a technical blog with straightforward analytics needs, Umami is ideal.
The Database Challenge
My initial plan was simple: install Umami on my Red Hat web server (sulu.luwte.net) and connect it to my existing MariaDB database server.
That didn't work.
I tried MySQL/MariaDB first - it failed with 68 PostgreSQL-specific type errors.
I tried SQLite next - same errors.
The lesson: Umami v3 is built specifically for PostgreSQL. The schema uses PostgreSQL-specific types:
- `@db.Uuid` for unique identifiers
- `@db.Timestamptz` for timestamps with timezone
- `@db.Integer` for integers
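For context, this is roughly what such PostgreSQL-native type annotations look like in a Prisma model. The model and field names below are illustrative, not copied from Umami's actual schema:

```prisma
// Illustrative Prisma model using PostgreSQL-native type attributes.
// Names are hypothetical; Umami's real schema differs.
model Website {
  id        String   @id @default(dbgenerated("gen_random_uuid()")) @db.Uuid
  name      String
  views     Int      @default(0) @db.Integer
  createdAt DateTime @default(now()) @db.Timestamptz(6)
}
```

Prisma maps these attributes directly to PostgreSQL column types, which is exactly why they have no MySQL or SQLite equivalent.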
MySQL and SQLite don't support these natively. I could have manually converted every type in the schema, but:
- Time-consuming and error-prone
- Every Umami update would break it
- Fighting against the platform's design
When the documentation says "built for PostgreSQL," they mean it. Don't fight the architecture.
The PostgreSQL Problem
My database server (data.luwte.net) runs PostgreSQL 12, which reached end-of-life in November 2024.
Umami needs the pgcrypto extension for UUID generation. In PostgreSQL 12, this requires the contrib package - which isn't available for EOL versions.
yum search postgresql12-contrib
# No results - EOL version, no packages
CREATE EXTENSION IF NOT EXISTS pgcrypto;
-- ERROR: could not open extension control file
The choice was clear: Deploy a modern PostgreSQL version.
I chose PostgreSQL 16 in Docker on my existing Docker host (docker.luwte.net) because:
- Clean, isolated deployment
- Modern, supported version
- Easy to manage via Docker Compose
- Good practice for eventual migration to Kubernetes
PostgreSQL 16 in Docker
Creating Storage
I have a separate Docker host for container workloads.
# Create logical volume and filesystem
lvcreate -L5G -n postgresql root_vg
mkfs.ext4 /dev/mapper/root_vg-postgresql
# Create the mount point and add it to fstab for persistent mounting
mkdir -p /postgresql
echo "/dev/mapper/root_vg-postgresql /postgresql ext4 defaults 0 0" >> /etc/fstab
# Reload systemd's view of fstab, then mount the filesystem
systemctl daemon-reload
mount /postgresql
Note: The Docker container automatically created the required directory structure with correct permissions. No manual chown needed.
Docker Compose Configuration
Created /root/Docker/postgresql16/docker-compose.yml:
services:
postgres:
image: postgres:16-alpine
container_name: postgres-umami
restart: unless-stopped
environment:
POSTGRES_DB: umami
POSTGRES_USER: umami
POSTGRES_PASSWORD: your-secure-password
PGDATA: /var/lib/postgresql/data/pgdata
volumes:
- /postgresql:/var/lib/postgresql/data
ports:
- "5432:5432"
healthcheck:
test: ["CMD-SHELL", "pg_isready -U umami -d umami"]
interval: 10s
timeout: 5s
retries: 5
Deployment
cd /root/Docker/postgresql16
docker compose up -d
# Check logs
docker compose logs -f
Output:
database system is ready to accept connections
✅ PostgreSQL 16 running!
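With PostgreSQL 16 running, the extension problem from earlier simply disappears: pgcrypto ships with the standard image, and since PostgreSQL 13 `gen_random_uuid()` is built into core anyway. A quick sanity check from `psql`:

```sql
-- On PostgreSQL 16 this succeeds without any extra packages
CREATE EXTENSION IF NOT EXISTS pgcrypto;
-- Built in since PostgreSQL 13, no extension required
SELECT gen_random_uuid();
```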
Network Configuration
# Allow PostgreSQL through firewall
firewall-cmd --permanent --zone=public --add-port=5432/tcp
firewall-cmd --reload
Also configured my internal firewall to allow traffic from sulu.luwte.net (10.0.1.12) to docker.luwte.net (10.0.0.80) on port 5432.
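Before pointing Umami at the new database, it's worth confirming the port is actually reachable from the web server. A small Python helper (my own sketch, not part of Umami) that does the same job as `nc -z`:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From sulu.luwte.net, this should return True once the firewall rules are in place:
# port_open("10.0.0.80", 5432)
```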
Installing Umami
Getting the Code
cd /opt
git clone https://github.com/umami-software/umami.git
cd umami
git checkout v3.0.3 # Latest stable release
Configuration
Created .env file:
DATABASE_URL=postgresql://umami:password@10.0.0.80:5432/umami?schema=public
APP_SECRET=randomly-generated-secret-here
DISABLE_TELEMETRY=1
Note: Generate a secure random secret with:
openssl rand -base64 32
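The two values worth double-checking before the first start are `DATABASE_URL` and `APP_SECRET`. A small Python sketch (a hypothetical helper, not part of Umami) that generates a secret the same way as the `openssl` command and sanity-checks the connection URL:

```python
import base64
import secrets
from urllib.parse import urlparse

def generate_app_secret() -> str:
    """Equivalent of `openssl rand -base64 32`: 32 random bytes, base64-encoded."""
    return base64.b64encode(secrets.token_bytes(32)).decode()

def check_database_url(url: str) -> None:
    """Basic sanity checks on a postgresql:// connection string."""
    parsed = urlparse(url)
    assert parsed.scheme == "postgresql", "expected a postgresql:// URL"
    assert parsed.hostname and parsed.port, "host and port are required"
    assert parsed.path.lstrip("/"), "database name is missing"

check_database_url("postgresql://umami:password@10.0.0.80:5432/umami?schema=public")
secret = generate_app_secret()  # 32 random bytes -> 44 base64 characters
```

Catching a malformed URL here is much faster than decoding a Prisma connection error after `npm run update-db`.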
Installation
# Install dependencies
npm install --legacy-peer-deps
# Apply database migrations
npm run update-db
Output:
Applying migration `01_init`
Applying migration `02_report_schema_session_data`
...
Applying migration `14_add_link_and_pixel`
All migrations have been successfully applied.
✅ Database schema created!
Building Umami
npm run build
First attempt failed:
Module not found: Can't resolve 'prop-types'
Quick fix:
npm install prop-types --legacy-peer-deps
npm run build
Success!
✓ Compiled successfully in 69s
Running Umami as a Service
Systemd Service
Created /etc/systemd/system/umami.service:
[Unit]
Description=Umami Analytics
After=network.target
[Service]
Type=simple
User=igor
WorkingDirectory=/opt/umami
Environment="NODE_ENV=production"
ExecStart=/usr/bin/npm start
Restart=on-failure
RestartSec=10
[Install]
WantedBy=multi-user.target
Enable and Start
sudo systemctl daemon-reload
sudo systemctl enable umami
sudo systemctl start umami
sudo systemctl status umami
Output:
● umami.service - Umami Analytics
Active: active (running)
✓ Ready in 980ms
✅ Umami is running on localhost:3000!
Apache Reverse Proxy Configuration
Virtual Host Setup
Created /etc/httpd/conf.d/011-analytics.vluwte.nl-ssl.conf:
# HTTP - Redirect to HTTPS
<VirtualHost *:80>
ServerName analytics.vluwte.nl
Redirect permanent / https://analytics.vluwte.nl/
</VirtualHost>
# HTTPS - Umami Proxy
<VirtualHost *:443>
ServerName analytics.vluwte.nl
# SSL Configuration
SSLEngine on
SSLCertificateFile /etc/ssl/analytics.vluwte.nl/analytics.vluwte.nl.cer
SSLCertificateKeyFile /etc/ssl/analytics.vluwte.nl/analytics.vluwte.nl.key
SSLCertificateChainFile /etc/ssl/analytics.vluwte.nl/fullchain.cer
# Modern SSL configuration
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite HIGH:!aNULL:!MD5
# Logging
ErrorLog /opt/websites/analytics.vluwte.nl/logs/error.log
CustomLog /opt/websites/analytics.vluwte.nl/logs/access.log combined
# Reverse proxy to Umami
ProxyPreserveHost On
ProxyPass / http://127.0.0.1:3000/
ProxyPassReverse / http://127.0.0.1:3000/
# Headers for proxy
RequestHeader set X-Forwarded-Proto "https"
RequestHeader set X-Forwarded-Port "443"
</VirtualHost>
SELinux and DNS
# Create the log directory first
sudo mkdir -p /opt/websites/analytics.vluwte.nl/logs
# Allow Apache to write logs
sudo semanage fcontext -a -t httpd_log_t "/opt/websites/analytics.vluwte.nl/logs(/.*)?"
sudo restorecon -Rv /opt/websites/analytics.vluwte.nl/logs
Added DNS record: analytics.vluwte.nl → server IP
SSL certificates were already in place from my certificate provider.
Reload Apache
sudo systemctl reload httpd
Test:
curl -I https://analytics.vluwte.nl
# HTTP/1.1 200 OK
✅ Accessible via HTTPS!
Creating the Admin Account
When you first access a fresh Umami installation, a default admin account already exists:
Default credentials:
- Username: admin
- Password: umami
Visit https://analytics.vluwte.nl and log in with these credentials.
Critical: Immediately change the password!
- Login with default credentials
- Go to Settings → Profile
- Change the password
- Store the new credentials in your password manager
Security note: The default credentials are well-known and publicly documented. Leaving them unchanged is a security risk. Change them before doing anything else in Umami.
Adding the Blog to Umami
In Umami Dashboard
- Settings → Websites → Add Website
- Name: vLuwte.nl Blog
- Domain: vluwte.nl
- Save
Received tracking code:
<script defer src="https://analytics.vluwte.nl/script.js"
data-website-id="074b5ca4-08fd-4e46-b292-5e0412c8863d"></script>
Adding to Ghost
Ghost Admin → Settings → Code Injection → Site Header:
Added at the top with clear comment blocks:
<!-- ==================== Umami Analytics ==================== -->
<script defer src="https://analytics.vluwte.nl/script.js"
data-website-id="00000000-0000-0000-0000-000000000000"></script>
<!-- ==================== End Umami Analytics ==================== -->
Verification
- Visited https://vluwte.nl
- Opened browser DevTools → Network tab
- Confirmed script.js loaded with a 200 status
- Checked Umami dashboard
- Saw visitor count increase!
✅ Analytics working!
The Complete Architecture

The Umami analytics stack — Ghost serves the blog and loads the tracking script, Umami captures the data and stores it in PostgreSQL on a separate Docker host.
How it works:
- Reader visits blog - Browser loads https://vluwte.nl from sulu.luwte.net
- Ghost loads Umami script - The tracking script is served from analytics.vluwte.nl
- Script sends data to Umami API - Page views and visitor info flow to the Umami application (port 3000)
- Umami stores data in PostgreSQL - Analytics data is persisted in PostgreSQL 16 running on docker.luwte.net (port 5432)
- Igor views dashboard - The analytics dashboard at https://analytics.vluwte.nl provides real-time visibility
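Step 3 above is nothing exotic: the tracker is a small script POSTing JSON to the analytics host. The field names below are my assumptions about the general shape of such a beacon, not Umami's exact wire format:

```python
import json

def build_pageview(website_id: str, url: str, referrer: str = "") -> str:
    """Sketch of a page-view beacon. Field names are illustrative assumptions,
    not Umami's exact API."""
    payload = {
        "type": "event",
        "payload": {
            "website": website_id,  # the data-website-id from the script tag
            "hostname": "vluwte.nl",
            "url": url,
            "referrer": referrer,
        },
    }
    return json.dumps(payload)

# The real tracking script would POST a body like this to the analytics host
body = build_pageview("00000000-0000-0000-0000-000000000000", "/my-first-post/")
```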
Infrastructure:
- sulu.luwte.net - Apache reverse proxy, Umami application (Node.js port 3000)
- docker.luwte.net - PostgreSQL 16 in Docker container (port 5432)
- vluwte.nl - Ghost blog with tracking script
Why this works:
- Clean separation of concerns
- PostgreSQL isolated in Docker
- Easy to migrate to Kubernetes later
- Self-hosted, privacy-friendly
What's Working Now
✅ Privacy-friendly analytics - No cookies, GDPR compliant
✅ Self-hosted - My data, my infrastructure
✅ Real-time tracking - Visitor counts, page views, referrers
✅ Clean interface - Simple dashboard, no clutter
✅ Production-ready - Systemd service, Apache proxy, SSL
Metrics I'm Tracking
Essential data:
- Page views per post
- Visitor counts (unique and total)
- Referrer sources (where traffic comes from)
- Popular pages
- Real-time active visitors
What I'm NOT tracking:
- Personal information
- User profiles
- Cross-site behavior
- Marketing attribution
Simple, privacy-focused, and sufficient for my needs.
Time Investment
Total time: About 4-5 hours including troubleshooting database compatibility issues.
Was it worth it? Absolutely.
I now have:
- Self-hosted, privacy-friendly analytics
- Modern PostgreSQL deployment in Docker
- Experience with database compatibility challenges
- Documentation for future reference
Most importantly, I own my analytics data. No third-party tracking, no privacy concerns, no dependency on external services.
Next Steps
This analytics system is now tracking visitor behavior on my blog. In my next post, I'll share the specific lessons I learned from this installation - about database compatibility, Docker deployments, and infrastructure decisions that apply beyond just Umami.
For now, the analytics are live and working. Time to see which posts resonate with readers!
Questions about self-hosting analytics? Leave a comment below or reach out at igor@vluwte.nl.
← Previous: Configuring Ghost for Technical Blogging
→ Next: What I learned Installing Umami