Upgrading PostgreSQL: From 12 to 18 on RHEL8
PostgreSQL 12 reached EOL in November 2024. Here's how I upgraded to PostgreSQL 18 using the dump-and-restore method on RHEL 8.
Introduction
PostgreSQL 12 reached end of life in November 2024. I mentioned this as a liability in my Umami lessons post — EOL software is a real risk, and it was time to fix it on my own database server.
The plan was to upgrade to PostgreSQL 16. I ended up on 18. If you're going through the effort anyway, why not go to the latest stable release?
📝 This is part of the Blog Infrastructure series - documenting how I built this platform to share my homelab journey.
Other posts in this series:
- Ghost on Red Hat
- Configuring Ghost
- The Umami Journey
- Umami Lessons
- Upgrading PostgreSQL: From 12 to 18 on RHEL8 (you are here)
The Setup
data.luwte.net is my dedicated database server — it runs MariaDB and PostgreSQL. When I originally set up the server back in 2022, I installed the official PGDG repository from PostgreSQL.org:
dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm
This turned out to be a good decision. The PGDG repository carries all major PostgreSQL versions side by side, which means I could install PostgreSQL 18 alongside the running PostgreSQL 12 on the same server without any conflicts. PostgreSQL 12 had been sitting there since then, not causing any obvious problems, but quietly accumulating risk.
There are two ways to upgrade PostgreSQL across major versions: pg_upgrade for an in-place upgrade, or the dump-and-restore method. pg_upgrade is faster and keeps your data directory intact — no need to dump and reimport all your data. I tried it first.
The (Failed) pg_upgrade Attempt
pg_upgrade works by starting both the old and new PostgreSQL instances temporarily on different ports and migrating the data in place. With both postgresql12-server and postgresql18-server installed, I ran:
/usr/pgsql-18/bin/pg_upgrade \
--old-datadir=/var/lib/pgsql/12/data \
--new-datadir=/var/lib/pgsql/18/data \
--old-bindir=/usr/pgsql-12/bin \
--new-bindir=/usr/pgsql-18/bin \
--check
It failed with pg_ctl: server did not start in time. The log showed that PostgreSQL 12 started successfully on its temporary port (50432) and was ready to accept connections — but pg_upgrade didn't see it in time and gave up:
2026-02-20 22:11:00.113 CET [13731] LOG: database system is ready to accept connections
........................................................... stopped waiting
pg_ctl: server did not start in time
The server did start. pg_upgrade just timed out waiting for it. This can happen under load, in a slow environment, or when SELinux or other security policies slow down socket creation. I didn't dig further into the root cause — I only have two small databases, so the dump-and-restore method is a perfectly reasonable alternative and simpler to reason about.
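One workaround I didn't end up trying: pg_ctl waits a fixed number of seconds for the server to come up, and both pg_ctl and pg_upgrade honor the PGCTLTIMEOUT environment variable (the default is 60 seconds). Raising it before re-running the same --check command might have been enough. A sketch:

```shell
# Untested on my setup: give pg_ctl longer than the default 60 seconds
# to observe the temporary server coming up, then re-run the exact same
# pg_upgrade --check invocation as before.
export PGCTLTIMEOUT=300
echo "pg_ctl startup timeout now ${PGCTLTIMEOUT}s"
```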
The Dump-and-Restore Method

The dump-and-restore flow: pg_dumpall out, psql in, with the compressed SQL dump as the safety net in between.

Step 1: Install PostgreSQL 18
With the PGDG repository already in place, installing PostgreSQL 18 alongside the running PostgreSQL 12 is straightforward:
dnf install -y postgresql18-server postgresql18-contrib
This pulls in postgresql18, postgresql18-libs, postgresql18-server, and postgresql18-contrib — about 10 MB to download, 45 MB installed.
Then initialize the new data directory:
/usr/pgsql-18/bin/postgresql-18-setup initdb
This creates /var/lib/pgsql/18/data/ with a fresh postgresql.conf and pg_hba.conf. At this point PostgreSQL 12 is still running and untouched.
Step 2: Dump Everything from PostgreSQL 12
As the postgres user, dump the entire cluster — all databases, roles, and permissions — into a compressed file:
/usr/pgsql-18/bin/pg_dumpall -p 5432 | gzip > /var/lib/pgsql/full_backup_v12.sql.gz
Note: I'm using the PostgreSQL 18 pg_dumpall binary here, pointed at the running PostgreSQL 12 instance on port 5432. This is intentional — using the newer binary to dump an older server is supported and avoids compatibility warnings during restore.
The compressed dump is your safety net. Don't skip it, don't rush past it.
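Before stopping PostgreSQL 12, it's worth a quick integrity check on that safety net. A minimal sketch, assuming the dump path from above:

```shell
# Verify the backup before touching the running cluster.
DUMP=/var/lib/pgsql/full_backup_v12.sql.gz

if gzip -t "$DUMP" 2>/dev/null; then
    echo "archive OK"
    # The first lines should be the pg_dumpall preamble comments,
    # not an error message.
    zcat "$DUMP" | head -n 3
else
    echo "archive missing or corrupt -- do not proceed"
fi
```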
Step 3: Stop 12, Start 18
As root, swap the services:
systemctl stop postgresql-12
systemctl start postgresql-18
PostgreSQL 18 comes up on port 5432 (its default), which is the same port 12 was using. From a network perspective, nothing changes for connecting clients.
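Before moving on, it's worth confirming that the process now bound to 5432 really is the new major version. A quick check (ss needs root to show process names; the psql query runs as the postgres user):

```shell
# Which process owns port 5432 after the swap? Expect a postmaster
# from /usr/pgsql-18/.
ss -tlnp | grep ':5432'

# Ask the server itself; expect an 18.x version string.
sudo -u postgres /usr/pgsql-18/bin/psql -p 5432 -d postgres -Atc 'SHOW server_version;'
```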
Step 4: Restore
Back as the postgres user, pipe the compressed dump into the new instance:
zcat /var/lib/pgsql/full_backup_v12.sql.gz | /usr/pgsql-18/bin/psql -d postgres
This prints a stream of CREATE TABLE, INSERT, GRANT, and similar messages as each object is restored. Mostly reassuring output — with one notable warning:
CREATE ROLE
WARNING: setting an MD5-encrypted password
DETAIL: MD5 password support is deprecated and will be removed in a future release of PostgreSQL.
HINT: Refer to the PostgreSQL documentation for details about migrating to another password type.
This means the roles I dumped from PostgreSQL 12 had MD5-hashed passwords. PostgreSQL 18 restored them faithfully, but flags that MD5 is deprecated. More on fixing this below.
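If you want to see exactly which roles are affected, the stored hashes live in the pg_authid catalog (readable by superusers only); MD5 hashes start with md5, SCRAM ones with SCRAM-SHA-256$. As the postgres user:

```shell
# List every role whose stored password is still an MD5 hash.
/usr/pgsql-18/bin/psql -d postgres -Atc \
  "SELECT rolname FROM pg_authid WHERE rolpassword LIKE 'md5%';"
```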
Step 5: Vacuum
Run vacuumdb with analyze to update statistics across all databases:
/usr/pgsql-18/bin/vacuumdb --all --analyze-in-stages
Step 6: Configure and Enable
Before testing connections, there are things to restore in the PostgreSQL 18 configuration that aren't set by default.
/var/lib/pgsql/18/data/postgresql.conf — PostgreSQL 18 defaults to only listening on localhost (listen_addresses = 'localhost'). My applications connect from other servers on the network, so this needs to change:
listen_addresses = '*'
Without this, remote connections are silently refused — the service is running fine, it just isn't listening on the network interface. I found this out the hard way when my first connection test failed. The fix is obvious in hindsight, but it's easy to forget that a fresh initdb doesn't carry over your old postgresql.conf.
I also set password_encryption = md5 for now, so that any password changed before the SCRAM migration is hashed the same way as the restored ones. (The existing MD5 hashes authenticate fine regardless; this setting only controls how newly set passwords are stored.)
password_encryption = md5
/var/lib/pgsql/18/data/pg_hba.conf — Restore the client authentication rules from the PostgreSQL 12 config. This controls which hosts can connect, with which users, to which databases. Another thing that doesn't carry over from the old installation.
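For illustration, a typical entry looks like this; the database, user, and subnet here are placeholders, not my actual values:

```
# TYPE  DATABASE  USER   ADDRESS          METHOD
host    umami     umami  192.168.1.0/24   md5
```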
Restart to apply the configuration changes:
systemctl restart postgresql-18
Then disable the old service and enable the new one to survive reboots:
systemctl disable postgresql-12
systemctl enable postgresql-18
Step 7: Verify Connections
With the configuration in place, test that connections from the application servers actually work. After fixing listen_addresses and pg_hba.conf, all services came up without issues.
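The test itself can be as simple as this, run from each application server; the host name is from this post, while the database and role names are placeholders:

```shell
# Is the new instance reachable and accepting connections?
pg_isready -h data.luwte.net -p 5432

# Can the application's role authenticate and run a query?
psql -h data.luwte.net -p 5432 -U umami -d umami -c 'SELECT version();'
```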
The MD5 Password Problem
This is the one thing left to clean up. PostgreSQL has been moving away from MD5 password hashing in favor of scram-sha-256, which is more resistant to offline attacks. PostgreSQL 18 still supports MD5, but warns that it will be removed in a future release.
The fix is straightforward, but the order matters: password_encryption controls how newly set passwords are hashed, so if it's still md5 when you re-set a password, you just get a fresh MD5 hash.

First, switch the hashing method in /var/lib/pgsql/18/data/postgresql.conf:

password_encryption = scram-sha-256

Reload to apply:

systemctl reload postgresql-18

Then re-set the password for each role that triggered an MD5 warning, which stores a new SCRAM hash:

-- Connect as postgres
psql -U postgres

-- Update each role that had an MD5 warning
ALTER ROLE rolename WITH PASSWORD 'newpassword';

Finally, change md5 to scram-sha-256 in pg_hba.conf and reload again. Do this step last: the scram-sha-256 authentication method only works for roles that already have SCRAM hashes, while the md5 method happily authenticates both.
I have this on my list as the final cleanup step, along with removing the PostgreSQL 12 packages and data directory entirely.
What's Still To Do
- Update role passwords to use scram-sha-256
- Remove postgresql12-server, postgresql12-contrib, postgresql12, and postgresql12-libs
- Delete /var/lib/pgsql/12/ once I'm confident everything is stable
- Update postgresql.conf to set password_encryption = scram-sha-256
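When the time comes, the removal itself is two commands; a sketch, and the rm is irreversible, so only after a good stretch of stable operation:

```shell
# Remove the PostgreSQL 12 packages; dnf will flag anything that still
# depends on them.
dnf remove postgresql12-server postgresql12-contrib postgresql12 postgresql12-libs

# Delete the old cluster's data directory. Irreversible: make sure the
# 18 instance and the compressed dump are both in good shape first.
rm -rf /var/lib/pgsql/12/
```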
Lessons
The dump-and-restore method is slower than pg_upgrade but much easier to reason about. You get a clean PostgreSQL 18 installation with all your data migrated, and if something goes wrong during restore you still have the running PostgreSQL 12 instance to fall back to — you haven't touched it.
The MD5 deprecation warning is worth taking seriously. It's not urgent today, but the migration path is simple and doing it now means one less thing to deal with before the next major version upgrade.
← Previous: Umami Lessons
→ Next: Installing Talos Linux on TuringPi RK1
Questions or suggestions? Leave a comment below or reach out at igor@vluwte.nl.