 @b17a3b4f @67975218 @15066d8d If the DB is not too large, a simple shell script with pg_dump can work. For a larger DB I wouldn't recommend it: the dump takes a long time and puts a lot of stress on the server, especially if you compress it. I ran it that way for a while on a smaller server, until I started getting alerts every night that the server CPU had been at 100% for longer than 5 minutes.
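
Something like this (a minimal sketch; the DB name, backup dir and the 7-day retention are placeholders):

#!/bin/sh
# Nightly dump; the custom format is compressed by default.
set -e
BACKUP_DIR=/var/backups/postgres
DB=mydb
pg_dump --format=custom --file="$BACKUP_DIR/$DB-$(date +%F).dump" "$DB"
# Drop dumps older than 7 days
find "$BACKUP_DIR" -name "$DB-*.dump" -mtime +7 -delete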

Instead I set up WAL archiving.

Basically:
1. Create a base backup, either with pg_basebackup or by simply stopping postgres and copying the data dir (usually PATH/version/main).
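
For the pg_basebackup route, something like this (a sketch; the target dir is a placeholder):

# Compressed tar of the whole cluster, with a fast checkpoint so it starts right away
pg_basebackup --pgdata=/var/backups/postgres/base --format=tar --gzip --checkpoint=fast --progress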
2. Enable WAL archive:

# /etc/postgresql/14/main/postgresql.conf
archive_command = '/var/lib/postgresql/wal/archive.sh "%f" "%p"'
restore_command = '/var/lib/postgresql/wal/restore.sh "%f" "%p"'
archive_mode = on
wal_level = logical

3. Set archive_command and restore_command to something that handles the WAL files. For example:

archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
restore_command = 'cp /mnt/server/archivedir/%f %p'
My script simply copies the file to an external object store.
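
For reference, a rough sketch of the two scripts (the S3 bucket name and the aws CLI are assumptions here; use whatever client your object store provides):

#!/bin/sh
# archive.sh "%f" "%p" -- %f is the WAL file name, %p its path relative to the data dir
set -e
FILE="$1"
SOURCE="$2"
# Refuse to overwrite an already-archived segment, like the 'test ! -f' in the docs example
aws s3api head-object --bucket my-wal-archive --key "$FILE" >/dev/null 2>&1 && exit 1
aws s3 cp "$SOURCE" "s3://my-wal-archive/$FILE"

#!/bin/sh
# restore.sh "%f" "%p" -- fetch the requested segment back during recovery
set -e
FILE="$1"
TARGET="$2"
aws s3 cp "s3://my-wal-archive/$FILE" "$TARGET"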

Read more: https://www.postgresql.org/docs/current/continuous-archiving.html

With that you can recover to any point in time, down to an exact timestamp.
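
To restore, you copy the base backup back into the data dir, point restore_command at the archive, set a recovery target, drop a recovery.signal file into the data dir and start postgres. Roughly (the timestamp is a placeholder):

# postgresql.conf on the restored copy
restore_command = '/var/lib/postgresql/wal/restore.sh "%f" "%p"'
recovery_target_time = '2024-01-15 03:00:00'

# then tell postgres to enter recovery and start it
touch /var/lib/postgresql/14/main/recovery.signal
systemctl start postgresql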