 @15066d8d Should be recorded under /var/log/syslog though, right? 
 @b17a3b4f @15066d8d when i was setting up my cron thingies i checked the syslog for it, it's there 
 @67975218 We shall then indeed find out! :bing_chilling: @15066d8d 
 @b17a3b4f @67975218 @15066d8d bing chilling? 
 @67975218 @b17a3b4f @15066d8d I think the easiest way (if you are using systemd, journalctl is one of your best friends):

sudo journalctl -u cron | grep "scriptname"
It will show you when they were executed (and as which user):

# Run by misskey user
Sep 17 20:30:01 slippy CRON[389091]: (misskey) CMD (/home/misskey/kiki fetch && /home/misskey/kiki publish)
# Run by root (creates daily mysql dumps)
Sep 17 01:00:01 slippy CRON[246758]: (root) CMD (/root/backup/backup.database.mysql.sh)
# Run by root (creates weekly basebackup for WAL)
Sep 17 02:00:01 slippy CRON[252575]: (root) CMD (/root/backup/backup.database.postgresql.sh)
# Run by my own user to clean up old remote files through API calls.
Sep 17 03:00:01 slippy CRON[258304]: (efertone) CMD (/home/efertone/slippy-maintenance/clean-old-remote-files.py)
It will not record the exit status, but I always get an email when stderr is not empty.

Hope it helps. 
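For what it's worth, cron handles the email part on its own: it mails any output of a job to the crontab's owner, or to whatever MAILTO is set to. A minimal sketch with a made-up address and path, discarding stdout so only stderr ends up in the mail:

# crontab entry (address and path are examples)
MAILTO=admin@example.com
0 1 * * * /path/to/script.sh >/dev/null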
 @9046526e I really ought to look into PostgreSQL scripts, or Python scripts for interfacing with PostgreSQL, for doing regular database backups and vacuums (I'm just reading about autovacuum right now, struggling to understand whether it's active and configured by default or not). @67975218 @15066d8d 
 @b17a3b4f @9046526e @67975218  @jaf 

#!/bin/sh
pg_dump -F c --username dbuser dbname > /path/to/backupdir/whatever-bak`date +%Y-%m-%d`.pgdump

There, make it run as the postgres user. 
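One way to handle the "run as the postgres user" part is to put it in the postgres user's own crontab; a sketch, assuming the script above is saved somewhere like /var/lib/postgresql/backup.sh:

sudo crontab -u postgres -e
# then add an entry along the lines of (schedule and path are examples):
0 1 * * * /var/lib/postgresql/backup.sh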
 @15066d8d Thanks! This is one of the things on my "to do" list. I was going to try messing with it today, among other things, but then my company decided to fly me up to Boston for the night. @9046526e @67975218 @jaf 
 @b17a3b4f @67975218 @15066d8d @9046526e i live in "boston". 
 @b17a3b4f @9046526e @67975218 @jaf 
Simplest way to do it if you aren't working with a huge database, which you won't be on a small instance 
 @15066d8d I certainly hope this database doesn't become huge. @9046526e @67975218 @jaf 
 @b17a3b4f @9046526e @67975218 @jaf 
Just maintain it and you'll be fine 
 @15066d8d Right now I'm mostly just running a script that @9046526e wrote to delete remote files older than 3 weeks. Beyond that, I think PostgreSQL performs autovacuum by default? I don't know; I'm unsure how to really check and find out. Something more to research/read about when I get some extended downtime on my computer. 

Otherwise, I occasionally run VACUUM ANALYZE manually. Might be nice to combine the script you made for pg_dump with one that can perform VACUUM ANALYZE so that the database gets dumped, vacuumed, and analyzed daily + older files get pruned daily + the server reboots daily (right now all my VMs reboot daily). @67975218 @jaf 
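For what it's worth, autovacuum is enabled by default on a stock PostgreSQL install, and you can confirm that (and when it last ran per table) from psql; below that is a rough sketch of a combined nightly script, with made-up database name and paths:

sudo -u postgres psql -c 'SHOW autovacuum;'
sudo -u postgres psql -d dbname -c 'SELECT relname, last_autovacuum, last_autoanalyze FROM pg_stat_user_tables;'

#!/bin/sh
# hypothetical nightly maintenance, run as the postgres user
vacuumdb --analyze dbname
pg_dump -F c dbname > /path/to/backupdir/whatever-bak`date +%Y-%m-%d`.pgdump
# prune dumps older than three weeks
find /path/to/backupdir -name '*.pgdump' -mtime +21 -delete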
 @b17a3b4f @9046526e @67975218 @jaf 
Only run full vacuums once in a while when you wanna reclaim disk space 
 @15066d8d I've never yet run a VACUUM FULL, and I've been warned against using it in general outside of extraordinary circumstances. @9046526e @67975218 @jaf 
 @15066d8d @b17a3b4f @67975218 @jaf Yes, full vacuum is slow and takes an exclusive lock on the table while it's working on it, meaning, for example, that new notes can't be recorded in the database. So be careful with that. 
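If you want numbers before deciding anything, the statistics views show how much dead-row bloat regular (auto)vacuum is leaving behind; for example (database name is a placeholder):

sudo -u postgres psql -d dbname -c 'SELECT relname, n_live_tup, n_dead_tup FROM pg_stat_user_tables ORDER BY n_dead_tup DESC LIMIT 10;'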
 @9046526e Whenever I run VACUUM ANALYZE and pg_dump I stop Misskey so that no activity is taking place while PostgreSQL does its thing. @15066d8d @67975218 @jaf 
 @b17a3b4f @9046526e @15066d8d @jaf i leave misskey on for pg_dump with no issues 
 @67975218 It's probably fine, I'm just trying to be very cautious. Not like I have hundreds of users upset if there is downtime anyway. @9046526e @15066d8d @jaf 
 @b17a3b4f @67975218 @15066d8d @jaf @9046526e Use pg_repack, it's like vacuum but your db stays online. 
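pg_repack is shipped as an extension plus a command-line tool; a minimal sketch, assuming it's already installed and using placeholder database/table names:

sudo -u postgres psql -d dbname -c 'CREATE EXTENSION pg_repack;'
# rebuild one bloated table without taking it offline
pg_repack -d dbname -t tablename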
 @b17a3b4f @67975218 @15066d8d If the DB is small enough, a simple shell script with pg_dump can work. For a larger DB I wouldn't recommend it; it takes a lot of time and puts stress on the server, especially if you compress it. I had it for a while on a smaller server, but at some point I started to get alerts every night that the server CPU was at 100% for longer than 5 minutes.

Instead I set up WAL archive.

Basically:
1. You create a base backup, either with pg_basebackup or simply stop Postgres and make a copy of the data dir (usually PATH/version/main)
2. Enable WAL archive:

# /etc/postgresql/14/main/postgresql.conf
archive_command = '/var/lib/postgresql/wal/archive.sh "%f" "%p"'
restore_command = '/var/lib/postgresql/wal/restore.sh "%f" "%p"'
archive_mode = on
wal_level = logical

3. Set archive_command and restore_command to something that handles WAL files. For example:

archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
restore_command = 'cp /mnt/server/archivedir/%f %p'
My script simply copies the file to an external object store.

Read more: https://www.postgresql.org/docs/current/continuous-archiving.html

With that you can recover to any point in time, down to an exact timestamp.
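A rough sketch of what step 1 and the archive script can look like; the bucket name and the use of the aws CLI are assumptions, not necessarily what's described above:

# one-off base backup, run as the postgres user
pg_basebackup -D /var/lib/postgresql/basebackup -Ft -z -P

#!/bin/sh
# /var/lib/postgresql/wal/archive.sh -- invoked by archive_command as: archive.sh "%f" "%p"
set -e
WAL_NAME="$1"   # %f: file name of the WAL segment
WAL_PATH="$2"   # %p: path to the segment, relative to the data directory
aws s3 cp "$WAL_PATH" "s3://example-bucket/wal/$WAL_NAME"   # bucket is hypothetical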