I recently ran into an unusual problem with a large PostgreSQL database that was filling up the local disks of a server. The database was over 100 GB, with about 300 million records. It had a lot of churn and had not been vacuumed in a long time. When I manually ran a vacuum on it, there was not enough working disk space to complete the operation, which left me in a bind.
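For context, the sizes were easy to confirm before deciding what to do. This is a minimal sketch, assuming the psql client and the standard pg_database_size and pg_size_pretty functions; database-name is a placeholder:

psql -d database-name -c "SELECT pg_size_pretty(pg_database_size('database-name'));"
df -h   # check how much working disk space is left for vacuum to use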
Instead of vacuuming, I decided to dump the database to a remote backup machine, drop it, and then restore it from the remote dump. I used SSH to run the remote commands.
Dump a table from a remote PostgreSQL database to the local machine
ssh user@remote-database-server 'pg_dump -t table-name database-name' > table-name.sql
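If bandwidth or local disk space is tight, the dump can also be compressed on the remote side before it crosses the wire. This is just a sketch, assuming gzip is installed on the database server; the host, database, and table names are placeholders as above:

ssh user@remote-database-server 'pg_dump -t table-name database-name | gzip' > table-name.sql.gz

The matching restore would simply pipe the file through gunzip before handing it to psql.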
Restore a remote PostgreSQL database dump to the local database server
ssh user@backup-machine 'cat table-name.sql' | psql -d database-name
Note that the dump command is run from the backup machine and the restore command is run from the database server.
Also note the single quotes around the remote portion of each command; they make SSH run that part on the remote machine, while the local shell handles the redirection or pipe.
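Putting it all together, the full sequence described above, including the drop-and-recreate step, looked roughly like this. It is a sketch rather than the exact commands I ran; it assumes the standard dropdb and createdb client tools and uses the same placeholder names:

# from the backup machine: pull the dump off the database server
ssh user@remote-database-server 'pg_dump -t table-name database-name' > table-name.sql

# on the database server: drop and recreate the database
dropdb database-name
createdb database-name

# on the database server: reload the dump from the backup machine
ssh user@backup-machine 'cat table-name.sql' | psql -d database-name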
Wednesday, September 15, 2010