We want to programmatically detect errors in cron scheduled pg_dumps.
Apart from checking whether or not the log file ends with "pg_dump: saving database definition", what other tell-tale strings are there to grep for in order to programmatically check if the dump is OK?
Grep the output for mentions of "ERROR:".
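A minimal sketch of such a check (mydb and the /backups paths are placeholder names, not from the question); checking pg_dump's exit status as well costs nothing, since pg_dump exits non-zero on failure:
#!/bin/sh
# placeholder names: mydb, /backups; adjust for your setup
LOG=/backups/mydb.log
pg_dump -v mydb > /backups/mydb.sql 2> "$LOG"
# pg_dump exits non-zero on failure, so check that first
if [ $? -ne 0 ]; then
    echo "pg_dump failed; see $LOG" >&2
    exit 1
fi
# then scan the log for server-side error messages
if grep -q 'ERROR:' "$LOG"; then
    echo "errors found in $LOG" >&2
    exit 1
fi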
I was sent a .sql file that contains two databases. Previously, I had only dealt with .sql files containing a single database. I also can't ask for the databases to be sent in separate files.
Earlier I used this command:
psql -d first_db < /Users/colibri/Desktop/first_db.sql
The databases on the server and locally have different names.
How can I restore a specific database from a file that contains several?
You have two choices:
Use an editor to delete everything except the database you want from the SQL file.
Restore the whole file and then drop the database you don't need.
The file was probably generated with pg_dumpall. Use pg_dump to dump a single database.
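For reference, a single database can be dumped on its own with plain pg_dump, e.g. (using the database name from this thread):
pg_dump first_db > first_db.sql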
If this is the output of pg_dumpall and the file is too big to edit with something like vi, you can use a stream editor to isolate just what you want.
perl -ne 'print if /^\\connect foobar/.../^\\connect/' < old.sql > new.sql
The last dozen or so lines that this captures will be setting up for and creating the next database it wants to restore, so you might need to tinker with this a bit to get rid of those if you don't want it to attempt to create that database while you replay. You could change the ending landmark to something like the below so that it ends earlier, but that is more likely to hit false positives (where the data itself contains the magic string) than the '^\connect' landmark is.
perl -ne 'print if /^\\connect foobar/.../^-- PostgreSQL database dump complete/'
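Putting the pieces together with the restore command from the question (old.sql, new.sql, foobar and first_db are just the names used in this thread): the extracted file still begins with a \connect foobar line, which should be dropped when the local database has a different name, and it may end with the setup lines for the next database mentioned above.
perl -ne 'print if /^\\connect foobar/.../^\\connect/' < old.sql > new.sql
# drop the leading \connect so psql stays connected to the target database
# (GNU sed shown; on macOS use sed -i '' instead of sed -i)
sed -i '1{/^\\connect/d}' new.sql
# after trimming any trailing setup lines for the next database:
psql -d first_db < new.sql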
I want to get an idea of how long it will take to copy a csv to a postgresql table. Is there a way to print the rows copied in a reasonable fashion or is there another way to somehow display the progress of the copy?
Perhaps there is a verbose setting or I should use --echo or -qecho
I am using:
psql -U postgres -d nyc_data -h localhost -c "\COPY rides FROM nyc_data_rides.csv CSV"
In Postgres 14, it's now possible to query the status of an active COPY via the internal pg_stat_progress_copy view.
e.g. to watch progress in terms of both bytes and lines processed:
select * from pg_stat_progress_copy \watch 1
Refs:
https://www.postgresql.org/docs/14/progress-reporting.html#COPY-PROGRESS-REPORTING
https://www.depesz.com/2021/01/12/waiting-for-postgresql-14-report-progress-of-copy-commands/
There is no such thing, unfortunately.
One idea would be to divide the input into chunks of 1000 or 10000 lines, which you then import one after the other. That wouldn't slow processing considerably, and you would quickly get an estimate of how long the whole import is going to take.
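A rough sketch of that idea, using the command from the question and assuming the CSV has no header row (the 10000-line chunk size is arbitrary):
# split the input into 10000-line pieces, then time each \copy
split -l 10000 nyc_data_rides.csv chunk_
for f in chunk_*; do
    time psql -U postgres -d nyc_data -h localhost -c "\copy rides FROM $f CSV"
done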
Use the pv tool:
pv /tmp/some_table.csv | sudo -u postgres psql -d some_db -c "copy some_table from stdin delimiter ',' null '';"
As a result, it will show a progress bar:
1.42GiB 0:11:42 [2.06MiB/s] [===================================================================================================================================================================>] 100%
As Laurenz Albe said, there's no way to measure how much time remains until the entire process completes. But one thing that I did today to get a good approximation was:
Start the system monitor in my Linux.
This application has a counter showing how much data has been uploaded since it was started.
Using the size of the file I was uploading, I could then make a good prediction of how much data was left to send to the server.
I am trying to read a CSV file located on a postgres 8.4 server filesystem:
COPY ip2location_db1 FROM '/pgsrc/IP2LOCATION-LITE-DB9.CSV' WITH CSV QUOTE AS '"';
I am getting the error:
Cannot open file for read access: Permission denied
The file is owned by postgres, and I tried putting it in /var/lib/pgsql and also in the /pgsources folder, which I gave the postgres user ownership of.
What am I doing wrong?
I have run into this issue before, and rather than jockey around with permissions all the time, I just import from STDIN.
This would accomplish what you want (albeit not precisely the way you want to do it), but I think it's a lot less cumbersome and error-prone. Try:
cat /pgsrc/IP2LOCATION-LITE-DB9.CSV | psql -c "COPY ip2location_db1 FROM STDIN (FORMAT CSV);"
This does imply that you're running the query from a shell script or similar, but to implement it the other way, you'd have to script the permission changes anyway.
(Also, according to the docs, the default quote is the double quote, so you don't need to specify the quote.)
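Alternatively, psql's client-side \copy reads the file with the permissions of the operating-system user running psql rather than the server process, so server-side file permissions never come into play:
psql -c "\copy ip2location_db1 FROM '/pgsrc/IP2LOCATION-LITE-DB9.CSV' CSV"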
I am trying to log a complete session in psql into a .txt file. The command given to me was initially this:
psql db_name | tee file_name.txt
However, my SSH client does nothing until I quit it. That is, it does not recognize any command; it behaves more like a document, and no action happens no matter what I write. So far, only '\q' is recognized, which lets me get out of it. Any ideas what is happening? How am I to write queries if the shell will not read anything? Also, I tried the following (this is before connecting to the database):
script filename.txt
It does show the message 'Script started, file is filename.txt', but I don't know where this file is stored or how to retrieve it.
Any help with the above will be welcome and really appreciated! Thanks a lot :)
There is an option to psql for logging queries and results:
-L filename
--log-file filename
Write all query output into file filename, in addition to the normal output destination.
Try this:
psql db_name -L file_name.txt
Stackoverflow and MySQL-via-command-line n00b here, please be gentle! I've been looking around for answers to my question but could only find topics dealing with GitHubbing MySQL dumps (as in: data dumps) for collaboration or MySQL "version control" via GitHub, neither of which tells me what I want to know:
How does one include MySQL database schemas/information on tables with PHP projects on GitHub?
I want to share a PHP project on GitHub which relies on the existence of a MySQL database with certain tables. If someone wanted to copy/make use of this project, they would need to have these particular tables in place to make the script work (all tables but one are empty in the beginning and only get filled by the user over time, via the script; the non-empty table holds three values from the start). How does one go about this, what is common practice?
Would I just get a (complete) dump file of my own db/tables, then delete all the data parts (except for that one non-empty table), set all autoincrements to zero and then upload that .sql file to GitHub along with the rest of the project?
OR
Is it best/better practice to write a (PHP) script with which the (maybe not-so-experienced) user can create these tables without having to use mysqldump/command line magic?
If solution #1 is the way to go, would I include further instructions on how to use such a .sql file?
Sorry if my questions sound silly, but as I said above, I myself am new to using the command line for MySQL-related things and had only ever used phpMyAdmin until yesterday (when I created my very first dump file with mysqldump - yay!).
Common practice is to include an install script that creates the necessary tables, so solution #2 would be the way to go.
[edit] That script could of course just replay a dump. ;)
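For example, the install instructions could then boil down to one line (install/schema.sql being a hypothetical name for the cleaned-up dump):
mysql -u username -p database_name < install/schema.sql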
You might also be interested in migrations: How to automate migration (schema and data) for PHP/MySQL application
If you also want to track database schema changes, you can use git hooks.
In the directory [your_project_dir]/.git/hooks, add or edit the script pre-commit:
#!/bin/sh -e
set -o errexit
# -- you can omit next line if not using version table
version=`git log --tags --no-walk --pretty="format:%d" | sed 1q | sed 's/[()]//g' | sed s/,[^,]*$// | sed 's ...... '`
BASEDIR=$(dirname "$0")
# -- set the directory where the schema dump is placed
dumpfile=`realpath "$BASEDIR/../../install/database.sql"`
echo "Dumping database to file: $dumpfile"
# -- dump database schema
mysqldump -u[user] -p[password] --port=[port] [database-name] --protocol=TCP --no-data=true --skip-opt --skip-comments --routines | \
sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' > "$dumpfile"
# -- dump the versions table and update the core version according to the last git tag
mysqldump -u[user] -p[password] --port=[port] [database-name] [versions-table-name] --protocol=TCP --no-data=false --skip-opt --skip-comments --no-create-info | \
sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' | \
sed -e "/INSERT INTO \`versions\` VALUES ('core'/c\\INSERT INTO \`versions\` VALUES ('core','$version');" >> "$dumpfile"
git add "$dumpfile"
# --- Finished
exit 0
Change [user], [password], [port], [database-name], [versions-table-name]
This script is executed automatically by git on each commit. If you commit a tag, the new version is saved to the table dump under the tag name. If there are no changes in the database, nothing is committed. Make sure the script is executable :)
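For example:
chmod +x [your_project_dir]/.git/hooks/pre-commit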
Your install script can take the SQL queries from this dump, and developers can easily track database changes.