Using triggers with different users (MySQL error 1449) - triggers

Question: How can I export the database contents (including triggers) from my dev system and import the data to the live server without running into error 1449 when a trigger gets triggered?
In a recent PHP project I am making extensive use of MySQL triggers, but I ran into a problem when deploying my database from the dev to the live system.
E.g. one of my triggers is defined as follows (output generated by mysqldump):
DELIMITER ;;
/*!50003 CREATE*/ /*!50017 DEFINER=`root`@`localhost`*/ /*!50003 TRIGGER update_template BEFORE UPDATE ON template
FOR EACH ROW BEGIN
SET new.mod_date := now();
END */;;
DELIMITER ;
That trigger was defined on my dev system using the user root@localhost, which produces the DEFINER=root@localhost clause in the above statement.
root@localhost does not exist as a user on the live server, which causes the following error whenever the trigger gets triggered (e.g. by using update templates set...) by the live system's user:
1449: The user specified as a definer ('root'@'localhost') does not exist
Currently I use mysqldump --add-drop-table --user=root -p my_project > export.sql for export and mysql -u devuser -p my_project < export.sql for importing data.
Export/import works flawlessly. The error occurs only when I manipulate the data via SQL and a trigger gets involved.
Edit:
MySQL version is 5.5.47 (live and dev)

Once you've exported to export.sql, you'll need to sanitize the trigger lines using a sed regex, like this:
sed -i -- 's/^..!50003\sCREATE.....!50017\sDEFINER=.root...[^]*.....!50003\s\([^;]*\)/CREATE DEFINER=CURRENT_USER \1/g;s/^\s*\([^\*]*;\{0,1\}\)\s\{0,1\}\*\/;;$/\1;;/g' export.sql
This works on Linux, but if you're on Mac OS X, the built-in sed command won't work. You'll need to first brew install gnu-sed, then use gsed instead of sed in the above command.

In my case, the trigger that caused the problem was missing its BEGIN and END statements. So I ran the corresponding DROP TRIGGER and CREATE TRIGGER; after that I made a new backup, which later restored without problems. E.g.:
DROP TRIGGER `incorrect_trg1`;
DELIMITER ;;
CREATE DEFINER = `root`@`localhost` TRIGGER `incorrect_trg1` BEFORE INSERT ON `table1` FOR EACH ROW
BEGIN
SET NEW.col = DATE_FORMAT(NEW.col,'%Y%m');
END;;
DELIMITER ;
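If you are not sure which triggers carry a stale definer in the first place, you can list them from information_schema before dropping anything. A minimal sketch, assuming the schema is called my_project as in the question:
# list every trigger in the schema together with its DEFINER
mysql -u root -p -e "SELECT TRIGGER_NAME, DEFINER FROM information_schema.TRIGGERS WHERE TRIGGER_SCHEMA = 'my_project';"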

Use the following sed command to remove the DEFINER part from the dump file and then import it to your live server.
sed 's/\sDEFINER=`[^`]*`@`[^`]*`//' -i dumpfile.sql
The triggers will then be created by the user importing the dump file by default.
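Putting the pieces together, a minimal sketch of the whole round trip with the DEFINER clauses stripped while dumping (database and user names taken from the question; adjust to your setup):
# dump on the dev system, removing DEFINER clauses on the fly
mysqldump --add-drop-table --user=root -p my_project \
  | sed 's/\sDEFINER=`[^`]*`@`[^`]*`//' > export.sql
# import on the live server; the triggers are then owned by the importing user
mysql -u devuser -p my_project < export.sql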

Related

Creating Batch Files with PostgreSQL \copy Command in Jetbrains Datagrip

I'm familiarizing myself with the standalone version of Datagrip and having a bit of trouble understanding the different approaches to composing SQL via console, external files, scratch files, etc.
I'm managing by referencing the documentation, and I'm happy to figure things out as I go.
However, I'm trying to ingest CSV data into tables via batch files using the Postgres \copy command. Datagrip will execute this command without error but no data is being populated.
This is my syntax, composed and ran in the console view:
\copy tablename from 'C:\Users\username\data_file.txt' WITH DELIMITER E'\t' csv;
Note that the data is tab-separated and stored in a .txt file.
I'm able to use the import functions of Datagrip (via the context menu) just fine, but I'd like to understand how to issue commands that do the same.
\copy is a command of the command-line PostgreSQL client psql.
I doubt that Datagrip invokes psql, so it won't be able to use \copy or any other “backslash command”.
You probably have to use Datagrip's import facilities, or start using psql.
OK, but what about the SQL COPY command (https://www.postgresql.org/docs/12/sql-copy.html)?
How can I run something like that with Datagrip?
BEGIN;
CREATE TEMPORARY TABLE temp_json("values" text) ON COMMIT DROP;
COPY temp_json FROM 'MY_FILE.JSON';
SELECT "values"->>'aJsonField' AS f
FROM (SELECT "values"::json AS "values" FROM temp_json) AS a;
COMMIT;
I tried replacing 'MY_FILE.JSON' with the full path, with a parameter (?), putting the file in the sql directory, etc.
The Datagrip answer is:
[2021-05-05 10:30:45] [58P01] ERROR: could not open file '...' for reading : No such file or directory
EDIT :
I know why. RTFM! -_-
COPY with a file name instructs the PostgreSQL server to directly read from or write to a file. The file must be accessible by the PostgreSQL user (the user ID the server runs as) and the name must be specified from the viewpoint of the server.
Sorry.....
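For what it's worth, the client-side route from the answer above avoids this entirely: psql's \copy resolves the path on the machine psql runs on, not on the server. A rough sketch using the syntax from the original question (host, database and user are placeholders):
# run the \copy from psql instead of DataGrip; the file path is read on the client
psql -h localhost -d mydb -U myuser \
  -c "\copy tablename FROM 'C:/Users/username/data_file.txt' WITH DELIMITER E'\t' CSV"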

PostgreSQL PL/PerlU trigger issue

I'm trying to create a PostgreSQL trigger on Linux written in Perl which should execute code based on external libraries. The SQL script containing the trigger looks like this:
CREATE OR REPLACE FUNCTION notify_mytable_update() RETURNS trigger AS $$
use lib "full_path_to_lib_dir";
use MyModule;
return;
$$ LANGUAGE plperlu
SECURITY DEFINER
SET search_path = myschema, public, pg_temp;
DROP TRIGGER IF EXISTS notify_mytable_update ON mytable;
CREATE TRIGGER notify_mytable_update AFTER UPDATE ON mytable
FOR EACH ROW
EXECUTE PROCEDURE notify_mytable_update();
The issue with this is that whenever I try to run this script with psql I get a permission denied error in the Perl code for accessing MyModule. Giving postgres full access to my home directory didn't help.
Thank you in advance!
Don't forget that to have access to a file, you not only need permissions to the file and the directory where it resides, but also to all directories in the path.
So if your module is /home/george/MyModule.pm, you need access to / and /home in addition to /home/george and the file itself.
You'll have to give these permissions to the operating system user running the PostgreSQL server process, commonly postgres.
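A quick way to check this from the shell is to test the path as the server's OS user and open up traverse permission where it is missing. A rough sketch, reusing the /home/george/MyModule.pm example (on most systems / and /home are already world-traversable):
# can the postgres OS user actually reach the module?
sudo -u postgres test -r /home/george/MyModule.pm && echo OK || echo "no access"
# if not: grant traverse (execute) on the directory and read on the file
chmod o+x /home/george
chmod o+r /home/george/MyModule.pm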

How to include MySQL database schema on GitHub?

Stackoverflow and MySQL-via-command-line n00b here, please be gentle! I've been looking around for answers to my question but could only find topics dealing with GitHubbing MySQL dumps (as in: data dumps) for collaboration or MySQL "version control" via GitHub, neither of which tells me what I want to know:
How does one include MySQL database schemas/information on tables with PHP projects on GitHub?
I want to share a PHP project on GitHub which relies on the existence of a MySQL database with certain tables. If someone wanted to copy/make use of this project, they would need to have these particular tables in place to make the script work (all tables but one are empty in the beginning and only get filled by the user over time, via the script; the non-empty table holds three values from the start). How does one go about this, what is common practice?
1. Would I just get a (complete) dump file of my own db/tables, then delete all the data parts (except for that one non-empty table), set all autoincrements to zero and then upload that .sql file to GitHub along with the rest of the project?
OR
2. Is it best/better practice to write a (PHP) script with which the (maybe not-so-experienced) user can create these tables without having to use mysqldump/command line magic?
If solution #1 is the way to go, would I include further instructions on how to use such a .sql file?
Sorry if my questions sound silly, but as I said above, I myself am new to using the command line for MySQL-related things and had only ever used phpMyAdmin until yesterday (when I created my very first dump file with mysqldump - yay!).
Common practice is to include an install script that creates the necessary tables, so solution #2 would be the way to go.
[edit] That script could ofc just replay a dump. ;)
You might also be interested in migrations: How to automate migration (schema and data) for PHP/MySQL application
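If the install script is to replay a dump, as the edit above suggests, the pruning from option #1 can be scripted instead of done by hand. A rough sketch with mysqldump (database and table names are placeholders):
# schema only, with the stored AUTO_INCREMENT counters stripped out
mysqldump -u root -p --no-data my_database \
  | sed 's/ AUTO_INCREMENT=[0-9]*//' > schema.sql
# append the rows of the single pre-filled table
mysqldump -u root -p --no-create-info my_database my_lookup_table >> schema.sql
A user of the project can then load it with mysql -u someuser -p my_database < schema.sql.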
If you also want to track database schema changes, you can use git hooks.
In the directory [your_project_dir]/.git/hooks, add or edit the script pre-commit:
#!/bin/sh -e
set -o errexit
# -- you can omit next line if not using version table
version=`git log --tags --no-walk --pretty="format:%d" | sed 1q | sed 's/[()]//g' | sed s/,[^,]*$// | sed 's ...... '`
BASEDIR=$(dirname "$0")
# -- set directory where the schema dump is placed
dumpfile=`realpath "$BASEDIR/../../install/database.sql"`
echo "Dumping database to file: $dumpfile"
# -- dump database schema
mysqldump -u[user] -p[password] --port=[port] [database-name] --protocol=TCP --no-data=true --skip-opt --skip-comments --routines | \
sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' > "$dumpfile"
# -- dump versions table and update core version according to last git tag
mysqldump -u[user] -p[password] --port=[port] [database-name] [versions-table-name] --protocol=TCP --no-data=false --skip-opt --skip-comments --no-create-info | \
sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' | \
sed -e "/INSERT INTO \`versions\` VALUES ('core'/c\\INSERT INTO \`versions\` VALUES ('core','$version');" >> "$dumpfile"
git add "$dumpfile"
# --- Finished
exit 0
Change [user], [password], [port], [database-name], [versions-table-name]
This script is executed automatically by git on each commit. If you commit a tag, the new version is saved to the dump under the tag name. If there are no changes in the database, nothing is committed. Make sure the script is executable :)
Your install script can take the SQL from this dump, and developers can easily track database changes.
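The replay step itself can stay tiny; a minimal sketch with the same placeholders as the hook above:
# install/upgrade: replay the schema dump kept in the repository
mysql -u[user] -p[password] --port=[port] [database-name] < install/database.sql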

Automating PostgreSQL output to csv

mohpc04pp1: /h/u544835 % psql arco
Welcome to psql 8.1.17, the PostgreSQL interactive terminal.
Type: \copyright for distribution terms
\h for help with SQL commands
\? for help with psql commands
\g or terminate with semicolon to execute query
\q to quit
WARNING: You are connected to a server with major version 8.3,
but your psql client is major version 8.2. Some backslash commands,
such as \d, might not work properly.
dbname=> \o /h/u544835/data25000.csv
dbname=> select url from urltable where scoreid=1 limit 25000;
dbname=> \q
This is taken from a link online and is basically what I have been doing, but what I need is a script I can use to produce CSV files daily.
The aim is for the script to connect to the db, run the \o and query commands, and then close the connection,
but I'm having trouble scripting it so that it goes into the psql arco database and then runs those queries.
The command line to connect to the db is psql arco; once the script is in that database, it should perform those commands to automate writing a query to a csv file.
If anyone can get me started or point me towards reading material to get past that bit, it will be duly appreciated.
I'm running all this off a standard Windows XP machine, ssh'ing to a SLES web server that holds my PostgreSQL database, running psql version 8.1.17.
First of all you should fix your setup. As it turns out, we are dealing with PostgreSQL 8.1 here. This version reached end of life in 2010. You need to seriously think about upgrading - or at least remind the guys running the server. The current version is 9.1.
The command you are looking for:
psql arco -c "\copy (select url from urltable where scoreid=1 limit 25000) to '/h/u544835/data25000.csv'"
Assuming your db is named "arco". Adjusted for changed question (including changed port).
I now see version 8.1 popping up in your question, but it's all contradictory. You need Postgres 8.2 or later to use a query (instead of a table) with the \copy meta-command.
Details about psql in the fine manual.
Alternative approach that should work with obsolete PostgreSQL 8.1:
psql arco -o /h/u544835/data25000.csv -t -A -c 'SELECT url FROM urltable WHERE scoreid = 1 LIMIT 25000'
Find some more info about these command line options under this related question on dba.SE.
With function (syntax compatible with 8.1)
Another way would be to create a server side function (if you can!) that executes COPY from a temp table (old syntax - works with pg 8.1):
CREATE OR REPLACE FUNCTION f_copy_file()
RETURNS void AS
$BODY$
BEGIN
CREATE TEMP TABLE u_tmp AS (
SELECT url FROM urltable WHERE scoreid = 1 LIMIT 25000
);
COPY u_tmp TO '/h/u544835/data25000.csv';
DROP TABLE u_tmp;
END;
$BODY$
LANGUAGE plpgsql;
And then from the shell:
psql arco -c 'SELECT f_copy_file()'
Change the separator
\f sets the field separator. I quote the manual again:
-F separator
--field-separator=separator
Use separator as the field separator for unaligned output.
This is equivalent to \pset fieldsep or \f.
Or you can change the column separator in Excel; here are the instructions from MS.
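Combining those options, a rough one-liner that should produce comma-separated output even on an old psql (unaligned, tuples only, comma as field separator; path and query taken from the question):
psql arco -t -A -F',' -o /h/u544835/data25000.csv \
  -c 'SELECT url FROM urltable WHERE scoreid = 1 LIMIT 25000'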
Thanks to Erwin's help and a link he posted for me to read up on, I managed to combine the two to get:
#!/bin/sh
dbname='arco'
username='' # If you actually supply a username, you need to add the -U switch!
psql $dbname $username << EOF
\f ,
\o /h/u544835/showme.csv
SELECT * FROM storage;
EOF
which will write my queries to a csv file etc for me.
From the output above, the query results are not being separated, so if I load the file straight into Excel everything stays in the same column, which means I'm having a problem with the delimiter.
I've tried tab as a delimiter, and also , and ;, but none of them separate the output.
Is there an option I can use to see which delimiter my psql is using, or a different way of dumping the data from a query into a file that Excel can read, with each field in its own column?
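One detail that is easy to miss: \f only affects unaligned output, so with the default aligned format the separator is silently ignored. A tweaked version of the script above that switches to unaligned, tuples-only output (a sketch, same placeholder paths):
#!/bin/sh
dbname='arco'
psql "$dbname" << EOF
-- unaligned output, so the field separator below is actually used
\a
-- comma as field separator
\f ,
-- tuples only: no header line and no row-count footer
\t
\o /h/u544835/showme.csv
SELECT * FROM storage;
EOF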

PostgreSQL - batch + script + variable

I am not a programmer, I am struggling a bit with this.
I have a batch file that connects to my PostgreSQL server and then runs a SQL script. Everything works as expected. My question is how to pass a variable (if possible) from one to the other.
Here is my batch file:
set PGPASSWORD=xxxx
cls
@echo off
C:\Progra~1\PostgreSQL\8.3\bin\psql -d Total -h localhost -p 5432 -U postgres -f C:\TotalProteinImport.sql
And here's the script:
copy totalprotein from 'c:/TP.csv' DELIMITERS ',' CSV HEADER;
update anagrafica
set pt=(select totalprotein.resultvalue from totalprotein where totalprotein.accessionnbr=anagrafica.id)
where data_analisi = '12/23/2011';
delete from totalprotein;
This is working great, now the question is how could I pass a variable that would carry the date for data_analisi?
Like in the batch file, "Please enter date", and then the value is passed to the sql script.
You could create a function out of your SQL script like this:
CREATE OR REPLACE FUNCTION f_myfunc(date)
RETURNS void AS
$BODY$
CREATE TEMP TABLE t_tmp ON COMMIT DROP AS
SELECT * FROM totalprotein LIMIT 0; -- copy table-structure from table
COPY t_tmp FROM 'c:/TP.csv' DELIMITERS ',' CSV HEADER;
UPDATE anagrafica a
SET pt = t.resultvalue
FROM t_tmp t
WHERE a.data_analisi = $1
AND t.accessionnbr = a.id;
-- Temp table is dropped automatically at end of session
-- In this case (ON COMMIT DROP) after the transaction
$BODY$
LANGUAGE sql;
You can use language SQL for this kind of simple SQL batch.
As you can see I have made a couple of modifications to your script that should make it faster, cleaner and safer.
Major points
For reading data into an empty table temporarily, use a temporary table. Saves a lot of disc writes and is much faster.
To simplify the process I use your existing table totalprotein as template for the creation of the (empty) temp table.
If you want to delete all rows of a table use TRUNCATE instead of DELETE FROM. Much faster. In this particular case, you need neither. The temporary table is dropped automatically. See comments in function.
The way you updated anagrafica.pt you would set the column to NULL, if anything goes wrong in the process (date not found, wrong date, id not found ...). The way I rewrote the UPDATE, it only happens if matching data are found. I assume that is what you actually want.
Then ask for user input in your shell script and call the function with the date as parameter. That's how it could work in a Linux shell (as user postgres, with password-less access using the IDENT method in pg_hba.conf):
#! /bin/sh
# Ask for date. 'YYYY-MM-DD' = ISO date-format, valid with any postgres locale.
echo -n "Enter date in the form YYYY-MM-DD and press [ENTER]: "
read date
# check validity of $date ...
psql db -p5432 -c "SELECT f_myfunc('$date')"
-c makes psql execute a single SQL command and then exit. I wrote a lot more on psql and its command line options yesterday in a somewhat related answer.
The creation of the corresponding Windows batch file remains as an exercise for you.
Call under Windows
The error message tells you:
Function tpimport(unknown) does not exist
Note the lower case letters: tpimport. I suspect you used mixed case letters to create the function. So now you have to enclose the function name in double quotes every time you use it.
Try this one (edited quotes!):
C:\Progra~1\PostgreSQL\8.3\bin\psql -d Total -h localhost -p 5432 -U postgres
-c "SELECT ""TPImport""('%dateimport%')"
Note how I use single and double quotes here. I guess this could work under Windows. See here.
You made it hard for yourself when you chose to use mixed case identifiers in PostgreSQL - a folly which I never tire of warning against. Now you have to double quote the function name "TPImport" every time you use it. While perfectly legit, I would never do that. I use lower case letters for identifiers. Always. This way I never mix up lower / upper case and I never have to use double quotes.
The ultimate fix would be to recreate the function with a lower case name (just leave away the double quotes and it will be folded to lower case automatically). Then the function name will just work without any quoting.
Read the basics about identifiers here.
Also, consider upgrading to a more recent version of PostgreSQL; 8.3 is a bit rusty by now.
psql supports textual replacement variables. Within psql they can be set using \set and used using :varname.
\set xyz 'abcdef'
select :'xyz';
?column?
----------
abcdef
These variables can be set using command line arguments also:
psql -v xyz=value
The only problem is that these textual replacements always need some fiddling with quoting as shown by the first \set and select.
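Applied to the batch-file question above, the date could be handed over that way instead of wrapping everything in a function; a small sketch in Unix-shell style (the variable name is made up, and the :'target_date' form needs a psql recent enough to support it):
# pass the date to the script as a psql variable ...
psql -d Total -h localhost -p 5432 -U postgres -v target_date=2011-12-23 -f TotalProteinImport.sql
# ... and inside TotalProteinImport.sql reference it as a properly quoted literal:
#   WHERE data_analisi = :'target_date';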
After creating the function in Postgres, you must create a .bat file in the bin directory of your Postgres version, for example C:\Program Files\PostgreSQL\9.3\bin. Here you write:
@echo off
cd C:\Program Files\PostgreSQL\9.3\bin
psql -p 5432 -h localhost -d myDataBase -U postgres -c "select * from myFunction()"