Does the Postgres client psql have functionality like the @@ of Oracle SQL*Plus?
For those who are wondering what @@ does: it allows calling a SQL script via a path relative to the script that invokes it.
Quote from the manual:
\ir or \include_relative filename
The \ir command is similar to \i, but resolves relative file names differently. When executing in interactive mode, the two commands behave identically. However, when invoked from a script, \ir interprets file names relative to the directory in which the script is located, rather than the current working directory.
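As a quick sketch of the difference (the file layout and names here are hypothetical):

```sql
-- /project/sql/main.sql, invoked as: psql -d mydb -f /project/sql/main.sql
-- \i would resolve the path below against psql's current working directory,
-- so it would only work if psql were started from /project/sql/.
-- \ir resolves it against the directory containing main.sql instead:
\ir helpers/functions.sql
```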
There is a script.sh file
set FABRIC_CFG_PATH=<some path>
set CORE_PEER_LOCALMSPID=<some id>
If I run this script on Windows, the environment variables do not get set.
Whereas if I set the env. variables using the cmd approach,
e.g., in a Windows cmd session:
set FABRIC_CFG_PATH=<some path>
It works fine.
So how can I set environment variables on Windows through a shell script file?
Since your intent is to define current-process-only environment variables (rather than persistently defined ones, which on Windows are stored in the registry), you need to use a script file / batch file that runs in-process, so that environment variables defined in it are seen by the script's caller.
Therefore:
If the caller is a cmd.exe session, you must use a batch file: a plain-text file with filename extension .cmd (or, less preferably, .bat[1]) that uses cmd.exe syntax.
If the caller is a PowerShell session, you must use a PowerShell script: a plain-text file with filename extension .ps1 that uses PowerShell syntax.
Note: While you can call a .cmd file (batch file) from PowerShell too (but not directly vice versa), this will not work as intended, because it necessarily runs in a (cmd.exe) child process, whose environment variables aren't seen by the PowerShell caller.
As for .sh files: they have no predefined meaning on Windows, but may be defined by third-party applications, such as Git Bash. In the case of the latter, invoking a .sh file passes it to the POSIX-compatible Bash shell, which has its own syntax. More importantly, invoking such a file won't work as intended when called from either cmd.exe or PowerShell, because Bash must run in a child process, and child processes cannot set environment variables for their parents.
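This child-process limitation is not specific to Windows; it can be sketched with any POSIX shell (DEMO_VAR is a made-up variable name used purely for illustration):

```shell
# A child process gets a copy of the environment; changes it makes
# are never written back to the parent.
export DEMO_VAR=parent
sh -c 'DEMO_VAR=child; export DEMO_VAR'   # child modifies only its own copy
echo "$DEMO_VAR"                          # prints "parent"
```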
cmd.exe / batch-file example:
Create a file named envVars.cmd, for instance, and place the following lines in it:
@echo off
:: Note: Do NOT use `setlocal` here
set "FABRIC_CFG_PATH=C:\path\to\some directory\config"
set "CORE_PEER_LOCALMSPID=42"
Then, from your cmd.exe session / another batch file, call the file as follows to make the environment variable-definitions take effect for the current process (assuming the file is in the current directory):
.\envVars.cmd
You will then be able to refer to the newly defined variables as %FABRIC_CFG_PATH% and %CORE_PEER_LOCALMSPID%.
PowerShell example:
Create a file named envVars.ps1, for instance, and place the following lines in it:
$env:FABRIC_CFG_PATH='C:\path\to\some directory\config'
$env:CORE_PEER_LOCALMSPID=42
Then, from a PowerShell session / another PowerShell script, call the file as follows to make the environment variable-definitions take effect for the current process (assuming the file is in the current directory):
./envVars.ps1
You will then be able to refer to the newly defined variables as $env:FABRIC_CFG_PATH and $env:CORE_PEER_LOCALMSPID.
[1] See this answer.
After some study of executables/batch files on Windows, I have come to the conclusion that I need to write a .bat batch file that uses the set command to set the environment variables as I desire.
I am experiencing issues with running this psql script on ubuntu terminal to map the mimic3 database to omop common data model, the code used is
psql "mimic3" --set=OMOP_SCHEMA="$OMOP_SCHEMA" -f "mimic-omop/etl/etl.sql"
The code stops running at the last TRUNCATE TABLE command, where it should call the SQL script titled pg_function, but it gives this error:
psql:mimic-omop/etl/etl.sql:28: etl/pg_function.sql: No such file or directory
I have attached a section of the SQL file below as proof that it really exists:
The last part of the script has this code, where it calls all the SQL files listed in my screenshot below:
I am following the instructions in this link: https://github.com/MIT-LCP/mimic-omop/blob/master/README-run-etl.md
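For what it's worth, this looks like the \i vs. \ir issue described at the top of this page: assuming etl.sql pulls in its sub-scripts with \i and paths like etl/pg_function.sql (an assumption based on the error message), those paths resolve against psql's current working directory rather than the script's location. Running psql from inside the mimic-omop directory should make the relative paths line up:

```shell
# Assumption: etl.sql includes sub-scripts via \i etl/pg_function.sql etc.,
# which psql resolves against the current working directory.
cd mimic-omop
psql "mimic3" --set=OMOP_SCHEMA="$OMOP_SCHEMA" -f etl/etl.sql
```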
Can I run multiple Db2 commands in one command?
I.e., from cmd:
db2cmd /c db2 /c connect to sample user sample_user using sample_pwd /c
"SELECT * FROM table;"
I also tried the following:
db2 connect to sample user db2admin using pwd; EXPORT TO result.csv OF DEL
MODIFIED BY NOCHARDEL SELECT * FROM alarms;
but it didn't work, giving the following error:
SQL0104N An unexpected token "EXPORT" was found following
"". Expected tokens may include: "NEW". SQLSTATE=42601
As an example, for Vertica this can be done with the vsql tool this way:
vsql -h localhost -U user -w pwd -c "SELECT * FROM alarms" -A -o
"alarms.csv" -F "|" -P footer=off -q
You appear to be using the Microsoft Windows db2cmd.exe.
Your question has nothing to do with Db2 per se; it is instead about CMD (cmd.exe) scripting syntax, a legacy scripting language for batch files by Microsoft that still works on Windows 10, and which also works in db2cmd.exe.
In a db2cmd.exe shell you can use the "&&" sequence between distinct Db2 commands (and each such command must have the db2 prefix). Additionally each such command line has to escape any of the characters that are special characters to the shell itself. By default the escape character is a caret (^) symbol.
For example: db2 connect to dbname && db2 ^"export to alarms.csv of del ... select ^* from alarms^" && db2 connect reset
(I show the ^ before any " that you might want to pass to the Db2 CLP.)
But that && will require that each command returns a zero exit code, which might not be what you want, although it is usually the safest option. If a previous command fails then subsequent commands will not run.
If you want to tolerate some non-zero exit codes, use bracketing ( ... ) to group commands, and then use && or & outside the brackets, depending on your requirements. You can read about CMD scripting in any good book; there are plenty of examples online.
However, when scripting for Db2 on Windows, it can be much wiser to put all of the commands (without the db2 prefix) into a plain text file, and then ask the Db2 CLP to execute the text file via the syntax db2 -tvf textfile. Doing it this way lets you add conditional logic inside the text file, handle exceptions, avoid shell-escaping requirements, etc. If you encapsulate all your logic inside a script, it makes it easier to test, and also easier to run from a single db2cmd /c ... command line.
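As a sketch of that approach, reusing the commands from the question (the file name export.sql is illustrative):

```sql
-- export.sql -- run with: db2cmd /c db2 -tvf export.sql
-- (-t makes ";" the statement terminator, -v echoes each command, -f reads the file)
CONNECT TO sample USER db2admin USING pwd;
EXPORT TO result.csv OF DEL MODIFIED BY NOCHARDEL SELECT * FROM alarms;
CONNECT RESET;
```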
If you want to make a batch file (*.bat or *.cmd) that does not need the db2cmd prefix to be invoked, you can alter your batch file to have a few lines at the start of the batch file to re-execute itself via db2cmd.exe. This works better if your db2cmd.exe is already on the PATH environment variable, but if that is not the case then you can fully-qualify the absolute pathname to your db2cmd.exe inside the batch file. The lines to add at the start of the batch file are:
@rem re-execute via db2cmd if running from cmd.exe
@echo off
if "%DB2CLP%"=="" db2cmd /c /i /w "%0" %* & goto :EOF
db2 connect to sample user db2admin using pwd
if errorlevel 1 @echo "Failed to connect to database" && @goto :EOF
db2 "EXPORT TO result.csv OF DEL MODIFIED BY NOCHARDEL SELECT * FROM alarms"
if errorlevel 3 @echo "Export from Db2 failed" && @goto :EOF
Additionally, on Windows you can use PowerShell scripting to manipulate Db2 databases, and you can also use the Windows Subsystem for Linux to run Unix-style shell scripts in some configurations.
Your example's most direct comparison in Db2-land would be clpplus, which lets you specify the database, but you also have to provide login information (including a password, or you can be prompted for it).
Within the db2cmd and db2 framework you have a couple options but most likely will want to use a script file.
One option: Set the registry variable DB2DBDFT to your default database. Personally, I dislike this option because it causes an implicit connection to a database that you may not have intended.
One option: Put your series of commands into a file and run that file. This is the more traditional way of running multiple commands. Commands can be terminated with a semi-colon and newline (it understands DOS and Unix differences here). You can use a different terminator by using -td # (for example). You would then invoke db2 -tf file.sql.
One option: A batch file. It's similar to above but you'd use the db2cmd environment to execute a batch that has the db2 commands in it. db2cmd gets you an appropriate environment for working with Db2. If you connect to a database in this environment you stay connected until you issue a CONNECT RESET, a TERMINATE, are forcibly disconnected, or your environment exits. So your batch file would simply have:
db2 connect to sample user db2admin using pwd
db2 "EXPORT TO result.csv OF DEL MODIFIED BY NOCHARDEL SELECT * FROM alarms"
(note the quotes to keep the command line from substituting all the filenames in the current working directory where the * is)
Two options.
1-st:
db2cmd /i /w /c "db2 ^"connect to sample^" & db2 ^"values 1^" & db2 connect reset"
2-nd:
You may set the Windows system environment variable DB2CLP to the value **$$** and then run db2 commands from Windows cmd directly afterwards, like this:
db2 "connect to sample" & db2 "values 1" & db2 connect reset
I am making an automated script from the terminal that creates a file with the output of \l.
But I do not know where the \o command in PostgreSQL writes the file that it has made. The documentation doesn't say where.
I did read this, but no luck:
\o writes to the named file in the current working directory of psql. As you found out, this has some issues in automated environments.
This means you have basically two options:
use absolute paths with \o
Alternatively you can use \cd to set your current working directory inside your psql script.
In your particular case, however, did you know that psql -l gives you the same info? That may be easier to manage in an automated environment.
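A minimal sketch combining both suggestions (the directory and file names here are made up):

```sql
-- inside your psql script: pin the working directory, then capture \l
\cd /var/reports
\o databases.txt
\l
\o
```

From the shell, plain output redirection does the same job without a script file, e.g. psql -l > databases.txt.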
I am facing a problem in DB2. In my Oracle environment it was very easy for me to include multiple scripts in one master script, which were executed sequentially. e.g.:
Master.sql:
connect ....
@script1.sql
@script2.sql
Now I have to build up the same logic in DB2 LUW. Is there a simple way to include multiple scripts in one master script? I would like to have one single db2 call from the shell, which executes the master script and with it all subscripts.
Regards
Jan
There is nothing to stop you from creating a single file with multiple SQL batches. In the Windows world, it would look like this:
Note: First you initialize the db2 command prompt.
db2cmd -c -w -i %1.bat
With as many of these as you want in the .bat file:
db2 -txf c:\Example\db2html.sql
In Linux, the db2clp is included in the shell once you load the db2profile (. /home/db2inst1/sqllib/db2profile). In Windows, you need to call db2cmd in order to use the db2clp.
With an interactive db2clp, you cannot call db2 scripts via @scriptX; however, you can call them from the shell like
db2 -tvf script
However, if you use CLPPlus you can do almost everything you do in SQL*Plus. For more information: https://www.ibm.com/developerworks/community/blogs/IMSupport/entry/tech_tip_db2_s_new_clp_plus_utility?lang=en
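If CLPPlus is available, the Oracle-style master script from the question should carry over almost unchanged (a sketch; the connection string, port, and script names are placeholders):

```sql
-- master.sql -- run with something like: clpplus user@hostname:50000/dbname @master.sql
@script1.sql
@script2.sql
```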