How to sync multiple perforce workspaces

I have multiple workspaces in Perforce, say w1, w2, w3, ... all with different mappings that may or may not point to different folders in the same depot(s). I want to write a .bat file that syncs them automatically and in sequence, so as not to put stress on the server.
Ideally, I want to start this off automatically and have it sync w1 first, then sync w2 once w1 is done, and so on. Assume I don't have any environment variables set, so if they're necessary, please let me know.
How would I go about doing this?

If you don't want to set up any P4 environment variables, you could use the global options and do something like this:
p4 -u <user> -P <password> -p <port> login
p4 -u <user> -P <password> -p <port> -c <workspace1> sync //path/to/sync/...
p4 -u <user> -P <password> -p <port> -c <workspace2> sync //other/path/...
p4 -u <user> -P <password> -p <port> -c <workspace3> sync //yet/another/path/...
If you set the P4USER, P4PASSWD, and P4PORT environment variables (see the p4 set command), then you can clean it up a little to look like this:
p4 login
p4 -c <workspace1> sync //path/to/sync/...
p4 -c <workspace2> sync //other/path/...
p4 -c <workspace3> sync //yet/another/path/...
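Since a .bat file runs each command to completion before starting the next, putting the syncs on consecutive lines already serializes them, which is what the question asks for. A minimal sketch of such a batch file using the global options (the user, password, port, workspace names, and depot paths are placeholders):
@echo off
rem Connection settings shared by every command below (placeholders).
set P4ARGS=-u myuser -P mypassword -p perforce:1666
p4 %P4ARGS% login
rem Each sync finishes before the next one starts, so the server only
rem ever services one workspace at a time.
p4 %P4ARGS% -c w1 sync //depot/projectA/...
p4 %P4ARGS% -c w2 sync //depot/projectB/...
p4 %P4ARGS% -c w3 sync //depot/projectC/...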

Related

Postgres: copy one remote DB to another

I'm trying to copy a database from one remote server to another one. I've tried several different commands from my terminal (macOS):
"pg_dump -U postgres -d [DB] -f [DB].sql"
"pg_dump -U postgres -d [DB] -h [Host] -f [DB].sql"
But nothing works. I get errors like "pg_dump: error: connection to database [DB] failed: FATAL: database [DB] does not exist".
Any ideas how to solve this problem? I've tried editing pg_hba.conf, but that didn't work either.
The procedure to make this work is:
The first sub-option below outputs a plain-text file; the second a custom-format (binary) file.
a) pg_dump -C -h host_name -U user_name -d database_name -f database.sql
-C tells pg_dump to include the command to create the database on restore.
b) pg_dump -Fc -h host_name -U user_name -d database_name -f database.out
To restore, you need different programs:
a) For the plain-text option (a), do:
psql -d postgres -h host_name -U user_name -f database.sql
You need to connect to an existing database, postgres in this case; the commands added by -C above will then create the database (database_name) and connect to it for the rest of the operation.
b) For the custom-format option (b):
pg_restore -C -d postgres -h host_name -U user_name database.out
Note: just specify the dump file (database.out); do not use -f.
More options and details can be found in the documentation for pg_dump and pg_restore; look in the Notes section at the bottom of each page for examples.
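Since the question is about copying from one remote server to another, the two steps can also be combined: pipe a plain-text pg_dump straight into psql on the destination, with no intermediate file. A sketch assuming both hosts accept your credentials (host names are placeholders; use ~/.pgpass or PGPASSWORD so neither end prompts mid-pipe):
pg_dump -C -h source_host -U postgres -d database_name | psql -h dest_host -U postgres -d postgres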

perforce root directory path

How can I get the Perforce root directory path? I've searched online and tried solutions such as
p4 -F %clientRoot% -ztag info
However, the results returned were empty. But when I run this command:
p4 clients -u jmartini
I get these results:
Client jma_HP001 2017/10/19 root C:\projects\john 'Created by jmartini. '
How can I simply get the root directory path from the command line? I would expect my results to be this:
C:\projects\john
If p4 info doesn't return the current client root, your shell does not have P4CLIENT set correctly. To fix this, you can do:
p4 set P4CLIENT=jma_HP001
From this point on, other commands (including the p4 -F %clientRoot% -ztag info you tried to run first) will return results relative to that client workspace.
If you want to just get the client root out of the clients command you can do:
p4 -F %domainMount% clients -u jmartini
or:
p4 -Ztag -F %Root% clients -u jmartini
Note that if the user owns multiple clients this will get you multiple lines of output.
To figure out the formatting variables you can use with the -F flag, try running commands with the -e or -Ztag global options:
p4 -e clients -u jmartini
p4 -Ztag clients -u jmartini
More on the -F flag in this blog article: https://www.perforce.com/blog/fun-formatting
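For example, once P4CLIENT is set, the root can be captured for use elsewhere in a script (a sketch assuming a Unix shell):
CLIENT_ROOT=$(p4 -F %clientRoot% -ztag info)
echo "Client root is: $CLIENT_ROOT"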

pg_dumpall not working when started with QProcess

I want to copy my data and tables from one Postgres installation to the other; the source server listens on port 5432, the destination server on port 5433. User myUser is a superuser on both installations.
Postgres "pg_dumpall" does not working when start it with QProcess
but the command works in windows cmd, this here:
pg_dumpall -p 5432 -U myUser | psql -U myUser -d myDbName -p 5433
But not from Qt code using QProcess:
QProcess *startProgram = new QProcess();
startProgram->start("pg_dumpall -p 5432 -U myUser | psql -U myUser -d myDbName -p 5433");
startProgram->waitForFinished() returns true, startProgram->exitCode() returns 1, and startProgram->exitStatus() returns 0.
Either way, my data and tables are not copied to the destination.
Creating db with QProcess works by using:
startProgram->start("createdb -p 5433 -U myUser myDbName");
Yeah, it's a bit annoying; I was trying to do the same thing with ls | grep <pattern> type commands, which spawn multiple processes...
I came up with this for Linux:
if (QProcess::startDetached("xfce4-terminal -x bash -c \"ls -l | grep main > out\""))
{
qDebug("ok\n");
}
else
{
qDebug("failed\n");
}
So basically if I break that down:
QProcess runs xfce4-terminal (or whichever terminal you want) with the execute parameter -x:
xfce4-terminal -x <command to execute>
This then executes bash with the command parameter -c (in escaped quotes):
bash -c \"bash command\"
Finally the bash command:
ls -l | grep main > out
So for your application you could substitute the final command (part 3) with:
pg_dumpall -p 5432 -U myUser | psql -U myUser -d myDbName -p 5433
I am assuming you are using Linux? (There is a similar possibility for Windows, which uses cmd instead of a terminal.) Also, you can probably just replace xfce4-terminal with gnome-terminal, which is perhaps more common, but you might need to check that -x is the same... IIRC it is.
There is probably a nicer way to do this.... but I wanted to harness the power of bash, so this seemed the logical way to do it.
Further: I think you can just do this:
QProcess::startDetached("bash -c \"ls -l | grep main > out\"")
And get rid of the terminal part (works for simple stuff like ls), but I am not sure if all the paths and what-not are set up... worth a go, as it is a little neater and removes your reliance on any particular terminal...
Thank you! Yes, the pipe was the problem.
On Windows this works for me:
QProcess *startProgram = new QProcess();
startProgram->start("cmd /c \"pg_dumpall -p 5432 -U myUser | psql -U myUser -d myDbName -p 5433\"");

Loading PostgreSQL Database Backup Into Docker/Initial Docker Data

I am migrating an application into Docker. One of the issues I am bumping into is: what is the correct way to load initial data into PostgreSQL running in Docker? My typical methods of restoring a database backup file are not working. I have tried the following ways:
gunzip -c mydbbackup.sql.gz | psql -h <docker_host> -p <docker_port> -U <dbuser> -d <db> -W
That does not work, because PostgreSQL is prompting for a password, and I cannot enter a password because psql is reading data from STDIN. I cannot use the $PGPASSWORD environment variable, because any environment variable I set on my host is not set in my container.
I also tried a similar command to the one above, except using the -f flag to specify the path to a SQL backup file. This does not work because the file is not on my container. I could copy the file to my container with the ADD statement in my Dockerfile, but this does not seem right.
So, I ask the community. What is the preferred method on loading PostgreSQL database backups into Docker containers?
I cannot use the $PGPASSWORD environment variable, because any environment variable I set on my host is not set in my container.
I don't use Docker, but your container looks like a remote host in the command shown, with psql running locally. So PGPASSWORD never has to be set on the remote host, only locally.
If the problem boils down to adding a password to this command:
gunzip -c mydbbackup.sql.gz |
psql -h <docker_host> -p <docker_port> -U <dbuser> -d <db> -W
you may submit it using several methods (in all cases, don't use the -W option to psql):
hardcoded in the invocation:
gunzip -c mydbbackup.sql.gz |
PGPASSWORD=something psql -h <docker_host> -p <docker_port> -U <dbuser> -d <db>
typed on the keyboard:
echo -n "Enter password:"
read -s PGPASSWORD
export PGPASSWORD
gunzip -c mydbbackup.sql.gz |
psql -h <docker_host> -p <docker_port> -U <dbuser> -d <db>
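Another method is a ~/.pgpass file, which psql consults automatically when the server asks for a password; each line is host:port:database:user:password, and the file must have 0600 permissions. A sketch with placeholder values matching the options above:
docker_host:5432:mydb:dbuser:secret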
Note about the -W or --password option to psql.
The point of this option is to ask for a password to be typed first thing, even if the context makes it unnecessary.
It's frequently misunderstood as the equivalent of the -p option of mysql. This is a mistake: while -p is required on password-protected connections, -W is never required and actually gets in the way when scripting.
-W, --password
Force psql to prompt for a password before connecting to a
database.
This option is never essential, since psql will automatically
prompt for a password if the server demands password
authentication. However, psql will waste a connection attempt
finding out that the server wants a password. In some cases it is
worth typing -W to avoid the extra connection attempt.

Which pgdump format is best for small storage size and fast restore?

This is a first-time foray into PostgreSQL backups (db dumps), and I've been researching the different pg_dump formats, other pg_dump options, and pg_dumpall. For a Postgres beginner looking to take an hourly dump (overwriting the previous dump) of two databases that each contain table triggers and two different schemas, what would be the backup format and options to easily achieve the following:
Small file size (single file per db or ability to choose which db to restore)
Easy to restore as clean db (with & without same db name[s])
Easy to restore on different server (user maybe different)
Triggers are disabled on restore and re-enabled after restore.
Include example commands to backup and restore.
Any other helpful pg_dump/pg_restore suggestions welcome.
This command will create a custom-format dmp file. (As written it dumps the whole database; add the -s flag if you want only the structure — tables, columns, triggers, views, etc. — which produces a small file and takes just a few minutes.)
pg_dump -U "dbuser" -h "host" -p "port" -F c -b -v -f ob_`date +%Y%m%d`.dmp dbname
ex: pg_dump -U thames -h localhost -p 5432 -F c -b -v -f ob_`date +%Y%m%d`.dmp dbname
This command will take a backup of the complete database:
pg_dump -h localhost -U "dbuser" "dbname" -Fc > "pathfilename.backup"
ex: pg_dump -h localhost -U thames thamesdb -Fc > "thamesdb.backup"
and for restore you can use:
pg_restore -h localhost -U "user" -d "dbname" -v "dbname.backup"
ex: pg_restore -h localhost -U thames -d thamesdb -v "thamesdb.backup"
To take a backup of selected tables (the -t pattern uses regular expressions):
pg_dump -t '(A|B|C)'
For full details you can visit the pg_dump help page; there are many options out there.
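On the trigger requirement in the question: pg_restore's --disable-triggers flag disables triggers during the load and re-enables them afterwards; it applies only to data-only restores and needs superuser rights. A sketch reusing the placeholder names above:
pg_restore --data-only --disable-triggers -h localhost -U thames -d thamesdb "thamesdb.backup"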
If you want to take your backups hourly, I would think you should be using log archiving instead of pg_dump.
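For reference, a minimal sketch of what switching on WAL archiving looks like in postgresql.conf (the archive directory is a placeholder, and you still need a periodic base backup as the starting point for recovery):
wal_level = replica
archive_mode = on
archive_command = 'cp %p /mnt/backup/wal/%f'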