How can I get the Perforce root directory path? I've searched online and tried solutions such as:
p4 -F %clientRoot% -ztag info
However, the results returned were empty. But when I run this command:
p4 clients -u jmartini
I get these results:
Client jma_HP001 2017/10/19 root C:\projects\john 'Created by jmartini. '
How can I simply get the root directory path from the command line? I would expect the result to be this:
C:\projects\john
If p4 info doesn't return the current client root, your shell does not have P4CLIENT set correctly. To fix this, you can do:
p4 set P4CLIENT=jma_HP001
From this point on, other commands (including the p4 -F %clientRoot% -ztag info you tried to run first) will return results relative to that client workspace.
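For example, once P4CLIENT points at jma_HP001, the command from the question should print the root shown in your clients output:
p4 -F %clientRoot% -ztag info
which would output:
C:\projects\john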
If you just want to get the client root out of the clients command, you can do:
p4 -F %domainMount% clients -u jmartini
or:
p4 -Ztag -F %Root% clients -u jmartini
Note that if the user owns multiple clients this will get you multiple lines of output.
To figure out the formatting variables you can use with the -F flag, try running commands with the -e or -Ztag global options:
p4 -e clients -u jmartini
p4 -Ztag clients -u jmartini
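For instance, p4 -Ztag clients -u jmartini prints each field on its own line, roughly like this (abridged; exact fields vary by server version, and the name after the ... is what you plug into -F):
... client jma_HP001
... Owner jmartini
... Root C:\projects\john
... Description Created by jmartini.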
More on the -F flag in this blog article: https://www.perforce.com/blog/fun-formatting
If I'd like to save the output from file1.sql to a new file file2.sql, I would use this command in terminal/cmd:
psql -U postgres -f file1.sql -o file2.sql
What if, though, I want file2.sql to be in a different folder?
If I try this command:
psql -U postgres -f file1.sql -o New/file2.sql
that won't automatically make a new folder and will give an error. The New folder needs to exist before I can do this.
I need to do this over many output files and many new folders. One obvious workaround would be to pre-create the required folders using Python, but really, is there no way PostgreSQL can create the folders for me?
Use mkdir -p.
It will create the directory New if it doesn't exist, and do nothing if it already exists. The && ensures that your psql command runs only if the mkdir command succeeds.
mkdir -p New && psql -U postgres -f file1.sql -o New/file2.sql
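If you need this for many files and folders, the same pattern works in a loop. A minimal sketch, assuming each script should write into a folder of the same name (the names below are placeholders):
for name in report1 report2 report3; do
  mkdir -p "$name" && psql -U postgres -f "$name.sql" -o "$name/output.sql"
done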
If you want to run OS commands from inside psql, use the \! <command> meta-command within file1.sql and then redirect the output via the \o meta-command:
\! mkdir -p New
\o New/file2.sql
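Putting that together, file1.sql could start like this (a sketch; the SELECT is just a placeholder for your real queries, and a bare \o at the end switches output back to stdout):
\! mkdir -p New
\o New/file2.sql
SELECT * FROM my_table;
\o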
Hi, I am trying to stop a workflow through pmcmd, but without success.
I am doing it through PowerShell:
&"$INFS_ROOT\pmcmd.exe" stopworkflow -usd DS -u $IFPC_USER -p $IFPC_PASS -sv ISUD -d DomainIF -f ("$Folder") ("$wf");
But every time I get this error:
ERROR: Option value cannot start with one leading '-'. Usage: pmcmd
stopworkflow
<<-service|-sv> service <-domain|-d> domain [<-timeout|-t> timeout]>
[<<-user|-u> username|<-uservar|-uv> userEnvVar>]
[<<-password|-p> password|<-passwordvar|-pv> passwordEnvVar>]
[<<-usersecuritydomain|-usd> usersecuritydomain|<-usersecuritydomainvar|-usdv> userSecuritydomainEnvVar>]
[<-folder|-f> folder] [<-runinsname|-rin> runInsName]
[-wfrunid workflowRunId] [-wait|-nowait] workflow
For example, getworkflowstatus works fine:
&"$INFS_ROOT\pmcmd.exe" getworkflowdetails -usd DS -u $IFPCUser -p $IFPCPass -sv ISUD -d DomainIF -f ("$Folder") ("$wf")
Can anyone help me with stopping a workflow through pmcmd? Thanks.
I've been seeing '-u' used a lot on the command line and I'd like to know what it does. For example, when it's used in:
git push -u origin master
or
mysql -u root
It sometimes means user, but each program defines its own meaning for each option.
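To see this, compare the long forms of the two examples above:
git push -u origin master    # here -u is short for --set-upstream
mysql -u root                # here -u is short for --user
The only reliable way to know is to check that program's --help output or man page.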
I have multiple workspaces in Perforce, say w1, w2, w3, ... all with different mappings that may or may not point to different folders in the same depot(s). I want to write a .bat file that syncs them automatically and in sequence, so as not to put stress on the server.
Optimally, I want to start this off automatically and have it first sync w1, after it's done have it sync w2, and so on. Assume I don't have any environment variables set, so if they're necessary, please let me know.
How would I go about doing this?
If you don't want to set up any P4 environment variables, you could use the global options and do something like this:
p4 -u <user> -P <password> -p <port> login
p4 -u <user> -P <password> -p <port> -c <workspace1> sync //path/to/sync/...
p4 -u <user> -P <password> -p <port> -c <workspace2> sync //other/path/...
p4 -u <user> -P <password> -p <port> -c <workspace3> sync //yet/another/path/...
If you set up the P4USER, P4PASSWD, and P4PORT P4 environment variables (see the p4 set command), then you could clean it up a little to look like this:
p4 login
p4 -c <workspace1> sync //path/to/sync/...
p4 -c <workspace2> sync //other/path/...
p4 -c <workspace3> sync //yet/another/path/...
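For reference, those variables can be set once per machine with p4 set (the values here are placeholders):
p4 set P4USER=<user>
p4 set P4PASSWD=<password>
p4 set P4PORT=<server>:<port>
Either version can be saved as-is into a sync.bat file; the commands run in order, so each workspace finishes syncing before the next one starts.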
I have many .sql files in a folder (/home/myHSH/scripts) on Linux Debian. I want to know the command to execute or run all the SQL files inside the folder against a PostgreSQL 9.1 database.
PostgreSQL information:
Database name=coolDB
User name=coolUser
Nice to have: a way to execute multiple SQL files through GUI tools like pgAdmin3 as well.
From your command line, assuming you're using either Bash or ZSH (in short, anything but csh/tcsh):
for f in *.sql; do
  psql coolDB coolUser -f "$f"
done
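If you would rather have the loop stop at the first script that fails, a small variation (assuming the standard ON_ERROR_STOP psql variable) could be:
for f in *.sql; do
  psql coolDB coolUser -v ON_ERROR_STOP=1 -f "$f" || break
done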
The find command combined with -exec or xargs can make this really easy.
If you want to execute psql once per file, you can use the -exec action like this:
find . -iname "*.sql" -exec psql -U username -d databasename -q -f {} \;
-exec will execute the command once per result.
The psql command allows you to specify multiple files by passing each file with its own -f argument. For example, you could build a command such as:
psql -U username -d databasename -q -f file1 -f file2
This can be accomplished by piping the result of find to xargs once to format the files with the -f argument, and then again to execute the command itself:
find . -iname "*.sql" | xargs printf -- ' -f %s' | xargs -t psql -U username -d databasename -q