Post-commit hook failed (exit code 3) with output - Eclipse

I'm trying to call a Jenkins job remotely using a post-commit script. I'm currently committing code through Eclipse Kepler/Subversive/SVNKit Connector.
post-commit script:
#!/bin/sh
REPOS="$1"   # Subversion passes the repository path and the new revision to every post-commit hook
REV="$2"
if svnlook dirs-changed -r "$REV" "$REPOS" | grep -qEe '^trunk/'; then
    wget --post-data="job=APS-RemoteServerAction&token=SECRET&ACTION=deploy&ASSET_NAME=POST-COMMIT-TEST&DEPLOY_ENV=DEV&REVISION=$REV" "http://my.domain.com:8080/buildByToken/buildWithParameters"
fi
Screenshot of error through Eclipse:
Important notes:
The code does get committed properly; the repository browser shows the new revision.
The job runs on Jenkins; the build history shows that.
Every time I commit, I get this error message.
I tried adding the --quiet flag, but I got the same exit code.
I'm thinking it's due to wget and how I'm posting the values?
Edit #1
I would like to point out that I'm using the Jenkins Build Authorization Token Root Plugin. I switched from a GET (which works) to a POST because we will eventually move to HTTPS and I want to keep the token out of the URL.

I interpret the error message to mean that wget cannot write a file with the name buildWithParameters in its current directory. Use wget -O - to write the output to stdout instead.
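For example, the same call with the response sent to stdout instead of a local file:
wget -O - --post-data="job=APS-RemoteServerAction&token=SECRET&ACTION=deploy&ASSET_NAME=POST-COMMIT-TEST&DEPLOY_ENV=DEV&REVISION=$REV" "http://my.domain.com:8080/buildByToken/buildWithParameters"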

The error is (I think) because wget is trying to save the response to a file in the local directory. You just need to hit the endpoint to make Jenkins build, so I used --spider (doesn't download anything), --no-proxy (I was sometimes getting cached responses) and -q (no output, since svn will report it).
wget --post-data="job=APS-RemoteServerAction&token=SECRET&ACTION=deploy&ASSET_NAME=POST-COMMIT-TEST&DEPLOY_ENV=DEV&REVISION=$REV" "http://my.domain.com:8080/buildByToken/buildWithParameters" --spider --no-proxy -q

Related

SFTP from web service through Cygwin fails

I have a web page running on Apache which uses a mature set of Perl files for monitoring our workplace servers and applications. One of those tests goes through Cygwin's SFTP, lists files there and assesses them.
The problem I have is with SFTP itself. When I run part of the test manually from cmd as D:\cygwin\bin\bash.exe -c "/usr/bin/sftp -oIdentityFile=[privateKeyPath] -oStrictHostKeyChecking=no -b /cygdrive/d/WD/temp/list_SFTP.sh [user]@[hostname]", or invoke the very same set of Perl files from the command line, it works OK (returns the list of files as it should). When exactly the same code is run through the web page, it fails quickly and does not tell me anything. All I have is error code 255 and "Connection closed": no error stream, no verbose output, nothing, no matter what way of capturing an error I have used.
To cut a long story short, the culprit was the HOME path.
When run manually, either directly from cmd or through Perl, D:\cygwin\bin\bash.exe -c "env" reports HOME as HOME=/cygdrive/c/Users/[username]/, BUT the same command run through the web page reports HOME=/, i.e. root, apparently losing the home directory somewhere along the way.
With this knowledge the solution is simple: prepend the SFTP command with the proper home path (e.g. D:\cygwin\bin\bash.exe -c "export HOME=/cygdrive/c/Users/%USERNAME%/ ; /usr/bin/sftp -oIdentityFile=[privateKeyPath] -oStrictHostKeyChecking=no -b /cygdrive/d/WD/temp/list_SFTP.sh [user]@[hostname]") and you are good to go.
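To confirm the discrepancy yourself, the env check above can be narrowed down to the HOME variable; run this once from cmd and once through the web page and compare (same Cygwin path as in the question):
D:\cygwin\bin\bash.exe -c "env | grep ^HOME"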

Building Artifactory fails for Build Stage in Delivery Pipeline

I have created a toolchain, which downloads the code from the Bitbucket repository and builds the Docker image in IBM Cloud.
After the code builds the image, the build stage fails while preparing the build artifacts.
Error:
Preparing the build artifacts...
Customer script does not exist for the job, exitting
I have specified the Build archive directory as the folder name. Do I need to write any scripts for archiving?
That particular error occurs when one of our checks -- the existence of /home/pipeline/$TASK_ID/_customer_script.sh -- fails.
Archiving happens automatically, but that file needs to be present, as we use it as part of the traceability around how the artifact was created. Is it possible that the file is getting removed? (We will also look into removing the check or making it non-fatal, but that will take time.)
This issue appears to be caused by setting a working directory for the job. _customer_script.sh gets dropped into the working directory, but the script Simon is referring to (/opt/IBM/pipeline/bin/ids-buildables-notify.sh) only checks the top-level directory the code input is at (/home/pipeline/$TASK_ID/).
Three options to fix this, assuming you're doing a container registry job:
Run cp _customer_script.sh /home/pipeline/$TASK_ID in your script. The ids-buildables-notify.sh script does some grepping for your bx cr build call, so make sure that's still in there.
touch /home/pipeline/$TASK_ID/_customer_script.sh and export PIPELINE_IMAGE_URL=<your image url>. If PIPELINE_IMAGE_URL is set, the notify script doesn't bother with being clever, which I prefer.
Don't change the working directory.
A script which works for me:
#!/bin/bash
echo -e "Build environment variables:"
echo "REGISTRY_URL=${REGISTRY_URL}"
echo "REGISTRY_NAMESPACE=${REGISTRY_NAMESPACE}"
echo "IMAGE_NAME=${IMAGE_NAME}"
echo "BUILD_NUMBER=${BUILD_NUMBER}"
echo -e "Building container image"
set -x
# option 2 above: with PIPELINE_IMAGE_URL exported, the notify script skips its grep heuristics
export PIPELINE_IMAGE_URL=$REGISTRY_URL/$REGISTRY_NAMESPACE/$IMAGE_NAME:$BUILD_NUMBER
bx cr build -t $PIPELINE_IMAGE_URL .
set +x
# satisfy the existence check on /home/pipeline/$TASK_ID/_customer_script.sh
touch /home/pipeline/$TASK_ID/_customer_script.sh

In JMeter, HTTP request executed from command line fails, but passes in GUI mode

I have multiple HTTP requests under a Thread Group that were always passing until yesterday, whether executed in GUI or command-line mode on my Mac system.
Now, when executing in non-GUI (command-line) mode, one URL (launching the home page) always fails when executed on the slave systems from the master system,
but it works when executed on the master system itself.
I was trying some changes in jmeter.properties; I'm not sure if that has anything to do with the error I'm facing now.
My command-line instruction is as below:
sh Jmeter.sh -n -t R3Performance_Fragment.jmx -G ucount=5 -l Results/r1.csv -R 192.168.X.XX,192.168.X.XX
Not sure if I am missing something here, please let me know.

Powershell_script resource throws error: "Your session has expired, please login again."

I am trying to use Chef to pull a file from Perforce by calling p4 sync from a PowerShell script. As the title indicates, I am being plagued by this failure: "Your session has expired, please login again." From what I have gathered, it has something to do with the way the PowerShell script is run through Chef (using Invoke-Command?).
Here's what I have that is not working :(
powershell_script 'P4Sync' do
  cwd "C:\\Program Files\\Perforce"
  code <<-EOH
    &".\\p4.exe" set P4PORT=server:1234
    &".\\p4.exe" set P4USER=AUTOMATION_USER
    set shallNotPass 'AUTOMATION_USER_PASSWORD_TICKET'
    &".\\p4.exe" -d c:\\temp -P $shallNotPass client -o | &".\\p4.exe" -P $shallNotPass client -i
    set rootdir '//root/scripts'
    &".\\p4.exe" -P $shallNotPass sync $rootdir/script.bat
    &".\\p4.exe" -P $shallNotPass sync $rootdir/script.sh
  EOH
end
The other powershell_script resources that I have used (which are working) involve only PowerShell cmdlets, and not external executables.
Any suggestions would be appreciated! Also, if you care to share any other resources where I might have found this information on my own, it would also be helpful. I've spent quite a bit of time hunting the internet on this error, and haven't had much luck.
The error message is a Perforce authentication failure and suggests there's a problem with your AUTOMATION_USER_PASSWORD_TICKET. If that's actually a ticket (it should look like a hash rather than plaintext), the problem is most likely that it's expired -- by default a login ticket is only valid for 12 hours after the "p4 login" command used to acquire it.
See the documentation for "p4 login" for more on how tickets work:
http://www.perforce.com/perforce/r15.1/manuals/cmdref/p4_login.html
The easiest solution is probably to put AUTOMATION_USER in a group with an unlimited Timeout, then re-run "p4 login" to get a new ticket (which will never expire) and put that in your script.
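A rough sketch of that sequence (the group name is illustrative, and creating or editing groups requires appropriate admin access):
p4 group automation_users          # in the form: add AUTOMATION_USER under Users, set "Timeout: unlimited"
p4 -u AUTOMATION_USER login -a -p  # prompts for the password and prints a ticket valid from any host
Then paste the printed ticket into the script in place of AUTOMATION_USER_PASSWORD_TICKET.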

Commit hooks not printing any error message if script exits with code 0

My pre-commit hook calls a Perl script, commit_log.pl. The script does many pre-checks. Now I am trying to send out a mail after commit approval. We are not able to set up post-commit hooks due to some permission issues, so I am trying to call the send-mail from the pre-commit script itself.
In my commit_log.pl, if the exit code is zero, even the print is not working.
If the exit code is 1, everything works fine.
pre-commit:
REPOS="$1"                 # arguments Subversion passes to every pre-commit hook
TXN="$2"
SVNLOOK=/usr/bin/svnlook   # path assumed; adjust to your installation
log=`$SVNLOOK log -t "$TXN" "$REPOS"`
author=`$SVNLOOK author -t "$TXN" "$REPOS"`
CHANGED=`$SVNLOOK changed -t "$TXN" "$REPOS"`
/usr/bin/perl /isource/svnroot/fgw_ins/hooks/user/commit_log.pl "$log" "$author" "$CHANGED" "$0" 1>&2
# fail the commit if the Perl script reported a problem
if [ "$?" -eq 1 ]; then
    exit 1
else
    exit 0
fi
commit_log.pl (excerpt):
}
else
{
    print("Commit approved\n");    # this print itself is not working
    `python $path/send_mail.py $comment $committed_filepath`;
    exit 0;
}
Can't add much more to Avi's answer: STDOUT is swallowed by the hook. You will NEVER see STDOUT. STDERR is only seen by the client if the hook script returns a non-zero exit code -- which usually means the hook failed, and in a pre-commit hook prevents the commit.
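A minimal pre-commit sketch that demonstrates this behavior (messages are made up):
#!/bin/sh
echo "this line never reaches the client"              # STDOUT is always discarded
echo "this line reaches the client only on failure" >&2
exit 1    # non-zero exit: the commit is rejected and STDERR is relayed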
If you need to send mail out after a commit, and you can't use a post-commit hook, I suggest you use a continuous build system like Jenkins. You can have Jenkins watch your repository, and when it sees a new revision, send out email to those involved.
Jenkins is a continuous build system that can run a build after every commit, but there's no reason why you need to do a build (though it is usually a good idea anyway). Jenkins can be set up to run any action post-commit, so you could have Jenkins simply email those involved.
Yes, it's a bit of overkill to have an entire system like Jenkins just to send out email. Why not simply write your own script? However, you can download, install, and configure Jenkins in an hour or two. It will take you longer just to lay out what you think needs to be done.
Besides, once you have Jenkins, you'll find plenty of other uses for it.
I'm not sure where the standard output from the pre-commit hook is sent. According to the SVN book, standard error is sent back to the client, but only if there is an error (i.e. the hook exited with a non-zero exit code).
I would try writing to a specific location, rather than to standard output (e.g. /tmp/pre-commit.log for testing purposes).
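For instance, a quick sketch of that approach (the log path is just an example):
REPOS="$1"
TXN="$2"
# STDOUT is discarded by Subversion, so append diagnostics to a file instead
echo "pre-commit fired for txn $TXN in $REPOS at `date`" >> /tmp/pre-commit.log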
Also, in general, you should avoid as much as possible doing work in the pre-commit script that assumes the commit will succeed. The commit may still fail after the pre-commit script runs, such as during the commit itself, which is why the post-commit hook exists.