Download of an email attachment on Jenkins - email

I have been testing the Jenkins "Poll Mail Trigger" plugin to trigger a project via email. To develop the project I need to download the attached file that is sent.
In the project settings, I enable the only download option that exists: "Download to a timestamped Directory".
Also, the only information regarding the download is this: "The Attachments field allows you to determine what to do with email attachments. If attachments are downloaded, the pmt_attachmentsDirectory variable will be set to the download directory."
Then, after doing several tests without configuring any directory for the download of the attached file (assuming that the download would be automatic), I proceeded to change the environment variables in the Jenkins configuration. I added the following:
Name: pmt_attachmentsDirectory
Value: D:\
So, what I'm trying to do is save the file to my drive D:\.
The file is never downloaded to the specified disk, and just for verification, I am trying to move the file with the following Windows command:
move "% pmt_attachmentsDirectory%" "D: \ Users \ wrodriguez \ Downloads \ Test_Programs"
And finally it gives me the following output: "The system cannot find the specified file."
So, I am really lost here; can anybody help me with this issue? Is there another plugin for Jenkins that I can use to download an attachment?
Thanks for your help.

I haven't worked on that plugin (or Jenkins) in 5 years, so forgive my rustiness.
Looking at the code, the plugin creates a timestamped folder, and saves any attachments there.
It then injects an environment variable named pmt_attachmentsDirectory into the Jenkins job instance's environment variables - the value is the absolute path of the folder (it's not configurable).
Troubleshooting:
have you enabled saving Attachments in the Plugin options?
check the job instance's environment variables - does it have pmt_attachmentsDirectory listed?
print the value of pmt_attachmentsDirectory - e.g. echo "%pmt_attachmentsDirectory%" - what does it say?
does the directory exist?
does the directory contain any files?
does the "View Polling Log" have any errors?
do the Jenkins logs have any errors?
Other notes:
"% pmt_attachmentsDirectory%" - shouldn't have a space e.g. should be "%pmt_attachmentsDirectory%"


AzCopy ignore if source file is older

Is there an option to handle the following situation?
I have a pipeline with a Copy Files task implemented in it; it is used to upload a static HTML file from git to a blob. Everything works perfectly. But sometimes I need this file to be changed in the blob storage (using hosted application tools). So, the question is: can I "detect" if my git file is older than the target blob file and ignore this file for the copy task, leaving it untouched? My initial idea was to use Azure file copy and its "Optional Arguments" textbox. However, I couldn't find the required option in the documentation. Does it allow such things? Or should this case be handled some other way?
I think you're looking for the ifSourceNewer value for the --overwrite option.
--overwrite string Overwrite the conflicting files and blobs at the destination if this flag is set to true. (default true) Possible values include true, false, prompt, and ifSourceNewer.
More info: azcopy copy - Options
Agree with ickvdbosch. The ifSourceNewer value for the --overwrite option could meet your requirements.
error: couldn't parse "ifSourceNewer" into a "OverwriteOption"
Based on my test, I could reproduce this issue in the Azure File Copy task.
It seems that the ifSourceNewer value can't be set for the Overwrite option in the Azure File Copy task.
Workaround: you could use a PowerShell task to run an azcopy script that uploads the files with --overwrite=ifSourceNewer.
For example:
azcopy copy "filepath" "BlobURLwithSASToken" --overwrite=ifSourceNewer --recursive
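To make the placeholders concrete, the destination is usually a blob or container URL with a SAS token appended; the account, container, and token below are placeholders for your own values:
# Upload a single file, overwriting the blob only if the local copy is newer.
azcopy copy "./static/index.html" "https://<account>.blob.core.windows.net/<container>/index.html?<SAS-token>" --overwrite=ifSourceNewer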
For more detailed info, you could refer to this doc.
For the issue with the Azure File Copy task, I suggest you submit a feedback ticket at the following link: Report task issues.

Using p4 zip and unzip to export files from one Perforce server to another

I was trying to export files, along with their revision history, inside my depot folder from a 2015.2 Perforce server to a 2019 server. Also, I would want Perforce to create new users on my new server corresponding to the committers/submitters on my original 2015 repo.
Perforce replication looked like overkill for my current task, and then I came across this article on Perforce's website that mentioned p4 zip.
This looked like it will solve my problem, but the article has a few issues I could not understand.
Let's say I am moving data from server1_ip:port --> server2_ip:port
I am currently following these steps
Making a zip of the folder to be copied using p4 remote my_remote_spec, setting:
Address: server1_ip:port
DepotMap: //depot/... //depot2/...
Then running p4 -p server1_ip:port zip -o test.zip -r my_remote_spec -A //depot/... But on this step I get a permission denied error. This is weird to me because the user, although not super/admin, has access to the files I ask to get zipped.
Also, when I did try with a super user, I could not find test.zip even though I was not shown any errors.
Isn't the above command supposed to generate a zip file inside the directory from which I run it?
Is the unzip command supposed to be run after a p4 login by a user of the second server?
Lastly, from the document, why is a third port, 1667, mentioned in the transfer of files between servers running on 1666 and 1777?
On this step I get a permission denied error. This is weird to me because the user, although not super/admin, has access to the files I ask to get zipped.
This is expected:
C:\Perforce\test>p4 help zip
zip -- Package a set of files and their history for use by p4 unzip
...
The zip command requires super permission granted by p4 protect.
Isn't the above command supposed to generate a zip file inside the directory from which I run it?
Similar to p4 admin checkpoint, the zip file is written to the server machine (relative to the server root, if you don't specify an absolute path), rather than being transferred to the local client directory. This is not explicitly stated in the documentation (which seems like an oversight), but if you look in the root directory of the server where you ran the zip, you should find your test.zip there.
Is the unzip command supposed to be run after a p4 login by a user of the second server?
Yes, any time you run a command against a particular server, you will need to be logged in to that server. In the case of p4 unzip you will need at least admin permission on the second server.
Lastly, from the document, why is a third port, 1667, mentioned in the transfer of files between servers running on 1666 and 1777?
I'm pretty sure that's a typo; whoever wrote the article started off using ports 1666 and 1777, changed their mind halfway through, and didn't proofread. :)
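Putting that together, a rough sketch of the whole sequence could look like the following. The server addresses and the remote spec name come from the question; the user names and the copy step are assumptions (zip needs super on the source server, unzip needs at least admin on the target):
# 1. Log in to the source server as a user with super permission and package
#    the files and their history (the zip command is the one from the question).
p4 -p server1_ip:port -u source_super_user login
p4 -p server1_ip:port -u source_super_user zip -o test.zip -r my_remote_spec -A //depot/...
# 2. The zip is written on the source server machine, relative to its server
#    root, so copy it to wherever you will run the import from (scp is just one option).
scp source-host:/path/to/server-root/test.zip .
# 3. Log in to the target server as a user with at least admin permission and
#    import the package.
p4 -p server2_ip:port -u target_admin_user login
p4 -p server2_ip:port -u target_admin_user unzip -i test.zip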

How to deploy releases automatically to gitlab using ci

I'm currently trying to figure out how to deploy a GitLab project automatically using CI. I managed to run the build stage successfully, but I'm unsure how to retrieve those builds and push them to releases.
As far as I know, it is possible to use rsync or webhooks (for example Git-Auto-Deploy) to get the build. However, I failed to apply these options successfully.
For publishing releases I read https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/api/tags.md#create-a-new-release, but I'm not sure I understand the required path schema correctly.
Is there any simple complete example to try out this process?
One way is indeed to use webhooks:
There are tons of different possible solutions to do that. I'd go with a sh script which is invoked by the hook.
How to intercept your webhook is up to the configuration of your server; if you have php-fpm installed, you can use a PHP script.
When you create a webhook in your GitLab project (Settings -> Webhooks) you can specify for which kind of events you want the hook (in our case, a new build) and a secret token so you can verify the script has been called by GitLab.
The PHP script can be something like this:
<?php
// Check token
$security_file = parse_ini_file("../token.ini");
$gitlab_token = $_SERVER["HTTP_X_GITLAB_TOKEN"];
if ($gitlab_token !== $security_file["token"]) {
    echo "error 403";
    exit(0);
}
// Get data
$json = file_get_contents('php://input');
$data = json_decode($json, true);
// We only want successful builds of the deploy stage on master
if ($data["ref"] !== "master" ||
    $data["build_stage"] !== "deploy" ||
    $data["build_status"] !== "success") {
    exit(0);
}
// Execute the deploy script:
shell_exec("/usr/share/nginx/html/deploy.sh 2>&1");
I created a token.ini file outside the webroot, which is just one line:
token = supersecrettoken
In this way the endpoint can be called only by GitLab itself. The script then checks some parameters of the build, and if everything is OK it runs the deploy script.
The deploy script itself is also very basic, but there are a couple of interesting things:
#!/bin/bash
# See 'Authentication' section here: http://docs.gitlab.com/ce/api/
SECRET_TOKEN=$PERSONAL_TOKEN
# The path where to put the static files
DEST="/usr/share/nginx/html/"
# The path to use as temporary working directory
TMP="/tmp/"
# Where to save the downloaded file
DOWNLOAD_FILE="site.zip";
cd $TMP;
wget --header="PRIVATE-TOKEN: $SECRET_TOKEN" "https://gitlab.com/api/v3/projects/774560/builds/artifacts/master/download?job=deploy_site" -O $DOWNLOAD_FILE;
ls;
unzip $DOWNLOAD_FILE;
# Whatever, do not do this in a real environment without any other check
rm -rf $DEST;
cp -r _site/ $DEST;
rm -rf _site/;
rm $DOWNLOAD_FILE;
First of all, the script has to be executable (chmod +x deploy.sh) and it has to belong to the webserver's user (usually www-data).
The script needs an access token (which you can create here) to access the data. I inserted it as an environment variable:
sudo vi /etc/environment
in the file you have to add something like:
PERSONAL_TOKEN="supersecrettoken"
and then remember to reload the file:
source /etc/environment
You can check that everything is alright by doing sudo -u www-data echo $PERSONAL_TOKEN and verifying the token is printed in the terminal.
Now, the other interesting part of the script is where the artifact is. The last available build of a branch is reachable only through the API; they are working on exposing it in the web interface so you can always download the latest version from the web.
The url of the API is
https://gitlab.example.com/api/v3/projects/projectid/builds/artifacts/branchname/download?job=jobname
While you can imagine what branchname and jobname are, the projectid is a bit trickier to find.
It is included in the body of the webhook as projectid, but if you do not want to intercept the hook, you can go to the settings of your project, section Triggers, where there are examples of API calls: you can determine the project id from there.
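As an illustration, the same call the deploy script makes with wget looks roughly like this with curl; the project id, branch name, and job name are placeholders:
# Download the artifacts of the last successful build of <branchname> for job
# <jobname>; <projectid> is the numeric project id discussed above.
curl --header "PRIVATE-TOKEN: $PERSONAL_TOKEN" --output artifacts.zip "https://gitlab.example.com/api/v3/projects/<projectid>/builds/artifacts/<branchname>/download?job=<jobname>"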

Accessing Teamcity Artifact with Dynamic name

I want to download a TeamCity artifact via powershell. It needs to be the last successful build of a specific branch.
I've noticed two common url paths to access the artifacts. One seems to be
/repository/download/BUILD_TYPE_EXT_ID/.lastSuccessful/ARTIFACT_PATH
The problem is that the file name at the end depends on the release version. Within TeamCity there is syntax to specify all files: *.msi. Is there any way to specify an artifact starting with FileName-{version.number}.msi when trying to access this URL?
EDIT:
The other url I noticed is for the REST API.
http://teamcity/guestAuth/app/rest/builds/branch:[BRANCH],buildType:[BUILD TYPE],status:SUCCESS,state:finished/artifacts/[BUILD PATH]
The problem is that I can't download artifacts from there; if I want to download an artifact I have to use the current build id. The above URL gives me the following URL to download the artifact: /guestAuth/app/rest/builds/id:[Build ID]/artifacts/content/[Artifact Path].
I can use the first REST URL to eventually get the second through the returned XML, but I would prefer a more straightforward approach.
Unfortunately, as TeamCity artifacts are not browsable, the usual workarounds like wget recursive download or wildcards are not applicable.
Using wildcards in wget or curl query
How do I use Wget to download all Images into a single Folder
One workaround you could try is formatting the link in the job, saving the link to a text file and storing that as an artifact as well, with a static name. Then you just need to download that text file to get the link.
I found you can format the artifact URL in a TeamCity job, in a command line step, like this:
%teamcity.serverUrl%/repository/download/%system.teamcity.buildType.id%/%teamcity.build.id%:id/<path_to_artifact>
You can write this to a file by doing:
echo %teamcity.serverUrl%/repository/download/%system.teamcity.buildType.id%/%teamcity.build.id%:id/myMsi-1.2.3.4.msi > msiLink.txt
Now you have an artifact with a constant name, that points to the installer (or other artifact) with the changing name.
If you use the artifact msiLink.txt you don't need to use the REST interface (it's still two calls, both through the same interface).
You can easily download the latest version from batch/cmd by using:
wget <url_server>/repository/download/BUILD_TYPE_EXT_ID/.lastSuccessful/msiLink.txt --user #### --password ####
set /P msi_url=<msiLink.txt
wget %msi_url% --user #### --password ####
Hope it helps.
Update:
Sorry I just realized the question asked for PowerShell:
$WebClient = New-Object System.Net.WebClient
$WebClient.Credentials = New-Object System.Net.NetworkCredential("yourUser", "yourPassword")
$WebClient.DownloadFile( "<url_server>/repository/download/BUILD_TYPE_EXT_ID/.lastSuccessful/msiLink.txt", "msiLink.txt" )
$msi_link = [IO.File]::ReadAllText(".\msiLink.txt")
$WebClient.DownloadFile( $msi_link, "yourPath.msi" )

Ctools do not show up in pentaho UI

I am using Pentaho CE 5 on Windows. I would like to use CTools, but I can't make them show up in the File -> New menu.
Being behind a proxy, I cannot use the Marketplace plugin, so I have tried a manual installation.
First, I tried to use the ctools-installer.sh. I have run the following command line in cygwin (wget and unzip are installed):
./ctools-installer.sh -s /cygdrive/d/Users/[user]/Mes\ Programmes/pentaho/biserver-ce/pentaho-solutions/ -w /cygdrive/d/Users/[user]/Mes\ programmes/pentaho/biserver-ce/tomcat/webapps/pentaho/
The script starts, asks me what module I want to install, and begins the downloads.
For each module, I get output like this (set -x added to the script):
echo -n 'Downloading CDF...' Downloading CDF...+ wget -q --no-check-certificate 'http://ci.analytical-labs.com/job/Webdetails-CDF-5-Release/lastSuccessfulBuild/artifact/bi-platform-v2-plugin/dist/zip/dist.zip'
-O .tmp/cdf/dist.zip SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc syswgetrc = C:\Program Files (x86)\GnuWin32/etc/wgetrc
'[' '!' -z '' ']'
rm -f .tmp/dist/marketplace.xml
unzip -o .tmp/cdf/dist.zip -d .tmp End-of-central-directory signature not found. Either this file is not a zipfile, or it
constitutes one disk of a multi-part archive. In the latter case
the central directory and zipfile comment will be found on the last
disk(s) of this archive. unzip: cannot find zipfile directory in
.tmp/cdf/dist.zip,
and cannot find .tmp/cdf/dist.zip.zip, period.
chmod -R u+rwx .tmp
echo Done Done
Then the script ends. I have seen on this page (pentaho-bi-suite) that this is the normal output. Nevertheless, it seems a bit strange to me, and when I start my Pentaho server (login: admin/password), I cannot see any new tools in the menus.
After a look at a few other tutorials and the script itself, I downloaded the .zip snapshots for every tool and unzipped them into the system directory of my Pentaho server. Same result.
I would like to make the .sh script work; what can I try or adjust?
Thanks
EDIT 05/06/2014
I checked the dist.zip files downloaded by the script and they are all empty. It seems that wget cannot fetch the zip files, and therefore the installation fails.
When I try to get any webpage through wget, it fails. I think it is because of the proxy.
Here is my .wgetrc file, located in my user's cygwin home folder:
use_proxy=on
http_proxy=http://[url]:[port]
https_proxy=http://[url]:[port]
proxy_user=[user]
proxy_password=[password]
How could I make this work?
EDIT 10/06/2014
In the end, I changed my network connection settings to bypass the proxy. It seems that there is an offline mode for the installer, so one can download all the needed files in a proxy-free environment and then run the script offline.
I guess this is related to the -r option.
I consider this post solved, since it is not a CTools issue anymore.
It is difficult to identify the issue in the above procedure, but you can refer to this blog; he is a key member of Pentaho itself.
You can manually install the components from http://www.webdetails.pt/ctools/, or if you have Pentaho 5.1 or above, you can add the following parameters to the CATALINA_OPTS option (in start-pentaho.bat or start-pentaho.sh):
-Dhttp.proxyHost= -Dhttp.proxyPort= -Dhttp.nonProxyHosts="localhost|127.0.0.1|10.*.*.*"
http://docs.treasuredata.com/articles/pentaho-dataintegration#tips-how-can-i-use-pentaho-through-a-proxy
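For example, in start-pentaho.sh the flags might be appended like this (the proxy host and port below are placeholders for your own values):
# Pass the proxy settings to the JVM so plugin downloads can go through the proxy.
CATALINA_OPTS="$CATALINA_OPTS -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080 -Dhttp.nonProxyHosts=\"localhost|127.0.0.1|10.*.*.*\""
export CATALINA_OPTS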