I am new to Jenkins and AWS. I have MongoDB scripts on an AWS EC2 instance. The first script needs to run before the Jenkins build and stores a snapshot of the DB. The second script needs to run post-build to restore that snapshot. The scripts are done and ready to be used. I just couldn't find an exact way to reach AWS from the build and implement this in a Jenkins job. Any help would be appreciated. Thanks
You can use Jenkins stages to do the pre-build operation, the build, and then the post-build operation. Within those stages you can use a plugin like SSH Pipeline Steps to remotely execute commands on your EC2 instance.
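A minimal sketch of such a Jenkinsfile, assuming the SSH Pipeline Steps plugin is installed; the host, user, key path and script paths below are placeholders for your own values:

// Placeholder connection details for the EC2 instance that hosts MongoDB.
def remote = [:]
remote.name = 'mongo-ec2'
remote.host = 'ec2-xx-xx-xx-xx.compute.amazonaws.com'
remote.user = 'ec2-user'
remote.identityFile = '/var/lib/jenkins/.ssh/ec2_key.pem'
remote.allowAnyHosts = true

pipeline {
    agent any
    stages {
        stage('Pre-build: snapshot DB') {
            steps {
                // First script: store the DB snapshot.
                sshCommand remote: remote, command: '/home/ec2-user/scripts/backup-mongo.sh'
            }
        }
        stage('Build') {
            steps {
                // Your normal build steps go here.
                sh 'echo build'
            }
        }
        stage('Post-build: restore DB') {
            steps {
                // Second script: restore the snapshot.
                sshCommand remote: remote, command: '/home/ec2-user/scripts/restore-mongo.sh'
            }
        }
    }
}

If the restore has to run even when the build fails, the last stage could instead be moved into a post { always { ... } } block.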
I'm setting up a multi-stage Azure DevOps YAML pipeline for a .NET Framework application.
Part of the pipeline will involve using the AWSPowerShellModuleScript task to configure load balancer rules in AWS.
My Task looks like so...
- task: AWSPowerShellModuleScript@1.7.0
  name: SetupLoadBalancerRules
  inputs:
    awsCredentials: 'My AWS Service Connection'
    regionName: 'ap-southeast-2'
    scriptType: 'filepath'
    filePath: 'pipeline-scripts/manage-aws-load-balancer-rules.ps1'
Everything is working correctly. However, the AWSPowerShellModuleScript tasks are quite slow to initialise. The PowerShell itself is very fast, but the task requires approximately 1.5 minutes to set up.
I'm running 2 of these tasks in different stages of my pipeline, so this adds 3 minutes to the total time. This may not seem like a lot, but the application itself is quite small, so the setup for these tasks is actually the most time consuming part of the pipeline.
As far as I can tell, it seems that the pipeline is starting a generic container, and then installing the AWS Powershell tools, every time it needs to run one of these tasks.
This seems very wasteful and inefficient, so I was wondering if there might be some better way to handle it, for example caching the built container after the PowerShell tools are installed, or using an existing image with the tools already installed, etc.
I'm very new to using the yaml pipelines, so I'm not sure what's possible.
I like my pipelines to be as efficient as possible, so it just bothers me that this repetitive install process is re-run every time I need to run a simple PowerShell script.
Also I should mention that I'm using a hosted DevOps agent... vmImage: 'windows-2019'
Just in case it helps. This is from the task log output...
Checking install status for AWS Tools for Windows PowerShell module.
AWS Tools for Windows PowerShell module not found.
Installing AWS Tools for Windows PowerShell module to current user scope
Name Version Source Summary
---- ------- ------ -------
nuget 2.8.5.208 https://onege... NuGet provider for the OneGet meta-package manager
So it determines that the AWS Tools are not installed, and then possibly uses nuget to install it??
I thought perhaps I could use a cache task to cache the install, but even if I could find where the tools are installed to, it seems unlikely that simply restoring the folder would be sufficient.
Using a Microsoft-hosted agent, each time you run a pipeline you get a fresh virtual machine, so the tool needs to be installed on every run.
A stage is one or more jobs, which are units of work assignable to the same machine. With Microsoft-hosted agents, each stage generally uses a separate agent, so the tool will be installed in each stage.
In short, Microsoft-hosted agents cannot cache tools between runs. To pre-install the tool, or avoid installing it every time, you could deploy self-hosted Windows agents and install the tool on every machine running the agent service.
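For example, assuming a self-hosted agent pool named SelfHostedWindows (a placeholder) whose machines already have the AWS Tools module installed, the pipeline (or an individual job) could target that pool and reuse the task from the question unchanged:

pool:
  name: 'SelfHostedWindows'  # placeholder: pool of self-hosted Windows agents with AWS Tools pre-installed

steps:
- task: AWSPowerShellModuleScript@1.7.0
  name: SetupLoadBalancerRules
  inputs:
    awsCredentials: 'My AWS Service Connection'
    regionName: 'ap-southeast-2'
    scriptType: 'filepath'
    filePath: 'pipeline-scripts/manage-aws-load-balancer-rules.ps1'

Because the module is already present on those machines, the ~1.5 minute install step is skipped on each run.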
I have a requirement to integrate JMeter scripts, checked in to a Git repository, with a DevOps pipeline so that I can run the JMeter scripts using a specific VM in Azure.
Basically, I should have all my jmx and csv files in a Git repository, and when I run the pipeline with the script name as a parameter, it should run the script on a specific VM (not with a static IP) and copy the .jtl to some storage.
What is the best way to achieve this?
With a DevOps pipeline so that I can run the JMeter scripts using a specific VM in Azure. What is the best way to achieve this?
If the specific VM exists before the current pipeline, you can consider installing a self-hosted agent there.
To do CI/CD using Azure Pipelines, we need at least one agent. If we use a Microsoft-hosted agent, it provides one fresh VM for us to run jobs. Since you need to run the script on your own specific VM, I suggest using a self-hosted agent. You can follow the steps here to install an agent on your own VM. (The steps are quite easy and only take several minutes.)
After making your VM a self-hosted agent, the pipeline will call your VM to run the jobs. Now your original issue turns into how to run JMeter locally from the command line. See similar issues here: Five Ways To Launch a JMeter Test without Using the JMeter GUI and Run .jmx file through command line ....
1. So now we can use a command-line task in the pipeline to run the JMeter-related commands shared in the similar topics above (see the sketch after this list). These jobs are done on your specific VM.
2. I'm not sure which location you want to copy the .jtl to, but you can use the Azure File Copy task to copy files to Microsoft Azure storage blobs or virtual machines (VMs), or a simple copy/xcopy command in your command-line task to copy files to another location on the same machine (the specific VM).
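A minimal sketch of such a command-line step, assuming JMeter is on the PATH of the self-hosted Windows VM; the paths tests/load-test.jmx, results/ and C:\jtl-backups are placeholders to replace with your own:

- script: |
    REM Run the test plan in non-GUI mode and write the results to a .jtl file.
    jmeter -n -t tests/load-test.jmx -l results/load-test.jtl
    REM Copy the .jtl to a placeholder backup location on the same machine.
    copy results\load-test.jtl C:\jtl-backups\
  displayName: 'Run JMeter test (non-GUI) and copy the .jtl'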
Hope all above helps :)
I have used the following task in an Azure CD pipeline.
The "Run Taurus" task is as follows.
Where "_WM WebClient TestArtifacts" is the Git/Azure Repos directory where the .jmx file is kept (in code).
I have written multiple PowerShell scripts to take a backup of the Azure API Management service. One script calls another and exports all the products and properties to my local machine.
I want to automate this process of backing everything up directly to Bitbucket. To achieve this I have configured Jenkins, which is running on a CentOS server, but I don't know how to automate things using it.
I tried installing the PowerShell plugin on Jenkins, but as I have multiple scripts with dependencies on one another, I just can't paste the whole thing as-is and run it.
So is there a way I can run these multiple scripts with the Jenkins PowerShell plugin instead of composing one single script and running that?
Second thing, should Jenkins be installed on my local machine rather than on the CentOS server in order to achieve this?
Or
does Jenkins have a plugin to link with my Azure account and export services to Bitbucket? (with the PowerShell scripts out of the picture)
Is there any other tool / alternative which is a little less complicated?
I integrated my GitHub repository with AWS CodePipeline, and that with Jenkins through the AWS CodePipeline plugin in Jenkins. Jenkins is installed on an EC2 server. I created an IAM role for the EC2 instance hosting my Jenkins. I also set up AWS CodePipeline Publisher as the post-build action.
However, my code from GitHub is taken in by AWS CodePipeline successfully (the Source stage is successful), but the Build stage fails with a timeout error after 1 hour.
When I checked the Jenkins workspace on the EC2 instance, the workspace for the project is empty.
That is, the code taken in from GitHub is not put into the Jenkins workspace by AWS CodePipeline.
Is this a problem with enabling security for Jenkins? I actually tried disabling security as well, but I got the same error.
Your help is really appreciated.
In the Build Triggers section, did you choose Poll SCM?
This is where you configure how often Jenkins should poll AWS CodePipeline for new tasks. For example: H/5 * * * * (every 5 minutes).
Something else that comes to mind is an issue with the credentials. If you open your Jenkins project, there should be an AWS CodePipeline Polling Log link on the left, below "Configure", and you should see an error there if the plugin is unable to poll.
First thing - Make sure the Jenkins instance running on EC2 has an IAM role with the related permissions to perform actions against AWS CodePipeline (a minimal policy sketch follows after the link below).
Second thing - Under the Build Triggers section, select Poll SCM and type five asterisks separated by spaces in the Schedule field.
Kindly follow the link for more details:
http://docs.aws.amazon.com/codepipeline/latest/userguide/getting-started-4.html#getting-started-4-get-instance
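As a rough sketch (not an exhaustive policy), the instance role needs at least the CodePipeline job-worker permissions the Jenkins plugin polls with; the wildcard resource here is an assumption you may want to narrow:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "codepipeline:PollForJobs",
        "codepipeline:AcknowledgeJob",
        "codepipeline:GetJobDetails",
        "codepipeline:PutJobSuccessResult",
        "codepipeline:PutJobFailureResult"
      ],
      "Resource": "*"
    }
  ]
}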
This is an old question but I had the same problem. After quite a bit of research, I figured out that in my setup, the input and output artifact names were missing.
Steps to check / fix the issue:
You will need the AWS CLI installed.
Use: aws codepipeline get-pipeline --name [pipeline name] > pipeline.json
Open pipeline.json and confirm that (see the JSON fragment below):
1. the output artifact in the Source stage is the same as the input artifact in the Build stage;
2. the output artifact in the Build stage is the same as the input artifact in the Beta stage (or whatever your deploy stage is).
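A minimal sketch of the relevant part of pipeline.json; the artifact and action names (MyApp, MyAppBuild, Jenkins) are placeholders:

"stages": [
  {
    "name": "Source",
    "actions": [
      { "name": "Source", "outputArtifacts": [ { "name": "MyApp" } ] }
    ]
  },
  {
    "name": "Build",
    "actions": [
      {
        "name": "Jenkins",
        "inputArtifacts": [ { "name": "MyApp" } ],
        "outputArtifacts": [ { "name": "MyAppBuild" } ]
      }
    ]
  }
]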
You can check whether things are working fine by going to your S3. In the bucket for your CodePipeline, you should see a folder with the same name as the output artifact in your Source stage. Inside it there will be various zip files. Download one and unzip it to check that the upload from GitHub was proper.
I am guessing that the issue happened for me because I began with a 2-step pipeline and then added the build process afterwards. It may happen to you too if you do not have the Jenkins server ready before creating the pipeline and hence add that stage later.
I have a 'master' server (actually a Docker container) where I want to install Jenkins in order to link it (with a webhook) to a GitHub repo, so that every time a developer pushes code, Jenkins will auto-pull and build the code.
The thing is that there is an arbitrary number of extra 'slave' servers that need to have exactly the same code as the master.
I am thinking of writing an Ansible playbook to be executed by Jenkins every time the webhook fires, to send the code to the slaves.
Can Jenkins do something like this?
Do I need to make the same setup to all the slaves with Jenkins and webhooks?
EDIT:
I want to run a locustio master server on the server that is going to have Jenkins. My load tests are going to be pulled from GitHub there, but the same code needs to reside on the slaves in order to run in distributed mode.
The short answer to your question is that Jenkins certainly has the ability to run Ansible playbooks. You can add a build step to the project that receives the webhook, and that build step will run the playbook.
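For example, an "Execute shell" build step (or an sh step in a pipeline) could run something like the following; the inventory and playbook names are hypothetical placeholders for whatever your repo actually contains:

# Push the freshly pulled code out to the slave servers.
# inventory/slaves.ini and deploy-code.yml are hypothetical names.
ansible-playbook -i inventory/slaves.ini deploy-code.yml

There is also a dedicated Ansible plugin for Jenkins if you prefer a configured build step over a raw shell command.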
Jenkins can trigger another job, even on slaves. If I understand your issue correctly, you just need something like this: https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Remote+Trigger+Plugin
You can trigger a build of a job by name. There is also another useful plugin called Artifactory, which manages your packages and serves them. This means you can build your code once and share it, so the slaves can access your build and run their jobs.