I've started using Azure Pipelines, and my Django application runs tests which require a local PostgreSQL server.
I'm pretty stumped, as I cannot find any information in the MS documentation on how to change the agent's configuration to include a local PostgreSQL server.
It seems like a very simple thing to do, but I cannot seem to find the relevant documentation. Looking at the agent pool information, it lists MySQL as being installed locally.
How can I include a local PostgreSQL server in the configuration of the agent that will build my application and run tests?
Some options:
Create a Docker container with your testing prerequisites in it, and then run your tests in the container
Create your own self-hosted agent and install whatever software you need on it, and then use that instead of the Microsoft-hosted agents.
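For the first option, a common variant on Microsoft-hosted agents is to run PostgreSQL as a service container next to the job. A minimal sketch, assuming a YAML pipeline on an Ubuntu hosted image and Django settings that read their connection details from environment variables (the image tag, password and variable names are placeholders):

    resources:
      containers:
        - container: postgres
          image: postgres:13            # any recent PostgreSQL image
          env:
            POSTGRES_PASSWORD: postgres
          ports:
            - 5432:5432                 # publish the port to the agent host

    pool:
      vmImage: 'ubuntu-latest'

    services:
      postgres: postgres                # make the container available to the job

    steps:
      - script: |
          pip install -r requirements.txt
          python manage.py test
        displayName: Run Django tests against the service container
        env:
          DATABASE_HOST: localhost      # hypothetical names your settings.py would read
          DATABASE_PORT: '5432'
          DATABASE_PASSWORD: postgres

The service container is torn down when the job finishes, so every run gets a clean database.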
Related
In the FAQ of the UsePythonVersion task docs, it says that in order to use the task, you have to manually install the required version of Python under a specific folder structure in the Agent.ToolsDirectory folder.
This is pretty daunting. Isn't there a way to point Azure at a specific source (such as an Artifactory instance or something like that) and tell it to install Python from there?
If you use self-hosted agents on VMs, you enter the area where configuration management tools can be handy. For example, you can use:
PowerShell Desired State Configuration (there are quick tutorials on installing Python with DSC)
Ansible
Chef
However, there is no simple way to achieve this. You can do it manually, or build a solution to manage the software on your self-hosted machines. You can also reuse the repository Microsoft uses to build its hosted VM images and leverage that.
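Whichever tool you pick, what the UsePythonVersion task actually looks for is the agent's tool cache layout. A rough sketch of what your configuration management would have to lay down on each agent (the version number is purely illustrative, and AGENT_TOOLSDIRECTORY defaults to the agent's _work/_tool folder):

    # Run on the self-hosted agent (Linux shown); 3.11.4 is just an example version.
    PYTHON_VERSION=3.11.4
    TOOL_DIR="$AGENT_TOOLSDIRECTORY/Python/$PYTHON_VERSION/x64"
    mkdir -p "$TOOL_DIR"
    # Install or copy a relocatable Python build into $TOOL_DIR here,
    # e.g. pulled from an internal mirror such as an Artifactory instance.
    # Then drop the marker file the task checks for:
    touch "$AGENT_TOOLSDIRECTORY/Python/$PYTHON_VERSION/x64.complete"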
I have to stand up an Azure Pipelines agent for running UI tests that runs "As an interactive process" instead of "As a service". The servers running the UI tests are already prepped for auto logon and are always on. Can I install the agent on the same server where the tests are being run?
I believe you have already checked this document about how to configure a self-hosted agent to run UI tests.
I have installed an agent on a VM and I can run UI tests successfully on that VM.
If UI tests can run on the machine where the agent is installed, then they should also run on the server that runs the UI tests once the agent is installed on that server.
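For reference, an unattended configuration of a Windows agent that runs interactively with auto-logon (as that document describes) looks roughly like this; the URL, PAT, pool and account values are placeholders:

    .\config.cmd --unattended ^
        --url https://dev.azure.com/yourorg ^
        --auth pat --token <your-PAT> ^
        --pool Default --agent UITestAgent ^
        --runAsAutoLogon ^
        --windowsLogonAccount yourdomain\testuser ^
        --windowsLogonPassword <password>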
I'm trying to copy data from a PostgreSQL DB on an Ubuntu box that requires its clients' IPs to be whitelisted. With Azure Data Factory IPs changing all the time, and since I cannot install the self-hosted integration runtime because it is a Linux server, what other options are available for copying data from this PostgreSQL DB into an Azure SQL DB without having to worry about the IP addresses? Any suggestions or known solutions, please?
Based on the documentation, the ADF self-hosted integration runtime can't be installed on a Linux server; it can only be used on a Windows server.
By the way, this feature also won't be supported any time soon; please follow this feedback link.
The latest comment there: "Currently we don't have any plan on this yet. Could you share us your reasons why do you want Linux?"
As a workaround, I suggest you take a look at Azure DMS (Database Migration Service). Please see more detail about it in this link and this video.
Over the last few months I've become familiar with the AWS OpsWorks deployment process as it pertains to Node.js - deployment for Go seems to be another animal.
From what I've gathered, this is what I need to compile a successful Go deployment:
Install Go on the EC2 box
Pull the private repository from GitHub
Pull in all dependencies
Compile the main package for the box's arch
Start the binary with a couple of flags that I use
Everything I have read seems to tout the ease of Go deployments because dependencies are included in the binary, but that seems to imply that you are compiling the application in your development environment and pushing that binary up to the cloud. This doesn't seem like a process that works well across a development team.
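For what it's worth, "compiling in your development environment" usually just means cross-compiling for the target box; a one-line sketch (the output name is illustrative):

    # Build a Linux/amd64 binary for the EC2 box from any development machine
    GOOS=linux GOARCH=amd64 go build -o myapp .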
https://github.com/crowdmob/chef-golang-web-server-cookbook
I have been attempting to get the Chef scripts from CrowdMob working, but to no avail. I keep getting errors that look like this:
[2014-08-01T16:08:22+00:00] WARN: Cookbook 'templates' is empty or entirely chefignored at /opt/aws/opsworks/current/merged-cookbooks/templates
What is the proper way to deal with dependencies during deployment?
Are there any established practices for deploying Go onto AWS with Chef?
Use a continuous integration service like CircleCI, Travis, or your own Jenkins setup.
On the continuous integration service:
Add a GitHub post-commit hook.
Test and build the binary.
Create the zip file as an artifact.
At this point you can create a new version on Elastic Beanstalk using the AWS command line and the zip file produced by this build.
venv/bin/aws elasticbeanstalk create-application-version ...
Then just select which version to deploy from the EB dashboard.
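Fleshing that command out a little, the whole sequence might look like this (the bucket, application and version names are placeholders):

    # Upload the artifact and register it as a new application version
    aws s3 cp build.zip s3://my-deploy-bucket/build-42.zip
    aws elasticbeanstalk create-application-version \
        --application-name my-go-app \
        --version-label build-42 \
        --source-bundle S3Bucket=my-deploy-bucket,S3Key=build-42.zip
    # Optionally deploy it straight away instead of using the dashboard
    aws elasticbeanstalk update-environment \
        --environment-name my-go-app-env \
        --version-label build-42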
For simple services, using Chef is overkill IMHO. Docker offers a simple workflow.
Use the Docker container option, then use Elastic Beanstalk's command-line client to initialize your environment in the project root directory; from there you can simply do a 'git aws.push'.
With a correctly configured Dockerfile in your project pushed to EB, Elastic Beanstalk's Docker container platform will pull the correct image with Go installed, do a go get on your project's dependencies, and then compile and run your app. It sounds way more complicated than it is; in practice it's very easy.
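A minimal Dockerfile of the kind Elastic Beanstalk builds for you might look like this (a modules-era sketch; the Go version, port and binary name are placeholders):

    # Assumes a go.mod at the project root; EB builds this image and runs the container.
    FROM golang:1.21
    WORKDIR /app
    COPY . .
    RUN go build -o /bin/webapp .
    EXPOSE 8080
    CMD ["/bin/webapp"]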
Below is a link to a video walkthrough I did for running a simple Go web app on Elastic Beanstalk. The method for uploading the project does not use git; instead, I zip it up and upload it, but the git method is recommended (and is what I do) for automating deployment.
YouTube: How to run a go web app on Amazon's Elastic Beanstalk
I also had some problems setting up a good build process with Elastic Beanstalk and Go. I didn't want to use Docker, even though everyone seems to be going in that direction, so you can take a look at this project: https://github.com/battle-arena/heimdall
There you will find a custom setup using the Buildfile and the Procfile, and I rely on a CI system to build the release package.
Basically I do the following:
Hook the commits to a CI system
On the CI system, run the tests and, if all is good, run install.sh
install.sh will create a build folder and a structure that will be sent to Elastic Beanstalk with the aws-cli tool
Once it is sent to EB, the Buildfile will run build.sh, which basically extracts the compressed package with the proper structure and runs go get ./... and go build
The Procfile will run the generated binary
I think the result is pretty good, and you can use it with any CI tool.
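For reference, the Buildfile/Procfile pair described above can be as small as this (the "application" binary name is a placeholder for whatever build.sh produces):

    Buildfile:
        make: ./build.sh

    Procfile:
        web: bin/application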
Does anyone know of an automated way to deploy a web role to Azure with the "Enable Web Deploy" option enabled? We have an automated acceptance test process that deploys to Azure using Web Deploy to save time. But we would also like to automate the full deployment of the web role so that it could run off-hours on a less frequent basis.
We are currently using the WAPPSCmdlets module to automate full Azure deployments. However, neither this nor the newer official Azure Powershell cmdlets seem to expose a way to enable Web Deploy in new deployments.
What you'll need to do is create a startup task that does the following:
Download and install Web Deploy
Windows Azure Bootstrapper can help you to download and run the installer from a startup task.
Configure Web Deploy with PowerShell. You might want to start with this article: PowerShell scripts for automating Web Deploy setup
Running PowerShell from a startup task might seem tricky at first. If you run into trouble, take a look at this article: Azure Startup Tasks and Powershell: Lessons Learned
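The startup task itself is registered in the web role's ServiceDefinition.csdef; a minimal sketch (the cmd file name is a placeholder for a script that downloads Web Deploy with the bootstrapper and then invokes your PowerShell configuration script):

    <WebRole name="MyWebRole">
      <Startup>
        <Task commandLine="Startup\InstallWebDeploy.cmd"
              executionContext="elevated"
              taskType="background" />
      </Startup>
      <!-- sites, endpoints, etc. -->
    </WebRole>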
Keep in mind that this startup task should only run for CI deployments and not for your production deployments, so this might be something you need to take care of in your build process (you can use different Cloud projects in Visual Studio for example).