Configure an Azure VM with Terraform and Ansible via the Azure CLI, without SSH keys, in Azure Pipelines

I'm using Azure Pipelines with Terraform to create dynamic infrastructure (an unknown number of VMs, NICs, etc.) in Azure.
I also tried to use Ansible in a separate stage to configure my VMs, but that turned out to be complicated, because I had to create SSH keys and make them available to Ansible. I want to configure my Azure VMs without SSH keys and use the Azure CLI instead. I know that Ansible is agentless and uses SSH to communicate with VMs, but I hope there is a module that allows configuring a VM through the Azure CLI.
Alternatively, if anyone already has an existing project like this that is well organised and requires less work, I would be grateful for a pointer.
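For example, I know the Azure CLI itself can run a script on a VM over the management plane, with no SSH at all (myvm and my-rg are placeholders):

# Runs through the Azure guest agent, so no SSH keys or open ports are needed.
az vm run-command invoke \
  --resource-group my-rg \
  --name myvm \
  --command-id RunShellScript \
  --scripts "sudo apt-get update && sudo apt-get install -y nginx"

Is there a clean way to drive something like this from Ansible, or from the pipeline itself?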

Related

Mounting Azure Blob Storage to Azure Databricks without using cluster

We have a requirement that while provisioning the Databricks service through a CI/CD pipeline in Azure DevOps, we should be able to mount a blob storage to DBFS without connecting to a cluster. Is it possible to mount object storage to DBFS by using a bash script from Azure DevOps?
I looked through various forums, but they all mention doing this using dbutils.fs.mount; the problem is that we cannot run this command in an Azure DevOps CI/CD pipeline.
I will appreciate any help on this.
Thanks
What you're asking is possible, but it requires a bit of extra work. We've tried various approaches in our organisation, and I've been working with Databricks for a while. The solution that works best for us is to write a bash script that makes use of the databricks-cli in your Azure DevOps pipeline. The approach we have is as follows:
Retrieve a Databricks token using the token API
Configure the Databricks CLI in the CI/CD pipeline
Use Databricks CLI to upload a mount script
Create a Databricks job using the Jobs API and set the mount script as the file to execute
The steps above are all contained in a bash script that is part of our Azure DevOps pipeline.
Setting up the CLI
Setting up the Databricks CLI without any manual steps is now possible since you can generate a temporary access token using the Token API. We use a Service Principal for authentication.
https://learn.microsoft.com/en-US/azure/databricks/dev-tools/api/latest/tokens
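A sketch of how this step can look in the pipeline's bash script, assuming the service principal already has access to the workspace and that jq is available on the agent; the workspace URL is a placeholder, and the GUID is the well-known resource ID of the Azure Databricks first-party application:

# Log in as the service principal and get an AAD token for Azure Databricks.
az login --service-principal -u "$SP_APP_ID" -p "$SP_SECRET" --tenant "$TENANT_ID"
AAD_TOKEN=$(az account get-access-token \
  --resource 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d \
  --query accessToken -o tsv)

# Exchange the AAD token for a temporary Databricks token via the Token API.
DATABRICKS_HOST="https://westeurope.azuredatabricks.net"   # placeholder, use your region
DATABRICKS_TOKEN=$(curl -s -X POST "$DATABRICKS_HOST/api/2.0/token/create" \
  -H "Authorization: Bearer $AAD_TOKEN" \
  -d '{"lifetime_seconds": 3600, "comment": "ci-pipeline"}' | jq -r .token_value)

# The databricks-cli reads these environment variables automatically.
export DATABRICKS_HOST DATABRICKS_TOKEN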
Create a mount script
We have a Scala script that follows the mount instructions; this can be Python as well. See the following link for more information:
https://docs.databricks.com/data/data-sources/azure/azure-datalake-gen2.html#mount-azure-data-lake-storage-gen2-filesystem
Upload the mount script
In the Azure DevOps pipeline the databricks-cli is configured by creating a temporary token using the Token API. Once this step is done, we're free to use the CLI to upload our mount script to DBFS or import it as a notebook using the Workspace API.
https://learn.microsoft.com/en-US/azure/databricks/dev-tools/api/latest/workspace#--import
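Either route is a one-liner with the CLI. A sketch, assuming the environment variables from the previous step and a local mount script named mount_storage.py (a hypothetical name):

# Option A: copy the script to DBFS.
databricks fs cp mount_storage.py dbfs:/scripts/mount_storage.py --overwrite

# Option B: import it as a notebook via the Workspace API.
databricks workspace import --language PYTHON --format SOURCE --overwrite \
  mount_storage.py /Shared/mount_storage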
Configure the job that actually mounts your storage
We have a JSON file that defines the job that executes the "mount storage" script. You can define a job to use the script/notebook that you've uploaded in the previous step. You can easily define a job using JSON, check out how it's done in the Jobs API documentation:
https://learn.microsoft.com/en-US/azure/databricks/dev-tools/api/latest/jobs#--
At this point, triggering the job should create a temporary cluster that mounts the storage for you. You should not need to use the web interface, or perform any manual steps.
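For illustration, a hypothetical job.json that runs the notebook imported above on a small throwaway cluster (spark_version and node_type_id are examples; use values valid in your workspace), plus the CLI calls to create and trigger the job:

cat > job.json <<'EOF'
{
  "name": "mount-storage",
  "new_cluster": {
    "spark_version": "7.3.x-scala2.12",
    "node_type_id": "Standard_DS3_v2",
    "num_workers": 1
  },
  "notebook_task": { "notebook_path": "/Shared/mount_storage" }
}
EOF

# Create the job, then trigger it; the run brings up the temporary
# cluster, executes the mount script, and the cluster is disposed of.
JOB_ID=$(databricks jobs create --json-file job.json | jq -r .job_id)
databricks jobs run-now --job-id "$JOB_ID"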
You can apply this approach to different environments and resource groups, as we do. For this we make use of Jinja templating to fill out variables that are environment- or project-specific.
I hope this helps you out. Let me know if you have any questions!

Command into the VM that runs an Azure DevOps pipeline

I'm new to Azure DevOps pipelines, and currently nothing works.
I am using Azure DevOps Services with a Microsoft-hosted agent. Can I somehow keep the VM that runs the Azure DevOps pipeline running? I want a faster way to test my azure-pipelines.yml file, by accessing this VM through a terminal.
You cannot access Microsoft-hosted agents via a terminal. They are assigned to your build, and afterwards they go back into the pool to be used by someone else.
If you want access to your agents, you must host your own. You can create them on your own Azure VMs, for instance.
He is right: hosted agents are just containers that are disposed of when the pipeline is done. If you want to debug, like checking files or working out why something is not working, you need to have a self-hosted agent. It can be on your own computer for debugging, and you use the hosted one during normal processing.
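For reference, registering your own Linux machine as a self-hosted agent is a short procedure. A sketch, assuming you have downloaded the agent package from your organisation's Agent pools page and created a personal access token (the pool and agent names here are hypothetical):

mkdir myagent && cd myagent
tar zxvf ~/Downloads/vsts-agent-linux-x64-*.tar.gz

# Register against your organisation; the PAT is only needed during setup.
./config.sh --unattended \
  --url https://dev.azure.com/<your-organisation> \
  --auth pat --token "$AZP_PAT" \
  --pool Default --agent my-debug-agent

# Run interactively so you can watch your builds land on this machine.
./run.sh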

How secure is the Azure Pipelines Agent

How secure are the Azure Pipelines hosted Agents and can they be used for more sensitive tasks like code signing?
Consider using container jobs to run your builds. This ensures that everything is done within a disposable container and removed once your task is over. You can inject secrets via Azure Key Vault.
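A minimal azure-pipelines.yml sketch of that idea; the service connection and vault names are hypothetical, and the secret surfaces to later steps as $(signing-password):

pool:
  vmImage: 'ubuntu-latest'

# Every step below runs inside this disposable container.
container: ubuntu:20.04

steps:
- task: AzureKeyVault@1
  inputs:
    azureSubscription: 'my-service-connection'
    KeyVaultName: 'my-signing-vault'
    SecretsFilter: 'signing-password'
- script: ./sign.sh "$(signing-password)"   # your signing step (hypothetical script)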

How to run a group of PowerShell scripts in Azure

I have a group of interdependent .ps1 scripts I want to run in Azure (trying to set up continuous deployment with git, Pester unit tests, etc., as outlined in this blog). How can I run these scripts in Azure without needing to manage a server on which they run? E.g., can I put them in a storage account and execute them there, or do something similar?
Using an Azure Automation account/runbook seems to be limited to a single script per runbook (granted, you can use modules, but that is insufficient in my case).
Note that I need to use PowerShell version 5+ (I noticed Azure Web Apps and Functions only have 4.x).
Thanks in advance!
You were on the right track with Azure Functions. However, given that you need v5+ of PowerShell, you may want to look at Azure Container Instances (ACI) instead. It's a slightly different approach (via containers), but it should not impose any limitations and will free you from having to manage a virtual machine.
Note: At this time ACI is in preview. Documentation is available here.
There is a PowerShell container image available on Docker Hub that you could start with. To execute multiple scripts in the container, you can override CMD in the Dockerfile.
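A sketch of what that can look like with the Azure CLI; the resource group, container name, and script path are hypothetical, and your scripts would need to be baked into a derived image or mounted (for example from an Azure file share) for the container to see them:

# Run PowerShell in a throwaway container, overriding the image's default CMD.
# 'pwsh' is the PowerShell Core entry point in this image.
az container create \
  --resource-group my-rg \
  --name ps-runner \
  --image microsoft/powershell \
  --restart-policy Never \
  --command-line "pwsh -File /scripts/run-all.ps1"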

Create multiple Azure VMs in parallel from a single PowerShell script

I am able to create an Azure VM using PowerShell.
I have to create 4 VMs in parallel.
Is there any feature in PowerShell to create multiple VMs in parallel? Something like background jobs, or calling the same function for the different VMs using threads?
Have you considered VM Scale Sets? They automatically deploy VMs in parallel in a highly available configuration and make managing those VMs much easier (overview doc here: https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-overview). You can of course deploy a scale set or a bunch of VMs from PowerShell (doc for deploying a scale set via PowerShell here: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/tutorial-create-vmss), but the PowerShell cmdlets require you to specify lots of related properties (e.g. virtual network, subnet, load balancer configs, etc.). The Azure CLI 2.0 (which you can use on both Windows and Linux!) gives you lots of good defaults. For instance, in Azure CLI 2.0 you can run this single command to create all of your VMs in parallel:
az vmss create --resource-group vmss-test-1 --name MyScaleSet --image UbuntuLTS --authentication-type password --admin-username azureuser --admin-password 'P#ssw0rd!' --instance-count 4
(taken from the documentation here: https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-create#create-from-azure-cli)
Hope this helps! :)
No, there are no built-in Azure PowerShell cmdlets or features enabling you to do this. You can create your own routine for that; I'm using PS jobs.
You need to use Save-AzureRmContext and Import-AzureRmContext to authenticate PowerShell inside the jobs, or use some form of automated login.
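A minimal sketch of that pattern; the VM names are placeholders, and the actual VM-creation call is left as a comment since its parameters depend on your setup:

# Save the current login so each background job can reuse it.
Save-AzureRmContext -Path "$env:TEMP\azure.ctx" -Force

$jobs = 1..4 | ForEach-Object {
    Start-Job -ArgumentList "myvm-$_" -ScriptBlock {
        param($vmName)
        # Re-authenticate inside the job from the saved context.
        Import-AzureRmContext -Path "$env:TEMP\azure.ctx"
        # Create the VM here, e.g. with New-AzureRmVM -Name $vmName ...
    }
}
$jobs | Wait-Job | Receive-Job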
Thanks all, I have solved my issue using the PowerShell workflow parallel and sequence features.