Azure DevOps execute container job in container agent

I start the Azure DevOps container agent with:

docker run \
  -e VSTS_ACCOUNT='kagarlickij' \
  -e VSTS_POOL='Self-Hosted' \
  -e VSTS_TOKEN='a***q' \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /usr/bin/docker:/usr/bin/docker \
  mcr.microsoft.com/azure-pipelines/vsts-agent:ubuntu-16.04-docker-18.06.1-ce
My pipeline has the following lines:

pool:
  name: Self-Hosted
container: kagarlickij/packer-ansible-azure-docker-runtime:2.0.0
...and I get:

[error]Container feature is not supported when agent is already running inside container. Please reference documentation (https://go.microsoft.com/fwlink/?linkid=875268)
Is it possible for Azure DevOps to execute a container job on a container agent?

Related

Docker containers gone after Gitlab CI pipeline

I installed a GitLab runner with the docker executor on my Raspberry Pi. In my GitLab repository I have a docker-compose.yaml file which should run 2 containers, 1 for the application and 1 for the database. It works on my laptop. Then I built a simple pipeline with 2 stages, test and deploy. This is my deploy stage:
deploy-job:
  stage: deploy
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker info
    - docker compose down
    - docker compose build
    - docker compose up -d
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
In the pipeline logs I can see that network, volumes and containers get created and the containers are started. It then says
Cleaning up project directory and file based variables 00:03
Job succeeded
When I ssh into my Raspberry Pi and run docker ps -a, none of the containers are displayed. It is as if nothing has happened.
I compared my setup to the one in this video https://www.youtube.com/watch?v=RV0845KmsNI&t=352s and my pipeline looks similar. The only difference I can figure out is that in the video a shell executor is used for the Gitlab runner.
There are some differences between the docker and the shell executor. With the docker executor, docker compose starts your application and database inside the container created to run the job, and when the job finishes the GitLab Runner stops that container together with your application and database inside it. With the shell executor, all the commands of the job are executed directly in the system's shell, so when the job execution has finished the containers of your application and database remain running on the system. One of the advantages of the docker executor is precisely that it isolates the job execution inside a Docker container: when it finishes, the job container is stopped and the system where the GitLab Runner is running is not affected at all (this may change if you have configured the runner to run Docker as root).
So my suggested solution is to change the executor to shell (keeping in mind that you then have to handle the security implications yourself), for example as sketched below.
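A minimal sketch of what that change could look like; the runner description, URL, and registration token are placeholders, and it assumes Docker and Docker Compose are already installed on the Raspberry Pi (flag names can differ between GitLab Runner versions):

# Register the runner with the shell executor instead of the docker executor
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "<placeholder-token>" \
  --description "raspi-shell-runner" \
  --executor "shell"

With the shell executor the image and services keys are no longer needed, so the deploy job can be reduced to something like:

deploy-job:
  stage: deploy
  script:
    - docker compose down
    - docker compose build
    - docker compose up -d
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH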

How to run a PowerShell script after deploying to AKS using a DevOps pipeline

I am able to deploy to the Kubernetes pod and do the health check through a bash script.
But after deploying to the Kubernetes Windows nodes, all the files are available under the "wwwroot" folder.
In that folder, wwwroot\DeployConfigs\, I have a batch file that copies files to the root folder based on the arguments supplied.
copy_pay_configs.bat dev\east
When I run the PowerShell script, I am facing the below error:
line 2: cd: C:inetpubwwwrootDeployConfigs: No such file or directory
and
line 3: copy_pay_configs.bat: command not found
Could you please let me know if it's possible to log in to the pod and run the batch file as we do in an Azure web service? I know that through the kubectl login command I am able to log in, but I am unable to run the batch file.
I have tried to do the same thing in my environment and got the below results.
Method 1: Execute the batch script in the Kubernetes pod from Azure DevOps as below.
- task: AzureCLI@2
  inputs:
    azureSubscription: 'svc-01'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az aks get-credentials --resource-group rgtest-01 --name myAKSClustertest01
      kubectl exec -it sample-877cb5f44-56f45 -- "C:\inetpub\wwwroot\file01.bat"
You can also refer to this link to use the Kubectl task in Azure DevOps.
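For reference, a minimal sketch of what that could look like with the Kubernetes@1 task; the service connection, resource group, cluster, and pod names are the same placeholders used in Method 1, and the exec command is passed through the task's arguments input:

- task: Kubernetes@1
  inputs:
    connectionType: 'Azure Resource Manager'
    azureSubscriptionEndpoint: 'svc-01'          # service connection name from Method 1
    azureResourceGroup: 'rgtest-01'              # resource group from Method 1
    kubernetesCluster: 'myAKSClustertest01'      # cluster name from Method 1
    command: 'exec'
    arguments: '-it sample-877cb5f44-56f45 -- "C:\inetpub\wwwroot\file01.bat"'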
Method 2: Execute the batch script in the Kubernetes pod from the Azure CLI as shown below.
Step 1: az aks get-credentials --resource-group [resource-group-name] --name [AKS-cluster-name]
Step 2: kubectl exec -it sample-877cb5f44-56f45 -- "C:\inetpub\wwwroot\file01.bat"

Azure DevOps Pipeline local MariaDB

I want to migrate my GitHub Actions pipeline to Azure DevOps; unfortunately I wasn't able to find an alternative to the GitHub action "ankane/setup-mariadb@v1".
For my pipeline I need to create a local MariaDB instance with a database loaded from a .sql file.
I also need to create a user for that database.
This was my code in my GitHub pipeline:
- name: Installing MariaDB
  uses: ankane/setup-mariadb@v1
  with:
    mariadb-version: ${{ matrix.mariadb-version }}
    database: DatabaseName
- name: Creating MariaDB User
  run: |
    sudo mysql -D DatabaseName -e "CREATE USER 'Username'@localhost IDENTIFIED BY 'Password';"
    sudo mysql -D DatabaseName -e "GRANT ALL PRIVILEGES ON DatabaseName.* TO 'Username'@localhost;"
    sudo mysql -D DatabaseName -e "FLUSH PRIVILEGES;"
- name: Importing Database
  run: |
    sudo mysql -D DatabaseName < ./test/database.sql
Does anybody know if there is an alternative for Azure DevOps pipelines?
Cheers,

> Does anybody know if there is an alternative for Azure DevOps pipelines?
If by "alternative" you mean a task in the Azure DevOps pipeline that does the same thing as 'ankane/setup-mariadb@v1' in GitHub, then the answer is NO.
Azure DevOps doesn't have a built-in task like this, and the marketplace doesn't have an extension for it either.
So you have two ways:
1. If your pipeline runs on a Microsoft-hosted agent, everything has to be set up via commands in the pipeline itself (see, for example, How to Install and Start Using MariaDB on Ubuntu 20.04); a sketch of such a step is shown below.
2. If your pipeline runs on a self-hosted agent, you can set up the environment (MariaDB) before starting the pipeline, and then use it in your DevOps pipeline.
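For option 1, a minimal sketch of such a setup step on a Microsoft-hosted ubuntu agent, mirroring the GitHub Actions steps from the question; the database, user, password, and .sql path are the ones from the question, while the apt package name and service name are assumptions:

- script: |
    # Install and start MariaDB on the hosted Ubuntu agent (package/service names assumed)
    sudo apt-get update
    sudo apt-get install -y mariadb-server
    sudo systemctl start mariadb
    # Create the database and user, then import the dump (values taken from the question)
    sudo mysql -e "CREATE DATABASE DatabaseName;"
    sudo mysql -D DatabaseName -e "CREATE USER 'Username'@localhost IDENTIFIED BY 'Password';"
    sudo mysql -D DatabaseName -e "GRANT ALL PRIVILEGES ON DatabaseName.* TO 'Username'@localhost;"
    sudo mysql -D DatabaseName -e "FLUSH PRIVILEGES;"
    sudo mysql -D DatabaseName < ./test/database.sql
  displayName: Set up local MariaDB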

Error initializing container job in Hosted Azure Pipelines

I want to run a skopeo container as a container job.
I keep getting this error message during the Initialize containers step
Error response from daemon: Container 7e741e4aafb30bb89e1dfb830c1cb69fa8d47d219f28cc7b8e57727253632256 is not running
My pipeline looks like this:

- job: publish_branch_image
  pool:
    vmImage: ubuntu-latest
  container: docker.io/ananace/skopeo:latest
  steps:
  - script: |
      # clean branchname for imagename
      export COMMIT_IMAGE="$(Image.TagName)"
      export TARGET_IMAGE="$(Image.Name)":$(echo $(Build.SourceBranch) | sed 's./.-.g')
      echo "Pushing to ${TARGET_IMAGE}"
      skopeo copy docker://${COMMIT_IMAGE} docker://${TARGET_IMAGE} --src-creds="$(Registry.USER):$(Registry.PASSWORD)" --dest-creds="$(Registry.USER):$(Registry.PASSWORD)"
    displayName: publish-branch-release-image
According to the error message, it seems that the container is not running. We could run docker pull docker.io/ananace/skopeo:latest to pull the image, start it via docker run docker.io/ananace/skopeo:latest, and then use it.
Update 1
Thanks to michiel for sharing. According to the doc Endpoints and Linux-based containers:
Containers can be hosted on registries other than Docker Hub. To host an image on Azure Container Registry or another private container registry, add a service connection to the private registry. Then you can reference it in a container spec:
container:
  image: xxx/xxx:tag
  endpoint: xxx
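A minimal sketch of how that could look in the context of the job above; the container alias and the service connection name 'my-registry-connection' are assumptions, for the case where the image is hosted in a private registry:

resources:
  containers:
  - container: skopeo                      # alias referenced by the job below
    image: docker.io/ananace/skopeo:latest
    endpoint: my-registry-connection       # assumed service connection to the registry

jobs:
- job: publish_branch_image
  pool:
    vmImage: ubuntu-latest
  container: skopeo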

Trouble starting Cosmos emulator from Azure Pipelines

I'm trying to set up a CI/CD pipeline with the Azure Cosmos DB Emulator build task in Azure DevOps.
I've installed it from the marketplace, and the YAML file contains:
- task: CosmosDbEmulator@2
  inputs:
    containerName: 'azure-cosmosdb-emulator'
    enableAPI: 'SQL'
    portMapping: '8081:8081, 8901:8901, 8902:8902, 8979:8979, 10250:10250, 10251:10251, 10252:10252, 10253:10253, 10254:10254, 10255:10255, 10256:10256, 10350:10350'
    hostDirectory: '$(Build.BinariesDirectory)\azure-cosmosdb-emulator'
Running this results in the failure "The term 'docker' is not recognized as the name of a cmdlet, function, script file, or operable", so I added this to the YAML:
- task: DockerInstaller@0
  displayName: Docker Installer
  inputs:
    dockerVersion: 17.09.0-ce
    releaseType: stable
resulting in failure:

error during connect: (...): open //./pipe/docker_engine: The system cannot find the file specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.

New-CosmosDbEmulatorContainer : Could not create container azure-cosmosdb-emulator from mcr.microsoft.com/cosmosdb/windows/azure-cosmos-emulator:latest
I'm relatively new to Azure Pipelines and Docker, so any help is really appreciated!
error during connect: (...): open //./pipe/docker_engine: The system cannot find the file specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect.
The error you encountered occurs because Docker is not installed on your build agent, or the Docker daemon has not started up successfully. The DockerInstaller@0 task only installs the Docker CLI; it does not install the Docker engine (daemon).
See the below extract from this document.
The agent pool to be selected for this CI should have Docker for Windows installed unless the installation is done manually in a prior task as a part of the CI. See Microsoft hosted agents article for a selection of agent pools; we recommend to start with Hosted VS2017.
As the above document recommends, please use the hosted VS2017 agent to run your pipeline. Set the pool section in your YAML file like below (see the pool document):

pool:
  vmImage: vs2017-win2016
If you are using a self-hosted agent, please install Docker on the agent machine and make sure the Docker daemon is up and running.
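Putting the pieces together, a minimal sketch of what the top of the pipeline could look like on the recommended hosted agent; the emulator task inputs are taken from the question, with the long portMapping list omitted here for brevity:

pool:
  vmImage: vs2017-win2016

steps:
- task: CosmosDbEmulator@2
  inputs:
    containerName: 'azure-cosmosdb-emulator'
    enableAPI: 'SQL'
    hostDirectory: '$(Build.BinariesDirectory)\azure-cosmosdb-emulator'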