I am trying to run a self-hosted agent in Docker. I have created the Dockerfile and start.ps1 files and installed the Azure DevOps Server Express admin console. I am getting a "Basic authentication requires a secure connection to the server" error when I try running the container in Docker (switched to Windows containers). URL: http://computername/DefaultCollection
I have also attached a screenshot of the error (Docker Run error).
Can you please advise how to resolve this issue? Thanks.
Run a self-hosted agent in Docker
I could not reproduce this issue on my side with hosted agent windows-2019.
To test this issue, I created a folder dockeragent in my Azure repo, which includes the files Dockerfile and start.ps1:
Then I copied the content from the document Run a self-hosted agent in Docker into those two files.
Next, I created a pipeline with an inline PowerShell task to build the Docker image and run the Docker container:
cd $(System.DefaultWorkingDirectory)\dockeragent
docker build -t dockeragent:latest .
docker run -e AZP_URL=https://dev.azure.com/<YourOrganizationName> -e AZP_TOKEN=<YourPAT> -e AZP_AGENT_NAME=mydockeragent dockeragent:latest
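For reference, a minimal sketch of what that pipeline could look like in YAML; the pool name and the folder layout are assumptions based on the steps above:

pool:
  vmImage: windows-2019
steps:
- powershell: |
    # Build the agent image from the dockeragent folder in the repo and start a container.
    # Replace <YourOrganizationName> and <YourPAT> with your own values.
    cd $(System.DefaultWorkingDirectory)\dockeragent
    docker build -t dockeragent:latest .
    docker run -e AZP_URL=https://dev.azure.com/<YourOrganizationName> -e AZP_TOKEN=<YourPAT> -e AZP_AGENT_NAME=mydockeragent dockeragent:latest
  displayName: Build and run the agent container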
The test result:
To make it work, please make sure the Dockerfile and start.ps1 files are correct and unchanged.
If the above info does not help, please share the content of your Dockerfile and the steps you followed.
You are using Azure DevOps without HTTPS.
Registering your pipeline agent via PAT requires HTTPS (hence the error: "Basic authentication requires a secure connection to the server").
Try using another authentication method (e.g. Negotiate, which uses Windows authentication).
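A minimal sketch of how that could look in start.ps1, assuming the credentials are passed in as environment variables (AZP_USER and AZP_PASS are hypothetical names, not part of the official script):

# Replace the PAT-based registration in start.ps1 with Negotiate (Windows) authentication.
# AZP_USER / AZP_PASS are assumed environment variables passed to the container.
.\config.cmd --unattended `
  --url $env:AZP_URL `
  --auth negotiate `
  --userName $env:AZP_USER `
  --password $env:AZP_PASS `
  --agent $env:AZP_AGENT_NAME `
  --pool Default `
  --work _work `
  --replace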
Related
I'm trying to set up a CI/CD pipeline with the Azure Cosmos DB Emulator build task in Azure DevOps.
I've installed it from the marketplace, and my YAML file contains:
- task: CosmosDbEmulator@2
  inputs:
    containerName: 'azure-cosmosdb-emulator'
    enableAPI: 'SQL'
    portMapping: '8081:8081, 8901:8901, 8902:8902, 8979:8979, 10250:10250, 10251:10251, 10252:10252, 10253:10253, 10254:10254, 10255:10255, 10256:10256, 10350:10350'
    hostDirectory: '$(Build.BinariesDirectory)\azure-cosmosdb-emulator'
Running this results in the failure "The term 'docker' is not recognized as the name of a cmdlet, function, script file, or operable program", so I added this to the YAML:
- task: DockerInstaller@0
  displayName: Docker Installer
  inputs:
    dockerVersion: 17.09.0-ce
    releaseType: stable
resulting in failure:
error during connect: (...): open //./pipe/docker_engine: The system
cannot find the file specified. In the default daemon configuration on
Windows, the docker client must be run elevated to connect. This error
may also indicate that the docker daemon is not running.
New-CosmosDbEmulatorContainer : Could not create container
azure-cosmosdb-emulator from
mcr.microsoft.com/cosmosdb/windows/azure-cosmos-emulator:latest
I'm relatively new to Azure Pipelines and Docker, so any help is really appreciated!
error during connect: (...): open //./pipe/docker_engine: The system cannot find the file specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect.
The error you encountered occurs because Docker is not installed on your build agent, or the Docker daemon has not started successfully. The DockerInstaller@0 task only installs the Docker CLI; it does not install the Docker Engine (daemon).
See the extract below from this document.
The agent pool to be selected for this CI should have Docker for Windows installed unless the installation is done manually in a prior task as a part of the CI. See Microsoft hosted agents article for a selection of agent pools; we recommend to start with Hosted VS2017.
As the above document recommends, please use the hosted VS2017 agent to run your pipeline. Set the pool section in your YAML file as below (see the pool document):
pool:
  vmImage: vs2017-win2016
If you are using a self-hosted agent, please install Docker on the self-hosted agent machine and make sure the Docker daemon is up and running.
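Putting it together, a sketch of what the relevant part of the pipeline could look like on the hosted VS2017 agent (the task inputs are taken from the question; the rest follows the documentation quoted above):

pool:
  vmImage: vs2017-win2016

steps:
# Docker for Windows is already available on the hosted VS2017 image,
# so the DockerInstaller step is not needed here.
- task: CosmosDbEmulator@2
  inputs:
    containerName: 'azure-cosmosdb-emulator'
    enableAPI: 'SQL'
    portMapping: '8081:8081, 8901:8901, 8902:8902, 8979:8979, 10250:10250, 10251:10251, 10252:10252, 10253:10253, 10254:10254, 10255:10255, 10256:10256, 10350:10350'
    hostDirectory: '$(Build.BinariesDirectory)\azure-cosmosdb-emulator'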
See the logs of: https://travis-ci.com/Jeff-Tian/uni-sso/builds/147317611
I created a Travis CI project that uses the mongodb service. It then runs a Docker container which, from inside, connects to that mongodb. But as the log shows, it fails.
I tried these MONGO_URI values; none of them works:
mongodb://localhost:27017
mongodb://127.0.0.1:27017
mongodb://host.docker.internal:27017
Can anyone shed some light on this? I can't find a solution in either the Travis CI documentation or Google.
Thanks in advance!
More details:
I can use mongodb://host.docker.internal:27017 in the Travis CI unit test, but inside Docker it fails.
Probably already too late for you, but I've managed to find a solution for the same problem here: https://docs.docker.com/network/network-tutorial-host/.
This approach binds the Docker container directly to the Docker host's network.
My script to run tests in the Docker container:
script:
- docker run --network host -e CI=true mydocker/api-test npm test
Then, from your test, you can access mongodb using this URL:
mongodb://127.0.0.1:27017/mongo_db_name
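For context, a minimal .travis.yml sketch of the whole setup, assuming the image name and database name from above (the MONGO_URI variable name is taken from the question):

# Minimal sketch: start the mongodb service on the Travis host, then run the
# tests in a container that shares the host's network.
services:
  - mongodb
script:
  - docker run --network host -e CI=true -e MONGO_URI=mongodb://127.0.0.1:27017/mongo_db_name mydocker/api-test npm test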
What I want to do:
Deploy a docker-compose solution from GitHub to my virtual private server, which has docker and docker-compose installed.
I saw that there are GitHub Actions that allow me to copy files over SSH after a push to master, but I don't know how to run docker-compose up on my server after the source has been copied.
On my VPS I have Ubuntu 18.04 installed.
I believe GitHub Actions also allows you to run arbitrary commands on remote servers via SSH (there are a few such actions in their library).
Assuming you copy your docker-compose.yml to /home/user/app/docker-compose.yml, you could run a command like so:
ssh user@yourserver.example.com "cd /home/user/app/ && docker-compose up -d"
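A minimal sketch of a workflow that does both steps (copy the file, then run docker-compose over SSH); the secret names VPS_HOST, VPS_USER and VPS_SSH_KEY and the target path are assumptions:

# .github/workflows/deploy.yml (sketch)
name: deploy
on:
  push:
    branches: [master]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up the SSH key
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.VPS_SSH_KEY }}" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          ssh-keyscan -H "${{ secrets.VPS_HOST }}" >> ~/.ssh/known_hosts
      - name: Copy compose file and deploy
        run: |
          scp docker-compose.yml "${{ secrets.VPS_USER }}@${{ secrets.VPS_HOST }}:/home/user/app/"
          ssh "${{ secrets.VPS_USER }}@${{ secrets.VPS_HOST }}" "cd /home/user/app && docker-compose up -d"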
I would like to set up my JHipster project on a remote server, utilising docker-compose as per here.
Am I right in thinking (for the simplest approach) that these are the steps I might follow:
Install docker on remote system.
Install docker-compose on remote system.
On laptop (with app src code) run ./mvnw package -Pprod docker:build to produce a docker image of the application.
Copy the image produced by this to the remote server like this (see the sketch after this list).
Install this image on remote system.
On laptop copy relevant yml files from src/main/docker to a directory (e.g. dir/on/remote) on the remote server.
Run docker-compose -f dir/on/remote/app.yml up on the remote server.
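For the copy step above, a minimal sketch of moving the image without a registry, assuming the image is tagged myapp and the server is reachable as user@remote.example.com (both placeholders):

# Save the image on the laptop, stream it over SSH and load it on the server.
docker save myapp | gzip | ssh user@remote.example.com "gunzip | docker load"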
Thanks for your help.
Also any suggestions on how this process may be improved would be appreciated.
Assuming that your server is Ubuntu:
SSH to your server.
Install docker and docker-compose, install Java, and set JAVA_HOME.
There are two approaches:
Create the Docker image and push it to Docker Hub, if you have a Docker Hub account.
Create the Docker image on the server itself.
The second approach is better, as it reduces confusion.
Clone your repo to the server
cd <APPLICATION_FOLDER>
Do
./mvnw package -Pprod docker:build -DskipTests
List the images created
docker images
You can omit -DskipTests if you have test code you want to run.
Do
docker-compose -f src/main/docker/app.yml up -d
List containers running
docker ps -a
Logs of the container
docker logs <CONTAINER_ID>
I'm using Drone 0.4 as my CI. While trying to migrate from a self-hosted private registry to AWS's ECS/ECR, I've come across an authentication issue when referencing these images in my .drone.yml as a composed service.
For example:
build:
  image: python:3.5
  commands:
    - some stuff

compose:
  db:
    image: <account_id>.dkr.ecr.us-east-1.amazonaws.com/reponame:latest
When the Drone build runs it errors out, as it should, saying:
Authentication required to pull from ECR. As I understand it, when you authenticate for AWS ECR you use something like aws-cli's ecr get-login, which gives you a temporary password. I know that I could inject that into my Drone secrets file and use that value in auth_config, but that would mean I'd have to update my secrets file every twelve hours (or however long that token lasts). Is there a way for Drone to perform the authentication process itself?
You can run the authentication command in the same shell before executing your build/compose command:
How we do it in our setup with Docker: we have this shell script as part of our Jenkins pipeline (it will work with or without Jenkins; all you have to do is configure your AWS credentials):
`aws ecr get-login --region us-east-1`
${MAVEN_HOME}/bin/mvn clean package docker:build -DskipTests
docker tag -f ${DOCKER_REGISTRY}/c-server ${DOCKER_REGISTRY}/c-server:${RELEASE_VERSION}
docker push ${DOCKER_REGISTRY}/c-server:${RELEASE_VERSION}
So when running the Maven command which creates the image, or the subsequent commands that push it to ECR, the shell uses the authentication obtained from the first command.
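The same pattern outside Jenkins, as a plain shell sketch (the region, account id and repository name are placeholders; aws ecr get-login is the older aws-cli v1 command):

# Log in to ECR in the current shell, then build and push in that same shell.
eval "$(aws ecr get-login --region us-east-1)"
docker build -t <account_id>.dkr.ecr.us-east-1.amazonaws.com/reponame:latest .
docker push <account_id>.dkr.ecr.us-east-1.amazonaws.com/reponame:latest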