We want to push a Docker image to a DockerHub repository. From the shell it works, but in Jenkins we get the error message "errorDetail":{"message":"unauthorized: access to the requested resource is not authorized"}.
I think the problem is that in the shell (docker login) I have to enter an email address, login, and password, while in Jenkins I can only set a login and password, with no email field.
The Credentials plugin version is 1.24, and we use the docker-build-step plugin for the Docker steps.
Thanks.
Can you try the CloudBees Docker Build and Publish plugin?
This plugin lets you create a build step that builds a Dockerfile and publishes the image to a registry (DockerHub or a private registry).
Another solution is to open a session on your Jenkins machine as the jenkins user and log in to DockerHub with the relevant credentials.
With this solution, the DockerHub credentials will be cached and Jenkins should be able to push your images to the DockerHub registry.
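For example, something like this on the Jenkins host (a sketch; the service account is usually called jenkins, but check your installation):

# Open a login shell as the user the Jenkins service runs as
sudo -u jenkins -i

# Log in once; Docker caches the credentials in ~/.docker/config.json
docker login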
Maybe you can use the Docker Pipeline plugin (it comes with the recommended plugins).
Jenkinsfile example:
node {
    checkout scm
    def dockerImage
    stage('Build image') {
        dockerImage = docker.build("username/repository:tag")
    }
    stage('Push image') {
        dockerImage.push()
    }
}
Done this way, you must specify the Docker registry credentials in the Pipeline Model Definition.
The Docker Pipeline plugin has problems applying the credentials assigned in the Pipeline Model Definition to multibranch pipeline projects. That is, if with the code above you still receive the error:
denied: requested access to the resource is denied
then you must specify the credentials in the Jenkinsfile as follows:
node {
    checkout scm
    def dockerImage
    stage('Build image') {
        dockerImage = docker.build("username/repository:tag")
    }
    stage('Push image') {
        docker.withRegistry('https://registry-1.docker.io/v2/', 'docker-hub-credentials') {
            dockerImage.push()
        }
    }
}
You can modify the URL to point at a custom registry if you need to.
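For instance, the same push stage against a hypothetical private registry would look roughly like this (the registry URL and credentials ID are placeholders):

stage('Push image') {
    // 'my-registry-credentials' must match the ID of username/password credentials stored in Jenkins
    docker.withRegistry('https://registry.example.com', 'my-registry-credentials') {
        dockerImage.push()
    }
}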
Related
I have a Jenkins pipeline job to make a release. It uses the Jenkins GitHub plugin to check out the project and make a build.
My simplified DSL is:
multibranchPipelineJob('Release') {
    ...
    branchSources {
        branchSource {
            source {
                github {
                    id('AAA')
                    repoOwner('BBB')
                    repository('CCC')
                    credentialsId('github-credentials')
                    repositoryUrl('https://github.com/BBB/CCC')
                    configuredByUrl(false)
                }
            }
            ...
        }
    }
    ...
}
and my simplified Jenkinsfile looks like:
pipeline {
    agent any
    stages {
        stage('Build & Release') {
            steps {
                sh "./gradlew clean build release"
            }
        }
    }
}
But when it tries to execute the release task, it fails with the following exception:
Caused by: org.eclipse.jgit.errors.TransportException: https://github.com/BBB/CCC.git: Authentication is required but no CredentialsProvider has been registered
    at org.eclipse.jgit.transport.TransportHttp.connect(TransportHttp.java:531)
    at org.eclipse.jgit.transport.TransportHttp.openPush(TransportHttp.java:434)
    at org.eclipse.jgit.transport.PushProcess.execute(PushProcess.java:127)
    at org.eclipse.jgit.transport.Transport.push(Transport.java:1335)
    at org.eclipse.jgit.api.PushCommand.call(PushCommand.java:137)
My understanding is that when the release task runs, it tries to connect to GitHub over SSH, but I haven't set that up, as we don't want to maintain a 'user' for Jenkins on GitHub. How can I resolve this without setting up SSH keys on GitHub?
You need to make sure you have the proper credentials set in the Jenkins system configuration page.
As described in the documentation, register a PAT (Personal Access Token) and store it in Jenkins through the Jenkins Credentials plugin.
Then convert it to a username/password using the GitHub plugin:
Go to the global configuration and add GitHub Server Config.
Go to Advanced -> Manage Additional GitHub Actions -> Convert Login and Password to token
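If the release task needs the token at build time, one option is to bind the stored PAT to environment variables around the Gradle call (a sketch; whether the release plugin actually reads these variables depends on your Gradle configuration, and the variable names are illustrative):

stage('Build & Release') {
    steps {
        // Expose the PAT registered under the 'github-credentials' ID
        withCredentials([usernamePassword(credentialsId: 'github-credentials',
                                          usernameVariable: 'GIT_USERNAME',
                                          passwordVariable: 'GIT_PASSWORD')]) {
            sh './gradlew clean build release'
        }
    }
}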
I'm trying to push to Google Artifact Registry (GAR) from my local machine, but I always get this error:
failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden
First, I've confirmed that my account has the Artifact Registry Writer role through IAM.
I have done the following locally:
# Login with my Google account
gcloud auth login --update-adc --force
# Configure docker to use the gcloud CLI auth helper
gcloud auth configure-docker us-west1-docker.pkg.dev
# docker login for good measure
docker login
# Tag my image (already built)
docker tag myimage us-west1-docker.pkg.dev/myproject/myrepo/myimage
# Push it
docker push us-west1-docker.pkg.dev/myproject/myrepo/myimage
On this final command I get the error above.
I have read all the Google documentation I could find, but it all suggests the above steps:
https://cloud.google.com/artifact-registry/docs/docker/pushing-and-pulling
https://cloud.google.com/artifact-registry/docs/docker/troubleshoot
Note: I can't pull either, using the command provided directly from the GCP web UI.
I'm on an M1 Mac.
I was able to solve this problem by completely nuking Docker, specifically with these steps: https://stackoverflow.com/a/69437543/3846032. I couldn't uninstall it by normal means; it would just hang, which implied that the problems I was getting came from a badly broken Docker installation. Indeed, the same steps worked on another machine, which led me to conclude that the steps above and my credentials were totally fine.
The 403 was a red herring: it must have come from my local Docker being broken in a way that stopped it from sending properly authenticated requests.
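If you want to rule out credentials before reinstalling, Google's troubleshooting page suggests bypassing the credential helper and logging in with a short-lived access token (the registry host here assumes the us-west1 region from the question):

# Check that a credential helper is registered for the GAR host
cat ~/.docker/config.json

# Authenticate explicitly with an access token
gcloud auth print-access-token | \
  docker login -u oauth2accesstoken --password-stdin https://us-west1-docker.pkg.dev

If this works while the normal flow fails, the problem is in the local Docker or credential-helper setup rather than in IAM.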
I have an Azure DevOps (ADO) pipeline I'm trying to run as a containerized job. The YAML is set up with the following line:
container: myDockerHub/myRepo:myTag
That points to a tag in a private repo on DockerHub. The job errors with a message that access to the repo is denied and may require a login, which is perfectly true: it's a private repo that does require a login. But how do I tell ADO to log in to the repo?
I have a service connection set up to DockerHub, and I use docker login successfully in other, non-containerized jobs where a script spins up a Docker image. But since this job uses the global container option, I don't see any way to "preface" it with a login instruction. What do I need to make it work here?
I don't see anything about authentication in the Microsoft documentation on container jobs.
You can use your DockerHub service connection with the endpoint property:
container:
  image: registry:myimage
  endpoint: private_dockerhub_connection
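If several jobs need the same image, you can also declare it once as a container resource and reference it by name (a sketch; the resource and connection names are placeholders):

resources:
  containers:
  - container: my_private_image
    image: myDockerHub/myRepo:myTag
    endpoint: private_dockerhub_connection

jobs:
- job: build
  container: my_private_image
  steps:
  - script: echo "running inside the private image"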
I created a Tekton pipeline on minikube as per this link (basically pulling a repo from GitHub, building an image, and pushing it to a registry); in my case, I'm pushing the image to AWS ECR.
I configured the AWS ECR credentials on my cluster as per this.
When I run the pipeline, I get the error below.
Note: to test whether my AWS credentials were configured correctly, I created a simple deployment spec file and ran it; the image was pulled and the application is running. But with Tekton I get a 401. Can someone help me with this issue, please?
INFO[0000] GET KEYCHAIN
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for "12345678910.dkr.ecr.us-east-1.amazonaws.com/test-api:latest": POST https://12345678910.dkr.ecr.us-east-1.amazonaws.com/v2/test-api/blobs/uploads/: unexpected status code 401 Unauthorized: Not Authorized
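For reference, the ECR credential setup described above usually boils down to creating a docker-registry secret and attaching it to the service account the pipeline runs as (a sketch; the secret name, account ID, and region are illustrative, and note that ECR tokens expire after 12 hours):

# Create a registry secret from a short-lived ECR token
kubectl create secret docker-registry ecr-creds \
  --docker-server=12345678910.dkr.ecr.us-east-1.amazonaws.com \
  --docker-username=AWS \
  --docker-password="$(aws ecr get-login-password --region us-east-1)"

# Attach the secret to the service account the Tekton pipeline uses
kubectl patch serviceaccount default \
  -p '{"secrets": [{"name": "ecr-creds"}]}'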
According to a note in the Cloud Build documentation titled Accessing private GitHub repositories:
When you run builds using Cloud Build triggers, you can automatically connect to any private repository you own without storing your credentials in Secret Manager.
Based on this, I have tried to git clone my private GitHub repo (without piping SSH keys from Secret Manager into SSH files, which the doc states is unnecessary when using a build trigger), to no avail. Using SSH in my cloudbuild.yaml file:
steps:
- name: google/cloud-sdk:alpine
  id: Clone repo
  entrypoint: git
  args: ['clone', 'git@github.com:my-org/my-repo.git']
results in the error:
Step #0: Host key verification failed.
Step #0: fatal: Could not read from remote repository.
And using HTTPS:
args: ['clone', 'https://github.com/my-org/my-repo.git']
I get:
Step #0 - "Clone repo": fatal: could not read Username for 'https://github.com': No such device or address
Is there any way to clone a private GitHub repo within cloudbuild.yaml without tediously piping ssh keys from Secret Manager to volumes before the clone? Any tips would be much appreciated.
As mentioned in the note you shared, you need to configure a Cloud Build trigger if you want to avoid Secret Manager.
The trigger setup involves authenticating to your source repository with your username and password.
So when the Cloud Build trigger fires, it will not need your credentials from Secret Manager, because authentication was already provided in the earlier trigger-setup step.
I found a similar case filed as an issue on GitHub that can help you resolve the errors from the SSH approach.
For the HTTPS approach, I would recommend removing https://github.com from the URL.
And I found another issue on GitHub that can help you resolve the error from the HTTPS approach.