I am using Laravel 4.2 with Docker. I set it up locally and it worked without any problem, but when I try to set it up online using the same procedure, I get this error:
pull access denied for <projectname>/php, repository does not exist or may require 'docker login'
Is this something related to creating a repository at https://cloud.docker.com/, or do I need to run docker login on the command line?
After days of study I am still not able to figure out what the fix could be in this case, or what the right steps are.
I have the complete code. I can paste it here if certain parts need to be checked.
Please note that the error message from Docker is misleading.
$ docker build deploy/.
Sending build context to Docker daemon 5.632kB
Step 1/16 : FROM rhel7:latest
pull access denied for rhel7, repository does not exist or may require 'docker login'
It says that it may require 'docker login'.
I struggled with this until I realized that the image no longer exists at https://hub.docker.com.
Just make sure to write the image name correctly!
In my case, I wrote (notice the extra 'u'):
FROM ubunutu:16.04
The correct docker name is:
FROM ubuntu:16.04
The message usually appears when you use the wrong image name. Please check whether your image exists on the Docker registry with the correct tag.
It helped me.
docker run -d -p 80:80 --name ngnix ngnix:latest
Unable to find image 'ngnix:latest' locally
docker: Error response from daemon: pull access denied for ngnix, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
$ docker run -d -p 80:80 --name nginx nginx:latest
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
I had the same issue. In my case it was a private registry, so I had to create a registry secret first.
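A minimal sketch of creating that secret with kubectl, with placeholder values to substitute for your registry:
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>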
Then add the image pull secret to the deployment.yaml file as shown below.
pods/private-reg-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: regcred
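Then apply the manifest as usual (assuming it is saved under the file name above):
kubectl apply -f private-reg-pod.yaml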
November 2020 and later
If this error is new and pulling from Docker Hub worked in the past, note that Docker Hub introduced rate limiting in November 2020.
You will frequently see messages like:
Warning: No authentication provided, using CircleCI credentials for pulls from Docker Hub.
from CircleCI and other similar tools that use Docker Hub. Or:
Error response from daemon: pull access denied for cimg/mongo, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
You'll need to specify the credentials used to fetch the image:
For CircleCI users:
- image: circleci/mongo:4.4.2
  # Needed to pull down Mongo images from Docker Hub
  # Get from https://hub.docker.com/
  # Set up at https://app.circleci.com/pipelines/github/org/sapp
  auth:
    username: $DOCKERHUB_USERNAME
    password: $DOCKERHUB_PASSWORD
I had the same issue:
pull access denied for microsoft/mmsql-server-linux, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
It turns out the image on Docker Hub was moved to a different name.
So I would suggest you re-check the image name on Docker Hub.
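For reference, Microsoft's SQL Server images have since moved to Microsoft's own registry; a hedged example (the exact tag may differ for your needs):
docker pull mcr.microsoft.com/mssql/server:2019-latest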
I solved this by adding the language (the official image name) in front of the tag:
FROM python:3.7-alpine
I had the same error message but for a totally different reason.
Being new to docker, I issued
docker run -it <crypticalId>
where <crypticalId> was the id of my newly created container.
But the run command wants the ID of an image, not a container.
To start a container, docker wants
docker start -i <crypticalId>
In my case I was using a custom image and docker baked into Minikube on my local machine.
I had specified the pull policy incorrectly:
imagePullPolicy: Always
But it should have been:
imagePullPolicy: IfNotPresent
Because the custom image was only present locally, after I'd explicitly built it in the Minikube Docker environment (see the sketch below).
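For context, building against Minikube's Docker daemon looks roughly like this (the image name is illustrative):
# Point the local Docker CLI at Minikube's Docker daemon
eval $(minikube docker-env)
# Build the image inside Minikube so IfNotPresent can find it
docker build -t my-custom-image:latest .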
I had this because I inadvertently removed the AS tag from my first image.
For example:
FROM mcr.microsoft.com/windows/servercore:1607-KB4546850-amd64
...
.. etc ...
...
FROM mcr.microsoft.com/windows/servercore:1607-KB4546850-amd64
COPY --from=installer ["/dotnet", "/Program Files/dotnet"]
... etc ...
should have been:
FROM mcr.microsoft.com/windows/servercore:1607-KB4546850-amd64 AS installer
...
.. etc ...
...
FROM mcr.microsoft.com/windows/servercore:1607-KB4546850-amd64
COPY --from=installer ["/dotnet", "/Program Files/dotnet"]
... etc ...
I had the same issue when working with docker-compose. In my case it was an Amazon AWS ECR private registry. It seems to be a bug in docker-compose:
https://github.com/docker/compose/issues/1622#issuecomment-162988389
After adding the full registry path to "myrepo/myimage" in the docker-compose YAML:
image: xxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/myrepo:myimage
it was all fine.
This error message might possibly indicate something else.
In my case I had defined another Docker image elsewhere, from which the current Dockerfile (used by docker-compose.yml) inherited:
FROM my_own_image:latest
The error message I got:
qohelet$ docker-compose up
Building web
Step 1/22 : FROM my_own_image:latest
ERROR: Service 'web' failed to build: pull access denied for my_own_image, repository does not exist or may require 'docker login'
Due to a reinstall, the previously built image was gone, and docker-compose up failed until I rebuilt the base image with this command:
sudo docker build -t my_own_image:latest -f MyOwnImage.Dockerfile .
In your specific case you might have defined your own php image; see the sketch below.
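For illustration, a compose service that should be built locally rather than pulled needs a build key alongside the image name; a minimal sketch, where the build path is hypothetical:
services:
  php:
    # With both keys present, docker-compose builds the image from the
    # local Dockerfile and tags it, instead of trying to pull it.
    build: ./docker/php
    image: <projectname>/php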
If the repository is private, you have to have permission to download from it. You have two options: use the docker login command, or place the config file generated on login at ~/.docker/config.json.
If you have two or more stages in your Docker build process, read this solution:
This error message is completely misleading here.
If you have a two-stage Dockerfile and want to copy some data from the first stage into the second, you must label the first stage (e.g. build) and access it by that label:
# stage 1
FROM <image> AS build
...
# stage 2
FROM <image>
COPY --from=build /sourceDir /destinationDir
Docker might have lost the authentication data. So you'll have to reauthenticate with your registry provider. With AWS for example:
aws ecr get-login --region us-west-2 --no-include-email
Then copy and paste the resulting "docker login ..." command to authenticate Docker.
Source: Amazon ECR Registries
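Note that in AWS CLI v2 the get-login subcommand was removed in favor of get-login-password; a rough equivalent (the account ID and region below are placeholders):
aws ecr get-login-password --region us-west-2 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-west-2.amazonaws.com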
If you're downloading from somewhere other than your own registry or Docker Hub, you might have to accept a separate agreement of terms on their site, as is the case with Oracle's container registry. It lets you docker login just fine, but pulling the container still won't work until you go to their site and agree to their terms.
Make sure the image exists on Docker Hub. In my case, I was trying to pull MongoDB using the command docker run mongodb, which is incorrect. On Docker Hub, the image name is mongo.
If you don't have an image with that name locally, Docker will try to pull it from Docker Hub, but there's no such image there.
Or simply try "docker login".
If you are using multiple Dockerfiles, don't forget to run a build for all of them. That was my case.
I had to run docker pull first, then run docker-compose up again, and then it worked:
docker pull index.docker.io/youruser/yourrepo:latest
Try this in your docker-compose.yml file
image: php:rc-zts-alpine
Running the command docker pull scrapinghub/splash multiple times in PowerShell solved the issue for me.
If it was caused by AWS EC2 and ECR, it may be due to a naming issue (happens with beginners!):
Error response from daemon: pull access denied for my-app, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
When using docker pull, use the image URI, which is available in the ECR console row itself via "Copy URI":
docker pull <Image_URI>
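An ECR image URI generally follows this pattern (the account ID, region, repository, and tag below are placeholders):
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest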
I have seen this message and thought something was wrong with my Docker authentication. However, I realized that Docker Hub only allows one private repository on the free plan. So it is quite possible that you are trying to pull your private repository and see this error because you have not upgraded your plan.
I got the same problem, but nothing worked. Then I realized I needed to run the .sh (.ps1) script first, before running docker-compose.
So I have the following files:
docker-compose.yml
docker-build.sh
docker-build.ps1
Dockerfile
And I had to first run docker-build.sh on Unix (Mac) machine or docker-build.ps1 on Windows:
sh docker-build.sh
In my case, it builds the image.
And only then after an image has been built I can run:
docker-compose up --build
For reference, here is my docker-compose file:
version: '3.8'
services:
  api-service:
    image: x86_64/prediction-service:0.8.1
    container_name: api-service
    expose:
      - 8060
    ports:
      - "8060:80"
And here is docker-build.sh:
VERSION="0.8.1"
ARCH="x86_64"
APP="prediction-service"
# Resolve the directory containing this script
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
docker build -f "$DIR/Dockerfile" -t "$ARCH/$APP:$VERSION" .
I had misspelled nginx as nignx in my Dockerfile.
In my case, the solution was to re-create the Dockerfile through Visual Studio, and everything worked perfectly.
I hit the same issue. I solved it by logging in:
docker login -u your_user_name
Then I was prompted to enter my Docker Hub password.
The rest of the commands worked perfectly after a successful login.
Someone might come across the same error for different reasons than what is already presented, so let me share:
I got the same error when using Docker multi-stage builds (multiple FROM <image> AS <name> statements).
I had forgotten to remove one leftover COPY --from=<name> <path> that still referenced a removed stage.
After removing that COPY, it worked fine.
Exceeded Docker Hub's Limit on Free Repos:
Despite first executing:
docker login -u <dockerhub uname>
and "Login Succeeded" being returned, I received the error in this question.
In the web GUI under Settings > Visibility Settings I noticed:
Using 2 of 1 private repositories.
Which told me that I had exceeded the limit of Docker Hub's free account. However, removing a previous image didn't clear the error...
The Fix:
Indeed, the error message in my case was a red herring: it's not related to authentication issues at all.
Deleting just the images exceeding the allowed limit did NOT clear the error, however!
To get past the error you need to delete ALL the images in your FREE Docker Hub account, then run a new build pushing the image to your account.
Your pull command will now succeed.
Hi everyone,
I hope that someone can help answer my question.
I am joining a project in which I have to use various Docker containers. I was told that I just needed to use docker-compose to pull down all the necessary containers. I tried this and got two different errors, depending on whether I used sudo or not. My machine is Ubuntu 18.04.4 LTS (Bionic Beaver).
I have docker-engine installed according to the installation instructions for Bionic on the github page, and docker-compose is likewise installed according to its instructions. I did not create a "docker" group since I have sudo access.
We have two repos that I have to log in to before I can do anything. In order to prevent my passwords from being stored unencrypted in config.json, I followed this guide to set up a secure credential store:
https://www.techrepublic.com/article/how-to-setup-secure-credential-storage-for-docker/
However, rather than asking me for the password and/or passphrase mentioned in this article, the login process makes me enter the actual passwords to the repos. So, the secure credential store may not be working, which might be causing the problem.
At any rate, once I log in and the two commands show login succeeded, I then try to do a
docker-compose pull
on the repos. When I do
sudo docker-compose pull
I get this final error:
docker.errors.DockerException: Credentials store error: StoreError('Credentials store docker-credential-pass exited with "exit status 2: gpg: WARNING: unsafe ownership on homedir '/home/myuser/.gnupg'\ngpg: decryption failed: No secret key".')
An ls of the .gnupg directory shows:
myuser@myhost$ ls -lA ~ | grep gnupg
drwx------ 4 myuser myuser 226 Feb 9 13:35 .gnupg
gpg --list-secret-keys shows my keypair when I run it as myuser.
I am assuming that what is happening is that because I am running as sudo the user trying to access this directory is root, not myuser, and so it is failing. However, if I leave off the sudo
docker-compose pull
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', PermissionError(13, 'Permission denied'))
I am guessing that this is because my normal user doesn't have the ability to connect to the docker daemon's Unix socket.
So, how do I make these play together? Is the answer to add a docker group so that the command still runs as myuser and not as root? or is there another way to do this?
Also, why is my credential store not asking me for the password set by docker-credential-pass or the GPG passphrase? I suspect these two are related. Perhaps the pull is trying to send my authentication tokens over again and can't because it doesn't have access to the secure credentials store.
All of the above are guesses. Does anyone know what is going on here?
Thanking you in advance,
Brad
I just wanted to follow up with a solution to this question that worked for me.
Firstly, you need to add your user to the docker group that was created during docker-engine's installation.
sudo usermod --append --groups docker your_user_name
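Note that the group change only takes effect in a new login session; to pick it up in the current shell you can run:
newgrp docker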
Because I had already used sudo to try this, there were a few files that ended up being created by root.
So, you have to chown a few things.
sudo chown your_user_name:your_group_name ~/.docker/config.json
Note that for the group name I used docker, but I'm not sure if that's necessary.
Then, there were files inside the ~/.password-store directory that needed to be changed.
sudo chown -R your_user_name:your_group_name ~/.password-store
Most of these files are already owned by you, but the recorded credentials are not.
Then, the magic that fixed it all. From https://ask.csdn.net/questions/5153956, you have to do this:
export GPG_TTY=$(tty)
and it is this last step that makes gpg work.
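To make that setting persist across sessions, you can append it to your shell profile (assuming bash):
# Persist GPG_TTY so gpg can prompt for the passphrase in future shells
echo 'export GPG_TTY=$(tty)' >> ~/.bashrc
source ~/.bashrc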
Then, you can log in to your repos, if you have to, without using sudo:
docker login -u repo_user_name your_repo_host
and then log in with your repo password.
Note that I don't know why you have to use the repo password instead of using the stored credentials.
Once you log in, you should be able to do a
docker-compose pull
without sudo, from the directory where you want the containers to be placed.
Note that you will probably have to provide your GPG passphrase at first. I'm not sure about this because I had already unlocked the key by following the steps in the above link to check to see if docker-credential-pass had the right credential store password stored.
and that should do it.
When pushing to the Bluemix registry, I get the following error:
47c2386f248c: Waiting
2be95f0d8a0c: Waiting
2df9b8def18a: Waiting
unauthorized: authentication required
I've got the cs and cr plugins both installed, and have verified that bx is being added to the auths file. I have tried both using the OSX keychain as the credstore and without it.
When I pull the IBM Liberty example from the BX registry, or build an image with Liberty as the base, it pulls without issue.
I'm running:
docker build . -t registry.ng.bluemix.net/my_space/ibm
docker push registry.ng.bluemix.net/my_space/ibm
Have also tried manually exporting BLUEMIX_TRACE=true and re-running the login and init commands.
Ensure you have logged into the Bluemix Container repository before doing docker push:
$ docker pull registry.ng.bluemix.net/myspace/myimage
Using default tag: latest
Please login prior to pull:
Username (bearer): XXXX
Password:
Error response from daemon: unauthorized: authentication required
$ bx cr login
Logging in to 'registry.ng.bluemix.net'...
Logged in to 'registry.ng.bluemix.net'.
$ docker pull registry.ng.bluemix.net/myspace/myimage:4
4: Pulling from myspace/myimage
7b6bb4652a1b: Downloading [===> ] 5.272MB/70.48MB
See:
$ bx cr login --help
NAME:
   login - Log the local Docker client in to IBM Bluemix Container Registry.
USAGE:
   bx cr login
It's not clear whether you own the namespace my_space. Can you run bx cr namespaces to see which namespaces you can push to? If need be, you can add one with bx cr namespace-add <something unique to you>.
We currently have a Docker registry set up with security. Normally, in order to access it from a developer's perspective, I have to log in with docker login --username=someuser --password=somepassword --email user@domain.com https://docker-registry.domain.com.
However, since I am currently trying to do an automated deployment of a Docker container in the cloud, the docker pull command fails because the login was not performed (it works if I add the login to the template, but that's bad).
It was suggested that I use a certificate (.crt file) to allow the pull. I tried installing the certificate using the steps explained here: https://www.linode.com/docs/security/ssl/ssl-apache2-centos
But it does not seem to work, I still have to do a manual login in order to be able to perform my docker pull from the registry.
Is there a way I can replace the login command by the use of the certificate?
As I see it, that guide is the wrong one for SSL authentication between the Docker engine and a private registry server.
You can follow this:
Running a domain registry
While running on localhost has its uses, most people want their registry to be more widely available. To do so, the Docker engine requires you to secure it using TLS, which is conceptually very similar to configuring your web server with SSL.
Get a certificate
Assuming that you own the domain myregistrydomain.com, and that its DNS record points to the host where you are running your registry, you first need to get a certificate from a CA.
Create a certs directory:
mkdir -p certs
Then move and/or rename your crt file to: certs/domain.crt, and your key file to: certs/domain.key.
Make sure you stopped your registry from the previous steps, then start your registry again with TLS enabled:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
registry:2
You should now be able to access your registry from another docker host:
docker pull ubuntu
docker tag ubuntu myregistrydomain.com:5000/ubuntu
docker push myregistrydomain.com:5000/ubuntu
docker pull myregistrydomain.com:5000/ubuntu
Gotcha
A certificate issuer may supply you with an intermediate certificate. In this case, you must combine your certificate with the intermediate's to form a certificate bundle. You can do this using the cat command:
cat domain.crt intermediate-certificates.pem > certs/domain.crt
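If you want to sanity-check that the certificate actually chains through the intermediate before restarting the registry, something like this should work (assuming openssl is installed and the root CA is in the system trust store):
openssl verify -untrusted intermediate-certificates.pem domain.crt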
I've got a project that I want to build with Jenkins.
The project is hosted in a private GitHub repo, and I've added the SSH public key of my user "deploy" to GitHub.
The project gets checked out fine thanks to the deploy credential in the Jenkins Git plugin section of the build config.
But a vendor lib, which is hosted as a private repo in the same GitHub organisation, is loaded via a build step command:
php composer.phar install -o --prefer-dist --no-dev
I've installed the Jenkins Git plugin in order to check out the main repo from GitHub via a private SSH key.
But when composer tries to check out the sub-project, I get:
Failed to clone the git@github.com:Organisation/Repo.git repository, try running in interactive mode so that you can enter your GitHub credentials
I've tried to get the composer command to run as a different user, without success, with stuff like:
su - deploy -c 'php composer.phar install -o --prefer-dist --no-dev'
That looks weird anyway. I'd like to figure out the proper way of having composer do its job. Thoughts?
Jenkins is actually running the shell commands as the "jenkins" user.
It means that "jenkins" needs access to GitHub.
Then all the git@github.com:Organisation/Repo.git clones will work without additional credentials.
Here is how to grant Jenkins access to GitHub over SSH:
# Log in as the jenkins user and specify the shell explicitly,
# since the default shell is /bin/false for most
# jenkins installations.
sudo su jenkins -s /bin/bash
ssh-keygen -t rsa -C "your_email@example.com"
# Copy ~/.ssh/id_rsa.pub into your GitHub account
# Allow adding the SSH host key to your known_hosts
ssh -T git@github.com
# Exit from su
exit
Inspired by: Managing SSH keys within Jenkins for Git