Podman push results in layer representation error - redhat

Podman push fails with the following error message:
Error: Copying this image requires changing layer representation, which is not possible (image is signed or the destination specifies a digest)
The registry itself is working, and pushing the same image from another host works (though with a different Podman version). How do I fix this? I've already tried nuking Podman completely, including the graphroot and runroot.
OS: RHEL 8.4
Podman Version: 3.2.3

If I understand it correctly by now, skopeo (or rather the containers/image library that Podman and skopeo share for pull/push operations) is the "problem". In my case I pulled a signed image from Red Hat and tried pushing it to my GitLab (= Docker) registry. AFAIK the Docker registry doesn't handle the signature, and skopeo refuses to drop the signature by default.
So the easy but dirty fix is to use podman push --remove-signatures.
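For example, a minimal sketch of the workaround (the registry path and image name here are placeholders):
podman push --remove-signatures registry.example.com/mygroup/myimage:latest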

Related

Datalore local installation to Docker postgresql error

I'm following the directions from here and I've gotten this far...
Prerequisites:
Docker Compose version v2.10.2
Clone or download the content of this repository.
Do the following to set up your database:
Open docker-compose.yaml in the [repository_folder]/docker-compose folder in any text editor and replace the values of DB_PASSWORD and POSTGRES_PASSWORD properties with any random string (both properties must have the same value). This string will be used as your database password. Make sure you keep it secret.
Run the following command and wait for Datalore to start up: docker compose up
It's at this point I get the following message:
[+] Running 0/2
- postgresql Error 0.8s
- datalore Error 0.8s
Error response from daemon: manifest for jetbrains/datalore-server:2022.3 not found: manifest unknown: manifest unknown
I'm completely out of ideas for what to try, or where to look for more details, other than emailing JetBrains support directly (which I've done). The only thing I can think of is that there's some unspoken prerequisite I'm not aware of, because the instructions don't seem that complicated up to this point.
You cloned the master branch, which references datalore-server 2022.3, a version that is not released yet. You need to either clone an older version (like 2022.2.3) or edit your /docker-compose/docker-compose.yaml and change the image tags there:
datalore:
  image: jetbrains/datalore-server:2022.2.3
  [...]
postgresql:
  image: jetbrains/datalore-postgres:2022.2.3
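If you would rather check out an older version than edit the tags, something like the following should work, assuming the repository tags its releases by version number (the URL is a placeholder for the actual repository):
git clone --branch 2022.2.3 <repository-url>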

requested access to the resource is denied [duplicate]

I am using Laravel 4.2 with Docker. I set it up locally and it worked without any problem, but when I try to set it up on a server using the same procedure, I get this error:
pull access denied for <projectname>/php, repository does not exist or may require 'docker login'
Is this something that requires creating a repository at https://cloud.docker.com/, or do I need to run docker login first?
After days of study I am still not able to figure out what the fix could be in this case, and what the right steps are.
I have the complete code. I can paste it here if certain parts need checking.
Please note that the error message from Docker is misleading.
$ docker build deploy/.
Sending build context to Docker daemon 5.632kB
Step 1/16 : FROM rhel7:latest
pull access denied for rhel7, repository does not exist or may require 'docker login'
It says that it may require 'docker login'. I struggled with this until I realized the image no longer exists at https://hub.docker.com.
Just make sure you write the image name correctly!
In my case, I wrote (notice the extra 'u'):
FROM ubunutu:16.04
The correct docker name is:
FROM ubuntu:16.04
The message usually appears when you use the wrong image name. Check that your image exists in the Docker repository with the correct tag. That check helped me:
docker run -d -p 80:80 --name ngnix ngnix:latest
Unable to find image 'ngnix:latest' locally
docker: Error response from daemon: pull access denied for ngnix, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
$ docker run -d -p 80:80 --name nginx nginx:latest
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
I had the same issue. In my case it was a private registry, so I had to create a secret as shown here, and then add the image pull secret to the deployment.yaml file as shown below.
pods/private-reg-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: regcred
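For completeness, the regcred secret referenced above can be created with the standard kubectl command; all values below are placeholders for your registry details:
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>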
November 2020 and later
If this error is new and pulling from Docker Hub worked in the past, note that Docker Hub introduced rate limiting in November 2020.
You will frequently see messages like:
Warning: No authentication provided, using CircleCI credentials for pulls from Docker Hub.
from CircleCI and other similar tools that use Docker Hub. Or:
Error response from daemon: pull access denied for cimg/mongo, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
You'll need to specify the credentials used to fetch the image:
For CircleCI users:
- image: circleci/mongo:4.4.2
  # Needed to pull down Mongo images from Docker Hub
  # Get from https://hub.docker.com/
  # Set up at https://app.circleci.com/pipelines/github/org/sapp
  auth:
    username: $DOCKERHUB_USERNAME
    password: $DOCKERHUB_PASSWORD
I had the same issue:
pull access denied for microsoft/mmsql-server-linux, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
It turned out the image had been moved to a different name, so I would suggest you re-check the image name on Docker Hub.
I solved this by putting the language name at the front of the image reference:
FROM python:3.7-alpine
I had the same error message but for a totally different reason.
Being new to docker, I issued
docker run -it <crypticalId>
where <crypticalId> was the id of my newly created container.
But, the run command wants the id of an image, not a container.
To start a container, docker wants
docker start -i <crypticalId>
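If it helps, a quick way to tell the two apart:
docker ps -a    # lists containers; these IDs work with docker start
docker images   # lists images; these IDs/names work with docker run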
In my case I was using a custom image with the Docker daemon baked into Minikube on my local machine.
I had specified the pull policy incorrectly:
imagePullPolicy: Always
But it should have been:
imagePullPolicy: IfNotPresent
This is because the custom image was only present locally, after I'd explicitly built it in the Minikube Docker environment.
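For context, building an image directly inside Minikube's Docker daemon looks roughly like this (the image name is a placeholder):
eval $(minikube docker-env)               # point the local docker CLI at Minikube's daemon
docker build -t my-custom-image:dev .     # build so IfNotPresent can find the image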
I had this because I inadvertently removed the AS alias from my first image:
ex:
FROM mcr.microsoft.com/windows/servercore:1607-KB4546850-amd64
...
.. etc ...
...
FROM mcr.microsoft.com/windows/servercore:1607-KB4546850-amd64
COPY --from=installer ["/dotnet", "/Program Files/dotnet"]
... etc ...
should have been:
FROM mcr.microsoft.com/windows/servercore:1607-KB4546850-amd64 AS installer
...
.. etc ...
...
FROM mcr.microsoft.com/windows/servercore:1607-KB4546850-amd64
COPY --from=installer ["/dotnet", "/Program Files/dotnet"]
... etc ...
I had the same issue when working with docker-compose. In my case it was an Amazon AWS ECR private registry. It seems to be a bug in docker-compose:
https://github.com/docker/compose/issues/1622#issuecomment-162988389
After adding the full registry path to the image in the docker-compose yaml,
image: xxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/myrepo:myimage
it was all fine.
This error message might possibly indicate something else.
In my case I had defined another Docker image elsewhere, from which the current one inherits its settings (docker-compose.yml):
FROM my_own_image:latest
The error message I got:
qohelet$ docker-compose up
Building web
Step 1/22 : FROM my_own_image:latest
ERROR: Service 'web' failed to build: pull access denied for my_own_image, repository does not exist or may require 'docker login'
Due to a reinstall, the previous image was gone, so docker-compose up could not build until I had recreated the base image with this command:
sudo docker build -t my_own_image:latest -f MyOwnImage.Dockerfile .
In your specific case you might have defined your own PHP image the same way.
If the repository is private, you have to be granted permission to download it. You have two options: use the docker login command, or place the file generated once you log in at ~/.docker/config.json.
If you have more than one stage in your Docker build process, read this solution:
This error message is completely misleading.
If you have a multi-stage Dockerfile and want to copy data from the first stage into a later one, you must label the first stage (e.g. build) and reference it by that label:
# stage 1
FROM <image> AS build
.
.
# stage 2
FROM <image>
COPY --from=build /sourceDir /destinationDir
Docker might have lost the authentication data, so you'll have to reauthenticate with your registry provider. With AWS, for example:
aws ecr get-login --region us-west-2 --no-include-email
Then copy and paste the resulting "docker login ..." command to authenticate Docker.
Source: Amazon ECR Registries
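Note that aws ecr get-login was removed in AWS CLI v2; the v2 equivalent pipes a token into docker login (the account ID is a placeholder):
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-west-2.amazonaws.com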
If you're downloading from somewhere other than your own registry or Docker Hub, you might have to agree to their terms separately on their site, as is the case with Oracle's container registry. It lets you run docker login fine, but pulling the container still won't work until you go to their site and accept the terms.
Make sure the image exists on Docker Hub. In my case, I was trying to pull MongoDB using the command docker run mongodb, which is incorrect; on Docker Hub the image name is mongo.
If you don't have an image with that name locally, Docker will try to pull it from Docker Hub, but there's no such image there.
Or simply try "docker login".
If you are using multiple Dockerfiles, don't forget to run the build for all of them. That was my case.
I had to run docker pull first, then running docker-compose up again and then it worked.
docker pull index.docker.io/youruser/yourrepo:latest
Try this in your docker-compose.yml file
image: php:rc-zts-alpine
Running the command docker pull scrapinghub/splash a few times in PowerShell solved the issue for me.
If this was caused by AWS EC2 and ECR, it may be due to a naming issue (happens with beginners!):
Error response from daemon: pull access denied for my-app, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
When using docker pull, use the image's URI, available in the ECR console row itself as "Copy URI":
docker pull <Image_URI>
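The URI has this general shape; the account ID, region, repository, and tag below are all placeholders:
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest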
I saw this message and thought something was wrong with my Docker authentication. However, I then realized that Docker Hub only allows one private repository on the free plan. So it is quite possible that you are trying to pull your private repository and see this error because you have not upgraded your plan.
I got the same problem and nothing worked, until I understood that I needed to run the .sh (.ps1) build script before running docker-compose.
So I have the following files:
docker-compose.yml
docker-build.sh
docker-build.ps1
Dockerfile
I had to first run docker-build.sh on a Unix (Mac) machine, or docker-build.ps1 on Windows:
sh docker-build.sh
In my case, it builds the image.
Only after the image has been built can I run:
docker-compose up --build
For reference, here is my docker-compose file:
version: '3.8'
services:
  api-service:
    image: x86_64/prediction-service:0.8.1
    container_name: api-service
    expose:
      - 8060
    ports:
      - "8060:80"
And here is docker-build.sh:
VERSION="0.8.1"
ARCH="x86_64"
APP="prediction-service"
# Resolve the directory this script lives in, so the build works from any CWD
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
docker build -f $DIR/Dockerfile -t $ARCH/$APP:$VERSION .
I had misspelled nginx as nignx in my Dockerfile.
In my case the solution was to recreate the Dockerfile through Visual Studio, and everything worked perfectly.
I hit the same issue.
I solved it by logging in:
docker login -u your_user_name
I was then prompted for my Docker Hub password, and the rest of the commands worked perfectly after a successful login.
Someone might come across the same error for different reasons than what is already presented, so let me share:
I got the same error when using Docker multi-stage builds (multiple FROM <> AS <> lines).
I had forgotten to remove one COPY --from=<> <> line.
After removing that COPY, it worked fine.
Exceeded Docker Hub's Limit on Free Repos:
Despite first executing:
docker login -u <dockerhub uname>
and "Login Succeeded" being returned, I received the error in this question.
In the web GUI, under Settings > Visibility Settings, I noticed:
Using 2 of 1 private repositories.
This told me that I had exceeded the limit of Docker Hub's free account. However, removing a previous image didn't clear the error...
The Fix:
Indeed, the error message in my case was a red herring; it had nothing to do with authentication issues.
However, deleting just the images exceeding the allowed limit did NOT clear the error!
To get past the error you need to delete ALL the images in your FREE Docker Hub account, then run a new build pushing the image to your account.
Your pull command will now succeed.

Unable to find image 'name:latest' locally

I am trying to run the postgres container and get the error below:
"Unable to find image 'name:latest' locally
docker: Error response from daemon: pull access denied for name, repository does not exist or may require 'docker login': denied: requested access to the resource is denied."
I have been working on this problem for a couple of days and I do not know what the problem is.
This is my command:
The issue is with your command:
docker run -- name
--name must be written with no spaces, but you have a space between -- and name.
Run your command again with the correct syntax.
To clarify more:
When you run docker run -- name, Docker assumes you are trying to pull and download an image called name, and since your image name does not include any tag, it says it cannot find any image called name:latest.
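A corrected invocation would look something like this (the container name and password are placeholders; POSTGRES_PASSWORD is required by the official postgres image):
docker run --name my-postgres -e POSTGRES_PASSWORD=mysecret -d postgres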
Just in case anyone gets this error for the same reason I did: I had built an image locally, and Docker was complaining that the image could not be found. The error was happening because I had built the image locally but specified a different platform for docker run (I had copied the command from somewhere else). Example:
docker build -t my-image .
docker run ... --platform=linux/amd64 my-image
linux/amd64 is not my current platform, so I removed this argument and it worked.
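Alternatively, if you really do need a linux/amd64 image, you can build for that platform explicitly so build and run match (this requires BuildKit; the image name is a placeholder):
docker build --platform=linux/amd64 -t my-image .
docker run --platform=linux/amd64 my-image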
Answer: You can't use that image because you didn't log in to your Docker Hub account.
After creating an account, find the image you want to use and then pull it.
You can simply use docker pull [OPTIONS] NAME[:TAG|@DIGEST] to pull an image from Docker Hub and then use it in a container.
According to the docker reference
Most of your images will be created on top of a base image from the Docker Hub registry.
Docker Hub contains many pre-built images that you can pull and try without needing to define and configure your own.
To download a particular image, or set of images (i.e., a repository), use docker pull.
P.S.: Thank you for contributing to the Stack Overflow community, but for your next question please make sure you ask it properly by reading the Code of Conduct.
Before you pull the image from Docker Hub, use docker login and then enter your username and password.
If you have not yet registered on Docker Hub, register via the link below:
here
Then you can use this command to pull your images:
docker pull imageName
Note that the image you want to pull must already exist on Docker Hub.

pgadmin can't log in after update

I just updated pgAdmin4 to version 4.8 and now it won't accept the SSH tunnel password to the server; I get the following error message:
Failed to decrypt the SSH tunnel password. Error: 'utf-8' codec can't decode byte 0x8c in position 0: invalid start byte
Is there a way around this? I can't restart the database server at this time.
In the latest pgAdmin4 version they increased the security of saved passwords by implementing a master password feature; I think that is causing this issue. In the meantime, you can rename pgadmin4.db to pgadmin4.db_OLD and restart pgAdmin4.
Note: you will have to add all the servers again.
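On Linux the configuration database typically lives at ~/.pgadmin/pgadmin4.db, though the exact location varies by platform and install; assuming that default, the rename is:
mv ~/.pgadmin/pgadmin4.db ~/.pgadmin/pgadmin4.db_OLD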
---------- UPDATE ----------
It has been fixed now (https://redmine.postgresql.org/issues/4320) and will be in 4.9.
You can try the nightly builds, though: https://postgresql.org/ftp/pgadmin/pgadmin4/snapshots
This also happened for me when moving from 4.8.2 for Ubuntu 18.10 to 4.8.2 for Ubuntu 19.04 (different installs). I was able to resolve this by restarting the Postgres server with sudo systemctl restart postgresql.
As Murtuza Z said in https://redmine.postgresql.org/issues/4320, you can get the fixed server_manager.py and replace it at (pgAdmin install dir)/web/pgadmin/utils/driver/psycopg2/server_manager.py, then restart the pgAdmin server.
You can get server_manager.py either:
attached to the issue info (this worked for me), or
from the snapshots provided by Murtuza Z, in the same directory.

Click to deploy MEAN Stack on Google Compute Engine Clone Repo Locally

On Compute Engine, using the click-to-deploy option for MEAN, how can we clone the repo of the sample app it creates locally so that we can start editing and pushing changes?
I tried gcloud init my-project; however, all it seems to do is initialize an empty repo. And indeed, when I go to the "source code" section for that project, there is nothing there.
How do I get the source code for this particular instance, set up a repo locally for it, and then deploy changes to the same instance? Any help would be greatly appreciated.
OK, well, I have made some progress. Once you click to deploy, GCE will present you with a command to access your MEAN stack application through an SSH tunnel.
It will look something like this:
gcloud compute ssh --ssh-flag=-L3000:localhost:3000 --project=project-id --zone us-central1-f instance-name
You can change the port numbers as long as your firewall rules allow that specific port.
https://console.developers.google.com/project/your-project-id/firewalls/list
Once you SSH in, you will see the target directory, named after whatever you told mean-io to use as the application name when you ran mean init.
I first made a copy of this folder (mine was named "flow") with cp -r flow flow-bck, and then removed some unnecessary directories:
cd flow-bck && rm -rf node_modules bower_components .bower* .git
All of this was to set up copying that folder to my local machine using gcloud compute copy-files, available after installing the Google Cloud SDK.
On my local machine, I ran the following:
gcloud compute copy-files my-instance-name:/remote/path/to/flow-bck /local/path/to/destination --zone the-instance-region
Above, 'my-instance-name', '/remote/path/to', '/local/path/to', and 'the-instance-region' obviously need to be changed to your deployment's info, etc.
This copied all the files from the remote instance to a folder called flow-bck on my local machine, at the defined local path. I renamed this folder to match its remote name, flow, and then ran:
cd flow && npm install
This installed all the modules MEAN.io needs. The important part is that you have to kill your remote SSH connection before you can run the local version of the app, because the SSH tunnel is already using that same port (3000), unless you changed it when you tunneled in.
Then, in my local app directory flow, I ran gulp to start the local version of the app on port 3000. It loads up and runs just fine. I needed to create a new user, as it's obviously not the same database.
Also, I know this is basic stuff, but not too long ago I would have forgotten to start the MongoDB process by running mongod beforehand. In any case, mongo must be running before you can start the app locally.
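For reference, the local startup sequence looks roughly like this (the data directory is a placeholder):
mongod --dbpath /path/to/data &   # start MongoDB first
cd flow
gulp                              # starts the app on port 3000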
Now, the two things I haven't done yet are editing and deploying a new version based on this... and there's the nagging question of whether any of this is even necessary. It would be great to find that this can all be done with a few simple commands.