Podman cannot pull image with sudo - redhat

I am new to Podman.
I tried to run podman on Red Hat 8 with sudo and got this issue:
sudo podman pull docker.io/gitlab/gitlab-ce:14.6.4-ce.0
Trying to pull docker.io/gitlab/gitlab-ce:14.6.4-ce.0...
Error: initializing source docker://gitlab/gitlab-ce:14.6.4-ce.0: pinging container registry registry-1.docker.io: Get "https://registry-1.docker.io/v2/": dial tcp 34.205.13.154:443: i/o timeout
But when I run podman with normal user it can work:
podman pull docker.io/gitlab/gitlab-ce:14.6.4-ce.0
Trying to pull docker.io/gitlab/gitlab-ce:14.6.4-ce.0...
Getting image source signatures
Copying blob 2fa9ca76c6b9 done
Copying blob 8db30bd8306e done
Copying blob 08c01a0ec47e done
Copying blob bb3c5c96997f done
Copying blob 7952c1a53ab4 done
Copying blob 2e1b34a904cb done
Copying blob 7c84eae49fd0 done
Copying blob 49cd799f8c8a done
Copying config b93d637809 done
Writing manifest to image destination
Storing signatures
b93d637809b9ecb3eb42bf65ab5f403c8857d5dcb0f7cade5d377c74e5589e62
I made sure that the proxy, network, and environment of the two users are the same.
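For reference, this is roughly how I compared the proxy settings for the two users (the variable names are just the common ones; sudo may strip environment variables depending on the sudoers configuration):
# compare proxy-related environment variables as the normal user and under sudo
env | grep -i proxy
sudo env | grep -i proxy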
Please help me out. Thanks!

Related

Unable to find image 'name:latest' locally

I am trying to run the postgres container and get the error below.
"Unable to find image 'name:latest' locally
docker: Error response from daemon: pull access denied for name, repository does not exist or may require 'docker login': denied: requested access to the resource is denied."
I have been working on the problem for a couple of days I do not know what the problem is.
This is my command:
The issue is with your command:
docker run -- name
--name should have no space in it, but you have a space between -- and name.
Run your command again with the correct syntax.
To clarify more:
When you run docker run -- name, Docker assumes that you are trying to pull and download an image called name, and since name does not include any tag, it says it cannot find an image called name:latest.
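For example, a corrected command for the postgres case might look like this (the container name and password are placeholders):
# correct: --name with no space, followed by the container name
docker run --name my-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres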
Just in case anyone gets this error for the same reason I did: I had built an image locally, and Docker was complaining that the image could not be found. The error was happening because I built the image locally but specified a different platform for docker run (I had copied the command from somewhere else). Example:
docker build -t my-image .
docker run ... --platform=linux/amd64 my-image
linux/amd64 is not my current platform, so I removed this argument and it worked.
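If you want to double-check what platform your local daemon actually reports before dropping the flag, something like this should work (assuming a reasonably recent Docker CLI):
# prints something like linux/arm64 or linux/amd64
docker version --format '{{.Server.Os}}/{{.Server.Arch}}'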
Answer: You can't use that image because you didn't log in to your Docker Hub account.
After creating an account, find the image you want to use and then pull it.
You can simply use docker pull [OPTIONS] NAME[:TAG|@DIGEST] to pull an image from Docker Hub and then run it as a container.
According to the Docker reference:
Most of your images will be created on top of a base image from the Docker Hub registry.
Docker Hub contains many pre-built images that you can pull and try without needing to define and configure your own.
To download a particular image, or set of images (i.e., a repository), use docker pull.
P.S.: Thank you for contributing to the Stack Overflow community, but for your next question please ensure that you are asking it properly by reading the Code of Conduct.
Before you pull the image from Docker Hub, use docker login and then enter your username and password.
If you have not yet registered on Docker Hub, register from the link below:
here
Then you can use this command to pull your images:
docker pull imageName
Note that the image you want to pull must already exist on Docker Hub.
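Putting this answer's suggestion together, a minimal sequence might be (postgres is just an example image name):
docker login
docker pull postgres:latest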

Can't retrieve MongoDB to local drive using SCP from AWS EC2

I have a Docker container using Strapi (which uses MongoDB) on a now-defunct AWS EC2 instance. I need the content off that server; it can't run because it's too full. So I've tried to retrieve all the files using SCP, which worked a treat apart from downloading the database content (the actual stuff I need: Strapi and Docker boot up fine, but because there is no database content, it treats it as a new instance).
Every time I try to download the contents of the db from AWS I get 'permission denied'.
I'm using SCP something like this:
scp -i /directory/to/***.pem -r user@ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com:strapi-docker/* /your/local/directory/files/to/download
Does anyone know how i can get this entire docker container running locally with the database content?
You can temporarily change permissions (recursively) on the directory in question to be world-readable using chmod.
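A rough sketch of what that could look like, assuming the data sits under strapi-docker on the instance (the exact path is a guess; adjust it to wherever your database volume actually lives):
# on the EC2 instance: make files readable and directories traversable for the scp user
sudo chmod -R a+rX ~/strapi-docker
# then re-run the scp command from the local machine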

kubernetes: How to download and upload image to internal network

Our customer uses an internal network. We have a k8s application, and some YAML files need to pull images from the internet. I have a Win10 computer, and I can SSH to the internal server and access the internet. How can I download the images and then upload them to the internal server?
Some of the images to download would be:
chenliujin/defaultbackend (nginx-default-backend.yaml)
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0
How can I download the images and then upload them to the internal server?
The shortest path to success is:
ssh the-machine-with-internet -- 'bash -ec \
"docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0 ; \
docker save quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0"' \
| ssh the-machine-without-internet -- 'docker load'
You'll actually need to repeat that ssh machine-without-internet -- docker load bit for every Node in the cluster, otherwise they'll attempt to pull the image when they don't find it already in docker images, which brings us to ...
You are also free to actually cache the intermediate file, if you wish, as in:
ssh machine-with-internet -- 'bash -ec \
"docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0 ; \
docker save -o /some/directory/nginx-ingress-0.15.0.tar quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0"'
scp machine-with-internet:/some/directory/nginx-ingress-0.15.0.tar /some/other/place
# and so forth, including only optionally running the first pull-and-save step
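To cover the per-node point above, loading the cached tarball onto each node could then look roughly like this (the node names are hypothetical, and the tar is assumed to have landed at /some/other/place/nginx-ingress-0.15.0.tar):
for node in node-1 node-2 node-3; do
  ssh "$node" -- 'docker load' < /some/other/place/nginx-ingress-0.15.0.tar
done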
It is entirely possible to use an initContainer: in the PodSpec to implement any kind of pre-loading of docker images before the main Pod's containers attempt to start, but that's likely going to clutter your PodSpec unless it's pretty small and straightforward.
Having said all of that, as @KonstantinVustin already correctly said: having a local docker repository for mirroring the content will save you a ton of heartache.
The best way is to deploy a local mirror for Docker repositories. For example, it could be Artifactory by JFrog.
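If Artifactory is more than you need, one lighter option (my suggestion, not part of the original answer) is Docker's own registry image run as a pull-through cache:
docker run -d -p 5000:5000 --restart=always --name registry-mirror \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2
Internal Docker daemons would then list this host under registry-mirrors in /etc/docker/daemon.json. Note that this only mirrors Docker Hub, so quay.io images would still need the save/load route above.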

JHipster - Using docker-compose on remote server

I would like to set up my JHipster project on a remote server utilising docker-compose, as per here.
Am I right in thinking (for the simplest approach) that these are the steps I might follow:
Install docker on remote system.
Install docker-compose on remote system.
On laptop (with app src code) run ./mvnw package -Pprod docker:build to produce a docker image of the application.
Copy the image produced by this to the remote server, like this (see also the sketch after these steps).
Install this image on remote system.
On laptop copy relevant yml files from src/main/docker to a directory (e.g. dir/on/remote) on the remote server.
Run docker-compose -f dir/on/remote/app.yml up on the remote server.
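For steps 4-5, one registry-free way to move the image (myapp is a placeholder for whatever image name docker:build produced) might be:
# on the laptop
docker save -o myapp.tar myapp
scp myapp.tar user@remote-server:/tmp/
# on the remote server
docker load -i /tmp/myapp.tar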
Thanks for your help.
Also any suggestions on how this process may be improved would be appreciated.
Assuming that your server is Ubuntu:
SSH to your server.
Install docker and docker-compose, install Java, and set JAVA_HOME.
There are two approaches:
Create the docker image locally and push it to Docker Hub, if you have a Docker Hub account.
Create the docker image on the server itself.
The second approach would be better, to reduce confusion.
Clone your repo to server
cd <APPLICATION_FOLDER>
Do
./mvnw package -Pprod docker:build -DskipTests
List the images created
docker images
You can drop -DskipTests if you are writing test code.
Do
docker-compose -f src/main/docker/app.yml up -d
List containers running
docker ps -a
Logs of the container
docker logs <CONTAINER_ID>
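If you later prefer the first approach instead, a rough sketch (account, image, and tag names are placeholders) would be:
# on your laptop, after ./mvnw package -Pprod docker:build
docker tag myapp your-dockerhub-user/myapp:latest
docker login
docker push your-dockerhub-user/myapp:latest
# on the server
docker pull your-dockerhub-user/myapp:latest
If the tag differs from what src/main/docker/app.yml expects, update its image: entry to match before running docker-compose.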

Click to deploy MEAN Stack on Google Compute Engine Clone Repo Locally

On Compute Engine, using the click-to-deploy option for MEAN, how can we clone the repo of the sample app it creates so that we can start editing it locally and pushing changes?
I tried gcloud init my-project; however, all it seems to do is initialize an empty repo. And indeed, when I go to the "source code" section for that project, there is nothing there.
How do I get the source code for this particular instance, setup a repo locally for it and then deploy changes to the same instance? Any help would be greatly appreciated.
OK, well, I have made some progress. Once you click to deploy, GCE will present you with a command to access your MEAN stack application through an SSH tunnel.
It will look something like this:
gcloud compute ssh --ssh-flag=-L3000:localhost:3000 --project=project-id --zone us-central1-f instance-name
You can change the port numbers as long as your firewall rules allow that specific port.
https://console.developers.google.com/project/your-project-id/firewalls/list
Once you SSH in, you will see the target directory, named the same as whatever you told mean-io to use as the name of the application when you ran mean init.
I first made a copy of this folder (mine was named "flow") with cp -r flow flow-bck, and then I removed some unnecessary directories with:
cd flow-bck && rm -rf node_modules bower_components .bower* .git
All of this to set up copying that folder to my local machine using gcloud compute copy-files, available after installing the Google Cloud SDK.
On my local machine, I ran the following:
gcloud compute copy-files my-instance-name:/remote/path/to/flow-bck /local/path/to/destination --zone the-instance-region
Above, 'my-instance-name', '/remote/path/to', '/local/path/to', and 'the-instance-region' obviously need to be changed to your deployment's info, etc.
This copied all the files from the remote instance to a folder called flow-bck at the defined local path. I renamed this folder to match its name on the remote (flow) and then did:
cd flow && npm install
This installed all the needed modules and stuff for mean.io. Now, the important part here is that you have to kill your remote SSH connection so that you can start running the local version of the app, because the SSH tunnel will already be using that same port (3000), unless you changed it when you tunneled in.
Then in my local app directory flow I ran gulp to start the local version of the app on port 3000. So it loads up and runs just fine. I needed to create a new user as it's obviously not the same database.
Also, I know this is basic stuff, but not too long ago I would have forgotten to start the MongoDB process by running mongod beforehand. In any case, MongoDB must be running before you can start the app locally.
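Roughly what that looks like for me (the dbpath is just my local setup; yours may differ):
# start MongoDB first, then the app
mongod --dbpath ~/data/db &
cd flow && gulp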
Now the two things I haven't done yet are editing and deploying a new version based on this... and there's the nagging question of whether this is all even necessary. It would be great to find that this can all be done with a few simple commands.