kubernetes: How to download an image and upload it to an internal network

Our customer uses an internal network. We have a k8s application, and some of the yaml files need to download images from the internet. I have a Win10 computer from which I can ssh to the internal server and also access the internet. How do I download an image and then upload it to the internal server?
Some of the images to download would be, for example:
chenliujin/defaultbackend (nginx-default-backend.yaml)
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0

The shortest path to success is
ssh the-machine-with-internet -- 'bash -ec \
"docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0 ; \
docker save quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0"' \
| ssh the-machine-without-internet -- 'docker load'
You'll actually need to repeat that ssh the-machine-without-internet -- 'docker load' bit for every Node in the cluster; otherwise they'll attempt to pull the image when they don't find it already in docker images. Which brings us to ...
You are also free to actually cache the intermediate file, if you wish, as in:
ssh machine-with-internet -- 'bash -ec \
"docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0 ; \
docker save -o /some/directory/nginx-ingress-0.15.0.tar quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0"'
scp machine-with-internet:/some/directory/nginx-ingress-0.15.0.tar /some/other/place
# and so forth, including only optionally running the first pull-and-save step
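If the cluster has several Nodes, a small loop saves you from repeating the load step by hand. This is only a sketch; node-1, node-2, node-3 and the tarball path are placeholders for your own hostnames and location:
# load the previously saved tarball into every Node's local image cache
for node in node-1 node-2 node-3; do
  ssh "$node" -- 'docker load' < /some/other/place/nginx-ingress-0.15.0.tar
done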
It is entirely possible to use an initContainer: in the PodSpec to implement any kind of pre-loading of docker images before the main Pod's containers attempt to start, but that's likely going to clutter your PodSpec unless it's pretty small and straightforward.
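For completeness, a rough sketch of what such a pre-loading initContainer could look like, assuming the image tarball already sits on each Node under a hypothetical /opt/images directory and the Nodes run the Docker engine with its default socket path:
initContainers:
- name: preload-image
  image: docker:20.10-cli   # assumption: any small image that ships the docker CLI will do
  command: ["docker", "load", "-i", "/opt/images/nginx-ingress-0.15.0.tar"]
  volumeMounts:
  - name: docker-sock
    mountPath: /var/run/docker.sock
  - name: image-tarballs
    mountPath: /opt/images
volumes:
- name: docker-sock
  hostPath:
    path: /var/run/docker.sock
- name: image-tarballs
  hostPath:
    path: /opt/images
As noted above, this mostly trades one kind of clutter for another, so a local mirror (below) is usually the saner choice.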
Having said all of that, as @KonstantinVustin already correctly said: having a local docker repository for mirroring the content will save you a ton of heartache.

The best way is to deploy a local mirror for Docker repositories. For example, it could be Artifactory by JFrog.
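If Artifactory is more than you need, a plain self-hosted registry can serve the same purpose. A minimal sketch, assuming a host that the cluster can reach under the hypothetical name internal-registry:5000:
# run the open-source registry on the internal host
docker run -d -p 5000:5000 --restart=always --name registry registry:2
# on the machine with internet access: pull, re-tag and push the image into it
docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0
docker tag quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0 internal-registry:5000/nginx-ingress-controller:0.15.0
docker push internal-registry:5000/nginx-ingress-controller:0.15.0
# then reference internal-registry:5000/nginx-ingress-controller:0.15.0 in the yaml files
Keep in mind that a registry served over plain HTTP has to be listed under insecure-registries in the Docker daemon configuration of every node that pulls from it.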


requested access to the resource is denied [duplicate]

I am using Laravel 4.2 with docker. I set it up locally and it worked without any problem, but when I try to set it up online using the same procedure I get this error:
pull access denied for <projectname>/php, repository does not exist or may require 'docker login'
Is it something related to creating a repository at https://cloud.docker.com/, or do I need to run docker login on the command line?
After days of study I am still not able to figure out what the fix could be in this case and what the right steps are.
I have the complete code; I can paste it here if you need to check certain parts.
Please note that the error message from Docker is misleading.
$ docker build deploy/.
Sending build context to Docker daemon 5.632kB
Step 1/16 : FROM rhel7:latest
pull access denied for rhel7, repository does not exist or may require 'docker login'
It says that it may require 'docker login'.
I struggled with this. I realized the image does not exist at https://hub.docker.com any more.
Just make sure to write the docker name correctly!
In my case, I wrote (notice the extra 'u'):
FROM ubunutu:16.04
The correct docker name is:
FROM ubuntu:16.04
The message usually comes when you use a wrong image name. Please check whether your image exists on the Docker repository with the correct tag.
It helped me.
docker run -d -p 80:80 --name ngnix ngnix:latest
Unable to find image 'ngnix:latest' locally
docker: Error response from daemon: pull access denied for ngnix, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
$ docker run -d -p 80:80 --name nginx nginx:latest
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
I had the same issue. In my case it was a private registry. So I had to create a secret as shown here
and then we have to add the image pull secret to the deployment.yaml file as shown below.
pods/private-reg-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: regcred
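Since the link for creating the secret isn't reproduced here, this is typically done with kubectl (all values are placeholders):
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>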
November 2020 and later
If this error is new and pulling from Docker Hub worked in the past, note that Docker Hub introduced rate limiting in November 2020.
You will frequently see messages like:
Warning: No authentication provided, using CircleCI credentials for pulls from Docker Hub.
From Circle CI and other similar tools that use Docker Hub. Or:
Error response from daemon: pull access denied for cimg/mongo, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
You'll need to specify the credentials used to fetch the image:
For CircleCI users:
- image: circleci/mongo:4.4.2
  # Needed to pull down Mongo images from Docker hub
  # Get from https://hub.docker.com/
  # Set up at https://app.circleci.com/pipelines/github/org/sapp
  auth:
    username: $DOCKERHUB_USERNAME
    password: $DOCKERHUB_PASSWORD
I had the same issue
pull access denied for microsoft/mmsql-server-linux, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
It turns out the image on Docker Hub was moved to a different name.
So I would suggest you re-check the image name on Docker Hub.
I solved this by putting the language image name at the front of the Docker image reference:
FROM python:3.7-alpine
I had the same error message but for a totally different reason.
Being new to docker, I issued
docker run -it <crypticalId>
where <crypticalId> was the id of my newly created container.
But, the run command wants the id of an image, not a container.
To start a container, docker wants
docker start -i <crypticalId>
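A quick way to check which kind of id you are holding (just a small illustration):
docker images   # lists image ids, which is what docker run expects
docker ps -a    # lists container ids, which is what docker start expects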
In my case I was using a custom image and the Docker daemon baked into Minikube on my local machine.
I had specified the pull policy incorrectly:
imagePullPolicy: Always
But it should have been:
imagePullPolicy: IfNotPresent
Because the custom image was only present locally after I'd explicitly built it in the minikube docker environment.
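For reference, building an image directly into Minikube's Docker daemon usually looks something like this (the image name is a placeholder):
# point the local docker client at the daemon inside Minikube
eval $(minikube docker-env)
# build there so that IfNotPresent can find the image without pulling
docker build -t my-custom-image:dev .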
I had this because I inadvertently removed the AS tag from my first image:
ex:
FROM mcr.microsoft.com/windows/servercore:1607-KB4546850-amd64
...
.. etc ...
...
FROM mcr.microsoft.com/windows/servercore:1607-KB4546850-amd64
COPY --from=installer ["/dotnet", "/Program Files/dotnet"]
... etc ...
should have been:
FROM mcr.microsoft.com/windows/servercore:1607-KB4546850-amd64 AS installer
...
.. etc ...
...
FROM mcr.microsoft.com/windows/servercore:1607-KB4546850-amd64
COPY --from=installer ["/dotnet", "/Program Files/dotnet"]
... etc ...
I had the same issue when working with docker-compose. In my case it was an Amazon AWS ECR private registry. It seems to be a bug in docker-compose:
https://github.com/docker/compose/issues/1622#issuecomment-162988389
After adding the full path "myrepo/myimage" to docker compose yaml
image: xxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/myrepo:myimage
it was all fine.
This error message might possibly indicate something else.
In my case I had defined another Docker image elsewhere, from which the current image inherited its settings (docker-compose.yml):
FROM my_own_image:latest
The error message I got:
qohelet$ docker-compose up
Building web
Step 1/22 : FROM my_own_image:latest
ERROR: Service 'web' failed to build: pull access denied for my_own_image, repository does not exist or may require 'docker login'
Due to a reinstall the previously built image was gone, and docker-compose up could not build it, so I first had to rebuild the image with this command:
sudo docker build -t my_own_image:latest -f MyOwnImage.Dockerfile .
In your specific case you might have defined your own php-docker.
If the repository is private you have to assign permissions to download it. You have two options: use the docker login command, or place the credentials file that docker login generates (~/.docker/config.json) on the machine.
If you have more than one stage in the docker build process, read this solution:
This error message is completely misleading.
If you have a multi-stage Dockerfile and want to copy some data from the first to the second stage, you must label the first stage (e.g. build) and access it by that label:
#stage(1)
FROM <image> AS build
.
.
#stage(2)
FROM <image>
COPY --from=build /sourceDir /destinationDir
Docker might have lost the authentication data. So you'll have to reauthenticate with your registry provider. With AWS for example:
aws ecr get-login --region us-west-2 --no-include-email
And then copy and paste the resulting "docker login ..." command to authenticate docker.
Source: Amazon ECR Registries
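Note that aws ecr get-login was removed in AWS CLI v2; on newer CLI versions the equivalent is usually the following (account id and region are placeholders):
aws ecr get-login-password --region us-west-2 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-west-2.amazonaws.com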
If you're downloading from somewhere other than your own registry or Docker Hub, you might have to separately agree to terms of use on their site, as is the case with Oracle's docker registry. It allows you to do docker login fine, but pulling the image still won't work until you go to their site and agree to their terms.
Make sure the image exists on Docker Hub. In my case, I was trying to pull MongoDB using the command docker run mongodb, which is incorrect. On Docker Hub, the image name is mongo.
If you don't have an image with that name locally, docker will try to pull it from docker hub, but there's no such image on docker hub.
Or simply try "docker login".
If you are using multiple Dockerfiles you should not forget to run build for all of them. That was my case.
I had to run docker pull first, then run docker-compose up again, and then it worked.
docker pull index.docker.io/youruser/yourrepo:latest
Try this in your docker-compose.yml file
image: php:rc-zts-alpine
Running the command docker pull scrapinghub/splash multiple times in PowerShell solved the issue for me.
If it was caused by AWS EC2 and ECR, it is likely due to a naming issue (happens with beginners!):
Error response from daemon: pull access denied for my-app, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
When using docker pull, use the Image URI of the image, which is available in the ECR row itself as "Copy URI":
docker pull Image_URI
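For illustration, a hypothetical ECR image URI has this shape:
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest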
I have seen this message and thought something was wrong with my Docker authentication. However, I realized that Docker only allows 1 private repository per free plan. So it is quite possible that you are trying to pull your private repository and see this error because you have not upgraded your plan.
Got the same problem, but nothing worked. Then I understood that I needed to run the .sh (.ps1) script first, before running docker-compose.
So I have the following files:
docker-compose.yml
docker-build.sh
docker-build.ps1
Dockerfile
And I had to first run docker-build.sh on Unix (Mac) machine or docker-build.ps1 on Windows:
sh docker-build.sh
It will build an image in my case.
And only then after an image has been built I can run:
docker-compose up --build
For reference, here is my docker-compose file:
version: '3.8'
services:
  api-service:
    image: x86_64/prediction-service:0.8.1
    container_name: api-service
    expose:
      - 8060
    ports:
      - "8060:80"
And here is docker-build.sh:
VERSION="0.8.1"
ARCH="x86_64"
APP="prediction-service"
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
docker build -f $DIR/Dockerfile -t $ARCH/$APP:$VERSION .
I had misspelled nginx as nignx in the Dockerfile.
In my case the solution was to re-create the Dockerfile through Visual Studio, and everything worked perfectly.
I had the same issue.
I solved it by logging in:
docker login -u your_user_name
Then I was prompted to enter my Docker Hub password.
The rest of the commands worked perfectly after a successful login.
Someone might come across the same error for different reasons than what is already presented, so let me share:
I got the same error when using docker multistage builds (Multiple: FROM <> as <>).
And I forgot to remove one (COPY --from=<> <>)
After removing that COPY then it worked fine.
Exceeded Docker Hub's Limit on Free Repos:
Despite first executing:
docker login -u <dockerhub uname>
and "Login Succeeded" being returned, I received the error in this question.
In the web GUI, under Settings > Visibility Settings, I noticed:
Using 2 of 1 private repositories.
Which told me that I had exceeded Docker Hub's free account limit. However, removing a previous image didn't clear the error...
The Fix:
Indeed, the error message in my case was a red herring- it's nothing related to authentication issues.
Deleting just the images exceeding the allowed limit did NOT clear the error however!
To get past the error you need to delete ALL the images in your FREE Docker Hub account, then run a new build pushing the image to your account.
Your pull command will now succeed.

Unable to find image 'name:latest' locally

I am trying to run the postgres container and get the error below.
"Unable to find image 'name:latest' locally
docker: Error response from daemon: pull access denied for name, repository does not exist or may require 'docker login': denied: requested access to the resource is denied."
I have been working on the problem for a couple of days and I do not know what the problem is.
This is my command:
The issue is with your command:
docker run -- name
--name should be written without any spaces, but you have a space between -- and name.
Run your command again with the correct syntax.
To clarify more:
When you run docker run -- name, docker assumes that you are trying to pull and download an image called name, and since your name does not include any tag, it says it cannot find any image called name:latest.
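For example, with the flag written correctly, running the postgres image might look like this (the container name and password are placeholders):
docker run --name my-postgres -e POSTGRES_PASSWORD=changeme -d postgres:latest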
Just in case anyone gets this error for the same reason I did. I had built an image locally and Docker was complaining the image could not be found. It seems the error was happening because I built the image locally, but specified a different platform for docker run (I had copied the command from somewhere else). Example:
docker build -t my-image .
docker run ... --platform=linux/amd64 my-image
linux/amd64 is not my current platform. So I removed this argument and it worked.
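If you actually do need the image for that other platform, an alternative sketch (assuming BuildKit/buildx is available) is to build it for that platform instead of dropping the flag:
docker build --platform=linux/amd64 -t my-image .
docker run --platform=linux/amd64 my-image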
Answer: you can't use that image because you didn't log in to your Docker Hub account.
After creating an account, find the image you want to use and then pull it.
You can simply use docker pull [OPTIONS] NAME[:TAG|@DIGEST] to pull an image from Docker Hub and then use it as a container.
According to the docker reference
Most of your images will be created on top of a base image from the Docker Hub registry.
Docker Hub contains many pre-built images that you can pull and try without needing to define and configure your own.
To download a particular image, or set of images (i.e., a repository), use docker pull.
P.S.: Thank you for contributing to the Stack Overflow community, but for your next question please ensure that you are asking it properly by reading the Code of Conduct.
Before you pull the image from DockerHub, use docker login and then enter your username and password.
If you have not yet registered on Docker Hub, register there first.
Then you can use this command to pull your images:
docker pull imageName
Note that the image you want to pull must already exist on Docker Hub.

VSCode: How to open repositories that have been cloned to a local volume

Using the remote container extension in VSCode I have opened a repository in a dev container. I am using vscode/docker in windows/wsl2 to develop in linux containers. In order to improve disk performance I chose the option to clone the repository into a docker volume instead of bind mounting it from windows. This works great, I did some editing, but before my change was finished I closed my VSCode window to perform some other tasks. Now I have some uncommitted state in a git repository that is cloned into a docker volume, I can inspect the volume using the docker extension.
My question is, how do I reconnect to that project?
One way, if the container is still running, is to reconnect to it, then do File > Open Folder and navigate to the volume mounted inside the container. But what if the container itself has been deleted? If the files were on my Windows filesystem I could say "Open Folder" on the Windows directory and then run "Remote-Container: Reopen in dev container" or whatever, but I can't open a folder in a volume, can I?
If I understood correctly, you cloned a repository directly into a container volume using the "Clone Repository in Container Volume..." option.
Assuming the local volume is still there you should still be able to recover the uncommitted changes you saved inside it.
First make sure the volume is still there: unless you named it something in particular, it is usually named <your-repository-name>-<hex-unique-id>. Use this docker command to list the volumes and their labels:
docker volume ls --format "{{.Name}}:\t{{.Labels}}"
Notice I included the Labels property; this should help you locate the right volume, which should have a label that looks like vsch.local.repository=<your-repository-clone-url>. You can even use the filter mode of the previous command if you remember the exact URL used for cloning in the first place, like this:
docker volume ls --filter label=vsch.local.repository=<your-repository-clone-url>
If you still struggle to locate the exact volume, you can find more about the docker volume ls command in the Official docker documentation and also use docker volume inspect to obtain detailed information about volumes.
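For example:
docker volume inspect <your-volume-name>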
Once you know the name of the volume, open an empty folder on your local machine and create the necessary devcontainer file .devcontainer/devcontainer.json. Choose the image most suitable to your development environment, but in order to recover your work by performing a simple git commit, any image with git installed should do (even those that are not specifically designed to be devcontainers; here I am using Alpine because it occupies relatively little space).
Then set the workspaceFolder and workspaceMount variables to mount your volume in your new devcontainer, like this:
{
  "image": "mcr.microsoft.com/vscode/devcontainers/base:0-alpine-3.13",
  "workspaceFolder": "/workspaces/<your-repository-name>",
  "workspaceMount": "type=volume,source=<your-volume-name>,target=/workspaces"
}
If you need something more specific there you can find exhaustive documentation about the possible devcontainer configuration in the devcontainer.json reference page.
You can now use VSCode git tools and even continue the work from where you left the last time you "persisted" your file contents.
This is, by the way, the only way I know to work with VSCode devcontainers if you are using Docker through TCP or SSH with a non-local context (a.k.a. the docker VM is not running on your local machine), since your local file system is not directly available to the docker machine.
If you look at the container log produced when you ask VSCode to spin up a devcontainer for you, you will find that the actual docker run command executed by the IDE is something along this line:
docker run ... type=bind,source=<your-local-folder>,target=/workspaces/<your-local-folder-or-workspace-basename>,consistency=cached ...
meaning that if you omit to specify the workspaceMount variable in devcontainer.json, VSCode will actually do it for you like if you were to write this:
// file: .devcontainer/devcontainer.json
{
  // ...
  "workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}",
  "workspaceMount": "type=bind,source=${localWorkspaceFolder},target=/workspaces/${localWorkspaceFolderBasename},consistency=cached"
  // ...
}
Where ${localWorkspaceFolder} and ${localWorkspaceFolderBasename} are dynamic variables available in the VSCode devcontainer.json context.
Alternatively
If you just want to commit the changes and throw away the volume afterwards you can simply spin up a docker container with git installed (even the tiny Alpine linux one should do):
docker run --rm -it --name repo-recovery --mount type=volume,source=<your-volume-name>,target=/workspaces --workdir /workspaces/<your-repository-name> mcr.microsoft.com/vscode/devcontainers/base:0-alpine-3.13 /bin/bash
Then, if you are familiar with the git command line tool, you can git add and git commit all your modifications. Alternatively you can run the git commands directly instead of manually using a shell inside the container:
docker run --rm -t --name repo-recovery --mount type=volume,source=<your-volume-name>,target=/workspaces --workdir /workspaces/<your-repository-name> mcr.microsoft.com/vscode/devcontainers/base:0-alpine-3.13 /bin/bash -c "git add . && git commit -m 'Recovered from devcontainer'"
You can find a full list of devcontainers provided by MS in the VSCode devcontainers repository.
Devcontainers are an amazing tool to help you keep your environment clean and flexible. I hope this answer helped you solve your problem and expand your knowledge of this instrument a bit.
Cheers
You can also use Remote-Containers: Clone Repository in Container Volume again;
the volume and your changes will still be there.

JHipster - Using docker-compose on remote server

I would like to set up my JHipster project on a remote server utilising docker-compose as per here.
Am I right in thinking (for the simplest approach), these are the steps I might follow:
Install docker on remote system.
Install docker-compose on remote system.
On laptop (with app src code) run ./mvnw package -Pprod docker:build to produce a docker image of the application.
Copy the image produced by this to remote server like this.
Install this image on remote system.
On laptop copy relevant yml files from src/main/docker to a directory (e.g. dir/on/remote) on the remote server.
Run docker-compose -f dir/on/remote/app.yml up on the remote server.
Thanks for your help.
Also any suggestions on how this process may be improved would be appreciated.
Expecting that your server is Ubuntu,
SSH to your server,
Install docker, docker-compose, install JAVA and set JAVA_HOME
There are two approaches:
Create a docker image and push it to Docker Hub, if you have a Docker Hub account (a rough sketch of this is shown below).
Create the docker image within the server.
The second approach would be better, to reduce the confusion.
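For completeness, a rough sketch of the first approach, assuming a Docker Hub account and placeholder image names:
# on the laptop
./mvnw package -Pprod docker:build
docker tag myapp <dockerhub-username>/myapp:latest
docker push <dockerhub-username>/myapp:latest
# on the remote server
docker pull <dockerhub-username>/myapp:latest
For the second approach, on the server: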
Clone your repo to server
cd <APPLICATION_FOLDER>
Do
./mvnw package -Pprod docker:build -DskipTests
List the images created
docker images
You can ignore -DskipTests , if you are writing test code.
Do
docker-compose -f /src/main/docker/app.yml up -d
List containers running
docker ps -a
Logs of the container
docker logs <CONTAINER_ID>

Access docker within container on jenkins slave

My question is basically a combination of Access Docker socket within container and Accessing docker host from (jenkins) docker container.
My goal
is to run Jenkins fully dockerized, including dynamic slaves, and to be able to create docker containers within the slaves.
Except for the last part, everything is already working thanks to https://github.com/maxfields2000/dockerjenkins_tutorial, provided the Unix docker sock is properly exposed to the Jenkins master.
The problem
Unlike the slaves, which are provisioned dynamically, the master is started via docker-compose and thus has proper access to the UNIX socket.
For the slaves which are spawned dynamically, this approach does not work.
I tried to forward access to docker like this:
VOLUME /var/run/docker.sock
VOLUME /var/lib/docker
during building the image. Unfortunately, so far I get a Permission denied (socket: /run/docker.sock) when trying to access docker.sock in the slave, which was created like this: https://gist.github.com/geoHeil/1752b46d6d38bdbbc460556e38263bc3
The strange thing is: the user in the slave is root.
So why do I not have access to the docker.sock? Or how could I burn in the --privileged flag so that the permission denied problem would go away?
With docker 1.10 a new user namespace was introduced, so sharing docker.sock isn't enough, as root inside the container isn't root on the host machine anymore.
I recently played with Jenkins container as well, and I wanted to build containers using the host docker engine.
The steps I did are:
Find group id for docker group:
$ id
..... 999(docker)
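If the docker group does not show up in your own id output, the gid can also be looked up directly, for example:
getent group docker | cut -d: -f3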
Run jenkins container with two volumes - one contains the docker client executable, the other shares the docker unix socket. Note how I use --group-add to add the container user to the docker group, to allow access:
docker run --name jenkins -tid -p 8080:8080 --group-add=999 -v /path-to-my-docker-client:/home/jenkins/docker -v /var/run/docker.sock:/var/run/docker.sock jenkins
Tested and found it indeed works:
docker exec -ti jenkins bash
./docker ps
See more about additional groups here
Another approach would be to use the --privileged flag instead of --group-add, yet it's better to avoid it if possible.