My environment:
Microsoft Windows 10 64bit
Docker, version 1.10.3, build 20f81dd, installed from Docker Toolbox
CloudFoundry CLI, cf version 6.16.1+924508c-2016-02-26
Bluemix CLI, bx version 0.3.1-5206a18-2016-03-01T08:16:52+00:00
I ran the command cf install-plugin https://static-ice.ng.bluemix.net/ibm-containers-windows_x64.exe under both CMD and the Boot2Docker VM to install the cf ic plugin, as described in the official guide. The result was "exit status 2".
Output:
Attempting to download binary file from internet address...
10325504 bytes downloaded...
Installing plugin C:\Users\myusername\AppData\Local\Temp\
ibm-containers-windows_x64.exe...
FAILED exit status 2
What is "exit status 2" anyway? I found a section that mentions "exit status 1" in the troubleshooting document, but there is no information about "exit status 2".
I installed all the programs with the default settings, yet the cf ic plugin did not install properly.
How can I fix this problem?
Install the new bx plugin, which includes the containers feature as well: http://clis.ng.bluemix.net/ui/home.html
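Assuming the IBM plugin repository is registered under the name Bluemix (the repository name, URL, and plugin name below are assumptions; verify them against the plugin page linked above), installation might look like this:

```shell
# Register the IBM plugin repository (name and URL are assumptions;
# check the plugin page linked above for the current values)
bx plugin repo-add Bluemix https://plugins.ng.bluemix.net

# Install the containers plugin from that repository
bx plugin install IBM-Containers -r Bluemix

# Confirm the plugin is now available
bx plugin list
```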
bx ic help
USAGE:
bluemix ic COMMAND [COMMAND OPTIONS] [ARGUMENTS...]
IBM Containers commands:
attach (Docker)       Attach to a running container
build (Docker)        Build an image from a Dockerfile
create (Docker)       Create a new container
cpi                   Copy image
exec (Docker)         Run a command in a running container
groups                List all container groups
group-inspect         View the info of specified container group
group-instances       List instances of specified container group
group-create          Create a new container group
group-update          Update an existing container group
group-remove          Remove a container group
images (Docker)       List images
inspect (Docker)      Return low-level information on a container or image
info                  Display information about IBM Containers
init                  Initialize IBM Containers CLI
ips                   List all IP addresses
ip-request            Request an IP address
ip-release            Release an IP address
ip-bind               Bind an IP address to a container instance
ip-unbind             Unbind an IP address from a container instance
kill (Docker)         Kill a running container
namespace-get         Get current container namespace
namespace-set         Set container namespace
pause (Docker)        Pause all processes within a container
port (Docker)         List port mappings or a specific mapping for the container
ps (Docker)           List containers
restart (Docker)      Restart a running container
rm (Docker)           Remove one or more containers
rmi (Docker)          Remove one or more images
run (Docker)          Run a command in a new container
route-map             Map a route to container group
route-unmap           Unmap a route from container group
start (Docker)        Start a stopped container
stats (Docker)        Display a live stream of container(s) resource usage statistics
stop (Docker)         Stop a running container
top (Docker)          Display the running processes of a container
unpause (Docker)      Unpause a paused container
volumes               List all volumes
volume-inspect        View the info of specified volume
volume-create         Create a new volume
volume-remove         Remove a volume
volume-fs             List filesystems
volume-fs-create      Create a new filesystem
volume-fs-remove      Remove a filesystem
volume-fs-inspect     Inspect a filesystem
volume-fs-flavors     List filesystem flavors
version (Docker)      Show the Docker version information
wait (Docker)         Block until a container stops, then print its exit code
help
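Once the plugin is installed, the workflow mirrors the docker CLI, using the subcommands listed above. A sketch (the registry path, namespace, and image name are placeholders):

```shell
# One-time initialization: authenticate and wire the plugin to your account
bx ic init

# Then use the familiar docker-style subcommands
bx ic images
bx ic run --name my-container registry.ng.bluemix.net/mynamespace/myimage
bx ic ps
```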
Related
I am trying to learn Kubernetes and installed Minikube on my local machine. I created a sample Python app, and its image was pushed to the public Docker registry. I started the workload with
kubectl apply -f <<my-app.yml>>
It got started as expected. I stopped Minikube, deleted all the containers and images, and restarted my Mac.
My Questions
I start Docker Desktop, and as soon as I run
minikube start
Minikube pulls the images from the public Docker registry and starts the container. Is there a configuration file that Minikube reads in order to start the container I had deleted locally? I cannot figure out where Minikube picks up my-app's configuration, which was defined in the manifest folder.
I have looked for config files and found the cache folder, but it does not contain any information about my app.
I found this is expected behavior:
minikube stop command should stop the underlying VM or container, but keep user data intact.
So when I manually delete the already-created resources (with kubectl), they do not start again automatically.
More information:
https://github.com/kubernetes/minikube/issues/13552
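In other words, deleting the Docker containers and images is not enough, because the cluster state (the Deployment created by kubectl apply) survives minikube stop. To remove the workload for good, delete the Kubernetes resources themselves, or the whole cluster. A sketch, using the manifest file name from the question:

```shell
# Remove the resources defined in the manifest, so Minikube
# no longer recreates them on the next start
kubectl delete -f my-app.yml

# Or wipe the entire local cluster, including all stored state
minikube delete
```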
I have a VM with Kubernetes installed using kubeadm (NOT Minikube). The VM acts as the single node of the cluster, with taints removed to allow it to act as both master and worker node (as shown in the Kubernetes documentation).
I have saved, transferred, and loaded my app:test image into it. I can easily run a container with it using docker run.
It shows up when I run sudo docker images.
When I create a deployment/pod that uses this image and specify imagePullPolicy: IfNotPresent or Never, I still get the ImagePullBackOff error. The describe command shows me it tries to pull the image from Docker Hub...
Note that when I try to use a local image that was pulled as a result of creating another pod, the image pull policies seem to work fine, although that image doesn't appear when I run sudo docker images --all.
How can I use a local image for pods in kubernetes? Is there a way to do it without using a private repository?
image doesn't appear when I run sudo docker images --all
Based on your comment, you are using K8s v1.22, which means your cluster is likely using the containerd container runtime instead of Docker (you can check with kubectl get nodes -o wide and look at the last column).
Try listing your images with crictl images and pulling with crictl pull <image_name> to preload the images on the node.
One can do so with a combination of crictl and ctr, if using containerd.
TLDR: these steps, which are also described in the crictl github documentation:
1- Once you get the image on the node (in my case, a VM), make sure it is in an archive (.tar). You can do that with the docker save or ctr image export commands.
2- Use sudo ctr -n=k8s.io images import myimage.tar while in the same directory as the archived image to add it to containerd in the namespace that Kubernetes uses to track its images. It should now appear when you run sudo crictl images.
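The two steps above can be sketched end to end, using the app:test image name from the question:

```shell
# Step 1: archive the image (run this wherever the image exists)
docker save -o app-test.tar app:test

# Step 2: import the archive into containerd's k8s.io namespace
# on the node, which is where the kubelet looks for images
sudo ctr -n=k8s.io images import app-test.tar

# Verify that the image is now visible to the container runtime
sudo crictl images | grep app
```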
As suggested, I tried listing images with crictl and my app:test did not appear. However, trying to import my local image through crictl didn't seem to work either. I used crictl pull app:test and it showed the following error message:
FATA[0000] pulling image failed: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/app:test": failed to resolve reference "docker.io/library/app:test": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed.
However, after following these steps, my image was finally recognized as an existing local image in Kubernetes. They are actually the same steps as suggested in the crictl GitHub documentation.
How does one explain this? How do images get "registered" in the kubernetes cluster? Why couldn't crictl import the image? I might post another issue to ask that...
Your cluster is bottled up inside your VM, so what you call local will always be remote for the cluster in that VM. The reason Kubernetes is trying to pull those images is that it can't find them in the VM.
Docker Hub is the default place to download containers from, but you can set Kubernetes to pull from AWS (ECR), from Azure (ACR), from Google (GCR), from GitHub Packages (GHCR), or from your own private server.
You've got about 100 ways to solve this; none of them is easy or will just work.
1 - Easiest: push your images to Docker Hub and let your cluster pull from it.
2 - Set up a local private container registry and configure your Kubernetes VM to pull from it (see this).
3 - Set up a private container registry in your Kubernetes cluster, and add scripts in your local environment to push to it (see this).
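Option 2 can be sketched like this; port 5000 is the stock registry image's default, and app:test stands in for your image. Note the registry must also be reachable from the VM, and trusted as an insecure registry unless you add TLS, which is part of why none of these options just work:

```shell
# Run a private registry on the build machine
docker run -d -p 5000:5000 --name registry registry:2

# Tag the local image with the registry-qualified name and push it
docker tag app:test localhost:5000/app:test
docker push localhost:5000/app:test

# In the pod spec, reference the registry-qualified name, e.g.:
#   image: <registry-host>:5000/app:test
```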
Is it possible to take an image or a snapshot of container running inside pod using kubectl?
Via docker, it is possible to use the docker commit command, which creates an image of a container from which we can spawn more containers. I wanted to understand whether there is something similar we can do with kubectl.
No, partly because that is not in the Kubernetes mental model of anything one would wish to do to a cluster, and partly because Docker is not the only container runtime Kubernetes uses. Every runtime one could use underneath Kubernetes would need to support that operation, and I doubt they all do.
You are welcome to do your own docker commit, either by getting a shell on the node, or by running a privileged Pod, connecting to docker.sock via a volumeMount, and running it that way.
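A sketch of the node-shell variant, assuming the cluster's runtime actually is Docker (the container and image names below are placeholders):

```shell
# On the node that runs the pod, find the container backing it
docker ps | grep my-pod-name

# Snapshot the running container into a new image
docker commit <container-id> my-snapshot:latest

# The new image is now available to docker on that node
docker images | grep my-snapshot
```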
I was trying to deploy my local Docker image on Kubernetes, but it doesn't work for me.
I loaded the image into Docker and tagged it as app:v1, then ran it with kubectl this way: kubectl run app --image=app:v1 --port=8080.
When I look up my pods, I see the error "Failed to pull image "app:v1": rpc error: code = 2 desc = Error: image library/app not found".
What am I doing wrong?
In the normal case your Kubernetes cluster runs on a different machine than the one your docker build ran on, so it has no access to your local image (unless you are using Minikube and you eval Minikube's environment to run your docker commands against the docker daemon powering the Minikube install).
To get it working you need to push the image to a registry available to kubernetes cluster.
By running your command, you actually tell Kubernetes to pull app:v1 from the official Docker Hub hosted images.
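A minimal sketch of both fixes; myuser is a placeholder Docker Hub account name:

```shell
# Fix A: push the image to a registry the cluster can reach,
# then reference the registry-qualified name
docker tag app:v1 myuser/app:v1
docker push myuser/app:v1
kubectl run app --image=myuser/app:v1 --port=8080

# Fix B (Minikube only): build straight into Minikube's docker daemon,
# and tell the kubelet never to pull
eval $(minikube docker-env)
docker build -t app:v1 .
kubectl run app --image=app:v1 --port=8080 --image-pull-policy=Never
```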
I have an instance group of Container VMs running my app on a docker container.
I am trying to find a good strategy to manage the application logs for docker + MEAN + Google Cloud Compute Machines.
I can see the logs on individual containers running docker logs [container_id].
However, if I stop and start the VM, I lose those logs. I also have VMs added dynamically by the autoscaler and would like a convenient way to access their logs.
The stack is MEAN and the logging tool is Bunyan.
Is it possible to centralize or combine the logs from all VMs in one persistent location?
Any suggestions?
UPDATES:
I installed the fluentd agent, and now I can see logs when I manually run things in the shell: logger "some message for testing"
However, the logs from the docker container on my container VM never show up in Google Cloud Logging.
I still don't know how to get those docker logs to appear there; they are supposed to be collected automatically.
cheers
Leo
Here is a yaml, a Dockerfile, and a conf for a fluentd pod inside Kubernetes.
Adjust the yaml to mount a disk:
https://github.com/GoogleCloudPlatform/kubernetes/tree/master/contrib/logging/fluentd-sidecar-gcp
Then adjust the config to log to the disk.
Build the container with the new configuration.
Deploy the new container.