How can I add a root CA to minikube? - kubernetes

My company uses its own root CA, and when I'm trying to pull images, even from a private registry, I'm getting this error:
1h 3m 22 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for gcr.io/google_containers/pause-amd64:3.0, this may be because there are no credentials on this request. details: (Error response from daemon: Get https://gcr.io/v1/_ping: x509: certificate signed by unknown authority)"
1h 10s 387 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/pause-amd64:3.0\""
How can I install the root CA in minikube, or avoid this message altogether, i.e. use only the private registry and don't pull anything from gcr.io?

The only solution I've found so far is adding the --insecure-registry gcr.io option to minikube.
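For reference, a minimal sketch of passing that flag; note that minikube's registry flags are generally only honored when the VM is first created, so an existing cluster may need to be recreated:
minikube delete
minikube start --insecure-registry gcr.io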

To address:
x509: certificate signed by unknown authority
Could you please try the following suggestion from the Minikube repo?
copy the cert into the VM. The location should be:
/etc/docker/certs.d/
from here: https://docs.docker.com/engine/security/certificates/
That thread also includes the following one-liner:
cat <certificatefile> \
| minikube ssh "sudo mkdir -p /etc/docker/certs.d/<domain> && sudo tee /etc/docker/certs.d/<domain>/ca.crt"
The issue here is that the CA trust chain of the Linux host needs to be updated. The easiest way is to reboot the Linux host after copying the certs into the VM; if rebooting is not an option, look for a way to run update-ca-certificates.
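As a concrete sketch, assuming the internal registry is reachable at registry.example.com and the company CA is in my-root-ca.crt (both hypothetical names):
cat my-root-ca.crt | minikube ssh "sudo mkdir -p /etc/docker/certs.d/registry.example.com && sudo tee /etc/docker/certs.d/registry.example.com/ca.crt"
After that, reboot the VM (or update its CA trust) as described above.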
Just restarting the Docker daemon will most likely not solve this issue.
Note: allowing the Docker daemon to use insecure registries means certificates aren't verified. While this may help, it does not solve the question asked here.

A straightforward way to do this, independent of minikube, is to use an imagePullSecrets configuration. As documented in the Pull an Image from a Private Registry guide, you can create a Secret and use it along with your image like this:
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: private-container
    image: <your-private-image>
  imagePullSecrets:
  - name: the_secret
Where the_secret could be created with:
kubectl create secret docker-registry the_secret --docker-server=xxx --docker-username=xxx --docker-password=xxx --docker-email=xxx
Or with:
kubectl create secret generic the_secret --from-file=.docker/config.json
Or with anything else related to kubectl create secret - see the documentation for details.
Edit: Even in the official minikube documentation you'll find that they use Secrets along with the registry-creds addon.
You'll find the generic documentation here and the documentation for the addon here.
It boils down to:
minikube addons enable registry-creds
But technically it does the same as described above.
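If you go this route, the addon's registry credentials are typically set up with an interactive configure step before enabling it; a minimal sketch:
minikube addons configure registry-creds
minikube addons enable registry-creds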

You can add your company's certificate to the minikube Kubernetes cluster by first copying the certificate to $HOME/.minikube/certs:
mkdir -p $HOME/.minikube/certs
cp my-company.pem $HOME/.minikube/certs/my-company.pem
And then starting minikube with the --embed-certs option:
minikube start --embed-certs
Source: Official documentation

Related

Failed to create secret Post "http://localhost:8080/api/v1/namespaces/keycloak-test/secrets?fieldManager=kubectl-create&fieldValidation=Strict"

I am new to Kubernetes. I am just trying to create a TLS secret using kubectl. My ultimate goal is to deploy a Keycloak cluster in Kubernetes.
So I followed this YouTube tutorial, but it doesn't mention how to generate my own TLS key and TLS cert. To do that I used this documentation (https://www.linode.com/docs/guides/create-a-self-signed-tls-certificate/).
Then I could generate MyCertTLS.crt and MyKeyTLS.key
gayan#Gayan:/srv$ cd certs
gayan#Gayan:/srv/certs$ ls
MyCertTLS.crt MyKeyTLS.key
To create the secret in Kubernetes, I ran this command:
sudo kubectl create secret tls my-tls --key="MyKeyTLS.key" --cert="MyCertTLS.crt" -n keycloak-test
But it's not working; I got this error:
gayan#Gayan:/srv/certs$ sudo kubectl create secret tls my-tls --key="MyKeyTLS.key" --cert="MyCertTLS.crt" -n keycloak-test
[sudo] password for gayan:
error: failed to create secret Post "http://localhost:8080/api/v1/namespaces/keycloak-test/secrets?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 127.0.0.1:8080: connect: connection refused
Note:
Minikube is running, and the Ingress addon is also enabled.
I have created a namespace called keycloak-test.
gayan#Gayan:/srv/keycloak$ kubectl get namespaces
NAME                   STATUS   AGE
default                Active   3d19h
ingress-nginx          Active   119m
keycloak-test          Active   4m12s
kube-node-lease        Active   3d19h
kube-public            Active   3d19h
kube-system            Active   3d19h
kubernetes-dashboard   Active   3d19h
I am trying to fix this error, but I have no idea why I'm getting it. Looking for a solution from this genius community.
I figured this out! I'm posting this because it may be helpful for someone.
I was getting that error,
error: failed to create secret Post "http://localhost:8080/api/v1/namespaces/keycloak-test/secrets?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 127.0.0.1:8080: connect: connection refused
because my Kubernetes API server is running on a different port.
You can see which port your Kubernetes API server is running on with this command:
kubectl config view
For example, if you see server: localhost:40475, it means your API server is running on port 40475.
(The Kubernetes default port is 8443.)
Then you should specify the correct port in the kubectl command that creates the secret.
So, I add --server=https://localhost:40475 to my command.
kubectl create secret tls my-tls --key="tls.key" --cert="tls.crt" -n keycloak-test --server=https://localhost:40475
Another thing: if you get an error like permission denied,
you have to change the permissions of your tls.key and tls.crt files.
I did this by running these commands,
sudo chmod 666 tls.crt
sudo chmod 666 tls.key
Then you should run the above kubectl command without sudo. It works!
If you run that command with sudo, it will ask for a username and password, which confused me, and it did not work.
So, by doing it this way, I solved this issue!
Hope this will help someone! Thanks!
In your examples, kubectl get namespaces works, but sudo kubectl create secret doesn't.
You don't need sudo to work with Kubernetes. In particular, the connection information is stored in a $HOME/.kube/config file by default, but when you sudo kubectl ..., that changes the home directory and you can't find the connection information.
The standard Kubernetes assumption is that the cluster is remote, and so your local user ID doesn't really matter to it. All that does matter is the Kubernetes-specific permissions assigned to the user that's accessing the cluster.
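If it helps to see the difference, here is a minimal sketch using standard kubectl options (nothing here is specific to this cluster):
# as your normal user: print the API server address stored in ~/.kube/config
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# if you really must use sudo, point kubectl at your user's kubeconfig explicitly
sudo kubectl --kubeconfig "$HOME/.kube/config" get namespaces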

Kubeflow pipeline fail to create container

I'm running Kubeflow on a local machine that I deployed with Multipass using these steps, but when I tried running my pipeline, it got stuck with the message ContainerCreating. When I ran kubectl describe pod train-pipeline-msmwc-1648946763 -n kubeflow I found this in the Events part of the output:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedMount 7m12s (x51 over 120m) kubelet, kubeflow-vm Unable to mount volumes for pod "train-pipeline-msmwc-1648946763_kubeflow(45889c06-87cf-4467-8cfa-3673c7633518)": timeout expired waiting for volumes to attach or mount for pod "kubeflow"/"train-pipeline-msmwc-1648946763". list of unmounted volumes=[docker-sock]. list of unattached volumes=[podmetadata docker-sock mlpipeline-minio-artifact pipeline-runner-token-dkvps]
Warning FailedMount 2m22s (x67 over 122m) kubelet, kubeflow-vm MountVolume.SetUp failed for volume "docker-sock" : hostPath type check failed: /var/run/docker.sock is not a socket file
Looks to me like there is a problem with my deployment, but I'm new to Kubernetes and can't figure out what I'm supposed to do right now. Any idea how to solve this? I don't know if it helps, but I'm pulling the containers from a private Docker registry and I've set up the secret according to this.
You don't need to use Docker. In fact, the problem is with the workflow-controller-configmap in the kubeflow namespace. You can edit it with
kubectl edit configmap workflow-controller-configmap -n kubeflow
and change containerRuntimeExecutor: docker to containerRuntimeExecutor: pns. You can also change some of the steps and install Kubeflow 1.3 in Multipass with 1.21 rather than 1.15. Do not use the kubeflow add-on (at least it didn't work for me). You need kustomize 3.2 to create the manifests, as mentioned in https://github.com/kubeflow/manifests#installation.
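If you prefer not to open an editor, a kubectl patch along these lines should also work; this is a sketch that assumes containerRuntimeExecutor is a top-level key under the ConfigMap's data (check with kubectl get configmap workflow-controller-configmap -n kubeflow -o yaml first, since in some versions the setting lives inside a config block):
kubectl patch configmap workflow-controller-configmap -n kubeflow --type merge -p '{"data":{"containerRuntimeExecutor":"pns"}}'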
There was one step missing that is not mentioned in the tutorial: I had to install Docker. I installed Docker, rebooted the machine, and now everything works fine.

Pulling local repository docker image from kubernetes

I'm trying to install a sample container app using a Pod in my local environment.
I'm using the Kubernetes cluster that comes with Docker Desktop.
I'm creating the Pod using the below command with this YAML file:
kubectl create -f test_image_pull.yml
apiVersion: v1
kind: Pod
metadata:
  # value must be lower case
  name: sample-python-web-app
spec:
  containers:
  - name: sample-hello-world
    image: local/sample:latest
    imagePullPolicy: Always
    command: ["echo", "SUCCESS"]
Dockerfile used to build the image; the container runs without any issue if you run it with docker run:
# Use official runtime python
FROM python:2.7-slim
# set work directory to app
WORKDIR /app
# Copy current directory
COPY . /app
# install needed packages
RUN pip install --trusted-host pypi.python.org -r requirement.txt
# Make port 80 available to outside container
EXPOSE 80
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python" , "app.py"]
from flask import Flask
from redis import Redis, RedisError
import os
import socket

# connect to redis
redis = Redis(host="redis", db=0, socket_connect_timeout=2, socket_timeout=2)

app = Flask(__name__)

@app.route("/")
def hello():
    try:
        visits = redis.incr("counter")
    except RedisError:
        visits = "<i>cannot connect to Redis, counter disabled</i>"

    html = "<h3>Hello {name}!</h3>" \
           "<b>Hostname:</b> {hostname}<br/>" \
           "<b>Visits:</b> {visits}"
    return html.format(
        name=os.getenv("NAME", "world"),
        hostname=socket.gethostname(),
        visits=visits
    )

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)
requirement.txt (referenced by the Dockerfile):
Flask
Redis
When I describe the pod, it shows me the below error:
kubectl describe pod sample-python-web-app
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m25s default-scheduler Successfully assigned default/sample-python-web-app to docker-desktop
Normal Pulling 97s (x4 over 3m22s) kubelet, docker-desktop Pulling image "local/sample:latest"
Warning Failed 94s (x4 over 3m17s) kubelet, docker-desktop Failed to pull image "local/sample:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for local/sample, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Warning Failed 94s (x4 over 3m17s) kubelet, docker-desktop Error: ErrImagePull
Normal BackOff 78s (x6 over 3m16s) kubelet, docker-desktop Back-off pulling image "local/sample:latest"
Warning Failed 66s (x7 over 3m16s) kubelet, docker-desktop Error: ImagePullBackOff
Kubernetes pulls container images from a Docker Registry. Per the doc:
You create your Docker image and push it to a registry before
referring to it in a Kubernetes pod.
Moreover:
The image property of a container supports the same syntax as the
docker command does, including private registries and tags.
So, with the image referenced in the pod's spec as image: local/sample:latest, Kubernetes looks on Docker Hub for the image in a repository named "local".
You can push the image to Docker Hub or some other external Docker Registry, public or private; you can host a Docker Registry on the Kubernetes cluster; or you can run a Docker Registry locally, in a container.
To run a Docker registry locally:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Next, find the IP address of the host - below I'll use 10.0.2.1 as an example.
Then, assuming the image name is "local/sample:latest", tag the image:
docker tag local/sample:latest 10.0.2.1:5000/local/sample:latest
...and push the image to the local registry:
docker push 10.0.2.1:5000/local/sample:latest
Next, change how the image is referenced in the pod's configuration YAML - from
image: local/sample:latest
to
image: 10.0.2.1:5000/local/sample:latest
Restart the pod.
EDIT: Most likely the local Docker daemon will have to be configured to treat the local Docker registry as insecure. One way to configure that is described here - just replace "myregistrydomain.com" with the host's IP (e.g. 10.0.2.1). Docker Desktop also allows you to edit the daemon's configuration file through the GUI.
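For reference, a minimal sketch of that daemon configuration on a Linux host; the registry address is just the example IP and port used above, and with Docker Desktop you would paste the same JSON into Settings > Docker Engine instead:
# merge this with any existing settings rather than blindly overwriting daemon.json
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "insecure-registries": ["10.0.2.1:5000"]
}
EOF
sudo systemctl restart docker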
If you want to set up a local repository for a Kubernetes cluster, you might follow this guide.
I would recommend using Trow.io, an image management solution for Kubernetes, to quickly create a registry that runs within Kubernetes and provides a secure and fast way to get containers running on the cluster.
We're building an image management solution for Kubernetes (and possibly other orchestrators). At its heart is the Trow Registry, which runs inside the cluster, is simple to set-up and fully integrated with Kubernetes, including support for auditing and RBAC.
Why "Trow"
"Trow" is a word with multiple, divergent meanings. In Shetland folklore a trow is a small, mischievous creature, similar to the Scandinavian troll. In England, it is an old style of cargo boat that transported goods on rivers. Finally, it is an archaic word meaning "to think, believe, or trust". The reader is free to choose which interpretation they like most, but it should be pronounced to rhyme with "brow".
The whole installation process is described here.

Kubernetes Kops - set sysctl flag on kubelet

We need to enable some sysctl parameters in Kubernetes. This should be achievable with the below annotation in the Deployment:
annotations:
  security.alpha.kubernetes.io/unsafe-sysctls: net.ipv4.ip_local_port_range="10240 65535"
When doing so the container fails to start with the error:
Warning FailedCreatePodSandBox 8s (x12 over 19s) kubelet, <node> Failed create pod sandbox.
The solution looks to be to add this flag to the kubelet:
--experimental-allowed-unsafe-sysctls
Which for other flags can be done under kubelet in
kops edit cluster
Does anyone know the correct way to do this, as it refuses to pick up the setting when the flag is entered there?
Thanks,
Alex
A fix for this was merged back in May, you can see the PR here: https://github.com/kubernetes/kops/pull/5104/files
You'd enable it with:
spec:
  kubelet:
    ExperimentalAllowedUnsafeSysctls:
    - 'net.ipv4.ip_local_port_range="10240 65535"'
It seems the flag takes a stringSlice, so you'd need to pass an array.
If that doesn't work, ensure you're using the right version of kops.
As of 2020-05-18, the proper config is, for example:
kubelet:
  allowedUnsafeSysctls:
  - net.ipv4.ip_local_port_range="10240 65535"
In general, all KOPS config must be camelCased.
From here, KOPS 1.16.2+
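Whichever key your kops version uses, editing the cluster spec alone is not enough; the change still has to be applied and rolled out to the nodes. A sketch of the usual sequence (cluster name and state store flags omitted):
kops edit cluster
kops update cluster --yes
kops rolling-update cluster --yes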

kubernetes-dashboard CrashLoopBackOff: Couldn't read CA certificate: open : no such file or directory

I just installed a single-node Kubernetes cluster on CentOS 7 using kubeadm according to this manual, then installed the kubernetes-dashboard extension. But the pod status is CrashLoopBackOff.
I have checked the logs of the dashboard Docker container and found the following error:
...
2017/10/24 10:15:57 Serving securely on HTTPS port: 8443
2017/10/24 10:15:57 Couldn't read CA certificate: open : no such file or directory
What does this mean?
You need to mount your certificate into your kubernetes-dashboard deployment so it can access your SSL/TLS certificate.
I assume you are using the following deployment:
https://github.com/kubernetes/dashboard/blob/master/src/deploy/recommended/kubernetes-dashboard.yaml
So you need to add your private key and certificate to your Kubernetes Secret "kubernetes-dashboard-certs".
For cert generation, see: https://github.com/kubernetes/dashboard/wiki/Certificate-management
For more information about Secrets in K8s see:
https://kubernetes.io/docs/concepts/configuration/secret/
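A minimal sketch of populating that Secret, assuming dashboard.key and dashboard.crt are the files you generated and the dashboard runs in the kube-system namespace (adjust file names and namespace to match your deployment, and delete any existing placeholder secret first):
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kube-system
# delete the dashboard pod so the deployment recreates it with the new secret mounted
kubectl delete pod -n kube-system -l k8s-app=kubernetes-dashboard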