I'm trying to use kubeadm dind, but I'm having trouble with private registries. I created a private Docker registry that runs over HTTP, but Kubernetes running in dind refuses to use HTTP and keeps trying to pull over HTTPS.
The error I'm receiving is this:
Failed to pull image "192.168.2.5:5000/inotes-init-db:1.0.18": rpc
error: code = Unknown desc = Error response from daemon: Get
https://192.168.2.5:5000/v2/: http: server gave HTTP response to HTTPS
client
But the registry is running on http...
$ curl -X GET http://192.168.2.5:5000/v2/_catalog 2> /dev/null| jq
{
"repositories": [
"inotes-init-db",
"intelli-notes"
]
}
As you can see above, it works fine over HTTP, but if I try HTTPS, it fails:
$ curl -X GET https://192.168.2.5:5000/v2/_catalog
curl: (35) gnutls_handshake() failed: An unexpected TLS packet was received
I also thought I could update each node's /etc/default/docker.json file, but I can't shell into the nodes.
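For reference, the Docker-level setting I'm trying to apply on each node is the `insecure-registries` entry, which would normally live in `/etc/docker/daemon.json` (path assumed; the dind nodes may keep their daemon config elsewhere):

```json
{
  "insecure-registries": ["192.168.2.5:5000"]
}
```

The Docker daemon on each node would then need a restart for the setting to take effect.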
So, how do I get kubeadm to use HTTP?
Hi, I am attempting to create a set of dynamic admission webhooks (registry whitelisting, mostly for security-context checks). This is the chart that I am using. Everything works fine when deployed to two other EKS clusters, but when I deploy it to a more locked-down cluster that we are setting up (using Bottlerocket OS, among other things) I get the following error:
Error from server (InternalError): Internal error occurred: failed calling webhook "...": failed to call webhook: Post "https://image-admission-controller-webhook.kube-system.svc:443/validate?timeout=2s": context deadline exceeded
I have verified that the service has an endpoint, the selector label maps to a pod, and that I am able to curl the above URL using a test curl image. What should I do? Thanks!
I needed to add a security group rule for the control plane allowing 443 outbound to the RFC1918 ranges.
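For anyone hitting the same thing, such a rule can be added with the AWS CLI along these lines (the security group ID and CIDR below are placeholders; use your cluster's control plane security group and your node network's range):

```
# Allow the EKS control plane SG to reach webhook endpoints on port 443.
# sg-0123456789abcdef0 and 10.0.0.0/8 are placeholders for your environment.
aws ec2 authorize-security-group-egress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 10.0.0.0/8
```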
I can interact with my OpenSearch Docker container via curl -XGET https://localhost:9200 -u 'admin:admin' --insecure, but that's no use when I want to automate requests to it. I need to be able to access it, even over plain HTTP (i.e., without HTTPS).
command:
curl -XGET https://localhost:9200
error:
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
I did follow that link; it's not a solution, just an explanation that sent me down a rabbit hole I only just escaped.
The awful thing is that this happened to me before and I fixed it, but it was not on a docker container and I don't remember how I fixed it.
You can disable the security plugin in your Dockerfile:
RUN echo "plugins.security.disabled: true" >> /usr/share/opensearch/config/opensearch.yml
Your OpenSearch will then be accessible via http://localhost:9200. I do this to set up my data, and then load /usr/share/opensearch/data into another container set up with security.
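Put together, a minimal Dockerfile for an HTTP-only instance might look like this (the base image tag is just an example):

```dockerfile
FROM opensearchproject/opensearch:2.11.0
# Disable the security plugin so the node answers plain HTTP on 9200
RUN echo "plugins.security.disabled: true" >> /usr/share/opensearch/config/opensearch.yml
```

Note that with the security plugin disabled there is no TLS and no authentication at all, so only do this for local development or for seeding data as described above.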
I am running Grafana v6.2.4 in kubernetes, using basic auth. I want to use the k8s proxy for testing (i.e. kubectl proxy --port=8080). I have changed the GF_SERVER_ROOT_URL environment variable to:
{
"name": "GF_SERVER_ROOT_URL",
"value": "http://localhost:8080/api/v1/namespaces/my-namespace/services/grafana-prom:80/proxy/"
}
This allows me to log in and use Grafana through my browser at http://localhost:8080/api/v1/namespaces/my-namespace/services/grafana-prom:80/proxy/.
However, I want to use it via the API. If I send a request to http://localhost:8080/api/v1/namespaces/my-namespace/services/grafana-prom:80/proxy/api/dashboards/db I get back
{
"message": "Unauthorized"
}
However, if I set up a kubernetes port forward and send the identical request to http://localhost:30099/api/dashboards/db, then it succeeds.
Is there a different environment variable aside from GF_SERVER_ROOT_URL that I should be changing so that the API server root goes through the k8s proxy, i.e. http://localhost:8080/api/v1/namespaces/my-namespace/services/grafana-prom:80/proxy/api/dashboards/db? I have looked here but couldn't find it.
Otherwise what is the correct way to access the API through the k8s proxy?
I should add that I am specifically trying to use kubectl proxy as an alternative to kubectl port-forward, so I'm hoping for an alternative to the suggestion here: https://stackoverflow.com/a/45189081/1011724
I tried to replicate this in minikube, and I may have found the reason your requests through the API server proxy (using kubectl proxy) are not being authorized correctly.
Running the following curl-command:
curl -H "Authorization: Bearer <TOKEN>" http://localhost:8080/api/v1/namespaces/my-namespace/services/grafana-prom:80/proxy/api/dashboards/home
and using tcpdump to capture the requests in the Pod with tcpdump -vvvs 0 -l -A -i any yielded the following result:
GET /api/dashboards/home HTTP/1.1
Host: localhost:8080
User-Agent: curl/7.58.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 127.0.0.1, 192.168.99.1
X-Forwarded-Uri: /api/v1/namespaces/my-namespace/services/grafana-prom:80/proxy/api/dashboards/home
This GET request doesn't have the Authorization header (resulting in a 401 Unauthorized), so the API server appears to strip this HTTP header as it passes the request down to the Pod.
If I instead use kubectl port-forward -n my-namespace svc/grafana-prom 8080:80, the GET request looks like this:
GET /api/dashboards/home HTTP/1.1
Host: localhost:8080
User-Agent: curl/7.58.0
Accept: */*
Authorization: Bearer <TOKEN>
While writing this answer I found issue #38775 in the k/k repo; to quote one of the comments:
this is working as expected. "proxying" through the apiserver will not get you standard proxy behavior (preserving Authorization headers end-to-end), because the API is not being used as a standard proxy
This basically means that kubectl proxy will not work for authenticating through it: it's not a regular reverse proxy and, probably for good reason, won't preserve Authorization headers.
Note that I tested both token and basic authorization using curl, although token-based auth is shown above.
Hopefully this clears things up a bit!
Question
Which format of kubeadm init --token is correct?
(2/4) Initializing your master shows "--token xyz".
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
kubeadm init shows "--token=xyz".
kubeadm join --token=abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234..cdef 192.168.1.1:6443
The execution log (using Ansible) showed several error messages. I wonder if this is related to the format.
changed: [192.168.99.12] => {...
"[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.",
"[preflight] Running pre-flight checks",
"[preflight] Starting the kubelet service",
"[discovery] Trying to connect to API Server \"192.168.99.10:6443\"",
"[discovery] Created cluster-info discovery client, requesting info from \"https://192.168.99.10:6443\"",
"[discovery] Failed to connect to API Server \"192.168.99.10:6443\": there is no JWS signed token in the cluster-info ConfigMap. This token id \"7ae0ed\" is invalid for this cluster, can't connect",
"[discovery] Trying to connect to API Server \"192.168.99.10:6443\"",
"[discovery] Created cluster-info discovery client, requesting info from \"https://192.168.99.10:6443\"",
"[discovery] Failed to connect to API Server \"192.168.99.10:6443\": there is no JWS signed token in the cluster-info ConfigMap. This token id \"7ae0ed\" is invalid for this cluster, can't connect",
"[discovery] Trying to connect to API Server \"192.168.99.10:6443\"",
"[discovery] Created cluster-info discovery client, requesting info from \"https://192.168.99.10:6443\"",
"[discovery] Requesting info from \"https://192.168.99.10:6443\" again to validate TLS against the pinned public key",
"[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server \"192.168.99.10:6443\"",
"[discovery] Successfully established connection with API Server \"192.168.99.10:6443\"",
"[bootstrap] Detected server version: v1.8.5",
"[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)",
"",
"Node join complete:",
"* Certificate signing request sent to master and response",
" received.",
"* Kubelet informed of new secure connection details.",
"",
"Run 'kubectl get nodes' on the master to see this machine join."
kubeadm uses spf13/pflag, where both notations are correct.
From the docs:
--flag // boolean flags, or flags with no option default values
--flag x // only on flags without a default value
--flag=x
As far as I know, the format does not matter. kubeadm's own output shows the recommended form:
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
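Rather than assembling the command by hand, kubeadm can also print a ready-made join command on the master, which sidesteps the formatting question entirely:

```
# Prints a complete "kubeadm join <ip>:<port> --token ... --discovery-token-ca-cert-hash sha256:..." line
kubeadm token create --print-join-command
```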
I think the token is used for security (TLS bootstrapping) so the master node and worker node can communicate using encryption, as the kubelet on the worker node needs its own token to communicate with the kube-apiserver on the master node.
Problem
I am trying to enable authentication on my kubelet servers using Bearer Tokens (not X.509 client certificate authentication), and fail to understand the workflow.
What I tried
According to the documentation page Kubelet authentication/authorization, starting the kubelet with the --authentication-token-webhook flag enables the Bearer Token authentication. I could confirm that by sending a request to the kubelet REST API using one of the default secrets created by the Controller Manager:
$ MY_TOKEN="$(kubectl get secret default-token-kw7mk \
-o jsonpath='{$.data.token}' | base64 -d)"
$ curl -sS -o /dev/null -D - \
--cacert /var/run/kubernetes/kubelet.crt \
-H "Authorization : Bearer $MY_TOKEN" \
https://host-192-168-0-10:10250/pods/
HTTP/1.1 200 OK
Content-Type: application/json
Date: Fri, 30 Jun 2017 22:12:29 GMT
Transfer-Encoding: chunked
However any communication with the kubelet via the API server (typically using the kubectl logs or exec commands) using the same Bearer Token as above fails with:
$ kubectl --token="$MY_TOKEN" -n kube-system logs \
kube-dns-2272871451-sc02r -c kubedns
error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log kube-dns-2272871451-sc02r))
Where I need clarification
My initial assumption was that the API server simply passed the Bearer Token it received from the client straight through to the kubelet, but my little experiment above proved otherwise.
I see that the kube-apiserver documentation mentions a flag called --authentication-token-webhook-config-file but I'm unsure how to use it, or if it's even relevant for authenticating the API server against a kubelet.
Current configuration
My kubelet(s) run with:
--anonymous-auth=false
--authorization-mode=Webhook
--authentication-token-webhook
--cadvisor-port=0
--cluster-dns=10.0.0.10
--cluster-domain=cluster.local
--read-only-port=0
--kubeconfig=/etc/kubernetes/kubeconfig-kubelet
--pod-manifest-path=/etc/kubernetes/manifests
--require-kubeconfig
My API server runs with:
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds
--anonymous-auth=false
--authorization-mode=AlwaysAllow
(+ tls flags)
When making calls to the API server that require communication from the API server to the kubelet, that communication is done using the API server's client credentials, which only support x509 authentication to the kubelet.
The flags used to give the API server the credentials to use to contact the kubelet are listed in the "X509 client certificate authentication" section of https://kubernetes.io/docs/admin/kubelet-authentication-authorization/
API server webhook authentication options are unrelated to kubelet auth.
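Concretely, that means giving the API server a client certificate/key pair to present to kubelets, and telling the kubelet to trust the CA that signed it. The flag names below are the real ones; the paths are placeholders:

```
# kube-apiserver side: client credentials used when calling kubelets
kube-apiserver \
  --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt \
  --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key \
  ...

# kubelet side: trust the CA that signed the API server's client cert
kubelet \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  ...
```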