I am running minikube/Kubernetes and am having difficulty accessing a volume from a volumeMount in a deployment.
I can confirm that when the microservice starts up, it is not able to access the /config directory (i.e. the "mountPath" of the "volumeMounts"). I have verified that the hostPath/path is valid.
I have experimented with a number of techniques and have also validated that the deployment file is correct. I have also tried quotes, double quotes, and no quotes around the path specifications, but this does not address the issue.
Note that I am using a "hostPath" purely for testing purposes; nevertheless, this is the scenario I need to address (see the quick check below).
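For illustration, the missing mount can be confirmed from inside the running pod with something like this (the pod name is a placeholder):
# list the mount path inside the pod
kubectl exec -it <atmp1000-pod-name> -- ls -la /config
# compare against the mounts the container actually sees
kubectl exec -it <atmp1000-pod-name> -- cat /proc/mounts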
My minikube configuration is illustrated below:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T07:30:54Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"linux/amd64"}
I am running minikube on macOS Sierra, version 10.12.3 (16D32).
My deployment file (deployment.yaml):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: atmp1000-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: atmp1000
    spec:
      containers:
      - name: atmp1000
        image: atmp1000
        ports:
        - containerPort: 7010
        volumeMounts:
        - name: atmp1000-volume
          mountPath: '/config'
      volumes:
      - name: atmp1000-volume
        hostPath:
          path: '/Users/<username>/<some-path>/config'
Any help is appreciated.
In the interest of completeness, below is the solution that I found. I got the hostPath mounts working on minikube (on Mac); it took a few steps and required several "minikube delete" commands to pick up the most current version and reset the environment. Some additional notes about how to get this functioning:
I had to use the xhyve driver to make it all work properly -- it probably works using other drivers but I did not try them.
I found that minikube mounts host paths under "/Users", which means the "volumes/hostPath/path" value should start with "/Users".
I found a variety of ways that this worked, including using claims, but the files in the original question now reflect a correct and simple configuration (see the sketch below).
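For illustration, a minimal standalone Pod sketch of the working setup (image, names and the host path are placeholders for my actual ones):
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-test
spec:
  containers:
  - name: test
    image: alpine
    command: ["sh", "-c", "ls -la /config && sleep 3600"]
    volumeMounts:
    - name: config-volume
      mountPath: /config
  volumes:
  - name: config-volume
    hostPath:
      path: /Users/<username>/<some-path>/config   # must live under /Users for the xhyve mount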
Mounting host directories is not supported by minikube yet. Please check https://github.com/kubernetes/minikube/issues/2
Internally, minikube uses a virtual machine to host Kubernetes. If you specify a hostPath in a Pod spec, Kubernetes will mount the specified directory inside that VM, not the directory on your actual host.
If you really need to access something on your host machine, you have to use NFS or another network-based volume type.
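For illustration, a hedged sketch of what an NFS-backed volume could look like instead of the hostPath (the server address and export path are hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: nfs-example
spec:
  containers:
  - name: app
    image: alpine
    command: ["sh", "-c", "ls -la /config && sleep 3600"]
    volumeMounts:
    - name: config-volume
      mountPath: /config
  volumes:
  - name: config-volume
    nfs:
      server: 192.168.99.1     # hypothetical NFS server reachable from inside the minikube VM
      path: /exports/config    # hypothetical exported directory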
Related
I have a deployment yaml file that looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      setHostnameAsFQDN: true
      hostname: hello
      subdomain: world
      containers:
      - name: hello-kubernetes
        image: redis
However, I am getting this error:
$ kubectl apply -f dep.yaml
error: error validating "dep.yaml": error validating data: ValidationError(Deployment.spec.template.spec): unknown field "setHostnameAsFQDN" in io.k8s.api.core.v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false
My kubectl version:
$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
After specifying --validate=false, hostname and hostname -f still return different values.
I believe I misunderstood something. The docs say that setHostnameAsFQDN will be available from Kubernetes v1.20.
You showed your kubectl version. Your Kubernetes (server) version also needs to be v1.20; make sure the cluster itself is running v1.20.
Use kubectl version to see both the client and server versions: the client version refers to kubectl, and the server version refers to Kubernetes itself.
As per the k8s v1.20 release notes: "Previously introduced in 1.19 behind a feature gate, SetHostnameAsFQDN is now enabled by default." More details on this behavior are available in the documentation for DNS for Services and Pods.
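For illustration, a quick way to check both sides (the Server Version line is the one that must report v1.20 or newer for setHostnameAsFQDN to be accepted):
# client only (what was shown above)
kubectl version --client
# client and server; look at the Server Version line
kubectl version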
I can easily reproduce this and could not find an answer for this issue either in the k8s docs or the community.
Simple reproduction steps:
Create a Service and Endpoints with the config below:
---
kind: Service
apiVersion: v1
metadata:
  name: hostname
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 9376
---
kind: Endpoints
apiVersion: v1
metadata:
  name: hostname
subsets:
- addresses:
  - ip: 10.244.44.250
  - ip: 10.244.154.235
  ports:
  - port: 9376
kubectl apply -f <filename> to apply the config
Test the service; it works perfectly. Assume the cluster IP is A.
kubectl delete -f <filename> to delete the service and endpoint, then kubectl apply -f <filename> again.
We get another cluster IP, B, which also works perfectly.
However, cluster IP A is not removed as expected; I can still use A to access the service.
Update the Endpoints definition (add a new endpoint IP or remove one) and apply it: B sees the change, while A still uses the old config (see the inspection commands below).
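For reference, a sketch of commands that can be used to observe this (no output shown):
# the ClusterIP currently assigned to the Service
kubectl get service hostname -o wide
# the Endpoints object backing it
kubectl get endpoints hostname -o yaml
# kube-proxy's programmed rules on a node (iptables mode); the old ClusterIP A would be expected to disappear here
sudo iptables-save | grep hostname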
Can someone explain what is happening here?
My k8s version is:
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:09:21Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
I was trying, unsuccessfully, to access the Kubernetes API via an HTTPS ingress and started to wonder whether that is possible at all.
Any working, detailed guide for direct remote access (without using ssh -> kubectl proxy, to avoid user management on the Kubernetes node) would be appreciated. :)
UPDATE:
Just to make it clearer: this is a bare-metal, on-premises deployment (no GCE, AWS, Azure or any other cloud), and the intention is that some environments will be totally offline (which will add further issues with getting the install packages).
The intention is to be able to use kubectl on a client host with authentication via Keycloak (which also fails when following the step-by-step instructions). Administrative access using SSH and then kubectl is not suitable for client access. So it looks like I will have to update the firewall to expose the API port and create a NodePort service.
Setup:
[kubernetes - env] - [FW/SNAT] - [me]
FW/NAT allows access only on ports 22, 80 and 443.
As I already have an ingress set up on Kubernetes, I cannot create a firewall rule to redirect 443 to 6443; the only option seems to be creating an HTTPS ingress that points "api-kubernetes.node.lan" at the kubernetes service on port 6443. The ingress itself is working fine; I have created a working ingress for the Keycloak auth application.
I have copied .kube/config from the master node to my machine and placed it into .kube/config (Cygwin environment)
What was attempted:
SSL passthrough. Could not enable it, as the kubernetes-ingress controller was not able to start because it could not create an intermediary cert. Even if it had started, it most likely would have crippled the other HTTPS ingresses.
Created a self-signed SSL cert. As a result, via the browser I could get API output when pointing to https://api-kubernetes.node.lan/api. However, kubectl throws an error due to the unsigned cert, which is obvious.
Put apiserver.crt into the ingress tls: definition. Got an error because the cert is not suitable for api-kubernetes.node.lan. Also obvious.
Followed guide [1] to create a kube-ca signed certificate. Now the browser does not show anything at all. Using curl to access https://api-kubernetes.node.lan/api results in empty output (I can see an HTTP OK when using -v). kubectl now gets the following error:
$ kubectl.exe version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"windows/amd64"}
Error from server: the server responded with the status code 0 but did not return more information
When comparing apiserver.pem and my generated cert, the only difference I see is:
apiserver.pem
X509v3 Key Usage:
Digital Signature, Non Repudiation, Key Encipherment
generated.crt
X509v3 Extended Key Usage:
TLS Web Server Authentication
Ingress configuration:
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: kubernetes-api
  namespace: default
  labels:
    app: kubernetes
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - secretName: kubernetes-api-cert
    hosts:
    - api-kubernetes.node.lan
  rules:
  - host: api-kubernetes.node.lan
    http:
      paths:
      - path: "/"
        backend:
          serviceName: kubernetes
          servicePort: 6443
Links:
[1] https://db-blog.web.cern.ch/blog/lukas-gedvilas/2018-02-creating-tls-certificates-using-kubernetes-api
You should be able to do it as long as you expose the kube-apiserver pod in the kube-system namespace. I tried it like this:
$ kubectl -n kube-system expose pod kube-apiserver-xxxx --name=apiserver --port 6443
service/apiserver exposed
$ kubectl -n kube-system get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
apiserver ClusterIP 10.x.x.x <none> 6443/TCP 1m
...
Then I go to a cluster machine and point my ~/.kube/config context at 10.x.x.x:6443:
clusters:
- cluster:
    certificate-authority-data: [REDACTED]
    server: https://10.x.x.x:6443
  name: kubernetes
...
Then:
$ kubectl version --insecure-skip-tls-verify
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
I used --insecure-skip-tls-verify because 10.x.x.x needs to be valid on the server certificate. You can actually fix it like this: Configure AWS publicIP for a Master in Kubernetes
So maybe a couple of things in your case:
Since you are initially terminating SSL at the Ingress, you need to use the same kube-apiserver certificates under /etc/kubernetes/pki/ on your master.
You need to add the external IP or name to the certificate where the Ingress is exposed. Follow something like this: Configure AWS publicIP for a Master in Kubernetes
Partially answering my own question.
For the moment I am satisfied with token-based auth: this allows separate access levels and avoids allowing shell users. Keycloak-based dashboard auth worked, but after logging in I was not able to log out; there is no logout option. :D
And to access the dashboard itself via Ingress, I found a working rewrite rule somewhere:
nginx.ingress.kubernetes.io/configuration-snippet: "rewrite ^(/ui)$ $1/ui/ permanent;"
One note: the UI must be accessed with a trailing slash "/": https://server_address/ui/
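For illustration, a sketch of where that annotation sits in the dashboard Ingress (host, namespace, secret and backend service/port are placeholders for my actual values):
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: dashboard
  namespace: kube-system                     # placeholder namespace
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/configuration-snippet: "rewrite ^(/ui)$ $1/ui/ permanent;"
spec:
  tls:
  - secretName: dashboard-cert               # placeholder TLS secret
    hosts:
    - server_address                         # placeholder host name
  rules:
  - host: server_address
    http:
      paths:
      - path: /ui
        backend:
          serviceName: kubernetes-dashboard  # placeholder service name and port
          servicePort: 443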
For those coming here who just want to reach their Kubernetes API from another network and with another hostname, but don't need to change the API to a port other than the default 6443, an ingress isn't necessary.
If this describes you, all you have to do is add additional SAN entries to your API server's cert for the DNS name you're coming from. This article describes the process in detail.
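For illustration, on a kubeadm-built cluster the extra SANs can be declared in the kubeadm configuration and the apiserver certificate regenerated; a sketch (the apiVersion and the names/IPs shown are assumptions, adjust for your kubeadm release and environment):
apiVersion: kubeadm.k8s.io/v1beta2   # use the apiVersion matching your kubeadm version
kind: ClusterConfiguration
apiServer:
  certSANs:
  - "api.example.org"                # hypothetical external DNS name you connect through
  - "203.0.113.10"                   # hypothetical external IP
The existing certificate then has to be regenerated (for example by removing the old apiserver cert/key under /etc/kubernetes/pki and re-running kubeadm's certificate phase) so the new SANs take effect.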
We are using an Apache Kafka deployment on Kubernetes which is based on the ability to label pods after they have been created (see https://github.com/Yolean/kubernetes-kafka). The init container of the broker pods takes advantage of this feature to set a label on itself with its own numeric index (e.g. "0", "1", etc.) as the value. The label is used in the service descriptors to select exactly one pod.
This approach works fine in our DIND-Kubernetes environment. However, when we tried to port the deployment to a Docker EE Kubernetes environment, we ran into trouble because the command kubectl label pod generates a run-time error which is completely misleading (also see https://github.com/fabric8io/kubernetes-client/issues/853).
In order to verify the run-time error in a minimal setup, we created the following deployment scripts.
First step: Successfully label pod using the Docker-EE-Host
# create a simple pod as a test target for labeling
> kubectl run -ti -n default --image alpine sh
# get the pod name for all further steps
> kubectl -n default get pods
NAME READY STATUS RESTARTS AGE
nfs-provisioner-7d49cdcb4f-8qx95 1/1 Running 1 7d
nginx-deployment-76dcc8c697-ng4kb 1/1 Running 1 7d
nginx-deployment-76dcc8c697-vs24j 1/1 Running 0 20d
sh-777f6db646-hrm65 1/1 Running 0 3m <--- This is the test pod
test-76bbdb4654-9wd9t 1/1 Running 2 6d
test4-76dbf847d5-9qck2 1/1 Running 0 5d
# get client and server versions
> kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.5", GitCommit:"32ac1c9073b132b8ba18aa830f46b77dcceb0723", GitTreeState:"clean", BuildDate:"2018-06-21T11:46:00Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.11-docker-8d637ae", GitCommit:"8d637aedf46b9c21dde723e29c645b9f27106fa5", GitTreeState:"clean", BuildDate:"2018-04-26T16:51:21Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
# set label
> kubectl -n default label pod sh-777f6db646-hrm65 "mylabel=hallo"
pod "sh-777f6db646-hrm65" labeled    <---- successful execution
Everything works fine as expected.
Second step: Reproduce run-time error from within pod
Create Docker image containing kubectl 1.10.5
FROM debian:stretch-slim@sha256:ea42520331a55094b90f6f6663211d4f5a62c5781673935fe17a4dfced777029

ENV KUBERNETES_VERSION=1.10.5

RUN set -ex; \
    export DEBIAN_FRONTEND=noninteractive; \
    runDeps='curl ca-certificates procps netcat'; \
    buildDeps=''; \
    apt-get update && apt-get install -y $runDeps $buildDeps --no-install-recommends; \
    rm -rf /var/lib/apt/lists/*; \
    \
    curl -sLS -o k.tar.gz -k https://dl.k8s.io/v${KUBERNETES_VERSION}/kubernetes-client-linux-amd64.tar.gz; \
    tar -xvzf k.tar.gz -C /usr/local/bin/ --strip-components=3 kubernetes/client/bin/kubectl; \
    rm k.tar.gz; \
    \
    apt-get purge -y --auto-remove $buildDeps; \
    rm /var/log/dpkg.log /var/log/apt/*.log
This image is pushed to a site-local registry as 10.100.180.74:5000/test/kubectl-client-1.10.5 and is referenced below.
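For completeness, the image is built and pushed to that registry roughly like this:
# build the image from the Dockerfile above, tagged for the site-local registry
docker build -t 10.100.180.74:5000/test/kubectl-client-1.10.5 .
# push it so the cluster nodes can pull it
docker push 10.100.180.74:5000/test/kubectl-client-1.10.5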
Create a pod using the image above:
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: pod-labeler
  namespace: default
spec:
  selector:
    matchLabels:
      app: pod-labeler
  replicas: 1
  serviceName: pod-labeler
  updateStrategy:
    type: OnDelete
  template:
    metadata:
      labels:
        app: pod-labeler
      annotations:
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: check-version
        image: 10.100.180.74:5000/test/kubectl-client-1.10.5
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          value: sh-777f6db646-hrm65
        command: ["/usr/local/bin/kubectl", "version"]
      - name: label-pod
        image: 10.100.180.74:5000/test/kubectl-client-1.10.5
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          value: sh-777f6db646-hrm65
        command: ["/bin/bash", "-c", "/usr/local/bin/kubectl -n default label pod $POD_NAME 'mylabel2=hallo'"]
Logging output
We get the following logging output
# Log of the container "check-version"
2018-07-18T11:11:10.791011157Z Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.5", GitCommit:"32ac1c9073b132b8ba18aa830f46b77dcceb0723", GitTreeState:"clean", BuildDate:"2018-06-21T11:46:00Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
2018-07-18T11:11:10.791058997Z Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.11-docker-8d637ae", GitCommit:"8d637aedf46b9c21dde723e29c645b9f27106fa5", GitTreeState:"clean", BuildDate:"2018-04-26T16:51:21Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
and the run-time error:
2018-07-18T11:24:15.448695813Z The Pod "sh-777f6db646-hrm65" is invalid: spec.tolerations: Forbidden: existing toleration can not be modified except its tolerationSeconds
Notes
This is not an authorization problem, since we have given the default user of the default namespace full administrative rights. If we don't, we get an error message referring to missing permissions.
Both the client and server versions "outside" (i.e. on the Docker host) and "inside" (i.e. in the pod) are identical down to the Git commit tag.
We are using version 3.0.2 of the Universal Control Plane
Any ideas?
It was pointed out in one of the comments that the issue may be caused by a missing permission, even though the error message does not suggest so. We officially filed a ticket with Docker and got exactly this result: in order to be able to set/modify a label from within a pod, the default user of the namespace must be given the "Scheduler" role on the swarm resource (which later shows up as \ in the GUI). Granting this permission fixes the problem. See the added grant in the Docker EE GUI below.
From my point of view, this is far from obvious. The Docker support representative offered to investigate whether this is actually expected behavior or results from a bug. As soon as we learn more about this question, I will include it in this answer.
As for using more debugging output: unfortunately, adding --v=9 to the kubectl calls does not return any useful information. There is too much output to display here, but the overall logging is very similar in both cases: it consists of a lot of GET API requests, which are all successful, followed by a final PATCH API request, which succeeds in one case but fails in the other as described above.
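For comparison, on a plain Kubernetes cluster (outside Docker EE's grant model) the permission kubectl label needs boils down to an RBAC rule allowing get and patch on pods; a sketch bound to the default service account (names are illustrative):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-labeler
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "patch"]   # kubectl label reads the pod and then issues a PATCH
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-labeler
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: Role
  name: pod-labeler
  apiGroup: rbac.authorization.k8s.io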
I'd like to try out the new Ingress resource available in Kubernetes 1.1 in Google Container Engine (GKE). But when I try to create for example the following resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
using:
$ kubectl create -f test-ingress.yaml
I end up with the following error message:
error: could not read an encoded object from test-ingress.yaml: API version "extensions/v1beta1" in "test-ingress.yaml" isn't supported, only supports API versions ["v1"]
error: no objects passed to create
When I run kubectl version it shows:
Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.7", GitCommit:"6234d6a0abd3323cd08c52602e4a91e47fc9491c", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.1", GitCommit:"92635e23dfafb2ddc828c8ac6c03c7a7205a84d8", GitTreeState:"clean"}
But I seem to have the latest kubectl component installed since running gcloud components update kubectl just gives me:
All components are up to date.
So how do I enable the extensions/v1beta1 in Kubernetes/GKE?
The issue is that your client (kubectl) doesn't support the new ingress resource because it hasn't been updated to 1.1 yet. This is mentioned in the Google Container Engine release notes:
The packaged kubectl is version 1.0.7, consequently new Kubernetes 1.1
APIs like autoscaling will not be available via kubectl until next
week's push of the kubectl binary.
along with the solution (download the newer binary manually).
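For illustration, manually fetching a newer kubectl at the time looked roughly like this (the URL pattern and version are assumptions, adjust for your platform):
# download a 1.1-era kubectl for OS X and put it on the PATH
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.1.1/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
# confirm the client version now matches the server
kubectl version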