Problem: not able to write to the directory inside the container.
I am using hostPath storage for my persistent storage requirements. I am not using a PV and PVC to consume the hostPath; instead, I am using its volume plugin directly. For example:
{
  "apiVersion": "v1",
  "id": "local-nginx",
  "kind": "Pod",
  "metadata": {
    "name": "local-nginx"
  },
  "spec": {
    "containers": [
      {
        "name": "local-nginx",
        "image": "fedora/nginx",
        "volumeMounts": [
          {
            "mountPath": "/usr/share/nginx/html/test",
            "name": "localvol"
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "localvol",
        "hostPath": {
          "path": "/logs/nginx-logs"
        }
      }
    ]
  }
}
Note: the nginx pod is just an example.
My directory on the host is created as "drwxr-xr-x. 2 root root 6 Apr 23 18:42 /logs/nginx-logs",
and the same permissions are reflected inside the pod. Since the mode is 755, the "other" user, i.e. the user inside the pod, is not able to write or create files inside the mounted directory.
Questions:
Is there any way to avoid the problem described above?
Is there any way to specify the directory permissions when using hostPath storage?
Is there a field I can set in the following definition to grant the required permissions?
"volumes":{
"name": "vol",
"hostPath": {
"path": "/any/path/it/will/be/replaced"}}
I think the problem you are encountering is not related to the user or group (your pod definition does not have a runAsUser spec, so by default it runs as root), but rather to the SELinux policy. In order to mount a host directory into the pod with read/write permissions, it should carry the label svirt_sandbox_file_t. You can check the current SELinux label with ls -laZ <your host directory> and change it with chcon -Rt svirt_sandbox_file_t <your host directory>.
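Applied to the directory from the question, the check and relabel would be something like:
$ ls -laZ /logs/nginx-logs
$ chcon -Rt svirt_sandbox_file_t /logs/nginx-logs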
I've been working with a bunch of k8s clusters for a while, using kubectl from the command line to examine information. I don't actually call kubectl directly; I wrap it in multiple scripting layers. I also don't use contexts, as it's much easier for me to specify different clusters in a different way. The resulting kubectl command line has explicit --server, --namespace, and --token parameters (and one other flag to disable TLS verification).
This all works fine. I have no trouble with this.
However, I'm now trying to use telepresence, which doesn't give me a choice (yet) of not using contexts to configure this. So, I now have to figure out how to use contexts.
I ran the following (approximate) command:
kubectl config set-context mycontext --server=https://host:port --namespace=abc-def-ghi --insecure-skip-tls-verify=true --token=mytoken
And it said: "Context "mycontext " modified."
I then ran "kubectl config view -o json" and got this:
{
  "kind": "Config",
  "apiVersion": "v1",
  "preferences": {},
  "clusters": [],
  "users": [],
  "contexts": [
    {
      "name": "mycontext",
      "context": {
        "cluster": "",
        "user": "",
        "namespace": "abc-def-ghi"
      }
    }
  ],
  "current-context": "mycontext"
}
That doesn't look right to me.
I then ran something like this:
telepresence --verbose --swap-deployment mydeployment --expose 8080 --run java -jar target/my.jar -Xdebug -Xrunjdwp:transport=dt_socket,address=5000,server=y,suspend=n
And it said this:
T: Error: Namespace 'abc-def-ghi' does not exist
Update:
And I can confirm that this isn't a problem with telepresence. If I just run "kubectl get pods", it fails, saying "The connection to the server localhost:8080 was refused". That tells me it obviously can't connect to the k8s server. The key is my "set-context" command. It's obviously not working, and I don't understand what I'm missing.
You don't have any clusters or credentials defined in your configuration. First, you need to define a cluster:
$ kubectl config set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file
Then something like this for the user:
$ kubectl config set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-seefile
Then you define your context based on your cluster, user and namespace:
$ kubectl config set-context dev-frontend --cluster=development --namespace=frontend --user=developer
More information here
Your config should look something like this:
$ kubectl config view -o json
{
  "kind": "Config",
  "apiVersion": "v1",
  "preferences": {},
  "clusters": [
    {
      "name": "development",
      "cluster": {
        "server": "https://1.2.3.4",
        "certificate-authority-data": "DATA+OMITTED"
      }
    }
  ],
  "users": [
    {
      "name": "developer",
      "user": {
        "client-certificate": "fake-cert-file",
        "client-key": "fake-key-seefile"
      }
    }
  ],
  "contexts": [
    {
      "name": "dev-frontend",
      "context": {
        "cluster": "development",
        "user": "developer",
        "namespace": "frontend"
      }
    }
  ],
  "current-context": "dev-frontend"
}
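Since the question uses a token and skips TLS verification rather than client certificates, the same flow should look roughly like this (the cluster and user names here are placeholders):
$ kubectl config set-cluster mycluster --server=https://host:port --insecure-skip-tls-verify=true
$ kubectl config set-credentials myuser --token=mytoken
$ kubectl config set-context mycontext --cluster=mycluster --user=myuser --namespace=abc-def-ghi
$ kubectl config use-context mycontext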
New to Kubernetes.
I have a private Docker Hub image deployed on a Kubernetes instance. When I exec into the pod I can run the following, so I know my Docker image is running:
root#private-reg:/# curl 127.0.0.1:8085
Hello world!root#private-reg:/#
From the dashboard I can see my service has an external endpoint which ends with port 8085. When I try to load this, I get a 404. My service definition is as below:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "test",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/services/test",
    "uid": "a1a2ae23-339b-11e9-a3db-ae0f8069b739",
    "resourceVersion": "3297377",
    "creationTimestamp": "2019-02-18T16:38:33Z",
    "labels": {
      "k8s-app": "test"
    }
  },
  "spec": {
    "ports": [
      {
        "name": "tcp-8085-8085-7vzsb",
        "protocol": "TCP",
        "port": 8085,
        "targetPort": 8085,
        "nodePort": 31859
      }
    ],
    "selector": {
      "k8s-app": "test"
    },
    "clusterIP": "******",
    "type": "LoadBalancer",
    "sessionAffinity": "None",
    "externalTrafficPolicy": "Cluster"
  },
  "status": {
    "loadBalancer": {
      "ingress": [
        {
          "ip": "******"
        }
      ]
    }
  }
}
Can anyone point me in the right direction?
What is the output of the command below?
curl <clusterIP>:8085
If you get the Hello world message, it means that the service is routing traffic correctly to the backend pod.
curl <HostIP>:<NodePort> should also work.
Most likely the service is not bound to the backend pod. Did you define the label below on the pod?
"labels": {
  "k8s-app": "test"
}
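For example, the pod (or the pod template of the Deployment behind it) needs matching labels in its metadata, roughly like this, for the service selector to find it:
"metadata": {
  "labels": {
    "k8s-app": "test"
  }
}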
You didn't mention which type of load balancer or cloud provider you are using, but if your load balancer provisioned correctly (which you should be able to see in your kube-controller-manager logs), then you should be able to access your service with what you see here:
"status": {
"loadBalancer": {
"ingress": [
{
"ip": "******"
}
]
}
Then you could check by running:
$ curl <ip>:<whatever external port your lb is fronting>
It's likely that the load balancer didn't provision correctly if, as described in the other answers, these work:
$ curl <clusterIP for svc>:8085
and
$ curl <NodeIP>:31859 # NodePort
Take a look at the Service types in Kubernetes; there are a few:
https://kubernetes.io/docs/concepts/services-networking/service/
ClusterIP: exposes the service only inside the cluster.
NodePort: exposes the service through a given port on each node.
LoadBalancer: makes the service externally accessible through a load balancer.
I am assuming you are running on GKE.
What kind of service is the one you launched?
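To confirm the service type and whether an external IP was actually assigned, something like the following should show it:
$ kubectl get service test -o wide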
I need to map a volume while starting the container. I am able to do so with a YAML file.
Is there a way volume mapping can be done via the command line without using a YAML file, just like the
-v option in Docker?
without using a YAML file
Technically, yes: you would need a JSON file, as illustrated in "Create kubernetes pod with volume using kubectl run".
See kubectl run.
kubectl run -i --rm --tty ubuntu --overrides='
{
  "apiVersion": "batch/v1",
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "ubuntu",
            "image": "ubuntu:14.04",
            "args": [
              "bash"
            ],
            "stdin": true,
            "stdinOnce": true,
            "tty": true,
            "volumeMounts": [{
              "mountPath": "/home/store",
              "name": "store"
            }]
          }
        ],
        "volumes": [{
          "name": "store",
          "emptyDir": {}
        }]
      }
    }
  }
}
' --image=ubuntu:14.04 --restart=Never -- bash
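If the goal is the equivalent of Docker's -v host-directory mapping rather than a scratch volume, the emptyDir in that override could presumably be replaced with a hostPath volume; the path below is only an illustration:
"volumes": [{
  "name": "store",
  "hostPath": {
    "path": "/data/store"
  }
}]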
We are using hyperkube's apiserver and configuring it via a manifest file:
"containers":[
{
"name": "apiserver",
"image": "gcr.io/google_containers/hyperkube-amd64:v1.2.1",
"command": [
"/hyperkube",
"apiserver",
"--service-cluster-ip-range=192.168.0.0/23",
"--service-node-port-range=9000-9999",
"--bind-address=127.0.0.1",
"--etcd-servers=http://127.0.0.1:4001",
"--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota",
"--client-ca-file=/srv/kubernetes/ca.crt",
"--basic-auth-file=/srv/kubernetes/basic_auth.csv",
"--min-request-timeout=300",
"--tls-cert-file=/srv/kubernetes/server.cert",
"--tls-private-key-file=/srv/kubernetes/server.key",
"--token-auth-file=/srv/kubernetes/known_tokens.csv",
"--allow-privileged=true",
"--v=4"
],
"volumeMounts": [
{
"name": "data",
"mountPath": "/srv/kubernetes"
}
]
}
I'm trying to figure out how to set up a different set of tokens than in /srv/kubernetes/known_tokens.csv to have users "superuser" and "reader", instead of admin, kubelet, and kube_proxy. How can I do this?
Your manifest is using the exposed volume path /srv/kubernetes, so you should be able to map that to another persistent volume (http://kubernetes.io/docs/user-guide/volumes/) and set up the new files there.
You can do that by specifying a volume:
"volumes": [
{
"name": "data",
"hostPath": {
"path": "/foo"
}
}
]
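The file referenced by --token-auth-file is a static token CSV with one line per user in the form token,user,uid (optionally followed by a quoted list of groups), so the replacement known_tokens.csv you put under the new volume path could look roughly like this, with placeholder token values:
changeme-superuser-token,superuser,1
changeme-reader-token,reader,2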
I have a Kubernetes pod to which I attach a GCE persistent volume using a persistent volume claim. (For the even worse issue without a volume claim see: Mounting a gcePersistentDisk kubernetes volume is very slow)
When there is no volume attached, the pod starts in no time (max 2 seconds). But when the pod has a GCE persistent volume mount, the Running state is only reached somewhere between 20 and 60 seconds later. I was testing with different disk sizes (10, 200, 500 GiB) and multiple pod creations, and the size does not seem to be correlated with the delay.
This delay does not only happen at initial creation, but also when rolling updates are performed with the replication controllers or when the code crashes at runtime.
Below are the Kubernetes specifications:
The replication controller
{
  "apiVersion": "v1",
  "kind": "ReplicationController",
  "metadata": {
    "name": "a1"
  },
  "spec": {
    "replicas": 1,
    "template": {
      "metadata": {
        "labels": {
          "app": "a1"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "a1-setup",
            "image": "nginx",
            "ports": [
              {
                "containerPort": 80
              },
              {
                "containerPort": 443
              }
            ]
          }
        ]
      }
    }
  }
}
The volume claim
{
  "apiVersion": "v1",
  "kind": "PersistentVolumeClaim",
  "metadata": {
    "name": "myclaim"
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "10Gi"
      }
    }
  }
}
And the volume
{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": {
    "name": "mydisk",
    "labels": {
      "name": "mydisk"
    }
  },
  "spec": {
    "capacity": {
      "storage": "10Gi"
    },
    "accessModes": [
      "ReadWriteOnce"
    ],
    "gcePersistentDisk": {
      "pdName": "a1-drive",
      "fsType": "ext4"
    }
  }
}
GCE (along with AWS and OpenStack) must first attach a disk/volume to the node before it can be mounted and exposed to your pod. The time required for attachment is dependent on the cloud provider.
In the case of pods created by a ReplicationController, there is an additional detach operation that has to happen. The same disk cannot be attached to more than one node (at least not in read/write mode). Detaching and pod cleanup happen in a different thread than attaching. To be specific, the kubelet running on a node has to reconcile the pods it currently has (and the sum of their volumes) with the volumes currently present on the node. Orphaned volumes are unmounted and detached. If your pod was scheduled on a different node, it must wait until the original node detaches the volume.
The cluster eventually reaches the correct state, but it might take time for each component to get there. This is your wait time.
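If you want to see where the time goes for a particular pod, the attach and mount steps usually show up in its events, for example:
$ kubectl describe pod <pod-name>
$ kubectl get events --sort-by=.metadata.creationTimestamp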