Why don't I have a default serviceAccount on Kubernetes?

I'm trying to get Kubernetes running on some local machines running CoreOS. I'm loosely following this guide. Everything seems to be up and running, and I'm able to connect to the api via kubectl. However, when I try to create a pod, I get this error:
Pod "redis-master" is forbidden: Missing service account default/default: <nil>
Doing kubectl get serviceAccounts confirms that I don't have any ServiceAccounts:
NAME SECRETS
According to the documentation, each namespace should have a default ServiceAccount. Running kubectl get namespace confirms that I have the default namespace:
NAME      LABELS    STATUS
default   <none>    Active
I'm brand new to Kubernetes and CoreOS, so I'm sure there's something I'm overlooking, but I can't for the life of me figure out what's going on. I'd appreciate any pointers.
UPDATE
It appears the kube-controller-manager isn't running. When I try to run it, I get this message:
I1104 21:09:49.262780 26292 plugins.go:69] No cloud provider specified.
I1104 21:09:49.262935 26292 nodecontroller.go:114] Sending events to api server.
E1104 21:09:49.263089 26292 controllermanager.go:217] Failed to start service controller: ServiceController should not be run without a cloudprovider.
W1104 21:09:49.629084 26292 request.go:302] field selector: v1 - secrets - type - kubernetes.io/service-account-token: need to check if this is versioned correctly.
W1104 21:09:49.629322 26292 request.go:302] field selector: v1 - serviceAccounts - metadata.name - default: need to check if this is versioned correctly.
W1104 21:09:49.636082 26292 request.go:302] field selector: v1 - serviceAccounts - metadata.name - default: need to check if this is versioned correctly.
W1104 21:09:49.638712 26292 request.go:302] field selector: v1 - secrets - type - kubernetes.io/service-account-token: need to check if this is versioned correctly.
Since I'm running this locally, I don't have a cloud provider. I tried to define --cloud-provider="" but it still complains with the same error.

The default service account for each namespace is created by the service account controller, which is a control loop that is part of the kube-controller-manager binary. So, verify that this binary is running and check its logs for anything that suggests it can't create a service account. Also make sure you point "--service-account-private-key-file=somefile" at a file that contains a valid PEM-encoded private key.
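A rough sketch of how that can look (the key path and the other flags here are assumptions of mine, not taken from the guide): generate a PEM key once and point both components at it.

# Generate an RSA key used to sign/verify service account tokens
openssl genrsa -out /etc/kubernetes/serviceaccount.key 2048

# kube-controller-manager signs the tokens with the private key
kube-controller-manager \
  --master=http://127.0.0.1:8080 \
  --service-account-private-key-file=/etc/kubernetes/serviceaccount.key

# kube-apiserver verifies the tokens with the matching key
kube-apiserver \
  --etcd-servers=http://127.0.0.1:2379 \
  --service-account-key-file=/etc/kubernetes/serviceaccount.key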
Alternatively, if you want to make some progress without service accounts for now, you can disable the admission controller that is blocking your pods by removing the "ServiceAccount" option from your api-server's admission-control flag. But you will probably want to come back and fix that later.

This worked for me:
--disable-admission-plugins=ServiceAccount

Related

Error Prometheus endpoint for checking AlertManager

I installed Prometheus (following this link: https://devopscube.com/setup-prometheus-monitoring-on-kubernetes/)
But when checking the status of Targets, it shows "Down" for the AlertManager service; every other endpoint is up, please see the attached file.
Then I checked Service Discovery; the discovered labels show:
"address="192.168.180.254:9093"
__meta_kubernetes_endpoint_address_target_kind="Pod"
__meta_kubernetes_endpoint_address_target_name="alertmanager-6c666985cc-54rjm"
__meta_kubernetes_endpoint_node_name="worker-node1"
__meta_kubernetes_endpoint_port_protocol="TCP"
__meta_kubernetes_endpoint_ready="true"
__meta_kubernetes_endpoints_name="alertmanager"
__meta_kubernetes_namespace="monitoring"
__meta_kubernetes_pod_annotation_cni_projectcalico_org_podIP="192.168.180.254/32"
__meta_kubernetes_pod_annotationpresent_cni_projectcalico_org_podIP="true"
__meta_kubernetes_pod_container_name="alertmanager"
__meta_kubernetes_pod_container_port_name="alertmanager"
__meta_kubernetes_pod_container_port_number="9093"
But Target Labels show another port (8080), I don't know why:
instance="192.168.180.254:8080"
job="kubernetes-service-endpoints"
kubernetes_name="alertmanager"
kubernetes_namespace="monitoring"
First, if you want to install Prometheus and Grafana without headaches, you need to do it through Helm.
First install Helm, and then run:
helm install installationWhatEverName stable/prometheus-operator
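As a sketch (the release name and namespace are placeholders of mine; this assumes Helm 3 and the archived stable chart repository):

helm repo add stable https://charts.helm.sh/stable
helm repo update
helm install my-prometheus stable/prometheus-operator \
  --namespace monitoring --create-namespace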
I've reproduced your issue on GCE.
If you are using version 1.16+, you have probably had to change the apiVersion, as the tutorial defines the Deployment with extensions/v1beta1. Since K8s 1.16 you need to change it to apiVersion: apps/v1, otherwise you will get an error like:
error: unable to recognize "STDIN": no matches for kind "Deployment" in version "extensions/v1beta1"
Second thing: in 1.16+ you need to specify a selector. If you don't, you will receive another error:
`error: error validating "STDIN": error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec; if you choose to ignore these errors, turn validation off with --validate=false`
It would look like:
...
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
      ...
Regarding port 8080, please check this article with an example.
Port: Port is the port number which makes a service visible to
other services running within the same K8s cluster. In other words,
in case a service wants to invoke another service running within the
same Kubernetes cluster, it will be able to do so using port specified
against “port” in the service spec file.
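If the final target ends up on a different port than the container, a common culprit with the tutorial's kubernetes-service-endpoints job is the Service definition: the usual relabelling rewrites the scrape address to whatever the Service's prometheus.io/port annotation (or its port) says. A sketch of what the Alertmanager Service would need for port 9093 (names and labels are assumptions, not copied from your cluster):

apiVersion: v1
kind: Service
metadata:
  name: alertmanager
  namespace: monitoring
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9093"   # must match the port Alertmanager listens on
spec:
  selector:
    app: alertmanager
  ports:
  - name: alertmanager
    port: 9093
    targetPort: 9093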
It worked for my environment on GCE. Did you configure a firewall for your endpoints?
In addition, in Helm 3 some hooks were deprecated. You can find this information here.
If you still have issues, please provide your YAMLs with the changes applied for version 1.16+.

Pod status as `CreateContainerConfigError` in Minikube cluster

I am trying to run Sonarqube service using the following helm chart.
So the set-up is like it starts a MySQL and Sonarqube service in the minikube cluster and Sonarqube service talks to the MySQL service to dump the data.
When I do helm install followed by kubectl get pods, I see the MySQL pod status as running, but the Sonarqube pod status shows CreateContainerConfigError. I reckon it has to do with the volume mounting thingy: link. I am not quite sure how to fix it, though (pretty new to the Kubernetes environment and still learning :) )
This can be solved in various ways; I suggest you go for kubectl describe pod <pod-name>. You might then see the cause of why the service you've been trying to run is failing. In my case, I found that some of my key-values were missing from the configmap used by the deployment.
I ran into this problem myself today as I was trying to create secrets and use them in my pod definition YAML file. It would help to check the output of kubectl get secrets and kubectl get configmaps (if you are using either) and validate that the number of data items you wanted is listed correctly.
In my case the problem was how the secret with multiple data items had been created: the output of kubectl get secrets <secret_name> showed only 1 data item, while I had specified 2 items in my secret_name_definition.yaml. This comes down to the difference between kubectl create -f secret_name_definition.yaml and kubectl create secret <secret_name> --from-file=secret_name_definition.yaml. With the former, all the items listed in the data section of the YAML are treated as key-value pairs, so kubectl get secrets <secret_name> reports the correct number of items. With the latter, the whole file becomes a single data item (keyed by the file name), so kubectl get secrets <secret_name> shows only 1 data item, and this is when we see the error "CreateContainerConfigError".
Note that this problem wouldn't occur if we used kubectl create secret <secret_name> with --from-literal=, because then we have to repeat the --from-literal= prefix for every key-value pair we want to define.
Similarly, with the --from-file= option we still have to specify the prefix multiple times, once for each key-value pair; the difference is that --from-literal takes the raw value directly, whereas the data section of a YAML definition needs the base64-encoded form (i.e. the value becomes echo raw_value | base64).
For example, say the keys are "username" and "password": if creating the secret with kubectl create -f secret_definition.yaml, we need the values for both "username" and "password" to be base64-encoded, as described in the "Create a Secret" section of https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/
I would like to highlight the "Note:" section in https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/. Also, https://kubernetes.io/docs/concepts/configuration/secret/ has a very clear explanation of how to create secrets.
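A quick sketch of the difference (the secret names and values below are made up for illustration):

# A definition file with two base64-encoded data items
cat <<EOF > secret_name_definition.yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  username: $(echo -n 'admin' | base64)
  password: $(echo -n 's3cr3t' | base64)
EOF

# Creates a secret with 2 data items, as intended
kubectl create -f secret_name_definition.yaml

# Creates a secret with 1 data item: the key is the file name,
# the value is the whole file content
kubectl create secret generic my-other-secret --from-file=secret_name_definition.yaml

# Also creates 2 data items, from raw (unencoded) values
kubectl create secret generic my-literal-secret \
  --from-literal=username=admin --from-literal=password=s3cr3t

# Compare the DATA column of each secret
kubectl get secrets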
Also make sure that the deployment.yaml now has the correct definition for this container:
env:
- name: DB_HOST
  value: 127.0.0.1
# These secrets are required to start the pod.
# [START cloudsql_secrets]
- name: DB_USER
  valueFrom:
    secretKeyRef:
      name: cloudsql-db-credentials
      key: username
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: cloudsql-db-credentials
      key: password
# [END cloudsql_secrets]
As mentioned by others, "kubectl describe pod pod_name" would help, but in my case I only understood that the container wasn't being created in the first place, and the output of "kubectl logs pod_name -c container_name" didn't help much.
Recently, I encountered the same CreateContainerConfigError and, after a little debugging, I found out that it was because I was using a Kubernetes secret in my Deployment YAML which was not actually present/created in the namespace where the pods were being created.
Also, after reading the previous answer, I think it's safe to say that this particular error usually revolves around Kubernetes secrets!
Check that the secrets and config maps (kubectl get [secrets|configmaps]) referenced in the YAML descriptor file actually exist and are referenced correctly; in both cases, an incorrect secret/configmap (not created, misspelled, etc.) results in CreateContainerConfigError.
As already pointed out in the other answers, you can check the error with kubectl describe pod [pod name], and something like this should appear at the bottom of the output:
Warning Failed 85s (x12 over 3m37s) kubelet, gke-****-default-pool-300d3c89-9jkz
Error: configmaps "config-map-1" not found
UPDATE: from @alexis-wilke
The list of events can be ephemeral in some versions and this message disappears quickly. As a rule of thumb, check the events list immediately when booting a pod; if you have CreateContainerConfigError without events, double-check secrets and config maps, as they can leave the pod in this state with no trace.
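For example, a quick way to look at both views (pod-level and namespace-wide):

# Events section is at the bottom of the output
kubectl describe pod <pod-name>

# Namespace-wide list, most recent events last
kubectl get events --sort-by=.metadata.creationTimestamp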
I also ran into this issue; the problem was due to an environment variable using a field ref on one controller. The other controller and the worker were able to resolve the reference. We didn't have time to track down the cause and wound up tearing down the cluster and rebuilding it.
- name: DD_KUBERNETES_KUBELET_HOST
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
Apr 02 16:35:46 ip-10-30-45-105.ec2.internal sh[1270]: E0402 16:35:46.502567 1270 pod_workers.go:186] Error syncing pod 3eab4618-5564-11e9-a980-12a32bf6e6c0 ("datadog-datadog-spn8j_monitoring(3eab4618-5564-11e9-a980-12a32bf6e6c0)"), skipping: failed to "StartContainer" for "datadog" with CreateContainerConfigError: "host IP unknown; known addresses: [{Hostname ip-10-30-45-105.ec2.internal}]"
Try the --from-env-file option instead of --from-file and see if the problem disappears. I got the same error, and looking into the pod events suggested that the key-value pairs inside the mysecrets.txt file were not being read properly. Even if the file has only one KEY=VALUE line, with --from-file Kubernetes takes the whole file content as the value and the filename as the key. To avoid this issue, read the file as an environment-variable file, as shown below.
mysecrets.txt:
MYSQL_PASSWORD=dfsdfsdfkhk
For example:
kubectl create secret generic secret-name --from-env-file=mysecrets.txt
kubectl create configmap configmap-name --from-env-file=myconfigs.txt

OpenShift - how can pods resolve each other's names

I'm trying to have Cloudera Manager and Cloudera agents on OpenShift; in order to run the installation I need to get all the pods communicating with each other.
Manually, I modified /etc/hosts on the manager to add all the agents, and on the agents I added the manager and all the other agents.
Now I want to automate this. Suppose I add a new agent: I want it to resolve the manager and the other hosts (I can get part of this done by passing the manager name as an env variable and adding it to /etc/hosts with a shell script; not the ideal way, but still a solution). The second part is more difficult: getting the manager to resolve every new agent, and also getting every agent to resolve the other agents behind the same service.
I was wondering if there is a way so that every pod on the cluster can resolve the others' names?
I have two services: cloudera-manager with one pod, and another service cloudera-agent with, let's say, 3 agents.
Do you have any idea?
Thank you.
Not sure, but it looks like you could benefit from StatefulSets.
There are other ways to get the other pods' IPs (like using a headless service or querying the API server directly), but StatefulSets provide:
Stable, unique network identifiers
Stable, persistent storage.
Lots of other functionality that facilitates the deployment of special kinds of clusters, like distributed databases. Not sure my term 'distributed' is correct here, but it helps me remember what they are for :).
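To illustrate, a minimal sketch of what that could look like for the agents (all names, labels and the image below are assumptions): a StatefulSet together with its governing headless Service gives each pod a stable DNS name such as cloudera-agent-0.cloudera-agent.<namespace>.svc.cluster.local, which the manager and the other agents can resolve.

apiVersion: v1
kind: Service
metadata:
  name: cloudera-agent
spec:
  clusterIP: None               # headless: DNS resolves to the pod IPs
  selector:
    app: cloudera-agent
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cloudera-agent
spec:
  serviceName: cloudera-agent   # must reference the headless Service above
  replicas: 3
  selector:
    matchLabels:
      app: cloudera-agent
  template:
    metadata:
      labels:
        app: cloudera-agent
    spec:
      containers:
      - name: agent
        image: my-registry/cloudera-agent:latest   # image name assumed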
If you want to get all Pods running under a certain Service, make sure to use a headless Service (i.e. set clusterIP: None). Then, you can query your local DNS-Server for the Service and will receive A-Records for all Pods assigned to it:
---
apiVersion: v1
kind: Service
metadata:
  name: my-sv
  namespace: my-ns
  labels:
    app: my-app
spec:
  clusterIP: None
  selector:
    app: my-app
Then start your Pods (make sure to give them the app: label so the selector matches) and query your DNS server from any of them:
kubectl exec -ti my-pod --namespace=my-ns -- /bin/bash
$ nslookup my-sv.my-ns.svc.cluster.local
Server: 10.255.3.10
Address: 10.255.3.10#53
Name: my-sv.my-ns.svc.cluster.local
Address: 10.254.24.11
Name: my-sv.my-ns.svc.cluster.local
Address: 10.254.5.73
Name: my-sv.my-ns.svc.cluster.local
Address: 10.254.87.6

Make Kubernetes wait for Pod termination before removing from Service endpoints

According to Termination of Pods, step 7 occurs simultaneously with 3. Is there any way I can prevent this from happening and have 7 occur only after the Pod's graceful termination (or expiration of the grace period)?
The reason why I need this is that my Pod's termination routine requires my-service-X.my-namespace.svc.cluster.local to resolve to the Pod's IP during the whole process, but the corresponding Endpoint gets removed as soon as I run kubectl delete on the Pod / Deployment.
Note: In case it helps making this clear, I'm running a bunch of clustered VerneMQ (Erlang) nodes which, on termination, dump their contents to other nodes on the cluster — hence the need for the nodenames to resolve correctly during the whole termination process. Only then should the corresponding Endpoints be removed.
Unfortunately, Kubernetes was designed to remove the Pod from the endpoints at the same time as the preStop hook is started (see the link in the question to the Kubernetes docs):
At the same time as the kubelet is starting graceful shutdown, the
control plane removes that shutting-down Pod from Endpoints
This google kubernetes docs says it even more clearly:
Pod is set to the “Terminating” State and removed from the
endpoints list of all Services
There was also a feature request for that, which was not accepted.
Solution for helm users
But if you are using Helm, you can use hooks (e.g. pre-delete, pre-upgrade, pre-rollback). Unfortunately, such a Helm hook runs as an extra pod which cannot access all of the original pod's resources.
This is an example for a hook:
apiVersion: batch/v1
kind: Job
metadata:
  name: graceful-shutdown-hook
  annotations:
    "helm.sh/hook": pre-delete,pre-upgrade,pre-rollback
  labels:
    app.kubernetes.io/name: graceful-shutdown-hook
spec:
  template:
    spec:
      containers:
      - name: graceful-shutdown
        image: busybox:1.28.2
        command: ['sh', '-cx', '/bin/sleep 15']
      restartPolicy: Never
  backoffLimit: 0
Maybe you should consider using a headless Service instead of a ClusterIP one. That way your apps will discover each other using the actual endpoint IPs, and removal from the endpoints list will not break availability during shutdown, but it will remove the pod from discovery (or from, e.g., ingress controller backends in nginx contrib).
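A minimal sketch of such a headless Service for the VerneMQ pods (the name, label and port below are assumptions; adjust them to your manifests):

apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
spec:
  clusterIP: None        # headless: DNS returns the pod IPs directly, no virtual IP
  selector:
    app: vernemq
  ports:
  - name: epmd
    port: 4369           # Erlang port mapper; adjust to whatever your nodes use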

Only enable ServiceAccounts for some pods in Kubernetes

I use the Kubernetes ServiceAccount plugin to automatically inject a ca.crt and token in to my pods. This is useful for applications such as kube2sky which need to access the API Server.
However, I run many hundreds of other pods that don't need this token. Is there a way to stop the ServiceAccount plugin from injecting the default-token in to these pods (or, even better, have it off by default and turn it on explicitly for a pod)?
As of Kubernetes 1.6+ you can disable automounting of API credentials for a particular pod, as stated in the Kubernetes Service Accounts documentation:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
  ...
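The same documentation also describes setting the flag on the ServiceAccount itself, so it applies to every pod using that account unless a pod overrides it; a sketch:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false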
Right now there isn't a way to enable a service account for some pods but not others, although you can use ABAC with some service accounts to restrict their access to the apiserver.
This is being discussed in https://github.com/kubernetes/kubernetes/issues/16779; I'd encourage you to add your use case to that issue and see when it will be implemented.