HashiCorp Vault for Kubernetes - Deployment can't get secret injected

I am currently doing a PoC on Vault for K8s, but I am having some issues injecting a secret into an example application. I have created a Service Account which is associated with a role, which in turn is associated with a policy that allows the service account to read secrets.
I have created a secret basic-secret, which I am trying to inject into my example application. The application is associated with the Service Account. Below you can see the manifests for deploying the example application (Hello World) and the service account:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: basic-secret
  labels:
    app: basic-secret
spec:
  selector:
    matchLabels:
      app: basic-secret
  replicas: 1
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/tls-skip-verify: "true"
        vault.hashicorp.com/agent-inject-secret-helloworld: "secret/basic-secret/helloworld"
        vault.hashicorp.com/agent-inject-template-helloworld: |
          {{- with secret "secret/basic-secret/helloworld" -}}
          {
            "username" : "{{ .Data.username }}",
            "password" : "{{ .Data.password }}"
          }
          {{- end }}
        vault.hashicorp.com/role: "basic-secret-role"
      labels:
        app: basic-secret
    spec:
      serviceAccountName: basic-secret
      containers:
        - name: app
          image: jweissig/app:0.0.1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: basic-secret
  labels:
    app: basic-secret
When I describe the pod (kubectl describe pod basic-secret-7d6777cdb8-tlfsw -n vault) of the deployment I get:
Furthermore, for logs (kubectl logs pods/basic-secret-7d6777cdb8-tlfsw vault-agent -n vault) I get:
Error from server (BadRequest): container "vault-agent" in pod "basic-secret-7d6777cdb8-tlfsw" is waiting to start: PodInitializing
I am not sure why the Vault agent is not initializing. If someone has any idea what might be the issue, I would appreciate it a lot!
Best, William.

Grab the init container's logs with kubectl logs pods/basic-secret-7d6777cdb8-tlfsw vault-agent-init -n vault, because the vault-agent-init container has to finish before vault-agent can start.
Does your policy allow read access to that secret (secret/basic-secret/helloworld)?
Did you create your role (basic-secret-role) in the Kubernetes auth method? During role creation you can authorize only certain service accounts and namespaces, so that might be the problem (see the sketch below).
But let's see those agent-init logs first.
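For reference, here is a minimal sketch of the Vault-side policy and role the points above refer to, using the names from the question. The secret path assumes KV version 1 (for KV v2 the policy path would be secret/data/basic-secret/helloworld), and the Kubernetes auth method is assumed to be mounted at auth/kubernetes:
# policy that lets the role read the secret
vault policy write basic-secret-policy - <<EOF
path "secret/basic-secret/helloworld" {
  capabilities = ["read"]
}
EOF
# Kubernetes auth role binding the service account and namespace to that policy
vault write auth/kubernetes/role/basic-secret-role \
    bound_service_account_names=basic-secret \
    bound_service_account_namespaces=vault \
    policies=basic-secret-policy \
    ttl=1h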

Related

Why Kubernetes Services are created before Deployment/Pods?

If I have to deploy a workload on Kubernetes and also have to expose it as a service, I have to create a Deployment/Pod and a Service. If the Kubernetes offering is from a cloud provider and we are creating a LoadBalancer service, it makes sense to create the service first, before the workload, as the URL creation for the service takes time. But if Kubernetes is deployed on a non-cloud platform, there is no point in creating the Service before the workload.
So why do I have to create the service first and then the workload?
There is no requirement to create a service before a deployment or vice versa. You can create the deployment before the service or the service before the deployment.
If you create the deployment before the service, then the application packaged in the deployment will not be accessible externally until you create the LoadBalancer.
Conversely, if you create the LoadBalancer first, the traffic for the application will not be routed to the application as it hasn't been created yet, giving 503s to the caller.
You are declaring to Kubernetes how you want the state of the infrastructure to be, e.g. "I want a deployment and a service". Kubernetes will go off and create those, but they may not necessarily end up being created in a predictable order. For example, LoadBalancers take a while to be assigned IPs by your cloud provider, so even though the resource is created in the cluster, it's not actually getting any traffic.
As written in the official Kubernetes documentation, and contrary to what other users have told you, there is a specific case in which the service needs to be created BEFORE the pod.
https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
Note:
When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the client Pods come into existence. Otherwise, those client Pods won't have their environment variables populated.
If you only use DNS to discover the cluster IP for a Service, you don't need to worry about this ordering issue.
For example:
Create some pods, a deployment and services in a namespace:
namespace.yaml
# namespace
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: securedapp
spec: {}
status: {}
services.yaml
# expose api svc
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    type: api
  name: api-svc
  namespace: securedapp
spec:
  ports:
  - port: 90
    protocol: TCP
    targetPort: 80
  selector:
    type: api
  type: ClusterIP
status:
  loadBalancer: {}
---
# expose frontend-svc
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    type: secured
  name: frontend-svc
  namespace: securedapp
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    type: secured
  type: ClusterIP
status:
  loadBalancer: {}
pods-and-deploy.yaml
# create the pod for frontend
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    type: secured
  name: secured-frontend
  namespace: securedapp
spec:
  containers:
  - image: nginx
    name: secured-frontend
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
---
# create the pod for the api
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    type: api
  name: webapi
  namespace: securedapp
spec:
  containers:
  - image: nginx
    name: webapi
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
---
# create a deploy
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: sure-even-with-deploy
  name: sure-even-with-deploy
  namespace: securedapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sure-even-with-deploy
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: sure-even-with-deploy
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
If you create all the resources at the same time, for example by placing all these files in a folder and then doing an apply on it like this:
kubectl apply -f .
Then if we get the envs from one pod
k exec -it webapi -n securedapp -- env
you will obtain something like this:
# environment of api_svc
API_SVC_PORT_90_TCP=tcp://10.152.183.242:90
API_SVC_SERVICE_PORT=90
API_SVC_PORT_90_TCP_ADDR=10.152.183.242
API_SVC_SERVICE_HOST=10.152.183.242
API_SVC_PORT_90_TCP_PORT=90
API_SVC_PORT=tcp://10.152.183.242:90
API_SVC_PORT_90_TCP_PROTO=tcp
# environment of frontend
FRONTEND_SVC_SERVICE_HOST=10.152.183.87
FRONTEND_SVC_SERVICE_PORT=80
FRONTEND_SVC_PORT=tcp://10.152.183.87:280
FRONTEND_SVC_PORT_280_TCP_PORT=80
FRONTEND_SVC_PORT_280_TCP=tcp://10.152.183.87:280
FRONTEND_SVC_PORT_280_TCP_PROTO=tcp
FRONTEND_SVC_PORT_280_TCP_ADDR=10.152.183.87
Clear all the resources created:
kubectl delete -f .
Now, another try.
This time we will do the same thing but "slowly": we create things one by one.
Sure, the ns first
k apply -f ns.yaml
then the pods
k apply -f pods.yaml
After a while create the services
k apply -f services.yaml
Now you will discover that if you get the envs from one pod
k exec -it webapi -n securedapp -- env
this time you will NOT have the environment variables of the services.
So, as mentioned in the k8s documentation, there is at least one case where it is necessary to create the svc BEFORE the pods: if you need the env variables of your services.
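By contrast, DNS-based discovery is not affected by creation order. A minimal check, assuming the cluster DNS add-on is running and that the nginx image ships getent:
# this resolves even though the service was created after the pod
kubectl exec -it webapi -n securedapp -- getent hosts api-svc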
Regards

kubernetes pod deployment not updating

I have a pod egress-operator-controller-manager created from a Makefile by the command make deploy IMG=my_azure_repo/egress-operator:v0.1.
This pod was showing an unexpected status: 401 Unauthorized error in its description, so I created an imagePullSecret and am trying to update this pod with the secret by creating the pod's deployment YAML [egress-operator-manager.yaml] file. But when I apply this YAML file it gives the error below:
root@Ubuntu18-VM:~/egress-operator# kubectl apply -f /home/user/egress-operator-manager.yaml
The Deployment "egress-operator-controller-manager" is invalid: spec.selector: Invalid value:
v1.LabelSelector{MatchLabels:map[string]string{"moduleId":"egress-operator"},
MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
egress-operator-manager.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: egress-operator-controller-manager
  namespace: egress-operator-system
  labels:
    moduleId: egress-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      moduleId: egress-operator
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        moduleId: egress-operator
    spec:
      containers:
      - image: my_azure_repo/egress-operator:v0.1
        name: egress-operator
      imagePullSecrets:
      - name: mysecret
Can someone let me know how I can update this pod's deployment.yaml?
Delete the deployment once and try applying the YAML again (see the commands below).
This happens because Kubernetes does not allow a rolling update that changes the label selector: once deployed, a Deployment's selector cannot be updated until you decide to delete the existing deployment.
Changing selectors leads to undefined behaviors - users are not expected to change the selectors
https://github.com/kubernetes/kubernetes/issues/50808
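A minimal sketch of that workaround, using the names from the question (note that deleting the Deployment briefly removes the running pod):
kubectl delete deployment egress-operator-controller-manager -n egress-operator-system
kubectl apply -f /home/user/egress-operator-manager.yaml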

imagePullSecrets on default service account don't seem to work

I am basically trying to pull GCR images from an Azure Kubernetes cluster.
I have the following for my default service account:
kubectl get serviceaccounts default -o yaml
apiVersion: v1
imagePullSecrets:
- name: gcr-json-key-stg
kind: ServiceAccount
metadata:
  creationTimestamp: "2019-12-24T03:42:15Z"
  name: default
  namespace: default
  resourceVersion: "151571"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: 7f88785d-05de-4568-b050-f3a5dddd8ad1
secrets:
- name: default-token-gn9vb
If I add the same imagePullSecret to individual deployments, it works, so the secret is correct. However, when I use it for the default service account, I get an ImagePullBackOff error which, on describing, confirms that it's a permission issue.
Am I missing something?
I have made sure that my deployment is not configured with any other specific serviceaccount and should be using the default serviceaccount.
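A quick way to confirm which service account a pod is actually running with (the pod name here is a placeholder):
kubectl get pod <pod-name> -o jsonpath='{.spec.serviceAccountName}'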
OK, the problem was that the default service account to which I added the imagePullSecret wasn't in the same namespace as the deployment.
Once I patched the default service account in that namespace, it worked perfectly well.
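A sketch of that patch, reusing the secret name from the question; the namespace name is illustrative:
kubectl patch serviceaccount default -n my-namespace -p '{"imagePullSecrets": [{"name": "gcr-json-key-stg"}]}'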
After you add the secret for pulling the image to the service account, you need to add the service account to your pod or deployment. For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-deployment
spec:
  selector:
    matchLabels:
      app: helloworld
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      serviceAccountName: pull-image # your service account
      containers:
      - name: helloworld
        image: yourPrivateRegistry/image:tag
        ports:
        - containerPort: 80
And the service account pull-image would look something like this (the imagePullSecrets name below is illustrative):
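apiVersion: v1
kind: ServiceAccount
metadata:
  name: pull-image
imagePullSecrets:
- name: my-registry-secret # hypothetical secret created with kubectl create secret docker-registry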

Kube / create deployment with config map

I am new to Kubernetes, and I'm trying to create a deployment with a ConfigMap file. I have the following:
app-mydeploy.yaml
--------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-mydeploy
  labels:
    app: app-mydeploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mydeploy
  template:
    metadata:
      labels:
        app: mydeploy
    spec:
      containers:
      - name: mydeploy-1
        image: mydeploy:tag-latest
        envFrom:
        - configMapRef:
            name: map-mydeploy
map-mydeploy
-----
apiVersion: v1
kind: ConfigMap
metadata:
  name: map-mydeploy
  namespace: default
data:
  my_var: 10.240.12.1
I created the config and the deploy with the following commands:
kubectl create -f app-mydeploy.yaml
kubectl create configmap map-mydeploy --from-file=map-mydeploy
When I do kubectl describe deployments, I get, among other things:
Environment Variables from:
map-mydeploy ConfigMap Optional: false
Also, kubectl describe configmaps map-mydeploy gives me the right results.
The issue is that my container is in CrashLoopBackOff; when I look at the logs, it says: time="2019-02-05T14:47:53Z" level=fatal msg="Required environment variable my_var is not set".
This log is from my container and says that my_var is not defined in the env vars.
What am I doing wrong?
I think you are missing your key in the command
kubectl create configmap map-mydeploy --from-file=map-mydeploy
Try this instead: kubectl create configmap map-mydeploy --from-file=my_var=map-mydeploy
Also, if you are just using one value, I highly recommend creating your ConfigMap from a literal, kubectl create configmap my-config --from-literal=my_var=10.240.12.1, and then referencing the ConfigMap in your deployment as you are currently doing.
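A quick way to see which data keys (and therefore which environment variable names) the ConfigMap actually ended up with, and what the pod sees once it is running; this is just a minimal check using the names from the question:
# list the ConfigMap's data keys; envFrom turns each key into an env var name
kubectl get configmap map-mydeploy -o yaml
# once a pod is running, confirm the variable is present
kubectl exec deploy/app-mydeploy -- env | grep -i my_var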

How to configure a non-default serviceAccount on a deployment

My understanding of this doc page is that I can configure service accounts for Pods, and hopefully also Deployments, so I can access the k8s API in Kubernetes 1.6+. In order not to alter or use the default one, I want to create a service account and mount its certificate into the pods of a deployment.
How do I achieve something similar to this example, but for a deployment?
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
As you need to specify a podSpec in a Deployment as well, you should be able to configure the service account in the same way. Something like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-deployment
spec:
  template:
    # Below is the podSpec.
    metadata:
      name: ...
    spec:
      serviceAccountName: build-robot
      automountServiceAccountToken: false
      ...
A Kubernetes nginx-deployment.yaml where serviceAccountName: test-sa is used as a non-default service account:
Link: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
test-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-sa
  namespace: test-ns
nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: test-ns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nginx
  replicas: 1 # tells deployment to run 1 pod matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: nginx
    spec:
      serviceAccountName: test-sa
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
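A minimal sketch of applying and verifying this, assuming the test-ns namespace does not exist yet:
kubectl create namespace test-ns
kubectl apply -f test-sa.yaml -f nginx-deployment.yaml
# confirm the pod runs with the non-default service account
kubectl get pods -n test-ns -o jsonpath='{.items[0].spec.serviceAccountName}'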