Kubernetes deployment with token from service account - how to specify multiple token audiences? - kubernetes

I know that in Kubernetes deployments we can use projected volume to mount a token from a Service Account. Additionally, we can specify audience for the token. The problem is that I need multiple audiences, not just one. Please see the yaml I use for deployment at the bottom of the question.
The part that is not working as expected is audience: service1,service2,service3.
When Kubernetes generates the token and I decode it on jwt.io, the audiences section looks like this:
"aud": [
"service1,service2,service3"
]
But I expect it to look like this:
"aud": [
"service1", "service2", "service3"
]
Basically, Kubernetes treats "service1,service2,service3" as one audience, but I need a way to specify that this token must work for 3 separate audiences. I thought this could be achieved by separating the values with commas, but apparently not.
I also tried this in my deployment, but it fails, saying the audience must be a string:
- serviceAccountToken:
    audience:
    - service1
    - service2
    - service3
    expirationSeconds: 600
    path: my-token
This is the full yaml for my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: token-client
  name: token-client
  namespace: token-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: token-client
  template:
    metadata:
      labels:
        app: token-client
    spec:
      serviceAccountName: token-client-test
      containers:
      - image: myuser/myimage:0.2.0
        name: token-client
        volumeMounts:
        - mountPath: /var/run/secrets/tokens
          name: my-token
      volumes:
      - name: my-token
        projected:
          sources:
          - serviceAccountToken:
              audience: service1,service2,service3
              expirationSeconds: 600
              path: my-token
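For what it's worth, the audience field of a projected serviceAccountToken source is a plain string, which is why the comma-separated value ends up as one literal audience. The underlying TokenRequest API does accept a list of audiences, and with kubectl 1.24+ the create token command exposes that. A sketch, reusing the service account and namespace names from the question (everything else here is an assumption, and this issues the token out of band rather than via the projected volume):

```shell
# Request a token for the token-client-test service account that is
# valid for three audiences; the --audience flag may be repeated.
kubectl create token token-client-test \
  --namespace token-demo \
  --audience service1 \
  --audience service2 \
  --audience service3 \
  --duration 10m
```

Decoding the resulting JWT on jwt.io should then show a three-element aud array.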

Related

Wiremock docker add OAUTH2

I am using the WireMock Docker image to mock an endpoint. I am using the below YAML to create a deployment on Kubernetes and it's working fine; I have added __files and mappings for the endpoints and responses. Now I need to add OAuth2 to test authentication in my application. Can this be done, and what properties should I add?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wiremock-helper
  labels:
    app: wiremock-helper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wiremock-helper
  template:
    metadata:
      labels:
        app: wiremock-helper
    spec:
      containers:
      - name: wiremock-helper
        image: rodolpheche/wiremock:latest
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: wiremock-volume
          mountPath: /home/wiremock
      volumes:
      - name: wiremock-volume
        nfs:
          server: 10.0.0.0
          path: /home/wiremock
I'm not certain I understand your question, but if I do: the Deployment itself won't handle authentication or authorization. You can have your application handle them directly, use an API gateway/proxy solution (Envoy, Emissary-ingress, Traefik, Ory Oathkeeper, etc.), or use a service mesh solution (Istio with an authorization policy).

Kubernetes :Validation Error(Deployment.spec.template.spec.container[0]): unknown field "ConfigMapref" in io.k8s.api.core.v1.Container

I am doing my first deployment in Kubernetes. I've hosted my API in my namespace and it's up and running, so I tried to connect the API to MongoDB and added my database details to a ConfigMap via Rancher.
When I tried to reference the ConfigMap in my deployment YAML file, I got an error stating Unknown Field - ConfigMapref.
Below is my deployment YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myfistproject
  namespace: Owncloud
spec
  replicas: 2
  selector:
    matchLables:
      app: myfirstproject
      version: 1.0.0
  template:
    metadata:
      labels:
        app: myfirstproject
        version: 1.0.0
    spec:
      containers:
      - name: myfirstproject
        image: **my image repo location**
        imagePullPolicy: always
        ports:
        - containerPort: 80
        configMapRef:
        - name: myfirstprojectdb # This is the name of the config map created via rancher
The myfirstprojectdb ConfigMap stores all the details like the database name, username, password, etc.
On executing the pipeline I get the below error.
How do I need to refer my config map in deployment yaml?
Validation Error(Deployment.spec.template.spec.container[0]): unknown field "ConfigMapref" in io.k8s.api.core.v1.Container
There are some more typos (e.g. the missing : after spec, matchLables instead of matchLabels, and always should be capitalized as Always). The indentation should also be consistent throughout the whole YAML file - see YAML indentation and separation.
I corrected your YAML so that it passes the API server's validation, and added the ConfigMap reference (assuming the ConfigMap contains environment variables):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myfistproject
  namespace: Owncloud
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myfirstproject
      version: 1.0.0
  template:
    metadata:
      labels:
        app: myfirstproject
        version: 1.0.0
    spec:
      containers:
      - name: myfirstproject
        image: **my image repo location**
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        envFrom:
        - configMapRef:
            name: myfirstprojectdb
Useful link:
Configure all key-value pairs in a ConfigMap as container environment variables which is related to this question.
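As a contrast to envFrom, which imports every key of the ConfigMap, a single entry can also be injected individually with valueFrom; the key name DB_NAME below is hypothetical, standing in for whichever key the ConfigMap actually holds:

```yaml
env:
- name: DB_NAME            # hypothetical key present in myfirstprojectdb
  valueFrom:
    configMapKeyRef:
      name: myfirstprojectdb
      key: DB_NAME
```

This form is useful when only some keys should become environment variables, or when the variable name should differ from the key name.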

Replicaset or Deployment with multiple template specifications

Is it possible to create a replicaset / deployment with multiple template specifications - say, one template specification for the logical group "app = ui, rel = stable" and another for "app = as, rel = stable"?
Is it possible to create a replicaset / deployment targeting "rel = stable", so that it covers all the pods with the label "rel = stable"?
Please see the attached pic for more details.
Credits: Kubernetes in Action
Update 1 - adding more details. I am aware of deployments to some extent, but wanted to know whether this is possible, and if not, how to achieve it.
The requirement is to have a single deployment that manages different types of pods.
Please see the yaml file below for reference. Please ignore the image names, ports, etc.; those are just dummy values.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    rel: stable
spec:
  selector:
    matchLabels:
      rel: stable
  template:
    metadata:
      labels:
        rel: stable
    spec:
      containers:
      - name: uipod
        image: ui
        ports:
        - containerPort: 80
  template:
    metadata:
      labels:
        rel: stable
    spec:
      containers:
      - name: aspod
        image: as
        ports:
        - containerPort: 81
  template:
    metadata:
      labels:
        rel: stable
    spec:
      containers:
      - name: pcpod
        image: pc
        ports:
        - containerPort: 82
  template:
    metadata:
      labels:
        rel: stable
    spec:
      containers:
      - name: scpod
        image: sc
        ports:
        - containerPort: 83
"manages all the templates (Pods) that have the label rel = stable"
I don't know exactly what you mean by that, but it is not possible to create a deployment that manages other deployments. A Deployment accepts exactly one pod template - in YAML, a repeated template key simply overrides the previous ones - so the file above will not do what you want. You can put as many containers as you like into that single template, but to separate the pod types you need one Deployment per type, plus an external script or kubectl commands to manage all of them together.
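A sketch of that conventional layout, reusing the question's dummy names (one Deployment per pod type, with the shared rel: stable label; only the first two pod types are shown, and the second half follows the same pattern):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ui-deployment
  labels:
    rel: stable
spec:
  selector:
    matchLabels:
      app: ui
  template:
    metadata:
      labels:
        app: ui
        rel: stable
    spec:
      containers:
      - name: uipod
        image: ui
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: as-deployment
  labels:
    rel: stable
spec:
  selector:
    matchLabels:
      app: as
  template:
    metadata:
      labels:
        app: as
        rel: stable
    spec:
      containers:
      - name: aspod
        image: as
        ports:
        - containerPort: 81
```

All of them can then be operated on at once with label selectors, e.g. kubectl get deployments -l rel=stable.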

How to set SMTP (email configration) in .yaml file in kubernetes

I want to deploy Ghost (a blog) on Kubernetes on Google Cloud, with email configuration.
Ghost itself is running fine in k8s, but I'm not able to get my SMTP settings right in the deployment file.
My .yaml file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: blog
  labels:
    app: blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blog
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
      - name: blog
        image: ghost:2.6-alpine
        imagePullPolicy: Always
        ports:
        - containerPort: 2368
        env:
        - name: url
          value: http://my-blog.com
    environment:
      url: http://my-blog.com
      mail__transport: 2525
      mail__options__service: {Sendgrid}
      mail__options__auth__user: "gurpreet004"
      mail__options__auth__pass: "Server#1234"
It shows this error:
error: error validating "deployment.yaml": error validating data: ValidationError(Deployment.spec.template): unknown field "environment" in io.k8s.api.core.v1.PodTemplateSpec; if you choose to ignore these errors, turn validation off with --validate=false
Please suggest a solution.
The environment field doesn't exist. If you want these values as environment variables in the container, you can do it like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: blog
  labels:
    app: blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blog
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
      - name: blog
        image: ghost:2.6-alpine
        imagePullPolicy: Always
        ports:
        - containerPort: 2368
        env:
        - name: url
          value: http://my-blog.com
        - name: mail__transport
          value: SMTP
        - name: mail__options__service
          value: Sendgrid
        - name: mail__options__auth__user
          value: gurpreet004
        - name: mail__options__auth__pass
          value: Server#1234
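As a side note beyond the original answer, SMTP credentials are usually kept out of the Deployment in a Secret; the Secret name blog-mail below is hypothetical:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: blog-mail
type: Opaque
stringData:          # stringData avoids manual base64 encoding
  user: gurpreet004
  pass: Server#1234
```

In the container spec, the two auth entries would then use valueFrom: secretKeyRef: (with name: blog-mail and key: user / pass) instead of plain value fields.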
The given answer is correct, but you need to follow some steps:
1. Create an account in Mailgun.
2. Verify the account and also register your domain.
3. Mailgun provides you with "demo SMTP" details to test your mail.
4. If you need to, you can reset the demo SMTP password.
5. Enter these details in the mail section of your YAML file and apply it.
6. Open the Ghost admin and use the "Labs" option -> test mail.
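Step 5 with Mailgun would then look roughly like the fragment below; the login and password values are placeholders that would come from Mailgun's SMTP page, not verified settings:

```yaml
env:
- name: mail__transport
  value: SMTP
- name: mail__options__service
  value: Mailgun
- name: mail__options__auth__user
  value: postmaster@mg.example.com   # hypothetical Mailgun SMTP login
- name: mail__options__auth__pass
  value: <demo-smtp-password>        # placeholder
```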

Are multiple imagePullSecrets allowed and used by Kubernetes to pull an image from a private registry?

I have a private registry (GitLab) where my Docker images are stored.
For deployment, a secret is created that allows GKE to access the registry; it is called deploy-secret.
The login information in that secret expires in the registry after a short time.
I additionally created a second, permanent secret that allows access to the Docker registry, named permanent-secret.
Is it possible to give the Pod both secrets? For example:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: deploy-secret
  - name: permanent-secret
Will Kubernetes, when it later re-pulls the image, recognize that the first secret does not work (no longer authenticates against the private registry) and successfully fall back to the second secret?
Surprisingly, this works! I just tried it on my cluster: I added a fake registry-credentials secret with wrong values, put both secrets in my YAML as you did (below), and the pod was created and its container is running successfully:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      nodeSelector:
      containers:
      - image: gitlab.myapp.com/my-image:tag
        name: test
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: regcred-test
      - name: regcred
The regcred secret has the correct values, and regcred-test is just gibberish, so we can see that the incorrect secret is ignored and the valid one is used.
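For completeness, a registry-credentials secret of this kind is typically created with kubectl create secret docker-registry; the server and account values below are placeholders:

```shell
# Create a docker-registry secret named regcred for the private registry.
kubectl create secret docker-registry regcred \
  --docker-server=gitlab.myapp.com \
  --docker-username=<username> \
  --docker-password=<token-or-password> \
  --docker-email=<email>
```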