Add ExternalSecret to YAML file deploying to K8s - kubernetes

I'm trying to deploy a Kubernetes processor to a cluster on GCP GKE but the pod fails with the following error:
secret "service-account-credentials-dbt-test" not found: CreateContainerConfigError
This is my deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dbt-core-processor
  namespace: prod
  labels:
    app: dbt-core
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dbt-core
  template:
    metadata:
      labels:
        app: dbt-core
    spec:
      containers:
      - name: dbt-core-processor
        image: IMAGE
        resources:
          requests:
            cpu: 50m
            memory: 1Gi
          limits:
            cpu: 1
            memory: 2Gi
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          valueFrom:
            secretKeyRef:
              name: service-account-credentials-dbt-test
              key: service-account-credentials-dbt-test
---
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: service-account-credentials-dbt-test
  namespace: prod
spec:
  backendType: gcpSecretsManager
  data:
  - key: service-account-credentials-dbt-test
    name: service-account-credentials-dbt-test
    version: latest
When I run kubectl apply -f deployment.yml I get the following error:
deployment.apps/dbt-core-processor created
error: unable to recognize "deployment.yml": no matches for kind "ExternalSecret" in version "kubernetes-client.io/v1"
This creates my processor but the pod fails to spin up the secrets:
secret "service-account-credentials-dbt-test" not found: CreateContainerConfigError
How do I add the secrets from my secrets manager in GCP to this deployment?

ExternalSecret is a custom resource definition (CRD), and the error means that CRD is not installed on your cluster.
The kubernetes-client.io/v1 API group comes from the old kubernetes-external-secrets project, so it looks like you may be following instructions for that now-archived project; its GitHub repo points to the maintained replacement, the External Secrets Operator.
The good news is that the current project has comprehensive documentation, including a guide to installing the CRDs on your cluster and the proper configuration for the ExternalSecret resource.
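For reference, a minimal sketch of what that could look like with the External Secrets Operator (not part of the original answer; it assumes the operator's Helm chart, a placeholder SecretStore named gcp-secret-store, a placeholder GCP project ID, and that the store already has working credentials for Secret Manager). First, installing the operator and its CRDs:
helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets -n external-secrets --create-namespace
Then the SecretStore and ExternalSecret would look roughly like:
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: gcp-secret-store            # placeholder name
  namespace: prod
spec:
  provider:
    gcpsm:
      projectID: MY_PROJECT_ID      # placeholder GCP project
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: service-account-credentials-dbt-test
  namespace: prod
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: gcp-secret-store
    kind: SecretStore
  target:
    name: service-account-credentials-dbt-test        # Kubernetes Secret the operator creates
  data:
  - secretKey: service-account-credentials-dbt-test   # key inside that Secret
    remoteRef:
      key: service-account-credentials-dbt-test       # secret name in GCP Secret Manager
Once the operator reconciles this, a regular Kubernetes Secret with the name your Deployment's secretKeyRef already expects should appear in the prod namespace.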

Related

Having trouble deploying database to kubernetes cluster

I am able to deploy the database service itself, but when I try to deploy with a persistent volume claim as well, the deployment silently fails. Below is the deployment.yaml file I am using. The service deploys fine if I remove the first 14 lines that define the persistent volume claim.
apiVersion: apps/v1
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: timescale-pvc-1
  namespace: my-namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: standard
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: timescale
spec:
  selector:
    matchLabels:
      app: timescale
  replicas: 1
  template:
    metadata:
      labels:
        app: timescale
    spec:
      containers:
      - name: timescale
        image: timescale/timescaledb:2.3.0-pg11
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_PASSWORD
          value: "password"
        - name: POSTGRES_DB
          value: "metrics"
      volumes:
      - name: timescaledb-pv
        persistentVolumeClaim:
          claimName: timescale-pvc-1
Consider a StatefulSet for running stateful apps like databases; a Deployment is preferred for stateless services.
You are using the storage class below in the PVC:
storageClassName: standard
Ensure the storage class supports dynamic storage provisioning.
Are you creating a PV along with the PVC and Deployment? A Deployment, StatefulSet or Pod can only use a PVC if there is a PV available (or one can be dynamically provisioned for it).
If you're creating the PV as well, then there's a possibility of a different issue. Please share the events of your Deployment and PVC.
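Not part of the original answer, but a quick way to check both points, assuming the names from the question's manifest:
kubectl get storageclass standard -o yaml                      # confirm a provisioner is set for dynamic provisioning
kubectl describe pvc timescale-pvc-1 -n my-namespace           # the Events section shows why a claim stays Pending
kubectl get events -n my-namespace --sort-by=.lastTimestamp    # recent events for the deployment and claim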

Iscsi as persistent storage statefulset kubernetes

I have a use case where a telecom application will run in several pods (each pod hosts a billing service configured for a specific client), and the service needs to store state, so the obvious choice is a StatefulSet.
The problem is that I need to use iSCSI as the backend storage for these pods. Can you please point me to a reference for such a use case? I am looking for YAML for the PV, PVC and StatefulSet, and for how to link them, where the PV/PVC use iSCSI as the storage option.
Yes, you are right that a StatefulSet is the usual option, although you can also use a Deployment.
You have not mentioned which cloud provider you will be using, but one note: iSCSI storage does not work well with GKE nodes running Container-Optimized OS, so if you are on GCP GKE, change the node OS or use the Ubuntu node image first.
You can then start the iSCSI service on Ubuntu.
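A hedged sketch of what that typically means on an Ubuntu node, assuming the iSCSI initiator (open-iscsi) is what is needed, as it is for OpenEBS cStor volumes:
sudo apt-get update && sudo apt-get install -y open-iscsi   # iSCSI initiator tools
sudo systemctl enable --now iscsid                          # start the initiator daemon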
You can also use MinIO or OpenEBS for the storage option.
Sharing the details for OpenEBS below.
Create GCP disks and attach them to the nodes as a mount, or dynamically provision them using YAML like the following, as per your needs.
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: disk-pool
  annotations:
    cas.openebs.io/config: |
      - name: PoolResourceRequests
        value: |-
          memory: 2Gi
      - name: PoolResourceLimits
        value: |-
          memory: 4Gi
spec:
  name: disk-pool
  type: disk
  poolSpec:
    poolType: striped
  blockDevices:
    blockDeviceList:
    - blockdevice-<Number>
    - blockdevice-<Number>
    - blockdevice-<Number>
Storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-sc-rep1
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "disk-pool"
      - name: ReplicaCount
        value: "1"
provisioner: openebs.io/provisioner-iscsi
Application workload
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"   # required for a StatefulSet; assumes a matching headless Service named nginx
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
      storageClassName: openebs-sc-rep1
If you are on AWS, you can do the same using EBS disks with iSCSI.
For testing you can also check out
https://cloud.ibm.com/catalog/content/minio
A few links:
https://support.zadarastorage.com/hc/en-us/articles/360039289891-Kubernetes-CSI-Getting-started-with-Zadara-CSI-on-GKE-Google-Kubernetes-Engine-

Kubernetes cluster not pulling images created by Skaffold

This is my skaffold file:
apiVersion: skaffold/v1
kind: Config
metadata:
  name: app-skaffold
build:
  artifacts:
  - image: myappservice
    context: api-server
deploy:
  helm:
    releases:
    - name: myapp
      chartPath: chart/myapp
And in my Helm templates folder I have only one manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  labels:
    app: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: apiserver
        image: myappservice
        ports:
        - containerPort: 5050
        env:
        - name: a-key
          valueFrom:
            secretKeyRef:
              name: secret-key
              key: secret-key-value
But every time I run
$ skaffold dev
and check my pods' status with $ kubectl get pods, I get ErrImagePull statuses.
This started when I added Helm to the stack; it was working when using only kubectl.
In my deploy section in my skaffold.yaml file, I had:
deploy:
  kubectl:
    manifests:
    - ./k8s-manifests/*.yaml
And it was working fine. The only thing I did was move the manifest file into the templates folder of my Helm chart and change the skaffold.yaml file as shown above.
What am I missing?
I ran into a registry issue where all my images suddenly disappeared after an ingress config change and Skaffold (1.0.0) couldn't load anything. The only way I could fix it was by deleting my entire cluster and re-creating it.
This probably won't help, but it's worth a shot.
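Not from the original answer, but before recreating anything it may be worth confirming which image reference the Helm release actually rendered, since ErrImagePull usually means the node is trying to pull a tag that was never pushed. A hedged check using the names from the question's skaffold.yaml:
helm get manifest myapp | grep "image:"   # the image reference Helm rendered into the Deployment
kubectl describe pods -l app=myapp-pod    # the Events section shows the exact image name and pull error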

K8S deployments with shared environment variables

We have a set of deployments (sets of pods) that all use the same Docker image. Examples:
web api
web admin
web tasks worker nodes
data tasks worker nodes
...
They all require a set of environment variables that are common, for example location of the database host, secret keys to external services, etc. They also have a set of environment variables that are not common.
Is there any way one could either:
Reuse a template where environment variables are defined
Load environment variables from file and set them on the pods
The optimal solution would be one that is namespace aware, as we separate the test, stage and prod environment using kubernetes namespaces.
Something similar to Docker's env_file would be nice, but I cannot find any examples or references for this. The only thing I can find is setting env via secrets, but that is not clean and is way too verbose, as I still need to write all environment variables for each deployment.
You can create a ConfigMap with all the common key-value pairs of env variables.
Then you can reuse the ConfigMap to declare all of its values as environment variables in each Deployment.
Here is an example taken from the Kubernetes official docs.
Create a ConfigMap containing multiple key-value pairs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
Use envFrom to define all of the ConfigMap’s data as Pod environment variables. The key from the ConfigMap becomes the environment variable name in the Pod.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
    - configMapRef:
        name: special-config   # every key-value pair in the ConfigMap becomes an environment variable
    env:
    - name: uncommon
      value: "uncommon value"
  restartPolicy: Never
You can specify the uncommon env variables in the env field.
Now, to verify that the environment variables are actually available, check the logs:
$ kubectl logs -f test-pod
KUBERNETES_PORT=tcp://10.96.0.1:443
SPECIAL_LEVEL=very
uncommon=uncommon value
SPECIAL_TYPE=charm
...
Here you can see that all the provided environment variables are available.
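Since the question is about Deployments rather than bare Pods, the same envFrom block goes under the pod template's container spec. A minimal sketch reusing the special-config ConfigMap above (web-api and my-image are placeholder names, not from the question):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api                    # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-api
  template:
    metadata:
      labels:
        app: web-api
    spec:
      containers:
      - name: web-api
        image: my-image            # placeholder image
        envFrom:
        - configMapRef:
            name: special-config   # shared, common variables
        env:
        - name: uncommon
          value: "uncommon value"  # deployment-specific variables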
Alternatively, you can create a Secret first and then reference it in as many deployment files as you need, so they all share the same environment variable and value:
kubectl create secret generic jwt-secret --from-literal=JWT_KEY=my_awesome_jwt_secret_code
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: auth
        image: lord/auth
        resources:
          requests:
            memory: "128Mi"
            cpu: "250m"
          limits:
            memory: "256Mi"
            cpu: "500m"
        env:
        - name: JWT_KEY
          valueFrom:
            secretKeyRef:
              name: jwt-secret
              key: JWT_KEY
The value is then available in the application code as process.env.JWT_KEY.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tickets-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tickets
  template:
    metadata:
      labels:
        app: tickets
    spec:
      containers:
      - name: tickets
        image: lord/tickets
        resources:
          requests:
            memory: "128Mi"
            cpu: "250m"
          limits:
            memory: "256Mi"
            cpu: "500m"
        env:
        - name: JWT_KEY
          valueFrom:
            secretKeyRef:
              name: jwt-secret
              key: JWT_KEY
Again, the value is read in the application code as process.env.JWT_KEY.
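Not part of the original answer, but if the goal is to avoid repeating the env block in every deployment, the whole Secret can also be pulled in with envFrom, analogous to the ConfigMap approach above. A minimal sketch of the container section, using the jwt-secret created earlier:
      containers:
      - name: auth
        image: lord/auth
        envFrom:
        - secretRef:
            name: jwt-secret   # every key in the Secret (here JWT_KEY) becomes an environment variable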

Kubernetes | Rolling Update on Replica Set

I'm trying to perform a rolling update of the container image that my Federated Replica Set is using but I'm getting the following error:
When I run: kubectl rolling-update mywebapp -f mywebapp-v2.yaml
I get the error message: the server could not find the requested resource;
This is a brand-new, clean install on Google Container Engine (GKE), so besides creating the Federated Cluster and deploying my first service nothing else has been done. I'm following the instructions from the Kubernetes docs, but no luck.
I've checked to make sure that I'm in the correct context, and I've also created a new YAML file pointing to the new image and updated the metadata name. Am I missing something? The easy way out would be to delete the replica set and redeploy, but then I'm cheating myself :). Any pointers would be appreciated.
mywebapp-v2.yaml - the new YAML file for the rolling update:
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: mywebapp-v2
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: mywebapp
    spec:
      containers:
      - name: mywebapp
        image: gcr.io/xxxxxx/static-js:v2
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
          name: mywebapp
My original mywebapp.yaml file:
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: mywebapp
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: mywebapp
    spec:
      containers:
      - name: mywebapp
        image: gcr.io/xxxxxx/static-js:v2
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
          name: mywebapp
Try kind: Deployment.
Most kubectl commands that support Replication Controllers also support ReplicaSets. One exception is the rolling-update command. If you want the rolling update functionality, please consider using Deployments instead.
Also, the rolling-update command is imperative whereas Deployments are declarative, so we recommend using Deployments through the rollout command.
-- Replica Sets | Kubernetes
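Not part of the quoted docs, but roughly what that looks like in practice: a hedged sketch that rewrites the ReplicaSet above as a Deployment (apps/v1 requires an explicit selector):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebapp
spec:
  replicas: 4
  selector:
    matchLabels:
      app: mywebapp
  template:
    metadata:
      labels:
        app: mywebapp
    spec:
      containers:
      - name: mywebapp
        image: gcr.io/xxxxxx/static-js:v2
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
          name: mywebapp
A new image tag can then be rolled out declaratively, e.g.:
kubectl apply -f mywebapp-deployment.yaml                                   # assumed file name for the manifest above
kubectl set image deployment/mywebapp mywebapp=gcr.io/xxxxxx/static-js:v2   # or edit the tag in the file and re-apply
kubectl rollout status deployment/mywebapp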