Create a job to save a secret in Kubernetes - kubernetes

I have created a job that runs the following command (job-command.sh):
cat << EOF | kubectl apply -f -
{
  "apiVersion": "v1",
  "kind": "Secret",
  "metadata": {
    "name": "my-secret-name",
    "namespace": "my-namespace"
  },
  "data": {
    "cert": "my-certificate-data"
  },
  "type": "Opaque"
}
EOF
The job looks like this:
apiVersion: batch/v1
kind: Job
metadata:
  name: "my-name-job1"
  namespace: "my-namespace"
  labels:
    app: "my-name"
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
spec:
  template:
    metadata:
      labels:
        app: my-name
    spec:
      serviceAccountName: my-name-sa
      restartPolicy: Never
      containers:
        - name: myrepo
          image: "repo:image"
          imagePullPolicy: "Always"
          command: ["sh", "-c"]
          args:
            - |-
              /job-command.sh
And the rbac like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-name-sa
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-name-rolebinding
  namespace: my-namespace
  labels:
    app: "{{ .Chart.Name }}"
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: my-name-role
subjects:
  - kind: ServiceAccount
    name: my-name-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-name-role
  namespace: my-namespace
  labels:
    app: "{{ .Chart.Name }}"
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
rules:
  - apiGroups: ['']
    resources: ['secrets']
    verbs: ["get", "watch", "list", "create", "update", "patch"]
I'm getting the following issue:
"error: error validating "STDIN": error validating data: the server does not allow access to the requested resource; if you choose to ignore these errors, turn validation off with --validate=false"
I don't understand what's wrong... the service account used by the job should grant it access to create secrets.
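As a first check (a diagnostic sketch using the names above, not a confirmed fix), you can ask the API server whether the job's service account may create secrets by impersonating it:
kubectl auth can-i create secrets --namespace my-namespace --as system:serviceaccount:my-namespace:my-name-sa
# "yes" would suggest the RBAC itself is fine and the failure comes from the
# client-side schema validation step the error message mentions, not from creating the secret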

Related

telepresence: error: workload "xxx-service.default" not found

I have this chart of a personal project deployed in minikube:
---
# Source: frontend/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: xxx-app-service
spec:
  selector:
    app: xxx-app
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
---
# Source: frontend/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: '3'
  creationTimestamp: '2022-06-19T21:57:01Z'
  generation: 3
  labels:
    app: xxx-app
  name: xxx-app
  namespace: default
  resourceVersion: '43299'
  uid: 7c43767a-abbd-4806-a9d2-6712847a0aad
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: xxx-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: xxx-app
    spec:
      containers:
        - image: "registry.gitlab.com/esavara/xxx/wm:staging"
          name: frontend
          imagePullPolicy: Always
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          ports:
            - containerPort: 3000
          livenessProbe:
            httpGet:
              path: /
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 3
          env:
            - name: PORT
              value: "3000"
          resources: {}
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: regcred
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
# Source: frontend/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  creationTimestamp: '2022-06-19T22:28:58Z'
  generation: 1
  name: xxx-app
  namespace: default
  resourceVersion: '44613'
  uid: b58dcd17-ee1f-42e5-9dc7-d915a21f97b5
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - backend:
              service:
                name: "xxx-app-service"
                port:
                  number: 3000
            path: /
            pathType: Prefix
status:
  loadBalancer:
    ingress:
      - ip: 192.168.39.80
---
# Source: frontend/templates/gitlab-registry-sealed.json
{
  "kind": "SealedSecret",
  "apiVersion": "bitnami.com/v1alpha1",
  "metadata": {
    "name": "regcred",
    "namespace": "default",
    "creationTimestamp": null
  },
  "spec": {
    "template": {
      "metadata": {
        "name": "regcred",
        "namespace": "default",
        "creationTimestamp": null
      },
      "type": "kubernetes.io/dockerconfigjson",
      "data": null
    },
    "encryptedData": {
".dockerconfigjson": "AgBpHoQw1gBq0IFFYWnxlBLLYl1JC23TzbRWGgLryVzEDP8p+NAGjngLFZmtklmCEHLK63D9pp3zL7YQQYgYBZUjpEjj8YCTOhvnjQIg7g+5b/CPXWNoI5TuNexNJFKFv1mN5DzDk9np/E69ogRkDJUvUsbxvVXs6/TKGnRbnp2NuI7dTJ18QgGXdLXs7S416KE0Yid9lggw06JrDN/OSxaNyUlqFGcRJ6OfGDAHivZRV1Kw9uoX06go3o+AjVd6eKlDmmvaY/BOc52bfm7pY2ny1fsXGouQ7R7V1LK0LcsCsKdAkg6/2DU3v32mWZDKJgKkK5efhTQr1KGOBoLuuHKX6nF5oMA1e1Ii3wWe77lvWuvlpaNecCBTc7im+sGt0dyJb4aN5WMLoiPGplGqnuvNqEFa/nhkbwXm5Suke2FQGKyzDsMIBi9p8PZ7KkOJTR1s42/8QWVggTGH1wICT1RzcGzURbanc0F3huQ/2RcTmC4UvYyhCUqr3qKrbAIYTNBayfyhsBaN5wVRnV5LiPxjLbbOmObSr/8ileJgt1HMQC3V9pVLZobaJvlBjr/mmNlrF118azJOFY+a/bqzmtBWwlcvVuel/EaOb8uA8mgwfnbnShMinL1OWTHoj+D0ayFmUdsQgMYwRmStnC7x/6OXThmBgfyrLguzz4V2W8O8qbdDz+O5QoyboLUuR9WQb/ckpRio2qa5tidnKXzZzbWzFEevv9McxvCC1+ovvw4IullU8ra3FutnTentmPHKU2OPr1EhKgFKIX20u8xZaeIJYLCiZlVQohylpjkHnBZo1qw+y1CTiDwytunpmkoGGAXLx++YQSjEJEo889PmVVlSwZ8p/Rdusiz1WbgKqFt27yZaOfYzR2bT++HuB5x6zqfK6bbdV/UZndXs"
    }
  }
}
I'm trying to use Telepresence to redirect traffic from the deployed application to a Docker container that has my project mounted inside with hot reloading, so I can continue developing it but inside Kubernetes. However, running telepresence intercept xxx-app-service --port 3000:3000 --env-file /tmp/xxx-app-service.env fails with the following error:
telepresence: error: workload "xxx-app-service.default" not found
Why is this happening and how do I fix it?
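One thing worth checking (a diagnostic sketch, assuming Telepresence 2): the argument to telepresence intercept is resolved as a workload name (for example a Deployment), and in the chart above the Deployment is named xxx-app while xxx-app-service is the Service. Listing the interceptable workloads shows which name Telepresence expects:
telepresence list --namespace default
# if the workload shows up as xxx-app, the intercept would target that name instead:
telepresence intercept xxx-app --port 3000:3000 --env-file /tmp/xxx-app-service.env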

Kubernetes client Job create

I am using the Kubernetes JavaScript client (https://github.com/kubernetes-client/javascript) to create a Job, and I have set up a service account for the pod that creates the Job. However, I get the following error when the Job creation executes:
body: {
  kind: 'Status',
  apiVersion: 'v1',
  metadata: {},
  status: 'Failure',
  message: 'Job.batch "compiler-job" is invalid: spec.template.spec.containers: Required value',
  reason: 'Invalid',
  details: {
    name: 'compiler-job',
    group: 'batch',
    kind: 'Job',
    causes: [Array]
  },
  code: 422
},
I am sure there is something wrong with the body options that I am passing to k8sBatchV1Api.createNamespacedJob(), but I am not sure what I am doing wrong. Here is the snippet of the manifest:
const kc = new k8s.KubeConfig();
kc.loadFromCluster();
const k8sBatchV1Api = kc.makeApiClient(k8s.BatchV1Api);
k8sBatchV1Api.createNamespacedJob('default', {
  apiVersion: 'batch/v1',
  kind: 'Job',
  metadata: {
    name: 'compiler-job'
  },
  spec: {
    template: {
      metadata: {
        name: 'compiler-job'
      },
      spec: {
        containers: {
          image: 'perl',
          name: 'compiler-job',
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
        },
        restartPolicy: "OnFailure"
      }
    }
  }
}).catch((e: any) => console.log(e));
Here is the serviceaccount.yaml file
apiVersion: v1
kind: ServiceAccount
metadata:
  name: create-job
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: create-job-role
rules:
  - apiGroups: [ "batch", "extensions" ]
    resources: [ "jobs" ]
    verbs: [ "get", "list", "watch", "create", "update", "patch", "delete" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: create-job-rolebinding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: create-job
    namespace: default
roleRef:
  kind: ClusterRole
  name: create-job-role
  apiGroup: rbac.authorization.k8s.io
I've been spending almost a week on this and have no clue.
I think that might be because containers should be a list (an array of container objects). Can you try this?
const kc = new k8s.KubeConfig();
kc.loadFromCluster();
const k8sBatchV1Api = kc.makeApiClient(k8s.BatchV1Api);
k8sBatchV1Api.createNamespacedJob('default', {
  apiVersion: 'batch/v1',
  kind: 'Job',
  metadata: {
    name: 'compiler-job'
  },
  spec: {
    template: {
      metadata: {
        name: 'compiler-job'
      },
      spec: {
        containers: [{
          image: 'perl',
          name: 'compiler-job',
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
        }],
        restartPolicy: "OnFailure"
      }
    }
  }
}).catch((e: any) => console.log(e));

Issue injecting helm value on configmap

Can someone help?
I am trying to inject a Helm value into a ConfigMap, but it breaks the format. If I use the value directly instead of .Values, it works fine.
What I have:
data:
  application.instanceLabelKey: argocd.argoproj.io/instance
  oidc.config: |
    name: Okta
    issuer: https://mycompany.okta.com
    clientID: {{ .Values.okta.clientID }}
    clientSecret: {{ .Values.okta.clientSecret }}
    requestedScopes: ["openid", "profile", "email", "groups"]
    requestedIDTokenClaims: {"groups": {"essential": true}}
The result
data:
  application.instanceLabelKey: argocd.argoproj.io/instance
  oidc.config: "name: Okta\nissuer: https://mycompany.okta.com\nclientID: myClientId \nclientSecret:
    mySecret\nrequestedScopes: [\"openid\", \"profile\",
    \"email\", \"groups\"]\nrequestedIDTokenClaims: {\"groups\": {\"essential\": true}}\n"
It should work with values.yaml; it worked for me both ways:
Using the values in values.yaml
Values.yaml:
okta:
  clientSecret: test1233
  clientID: testnew
configmap
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-config
  namespace: default
  labels:
    app: test
data:
  application.instanceLabelKey: argocd.argoproj.io/instance
  oidc.config: |
    name: Okta
    issuer: https://mycompany.okta.com
    clientID: {{ .Values.okta.clientID }}
    clientSecret: {{ .Values.okta.clientSecret }}
    requestedScopes: ["openid", "profile", "email", "groups"]
    requestedIDTokenClaims: {"groups": {"essential": true}}
Command used:
helm install testchart .\mycharttest --dry-run
Output:
# Source: mycharttest/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-config
  namespace: default
  labels:
    app: test
    product: test
    db: test
data:
  application.instanceLabelKey: argocd.argoproj.io/instance
  oidc.config: |
    name: Okta
    issuer: https://mycompany.okta.com
    clientID: testnew
    clientSecret: test1233
    requestedScopes: ["openid", "profile", "email", "groups"]
    requestedIDTokenClaims: {"groups": {"essential": true}}
Using the values at runtime
Command:
helm install test .\mycharttest --dry-run --set okta.clientID=newclientid --set okta.clientSecret=newsecret
Output:
# Source: mycharttest/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-config
  namespace: default
  labels:
    app: test
    product: test
    db: test
data:
  application.instanceLabelKey: argocd.argoproj.io/instance
  oidc.config: |
    name: Okta
    issuer: https://mycompany.okta.com
    clientID: newclientid
    clientSecret: newsecret
    requestedScopes: ["openid", "profile", "email", "groups"]
    requestedIDTokenClaims: {"groups": {"essential": true}}
Kubernetes version: 1.22
Helm version:
version.BuildInfo{Version:"v3.7.1", GitCommit:"1d11fcb5d3f3bf00dbe6fe31b8412839a96b3dc4", GitTreeState:"clean", GoVersion:"go1.16.9"}
The easy way: store everything in a file and use it directly.
File oidc.config:
name: Okta
issuer: https://mycompany.okta.com
clientID: clientID
clientSecret: clientSecret
requestedScopes: ["openid", "profile", "email", "groups"]
requestedIDTokenClaims: {"groups": {"essential": true}}
Helm template:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
{{- $files := .Files }}
{{- range tuple "oidc.config" }}
  {{ . }}: |-
{{ $files.Get . }}
{{- end }}
Reference doc: https://helm.sh/docs/chart_template_guide/accessing_files/
Also check out this similar answer: https://stackoverflow.com/a/56209432/5525824
After lots of tries, it worked when I skipped the whitespace at the beginning:
data:
  application.instanceLabelKey: argocd.argoproj.io/instance
  oidc.config: |
    name: Okta
    issuer: "https://mycompany.okta.com"
    clientID: {{- .Values.okta.clientId }}
    clientSecret: {{- .Values.okta.clientSecret }}
    requestedScopes: ["openid", "profile", "email", "groups"]
    requestedIDTokenClaims: {"groups": {"essential": true}}
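To see how the block renders without installing anything, a dry-run render of just the ConfigMap template is a quick check (a sketch; the chart name and template path follow the examples above and may differ in your chart):
helm template testchart ./mycharttest --show-only templates/configmap.yaml
helm template testchart ./mycharttest --show-only templates/configmap.yaml --set okta.clientID=newclientid --set okta.clientSecret=newsecret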

Kubernetes Persistent Volume configuration

My K8s cluster needs persistent storage
This is the pv-claim.yaml file
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Mi
This is the pv-volume.yaml file
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 50Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "kpi-mon/configurations"
and this is my deployment.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}-deployment
  labels:
    app: {{ .Values.name }}
    xappRelease: {{ .Release.Name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
        xappRelease: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Values.name }}
          image: {{ .Values.repository -}}{{- .Values.image -}}:{{- .Values.tag }}
          imagePullPolicy: IfNotPresent
          ports:
            - name: rmr
              containerPort: {{ .Values.rmrPort }}
              protocol: TCP
            - name: rtg
              containerPort: {{ .Values.rtgPort }}
              protocol: TCP
          volumeMounts:
            - name: app-cfg
              mountPath: {{ .Values.routingTablePath }}{{ .Values.routingTableFile }}
              subPath: {{ .Values.routingTableFile }}
            - name: app-cfg
              mountPath: {{ .Values.routingTablePath }}{{ .Values.vlevelFile }}
              subPath: {{ .Values.vlevelFile }}
            - name: task-pv-storage
              mountPath: "kpi-mon/configurations"
          envFrom:
            - configMapRef:
                name: {{ .Values.name }}-configmap
      volumes:
        - name: app-cfg
          configMap:
            name: {{ .Values.name }}-configmap
            items:
              - key: {{ .Values.routingTableFile }}
                path: {{ .Values.routingTableFile }}
              - key: {{ .Values.vlevelFile }}
                path: {{ .Values.vlevelFile }}
        - name: task-pv-storage
          persistentVolumeClain:
            claimName: task-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.name }}-rmr-service
  labels:
    xappRelease: {{ .Release.Name }}
spec:
  selector:
    app: {{ .Values.name }}
  type: NodePort
  ports:
    - name: rmr
      protocol: TCP
      port: {{ .Values.rmrPort }}
      targetPort: {{ .Values.rmrPort }}
    - name: rtg
      protocol: TCP
      port: {{ .Values.rtgPort }}
      targetPort: {{ .Values.rtgPort }}
I'm trying to map a local directory from the Pod's file system.
I followed this documentation: https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
When I try to load the deployment, an error occurs and I cannot understand what the problem is.
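Since the deployment error itself is not shown, these commands usually surface the underlying message (a diagnostic sketch; the pod name is a placeholder):
kubectl get pv,pvc
kubectl describe pvc task-pv-claim
kubectl describe pod <pod-name>
kubectl get events --sort-by=.metadata.creationTimestamp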

Skipper https rest end point requests returning http urls

I am trying a PoC with Spring Cloud Data Flow streams and have the application running in Pivotal Cloud Foundry. Trying the same in Kubernetes, the Spring Data Flow server dashboard does not load. I debugged the issue and found the root cause: when the dashboard loads, it hits the Skipper REST endpoint /api, which returns a response with the URLs of the other Skipper endpoints, but the returned URLs are all http. How can I force Skipper to return https URLs instead of http? Below is the response when I curl the same endpoint.
C:\>curl -k https://<skipper_url>/api
Response from Skipper:
{
  "_links" : {
    "repositories" : {
      "href" : "http://<skipper_url>/api/repositories{?page,size,sort}",
      "templated" : true
    },
    "deployers" : {
      "href" : "http://<skipper_url>/api/deployers{?page,size,sort}",
      "templated" : true
    },
    "releases" : {
      "href" : "http://<skipper_url>/api/releases{?page,size,sort}",
      "templated" : true
    },
    "packageMetadata" : {
      "href" : "http://<skipper_url>/api/packageMetadata{?page,size,sort,projection}",
      "templated" : true
    },
    "about" : {
      "href" : "http://<skipper_url>/api/about"
    },
    "release" : {
      "href" : "http://<skipper_url>/api/release"
    },
    "package" : {
      "href" : "http://<skipper_url>/api/package"
    },
    "profile" : {
      "href" : "http://<skipper_url>/api/profile"
    }
  }
}
Kubernetes deployment YAML:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: skipper-server-network-policy
spec:
  podSelector:
    matchLabels:
      app: skipper-server
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              gkp_namespace: ingress-nginx
  egress:
    - {}
  policyTypes:
    - Ingress
    - Egress
---
apiVersion: v1
kind: Secret
metadata:
  name: poc-secret
data:
  .dockerconfigjson: ewogICJhdXRocyI6
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: skipper-server
  labels:
    app: skipper-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: skipper-server
  template:
    metadata:
      labels:
        app: skipper-server
      annotations:
        kubernetes.io/psp: nonroot
    spec:
      containers:
        - name: skipper-server
          image: <image_path>
          imagePullPolicy: Always
          ports:
            - containerPort: 7577
              protocol: TCP
          resources:
            limits:
              cpu: "4"
              memory: 2Gi
            requests:
              cpu: 25m
              memory: 1Gi
          securityContext:
            runAsUser: 99
      imagePullSecrets:
        - name: poc-secret
      serviceAccount: spark
      serviceAccountName: spark
---
apiVersion: v1
kind: Service
metadata:
  name: skipper-server
  labels:
    app: skipper-server
spec:
  ports:
    - port: 80
      targetPort: 7577
      protocol: TCP
      name: http
  selector:
    app: skipper-server
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: skipper-server
  annotations:
    ingress.kubernetes.io/ssl-passthrough: "true"
    ingress.kubernetes.io/secure-backends: "true"
    kubernetes.io/ingress.allow.http: true
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
    - host: "<skipper_url>"
      http:
        paths:
          - path: /
            backend:
              serviceName: skipper-server
              servicePort: 80
  tls:
    - hosts:
        - "<skipper_url>"
SKIPPER APPLICATION.properties
spring.datasource.url=jdbc:h2:mem:testdb
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
spring.server.use-forward-headers=true
The root cause was the Skipper /api endpoint returning http URLs for the deployer links, and the Kubernetes ingress trying to redirect and getting blocked with a 308 error. Adding the properties below to the Skipper environment fixed the issue.
DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: skipper-server
spec:
  containers:
    env:
      - name: "server.tomcat.internal-proxies"
        value: ".*"
      - name: "server.use-forward-headers"
        value: "true"
INGRESS
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: skipper-server
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
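After redeploying with those changes, re-running the earlier check should confirm the fix (same placeholder hostname as above):
curl -k https://<skipper_url>/api
# the href values under _links should now start with https://<skipper_url>/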