Kubernetes client Job create - kubernetes

I am using the kubernetes client (https://github.com/kubernetes-client/javascript) to create a Job, and I have set up a service account for the pod that creates the Job. However, I get this error when executing the job creation:
body: {
  kind: 'Status',
  apiVersion: 'v1',
  metadata: {},
  status: 'Failure',
  message: 'Job.batch "compiler-job" is invalid: spec.template.spec.containers: Required value',
  reason: 'Invalid',
  details: {
    name: 'compiler-job',
    group: 'batch',
    kind: 'Job',
    causes: [Array]
  },
  code: 422
},
I am sure there is something wrong with the body options that I am passing to k8sBatchV1Api.createNamespacedJob(), but I am not sure what. Here is the snippet of the manifest:
const kc = new k8s.KubeConfig();
kc.loadFromCluster();
const k8sBatchV1Api = kc.makeApiClient(k8s.BatchV1Api);
k8sBatchV1Api.createNamespacedJob('default', {
  apiVersion: 'batch/v1',
  kind: 'Job',
  metadata: {
    name: 'compiler-job'
  },
  spec: {
    template: {
      metadata: {
        name: 'compiler-job'
      },
      spec: {
        containers: {
          image: 'perl',
          name: 'compiler-job',
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
        },
        restartPolicy: "OnFailure"
      }
    }
  }
}).catch((e: any) => console.log(e));
Here is the serviceaccount.yaml file:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: create-job
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: create-job-role
rules:
  - apiGroups: [ "batch", "extensions" ]
    resources: [ "jobs" ]
    verbs: [ "get", "list", "watch", "create", "update", "patch", "delete" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: create-job-rolebinding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: create-job
    namespace: default
roleRef:
  kind: ClusterRole
  name: create-job-role
  apiGroup: rbac.authorization.k8s.io
I have spent almost a week on this and have no clue.

I think that is because containers should be a list. Can you try this?
const kc = new k8s.KubeConfig();
kc.loadFromCluster();
const k8sBatchV1Api = kc.makeApiClient(k8s.BatchV1Api);
k8sBatchV1Api.createNamespacedJob('default', {
  apiVersion: 'batch/v1',
  kind: 'Job',
  metadata: {
    name: 'compiler-job'
  },
  spec: {
    template: {
      metadata: {
        name: 'compiler-job'
      },
      spec: {
        containers: [{
          image: 'perl',
          name: 'compiler-job',
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
        }],
        restartPolicy: "OnFailure"
      }
    }
  }
}).catch((e: any) => console.log(e));
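One way to catch this class of mistake at compile time is to lean on types: with an array-typed `containers` field, TypeScript rejects a bare object before the request ever reaches the API server. A minimal self-contained sketch (the interfaces below are simplified stand-ins for the generated client types, not the real ones):

```typescript
// Simplified stand-ins for the client's generated types (hypothetical,
// trimmed to the fields used here).
interface Container {
  name: string;
  image: string;
  command?: string[];
}

interface PodSpec {
  containers: Container[]; // an array -- assigning a single object is a type error
  restartPolicy?: string;
}

const podSpec: PodSpec = {
  // Wrapping the container in [ ... ] is what the API server requires.
  containers: [
    {
      name: "compiler-job",
      image: "perl",
      command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"],
    },
  ],
  restartPolicy: "OnFailure",
};

console.log(Array.isArray(podSpec.containers)); // true
```

Had the original snippet been assigned to a typed `V1Job`-style object instead of a plain literal, the compiler would have flagged `spec.template.spec.containers` immediately.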

Related

dapr.io/config annotation to access a secret

I am trying to deploy a k8s pod with a Dapr sidecar container.
I want the Dapr container to access a secret key named "MY_KEY" stored in a secret called my-secrets.
I wrote this manifest for the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: my-app
        dapr.io/app-port: "80"
        dapr.io/config: |
          {
            "components": [
              {
                "name": "secrets",
                "value": {
                  "MY_KEY": {
                    "secretName": "my-secrets",
                    "key": "MY_KEY"
                  }
                }
              }
            ]
          }
    spec:
      containers:
        - name: my_container
          image: <<image_source>>
          imagePullPolicy: "Always"
          ports:
            - containerPort: 80
          envFrom:
            - secretRef:
                name: my-secrets
          env:
            - name: ASPNETCORE_ENVIRONMENT
              value: Development
            - name: ASPNETCORE_URLS
              value: http://+:80
      imagePullSecrets:
        - name: <<image_pull_secret>>
but it seems that it cannot create the configuration; the Dapr container log is:
time="2023-01-24T09:05:50.927484097Z" level=info msg="starting Dapr Runtime -- version 1.9.5 -- commit f5f847eef8721d85f115729ee9efa820fe7c4cd3" app_id=my-app instance=my-container-6db6f7f6b9-tggww scope=dapr.runtime type=log ver=1.9.5
time="2023-01-24T09:05:50.927525344Z" level=info msg="log level set to: info" app_id=emy-app instance=my-container-6db6f7f6b9-tggww scope=dapr.runtime type=log ver=1.9.5
time="2023-01-24T09:05:50.927709269Z" level=info msg="metrics server started on :9090/" app_id=my-app instance=my-container-6db6f7f6b9-tggww scope=dapr.metrics type=log ver=1.9.5
time="2023-01-24T09:05:50.92795239Z" level=info msg="Initializing the operator client (config: {
"components": [
{
"name": "secrets",
"value": {
"MY_KEY": {
"secretName": "my-secrets",
"key": "MY_KEY"
}
}
}
]
}
)" app_id=my-app instance=my-container-6db6f7f6b9-tggww scope=dapr.runtime type=log ver=1.9.5
time="2023-01-24T09:05:50.93737904Z" level=fatal msg="error loading configuration: rpc error: code = Unknown desc = error getting configuration: Configuration.dapr.io "{
"components": [
{
"name": "secrets",
"value": {
"MY_KEY": {
"secretName": "my-secrets",
"key": "MY_KEY"
}
}
}
]
}" not found" app_id=my-app instance=my-container-6db6f7f6b9-tggww scope=dapr.runtime type=log ver=1.9.5
Can anyone tell me what I am doing wrong?
Thanks in advance for your help.
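Incidentally, the fatal log line above (`Configuration.dapr.io "{ ... }" not found`) suggests the sidecar treats the whole annotation value as the *name* of a `Configuration` resource to look up, not as inline JSON. A hedged sketch of that shape (the `Configuration` kind and `dapr.io/v1alpha1` API group come from Dapr's CRDs; the exact spec fields for secret access should be checked against the Dapr documentation rather than taken from this sketch):

```yaml
# Hypothetical sketch: a named Dapr Configuration resource...
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: my-app-config
  namespace: my-namespace
spec: {}   # actual settings elided; see the Dapr configuration docs
---
# ...referenced by name from the pod annotation instead of inline JSON:
#   dapr.io/config: "my-app-config"
```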

I have an RBAC problem, but everything I test seems ok?

This is a continuation of the problem described here (How do I fix a role-based problem when my role appears to have the correct permissions?).
I have done much more testing and still do not understand the error:
Error from server (Forbidden): pods is forbidden: User "dma" cannot list resource "pods" in API group "" at the cluster scope
UPDATE: Here is another hint from the API server
watch chan error: etcdserver: mvcc: required revision has been compacted
I found this thread, but I am working on a current version of Kubernetes:
How fix this error "watch chan error: etcdserver: mvcc: required revision has been compacted"?
My user exists
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
dma 77m kubernetes.io/kube-apiserver-client kubernetes-admin <none> Approved,Issued
The clusterrole exists
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"kubelet-runtime"},"rules":[{"apiGroups":["","extensions","apps","argoproj.io","workflows.argoproj.io","events.argoproj.io","coordination.k8s.io"],"resources":["*"],"verbs":["*"]},{"apiGroups":["batch"],"resources":["jobs","cronjobs"],"verbs":["*"]}]}
  creationTimestamp: "2021-12-16T00:24:56Z"
  name: kubelet-runtime
  resourceVersion: "296716"
  uid: a4697d6e-c786-4ec9-bf3e-88e3dbfdb6d9
rules:
  - apiGroups:
      - ""
      - extensions
      - apps
      - argoproj.io
      - workflows.argoproj.io
      - events.argoproj.io
      - coordination.k8s.io
    resources:
      - '*'
    verbs:
      - '*'
  - apiGroups:
      - batch
    resources:
      - jobs
      - cronjobs
    verbs:
      - '*'
The sandbox namespace exists
NAME STATUS AGE
sandbox Active 6d6h
My user has authority to operate in the kubelet cluster and the namespace "sandbox"
{
  "apiVersion": "rbac.authorization.k8s.io/v1",
  "kind": "ClusterRoleBinding",
  "metadata": {
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"rbac.authorization.k8s.io/v1\",\"kind\":\"ClusterRoleBinding\",\"metadata\":{\"annotations\":{},\"name\":\"dma-kubelet-binding\"},\"roleRef\":{\"apiGroup\":\"rbac.authorization.k8s.io\",\"kind\":\"ClusterRole\",\"name\":\"kubelet-runtime\"},\"subjects\":[{\"kind\":\"ServiceAccount\",\"name\":\"dma\",\"namespace\":\"argo\"},{\"kind\":\"ServiceAccount\",\"name\":\"dma\",\"namespace\":\"argo-events\"},{\"kind\":\"ServiceAccount\",\"name\":\"dma\",\"namespace\":\"sandbox\"}]}\n"
    },
    "creationTimestamp": "2021-12-16T00:25:42Z",
    "name": "dma-kubelet-binding",
    "resourceVersion": "371397",
    "uid": "a2fb6d5b-8dba-4320-af74-71caac7bdc39"
  },
  "roleRef": {
    "apiGroup": "rbac.authorization.k8s.io",
    "kind": "ClusterRole",
    "name": "kubelet-runtime"
  },
  "subjects": [
    {
      "kind": "ServiceAccount",
      "name": "dma",
      "namespace": "argo"
    },
    {
      "kind": "ServiceAccount",
      "name": "dma",
      "namespace": "argo-events"
    },
    {
      "kind": "ServiceAccount",
      "name": "dma",
      "namespace": "sandbox"
    }
  ]
}
My user has the correct permissions
{
  "apiVersion": "rbac.authorization.k8s.io/v1",
  "kind": "Role",
  "metadata": {
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"rbac.authorization.k8s.io/v1\",\"kind\":\"Role\",\"metadata\":{\"annotations\":{},\"name\":\"dma\",\"namespace\":\"sandbox\"},\"rules\":[{\"apiGroups\":[\"\",\"apps\",\"autoscaling\",\"batch\",\"extensions\",\"policy\",\"rbac.authorization.k8s.io\",\"argoproj.io\",\"workflows.argoproj.io\"],\"resources\":[\"pods\",\"configmaps\",\"deployments\",\"events\",\"pods\",\"persistentvolumes\",\"persistentvolumeclaims\",\"services\",\"workflows\"],\"verbs\":[\"get\",\"list\",\"watch\",\"create\",\"update\",\"patch\",\"delete\"]}]}\n"
    },
    "creationTimestamp": "2021-12-21T19:41:38Z",
    "name": "dma",
    "namespace": "sandbox",
    "resourceVersion": "1058387",
    "uid": "94191881-895d-4457-9764-5db9b54cdb3f"
  },
  "rules": [
    {
      "apiGroups": [
        "",
        "apps",
        "autoscaling",
        "batch",
        "extensions",
        "policy",
        "rbac.authorization.k8s.io",
        "argoproj.io",
        "workflows.argoproj.io"
      ],
      "resources": [
        "pods",
        "configmaps",
        "deployments",
        "events",
        "pods",
        "persistentvolumes",
        "persistentvolumeclaims",
        "services",
        "workflows"
      ],
      "verbs": [
        "get",
        "list",
        "watch",
        "create",
        "update",
        "patch",
        "delete"
      ]
    }
  ]
}
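Note that the error message ends with "at the cluster scope", while the Role above is namespaced to sandbox; a namespaced grant can never satisfy a cluster-scoped `list`. As a toy model of that distinction (illustrative only, not the real Kubernetes authorizer):

```typescript
// Toy model of RBAC scoping (illustrative, not the real authorizer).
// An undefined namespace on a grant means it is cluster-wide;
// an undefined namespace on a request means it is cluster-scoped
// (e.g. `kubectl get pods --all-namespaces`).
type Grant = { resource: string; verb: string; namespace?: string };

function allowed(grants: Grant[], resource: string, verb: string, namespace?: string): boolean {
  return grants.some(g =>
    g.resource === resource &&
    g.verb === verb &&
    // A namespaced grant only covers requests inside that namespace;
    // a cluster-scoped request needs a cluster-wide grant.
    (g.namespace === undefined || g.namespace === namespace)
  );
}

// A Role + RoleBinding in "sandbox" behaves like a namespaced grant:
const grants: Grant[] = [{ resource: "pods", verb: "list", namespace: "sandbox" }];

console.log(allowed(grants, "pods", "list", "sandbox")); // true: namespaced list works
console.log(allowed(grants, "pods", "list", undefined)); // false: cluster-scope list is refused
```

This mirrors why listing pods across all namespaces requires a ClusterRole bound with a ClusterRoleBinding.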
My user is configured correctly on all nodes
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://206.81.25.186:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: dma
  name: dma@kubernetes
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: dma
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
Based on this website, I have been searching for a watch event.
I think I have rebuilt everything above the control plane, but the problem persists.
The next step would be to rebuild the entire cluster, but it would be much more satisfying to find the actual problem.
Please help.
FIX:
The policy for the sandbox namespace was wrong. I fixed that and the problem is gone!
I think I finally understand RBAC (policies and all). Thank you very much to the members of the Kubernetes Slack channel. These policies have passed the first set of tests for a development environment ("sandbox") for Argo workflows. Still testing.
policies.yaml file:
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dev
  namespace: sandbox
rules:
  - apiGroups:
      - "*"
    attributeRestrictions: null
    resources: ["*"]
    verbs:
      - get
      - watch
      - list
  - apiGroups: ["argoproj.io", "workflows.argoproj.io", "events.argoproj.io"]
    attributeRestrictions: null
    resources:
      - pods
      - configmaps
      - deployments
      - events
      - pods
      - persistentvolumes
      - persistentvolumeclaims
      - services
      - workflows
      - eventbus
      - eventsource
      - sensor
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dma-dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dev
subjects:
  - kind: User
    name: dma
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dma-admin
subjects:
  - kind: User
    name: dma
    namespace: sandbox
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx
  namespace: sandbox
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
    - from:
        - podSelector:
            matchLabels:
              run: access
...

Create a job to save a secret in Kubernetes

I have created a job that runs the following command (job-command.sh):
cat << EOF | kubectl apply -f -
{
  "apiVersion": "v1",
  "kind": "Secret",
  "metadata": {
    "name": "my-secret-name",
    "namespace": "my-namespace"
  },
  "data": {
    "cert": "my-certificate-data"
  },
  "type": "Opaque"
}
EOF
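One thing worth checking in a script like this: the `data` field of a Secret must hold base64-encoded values (the `stringData` field accepts plain text instead). A small sketch of building the same payload with the encoding applied, in Node/TypeScript (the variable names are illustrative):

```typescript
// Build the Secret manifest with base64-encoded data, as the API server
// expects for the `data` field (illustrative variable names).
const certPlain = "my-certificate-data";

const secret = {
  apiVersion: "v1",
  kind: "Secret",
  metadata: { name: "my-secret-name", namespace: "my-namespace" },
  type: "Opaque",
  data: {
    // Buffer is Node's built-in binary type; toString("base64") encodes it.
    cert: Buffer.from(certPlain, "utf8").toString("base64"),
  },
};

console.log(secret.data.cert); // "bXktY2VydGlmaWNhdGUtZGF0YQ=="
```

The shell equivalent inside job-command.sh would be piping the value through `base64` before substituting it into the heredoc.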
The job looks like this:
apiVersion: batch/v1
kind: Job
metadata:
  name: "my-name-job1"
  namespace: "my-namespace"
  labels:
    app: "my-name"
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
spec:
  template:
    metadata:
      labels:
        app: my-name
    spec:
      serviceAccountName: my-name-sa
      restartPolicy: Never
      containers:
        - name: myrepo
          image: "repo:image"
          imagePullPolicy: "Always"
          command: ["sh", "-c"]
          args:
            - |-
              /job-command.sh
And the rbac like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-name-sa
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-name-rolebinding
  namespace: my-namespace
  labels:
    app: "{{ .Chart.Name }}"
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: my-name-role
subjects:
  - kind: ServiceAccount
    name: my-name-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-name-role
  namespace: my-namespace
  labels:
    app: "{{ .Chart.Name }}"
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
rules:
  - apiGroups: ['']
    resources: ['secrets']
    verbs: ["get", "watch", "list", "create", "update", "patch"]
I'm getting the following issue:
"error: error validating "STDIN": error validating data: the server does not allow access to the requested resource; if you choose to ignore these errors, turn validation off with --validate=false"
I don't understand what's wrong; the service account used by the job should grant access to create secrets.

Why is the Pulumi Kubernetes provider changing the service and deployment name?

I tried to convert the following working Kubernetes manifest from
##namespace
---
apiVersion: v1
kind: Namespace
metadata:
  name: poc
##postgres
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: db
  name: db
  namespace: poc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - image: postgres
          name: postgres
          env:
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: postgres
          ports:
            - containerPort: 5432
              name: postgres
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: db
  name: db
  namespace: poc
spec:
  type: ClusterIP
  ports:
    - name: "db-service"
      port: 5432
      targetPort: 5432
  selector:
    app: db
##adminer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ui
  name: ui
  namespace: poc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ui
  template:
    metadata:
      labels:
        app: ui
    spec:
      containers:
        - image: adminer
          name: adminer
          ports:
            - containerPort: 8080
              name: ui
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ui
  name: ui
  namespace: poc
spec:
  type: NodePort
  ports:
    - name: "ui-service"
      port: 8080
      targetPort: 8080
  selector:
    app: ui
to
import * as k8s from "@pulumi/kubernetes";
import * as kx from "@pulumi/kubernetesx";
//db
const dbLabels = { app: "db" };
const dbDeployment = new k8s.apps.v1.Deployment("db", {
  spec: {
    selector: { matchLabels: dbLabels },
    replicas: 1,
    template: {
      metadata: { labels: dbLabels },
      spec: {
        containers: [
          {
            name: "postgres",
            image: "postgres",
            env: [{ name: "POSTGRES_USER", value: "postgres" }, { name: "POSTGRES_PASSWORD", value: "postgres" }],
            ports: [{ containerPort: 5432 }]
          }
        ]
      }
    }
  }
});
const dbService = new k8s.core.v1.Service("db", {
  metadata: { labels: dbDeployment.spec.template.metadata.labels },
  spec: {
    selector: dbLabels,
    type: "ClusterIP",
    ports: [{ port: 5432, targetPort: 5432, protocol: "TCP" }],
  }
});
//adminer
const uiLabels = { app: "ui" };
const uiDeployment = new k8s.apps.v1.Deployment("ui", {
  spec: {
    selector: { matchLabels: uiLabels },
    replicas: 1,
    template: {
      metadata: { labels: uiLabels },
      spec: {
        containers: [
          {
            name: "adminer",
            image: "adminer",
            ports: [{ containerPort: 8080 }],
          }
        ]
      }
    }
  }
});
const uiService = new k8s.core.v1.Service("ui", {
  metadata: { labels: uiDeployment.spec.template.metadata.labels },
  spec: {
    selector: uiLabels,
    type: "NodePort",
    ports: [{ port: 8080, targetPort: 8080, protocol: "TCP" }]
  }
});
With this, pulumi up -y succeeds without error, but the application is not fully up and running, because the adminer image tries to reach the Postgres database under the hostname db while Pulumi changes the service name (the deployed service gets a generated name).
My question here is: how do I make this workable?
Is there a way in Pulumi to be strict with the naming?
Note: I know we can easily pass the hostname as an env variable to the adminer image, but I am wondering if there is anything that lets us keep the name unchanged.
Pulumi automatically appends a random suffix to resource names to help with resource replacement. You can find more information about this in the FAQ.
If you'd like to disable this, you can override it using the metadata, like so:
import * as k8s from "@pulumi/kubernetes";
import * as kx from "@pulumi/kubernetesx";
//db
const dbLabels = { app: "db" };
const dbDeployment = new k8s.apps.v1.Deployment("db", {
  spec: {
    selector: { matchLabels: dbLabels },
    replicas: 1,
    template: {
      metadata: { labels: dbLabels },
      spec: {
        containers: [
          {
            name: "postgres",
            image: "postgres",
            env: [{ name: "POSTGRES_USER", value: "postgres" }, { name: "POSTGRES_PASSWORD", value: "postgres" }],
            ports: [{ containerPort: 5432 }]
          }
        ]
      }
    }
  }
});
const dbService = new k8s.core.v1.Service("db", {
  metadata: {
    name: "db", // explicitly set a name on the service
    labels: dbDeployment.spec.template.metadata.labels
  },
  spec: {
    selector: dbLabels,
    type: "ClusterIP",
    ports: [{ port: 5432, targetPort: 5432, protocol: "TCP" }],
  }
});
With that said, it's not always best practice to hardcode names like this; where possible, reference outputs from your resources and pass them to new resources.
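The behavior can be pictured with a toy version of the naming rule (a sketch of the idea, not Pulumi's actual engine): an explicit `metadata.name` is used verbatim; otherwise the logical name gets a random suffix so old and new copies can coexist during replacement.

```typescript
// Toy model of Pulumi-style auto-naming (illustrative, not the real engine).
function physicalName(logicalName: string, explicitName?: string): string {
  if (explicitName !== undefined) {
    return explicitName; // explicit metadata.name is used verbatim
  }
  // Otherwise append a short random suffix so replacements can coexist.
  const suffix = Math.random().toString(36).slice(2, 9);
  return `${logicalName}-${suffix}`;
}

const auto = physicalName("db");
const fixed = physicalName("db", "db");

console.log(fixed);                  // "db"
console.log(auto.startsWith("db-")); // true
```

This is why the adminer container could not resolve `db`: the actual Service name in the cluster carried a suffix until `metadata.name` pinned it.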

Skipper HTTPS REST endpoint requests returning HTTP URLs

I am trying a PoC with Spring Cloud Data Flow streams and have the application running in Pivotal Cloud Foundry. Trying the same in Kubernetes, the Spring Data Flow server dashboard does not load. I debugged the issue and found the root cause: when the dashboard loads, it hits the Skipper REST endpoint /api, which returns a response containing the URLs of the other Skipper endpoints, but the returned URLs are all http. How can I force Skipper to return https URLs instead of http? Below is the response when I curl the same endpoint.
C:\>curl -k https://<skipper_url>/api
RESPONSE FROM SKIPPER
{
  "_links" : {
    "repositories" : {
      "href" : "http://<skipper_url>/api/repositories{?page,size,sort}",
      "templated" : true
    },
    "deployers" : {
      "href" : "http://<skipper_url>/api/deployers{?page,size,sort}",
      "templated" : true
    },
    "releases" : {
      "href" : "http://<skipper_url>/api/releases{?page,size,sort}",
      "templated" : true
    },
    "packageMetadata" : {
      "href" : "http://<skipper_url>/api/packageMetadata{?page,size,sort,projection}",
      "templated" : true
    },
    "about" : {
      "href" : "http://<skipper_url>/api/about"
    },
    "release" : {
      "href" : "http://<skipper_url>/api/release"
    },
    "package" : {
      "href" : "http://<skipper_url>/api/package"
    },
    "profile" : {
      "href" : "http://<skipper_url>/api/profile"
    }
  }
}
Kubernetes deployment YAML:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: skipper-server-network-policy
spec:
  podSelector:
    matchLabels:
      app: skipper-server
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              gkp_namespace: ingress-nginx
  egress:
    - {}
  policyTypes:
    - Ingress
    - Egress
---
apiVersion: v1
kind: Secret
metadata:
  name: poc-secret
data:
  .dockerconfigjson: ewogICJhdXRocyI6
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: skipper-server
  labels:
    app: skipper-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: skipper-server
  template:
    metadata:
      labels:
        app: skipper-server
      annotations:
        kubernetes.io/psp: nonroot
    spec:
      containers:
        - name: skipper-server
          image: <image_path>
          imagePullPolicy: Always
          ports:
            - containerPort: 7577
              protocol: TCP
          resources:
            limits:
              cpu: "4"
              memory: 2Gi
            requests:
              cpu: 25m
              memory: 1Gi
          securityContext:
            runAsUser: 99
      imagePullSecrets:
        - name: poc-secret
      serviceAccount: spark
      serviceAccountName: spark
---
apiVersion: v1
kind: Service
metadata:
  name: skipper-server
  labels:
    app: skipper-server
spec:
  ports:
    - port: 80
      targetPort: 7577
      protocol: TCP
      name: http
  selector:
    app: skipper-server
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: skipper-server
  annotations:
    ingress.kubernetes.io/ssl-passthrough: "true"
    ingress.kubernetes.io/secure-backends: "true"
    kubernetes.io/ingress.allow.http: true
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
    - host: "<skipper_url>"
      http:
        paths:
          - path: /
            backend:
              serviceName: skipper-server
              servicePort: 80
  tls:
    - hosts:
        - "<skipper_url>"
Skipper application.properties:
spring.datasource.url=jdbc:h2:mem:testdb
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
spring.server.use-forward-headers=true
The root cause was the Skipper /api endpoint returning http URLs for the /deployer, and the Kubernetes ingress trying to redirect and getting blocked with a 308 error. Adding the properties below to the Skipper environment fixed the issue.
DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: skipper-server
spec:
  containers:
    env:
      - name: "server.tomcat.internal-proxies"
        value: ".*"
      - name: "server.use-forward-headers"
        value: "true"
INGRESS
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: skipper-server
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
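Why server.use-forward-headers helps: behind a TLS-terminating ingress, the backend sees plain HTTP, so any absolute links it generates default to http:// unless it honors the X-Forwarded-Proto header set by the proxy. A minimal illustration of that link-building logic (a generic sketch, not Skipper's actual code):

```typescript
// Sketch of proxy-aware absolute-URL building (generic, not Skipper's code).
function baseUrl(host: string, headers: Record<string, string>, honorForwarded: boolean): string {
  // Behind an ingress that terminates TLS, the backend connection is plain
  // HTTP; the original scheme only survives in X-Forwarded-Proto.
  const scheme = honorForwarded && headers["x-forwarded-proto"]
    ? headers["x-forwarded-proto"]
    : "http";
  return `${scheme}://${host}`;
}

const headers = { "x-forwarded-proto": "https" };

console.log(baseUrl("skipper.example.com", headers, false)); // "http://skipper.example.com"  (the bug)
console.log(baseUrl("skipper.example.com", headers, true));  // "https://skipper.example.com" (with use-forward-headers)
```

With forwarded headers honored, the HAL links under /api come back as https and the nginx force-ssl redirect loop disappears.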