Failure to create a Job in Kubernetes due to a parsing error

When creating a Job in Kubernetes 1.6, the following error occurs:
Error from server (BadRequest): error when creating "job.yaml":
Job in version "v1" cannot be handled as a Job: [pos 217]:
json: expect char '"' but got char '1'
The job.yaml in question is:
apiVersion: batch/v1
kind: Job
metadata:
  name: sysbench-oltp
spec:
  template:
    metadata:
      name: sysbench-oltp
    spec:
      containers:
        - name: sysbench-oltp
          image: sysbench-oltp:1.0
          env:
            - name: OLTP_TABLE_SIZE
              value: 10000
            - name: DB_NAME
              value: "test"
            - name: DB_USER
              value: "test_user"
Variations on the API version do not seem to matter at all. Does anybody have any idea what the problem is?

Found the solution:
The JSON parser returns a rather unrelated error on a piece of the data in the environment variables:
env:
  - name: OLTP_TABLE_SIZE
    value: 10000
Should read:
env:
  - name: OLTP_TABLE_SIZE
    value: "10000"
After which all the parsing works as it should.
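If you want to catch this kind of problem before anything reaches the cluster, a client-side dry run should flag the unquoted number as well (a minimal sketch; assumes a reasonably recent kubectl that supports --dry-run=client):
# validates and decodes job.yaml locally without creating the Job
kubectl apply --dry-run=client -f job.yaml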

Related

Kubernetes YAML file with if-else condition

I have a YAML file which I use to deploy my application in all environments. I want to add some JVM args only for the test environment. Is there any way I can do it in the YAML file?
Here is the YAML:
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80
    - name: rss-reader
      image: nickchase/rss-php-nginx:v1
      ports:
        - containerPort: 88
      env:
        - name: JAVA_OPTS
          value: "
            -Dlog4j.configurationFile=log4j2.xml
            -Denable.scan=true
            "
Here I want -Denable.scan=true to be conditional, added only for the test environment.
I tried the following, but it is not working and Kubernetes throws this error: error converting YAML to JSON: yaml: line 53: did not find expected key
What I tried:
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80
    - name: rss-reader
      image: nickchase/rss-php-nginx:v1
      ports:
        - containerPort: 88
      env:
        - name: JAVA_OPTS
          value: "
            -Dlog4j.configurationFile=log4j2.xml
            ${{if eq "TEST" "TEST" }} # just sample condition , it will change
            -Denable.scan=true
            ${{end }}
            "
Helm will do that. In fact, the syntax is almost identical to what you've written, and would be something like this:
env:
  - name: JAVA_OPTS
    value: "
      -Dlog4j.configurationFile=log4j2.xml
      {{- if eq .Values.profile "TEST" }}
      -Denable.scan=true
      {{- end }}
      "
You then declare via the install package (called a Chart) which profile you want to use, i.e. you set the .Values.profile value.
You can check out https://helm.sh/ for details and examples.
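For instance (a minimal sketch; the chart path and values file are placeholders, and the chart's values.yaml is assumed to declare a default such as profile: PROD), the TEST-only flag can then be switched on per install:
# override .Values.profile for a test deployment
helm install rss-site ./rss-site-chart --set profile=TEST
# or keep the override in a file, e.g. values-test.yaml containing "profile: TEST"
helm install rss-site ./rss-site-chart -f values-test.yaml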

Automated way to create multiple Kubernetes Job manifests

Cron template
kind: CronJob
metadata:
  name: some-example
  namespace: some-example
spec:
  schedule: "* 12 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: some-example
              image: gcr.io/some-example/some-example
              imagePullPolicy: Always
              env:
                - name: REPO_URL
                  value: https://example.com/12/some-example
I need to create multiple Job manifests with different REPO_URL values, hundreds of them, saved in a file. I am looking for a solution where I can define a Job template and pull the required key/value pairs from another file.
So far I've tried https://kustomize.io/, https://ballerina.io/, and https://github.com/mikefarah/yq, but I am not able to find a good example that fits this scenario.
That would be pretty trivial with yq and a shell script (note that the syntax below is for the Python yq wrapper around jq, which supports -y for YAML output; it differs from the mikefarah/yq mentioned in the question). Assuming your template is in cronjob.yml, we can write something like this:
let count=0
while read url; do
  yq -y '
    .metadata.name = "some-example-'"$count"'" |
    .spec.jobTemplate.spec.template.spec.containers[0].env[0].value = "'"$url"'"
  ' cronjob.yml
  echo '---'
  let count++
done < list_of_urls.txt | kubectl apply -f-
E.g., if my list_of_urls.txt contains:
https://google.com
https://stackoverflow.com
The above script will produce:
[...]
metadata:
  name: some-example-0
  namespace: some-example
spec:
  [...]
              env:
                - name: REPO_URL
                  value: https://google.com
---
[...]
metadata:
  name: some-example-1
  namespace: some-example
spec:
  [...]
              env:
                - name: REPO_URL
                  value: https://stackoverflow.com
You can drop the | kubectl apply -f- if you just want to see the
output instead of actually creating resources.
Or, for a more structured approach, we could use Ansible's k8s module:
- hosts: localhost
  gather_facts: false
  tasks:
    - k8s:
        state: present
        definition:
          apiVersion: batch/v1beta1
          kind: CronJob
          metadata:
            name: "some-example-{{ count }}"
            namespace: some-example
          spec:
            schedule: "* 12 * * *"
            jobTemplate:
              spec:
                template:
                  spec:
                    containers:
                      - name: some-example
                        image: gcr.io/some-example/some-example
                        imagePullPolicy: Always
                        env:
                          - name: REPO_URL
                            value: "{{ item }}"
      loop:
        - https://google.com
        - https://stackoverflow.com
      loop_control:
        index_var: count
Assuming that the above is stored in playbook.yml, running this with
ansible-playbook playbook.yml would create the same resources as the
earlier shell script.
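If the URLs already live in a file, as in the question, the hardcoded loop could presumably be replaced with a file lookup, e.g. loop: "{{ lookup('file', 'list_of_urls.txt').splitlines() }}" (assuming the file sits next to the playbook).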

Using generateName for a YAML file to be installed by Helm

I have an upload.yaml file which uploads a script to Mongo; I package it with Helm.
apiVersion: batch/v1
kind: Job
metadata:
  generateName: upload-strategy-to-mongo-v2
spec:
  parallelism: 1
  completions: 1
  template:
    metadata:
      name: upload-strategy-to-mongo
    spec:
      volumes:
        - name: upload-strategy-to-mongo-scripts-volume
          configMap:
            name: upload-strategy-to-mongo-scripts-v3
      containers:
        - name: upload-strategy-to-mongo
          image: mongo
          env:
            - name: MONGODB_URI
              value: ####
            - name: MONGODB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-user
                  key: ####
            - name: MONGODB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-user
                  key: #####
          volumeMounts:
            - mountPath: /scripts
              name: upload-strategy-to-mongo-scripts-volume
          command: ["mongo"]
          args:
            - $(MONGODB_URI)/ravnml
            - --username
            - $(MONGODB_USERNAME)
            - --password
            - $(MONGODB_PASSWORD)
            - --authenticationDatabase
            - admin
            - /scripts/upload.js
      restartPolicy: Never
---
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: upload-strategy-to-mongo-scripts-v3
data:
  upload.js: |
    // Read the object from file and parse it
    var data = cat('/scripts/strategy.json');
    var obj = JSON.parse(data);
    // Upsert strategy
    print(db.strategy.find());
    db.strategy.replaceOne(
      { name : obj.name },
      obj,
      { upsert: true }
    )
    print(db.strategy.find());
  strategy.json: {{ .Files.Get "strategy.json" | quote }}
Now I am using generateName to generate a custom name every time I install it. I need to have multiple packages installed, so the name has to be dynamic.
Error
When I install this script with helm install <name> <tar.gz file> -n <namespace> I get the following error
Error: rendered manifests contain a resource that already exists. Unable to continue with install: could not get information about the resource: resource name may not be empty
but I am able to install it if I don't use generateName. Any ideas?
I looked at various resources, but they don't seem to answer how to install this via Helm.
References I looked at:
Add random string on Kubernetes pod deployment name https://github.com/kubernetes/kubernetes/issues/44501 ;
https://zknill.io/posts/kubernetes-generated-names/
This seems to be a known issue: Helm doesn't work with generateName. For unique names, you can use Helm's built-in properties such as .Release.Name or .Release.Revision. See the following issue for reference:
https://github.com/helm/helm/issues/3348#issuecomment-482369133
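As an illustration (a sketch; the chart archive name and namespace are placeholders), if the template sets name: upload-strategy-to-mongo-{{ .Release.Name }} instead of using generateName, every install gets a unique Job name derived from its release name:
# two installs of the same chart, each rendering a distinctly named Job
helm install alpha ./upload-strategy-chart.tgz -n mynamespace
helm install beta ./upload-strategy-chart.tgz -n mynamespace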

applying task in k8s pod

I'm trying to run kubectl -f pod.yaml but getting this error. Any hint?
error: error validating "/pod.yaml": error validating data: [ValidationError(Pod): unknown field "imagePullSecrets" in io.k8s.api.core.v1.Pod, ValidationError(Pod): unknown field "nodeSelector" in io.k8s.api.core.v1.Pod, ValidationError(Pod): unknown field "tasks" in io.k8s.api.core.v1.Pod]; if you choose to ignore these errors, turn validation off with --validate=false
pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod-10.0.1
  namespace: e6a5089f-8e9e-4647-abe3-b8d775079565
spec:
  containers:
    - name: main
      image: bded587f4604
imagePullSecrets: ["testo", "awsecr-cred"]
nodeSelector:
  kubernetes.io/hostname: 11-4730
tasks:
  - name: traind
    command: et estimate -e v/lat/exent_sps/enet/default_sql.spec.txt -r /out
    completions: 1
    inputs:
      datasets:
        - name: poa
          version: 2018-
          mountPath: /in/0
You have an indentation error in your pod.yaml definition with imagePullSecrets, and you need to specify a - name: for each imagePullSecrets entry. It should be something like this:
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test-test-pod-10.0.1.11-e8b74730
  namespace: test-e6a5089f-8e9e-4647-abe3-b8d775079565
spec:
  containers:
    - name: main
      image: test.io/tets/maglev-test-bded587f4604
  imagePullSecrets:
    - name: testawsecr-cred
...
Note that imagePullSecrets: is plural and an array, so you can specify credentials for multiple registries.
If you are using Docker, you can also specify multiple credentials in ~/.docker/config.json.
If you have the same credentials in imagePullSecrets: and in ~/.docker/config.json, the credentials are merged.
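If the referenced pull secrets don't exist in the namespace yet, they can be created along these lines (a sketch; server, username, and password are placeholders):
kubectl create secret docker-registry awsecr-cred \
  --docker-server=<registry-url> \
  --docker-username=<user> \
  --docker-password=<password> \
  --namespace=test-e6a5089f-8e9e-4647-abe3-b8d775079565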

Kubernetes not accepting new job definition

I'm running jobs on EKS. After I tried to start a job with invalid YAML, Kubernetes doesn't seem to let go of the bad YAML and keeps giving me the same error message even after I correct the file.
I successfully ran a job.
I added an environment variable with a boolean value in the env section, which raised this error:
Error from server (BadRequest): error when creating "k8s/jobs/create_csv.yaml": Job in version "v1" cannot be handled as a Job: v1.Job: Spec: v1.JobSpec: Template: v1.PodTemplateSpec: Spec: v1.PodSpec: Containers: []v1.Container: v1.Container: Env: []v1.EnvVar: v1.EnvVar: Value: ReadString: expects " or n, but found t, error found in #10 byte of ...|,"value":true},{"nam|..., bigger context ...|oduction"},{"name":"RAILS_LOG_TO_STDOUT","value":true},{"name":"AWS_REGION","value":"us-east-1"},{"n|...
I changed the value to be a string yes, but the error message continues to show the original, bad yaml.
No jobs show up in kubectl get jobs --all-namespaces
So I don't know where this old yaml would be hiding.
I thought this might be because I didn't have imagePullPolicy set to Always, but it happens even if I run the kubectl command locally.
Below is my job definition file:
apiVersion: batch/v1
kind: Job
metadata:
  generateName: create-csv-
  labels:
    transformer: AR
spec:
  template:
    spec:
      containers:
        - name: create-csv
          image: my-image:latest
          imagePullPolicy: Always
          command: ["bin/rails", "create_csv"]
          env:
            - name: RAILS_ENV
              value: production
            - name: RAILS_LOG_TO_STDOUT
              value: yes
            - name: AWS_REGION
              value: us-east-1
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: aws
                  key: aws_access_key_id
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: aws
                  key: aws_secret_access_key
      restartPolicy: OnFailure
  backoffLimit: 6
"yes" must be quoted in yaml or it gets treated as a keyword that means a boolean true
Try this:
value: "yes"
Single quotes didn't work for me, but the below did:
value: "'true'"