Kubernetes CronJob doesn't use env variables

I have a CronJob in Kubernetes that does not use the URLs from its env variables when calling another API to fetch the information it needs; it behaves as if it were still using the URLs from the console application project's appsettings/launchsettings.
When I executed the CronJob, it returned an error like: "Connection refused (20.210.70.20:80)"
My CronJob:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: productintegration-cronjob
spec:
  schedule: "0 3 * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: productintegration-cronjob
              image: reconhece.azurecr.io/ms-product-integration:9399
              command:
                - /bin/sh
                - -c
                - echo dotnet - $(which dotnet);
                  echo Running Product Integration;
                  /usr/bin/dotnet /app/MsProductIntegration.dll
              env:
                - name: DatabaseProducts
                  value: "http://catalog-api:8097/api/Product/hash/{0}/{1}"
                - name: DatabaseCategory
                  value: "http://catalog-api:8097/api/Category"
My catalog-api Deployment, which the CronJob needs to reach:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog-api-deployment
  labels:
    app: catalog-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: catalog-api
  template:
    metadata:
      labels:
        app: catalog-api
    spec:
      containers:
        - name: catalog-api
          image: test.azurecr.io/ms-catalog-api:6973
          ports:
            - containerPort: 80
          env:
            - name: DatabaseSettings__ConnectionString
              value: "String Connection" # actual value removed
            - name: DatabaseSettings__DatabaseName
              value: "DbCatalog"
On minikube everything works fine.
How do I fix this error?
I already changed the port on my catalog-api, but without success.
I also tried changing the name of the env variable, but without success.
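For those URLs to resolve inside the cluster, a Service named catalog-api has to exist and map port 8097 to the container's port 80. The question doesn't show one, so the following is only a minimal sketch of what it would need to look like:

apiVersion: v1
kind: Service
metadata:
  name: catalog-api
spec:
  selector:
    app: catalog-api
  ports:
    - port: 8097      # port the CronJob's URLs use
      targetPort: 80  # containerPort of catalog-api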

Related

K3s no longer recognizes Liveness and Readiness Probes in Deployment

I am redeploying a K3s deployment from a few months ago. At the time it worked fine, with no problems. However, when I try to deploy it now, I get the following error:
Error from server (BadRequest): error when creating "deployment.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.template.spec.livenessProbe", unknown field "spec.template.spec.readinessProbe"
My .yaml for the deployment is unchanged, and looks like this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vei-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: server-pod
  template:
    metadata:
      labels:
        app: server-pod
    spec:
      containers:
        - name: server-pod
          image: myname/mydeployment:latest
          env:
            - name: AWS_ACCESS_KEY_ID
              value: $AWS_ACCESS_KEY_ID
            - name: AWS_SECRET_ACCESS_KEY
              value: $AWS_SECRET_ACCESS_KEY
          ports:
            - name: grpc
              containerPort: 50051
      livenessProbe:
        exec:
          command:
            - grpcurl
            - -plaintext
            - localhost:50051
            - ping.Pinger/Ping
      readinessProbe:
        exec:
          command:
            - grpc_health_probe
            - -addr=:50051
I have linted the .yaml file, and there doesn't seem to be any problem on that end. Has the syntax for liveness and readiness probes changed drastically over the past few months?
The probes were missing an indent: they needed to be indented four more spaces so that they were defined as part of the container. Instead of as in the question, the .yaml needs to look like this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vei-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: server-pod
  template:
    metadata:
      labels:
        app: server-pod
    spec:
      containers:
        - name: server-pod
          image: myname/mydeployment:latest
          env:
            - name: AWS_ACCESS_KEY_ID
              value: $AWS_ACCESS_KEY_ID
            - name: AWS_SECRET_ACCESS_KEY
              value: $AWS_SECRET_ACCESS_KEY
          ports:
            - name: grpc
              containerPort: 50051
          livenessProbe:
            exec:
              command:
                - grpcurl
                - -plaintext
                - localhost:50051
                - ping.Pinger/Ping
          readinessProbe:
            exec:
              command:
                - grpc_health_probe
                - -addr=:50051
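As an aside, this kind of mis-nesting can be caught before deploying with server-side validation, e.g. kubectl apply --dry-run=server -f deployment.yaml, which surfaces the same unknown-field errors without creating anything.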

kubernetes set env variable

My requirement: inside the pod there is a file at
location: /mnt/secrets-store/environment
In the Kubernetes manifest file I would like to set an environment variable whose value is the contents of that flat file.
Please share your thoughts on how to achieve that.
I have tried the option below in the k8s YAML file, but it is not working:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-api
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample-api
  template:
    metadata:
      labels:
        app: sample-api
        aadpodidbinding: azure-pod-identity-binding-selector
    spec:
      containers:
        - name: sample-api
          image: sample.azurecr.io/sample:11129
          imagePullPolicy: Always
          ports:
            - containerPort: 80
          env:
            - name: "ASPNETCORE_ENVIRONMENT"
              value: "Kubernetes"
            - name: NRIA_DISPLAY_NAME
              value: $("/usr/bin/cat" "/mnt/secrets-store/environment")
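Kubernetes does not execute shell commands inside env values, so the $(...) above is passed to the container as a literal string. A common workaround is to read the file in the container's entrypoint before starting the application. A minimal sketch, assuming the image has /bin/sh and that the app is started with dotnet /app/Sample.dll (both are assumptions, neither is shown in the question):

containers:
  - name: sample-api
    image: sample.azurecr.io/sample:11129
    command: ["/bin/sh", "-c"]
    args:
      # read the flat file into the variable, then exec the real entrypoint
      - export NRIA_DISPLAY_NAME="$(cat /mnt/secrets-store/environment)" && exec dotnet /app/Sample.dll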

Automated way to create multiple kubernetes Job manifests

Cron template
kind: CronJob
metadata:
  name: some-example
  namespace: some-example
spec:
  schedule: "* 12 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: some-example
              image: gcr.io/some-example/some-example
              imagePullPolicy: Always
              env:
                - name: REPO_URL
                  value: https://example.com/12/some-example
I need to create multiple Job files with different REPO_URL values (over a hundred of them, saved in a file). I am looking for a solution where I can keep one Job template and pull the required key/value pairs from another file.
So far I've tried https://kustomize.io/, https://ballerina.io/, and https://github.com/mikefarah/yq, but I was not able to find a good example that fits this scenario.
That would be pretty trivial with yq and a shell script. Assuming your template is in cronjob.yml, we can write something like this:
let count=0
while read url; do
  yq -y '
    .metadata.name = "some-example-'"$count"'" |
    .spec.jobTemplate.spec.template.spec.containers[0].env[0].value = "'"$url"'"
  ' cronjob.yml
  echo '---'
  let count++
done < list_of_urls.txt | kubectl apply -f-
E.g., if my list_of_urls.txt contains:
https://google.com
https://stackoverflow.com
The above script will produce:
[...]
metadata:
  name: some-example-0
  namespace: some-example
spec:
  [...]
  env:
    - name: REPO_URL
      value: https://google.com
---
[...]
metadata:
  name: some-example-1
  namespace: some-example
spec:
  [...]
  env:
    - name: REPO_URL
      value: https://stackoverflow.com
You can drop the | kubectl apply -f- if you just want to see the output instead of actually creating resources.
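Note: the script above uses the Python yq (a jq wrapper, hence the -y flag for YAML output). The Go-based yq linked in the question (https://github.com/mikefarah/yq) uses a different expression syntax, so these commands won't work with it unchanged.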
Or, for a more structured approach, we could use Ansible's k8s module:
- hosts: localhost
  gather_facts: false
  tasks:
    - k8s:
        state: present
        definition:
          apiVersion: batch/v1beta1
          kind: CronJob
          metadata:
            name: "some-example-{{ count }}"
            namespace: some-example
          spec:
            schedule: "* 12 * * *"
            jobTemplate:
              spec:
                template:
                  spec:
                    containers:
                      - name: some-example
                        image: gcr.io/some-example/some-example
                        imagePullPolicy: Always
                        env:
                          - name: REPO_URL
                            value: "{{ item }}"
      loop:
        - https://google.com
        - https://stackoverflow.com
      loop_control:
        index_var: count
Assuming that the above is stored in playbook.yml, running this with
ansible-playbook playbook.yml would create the same resources as the
earlier shell script.
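One caveat for current clusters: the batch/v1beta1 CronJob API used above was deprecated in Kubernetes 1.21 and removed in 1.25, so on recent versions the definition needs apiVersion: batch/v1 instead.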

Getting ValidationError(Deployment): unknown field "env"

I'm trying to set up a manifest specifying the ASPNETCORE_ENVIRONMENT env variable:
kubectl version 1.18.10
kind: Deployment
apiVersion: apps/v1
metadata:
  name: $(appName)
  labels:
    app: $(appName)
spec:
  replicas: 2
  selector:
    matchLabels:
      app: $(appName)
  template:
    metadata:
      labels:
        app: $(appName)
    spec:
      containers:
        - name: $(appName)
          image: xxx.azurecr.io/xxx:$(Build.BuildId)
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          env:
            - name: ASPNETCORE_ENVIRONMENT
              value: Staging
However, it's not working; validation tells me:
##[error]error: error validating "/home/vsts/work/_temp/Deployment_xxx6_1610541107518": error validating data: ValidationError(Deployment): unknown field "env" in io.k8s.api.apps.v1.Deployment; if you choose to ignore these errors, turn validation off with --validate=false
I'm using Azure Devops Release Pipeline Powershell Task "Generate Kubernetes Manifest file".
OK, it must have been wrong formatting or indentation.
The following file is working now:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: $(appName)
  labels:
    app: $(appName)
spec:
  replicas: 2
  selector:
    matchLabels:
      app: $(appName)
  template:
    metadata:
      labels:
        app: $(appName)
    spec:
      containers:
        - name: $(appName)
          image: xxx.azurecr.io/xxx:$(Build.BuildId)
          ports:
            - containerPort: 80
          env:
            - name: ASPNETCORE_ENVIRONMENT
              value: Staging

Kubernetes: Is there a way to retrieve or inject local env vars into configmap.yaml? [duplicate]

I am setting up Kubernetes for a Django web app.
I am passing an environment variable while creating the deployment, as below:
kubectl create -f deployment.yml -l key1=value1
I am getting the error below:
error: no objects passed to create
I am able to create the deployment successfully if I remove the env variable flag -l key1=value1 when creating the deployment.
deployment.yaml is as below:
#Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: sigma-service
  name: $key1
What is the reason for the above error when creating the deployment?
I used envsubst (https://www.gnu.org/software/gettext/manual/html_node/envsubst-Invocation.html) for this. Create a deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $NAME
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
Then:
export NAME=my-test-nginx
envsubst < deployment.yaml | kubectl apply -f -
Not sure what OS you are using to run this. On macOS, envsubst is installed like this:
brew install gettext
brew link --force gettext
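Be aware that, with no arguments, envsubst replaces every $VARIABLE it finds, which can mangle manifests that contain literal dollar signs; you can limit it to specific variables, e.g. envsubst '$NAME' < deployment.yaml | kubectl apply -f -.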
This isn't the right way to use the deployment; you can't provide half the details in YAML and half in kubectl commands. If you want to pass environment variables to your deployment, you should add those details under spec.template.spec.
You should add the following block to your deployment.yaml:
spec:
  containers:
    - env:
        - name: var1
          value: val1
This will export your environment variables inside the container.
The other way to set an environment variable is to use kubectl run (not advisable, as it is going to be deprecated very soon). You can use the following command:
kubectl run nginx --image=nginx --restart=Always --replicas=1 --env=var1=val1
The above command will create a deployment nginx with 1 replica and the environment variable var1=val1.
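Note that kubectl run has since changed: as of kubectl 1.18 the generators that created Deployments were deprecated and later removed, along with flags like --replicas, so on current versions kubectl run only creates a single Pod. The rough equivalent today would be kubectl create deployment nginx --image=nginx followed by kubectl set env deployment/nginx var1=val1.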
You cannot pass variables to kubectl create -f. YAML files should be complete manifests without variables. Also, you cannot use the -l flag with kubectl create -f.
If you want to pass environment variables to a pod, you should do it like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          env:
            - name: MY_VAR
              value: MY_VALUE
          ports:
            - containerPort: 80
Read more here: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
Follow the steps below.
Create test-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: MYAPP
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
Using the sed command, you can update the deployment name at deploy time:
sed -e 's|MYAPP|my-nginx|g' test-deploy.yaml | kubectl apply -f -
File: ./deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: MYAPP
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
File: ./service.yaml
apiVersion: v1
kind: Service
metadata:
  name: MYAPP
  labels:
    app: nginx
spec:
  type: ClusterIP
  ports:
    - port: 80
  selector:
    app: nginx
File: ./kustomization.yaml
resources:
  - deployment.yaml
  - service.yaml
If you're using https://kustomize.io/, you can do this trick in a CI:
sh '( echo "images:" ; echo " - name: $IMAGE" ; echo " newTag: $VERSION" ) >> ./kustomization.yaml'
sh "kubectl apply --kustomize ."
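The two echo lines append an images override to kustomization.yaml, which kustomize then applies to every matching image reference in the resources. With, say, IMAGE=xxx.azurecr.io/xxx and VERSION=123 (placeholder values, not from the original answer), the file kubectl applies would end up as:

resources:
  - deployment.yaml
  - service.yaml
images:
  - name: xxx.azurecr.io/xxx
    newTag: "123"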
I chose yq since it is YAML-aware and gives precise control over where text substitutions happen.
To set an image from bash env var:
export IMAGE="your_image:latest"
yq eval '.spec.template.spec.containers[0].image = "'$IMAGE'"' manifests/daemonset.yaml | kubectl apply -f -
yq is available on MacPorts (as of 19/04/21 v4.4.1) with sudo port install yq
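With yq v4 you can also skip the shell quote juggling and read the variable inside the expression itself, e.g. yq eval '.spec.template.spec.containers[0].image = strenv(IMAGE)' manifests/daemonset.yaml (strenv requires the variable to be exported, as above).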
I was facing the same problem. I created a Python script to change simple or complex values in a YAML file, or add new ones.
This became very handy in a situation similar to the one you describe. Also, moving to the Python domain allows for more complex scenarios.
The code and usage instructions are available in this gist:
https://gist.github.com/washraf/f81153270c80b0b4ecf90a53872abde7
Please try the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: kdpd00201
  name: frontend
  labels:
    app: nginx
spec:
  replicas: 6
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: frontend
          image: ifccncf/nginx:1.14.2
          ports:
            - containerPort: 8001
          env:
            - name: NGINX_PORT
              value: "8001"
My solution is then:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: frontend
  name: frontend
  namespace: kdpd00201
spec:
  replicas: 4
  selector:
    matchLabels:
      app: frontend
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: frontend
    spec:
      containers:
        - env: # modified level
            - name: NGINX_PORT
              value: "8080"
          image: lfccncf/nginx:1.13.7
          name: nginx
          ports:
            - containerPort: 8080