Getting ValidationError(Deployment): unknown field "env" - kubernetes

I'm trying to set up a manifest that specifies the ASPNETCORE_ENVIRONMENT environment variable (kubectl version 1.18.10):
kind: Deployment
apiVersion: apps/v1
metadata:
  name: $(appName)
  labels:
    app: $(appName)
spec:
  replicas: 2
  selector:
    matchLabels:
      app: $(appName)
  template:
    metadata:
      labels:
        app: $(appName)
    spec:
      containers:
      - name: $(appName)
        image: xxx.azurecr.io/xxx:$(Build.BuildId)
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
env:
- name: ASPNETCORE_ENVIRONMENT
  value: Staging
However, it's not working; validation tells me:
##[error]error: error validating "/home/vsts/work/_temp/Deployment_xxx6_1610541107518": error validating data: ValidationError(Deployment): unknown field "env" in io.k8s.api.apps.v1.Deployment; if you choose to ignore these errors, turn validation off with --validate=false
I'm using the Azure DevOps Release Pipeline PowerShell task "Generate Kubernetes Manifest file".

OK, it must have been wrong formatting or indentation.
The following file is working now:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: $(appName)
  labels:
    app: $(appName)
spec:
  replicas: 2
  selector:
    matchLabels:
      app: $(appName)
  template:
    metadata:
      labels:
        app: $(appName)
    spec:
      containers:
      - name: $(appName)
        image: xxx.azurecr.io/xxx:$(Build.BuildId)
        ports:
        - containerPort: 80
        env:
        - name: ASPNETCORE_ENVIRONMENT
          value: Staging
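Judging by the error message ("unknown field env in io.k8s.api.apps.v1.Deployment"), the broken version most likely had the env block at the top level of the document. The fix is purely positional: env must be a sibling of the container's image and ports, inside the container list entry. A minimal sketch (the container name and image here are illustrative):

```yaml
containers:
- name: app            # illustrative name
  image: example/app   # illustrative image
  ports:
  - containerPort: 80
  env:                 # env belongs here, inside the container entry
  - name: ASPNETCORE_ENVIRONMENT
    value: Staging
```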

Related

error: error parsing db.yaml: error converting YAML to JSON: yaml: line 19: did not find expected '-' indicator

kubectl apply -f db.yaml
error: error parsing db.yaml: error converting YAML to JSON: yaml: line 19: did not find expected '-' indicator
Could you please give me a hint about what's wrong in the YAML file?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-db
spec:
  selector:
    matchLabels:
      app: my-db
  replicas: 1
  template:
    metadata:
      labels:
        app: my-db
    spec:
      containers:
      - name: app
        image: postgres:my-postgres
        ports:
        - containerPort: 5432
        deployment.spec.template.spec.containers.env:
        - name: POSTGRES_HOST_AUTH_METHOD
          value: trust
deployment.spec.template.spec.containers.env is the path in the manifest (try following the field names deployment -> spec -> ...; you can see that in the manifest), not a literal key. So instead of using that, you should simply use env. The correct manifest is below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-db
spec:
  selector:
    matchLabels:
      app: my-db
  replicas: 1
  template:
    metadata:
      labels:
        app: my-db
    spec:
      containers:
      - name: app
        image: postgres:my-postgres
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_HOST_AUTH_METHOD
          value: trust

k8s `selector` does not match template `labels`

I am pulling my hair out here. I deployed my template, deleted it, and then I go to deploy it again without any changes and am getting the following error:
The Deployment "blog" is invalid: spec.template.metadata.labels: Invalid value: map[string]string(nil): selector does not match template labels
My deployment yaml is below and as you can see the metadata and selector labels are both web, so I have no idea what the error is trying to tell me:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog
  labels:
    app: web
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
  replicas: 1
  template:
    spec:
      containers:
      - env:
        image: test_blog:latest
        imagePullPolicy: Always
        name: blog
        ports:
        - containerPort: 8080
You have two template blocks. I think that's the problem. Try this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog
  labels:
    app: web
spec:
  selector:
    matchLabels:
      app: web
  replicas: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - env:
        image: test_blog:latest
        imagePullPolicy: Always
        name: blog
        ports:
        - containerPort: 8080
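The reason the error talks about nil labels even though labels clearly appear in the file: when a mapping key is repeated, typical YAML parsers silently keep only the last value, so the first template block (the one carrying metadata.labels) is discarded. A small sketch of the same duplicate-key behavior, using Python's stdlib json parser as a stand-in:

```python
import json

# When a key appears twice in an object, json.loads keeps only the LAST value.
# Many YAML parsers behave the same way, which is how the first "template"
# (the one carrying metadata.labels) silently disappears.
doc = json.loads(
    '{"template": {"metadata": {"labels": {"app": "web"}}},'
    ' "replicas": 1,'
    ' "template": {"spec": {"containers": []}}}'
)
print(doc["template"])  # {'spec': {'containers': []}} -- no metadata.labels left
```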

Kubernetes two pods communication ( One is Beanstalkd and another is worker )

I am working on Kubernetes, creating two pods using deployments:
The first deployment's pod runs a beanstalkd container.
The second runs a worker on PHP 7/nginx and has the application codebase.
I am getting this exception:
"user_name":"anonymous","message":"exception 'Pheanstalk_Exception_ConnectionException' with message 'Socket error 0: php_network_getaddresses: getaddrinfo failed: Try again (connecting to test-beanstalkd:11300)' in /var/www/html/vendor/pda/pheanstalk/classes/Pheanstalk/Socket/NativeSocket.php:35\nStack trace:\n#0 "
How do they communicate with each other?
My beanstalkd.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: test-beanstalkd
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-beanstalkd
  template:
    metadata:
      labels:
        app: test-beanstalkd
    spec:
      containers:
      # Our PHP-FPM application
      - image: schickling/beanstalkd
        name: test-beanstalkd
        args:
        - -p
        - "11300"
        - -z
        - "1073741824"
---
apiVersion: v1
kind: Service
metadata:
  name: test-beanstalkd-svc
  namespace: test
  labels:
    run: test-beanstalkd
spec:
  ports:
  - port: 11300
    protocol: TCP
  selector:
    app: test-beanstalkd
  type: NodePort
Below is our worker.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: test-worker
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-worker
  template:
    metadata:
      labels:
        app: test-worker
    spec:
      volumes:
      # Create the shared files volume to be used in both pods
      - name: shared-files
        emptyDir: {}
      containers:
      # Our PHP-FPM application
      - image: test-worker:master
        name: worker
        env:
        - name: beanstalkd_host
          value: "test-beanstalkd"
        volumeMounts:
        - name: nginx-config-volume
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
---
apiVersion: v1
kind: Service
metadata:
  name: test-worker-svc
  namespace: test
  labels:
    run: test-worker
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: worker
  type: NodePort
The mistake is that in the env of test-worker, the beanstalkd_host variable needs to be set to test-beanstalkd-svc, because that is the name of the Service.
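Concretely, the worker's env entry would then read (pods resolve Service names, not Deployment names, via cluster DNS):

```yaml
env:
- name: beanstalkd_host
  value: "test-beanstalkd-svc"   # the Service name, not the Deployment name
```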

kubernetes set env variable

My requirement: inside the pod there is a file at location /mnt/secrets-store/environment.
In the Kubernetes manifest file I would like to set an environment variable whose value contains the content of that flat file.
Please share your thoughts on how to achieve that.
I have tried the below option in the k8s YAML file, but it is not working:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-api
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample-api
  template:
    metadata:
      labels:
        app: sample-api
        aadpodidbinding: azure-pod-identity-binding-selector
    spec:
      containers:
      - name: sample-api
        image: sample.azurecr.io/sample:11129
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        env:
        - name: "ASPNETCORE_ENVIRONMENT"
          value: "Kubernetes"
        - name: NRIA_DISPLAY_NAME
          value: $("/usr/bin/cat" "/mnt/secrets-store/environment")
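For reference, Kubernetes does not execute command substitutions inside an env value; the value field is a plain string, so $("/usr/bin/cat" ...) is passed through literally. A common workaround is to read the file in the container's entrypoint instead, sketched below (the /app/start command is hypothetical; substitute your image's real start command):

```yaml
command: ["/bin/sh", "-c"]
args:
- export NRIA_DISPLAY_NAME="$(cat /mnt/secrets-store/environment)" && exec /app/start
```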

Error in creating Deployment YAML on kubernetes spec.template.spec.containers[1].image: Required value

I have created an EC2 instance and installed EKS on it. Then I created a cluster and installed a Docker image on it.
Now I'm trying to deploy this image to a container using the given YAML, and I'm getting an error:
Error in creating Deployment YAML on kubernetes
spec.template.spec.containers[1].image: Required value
spec.template.spec.containers[2].image: Required value
I can see the image in Docker on the EC2 instance.
My YAML is like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: premiumservice
  labels:
    app: premium-service
  namespace:
  annotations:
    monitoring: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: premium-service
  template:
    metadata:
      labels:
        app: premium-service
    spec:
      containers:
      - image: "mp3-image1:latest"
        name: premiumservice
        ports:
        - containerPort: 80
        env:
      - name: type1
        value: "xyz"
      - name: type2
        value: "abc"
The deployment YAML has an indentation problem near the env section and should look like below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: premiumservice
  labels:
    app: premium-service
  namespace:
  annotations:
    monitoring: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: premium-service
  template:
    metadata:
      labels:
        app: premium-service
    spec:
      containers:
      - image: mp3-image1:latest
        name: premiumservice
        ports:
        - containerPort: 80
        env:
        - name: type1
          value: "xyz"
        - name: type2
          value: "abc"
This may be totally unrelated, but I had the same issue with a k8s deployment file that used variable substitution in the image while the env variable it referenced wasn't defined.
...
spec:
  containers:
  - name: indexing-queue
    image: ${K8S_IMAGE} #<--- here
Basically, this error means kubectl can't find/understand the image you've set.
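A quick way to catch this class of mistake before kubectl ever sees the file is to render the template yourself and check for leftover placeholders. A sketch using Python's stdlib string.Template (the real pipeline may use envsubst or similar; this is just an illustration of the check):

```python
from string import Template

manifest = "image: ${K8S_IMAGE}"

# safe_substitute leaves undefined placeholders in place instead of raising,
# so unresolved variables can be detected before the manifest is applied.
rendered = Template(manifest).safe_substitute({})  # K8S_IMAGE was never defined
unresolved = "${" in rendered
print(rendered)     # image: ${K8S_IMAGE}
print(unresolved)   # True
```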