purpose of second 'spec' - kubernetes

I am a bit confused by this Kubernetes YAML file. The first 'spec' says there need to be 3 replicas at all times.
What about the second 'spec' just above 'containers'? Also, what does the template section do?
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-js
  labels:
    name: node-js
spec:
  replicas: 3
  selector:
    name: node-js
  template:
    metadata:
      labels:
        name: node-js
    spec:
      containers:
      - name: node-js
        image: jonbaier/node-express-info:latest
        ports:
        - containerPort: 80

The first spec holds the settings of the ReplicationController itself.
The "template" block, and the "spec" inside it, is the pod template: the configuration of the pod that your replication controller will run:
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-js
  labels:
    name: node-js
spec: # settings of the ReplicationController itself
  replicas: 3 # run 3 replicas of the pod at all times
  selector:
    name: node-js
  template: # the pod template section
    metadata:
      labels:
        name: node-js
    spec: # settings of the containers you want to run
      containers:
      - name: node-js
        image: jonbaier/node-express-info:latest
        ports:
        - containerPort: 80
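A quick way to see the controller doing its job, assuming the manifest above is saved as node-js-rc.yaml (a hypothetical filename):
kubectl apply -f node-js-rc.yaml
kubectl get pods -l name=node-js      # the controller keeps 3 node-js pods running
kubectl delete pod <one-of-the-pods>  # it is immediately replaced to satisfy replicas: 3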

Related

Kubernetes StatefulSet error - serviceName environment variable doesn't exist

I'm supposed to create a StatefulSet with a Headless Service, but when I create the Headless Service and the StatefulSet, only one pod gets made, and it has Error status. I get this error when trying to use kubectl logs:
serviceName environment variable doesn't exist! Fix your specification.
Here is my code:
apiVersion: v1
kind: Service
metadata:
  name: svc-hl-xyz
spec:
  clusterIP: None
  selector:
    app: svc-hl-xyz
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-xyz
spec:
  replicas: 3
  serviceName: "svc-hl-xyz"
  selector:
    matchLabels:
      app: svc-hl-xyz
  template:
    metadata:
      labels:
        app: svc-hl-xyz
    spec:
      containers:
      - name: ctr-sts-xyz
        image: XXX/XXX/XXX
        command: ["XXX", "XXX","XXX"]
My specification seems to follow the Kubernetes documentation for StatefulSet so I'm not sure why it doesn't work. All I can think of is that the command or the image I'm trying to use is causing this somehow.
The container logs (serviceName environment variable doesn't exist! Fix your specification.) tell you that the serviceName environment variable is missing.
Add it to the container spec in your StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-xyz
spec:
  replicas: 3
  serviceName: "svc-hl-xyz"
  selector:
    matchLabels:
      app: svc-hl-xyz
  template:
    metadata:
      labels:
        app: svc-hl-xyz
    spec:
      containers:
      - name: ctr-sts-xyz
        image: quay.io/myafk/interactive:stable
        command: ["interactive", "workloads","-t=first"]
        env:
        - name: serviceName
          value: svc-hl-xyz
More information about environment variables on Pods can be found in the docs.
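Once the variable is added, you can check that all three pods start and that the error is gone, assuming the StatefulSet above is saved as sts-xyz.yaml (a hypothetical filename) and applied to the same namespace as the headless Service:
kubectl apply -f sts-xyz.yaml
kubectl get pods -l app=svc-hl-xyz   # expect sts-xyz-0, sts-xyz-1, sts-xyz-2 in Running state
kubectl logs sts-xyz-0               # the serviceName error should no longer appear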

Kubernetes two pods communication ( One is Beanstalkd and another is worker )

I am working on Kubernetes and creating two pods using Deployments:
The first Deployment's pods run a beanstalkd container.
The second one runs a worker on php7/nginx and contains the application codebase.
I am getting this exception:
"user_name":"anonymous","message":"exception 'Pheanstalk_Exception_ConnectionException' with message 'Socket error 0: php_network_getaddresses: getaddrinfo failed: Try again (connecting to test-beanstalkd:11300)' in /var/www/html/vendor/pda/pheanstalk/classes/Pheanstalk/Socket/NativeSocket.php:35\nStack trace:\n#0 "
How do they communicate with each other?
My beanstalkd.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: test-beanstalkd
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-beanstalkd
  template:
    metadata:
      labels:
        app: test-beanstalkd
    spec:
      containers:
      # The beanstalkd queue
      - image: schickling/beanstalkd
        name: test-beanstalkd
        args:
        - -p
        - "11300"
        - -z
        - "1073741824"
---
apiVersion: v1
kind: Service
metadata:
  name: test-beanstalkd-svc
  namespace: test
  labels:
    run: test-beanstalkd
spec:
  ports:
  - port: 11300
    protocol: TCP
  selector:
    app: test-beanstalkd
  type: NodePort
Below is our worker.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: test-worker
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-worker
  template:
    metadata:
      labels:
        app: test-worker
    spec:
      volumes:
      # Create the shared files volume to be used in both pods
      - name: shared-files
        emptyDir: {}
      containers:
      # Our PHP-FPM application
      - image: test-worker:master
        name: worker
        env:
        - name: beanstalkd_host
          value: "test-beanstalkd"
        volumeMounts:
        - name: nginx-config-volume
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
---
apiVersion: v1
kind: Service
metadata:
  name: test-worker-svc
  namespace: test
  labels:
    run: test-worker
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: worker
  type: NodePort
The mistake is that in the env of test-worker the beanstalkd_host variable needs to be set to test-beanstalkd-svc, because that is the name of the Service.
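A minimal sketch of the corrected env block in worker.yaml (only the value changes; the rest of the Deployment stays as posted):
        env:
        - name: beanstalkd_host
          value: "test-beanstalkd-svc"  # must match metadata.name of the Service, not the Deployment
Since both Deployments run in the test namespace, the short Service name resolves; from another namespace it would be test-beanstalkd-svc.test.svc.cluster.local.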

kubernetes set env variable

My requirement: inside the pod there is a file at /mnt/secrets-store/environment.
In the Kubernetes manifest I would like to set an environment variable whose value is the content of that flat file.
Please share your thoughts on how to achieve that.
I have tried the option below in the k8s YAML file, but it is not working:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-api
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample-api
  template:
    metadata:
      labels:
        app: sample-api
        aadpodidbinding: azure-pod-identity-binding-selector
    spec:
      containers:
      - name: sample-api
        image: sample.azurecr.io/sample:11129
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        env:
        - name: "ASPNETCORE_ENVIRONMENT"
          value: "Kubernetes"
        - name: NRIA_DISPLAY_NAME
          value: $("/usr/bin/cat" "/mnt/secrets-store/environment")

Error in creating Deployment YAML on kubernetes spec.template.spec.containers[1].image: Required value

I have created an EC2 instance and installed EKS on it. Then I created a cluster and installed a docker image on it.
Now I'm trying to deploy this image to a container using the YAML below, and I'm getting this error:
Error in creating Deployment YAML on kubernetes
spec.template.spec.containers[1].image: Required value
spec.template.spec.containers[2].image: Required value
I can see the image in docker on the EC2 instance.
My YAML is like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: premiumservice
  labels:
    app: premium-service
  namespace:
  annotations:
    monitoring: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: premium-service
  template:
    metadata:
      labels:
        app: premium-service
    spec:
      containers:
      - image: "mp3-image1:latest"
        name: premiumservice
        ports:
        - containerPort: 80
          env:
          - name: type1
            value: "xyz"
          - name: type2
            value: "abc"
The deployment YAML has an indentation problem near the env section and should look like below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: premiumservice
  labels:
    app: premium-service
  namespace:
  annotations:
    monitoring: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: premium-service
  template:
    metadata:
      labels:
        app: premium-service
    spec:
      containers:
      - image: mp3-image1:latest
        name: premiumservice
        ports:
        - containerPort: 80
        env:
        - name: type1
          value: "xyz"
        - name: type2
          value: "abc"
This may be totally unrelated, but I had the same issue with a k8s deployment file that had variable substitution in the image but the env variable it was referencing wasn't defined.
...
spec:
  containers:
  - name: indexing-queue
    image: ${K8S_IMAGE} #<--- here
Basically, this error means Kubernetes can't find or understand the image you've set.
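If you rely on that kind of substitution, the manifest has to be rendered before it reaches kubectl. One common approach, assuming envsubst (from gettext) is installed and the file is named deployment.yaml (both assumptions), is:
export K8S_IMAGE=registry.example.com/indexing-queue:1.2.3   # hypothetical image reference
envsubst < deployment.yaml | kubectl apply -f -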

imagePullSecrets not working with Kind deployment

I'm trying to create a Deployment with 3 replicas, which will pull an image from a private registry. I have stored the credentials in a Secret and am using imagePullSecrets in the deployment file. I'm getting the error below when I deploy it.
error: error validating "private-reg-pod.yaml": error validating data: [ValidationError(Deployment.spec): unknown field "containers" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): unknown field "imagePullSecrets" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): missing required field "template" in io.k8s.api.apps.v1.DeploymentSpec]; if you choose to ignore these errors, turn validation off with --validate=false
Any help on this?
Below is my deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-pod-deployment
  labels:
    app: test-pod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-pod
  template:
    metadata:
      labels:
        app: test-pod
    spec:
      containers:
      - name: test-pod
        image: <private-registry>
      imagePullSecrets:
      - name: regcred
Thanks,
Sundar
The image section should be placed in the container specification, and imagePullSecrets should be placed in the pod spec section, so the proper YAML file looks like this (please note the indentation):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-pod-deployment
  labels:
    app: test-pod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-pod
  template:
    metadata:
      labels:
        app: test-pod
    spec:
      containers:
      - name: test-pod
        image: <private-registry>
      imagePullSecrets:
      - name: regcred
This is a very common issue with Kubernetes Deployments.
The valid format for pulling an image from a private repository in your Kubernetes Deployment file is:
spec:
  imagePullSecrets:
  - name: <your secret name>
  containers:
Please make sure you have created the secret, then try to make it like below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-pod-deployment
  labels:
    app: test-pod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-pod
  template:
    metadata:
      labels:
        app: test-pod
    spec:
      containers:
      - name: test-pod
        image: nginx
      imagePullSecrets:
      - name: regcred
Both #Jakub-Bujny and #itmaven are correct. Indentation is really important when creating and using a .yaml (or .yml) file, because the file is parsed based on that indentation. So both of these are correct:
1)
spec:
  imagePullSecrets:
  - name: regcred
  containers:
  - name: test-pod
    image:
2)
spec:
  containers:
  - name: test-pod
    image: <private-registry>
  imagePullSecrets:
  - name: regcred
Note: before you use imagePullSecrets you have to create the secret using the following command:
kubectl create secret docker-registry <private-registry> --docker-server=<cluster_CA_domain>:[some port] --docker-username=<user_name> --docker-password=<user_password> --docker-email=<user_email>
Also check whether the secret was created successfully using the following command:
kubectl get secret
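If you want to confirm what was stored (assuming the secret is named regcred, as above), you can inspect and decode it:
kubectl get secret regcred --output=yaml
kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode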