Deployment in version "v1" cannot be handled as a Deployment: - kubernetes

helm install is failing with the error below.
Command:
helm install --name helloworld helm
Below is the error after running the above command:
Error: release usagemetrics failed: Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.LivenessProbe: readObjectStart: expect { or n, but found 9, error found in #10 byte of ...|ssProbe":9001,"name"|..., bigger context ...|"imagePullPolicy":"IfNotPresent","livenessProbe":9001,"name":"usagemetrics-helm","ports":[{"containe|...
Below is the deployment.yaml file; I think the issue is in the livenessProbe and readinessProbe configuration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: release-name-helm
      release: release-name
  template:
    metadata:
      labels:
        app: release-name-helm
        release: release-name
    spec:
      containers:
        - name: release-name-helm
          imagePullPolicy: IfNotPresent
          image: hellworld
          ports:
            - name: "http"
              containerPort: 9001
          envFrom:
            - configMapRef:
                name: release-name-helm
            - secretRef:
                name: release-name-helm
          livenessProbe:
            9001
          readinessProbe:
            9001

The problem seems to be related to the livenessProbe and readinessProbe, which are both wrong.
An example of an HTTP livenessProbe from the documentation is:
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    httpHeaders:
      - name: Custom-Header
        value: Awesome
  initialDelaySeconds: 3
  periodSeconds: 3
If you only want to check the port, your YAML should look like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: release-name-helm
      release: release-name
  template:
    metadata:
      labels:
        app: release-name-helm
        release: release-name
    spec:
      containers:
        - name: release-name-helm
          imagePullPolicy: IfNotPresent
          image: hellworld
          ports:
            - name: "http"
              containerPort: 9001
          envFrom:
            - configMapRef:
                name: release-name-helm
            - secretRef:
                name: release-name-helm
          livenessProbe:
            tcpSocket:
              port: 9001
            initialDelaySeconds: 5
            periodSeconds: 10
          readinessProbe:
            tcpSocket:
              port: 9001
            initialDelaySeconds: 5
            periodSeconds: 10
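As a sanity check before re-installing, you can render and lint the chart locally; this will surface the integer-vs-object mistake in the probes. The chart path below is just the helm directory from the install command in the question.
# Render the templates locally and confirm livenessProbe/readinessProbe are now objects, not integers
helm template helm
# Lint the chart for structural problems before re-running helm install
helm lint helm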

Related

Kubernetes: Cannot connect to service when using named targetPort

Here's my config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      pod: 338f54d2-8f89-4602-a848-efcbcb63233f
  template:
    metadata:
      labels:
        pod: 338f54d2-8f89-4602-a848-efcbcb63233f
        svc: app
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: server
          image: server
          ports:
            - name: http-port
              containerPort: 3000
          resources:
            limits:
              memory: 128Mi
            requests:
              memory: 36Mi
          envFrom:
            - secretRef:
                name: db-env
            - secretRef:
                name: oauth-env
          startupProbe:
            httpGet:
              port: http
              path: /
            initialDelaySeconds: 1
            periodSeconds: 1
            failureThreshold: 10
          livenessProbe:
            httpGet:
              port: http
              path: /
            periodSeconds: 15
---
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    pod: 338f54d2-8f89-4602-a848-efcbcb63233f
  ports:
    - port: 80
      targetPort: http-port
When I try that I can't connect to my site. When I change targetPort: http-port back to targetPort: 3000 it works fine. I thought the point of naming my port was so that I could use it in the targetPort. Does it not work with deployments?
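Named targetPorts do work with Deployments; the Service's targetPort string has to match the name declared under the container's ports (http-port here). If it still fails, the commands below, using the names from the config above, show whether the Service actually resolved any endpoints:
# If the named port resolves, the endpoints should list each pod IP on port 3000
kubectl get endpoints app-service
# Inspect how the port name was recorded on the running pods
kubectl get pods -l pod=338f54d2-8f89-4602-a848-efcbcb63233f -o jsonpath='{.items[*].spec.containers[*].ports}'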

How to install Selenium Grid 4 in Kubernetes?

I want to install Selenium Grid 4 in Kubernetes. I am new to this. Could anyone share Helm charts, manifests, installation steps, or anything? I could not find anything.
Thanks.
You can find the Selenium hub image on Docker Hub at: https://hub.docker.com/layers/selenium/hub/4.0.0-alpha-6-20200730
YAML example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-hub
spec:
  selector:
    matchLabels:
      app: selenium-hub
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: selenium-hub
    spec:
      containers:
        - name: selenium-hub
          image: selenium/hub:3.141.59-20200515
          resources:
            limits:
              memory: "1000Mi"
              cpu: "500m"
          ports:
            - containerPort: 4444
          livenessProbe:
            httpGet:
              path: /wd/hub/status
              port: 4444
            initialDelaySeconds: 30
            timeoutSeconds: 5
You can read more at: https://www.swtestacademy.com/selenium-kubernetes-scalable-parallel-tests/
I have found a tutorial for setting up Selenium Grid in a Kubernetes cluster. Here you can find examples:
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-hub
spec:
  selector:
    matchLabels:
      app: selenium-hub
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: selenium-hub
    spec:
      containers:
        - name: selenium-hub
          image: selenium/hub:4.0.0
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 4444
          livenessProbe:
            httpGet:
              path: /wd/hub/status
              port: 4444
            initialDelaySeconds: 30
            timeoutSeconds: 5
pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: selenium-hub
  labels:
    name: hub
spec:
  containers:
    - name: selenium-hub
      image: selenium/hub:3.141.59-20200326
      resources:
        limits:
          memory: "128Mi"
          cpu: "500m"
      ports:
        - containerPort: 4444
      livenessProbe:
        httpGet:
          path: /wd/hub/status
          port: 4444
        initialDelaySeconds: 30
        timeoutSeconds: 5
replication_controller.yaml:
apiVersion: v1
kind: ReplicationController
metadata:
  name: selenium-rep
spec:
  replicas: 2
  selector:
    app: selenium-chrome
  template:
    metadata:
      name: selenium-chrome
      labels:
        app: selenium-chrome
    spec:
      containers:
        - name: node-chrome
          image: selenium/node-chrome
          ports:
            - containerPort: 5555
          env:
            - name: HUB_HOST
              value: "selenium-srv"
            - name: HUB_PORT
              value: "4444"
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: selenium-srv
  labels:
    app: selenium-srv
spec:
  selector:
    app: selenium-hub
  ports:
    - port: 4444
      nodePort: 30001
  type: NodePort
This tutorial is also recorded on YouTube, where you can find a playlist with a couple of episodes related to Selenium Grid on Kubernetes.
It might be late for an answer, but there is now a selenium-hub Helm chart. Just posting the link in case someone stumbles upon the same issue. Thank you for the contributions.
Selenium-hub helm chart
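For reference, installing from that chart usually comes down to the commands below. The repository URL and chart name are taken from the docker-selenium project and are assumptions here, so verify them against the linked chart before running this.
# Add the docker-selenium chart repository (URL assumed; check the linked chart page)
helm repo add docker-selenium https://www.selenium.dev/docker-selenium
helm repo update
# Install Selenium Grid 4 under a release name of your choice
helm install selenium-grid docker-selenium/selenium-grid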

Readiness probe failing when second pod gets scheduled to the same node

I have a k8s Service which maps to a pod Deployment with 2 replicas and is exposed as a ClusterIP service. I am seeing an issue where, when the 2nd pod gets scheduled to the same node, the readiness probe (an HTTP call to an API on the container port) fails with an "unable to connect" error. Is this due to some port conflict?
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
  namespace: demo
  labels:
    app: demo
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/app-configmap.yaml") . | sha256sum }}
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: demo-app-image:1.0.1
          ports:
            - containerPort: 8081
          livenessProbe:
            httpGet:
              path: /healthcheck
              port: 8081
            initialDelaySeconds: 30
            periodSeconds: 60
            failureThreshold: 3
            successThreshold: 1
            timeoutSeconds: 15
          readinessProbe:
            httpGet:
              path: /healthcheck
              port: 8081
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 3
            successThreshold: 1
            timeoutSeconds: 15
          volumeMounts:
            - name: config-volume
              mountPath: /config/app
      volumes:
        - name: config-volume
          configMap:
            name: demo-configmap
            items:
              - key: config
                path: config.json
      nodeSelector:
        usage: demo-server
Service
apiVersion: v1
kind: Service
metadata:
  name: demo-service
  namespace: demo
  labels:
    app: demo-service
spec:
  selector:
    app: demo
  ports:
    - name: admin-port
      protocol: TCP
      port: 26001
      targetPort: 8081
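A plain containerPort does not normally conflict between pods on the same node; only hostPort or hostNetwork: true would. To see what the probe is actually hitting, a few diagnostic commands using the names from the manifests above (the in-pod request assumes wget exists in the image):
# See which node each replica landed on and which pods are not Ready
kubectl get pods -n demo -o wide
# Read the readiness probe failure events for the unready pod
kubectl describe pod <unready-pod-name> -n demo
# Hit the healthcheck endpoint from inside the pod, bypassing the Service
kubectl exec -n demo <unready-pod-name> -- wget -qO- http://localhost:8081/healthcheck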

Unable to update kubernetes container /etc/hosts file using hostAliases

I am trying to update a Kubernetes container's /etc/hosts file through hostAliases in a Deployment; however, /etc/hosts is not updated even though the deployment is successful.
I am unable to make out what is preventing hostAliases from being applied. Also, please suggest any alternative way to update the /etc/hosts entries of a Kubernetes container.
Here is the deployment.yml file I am using; I appreciate your help.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-services
  namespace: dev
  labels:
    app: test-services
spec:
  replicas: 1
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: test-services
  template:
    metadata:
      labels:
        app: test-services
    spec:
      containers:
        - name: test-services
          image: test-services:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 5
          env:
            - name: JAVA_OPTS
              value: "-Dspring.profiles.active=dev"
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: username
            - name: DB_PASS
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password
      restartPolicy: Always
      hostAliases:
        - ip: "127.0.0.1"
          hostnames:
            - "foo.local"
            - "bar.local"
        - ip: "10.1.2.3"
          hostnames:
            - "foo.remote"
            - "bar.remote"
      imagePullSecrets:
        - name: registry-secret
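A quick way to check whether the aliases actually reached the running container (deployment name and namespace taken from the manifest above; note that hostAliases only applies to pods created after the change, and exec-ing against a Deployment requires a reasonably recent kubectl):
# Show what the running container actually has in /etc/hosts
kubectl exec -n dev deploy/test-services -- cat /etc/hosts
# Confirm the hostAliases field was persisted on the pod spec and not dropped
kubectl get pods -n dev -l app=test-services -o jsonpath='{.items[*].spec.hostAliases}'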

Accessing kubernetes headless service over ambassador

I have deployed my service as a headless service and followed the Kubernetes configuration mentioned in this link (http://vertx.io/docs/vertx-hazelcast/java/#_using_this_cluster_manager). My service is load balanced and proxied using ambassador. Everything was working fine as long as the service was not headless. Once the service was changed to headless, ambassador was no longer able to discover my services: it was looking for a clusterIP, which is now missing because the services are headless. What do I need to include in my deployment.yaml so these services are discovered by ambassador?
The error I see is "upstream connect error or disconnect/reset before headers. reset reason: connection failure".
I need these services to be headless because that is the only way to create a cluster using Hazelcast, and I am creating a WebSocket connection and a Vert.x event bus.
apiVersion: v1
kind: Service
metadata:
  name: abt-login-service
  labels:
    chart: "abt-login-service-0.1.0-SNAPSHOT"
  annotations:
    fabric8.io/expose: "true"
    fabric8.io/ingress.annotations: 'kubernetes.io/ingress.class: nginx'
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      name: login_mapping
      ambassador_id: default
      kind: Mapping
      prefix: /login/
      service: abt-login-service.default.svc.cluster.local
      use_websocket: true
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: RELEASE-NAME-abt-login-service
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
    - name: hz-port-name
      port: 5701
      protocol: TCP
Deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: RELEASE-NAME-abt-login-service
  labels:
    draft: draft-app
    chart: "abt-login-service-0.1.0-SNAPSHOT"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: RELEASE-NAME-abt-login-service
  minReadySeconds: 30
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        draft: draft-app
        app: RELEASE-NAME-abt-login-service
        component: abt-login-service
    spec:
      serviceAccountName: vault-auth
      containers:
        - name: abt-login-service
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: "dev"
            - name: _JAVA_OPTIONS
              value: "-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2 -Dsun.zip.disableMemoryMapping=true -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -Dhazelcast.diagnostics.enabled=true"
          image: "draft:dev"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
            - containerPort: 5701
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            limits:
              cpu: 500m
              memory: 1024Mi
            requests:
              cpu: 400m
              memory: 512Mi
      terminationGracePeriodSeconds: 10
How can I make these services discoverable by ambassador?
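One pattern worth trying, not something confirmed in this thread: keep the headless Service for Hazelcast discovery and add a second, regular ClusterIP Service over the same pods that carries the ambassador Mapping, so Envoy has a cluster IP to route to. The service name below (abt-login-service-ambassador) is made up for this sketch; everything else reuses names from the manifests above.
apiVersion: v1
kind: Service
metadata:
  name: abt-login-service-ambassador   # hypothetical name for the routing-only Service
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: login_mapping
      ambassador_id: default
      prefix: /login/
      service: abt-login-service-ambassador.default.svc.cluster.local
      use_websocket: true
spec:
  type: ClusterIP   # intentionally not headless, so ambassador gets a stable cluster IP
  selector:
    app: RELEASE-NAME-abt-login-service
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP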