How to install Selenium Grid 4 in Kubernetes? - kubernetes

I want to install Selenium Grid 4 in Kubernetes. I am new to this. Could anyone share Helm charts, manifests, installation steps, or anything similar? I could not find anything.
Thanks.

You can find the Selenium hub image on Docker Hub at https://hub.docker.com/layers/selenium/hub/4.0.0-alpha-6-20200730
YAML example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-hub
spec:
  selector:
    matchLabels:
      app: selenium-hub
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: selenium-hub
    spec:
      containers:
      - name: selenium-hub
        image: selenium/hub:3.141.59-20200515
        resources:
          limits:
            memory: "1000Mi"
            cpu: "500m"
        ports:
        - containerPort: 4444
        livenessProbe:
          httpGet:
            path: /wd/hub/status
            port: 4444
          initialDelaySeconds: 30
          timeoutSeconds: 5
You can read more at https://www.swtestacademy.com/selenium-kubernetes-scalable-parallel-tests/
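Note that the Deployment above only creates the hub; to reach it from browser nodes and test clients you also need a Service. A minimal sketch (the Service name and NodePort value here are illustrative assumptions, not from the original answer):
apiVersion: v1
kind: Service
metadata:
  name: selenium-hub-svc      # illustrative name
spec:
  type: NodePort
  selector:
    app: selenium-hub         # matches the pod label in the Deployment above
  ports:
  - port: 4444
    targetPort: 4444
    nodePort: 30044           # assumed free NodePort in the 30000-32767 range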

I have found a tutorial for setting up Selenium Grid in a Kubernetes cluster. Here you can find the examples:
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-hub
spec:
  selector:
    matchLabels:
      app: selenium-hub
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: selenium-hub
    spec:
      containers:
      - name: selenium-hub
        image: selenium/hub:4.0.0
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 4444
        livenessProbe:
          httpGet:
            path: /wd/hub/status
            port: 4444
          initialDelaySeconds: 30
          timeoutSeconds: 5
pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: selenium-hub
  labels:
    name: hub
spec:
  containers:
  - name: selenium-hub
    image: selenium/hub:3.141.59-20200326
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
    ports:
    - containerPort: 4444
    livenessProbe:
      httpGet:
        path: /wd/hub/status
        port: 4444
      initialDelaySeconds: 30
      timeoutSeconds: 5
replication_controller.yaml:
apiVersion: v1
kind: ReplicationController
metadata:
  name: selenium-rep
spec:
  replicas: 2
  selector:
    app: selenium-chrome
  template:
    metadata:
      name: selenium-chrome
      labels:
        app: selenium-chrome
    spec:
      containers:
      - name: node-chrome
        image: selenium/node-chrome
        ports:
        - containerPort: 5555
        env:
        - name: HUB_HOST
          value: "selenium-srv"
        - name: HUB_PORT
          value: "4444"
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: selenium-srv
  labels:
    app: selenium-srv
spec:
  selector:
    app: selenium-hub
  ports:
  - port: 4444
    nodePort: 30001
  type: NodePort
This tutorial is also recorded on YouTube, where there is a playlist with a couple of episodes related to Selenium Grid on Kubernetes.
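If you want to try the manifests above, applying them and hitting the NodePort should be enough; a rough sketch (file names as above, the node IP depends on your cluster):
kubectl apply -f deployment.yaml
kubectl apply -f replication_controller.yaml
kubectl apply -f service.yaml
# The grid status should then be reachable on any node via the NodePort, e.g.
# http://<node-ip>:30001/wd/hub/status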

It might be late for an answer, but there is now a Helm chart for selenium-hub. Just posting the link in case someone stumbles upon the same issue. Thank you for the contributions.
Selenium-hub helm chart
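A rough sketch of installing it (the repository URL and chart name below are assumed from the docker-selenium project and should be verified against the chart's README):
helm repo add docker-selenium https://www.selenium.dev/docker-selenium
helm repo update
helm install selenium-grid docker-selenium/selenium-grid --namespace selenium --create-namespace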

Related

GKE "no.scale.down.node.pod.not.enough.pdb" log even with existing PDB

My GKE cluster is displaying a "Scale down blocked by pod" note, and clicking it and then going to the Logs Explorer shows a filtered view with log entries for the pods that had the incident: no.scale.down.node.pod.not.enough.pdb. That's really strange, since the pods in those log entries do have a PDB defined for them. So it seems to me that GKE is wrongly reporting the cause of the blocked node scale-down. These are the manifests for one of the pods with this issue:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ms-new-api-beta
  name: ms-new-api-beta
  namespace: beta
spec:
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 8000
  selector:
    app: ms-new-api-beta
  type: NodePort
The Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ms-new-api-beta
  name: ms-new-api-beta
  namespace: beta
spec:
  selector:
    matchLabels:
      app: ms-new-api-beta
  template:
    metadata:
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: 'true'
      labels:
        app: ms-new-api-beta
    spec:
      containers:
      - command:
        - /deploy/venv/bin/gunicorn
        - '--bind'
        - '0.0.0.0:8000'
        - 'newapi.app:app'
        - '--chdir'
        - /deploy/app
        - '--timeout'
        - '7200'
        - '--workers'
        - '1'
        - '--worker-class'
        - uvicorn.workers.UvicornWorker
        - '--log-level'
        - DEBUG
        env:
        - name: ENV
          value: BETA
        image: >-
          gcr.io/.../api:${trigger['tag']}
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /rest
            port: 8000
            scheme: HTTP
          initialDelaySeconds: 120
          periodSeconds: 20
          timeoutSeconds: 30
        name: ms-new-api-beta
        ports:
        - containerPort: 8000
          name: http
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /rest
            port: 8000
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 2
        resources:
          limits:
            cpu: 150m
          requests:
            cpu: 100m
        startupProbe:
          failureThreshold: 30
          httpGet:
            path: /rest
            port: 8000
          periodSeconds: 120
      imagePullSecrets:
      - name: gcp-docker-registry
The Horizontal Pod Autoscaler:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: ms-new-api-beta
  namespace: beta
spec:
  maxReplicas: 5
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ms-new-api-beta
  targetCPUUtilizationPercentage: 100
And finally, the Pod Disruption Budget:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: ms-new-api-beta
  namespace: beta
spec:
  minAvailable: 0
  selector:
    matchLabels:
      app: ms-new-api-beta
no.scale.down.node.pod.not.enough.pdb is not complaining about the lack of a PDB. It is complaining that, if the pod is scaled down, it will be in violation of the existing PDB(s).
The "budget" is how much disruption the Pod can permit. The platform will not take any intentional action which violates that budget.
There may be another PDB in place that would be violated. To check, make sure to review pdbs in the pod's namespace:
kubectl get pdb
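You can also list budgets across all namespaces, which shows how many disruptions each one currently allows:
kubectl get pdb --all-namespaces
As an illustrative sketch (not the poster's manifest), a budget expressed with maxUnavailable always leaves the autoscaler room to evict at least one pod:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: ms-new-api-beta-pdb    # hypothetical name, for illustration only
  namespace: beta
spec:
  maxUnavailable: 1            # always allows at least one pod to be disrupted
  selector:
    matchLabels:
      app: ms-new-api-beta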

Kubernetes: Cannot connect to service when using named targetPort

Here's my config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      pod: 338f54d2-8f89-4602-a848-efcbcb63233f
  template:
    metadata:
      labels:
        pod: 338f54d2-8f89-4602-a848-efcbcb63233f
        svc: app
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: server
        image: server
        ports:
        - name: http-port
          containerPort: 3000
        resources:
          limits:
            memory: 128Mi
          requests:
            memory: 36Mi
        envFrom:
        - secretRef:
            name: db-env
        - secretRef:
            name: oauth-env
        startupProbe:
          httpGet:
            port: http
            path: /
          initialDelaySeconds: 1
          periodSeconds: 1
          failureThreshold: 10
        livenessProbe:
          httpGet:
            port: http
            path: /
          periodSeconds: 15
---
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    pod: 338f54d2-8f89-4602-a848-efcbcb63233f
  ports:
  - port: 80
    targetPort: http-port
When I try that I can't connect to my site. When I change targetPort: http-port back to targetPort: 3000 it works fine. I thought the point of naming my port was so that I could use it in the targetPort. Does it not work with deployments?
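For comparison, a minimal sketch of how a named port is meant to line up (all names here are illustrative, not from the question): the string in the Service's targetPort must match the name given to the containerPort, and probe ports can reference the same name.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment              # illustrative
spec:
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: server
        image: nginx                 # placeholder image
        ports:
        - name: http-port            # the name referenced below
          containerPort: 3000
        livenessProbe:
          httpGet:
            port: http-port          # probes may reference the port by its name
            path: /
---
apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: http-port            # resolved per pod to the containerPort named http-port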

I can't get my pods to communicate using a service

When I try using a service to read from my backend (written in ASP.NET Core) in my frontend (in Angular), I get the errors shown in screenshot 1 in the browser console, and the frontend doesn't get the information from the API pod.
I have two Kubernetes Deployments, each with its own Service; they are created by this YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dotnet-angular-deployment
spec:
  selector:
    matchLabels:
      app: dotnet-angular-pod
  replicas: 1
  template:
    metadata:
      labels:
        app: dotnet-angular-pod
        run: dotnet-angular-pod
    spec:
      containers:
      - name: dotnet-angular-container
        image: dotnet-angular
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: dotnet-angular-service
  labels:
    run: dotnet-angular-pod
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 32000
  selector:
    app: dotnet-angular-pod
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dotnetangularapi-deployment
spec:
  selector:
    matchLabels:
      app: dotnetangularapi-pod
  replicas: 2
  template:
    metadata:
      labels:
        app: dotnetangularapi-pod
        run: dotnetangularapi-pod
    spec:
      containers:
      - name: dotnetangularapi-container
        image: dotnetangularapi
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
          protocol: TCP
        env:
        - name: ASPNETCORE_URLS
          value: http://+:80
---
apiVersion: v1
kind: Service
metadata:
  name: dotnetangularapi-service
  labels:
    run: dotnetangularapi-pod
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 31000
  selector:
    app: dotnetangularapi-pod
  type: NodePort
---
In my Angular app, when I call the backend I write http://dotnetangularapi-service/demo; the controller I want to access is DemoController.cs, hence the /demo.
I can't understand why the browser reports ERR_NAME_NOT_RESOLVED, and I can't even understand what the second error means.
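Worth noting: a Service name like dotnetangularapi-service only resolves through the cluster's internal DNS, so it works for pod-to-pod calls but not from a browser outside the cluster; from outside, a node IP plus the NodePort (31000 above) is the reachable address. A rough sketch for comparing the two paths (assuming curl is available in the images):
# From inside the cluster (e.g. from the Angular pod) the Service DNS name resolves:
kubectl exec deploy/dotnet-angular-deployment -- curl -s http://dotnetangularapi-service/demo
# From a machine outside the cluster (e.g. the browser host), use a node IP and the NodePort:
curl -s http://<node-ip>:31000/demo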

Deployment in version "v1" cannot be handled as a Deployment:

helm install is failing with the error below.
Command:
helm install --name helloworld helm
This is the error after running the command above:
Error: release usagemetrics failed: Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.LivenessProbe: readObjectStart: expect { or n, but found 9, error found in #10 byte of ...|ssProbe":9001,"name"|..., bigger context ...|"imagePullPolicy":"IfNotPresent","livenessProbe":9001,"name":"usagemetrics-helm","ports":[{"containe|...
Below is the deployment.yaml file; I think the issue is in the livenessProbe and readinessProbe configuration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: release-name-helm
      release: release-name
  template:
    metadata:
      labels:
        app: release-name-helm
        release: release-name
    spec:
      containers:
      - name: release-name-helm
        imagePullPolicy: IfNotPresent
        image: hellworld
        ports:
        - name: "http"
          containerPort: 9001
        envFrom:
        - configMapRef:
            name: release-name-helm
        - secretRef:
            name: release-name-helm
        livenessProbe:
          9001
        readinessProbe:
          9001
The problem seems to be the livenessProbe and readinessProbe, which are both wrong.
An example of an HTTP livenessProbe from the documentation is:
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    httpHeaders:
    - name: Custom-Header
      value: Awesome
  initialDelaySeconds: 3
  periodSeconds: 3
If you only want to check the port, your YAML should look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: release-name-helm
      release: release-name
  template:
    metadata:
      labels:
        app: release-name-helm
        release: release-name
    spec:
      containers:
      - name: release-name-helm
        imagePullPolicy: IfNotPresent
        image: hellworld
        ports:
        - name: "http"
          containerPort: 9001
        envFrom:
        - configMapRef:
            name: release-name-helm
        - secretRef:
            name: release-name-helm
        livenessProbe:
          tcpSocket:
            port: 9001
          initialDelaySeconds: 5
          periodSeconds: 10
        readinessProbe:
          tcpSocket:
            port: 9001
          initialDelaySeconds: 5
          periodSeconds: 10
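As a general way to catch this kind of templating mistake before installing, the chart can be rendered and validated locally; a rough sketch, assuming the chart directory ./helm from the install command above (the dry-run flag syntax depends on your kubectl version):
helm lint ./helm
helm template ./helm > rendered.yaml
kubectl apply --dry-run=client -f rendered.yaml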

Accessing kubernetes headless service over ambassador

I have deployed my service as a headless service, following the Kubernetes configuration mentioned in this link (http://vertx.io/docs/vertx-hazelcast/java/#_using_this_cluster_manager). My service is load balanced and proxied using Ambassador. Everything was working fine as long as the service was not headless. Once the service was changed to headless, Ambassador could no longer discover my services: it was looking for a clusterIP, which is missing now that the services are headless. What do I need to include in my deployment.yaml so these services are discovered by Ambassador?
The error I see is "upstream connect error or disconnect/reset before headers. reset reason: connection failure".
I need these services to be headless because that is the only way to create a cluster using Hazelcast, and I am creating a WebSocket connection and a Vert.x event bus.
apiVersion: v1
kind: Service
metadata:
  name: abt-login-service
  labels:
    chart: "abt-login-service-0.1.0-SNAPSHOT"
  annotations:
    fabric8.io/expose: "true"
    fabric8.io/ingress.annotations: 'kubernetes.io/ingress.class: nginx'
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      name: login_mapping
      ambassador_id: default
      kind: Mapping
      prefix: /login/
      service: abt-login-service.default.svc.cluster.local
      use_websocket: true
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: RELEASE-NAME-abt-login-service
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  - name: hz-port-name
    port: 5701
    protocol: TCP
Deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: RELEASE-NAME-abt-login-service
  labels:
    draft: draft-app
    chart: "abt-login-service-0.1.0-SNAPSHOT"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: RELEASE-NAME-abt-login-service
  minReadySeconds: 30
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        draft: draft-app
        app: RELEASE-NAME-abt-login-service
        component: abt-login-service
    spec:
      serviceAccountName: vault-auth
      containers:
      - name: abt-login-service
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: "dev"
        - name: _JAVA_OPTIONS
          value: "-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2 -Dsun.zip.disableMemoryMapping=true -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -Dhazelcast.diagnostics.enabled=true"
        image: "draft:dev"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        ports:
        - containerPort: 5701
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: 500m
            memory: 1024Mi
          requests:
            cpu: 400m
            memory: 512Mi
      terminationGracePeriodSeconds: 10
How can I make these services discoverable by ambassador?
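One commonly used pattern (a hedged sketch, not from this thread) is to keep the headless Service for Hazelcast member discovery and add a second, regular ClusterIP Service that carries the Ambassador Mapping, so Ambassador has a clusterIP-backed endpoint to route to; the manifest below is illustrative only:
apiVersion: v1
kind: Service
metadata:
  name: abt-login-service-http         # additional Service; the name is illustrative
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: login_mapping
      ambassador_id: default
      prefix: /login/
      service: abt-login-service-http.default.svc.cluster.local
      use_websocket: true
spec:
  type: ClusterIP                       # regular (non-headless) Service for Ambassador
  selector:
    app: RELEASE-NAME-abt-login-service
  ports:
  - name: http
    port: 80
    targetPort: 8080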