I can't get my pods to communicate using a service - Kubernetes

When I try to use a service to read from my backend (written in ASP.NET Core) from my frontend (in Angular), I see the errors in screenshot 1 in the browser console and the frontend doesn't get the information from the API pod.
I have two Kubernetes deployments, each with its own service; they are created by this YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dotnet-angular-deployment
spec:
  selector:
    matchLabels:
      app: dotnet-angular-pod
  replicas: 1
  template:
    metadata:
      labels:
        app: dotnet-angular-pod
        run: dotnet-angular-pod
    spec:
      containers:
      - name: dotnet-angular-container
        image: dotnet-angular
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: dotnet-angular-service
  labels:
    run: dotnet-angular-pod
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 32000
  selector:
    app: dotnet-angular-pod
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dotnetangularapi-deployment
spec:
  selector:
    matchLabels:
      app: dotnetangularapi-pod
  replicas: 2
  template:
    metadata:
      labels:
        app: dotnetangularapi-pod
        run: dotnetangularapi-pod
    spec:
      containers:
      - name: dotnetangularapi-container
        image: dotnetangularapi
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
          protocol: TCP
        env:
        - name: ASPNETCORE_URLS
          value: http://+:80
---
apiVersion: v1
kind: Service
metadata:
  name: dotnetangularapi-service
  labels:
    run: dotnetangularapi-pod
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 31000
  selector:
    app: dotnetangularapi-pod
  type: NodePort
---
In my Angular app, when I call the backend I use http://dotnetangularapi-service/demo; the controller I want to access is DemoController.cs, hence the /demo.
I can't understand why the browser reports ERR_NAME_NOT_RESOLVED, and I can't even understand what the second error means.

Related

Configuring yaml file

I'm learning k8s and I found an example in the MS docs. The problem I'm having is that I want to switch which GitHub repo is being used. I haven't been able to figure out the path within this YAML example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-back
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: azure-vote-back
        image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
        env:
        - name: ALLOW_EMPTY_PASSWORD
          value: "yes"
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-front
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: azure-vote-front
        image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
        env:
        - name: REDIS
          value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front
This YAML example doesn't have a GitHub repo field at all. That's why you can't find a path.
If you're trying to change the container image source, it has to come from a container registry (or your own filesystem), which is specified at
containers:
- image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
where mcr.microsoft.com is the container registry.
You won't be able to connect this directly to a GitHub repository, but any container registry will work, and I believe GitHub has one at https://ghcr.io (that link itself will redirect you back to GitHub).
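For example, if you published your own build of the front-end image to the GitHub Container Registry, you would only need to change the image field in the Deployment. The ghcr.io path below is a hypothetical placeholder for wherever you push the image:
containers:
- name: azure-vote-front
  # hypothetical image published to the GitHub Container Registry (ghcr.io)
  image: ghcr.io/<your-github-user>/azure-vote-front:v1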

How to install Selenium Grid 4 in Kubernetes?

I want to install Selenium Grid 4 in Kubernetes. I am new to this. Could anyone share Helm charts, manifests, installation steps, or anything? I could not find anything.
Thanks.
You can find the Selenium Docker Hub image at: https://hub.docker.com/layers/selenium/hub/4.0.0-alpha-6-20200730
YAML example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-hub
spec:
  selector:
    matchLabels:
      app: selenium-hub
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: selenium-hub
    spec:
      containers:
      - name: selenium-hub
        image: selenium/hub:3.141.59-20200515
        resources:
          limits:
            memory: "1000Mi"
            cpu: "500m"
        ports:
        - containerPort: 4444
        livenessProbe:
          httpGet:
            path: /wd/hub/status
            port: 4444
          initialDelaySeconds: 30
          timeoutSeconds: 5
You can read more at: https://www.swtestacademy.com/selenium-kubernetes-scalable-parallel-tests/
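As a quick sanity check, assuming the Deployment above is saved as selenium-hub.yaml (the file name is an assumption), you could apply it and port-forward to the hub to hit the same status endpoint the liveness probe uses:
kubectl apply -f selenium-hub.yaml
kubectl port-forward deployment/selenium-hub 4444:4444
# in another terminal:
curl http://localhost:4444/wd/hub/status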
I have found a tutorial for setting up Selenium Grid in a Kubernetes cluster. Here you can find examples:
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-hub
spec:
  selector:
    matchLabels:
      app: selenium-hub
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: selenium-hub
    spec:
      containers:
      - name: selenium-hub
        image: selenium/hub:4.0.0
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 4444
        livenessProbe:
          httpGet:
            path: /wd/hub/status
            port: 4444
          initialDelaySeconds: 30
          timeoutSeconds: 5
pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: selenium-hub
  labels:
    name: hub
spec:
  containers:
  - name: selenium-hub
    image: selenium/hub:3.141.59-20200326
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
    ports:
    - containerPort: 4444
    livenessProbe:
      httpGet:
        path: /wd/hub/status
        port: 4444
      initialDelaySeconds: 30
      timeoutSeconds: 5
replication_controller.yaml:
apiVersion: v1
kind: ReplicationController
metadata:
  name: selenium-rep
spec:
  replicas: 2
  selector:
    app: selenium-chrome
  template:
    metadata:
      name: selenium-chrome
      labels:
        app: selenium-chrome
    spec:
      containers:
      - name: node-chrome
        image: selenium/node-chrome
        ports:
        - containerPort: 5555
        env:
        - name: HUB_HOST
          value: "selenium-srv"
        - name: HUB_PORT
          value: "4444"
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: selenium-srv
  labels:
    app: selenium-srv
spec:
  selector:
    app: selenium-hub
  ports:
  - port: 4444
    nodePort: 30001
  type: NodePort
This tutorial is also recorded on YouTube, where you can find a playlist with a couple of episodes related to Selenium Grid on Kubernetes.
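Assuming the manifests above are saved under the file names used in the tutorial, applying and checking them would look roughly like this:
kubectl apply -f deployment.yaml -f replication_controller.yaml -f service.yaml
kubectl get pods -l app=selenium-hub
kubectl get svc selenium-srv   # the hub is then reachable on nodePort 30001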
It might be late for an answer, but now we have a selenium-hub Helm chart. Just posting the link in case someone stumbles upon the same issue. Thank you for the contributions.
Selenium-hub helm chart
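For reference, installing the chart usually looks something like the lines below; the repository URL, chart name, and release name here are assumptions, so check the chart's README for the current values:
helm repo add docker-selenium https://www.selenium.dev/docker-selenium
helm repo update
helm install selenium-grid docker-selenium/selenium-grid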

Kubernetes: Error converting YAML to JSON: yaml: line 12: did not find expected key

I'm trying to add ciao to my single-node Kubernetes cluster, and every time I run the kubectl apply -f command I keep running into the error "error converting YAML to JSON: yaml: line 12: did not find expected key". I looked at the other solutions but they were no help. Any help will be appreciated.
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: v1
kind: Secret
metadata:
  name: ciao
  namespace: monitoring
data:
  BASIC_AUTH_USERNAME: YWRtaW4=
  BASIC_AUTH_PASSWORD: cGFzc3dvcmQ=
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ciao
  namespace: monitoring
spec:
  replicas: 1
  template:
    metadata:
    selector:
      labels:
        app: ciao
    spec:
      containers:
      - image: brotandgames/ciao:latest
        imagePullPolicy: IfNotPresent
        name: ciao
        volumeMounts: # Emit if you do not have persistent volumes
        - mountPath: /app/db/sqlite/
          name: persistent-volume
          subPath: ciao
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: 256Mi
            cpu: 200m
          limits:
            memory: 512Mi
            cpu: 400m
        envFrom:
        - secretRef:
            name: ciao
---
apiVersion: v1
kind: Service
metadata:
  name: ciao
  namespace: monitoring
spec:
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
  type: NodePort
  selector:
    app: ciao
Looks like there's an indentation problem in your Deployment definition. This should work:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ciao
  namespace: monitoring
  labels:
    app: ciao
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ciao
  template:
    metadata:
      labels:
        app: ciao
    spec:
      containers:
      - image: brotandgames/ciao:latest
        imagePullPolicy: IfNotPresent
        name: ciao
        volumeMounts: # Emit if you do not have persistent volumes
        - mountPath: /app/db/sqlite/
          name: persistent-volume
          subPath: ciao
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: 256Mi
            cpu: 200m
          limits:
            memory: 512Mi
            cpu: 400m
        envFrom:
        - secretRef:
            name: ciao
Keep in mind that in this definition the PV persistent-volume needs to exist in your cluster/namespace.
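If you do keep the volume mount, the Deployment also needs a matching volumes entry under the pod spec, typically backed by a PersistentVolumeClaim; the claim name below is a hypothetical placeholder:
      volumes:
      - name: persistent-volume
        persistentVolumeClaim:
          claimName: ciao-pvc # hypothetical PVC that must exist in the monitoring namespace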

How to read & modify Kube Manifest values with yq?

I have a Kube manifest that needs to be applied to a couple of Kubernetes clusters with different resource settings. For that I need to change the resources section of this file on the fly. Here are its contents:
apiVersion: v1
kind: Service
metadata:
  name: abc-api
  labels:
    app: abc-api
spec:
  ports:
  - name: http
    port: 80
    targetPort: 3000
  - name: https
    port: 3000
    targetPort: 3000
  selector:
    app: abc-api
    tier: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: abc-api
  labels:
    app: abc-api
spec:
  selector:
    matchLabels:
      app: abc-api
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: abc-api
        tier: frontend
    spec:
      containers:
      - image: ABC_IMAGE
        resources:
          requests:
            memory: "128Mi"
            cpu: .30
          limits:
            memory: "512Mi"
            cpu: .99
I searched and found that yq is the better tool for this. However, when I read values from this file, it only shows them up to the line with the three dashes: no values past that.
# yq r worker/deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: abc-api
  labels:
    app: abc-api
spec:
  ports:
  - name: http
    port: 80
    targetPort: 3000
  - name: https
    port: 3000
    targetPort: 3000
  selector:
    app: abc-api
    tier: frontend
I want to read the Deployment section, as well as edit the resource values.
Section to read:
---
apiVersion: apps/v1
kind: Deployment
metadata:
....
Section to edit:
resources:
  requests:
    memory: "128Mi"
    cpu: .20
  limits:
    memory: "512Mi"
    cpu: .99
So the first part of the question: how do I read past the second instance of the three dashes?
The second part: how do I edit the resource values on the fly?
I'm able to run this command and read this section, but I can't read the memory or cpu values any further:
# yq r -d1 deployment.yaml "spec.template.spec.containers[0].resources.requests"
memory: "128Mi"
cpu: .20
Use the -d CLI option. See https://mikefarah.gitbook.io/yq/commands/write-update#multiple-documents for more details.
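For example, with yq v3 (the same syntax as the read command above), you could update the second document in place; the values here are only illustrative:
yq w -i -d1 deployment.yaml 'spec.template.spec.containers[0].resources.requests.cpu' 0.30
yq w -i -d1 deployment.yaml 'spec.template.spec.containers[0].resources.limits.memory' 1Gi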
Also, Kubernetes has its own mechanism for this in kubectl patch.
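A rough sketch of the kubectl patch route, using a JSON patch against the abc-api Deployment from the manifest above (the resource value is illustrative):
kubectl patch deployment abc-api --type=json \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/cpu", "value": "300m"}]'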

Connecting angular front end to API using kubernetes service

In the env file for my Angular frontend I have the API endpoint set to localhost:8000 because my API listens on that port, but it is in a separate pod. Is this correct, or am I meant to use the name I gave to the backend service in the deployment file? Second, how do I connect to the backend service? Is the way I have done it in the deployment file below correct?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ai-api
  template:
    metadata:
      labels:
        app: ai-api
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: ai-api
        image: test.azurecr.io/api:v5
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 8000
          name: ai-api
---
apiVersion: v1
kind: Service
metadata:
  name: ai-api
spec:
  ports:
  - port: 8000
  selector:
    app: ai-api
---
# Frontend
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ai-front
  template:
    metadata:
      labels:
        app: ai-front
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: ai-front
        image: test.azurecr.io/front-end:v5.1
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
        env:
        - name: api
          value: "ai-api"
---
apiVersion: v1
kind: Service
metadata:
  name: ai-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  # Tells loadbalancer which deployment to use
  selector:
    app: ai-front
You mentioned that you have the API endpoint set to localhost:8000 for your frontend, which is not correct, as localhost refers to the same pod the request is sent from (so it means "connect to myself"). Change it to ai-api:8000, and also make sure that your API server is listening on 0.0.0.0:8000 and not on localhost:8000.
I also see that you are passing the name of your backend service to the frontend pod:
env:
- name: api
  value: "ai-api"
and if you are using this env variable to connect to your backend app, it contradicts your earlier statement that you are connecting to localhost:8000.
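If you want to confirm that the service name resolves inside the cluster, a quick throwaway check (the pod name and image here are just placeholders) is:
kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- wget -qO- http://ai-api:8000/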