Error while trying to create ReplicaSet on Kubernetes with YAML - kubernetes

I'm a beginner with Kubernetes and YAML.
I've been trying to deploy a ReplicaSet with YAML.
This is the file for the ReplicaSet:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  label:
    app: myapp
spec:
  selector:
    matchlabels:
      env: production
      name: nginx
replicas: 3
template:
  metadata:
    name: nginx
    labels:
      env: production
      name: nginx
  spec:
    containers:
    - name: nginx
      image: nginx
      resources:
        limits:
          memory: "128Mi"
          cpu: "500m"
      ports:
      - containerPort: 80
And this is the Pod file:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: production
    name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
    ports:
    - containerPort: 80
However, when I execute the kubectl create -f replicaset.yml command, I get the following error:
The ReplicaSet "myapp-replicaset" is invalid:
spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string(nil), MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: empty selector is invalid for deployment
spec.template.spec.containers: Required value

Your replicaset.yaml has incorrect indentation plus a couple of typos: replicas and template must be nested under spec, and the marked fields should be labels and matchLabels. Here is the corrected file:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels: # labels
    app: myapp
spec:
  selector:
    matchLabels: # matchLabels
      env: production
      name: nginx
  replicas: 3
  template:
    metadata:
      name: nginx
      labels:
        env: production
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
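As a quick sanity check, assuming a reasonably recent kubectl, a client-side dry run validates the manifest's structure without creating anything in the cluster, so indentation and field-name mistakes like the ones above surface immediately:
kubectl apply --dry-run=client -f replicaset.yml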

Related

Istio : HTTPS Traffic between Pods working only if sidecar not injected

Steps I have done:
I have two namespaces, one with Istio injected and one without.
I deploy a simple nginx server in both namespaces using this YAML:
apiVersion: v1
kind: Service
metadata:
  name: software-upgrader
  labels:
    app: software-upgrader
    service: software-upgrader
spec:
  ports:
  - name: http
    port: 25301
  selector:
    app: software-upgrader
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: software-upgrader
spec:
  selector:
    matchLabels:
      app: software-upgrader
      version: v1
  template:
    metadata:
      labels:
        app: software-upgrader
        version: v1
    spec:
      containers:
      - image: gcr.io/mesh7-public-images/scalability/nginx
        imagePullPolicy: IfNotPresent
        name: software-upgrader
        resources:
          limits:
            cpu: 20m
            memory: 32Mi
          requests:
            cpu: 20m
            memory: 32Mi
Next I deploy HTTPS servers in both namespaces following these steps: Steps to deploy HTTPS server.
Then I curl the server from another pod in each namespace.
The pod without Istio injected gets 200 OK, while the Istio-injected pod gets:
curl: (56) OpenSSL SSL_read: error:1409445C:SSL routines:ssl3_read_bytes:tlsv13 alert certificate required, errno 0
command terminated with exit code 56
Pardon my ignorance, but do I have to create a ServiceEntry or VirtualService for HTTPS traffic to work between pods in the same namespace when Istio is injected?
You have to add the protocol to the Service port definition:
apiVersion: v1
kind: Service
metadata:
  name: test-https-server
  labels:
    app: test-https-server
    service: test-https-server
spec:
  ports:
  - name: test-https
    port: 25302
    appProtocol: https
  selector:
    app: test-https-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-https-server
spec:
  selector:
    matchLabels:
      app: test-https-server
  template:
    metadata:
      labels:
        app: test-https-server
    spec:
      containers:
      - image: gcr.io/mesh7-public-images/scalability/nginx
        command: ["bash", "-c", "python3 ThreadedHTTPSServer.py 25302"]
        imagePullPolicy: Always
        name: test-https-server
        resources:
          limits:
            cpu: 20m
            memory: 32Mi
          requests:
            cpu: 20m
            memory: 32Mi
Here is the relevant part of a working example:
ports:
- name: http
  port: 25302
  appProtocol: https # the protocol must be specified
Istio appProtocol configuration doc
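As an alternative not shown above, Istio can also infer the protocol from the port name using its <protocol>[-<suffix>] naming convention, so a port named like the following (the suffix is just an illustrative placeholder) should have the same effect as setting appProtocol:
ports:
- name: https-upgrader   # protocol inferred from the "https-" prefix
  port: 25302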

Why does my Ingress.yaml file not expose my containerised frontend?

I'm using microk8s to run my containerised frontend in a k8s cluster. However, when I try to access it, I get a 'site can't be reached' error. I first tested it out in minikube with minikube tunnel and it works there. What am I doing wrong here?
Note: I've enabled the ingress addon in microk8s with microk8s enable ingress.
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: http-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mf1
            port:
              number: 80
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mf1
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: mf1
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mf1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mf1
  template:
    metadata:
      labels:
        app: mf1
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: mf1
        image: nginx:latest
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
          name: redis
        imagePullPolicy: Always
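A quick first check, assuming kubectl is pointed at the microk8s cluster, is whether the Ingress was admitted and the ingress controller is actually running (the microk8s ingress addon typically runs in the ingress namespace):
kubectl get ingress http-ingress
kubectl describe ingress http-ingress
kubectl get pods -n ingress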

Configuring yaml file

I'm learning k8s and found an example in the MS docs. The problem I'm having is that I want to switch which GitHub repo is being used. I haven't been able to figure out the path within this YAML example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-back
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: azure-vote-back
        image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
        env:
        - name: ALLOW_EMPTY_PASSWORD
          value: "yes"
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-front
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: azure-vote-front
        image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
        env:
        - name: REDIS
          value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front
This YAML example doesn't have a GitHub repo field at all, which is why you can't find a path.
If you're trying to change the container image source, it has to come from a container registry (or your own filesystem). The image is specified here:
containers:
- image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
where mcr.microsoft.com is the container registry.
You won't be able to point this directly at a GitHub repository, but any container registry will work, and GitHub provides one at https://ghcr.io (that link itself redirects back to GitHub).
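For illustration, if that image were instead published to GitHub's container registry, only the image field would change; the user/organization and image name below are placeholders:
containers:
- name: azure-vote-front
  image: ghcr.io/<your-github-user-or-org>/azure-vote-front:v1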

Kubernetes: Error converting YAML to JSON: yaml: line 12: did not find expected key

I'm trying to add ciao to my single-node Kubernetes cluster, and every time I run the kubectl apply -f command I keep running into the error "error converting YAML to JSON: yaml: line 12: did not find expected key". I looked at the other solutions but they were no help. Any help will be appreciated.
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: v1
kind: Secret
metadata:
  name: ciao
  namespace: monitoring
data:
  BASIC_AUTH_USERNAME: YWRtaW4=
  BASIC_AUTH_PASSWORD: cGFzc3dvcmQ=
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ciao
  namespace: monitoring
spec:
  replicas: 1
  template:
    metadata:
    selector:
      labels:
        app: ciao
    spec:
      containers:
      - image: brotandgames/ciao:latest
        imagePullPolicy: IfNotPresent
        name: ciao
        volumeMounts: # Omit if you do not have persistent volumes
        - mountPath: /app/db/sqlite/
          name: persistent-volume
          subPath: ciao
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: 256Mi
            cpu: 200m
          limits:
            memory: 512Mi
            cpu: 400m
        envFrom:
        - secretRef:
            name: ciao
---
apiVersion: v1
kind: Service
metadata:
  name: ciao
  namespace: monitoring
spec:
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
  type: NodePort
  selector:
    app: ciao
Looks like there's an indentation issue in your Deployment definition. This should work:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ciao
  namespace: monitoring
  labels:
    app: ciao
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ciao
  template:
    metadata:
      labels:
        app: ciao
    spec:
      containers:
      - image: brotandgames/ciao:latest
        imagePullPolicy: IfNotPresent
        name: ciao
        volumeMounts: # Omit if you do not have persistent volumes
        - mountPath: /app/db/sqlite/
          name: persistent-volume
          subPath: ciao
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: 256Mi
            cpu: 200m
          limits:
            memory: 512Mi
            cpu: 400m
        envFrom:
        - secretRef:
            name: ciao
Keep in mind that in this definition the PV persistent-volume needs to exist in your cluster/namespace.
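For reference, a minimal way to satisfy that, assuming you back the mount with a PersistentVolumeClaim (the claim name ciao-pvc below is a placeholder), is to add a matching volumes entry to the pod spec:
    spec:
      volumes:
      - name: persistent-volume    # must match the volumeMounts name above
        persistentVolumeClaim:
          claimName: ciao-pvc      # placeholder; the PVC must exist in the monitoring namespace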

Connecting angular front end to API using kubernetes service

In the env file for my Angular frontend, I have the API endpoint set to localhost:8000 because my API listens on that port, but the API runs in a separate pod. Is this correct, or am I meant to use the name I gave to the backend service in the deployment file? Second, to connect the backend service, is the way I have done it in the deployment file below correct?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ai-api
  template:
    metadata:
      labels:
        app: ai-api
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: ai-api
        image: test.azurecr.io/api:v5
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 8000
          name: ai-api
---
apiVersion: v1
kind: Service
metadata:
  name: ai-api
spec:
  ports:
  - port: 8000
  selector:
    app: ai-api
---
# Frontend
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ai-front
  template:
    metadata:
      labels:
        app: ai-front
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: ai-front
        image: test.azurecr.io/front-end:v5.1
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
        env:
        - name: api
          value: "ai-api"
---
apiVersion: v1
kind: Service
metadata:
  name: ai-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  # Tells the load balancer which deployment to use
  selector:
    app: ai-front
You mentioned that you have the API endpoint set to localhost:8000 for your frontend, which is not correct: localhost refers to the same pod the request is sent from (so it means "connect to myself"). Change it to ai-api:8000. Also make sure that your API server is listening on 0.0.0.0:8000 and not on localhost:8000.
I also see that you are passing the name of your backend service to the frontend pod:
env:
- name: api
  value: "ai-api"
and if you are using this env variable to connect to your backend app, it contradicts your earlier statement that you are connecting to localhost:8000.
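For illustration, assuming curl is available in the frontend image (the pod name below is a placeholder), you can verify from inside the cluster that the service DNS name resolves and the API answers:
kubectl exec -it <frontend-pod-name> -- curl http://ai-api:8000/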