API Deployment on Kubernetes, pods are not creating

I'm deploying an API on Kubernetes, but the Pods are not being created and I get this error:
unable to ensure pod container exists: failed to create container for [kubepods burstable pod63951ed1-a42f-44c9-9f85-fbb4c06a3e83] : mkdir /sys/fs/cgroup/memory/kubepods/burstable/pod63951ed1-a42f-44c9-9f85-fbb4c06a3e83: cannot allocate memory
YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: abc
  labels:
    app: abc
spec:
  replicas: 3
  selector:
    matchLabels:
      app: vaccndev
  template:
    metadata:
      labels:
        name: abc
        app: abc
    spec:
      hostname: abc
      containers:
      - name: abc
        image: docker.repo1.xyz.com/va_nonuser/server:Dev
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 1
            memory: 2Gi
          requests:
            cpu: 1
            memory: 1Gi
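The mkdir /sys/fs/cgroup/memory/...: cannot allocate memory failure is raised by the kubelet when it cannot create the pod's memory cgroup on the node, so it usually points at the node rather than at the manifest: in community reports it is often traced to node memory pressure or to a memory-cgroup (kmem) accounting leak on older kernels (worked around by upgrading the kernel or booting with cgroup.memory=nokmem). A few hedged checks, assuming shell access to the affected node:

# On the node that logged the error:
free -m                 # is the node itself short on memory?
cat /proc/cgroups       # num_cgroups for the memory controller; a very large count hints at a cgroup leak
uname -r                # older 3.x/4.x kernels are the ones commonly affected by the kmem leak

# From the cluster side:
kubectl describe node <node-name>   # check Allocatable memory and the MemoryPressure condition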

Related

Accessing a PVC created by a StatefulSet pod from another pod created by a DaemonSet, using Azure disk

I have app-1 pods created by a StatefulSet, and in it I am creating PVCs as well:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app-1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app-1
  serviceName: "app-1"
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
      - name: app-1
        image: registry.k8s.io/nginx-slim:0.8
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 4567
        volumeMounts:
        - name: app-1-state-volume-claim
          mountPath: /app1Data
        - name: app-2-data-volume-claim
          mountPath: /app2Data
  volumeClaimTemplates:
  - metadata:
      name: app-1-state-volume-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "managed-csi-premium"
      resources:
        requests:
          storage: 1Gi
  - metadata:
      name: app-2-data-volume-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "managed-csi-premium"
      resources:
        requests:
          storage: 1Gi
The state of app-1 is maintained in the PVC app-1-state-volume-claim.
app-1 also creates data for app-2 in the PVC app-2-data-volume-claim.
I want to access app-2-data-volume-claim in another workload, the DaemonSet described below:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: app-2
spec:
  selector:
    matchLabels:
      name: app-2
  template:
    metadata:
      labels:
        name: app-2
    spec:
      containers:
      - name: app-2
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: app2Data
          mountPath: /app2Data
      volumes:
      - name: app2Data
        persistentVolumeClaim:
          claimName: app-2-data-volume-claim
This fails with the output below:
persistentvolumeclaim "app-2-data-volume-claim" not found
How can I do that? I cannot use an Azure file share due to an app-1 limitation.
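A note not from the original thread: a volumeClaimTemplates entry does not create a PVC literally named app-2-data-volume-claim. The StatefulSet controller creates one PVC per replica, named <template-name>-<pod-name> (i.e. <template-name>-<statefulset-name>-<ordinal>), which is the likely reason the DaemonSet's lookup fails. A minimal sketch of how to inspect and reference the generated names, assuming the manifests above:

# List the PVCs the StatefulSet controller actually created
kubectl get pvc
# Expected names, given the template and StatefulSet above:
#   app-2-data-volume-claim-app-1-0
#   app-2-data-volume-claim-app-1-1
#   app-2-data-volume-claim-app-1-2

# The consuming pod would have to reference one of those generated names, e.g.:
      volumes:
      - name: app2data
        persistentVolumeClaim:
          claimName: app-2-data-volume-claim-app-1-0   # PVC generated for replica 0

Note also that a ReadWriteOnce Azure disk can be attached to only one node at a time, so a DaemonSet (one pod per node) cannot mount the same disk-backed PVC on every node.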

Error while trying to create ReplicaSet on Kubernetes with YAML

I'm a beginner with Kubernetes and YAML.
I've been trying to deploy a ReplicaSet with YAML.
This is the file for the ReplicaSet:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  label:
    app: myapp
spec:
  selector:
    matchlabels:
      env: production
      name: nginx
replicas: 3
template:
  metadata:
    name: nginx
    labels:
      env: production
      name: nginx
  spec:
    containers:
    - name: nginx
      image: nginx
      resources:
        limits:
          memory: "128Mi"
          cpu: "500m"
      ports:
      - containerPort: 80
And this is the Pod file:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: production
    name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
    ports:
    - containerPort: 80
However, when I execute the kubectl create -f replicaset.yml command, I get the following error:
The ReplicaSet "myapp-replicaset" is invalid:
spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string(nil), MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: empty selector is invalid for deployment
spec.template.spec.containers: Required value
Your replicaset.yaml indentation seems to be wrong, and there are some typos.
replicas and template should be nested inside spec. Also, check the marked and corrected typos in labels and matchLabels.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels: # labels
    app: myapp
spec:
  selector:
    matchLabels: # matchLabels
      env: production
      name: nginx
  replicas: 3
  template:
    metadata:
      name: nginx
      labels:
        env: production
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
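A quick way to verify the corrected manifest, assuming it is saved as replicaset.yml:

kubectl apply -f replicaset.yml
kubectl get rs myapp-replicaset                  # DESIRED/CURRENT/READY should converge to 3
kubectl get pods -l env=production,name=nginx    # the three replicas selected by matchLabels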

Configuring yaml file

I'm learning k8s and found an example in the MS docs. The problem I'm having is that I want to switch which GitHub repo is being used, but I haven't been able to figure out the path within this YAML example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-back
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: azure-vote-back
        image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
        env:
        - name: ALLOW_EMPTY_PASSWORD
          value: "yes"
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-front
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: azure-vote-front
        image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
        env:
        - name: REDIS
          value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front
This YAML example doesn't have a GitHub repo field at all; that's why you can't find a path.
If you're trying to change the container image source, it has to come from a container registry (or your own filesystem), which is specified here:
containers:
- name: azure-vote-front
  image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
where mcr.microsoft.com is the container registry.
You won't be able to connect this directly to a GitHub repository, but any container registry will work, and I believe GitHub has one at https://ghcr.io (that link itself will direct you back to GitHub).
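For illustration, if the image were published to GitHub Container Registry instead, the only change in the manifest would be the image reference. The repository path below is a placeholder, not something taken from the original example:

# Build and push to ghcr.io (placeholder namespace/image)
docker login ghcr.io
docker build -t ghcr.io/<your-user>/azure-vote-front:v1 .
docker push ghcr.io/<your-user>/azure-vote-front:v1

# Then point the Deployment at the new registry:
      containers:
      - name: azure-vote-front
        image: ghcr.io/<your-user>/azure-vote-front:v1   # placeholder ghcr.io path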

How does k8s correlate ReplicaSet and Pods?

Say I have the following k8s config file with two identical Deployments that differ only by name.
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: hello-world
  name: hello-world-deployment-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world-2
        image: rancher/hello-world:v0.1.2
        resources:
          limits:
            cpu: "1"
            memory: 100M
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: hello-world
  name: hello-world-deployment-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world-2
        image: rancher/hello-world:v0.1.2
        resources:
          limits:
            cpu: "1"
            memory: 100M
As I understand it, k8s correlates ReplicaSets and Pods by their labels. Therefore, with the above configuration, I would expect some problem, or that k8s would forbid this configuration.
However, it turns out everything is fine. Apart from the labels, is there something else k8s uses to correlate ReplicaSets and Pods?
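For what it's worth, labels are not the whole story: the Deployment controller adds a pod-template-hash label to each ReplicaSet's selector and pod template, and every Pod carries a metadata.ownerReferences entry pointing at the ReplicaSet that created it, which is how the two overlapping Deployments above stay untangled. Two commands that show this (output shape illustrative):

# The pod-template-hash label added by the Deployment controller
kubectl get pods -l app=hello-world --show-labels

# Which ReplicaSet owns each pod, via ownerReferences
kubectl get pods -l app=hello-world \
  -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.metadata.ownerReferences[0].name}{"\n"}{end}'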

Connecting angular front end to API using kubernetes service

In my env file for my Angular frontend I have the API endpoint set to localhost:8000 because my API listens on that port, but the API runs in a separate pod. Is this correct, or am I meant to use the name I gave to the backend service in the deployment file? Second, how do I connect the backend service? Is the way I have it done in the deployment file below correct?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ai-api
  template:
    metadata:
      labels:
        app: ai-api
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: ai-api
        image: test.azurecr.io/api:v5
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 8000
          name: ai-api
---
apiVersion: v1
kind: Service
metadata:
  name: ai-api
spec:
  ports:
  - port: 8000
  selector:
    app: ai-api
---
# Frontend
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ai-front
  template:
    metadata:
      labels:
        app: ai-front
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: ai-front
        image: test.azurecr.io/front-end:v5.1
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
        env:
        - name: api
          value: "ai-api"
---
apiVersion: v1
kind: Service
metadata:
  name: ai-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  # Tells the loadbalancer which deployment to use
  selector:
    app: ai-front
You mentioned that you have the API endpoint set to localhost:8000 for your frontend, which is not correct, since localhost refers to the same pod the request is sent from (it means "connect to myself"). Change it to ai-api:8000, and also make sure that your API server is listening on 0.0.0.0:8000 and not on localhost:8000.
I also see that you are passing the name of your backend service to the frontend pod:
env:
- name: api
  value: "ai-api"
If you are using this env var to connect to your backend app, that contradicts your earlier statement that you are connecting to localhost:8000.
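As a quick sanity check of the in-cluster DNS name (a sketch, assuming both workloads are in the same namespace and curl is available in the frontend image; the pod name is a placeholder):

# Call the backend through its Service name from inside the frontend pod
kubectl exec -it <ai-front-pod-name> -- curl -s http://ai-api:8000/

# The Angular environment entry would then look something like:
#   apiUrl: 'http://ai-api:8000'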