Why does my Ingress.yaml file not expose my containerised frontend? - kubernetes

I'm using microk8s to run my containerised frontend in a k8s cluster. However, when I try to access it, I get a 'site can't be reached' error. I first tested it in minikube with minikube tunnel, and it worked there. What am I doing wrong here?
Note: I've enabled the ingress addon in microk8s with microk8s enable ingress.
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: http-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mf1
            port:
              number: 80
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mf1
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: mf1
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mf1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mf1
  template:
    metadata:
      labels:
        app: mf1
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: mf1
        image: nginx:latest
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
          name: redis
        imagePullPolicy: Always
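One thing worth checking here (an assumption, since the manifests above otherwise look consistent): the Ingress does not specify an ingress class. The microk8s ingress addon registers its nginx controller under the public IngressClass, so an Ingress with no ingressClassName may never be picked up by it. A minimal sketch of the change, assuming the default addon configuration:

spec:
  ingressClassName: public # class registered by `microk8s enable ingress` (default addon config)
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mf1
            port:
              number: 80

You can confirm the class name with:

microk8s kubectl get ingressclass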

Related

Istio : HTTPS Traffic between Pods working only if sidecar not injected

Steps I have done:
I have two namespaces, one with Istio injected and another without.
Now deploy a simple nginx server in both namespaces using this YAML:
apiVersion: v1
kind: Service
metadata:
  name: software-upgrader
  labels:
    app: software-upgrader
    service: software-upgrader
spec:
  ports:
  - name: http
    port: 25301
  selector:
    app: software-upgrader
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: software-upgrader
spec:
  selector:
    matchLabels:
      app: software-upgrader
      version: v1
  template:
    metadata:
      labels:
        app: software-upgrader
        version: v1
    spec:
      containers:
      - image: gcr.io/mesh7-public-images/scalability/nginx
        imagePullPolicy: IfNotPresent
        name: software-upgrader
        resources:
          limits:
            cpu: 20m
            memory: 32Mi
          requests:
            cpu: 20m
            memory: 32Mi
Now deploy HTTPS servers in both namespaces by following these steps: Steps to deploy HTTPS server.
Now curl it from another pod in both namespaces.
The pod without Istio injected gets 200 OK, while the Istio-injected pod gets:
curl: (56) OpenSSL SSL_read: error:1409445C:SSL routines:ssl3_read_bytes:tlsv13 alert certificate required, errno 0
command terminated with exit code 56
Pardon my ignorance: do I have to create some ServiceEntry or VirtualService for HTTPS to work between pods in the same namespace when Istio is injected?
You have to add the protocol to the Service port definition:
apiVersion: v1
kind: Service
metadata:
  name: test-https-server
  labels:
    app: test-https-server
    service: test-https-server
spec:
  ports:
  - name: test-https
    port: 25302
    appProtocol: https
  selector:
    app: test-https-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-https-server
spec:
  selector:
    matchLabels:
      app: test-https-server
  template:
    metadata:
      labels:
        app: test-https-server
    spec:
      containers:
      - image: gcr.io/mesh7-public-images/scalability/nginx
        command: ["bash", "-c", "python3 ThreadedHTTPSServer.py 25302"]
        imagePullPolicy: Always
        name: test-https-server
        resources:
          limits:
            cpu: 20m
            memory: 32Mi
          requests:
            cpu: 20m
            memory: 32Mi
Here is the relevant part of the working example:
ports:
- name: http
  port: 25302
  appProtocol: https # should specify the protocol
See the Istio appProtocol configuration doc.
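As a side note, on clusters or Istio versions where appProtocol is not available, Istio can also infer the protocol from the port name, which must follow the form name: <protocol>[-<suffix>]. A sketch of the same Service port using that convention (the suffix "upgrader" is made up for illustration):

ports:
- name: https-upgrader # Istio infers HTTPS from the "https-" prefix
  port: 25302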

Error while trying to create ReplicaSet on Kubernetes with YAML

I'm a beginner with Kubernetes and YAML.
I've been trying to deploy a ReplicaSet with YAML.
This is the file for the ReplicaSet:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  label:
    app: myapp
spec:
  selector:
    matchlabels:
      env: production
      name: nginx
replicas: 3
template:
  metadata:
    name: nginx
    labels:
      env: production
      name: nginx
  spec:
    containers:
    - name: nginx
      image: nginx
      resources:
        limits:
          memory: "128Mi"
          cpu: "500m"
      ports:
      - containerPort: 80
And this is the Pod file:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: production
    name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
    ports:
    - containerPort: 80
However, when I execute the kubectl create -f replicaset.yml command, I get the following error:
The ReplicaSet "myapp-replicaset" is invalid:
spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string(nil), MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: empty selector is invalid for deployment
spec.template.spec.containers: Required value
Your replicaset.yaml indentation is wrong, and there are some typos: replicas and template should sit at the spec level, and note the corrected typos in labels and matchLabels, marked below.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels: # labels
    app: myapp
spec:
  selector:
    matchLabels: # matchLabels
      env: production
      name: nginx
  replicas: 3
  template:
    metadata:
      name: nginx
      labels:
        env: production
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
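To catch this class of error before creating anything, the manifest can be dry-run validated. Client-side dry run checks the manifest schema locally (it flags unknown fields like matchlabels), while server-side dry run runs the full API validation, including the selector checks above, without persisting the object:

kubectl apply -f replicaset.yml --dry-run=client
kubectl apply -f replicaset.yml --dry-run=server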

visual studio kubernetes project 503 error in azure

I have created a kubernetes project in visual studio 2019, with the default template. This template creates a WeatherForecast controller.
After that I published it to my ACR.
I used this command to create the AKS:
az aks create -n $MYAKS -g $MYRG --generate-ssh-keys --z 1 -s Standard_B2s --attach-acr /subscriptions/mysubscriptionguid/resourcegroups/$MYRG/providers/Microsoft.ContainerRegistry/registries/$MYACR
And I enabled HTTP application routing via the azure portal.
I have deployed it to azure kubernetes (Standard_B2s), with the following deployment.yaml:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes1-deployment
  labels:
    app: kubernetes1-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kubernetes1
  template:
    metadata:
      labels:
        app: kubernetes1
    spec:
      containers:
      - name: kubernetes1
        image: mycontainername.azurecr.io/kubernetes1:latest
        ports:
        - containerPort: 80
service.yaml:
#service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes1
spec:
  type: ClusterIP
  selector:
    app: kubernetes1
  ports:
  - port: 80 # SERVICE exposed port
    name: http # SERVICE port name
    protocol: TCP # The protocol the SERVICE will listen to
    targetPort: http # Port to forward to in the POD
ingress.yaml:
#ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes1
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
  - host: kubernetes1.<uuid (removed for this post)>.westeurope.aksapp.io # Which host is allowed to enter the cluster
    http:
      paths:
      - backend: # How the ingress will handle the requests
          service:
            name: kubernetes1 # Which service the request will be forwarded to
            port:
              name: http # Which port in that service
        path: / # Which path is this rule referring to
        pathType: Prefix # See more at https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types
But when I go to kubernetes1.<uuid>.westeurope.aksapp.io or kubernetes1.<uuid>.westeurope.aksapp.io/WeatherForecast I get the following error:
503 Service Temporarily Unavailable
nginx/1.15.3
It's working now. For other people who have the same problem, I updated my deployment config from:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes1-deployment
  labels:
    app: kubernetes1-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kubernetes1
  template:
    metadata:
      labels:
        app: kubernetes1
    spec:
      containers:
      - name: kubernetes1
        image: mycontainername.azurecr.io/kubernetes1:latest
        ports:
        - containerPort: 80
to:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes1
spec:
  selector: # Define the wrapping strategy
    matchLabels: # Match all pods with the defined labels
      app: kubernetes1 # Labels follow the `name: value` template
  template: # This is the template of the pod inside the deployment
    metadata:
      labels:
        app: kubernetes1
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - image: mycontainername.azurecr.io/kubernetes1:latest
        name: kubernetes1
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
          name: http
I don't know exactly which line solved the problem; feel free to comment if you know. The most likely candidate is the named container port: service.yaml forwards to targetPort: http, and that name only resolves if the pod template declares a port named http, which the original deployment did not.
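A minimal sketch of that pairing, with the names taken from the manifests above:

# service.yaml: targetPort refers to a port *name*...
ports:
- port: 80
  name: http
  targetPort: http # resolved against the pod's named ports
# deployment.yaml: ...so the container port must carry that name
ports:
- containerPort: 80
  name: http

Without the name on containerPort, the Service has no endpoint port to forward to and the ingress controller answers with 503.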

Kubernetes Ingress can not access container by path

I am new to Kubernetes. I configured an Ingress and want to access the container via the minikube IP plus a path, but it fails to connect.
However, I can access it by using a host rule instead of a path, so I think the problem is the Ingress.
I have no idea what to do; I hope someone can help me. Thanks.
Here are my Deployment, Service, and Ingress YAML files.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portainer-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: portainer
  template:
    metadata:
      labels:
        app: portainer
    spec:
      containers:
      - name: portainer
        image: portainer/portainer:latest
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rancher-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: rancher
  template:
    metadata:
      labels:
        app: rancher
    spec:
      containers:
      - name: rancher
        image: rancher/server:latest
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 8080
apiVersion: v1
kind: Service
metadata:
  name: portainer-service
spec:
  selector:
    app: portainer
  type: NodePort
  ports:
  - port: 80
    targetPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: rancher-service
spec:
  selector:
    app: rancher
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: path-based-ingress
spec:
  rules:
  - http:
      paths:
      - path: /portainer
        backend:
          serviceName: portainer-service
          servicePort: 80
      - path: /rancher
        backend:
          serviceName: rancher-service
          servicePort: 80
Add this to your Ingress in the metadata section:
annotations:
  nginx.ingress.kubernetes.io/rewrite-target: /
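Note that rewrite-target: / rewrites every matched path to /, so a sub-path like /portainer/dashboard would also land on /. On current ingress-nginx with the networking.k8s.io/v1 API, the documented capture-group form preserves the rest of the path; a sketch for the portainer rule:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: path-based-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2 # $2 is the second capture group from the path
spec:
  rules:
  - http:
      paths:
      - path: /portainer(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: portainer-service
            port:
              number: 80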

Expose a redis cluster - with a kubernetes statefulset to the internet

I created a StatefulSet that deploys a Redis image to GCP on Kubernetes. The challenge I am having is exposing it using a single domain name, so that the pods can be accessed as redis.com/first, redis.com/second, redis.com/third.
Here are the YAML files.
StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app-redis
spec:
  selector:
    matchLabels:
      app: app-redis
  serviceName: 'redis-service'
  replicas: 3
  template:
    metadata:
      labels:
        app: app-redis
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: app-redis
        image: redis
        args:
        - /etc/redis/redis.conf
        volumeMounts:
        - mountPath: /etc/redis
          name: redis-config
          readOnly: false
        - name: redis-storage
          mountPath: /data
          readOnly: false
        resources:
          requests:
            cpu: 50m
            memory: 128Mi
          limits:
            cpu: 150m
            memory: 256Mi
        ports:
        - containerPort: 6379
          name: redis
        livenessProbe:
          exec:
            command: ['redis-cli', 'ping']
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 2
      volumes:
      - name: redis-config
        configMap:
          name: redis-config
  volumeClaimTemplates:
  - metadata:
      name: redis-storage
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
Headless service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app-redis
  name: redis-service
  namespace: default
spec:
  ports:
  - name: server-port
    port: 80
    protocol: TCP
    targetPort: 6379
  clusterIP: None
  selector:
    statefulset.kubernetes.io/pod-name: app-redis-0
Loadbalancer
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis-service
  name: app-redis
spec:
  externalTrafficPolicy: Local
  ports:
  - port: 80
    protocol: TCP
    targetPort: 6379
  selector:
    app: app-redis
  type: LoadBalancer
  loadBalancerIP: xx.xx.xx.xxx
status:
  loadBalancer:
    ingress:
    - ip: xx.xx.xx.xxx
Config map
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
  namespace: default
data:
  redis.conf: |
    dbfilename "dump.rdb"
    dir /data
    save 3600 1
    save 300 10
    save 60 100
    appendonly yes
    appendfilename "appendonly.aof"
Storage class
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: redis-storage
provisioner: kubernetes.io/gce-pd
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: redis-ingress
  annotations:
    kubernetes.io/ingress.class: 'nginx'
    nginx.ingress.kubernetes.io/force-ssl-redirect: 'false'
spec:
  rules:
  - host: app-redis.tk
    http:
      paths:
      - path: /
        backend:
          serviceName: app-redis
          servicePort: 80
Each pod in the StatefulSet will need its own Service linking to it.
Each of these Services needs to be created with:
selector:
  statefulset.kubernetes.io/pod-name: <POD_NAME>
Then you will be able to set up an Ingress and use it to route traffic based on path:
...
spec:
  rules:
  - http:
      paths:
      - path: /app-redis-0
        backend:
          serviceName: redis-service-0
          servicePort: 6379
      - path: /app-redis-1
        backend:
          serviceName: redis-service-1
          servicePort: 6379
      - path: /app-redis-2
        backend:
          serviceName: redis-service-2
          servicePort: 6379
...
You can read more in Exposing StatefulSets in Kubernetes and in Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?
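For concreteness, one of those per-pod Services (the name redis-service-0 is assumed here to match the ingress paths above) could look like this:

apiVersion: v1
kind: Service
metadata:
  name: redis-service-0
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    statefulset.kubernetes.io/pod-name: app-redis-0 # StatefulSet pods are named <statefulset-name>-<ordinal>

Repeat for app-redis-1 and app-redis-2 to cover the other two replicas.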