Visual Studio Kubernetes project 503 error in Azure

I have created a Kubernetes project in Visual Studio 2019, with the default template. This template creates a WeatherForecast controller.
After that I have published it to my ACR.
I used this command to create the AKS:
az aks create -n $MYAKS -g $MYRG --generate-ssh-keys -z 1 -s Standard_B2s --attach-acr /subscriptions/mysubscriptionguid/resourcegroups/$MYRG/providers/Microsoft.ContainerRegistry/registries/$MYACR
And I enabled HTTP application routing via the Azure portal.
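For reference, the same add-on can also be enabled from the CLI (standard az aks enable-addons syntax; substitute your own resource names):
az aks enable-addons -g $MYRG -n $MYAKS --addons http_application_routing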
I have deployed it to Azure Kubernetes Service (Standard_B2s), with the following deployment.yaml:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes1-deployment
  labels:
    app: kubernetes1-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kubernetes1
  template:
    metadata:
      labels:
        app: kubernetes1
    spec:
      containers:
        - name: kubernetes1
          image: mycontainername.azurecr.io/kubernetes1:latest
          ports:
            - containerPort: 80
service.yaml:
#service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes1
spec:
  type: ClusterIP
  selector:
    app: kubernetes1
  ports:
    - port: 80          # SERVICE exposed port
      name: http        # SERVICE port name
      protocol: TCP     # The protocol the SERVICE will listen to
      targetPort: http  # Port to forward to in the POD
ingress.yaml:
#ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes1
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
    - host: kubernetes1.<uuid (removed for this post)>.westeurope.aksapp.io  # Which host is allowed to enter the cluster
      http:
        paths:
          - backend:  # How the ingress will handle the requests
              service:
                name: kubernetes1  # Which service the request will be forwarded to
                port:
                  name: http  # Which port in that service
            path: /  # Which path is this rule referring to
            pathType: Prefix  # See more at https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types
But when I go to kubernetes1.<uuid>.westeurope.aksapp.io or kubernetes1.<uuid>.westeurope.aksapp.io/WeatherForecast I get the following error:
503 Service Temporarily Unavailable
nginx/1.15.3

It's working now. For other people who have the same problem: I have updated my deployment config from:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes1-deployment
  labels:
    app: kubernetes1-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kubernetes1
  template:
    metadata:
      labels:
        app: kubernetes1
    spec:
      containers:
        - name: kubernetes1
          image: mycontainername.azurecr.io/kubernetes1:latest
          ports:
            - containerPort: 80
to:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes1
spec:
  selector:  # Define the wrapping strategy
    matchLabels:  # Match all pods with the defined labels
      app: kubernetes1  # Labels follow the `name: value` template
  template:  # This is the template of the pod inside the deployment
    metadata:
      labels:
        app: kubernetes1
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - image: mycontainername.azurecr.io/kubernetes1:latest
          name: kubernetes1
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          ports:
            - containerPort: 80
              name: http
I don't know exactly which line solved the problem, so feel free to comment if you know. The most likely candidate is the port name: the Service forwards to targetPort: http, which is a named port, and the original Deployment never named its container port, so the Service had no matching endpoint; the new manifest adds name: http to containerPort 80.
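If you hit the same 503, a quick way to narrow it down is to check whether the Service actually has ready endpoints; an empty endpoints list usually means the selector or targetPort does not match the pods (plain kubectl, adjust resource names to your own manifests):
kubectl get pods -l app=kubernetes1 -o wide   # are the pods Running and Ready?
kubectl get endpoints kubernetes1             # empty ENDPOINTS -> selector/targetPort mismatch
kubectl describe ingress kubernetes1          # shows which backend service/port the ingress resolved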

Related

Istio: HTTPS Traffic between Pods working only if sidecar not injected

Steps I have done:
I have two namespaces, one with Istio injected and another without.
Now deploy a simple nginx server using this YAML in both namespaces:
apiVersion: v1
kind: Service
metadata:
  name: software-upgrader
  labels:
    app: software-upgrader
    service: software-upgrader
spec:
  ports:
    - name: http
      port: 25301
  selector:
    app: software-upgrader
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: software-upgrader
spec:
  selector:
    matchLabels:
      app: software-upgrader
      version: v1
  template:
    metadata:
      labels:
        app: software-upgrader
        version: v1
    spec:
      containers:
        - image: gcr.io/mesh7-public-images/scalability/nginx
          imagePullPolicy: IfNotPresent
          name: software-upgrader
          resources:
            limits:
              cpu: 20m
              memory: 32Mi
            requests:
              cpu: 20m
              memory: 32Mi
Now deploy HTTPS servers in both namespaces by following these steps: Steps to deploy HTTPS server
Now curl it from another pod in both namespaces.
The pod without Istio injected gets 200 OK, while the Istio-injected pod gets:
curl: (56) OpenSSL SSL_read: error:1409445C:SSL routines:ssl3_read_bytes:tlsv13 alert certificate required, errno 0
command terminated with exit code 56
Pardon my ignorance: do I have to create some ServiceEntry or VirtualService for HTTPS between pods in the same namespace when Istio is injected?
You have to add the protocol to the Service port definition:
apiVersion: v1
kind: Service
metadata:
  name: test-https-server
  labels:
    app: test-https-server
    service: test-https-server
spec:
  ports:
    - name: test-https
      port: 25302
      appProtocol: https
  selector:
    app: test-https-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-https-server
spec:
  selector:
    matchLabels:
      app: test-https-server
  template:
    metadata:
      labels:
        app: test-https-server
    spec:
      containers:
        - image: gcr.io/mesh7-public-images/scalability/nginx
          command: ["bash", "-c", "python3 ThreadedHTTPSServer.py 25302"]
          imagePullPolicy: Always
          name: test-https-server
          resources:
            limits:
              cpu: 20m
              memory: 32Mi
            requests:
              cpu: 20m
              memory: 32Mi
Here is an example of the working port definition:
ports:
  - name: http
    port: 25302
    appProtocol: https  # Should Specify Protocol
Istio appProtocol configuration doc
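As a side note (not from the original answer): on Istio versions without appProtocol support, the same protocol hint can be given through the port name prefix, which Istio uses for explicit protocol selection. The port name below is only illustrative:
ports:
  - name: https-upgrader  # "<protocol>[-<suffix>]" port naming is the older way to declare the protocol to Istio
    port: 25302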

mongodb microservice k8 persistent volume claim not persisting data

I have several microservices, each one with its own MongoDB deployment. I would like to start by getting my auth service working with a persistent volume. I have watched courses where PostgreSQL is used and read a lot in the Kubernetes docs, but I am having trouble getting this to work for MongoDB.
When I run skaffold dev, the PVC is created with no errors. kubectl shows the PVC in Bound status, and running describe on the PVC shows my mongo deployment as the user.
However, when I visit my client service in the browser, I can sign up, log out, and sign in again with no problem. But if I restart skaffold, so that it deletes and recreates the containers, my data is gone and I have to sign up again.
Here are my files:
auth-mongo-depl.yaml
# auth-mongo service base deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      volumes:
        - name: auth-mongo-data
          persistentVolumeClaim:
            claimName: auth-mongo-pvc
      containers:
        - name: auth-mongo
          image: mongo
          ports:
            - containerPort: 27017
              name: 'auth-mongo-port'
          volumeMounts:
            - name: auth-mongo-data
              mountPath: '/data/db'
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-ip-srv
spec:
  selector:
    app: auth-mongo
  type: ClusterIP
  ports:
    - name: auth-mongo-db
      protocol: TCP
      port: 27017
      targetPort: 27017
---
# Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auth-mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
auth-depl.yaml
# auth service base deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: isimmons33/ticketing-auth
          env:
            - name: MONGO_URI
              value: 'mongodb://auth-mongo-ip-srv:27017/auth'
            - name: JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: JWT_KEY
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
  name: auth-ip-srv
spec:
  selector:
    app: auth
  type: ClusterIP
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
The api/users portion of my ingress-srv.yaml:
- path: /api/users/?(.*)
  pathType: Prefix
  backend:
    service:
      name: auth-ip-srv
      port:
        number: 3000
My client fires off a POST request to /api/users/auth, with which I can successfully sign up or sign in as long as I don't restart skaffold.
I even used kubectl to get a shell into my mongo deployment and queried to see the new user account there, as it should be. But of course it is gone after restarting skaffold.
I am on Windows 10 but am running everything through WSL2 (Ubuntu).
Thanks for any help.
It is highly recommended to use StatefulSets for running databases in Kubernetes. With a Deployment, if your pod crashes for some reason and a new one is created, it is not guaranteed that the new pod will be attached to the same PV, hence you lose the data.
Have a look at this: https://kubernetes.io/blog/2017/01/running-mongodb-on-kubernetes-with-statefulsets
The solution, as pointed out by raghu_manne, was to use StatefulSets. But because the link posted is extremely old, here is the full solution that worked for me.
Also, here is a YouTube video I just found that explains StatefulSets and volumeClaimTemplates quite well:
How to run MongoDB with StatefulSets in Kubernetes
auth-mongo-depl.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  serviceName: auth-mongo
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      containers:
        - name: auth-mongo
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: auth-mongo-data
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: auth-mongo-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 500Mi
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-ip-srv
spec:
  selector:
    app: auth-mongo
  type: ClusterIP
  ports:
    - name: auth-mongo-db
      protocol: TCP
      port: 27017
      targetPort: 27017
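A quick way to convince yourself the data now survives: PVCs created from volumeClaimTemplates follow the standard <claim-template>-<statefulset-name>-<ordinal> naming pattern, so with this manifest the claim should be auth-mongo-data-auth-mongo-depl-0 and it is not deleted together with the pod.
kubectl get pvc                        # the claim created from the volumeClaimTemplates
kubectl delete pod auth-mongo-depl-0   # the StatefulSet recreates the pod and reattaches the same PVC
kubectl get pvc                        # the claim is still Bound, so /data/db keeps its contents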

multiple kubernetes deployments using same global yaml as template

I have run into an issue.
My goal: create multiple nginx deployments using the same "template" file and use kustomize to replace the container name. This is just an example; in the next steps I will add/replace/remove lines (e.g. resources) from nginx_template.yml for different deployments. For now I just want the patches to work so that multiple deployments are created :-) I am not even sure whether the structure is correct.
The structure is:
base/nginx_template.yml
base/kustomization.yml
base/apps/nginx1/nginx1.yml
base/apps/nginx2/nginx2.yml
base/nginx_template.yml:
---
apiVersion: apps/v1  # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nginx
  replicas: 1  # tells deployment to run 1 pod matching the template
  template:  # create pods using pod definition in this template
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: template
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
base/kustomization.yml:
resources:
  - nginx_template.yml
patches:
  - path: ./apps/nginx1/nginx1.yml
    target:
      kind: Deployment
  - path: ./apps/nginx2/nginx2.yml
    target:
      kind: Deployment
base/apps/nginx1/nginx1.yml:
- op: replace
  path: /spec/template/spec/containers/0/name
  value: nginx-1
base/apps/nginx2/nginx2.yml:
- op: replace
  path: /spec/template/spec/containers/0/name
  value: nginx-2
All it does now is create only nginx-2. Thank you for any help.
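There is no accepted answer in this thread, but a plausible explanation is that both patches target the single Deployment named nginx inside the same kustomization, so the second patch simply overwrites the first and only one Deployment is ever rendered. A common way to get two separate Deployments from the same template is one overlay per app with a nameSuffix; the layout below is only a sketch under that assumption, with the base kustomization.yml reduced to listing nginx_template.yml without patches:
# overlays/nginx1/kustomization.yml (hypothetical layout, not from the question)
resources:
  - ../../base
nameSuffix: -1
patches:
  - path: nginx1.yml   # the same JSON patch as base/apps/nginx1/nginx1.yml, placed next to this file
    target:
      kind: Deployment
      name: nginx
# overlays/nginx2/kustomization.yml is identical except nameSuffix: -2 and the nginx2 patch.
# Build/apply each variant:
#   kubectl apply -k overlays/nginx1
#   kubectl apply -k overlays/nginx2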

connect flask app to Prometheus in Kubernetes cluster

I'm new to Prometheus and I have a simple Flask app in a Kubernetes cluster. I also have Prometheus/Grafana monitoring services in the cluster, in a namespace called prometheus-monitoring. The problem is that when I create a ServiceMonitor via a .yaml file to connect my app to Prometheus, the target is not added, although in the config I can see that the job was added. The status in Prometheus - Service Discovery is Dropped.
I have no idea why my service is not connected to the ServiceMonitor.
serviceMonitor/default/monitoring-webapp/0 (0 / 2 active targets)
app.py
# assuming the prometheus_flask_exporter package, which provides PrometheusMetrics
from flask import Flask
from prometheus_flask_exporter import PrometheusMetrics

app = Flask(__name__)
metrics = PrometheusMetrics(app)

@app.route('/api')
def index():
    return 'ok'
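With prometheus_flask_exporter, PrometheusMetrics(app) exposes the metrics on /metrics of the same Flask port by default, so before debugging the ServiceMonitor it is worth confirming the endpoint itself answers (a sanity check using the deployment name from the manifest below):
kubectl port-forward deploy/webapp-deployment 5000:5000
curl http://localhost:5000/metrics   # should return Prometheus-format metrics from the Flask app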
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: dmitriy83/flask_one:latest
          imagePullPolicy: Always
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 5000
          env:
            - name: flask_url
              value: http://flasktwo-service:5003
      imagePullSecrets:
        - name: dockersecret
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: webapp
  ports:
    - name: service
      protocol: TCP
      port: 5000
      targetPort: 5000
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: monitoring-webapp
  labels:
    release: prometheus-monitoring
    app: webapp
spec:
  endpoints:
    - path: /metrics
      port: service
      targetPort: 5000
  namespaceSelector:
    matchNames:
      - default
  selector:
    matchLabels:
      app: webapp
Finally I figured it out. The issue was the port name. Please find a working solution below.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  labels:
    component: backend
    instance: app
    name: containers-my-app
  namespace: default
spec:
  selector:
    matchLabels:
      component: backend
      instance: app
      name: containers-my-app
  template:
    metadata:
      labels:
        component: backend
        instance: app
        name: containers-my-app
    spec:
      containers:
        - name: app
          image: dmitriy83/flask_one:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5000
              name: webapp
      imagePullSecrets:
        - name: myregistrykey
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp
  labels:
    component: backend
    instance: app
    name: containers-my-app
  namespace: default
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 5000
      protocol: TCP
      targetPort: webapp  # a key point: without it you will not have active targets in Prometheus
  selector:
    component: backend
    instance: app
    name: containers-my-app
Finally, monitor.yaml:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: webapp-super
  labels:
    component: backend
    instance: app
    name: containers-my-app
    release: kube-prometheus-stack  # You need to verify what your Prometheus release name is
  namespace: prometheus-monitoring  # choose the namespace your Prometheus is in
spec:
  namespaceSelector:
    matchNames:
      - default
  selector:
    matchLabels:
      component: backend
      instance: app
      name: containers-my-app
  endpoints:
    - port: http  # http is the port name that was put in service.yaml
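To see whether the target is actually picked up after applying these files, checks along these lines help; note the Prometheus service name depends on your Helm release, and kube-prometheus-stack-prometheus below is only an assumption:
kubectl get servicemonitor webapp-super -n prometheus-monitoring
# The Prometheus CR's serviceMonitorSelector must match the release label used above:
kubectl get prometheus -n prometheus-monitoring -o yaml | grep -A3 serviceMonitorSelector
# Then open the UI and look under Status -> Targets:
kubectl port-forward svc/kube-prometheus-stack-prometheus 9090:9090 -n prometheus-monitoring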

How to set dynamic IP to property file?

I had deployed 2 pods which needed to talk to another pod (let say Pod A).
Pod A requires Ip address of services of deployed pods.So i need to set those IP address in config property file needed for pod A.
As Ip address are dynamic i.e if pod crashed it get changed.So need to set it dynamically.
Currently I deployed 2 pods and do
kubectl get ep
and set those Ip address in config property file and build Dockerfile and push it and use that image for deployment.
This is my deplyment and svc file in which image djtijare/a2ipricing refers to config file
apiVersion: v1
kind: Service
metadata:
  name: spring-boot-demo-pricing
spec:
  ports:
    - name: spring-boot-pricing
      port: 8084
      targetPort: 8084
  selector:
    app: spring-boot-demo-pricing
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: spring-boot-demo-pricing
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: spring-boot-demo-pricing
    spec:
      containers:
        - name: spring-boot-demo-pricing
          image: djtijare/a2ipricing:v1
          imagePullPolicy: IfNotPresent
          # envFrom:
          #   - configMapRef:
          #       name: spring-boot-demo-config-map
          resources:
            requests:
              cpu: 100m
              memory: 1Gi
          ports:
            - containerPort: 8084
      nodeSelector:
        disktype: ssd
So how do I set the IPs of those 2 pods dynamically in the config file, then build and push the Docker image?
I think you should consider using headless Services.
Sometimes you don't need or want load-balancing and a single Service IP. In this case, you can create what are termed "headless" Services, by explicitly specifying "None" for the cluster IP (.spec.clusterIP).
You can use a headless Service to interface with other service discovery mechanisms, without being tied to Kubernetes' implementation. For example, a custom Operator could be built upon this API.
For such Services, a cluster IP is not allocated, kube-proxy does not handle these Services, and there is no load balancing or proxying done by the platform for them. How DNS is automatically configured depends on whether the Service has selectors defined.
For your example, if you set the Service to spec.clusterIP: None you can run nslookup -type=A spring-boot-demo-pricing, which will show you the IPs of the pods attached to this Service.
/ # nslookup -type=A spring-boot-demo-pricing
Server: 10.11.240.10
Address: 10.11.240.10:53
Name: spring-boot-demo-pricing.default.svc.cluster.local
Address: 10.8.2.20
Name: spring-boot-demo-pricing.default.svc.cluster.local
Address: 10.8.1.12
Name: spring-boot-demo-pricing.default.svc.cluster.local
Address: 10.8.1.13
And here are the YAML files I've used:
apiVersion: v1
kind: Service
metadata:
  name: spring-boot-demo-pricing
  labels:
    app: spring-boot-demo-pricing
spec:
  ports:
    - name: spring-boot-pricing
      port: 8084
      targetPort: 8084
  clusterIP: None
  selector:
    app: spring-boot-demo-pricing
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-boot-demo-pricing
  labels:
    app: spring-boot-demo-pricing
spec:
  replicas: 3
  selector:
    matchLabels:
      app: spring-boot-demo-pricing
  template:
    metadata:
      labels:
        app: spring-boot-demo-pricing
    spec:
      containers:
        - name: spring-boot-demo-pricing
          image: djtijare/a2ipricing:v1
          imagePullPolicy: IfNotPresent
          # envFrom:
          #   - configMapRef:
          #       name: spring-boot-demo-config-map
          resources:
            requests:
              cpu: 100m
              memory: 1Gi
          ports:
            - containerPort: 8084
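One way to avoid baking IPs into the image at all (not part of the original answer): since the headless Service name is stable, pod A can be given the DNS name instead of individual pod IPs, for example via environment variables in its Deployment; the variable names here are purely illustrative:
env:
  - name: PRICING_SERVICE_HOST
    value: spring-boot-demo-pricing.default.svc.cluster.local   # resolves to the current pod IPs
  - name: PRICING_SERVICE_PORT
    value: "8084"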