Kubernetes front-end and back-end communication

I have been struggling for a few hours on this one. I have a very simple two-tier dotnet core skeleton app (MVC and Web API) hosted on Azure, using Kubernetes with Windows as the orchestrator.
The deployment works fine and I can pass basic environment variables over. The challenge is that I cannot determine how to pass the backend service's IP address over to the frontend's variables.
If I stage the deployments, I can manually pass the exposed IP of the backend into the frontend. Ideally, though, this needs to be deployed as a service.
Any help will be greatly appreciated.
Deployment commands:
1 - kubectl create -f backend-deploy.yaml
2 - kubectl create -f backend-service.yaml
3 - kubectl create -f frontend-deploy.yaml
4 - kubectl create -f frontend-service.yaml
backend-deploy.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: acme
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: acme-app
        tier: backend
    spec:
      containers:
        - name: backend-container
          image: "some/image"
          env:
            - name: Config__AppName
              value: "Acme App"
            - name: Config__AppDescription
              value: "Just a backend application"
            - name: Config__AppVersion
              value: "1.0"
            - name: Config__CompanyName
              value: "Acme Trading Limited"
      imagePullSecrets:
        - name: supersecretkey
backend-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: acme-app
spec:
  selector:
    app: acme-app
    tier: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
frontend-deploy.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: acme-app
        tier: frontend
    spec:
      containers:
        - name: frontend-container
          image: "some/image"
          env:
            - name: Config__AppName
              value: "Acme App"
            - name: Config__AppDescription
              value: "Just a frontend application"
            - name: Config__AppVersion
              value: "1.0"
            - name: Config__AppTheme
              value: "fx-theme-black"
            - name: Config__ApiUri
              value: ***THIS IS WHERE I NEED THE BACKEND SERVICE IP***
            - name: Config__CompanyName
              value: "Acme Trading Limited"
      imagePullSecrets:
        - name: supersecretkey
frontend-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: frontend
spec:
  selector:
    app: acme-app
    tier: frontend
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 80
  type: LoadBalancer

If your backend service was created BEFORE the frontend pods, you should have the environment variables ACME_APP_SERVICE_HOST and ACME_APP_SERVICE_PORT inside the pods.
If your backend service was created AFTER the frontend pods, delete the pods and wait for them to be recreated; the new pods will have those variables.
To check the environment variables, run:
$ kubectl exec podName -- env
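The frontend deployment can then consume those variables directly, because Kubernetes expands $(VAR_NAME) references in an env value using any service environment variables. A minimal sketch, assuming the backend service keeps the name acme-app (so the injected variables are ACME_APP_SERVICE_HOST and ACME_APP_SERVICE_PORT):

          env:
            - name: Config__ApiUri
              value: "http://$(ACME_APP_SERVICE_HOST):$(ACME_APP_SERVICE_PORT)"

Alternatively, the cluster DNS name (http://acme-app from any pod in the same namespace) resolves to the same service and avoids the creation-order caveat entirely.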

Related

k8s access api from application in pod over subdomain via service

I would like to access the pods created by my backend via a subdomain; I have already set up a pod YAML and a service YAML.
I followed the official k8s docs, but it's not working: I just get "connection refused" when I visit http://playout-6297bbdceab3f039170509ee.default-subdomain.default.svc.domain.com
Here are both YAML files:
Service
apiVersion: v1
kind: Service
metadata:
  name: default-subdomain
spec:
  selector:
    name: playout
  clusterIP: None
  ports:
    - name: http
      port: 80
      targetPort: 7999
Pod
apiVersion: v1
kind: Pod
metadata:
  name: playout-6297bbd9eab3f039170509e8
  namespace: default
  labels:
    name: playout
spec:
  hostname: playout-6297bbd9eab3f039170509e8
  subdomain: default-subdomain
  containers:
    - name: playout
      image: eli4n/playout:playout-latest
      ports:
        - containerPort: 7999
      env:
        - name: PLAYOUT_STATION
          value: 6297bbceeab3f039170509df
        - name: PLAYOUT_CHANNEL
          value: 6297bbd9eab3f039170509e8
  imagePullSecrets:
    - name: regcred
I use k8s with Rancher via the Hetzner cloud driver.
By the way, I'm really new to k8s.
Thanks!
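(A quick way to confirm whether the headless service's DNS record exists at all is an ad-hoc lookup pod; a sketch assuming the default cluster.local cluster domain, so substitute your cluster's domain if it differs, since the URL above suggests a custom one. The image is the one used in the Kubernetes DNS-debugging docs:)

$ kubectl run dnsutils --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 \
    --restart=Never -it --rm -- \
    nslookup playout-6297bbd9eab3f039170509e8.default-subdomain.default.svc.cluster.local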

multiple kubernetes deployments using same global yaml as template

I have run into an issue.
My goal: create multiple nginx deployments using the same "template" file, and use kustomize to replace the container name. This is just an example, as in the next steps I will add/replace/remove lines (e.g. resources) from "nginx_template.yml" for different deployments. For now I just want to make the patches work so that multiple deployments are created :-) I am not even sure if the structure is correct.
The structure is:
base/nginx_template.yml
base/kustomization.yml
base/apps/nginx1/nginx1.yml
base/apps/nginx2/nginx2.yml
base/nginx_template.yml:
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nginx
  replicas: 1 # tells deployment to run 1 pod matching the template
  template: # create pods using the pod definition in this template
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: template
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
base/kustomization.yml:
resources:
  - nginx_template.yml
patches:
  - path: ./apps/nginx1/nginx1.yml
    target:
      kind: Deployment
  - path: ./apps/nginx2/nginx2.yml
    target:
      kind: Deployment
base/apps/nginx1/nginx1.yml:
- op: replace
  path: /spec/template/spec/containers/0/name
  value: nginx-1
base/apps/nginx2/nginx2.yml:
- op: replace
  path: /spec/template/spec/containers/0/name
  value: nginx-2
All it does now is create only nginx-2. Thank you for any help.
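(A quick way to inspect what kustomize actually renders, assuming kubectl 1.14 or later with kustomize built in; note that both patches above target the single Deployment named nginx, so the rendered output contains only one Deployment and the last patch wins:)

$ kubectl kustomize base/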

connect Postgres database in docker to app in Kubernetes

I'm new to Kubernetes and I'm trying to understand how to connect my webapp in Kubernetes to a Postgres database that lives outside the cluster (specifically, in Docker with IP address 172.17.0.2 and port 5432).
I tried to connect to the database through the env variable PS_DATABASE_URL in the Deployment section.
But it cannot find the mentioned URL for Postgres. How should this be done correctly?
webapp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: dmitriy83/flask_kuber
          ports:
            - containerPort: 5000
          env:
            - name: PS_DATABASE_URL
              value: postgresql://postgres:password@172.17.0.2:5432/db
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
      nodePort: 30100
I figured it out. It depends on the cloud provider. For this example I use the Amazon cloud, and the database on Amazon is an external service, so we must define it in a YAML file as an external service.
postgres_external.yaml
kind: Service
apiVersion: v1
metadata:
  name: postgres
spec:
  type: ExternalName
  externalName: db.cdmhjidhpqyu.us-east-2.rds.amazonaws.com
To connect to the external service, you need to reference it from the deployment.
webapp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: dmitriy83/flask_kuber
          ports:
            - containerPort: 5000
          env:
            - name: PS_DATABASE_URL
              value: postgresql://<username>:<password>@postgres:5432/db
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
      nodePort: 30100
Please note that in webapp.yaml, the env value postgresql://<username>:<password>@postgres:5432/db contains the hostname postgres: this is the name of the external service we defined in postgres_external.yaml.
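Note that type: ExternalName only works with a DNS name. To point a Service at a raw IP such as the original 172.17.0.2, the usual pattern is a selector-less Service plus a manually managed Endpoints object; a sketch, assuming the name postgres is kept:

kind: Service
apiVersion: v1
metadata:
  name: postgres
spec:
  ports:
    - port: 5432
      targetPort: 5432
---
kind: Endpoints
apiVersion: v1
metadata:
  name: postgres # must match the Service name
subsets:
  - addresses:
      - ip: 172.17.0.2
    ports:
      - port: 5432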

ERROR! no action detected in task, ansible

I am using Ansible version 2.5.1 with Python version 2.7.17, and I have installed OpenShift.
The playbook looks like this:
---
- hosts: node 1
  tasks:
    - name: Create a k8s namespace
      k8s:
        name: CC_Namespace
        api_version: v1
        kind: Namespace
        state: present

    # Deployment Frontend
    - name: Create a Frontend Deployment Object
      k8s:
        apiVersion: v1
        kind: Deployment
        metadata:
          name: nginx-frontend-deployment
          labels:
            app: nginx
        spec:
          replicas: 4
          selector:
            matchLabels:
              app: nginx
          template:
            metadata:
              labels:
                app: nginx
            spec:
              containers:
                - name: nginx
                  image: nginx:1.7.9
                  ports:
                    - containerPort: 80
                  livenessProbe:
                    exec:
                      command:
                        - /ready
                  readinessProbe:
                    exec:
                      command:
                        - /ready

    # Deployment Backend
    - name: Create a Backend Deployment Object
      k8s:
        apiVersion: v1
        kind: Deployment
        metadata:
          name: nginx-backend-deployment
          labels:
            app: nginx
        spec:
          replicas: 6
          selector:
            matchLabels:
              app: nginx
          template:
            metadata:
              labels:
                app: nginx
            spec:
              containers:
                - name: nginx
                  image: nginx:1.7.9 # change to Dockerfile
                  ports:
                    - containerPort: 80
                  livenessProbe:
                    exec:
                      command:
                        - /ready
                  readinessProbe:
                    exec:
                      command:
                        - /ready

    # Service Backend
    - name: Create a Backend Service Object
      k8s:
        apiVersion: v1
        kind: Service
        metadata:
          name: cc-backend-service
        spec:
          selector:
            app: CCApp
          ports:
            - protocol: TCP
              port: 80
          type: ClusterIP

    # Service Frontend
    - name: Create a Frontend Service Object
      k8s:
        apiVersion: v1
        kind: Service
        metadata:
          name: cc-frontend-service
        spec:
          selector:
            app: CCApp
          ports:
            - protocol: TCP
              port: 80
          type: NodePort
and this is the error:
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.

The error appears to have been in '/home/rocco/cc-webapp.yml': line 4, column 5, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

  tasks:
    - name: Create a k8s namespace
      ^ here
The minimum Ansible version that makes the k8s module available is 2.6 (reference).
No choice; you have to upgrade.
Note: I tested your playbook's syntax without any errors on Ansible 2.9.2.
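(A minimal check-and-upgrade sketch, assuming a pip-managed Ansible install; note that the k8s module also requires the openshift Python client on the controller:)

$ ansible --version
$ pip install --upgrade 'ansible>=2.6' openshift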

Kubernetes MySQL connection timeout

I've set up a Kubernetes deployment and service for MySQL. I cannot access the MySQL service from any pod using its DNS name... It just times out. Any other port refuses the connection immediately, but the port in my service configuration times out after ~10 seconds.
I am able to resolve the MySQL Pod DNS.
I cannot ping the host.
Service.yml
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    run: mysql-service
spec:
  ports:
    - port: 3306
      protocol: TCP
    - port: 3306
      protocol: UDP
  selector:
    run: mysql-service
Deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-service
  labels:
    app: mysql-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql-service
  template:
    metadata:
      labels:
        app: mysql-service
    spec:
      containers:
        - name: mysql-service
          image: mysql:5.5
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: some_password
            - name: MYSQL_DATABASE
              value: some_database
          ports:
            - containerPort: 3306
Your deployment (and more specifically its pod spec) says

  labels:
    app: mysql-service

but your service says

  selector:
    run: mysql-service

These don't match, so your service isn't attaching to the pod. You should also see this if you kubectl describe service mysql-service; the "endpoints" list will be empty.
Change the service's selector to match the pod's labels (or vice versa) and this should be better.
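A minimal sketch of the fix on the Service side, assuming the pod labels stay as they are:

spec:
  selector:
    app: mysql-service # now matches the pod template's labels

After applying it, kubectl get endpoints mysql-service should list the pod's IP.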