k8s access api from application in pod over subdomain via service - kubernetes

I would like to access the pods created by my backend via a subdomain. I have already set up a Pod YAML and a Service YAML.
I followed the official k8s docs, but it's not working; I just get "connection refused" when I visit http://playout-6297bbdceab3f039170509ee.default-subdomain.default.svc.domain.com
Here are both YAML files:
Service
apiVersion: v1
kind: Service
metadata:
  name: default-subdomain
spec:
  selector:
    name: playout
  clusterIP: None
  ports:
  - name: http
    port: 80
    targetPort: 7999
Pod
apiVersion: v1
kind: Pod
metadata:
  name: playout-6297bbd9eab3f039170509e8
  namespace: default
  labels:
    name: playout
spec:
  hostname: playout-6297bbd9eab3f039170509e8
  subdomain: default-subdomain
  containers:
  - image: eli4n/playout:playout-latest
    ports:
    - containerPort: 7999
    env:
    - name: PLAYOUT_STATION
      value: 6297bbceeab3f039170509df
    - name: PLAYOUT_CHANNEL
      value: 6297bbd9eab3f039170509e8
    name: playout
  imagePullSecrets:
  - name: regcred
I use k8s with Rancher via the Hetzner cloud driver.
By the way, I'm really new to k8s.
Thanks!
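For reference, this is the in-cluster DNS check I plan to run to see whether the per-pod record resolves at all (just a sketch, assuming the default cluster.local cluster domain and a throwaway busybox pod; the pod name comes from the Pod manifest above):
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup playout-6297bbd9eab3f039170509e8.default-subdomain.default.svc.cluster.local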

Related

Why can't I curl endpoint on GCP?

I am working my way through a Kubernetes tutorial using GKE, but it was written with Azure in mind, though it has been working OK so far.
The first part that has not worked has been the exercises regarding CoreDNS, which I understand does not exist on GKE; it's kube-dns only?
Is this why I can't get a pod endpoint with:
export PODIP=$(kubectl get endpoints hello-world-clusterip -o jsonpath='{ .subsets[].addresses[].ip}')
and then curl:
curl http://$PODIP:8080
My deployment is definitely on the right port:
ports:
- containerPort: 8080
And, in fact, the deployment for the tutorial is from a Google sample.
Is this to do with CoreDNS, or with authorisation / needing a service account? What can I do to make the curl request work?
Deployment yaml is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-customdns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world-customdns
  template:
    metadata:
      labels:
        app: hello-world-customdns
    spec:
      containers:
      - name: hello-world
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
        - 9.9.9.9
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-customdns
spec:
  selector:
    app: hello-world-customdns
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
Expanding on what Gari commented: when exposing a service outside your cluster, the service must be configured as NodePort or LoadBalancer, since ClusterIP only exposes the Service on a cluster-internal IP, which makes it reachable only from within the cluster. Cloud Shell is a shell environment for managing resources hosted on Google Cloud and is not part of the cluster, which is why you're not getting any response. To change this, you can update your YAML file as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-customdns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world-customdns
  template:
    metadata:
      labels:
        app: hello-world-customdns
    spec:
      containers:
      - name: hello-world
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
        - 9.9.9.9
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-customdns
spec:
  selector:
    app: hello-world-customdns
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
After redeploying your service, you can run kubectl get all -o wide in Cloud Shell to validate that a NodePort-type service has been created with a node port and target port.
To test your deployment, just send a curl request to the external IP of one of your nodes, including the node port that was assigned. The command should look something like:
curl <node_IP_address>:<node_port>
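As a concrete sketch (the service name comes from the manifest above; the rest is standard kubectl):
# External IPs of the nodes:
kubectl get nodes -o wide
# NodePort assigned to the service:
kubectl get service hello-world-customdns -o jsonpath='{.spec.ports[0].nodePort}'
# Then, from outside the cluster:
curl http://<node_IP_address>:<node_port>
Note that on GKE you may also need a firewall rule that allows inbound traffic to the node port.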

Use a common container registry in k8s deployments in a federated cluster

Setup
I have a federation of k8s clusters, each of which has a master and workers.
In the federation, each cluster has a different domain for accessing its image registry (e.g. myregistry-1, myregistry-2).
In other words, each cluster has its own registry.
Question
I don't want to change the domain for each cluster. Basically, I would like to create a common endpoint that maps to each cluster's internal registry.
Example: the deployment below is used on all clusters.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: harbor.default:5000/nginx:1.14.2
        ports:
        - containerPort: 80
I tried to implement "Services without selectors", created an Endpoints object, and updated deployment.yaml, but it didn't work.
harbor.yaml
apiVersion: v1
kind: Service
metadata:
  name: harbor-service
spec:
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 5000
harbor-endpoint.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: harbor-service
subsets:
- addresses:
  - ip: <INTERNAL_IP_OF_REGISTRY>
  ports:
  - port: 5000

Kibana on Kubernetes - how to point to ES container running on a different pod

I am learning Kubernetes by setting up two pods, running an Elasticsearch container and a Kibana container respectively.
My configuration file is able to set up both pods and create two services to access these applications from the host machine's web browser.
The issue is that I don't know how to make the Kibana container communicate with the ES application/pod.
Earlier, while learning Docker, I crafted a docker-compose app configuration, and now I am basically trying to do the same using Kubernetes (docker-compose config pasted below).
I came across a blog that suggested using a Deployment instead of a Pod. Again, I am not sure how one would make Kibana talk to ES.
Kubernetes configuration YAML:
apiVersion: v1
kind: Pod
metadata:
  name: pod-elasticsearch
  labels:
    app: myapp
spec:
  hostname: "es01-docker-local"
  containers:
  - name: myelasticsearch-container
    image: myelasticsearch
    imagePullPolicy: Never
    volumeMounts:
    - name: my-volume
      mountPath: /home/newuser
  volumes:
  - name: my-volume
    emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: myelasticsearch-service
spec:
  type: NodePort
  ports:
  - targetPort: 9200
    port: 9200
    nodePort: 30015
  selector:
    app: myapp
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-kibana
  labels:
    app: myapp
spec:
  containers:
  - name: mykibana-container
    image: mykibana
    imagePullPolicy: Never
    volumeMounts:
    - name: my-volume
      mountPath: /home/newuser
  volumes:
  - name: my-volume
    emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: mykibana-service
spec:
  type: NodePort
  ports:
  - targetPort: 5601
    port: 5601
    nodePort: 30016
  selector:
    app: myapp
For reference, below is the docker-compose file that I am trying to replicate on Kubernetes:
version: "2.2"
services:
elasticsearch:
image: myelasticsearch
container_name: myelasticsearch-container
restart: always
hostname: 'es01.docker.local'
ports:
- '9200:9200'
- '9300:9300'
volumes:
- myVolume:/home/newuser/
environment:
- discovery.type=single-node
kibana:
depends_on:
- elasticsearch
image: mykibana
container_name: mykibana-container
restart: always
ports:
- '5601:5601'
volumes:
- myVolume:/home/newuser/
environment:
ELASTICSEARCH_URL: http://es01:9200
ELASTICSEARCH_HOSTS: http://es01:9200
volumes:
myVolume:
networks:
myNetwork:
ES Pod description:
% kubectl describe pod/pod-elasticsearch
Name: pod-elasticsearch
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.3
Start Time: Sun, 10 Jan 2021 23:06:18 -0800
Labels: app=myapp
Annotations: <none>
Status: Running
IP: 10.x.0.yy
IPs:
IP: 10.x.0.yy
In Kubernetes, Pods/Deployments/DaemonSets... in the same cluster can communicate with each other without any problem, because the cluster has a flat network architecture. One way for these resources to call each other directly is by the name of the Kubernetes Service of each resource.
For example, any resource in the cluster can call your Kibana app directly by the Service name you gave it: mykibana-service.name-of-namespace.
So for the Kibana pod to communicate with Elasticsearch, it can use http://name-of-service-of-elasticsearch.name-of-namespace:9200. The namespace is default if you don't specify one when you create the Service, so that becomes http://name-of-service-of-elasticsearch.default:9200 or simply http://name-of-service-of-elasticsearch:9200.
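A quick way to check this from inside the cluster is to call the Elasticsearch service from the Kibana pod. This is only a sketch: the pod and service names come from your manifests, and it assumes curl is available in the image (otherwise try wget -qO-):
kubectl exec -it pod-kibana -- curl http://myelasticsearch-service:9200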
The concern you raised about which type of resource you have to create (Pod, Deployment, DaemonSet or StatefulSet) does not matter for these resources to communicate with each other.
If you are having trouble converting a docker-compose file into manifest files, you can start with Kompose: run kompose convert in the directory where your docker-compose file is located.
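For example (a sketch; adjust the file name to match yours):
kompose convert -f docker-compose.yml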
Here is a sample:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch
  namespace: default
spec:
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - image: myelasticsearch:yourtag # fix this
        name: elasticsearch
        ports:
        - containerPort: 9200
        - containerPort: 9300
        volumeMounts:
        - mountPath: /home/newuser/
          name: my-volume
      volumes:
      - name: my-volume
        emptyDir: {} # I wouldn't use emptyDir
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch
  namespace: default
spec:
  ports:
  - port: 9200
    name: "9200"
    targetPort: 9200
  - port: 9300
    name: "9300"
    targetPort: 9300
  selector:
    app: elasticsearch
  type: ClusterIP # you don't need to expose your service publicly
#####################################
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kibana
  name: kibana
  namespace: default
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200/ # elasticsearch is the same name as the Service resource name
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200
        image: mykibana:yourtagname # fix this
        name: kibana
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kibana
  name: kibana
  namespace: default
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: 5601
  selector:
    app: kibana
  type: NodePort
You can choose what is adequate for your app: for example, you can use a StatefulSet or a Deployment for Elasticsearch, and a Deployment for Kibana. You can also change the type of volume.
Also, the myNetwork you created in docker-compose can be translated into a NetworkPolicy, where you can isolate your resources (for example, an isolated myNetwork namespace), because resources created in the same cluster are not isolated from each other by default.
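As a rough sketch of that idea (the labels come from the sample manifests above; the policy name is just illustrative), a NetworkPolicy that only allows the Kibana pods to reach Elasticsearch on 9200 could look like:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-kibana-to-elasticsearch   # illustrative name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: elasticsearch        # the policy applies to the elasticsearch pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: kibana           # only pods labelled app: kibana may connect
    ports:
    - protocol: TCP
      port: 9200
Keep in mind that NetworkPolicy is only enforced if the cluster's network plugin supports it.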
Hope I helped
If you want to deploy Elasticsearch and Kibana on Kubernetes the usual way, then you have to take care of some core Elasticsearch cluster configuration, such as:
cluster.initial_master_nodes (added in 7.0)
network.host
network.publish_host
You would also have to set up network.host carefully so that it stays the same even after accidental pod restarts.
While deploying Kibana, you need to provide the Elasticsearch service and also manually configure the SSL certificates if Elasticsearch has SSL enabled.
So to install the Elastic Stack on Kubernetes, you should probably prefer Elastic Cloud on Kubernetes (ECK). The documentation provided by Elastic is easy to understand.
Elastic Cloud on Kubernetes (ECK) uses Kubernetes Operators to make installation easier, and it automatically takes care of the core cluster configuration.
The ECK installation will create a default user called "elastic", and you can retrieve its password from secrets. It also creates self-signed certificates, which can be found in secrets.
For deploying Kibana you can just provide "elasticsearchRef" in your YAML file and it will automatically configure the Elasticsearch endpoints. You can use the default "elastic" user to log in to Kibana.
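As an illustration, a minimal ECK manifest looks roughly like the ECK quickstart below. Treat it as a sketch: the name quickstart and the version are placeholders, and the ECK operator must already be installed in the cluster.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart              # placeholder name
spec:
  version: 8.5.0                # pick the version you need
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart              # placeholder name
spec:
  version: 8.5.0
  count: 1
  elasticsearchRef:
    name: quickstart            # points Kibana at the Elasticsearch resource above
The password for the "elastic" user then ends up in a secret (named something like quickstart-es-elastic-user).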

Connecting to redis pod from another pod in the same cluster

In my cluster I have a nodejs application pod and a redis pod, and I am trying to connect to redis from nodejs, but I am getting the following error:
[ioredis] Unhandled error event: Error: getaddrinfo ENOTFOUND redis at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:60:26)
It is worth noting that in my Node.js app I am pointing at redis:6379, and if I change the pointer to localhost:6379, I get an ECONNREFUSED error.
My redis deployment yaml looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis:5.0.4
    command:
    - redis-server
    - '/redis-master/redis.conf'
    env:
    - name: MASTER
      value: 'true'
    ports:
    - containerPort: 6379
    resources:
      limits:
        cpu: '0.1'
    volumeMounts:
    - mountPath: /redis-master-data
      name: data
    - mountPath: /redis-master
      name: config
  volumes:
  - name: data
    emptyDir: {}
  - name: config
    configMap:
      name: redis-config
      items:
      - key: redis-config
        path: redis.conf
My redis service YAML is the following:
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
And the service of my app looks like this:
apiVersion: v1
kind: Service
metadata:
  name: bff
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '8080'
  labels:
    app: bff
spec:
  ports:
  - name: external
    port: 80
    targetPort: web
    protocol: TCP
  - name: metrics
    port: 8080
    targetPort: metrics
    protocol: TCP
  selector:
    app: bff
I have tried following other answers given to similar questions, but they do not seem to work in my case.
The root cause is that your Service's selector definition doesn't match your pod's labels.
Quote from the Kubernetes Service documentation:
The controller for the Service selector continuously scans for Pods that match its selector, and then POSTs any updates to an Endpoint object also named “my-service”.
See this link for the full document.
In short, by following this official Kubernetes example, you should deploy the Deployment "redis-master-deployment.yaml" instead of a bare pod.
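As a sketch of what that looks like, derived from your redis-master Service selector above (fold the rest of your pod spec, i.e. the command, volumes and config map, into the template):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis              # these labels must match the redis-master Service selector
        role: master
        tier: backend
    spec:
      containers:
      - name: redis
        image: redis:5.0.4
        ports:
        - containerPort: 6379
Your Node.js app would then connect to redis-master:6379 (the Service name), rather than redis:6379.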

Kubernetes Front and Back end communication

I have been struggling for a few hours on this one. I have a very simple 2 tier dotnet core skeleton app (mvc and webapi) hosted on Azure using Kubernetes with Windows as the orchestrator.
The deployment works fine and I can pass basic environment variables over. The challenge I have is that I cannot determine how to pass the backend service IP address over to the frontend variables.
If I stage the deployments, I can manually pass the exposed IP of the backend into the frontend. Ideally, though, this needs to be deployed as a service.
Any help will be greatly appreciated.
Deployment commands:
1 - kubectl create -f backend-deploy.yaml
2 - kubectl create -f backend-service.yaml
3 - kubectl create -f frontend-deploy.yaml
4 - kubectl create -f frontend-service.yaml
backend-deploy.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: acme
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: acme-app
        tier: backend
    spec:
      containers:
      - name: backend-container
        image: "some/image"
        env:
        - name: Config__AppName
          value: "Acme App"
        - name: Config__AppDescription
          value: "Just a backend application"
        - name: Config__AppVersion
          value: "1.0"
        - name: Config__CompanyName
          value: "Acme Trading Limited"
      imagePullSecrets:
      - name: supersecretkey
backend-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: acme-app
spec:
  selector:
    app: acme-app
    tier: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
frontend-deploy.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: acme-app
        tier: frontend
    spec:
      containers:
      - name: frontend-container
        image: "some/image"
        env:
        - name: Config__AppName
          value: "Acme App"
        - name: Config__AppDescription
          value: "Just a frontend application"
        - name: Config__AppVersion
          value: "1.0"
        - name: Config__AppTheme
          value: "fx-theme-black"
        - name: Config__ApiUri
          value: ***THIS IS WHERE I NEED THE BACKEND SERVICE IP***
        - name: Config__CompanyName
          value: "Acme Trading Limited"
      imagePullSecrets:
      - name: supersecretkey
frontend-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: frontend
spec:
  selector:
    app: acme
    tier: frontend
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
  type: LoadBalancer
If your backend service was created BEFORE the frontend pods, you should have the environment variables ACME_APP_SERVICE_HOST and ACME_APP_SERVICE_PORT inside the pods.
If your backend service was created AFTER the frontend pods, then delete the pods and wait for them to be recreated. The new pods should have those variables.
To check the environment variables, run:
$ kubectl exec podName -- env
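Alternatively, you can skip the injected variables entirely and reference the backend by its Service DNS name. A sketch, using the acme-app Service name from backend-service.yaml and assuming the frontend only needs to reach the backend from inside the cluster:
- name: Config__ApiUri
  value: "http://acme-app"     # resolves via cluster DNS to the acme-app Service on port 80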