I'm migrating our Swarm cluster to a k8s one, which means I need to rewrite all the compose files as k8s manifests. Everything was going smoothly until I reached the Redis compose file...
The compose file for Redis:
Yes, it's simple, because it's just for testing cache stuff during development...
version: "3"
services:
db:
image: redis:alpine
ports:
- "6380:6379"
deploy:
labels:
- traefik.frontend.rule=Host:our-redis-url.com
placement:
constraints:
- node.labels.so==linux
networks:
- traefik
networks:
traefik:
external: true
So, we have 4 nodes in that Swarm... my DNS entry (our-redis-url.com) points to one of them, and it works like a charm. I simply connect to Redis using that URL plus port 6380.
Now... I have created the same thing, but for k8s, as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-ms
namespace: prod
spec:
replicas: 1
selector:
matchLabels:
app: redis-ms
template:
metadata:
labels:
app: redis-ms
spec:
containers:
- name: redis-ms
image: redis:alpine
ports:
- containerPort: 6379
resources:
requests:
cpu: 250m
memory: 256Mi
limits:
cpu: 500m
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: redis-ms
namespace: prod
spec:
selector:
app: redis-ms
ports:
- protocol: TCP
port: 6380
targetPort: 6379
type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: redis-ms
namespace: prod
annotations:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: our-redis-url.com
http:
paths:
- backend:
service:
name: redis-ms
port:
number: 6380
path: /
pathType: Prefix
And that didn't work.
The pod runs, and from the logs I can see it's waiting for connections, BUT I don't know how to do the equivalent of the docker-compose trick (traefik.frontend.rule=Host:redis-ms.mstech.com.br to bind the URL and the port).
I have tried using the kompose tool to convert this compose file... That didn't work either, lol.
If anyone could give me some advice, or help me fix the problem, I'd be thankful.
I'm using k8s with Traefik as the ingress controller.
As mentioned in the comments, the Ingress system is only for HTTP traffic. Traefik does also support TCP and UDP traffic, but that's separate from the Ingress machinery and has to be configured through Traefik's more specific tools (either its custom resources or a config file). More commonly you would use a LoadBalancer-type Service, which creates a TCP load balancer in your cloud provider.
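If you want to stay with Traefik itself for this, a minimal sketch of its TCP custom resource might look like the following; it assumes a Traefik v2 install with the Traefik CRDs applied and a dedicated TCP entrypoint (here named redis) declared in Traefik's static configuration:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: redis-ms
  namespace: prod
spec:
  entryPoints:
    - redis                 # assumed TCP entrypoint from Traefik's static config
  routes:
    - match: HostSNI(`*`)   # plain (non-TLS) TCP can only match the wildcard SNI
      services:
        - name: redis-ms    # the ClusterIP Service defined above
          port: 6380
Clients would then connect to the node or load-balancer address on whatever port that entrypoint listens on, rather than through the HTTP Ingress.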
I found several pages with similar questions, and most answers say to whitelist your IP. However, I have allowed access from anywhere (0.0.0.0/0) in Atlas, and I have installed the latest version of mongoose (6.2.6), which is supposed to support the mongodb+srv protocol.
The connection works perfectly when I run locally using npm start, or even from a dockerized container. But when I deploy to a k8s cluster, I get an error saying:
querySrv ENOTFOUND _mongodb._tcp.mongodb-cluster0.zvnxj.mongodb.net
The deployment and service files are as follows:
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: ns-my-workflow-api
name: my-workflow-api
spec:
replicas: 2
selector:
matchLabels:
app: my-workflow-api
template:
metadata:
labels:
app: my-workflow-api
spec:
containers:
- name: my-workflow-api
image: "myname/my-workflow-api:1.0.0"
ports:
- containerPort: 3000
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: "256m"
The service.yaml has the contents:
apiVersion: v1
kind: Service
metadata:
namespace: ns-my-workflow-api
name: my-workflow-api
spec:
selector:
app: my-workflow-api
type: LoadBalancer
ports:
- name: http
port: 8000
targetPort: 3000
protocol: TCP
The namespace.yaml has the contents:
apiVersion: v1
kind: Namespace
metadata:
name: ns-my-workflow-api
I also tried the deployment.yaml with the DNS rule:
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: ns-my-workflow-api
name: my-workflow-api
spec:
replicas: 2
selector:
matchLabels:
app: my-workflow-api
template:
metadata:
labels:
app: my-workflow-api
spec:
dnsPolicy: Default # <------ this rule
containers:
- name: my-workflow-api
image: "myname/my-workflow-api:1.0.0"
ports:
- containerPort: 3000
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: "256m"
Once I changed the connection URL to the one for 2.0.14 or earlier, I was able to connect. That connection string starts with mongodb://....
While I have managed to make the connection work with the workaround of using an old-style connection string, and it does seem to be some sort of DNS resolution issue, how do I make the newer mongodb+srv protocol work when connecting to Atlas from inside the cluster? Thanks in advance.
I was able to solve it using this to start minikube:
minikube start --driver=docker
It seems there's some DNS resolution issue with the underlying Oracle VirtualBox driver (maybe a configuration and setup issue as well).
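If you want to confirm that it really is in-cluster SRV resolution failing, a throwaway debug pod can run the same lookup the driver does (the image comes from the Kubernetes DNS-debugging docs; the pod name is arbitrary):
kubectl run dns-test --rm -it --restart=Never \
  --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 -- \
  dig SRV _mongodb._tcp.mongodb-cluster0.zvnxj.mongodb.net
If that returns no SRV records while the same dig works on the host, the problem is the cluster DNS path (as with the VirtualBox driver here), not Atlas or mongoose.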
I'm learning Kubernetes by setting up two pods, one running an Elasticsearch container and the other running a Kibana container.
My configuration file is able to set up both pods, as well as create two services to access these applications from the host machine's web browser.
The issue is that I don't know how to make the Kibana container communicate with the ES application/pod.
Earlier, while learning Docker, I crafted a docker-compose configuration, and now I'm basically trying to do the same using Kubernetes (the docker-compose config is pasted below).
I came across a blog that suggested using a Deployment instead of a Pod. Again, I'm not sure how one would make Kibana talk to ES.
Kubernetes configuration YAML:
apiVersion: v1
kind: Pod
metadata:
name: pod-elasticsearch
labels:
app: myapp
spec:
hostname: "es01-docker-local"
containers:
- name: myelasticsearch-container
image: myelasticsearch
imagePullPolicy: Never
volumeMounts:
- name: my-volume
mountPath: /home/newuser
volumes:
- name: my-volume
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: myelasticsearch-service
spec:
type: NodePort
ports:
- targetPort: 9200
port: 9200
nodePort: 30015
selector:
app: myapp
---
apiVersion: v1
kind: Pod
metadata:
name: pod-kibana
labels:
app: myapp
spec:
containers:
- name: mykibana-container
image: mykibana
imagePullPolicy: Never
volumeMounts:
- name: my-volume
mountPath: /home/newuser
volumes:
- name: my-volume
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: mykibana-service
spec:
type: NodePort
ports:
- targetPort: 5601
port: 5601
nodePort: 30016
selector:
app: myapp
For reference, below is the docker-compose file that I am trying to replicate on Kubernetes:
version: "2.2"
services:
elasticsearch:
image: myelasticsearch
container_name: myelasticsearch-container
restart: always
hostname: 'es01.docker.local'
ports:
- '9200:9200'
- '9300:9300'
volumes:
- myVolume:/home/newuser/
environment:
- discovery.type=single-node
kibana:
depends_on:
- elasticsearch
image: mykibana
container_name: mykibana-container
restart: always
ports:
- '5601:5601'
volumes:
- myVolume:/home/newuser/
environment:
ELASTICSEARCH_URL: http://es01:9200
ELASTICSEARCH_HOSTS: http://es01:9200
volumes:
myVolume:
networks:
myNetwork:
ES Pod description:
% kubectl describe pod/pod-elasticsearch
Name: pod-elasticsearch
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.3
Start Time: Sun, 10 Jan 2021 23:06:18 -0800
Labels: app=myapp
Annotations: <none>
Status: Running
IP: 10.x.0.yy
IPs:
IP: 10.x.0.yy
In Kubernetes, Pods/Deployments/DaemonSets/etc. in the same cluster can communicate with each other with no problem, because the cluster has a flat network architecture. One way for these resources to call each other directly is by the name of each resource's Kubernetes Service.
For example, any resource in the cluster can reach your Kibana app directly via the Service name you gave it: mykibana-service.name-of-namespace.
So for the Kibana pod to communicate with Elasticsearch, it can use http://name-of-service-of-elasticsearch.name-of-namespace:9200. The namespace is default if you don't specify one when creating the Service, which gives http://name-of-service-of-elasticsearch.default:9200, or simply http://name-of-service-of-elasticsearch:9200 from within the same namespace.
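A quick way to check this from the setup in the question (assuming curl is present in the Kibana image; pod-kibana and myelasticsearch-service are the names used above):
kubectl exec -it pod-kibana -- curl -s http://myelasticsearch-service:9200
If that prints the Elasticsearch banner JSON, http://myelasticsearch-service:9200 is the value to put into ELASTICSEARCH_HOSTS for Kibana.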
The concern you raised about which type of resource to create (Pod, Deployment, DaemonSet or StatefulSet) does not matter as far as these resources communicating with each other is concerned.
If you're having problems converting a docker-compose file to manifest files, you can start with Kompose: run kompose convert in the directory where your docker-compose file is located.
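For example (the -f and -o flags point kompose at the input compose file and an output directory; adjust the names to yours):
kompose convert -f docker-compose.yml -o k8s/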
Here is a sample:
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: elasticsearch
name: elasticsearch
namespace: default
spec:
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- image: myelasticsearch:yourtag #fix this
name: elasticsearch
ports:
- containerPort: 9200
- containerPort: 9300
volumeMounts:
- mountPath: /home/newuser/
name: my-volume
volumes:
- name: my-volume
emptyDir: {} # I wouldn't use emptyDir here
---
apiVersion: v1
kind: Service
metadata:
labels:
app: elasticsearch
name: elasticsearch
namespace: default
spec:
ports:
- port: 9200
name: "9200"
targetPort: 9200
- port: 9300
name: "9300"
targetPort: 9300
selector:
app: elasticsearch
type: ClusterIP # you don't need to expose your service publicly
#####################################
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: kibana
name: kibana
namespace: default
spec:
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- env:
- name: ELASTICSEARCH_URL
value: http://elasticsearch:9200/ # elasticsearch is the same name as the Service resource
- name: ELASTICSEARCH_HOSTS
value: http://elasticsearch:9200
image: mykibana:yourtagname #fix this
name: kibana
---
apiVersion: v1
kind: Service
metadata:
labels:
app: kibana
name: kibana
namespace: default
spec:
ports:
- port: 5601
protocol: TCP
targetPort: 5601
selector:
app: kibana
type: NodePort
You can choose what's adequate for your app: for example, you can use a StatefulSet or a Deployment for Elasticsearch, and a Deployment for Kibana. You can also change the type of volume.
Also, the myNetwork network that you created in docker-compose can be translated into a NetworkPolicy, which lets you isolate your resources (for example in an isolated namespace), because resources created in the same cluster are not isolated from each other by default; see the sketch below.
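A minimal sketch of such a policy, assuming both Deployments keep the app: elasticsearch / app: kibana labels used above and you only want Kibana to reach Elasticsearch on 9200 (this requires a CNI plugin that enforces NetworkPolicy, e.g. Calico):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-kibana-to-elasticsearch
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: elasticsearch    # policy applies to the Elasticsearch pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: kibana   # only Kibana pods may connect
      ports:
        - protocol: TCP
          port: 9200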
Hope I helped
If you want to deploy Elasticsearch and Kibana on Kubernetes the usual way, then you have to take care of some core Elasticsearch cluster configuration, such as:
cluster.initial_master_nodes (added in 7.0)
network.host
network.publish_host
Also, you would have to carefully set up network.host so that it remains the same even after accidental pod restarts.
While deploying Kibana, you need to provide the Elasticsearch service and also manually configure the SSL certificates if Elasticsearch has SSL enabled.
So to install the Elastic Stack on Kubernetes, you should probably prefer Elastic Cloud on Kubernetes (ECK). The documentation provided by Elastic is easy to understand.
Elastic Cloud on Kubernetes (ECK) uses Kubernetes Operators to make installation easier, and it automatically takes care of core cluster configuration.
ECK installation will create a default user called "elastic" and you can retrieve its password from secrets. It also creates self-signed certificates which can be found in secrets.
For deploying Kibana you can just provide "elasticsearchRef" in your YAML file and it will automatically configure the Elasticsearch endpoints. You can use the default "elastic" user to log in to Kibana, as in the sketch below.
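A minimal sketch of the two ECK resources, assuming the ECK operator is already installed and using 8.5.0 purely as an example version:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.5.0
  nodeSets:
    - name: default
      count: 1
      config:
        node.store.allow_mmap: false   # convenient for dev clusters without vm.max_map_count tuning
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 8.5.0
  count: 1
  elasticsearchRef:
    name: quickstart   # ECK wires up the endpoints and certificates automatically
The generated password for the built-in elastic user then lives in the quickstart-es-elastic-user Secret.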
I have deployed 2 pods which need to talk to another pod (let's say Pod A).
Pod A requires the IP addresses of the services of the deployed pods, so I need to set those IP addresses in the config property file needed by Pod A.
As the IP addresses are dynamic (i.e. if a pod crashes, they change), I need to set them dynamically.
Currently I deploy the 2 pods, run
kubectl get ep
and then set those IP addresses in the config property file, build the Dockerfile, push it, and use that image for the deployment.
This is my deployment and svc file, in which the image djtijare/a2ipricing refers to the config file:
apiVersion: v1
kind: Service
metadata:
name: spring-boot-demo-pricing
spec:
ports:
- name: spring-boot-pricing
port: 8084
targetPort: 8084
selector:
app: spring-boot-demo-pricing
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: spring-boot-demo-pricing
spec:
replicas: 1
template:
metadata:
labels:
app: spring-boot-demo-pricing
spec:
containers:
- name: spring-boot-demo-pricing
image: djtijare/a2ipricing:v1
imagePullPolicy: IfNotPresent
# envFrom:
#- configMapRef:
# name: spring-boot-demo-config-map
resources:
requests:
cpu: 100m
memory: 1Gi
ports:
- containerPort: 8084
nodeSelector:
disktype: ssd
So how do I set the IPs of those 2 pods dynamically in the config file, and then build and push the Docker image?
I think you should consider using headless Services.
Sometimes you don’t need or want load-balancing and a single service IP. In this case, you can create what are termed “headless” Services, by explicitly specifying "None" for the cluster IP (.spec.clusterIP).
You can use a headless Service to interface with other service discovery mechanisms, without being tied to Kubernetes' implementation. For example, you could implement a custom Operator built upon this API.
For such Services, a cluster IP is not allocated, kube-proxy does not handle these services, and there is no load balancing or proxying done by the platform for them. How DNS is automatically configured depends on whether the service has selectors defined.
For your example, if you set the Service's spec.clusterIP to None, you can run nslookup -type=A spring-boot-demo-pricing, which will show you the IPs of the pods attached to this service.
/ # nslookup -type=A spring-boot-demo-pricing
Server: 10.11.240.10
Address: 10.11.240.10:53
Name: spring-boot-demo-pricing.default.svc.cluster.local
Address: 10.8.2.20
Name: spring-boot-demo-pricing.default.svc.cluster.local
Address: 10.8.1.12
Name: spring-boot-demo-pricing.default.svc.cluster.local
Address: 10.8.1.13
And here are the YAML manifests I've used:
apiVersion: v1
kind: Service
metadata:
name: spring-boot-demo-pricing
labels:
app: spring-boot-demo-pricing
spec:
ports:
- name: spring-boot-pricing
port: 8084
targetPort: 8084
clusterIP: None
selector:
app: spring-boot-demo-pricing
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: spring-boot-demo-pricing
labels:
app: spring-boot-demo-pricing
spec:
replicas: 3
selector:
matchLabels:
app: spring-boot-demo-pricing
template:
metadata:
labels:
app: spring-boot-demo-pricing
spec:
containers:
- name: spring-boot-demo-pricing
image: djtijare/a2ipricing:v1
imagePullPolicy: IfNotPresent
# envFrom:
#- configMapRef:
# name: spring-boot-demo-config-map
resources:
requests:
cpu: 100m
memory: 1Gi
ports:
- containerPort: 8084
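With the headless Service above, Pod A no longer needs pod IPs baked into the image: it can resolve the A records itself, or simply reference the stable DNS name in its config property file. For example (the property key here is purely hypothetical, standing in for wherever your config lists the endpoints):
# hypothetical entry in Pod A's config property file
pricing.endpoints=spring-boot-demo-pricing.default.svc.cluster.local:8084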
I have a Docker image which I run with:
docker run --name test -h test -p 9043:9043 -p 9443:9443 -d ibmcom/websphere-traditional:install
I am trying to put it into a Kubernetes deployment file, and I have this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: websphere
spec:
replicas: 1
template:
metadata:
labels:
app: websphere
spec:
containers:
- name: websphere
image: ibmcom/websphere-traditional:install
ports:
- containerPort: 9443
resources:
requests:
memory: 500Mi
cpu: 0.5
limits:
memory: 500Mi
cpu: 0.5
imagePullPolicy: Always
my service.yaml
apiVersion: v1
kind: Service
metadata:
name: websphere
labels:
app: websphere
spec:
type: NodePort #Exposes the service as a node ports
ports:
- port: 9443
protocol: TCP
targetPort: 9443
selector:
app: websphere
May I have guidance on how to map 2 ports in my deployment file?
You can add as many ports as you need.
Here is your deployment.yml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: websphere
spec:
replicas: 1
template:
metadata:
labels:
app: websphere
spec:
containers:
- name: websphere
image: ibmcom/websphere-traditional:install
ports:
- containerPort: 9043
- containerPort: 9443
resources:
requests:
memory: 500Mi
cpu: 0.5
limits:
memory: 500Mi
cpu: 0.5
imagePullPolicy: IfNotPresent
Here is your service.yml:
apiVersion: v1
kind: Service
metadata:
name: websphere
labels:
app: websphere
spec:
type: NodePort #Exposes the service as a node ports
ports:
- port: 9043
name: hello
protocol: TCP
targetPort: 9043
nodePort: 30043
- port: 9443
name: privet
protocol: TCP
targetPort: 9443
nodePort: 30443
selector:
app: websphere
Check in your Kubernetes API server configuration what the nodePort range is (usually 30000-32767, but it's configurable).
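On a kubeadm-style cluster, one way to check is to look at the API server's static pod manifest on a control-plane node (the flag is simply absent when the default range is in use):
grep service-node-port-range /etc/kubernetes/manifests/kube-apiserver.yaml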
EDIT
If I remove the resources section from deployment.yml, it starts correctly (after about 5 minutes).
Here a snippet of the logs:
[9/10/18 8:08:06:004 UTC] 00000051 webcontainer I
com.ibm.ws.webcontainer.VirtualHostImpl addWebApplication SRVE0250I:
Web Module Default Web Application has been bound to
default_host[:9080,:80,:9443,:5060,:5061,:443].
Problems arise when connecting to it (I use an Ingress with Traefik), because of certificates (I suppose):
[9/10/18 10:15:08:413 UTC] 000000a4 SSLHandshakeE E SSLC0008E:
Unable to initialize SSL connection. Unauthorized access was denied
or security settings have expired. Exception is
javax.net.ssl.SSLException: Unrecognized SSL message, plaintext
connection?
To solve that (I didn't go further) this may help: SSLHandshakeE E SSLC0008E: Unable to initialize SSL connection. Unauthorized access was denied or security settings have expired
Trying to connect with port-forward and using the browser to connect, I land on this page.
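For completeness, the port-forward I mean is along these lines (forwarding both the admin and HTTPS ports of the Deployment defined above to localhost):
kubectl port-forward deployment/websphere 9043:9043 9443:9443
For traditional WebSphere images, the admin console is then typically reachable at https://localhost:9043/ibm/console.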
Well, in Kubernetes you can define your ports using the port field. This field comes under the ports configuration in your Deployment or Service, and you can define any number of ports you wish this way. The following example shows how to define two ports.
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app: MyApp
ports:
- name: http
protocol: TCP
port: 80
targetPort: 9376
- name: https
protocol: TCP
port: 443
targetPort: 9377
I'm running this command:
kubectl set image deployment/www-deployment VERSION_www=newImage
It works fine, but there's a 10-second window where the website returns 503, and I'm a perfectionist.
How can I configure Kubernetes to wait for the new image to be available before switching the ingress?
I'm using the nginx ingress controller from here:
gcr.io/google_containers/nginx-ingress-controller:0.8.3
And this yaml for the web server:
# Service and Deployment
apiVersion: v1
kind: Service
metadata:
name: www-service
spec:
ports:
- name: http-port
port: 80
protocol: TCP
targetPort: http-port
selector:
app: www
sessionAffinity: None
type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: www-deployment
spec:
replicas: 1
template:
metadata:
labels:
app: www
spec:
containers:
- image: myapp/www
imagePullPolicy: Always
livenessProbe:
httpGet:
path: /healthz
port: http-port
name: www
ports:
- containerPort: 80
name: http-port
protocol: TCP
resources:
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- mountPath: /etc/env-volume
name: config
readOnly: true
imagePullSecrets:
- name: cloud.docker.com-pull
volumes:
- name: config
secret:
defaultMode: 420
items:
- key: www.sh
mode: 256
path: env.sh
secretName: env-secret
The Docker image is based on a node.js server image.
/healthz is an endpoint in the webserver which returns ok. I thought that the liveness probe would make sure the server was up and ready before switching to the new version.
Thanks in advance!
Within the Pod lifecycle it's defined that:
The default state of Liveness before the initial delay is Success.
To make sure you don't run into issues, it's better to also configure a readinessProbe for your Pods, and consider configuring .spec.minReadySeconds for your Deployment.
You'll find details in the Deployment documentation
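A sketch of what that could look like for the manifest in the question, reusing the existing /healthz endpoint and the http-port name (the timing values are only examples):
readinessProbe:
  httpGet:
    path: /healthz
    port: http-port
  initialDelaySeconds: 5
  periodSeconds: 5
and, one level up in the Deployment spec:
minReadySeconds: 10
With a readiness probe in place, the rolling update only shifts Service endpoints to a new pod once it reports ready, which closes the 503 window.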