Failed to retrieve Ignite pods IP addresses - kubernetes

I am trying to run an Apache Ignite cluster on Google Kubernetes Engine.
After following the tutorial, here are my yaml files.
First I create a service:
ignite-service.yaml
apiVersion: v1
kind: Service
metadata:
  # Name of Ignite Service used by Kubernetes IP finder.
  # The name must be equal to TcpDiscoveryKubernetesIpFinder.serviceName.
  name: ignite
  namespace: default
spec:
  clusterIP: None # custom value.
  ports:
    - port: 9042 # custom value.
  selector:
    # Must be equal to one of the labels set in Ignite pods'
    # deployment configuration.
    app: ignite
kubectl create -f ignite-service.yaml
Second, I create a deployment for my Ignite nodes, ignite-deployment.yaml:
# An example of a Kubernetes configuration for Ignite pods deployment.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # Custom Ignite cluster's name.
  name: ignite-cluster
spec:
  # A number of Ignite pods to be started by Kubernetes initially.
  replicas: 2
  template:
    metadata:
      labels:
        app: ignite
    spec:
      containers:
        # Custom Ignite pod name.
        - name: ignite-node
          image: apacheignite/ignite:2.4.0
          env:
            - name: OPTION_LIBS
              value: ignite-kubernetes
            - name: CONFIG_URI
              value: https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube.xml
          ports:
            # Ports to open.
            # Might be optional depending on your Kubernetes environment.
            - containerPort: 11211 # REST port number.
            - containerPort: 47100 # communication SPI port number.
            - containerPort: 47500 # discovery SPI port number.
            - containerPort: 49112 # JMX port number.
            - containerPort: 10800 # SQL port number.
kubectl create -f ignite-deployment.yaml
After that I check the status of my pods, which are running in my case. However, when I check the logs of any of the pods, I get the following error:
java.io.IOException: Server returned HTTP response code: 403 for URL: https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/default/endpoints/ignite
Things I have tried:
I followed this link to make my cluster work. But in step 4, when I run the daemon yaml file, I get the following error:
error: error validating "daemon.yaml": error validating data: ValidationError(DaemonSet.spec.template.spec): missing required field "containers" in io.k8s.api.core.v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false
Can anybody point out the mistake I might be making here?
Thanks.

Step 1: kubectl apply -f ignite-service.yaml (with the file in your question)
Step 2: kubectl apply -f ignite-rbac.yaml
ignite-rbac.yaml is like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ignite
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ignite-endpoint-access
  namespace: default
  labels:
    app: ignite
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    resourceNames: ["ignite"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ignite-role-binding
  namespace: default
  labels:
    app: ignite
subjects:
  - kind: ServiceAccount
    name: ignite
roleRef:
  kind: Role
  name: ignite-endpoint-access
  apiGroup: rbac.authorization.k8s.io
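Optionally, before restarting the pods, you can verify that the Role and RoleBinding actually grant the ServiceAccount access to the endpoints object the IP finder queries; for example (using the names above):

# Should answer "yes" once the RBAC objects above are applied.
kubectl auth can-i get endpoints/ignite -n default \
  --as=system:serviceaccount:default:ignite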
Step 3: kubectl apply -f ignite-deployment.yaml (very similar to your file; I've only added one line, serviceAccount: ignite):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # Custom Ignite cluster's name.
  name: ignite-cluster
  namespace: default
spec:
  # A number of Ignite pods to be started by Kubernetes initially.
  replicas: 2
  template:
    metadata:
      labels:
        app: ignite
    spec:
      serviceAccount: ignite ## Added line
      containers:
        # Custom Ignite pod name.
        - name: ignite-node
          image: apacheignite/ignite:2.4.0
          env:
            - name: OPTION_LIBS
              value: ignite-kubernetes
            - name: CONFIG_URI
              value: https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube.xml
          ports:
            # Ports to open.
            # Might be optional depending on your Kubernetes environment.
            - containerPort: 11211 # REST port number.
            - containerPort: 47100 # communication SPI port number.
            - containerPort: 47500 # discovery SPI port number.
            - containerPort: 49112 # JMX port number.
            - containerPort: 10800 # SQL port number.
This should work fine. I've got this in the logs of the pod (kubectl logs -f ignite-cluster-xx-yy), showing the 2 Pods successfully locating each other:
[13:42:00] Ignite node started OK (id=f89698d6)
[13:42:00] Topology snapshot [ver=1, servers=1, clients=0, CPUs=1, offheap=0.72GB, heap=1.0GB]
[13:42:00] Data Regions Configured:
[13:42:00] ^-- default [initSize=256.0 MiB, maxSize=740.8 MiB, persistenceEnabled=false]
[13:42:01] Topology snapshot [ver=2, servers=2, clients=0, CPUs=2, offheap=1.4GB, heap=2.0GB]
[13:42:01] Data Regions Configured:
[13:42:01] ^-- default [initSize=256.0 MiB, maxSize=740.8 MiB, persistenceEnabled=false]
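If the pods start but still fail to find each other, a quick check (assuming the service name ignite in the default namespace, as above) is to look at the endpoints object the Kubernetes IP finder reads; it should list one address per running Ignite pod:

# The Kubernetes IP finder builds its peer address list from this endpoints object.
kubectl get endpoints ignite -n default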

Related

Why Kubernetes Services are created before Deployment/Pods?

If I have to deploy a workload on Kubernetes and also have to expose it as a service, I have to create a Deployment/Pod and a Service. If the Kubernetes offering is from a cloud provider and we are creating a LoadBalancer service, it makes sense to create the service before the workload, as the URL creation for the service takes time. But if Kubernetes is deployed on a non-cloud platform, there is no point in creating the Service before the workload.
So why do I have to create the service first and then the workload?
There is no requirement to create a service before a deployment or vice versa. You can create the deployment before the service or the service before the deployment.
If you create the deployment before the service, then the application packaged in the deployment will not be accessible externally until you create the LoadBalancer.
Conversely, if you create the LoadBalancer first, the traffic for the application will not be routed to the application as it hasn't been created yet, giving 503s to the caller.
You are declaring to Kubernetes how you want the state of the infrastructure to be, e.g. "I want a deployment and a service". Kubernetes will go off and create those, but they may not necessarily end up being created in a predictable order. For example, LoadBalancers take a while to be assigned IPs by your cloud provider, so even though the resource is created in the cluster, it's not actually getting any traffic.
As written in the official Kubernetes documentation, and contrary to what other users have told you, there is a specific case in which the service needs to be created BEFORE the pod.
https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
Note:
When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the client Pods come into existence. Otherwise, those client Pods won't have their environment variables populated.
If you only use DNS to discover the cluster IP for a Service, you don't need to worry about this ordering issue.
For example:
Create some pods, a deployment and services in a namespace:
namespace.yaml
# namespace
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: securedapp
spec: {}
status: {}
services.yaml
# expose api svc
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    type: api
  name: api-svc
  namespace: securedapp
spec:
  ports:
    - port: 90
      protocol: TCP
      targetPort: 80
  selector:
    type: api
  type: ClusterIP
status:
  loadBalancer: {}
---
# expose frontend-svc
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    type: secured
  name: frontend-svc
  namespace: securedapp
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    type: secured
  type: ClusterIP
status:
  loadBalancer: {}
pods-and-deploy.yaml
# create the pod for frontend
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    type: secured
  name: secured-frontend
  namespace: securedapp
spec:
  containers:
    - image: nginx
      name: secured-frontend
      resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
---
# create the pod for the api
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    type: api
  name: webapi
  namespace: securedapp
spec:
  containers:
    - image: nginx
      name: webapi
      resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
---
# create a deploy
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: sure-even-with-deploy
  name: sure-even-with-deploy
  namespace: securedapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sure-even-with-deploy
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: sure-even-with-deploy
    spec:
      containers:
        - image: nginx
          name: nginx
          resources: {}
status: {}
If you create all the resources together at the same time, for example by placing all these files in a folder and then applying it like this:
kubectl apply -f .
Then if we get the envs from one pod
k exec -it webapi -n securedapp -- env
you will obtain something like this:
# environment of api_svc
API_SVC_PORT_90_TCP=tcp://10.152.183.242:90
API_SVC_SERVICE_PORT=90
API_SVC_PORT_90_TCP_ADDR=10.152.183.242
API_SVC_SERVICE_HOST=10.152.183.242
API_SVC_PORT_90_TCP_PORT=90
API_SVC_PORT=tcp://10.152.183.242:90
API_SVC_PORT_90_TCP_PROTO=tcp
# environment of frontend
FRONTEND_SVC_SERVICE_HOST=10.152.183.87
FRONTEND_SVC_SERVICE_PORT=80
FRONTEND_SVC_PORT=tcp://10.152.183.87:280
FRONTEND_SVC_PORT_280_TCP_PORT=80
FRONTEND_SVC_PORT_280_TCP=tcp://10.152.183.87:280
FRONTEND_SVC_PORT_280_TCP_PROTO=tcp
FRONTEND_SVC_PORT_280_TCP_ADDR=10.152.183.87
Clear all the resources created:
kubectl delete -f .
Now, another try. This time we will do the same thing but "slowly", creating things one by one.
First, the ns:
k apply -f ns.yaml
then the pods
k apply -f pods.yaml
After a while create the services
k apply -f services.yaml
Now you will discover that if you get the envs from one pod
k exec -it webapi -n securedapp -- env
this time you will NOT have the environment variables of the services.
So, as mentioned in the k8s documentation, there is at least one case where it is necessary to create the svc BEFORE the pods: when you need the environment variables of your services.
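By contrast, DNS-based discovery does not depend on creation order; for example, assuming getent is available in the image, a lookup like this resolves the service whenever it exists, no matter when the pod was created:

k exec -it webapi -n securedapp -- getent hosts api-svc.securedapp.svc.cluster.local
# 10.152.183.242   api-svc.securedapp.svc.cluster.local   (your cluster IP will differ)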
Regards

Access redis by service name in Kubernetes

I created a redis deployment and service in Kubernetes.
I can access redis from another pod by service IP, but I can't access it by service name.
The redis yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: myapp-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      component: redis
  template:
    metadata:
      labels:
        component: redis
    spec:
      containers:
        - name: redis
          image: redis
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: myapp-ns
spec:
  type: ClusterIP
  selector:
    component: redis
  ports:
    - port: 6379
      targetPort: 6379
I applied your file, and I am able to ping and telnet to the service both from within the same namespace and from a different namespace. To test this, I created pods in the same namespace and in a different namespace and installed telnet and ping. Then I exec'ed into them and did the below tests:
Same Namespace
kubectl exec -it <same-namespace-pod> /bin/bash
# ping redis
PING redis.<redis-namespace>.svc.cluster.local (172.20.211.84) 56(84) bytes of data.
# telnet redis 6379
Trying 172.20.211.84...
Connected to redis.<redis-namespace>.svc.cluster.local.
Escape character is '^]'.
Different Namespace
kubectl exec -it <different-namespace-pod> /bin/bash
# ping redis.<redis-namespace>.svc.cluster.local
PING redis.test.svc.cluster.local (172.20.211.84) 56(84) bytes of data.
# telnet redis.<redis-namespace>.svc.cluster.local 6379
Trying 172.20.211.84...
Connected to redis.<redis-namespace>.svc.cluster.local.
Escape character is '^]'.
If you are not able to do that due to DNS resolution issues, you could look at /etc/resolv.conf in your pod to make sure it has the search prefixes svc.cluster.local and cluster.local.
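For example, something like this (with your own pod name) should show the expected search domains:

kubectl exec -it <same-namespace-pod> -- cat /etc/resolv.conf
# Typically contains something like:
# search <your-namespace>.svc.cluster.local svc.cluster.local cluster.local
# nameserver 10.96.0.10   (your cluster DNS IP may differ)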
I created a redis deployment and service in kubernetes, I can access redis from another pod by service ip, but I can't access it by service name
Keep in mind that you can use the bare Service name to access the backend Pods it exposes only from within the same namespace. Looking at your Deployment and Service yaml manifests, we can see they're deployed within the myapp-ns namespace. That means only a Pod deployed within this namespace can access your Service by using its name alone.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: myapp-ns ### 👈
spec:
  replicas: 1
  selector:
    matchLabels:
      component: redis
  template:
    metadata:
      labels:
        component: redis
    spec:
      containers:
        - name: redis
          image: redis
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: myapp-ns ### 👈
spec:
  type: ClusterIP
  selector:
    component: redis
  ports:
    - port: 6379
      targetPort: 6379
So if you deploy the following Pod:
apiVersion: v1
kind: Pod
metadata:
  name: redis-client
  namespace: myapp-ns ### 👈
spec:
  containers:
    - name: redis-client
      image: debian
you will be able to access your Service by its name, so the following commands (provided you've installed all required tools) will work:
redis-cli -h redis
telnet redis 6379
However, if your redis-client Pod is deployed to a completely different namespace, you will need to use the fully qualified domain name (FQDN), which is built according to the rule described here:
redis-cli -h redis.myapp-ns.svc.cluster.local
telnet redis.myapp-ns.svc.cluster.local 6379

Can we create a service to link two PODs from different Deployments?

My application has two deployments, each with a POD.
Can I create a Service to distribute load across these 2 PODs, which are part of different deployments?
If so, how?
Yes, it is possible to achieve. A good explanation of how to do it can be found in the Kubernetes documentation. However, keep in mind that both deployments should provide the same functionality, as the output should have the same format.
A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality. When created, each Service is assigned a unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the Service, and know that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service.
Based on the example from the documentation.
1. nginx Deployment. Keep in mind that a Deployment can have more than one label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      run: nginx
      env: dev
  replicas: 2
  template:
    metadata:
      labels:
        run: nginx
        env: dev
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
2. nginx-second Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-second
spec:
  selector:
    matchLabels:
      run: nginx
      env: prod
  replicas: 2
  template:
    metadata:
      labels:
        run: nginx
        env: prod
    spec:
      containers:
        - name: nginx-second
          image: nginx
          ports:
            - containerPort: 80
Now, to pair Deployments with Services, you have to use a selector based on the Deployments' labels. Below you can find 2 Service YAMLs: nginx-service, which points to both deployments, and nginx-service-1, which points only to the nginx-second deployment.
## Both Deployments
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
    - port: 80
      protocol: TCP
  selector:
    run: nginx
---
### To nginx-second deployment
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-1
spec:
  ports:
    - port: 80
      protocol: TCP
  selector:
    env: prod
You can verify that a service is bound to a deployment by checking its endpoints.
$ kubectl get pods -l run=nginx -o yaml | grep podIP
podIP: 10.32.0.9
podIP: 10.32.2.10
podIP: 10.32.0.10
podIP: 10.32.2.11
$ kk get ep nginx-service
NAME ENDPOINTS AGE
nginx-service 10.32.0.10:80,10.32.0.9:80,10.32.2.10:80 + 1 more... 3m33s
$ kk get ep nginx-service-1
NAME ENDPOINTS AGE
nginx-service-1 10.32.0.10:80,10.32.2.11:80 3m36s
Yes, you can do that.
Add a common label key pair to both Deployments' pod specs and use that common label as the selector in the service definition, as sketched below.
With such a service defined, requests will be load-balanced across all matching pods.
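A minimal sketch of that idea (hypothetical names; it assumes both Deployments' pod templates carry the shared label app: web and the containers listen on port 8080):

# Hypothetical shared label added to BOTH Deployments' pod templates:
#   labels:
#     app: web
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web          # the common label; matches pods from both deployments
  ports:
    - port: 80
      targetPort: 8080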

Ignite not discoverable in kubernetes cluster with TcpDiscoveryKubernetesIpFinder

I am trying to make Ignite, deployed in k8s, discoverable using TcpDiscoveryKubernetesIpFinder. I have also used all the deployment configurations recommended in the Apache Ignite documentation to make it discoverable. The Ignite version is 2.6. When I try to access Ignite from another service inside the cluster (and namespace), it fails with the error below.
. . instance-14292nccv10-74997cfdff-kqdqh] Caused by:
java.io.IOException: Server returned HTTP response code: 403 for URL:
https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/my-namespace/endpoints/ignite-service
[instance-14292nccv10-74997cfdff-kqdqh] at
sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1894)
~[na:1.8.0_151] [instance-14292nccv10-74997cfdff-kqdqh] at
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
~[na:1.8.0_151] [instance-14292nccv10-74997cfdff-kqdqh] at
sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:263)
~[na:1.8.0_151] [instance-14292nccv10-74997cfdff-kqdqh] . .
My Ignite configuration to make it discoverable is as follows:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ignite-service
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ignite-service
  namespace: my-namespace
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - endpoints
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ignite-service
roleRef:
  kind: ClusterRole
  name: ignite-service
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: ignite-service
    namespace: my-namespace
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ignite-service-volume-claim-blr3
  namespace: my-namespace
spec:
  storageClassName: ssd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Secret
metadata:
  name: ignite-files
  namespace: my-namespace
data:
ignite-config.xml: PGJlYW5zIHhtbG5zID0gImh0dHA6Ly93d3cuc3ByaW5nZnJhbWV3b3JrLm9yZy9zY2hlbWEvYmVhbnMiCiAgICAgICB4bWxuczp4c2kgPSAiaHR0cDovL3d3dy53My5vcmcvMjAwMS9YTUxTY2hlbWEtaW5zdGFuY2UiCiAgICAgICB4bWxuczp1dGlsID0gImh0dHA6Ly93d3cuc3ByaW5nZnJhbWV3b3JrLm9yZy9zY2hlbWEvdXRpbCIKICAgICAgIHhzaTpzY2hlbWFMb2NhdGlvbiA9ICIKICAgICAgIGh0dHA6Ly93d3cuc3ByaW5nZnJhbWV3b3JrLm9yZy9zY2hlbWEvYmVhbnMKICAgICAgIGh0dHA6Ly93d3cuc3ByaW5nZnJhbWV3b3JrLm9yZy9zY2hlbWEvYmVhbnMvc3ByaW5nLWJlYW5zLnhzZAogICAgICAgaHR0cDovL3d3dy5zcHJpbmdmcmFtZXdvcmsub3JnL3NjaGVtYS91dGlsCiAgICAgICBodHRwOi8vd3d3LnNwcmluZ2ZyYW1ld29yay5vcmcvc2NoZW1hL3V0aWwvc3ByaW5nLXV0aWwueHNkIj4KCiAgICA8YmVhbiBjbGFzcyA9ICJvcmcuYXBhY2hlLmlnbml0ZS5jb25maWd1cmF0aW9uLklnbml0ZUNvbmZpZ3VyYXRpb24iPgogICAgICAgIDxwcm9wZXJ0eSBuYW1lID0gImRpc2NvdmVyeVNwaSI+CiAgICAgICAgICAgIDxiZWFuIGNsYXNzID0gIm9yZy5hcGFjaGUuaWduaXRlLnNwaS5kaXNjb3ZlcnkudGNwLlRjcERpc2NvdmVyeVNwaSI+CiAgICAgICAgICAgICAgICA8cHJvcGVydHkgbmFtZSA9ICJpcEZpbmRlciI+CiAgICAgICAgICAgICAgICAgICAgPGJlYW4gY2xhc3MgPSAib3JnLmFwYWNoZS5pZ25pdGUuc3BpLmRpc2NvdmVyeS50Y3AuaXBmaW5kZXIua3ViZXJuZXRlcy5UY3BEaXNjb3ZlcnlLdWJlcm5ldGVzSXBGaW5kZXIiPgogICAgICAgICAgICAgICAgICAgICAgICA8cHJvcGVydHkgbmFtZT0ibmFtZXNwYWNlIiB2YWx1ZT0ibXktbmFtZXNwYWNlIi8+CiAgICAgICAgICAgICAgICAgICAgICAgIDxwcm9wZXJ0eSBuYW1lPSJzZXJ2aWNlTmFtZSIgdmFsdWU9Imlnbml0ZS1zZXJ2aWNlIi8+CiAgICAgICAgICAgICAgICAgICAgPC9iZWFuPgogICAgICAgICAgICAgICAgPC9wcm9wZXJ0eT4KICAgICAgICAgICAgPC9iZWFuPgogICAgICAgIDwvcHJvcGVydHk+CiAgICAgICAgPCEtLSBFbmFibGluZyBBcGFjaGUgSWduaXRlIG5hdGl2ZSBwZXJzaXN0ZW5jZS4gLS0+CiAgICAgICAgPHByb3BlcnR5IG5hbWUgPSAiZGF0YVN0b3JhZ2VDb25maWd1cmF0aW9uIj4KICAgICAgICAgICAgPGJlYW4gY2xhc3MgPSAib3JnLmFwYWNoZS5pZ25pdGUuY29uZmlndXJhdGlvbi5EYXRhU3RvcmFnZUNvbmZpZ3VyYXRpb24iPgogICAgICAgICAgICAgICAgPHByb3BlcnR5IG5hbWUgPSAiZGVmYXVsdERhdGFSZWdpb25Db25maWd1cmF0aW9uIj4KICAgICAgICAgICAgICAgICAgICA8YmVhbiBjbGFzcyA9ICJvcmcuYXBhY2hlLmlnbml0ZS5jb25maWd1cmF0aW9uLkRhdGFSZWdpb25Db25maWd1cmF0aW9uIj4KICAgICAgICAgICAgICAgICAgICAgICAgPHByb3BlcnR5IG5hbWUgPSAicGVyc2lzdGVuY2VFbmFibGVkIiB2YWx1ZSA9ICJ0cnVlIi8+CiAgICAgICAgICAgICAgICAgICAgPC9iZWFuPgogICAgICAgICAgICAgICAgPC9wcm9wZXJ0eT4KICAgICAgICAgICAgICAgIDxwcm9wZXJ0eSBuYW1lID0gInN0b3JhZ2VQYXRoIiB2YWx1ZSA9ICIvZGF0YS9pZ25pdGUvc3RvcmFnZSIvPgogICAgICAgICAgICAgICAgPHByb3BlcnR5IG5hbWUgPSAid2FsUGF0aCIgdmFsdWUgPSAiL2RhdGEvaWduaXRlL2RiL3dhbCIvPgogICAgICAgICAgICAgICAgPHByb3BlcnR5IG5hbWUgPSAid2FsQXJjaGl2ZVBhdGgiIHZhbHVlID0gIi9kYXRhL2lnbml0ZS9kYi93YWwvYXJjaGl2ZSIvPgogICAgICAgICAgICA8L2JlYW4+CiAgICAgICAgPC9wcm9wZXJ0eT4KICAgIDwvYmVhbj4KPC9iZWFucz4=
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
  # Name of Ignite Service used by Kubernetes IP finder.
  # The name must be equal to TcpDiscoveryKubernetesIpFinder.serviceName.
  name: ignite-service
  namespace: my-namespace
spec:
  clusterIP: None # custom value.
  ports:
    - port: 9042 # custom value.
  selector:
    # Must be equal to one of the labels set in Ignite pods'
    # deployment configuration.
    app: ignite-service
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # Custom Ignite cluster's name.
  name: ignite-service
  namespace: my-namespace
spec:
  # A number of Ignite pods to be started by Kubernetes initially.
  replicas: 1
  template:
    metadata:
      labels:
        # This label has to be added to the selector's section of
        # ignite-service.yaml so that the Kubernetes Ignite lookup service
        # can easily track all Ignite pods deployed so far.
        app: ignite-service
    spec:
      serviceAccountName: ignite-service
      volumes:
        # Custom name for the storage that holds Ignite's configuration
        # which is example-kube.xml.
        - name: ignite-storage
          persistentVolumeClaim:
            # Must be equal to the PersistentVolumeClaim created before.
            claimName: ignite-service-volume-claim-blr3
        - name: ignite-files
          secret:
            secretName: ignite-files
      containers:
        # Custom Ignite pod name.
        - name: ignite-node
          # Ignite Docker image. Kubernetes IP finder is supported starting from
          # Apache Ignite 2.6.0
          image: apacheignite/ignite:2.6.0
          lifecycle:
            postStart:
              exec:
                command: ['/bin/sh', '/opt/ignite/apache-ignite-fabric/bin/control.sh', '--activate']
          env:
            # Ignite's Docker image parameter. Adding the jar file that
            # contains the TcpDiscoveryKubernetesIpFinder implementation.
            - name: OPTION_LIBS
              value: ignite-kubernetes
            # Ignite's Docker image parameter. Passing the Ignite configuration
            # to use for an Ignite pod.
            - name: CONFIG_URI
              value: file:///etc/ignite-files/ignite-config.xml
            - name: ENV
              value: my-namespace
          ports:
            # Ports to open.
            # Might be optional depending on your Kubernetes environment.
            - containerPort: 11211 # REST port number.
            - containerPort: 47100 # communication SPI port number.
            - containerPort: 47500 # discovery SPI port number.
            - containerPort: 49112 # JMX port number.
            - containerPort: 10800 # SQL port number.
          volumeMounts:
            # Mounting the storage with the Ignite configuration.
            - mountPath: "/data/ignite"
              name: ignite-storage
            - name: ignite-files
              mountPath: "/etc/ignite-files"
I saw some links on Stack Overflow with a similar issue and followed the proposed solutions, but that doesn't work either. Any pointers on this will be of great help!
According to the URL, the IP finder tries to use a service named ignite, while you created it under the name ignite-service.
You should provide both namespace and service name in the IP finder configuration:
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
<property name="namespace" value="my-namespace"/>
<property name="serviceName" value="ignite-service"/>
</bean>
You need to make sure you have the following locked down and handled: creation of your namespace in Kubernetes, creation of your service account in Kubernetes, and permissions set for your service account in your namespace in your cluster. See the documentation on service account permissions:
https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions
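A quick way to sanity-check those three items (using the names from the question) is something like:

kubectl get namespace my-namespace
kubectl get serviceaccount ignite-service -n my-namespace
# Should answer "yes" if the ClusterRole/ClusterRoleBinding are in effect.
kubectl auth can-i get endpoints -n my-namespace \
  --as=system:serviceaccount:my-namespace:ignite-service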

How to get Kubernetes Ingress Port 80 working on baremetal single node cluster

I have a bare-metal Kubernetes (v1.11.0) cluster created with kubeadm and working fine without any issues. Networking is with Calico, and I made it a single-node cluster using the kubectl taint nodes command (single node is a requirement).
I need to run mydockerhub/sampleweb static website image on host port 80. Assume the IP address of the ubuntu server running this kubernetes is 192.168.8.10.
How to make my static website available on 192.168.8.10:80 or a hostname mapped to it on local DNS server? (Example: frontend.sampleweb.local:80). Later I need to run other services on different port mapped to another subdomain. (Example: backend.sampleweb.local:80 which routes to a service run on port 8080).
I need to know:
Can I achieve this without a load balancer?
What resources needed to create? (ingress, deployment, etc)
What additional configurations needed on the cluster? (network policy, etc)
Much appreciated if sample yaml files are provided.
I'm new to the Kubernetes world. I got sample Kubernetes deployments (like sock-shop) working end-to-end without any issues. I tried NodePort to access the service, but instead of running it on a different port I need to run it on exactly port 80 on the host. I tried many ingress solutions but they didn't work.
Screenshot of my setup:
I recently used traefik.io to configure a project with similar requirements to yours.
So I'll show a basic solution with traefik and ingresses.
I dedicated a whole namespace (you can use kube-system), called traefik, and created a Kubernetes ServiceAccount:
apiVersion: v1
kind: Namespace
metadata:
  name: traefik
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: traefik
  name: traefik-ingress-controller
The Traefik controller, which is invoked by ingress rules, requires a ClusterRole and its binding:
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    namespace: traefik
    name: traefik-ingress-controller
The Traefik controller will be deployed as a DaemonSet (i.e. by definition one per node in your cluster), and a Kubernetes service is dedicated to the controller:
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: traefik
  labels:
    k8s-app: traefik-ingress-lb
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
        - name: traefik-ingress-lb
          image: traefik
          ports:
            - name: http
              containerPort: 80
              hostPort: 80
            - name: admin
              containerPort: 8080
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
          args:
            - --api
            - --kubernetes
            - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  namespace: traefik
  name: traefik-ingress-service
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
The final part requires you to create a service for each microservice in your project; here is an example:
apiVersion: v1
kind: Service
metadata:
  namespace: traefik
  name: my-svc-1
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - port: 80
      targetPort: 8080
and also the ingress (set of rules) that will forward the request to the proper service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: traefik
  name: ingress-ms-1
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: my-address-url
      http:
        paths:
          - backend:
              serviceName: my-svc-1
              servicePort: 80
In this ingress I wrote a host URL; this will be the entry point into your cluster, so you need to resolve the name to your master K8S node. If you have more nodes that could be masters, then a load balancer is suggested (in this case the host URL will point to the LB).
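For a quick test without a DNS server (reusing the node IP 192.168.8.10 and the host name frontend.sampleweb.local from the question as examples), an /etc/hosts entry on the client machine is enough:

# /etc/hosts on the machine you browse from
192.168.8.10   frontend.sampleweb.local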
Take a look at the kubernetes.io documentation to get the Kubernetes concepts clear. traefik.io is also useful.
I hope this helps you.
In addition to the answer of Nicola Ben, you have to define externalIPs in your traefik service: just follow the steps of Nicola Ben and add an externalIPs section to the service "my-svc-1".
apiVersion: v1
kind: Service
metadata:
  namespace: traefik
  name: my-svc-1
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - port: 80
      targetPort: 8080
  externalIPs:
    - <IP_OF_A_NODE>
And you can define more than one externalIP.
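To find a node IP to put there, for example:

# The INTERNAL-IP / EXTERNAL-IP columns show usable node addresses.
kubectl get nodes -o wide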