Apache Ignite error - Failed to retrieve Ignite pods IP addresses - Kubernetes

We followed the article https://ignite.apache.org/docs/latest/installation/kubernetes/amazon-eks-deployment#creating-service and set up the Ignite cluster in Kubernetes.
We were unable to establish a connection to this Ignite cluster using TcpDiscoveryKubernetesIpFinder from the client application, which is deployed in a different pod.
We saw the following error on the Ignite cluster node:
[SEVERE][main][TcpDiscoverySpi] Failed to get registered addresses from IP finder (retrying every 2000ms; change 'reconnectDelay' to configure the frequency of retries) [maxTimeout=0] class org.apache.ignite.spi.IgniteSpiException:
Failed to retrieve Ignite pods IP addresses. at org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:80)
Here is the spring.xml configuration:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
                        <property name="namespace" value="ignite-namespace" />
                        <property name="serviceName" value="ignite-service" />
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>
We specified the namespace and service name in the deployment files.
cluster-role.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ignite
  namespace: ignite-namespace
rules:
- apiGroups:
  - ""
  resources: # Here are the resources you can access
  - pods
  - endpoints
  verbs: # That is what you can do with them
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ignite
roleRef:
  kind: ClusterRole
  name: ignite
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: ignite
  namespace: ignite-namespace
ignite-service.yaml:
apiVersion: v1
kind: Service
metadata:
  # The name must be equal to KubernetesConnectionConfiguration.serviceName
  name: ignite-service
  # The namespace must be equal to KubernetesConnectionConfiguration.namespace
  namespace: ignite-namespace
  labels:
    app: ignite
spec:
  type: LoadBalancer
  ports:
  - name: rest
    port: 8080
    targetPort: 8080
  - name: thinclients
    port: 10800
    targetPort: 10800
  # The pod-to-service routing is required for apps that are not deployed in Kubernetes
  sessionAffinity: ClientIP
  selector:
    # Must be equal to the label set for pods.
    app: ignite
status:
  loadBalancer: {}
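Note that nothing above creates a ServiceAccount or wires it into the Ignite pod spec, which the Kubernetes IP finder needs in order to query the API; the related threads below cover this in detail. A minimal sketch, assuming the names used in the manifests above:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ignite
  namespace: ignite-namespace
---
# In the Ignite Deployment/StatefulSet (not shown in the question),
# the pod template would reference it:
#   spec:
#     template:
#       spec:
#         serviceAccountName: ignite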

Related

Unable to create clusters in Hazelcast over Kubernetes

I am trying to use Hazelcast on Kubernetes. For that, Docker is installed on Windows and a Kubernetes environment is simulated on Docker. Here is the config file hazelcast.xml:
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast
    xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.7.xsd"
    xmlns="http://www.hazelcast.com/schema/config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <properties>
        <property name="hazelcast.discovery.enabled">true</property>
    </properties>
    <network>
        <join>
            <multicast enabled="false" />
            <tcp-ip enabled="false"/>
            <discovery-strategies>
                <discovery-strategy enabled="true"
                    class="com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy">
                    <!--
                    <properties>
                        <property name="service-dns">cobrapp.default.endpoints.cluster.local</property>
                        <property name="service-dns-timeout">10</property>
                    </properties>
                    -->
                </discovery-strategy>
            </discovery-strategies>
        </join>
    </network>
</hazelcast>
The problem is that it is unable to form a cluster in the simulated environment. According to my deployment file it should create three members. Here is the deployment config file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        imagePullPolicy: Never
        image: testapp:latest
        ports:
        - containerPort: 5701
        - containerPort: 8085
---
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  selector:
    app: test
  type: LoadBalancer
  ports:
  - name: hazelcast
    port: 5701
  - name: test
    protocol: TCP
    port: 8085
    targetPort: 8085
The output after applying the deployment file:
Members [1] {
    Member [10.1.0.124]:5701 this
}
However, the expected output should show three members, as per the deployment file. Can anybody help?
Hazelcast's default multicast discovery doesn't work on Kubernetes out of the box. You need an additional plugin for that. Two alternatives are available: Kubernetes API and DNS lookup.
Please check the relevant documentation for more information.
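For the Kubernetes API mode with the 3.7-era config shown in the question, the join section would look roughly like this (a sketch; it assumes the hazelcast-kubernetes plugin JAR is on the classpath, and the service-name/namespace values are assumptions that must match the Service above):
<network>
    <join>
        <multicast enabled="false"/>
        <tcp-ip enabled="false"/>
        <discovery-strategies>
            <discovery-strategy enabled="true"
                class="com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy">
                <properties>
                    <!-- Assumed: the Service that selects the Hazelcast pods. -->
                    <property name="service-name">test-service</property>
                    <property name="namespace">default</property>
                </properties>
            </discovery-strategy>
        </discovery-strategies>
    </join>
</network>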

No available Hazelcast instance "HazelcastCachingProvider.HAZELCAST_CONFIG_LOCATION"

Any idea about this error?
No available Hazelcast instance. Please specify your Hazelcast configuration file path via "HazelcastCachingProvider.HAZELCAST_CONFIG_LOCATION"
It works fine with this config in a local Kubernetes, but I always get this error when I set kubernetes = true and multicast = false. I'm trying to use it with Liberty on IBM Cloud Kubernetes.
<hazelcast xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.hazelcast.com/schema/config
           https://hazelcast.com/schema/config/hazelcast-config-3.12.xsd">
    <group>
        <name>cluster</name>
    </group>
    <network>
        <join>
            <multicast enabled="true"/>
            <kubernetes enabled="false"/>
        </join>
    </network>
</hazelcast>
server.xml
<httpSessionCache libraryRef="jCacheVendorLib"
    uri="file:${server.config.dir}hazelcast-config.xml" />
<library id="jCacheVendorLib">
    <file name="${shared.config.dir}/lib/global/hazelcast-3.12.6.jar" />
</library>
This is what I did:
I have a Docker image using Liberty; in the Liberty configuration I set the following to use Hazelcast:
<server>
    <featureManager>
        ...
        <feature>sessionCache-1.0</feature>
        ...
    </featureManager>
    ...
    <httpSessionCache libraryRef="jCacheVendorLib"
        uri="file:${server.config.dir}hazelcast-config.xml" />
    <library id="jCacheVendorLib">
        <file name="${shared.config.dir}/lib/global/hazelcast-3.12.6.jar" />
    </library>
    ...
</server>
Then I set the configuration in hazelcast-config.xml (the same file shown above). I only get the error when I set kubernetes=true and multicast=false. If I leave kubernetes=false and multicast=true, it works fine on my local Kubernetes, but Hazelcast can't find the other pods when I deploy it to the cloud (it looks like the IPs are on a different network).
Also, I ran the RBAC YAML.
And I ran the following YAML to deploy it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: employee-service
  labels:
    app: employee-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: employee-service
  template:
    metadata:
      labels:
        app: employee-service
    spec:
      containers:
      - name: myapp
        image: myapp
        ports:
        - name: http
          containerPort: 8080
        - name: multicast
          containerPort: 5701
---
apiVersion: v1
kind: Service
metadata:
  name: service
spec:
  type: NodePort
  selector:
    app: employee-service
  ports:
  - protocol: TCP
    port: 9080
    targetPort: 9080
    nodePort: 31234
If you use the Hazelcast Kubernetes plugin for the discovery, please make sure that you have configured RBAC, for example with the following command.
kubectl apply -f https://raw.githubusercontent.com/hazelcast/hazelcast-kubernetes/master/rbac.yaml
Please also make sure that the default parameters work for you (you run your Hazelcast in the same namespace, etc.).
If that does not help, please share the full StackTrace logs.
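For reference, with Hazelcast 3.12 the join section can use the built-in kubernetes element instead of multicast; a minimal sketch (the namespace and service-name values are assumptions to adjust to your deployment):
<network>
    <join>
        <multicast enabled="false"/>
        <kubernetes enabled="true">
            <!-- Assumed values; they must match your actual namespace and Service. -->
            <namespace>default</namespace>
            <service-name>service</service-name>
        </kubernetes>
    </join>
</network>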

Kubernetes, access service (Zookeeper)

I am trying to deploy custom NiFi instances working with an external ZooKeeper on Kubernetes (beginner).
Everything works except for the state management within NiFi.
I understood that I have to update the state-management.xml file with the right connect string:
<cluster-provider>
    <id>zk-provider</id>
    <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
    <property name="Connect String"></property>
    <property name="Root Node">/nifi</property>
    <property name="Session Timeout">10 seconds</property>
    <property name="Access Control">Open</property>
</cluster-provider>
I do not know how to get this connect string within Kubernetes. This is my service.yml for ZooKeeper:
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  selector:
    app: zk
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  selector:
    app: zk
  ports:
  - port: 2181
    name: client
For ZooKeeper leader election and so on, I used the following address:
zk-0.zk-hs.default.svc.cluster.local:2888:3888
But how do I access port 2181?
You can just access zk-cs.default.svc.cluster.local:2181.
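Applied to the state-management.xml from the question, that gives (a sketch; the host name assumes the zk-cs Service in the default namespace as shown above):
<cluster-provider>
    <id>zk-provider</id>
    <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
    <!-- The ClusterIP service zk-cs exposes the client port 2181 -->
    <property name="Connect String">zk-cs.default.svc.cluster.local:2181</property>
    <property name="Root Node">/nifi</property>
    <property name="Session Timeout">10 seconds</property>
    <property name="Access Control">Open</property>
</cluster-provider>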

Ignite not discoverable in kubernetes cluster with TcpDiscoveryKubernetesIpFinder

I am trying to make Ignite, deployed in Kubernetes, discoverable using TcpDiscoveryKubernetesIpFinder. I have used all the deployment configurations recommended in the Apache Ignite documentation to make it discoverable. The Ignite version is v2.6. When I try to access Ignite from another service inside the cluster (and namespace), it fails with the error below.
. . instance-14292nccv10-74997cfdff-kqdqh] Caused by:
java.io.IOException: Server returned HTTP response code: 403 for URL:
https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/my-namespace/endpoints/ignite-service
[instance-14292nccv10-74997cfdff-kqdqh] at
sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1894)
~[na:1.8.0_151] [instance-14292nccv10-74997cfdff-kqdqh] at
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
~[na:1.8.0_151] [instance-14292nccv10-74997cfdff-kqdqh] at
sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:263)
~[na:1.8.0_151] [instance-14292nccv10-74997cfdff-kqdqh] . .
My Ignite configuration to make it discoverable is as follows:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ignite-service
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ignite-service
  namespace: my-namespace
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - endpoints
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ignite-service
roleRef:
  kind: ClusterRole
  name: ignite-service
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: ignite-service
  namespace: my-namespace
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ignite-service-volume-claim-blr3
  namespace: my-namespace
spec:
  storageClassName: ssd
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Secret
metadata:
  name: ignite-files
  namespace: my-namespace
data:
  ignite-config.xml: PGJlYW5zIHhtbG5zID0gImh0dHA6Ly93d3cuc3ByaW5nZnJhbWV3b3JrLm9yZy9zY2hlbWEvYmVhbnMiCiAgICAgICB4bWxuczp4c2kgPSAiaHR0cDovL3d3dy53My5vcmcvMjAwMS9YTUxTY2hlbWEtaW5zdGFuY2UiCiAgICAgICB4bWxuczp1dGlsID0gImh0dHA6Ly93d3cuc3ByaW5nZnJhbWV3b3JrLm9yZy9zY2hlbWEvdXRpbCIKICAgICAgIHhzaTpzY2hlbWFMb2NhdGlvbiA9ICIKICAgICAgIGh0dHA6Ly93d3cuc3ByaW5nZnJhbWV3b3JrLm9yZy9zY2hlbWEvYmVhbnMKICAgICAgIGh0dHA6Ly93d3cuc3ByaW5nZnJhbWV3b3JrLm9yZy9zY2hlbWEvYmVhbnMvc3ByaW5nLWJlYW5zLnhzZAogICAgICAgaHR0cDovL3d3dy5zcHJpbmdmcmFtZXdvcmsub3JnL3NjaGVtYS91dGlsCiAgICAgICBodHRwOi8vd3d3LnNwcmluZ2ZyYW1ld29yay5vcmcvc2NoZW1hL3V0aWwvc3ByaW5nLXV0aWwueHNkIj4KCiAgICA8YmVhbiBjbGFzcyA9ICJvcmcuYXBhY2hlLmlnbml0ZS5jb25maWd1cmF0aW9uLklnbml0ZUNvbmZpZ3VyYXRpb24iPgogICAgICAgIDxwcm9wZXJ0eSBuYW1lID0gImRpc2NvdmVyeVNwaSI+CiAgICAgICAgICAgIDxiZWFuIGNsYXNzID0gIm9yZy5hcGFjaGUuaWduaXRlLnNwaS5kaXNjb3ZlcnkudGNwLlRjcERpc2NvdmVyeVNwaSI+CiAgICAgICAgICAgICAgICA8cHJvcGVydHkgbmFtZSA9ICJpcEZpbmRlciI+CiAgICAgICAgICAgICAgICAgICAgPGJlYW4gY2xhc3MgPSAib3JnLmFwYWNoZS5pZ25pdGUuc3BpLmRpc2NvdmVyeS50Y3AuaXBmaW5kZXIua3ViZXJuZXRlcy5UY3BEaXNjb3ZlcnlLdWJlcm5ldGVzSXBGaW5kZXIiPgogICAgICAgICAgICAgICAgICAgICAgICA8cHJvcGVydHkgbmFtZT0ibmFtZXNwYWNlIiB2YWx1ZT0ibXktbmFtZXNwYWNlIi8+CiAgICAgICAgICAgICAgICAgICAgICAgIDxwcm9wZXJ0eSBuYW1lPSJzZXJ2aWNlTmFtZSIgdmFsdWU9Imlnbml0ZS1zZXJ2aWNlIi8+CiAgICAgICAgICAgICAgICAgICAgPC9iZWFuPgogICAgICAgICAgICAgICAgPC9wcm9wZXJ0eT4KICAgICAgICAgICAgPC9iZWFuPgogICAgICAgIDwvcHJvcGVydHk+CiAgICAgICAgPCEtLSBFbmFibGluZyBBcGFjaGUgSWduaXRlIG5hdGl2ZSBwZXJzaXN0ZW5jZS4gLS0+CiAgICAgICAgPHByb3BlcnR5IG5hbWUgPSAiZGF0YVN0b3JhZ2VDb25maWd1cmF0aW9uIj4KICAgICAgICAgICAgPGJlYW4gY2xhc3MgPSAib3JnLmFwYWNoZS5pZ25pdGUuY29uZmlndXJhdGlvbi5EYXRhU3RvcmFnZUNvbmZpZ3VyYXRpb24iPgogICAgICAgICAgICAgICAgPHByb3BlcnR5IG5hbWUgPSAiZGVmYXVsdERhdGFSZWdpb25Db25maWd1cmF0aW9uIj4KICAgICAgICAgICAgICAgICAgICA8YmVhbiBjbGFzcyA9ICJvcmcuYXBhY2hlLmlnbml0ZS5jb25maWd1cmF0aW9uLkRhdGFSZWdpb25Db25maWd1cmF0aW9uIj4KICAgICAgICAgICAgICAgICAgICAgICAgPHByb3BlcnR5IG5hbWUgPSAicGVyc2lzdGVuY2VFbmFibGVkIiB2YWx1ZSA9ICJ0cnVlIi8+CiAgICAgICAgICAgICAgICAgICAgPC9iZWFuPgogICAgICAgICAgICAgICAgPC9wcm9wZXJ0eT4KICAgICAgICAgICAgICAgIDxwcm9wZXJ0eSBuYW1lID0gInN0b3JhZ2VQYXRoIiB2YWx1ZSA9ICIvZGF0YS9pZ25pdGUvc3RvcmFnZSIvPgogICAgICAgICAgICAgICAgPHByb3BlcnR5IG5hbWUgPSAid2FsUGF0aCIgdmFsdWUgPSAiL2RhdGEvaWduaXRlL2RiL3dhbCIvPgogICAgICAgICAgICAgICAgPHByb3BlcnR5IG5hbWUgPSAid2FsQXJjaGl2ZVBhdGgiIHZhbHVlID0gIi9kYXRhL2lnbml0ZS9kYi93YWwvYXJjaGl2ZSIvPgogICAgICAgICAgICA8L2JlYW4+CiAgICAgICAgPC9wcm9wZXJ0eT4KICAgIDwvYmVhbj4KPC9iZWFucz4=
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
  # Name of the Ignite Service used by the Kubernetes IP finder.
  # The name must be equal to TcpDiscoveryKubernetesIpFinder.serviceName.
  name: ignite-service
  namespace: my-namespace
spec:
  clusterIP: None # custom value.
  ports:
  - port: 9042 # custom value.
  selector:
    # Must be equal to one of the labels set in the Ignite pods'
    # deployment configuration.
    app: ignite-service
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # Custom Ignite cluster's name.
  name: ignite-service
  namespace: my-namespace
spec:
  # The number of Ignite pods to be started by Kubernetes initially.
  replicas: 1
  template:
    metadata:
      labels:
        # This label has to be added to the selector's section of
        # ignite-service.yaml so that the Kubernetes Ignite lookup service
        # can easily track all Ignite pods deployed so far.
        app: ignite-service
    spec:
      serviceAccountName: ignite-service
      volumes:
      # Custom name for the storage that holds Ignite's configuration,
      # which is example-kube.xml.
      - name: ignite-storage
        persistentVolumeClaim:
          # Must be equal to the PersistentVolumeClaim created before.
          claimName: ignite-service-volume-claim-blr3
      - name: ignite-files
        secret:
          secretName: ignite-files
      containers:
      # Custom Ignite pod name.
      - name: ignite-node
        # Ignite Docker image. The Kubernetes IP finder is supported starting
        # from Apache Ignite 2.6.0.
        image: apacheignite/ignite:2.6.0
        lifecycle:
          postStart:
            exec:
              command: ['/bin/sh', '/opt/ignite/apache-ignite-fabric/bin/control.sh', '--activate']
        env:
        # Ignite's Docker image parameter. Adding the JAR file that
        # contains the TcpDiscoveryKubernetesIpFinder implementation.
        - name: OPTION_LIBS
          value: ignite-kubernetes
        # Ignite's Docker image parameter. Passing the Ignite configuration
        # to use for an Ignite pod.
        - name: CONFIG_URI
          value: file:///etc/ignite-files/ignite-config.xml
        - name: ENV
          value: my-namespace
        ports:
        # Ports to open.
        # Might be optional depending on your Kubernetes environment.
        - containerPort: 11211 # REST port number.
        - containerPort: 47100 # communication SPI port number.
        - containerPort: 47500 # discovery SPI port number.
        - containerPort: 49112 # JMX port number.
        - containerPort: 10800 # SQL port number.
        volumeMounts:
        # Mounting the storage with the Ignite configuration.
        - mountPath: "/data/ignite"
          name: ignite-storage
        - name: ignite-files
          mountPath: "/etc/ignite-files"
I saw some links on Stack Overflow with a similar issue and followed the proposed solutions, but that doesn't work either. Any pointers on this would be of great help!
According to the URL, the IP finder tries to use a service named ignite, while you created it with the name ignite-service.
You should provide both namespace and service name in the IP finder configuration:
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
    <property name="namespace" value="my-namespace"/>
    <property name="serviceName" value="ignite-service"/>
</bean>
You need to make sure you have the following locked down and handled (see the verification sketch after this list):
Creation of your namespace in Kubernetes
Creation of your service account in Kubernetes
Permissions set for your service account in your namespace in your cluster
Service account permissions:
https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions
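One way to verify the last point (a sketch; the account and namespace names assume the manifests from the question):
# Prints "yes" if the service account may read the endpoints object.
kubectl auth can-i get endpoints \
  --as=system:serviceaccount:my-namespace:ignite-service \
  -n my-namespace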

How to setMasterUrl in Ignite XML config for Kubernetes IPFinder

Using test config with Ignite 2.4 and k8s 1.9:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd
       http://www.springframework.org/schema/util
       http://www.springframework.org/schema/util/spring-util.xsd">
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder"/>
                </property>
            </bean>
        </property>
    </bean>
</beans>
Unable to find Kubernetes API Server at https://kubernetes.default.svc.cluster.local:443
Can I set the API Server URL in the XML config file? How?
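(For the title question: TcpDiscoveryKubernetesIpFinder exposes a masterUrl property, so the API server URL can be set in the XML. A sketch; the value shown is the finder's default:)
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
    <!-- Defaults to https://kubernetes.default.svc.cluster.local:443 -->
    <property name="masterUrl" value="https://kubernetes.default.svc.cluster.local:443"/>
</bean>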
@Denis was right.
Kubernetes uses the RBAC access-control system, and you need to authorize your pod to access the API.
For that, you need to add a service account to your pod.
To do that, you need to:
Create a service account and set a role for it:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ignite
  namespace: <Your namespace>
I am not sure that permission to access only pods will be enough for Ignite, but if not, you can add as many permissions as you want. Here is an example of different kinds of roles with a large list of permissions. So now we create a ClusterRole for your app:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ignite
  namespace: <Your namespace>
rules:
- apiGroups:
  - ""
  resources:
  - pods # Here are the resources you can access
  verbs: # That is what you can do with them
  - get
  - list
  - watch
Create a binding for that role:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: ignite
roleRef:
  kind: ClusterRole
  name: ignite
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: ignite
  namespace: <Your namespace>
Now, you need to associate the ServiceAccount with the pods running your application:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  ....
spec:
  template:
    spec:
      serviceAccountName: ignite
After that, your application will have access to the K8s API. P.S. Do not forget to change <Your namespace> to the namespace where you are running Ignite.
Platform versions
Kubernetes: v1.8
Ignite: v2.4
@Anton Kostenko's design is mostly right, but here's a refined suggestion that works and grants the least access privileges to Ignite.
If you're using a Deployment to manage Ignite, then all of your Pods will launch within a single namespace. Therefore, you should really use a Role and a RoleBinding to grant API access to the service account associated with your deployment.
The TcpDiscoveryKubernetesIpFinder only needs access to the endpoints for the headless service that selects your Ignite pods. The following 2 manifests will grant that access.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ignite-endpoint-access
  namespace: <your-ns>
  labels:
    app: ignite
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  resourceNames: ["<your-headless-svc>"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ignite-role-binding
  namespace: <your-ns>
  labels:
    app: ignite
subjects:
- kind: ServiceAccount
  name: <your-svc-account>
  namespace: <your-ns>
roleRef:
  kind: Role
  name: ignite-endpoint-access
  apiGroup: rbac.authorization.k8s.io
Take a look at this thread: http://apache-ignite-users.70518.x6.nabble.com/Unable-to-connect-ignite-pods-in-Kubernetes-using-Ip-finder-td18009.html
The 403 error can be solved by granting more permissions to the service account.
Tested Version:
Kubernetes: v1.8
Ignite: v2.4
This is going to be a little bit more permissive.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ignite-rbac
subjects:
- kind: ServiceAccount
  name: default
  namespace: <namespace>
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
If you're getting 403 Unauthorized, then the service account that created your resources may not have sufficient permissions. You should update its permissions after you ensure that your namespace, service account, and deployments/replica sets are exactly the way you want them to be.
This link is very helpful for setting permissions for service accounts:
https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions
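To check whether a token actually grants the access the IP finder needs, one option is to replay its request from inside an Ignite pod (a sketch; substitute your namespace and service name):
# Read the service-account token mounted into the pod.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# Query the same endpoints resource the Kubernetes IP finder uses;
# a 403 here reproduces the RBAC problem, a 2xx means it is fixed.
curl -sSk -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/my-namespace/endpoints/ignite-service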