Ignite not discoverable in Kubernetes cluster with TcpDiscoveryKubernetesIpFinder

I am trying to make Ignite, deployed in Kubernetes, discoverable using TcpDiscoveryKubernetesIpFinder. I have used all the deployment configurations recommended in the Apache Ignite documentation to make it discoverable. The Ignite version is 2.6. When I try to access Ignite from another service inside the cluster (and namespace), it fails with the error below.
... [instance-14292nccv10-74997cfdff-kqdqh] Caused by: java.io.IOException: Server returned HTTP response code: 403 for URL: https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/my-namespace/endpoints/ignite-service
[instance-14292nccv10-74997cfdff-kqdqh]   at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1894) ~[na:1.8.0_151]
[instance-14292nccv10-74997cfdff-kqdqh]   at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492) ~[na:1.8.0_151]
[instance-14292nccv10-74997cfdff-kqdqh]   at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:263) ~[na:1.8.0_151]
...
My Ignite configuration to make it discoverable is as follows:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ignite-service
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ignite-service
  namespace: my-namespace
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - endpoints
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ignite-service
roleRef:
  kind: ClusterRole
  name: ignite-service
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: ignite-service
  namespace: my-namespace
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ignite-service-volume-claim-blr3
  namespace: my-namespace
spec:
  storageClassName: ssd
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Secret
metadata:
  name: ignite-files
  namespace: my-namespace
data:
ignite-config.xml: PGJlYW5zIHhtbG5zID0gImh0dHA6Ly93d3cuc3ByaW5nZnJhbWV3b3JrLm9yZy9zY2hlbWEvYmVhbnMiCiAgICAgICB4bWxuczp4c2kgPSAiaHR0cDovL3d3dy53My5vcmcvMjAwMS9YTUxTY2hlbWEtaW5zdGFuY2UiCiAgICAgICB4bWxuczp1dGlsID0gImh0dHA6Ly93d3cuc3ByaW5nZnJhbWV3b3JrLm9yZy9zY2hlbWEvdXRpbCIKICAgICAgIHhzaTpzY2hlbWFMb2NhdGlvbiA9ICIKICAgICAgIGh0dHA6Ly93d3cuc3ByaW5nZnJhbWV3b3JrLm9yZy9zY2hlbWEvYmVhbnMKICAgICAgIGh0dHA6Ly93d3cuc3ByaW5nZnJhbWV3b3JrLm9yZy9zY2hlbWEvYmVhbnMvc3ByaW5nLWJlYW5zLnhzZAogICAgICAgaHR0cDovL3d3dy5zcHJpbmdmcmFtZXdvcmsub3JnL3NjaGVtYS91dGlsCiAgICAgICBodHRwOi8vd3d3LnNwcmluZ2ZyYW1ld29yay5vcmcvc2NoZW1hL3V0aWwvc3ByaW5nLXV0aWwueHNkIj4KCiAgICA8YmVhbiBjbGFzcyA9ICJvcmcuYXBhY2hlLmlnbml0ZS5jb25maWd1cmF0aW9uLklnbml0ZUNvbmZpZ3VyYXRpb24iPgogICAgICAgIDxwcm9wZXJ0eSBuYW1lID0gImRpc2NvdmVyeVNwaSI+CiAgICAgICAgICAgIDxiZWFuIGNsYXNzID0gIm9yZy5hcGFjaGUuaWduaXRlLnNwaS5kaXNjb3ZlcnkudGNwLlRjcERpc2NvdmVyeVNwaSI+CiAgICAgICAgICAgICAgICA8cHJvcGVydHkgbmFtZSA9ICJpcEZpbmRlciI+CiAgICAgICAgICAgICAgICAgICAgPGJlYW4gY2xhc3MgPSAib3JnLmFwYWNoZS5pZ25pdGUuc3BpLmRpc2NvdmVyeS50Y3AuaXBmaW5kZXIua3ViZXJuZXRlcy5UY3BEaXNjb3ZlcnlLdWJlcm5ldGVzSXBGaW5kZXIiPgogICAgICAgICAgICAgICAgICAgICAgICA8cHJvcGVydHkgbmFtZT0ibmFtZXNwYWNlIiB2YWx1ZT0ibXktbmFtZXNwYWNlIi8+CiAgICAgICAgICAgICAgICAgICAgICAgIDxwcm9wZXJ0eSBuYW1lPSJzZXJ2aWNlTmFtZSIgdmFsdWU9Imlnbml0ZS1zZXJ2aWNlIi8+CiAgICAgICAgICAgICAgICAgICAgPC9iZWFuPgogICAgICAgICAgICAgICAgPC9wcm9wZXJ0eT4KICAgICAgICAgICAgPC9iZWFuPgogICAgICAgIDwvcHJvcGVydHk+CiAgICAgICAgPCEtLSBFbmFibGluZyBBcGFjaGUgSWduaXRlIG5hdGl2ZSBwZXJzaXN0ZW5jZS4gLS0+CiAgICAgICAgPHByb3BlcnR5IG5hbWUgPSAiZGF0YVN0b3JhZ2VDb25maWd1cmF0aW9uIj4KICAgICAgICAgICAgPGJlYW4gY2xhc3MgPSAib3JnLmFwYWNoZS5pZ25pdGUuY29uZmlndXJhdGlvbi5EYXRhU3RvcmFnZUNvbmZpZ3VyYXRpb24iPgogICAgICAgICAgICAgICAgPHByb3BlcnR5IG5hbWUgPSAiZGVmYXVsdERhdGFSZWdpb25Db25maWd1cmF0aW9uIj4KICAgICAgICAgICAgICAgICAgICA8YmVhbiBjbGFzcyA9ICJvcmcuYXBhY2hlLmlnbml0ZS5jb25maWd1cmF0aW9uLkRhdGFSZWdpb25Db25maWd1cmF0aW9uIj4KICAgICAgICAgICAgICAgICAgICAgICAgPHByb3BlcnR5IG5hbWUgPSAicGVyc2lzdGVuY2VFbmFibGVkIiB2YWx1ZSA9ICJ0cnVlIi8+CiAgICAgICAgICAgICAgICAgICAgPC9iZWFuPgogICAgICAgICAgICAgICAgPC9wcm9wZXJ0eT4KICAgICAgICAgICAgICAgIDxwcm9wZXJ0eSBuYW1lID0gInN0b3JhZ2VQYXRoIiB2YWx1ZSA9ICIvZGF0YS9pZ25pdGUvc3RvcmFnZSIvPgogICAgICAgICAgICAgICAgPHByb3BlcnR5IG5hbWUgPSAid2FsUGF0aCIgdmFsdWUgPSAiL2RhdGEvaWduaXRlL2RiL3dhbCIvPgogICAgICAgICAgICAgICAgPHByb3BlcnR5IG5hbWUgPSAid2FsQXJjaGl2ZVBhdGgiIHZhbHVlID0gIi9kYXRhL2lnbml0ZS9kYi93YWwvYXJjaGl2ZSIvPgogICAgICAgICAgICA8L2JlYW4+CiAgICAgICAgPC9wcm9wZXJ0eT4KICAgIDwvYmVhbj4KPC9iZWFucz4=
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
  # Name of Ignite Service used by Kubernetes IP finder.
  # The name must be equal to TcpDiscoveryKubernetesIpFinder.serviceName.
  name: ignite-service
  namespace: my-namespace
spec:
  clusterIP: None # custom value.
  ports:
  - port: 9042 # custom value.
  selector:
    # Must be equal to one of the labels set in Ignite pods'
    # deployment configuration.
    app: ignite-service
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # Custom Ignite cluster's name.
  name: ignite-service
  namespace: my-namespace
spec:
  # A number of Ignite pods to be started by Kubernetes initially.
  replicas: 1
  template:
    metadata:
      labels:
        # This label has to be added to the selector's section of
        # ignite-service.yaml so that the Kubernetes Ignite lookup service
        # can easily track all Ignite pods deployed so far.
        app: ignite-service
    spec:
      serviceAccountName: ignite-service
      volumes:
      # Custom name for the storage that holds Ignite's configuration,
      # which is example-kube.xml.
      - name: ignite-storage
        persistentVolumeClaim:
          # Must be equal to the PersistentVolumeClaim created before.
          claimName: ignite-service-volume-claim-blr3
      - name: ignite-files
        secret:
          secretName: ignite-files
      containers:
      # Custom Ignite pod name.
      - name: ignite-node
        # Ignite Docker image. The Kubernetes IP finder is supported
        # starting from Apache Ignite 2.6.0.
        image: apacheignite/ignite:2.6.0
        lifecycle:
          postStart:
            exec:
              command: ['/bin/sh', '/opt/ignite/apache-ignite-fabric/bin/control.sh', '--activate']
        env:
        # Ignite's Docker image parameter. Adding the jar file that
        # contains the TcpDiscoveryKubernetesIpFinder implementation.
        - name: OPTION_LIBS
          value: ignite-kubernetes
        # Ignite's Docker image parameter. Passing the Ignite configuration
        # to use for an Ignite pod.
        - name: CONFIG_URI
          value: file:///etc/ignite-files/ignite-config.xml
        - name: ENV
          value: my-namespace
        ports:
        # Ports to open.
        # Might be optional depending on your Kubernetes environment.
        - containerPort: 11211 # REST port number.
        - containerPort: 47100 # communication SPI port number.
        - containerPort: 47500 # discovery SPI port number.
        - containerPort: 49112 # JMX port number.
        - containerPort: 10800 # SQL port number.
        volumeMounts:
        # Mounting the storage with the Ignite configuration.
        - mountPath: "/data/ignite"
          name: ignite-storage
        - name: ignite-files
          mountPath: "/etc/ignite-files"
I saw some links on Stack Overflow with a similar issue and followed the proposed solutions, but that doesn't work either. Any pointers on this would be of great help!

According to the URL, the IP finder tries to use a service named ignite, while you created it under the name ignite-service.
You should provide both the namespace and the service name in the IP finder configuration:
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
    <property name="namespace" value="my-namespace"/>
    <property name="serviceName" value="ignite-service"/>
</bean>
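Once serviceName points at the right service, a quick sanity check (a sketch using the names from the question; adjust if yours differ) is to look at the Endpoints object the IP finder queries and confirm it lists the Ignite pod IPs:
# The discovery SPI reads exactly this resource via the Kubernetes API.
kubectl get endpoints ignite-service -n my-namespace -o wide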

You need to make sure you have the following locked down and handled:
the namespace created in Kubernetes,
the service account created in Kubernetes, and
permissions set for your service account in your namespace in your cluster.
See the Kubernetes documentation on service account permissions:
https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions
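For example, a quick way to verify the last point (a sketch, using the service account and namespace names from the question) is to ask the API server whether the account may read endpoints:
# Prints "yes" when the ClusterRole/ClusterRoleBinding are effective for the
# ignite-service service account; "no" points to a missing or wrong binding.
kubectl auth can-i get endpoints \
  --as=system:serviceaccount:my-namespace:ignite-service \
  -n my-namespace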

Related

How can I reference a value from one Kubernetes resource when defining another resource?

I'm using GKE and Helm v3, and I'm trying to create/reserve a static IP address using ComputeAddress and then to create a DNS A record with the previously reserved IP address.
Reserve IP address
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeAddress
metadata:
  name: ip-address
  annotations:
    cnrm.cloud.google.com/project-id: project-id
spec:
  location: global
Get reserved IP address
kubectl get computeaddress ip-address -o jsonpath='{.spec.address}'
Create DNS A record
apiVersion: dns.cnrm.cloud.google.com/v1beta1
kind: DNSRecordSet
metadata:
  name: dns-record-a
  annotations:
    cnrm.cloud.google.com/project-id: project-id
spec:
  name: "{{ .Release.Name }}.example.com"
  type: "A"
  ttl: 300
  managedZoneRef:
    external: example-com
  rrdatas:
  - **IP-ADDRESS-VALUE** <----
Is there a way to reference the IP address value, created by ComputeAddress, in the DNSRecordSet resource?
Basically, I need something similar to the output values in Terraform.
Thanks!
Currently, it is not possible to assign anything other than a literal IP address string to the "rrdatas" field, so you are not able to "call" another resource like the IP address created before. You need to put the value in the format x.x.x.x.
It's interesting that something similar exists for GKE Ingress where we can reference reserved IP address and managed SSL certificate using annotations:
annotations:
  kubernetes.io/ingress.global-static-ip-name: my-static-address
I have no idea why there is nothing like this for the DNSRecordSet resource. Hopefully, GKE will introduce it in the future.
Instead of running two commands, I've found a workaround by using Helm's hooks.
First, we need to define a Job as a post-install and post-upgrade hook, which will pick up the reserved IP address once it becomes ready and then create the appropriate DNSRecordSet resource with it. The script which retrieves the IP address and the manifest for the DNSRecordSet are passed through a ConfigMap and mounted into the Pod.
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-dns-record-set-hook"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  template:
    metadata:
      name: "{{ .Release.Name }}-dns-record-set-hook"
    spec:
      restartPolicy: OnFailure
      containers:
      - name: post-install-job
        image: alpine:latest
        command: ['sh', '-c', '/opt/run-kubectl-command-to-set-dns.sh']
        volumeMounts:
        - name: volume-dns-record-scripts
          mountPath: /opt
        - name: volume-writable
          mountPath: /mnt
      volumes:
      - name: volume-dns-record-scripts
        configMap:
          name: dns-record-scripts
          defaultMode: 0777
      - name: volume-writable
        emptyDir: {}
ConfigMap definition with the script and manifest file:
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: dns-record-scripts
data:
  run-kubectl-command-to-set-dns.sh: |-
    # install kubectl command
    apk add curl && \
    curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.1/bin/linux/amd64/kubectl && \
    chmod u+x kubectl && \
    mv kubectl /bin/kubectl
    # wait for reserved IP address to be ready
    kubectl wait --for=condition=Ready computeaddress/ip-address
    # get reserved IP address
    IP_ADDRESS=$(kubectl get computeaddress ip-address -o jsonpath='{.spec.address}')
    echo "Reserved address: $IP_ADDRESS"
    # update IP_ADDRESS in manifest
    sed "s/##IP_ADDRESS##/$IP_ADDRESS/g" /opt/dns-record.yml > /mnt/dns-record.yml
    # create DNS record
    kubectl apply -f /mnt/dns-record.yml
  dns-record.yml: |-
    apiVersion: dns.cnrm.cloud.google.com/v1beta1
    kind: DNSRecordSet
    metadata:
      name: dns-record-a
      annotations:
        cnrm.cloud.google.com/project-id: project-id
    spec:
      name: "{{ .Release.Name }}.example.com"
      type: A
      ttl: 300
      managedZoneRef:
        external: example-com
      rrdatas:
      - "##IP_ADDRESS##"
And, finally, for the (default) Service Account to be able to retrieve the IP address and create/update the DNSRecordSet, we need to assign some roles to it:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dnsrecord-setter
rules:
- apiGroups: ["compute.cnrm.cloud.google.com"]
  resources: ["computeaddresses"]
  verbs: ["get", "list"]
- apiGroups: ["dns.cnrm.cloud.google.com"]
  resources: ["dnsrecordsets"]
  verbs: ["get", "create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dnsrecord-setter
subjects:
- kind: ServiceAccount
  name: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dnsrecord-setter
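As a usage note, nothing else is needed at deploy time: the hook fires on every install and upgrade of the release. A typical invocation would look like this (the release and chart names here are hypothetical):
# Helm v3: install or upgrade the release; the post-install/post-upgrade Job
# then waits for the ComputeAddress and applies the DNSRecordSet.
helm upgrade --install my-release ./my-chart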

Kubernetes API to create a CRD using Minikube, with deployment pod in pending state

I have a problem with the Kubernetes API and a CRD while creating a deployment with a single nginx pod, which I would like to access on port 80 from a remote server as well as locally. The pod sits in a pending state; when I run kubectl get pods, after around 40 seconds on average the pod disappears and a different nginx pod name starts up. This seems to be stuck in a loop.
The error is
* W1214 23:27:19.542477 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
I was following this article about service accounts and roles,
https://thorsten-hans.com/custom-resource-definitions-with-rbac-for-serviceaccounts#create-the-clusterrolebinding
I am not even sure I have created these correctly.
Do I even need to create the ServiceAccount_v1.yaml, PolicyRule_v1.yaml and ClusterRoleBinding.yaml files to resolve my error above?
All of my .yaml files for this are below.
CustomResourceDefinition_v1.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below, and be in the form: <plural>.<group>
  name: webservers.stable.example.com
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: stable.example.com
  names:
    # kind is normally the CamelCased singular type. Your resource manifests use this.
    kind: WebServer
    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: webservers
    # shortNames allow shorter string to match your resource on the CLI
    shortNames:
    - ws
    # singular name to be used as an alias on the CLI and for display
    singular: webserver
  # either Namespaced or Cluster
  scope: Cluster
  # list of versions supported by this CustomResourceDefinition
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              image:
                type: string
              replicas:
                type: integer
    # Each version can be enabled/disabled by Served flag.
    served: true
    # One and only one version must be marked as the storage version.
    storage: true
Deployments_v1_apps.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: nginx-deployment
spec:
  # 1 Pods should exist at all times.
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 100
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        # Apply this label to pods and default
        # the Deployment label selector to this value
        app: nginx
    spec:
      containers:
      # Run this image
      - image: nginx:1.14
        name: nginx
        ports:
        - containerPort: 80
      hostname: nginx
      nodeName: webserver01
      securityContext:
        runAsNonRoot: True
#status:
#  availableReplicas: 1
Ingress_v1_networking.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Exact
        backend:
          resource:
            kind: nginx-service
            name: nginx-deployment
          #service:
          #  name: nginx
          #  port: 80
          #serviceName: nginx
          #servicePort: 80
Service_v1_core.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
ServiceAccount_v1.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: user
  namespace: example
PolicyRule_v1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: "example.com:webservers:reader"
rules:
- apiGroups: ["example.com"]
  resources: ["ResourceAll"]
  verbs: ["VerbAll"]
ClusterRoleBinding_v1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: "example.com:webservers:cdreader-read"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: "example.com:webservers:reader"
subjects:
- kind: ServiceAccount
  name: user
  namespace: example

How to deploy a StatefulSet when a PodSecurityPolicy is in place in Kubernetes

I am trying to play around with PodSecurityPolicies in Kubernetes so that pods can't be created if they are using the root user.
This is my psp definition:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: eks.restrictive
spec:
  hostNetwork: false
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: MustRunAsNonRoot
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
And this is my StatefulSet definition:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      securityContext:
        # only takes integers.
        runAsUser: 1000
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 1Gi
When trying to create this StatefulSet I get:
create Pod web-0 in StatefulSet web failed error: pods "web-0" is forbidden: unable to validate against any pod security policy:
It doesn't specify which policy I am violating, and since I am specifying that I want to run this as user 1000, I am not running it as root (hence my understanding is that this StatefulSet pod definition is not violating any rules defined in the PSP). There is no USER specified in the Dockerfile used for this image.
The other weird part is that this works fine for standard pods (kind: Pod instead of kind: StatefulSet). For example, this works just fine when the same PSP exists:
apiVersion: v1
kind: Pod
metadata:
  name: my-nodejs
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: my-node
    image: node
    ports:
    - name: web
      containerPort: 80
      protocol: TCP
    command:
    - /bin/sh
    - -c
    - |
      npm install http-server -g
      npx http-server
What am I missing / doing wrong?
You seem to have forgotten to bind this PSP to a service account.
You need to apply the following:
cat << EOF | kubectl apply -f-
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-role
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - eks.restrictive
EOF
cat << EOF | kubectl apply -f-
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
EOF
If you don't want to use the default service account, you can create a separate service account and bind the role to it.
Read more about it in the k8s documentation on pod security policies.
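For illustration, a minimal sketch of that last suggestion (the nginx-sa name is hypothetical); the StatefulSet pod spec would then need serviceAccountName: nginx-sa:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-sa          # hypothetical name for a dedicated account
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-role-binding-nginx
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-role           # the ClusterRole created above
subjects:
- kind: ServiceAccount
  name: nginx-sa
  namespace: default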

How to get Kubernetes Ingress Port 80 working on baremetal single node cluster

I have a bare-metal Kubernetes (v1.11.0) cluster created with kubeadm, working fine without any issues. Networking is with Calico, and I made it a single-node cluster using the kubectl taint nodes command (single node is a requirement).
I need to run the mydockerhub/sampleweb static website image on host port 80. Assume the IP address of the Ubuntu server running this Kubernetes is 192.168.8.10.
How do I make my static website available on 192.168.8.10:80, or on a hostname mapped to it on the local DNS server (example: frontend.sampleweb.local:80)? Later I need to run other services on different ports mapped to other subdomains (example: backend.sampleweb.local:80, which routes to a service running on port 8080).
I need to know:
Can I achieve this without a load balancer?
What resources needed to create? (ingress, deployment, etc)
What additional configurations needed on the cluster? (network policy, etc)
Much appreciated if sample yaml files are provided.
I'm new to the Kubernetes world. I got sample Kubernetes deployments (like sock-shop) working end-to-end without any issues. I tried NodePort to access the service, but instead of running it on a different port I need to run it on exactly port 80 on the host. I tried many ingress solutions but they didn't work.
I recently used traefik.io to configure a project with requirements similar to yours, so I'll show a basic solution with traefik and ingresses.
I dedicated a whole namespace (you can use kube-system) called traefik, and created a Kubernetes ServiceAccount:
apiVersion: v1
kind: Namespace
metadata:
  name: traefik
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: traefik
  name: traefik-ingress-controller
The traefik controller which is invoked by ingress rules requires a ClusterRole and its binding:
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  namespace: traefik
  name: traefik-ingress-controller
The traefik controller will be deployed as a DaemonSet (i.e. by definition one per node in your cluster), and a Kubernetes service is dedicated to the controller:
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: traefik
  labels:
    k8s-app: traefik-ingress-lb
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - name: traefik-ingress-lb
        image: traefik
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8080
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  namespace: traefik
  name: traefik-ingress-service
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 8080
    name: admin
The final part requires you to create a service for each microservice in your project; here is an example:
apiVersion: v1
kind: Service
metadata:
  namespace: traefik
  name: my-svc-1
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - port: 80
    targetPort: 8080
and also the ingress (set of rules) that will forward the request to the proper service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: traefik
  name: ingress-ms-1
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: my-address-url
    http:
      paths:
      - backend:
          serviceName: my-svc-1
          servicePort: 80
In this ingress I wrote a host URL; this will be the entry point into your cluster, so you need to resolve the name to your master K8S node. If you have more nodes which could be master, then a load balancer is suggested (in this case the host URL will be the LB).
Take a look at the kubernetes.io documentation to get the Kubernetes concepts clear. traefik.io is also useful.
I hope this helps you.
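As a quick test of the whole chain (a sketch, reusing the host from the ingress above and the node IP from the question; adjust both to your setup):
# Hit traefik's hostPort 80 on the node and present the Host header
# that matches the ingress rule, so the request is routed to my-svc-1.
curl -H "Host: my-address-url" http://192.168.8.10/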
In addition to the answer of Nicola Ben, you have to define externalIPs in your traefik service. Just follow the steps of Nicola Ben and add an externalIPs section to the service "my-svc-1":
apiVersion: v1
kind: Service
metadata:
  namespace: traefik
  name: my-svc-1
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - port: 80
    targetPort: 8080
  externalIPs:
  - <IP_OF_A_NODE>
And you can define more than one externalIP.

Failed to retrieve Ignite pods IP addresses

I am trying to run an Apache Ignite cluster using Google Kubernetes Engine.
After following the tutorial, here are some yaml files.
First I create a service -
ignite-service.yaml
apiVersion: v1
kind: Service
metadata:
  # Name of Ignite Service used by Kubernetes IP finder.
  # The name must be equal to TcpDiscoveryKubernetesIpFinder.serviceName.
  name: ignite
  namespace: default
spec:
  clusterIP: None # custom value.
  ports:
  - port: 9042 # custom value.
  selector:
    # Must be equal to one of the labels set in Ignite pods'
    # deployment configuration.
    app: ignite
kubectl create -f ignite-service.yaml
Second, I create a deployment for my Ignite nodes, ignite-deployment.yaml
(an example of a Kubernetes configuration for Ignite pods deployment):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # Custom Ignite cluster's name.
  name: ignite-cluster
spec:
  # A number of Ignite pods to be started by Kubernetes initially.
  replicas: 2
  template:
    metadata:
      labels:
        app: ignite
    spec:
      containers:
      # Custom Ignite pod name.
      - name: ignite-node
        image: apacheignite/ignite:2.4.0
        env:
        - name: OPTION_LIBS
          value: ignite-kubernetes
        - name: CONFIG_URI
          value: https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube.xml
        ports:
        # Ports to open.
        # Might be optional depending on your Kubernetes environment.
        - containerPort: 11211 # REST port number.
        - containerPort: 47100 # communication SPI port number.
        - containerPort: 47500 # discovery SPI port number.
        - containerPort: 49112 # JMX port number.
        - containerPort: 10800 # SQL port number.
kubectl create -f ignite-deployment.yaml
After that I check the status of my pods, which are running in my case. However, when I check the logs of any of my pods, I get the following error:
java.io.IOException: Server returned HTTP response code: 403 for URL: https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/default/endpoints/ignite
Things I have tried:
I followed this link to make my cluster work, but in step 4, when I run the daemon yaml file, I get the following error:
error: error validating "daemon.yaml": error validating data: ValidationError(DaemonSet.spec.template.spec): missing required field "containers" in io.k8s.api.core.v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false
Can anybody point me to the mistake I might be making here?
Thanks.
Step 1: kubectl apply -f ignite-service.yaml (with the file in your question)
Step 2: kubectl apply -f ignite-rbac.yaml
ignite-rbac.yaml is like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ignite
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ignite-endpoint-access
  namespace: default
  labels:
    app: ignite
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  resourceNames: ["ignite"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ignite-role-binding
  namespace: default
  labels:
    app: ignite
subjects:
- kind: ServiceAccount
  name: ignite
roleRef:
  kind: Role
  name: ignite-endpoint-access
  apiGroup: rbac.authorization.k8s.io
Step 3: kubectl apply -f ignite-deployment.yaml (very similar to your file; I've only added one line, serviceAccount: ignite):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # Custom Ignite cluster's name.
  name: ignite-cluster
  namespace: default
spec:
  # A number of Ignite pods to be started by Kubernetes initially.
  replicas: 2
  template:
    metadata:
      labels:
        app: ignite
    spec:
      serviceAccount: ignite ## Added line
      containers:
      # Custom Ignite pod name.
      - name: ignite-node
        image: apacheignite/ignite:2.4.0
        env:
        - name: OPTION_LIBS
          value: ignite-kubernetes
        - name: CONFIG_URI
          value: https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube.xml
        ports:
        # Ports to open.
        # Might be optional depending on your Kubernetes environment.
        - containerPort: 11211 # REST port number.
        - containerPort: 47100 # communication SPI port number.
        - containerPort: 47500 # discovery SPI port number.
        - containerPort: 49112 # JMX port number.
        - containerPort: 10800 # SQL port number.
This should work fine. I've got this in the logs of the pod (kubectl logs -f ignite-cluster-xx-yy), showing the 2 Pods successfully locating each other:
[13:42:00] Ignite node started OK (id=f89698d6)
[13:42:00] Topology snapshot [ver=1, servers=1, clients=0, CPUs=1, offheap=0.72GB, heap=1.0GB]
[13:42:00] Data Regions Configured:
[13:42:00] ^-- default [initSize=256.0 MiB, maxSize=740.8 MiB, persistenceEnabled=false]
[13:42:01] Topology snapshot [ver=2, servers=2, clients=0, CPUs=2, offheap=1.4GB, heap=2.0GB]
[13:42:01] Data Regions Configured:
[13:42:01] ^-- default [initSize=256.0 MiB, maxSize=740.8 MiB, persistenceEnabled=false]
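As a follow-up check (a sketch using the names from the manifests above), scaling the deployment and tailing the logs should show the topology snapshot grow to servers=3:
# Add a third Ignite server node and watch discovery pick it up.
kubectl scale deployment ignite-cluster --replicas=3 -n default
kubectl logs -f deployment/ignite-cluster -n default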