Broker advertises internal domain in k8s cluster - apache-kafka

I installed the bitnami/kafka cluster with Helm.
I want producers and consumers that are not in the k8s cluster to be able to connect. This is my Helm install config YAML file:
replicaCount: 3
service:
  type: LoadBalancer
  loadBalancerIP: 192.168.99.110
  nodePorts:
    client: 25100
    external: 25101
externalAccess:
  enabled: true
  service:
    type: LoadBalancer
    port: 9094
    nodePorts:
      - 25100
      - 25101
    loadBalancerIPs:
      - 192.168.99.120
      - 192.168.99.121
I expected each broker to advertise its own address, but they advertise the Kubernetes-internal domain name instead, like kf-kafka-1.kf-kafka-headless.default.svc.cluster.local:9092.
Please help me figure out what I missed.

I tried to connect to the ports in externalAccess.service.nodePorts,
but I should have used just {externalAccess.service.loadBalancerIPs[n]}:9094.
Thanks.
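For reference, here is a hedged sketch of the externalAccess layout as the bitnami/kafka chart documents it (field names are taken from the chart's values reference and should be verified against your chart version). External clients have to bootstrap against the external listener port (9094 here) on the per-broker load-balancer IPs, not against 9092, and with replicaCount: 3 the chart expects one loadBalancerIP per broker:

replicaCount: 3
externalAccess:
  enabled: true
  service:
    type: LoadBalancer
    port: 9094                  # port external clients bootstrap against
    loadBalancerIPs:            # one entry per broker (three for replicaCount: 3)
      - 192.168.99.120
      - 192.168.99.121
      - 192.168.99.122          # assumed third IP; the question lists only two
  autoDiscovery:
    enabled: false              # only needed when loadBalancerIPs are not pre-assigned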

Related

Kubernetes service externalTrafficPolicy reset to Local

In my Kubernetes cluster setup, I have a Greenplum DB cluster (one master and 8 segment nodes) with a LoadBalancer service. Please refer to the service config below.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: greenplum
    greenplum-cluster: greenplum-cluster
  name: greenplum
spec:
  clusterIP: 10.101.251.127
  clusterIPs:
    - 10.101.251.127
  externalIPs:
    - 11.4.8.141
  externalTrafficPolicy: Cluster
  healthCheckNodePort: 32572
  ports:
    - name: psql
      nodePort: 32198
      port: 5432
      protocol: TCP
      targetPort: 5432
  selector:
    statefulset.kubernetes.io/pod-name: master-0
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
However, a few hours after deployment, the externalTrafficPolicy value is set to Local instead of Cluster, which made the service inaccessible via the defined external IP. Is there any reason for this? It changes automatically even after I edit the service configuration.
Or is there any other way to access this Greenplum DB (TCP 5432) such as ingress?
Can you provide more details on your Kubernetes cluster setup? Is it bare-metal, cloud-managed, kubeadm?
It would also be useful to see the logs and traces from when you try to connect to the DB.
The only difference between externalTrafficPolicy Local and Cluster is that Cluster balances the traffic across all pods in the cluster, giving a nice distribution between your pods, while Local only distributes the traffic among the pods already running on the node where the request lands from the LB.
Additionally, Local preserves the client IP of the request, while the Cluster option does not.
There is a nice video that explains this in detail here.
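On the ingress part of the question: a plain TCP protocol like Greenplum's PostgreSQL port 5432 is not covered by ordinary HTTP Ingress rules, but ingress-nginx can forward raw TCP through its tcp-services ConfigMap. A hedged sketch, assuming ingress-nginx is installed in the ingress-nginx namespace and the Service is default/greenplum:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx          # assumed install namespace
data:
  "5432": "default/greenplum:5432"  # external port -> namespace/service:port

The ingress-nginx controller's own Service also has to expose port 5432 for this mapping to be reachable from outside.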

How can I access MariaDB from outside a Helm installation

I have an RKE2 kube installation with 3 nodes. I installed MariaDB from the Bitnami repository:
- name: mariadb
  repository: https://charts.bitnami.com/bitnami
  version: 10.3.2
It boots up correctly in my kube installation, but I need to access it from outside the cluster, say with my Navicat client, for example.
This is my values.yaml:
mariadb:
  clusterDomain: a4b-kube.local
  auth:
    rootPassword: "password"
    replicationPassword: "password"
  architecture: replication
  primary:
    service:
      type: LoadBalancer
      loadBalancerIP: mariadb.acme.com
  secondary:
    replicaCount: 2
Listing the services I see:
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
a4b-test-mariadb-primary   LoadBalancer   10.43.171.45   <pending>     3306:31379/TCP   48m
And the external IP never gets updated. I also tried specifying an IP instead of a DNS name, in my case 192.168.113.120, but I got the same result. What am I missing?
You might consider using NodePort
mariadb:
  clusterDomain: a4b-kube.local
  auth:
    rootPassword: "password"
    replicationPassword: "password"
  architecture: replication
  primary:
    service:
      type: NodePort
      nodePort: 32036
  secondary:
    replicaCount: 2
You can choose nodePort: 32036 from the range 30000-32767 (the default range).
Then you can access it via nodeIP:nodePort.
You need a load-balancer implementation for the EXTERNAL-IP to get assigned; an Ingress controller alone does not provide that for a plain TCP service. But if you have no intention of exposing the database to the Internet, and the cluster nodes are network-reachable from your client application, you can use NodePort instead of LoadBalancer. You can then connect to your database through any of the 3 nodes with the node port from outside the cluster.
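On a bare-metal cluster like RKE2, a <pending> EXTERNAL-IP usually just means nothing in the cluster implements LoadBalancer services; MetalLB is a common way to fill that gap (note also that loadBalancerIP must be an IP address, not a DNS name like mariadb.acme.com). A hedged sketch of a MetalLB layer-2 setup, reusing the address from the question as an illustrative pool:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.113.120-192.168.113.130   # illustrative range; use addresses free on your LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool

With a pool like this in place, the chart's loadBalancerIP request should be honoured and the EXTERNAL-IP column filled in.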

cetic-nifi Invalid host header issue

Helm version: v3.5.2
Kubernetes version: v1.20.4
NiFi chart version: 1.0.2 (latest release)
Issue: [cetic/nifi]-issue
I'm trying to connect to the NiFi UI deployed in Kubernetes.
I have set the following properties in the values YAML:
properties:
  # use externalSecure for when inbound SSL is provided by nginx-ingress or other external mechanism
  sensitiveKey: changeMechangeMe # must be a key of at least 12 characters
  algorithm: NIFI_PBKDF2_AES_GCM_256
  externalSecure: false
  isNode: false
  httpsPort: 8443
  webProxyHost: 10.0.39.39:30666
  clusterPort: 6007

# ui service
service:
  type: NodePort
  httpsPort: 8443
  nodePort: 30666
  annotations: {}
  # loadBalancerIP:
  ## Load Balancer sources
  ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  ##
  # loadBalancerSourceRanges:
  # - 10.10.10.0/24
  ## OIDC authentication requires "sticky" session on the LoadBalancer for JWT to work properly...but AWS doesn't like it on creation
  # sessionAffinity: ClientIP
  # sessionAffinityConfig:
  #   clientIP:
  #     timeoutSeconds: 10800
10.0.39.39 is the Kubernetes master node's internal IP.
When NiFi starts I get the following:
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/k8sadmin/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/k8sadmin/.kube/config
NAME: nifi
LAST DEPLOYED: Thu Nov 25 12:38:00 2021
NAMESPACE: jeed-cluster
STATUS: deployed
REVISION: 1
NOTES:
Cluster endpoint IP address will be available at:
kubectl get svc nifi -n jeed-cluster -o jsonpath='{.status.loadBalancer.ingress[*].ip}'
Cluster endpoint domain name is: 10.0.39.39:30666 - please update your DNS or /etc/hosts accordingly!
Once you are done, your NiFi instance will be available at:
https://10.0.39.39:30666/nifi
and when I do a curl:
curl https://10.0.39.39:30666 put sample.txt -k
<h1>System Error</h1>
<h2>The request contained an invalid host header [<code>10.0.39.39:30666</
the request [<code>/</code>]. Check for request manipulation or third-part
t.</h2>
<h3>Valid host headers are [<code>empty
<ul><li>127.0.0.1</li>
<li>127.0.0.1:8443</li>
<li>localhost</li>
<li>localhost:8443</li>
<li>[::1]</li>
<li>[::1]:8443</li>
<li>nifi-0.nifi-headless.jeed-cluste
<li>nifi-0.nifi-headless.jeed-cluste
<li>10.42.0.8</li>
<li>10.42.0.8:8443</li>
<li>0.0.0.0</li>
<li>0.0.0.0:8443</li>
</ul>
I have tried a lot of things but still cannot whitelist the master node IP in the proxy hosts.
Ingress is not used.
Edit: it looks like the properties set in values.yaml are not being set in nifi.properties inside the pod. Is there any reason for this?
Appreciate the help!
As a NodePort service you can also assign a port number from 30000-32767.
You can apply values when you install your chart with:
properties:
  webProxyHost: localhost
  httpsPort:
This should let NiFi whitelist your https://localhost:
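If you need to reach the UI on the node address rather than localhost, the same idea applies: NiFi only accepts Host headers that match its whitelist, which the chart is expected to populate from webProxyHost (it should surface as nifi.web.proxy.host in nifi.properties inside the pod). A hedged values sketch reusing the addresses from the question:

properties:
  httpsPort: 8443
  # must match exactly the host:port you put in the browser or curl command
  webProxyHost: 10.0.39.39:30666
service:
  type: NodePort
  httpsPort: 8443
  nodePort: 30666

If, as noted in the edit, the value never shows up in nifi.properties inside the pod, the chart version in use may not be templating it; rendering the chart with helm template and inspecting the generated config would confirm that.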

Exposing Kafka cluster in Kubernetes using LoadBalancer service

Suppose I have a 3-node Kafka cluster setup. How do I expose it outside the cloud using a LoadBalancer service? I have read the reference material but have a few doubts.
Say for example below is a service for a broker
apiVersion: v1
kind: Service
metadata:
  name: kafka-0
  annotations:
    dns.alpha.kubernetes.io/external: kafka-0.kafka.my.company.com
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ports:
    - port: 9092
      name: outside
      targetPort: 9092
  selector:
    app: kafka
    kafka-pod-id: "0"
1. What are port and targetPort?
2. Do I set up a LoadBalancer service for each of the brokers?
3. Do these multiple brokers get mapped to a single public IP address of the cloud LB?
4. How does a service outside k8s/the cloud access an individual broker? By using public-ip:port, or by using kafka-<pod-id>.kafka.my.company.com:port? Also, which port is used here, port or targetPort?
5. How do I specify this configuration in the Kafka broker's advertised.listeners property, given that the port can differ for services inside the k8s cluster and outside it?
Please help.
Based on the information you provided, I will try to give you some answers and some advice.
1) port: is the port number that makes a service visible to other services running within the same K8s cluster. In other words, if a service wants to invoke another service running within the same Kubernetes cluster, it can do so using the port specified under port in the service spec file.
targetPort: is the port on the pod where the application is running. Your application needs to be listening for network requests on this port for the service to work. (See the short sketch below.)
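To make the distinction concrete, a minimal sketch (the name and port numbers are made up) where the Service port differs from the container's targetPort:

apiVersion: v1
kind: Service
metadata:
  name: kafka-internal         # illustrative name
spec:
  selector:
    app: kafka
  ports:
    - name: broker
      port: 9092        # what other in-cluster clients connect to (kafka-internal:9092)
      targetPort: 9093  # what the broker container actually listens on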
2/3) Each broker should be exposed as a LoadBalancer and also be part of a headless service for internal communication. There should be one additional LoadBalancer with an external IP for external connections.
Example of Service
apiVersion: v1
kind: Service
metadata:
  name: kafka-0
  annotations:
    dns.alpha.kubernetes.io/external: kafka-0.kafka.my.company.com
spec:
  ports:
    - port: 9092
      name: kafka-port
      protocol: TCP
  selector:
    pod-name: kafka-0
  type: LoadBalancer
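The headless service mentioned in 2/3) for internal broker-to-broker and client traffic could look roughly like this (a hedged sketch; clusterIP: None is what makes it headless, and the name is illustrative):

apiVersion: v1
kind: Service
metadata:
  name: kafka-headless
spec:
  clusterIP: None          # headless: gives each broker pod a stable DNS name
  ports:
    - port: 9092
      name: kafka-internal
      protocol: TCP
  selector:
    app: kafka             # selects all broker pods, not just kafka-0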
4) You have to use kafka-<pod-id>.kafka.my.company.com:port.
5) It should be set to the external address so that clients can connect to it. This article might help with understanding.
A similar case was discussed on GitHub, which might also help you - https://github.com/kow3ns/kubernetes-kafka/issues/3
In addition, you could also think about Ingress - https://tothepoint.group/blog/accessing-kafka-on-google-kubernetes-engine-from-the-outside-world/
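For 5), one hedged way to express the internal/external listener split on a bitnami-style broker is via environment variables (the KAFKA_CFG_ names follow the bitnami Kafka image's convention; treat the exact variable names and the external hostname as assumptions to verify for your setup):

# per-broker settings for kafka-0 (values are illustrative)
env:
  - name: KAFKA_CFG_LISTENERS
    value: "INTERNAL://:9092,EXTERNAL://:9094"
  - name: KAFKA_CFG_ADVERTISED_LISTENERS
    value: "INTERNAL://kafka-0.kafka-headless:9092,EXTERNAL://kafka-0.kafka.my.company.com:9094"
  - name: KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP
    value: "INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT"
  - name: KAFKA_CFG_INTER_BROKER_LISTENER_NAME
    value: "INTERNAL"

External clients then bootstrap against kafka-0.kafka.my.company.com:9094, while in-cluster clients keep using the internal names on 9092.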

Kubernetes, Cannot access exposed services

Kubernetes version:
v1.10.3
Docker version:
17.03.2-ce
Operating system and kernel:
Centos 7
Steps to Reproduce:
https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/
Results:
[root@rd07 rd]# kubectl describe services example-service
Name: example-service
Namespace: default
Labels: run=load-balancer-example
Annotations:
Selector: run=load-balancer-example
Type: NodePort
IP: 10.108.214.162
Port: 9090/TCP
TargetPort: 9090/TCP
NodePort: 31105/TCP
Endpoints: 192.168.1.23:9090,192.168.1.24:9090
Session Affinity: None
External Traffic Policy: Cluster
Events:
Expected:
Expect to be able to curl the cluster ip defined in the kubernetes service
I'm not exactly sure which address is the so-called "public-node-ip", so I tried every related IP address; only when using the master IP as the "public-node-ip" does it show "No route to host".
I used netstat to check that the endpoint is being listened on.
I tried https://github.com/rancher/rancher/issues/6139 to flush my iptables, and it did not help at all.
I tried https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/; "nslookup hostnames.default" is not working.
The services seem to be configured perfectly fine, but they still cannot be accessed.
I'm using calico; flannel was also tried.
I tried so many tutorials for deploying services, and none of them could be accessed.
I'm new to Kubernetes; please help me if anyone can.
If you are on any public cloud, you are not supposed to see the public IP address in the output of the ip a command. Even so, the port will be exposed on 0.0.0.0:31105.
Here is a sample file you can use to verify your configuration:
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: app-name
  name: bss
  namespace: default
spec:
  externalIPs:
    - 172.16.2.2
    - 172.16.2.3
    - 172.16.2.4
  externalTrafficPolicy: Cluster
  ports:
    - port: 9090
      protocol: TCP
      targetPort: 9090
  selector:
    k8s-app: bss
  sessionAffinity: ClientIP
  type: LoadBalancer
status:
  loadBalancer: {}
Just replace your <private-ip> values under externalIPs: and curl your public IP with your node port.
If you are deploying the application on any cloud, also verify the cloud security groups/firewall configuration to make sure the port is open.
Hope this helps.
Thank you!
My k8s cluster is 1 master and 1 node.
The service pod is running on the node.
So I used http://nodeip:31105, and it shows "Hello Kubernetes!".
But http://masterip:31105 is still not working; is that supposed to be right?
I checked the listening endpoints, and 31105 is listened on by the master as well.