Hazelcast doesn't connect to pods when using Headless Service - kubernetes

The Hazelcast members are able to communicate with each other in non-Kubernetes environments. However, the same is not happening in Kubernetes: the pods are not able to resolve the other members' domain names.
I am working with Spring Boot 2.1.8 and Hazelcast 3.11.4 (TcpIp config)
Hazelcast configuration:
Config config = new Config();
config.setInstanceName("hazelcast-instance")
      .setGroupConfig(new GroupConfig(hazelcastGroupName, hazelcastGroupPassword))
      .setProperties(hzProps)
      .setNetworkConfig(
          new NetworkConfig()
              .setPort(hazelcastNetworkPort)
              .setPortAutoIncrement(hazelcastNetworkPortAutoIncrement)
              .setJoin(
                  new JoinConfig()
                      .setMulticastConfig(new MulticastConfig().setEnabled(false))
                      .setAwsConfig(new AwsConfig().setEnabled(false))
                      .setTcpIpConfig(new TcpIpConfig().setEnabled(true).setMembers(memberList))))
      .addMapConfig(initReferenceDataMapConfig());
return config;
Members definition in Statefulset config file:
- name: HC_NETWORK_MEMBERS
  value: project-0.project-ha, project-1.project-ha
Headless service config:
---
apiVersion: v1
kind: Service
metadata:
  namespace: {{ kuber_namespace }}
  labels:
    app: project
  name: project-ha
spec:
  clusterIP: None
  selector:
    app: project
Errors:
Resolving domain name 'project-0.project-ha' to address(es): [10.42.30.215]
Cannot resolve hostname: 'project-1.project-ha'
Members {size:1, ver:1} [
Member [10.42.11.173]:5001 - d928de2c-b4ff-4f6d-a324-5487e33ca037 this
]
The other pod has a similar error.
Fun fact:
It works fine when I set the full hostname in the HC_NETWORK_MEMBERS var. For example: project-0.project-ha.namespace.svc.cluster.local, project-1.project-ha.namespace.svc.cluster.local

There is a discovery plugin for Kubernetes, see here, and various "how to" guides for Kubernetes here.
If you use that, most of your problems should go away. The plugin looks up the member list from the Kubernetes DNS.
If you can, it's worth upgrading to Hazelcast 4.2, as 3.11 is not the latest. Make sure to get a matching version of the plugin.
On 4.2, discovery has auto-detection, so it will do its best to help.
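As a minimal sketch of what the plugin-based join could look like, assuming Hazelcast 4.x with its hazelcast.yaml declarative format (the same settings can also be set programmatically), with the service name taken from the headless Service above and a placeholder namespace:
hazelcast:
  network:
    join:
      multicast:
        enabled: false
      kubernetes:
        enabled: true
        namespace: my-namespace    # placeholder: your Kubernetes namespace
        service-name: project-ha   # the headless Service from the question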

Please check the related guides:
Hazelcast Guide: Hazelcast for Kubernetes
Hazelcast Guide: Embedded Hazelcast on Kubernetes
For embedded Hazelcast, you need to use the Hazelcast Kubernetes plugin. For client-server, you can use the TCP-IP configuration on the member side and use the Hazelcast service name as the static DNS; it will be resolved automatically.
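A rough sketch of that member-side TCP-IP configuration, assuming a Hazelcast version with YAML configuration support (3.12+) and the project-ha headless Service from the question; the service DNS name resolves to the IPs of all member pods:
hazelcast:
  network:
    join:
      multicast:
        enabled: false
      tcp-ip:
        enabled: true
        member-list:
        - project-ha   # headless Service name used as static DNS for all members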

Route annotations from YAML file not working in OpenShift

We are using OpenShift for the deployment, where we have 3 pods running behind the same service.
To achieve load balancing we are trying to create annotations on the route.
Adding the annotations to the Route from the console works fine, but the same is not working when configured from the YAML file.
Is anyone facing the same issue, or is there any available fix for this?
apiVersion: v1
kind: Route
metadata:
  annotations:
    haproxy.router.openshift.io/balance: roundrobin
    haproxy.router.openshift.io/disable_cookies: true
  name: frontend
spec:
  host: www.example.com
  path: "/test"
  to:
    kind: Service
    name: frontend
This annotation doesn't work like this. It is used when the route is distributing traffic among many services, not just one as in your case.
https://docs.openshift.com/container-platform/3.11/architecture/networking/routes.html#alternateBackends
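For illustration, a hedged sketch of the kind of Route those docs describe, where the annotation applies because traffic is split across backends (frontend-v2 is a hypothetical second Service):
apiVersion: v1
kind: Route
metadata:
  name: frontend
  annotations:
    haproxy.router.openshift.io/balance: roundrobin
spec:
  host: www.example.com
  to:
    kind: Service
    name: frontend
    weight: 70
  alternateBackends:
  - kind: Service
    name: frontend-v2   # hypothetical second backend Service
    weight: 30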
A Service in OpenShift or Kubernetes is essentially kube-proxy, which works at the TCP L4 level and provides round-robin load balancing to backend pods by default.
https://blog.getambassador.io/load-balancing-strategies-in-kubernetes-l4-round-robin-l7-round-robin-ring-hash-and-more-6a5b81595d6c

Prometheus endpoint error when checking AlertManager

I installed Prometheus (following this link: https://devopscube.com/setup-prometheus-monitoring-on-kubernetes/).
But when checking the status of Targets, it shows "Down" for the AlertManager service; every other endpoint is up, please see the attached file.
Then I checked Service Discovery; the discovered labels show:
__address__="192.168.180.254:9093"
__meta_kubernetes_endpoint_address_target_kind="Pod"
__meta_kubernetes_endpoint_address_target_name="alertmanager-6c666985cc-54rjm"
__meta_kubernetes_endpoint_node_name="worker-node1"
__meta_kubernetes_endpoint_port_protocol="TCP"
__meta_kubernetes_endpoint_ready="true"
__meta_kubernetes_endpoints_name="alertmanager"
__meta_kubernetes_namespace="monitoring"
__meta_kubernetes_pod_annotation_cni_projectcalico_org_podIP="192.168.180.254/32"
__meta_kubernetes_pod_annotationpresent_cni_projectcalico_org_podIP="true"
__meta_kubernetes_pod_container_name="alertmanager"
__meta_kubernetes_pod_container_port_name="alertmanager"
__meta_kubernetes_pod_container_port_number="9093"
But the Target Labels show another port (8080), and I don't know why:
instance="192.168.180.254:8080"
job="kubernetes-service-endpoints"
kubernetes_name="alertmanager"
kubernetes_namespace="monitoring"
First, if you want to install Prometheus and Grafana without getting sick, you need to do it through Helm.
First install Helm, and then:
helm install installationWhatEverName stable/prometheus-operator
I've reproduced your issue on GCE.
If you are using version 1.16+, you have probably changed the apiVersion, as in the tutorial the Deployment uses extensions/v1beta1. Since K8s 1.16 you need to change it to apiVersion: apps/v1. Otherwise you will get an error like:
error: unable to recognize "STDIN": no matches for kind "Deployment" in version "extensions/v1beta1"
Second thing: in 1.16+ you need to specify a selector. If you do not, you will receive another error:
error: error validating "STDIN": error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec; if you choose to ignore these errors, turn validation off with --validate=false
It would look like:
...
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
      ...
Regarding port 8080, please check this article with an example:
Port: Port is the port number which makes a service visible to
other services running within the same K8s cluster. In other words,
in case a service wants to invoke another service running within the
same Kubernetes cluster, it will be able to do so using port specified
against “port” in the service spec file.
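To illustrate that distinction, here is a hedged sketch of what the alertmanager Service might look like when its service port (8080) differs from the container port (9093); the prometheus.io/* annotations assume the relabeling rules of the kubernetes-service-endpoints job used in the linked tutorial, which let you point Prometheus at the right port:
apiVersion: v1
kind: Service
metadata:
  name: alertmanager
  namespace: monitoring
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9093"   # scrape the container port, not the service port
spec:
  selector:
    app: alertmanager            # assumption: label used by the alertmanager pods
  ports:
  - port: 8080                   # port visible to other services in the cluster
    targetPort: 9093             # port the alertmanager container listens on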
It worked in my environment on GCE. Did you configure the firewall for your endpoints?
In addition, in Helm 3 some hooks were deprecated. You can find this information here.
If you still have the issue, please provide your YAMLs with the changes applied for version 1.16+.

Jenkins-X: How to link external service in preview environment

From a preview environment I want to access a database located in the staging environment (in the jx-staging namespace).
I am trying to follow Service Linking from the Jenkins X documentation with no success. The documentation is not really clear about where to put the service link definition.
I created a service file charts/preview/resources/mysql.yaml with the following content, but the service link is not created.
kind: Service
apiVersion: v1
metadata:
  name: mysql
spec:
  type: ExternalName
  externalName: mysql.jx-staging.svc.cluster.local
  ports:
  - port: 3306
JX Environment:
jx version:
NAME                VERSION
jx                  1.3.688
jenkins x platform  0.0.3125
Kubernetes cluster  v1.10.9-gke.5
kubectl             v1.10.7
helm client         v2.12.1+g02a47c7
helm server         v2.12.0+gd325d2a
git                 git version 2.11.0
Operating System    Debian GNU/Linux 9.6 (stretch)
Where and how to define a service link?
GitHub issue: How to link external service in preview environment
The solution is to move mysql.yaml from the resources to the templates sub-folder:
charts/preview/templates/mysql.yaml
The issue was caused by a typo in the Service Linking documentation, which has now been corrected.
BTW there is also a FAQ entry on adding more resources to a preview.
Your Service YAML looks good to me. Do you see the Service created when you create a Preview Environment?
You can find the namespace by typing jx get preview; then, to see if there is a Service in your environment, try kubectl get service -n jx-myuser-myapp-pr-1

How to implement a specific /etc/resolv.conf per OpenShift project

I have a use case where each OpenShift project belongs to its own VLAN, which has more than just OpenShift nodes in it. Each VLAN has its own independent DNS to resolve all the hosts within that VLAN. The OpenShift cluster itself hosts several such VLANs at the same time. To get per-project DNS resolution done, it is essential to implement project-based DNS resolving.
Is there a way to change the pod's /etc/resolv.conf depending on the OpenShift project it runs in? The cluster runs on RHEL 7.x; OpenShift is 3.11.
Personally, I don't think OpenShift supports configuring DNS per project. But you can consider the CustomPodDNS feature to configure DNS per Pod, so you might use this feature to configure all the Pods in a project with the same DNS config.
You can enable the CustomPodDNS feature for the OCP cluster by configuring the following parameters in /etc/origin/master/master-config.yaml:
kubernetesMasterConfig:
  apiServerArguments:
    feature-gates:
    - CustomPodDNS=true
  controllerArguments:
    feature-gates:
    - CustomPodDNS=true
You can also enable this feature on a node host by configuring it in /etc/origin/node/node-config.yaml:
kubeletArguments:
  feature-gates:
  - CustomPodDNS=true
You should restart the related master and node services for the changes to take effect.
For the Pod configuration, refer to Pod's Config for more details:
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: dns-example
spec:
  containers:
  - name: test
    image: nginx
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - 1.2.3.4
    searches:
    - ns1.svc.cluster.local
    - my.dns.search.suffix
    options:
    - name: ndots
      value: "2"
    - name: edns0
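For reference, with dnsPolicy: "None" and the dnsConfig above, the resulting /etc/resolv.conf inside the pod should look roughly like this:
nameserver 1.2.3.4
search ns1.svc.cluster.local my.dns.search.suffix
options ndots:2 edns0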

Preserving remote client IP with Ingress

My goal is to make my web application (deployed on a Kubernetes 1.4 cluster) see the IP of the client that originally made the HTTP request. As I'm planning to run the application on a bare-metal cluster, GCE and the service.alpha.kubernetes.io/external-traffic: OnlyLocal service annotation introduced in 1.4 are not applicable for me.
Looking for alternatives, I found this question, which proposes setting up an Ingress to achieve my goal. So I set up the Ingress and the NginX Ingress Controller. The deployment went smoothly and I was able to connect to my web app via the Ingress address and port 80. However, in the logs I still see a cluster-internal IP (from the 172.16.0.0/16 range), which means that the external client IPs are not being properly passed via the Ingress. Could you please tell me what I need to configure in addition to the above to make it work?
My Ingress' config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myWebApp
spec:
  backend:
    serviceName: myWebApp
    servicePort: 8080
As a layer 4 proxy, Nginx cannot retain the original source IP address in the actual IP packets. You can work around this using the Proxy protocol (the link points to the HAProxy documentation, but Nginx also supports it).
For this to work however, the upstream server (meaning the myWebApp service in your case) also needs to support this protocol. In case your upstream application also uses Nginx, you can enable proxy protocol support in your server configuration as documented in the official documentation.
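If that upstream is Nginx, a minimal sketch of the server-side part could look like the following (the 0.0.0.0/0 range is a placeholder; in practice you would restrict it to the ingress controller's addresses):
server {
    listen 80 proxy_protocol;        # accept the PROXY protocol header from the ingress controller
    set_real_ip_from 0.0.0.0/0;      # placeholder: trust only your ingress controller's IP range
    real_ip_header proxy_protocol;   # use the PROXY protocol address as the client IP in logs
    ...
}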
According to the Nginx Ingress Controller's documentation, this feature can be enabled in the Ingress Controller using a Kubernetes ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
data:
  use-proxy-protocol: "true"
Specify the name of the ConfigMap in your Ingress controller manifest, by adding the --nginx-configmap=<insert-configmap-name> flag to the command-line arguments.
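For context, a hedged excerpt of how that flag might be wired into the controller's manifest (the container name and image are assumptions; the ConfigMap name matches the one above):
spec:
  containers:
  - name: nginx-ingress-controller
    image: <nginx-ingress-controller-image>   # placeholder image
    env:
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    args:
    - /nginx-ingress-controller
    - --nginx-configmap=$(POD_NAMESPACE)/nginx-ingress-controller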