Prometheus input to Influx Exporter not working with metric_version = 2 but works with metric_version = 1 - plugins

Relevant telegraf.conf:
[[outputs.influxdb]]
  urls = ["http://host.docker.internal:8086"]
  database = "scraped_metrics"
  skip_database_creation = false

[[inputs.prometheus]]
  urls = ["http://host.docker.internal:8181/metrics"]
  metric_version = 2
System info:
Telegraf 1.14.4 (git: HEAD c6fff6d8)
Inside a Docker container pulled from https://hub.docker.com/_/telegraf
Steps to reproduce:
Expose these metrics on localhost:8181/metrics
# TYPE mnesia_transaction_duration_us histogram
# HELP mnesia_transaction_duration_us Mnesia txn execution time
mnesia_transaction_duration_us_bucket{le="20"} 129
mnesia_transaction_duration_us_bucket{le="40"} 4026
mnesia_transaction_duration_us_bucket{le="80"} 6682
mnesia_transaction_duration_us_bucket{le="160"} 7687
mnesia_transaction_duration_us_bucket{le="320"} 7977
mnesia_transaction_duration_us_bucket{le="640"} 8043
mnesia_transaction_duration_us_bucket{le="1280"} 8048
mnesia_transaction_duration_us_bucket{le="2560"} 8050
mnesia_transaction_duration_us_bucket{le="5120"} 8051
mnesia_transaction_duration_us_bucket{le="10240"} 8053
mnesia_transaction_duration_us_bucket{le="20480"} 8053
mnesia_transaction_duration_us_bucket{le="40960"} 8057
mnesia_transaction_duration_us_bucket{le="81920"} 8057
mnesia_transaction_duration_us_bucket{le="163840"} 8058
mnesia_transaction_duration_us_bucket{le="327680"} 8058
mnesia_transaction_duration_us_bucket{le="655360"} 8058
mnesia_transaction_duration_us_bucket{le="1310720"} 8058
mnesia_transaction_duration_us_bucket{le="2621440"} 8058
mnesia_transaction_duration_us_bucket{le="5242880"} 8058
mnesia_transaction_duration_us_bucket{le="+Inf"} 8058
mnesia_transaction_duration_us_count 8058
mnesia_transaction_duration_us_sum 769500
With the given telegraf config, run telegraf and influx on localhost.
Expected behavior:
These metrics should be visible in the InfluxDB database scraped_metrics.
Actual behavior:
With metric_version = 1 the metrics are sent correctly, but I want the output of metric_version = 2, where the labels don't end up as columns in InfluxDB and instead stay as series. However, no output gets sent at all when I use metric_version = 2.
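For reference, a quick way to inspect what the input plugin parses independently of the output (a sketch; the config path inside the official container is an assumption):
# print parsed metrics to stdout instead of writing them to InfluxDB
telegraf --config /etc/telegraf/telegraf.conf --input-filter prometheus --test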
Thanks

Related

Traefik metrics working for Prometheus but Grafana dashboards are empty

I have configured Traefik (v1.7.15) and the Prometheus operator with the stable Helm chart (chart version 8.2.4).
However, I can't see any metrics data in the Grafana dashboards; they are empty.
I can also see the metrics coming from the pod IP on port 8080 with a curl command. Refer to the following metrics extract and a few important configuration manifests.
I can also see that the Traefik service monitor is in the UP state in Prometheus. I have used the same strategy for Mongo/Postgres/RabbitMQ metrics, and those Grafana dashboards show a rich set of data and work fine.
I would highly appreciate it if someone could guide me on the right track to fix and display the Traefik ingress controller metrics in Grafana, and also let me know the cause of this.
I am using the following Grafana dashboards and none of them shows data.
Dashboard IDs: 4475, 8214, 11741, 6293.
THANK YOU
Traefik Configurations:
Deployment YAML arguments
ports:
  - name: http
    containerPort: 80
  - name: admin
    containerPort: 8080
  - name: https
    containerPort: 443
args:
  #- --api
  - --web
  - --web.metrics.prometheus
  - --kubernetes
  - --logLevel=INFO
  - --configfile=/config/traefik.toml
volumeMounts:
  - mountPath: /config
    name: config
  - mountPath: /ssl
    name: ssl
Configmap TOML File
traefik.toml: |
  # traefik.toml
  logLevel = "INFO"
  defaultEntryPoints = ["http","https"]
  [entryPoints]
    [entryPoints.http]
    address = ":80"
      [entryPoints.http.redirect]
      entryPoint = "https"
    [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
      CertFile = "/ssl/tls.crt"
      KeyFile = "/ssl/tls.key"
  [metrics]
    [metrics.prometheus]
    buckets = [0.1,0.3,1.2,5.0]
Prometheus service monitor YAML
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: traefik-sm
  labels:
    release: my-prometheus
spec:
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  namespaceSelector:
    any: true
  endpoints:
    - port: admin-ui
      name: traefik-ingress-service
      targetPort: 8080
      path: /metrics
      interval: 10s
      honorLabels: true
Traefik metrics with curl
ubuntu@k8s-node1:~$ curl http://10.96.1.141:8080/metrics
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 1.3978e-05
go_gc_duration_seconds{quantile="0.25"} 1.86e-05
go_gc_duration_seconds{quantile="0.5"} 2.3194e-05
go_gc_duration_seconds{quantile="0.75"} 5.2525e-05
go_gc_duration_seconds{quantile="1"} 0.090356709
go_gc_duration_seconds_sum 12.978064956
go_gc_duration_seconds_count 3774
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 64
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 8.322768e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 2.7448991752e+10
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.579943e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 2.5932029e+08
# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
# TYPE go_memstats_gc_cpu_fraction gauge
go_memstats_gc_cpu_fraction 0.00037814152889298634
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 2.4064e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 8.322768e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 5.3641216e+07
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 1.261568e+07
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 54120
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 4.636672e+07
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 6.6256896e+07
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.5858102844353108e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 2.5937441e+08
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 3472
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 16384
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 180000
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 245760
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 1.6043632e+07
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 666961
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 851968
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 851968
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 7.2024312e+07
# HELP go_threads Number of OS threads created
# TYPE go_threads gauge
go_threads 11
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 553.04
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 11
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 6.9451776e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.58573313806e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.90099456e+08
# HELP traefik_backend_server_up Backend server is up, described by gauge value of 0 or 1.
# TYPE traefik_backend_server_up gauge
traefik_backend_server_up{backend="auth-jooqa.abc.com/",url="http://192.168.22.77:8180"}
# HELP traefik_config_last_reload_failure Last config reload failure
# TYPE traefik_config_last_reload_failure gauge
traefik_config_last_reload_failure 0
# HELP traefik_config_last_reload_success Last config reload success
# TYPE traefik_config_last_reload_success gauge
traefik_config_last_reload_success 1.585741581e+09
# HELP traefik_config_reloads_failure_total Config failure reloads
# TYPE traefik_config_reloads_failure_total counter
traefik_config_reloads_failure_total 0
# HELP traefik_config_reloads_total Config reloads
# TYPE traefik_config_reloads_total counter
traefik_config_reloads_total 4
There are too few metrics exported by traefik
If you check your exported metrics, there are too few:
$ curl -s http://10.96.1.141:8080/metrics | grep -P '^traefik_'
traefik_backend_server_up{backend="auth-jooqa.abc.com/",url="http://192.168.22.77:8180"}
traefik_config_last_reload_failure 0
traefik_config_last_reload_success 1.585741581e+09
traefik_config_reloads_failure_total 0
traefik_config_reloads_total 4
It's hard to find a ready-made Grafana dashboard for your set of metrics
Let's grep the expr field in the mentioned dashboards (4475, 8214, 11741, [6293](https://grafana.com/grafana/dashboards/6293)):
for dashboard_url in 'https://grafana.com/api/dashboards/4475/revisions/4/download' 'https://grafana.com/api/dashboards/6293/revisions/2/download' 'https://grafana.com/api/dashboards/8214/revisions/1/download' 'https://grafana.com/api/dashboards/11741/revisions/1/download' ; do
  echo "\t = Dashboard: $dashboard_url = "
  curl -s "$dashboard_url" | jq '.panels[].targets[0].expr' | grep -Po 'traefik_[a-z_]+' | sort | uniq
done
The command above returns the list of traefik_* metrics used in the expr of each dashboard:
= Dashboard: https://grafana.com/api/dashboards/4475/revisions/4/download =
traefik_backend_request_duration_seconds_sum
traefik_backend_requests_total
traefik_backend_server_up
traefik_config_reloads_total
traefik_entrypoint_requests_total
= Dashboard: https://grafana.com/api/dashboards/6293/revisions/2/download =
traefik_backend_open_connections
traefik_backend_request_duration_seconds_sum
traefik_backend_requests_total
traefik_entrypoint_open_connections
traefik_entrypoint_request_duration_seconds_sum
traefik_entrypoint_requests_total
= Dashboard: https://grafana.com/api/dashboards/8214/revisions/1/download =
traefik_backend_request_duration_seconds_sum
traefik_backend_requests_total
traefik_entrypoint_request_duration_seconds_sum
traefik_entrypoint_requests_total
= Dashboard: https://grafana.com/api/dashboards/11741/revisions/1/download =
traefik_entrypoint_open_connections
traefik_entrypoint_request_duration_seconds_sum
traefik_entrypoint_requests_total
traefik_service_open_connections
traefik_service_request_duration_seconds_count
traefik_service_request_duration_seconds_sum
traefik_service_requests_total
As you can see, only two of your five metrics are used by any of these dashboards.
Let's try to find an appropriate dashboard
Since these four dashboards don't match your metric set, let's try to find an appropriate dashboard on GitHub:
traefik_backend_server_up: 8 code results
traefik_backend_server_up or traefik_config_reloads_total: 11 code results
traefik_config_last_reload_failure OR traefik_config_last_reload_success OR traefik_config_reloads_failure_total: 1 code result
Suggestions
So, I'd suggest:
either update Traefik so that it exposes the fuller, current metric set,
or create your own dashboard from the metrics you already have; it's easy (see the sketch below).
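A couple of illustrative panel queries built only from the metrics you already export (these are sketches, not taken from any published dashboard):
# is each backend up? (0/1 per backend/url pair)
traefik_backend_server_up
# config reload rate over the last 5 minutes
rate(traefik_config_reloads_total[5m])
# timestamp of the last successful config reload
traefik_config_last_reload_success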
P.S. grafana-dashboard-builder for easier creation of Grafana dashboards
There is an open-source tool for easier creation of dashboards:
jakubplichta/grafana-dashboard-builder: Generate Grafana dashboards with YAML
Currently it supports three data-stores:
Graphite
Prometheus
InfluxDB

configuring kafka with JMX-exporter- centos 7

I want to enable Kafka monitoring and I am starting with a single-node deployment as a test. I am following the steps from https://alex.dzyoba.com/blog/jmx-exporter/
I tried the following steps; the last command, which checks for the jmx-exporter HTTP server, reports nothing. I believe this is the reason why I am not seeing metrics from Kafka (more on this below).
wget https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.6/jmx_prometheus_javaagent-0.6.jar
wget https://raw.githubusercontent.com/prometheus/jmx_exporter/master/example_configs/kafka-0-8-2.yml
export KAFKA_OPTS='-javaagent:/opt/jmx-exporter/jmx_prometheus_javaagent-0.6.jar=7071:/etc/jmx-exporter/kafka-0-8-2.yml'
/opt/kafka_2.11-0.10.1.0/bin/kafka-server-start.sh /opt/kafka_2.11-0.10.1.0/conf/server.properties
netstat -plntu | grep 7071
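For reference, two quick checks using the port and jar path from the steps above (a sketch):
# confirm the broker JVM was actually started with the javaagent
ps -ef | grep jmx_prometheus_javaagent
# confirm the exporter answers locally before involving Prometheus
curl -s http://localhost:7071/metrics | head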
The Kafka broker log on the console does not have any ERROR message.
I have Prometheus running in a container and http://IP:9090/metrics shows a bunch of metrics.
When I searched for "kafka" it returned the following:
# TYPE net_conntrack_dialer_conn_attempted_total counter
net_conntrack_dialer_conn_attempted_total{dialer_name="kafka"} 79
# TYPE net_conntrack_dialer_conn_closed_total counter
net_conntrack_dialer_conn_closed_total{dialer_name="kafka"} 0
net_conntrack_dialer_conn_established_total{dialer_name="kafka"} 0
# TYPE net_conntrack_dialer_conn_failed_total counter
net_conntrack_dialer_conn_failed_total{dialer_name="kafka",reason="refused"} 79
net_conntrack_dialer_conn_failed_total{dialer_name="kafka",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="kafka",reason="timeout"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="kafka",reason="unknown"} 79
# TYPE prometheus_sd_discovered_targets gauge
prometheus_sd_discovered_targets{config="kafka",name="scrape"} 1
# HELP prometheus_target_sync_length_seconds Actual interval to sync the scrape pool.
# TYPE prometheus_target_sync_length_seconds summary
prometheus_target_sync_length_seconds{scrape_job="kafka",quantile="0.01"} NaN
prometheus_target_sync_length_seconds{scrape_job="kafka",quantile="0.05"} NaN
prometheus_target_sync_length_seconds{scrape_job="kafka",quantile="0.5"} NaN
prometheus_target_sync_length_seconds{scrape_job="kafka",quantile="0.9"} NaN
prometheus_target_sync_length_seconds{scrape_job="kafka",quantile="0.99"} NaN
prometheus_target_sync_length_seconds_sum{scrape_job="kafka"} 0.000198245
prometheus_target_sync_length_seconds_count{scrape_job="kafka"} 1
My guess is Prometheus is not getting any metrics on port 7071, which aligns with the earlier finding that the JMX exporter is not responding on port 7071.
Can you help me enable Kafka monitoring using JMX-exporter and Prometheus?
> I have Prometheus running in a container
Because you're running Kafka outside of a container, you need to configure Prometheus to scrape your external LAN IP.
You can see from this line that the connection is being refused with your current setup:
net_conntrack_dialer_conn_failed_total{dialer_name="kafka",reason="refused"} 79
You should either run Prometheus on your host and scrape localhost:7071
Or run Kafka in a container if you want kafka:7071 to be discoverable by Prometheus
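If you keep Prometheus in the container, a minimal scrape-job sketch pointing at the host's LAN IP (the IP below is a placeholder for your host's address):
# prometheus.yml
scrape_configs:
  - job_name: kafka
    static_configs:
      - targets: ['192.168.1.50:7071']   # replace with the host's LAN IP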

How To Push Gatling Perf Results To EC2 Grafana/InfluxDB instance

I have spun up a t2.micro Ubuntu 18.04 EC2 instance and manually installed Grafana and InfluxDB on it.
Both Grafana and InfluxDB installed successfully with no errors. What I expect now is that when I run Gatling tests on my local Windows machine, the results get pushed live to InfluxDB and eventually to Grafana.
Here is an extract of my gatling.conf settings:
data {
  writers = [console, file, graphite]  # The list of DataWriters to which Gatling write simulation data (currently supported : console, file, graphite, jdbc)
  console {
    #light = false      # When set to true, displays a light version without detailed request stats
    #writePeriod = 5    # Write interval, in seconds
  }
  graphite {
    light = false                     # only send the all* stats
    host = "http://ec2-54-67-97-86.us-west-1.compute.amazonaws.com"  # The host where the Carbon server is located
    port = 2003                       # The port to which the Carbon server listens to (2003 is default for plaintext, 2004 is default for pickle)
    protocol = "tcp"                  # The protocol used to send data to Carbon (currently supported : "tcp", "udp")
    rootPathPrefix = "gatling"        # The common prefix of all metrics sent to Graphite
    bufferSize = 8192                 # GraphiteDataWriter's internal data buffer size, in bytes
    writeInterval = 1                 # GraphiteDataWriter's write interval, in seconds
  }
The problem is that I see no data in the InfluxDB instance when I run my Gatling tests locally:
ubuntu@ip-172-31-9-16:~$ influx -host ec2-54-67-97-86.us-west-1.compute.amazonaws.com
Connected to http://ec2-54-67-97-86.us-west-1.compute.amazonaws.com:8086 version 1.7.7
InfluxDB shell version: 1.7.7
> show databases
name: databases
name
----
_internal
gatling
graphite
> use graphite
Using database graphite
> show series
key
---
X-Grafana-Org-Id:
Can someone help me debug why no data is being received by InfluxDB?
I suggest you check your Graphite listener in InfluxDB.
To do that, open your influxdb.conf and find the [[graphite]] block.
With default settings it should look like this:
[[graphite]]
  # Determines whether the graphite endpoint is enabled.
  enabled = true
  database = "gatlingdb"
  retention-policy = ""
  bind-address = ":2003"
  protocol = "tcp"
  consistency-level = "one"
  templates = [
    "gatling.*.*.*.* measurement.simulation.request.status.field",
    "gatling.*.users.*.* measurement.simulation.measurement.request.field"
  ]
More info here: https://gatling.io/docs/current/realtime_monitoring/#influxdb
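If the listener looks right, a hypothetical smoke test is to push one plaintext Graphite point and check that it arrives (hostname and database are taken from this thread; port 2003 must also be reachable from outside, e.g. open in the instance's security group):
# send one point in Graphite plaintext format ("path value timestamp")
echo "gatling.smoketest.request.ok.count 1 $(date +%s)" | nc ec2-54-67-97-86.us-west-1.compute.amazonaws.com 2003
# then check it landed in the database configured for the graphite listener
influx -host ec2-54-67-97-86.us-west-1.compute.amazonaws.com -execute 'SHOW MEASUREMENTS ON "gatlingdb"'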

kubernetes volumes and sockets

I have two containers inside the same pod. One is an haproxy container, and I'm pushing the haproxy statistics to a socket inside that container. I want to access the socket inside the haproxy container from the other container. I tried to use a volume of type mkdir, but an error occurred saying that there are no unix sockets under the directory I'm trying to access.
I'm new to these technologies; please help me solve this problem.
The yaml file is as follows.
yaml file
In reference to the Kubernetes documentation:
> Every container in a Pod shares the network namespace, including the IP address and network ports.
You don't need a volume to access the haproxy statistics; just use 127.0.0.1 and the port where the haproxy statistics endpoint is bound.
Here is an example telegraf configuration for a container deployed in the same pod as an haproxy:
# Telegraf Configuration
[global_tags]
  env = "$ENV"
  tenant = "$TENANT"

[agent]
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_jitter = "5s"
  precision = ""
  debug = false
  quiet = false
  logfile = ""
  hostname = ""
  omit_hostname = false

[[outputs.influxdb]]
  urls = ["http://influxdb.host:2001"]
  database = "db_name"
  retention_policy = ""
  write_consistency = "any"
  timeout = "5s"

[[inputs.haproxy]]
  servers = [ "http://$STATS_USERNAME:$STATS_PASSWORD@127.0.0.1:$STATS_PORT/haproxy?stats" ]
The input uses the haproxy plugin and the output uses influxdb. $STATS_USERNAME, $STATS_PASSWORD and $STATS_PORT are environment variables shared between the two containers.
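For completeness, a minimal pod sketch with the two containers sharing the loopback interface (image names, the stats port and the env values are illustrative assumptions, not taken from the original setup):
apiVersion: v1
kind: Pod
metadata:
  name: haproxy-with-telegraf
spec:
  containers:
    - name: haproxy
      image: haproxy:2.0        # hypothetical image; stats must be enabled in haproxy.cfg
      ports:
        - containerPort: 80
    - name: telegraf
      image: telegraf:1.14      # hypothetical image; mount the telegraf.conf above via a ConfigMap in a real setup
      env:
        - name: STATS_PORT      # telegraf reaches the haproxy stats page via 127.0.0.1:$STATS_PORT
          value: "9000"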

How to configure a load balancer in a Fuse cluster across three machines (machine1, machine2 and machine3)?

Following are the steps I followed to set up a cluster on 3 different machines:
1. Unzip JBoss fuse in three different folders, so that you have the following configuration:
- machine1/jboss-fuse-6.3.0.redhat-187
- machine2/jboss-fuse-6.3.0.redhat-187
- machine3/jboss-fuse-6.3.0.redhat-187
2. Edit etc/org.apache.karaf.management.cfg and change rmiRegistryPort and rmiServerPort, assigning a unique port:
**#machine1**
rmiRegistryPort = 1099
rmiServerPort = 44444
**#machine2**
rmiRegistryPort = 1100
rmiServerPort = 44445
**#machine3**
rmiRegistryPort = 1101
rmiServerPort = 44446
3. Edit etc/org.apache.karaf.shell.cfg and change sshPort, assigning a unique port:
#machine1
sshPort = 8101
#machine2
sshPort = 8102
#machine3
sshPort = 8103
4. Edit etc/system.properties. Change karaf.name, org.osgi.service.http.port and activemq.port, assigning a unique port:
#machine1
karaf.name = root1
org.osgi.service.http.port=8181
activemq.port = 61616
#machine2
karaf.name = root2
org.osgi.service.http.port=8182
activemq.port = 61617
#machine3
karaf.name = root3
org.osgi.service.http.port=8183
activemq.port = 61618
5. Start the root1 container:
./fuse
6. And create the Fabric:
JBossFuse:karaf@root1> fabric:create --new-user administrator --new-user-password password --new-user-role Administrator --zookeeper-password ZooPass1 --resolver manualip --manual-ip 192.168.1.9 --wait-for-provisioning
The IP address above (192.168.1.9) is my root1/machine1 address.
7. Now, start the root2 Container and join the Fabric:
./fuse
JBossFuse:karaf@root2> fabric:join 192.168.1.10:2181
Ensemble password: ZooPass1
8. Now, start the root3 Container and join the Fabric:
./fuse
JBossFuse:karaf@root3> fabric:join 192.168.1.11:2181
Ensemble password: ZooPass1
9. Run the following command to add the containers to the ensemble:
JBossFuse:karaf@root1> fabric:ensemble-add root2 root3
This will change of the zookeeper connection string.
Are you sure want to proceed(yes/no):yes
JBossFuse:karaf@root1> fabric:ensemble-list
[id]
root1
root2
root3
Then I deployed the REST service on all 3 nodes, created the profile, and also added the required profile with the HTTP gateway for load balancing and HA, but requests do not go through machine 2 and machine 3. I am also not able to access the machine 1 and machine 2 hawtio consoles at the URLs below.
192.168.1.10:8182/hawtio/login
192.168.1.10:8183/hawtio/login
Can anybody help me achieve load balancing in a cluster environment with 3 different machines?
I would suggest -- don't do any of this :) If you're using Fabric8, install one instance of Fuse, do fabric:create, then use container-create-ssh --host localhost to set up other containers on the same machine. That will automatically take care of all the port conflicts that I suspect are at the root of your problem. Fabric8 uses many, many ports, and trying to fix them all up manually is a ghastly job.
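A minimal sketch of that approach (container names and SSH credentials are placeholders, and the exact option names should be verified against the fabric commands in your Fuse 6.3 distribution):
JBossFuse:karaf@root> fabric:create --wait-for-provisioning
JBossFuse:karaf@root> fabric:container-create-ssh --host localhost --user fuse --password fuse node2
JBossFuse:karaf@root> fabric:container-create-ssh --host localhost --user fuse --password fuse node3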