filebeat cannot assign requested address - elastic-stack

I am trying to read syslog information with Filebeat, which is installed in Docker. I get this error message:
ERROR [syslog] syslog/input.go:150 Error starting the server: listen tcp 192.168.1.142:514: bind: cannot assign requested address
Here is the config file filebeat.yml:
filebeat.inputs:
- type: syslog
  format: rfc5424
  protocol.tcp:
    host: "192.168.1.142:514"
#========================== Elasticsearch output ===============================
output.elasticsearch:
  hosts: ["${ELASTICSEARCH_HOST}:9200"]
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
#============================== Dashboards =====================================
setup.dashboards:
  enabled: true
#============================== Kibana =========================================
setup.kibana:
  host: "${KIBANA_HOST}:5601"
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
#================================== General ===================================
name: test_pc_ecs_log
tags: ["syslog"]
Here is /etc/rsyslog.conf
# provides UDP syslog reception
module(load="imudp")
input(type="imudp" port="514")
# provides TCP syslog reception
module(load="imtcp")
input(type="imtcp" port="514")
I have checked the connection with netstat and telnet, and both succeed:
netstat -4altunp | grep 514
tcp 0 0 0.0.0.0:514 0.0.0.0:* LISTEN 1332/rsyslogd
udp 0 0 0.0.0.0:514 0.0.0.0:* 1332/rsyslogd
I am following the config example from the syslog input documentation.
Has anyone set up Filebeat for syslog reading?
Thanks
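A likely cause, for anyone hitting the same error: 192.168.1.142 is an address on the Docker host, so no interface inside the Filebeat container owns it and the bind fails. A minimal sketch of a workaround (the 0.0.0.0 bind and the port publishing are assumptions, not taken from the question) is to listen on all interfaces inside the container and publish the port:
filebeat.inputs:
- type: syslog
  format: rfc5424
  protocol.tcp:
    # bind inside the container; 192.168.1.142 only exists on the host
    host: "0.0.0.0:514"
Then start the container with docker run -p 514:514 ... (or with --network host, in which case binding to 192.168.1.142 should also work, since the container then shares the host's interfaces).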

Related

Kubernetes: using container as proxy

I have the following pod setup:
apiVersion: v1
kind: Pod
metadata:
  name: proxy-test
  namespace: test
spec:
  containers:
  - name: container-a
    image: <Image>
    imagePullPolicy: Always
    ports:
    - name: http-port
      containerPort: 8083
  - name: container-proxy
    image: <Image>
    ports:
    - name: server
      containerPort: 7487
      protocol: TCP
  - name: container-b
    image: <Image>
I exec into container-b and execute the following curl request:
curl --proxy localhost:7487 -X POST http://localhost:8083/
For some reason, http://localhost:8083/ is called directly and the proxy is ignored. Can someone explain why this happens?
Environment
I replicated the scenario on kubeadm and GCP GKE Kubernetes clusters to see if there is any difference. There is none: they behave the same, so I assume AWS EKS behaves the same too.
I created a pod with 3 containers within:
apiVersion: v1
kind: Pod
metadata:
  name: proxy-pod
spec:
  containers:
  - image: ubuntu # client where connection will go from
    name: ubuntu
    command: ['bash', '-c', 'while true ; do sleep 60; done']
  - name: proxy-container # proxy - that's obvious
    image: ubuntu
    command: ['bash', '-c', 'while true ; do sleep 60; done']
  - name: server # regular nginx server which listens to port 80
    image: nginx
For this test stand I installed the squid proxy on proxy-container (what squid is and how to install it). By default it listens on port 3128.
curl was installed on the ubuntu client container as well (plus the net-tools package as a bonus, since it provides netstat).
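For reference, a rough sketch of that setup inside the two Ubuntu containers (standard Debian/Ubuntu package names; run as root):
# on proxy-container
apt-get update && apt-get install -y squid
squid   # starts the daemon, listening on 3128 by default
# on the ubuntu (client) container
apt-get update && apt-get install -y curl net-tools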
Tests
Note!
I used 127.0.0.1 instead of localhost because squid has some name-resolution quirks and I didn't find an easy/fast solution.
curl is used with the -v flag for verbosity.
We have proxy on 3128 and nginx on 80 within the pod:
# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:3128 0.0.0.0:* LISTEN -
tcp6 0 0 :::80 :::* LISTEN -
curl directly:
# curl 127.0.0.1 -vI
* Trying 127.0.0.1:80... # connection goes directly to port 80 which is expected
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> HEAD / HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.68.0
> Accept: */*
curl via proxy:
# curl --proxy 127.0.0.1:3128 127.0.0.1:80 -vI
* Trying 127.0.0.1:3128... # connecting to proxy!
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 3128 (#0) # connected to proxy
> HEAD http://127.0.0.1:80/ HTTP/1.1 # going further to nginx on `80`
> Host: 127.0.0.1
> User-Agent: curl/7.68.0
> Accept: */*
squid logs:
# cat /var/log/squid/access.log
1635161756.048 1 127.0.0.1 TCP_MISS/200 958 GET http://127.0.0.1/ - HIER_DIRECT/127.0.0.1 text/html
1635163617.361 0 127.0.0.1 TCP_MEM_HIT/200 352 HEAD http://127.0.0.1/ - HIER_NONE/- text/html
NO_PROXY
The NO_PROXY environment variable might be set; however, by default it's empty.
I added it manually:
# export NO_PROXY=127.0.0.1
# printenv | grep -i proxy
NO_PROXY=127.0.0.1
Now a curl request via the proxy looks like this:
# curl --proxy 127.0.0.1:3128 127.0.0.1 -vI
* Uses proxy env variable NO_PROXY == '127.0.0.1' # curl detects NO_PROXY envvar
* Trying 127.0.0.1:80... # and ignores the proxy, connection goes directly
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> HEAD / HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.68.0
> Accept: */*
It's possible to override the NO_PROXY envvar when executing curl by using the --noproxy flag:
--noproxy no-proxy-list
Comma-separated list of hosts which do not use a proxy, if one is specified. The only wildcard is a single * character, which matches all hosts, and effectively disables the proxy. Each name in this list is matched as either a domain which contains the hostname, or the hostname itself. For example, local.com would match local.com, local.com:80, and www.local.com, but not www.notlocal.com. (Added in 7.19.4.)
Example:
# curl --proxy 127.0.0.1:3128 --noproxy "" 127.0.0.1 -vI
* Trying 127.0.0.1:3128... # connecting to proxy as it was supposed to
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 3128 (#0) # connection to proxy is established
> HEAD http://127.0.0.1/ HTTP/1.1 # connection to nginx on port 80
> Host: 127.0.0.1
> User-Agent: curl/7.68.0
> Accept: */*
This proves that the proxy works with localhost!
Another possibility is that something is configured incorrectly in the proxy used in the question. You can take this pod, install squid and curl into both containers, and try it yourself.

Microk8s Ingress returns 502

I'm new to Kubernetes and trying to do a simple project connecting MySQL and phpMyAdmin using Kubernetes on my Ubuntu 20.04. I created the needed components; here they are.
mysql.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: mysql-root-password
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: mysql-user-username
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: mysql-user-password
        - name: MYSQL_DATABASE
          valueFrom:
            configMapKeyRef:
              name: mysql-configmap
              key: mysql-database
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  selector:
    app: mysql
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
phpmyadmin.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: phpmyadmin
  labels:
    app: phpmyadmin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: phpmyadmin
  template:
    metadata:
      labels:
        app: phpmyadmin
    spec:
      containers:
      - name: phpmyadmin
        image: phpmyadmin
        ports:
        - containerPort: 3000
        env:
        - name: PMA_HOST
          valueFrom:
            configMapKeyRef:
              name: mysql-configmap
              key: database_url
        - name: PMA_PORT
          value: "3306"
        - name: PMA_USER
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: mysql-user-username
        - name: PMA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: mysql-user-password
---
apiVersion: v1
kind: Service
metadata:
  name: phpmyadmin-service
spec:
  selector:
    app: phpmyadmin
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 3000
ingress-service.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  defaultBackend:
    service:
      name: phpmyadmin-service
      port:
        number: 8080
  rules:
  - host: test.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: phpmyadmin-service
            port:
              number: 8080
When I execute microk8s kubectl get ingress ingress-service, the output is:
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-service public test.com 127.0.0.1 80 45s
When I tried to access test.com, I got the 502 error.
My kubectl version:
Client Version: v1.22.2-3+9ad9ee77396805
Server Version: v1.22.2-3+9ad9ee77396805
My microk8s client and server versions:
Client:
Version: v1.5.2
Revision: 36cc874494a56a253cd181a1a685b44b58a2e34a
Go version: go1.15.15
Server:
Version: v1.5.2
Revision: 36cc874494a56a253cd181a1a685b44b58a2e34a
UUID: b2bf55ad-6942-4824-99c8-c56e1dee5949
As for my microk8s version itself, I followed the installation instructions from here, so it should be 1.21/stable. (I couldn't find a way to check the exact version; if someone knows how, please tell me.)
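(A hedged suggestion, assuming MicroK8s was installed as a snap per those instructions: the tracked channel and version are usually visible via the snap tooling.)
snap list microk8s   # the Tracking column shows the channel, e.g. 1.21/stable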
mysql.yaml logs:
2021-10-14 07:05:38+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.26-1debian10 started.
2021-10-14 07:05:38+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2021-10-14 07:05:38+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.26-1debian10 started.
2021-10-14 07:05:38+00:00 [Note] [Entrypoint]: Initializing database files
2021-10-14T07:05:38.960693Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.26) initializing of server in progress as process 41
2021-10-14T07:05:38.967970Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2021-10-14T07:05:39.531763Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2021-10-14T07:05:40.591862Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1 is enabled for channel mysql_main
2021-10-14T07:05:40.592247Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1.1 is enabled for channel mysql_main
2021-10-14T07:05:40.670594Z 6 [Warning] [MY-010453] [Server] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
2021-10-14 07:05:45+00:00 [Note] [Entrypoint]: Database files initialized
2021-10-14 07:05:45+00:00 [Note] [Entrypoint]: Starting temporary server
2021-10-14T07:05:45.362827Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.26) starting as process 90
2021-10-14T07:05:45.486702Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2021-10-14T07:05:45.845971Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2021-10-14T07:05:46.022043Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1 is enabled for channel mysql_main
2021-10-14T07:05:46.022189Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1.1 is enabled for channel mysql_main
2021-10-14T07:05:46.023446Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2021-10-14T07:05:46.023728Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2021-10-14T07:05:46.026088Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
2021-10-14T07:05:46.044967Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: /var/run/mysqld/mysqlx.sock
2021-10-14T07:05:46.045036Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.26' socket: '/var/run/mysqld/mysqld.sock' port: 0 MySQL Community Server - GPL.
2021-10-14 07:05:46+00:00 [Note] [Entrypoint]: Temporary server started.
Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/leap-seconds.list' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/zone1970.tab' as time zone. Skipping it.
2021-10-14 07:05:48+00:00 [Note] [Entrypoint]: Creating database testing-database
2021-10-14 07:05:48+00:00 [Note] [Entrypoint]: Creating user testinguser
2021-10-14 07:05:48+00:00 [Note] [Entrypoint]: Giving user testinguser access to schema testing-database
2021-10-14 07:05:48+00:00 [Note] [Entrypoint]: Stopping temporary server
2021-10-14T07:05:48.422053Z 13 [System] [MY-013172] [Server] Received SHUTDOWN from user root. Shutting down mysqld (Version: 8.0.26).
2021-10-14T07:05:50.543822Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.26) MySQL Community Server - GPL.
2021-10-14 07:05:51+00:00 [Note] [Entrypoint]: Temporary server stopped
2021-10-14 07:05:51+00:00 [Note] [Entrypoint]: MySQL init process done. Ready for start up.
2021-10-14T07:05:51.711889Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.26) starting as process 1
2021-10-14T07:05:51.725302Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2021-10-14T07:05:51.959356Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2021-10-14T07:05:52.162432Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1 is enabled for channel mysql_main
2021-10-14T07:05:52.162568Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1.1 is enabled for channel mysql_main
2021-10-14T07:05:52.163400Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2021-10-14T07:05:52.163556Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2021-10-14T07:05:52.165840Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
2021-10-14T07:05:52.181516Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2021-10-14T07:05:52.181562Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.26' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
phpmyadmin.yaml logs:
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.1.114.139. Set the 'ServerName' directive globally to suppress this message
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.1.114.139. Set the 'ServerName' directive globally to suppress this message
[Thu Oct 14 03:57:32.653011 2021] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.51 (Debian) PHP/7.4.24 configured -- resuming normal operations
[Thu Oct 14 03:57:32.653240 2021] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
Here is also the Allocatable section from the describe nodes command:
Allocatable:
cpu: 4
ephemeral-storage: 113289380Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 5904508Ki
pods: 110
and the Allocated resources:
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 550m (13%) 200m (5%)
memory 270Mi (4%) 370Mi (6%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Any help? Thanks in advance.
Turns out it was a mistake of mine: I specified phpMyAdmin's container port as 3000, while the default image listens on port 80. After changing the containerPort and phpmyadmin-service's targetPort to 80, the phpMyAdmin page opens.
Sorry to kkopczak and AndD for the fuss, and big thanks for trying to help! :)
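For completeness, the relevant parts after the fix would look roughly like this (a sketch based on the correction above):
# phpmyadmin Deployment
        ports:
        - containerPort: 80   # the phpmyadmin image listens on 80 by default
# phpmyadmin-service
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80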

Can't connect to postgres db from pgadmin (both running on docker)?

I'm running postgres and pgadmin4 on Docker with docker-compose up on Fedora 28, and I'm having trouble creating a new DB server from pgadmin's web console.
This is the docker-compose.yml file I'm using.
version: '3.0'
services:
  db:
    image: postgres:9.6
    ports:
      - 5432:5432/tcp
    environment:
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=admin
      - POSTGRES_DB=mydb
  pgadmin:
    image: dpage/pgadmin4
    ports:
      - 5454:5454/tcp
    environment:
      - PGADMIN_DEFAULT_EMAIL=admin@mydomain.com
      - PGADMIN_DEFAULT_PASSWORD=postgres
      - PGADMIN_LISTEN_PORT=5454
What should I write in the Create new server > Connection tab > "Host name/address" field? If I type in localhost or 127.0.0.1 I get an error (Unable to connect; see screenshot1 and screenshot2). Only if I type db (the service's name as specified in the yml file) does pgadmin accept it and create a db server with a postgres database called mydb.
Why? How do I find the IP that goes in the address field?
Furthermore, on Fedora28:
$ netstat -napt | grep LIST
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 192.168.122.1:53 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:3350 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:3389 0.0.0.0:* LISTEN -
tcp6 0 0 :::5454 :::* LISTEN -
tcp6 0 0 ::1:631 :::* LISTEN -
tcp6 0 0 :::5432 :::* LISTEN -
$
I encountered this problem just recently too. There are two approaches I found:
1) See here. Basically, you just search for the IP address of the postgres container and use that IP address in pgadmin4:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
194d4a5f9dd0 dpage/pgadmin4 "/entrypoint.sh" 48 minutes ago Up 48 minutes 443/tcp, 0.0.0.0:8080->80/tcp docker-postgis_pgadmin_1
334d5bdc87f7 kartoza/postgis:11.0-2.5 "/bin/sh -c /docke..." 48 minutes ago Up 48 minutes (healthy) 0.0.0.0:5432->5432/tcp docker-postgis_db_1
In my case, the postgres container ID is 334d5bdc87f7. Then look for the IP address:
$ docker inspect 334d5bdc87f7 | grep IPAddress
"SecondaryIPAddresses": null,
"IPAddress": "",
"IPAddress": "172.18.0.2",
When I used 172.18.0.2 in pgadmin4, I connected to the database! Yey!
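(As a side note, docker inspect's -f/--format flag can extract the address directly instead of grepping; the container name here comes from the docker ps listing above:)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' docker-postgis_db_1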
2) The 2nd approach is easier. Instead of using localhost or 127.0.0.1 or ::1, I used my IP address in my local network (e.g. in your case 192.168.122.1?). Afterwards, I connected to the postgres container!
From my reading of the docs and testing this myself, you're doing it right by using the database service name from the docker-compose yml file as the "Host name/address" field value in pgAdmin.
https://docs.docker.com/compose/networking/
In your "pgadmin" section I would use port values of 8080 or 80; why 5454?
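If you drop PGADMIN_LISTEN_PORT and let the image listen on its default port 80, the pgadmin service would look something like this (a sketch; the 8080 host port is an arbitrary choice):
  pgadmin:
    image: dpage/pgadmin4
    ports:
      - 8080:80/tcp
    environment:
      - PGADMIN_DEFAULT_EMAIL=admin@mydomain.com
      - PGADMIN_DEFAULT_PASSWORD=postgres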

Message addressed to one component is delivered to another

Following is my ejabberd YAML configuration:
-
  port: 8888
  ip: "::"
  module: ejabberd_service
  access: all
  shaper_rule: fast
  ip: "127.0.0.1"
  privilege_access:
    roster: "both"
    message: "outgoing"
    presence: "roster"
  delegations:
    "urn:xmpp:mam:1":
      filtering: ["node"]
    "http://jabber.org/protocol/pubsub":
      filtering: []
  hosts:
    "mycomponent.p-pc":
      password: "secret"
    "sender.p-pc":
      password: "secret"
Messages addressed to mycomponent.p-pc are delivered to sender.p-pc.
The hosts variable is a list of aliases for a single component, not a list of different components.
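One way to route the two components separately is to give each its own ejabberd_service listener on its own port, along these lines (a sketch; the second port number is an arbitrary choice):
-
  port: 8888
  ip: "127.0.0.1"
  module: ejabberd_service
  access: all
  shaper_rule: fast
  hosts:
    "mycomponent.p-pc":
      password: "secret"
-
  port: 8889
  ip: "127.0.0.1"
  module: ejabberd_service
  access: all
  shaper_rule: fast
  hosts:
    "sender.p-pc":
      password: "secret"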

Eureka peers not synchronized

I'm prototyping a set of Spring Cloud + Netflix OSS applications and have run into trouble with Eureka. In our setup, we have a Spring Cloud Config Server + Eureka Server, and then 2 modules that utilize that server component for bootstrapping and service discovery.
The problem I run into is that if I spin up 2 instances of the Eureka Server and try to pair them (based on the Two Peer Aware Eureka Servers in the docs) they don't synchronize with each other. See configs below and/or the code on GitHub.
Essentially, Peer1 starts up and looks fine. Peer2 will start up and look fine, with both peers showing each other in the services. However, if the "UserService" module spins up and registers itself with Peer1, Peer2 will never see it. If we then spin up the "Web" module pointing to Peer2, it can never resolve the UserService. They basically act in isolation.
I've tried several combinations of setting the serviceUrl both on the server and the instance of the Eureka servers but to no avail. Am I just configuring things wrong?
Peer 1 / default config:
server:
  port: 8888
eureka:
  dashboard:
    path: /dashboard
  instance:
    hostname: peer1
    leaseRenewalIntervalInSeconds: 3
  client:
    serviceUrl:
      defaultZone: ${eureka.server.serviceUrl:http://localhost:${server.port}/eureka/}
  server:
    serviceUrl:
      defaultZone: http://localhost:${server.port}/eureka/
      peer2: http://peer2/eureka/
    waitTimeInMsWhenSyncEmpty: 0
spring:
  application:
    name: demo-config-service
  profiles:
    active: native
  # required for Spring Cloud Bus
  rabbitmq:
    host: ${DOCKER_IP:192.168.59.103}
    port: 5672
    username: guest
    password: guest
    virtualHost: /
  cloud:
    config:
      server:
        prefix: /configs
        native:
          searchLocations: /Users/dave/workspace/oss/distributed-spring/modules/config-server/src/main/resources/testConfigs
#        git:
#          uri: https://github.com/joshlong/microservices-lab-configuration
Peer 2 config:
server:
  port: 8889
eureka:
  dashboard:
    path: /dashboard
  instance:
    hostname: peer2
    leaseRenewalIntervalInSeconds: 3
  client:
    serviceUrl:
      defaultZone: ${eureka.server.serviceUrl:http://localhost:${server.port}/eureka/}
  server:
    serviceUrl:
      defaultZone: http://localhost:8888/eureka/
      peer1: http://peer1/eureka/
    waitTimeInMsWhenSyncEmpty: 0
spring:
  application:
    name: demo-config-service
  profiles:
    active: native
  # required for Spring Cloud Bus
  rabbitmq:
    host: ${DOCKER_IP:192.168.59.103}
    port: 5672
    username: guest
    password: guest
    virtualHost: /
  cloud:
    config:
      server:
        prefix: /configs
        native:
          searchLocations: /Users/dave/workspace/oss/distributed-spring/modules/config-server/src/main/resources/testConfigs
#        git:
#          uri: https://github.com/joshlong/microservices-lab-configuration
There were a few problems. The defaultZone needs to be in the client section, as noted in the docs, and the defaultZone URL needs the port.
/etc/hosts
127.0.0.1 peer1
127.0.0.1 peer2
Peer 1 Config (Partial)
eureka:
  instance:
    hostname: peer1
    leaseRenewalIntervalInSeconds: 3
  client:
    serviceUrl:
      defaultZone: http://peer2:8889/eureka/
Peer 2 Config (Partial)
eureka:
  dashboard:
    path: /dashboard
  instance:
    hostname: peer2
    leaseRenewalIntervalInSeconds: 3
  client:
    serviceUrl:
      defaultZone: http://peer1:8888/eureka/
  server:
    waitTimeInMsWhenSyncEmpty: 0
User service config (Partial). The config port was wrong.
spring:
  application:
    name: user-service
  cloud:
    config:
      uri: http://localhost:8888/configs
You can see user-service replicated to both peer1 and peer2. I can post a PR to your code if you want.
(Screenshots of the Peer 1 and Peer 2 dashboards were attached here.)
@spencergibb's answer didn't mention why this hack-ish workaround is required. There is a gotcha with running more than one Eureka server on the same host. Netflix code (com.netflix.eureka.cluster.PeerEurekaNodes.isThisMyUrl) filters out peer URLs that are on the same host. This may have been done to prevent the server registering as its own peer (I'm guessing here), but because they don't check the port, peer awareness doesn't work unless the Eureka hostnames in eureka.client.serviceUrl.defaultZone are different. The hacky workaround is to define unique hostnames and then map them to 127.0.0.1 in the /etc/hosts file (or its Windows equivalent).
I've created a blog post with the details of Eureka here; it fills in some missing detail from the Spring docs and the Netflix blog. It is the result of several days of debugging and digging through source code.