Hello, I configured HAProxy following the DigitalOcean manual, with round robin for Percona 5.7 databases, but on the HAProxy server, when I try to connect to the database, I get an error.
On the haproxy server:
mysql -h 127.0.0.1 -u haproxy_root -p -e "SHOW DATABASES"
And I get this error:
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 2
Haproxy config:
global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
#log loghost local0 info
maxconn 1024
#chroot /usr/share/haproxy
user haproxy
group haproxy
daemon
#debug
#quiet
defaults
log global
mode http
option tcplog
option dontlognull
retries 3
option redispatch
maxconn 1024
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
listen galera_cluster
bind 127.0.0.1:3306
mode tcp
option httpchk
balance leastconn
server galera-node01 192.168.0.101:3306 check port 9200
server galera-node02 192.168.0.102:3306 check port 9200
server galera-node03 192.168.0.103:3306 check port 9200
If I connect directly to the database at 192.168.0.101, everything works and I get a response, but when I make the request through HAProxy at 127.0.0.1 I get this error:
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading
initial communication packet', system error: 2
My xinetd config on the MySQL nodes:
# default: on
# description: mysqlchk
service mysqlchk
{
# this is a config for xinetd, place it in /etc/xinetd.d/
disable = no
flags = REUSE
socket_type = stream
type = UNLISTED
port = 9200
wait = no
user = nobody
server = /usr/bin/clustercheck
server_args = percona percona
log_on_failure += USERID
only_from = 0.0.0.0/0
#
# Passing arguments to clustercheck
# <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>"
# Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local"
# Compatibility: server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local"
# 55-to-56 upgrade: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.extra"
#
# recommended to put the IPs that need
# to connect exclusively (security purposes)
per_source = UNLIMITED
}
If I telnet to a PXC node on port 9200, I get:
telnet 192.168.0.101 9200
Trying 192.168.0.101...
Connected to 192.168.0.101.
Escape character is '^]'.
HTTP/1.1 503 Service Unavailable
Content-Type: text/plain
Connection: close
Content-Length: 57
Percona XtraDB Cluster Node is not synced or non-PRIM.
Connection closed by foreign host.
The most common reason for this is that all nodes in the cluster are down. If you have enabled your HAProxy stats, check whether all nodes are up. If they are not, your mysqlchk service is probably unable to connect to the cluster nodes properly.
Check that your mysqlchk xinetd service has the proper server_args configured. Once these are set, restart xinetd and telnet to port 9200 to validate (see the commands after the config below):
[root@node02 log]# cat /etc/xinetd.d/mysqlchk
# default: on
# description: mysqlchk
service mysqlchk
{
# this is a config for xinetd, place it in /etc/xinetd.d/
...
server = /usr/bin/clustercheck
server_args = percona percona
...
# Passing arguments to clustercheck
# <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>"
# Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local"
}
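For example, on a node with systemd (the service name and paths are assumptions; the percona/percona credentials match the server_args above), the restart-and-validate steps could look like this:
sudo systemctl restart xinetd
telnet 192.168.0.101 9200
# a synced node should answer with "HTTP/1.1 200 OK" and
# "Percona XtraDB Cluster Node is synced."
# clustercheck normally reports 200 only when the node is in the Synced
# state (wsrep_local_state = 4), which you can confirm directly:
mysql -u percona -ppercona -e "SHOW STATUS LIKE 'wsrep_local_state';"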
UPDATE:
A more thorough procedure, including making sure mysqlchk is configured, can be found here https://www.percona.com/doc/percona-xtradb-cluster/5.6/howtos/virt_sandbox.html
I set up a MongoDB cluster replica set and added an admin user without SSL, following the link below.
Link : https://github.com/arun2pratap/mongodbClusterForWindowsOneClick
Environment :
OS: Windows Server 2019 (all instances on one Windows server)
1 mongos (port: 26000)
2 shards (ports: sh01: 27011 ~ 27013 / sh02: 27021 ~ 27023)
1 config server replica set (ports: csrs: 26001 ~ 26003)
After building the cluster without SSL, I tried to upgrade it to use SSL following the MongoDB manual (v4.4) and other links, but I couldn't find a clear answer or guide.
Here are the links I referred to:
https://www.mongodb.com/docs/v4.4/tutorial/upgrade-cluster-to-ssl/
https://www.mongodb.com/docs/v4.4/tutorial/deploy-replica-set-with-keyfile-access-control/
https://www.mongodb.com/community/forums/t/cannot-start-mongodb-service-after-configuring-tls/2802
MongoDB Shell connection errors using test self signed certificates
https://www.mongodb.com/community/forums/t/creating-openssl-server-certificates-for-testing-failed/109058
I configured the conf files (like sh011.conf below) following the manuals and guides, and started the nodes, but it seems only the csrs instances started, because I couldn't find the other instances' ports listening.
1. sh011.conf
sharding:
clusterRole: shardsvr
replication:
replSetName: sh01
net:
bindIpAll: true
port: 27011
tls:
mode: requireTLS
certificateKeyFile: C:\database\MongoDB\Server\4.4\bin\certifications\test-server1.pem
CAFile: C:\database\MongoDB\Server\4.4\bin\certifications\test-ca.pem
systemLog:
destination: file
path: sh01/sh011/log/sh011.log
logAppend: true
storage:
dbPath: sh01/sh011/db/
2. mongos.conf
sharding:
configDB: csrs/WIN-BKEV4AO0KED:26001,WIN-BKEV4AO0KED:26002,WIN-BKEV4AO0KED:26003
net:
bindIpAll: true
port: 26000
tls:
mode: requireTLS
certificateKeyFile: C:\database\MongoDB\Server\4.4\bin\certifications\test-server1.pem
CAFile: C:\database\MongoDB\Server\4.4\bin\certifications\test-ca.pem
systemLog:
destination: file
path: router/log/mongos.log
logAppend: true
security:
authorization: enabled
clusterAuthMode: x509
3. "netstat -an" output
C:\database\MongoDB\Server\4.4\bin>netstat -an
Active Connections
Proto Local Address Foreign Address State
TCP 0.0.0.0:22 0.0.0.0:0 LISTENING
TCP 0.0.0.0:135 0.0.0.0:0 LISTENING
TCP 0.0.0.0:445 0.0.0.0:0 LISTENING
TCP 0.0.0.0:5357 0.0.0.0:0 LISTENING
TCP 0.0.0.0:5432 0.0.0.0:0 LISTENING
TCP 0.0.0.0:5985 0.0.0.0:0 LISTENING
TCP 0.0.0.0:26001 0.0.0.0:0 LISTENING
TCP 0.0.0.0:26002 0.0.0.0:0 LISTENING
TCP 0.0.0.0:26003 0.0.0.0:0 LISTENING
TCP 0.0.0.0:47001 0.0.0.0:0 LISTENING
When I checked the log files, each shard node had logged an SSL error like the ones below:
{"t":{"$date":"2022-05-09T14:34:54.933+09:00"},"s":"I", "c":"NETWORK", "id":4712102, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"Host failed in replica set","attr":{"replicaSet":"csrs","host":"WIN-BKEV4AO0KED:26001","error":{"code":6,"codeName":"HostUnreachable","errmsg":"Error connecting to WIN-BKEV4AO0KED:26001 (192.168.100.202:26001) :: caused by :: SSL peer certificate validation failed: (80096004)The signature of the certificate cannot be verified."},"action":{"dropConnections":true,"requestImmediateCheck":false,"outcome":{"host":":26001","success":false}}}}
{"t":{"$date":"2022-05-09T14:34:55.164+09:00"},"s":"I", "c":"-", "id":4333222, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"RSM received failed isMaster","attr":{"host":"WIN-BKEV4AO0KED:26003","error":"HostUnreachable: Error connecting to WIN-BKEV4AO0KED:26003 (192.168.100.202:26003) :: caused by :: SSL peer certificate validation failed: (80096004)The signature of the certificate cannot be verified.","replicaSet":"csrs","isMasterReply":"{}"}}
I thought the cause of the issue was related to host names, so I configured the hosts file.
Then I re-created the certificate files for the CA, server, and client following the manual.
1. openssl-test-server.conf
[ alt_names ]
DNS.1 = WIN-BKEV4AO0KED
IP.1 = 192.168.100.202
[ req_dn ]
countryName = Country Name (2 letter code)
countryName_default = AA
countryName_min = 2
countryName_max = 2
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = City
stateOrProvinceName_max = 64
localityName = Locality Name (eg, city)
localityName_default = City
localityName_max = 64
organizationName = Organization Name (eg, company)
organizationName_default = DevCompany
organizationName_max = 64
organizationalUnitName = Organizational Unit Name (eg, section)
organizationalUnitName_default = Dev
organizationalUnitName_max = 64
commonName = Common Name (eg, YOUR name)
commonName_default = WIN-BKEV4AO0KED
commonName_max = 64
But mongos and the other instances still do not start.
So I think some configuration is wrong. I want to know what I missed or got wrong for SSL.
Finally, I found the cause of the issue and how to start the MongoDB cluster with SSL myself.
The root cause was that I couldn't start the MongoDB instances (mongos, mongod) with SSL enabled because I was missing some parameters in the start command, as shown below:
Original start command:
$ mongod -f csrs1.conf
Modified start command:
$ mongod -f csrs1.conf --tlsMode requireTLS --tlsCertificateKeyFile test-server1.pem --tlsCAFile test-ca.pem
Note: I did not set up MongoDB as a Windows service; I control it from the prompt.
When I generated the certificates based on the default settings and started each MongoDB node with the new command, it worked fine.
I also tried modifying the START.bat file for convenience to use the new command above, but that did not work, so I opened a prompt for each node and executed the start command manually.
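To verify that a node actually came up with TLS, a quick connection test from the shell can help (a sketch; test-client.pem is an assumed client certificate issued by the same test CA):
mongo --tls --host WIN-BKEV4AO0KED --port 26000 --tlsCAFile test-ca.pem --tlsCertificateKeyFile test-client.pem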
I hope this information helps.
I am trying to load balance two servers using HAProxy v1.8, but in my case the backends are domain names instead of IP addresses.
My HAProxy config looks like this:
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
pidfile /var/run/rh-haproxy18-haproxy.pid
user haproxy
group haproxy
daemon
stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
spread-checks 21
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
# Default ciphers to use on SSL-enabled listening sockets.
# For more information, see ciphers(1SSL). This list is from:
# https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
# An alternative list with additional directives can be obtained from
# https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
ssl-default-bind-options no-sslv3
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 10000
balance roundrobin
frontend https-443
bind *:443
mode http
option httplog
acl ACL_global.domain.com hdr(host) -i global.domain.com
use_backend www-443-app if ACL_global.domain.com
backend www-443-app
balance roundrobin
mode http
option httpchk GET /health
option forwardfor
http-check expect status 200
server backendnode1 app1.domain.com:443 check
server backendnode2 app2.domain.com:443 check
frontend health-443
bind *:8443
acl backend_dead nbsrv(www-443-app) lt 1
monitor-uri /haproxy_status
monitor fail if backend_dead
listen stats # Define a listen section called "stats"
bind :9000 # Listen on localhost:9000
mode http
stats enable # Enable stats page
stats hide-version # Hide HAProxy version
stats realm Haproxy\ Statistics # Title text for popup window
stats uri /haproxy_stats # Stats URI
stats auth haproxy:passwd # Authentication credentials
However, the health check is not passing. When I check the stats page, it says: Layer7 invalid response.
I checked whether I can connect to the backend domains from my HAProxy server, and I can:
curl -X GET -I https://app1.domain.com/health
HTTP/2 200
cache-control: no-cache, private, max-age=0
content-type: application/json
expires: Thu, 01 Jan 1970 00:00:00 UTC
pragma: no-cache
x-accel-expires: 0
x-content-type-options: nosniff
x-xss-protection: 1; mode=block
date: Wed, 28 Jul 2021 12:05:09 GMT
content-length: 18
x-envoy-upstream-service-time: 0
endpoint: health
version: 1.0.0
server: istio-envoy
Is there something that I am missing in my configuration or something that I need to change to make this work?
You're missing the ssl keyword on your server lines. You may also want to set sni:
backend foo
default-server ssl check verify none
server backendnode1 app1.domain.com:443 sni str('app1.domain.com')
server backendnode2 app2.domain.com:443 sni str('app2.domain.com')
You should also decide whether you want to verify the SSL certificates of your backend servers. Can you trust the connection? Is it your network? HAProxy encourages you to verify, but that requires supplying the CA certificate to verify against. You can also add verifyhost and check-sni settings if you verify the certificate:
backend foo
default-server ssl check verify required
server backendnode1 app1.domain.com:443 sni str('app1.domain.com') check-sni 'app1.domain.com' verifyhost 'app1.domain.com' ca-file /path/to/CA1.pem
server backendnode2 app2.domain.com:443 sni str('app2.domain.com') check-sni 'app2.domain.com' verifyhost 'app2.domain.com' ca-file /path/to/CA2.pem
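Whichever variant you choose, it is worth validating the config file and reloading HAProxy afterwards (paths assume a standard package install):
haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl reload haproxy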
I was trying to implement SSL termination with HAProxy.
This is what my haproxy.cfg looks like:
frontend Local_Server
bind *:443 ssl crt /home/vagrant/ingress-certificate/k8s.pem
mode tcp
reqadd X-Forwarded-Proto:\ https
default_backend k8s_server
backend k8s_server
mode tcp
balance roundrobin
redirect scheme https if !{ ssl_fc }
server web1 100.0.0.2:8080 check
I have generated the self-signed certificate, which is k8s.pem.
The plain URL (without HTTPS) works perfectly fine, i.e. http://100.0.0.2/hello
But when I try to access the same URL over HTTPS, i.e. https://100.0.0.2/hello, I get a 404, and when I check my HAProxy logs I can see the following message:
Jul 21 18:10:19 node1 haproxy[10813]: Server k8s_server/web1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Any suggestions that I can incorporate into my haproxy.cfg?
PS - The microservice I am trying to access is deployed in a Kubernetes cluster with the service exposed as a ClusterIP.
Filebeat is running on Machine B; it reads logs and pushes them to the ELK Logstash on Machine A.
But the Filebeat log on Machine B shows an i/o timeout error:
2019-08-24T12:13:10.065+0800 ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://example.com:5044)): dial tcp xx.xx.xx.xx:5044: i/o timeout
2019-08-24T12:13:10.065+0800 INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://example.com:5044)) with 1 reconnect attempt(s)
I've checked Logstash on Machine A, which is running well and listening on 0.0.0.0:5044.
Here is the Logstash log:
[INFO ] 2019-08-24 12:09:35.217 [[main]-pipeline-manager] beats - Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
And here is the netstat output:
$ sudo netstat -tlnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:5044 0.0.0.0:* LISTEN 20668/java
I also checked that the firewall on Machine A is off:
$ firewall-cmd --list-all
FirewallD is not running
$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
I also used telnet to connect to Machine A, but I get this:
$ telnet example.com 5044
Trying xx.xx.xx.xx...
telnet: connect to address xx.xx.xx.xx: Connection timed out
I ran Filebeat with the same config on Machine A (locally) to check whether the Filebeat config on Machine B (remote) was wrong; it works well:
2019-08-24T14:17:35.195+0800 INFO pipeline/output.go:95 Connecting to backoff(async(tcp://localhost:5044))
2019-08-24T14:17:35.198+0800 INFO pipeline/output.go:105 Connection to backoff(async(tcp://localhost:5044)) established
In the end I found it was caused by the VPS provider, Aliyun: it only opens some common ports such as 22, 80, and 443 by default.
I had to log in to the Aliyun VPS management page and open port 5044 so that traffic on that port can pass through.
Note - Attachment: some other issues I encountered when configuring Filebeat with ELK.
Issue 1: Failed to connect to backoff(async(tcp://ip:5044)): dial tcp ip:5044: connect: connection refused
2019-08-26T10:25:41.955+0800 ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://example.com:5044)): dial tcp xx.xx.xx.xx:5044: connect: connection refused
2019-08-26T10:25:41.955+0800 INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://example:5044)) with 2 reconnect attempt(s)
Issue 2: Failed to publish events caused by: write tcp ip:46890->ip:5044: write: connection reset by peer
2019-08-26T10:28:32.274+0800 ERROR logstash/async.go:256 Failed to publish events caused by: write tcp xx.xx.xx.xx:46890->xx.xx.xx.xx:5044: write: connection reset by peer
2019-08-26T10:28:33.311+0800 ERROR pipeline/output.go:121 Failed to publish events: write tcp xx.xx.xx.xx:46890->xx.xx.xx.xx:5044: write: connection reset by peer
Issue 3: Filebeat error: lumberjack protocol error and Logstash error: OPENSSL_internal:WRONG_VERSION_NUMBER
Filebeat log error,
2019-08-26T08:49:09.505+0800 INFO pipeline/output.go:95 Connecting to backoff(async(tcp://example.com:5044))
2019-08-26T08:49:09.588+0800 INFO pipeline/output.go:105 Connection to backoff(async(tcp://example.com:5044)) established
2019-08-26T08:49:09.605+0800 ERROR logstash/async.go:256 Failed to publish events caused by: lumberjack protocol error
2019-08-26T08:49:09.606+0800 ERROR logstash/async.go:256 Failed to publish events caused by: client is not connected
Logstash log,
[INFO ] 2019-08-26 08:49:29.444 [defaultEventExecutorGroup-4-2] BeatsHandler - [local: 0.0.0.0:5044, remote: undefined] Handling exception: javax.net.ssl.SSLHandshakeException: error:100000f7:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
[WARN ] 2019-08-26 08:49:29.445 [nioEventLoopGroup-2-7] DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: error:100000f7:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:472) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
...
All three issues were caused by misconfiguration (Issue 3, for example, is the classic symptom of one side speaking plaintext while the other expects TLS, hence OPENSSL_internal:WRONG_VERSION_NUMBER). Here is the working config.
logstash version,
/usr/share/logstash/bin/logstash -V
logstash 7.3.1
filebeat version,
/usr/share/filebeat/bin/filebeat version
filebeat version 7.3.1 (amd64), libbeat 7.3.1 [a4be71b90ce3e3b8213b616adfcd9e455513da45 built 2019-08-19 19:30:50 +0000 UTC]
logstash conf file /etc/logstash/conf.d/beat.conf
input {
beats {
port => 5044
ssl => true
ssl_certificate_authorities => "/etc/pki/tls/certs/logstash-forwarder.crt"
ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
ssl_verify_mode => "peer"
}
}
output {
elasticsearch {
hosts => "http://127.0.0.1:9200"
manage_template => false
index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
document_type => "%{[@metadata][type]}"
}
}
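Before restarting Logstash, the pipeline definition can be sanity-checked with the built-in config test (paths match the package layout shown above):
/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/beat.conf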
filebeat conf file /etc/filebeat/filebeat.yml
#=========================== Filebeat inputs =============================
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log
# Change to true to enable this input configuration.
enabled: true
# Paths that should be crawled and fetched. Glob based paths.
paths:
- /data/error_logs/Log_error_201908
#----------------------------- Logstash output --------------------------------
output.logstash:
# The Logstash hosts
hosts: ["example.com:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
# Certificate for SSL client authentication
ssl.certificate: "/etc/pki/tls/certs/logstash-forwarder.crt"
# Client Certificate Key
ssl.key: "/etc/pki/tls/private/logstash-forwarder.key"
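With both sides in place, Filebeat's built-in checks are a quick way to confirm that the config parses and that the Logstash output is reachable over TLS (assuming the package paths used above):
filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml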
I installed HAProxy to balance between two servers. Unfortunately, HAProxy returns random ERR_EMPTY_RESPONSE errors. I also set up the stats page, but it does not always load, only sometimes. I double-checked my configuration with some friends and we did not find any problems.
defaults
timeout connect 3000ms
timeout server 10000ms
timeout client 10000ms
global
log 127.0.0.1 local0 notice
maxconn 2000
user haproxy
group haproxy
frontend stats
bind *:1936
mode http
stats enable
stats hide-version
stats realm Haproxy\ Statistics
stats uri /
stats auth user:password
frontend http_in
bind *:80
acl is_audio hdr_end(host) -i subdomain.myserver.com
acl is_proxystats hdr_end(host) -i stats.myserver.com
use_backend srv_audio if is_audio
use_backend srv_stats if is_proxystats
# acl url_blog path_beg /blog
# use_backend blog_back if url_blog
default_backend srv_audio
backend srv_audio
balance roundrobin
server audio1 10.10.10.1:80 check
server audio2 10.10.10.2:80 check
backend srv_stats
server Local 127.0.0.1:1936
My configuration:
HA Proxy version 1.6.3 (package 1.6.3-1ubuntu0.1 amd64)
Ubuntu 16.04.2 LTS
Cloud machine on AWS Lightsail, 512MB RAM
System with all packages updated.
I already read the answer to the similar question at HAProxy random HTTP 503 errors, and the answer is not the same. As suggested there, the command netstat -tulpn | grep 80 does not show two HAProxy processes running:
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
But ps ax | grep haproxy returns:
22890 ? Ss 0:00 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
22891 ? S 0:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
22894 ? Ss 0:31 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
Well, I dug deeper into HAProxy, read a lot of tutorials, and I believe I found the solution.
I made two changes:
Changed hdr_end(host) to hdr_dom(host)
Added mode http to frontend http_in, backend srv_audio, and backend srv_stats (see the sketch below)
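Applied to the config above, the affected sections look roughly like this (a sketch showing only the changed sections):
frontend http_in
bind *:80
mode http
acl is_audio hdr_dom(host) -i subdomain.myserver.com
acl is_proxystats hdr_dom(host) -i stats.myserver.com
use_backend srv_audio if is_audio
use_backend srv_stats if is_proxystats
default_backend srv_audio
backend srv_audio
mode http
balance roundrobin
server audio1 10.10.10.1:80 check
server audio2 10.10.10.2:80 check
backend srv_stats
mode http
server Local 127.0.0.1:1936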
Now HAProxy is very stable, without the bizarre behavior.