JBoss 4.0.3 startup error: jgroups

I know this is an older version of JBoss, but I'm really scratching my head as to what to look for with the errors provided by the log on startup. It seems that it is unhappy with the configuration for jgroups? Here are the log entries that might help:
15:33:38,696 ERROR [STDERR] 102 [DownHandler (UDP)] INFO org.jgroups.protocols.UDP
15:33:38,699 ERROR [STDERR] 105 [DownHandler (UDP)] INFO org.jgroups.protocols.UDP
15:33:41,529 ERROR [STDERR] 2935 [main] INFO org.jboss.cache.TreeCache
15:33:43,333 ERROR [URLDeploymentScanner] Incomplete Deployment listing:
...
I apologize for the lack of detailed info, but everything in the socket information seems correct. I know this is a long shot; thank you to anyone who can help.
Edit: Link to pastebin for more detailed log: https://pastebin.com/Z5yCjq99

I'd suggest you take a look at the JBoss 4.2 Clustering Guide, whether you go the UDP route or the TCP route. Also, I don't know how involved a network setup you have with switches and routers, but our IP engineering team had to set up UDP multicast IPs and ports when we did JBoss EAP 6 clusters.
From the guide, with regard to TCP:
Alternatively, a JGroups-based cluster can also work over TCP connections. Compared with UDP, TCP generates more network traffic when the cluster size increases. TCP is fundamentally a unicast protocol. To send multicast messages, JGroups uses multiple TCP unicasts. To use TCP as a transport protocol, you should define a TCP element in the JGroups Config element. Here is an example of the TCP element.
<TCP start_port="7800"
     bind_addr="192.168.5.1"
     loopback="true"
     down_thread="false" up_thread="false"/>
There's a lot more information on the page, and if you click one page back, it's the UDP setup:
UDP is the preferred protocol for JGroups. UDP uses multicast or multiple unicasts to send and receive messages. If you choose UDP as the transport protocol for your cluster service, you need to configure it in the UDP sub-element in the JGroups Config element. Here is an example.
<UDP mcast_addr="${jboss.partition.udpGroup:228.1.2.3}"
     mcast_port="${jboss.hapartition.mcast_port:45566}"
     tos="8"
     ucast_recv_buf_size="20000000"
     ucast_send_buf_size="640000"
     mcast_recv_buf_size="25000000"
     mcast_send_buf_size="640000"
     loopback="false"
     discard_incompatible_packets="true"
     enable_bundling="false"
     max_bundle_size="64000"
     max_bundle_timeout="30"
     use_incoming_packet_handler="true"
     use_outgoing_packet_handler="false"
     ip_ttl="${jgroups.udp.ip_ttl:2}"
     down_thread="false" up_thread="false"/>
The next 20 pages of clicks or so go into all the different types of setups you can use!
Update: I also just found the better PDF version of the documentation!

Related

TCP socket state becomes 'persist' after changing IP address, even though keep-alive was configured earlier

I've run into a problem with TCP socket keepalive.
TCP keep-alive is enabled and configured right after the socket is connected, and the system also has its own TCP keep-alive configuration.
'ss -to' can show the keep-alive information for the connection.
The network interface is a PPPoE device; if we ifup the interface, it gets a new IP address, and the old TCP connection stays established until the keep-alive times out.
But sometimes 'ss -to' shows that the TCP connection's timer becomes 'persist', and then it takes a long time (about 15 minutes) to close.
Following is the result of 'ss -to':
ESTAB 0 591 172.0.0.60:46402 10.184.20.2:4335 timer:(persist,1min26sec,14)
The source address is '172.0.0.60', but the network interface's actual address has been updated to '172.0.0.62'.
This is the correct result of 'ss -to':
ESTAB 0 0 172.0.0.62:46120 10.184.20.2:4335 timer:(keepalive,4.480ms,0)
I don't know why the timer changed to 'persist', which effectively disables keep-alive.
In short: TCP keepalive is only relevant if the connection is idle, i.e. there is no data to send. If instead there is still data to send but sending is currently impossible due to a missing ACK or a window of 0, then other timeouts apply. This is likely the problem in your case.
For the deeper details see The Cloudflare Blog: When TCP sockets refuse to die.
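If you want both failure modes bounded in time, one approach discussed in that article is to combine keepalive with the Linux-specific TCP_USER_TIMEOUT socket option. Below is a minimal C sketch with illustrative timeout values, applied to an already-connected descriptor fd: keepalive covers the idle case, while TCP_USER_TIMEOUT caps how long unacknowledged or untransmittable data (including the zero-window 'persist' state you are seeing) may linger before the kernel aborts the connection.

/* Sketch: bound both the idle case and the stuck-send case on Linux. */
#include <stdio.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static int tune_socket(int fd)
{
    int on = 1;
    int idle = 30;    /* seconds of idle time before the first keepalive probe */
    int intvl = 10;   /* seconds between keepalive probes */
    int cnt = 3;      /* unanswered probes before the connection is dropped */
    unsigned int user_timeout_ms = 60000; /* max time data may stay unacked/unsent */

    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0 ||
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0 ||
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) < 0 ||
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt)) < 0 ||
        /* also covers the zero-window "persist" state */
        setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT,
                   &user_timeout_ms, sizeof(user_timeout_ms)) < 0) {
        perror("setsockopt");
        return -1;
    }
    return 0;
}

With something like this, a connection orphaned by the address change should die within roughly the configured timeouts instead of sitting in the persist state for ~15 minutes.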

https://dnsflagday.net/ report edns512tcp=timeout

I have an Ubuntu 16.04.5 server with Vesta CP.
I checked the server on https://dnsflagday.net, and I got this report:
domain.cl. #123.456.78.90 (ns1.domain.cl.): dns=ok edns=ok edns1=ok edns#512=ok ednsopt=ok edns1opt=ok do=ok ednsflags=ok docookie=ok edns512tcp=timeout optlist=ok
domain.cl. #123.456.78.90 (ns2.domain.cl.): dns=ok edns=ok edns1=ok edns#512=ok ednsopt=ok edns1opt=ok do=ok ednsflags=ok docookie=ok edns512tcp=timeout optlist=ok
I do not know what edns512tcp=timeout means, and I have not had much luck looking for a solution on the internet.
Can someone help me? Thanks.
For that tool, any kind of "timeout" error is a problem: it means some server did not reply, or the message (either the query or the reply) was eaten by some active element on the path, so it needs to be fixed.
edns512tcp is the case where the testing software does an EDNS query with a buffer size of 512 bytes, over TCP.
If you go to https://ednscomp.isc.org/ednscomp/ for your domain you will have the full test results.
For that specific error it is:
EDNS - over TCP Response (edns#512tcp)
dig +vc +nocookie +norec +noad +edns +dnssec +bufsize=512 dnskey zone #server
expect: NOERROR
expect: OPT record with version set to 0
See RFC5966 and See RFC6891
So you can see which DNS query was done with dig, and you can reproduce it yourself (+vc is an old flag name that is an alias for +tcp; see the example below). The test expects to get a NOERROR code back and an OPT record. Your servers did not reply at all, so the test failed.
Maybe they do not reply to TCP queries at all, which is even worse. In any case, you will need to contact the entity responsible for maintaining those servers and point them to the test results so that they can fix the problem.
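For example, using the zone and name server names from the report above (placeholders for the real ones), the failing check can be reproduced over TCP with:

dig +tcp +nocookie +norec +noad +edns +dnssec +bufsize=512 dnskey domain.cl. @ns1.domain.cl.

A working server answers with status NOERROR and an OPT pseudo-section; a timeout here usually means TCP on port 53 is blocked or simply not answered.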
Thanks for your help.
I read more about it and found that port 53 was being blocked by the firewall, so I added a rule to allow TCP connections on port 53.
Everything is fine now.
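In case someone hits the same thing, here is a minimal sketch of such a rule with plain iptables (assuming iptables is what that firewall actually uses; Vesta CP may manage its own rule set):

# allow DNS over TCP (UDP on port 53 was already allowed in this case)
iptables -A INPUT -p tcp --dport 53 -j ACCEPT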

Kube-proxy or ELB "delaying" packets of HTTP requests

We're running a web API app on Kubernetes (1.9.3) in AWS (set up with KOPS). The app is a Deployment and is exposed by a Service (type: LoadBalancer), which is actually an ELB (v1) on AWS.
This generally works, except that some packets (fragments of HTTP requests) are "delayed" somewhere between the client and the app container. (This happens with both HTTP and HTTPS, which terminates on the ELB.)
From the node side:
(Note: almost all packets on the server side arrive duplicated 3 times.)
We use keep-alive, so the TCP socket is open and requests arrive and return pretty fast. Then the problem happens:
First, a packet with only the headers arrives [PSH, ACK] (I can see the headers in the payload with tcpdump).
An [ACK] is sent back by the container.
The TCP socket/stream is quiet for a very long time (up to 30s and more, but the interval is not consistent; we consider >1s a problem).
Another [PSH, ACK] with the HTTP data arrives, and the request can finally be processed in the app.
From the client side:
I've run some traffic from my computer, recording it on the client side to see the other end of the problem, but I'm not 100% sure it represents the real client side.
A [PSH, ACK] with the headers goes out.
A couple of [ACK]s with parts of the payload start going out.
No response arrives for a few seconds (or more) and no more packets go out.
An [ACK] marked as [TCP Window Update] arrives.
After another short pause, [ACK]s start arriving and the session continues until the end of the payload.
This is only happening under load.
To my understanding, this is somewhere between the ELB and the Kube-Proxy, but I'm clueless and desperate for help.
These are the arguments kube-proxy runs with:
Commands: /bin/sh -c mkfifo /tmp/pipe; (tee -a /var/log/kube-proxy.log < /tmp/pipe & ) ; exec /usr/local/bin/kube-proxy --cluster-cidr=100.96.0.0/11 --conntrack-max-per-core=131072 --hostname-override=ip-10-176-111-91.ec2.internal --kubeconfig=/var/lib/kube-proxy/kubeconfig --master=https://api.internal.prd.k8s.local --oom-score-adj=-998 --resource-container="" --v=2 > /tmp/pipe 2>&1
And we use Calico as the CNI.
So far I've tried:
Using service.beta.kubernetes.io/aws-load-balancer-type: "nlb" - the issue remained.
(Playing around with ELB settings hoping something would do the trick ¯\_(ツ)_/¯.)
Looking for errors in the kube-proxy log, I found rare occurrences of the following:
E0801 04:10:57.269475 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *core.Endpoints: Get https://api.internal.prd.k8s.local/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp: lookup api.internal.prd.k8s.local on 10.176.0.2:53: no such host
...and...
E0801 04:09:48.075452 1 proxier.go:1667] Failed to execute iptables-restore: exit status 1 (iptables-restore: line 7 failed
)
I0801 04:09:48.075496 1 proxier.go:1669] Closing local ports after iptables-restore failure
I couldn't find anything describing such an issue and would appreciate any help. Ideas on how to continue troubleshooting are welcome.
Best,
A

ejabberd configuration - unknown listen option 'timeout' for 'ejabberd_c2s'

In ejabberd 18, the timeout parameter under ejabberd_c2s in the configuration does not seem to work. When I checked the log, I saw:
unknown listen option 'timeout' for 'ejabberd_c2s'
The problem I am facing is that the client connections are getting closed immediately.
Looking at https://docs.ejabberd.im/admin/configuration/#listening-ports right now, I see that the ejabberd_c2s listener does not support any option named timeout.
See:
ejabberd_c2s: Handles c2s connections.
Options: access, certfile, ciphers, dhfile, protocol_options, max_fsm_queue, max_stanza_size, shaper, starttls, starttls_required, tls, zlib, tls_compression
Where did you read about such an option for that listener?
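For reference, here is a minimal c2s listener sketch for ejabberd.yml that sticks to the options in that list (the port, stanza size, and the shaper/access rule names are illustrative, not taken from your setup):

listen:
  -
    port: 5222
    module: ejabberd_c2s
    starttls: true
    max_stanza_size: 65536
    shaper: c2s_shaper
    access: c2s

Client idle handling would then have to come from somewhere else, e.g. client-side keepalives or mod_ping, rather than from a timeout option on this listener.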

Mule ESB catch plain text message and transform it to SOAP

I'm doing some testing with Mule ESB. I want to receive some plain text over TCP and convert it to a SOAP message. For that I created a TCP connector, a logger, and an echo component.
I'm sending a simple "Hello" and getting the following error in the Mule console:
ERROR 2016-01-27 09:10:54,402 [[mule].connector.tcp.mule.default.receiver.02] org.mule.exception.DefaultSystemExceptionStrategy: Caught exception in Exception Strategy: An error occurred while verifying your connection. You may not be using a consistent protocol on your TCP transport. Please read the documentation for the TCP transport, paying particular attention to the protocol parameter.
I have been playing with the Transformer and Metadata parameters but still couldn't make it work. How do I configure the connector so it knows how to deal with the text input?
Just in case someone has the same problem:
I solved this by adding the "tcp:direct-protocol payloadOnly=true" protocol under TCP connector -> General -> Connector configuration -> Protocol.
After that I had no problems receiving either text or data.
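In the Mule 3 XML configuration that setting corresponds roughly to the sketch below (the connector name is illustrative):

<tcp:connector name="TCP" doc:name="TCP connector">
    <tcp:direct-protocol payloadOnly="true"/>
</tcp:connector>

The direct protocol does no Mule-specific framing, so a raw "Hello" is delivered as the message payload instead of tripping the protocol check in the error above.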
If you want to consume a web service from your HTTP request, you could use the "Parse Template" transformer and load the SOAP envelope from a .txt file, then add your payload to the template.
You can check the following URL, hope it helps:
http://forums.mulesoft.com/questions/994/consuming-webservices-in-mule-community-edition-without-datamapper-or-dataweave.html