Remote EJB in Kubernetes

I'm trying to set up a remote EJB call between two WebSphere Liberty servers deployed in k8s.
Yes, I'm aware that EJB is not something one would want to use when deploying in k8s, but I have to deal with it for now.
The problem I have is how to expose the remote ORB IP:port in k8s. From what I understand, it only works if both the client and the remote server "listen" on the same IP. I'm not a network expert, and I'm quite fresh in k8s, so maybe I'm missing something here; that's why I need help.
The only way I got it to work was to explicitly set the host on the remote server to its own IP address and then access it from the client on that same IP. This test was done on a Docker host with a macvlan0 network (each container had its own IP address).
This is the ORB setup in the remote server's server.xml configuration:
<iiopEndpoint id="defaultIiopEndpoint" host="172.30.106.227" iiopPort="2809" />
<orb id="defaultOrb" iiopEndpointRef="defaultIiopEndpoint">
    <serverPolicy.csiv2>
        <layers>
            <!-- don't care about security at this point -->
            <authenticationLayer establishTrustInClient="Never"/>
            <transportLayer sslEnabled="false"/>
        </layers>
    </serverPolicy.csiv2>
</orb>
And the client's server.xml configuration:
<orb id="defaultOrb">
    <clientPolicy.csiv2>
        <layers>
            <!-- really, I don't care about security -->
            <authenticationLayer establishTrustInClient="Never"/>
            <transportLayer sslEnabled="false"/>
        </layers>
    </clientPolicy.csiv2>
</orb>
From the client, this is the JNDI name I use to access it:
corbaname::172.30.106.227:2809#ejb/global/some-app/ejb/BeanName!org\.example\.com\.BeanRemote
And this works.
Since one doesn't want to hard-code a fixed IP when exposing the ORB port, I have to find a way to expose it dynamically, based on the host IP.
Exposing on 0.0.0.0 does not work, and the same goes for localhost. In both cases, the client refuses to connect with this kind of error:
Error connecting to host=0.0.0.0, port=2809: Connection refused (Connection refused)
In k8s, I've exposed port 2809 through a LoadBalancer service for the remote pods, and I try to access the remote server from the client pod, using the remote's service IP address in the corbaname definition.
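For reference, a Service exposing that port looks roughly like this (a sketch; names and labels are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: remote-app            # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: remote-app           # placeholder pod label
  ports:
  - port: 2809
    targetPort: 2809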
This, of course, does not work. I can reach the remote IP:port via telnet, so it's not a network issue.
I've tried every combination of setup on the remote server. Exposing on host="0.0.0.0" results in the same exception as above (Connection refused).
I'm not sure exposing on the internal IP address would work either, but even if it would, I don't know the pod's internal IP before it is deployed in k8s. Or is there a way to know? There is no environment variable with it; I've checked.
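(Side note: as far as I know there is no such variable by default, but Kubernetes can be asked to inject one through the Downward API; a minimal sketch, where the container name, image, and variable name are all arbitrary:)
spec:
  containers:
  - name: liberty                 # illustrative container name
    image: websphere-liberty      # illustrative image
    env:
    - name: POD_IP                # arbitrary variable name
      valueFrom:
        fieldRef:
          fieldPath: status.podIP # the pod's own IP, filled in at start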
Exposing on the service IP address (with host="${REMOTE_APP_SERVICE_HOST}") fails with this error:
The server socket could not be opened on 2,809. The exception message is Cannot assign requested address (Bind failed).
Again, I know replacing EJB with REST is the way to go, but it's not an option for now (don't ask why).
Help, please!
EDIT:
I've made some progress. Actually, I believe I've successfully called the remote EJB.
What I did was add hostAliases to the pod definition, which added an alias for my host, something like this:
hostAliases:
- ip: 0.0.0.0
  hostnames:
  - my.host.name
Then I added this host name to the remote server.xml:
<iiopEndpoint id="defaultIiopEndpoint" host="my.host.name" iiopPort="2809" />
I've also added a host alias to my client pod:
hostAliases:
- ip: {remote.server.service.ip.here}
  hostnames:
  - my.host.name
Finally, I changed the JNDI name to:
corbaname::my.host.name:2809#ejb/global/some-app/ejb/BeanName!org\.example\.com\.BeanRemote
With this setup, the remote server was successfully called!
However, now I have another problem which I didn't have while testing on the Docker host. The lookup completes, but what I get back is not what I expect.
Lookup code is pretty much what you'd expect:
// look up the remote bean, then narrow the returned IIOP stub to the remote interface
Object obj = new InitialContext().lookup(jndi);
BeanRemote remote = (BeanRemote) PortableRemoteObject.narrow(obj, BeanRemote.class);
Unfortunately, this narrow call fails with a ClassCastException:
Caused by: java.lang.ClassCastException: org.example.com.BeanRemote
at com.ibm.ws.transport.iiop.internal.WSPortableRemoteObjectImpl.narrow(WSPortableRemoteObjectImpl.java:50)
at [internal classes]
at javax.rmi.PortableRemoteObject.narrow(PortableRemoteObject.java:62)
The object I receive is org.omg.stub.java.rmi._Remote_Stub. Any ideas?

Solved it!
So, the first problem was the host mapping, which was resolved as described in the edit above, by adding host aliases in the pod definitions:
Remote pod:
hostAliases:
- ip: 0.0.0.0
  hostnames:
  - my.host.name
Client pod:
hostAliases:
- ip: {remote.server.service.ip.here}
  hostnames:
  - my.host.name
The remote server then has to use that host name in its IIOP host definition:
<iiopEndpoint id="defaultIiopEndpoint" host="my.host.name" iiopPort="2809" />
Also, the client has to reference that host name in the JNDI lookup:
corbaname::my.host.name:2809#ejb/global/some-app/ejb/BeanName!org\.example\.com\.BeanRemote
This setup makes the remote EJB call work.
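For context, hostAliases is a standard PodSpec field that writes extra entries into the pod's /etc/hosts, so in a full client pod definition the alias sits like this (a sketch; the name, image, and IP are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: ejb-client                # placeholder
spec:
  hostAliases:
  - ip: "10.96.0.42"              # placeholder for the remote service's ClusterIP
    hostnames:
    - my.host.name
  containers:
  - name: liberty
    image: websphere-liberty      # placeholder image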
The other problem, the ClassCastException, was really unusual. I managed to reproduce the error on a Docker host and then changed one thing at a time until the problem went away. It turns out that the culprit was the ldapRegistry-3.0 feature (!?). Adding this feature to the client's feature list resolved my problem:
<feature>ldapRegistry-3.0</feature>
With this feature added, remote EJB was successfully called.
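For completeness, the client's feature list would then contain something along these lines (a sketch; ejbRemote-3.2 stands in for whatever EJB feature the app already uses):
<featureManager>
    <feature>ejbRemote-3.2</feature>    <!-- assumed: whatever EJB feature is already in use -->
    <feature>ldapRegistry-3.0</feature> <!-- the fix described above -->
</featureManager>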

Related

Failed to accept an incoming connection: connection from "9.42.x.x" rejected, allowed hosts: "zabbix-server"

SUMMARY
I have installed Zabbix on an OpenShift cluster. I am trying to monitor a host (VM) outside the cluster, but the Zabbix server is unable to connect to it. In the /etc/zabbix/zabbix_agentd.conf file I have set the DNS name of the server, zabbix-server, but it looks like the server is trying to connect through a different public IP. I am not sure what this IP is.
OS / ENVIRONMENT / Used docker-compose files
I applied the kubernetes.yaml file present in this repo - https://github.com/zabbix/zabbix-docker/blob/6.2/kubernetes.yaml - on an OpenShift cluster.
CONFIGURATION
In the /etc/zabbix/zabbix_agentd.conf file Server=zabbix-server.
STEPS TO REPRODUCE
Apply the kubernetes.yaml file on an OpenShift cluster and try to monitor any external VM.
EXPECTED RESULTS
The Zabbix server should be able to connect to the VM.
ACTUAL RESULTS
Zabbix server logs:
Defaulted container "zabbix-server" out of: zabbix-server, zabbix-snmptraps
** Updating '/etc/zabbix/zabbix_server.conf' parameter "DBHost": 'mysql-server'...added
287:20230120:060843.131 Zabbix agent item "system.cpu.load[all,avg5]" on host "Host-C" failed: first network error, wait for 15 seconds
289:20230120:060858.592 Zabbix agent item "system.cpu.num" on host "Host-C" failed: another network error, wait for 15 seconds
289:20230120:060913.843 Zabbix agent item "system.sw.arch" on host "Host-C" failed: another network error, wait for 15 seconds
289:20230120:060929.095 temporarily disabling Zabbix agent checks on host "Host-C": interface unavailable
Logs from the agent installed on the VM:
350446:20230122:103232.230 failed to accept an incoming connection: connection from "9.x.x.219" rejected, allowed hosts: "zabbix-server"
350444:20230122:103332.525 failed to accept an incoming connection: connection from "9.x.x.219" rejected, allowed hosts: "zabbix-server"
350445:20230122:103432.819 failed to accept an incoming connection: connection from "9.x.x.210" rejected, allowed hosts: "zabbix-server"
350446:20230122:103533.114 failed to accept an incoming connection: connection from "9.x.x.217" rejected, allowed hosts: "zabbix-server"
If I add this IP in /etc/zabbix/zabbix_agentd.conf, it works. But what IP is this? Is it a service, or a node/pod IP? It keeps changing, and I cannot update the conf file every time. I need something more stable.
Kindly help me out with this issue.
I don't know Zabbix, so I have to make some educated guesses about how both the agent and the server work.
But, to summarize: unlike something like docker compose, where you run the Zabbix server on a known machine, in OpenShift/Kubernetes you deploy into a cluster of machines with their own networking. In other words, the whole point of OpenShift is that OpenShift controls where the application's pod gets deployed and will relocate/restart that pod as needed - with a different IP every time. (And the DNS name is meaningless, since the two systems aren't sharing DNS anyway.) Most likely the IPs you are seeing are the pods' randomly assigned IPs.
So, what are you to do when you have a situation like yours, where an external application requires a predictable IP? Option 1 is to remove that requirement: using something like a certificate is obviously more secure and more reliable than depending on an IP anyway. Another option is to use an egress IP. This is a feature of OpenShift where you essentially use a proxy to provide an external application with a consistent IP.
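For illustration, on clusters using the OVN-Kubernetes network plugin, an egress IP can be declared with a resource along these lines (a sketch; the name, IP, and label values are placeholders):
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: zabbix-egress                       # placeholder
spec:
  egressIPs:
  - 192.0.2.10                              # placeholder: a reserved IP on the node network
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: zabbix   # placeholder: the namespace running zabbix-server
The agent's Server= directive then only needs to allow that single address. (As far as I know, the Zabbix Server parameter also accepts CIDR notation, so allowing the whole pod network is another, less strict, workaround.)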

Docker: run multiple containers on the same TCP ports with different hostnames

Is there a way to run multiple Docker containers on the same ports? For example, I have used ports 80/443 (HTTP), 3306 (TCP/MySQL) and 22 (TCP/SSH) in my docker-compose file. Now I want to run this docker-compose setup for different hostnames on the same IP address on my machine.
- traffic from example1.com (default public ip) => container1
- traffic from example2.com (default public ip) => container2
I have already found a solution for the HTTP traffic only, by using an additional nginx/haproxy as a proxy on my machine. But unfortunately, that can't handle the other TCP ports.
This isn't possible in the general (non-HTTP) case.
At a lower level, if I connect to 10.20.30.40:3306, the Linux kernel selects a single process that's listening on that port and sends the request there. You're not allowed to bind(2) a second process to the same port. (This is also why you get an error if you try to docker run -p picking a host port that's already in use.)
In the case of HTTP, there's the further detail that the host-name part of the URL is also sent in an HTTP Host: header: the Web browser both does a DNS lookup for e.g. stackoverflow.com and connects to its IP address, and also sends a Host: stackoverflow.com HTTP header. That's the specific mechanism that lets you run a proxy on port 80, and then forward to some other backend service via a virtual-host setup.
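For illustration, the same IP and port can serve two sites, distinguished purely by that header (the IP here is just the placeholder from above):
curl -H "Host: example1.com" http://10.20.30.40/   # the proxy routes this to container1
curl -H "Host: example2.com" http://10.20.30.40/   # same IP and port, routed to container2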
That mechanism is very specific to HTTP, though, and doesn't work for other protocols that lack support for it. I don't think either MySQL or ssh has a similar mechanism in its wire protocol.
(In the particular situation you describe, this is probably relatively easy to handle. You wouldn't want to make either your internal database or an sshd visible publicly, so delete their ports: from your docker-compose.yml file, and then just worry about proxying the HTTP service. Running sshd in Docker is a pretty unusual and complex setup, so you might remove that as well and simplify your stack a little.)
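In docker-compose terms, that advice boils down to something like this (a sketch; image and service names are placeholders):
services:
  proxy:
    image: nginx               # or haproxy, as already used for the HTTP traffic
    ports:
      - "80:80"
      - "443:443"
  app1:
    image: example/app1        # placeholder
    # no ports: section - reachable only from the proxy over the compose network
  db:
    image: mysql:8
    # no ports: section - internal only, per the advice above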

Allow access to wildfly port 8080 over WAN for web page

My team needs to see a web page I have built, which I am hosting temporarily on my local Windows 10 laptop using WildFly 11.
I have changed the interface configuration in standalone.xml from the commented value to this:
<interface name="public">
    <!-- <inet-address value="${jboss.bind.address:127.0.0.1}"/> -->
    <inet-address value="${jboss.bind.address:xx.xx.xxx.xxx}"/>
</interface>
Where xx.xx.xxx.xxx is my IP address as determined from my internet provider's control page. I can ping that address from any of my local machines, and my co-workers can also ping it.
However, when I run with this value in the XML, I get the error:
Failed to start service org.wildfly.network.interface.public: org.jboss.msc.service.StartException in service org.wildfly.network.interface.public: WFLYSRV0082: failed to resolve interface public
What else do I need to do to enable access to the port? Thank you in advance for your help.
If your "xx.xx.xxx.xxx" is not the IP number of an interface on your machine, then you won't be able to bind to it. You can only bind to an interface that is actually present on the host. Typically the IP number of your machine, as seen from the public Internet, will not be the same as an IP number on the machine itself. You need to bind your HTTP server to the machine's real IP number (not localhost, 127.0.0.1, but the IP corresponding to some real network connection -- Ethernet, Wifi, whatever) and you need to configure your Internet router to forward packets addressed to port 8080 to the IP number of your wildfly host.
I would think that, if your co-workers are on the same site as you, they would have access to your machine without going through the public Internet. In that case, all you need to do is to bind the port to the (non-localhost) IP number of your machine, and have your colleagues use that IP number. You might also need to configure any firewall you have -- either on your wildfly host or your router -- to allow access to port 8080.
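On a Windows 10 laptop, opening the port in the built-in firewall would look something like this (run in an administrator prompt; the rule name is arbitrary):
netsh advfirewall firewall add rule name="WildFly 8080" dir=in action=allow protocol=TCP localport=8080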
I would recommend that you run WildFly from the command line with something like:
bin\standalone.bat -b 0.0.0.0
This will have WildFly bind to all available interfaces. For testing this should be safe - it should be ok to bind to more than one interface. You will not need any changes in standalone.xml.
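If you'd rather keep the setting in standalone.xml, the equivalent of -b 0.0.0.0 is binding the public interface to any address, roughly like this:
<interface name="public">
    <any-address/>
</interface>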

Are service addresses available to the dc/os host OS?

I'm trying to have my DC/OS 1.8 Docker containers send log messages to a Logstash that is also running in DC/OS, by using the service address of the logstash service.
That doesn't appear to work, as Docker throws an error: logstash.marathon.l4lb.thisdcos.directory: no such host
Are service addresses not exposed to the host systems (or do I need to configure something for this)?
On DC/OS 1.7 I used a fixed host port in my logstash config and logstash.marathon.mesos as the host, but these .marathon.mesos hostnames don't seem to exist in 1.8 anymore.
The service addresses work fine when I use them from within a container (for example, to link my prometheus service to my alertmanager service). But from the host level they don't exist.
EDIT:
My statement about the missing marathon.mesos URLs was wrong. They do work, but I used the wrong one. For now this kind of fixes my problem: I configured logging using this host and a fixed container port.
For everybody trying the same thing: you have to configure the fixed host port via the JSON mode every time you make changes to the service config in the UI. The fixed host port config is no longer available in the network tab of the UI, so the DC/OS UI will DELETE the host port config on every load.
Still no idea why the l4lb URLs don't work.
EDIT2:
Still no idea, but I figured out that minuteman generates crash and error logs every other second:
/opt/mesosphere/active/minuteman/minuteman/error.log:
CRASH REPORT Process <0.25809.2> with 0 neighbours exited with reason: {timeout,{gen_server,call,[{lashup_kv,'navstar#10.2.140.216'},{start_kv_sync_fsm,'minuteman#10.2.103.143',<0.25809.2>}]}} in gen_server:call/2 line 204
/opt/mesosphere/active/minuteman/minuteman/log/crash.log
2016-10-12 13:16:49 =CRASH REPORT====
crasher:
initial call: lashup_kv_sync_tx_fsm:init/1
pid: <0.29002.2>
registered_name: []
exception exit: {{timeout,{gen_server,call,[{lashup_kv,'navstar#10.2.140.216'},{start_kv_sync_fsm,'minuteman#10.2.103.143',<0.29002.2>}]}},[{gen_server,call,2,[{file,"gen_server.erl"},{line,204}]},{lashup_kv_sync_tx_fsm,init,1,[{file,"/pkg/src/minuteman/_build/default/lib/lashup/src/lashup_kv_sync_tx_fsm.erl"},{line,23}]},{gen_statem,init_it,6,[{file,"gen_statem.erl"},{line,554}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]}
ancestors: [lashup_kv_aae_sup,lashup_kv_sup,lashup_platform_sup,lashup_sup,<0.916.0>]
messages: []
links: [<0.992.0>]
dictionary: []
trap_exit: false
status: running
heap_size: 610
stack_size: 27
reductions: 127
neighbours:
The DC/OS UI claims Spartan and Minuteman are healthy, but while the crash.log of the DNS dispatcher is empty, the l4lb gets new crashes every other second.
They should certainly be available from the host OS. Are the hosts running the "Spartan" and "Minuteman" services?
My problem was twofold:
- The l4lb did not run properly; that was only fixed after a total reinstall of the cluster.
- The l4lb only supports TCP traffic. Because I wanted to use it to send container logs to logstash over UDP (docker-gelf only supports UDP), this failed.

RhodeCode - What is blocking my connection?

All connection attempts to RhodeCode on CentOS 6.3 are refused, except from localhost.
Note that iptables is not running, and I am only trying to visit the web interface.
I have googled the exact error message below and looked around SO. I have yet to find a solution.
abort: error: No connection could be made because the target machine actively refused it
If the firewall is down, and I am not trying to modify any repository, what else is preventing me from connecting? EDIT: See #5 below. Not sure how to address it yet.
Things tried and other info:
1. Using localhost, 127.0.0.1 and the hostname in production.ini
2. service iptables stop
3. Connected over HTTP successfully. In other words, connections are accepted outside RhodeCode.
4. Made sure no authentication methods were enabled or configured in production.ini
5. Although the server accepts connections on localhost, netstat -l does not show that port 5000 is listening. Port 5000 is set in production.ini and ps uax | grep paster confirms the server is running. No other software tries to grab port 5000.
OK, apparently I had been misunderstanding the host configuration. I was working on the assumption that host in production.ini should be set to 127.0.0.1 or localhost so that RhodeCode would know which host to look to for another service. This was a faulty presumption on my part, since I am used to pointing web applications at local systems to look for databases.
It turns out that host binds the application to a specific address for access, meaning that RhodeCode was supposed to respond only to local requests, regardless of what other system policies say. The setup docs did not make this clear, because they did not specify that external connections would be refused. All they said was:
This command [paster serve] runs the RhodeCode server. The web app should be available at the 127.0.0.1:5000. This ip and port is configurable via the production.ini file created in previous step
The problem was fixed by binding RhodeCode to 0.0.0.0, which opened it up to outside connections. Kudos to Łukasz Balcerzak for pointing this out in the RC support Google group.
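For reference, the relevant part of production.ini would then look something like this (standard Paste Deploy layout, with the port from above):
[server:main]
host = 0.0.0.0
port = 5000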