Kubernetes ingress-nginx fails when accessing the host from within the cluster

I've got a problem with ingress-nginx: it works if I access the host publicly, but accessing the host from within the cluster fails.
PS: I installed nginx-ingress via the DigitalOcean one-click install, but I don't think the problem is with DO. I've been trying to find and solve this issue for the last 3 days.
2021/04/19 07:40:28 [error] 887#887: *2244049 broken header: "���'V�����Y%��i�����:U��Ta�fv�n
mg
{>3�����B� �ջ��+Mw���hc��ސZ�,�+�0��.�2���/��-�1�" while reading PROXY protocol, client: 10.244.3.30, server: 0.0.0.0:443
2021/04/19 07:40:28 [error] 887#887: *2244050 broken header: "������b���Tڒ�1��w0���zF�A<� �e�USs!��l�JOca�"�� *)�˄Z�,�+�0��.�2���/��-�1�" while reading PROXY protocol, client: 10.244.3.30, server: 0.0.0.0:443
I tested it with SSLPoke and got the same result. This is when testing from within the cluster.
This is the result from curl in a pod (within the cluster); when I check it publicly, it works fine:
This is the configuration of the DigitalOcean load balancer:

Inside the cluster you should use plain HTTP without SSL.
SSL is used only from the "outside", i.e. for traffic that comes in through the load balancer.
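To illustrate why: the "broken header ... while reading PROXY protocol" lines are what nginx logs when it expects a PROXY protocol header but receives raw TLS instead, which is exactly what happens when a pod connects to the ingress controller directly and bypasses the load balancer. A minimal sketch of the kind of setup that produces this, using the standard ingress-nginx ConfigMap option and DigitalOcean Service annotation (all names below are assumed, not copied from the cluster):

# ingress-nginx ConfigMap: nginx now expects a PROXY protocol header on every connection
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"
---
# LoadBalancer Service: only traffic that actually passes through the DO load balancer gets that header
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # illustrative selector
  ports:
  - name: https
    port: 443
    targetPort: https

This is why plain HTTP is suggested for in-cluster traffic: requests that never pass through the load balancer never get the PROXY header that nginx is now waiting for.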

Related

How to force kubernetes pod to route through the internet?

I have an issue with my kubernetes routing.
The issue is that one of the pods makes a GET request to auth.domain.com:443 but the internal routing is directing it to auth.domain.com:8443 which is the container port.
Because the host responding to the SSL negotiation identifies itself as auth.domain.com:8443 instead of auth.domain.com:443, the connection times out.
[2023/01/16 18:03:45] [provider.go:55] Performing OIDC Discovery...
[2023/01/16 18:03:55] [main.go:60] ERROR: Failed to initialise OAuth2 Proxy: error intiailising provider: could not create provider data: error building OIDC ProviderVerifier: could not get verifier builder: error while discovery OIDC configuration: failed to discover OIDC configuration: error performing request: Get "https://auth.domain.com/realms/master/.well-known/openid-configuration": net/http: TLS handshake timeout
(If someone knows the root cause of why it is not identifying itself with the correct port 443 but instead the container port 8443, that would be extremely helpful as I could fix the root cause.)
To work around this issue, my idea is to force the request to route out of the pod onto the internet and then back into the cluster.
I tested this by setting up the file I am trying to GET on a host external to the cluster, and in that case the SSL negotiation works fine and the GET request succeeds. However, I need to serve the file from within the cluster, so this isn't a viable option.
However, if I can somehow force the pod to route through the internet, I believe it would work. I am having trouble with this, though, because every time the pod looks up auth.domain.com it sees that it is an internal Kubernetes IP and rewrites the routing so that the request is routed locally to the 10.0.0.0/24 address. After doing this, it always seems to come back as auth.domain.com:8443 with the wrong port.
If I could force the pod to route via the full publicly routable IP, I believe it would work, as it would come back with the external-facing auth.domain.com:443 and the correct 443 port.
Does anyone have any ideas on how I can achieve this, or on how to stop the server from identifying itself with the wrong auth.domain.com:8443 port instead of auth.domain.com:443 and causing the SSL negotiation to fail?
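To make the port mismatch concrete: inside the cluster, the usual way clients are kept on port 443 is a Service that maps 443 to the container's 8443. A minimal sketch, with every name below hypothetical (nothing here comes from the question):

apiVersion: v1
kind: Service
metadata:
  name: auth                # hypothetical; in-cluster DNS would be auth.<namespace>.svc
  namespace: default
spec:
  selector:
    app: auth-server        # hypothetical pod label
  ports:
  - name: https
    port: 443               # port in-cluster clients connect to
    targetPort: 8443        # port the container actually listens on

Whether this applies depends on what auth.domain.com actually resolves to inside the cluster (a Service like this, or the pod itself), which the question does not show.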

Failed to accept an incoming connection: connection from "9.42.x.x" rejected, allowed hosts: "zabbix-server"

SUMMARY
I have installed Zabbix on an OpenShift cluster. I am trying to monitor a host (VM) outside the cluster, but the Zabbix server is unable to connect to it. In the /etc/zabbix/zabbix_agentd.conf file I have set the DNS name of the server, zabbix-server, but it looks like the server is trying to connect through a different public IP. I am not sure what this IP is.
OS / ENVIRONMENT / Used docker-compose files
I applied the kubernetes.yaml file present in this repo - https://github.com/zabbix/zabbix-docker/blob/6.2/kubernetes.yaml - on an OpenShift cluster.
CONFIGURATION
In the /etc/zabbix/zabbix_agentd.conf file, Server=zabbix-server.
STEPS TO REPRODUCE
Apply the kubernetes.yaml file on an OpenShift cluster and try to monitor any external VM.
EXPECTED RESULTS
The Zabbix server should be able to connect to the VM.
ACTUAL RESULTS
Zabbix server logs.
Defaulted container "zabbix-server" out of: zabbix-server, zabbix-snmptraps
** Updating '/etc/zabbix/zabbix_server.conf' parameter "DBHost": 'mysql-server'...added
287:20230120:060843.131 Zabbix agent item "system.cpu.load[all,avg5]" on host "Host-C" failed: first network error, wait for 15 seconds
289:20230120:060858.592 Zabbix agent item "system.cpu.num" on host "Host-C" failed: another network error, wait for 15 seconds
289:20230120:060913.843 Zabbix agent item "system.sw.arch" on host "Host-C" failed: another network error, wait for 15 seconds
289:20230120:060929.095 temporarily disabling Zabbix agent checks on host "Host-C": interface unavailable
Logs from the agent installed on the VM:
350446:20230122:103232.230 failed to accept an incoming connection: connection from "9.x.x.219" rejected, allowed hosts: "zabbix-server"
350444:20230122:103332.525 failed to accept an incoming connection: connection from "9.x.x.219" rejected, allowed hosts: "zabbix-server"
350445:20230122:103432.819 failed to accept an incoming connection: connection from "9.x.x.210" rejected, allowed hosts: "zabbix-server"
350446:20230122:103533.114 failed to accept an incoming connection: connection from "9.x.x.217" rejected, allowed hosts: "zabbix-server"
If I add this IP to /etc/zabbix/zabbix_agentd.conf it works. But what IP is this? Is it a service, or a node/pod IP? It keeps on changing, and I cannot update the conf file every time. I need something more stable.
Kindly help me out with this issue.
I don't know Zabbix, so I have to make some educated guesses about how both the agent and the server work.
But, to summarize: unlike something like Docker Compose, where you run the Zabbix server on a known host, in OpenShift/Kubernetes you are deploying into a cluster of machines with their own networking. In other words, the whole point of OpenShift is that it controls where the application's pod gets deployed and will relocate/restart that pod as needed, with a different IP every time. (And the DNS name is meaningless, since the two systems aren't sharing DNS anyway.) Most likely the IPs you are seeing are the pods' randomly assigned IPs.
So, what are you to do when you have a situation like yours, where an external application requires a predictable IP? Well, option 1 is to remove that requirement; using something like a certificate is obviously more secure and more reliable than depending on an IP anyway. But another option is to use an egress IP. This is a feature of OpenShift where you essentially use a proxy to provide an external application with a consistent IP, as sketched below.
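To make the egress IP option concrete, here is a minimal sketch assuming the OVN-Kubernetes network plugin (the IP, resource name and namespace label are placeholders); with the older openshift-sdn plugin the equivalent is configured on NetNamespace/HostSubnet objects instead:

apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: zabbix-egress                       # placeholder name
spec:
  egressIPs:
  - 9.42.0.100                              # placeholder: a routable IP reserved for egress
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: zabbix   # placeholder: the namespace running zabbix-server

The nodes that may host the egress IP also need the k8s.ovn.org/egress-assignable label. With this in place, outbound connections from the selected pods leave the cluster with that one fixed IP, which can then go into the agent's Server= allow-list instead of the ever-changing addresses in the log above.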

K3s dial tcp lookup server misbehaving during letsencrypt staging

After successfully hosting a first service on a single-node cluster, I am trying to add a second service with its own dnsName.
The first service uses Let's Encrypt successfully, and now I am trying out the second service with a test certificate and the staging endpoint/ClusterIssuer.
The error I am seeing when I describe the Let's Encrypt Order is:
Waiting for HTTP-01 challenge propagation: failed to perform self check GET request 'http://example.nl/.well-known/acme-challenge/9kdpAMRFKtp_t8SaCB4fM8itLesLxPkgT58RNeRCwL0': Get "http://example.nl/.well-known/acme-challenge/9kdpAMRFKtp_t8SaCB4fM8itLesLxPkgT58RNeRCwL0": dial tcp: lookup example.nl on 10.43.0.11:53: server misbehaving
The address that is misbehaving points to the internal IP of my service/kube-dns, which means the request has already made it past my service/traefik, I think.
The cluster is running on a VPS, and I have also checked that the example.nl domain name is added to /etc/hosts with the VPS's IP, like so:
206.190.101.190 example1.nl
206.190.101.190 example.nl
The error is a bit vague to me because I do not know exactly what kube-dns is doing and why it thinks the server is misbehaving. I think maybe it is because it now has 2 domain names to handle and I missed something. Can anyone shed some light on it?
Feel free to ask for more ingress or other server config!
Everything was set up correctly, but this issue definitely had something to do with DNS resolution. Not internally in the k3s cluster, but externally at the domain registrar.
I found it by using https://unboundtest.com for my domain and saw my old nameservers still being used.
I contacted the registrar and they had to change something for the domain in the registry's DNS.
A pretty unique situation, but maybe helpful for people who also think the solution has to be found internally (inside k3s).
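For anyone debugging something similar: a quick way to see what the cluster resolver (the 10.43.0.11 address in the error) actually returns for the domain is a throwaway lookup pod. A minimal sketch using the dnsutils image from the Kubernetes DNS-debugging docs (the pod name is arbitrary):

apiVersion: v1
kind: Pod
metadata:
  name: dns-test                                             # arbitrary name
spec:
  restartPolicy: Never
  containers:
  - name: dnsutils
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
    command: ["nslookup", "example.nl"]                      # result appears in kubectl logs dns-test

If the answer inside the cluster differs from what public resolvers (or unboundtest.com) return, the problem is upstream DNS, as it was here, and not cert-manager or Traefik.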

Remote EJB in Kubernetes

I'm trying to set up a remote EJB call between 2 WebSphere Liberty servers deployed in k8s.
Yes, I'm aware that EJB is not something one would want to use when deploying in k8s, but I have to deal with it for now.
The problem I have is how to expose the remote ORB IP:port in k8s. From what I understand, it's only possible to get it to work if both client and remote "listen" on the same IP. I'm not a network expert, and I'm quite fresh in k8s, so maybe I'm missing something here; that's why I need help.
The only way I got it to work was when I explicitly set the host on the remote server to its own IP address and then accessed it from the client on that same IP. This test was done on a Docker host with a macvlan0 network (each container had its own IP address).
This is the ORB setup in the remote server.xml configuration:
<iiopEndpoint id="defaultIiopEndpoint" host="172.30.106.227" iiopPort="2809" />
<orb id="defaultOrb" iiopEndpointRef="defaultIiopEndpoint">
<serverPolicy.csiv2>
<layers>
<!-- don't care about security at this point -->
<authenticationLayer establishTrustInClient="Never"/>
<transportLayer sslEnabled="false"/>
</layers>
</serverPolicy.csiv2>
</orb>
And client server.xml configuration:
<orb id="defaultOrb">
<clientPolicy.csiv2>
<layers>
<!-- really, I don't care about security -->
<authenticationLayer establishTrustInClient="Never"/>
<transportLayer sslEnabled="false"/>
</layers>
</clientPolicy.csiv2>
</orb>
From the client, this is the JNDI name I use to access it:
corbaname::172.30.106.227:2809#ejb/global/some-app/ejb/BeanName!org\.example\.com\.BeanRemote
And this works.
Since one doesn't want to set a fixed IP when exposing the ORB port, I have to find a way to expose it dynamically, based on the host IP.
Exposing on 0.0.0.0 does not work. The same goes for localhost. In both cases, the client refuses to connect with this kind of error:
Error connecting to host=0.0.0.0, port=2809: Connection refused (Connection refused)
In k8s, I've exposed port 2809 through a LoadBalancer service for the remote pods, and I try to access the remote server from a client pod, where I've set the remote's service IP address in the corbaname definition.
This, of course, does not work. I can reach the remote ip:port by telnet, so it's not a network issue.
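For reference, the Service meant here is just a plain LoadBalancer exposing the IIOP port; a minimal sketch with placeholder names (the actual manifest is not shown in the question):

apiVersion: v1
kind: Service
metadata:
  name: remote-app           # placeholder service name
spec:
  type: LoadBalancer
  selector:
    app: remote-app          # placeholder pod label
  ports:
  - name: iiop
    port: 2809               # the ORB/IIOP port from the server.xml above
    targetPort: 2809

The catch is not the Service itself: the ORB embeds the host it was configured with into the object references it hands out, so the client must be able to reach exactly that host:port, which is what the rest of this question is about.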
I've tried all combinations of setup on the remote server. Exposing on host="0.0.0.0" results in the same exception as above (Connection refused).
I'm not sure exposing on the internal IP address would work either, but even if it would, I don't know the internal IP before the pod is deployed in k8s. Or is there a way to know? There is no env variable with it, I've checked.
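As an aside: Kubernetes can inject the pod's own IP through the Downward API, so an environment variable is possible even though none exists by default. A minimal sketch of the container-spec fragment, with an arbitrary variable name:

env:
- name: POD_IP                   # arbitrary variable name
  valueFrom:
    fieldRef:
      fieldPath: status.podIP    # Downward API: the pod's IP, known only at runtime

It could then be referenced from server.xml the same way ${REMOTE_APP_SERVICE_HOST} is used below.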
Exposing on service IP address (with host="${REMOTE_APP_SERVICE_HOST}") fails with this error:
The server socket could not be opened on 2,809. The exception message is Cannot assign requested address (Bind failed).
Again, I know replacing EJB with REST is the way to go, but it's not an option for now (don't ask why).
Help, please!
EDIT:
I've managed to make some progress. Actually, I believe I've successfully called the remote EJB.
What I did was add hostAliases in the pod definition, which added an alias for my host, something like this:
hostAliases:
- ip: 0.0.0.0
  hostnames:
  - my.host.name
Then I added this host name to the remote server.xml:
<iiopEndpoint id="defaultIiopEndpoint" host="my.host.name" iiopPort="2809" />
I've also added a host alias to my client pod:
hostAliases:
- ip: {remote.server.service.ip.here}
  hostnames:
  - my.host.name
Finally, I've changed the JNDI name to:
corbaname::my.host.name:2809#ejb/global/some-app/ejb/BeanName!org\.example\.com\.BeanRemote
With this setup, the remote server was successfully called!
However, now I have another problem which I didn't have while testing on the Docker host. The lookup completes, but what I get back is not what I expect.
The lookup code is pretty much what you'd expect:
Object obj = new InitialContext().lookup(jndi);
BeanRemote remote = (BeanRemote) PortableRemoteObject.narrow(obj, BeanRemote.class);
Unfortunately, this narrow call fails with a ClassCastException:
Caused by: java.lang.ClassCastException: org.example.com.BeanRemote
at com.ibm.ws.transport.iiop.internal.WSPortableRemoteObjectImpl.narrow(WSPortableRemoteObjectImpl.java:50)
at [internal classes]
at javax.rmi.PortableRemoteObject.narrow(PortableRemoteObject.java:62)
The object I receive is org.omg.stub.java.rmi._Remote_Stub. Any ideas?
Solved it!
So, the first problem was the host name mapping, which was resolved as described in the edit above, by adding host aliases in the pod definitions:
Remote pod:
hostAliases:
- ip: 0.0.0.0
  hostnames:
  - my.host.name
Client pod:
hostAliases:
- ip: {remote.server.service.ip.here}
  hostnames:
  - my.host.name
The remote server then has to use that host name in the IIOP host definition:
<iiopEndpoint id="defaultIiopEndpoint" host="my.host.name" iiopPort="2809" />
Also, the client has to reference that host name in the JNDI lookup:
corbaname::my.host.name:2809#ejb/global/some-app/ejb/BeanName!org\.example\.com\.BeanRemote
This setup makes the remote EJB call work.
The other problem, the ClassCastException, was really unusual. I managed to reproduce the error on the Docker host and then changed one thing at a time until the problem was resolved. It turns out that the problem was with the ldapRegistry-3.0 feature (!?). Adding this feature to the client's feature list resolved my problem:
<feature>ldapRegistry-3.0</feature>
With this feature added, the remote EJB was successfully called.

Hyperledger Fabric orderer container and client REST/Postman oversized record error

I can't communicate with my Hyperledger Fabric First Network...
I can query and invoke from inside the CLI Docker container. That works fine!
But if I want to use Postman and JSON to invoke or query from a client PC, I get an error message in the orderer log:
[grpc] Printf -> DEBU fc9 grpc: Server.Serve failed to complete security handshake from "10.xx.xx.xxx:56694": tls: oversized record received with length 21536
The Docker containers run on a SUSE Linux server, not on a local VM.
I can ping my server, and the orderer container port is mapped with the default config (7050:7050).
I don't really know where to find the right cert.pem and key.pem files on the Linux server's filesystem. I tried different ones in Postman's client certificates option.
I have also tried searching for a solution but can't find a working one.
Hyperledger Fabric peer and orderer nodes only support direct communication using gRPC (which is protocol buffers over HTTP/2) APIs. They do not offer an HTTP/REST interface. Postman only supports HTTP endpoints, so it will not work with peer or orderer nodes. (The error you see is also likely due to the fact that Postman was not using HTTPS.)
If you want to attempt to use REST with the peer and orderer nodes, you might want to check out https://github.com/hyperledger/fabric-sdk-rest, which aims to provide a REST server in front of Hyperledger Fabric nodes.