Corda RPC connection on Kubernetes

I bootstrapped a Corda network with the Cordite network map service following the official instructions. Everything, including the Spring Boot server, works fine locally when the nodes are deployed in Docker containers. But when I put the Corda nodes in Kubernetes, the Spring Boot server cannot communicate with the Corda node. In the Docker setup I'm dropped into the Corda console, but in Kubernetes the console doesn't appear (no error, though; everything else is the same). There are probably some issues with RPC communication. Can anyone with experience point out what could go wrong?

Inter-pod communication in Kubernetes works differently from communication between Docker containers. To establish a Spring RPC connection with Corda, you need to configure your RPC connection to go through a Kubernetes Service rather than a container address.
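A minimal sketch of the client side, assuming the node's RPC port is published by a Kubernetes Service named corda-node on port 10006 (both names are placeholders; also make sure the node's rpcSettings address binds to 0.0.0.0 so the Service endpoints can reach it):

import net.corda.client.rpc.CordaRPCClient;
import net.corda.client.rpc.CordaRPCConnection;
import net.corda.core.messaging.CordaRPCOps;
import net.corda.core.utilities.NetworkHostAndPort;

public class NodeRpcClient {
    public static void main(String[] args) {
        // "corda-node" is the assumed Service name; in-cluster DNS resolves
        // it to the Service's ClusterIP, which fronts the node pod.
        NetworkHostAndPort rpcAddress = new NetworkHostAndPort("corda-node", 10006);
        CordaRPCClient client = new CordaRPCClient(rpcAddress);
        // Credentials must match an rpcUsers entry in the node's node.conf.
        CordaRPCConnection connection = client.start("user1", "password");
        CordaRPCOps proxy = connection.getProxy();
        System.out.println(proxy.nodeInfo());
        connection.notifyServerAndClose();
    }
}

The key point is that the host is the Service's DNS name rather than a pod or container IP, which Kubernetes does not keep stable.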

Related

SocketTimeoutException when a container in a Pod tries to contact the Kubernetes API server

I have a Kubernetes cluster deployed locally with Kubeadm/Vagrant with a master and two workers with the following IPs:
master: 192.168.250.10
worker1: 192.168.250.11
worker2: 192.168.250.12
Then I have an application composed of a ReactJS frontend and a Spring Boot backend running in two separate containers in the same Pod. When I submit a form on the frontend, the application calls an API in the backend that in turn calls a Kubernetes API. To authenticate to the cluster I use a correctly configured .kube/config file.
When the application (frontend/backend) is outside the cluster everything works fine. I use docker-compose to start up the two containers just for the unit tests. The .kube/config file has https://192.168.250.10:6443 as the API URL. The problem is that when I try to run the application in the containers, the IP 192.168.250.10 doesn't work and the communication ends in a timeout exception.
I am sure the application is OK because the same application works fine on IBM Cloud, where the .kube/config points to an API server with a reachable public IP.
My question is: which IP should I put into the .kube/config when I run the application inside my cluster? How can I get this IP using kubectl commands?
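For illustration only, a hedged sketch using the official Kubernetes Java client (io.kubernetes:client-java; the post doesn't say which client library the backend uses, so this is an assumption): when the code runs inside the cluster it can skip .kube/config entirely and use the in-cluster service-account configuration, which reaches the API server through the internal kubernetes.default.svc address.

import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.Configuration;
import io.kubernetes.client.util.ClientBuilder;

public class InClusterClient {
    public static void main(String[] args) throws Exception {
        // cluster() reads the service-account token and CA certificate that
        // Kubernetes mounts into every pod, instead of a .kube/config file.
        ApiClient client = ClientBuilder.cluster().build();
        Configuration.setDefaultApiClient(client);
        System.out.println("API server: " + client.getBasePath());
    }
}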
Thanks in advance for any help.

Failing to connect GKE with GCE on the same VPC?

I am new to Google Cloud Platform; here is the context:
I have a Compute Engine VM running as a MongoDB server and another Compute Engine VM running the NodeJS server, already with Docker. The NodeJS application connects to Mongo via the default VPC internal IP. Now I'm trying to migrate the NodeJS application to Google Kubernetes Engine, but I can't connect to the MongoDB server when I deploy the NodeJS application's Docker image to the cluster.
All services like GCE and GKE are in the same region (us-east-1).
As a sanity check, I SSHed into a Kubernetes cluster node, deployed a simple MongoDB Docker image, and tried to connect to the remote MongoDB server from the command line; the problem is the same, a timeout when trying to connect.
I have also checked the firewall settings on GCP as well as the bindIp setting on the MongoDB server, and neither is blocking the connection.
Does anyone know what may be happening? Thank you very much.
In my case traffic from GKE to the GCE VM was blocked by the Google firewall even though both are in the same network (default).
I had to whitelist the cluster's pod network listed in the cluster details:
Pod address range 10.8.0.0/14
https://console.cloud.google.com/kubernetes/list
https://console.cloud.google.com/networking/firewalls/list
By default, containers in a GKE cluster can access GCE VMs of the same VPC through internal IPs, just as they can access the internet (e.g., google.com); GKE and the VPC know how to route the traffic. The problem must be in other configuration (firewall or your application).
You can do a test: start a simple HTTP server on the GCE VM, say with internal IP 10.138.0.5:
python -m SimpleHTTPServer 8080
then create a GKE container and try to access the service:
kubectl run my-client -it --image=tutum/curl --generator=run-pod/v1 -- curl http://10.138.0.5:8080

Connecting to an Ignite server running as a Docker container inside Kubernetes from an outside client process

I have Minikube running Kubernetes inside VirtualBox.
One of the Docker containers it runs is an Ignite server.
During development I try to access the Ignite server from an outside Java client, but discovery fails with every configuration I have tried.
Is it possible at all?
If yes, can someone give an example?
To enable Apache Ignite node auto-discovery in Kubernetes, you need to enable TcpDiscoveryKubernetesIpFinder in your IgniteConfiguration. Read more about this at https://apacheignite.readme.io/docs/kubernetes-deployment. Your Kubernetes service definition should expose the container's port; Minikube should then give you the service URL after a successful deployment.
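A minimal sketch of that server-side configuration, assuming a Kubernetes Service named ignite in the default namespace fronting the Ignite pods (the setter names match older Ignite 2.x releases; newer ones pass a KubernetesConnectionConfiguration instead, and the ignite-kubernetes module must be on the classpath):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder;

public class IgniteK8sNode {
    public static void main(String[] args) {
        // The IP finder asks the Kubernetes API for the endpoints of this
        // Service, so every Ignite pod must sit behind it. Both names below
        // are assumptions; substitute your own.
        TcpDiscoveryKubernetesIpFinder ipFinder = new TcpDiscoveryKubernetesIpFinder();
        ipFinder.setServiceName("ignite");
        ipFinder.setNamespace("default");

        TcpDiscoverySpi discovery = new TcpDiscoverySpi();
        discovery.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discovery);

        Ignite ignite = Ignition.start(cfg);
    }
}

Note that this finder resolves pod IPs, which are only routable inside the cluster; a client running outside Minikube still needs the exposed service URL to reach the node.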

Communication among pods not working after exposing them as services

I have set up a Kubernetes cluster manually. The cluster is healthy, the nodes are up, and the pods and services are created and running.
I have a web pod, which is a Python Flask application, and a db pod, which is Redis. I exposed Redis as a service so it is accessible from Python, and exposed the web pod as an external service as well. The external service runs on port 31727.
When I access the web application through the browser, it reports that the Redis host is not accessible.
The application works well when deployed in a Kubernetes cluster created using kubeadm/kops.
Sounds like a kube-proxy or overlay networking issue at first glance. Are you sure kube-proxy is running on the nodes and that you have a working overlay? Can you ping pods directly, pod to pod?
Update: as your pod-to-pod connectivity is down, you need to look into your flannel configuration and make sure it works, as well as make sure pods are started with flannel networking (i.e., via CNI) rather than on the local docker0 interface network.

Docker container can't leverage external Cloudant service (network resolution / visibility?)

I've built a container that leverages a Cloud Foundry app bound to a service, Cloudant to be specific.
When I run the container locally I can connect to my Cloudant service.
When I build and run my image in the Bluemix container service I can no longer connect to my Cloudant service. I did use --bind to bind my app to the container. I have verified that the VCAP_SERVICES info is propagating to my container successfully.
To narrow the problem down further, I tried just running
ice -run --name NAME IMAGE_NAME ping CLOUDANT_HOST
and found I was getting an unknown host error.
So I then tried to just ping the IP, and got "Network is unreachable".
If we cannot resolve Bluemix services over the network, how can we leverage them? Is this just a temporary problem, or am I missing something?
Again, runs fine locally but fails when hosted in the container service.
It has been my experience that networking is not reliable in IBM Containers for about 5 seconds at startup. Try adding a "sleep 10" to your CMD or ENTRYPOINT, or set it up to retry for X seconds before giving up.
Once the networking comes up it has been reliable for me, but the first few seconds of a container's life have had trouble with DNS, binding, and outgoing traffic.
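A minimal sketch of that retry-at-startup approach in plain Java (the CLOUDANT_HOST variable and the 30-second budget are placeholders, not anything mandated by Bluemix):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class WaitForNetwork {
    // Retry a TCP connection until it succeeds or a deadline passes;
    // run this before the application's real startup work.
    static boolean waitFor(String host, int port, long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 2000);
                return true; // DNS resolution and routing are both up
            } catch (IOException e) {
                Thread.sleep(1000); // not up yet; wait and retry
            }
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        // Placeholder: in practice the host would be parsed out of VCAP_SERVICES.
        String host = System.getenv().getOrDefault("CLOUDANT_HOST", "example.cloudant.com");
        if (!waitFor(host, 443, 30_000)) {
            System.err.println("Network never came up; giving up.");
            System.exit(1);
        }
    }
}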
Looking at your problem, it could be related to a network error on the container when it runs on Bluemix.
Try to access your container through a shell when on Bluemix (using cf ic console or the Docker equivalent) and check whether the network has been brought up correctly and whether its network interface(s) have an IP to use.