I have an Apache Ignite server installed on an AWS EC2 instance, and I'm using an S3 bucket for client discovery. I have multiple micro-services deployed in Docker containers, and they communicate with the Ignite server. The problem I'm facing is that when a micro-service registers itself with the Ignite server as a client, the registration works fine, but it registers with the Docker container's private IP range, which is not reachable from the Ignite server. So when the Ignite server checks the client heartbeat, it cannot reach the client. Can someone please tell me the best approach for using Ignite with a container-based architecture?
Output while the server is trying to check the client status:
(wrn) <visor>: Failed to connect to node (is node still alive?). Make sure that each ComputeTask and cache Transaction has a timeout
set in order to prevent parties from waiting forever in case of
network issues [nodeId=8b04f5a6-6b1d-498b-98b2-1044b8c25f3a,
addrs=[/172.17.0.4:47100, /127.0.0.1:47100]]
1) Let Ignite know the IP address of your EC2 instance:
TcpDiscoverySpi spi = new TcpDiscoverySpi();
spi.setLocalAddress("10.0.0.5"); // placeholder: the IP on the EC2 subnet
2) Run Docker in "host" network mode, as opposed to the default "bridge" mode.
These two steps should allow bidirectional TCP handshakes between the members of your Ignite cluster; a minimal compose sketch for step 2 is shown below.
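For step 2, the compose entry could look roughly like this; the service and image names are placeholders, not taken from the original post:

version: "3"
services:
  my-microservice:
    image: my-microservice   # placeholder image name
    network_mode: host       # share the EC2 host's network stack

With host networking the container binds directly to the host's interfaces, so the address the client advertises (step 1) is one the Ignite server can actually reach.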
You can forward the host name to the Ignite container via a system environment variable in your Ignite configuration:
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
    <property name="addresses">
        <list>
            <value>#{systemEnvironment['IGNITE_HOST'] ?: '127.0.0.1'}:47500..47509</value>
        </list>
    </property>
</bean>
An example of a docker-compose.yml for two communicating Ignite services:
version: "3"
services:
ignite:
image: image_name1
networks:
- net
face:
image: image_name2
depends_on:
- ignite
networks:
- net
environment:
IGNITE_HOST: 'ignite'
The Ignite node in the 'face' service can then connect to the Ignite node in the 'ignite' service using the address ignite:47500..47509.
Related
I have an application with a Spring backend and an Angular frontend. I am using docker-compose, and it creates 2 containers.
I have a bridge network, so locally I am able to test.
Now I want to deploy to Google Cloud.
Question: (1) Do I need to create any GCP-specific yaml file?
The cluster I created (using GKE in this case) does not seem good enough; the workload shows "Does not have minimum availability".
I haven't seen any examples where Spring and Angular are deployed individually using Cloud Run. But is this possible?
I desperately need to deploy my containers. Is there a way?
Edit:
The Spring backend is able to talk to Cloud SQL (answered in another post).
The Angular app is not running because it doesn't know the upstream host:
nginx-py-sha256-2
2021/07/14 15:21:13 [emerg] 1#1: host not found in upstream "appserver:2623" in /etc/nginx/conf.d/default.conf:2
In my docker-compose:
services:
  # App backend service
  appserver:
    container_name: appserver
    image: appserver
  pui:
    container_name: nginx-py
    image: nginx-py
and my nginx.conf refers to the backend as appserver.
The image I push is:
docker push eu.gcr.io/myapp/appserver
What name should I use in nginx.conf so that it can identify the upstream host? It would also be nice if I could disable the prefix.
The question "GCP Kubernetes workload 'Does not have minimum availability'" is unanswered, so this is not a duplicate.
You have a container for the backend and a frontend with static files. The best pattern for that is:
Use Cloud Run for your backend
Use Cloud Storage for the frontend (static files) (make the bucket publicly accessible)
Use a load balancer where you route the static request to the bucket backend, and the backend request to Cloud Run
And, of course, forget Docker Compose.
Note: if you have a container for your frontend with a web server (Angular server, NGINX or something else), you can also deploy it on Cloud Run, but you will pay for processing for nothing; Cloud Storage is a better solution.
In both cases, a load balancer is recommended to avoid CORS issues. In addition, you will be able to add a CDN on the load balancer if your business needs it.
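As a rough illustration of that routing, here is a sketch of a GCP URL map in the YAML form accepted by gcloud compute url-maps import; every name here (my-project, frontend-bucket, backend-api) is a hypothetical placeholder, and the backend bucket and the backend service fronting Cloud Run must already exist:

name: web-url-map
# default: serve static files from the backend bucket
defaultService: https://www.googleapis.com/compute/v1/projects/my-project/global/backendBuckets/frontend-bucket
hostRules:
  - hosts:
      - '*'
    pathMatcher: main
pathMatchers:
  - name: main
    defaultService: https://www.googleapis.com/compute/v1/projects/my-project/global/backendBuckets/frontend-bucket
    pathRules:
      - paths:
          - /api/*
        service: https://www.googleapis.com/compute/v1/projects/my-project/global/backendServices/backend-api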
My Goal
I have a web frontend application which communicates via a specific port (REST API) with a backend application (let's call it "My_App"). My_App needs to send multicast frames to communicate with something in the outside world. And I need to put everything into Docker containers.
What I tried (and failed)
First I tried to have nginx and the app inside a single container, using host networking for the whole container, as depicted in the following picture:
This works on my development PC but not on the target system, due to firewall rules preventing me from accessing ports 8081 and 12345 from outside the target system. If I log into the target system, e.g. via ssh, I can access those ports just fine. So the situation on the target system is like this:
What I am trying to achieve
So now I am trying to split the app and nginx into separate containers (port mapping for bridged networks is allowed on the host system, and I can access mapped ports from the outside).
What I want to do is connect a Container A using bridge networking with another Container B using host networking, so that an application (e.g. nginx as a reverse proxy) inside Container A can send requests to Container B on a specific port, something like shown in the following picture:
But now I do not know how to achieve this. I know that I can communicate between containers if both use bridge networking, by using Docker's DNS and the name of each container. But how do I do this if one container uses host networking, bypassing Docker's DNS?
Here is my current docker-compose file:
version: "2"
services:
Container_B:
image: my_app
network_mode: host
Container_A:
image: nginx:alpine
ports:
- 80:8081
- 12342:12342 // apparently does not work because in use by Container_B
depends_on:
- "Container_B"
The actual question
Is it possible to access a port inside Container_B, which uses host networking, from within Container_A, which uses bridge networking, without some external application doing the routing?
If you are on Mac or Windows, you can use host.docker.internal to connect to containers in network_mode: 'host'.
For Linux, this feature is not available yet; it will come in a newer Docker release. Until then, you can use the docker-host image (docker pull qoomon/docker-host) and connect to containers in host mode via dockerhost:{port}.
Add this to your docker-compose.yml file (under services):
dockerhost:
  image: qoomon/docker-host
  cap_add:
    - NET_ADMIN
    - NET_RAW
  restart: on-failure
  networks:
    - back_end
Container_A should also be in the same network as dockerhost.
After this, you can connect from containers on the bridge network to containers in host mode using dockerhost:{port}, where port is the port the host-mode container is listening on.
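Putting the pieces together, a minimal sketch of the whole compose file could look like the following; the back_end network name comes from the snippet above, while the port numbers are just illustrative:

version: "2"
services:
  Container_B:
    image: my_app
    network_mode: host             # binds directly to the host's ports
  dockerhost:
    image: qoomon/docker-host      # forwards traffic to the Docker host
    cap_add:
      - NET_ADMIN
      - NET_RAW
    restart: on-failure
    networks:
      - back_end
  Container_A:
    image: nginx:alpine
    ports:
      - "80:8081"
    networks:
      - back_end                   # same network as dockerhost, so the name resolves
    depends_on:
      - dockerhost
networks:
  back_end:
    driver: bridge

Inside Container_A, nginx would then proxy to dockerhost:12345 (or whichever port Container_B is listening on via the host).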
I am using the Kafka orderer service for Hyperledger Fabric 1.4. While updating chaincode or making any putState call I get an error message stating: Failed to send the transaction successfully to the orderer, status: SERVICE UNAVAILABLE. Checking the zookeeper and kafka nodes, it seems the kafka nodes are not able to talk to each other.
kafka & zookeeper logs
Could you provide more info about the topology of the zookeeper-kafka cluster?
Did you use docker to deploy the zookeeper cluster? If so, you can refer to this file:
https://github.com/whchengaa/hyperledger-fabric-technical-tutorial/blob/cross-machine-hosts-file/balance-transfer/artifacts/docker-compose-kafka.yaml
Remember to specify the IP addresses of the other zookeeper nodes in the hosts file that is mounted to /etc/hosts of each zookeeper node.
Make sure the port numbers of the zookeeper nodes listed in the ZOO_SERVERS environment variable are correct.
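For reference, a hedged sketch of what one zookeeper entry in such a docker-compose file usually looks like; the hostnames, IDs and IP addresses below are placeholders rather than values from the linked file:

zookeeper0:
  image: hyperledger/fabric-zookeeper
  environment:
    - ZOO_MY_ID=1
    - ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
  extra_hosts:                     # entries written into the container's /etc/hosts
    - "zookeeper1:192.168.1.11"    # placeholder IPs of the other machines
    - "zookeeper2:192.168.1.12"
  ports:
    - "2181:2181"
    - "2888:2888"
    - "3888:3888"

The point of the answer is that every node named in ZOO_SERVERS must resolve (here via extra_hosts) and the ports must match what the other nodes actually expose.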
I am trying to deploy Apache Druid in a docker container. The image builds successfully, and all the services, including zookeeper, start normally when the Apache Druid docker image is deployed.
Here is my setup: I am deploying the Druid docker image on a Docker remote host, which uses Docker Swarm internally. I have configured a different container name and hostname for each Apache Druid service, and I have configured an external network. I found out that internally Swarm is launching those services on different hosts. I have configured "link" as zookeeper for the Druid services and vice versa.
But the middle manager, coordinator and broker are failing to connect to zookeeper. The following is the error:
org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/IP Address:2181. Will not attempt to authenticate using SASL (unknown error) 2020-03-19T22:04:05,673 INFO [main-SendThread(localhost:2181)] org.apache.zookeeper.ClientCnxn - Socket error occurred: zookeeper/IP Address:2181.: Connection refused
So I have different services running on a Docker network, on different nodes (Docker on Linux). These services are part of Apache Druid (middle manager, broker, router, etc.) and they all live in one single docker-compose file. The services start but are then not able to connect to zookeeper, which is part of the Apache Druid package. I found out from my infra team that these services are launched on different nodes within the network. I have defined an external network, and I am also defining links. How do I configure the services to talk to each other? My docker-compose file is in a comment below.
Requesting inputs.
Thanks and Regards, Shubhada
I have fixed this issue by setting the Druid host to gateway.docker.internal.
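For illustration only: with the official apache/druid image, environment variables prefixed with druid_ are turned into runtime properties, so this kind of fix could be expressed in the compose file roughly as follows (the service name, image tag and zookeeper hostname are assumptions, not taken from the original post):

broker:
  image: apache/druid:0.17.0              # hypothetical image tag
  environment:
    - druid_host=gateway.docker.internal  # becomes druid.host at runtime
    - druid_zk_service_host=zookeeper     # assumed zookeeper service name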
I have minikube running Kubernetes inside a VirtualBox VM.
One of the docker containers it runs is an Ignite server.
During development I try to access the Ignite server from an outside Java client, but discovery fails with every configuration I have tried.
Is it possible at all?
If yes, can someone give an example?
To enable Apache Ignite node auto-discovery in Kubernetes, you need to enable TcpDiscoveryKubernetesIpFinder in your IgniteConfiguration. Read more about this at https://apacheignite.readme.io/docs/kubernetes-deployment. Your Kubernetes service definition should specify the container's exposed ports; minikube should then give you a service URL after a successful deployment.
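A minimal sketch of such a service definition, assuming the Ignite pods carry the label app: ignite and use the default discovery/communication ports (all names here are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: ignite
spec:
  type: NodePort          # reachable from outside the minikube VM
  selector:
    app: ignite           # assumed pod label
  ports:
    - name: discovery
      port: 47500
    - name: communication
      port: 47100

With minikube, minikube service ignite --url then prints the externally reachable address; the exact TcpDiscoveryKubernetesIpFinder setup, including the service it queries for pod IPs, is described on the page linked above.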