I deployed a MongoDB instance in the default Docker bridge network.
Recall that the gateway of the bridge network is 172.17.0.1.
For more information, refer to https://docs.docker.com/network/network-tutorial-standalone/.
Recently, I discovered that the MongoDB instance receives a lot of slow queries from a process running behind 172.17.0.1:39694.
How do I find out what process is running on the gateway port 172.17.0.1:39694?
docker network inspect bridge
shows only nodes within the bridge network, but nothing about what processes are listening on its gateway ports.
Each MongoDB client identifies itself when it establishes the connection. Example:
{"t":{"$date":"2020-11-25T10:49:02.505-05:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn216","msg":"client metadata","attr":{"remote":"127.0.0.1:58122","client":"conn216","doc":{"driver":{"name":"mongo-ruby-driver","version":"2.14.0.rc1"},"os":{"type":"linux","name":"linux-gnu","architecture":"x86_64"},"platform":"Ruby 2.7.1, x86_64-linux, x86_64-pc-linux-gnu"}}}
This gives you the language, the driver, and the driver version.
You can pass additional metadata to identify connections. For example, in Ruby you would do this via the :app_name option of Client#initialize.
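Most drivers also accept this through the standard appName connection-string option; as a minimal sketch (the host, database, and application name here are assumptions, not taken from the question):

```shell
# Connect with an application name, so server log lines and
# currentOp output identify this client by name.
mongosh "mongodb://172.17.0.2:27017/test?appName=billing-worker"
```

The value then shows up in the "client metadata" log entry alongside the driver information.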
For mapping ports to processes, see e.g. https://www.putorius.net/process-listening-on-port.html
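On the Docker host itself, something like the following should map the gateway port from the question to a PID and process name (both tools are standard; root may be needed to see other users' sockets):

```shell
# List TCP connections involving port 39694, with the owning process.
sudo ss -tnp | grep 39694

# Alternatively, with lsof:
sudo lsof -nP -iTCP:39694
```

Once you have the PID, /proc/PID/cmdline or ps will tell you what the process is.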
On Google Cloud Platform I need to create two virtual machines that will act as the main server and replication server (as a database).
It happens that I will have several applications that will connect to the main server, which requires me to define in these applications the local network IP (VPC) of this main machine.
My problem occurs when there is some failure on the main machine or even an emergency/maintenance reboot. This type of operation will require me to urgently change all applications to use the replication machine's VPC IP instead of the main one.
Is there any way I can have one IP that can be dedicated to connect to the main machine, but when necessary, change its destination to be the replication machine?
Instead, use an internal L7 load balancer. See the comparison in order to decide whether this is suitable. This PDF explains the stack, and envoyproxy.io is the load balancer.
Andromeda even implements round robin, but for NIC instead of IP.
Also see: Patterns for using floating IP addresses in Compute Engine
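One way to implement the floating-IP pattern from that article is an alias IP range that you move from the primary to the replica on failover, so clients keep a single address. A hedged sketch (instance names, zone, and address are placeholders):

```shell
# Detach the alias IP from the failed primary...
gcloud compute instances network-interfaces update db-primary \
  --zone us-central1-a --aliases ""

# ...and attach it to the replica, so clients keep using 10.128.0.100.
gcloud compute instances network-interfaces update db-replica \
  --zone us-central1-a --aliases "10.128.0.100/32"
```

The load-balancer approach avoids this manual step entirely, which is why it is recommended above.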
I want to run a socket program in AWS ECS with the client and server in one task definition. I am able to run it when I use awsvpc network mode and connect to the server on localhost every time. This is good because I don't need to know the IP address of the server.
The issue is that the server has to start on some port, and if I run 10 of these tasks, only 3 tasks (= the number of running instances) run at a time. This is clearly because 10 tasks cannot open the same port. I could manually check for open ports before starting the server and somehow write the port to a Docker shared volume where the client can read it and connect, but this seems complicated and puts unnecessary code in my server. For services there is dynamic port mapping via an Application Load Balancer, but there isn't anything similar for simply running tasks.
How can I run multiple socket programs without having to manage the port numbers in AWS ECS?
If you're using awsvpc mode, each task gets its own ENI, so there shouldn't be any port conflict. But each instance type has a limited number of ENIs available. You can increase that by enabling ENI trunking, which, however, is supported only by a handful of instance types:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container-instance-eni.html#eni-trunking-supported-instance-types
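ENI trunking is an account-level ECS setting; a sketch of enabling it with the AWS CLI (region and credentials assumed to be configured):

```shell
# Opt the account into ENI trunking; container instances registered
# after this get trunk interfaces with a higher ENI limit.
aws ecs put-account-setting-default --name awsvpcTrunking --value enabled

# Verify the effective setting:
aws ecs list-account-settings --effective-settings --name awsvpcTrunking
```

Note that already-registered container instances must be re-registered (e.g. by replacing them) to pick up trunking.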
I have created a MongoDB replica set using 5 EC2 instances on AWS. I added the nodes using rs.add("[IP_Address]") command.
I want to create a network partition in the replica set. In order to do that, I have defined 2 kinds of security groups: 'SG1' has port 27017 (the MongoDB port) open; 'SG2' doesn't expose 27017.
I want to isolate 2 nodes from the replica set. When I apply SG2 to these 2 nodes (EC2 instances), they should ideally stop receiving reads and writes from the primary, since I am blocking port 27017 with security group SG2. But in my case, they are still writable: data written on the primary is reflected on the partitioned nodes. Can someone help? TYA.
Most firewalls, including AWS security groups, block incoming connections at the time the connection is opened. Changing the settings affects all new connections, but existing open connections are not re-evaluated when the new rules are applied.
MongoDB maintains persistent connections between hosts, and those would only be blocked after a loss of connection between the hosts.
On Linux you can restart the networking stack, which will reset the connections. You can do this after applying the new rules by running:
/etc/init.d/networking stop && /etc/init.d/networking start
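A less disruptive alternative, assuming a kernel built with socket destroy support (CONFIG_INET_DIAG_DESTROY) and a recent iproute2, is to kill only the established MongoDB sockets so that reconnect attempts are evaluated against the new security-group rules:

```shell
# Forcibly close established TCP connections to the MongoDB port;
# the replica-set members will then try to reconnect and be blocked.
sudo ss -K -t 'dport = :27017'
```

This avoids taking down the whole network interface on the partitioned nodes.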
Is there a way to specify a particular IP address when setting up Orion Context Broker using any of the methods mentioned here? Right now I'm running it as a Docker container alongside MongoDB. I tried modifying the docker-compose file, but couldn't find any network settings for Orion.
I recently ran into many difficulties with the connection between Freeboard and OCB, and it may be because OCB runs on the default loopback interface. It was the same deal with FIWARE's accumulator server: it started on that interface, and after changing it to another available interface the connection was established.
You can use the -localIp CLI option to specify which IP interface the broker listens on. By default it listens on all interfaces.
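As a sketch of passing that option to the containerized broker (the IP address and MongoDB hostname here are placeholders; -dbhost is the broker's option for locating MongoDB):

```shell
# Run Orion bound to a single interface instead of all of them.
docker run -d --name orion fiware/orion -dbhost mongodb -localIp 192.168.1.10
```

In a docker-compose file, the same arguments would go in the service's command entry.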
I have created an 8-node MongoDB cluster with 2 shards + 2 replicas (1 for each shard) + 3 config servers + 1 mongos.
All of these are on network 192.168.1.x (eth0) together with the application server, so this network handles all the traffic.
I have therefore created another network, 192.168.10.x (eth1), which contains only these 8 MongoDB nodes.
Now all eight nodes are part of both networks, with dual IPs.
I want to shift the internal traffic between these MongoDB nodes to network 192.168.10.x (eth1) to reduce the load on the main network 192.168.1.x (eth0).
How do I bind the ports/nodes for this purpose?
You can use bind_ip as a startup or configuration option. Keep in mind that the various nodes still need to be accessible in the event of a failover.
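As a minimal sketch, each node could be started so it listens only on the dedicated cluster interface plus loopback (the replica-set name and address below are placeholders; bind_ip accepts a comma-separated list):

```shell
# Listen only on loopback and the cluster-only eth1 address,
# so inter-node traffic stays off the 192.168.1.x network.
mongod --replSet rs0 --bind_ip 127.0.0.1,192.168.10.5 --port 27017
```

The replica set and shard members must then also be added by their 192.168.10.x addresses, since that is the interface they listen on.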
Notable here is your single mongos: it would be advisable either to co-locate the service on each app server or, depending on requirements, to make a pool of mongos instances available to your driver connection. Preferably both, with a larger instance for each mongos where aggregation operations are used.
I found the solution to the problem I was looking for: I configured the cluster according to the IPs of network 192.168.10.x (eth1).
Now the internal data traffic goes through this network.