Orion Context Broker specified IP - fiware-orion

Is there a way to bind Orion Context Broker to a specific IP address when setting it up using any of the methods mentioned here? Right now I'm running it as a Docker container alongside MongoDB. I tried modifying the docker-compose file, but couldn't find any network settings for Orion.
I recently ran into many difficulties connecting Freeboard to the OCB, and it may be because the OCB is running on the default loopback interface. The same thing happened with FIWARE's accumulator server: it started on that interface, and only after switching it to another available one was the connection established.

You can use the -localIp CLI option to specify which IP interface the broker listens on. By default, it listens on all interfaces.
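For the Docker setup described in the question, one way is to append the option to the container command in docker-compose. Below is a minimal sketch, assuming the stock fiware/orion image (whose entrypoint passes extra arguments straight to the contextBroker binary) and a MongoDB service named mongo; the address shown is a placeholder for whichever container interface you want the broker bound to:

services:
  mongo:
    image: mongo:4.4
  orion:
    image: fiware/orion
    depends_on:
      - mongo
    ports:
      - "1026:1026"
    # -dbhost points the broker at the mongo service; -localIp picks the
    # listening interface (must be an interface of the container, not the host)
    command: -dbhost mongo -localIp 172.18.0.10

Keep in mind that inside a container the broker only sees the container's own interfaces; to control which host interface is exposed, you can instead restrict the published port, e.g. "192.168.1.10:1026:1026".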

Process behind gateway ports

I deployed a MongoDB instance in the default Docker bridge network.
Recall that the gateway of the bridge network is 172.17.0.1.
For more information, refer to https://docs.docker.com/network/network-tutorial-standalone/.
Recently, I discovered that the MongoDB instance receives a lot of slow queries from a process running behind 172.17.0.1:39694.
How do I find out what process is running on the gateway port 172.17.0.1:39694?
docker network inspect bridge
shows only the containers within the bridge network, but nothing about what processes are running on its gateway ports.
Each MongoDB client identifies itself when it establishes the connection. Example:
{"t":{"$date":"2020-11-25T10:49:02.505-05:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn216","msg":"client metadata","attr":{"remote":"127.0.0.1:58122","client":"conn216","doc":{"driver":{"name":"mongo-ruby-driver","version":"2.14.0.rc1"},"os":{"type":"linux","name":"linux-gnu","architecture":"x86_64"},"platform":"Ruby 2.7.1, x86_64-linux, x86_64-pc-linux-gnu"}}}
This gives you the client language, the driver, and the driver version.
You can pass additional metadata to identify connections. For example, in Ruby you would do this via the :app_name option of Client#initialize.
For mapping ports to processes, see e.g. https://www.putorius.net/process-listening-on-port.html
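Since connections coming from the bridge gateway address originate from a process on the Docker host itself (or from a forwarded host port), you can map the ephemeral port to a process there. A minimal sketch, assuming shell access to the host and ss or lsof installed:

# Run on the Docker host: find the local process owning port 39694
sudo ss -tnp | grep :39694
# Or, equivalently, with lsof:
sudo lsof -iTCP:39694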

Dynamic port mapping for ECS tasks

I want to run a socket program in AWS ECS with the client and server in one task definition. I am able to run it when I use awsvpc network mode and connect to the server on localhost every time. This is good because I don't need to know the IP address of the server. The issue is that the server has to start on some port, and if I run 10 of these tasks, only 3 tasks (the number of running instances) run at a time. This is clearly because 10 tasks cannot open the same port. I could manually check for open ports before starting the server and somehow write the port to a shared Docker volume where the client can read it and connect, but this seems complicated and leaves unnecessary code in my server. For services there is dynamic port mapping via an Application Load Balancer, but there isn't anything for simply running tasks.
How can I run multiple socket programs in AWS ECS without having to manage the port numbers?
If you're using awsvpc mode, each task gets its own ENI and there shouldn't be any port conflict. However, each instance type has a limited number of ENIs available. You can increase that limit by enabling ENI trunking, which, however, is supported by only a handful of instance types:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container-instance-eni.html#eni-trunking-supported-instance-types
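ENI trunking is an opt-in account setting. A minimal sketch of enabling it with the AWS CLI (assuming credentials and region are already configured):

# Opt the calling identity into ENI trunking; use put-account-setting-default
# to apply it account-wide instead.
aws ecs put-account-setting --name awsvpcTrunking --value enabled

Only container instances launched after the setting takes effect will use trunk interfaces.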

Heavy multicast traffic created by Jboss 5 AS cluster

We have a JBoss 5 AS cluster consisting of 2 nodes using multicast. Everything works fine and the servers are able to discover each other and form a cluster,
but the problem is that these servers generate heavy multicast traffic, which affects the network performance of the other servers sharing the same network.
I am new to JBoss clustering. Is there any way to use unicast (point-to-point) instead of multicast, or to configure the multicast so that it is not a problem for the rest of the network? Can you refer me to some documentation, a blog post, or similar that can help me get rid of this problem?
I didn't get any answers here, but this might be of help to someone in the future. We managed to resolve it as follows.
Set the following TTL property for JBoss in the startup script:
-Djgroups.udp.ip_ttl=1
This restricts multicast messages to 1 hop. It will not reduce the amount of network traffic between the clustered JBoss nodes, but it will prevent the traffic from spreading outside.
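For example, with the standard JBoss 5 start script the property can be passed directly on the command line (the all profile here is just an example):

./run.sh -c all -Djgroups.udp.ip_ttl=1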
If you have other servers in the same subnet that are affected by the flooding problem, then
you might have to switch to the TCP stack and do unicast instead of multicast:
-Djboss.default.jgroups.stack=tcp
Also, there are more clustering-related configuration files in the JBoss deploy directory that you should look at:
server/production/deploy/cluster/jboss-cache-manager.sar/META-INF/jboss-cache-manager-jboss-beans.xml
and other files in the JGroups configuration.
If multicast is not an option, or for some reason it doesn't work due to the network topology, you can use unicast.
To use unicast clustering instead of UDP multicast: open up your profile, look into the file jgroups-channelfactory-stacks.xml, and locate the stack named "tcp". That stack still uses UDP, but only for multicast discovery. If that small amount of UDP traffic is acceptable, you don't need to change it. If it isn't, or multicast doesn't work at all, you will need to configure the TCPPING protocol and set its initial_hosts to the addresses where it should look for cluster members, as in the sketch below.
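As a rough sketch, the TCPPING entry in jgroups-channelfactory-stacks.xml looks something like the following; the exact attributes vary by JGroups version, and the hostnames and ports are placeholders:

<TCPPING timeout="3000"
         initial_hosts="${jgroups.tcpping.initial_hosts:node1[7600],node2[7600]}"
         port_range="1"
         num_initial_members="2"/>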
Afterwards, you will need to tell JBoss Cache to use this stack. Open up jboss-cache-manager-jboss-beans.xml, where a stack is defined for each cache. You can either change it there from udp to tcp, or simply pass the property when starting the AS; just add:
-Djboss.default.jgroups.stack=tcp

jboss clustering GMS, join

I have jboss 5.1.0.
We have configured JBoss with clustering somehow, but in fact we do not use clustering while developing or testing. Still, in order to launch the project I have to type the following:
./run.sh -c all -g uniqueclustername -b 0.0.0.0 -Djboss.messaging.ServerPeerID=1 -Djboss.service.binding.set=ports-01
But while JBoss is starting I see something like this in the console:
17:24:45,149 WARN [GMS] join(172.24.224.7:60519) sent to 172.24.224.2:61247 timed out (after 3000 ms), retrying
17:24:48,170 WARN [GMS] join(172.24.224.7:60519) sent to 172.24.224.2:61247 timed out (after 3000 ms), retrying
17:24:51,172 WARN [GMS] join(172.24.224.7:60519)
Here 172.24.224.7 is my local IP,
while 172.24.224.2 is the IP of another developer in our room (and JBoss there is stopped).
So it tries to join the other node, or something like that (I'm not very familiar with how JBoss acts in clusters). As a result, the application does not start.
What may be the problem, and how do I avoid this joining?
You can probably fix this by specifying
-Djgroups.udp.ip_ttl=0
in your startup. This sets the IP time-to-live on the JGroups packets to zero, so they never get anywhere, and the cluster will never form. We use this in dev here to stop the various developer machines from forming a cluster. There's no need to specify a unique cluster name.
I'm assuming you need to do clustering in production, is that right? Could you just use the default configuration instead of all? This would remove the clustering stuff altogether.
While setting up the server, keeping the hostname as localhost and passing --host=localhost instead of an IP address will solve the problem. That makes the server start in non-clustered mode.
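As a sketch of what that looks like with the JBoss 5 start script (--host is the long form of the -b bind-address option):

./run.sh -c default --host=localhost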

jndi.properties in JBoss

Is there any way to configure JNDI so that the lookup first checks localhost and, if it doesn't find a matching name, performs automatic discovery of other JNDI servers?
My understanding of the documentation is that this is the default behavior when using clustering:
16.2.2. Client configuration
The JNDI client needs to be aware of the HA-JNDI cluster. You can pass a list of JNDI servers (i.e., the nodes in the HA-JNDI cluster) to the java.naming.provider.url JNDI setting in the jndi.properties file. Each server node is identified by its IP address and the JNDI port number. The server nodes are separated by commas (see Section 16.2.3, “JBoss configuration” on how to configure the servers and ports).
java.naming.provider.url=server1:1100,server2:1100,server3:1100,server4:1100
When initialising, the JNP client code will try to get in touch with each server node from the list, one after the other, stopping as soon as one server has been reached. It will then download the HA-JNDI stub from this node.
Note - There is no load balancing behavior in the JNP client lookup process. It just goes through the provider list and uses the first available server. The HA-JNDI provider list only needs to contain a subset of HA-JNDI nodes in the cluster.
The downloaded smart stub contains the logic to fail-over to another node if necessary, plus the updated list of currently running nodes. Furthermore, each time a JNDI invocation is made to the server, the list of targets in the stub interceptor is updated (only if the list has changed since the last call).
If the property string java.naming.provider.url is empty or if all servers it mentions are not reachable, the JNP client will try to discover a bootstrap HA-JNDI server through a multicast call on the network (auto-discovery). See Section 16.2.3, “JBoss configuration” on how to configure auto-discovery on the JNDI server nodes. Through auto-discovery, the client might be able to get a valid HA-JNDI server node without any configuration. Of course, for the auto-discovery to work, the client must reside in the same LAN as the server cluster (e.g., the web servlets using the EJB servers). The LAN or WAN must also be configured to propagate such multicast datagrams.
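Put together, a minimal jndi.properties for such a client might look like the following sketch (the hostnames are placeholders; 1100 is the default HA-JNDI port):

java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
java.naming.provider.url=server1:1100,server2:1100

Leaving java.naming.provider.url empty (or omitting it) falls back to the multicast auto-discovery described above.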