Visualization problem in #graphdb : this node has no visible connections

I'm using GraphDB to visualize connections between my nodes, but when I use the visual graph and type an IRI, a single node appears, even though I know it has several connections; they do not appear.
[Architecture screenshot]
[Node screenshot]
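One way to rule out a data problem (as opposed to the visual graph's display settings filtering edges out) is to query the repository's SPARQL endpoint directly for the node's triples. A minimal sketch, assuming a local GraphDB instance; the repository name `myrepo` and the IRI are placeholders:

```python
from urllib.parse import urlencode

# Placeholders: adjust the repository URL and IRI to your setup.
GRAPHDB = "http://localhost:7200/repositories/myrepo"
IRI = "http://example.org/resource/node1"

# List every outgoing edge of the node; if this returns rows, the
# connections exist and the issue is on the visualization side.
query = f"SELECT ?p ?o WHERE {{ <{IRI}> ?p ?o }}"
url = GRAPHDB + "?" + urlencode({"query": query})

# import urllib.request
# rows = urllib.request.urlopen(url).read()  # requires a running GraphDB
print(url)
```

If the query returns results but the visual graph stays empty, check the visual graph's settings for predicate or type filters that may be hiding those edges.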

Related

How to isolate a local application completely (intercept inward and outward traffic) using mitmproxy?

I'm using mitmproxy on my macOS machine. The requirement is simple - I would like to isolate a local application (Java or Node) completely. All the inward and outward traffic to/from this application should go through mitmproxy.
I tried looking for answers but couldn't find one with a concrete example or explanation. Can someone help here?
Transparent mode of mitmproxy is definitely one approach but it gives me everything (all the traffic) in one go.
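Short of transparent mode, the usual per-application approach is to point only that application at the proxy via environment variables (or JVM system properties), so mitmproxy in regular mode sees just its traffic. A sketch, assuming mitmproxy is already running on its default 127.0.0.1:8080; note this only catches clients that honour proxy settings, so full isolation still needs transparent mode plus firewall rules:

```python
import os
import subprocess

PROXY = "http://127.0.0.1:8080"  # mitmproxy's default listen address

env = os.environ.copy()
env.update({
    "HTTP_PROXY": PROXY,     # honoured by many Node/Python HTTP clients
    "HTTPS_PROXY": PROXY,
    "http_proxy": PROXY,     # some tools only read the lowercase variants
    "https_proxy": PROXY,
})

# The JVM ignores HTTP(S)_PROXY by default; it takes system properties instead:
jvm_flags = [
    "-Dhttp.proxyHost=127.0.0.1", "-Dhttp.proxyPort=8080",
    "-Dhttps.proxyHost=127.0.0.1", "-Dhttps.proxyPort=8080",
]

# subprocess.run(["node", "app.js"], env=env)               # Node example
# subprocess.run(["java", *jvm_flags, "-jar", "app.jar"])   # JVM example
print(env["HTTPS_PROXY"])
```

You will also need the app (or its JVM/Node runtime) to trust the mitmproxy CA certificate for HTTPS interception to work.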

Are sticky sessions across multiple data centres with a GTM possible?

I'm a software developer with limited knowledge of F5 GTMs.
We are building web applications that will run in Edge within a company network.
We have a set-up of two data centres & a GTM (F5) across the two. We use Kubernetes inside the DCs with Istio. We are building web applications that hold state in the DCs.
I'm told the GTM will allocate an IP for the data centre it decides to direct to with a TTL.
Once the TTL expires the browser will call the GTM again to get a new IP which the GTM could allocate to the other DC.
We were hoping to use sticky sessions and hold state in application memory, but the GTM and its TTL seem to make this unreliable. What I don't understand is that I've worked at other places with F5 GTMs, active-active deployments, and sticky sessions. Is anyone aware of any way to get this to work with the GTM setup we have? We could invest more and externalize the state, but we would prefer not to.
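The failure mode described above can be sketched with a toy model (the VIPs and the strict alternation are invented for illustration; a real GTM's answers depend on its configured load-balancing method):

```python
import itertools

# Toy model: a GSLB answering DNS queries for the app, alternating
# between two data-centre VIPs (addresses from documentation ranges).
DC_VIPS = ["203.0.113.10", "198.51.100.20"]  # DC1, DC2
gtm = itertools.cycle(DC_VIPS)

def resolve():
    """Each time the DNS TTL expires, the browser re-resolves and may
    receive the other data centre's VIP."""
    return next(gtm)

first = resolve()   # session state is created in this DC's process memory
second = resolve()  # after TTL expiry: different VIP, so that state is gone
print(first != second)  # True: in-memory stickiness breaks on re-resolution
```

This is why in-memory stickiness across DCs generally needs either session replication between the DCs, an external session store, or GTM persistence features that keep a given client resolving to the same DC.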

Kubernetes: how best to autoscale nodes containing websocket connections?

Is there support for autoscaling nodes where the pods only hold websocket connections used for push notifications back to the client? I suspect we would hit connection constraints before either CPU or memory limits are hit. Please correct me if others have had different experiences here.
The main issue I see is the persistent nature of the connections - pods that hold active websockets must remain intact when scaling down, since their tenancy is not relocatable.
So my questions here are these:
Is this support available? Would we want to make these StatefulSets? I am not even sure what model works best here.
Would we want to use Kubernetes Services to route incoming websocket connections to the worker nodes? If so, how would we get kube-proxy to ignore those worker nodes whose connection limits have been reached, so they do not get new connection requests?
How do we autoscale based on a configurable limit on the number of connections maintained by a pod? How do we scale down without destroying any nodes that still have active connections?
Thanks in advance for all tips/pointers, especially any advice on how to best ask these questions.
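On the kube-proxy question: there is no websocket-specific support, but a common pattern is for each pod to fail its readiness probe once it reaches its connection limit; the endpoints controller then removes the pod from the Service, so no new connections are routed to it while established websockets stay open. A stdlib-only sketch of such a probe endpoint (the `MAX_CONNECTIONS` value, and the assumption that the websocket handler updates `active_connections`, are invented for illustration):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

MAX_CONNECTIONS = 1000       # assumed configurable per-pod limit
active_connections = 0       # assumed to be updated by the websocket handler
lock = threading.Lock()

class ReadinessHandler(BaseHTTPRequestHandler):
    """Returns 200 while below the connection limit, 503 once reached."""
    def do_GET(self):
        with lock:
            ready = active_connections < MAX_CONNECTIONS
        self.send_response(200 if ready else 503)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep kubelet probe traffic out of the application logs

# The pod spec would then point an httpGet readinessProbe at this port:
# server = HTTPServer(("0.0.0.0", 8080), ReadinessHandler)
# threading.Thread(target=server.serve_forever, daemon=True).start()
```

Scaling down without killing busy pods is the harder half: the usual levers are a preStop hook plus a generous terminationGracePeriodSeconds to drain connections, and (where supported) pod deletion cost annotations so the autoscaler prefers evicting idle pods.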

How to provide multiple services through a cloud gateway?

Assume I'm working on a multiplayer online game. Each group of players may start an instance of the game to play. Take League Of Legends as an example.
At any moment of time, there are many game matches being served at the same time. My question is about the architecture of this case. Here are my suggestions:
Assume we have a cloud with a gateway. Any game instance requires a game server behind this gateway to serve the game. For different clients outside the cloud to access different game servers in the cloud, the gateway may differentiate between connections according to ports. It is like having one machine with many processes, each listening on a different port.
Is this the best we can get?
Is there another way for the gateway to differentiate connections and forward them to different game instances?
Notice that these are socket connections NOT HTTP requests to an API gateway.
EDIT 1: This question is not about Load Balancing
The keyword is ports. Will each match be served on a different port? Or is there another way to serve multiple services on the same host (host = IP)?
Elaboration: I'm using a client-server model for each match instance, so multiple clients may connect to the same match server to participate in the same match. Each match needs to be served by a match server.
The limitation I have in mind is: for one host (= IP) to serve multiple services, it needs to provide them on different ports. Match 1 runs on port 1234, so clients participating in match 1 will connect to and communicate with the match server on port 1234.
EDIT 2: Scalability is the target
My match server does not calculate and maintain the world of many matches; it maintains the world of one match. This is why each match needs another instance of the match server. It is not scalable to have all clients, communicating about different matches, connect to one process and be processed by one process.
My idea is to serve the world of each match from a different process. This would require each process to listen on a different port.
Example: any client will start a TCP connection with a server listening on port A. Is there a way to serve multiple MatchServers on the same port A (so that more simultaneous MatchServers won't result in more ports)?
Is there a better scalable way to serve the different worlds of multiple matches?
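On the single-port question: one common pattern (not the only one) is to keep a single listener and demultiplex inside the process, routing each connection to its match's state by a match ID sent at connect time, so N simultaneous matches never need N ports. The line-based protocol below is invented purely for illustration:

```python
import asyncio

matches = {}  # match_id -> set of connected client writers

async def handle(reader, writer):
    # First line of each connection identifies the match to join.
    match_id = (await reader.readline()).decode().strip()
    peers = matches.setdefault(match_id, set())
    peers.add(writer)
    try:
        # Relay every subsequent line to the other clients in the same match.
        while data := await reader.readline():
            for peer in list(peers):
                if peer is not writer:
                    peer.write(data)
                    await peer.drain()
    finally:
        peers.discard(writer)
        writer.close()

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 4000)
    async with server:
        await server.serve_forever()

# asyncio.run(main())  # one process, one port, many matches
```

In practice each match's world would still often run in its own process for CPU isolation; the same idea then applies at the gateway, which holds the client connection on one port and forwards traffic internally to the right match process.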
Short answer: you probably shouldn't use a proxy gateway to handle user connections unless you are absolutely sure there's no other way - you are severely limiting your ability to scale.
Long answer:
What you've described is just a load balancing problem. You can find plenty of solutions for your particular constraints via Google.
For League Of Legends it can be quite simple: using some health check, find the server with the lowest load and stick the current game to that server (kinda like sticky sessions) - until the game is finished, any computations for that particular game are made there. You could use any kind of caching mechanism on the gateway side to store the game-to-server relation for subsequent requests.
Another, a bit more complicated, example could be data storage for the statistics of a particular game - this is usually solved via sharding, which is a usual consequence of distributed computing. It could be solved this way: use some kind of hashing function (for example, modulo) with the game ID as the parameter to calculate the server number. For example, 18283 mod 15 = 13 for game ID 18283 and 15 available shards - so the 13th server should store/serve this data.
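The modulo-sharding idea reduces to a one-liner; the IDs and shard count below mirror the example in the text:

```python
def shard_for(game_id: int, num_shards: int) -> int:
    """Map a game ID to one of num_shards servers via plain modulo."""
    return game_id % num_shards

print(shard_for(18283, 15))  # 13, matching the example above
```

Plain modulo makes rebalancing painful, since changing the shard count remaps almost every ID; consistent hashing is the usual way to soften that.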
The main problem here would be "rebalancing" - adding/removing a shard from the cluster, for example.
Those are just two examples; you can google more of them using appropriate keywords. Just keep in mind that all of this is a subset of the problems of distributed computing.

Model multiple instances of hardware with UML

Imagine that we have a simple structure of a client machine and a server which are connected through the Internet. It is well known how to show such a structure with a deployment diagram.
Now I want to show that it is possible to have an unlimited number of such clients and servers, where each client can connect to each server. Moreover, I want to show that each client machine has the same software client on it and each server has the same database schema, but with one of several different implementations (MySQL, Oracle, ...).
What is the best and most detailed way to model this with UML?
You can draw a deployment diagram as depicted in the picture below.
The communication path between the nodes defines that many clients and many servers communicate with each other (top part of the diagram).
The bottom part shows instances of the nodes from the top part, to define that there are concrete instances of the database server on your server machines. The Internet is not a deployment node from this point of view; it is the form of communication realization between nodes.
If you need to define the code deployed on the nodes, use artifacts.
Note that the database server is a node as well, but of the execution environment type!
Here is my diagram; I hope it will help you.
Use instances of Components and Nodes:
The fact that each client can connect to any server could be shown as a simple note (pragmatic approach) or in some other diagram, for example a collaboration diagram (formal approach). If there is a dispatching algorithm in between, I suggest the latter; if not, the note will be enough, as this is somewhat expected.
I suggest using a plain deployment diagram to show the dependencies.