Draw communication between nodes in a deployment diagram

I'm drawing a deployment diagram and I have one node that communicates with a second one through TCP/IP; however, the second node also listens for UDP messages from the first one. How can I model that on my diagram? Should I draw two lines, each with a different stereotype?
Thanks.

Deployment diagrams are for depicting the structure of the artifacts to be deployed. How does the networking detail relate to that? If you want to depict the communication within the system you are deploying, then try one of the interaction diagrams.

Related

Kubernetes: single Pod with many containers, or many Pods with a single container

I have a rather theoretical question which I can't answer with the resources I found online: what's the rule for deciding how to compose containers into a Pod? Let me explain with an example.
I have these microservices:
Authentication
Authorization
Serving content
(plus) OpenResty to forward the calls from one to the other and orchestrate the flow. (Is there a way to do this natively in K8s? It seems to have services based on nginx+lua, but I'm not sure how that works.)
For the sake of the example I leave out databases and the like; I assume they are external and not managed by Kubernetes.
Now, what's the correct way here: the LEFT or the RIGHT side of the image?
LEFT: this seems easier to get working, since everything runs on "localhost". The downside is that it loses some of the benefit of microservices: for example, if auth becomes slow and needs more instances, I have to duplicate the whole pod and not just that service.
RIGHT: this seems a bit more complex, since it needs Services to expose each Pod to the other Pods. On the other hand, I could replicate auth as needed without duplicating the other containers, although I'll end up with a lot of Pods since each Pod is basically a single container.
It is generally recommended to keep different services in different Pods, or better, in separate Deployments that can scale independently. The reasons are the benefits usually attributed to a microservices architecture:
looser coupling, allowing the different services to be developed independently in their own languages/technologies,
to be deployed and updated independently, and
to scale independently.
The exception is what is considered a "helper application" that assists a "primary application". Examples given in the k8s docs are data pullers, data pushers, and proxies. In those cases a shared file system or exchange via the loopback network interface can help with performance-critical use cases. For example, a data puller can be a sidecar container for an nginx container, pulling the website to serve from a Git repository.
Right image, each in its own Pod. Multiple containers in a Pod should really only be used when they are tightly coupled or needed to support the main container, such as a data loader.
With separate Pods, each service can be updated and deployed independently. It also allows for more efficient scaling: in the future you may need 2 or 3 content Pods but still only one authorization Pod. If they are all together in the same Pod you have no choice but to scale them all at once.
The right image is the better option: easier management, upgrades, and scaling.
You should choose the right-hand structure: the left-hand deployment model is tightly coupled, which makes it hard to scale an individual module according to the actual needs of the business.

Deployment Topology and Data center

I am trying to learn deployment topology and data centre topology for complex applications. I know the answer to this question may vary depending on the scenario, but I am asking about the most common, general case.
From browsing the internet I learned that complex, large-scale e-commerce J2EE applications are deployed on multiple application servers (e.g. a WebLogic cluster), and these application servers are connected to multiple data centres (where customer/application-specific data is stored). I would like to know how these servers talk to multiple data centres (in terms of JDBC connections and data sources).
Also, if there are multiple data centres, there is the possibility of inconsistent data.
My knowledge about data centres is close to zero, and I would like to know more about how communication between the application servers and the data centres takes place. In short, I want to know a typical production deployment topology, including the data centres.
Thanks in advance.
This is a very complex and broad question that is open to interpretation. You might want to start with something like this to get enough information to ask more specific questions that are a better fit for Stack Overflow: http://www.javaworld.com/article/2077094/soa/construct-java-applications-through-distributed-object-technology.html

Model multiple instances of hardware with UML

Imagine that we have a simple structure of a client machine and a server connected through the Internet. It is well known how to show such a structure with a deployment diagram.
Now I want to show that it is possible to have an unlimited number of such clients and servers, where each client can connect to any server. Moreover, I want to show that each client machine runs the same client software and each server has the same database schema, but in one of several implementations (MySQL, Oracle, ...).
What is the best and most detailed way to model this with UML?
You can draw a deployment diagram as depicted in the picture below.
The communication path between the nodes expresses that many clients and many servers communicate with each other (top part of the diagram).
Instances of the nodes from the top part of the diagram show that there are concrete instances of the database server on your server machines. The Internet is not a deployment node from this point of view; it is a form of communication realization between nodes.
If you need to show the code deployed on the nodes, use artifacts.
Note that the database server is a node as well, but of the execution environment type!
Here is my diagram; I hope it helps.
Use instances of Components and Nodes:
The fact that each client can connect to any server could be shown as a simple note (pragmatic approach) or in some other diagram, a collaboration for example (formal approach). If there is a dispatching algorithm in between, I suggest the latter; if not, the note will be enough, as this is more or less expected.
I suggest using a plain deployment diagram to show the dependencies.

Correct use of signals in an activity diagram

I have a question regarding activity diagrams. I have read a lot of material on the topic but, to be honest, I am still not sure about sending and receiving signals.
I made a simple activity diagram for password authentication.
Basically what I want to do is to send a message to the client after the server performs a search in the database. Is this use of signals correct?
Any criticism of the structure of the diagram is welcome.
Here is the mentioned diagram:
Signals are used to indicate communication with some entity external to the system under consideration (e.g., an e-mail to a customer). So it depends on where the borders of your system are: are the client and the server separate systems from the viewpoint of this activity?
Signals are also used, somewhat loosely, to indicate asynchronous communication. In this case I would not use signals: in my opinion these are actions. You could add the transferred object (the data, in this case) to the diagram if you think it is useful, but I would avoid a signal.

Scala + Akka: How to develop a Multi-Machine Highly Available Cluster

We're developing a server system in Scala + Akka for a game that will serve clients on Android, iPhone, and Second Life. There are parts of this server that need to be highly available, running on multiple machines. If one of those servers dies (from, say, a hardware failure), the system needs to keep running. I think I want the clients to have a list of machines they will try to connect to, similar to how Cassandra works.
The multi-node examples I've seen so far with Akka seem to me to be centered around the idea of scalability, rather than high availability (at least with regard to hardware). The multi-node examples seem to always have a single point of failure. For example there are load balancers, but if I need to reboot one of the machines that have load balancers, my system will suffer some downtime.
Are there any examples that show this type of hardware fault tolerance for Akka? Or, do you have any thoughts on good ways to make this happen?
So far, the best answer I've been able to come up with is to study the Erlang OTP docs, meditate on them, and try to figure out how to put my system together using the building blocks available in Akka.
But if there are resources, examples, or ideas on how to share state between multiple machines in a way that if one of them goes down things keep running, I'd sure appreciate them, because I'm concerned I might be re-inventing the wheel here. Maybe there is a multi-node STM container that automatically keeps the shared state in sync across multiple nodes? Or maybe this is so easy to make that the documentation doesn't bother showing examples of how to do it, or perhaps I haven't been thorough enough in my research and experimentation yet. Any thoughts or ideas will be appreciated.
HA and load management are very important aspects of scalability and are available as part of the AkkaSource commercial offering.
If you're already listing multiple potential hosts in your clients, then the clients themselves can effectively do the load balancing.
You could offer a host suggestion service that recommends to the client which machine it should connect to (based on current load, or whatever); the client can then pin to that host until the connection fails.
If the host suggestion service is not there, the client can simply pick a random host from its internal list, trying them until it connects.
Ideally, on first start-up, the client connects to the host suggestion service and not only gets directed to an appropriate host but also receives a list of other potential hosts. This list can be routinely updated every time the client connects.
If the host suggestion service is down on the client's first attempt (unlikely, but...), you can pre-deploy a list of hosts with the client install so it can start randomly selecting hosts from the very beginning if it has to.
Make sure your list of hosts contains actual host names and not IPs; that gives you more flexibility long term (i.e. you'll "always have" host1.example.com, host2.example.com, etc., even if you move infrastructure and change IPs).
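Not part of the original answer, but here is a minimal sketch of that client-side fallback logic, assuming a plain TCP endpoint; the host names and the port are made-up placeholders, and a real client would also refresh the list from the host suggestion service whenever it manages to connect:

    import java.net.{InetSocketAddress, Socket}
    import scala.util.{Failure, Random, Success, Try}

    object HostSelector {

      // Pre-deployed fallback list. The host names and port below are placeholders;
      // using names instead of IPs keeps the list valid when infrastructure moves.
      val knownHosts: Seq[String] =
        Seq("host1.example.com", "host2.example.com", "host3.example.com")

      // Try the hosts in random order and return the first socket that connects.
      def connectToAny(hosts: Seq[String], port: Int, timeoutMs: Int = 2000): Option[Socket] = {
        val attempts = Random.shuffle(hosts).iterator.map { host =>
          val socket = new Socket()
          Try(socket.connect(new InetSocketAddress(host, port), timeoutMs)) match {
            case Success(_) => Some(socket)
            case Failure(_) => socket.close(); None
          }
        }
        // The iterator is lazy, so we stop dialing as soon as one host answers.
        attempts.collectFirst { case Some(socket) => socket }
      }

      def main(args: Array[String]): Unit =
        connectToAny(knownHosts, port = 9000) match {
          case Some(s) => println(s"Connected to ${s.getInetAddress.getHostName}")
          case None    => println("No host reachable; retry or ask the suggestion service again")
        }
    }

On reconnect the client could merge any fresher list received from the suggestion service into knownHosts before shuffling, so the pre-deployed list only matters on the very first start.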
You could take a look at how RedDwarf and its fork DimDwarf are built. They are both horizontally scalable, crash-only game app servers, and DimDwarf is partly written in Scala (the new messaging functionality). Their approach and architecture should match your needs quite well :)
2 cents..
"how to share state between multiple machines in a way that if one of them goes down things keep running"
Don't share state between machines; instead, partition state across machines. I don't know your domain, so I don't know if this will work, but essentially, if you assign certain aggregates (in DDD terms) to certain nodes, you can keep those aggregates in memory (actor, agent, etc.) while they are being used. To do this you will need something like ZooKeeper to coordinate which nodes handle which aggregates. In the event of a failure you can bring the aggregate up on a different node.
Furthermore, if you use an event-sourcing model to build your aggregates, it becomes almost trivial to have real-time copies (slaves) of your aggregates on other nodes: those nodes simply listen for events and maintain their own copies.
By using Akka, we get remoting between nodes almost for free. This means that whichever node handles a request that needs to interact with an aggregate/entity on another node can do so with remote actors.
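Not from the original answer, but a rough sketch of the partition-by-aggregate idea with classic Akka actors. The actor system name, node addresses, actor path, and Command type are invented for illustration; the remoting configuration is omitted, and the fixed nodes list stands in for what a ZooKeeper-based coordinator would supply:

    import akka.actor.{Actor, ActorSystem, Props}

    // A command addressed to a single aggregate (in DDD terms), e.g. a player or a game session.
    final case class Command(aggregateId: String, payload: String)

    // Actor holding the in-memory state of one aggregate shard on its home node.
    // In an event-sourced design this state would be rebuilt from the event log after a failover.
    class AggregateActor extends Actor {
      private var state: List[String] = Nil
      def receive: Receive = {
        case Command(id, payload) =>
          state = payload :: state
          sender() ! s"aggregate $id handled $payload"
      }
    }

    object PartitionedCluster {

      // Hypothetical node addresses; in a real deployment this mapping would be
      // published and updated by a coordinator such as ZooKeeper.
      val nodes = Vector(
        "akka.tcp://game@host1.example.com:2552",
        "akka.tcp://game@host2.example.com:2552"
      )

      // Deterministically assign each aggregate to one node; on failure the
      // coordinator would reassign its aggregates to the surviving nodes.
      def homeNode(aggregateId: String): String =
        nodes(Math.floorMod(aggregateId.hashCode, nodes.size))

      def main(args: Array[String]): Unit = {
        val system = ActorSystem("game")
        // Every node runs its local shard of aggregates under a well-known path...
        system.actorOf(Props(new AggregateActor), "aggregates")
        // ...and any node can route a command to the aggregate's home node by path.
        val cmd = Command("player-42", "buy-sword")
        system.actorSelection(s"${homeNode(cmd.aggregateId)}/user/aggregates") ! cmd
      }
    }

If the home node dies, the coordinator can point homeNode at a replica that has been following the event stream, so the aggregate comes back up with its state intact.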
What I have outlined here is very general but gives an approach to distributed fault-tolerance with Akka and ZooKeeper. It may or may not help. I hope it does.
All the best,
Andy