Discovery in SCOM

We have SCOM 2007 R2. I need to do a three-level discovery. The first level is the seed discovery, for which I use the local application class and SCOM's native registry module. For levels 2 and 3 I need to use the application component class and PowerShell, because it is a custom app. But as soon as I created the relationship between levels 2 and 3, the discovery for level 3 broke and nothing came back. Once I removed the relationship between the two, my discovery succeeded, but the path information was missing (expected, since I have none defined).
How can I define a multi-level discovery (more than two levels) using the local application and application component classes?

Yean,
Of course I can tell you more as soon as you share the source code of your MP, but right now it looks like you are failing to fill the discovery data for the relationship between levels 2 and 3. I assume your L3 discovery is PowerShell-based, so you must create DiscoveryData both for your L3 class and for the relationship class. And you know what sometimes happens? Despite the expectation that your L2 entity is already discovered by the time the L3 discovery runs, that may not be the case. So you end up trying to create a relationship from the new L3 object to an L2 object that has not been fully created in the SCOM database yet. The relationship DiscoveryData then becomes inconsistent (because SCOM can't find the ID of the L2 object in the DB), and that causes the whole L3 object discovery to fail.
That is my assumption, based on my experience with SCOM 2007... I'm not sure whether it's fixed in 2007 R2 SP1 (which is the most reliable 2007 version).
How to fix it? You could get rid of the multi-layer discovery and discover both the L2 and L3 objects in one data source - that also gives you better performance!
I hope it helps, good luck!
Roman.

Related

Microservice adapter: one for many countries, or one per country? Architectural/deployment decision

Say I have System1, which connects to System2 through an adapter microservice between them.
System1 -> rest-calls --> Adapter (converts request-response + some extra logic, like validation) -> System2
System1 is more like a monolith and exists for many countries (but that may change).
The question is: from the perspective of microservice architecture and deployment, should the Adapter be one per country (say, Adapter-UK, Adapter-AU, etc.), or should it be a single Adapter that can handle many countries at the same time?
I mean:
To have a single system/adapter service:
Advantage: one code base in one place; the adapter logic between countries is 90% the same. Easy to introduce new changes.
Disadvantage: once we deploy the system and there is a bug, it could affect many countries at the same time. Not safe.
To have separate systems:
Disadvantage: once a generic change has been introduced to one system, it has to be "copy-pasted" to all the other countries/services. Repetitive, not-smart work from a developer's point of view.
Advantage:
Safer to change/deploy.
Q: What is the preferable way from the point of view of microservice architecture?
I would suggest the following:
the adapter covers all countries, in order to maintain a single codebase and improve code reusability
unit and/or integration tests to cope with bugs
spawn multiple identical instances of the adapter with a load balancer in front
Since Prabhat Mishra asked in the comments:
After two years... (it took some time to understand what I had asked.)
Back then, it was quite critical for me to have a resilient system, i.e. if I changed code in one adapter I did not want all my countries to go down (it is an SAP enterprise system, millions of clients). I wanted only one country to go down (still millions of clients, but fewer millions :)).
So, for this case, I would create many adapters, one per country, BUT I would use some code-generated common solution to create them, like a common jar, so that I would not repeat my infrastructure or communication layers. Like a scaffolding thing:
country-adapter new "country-1"
add some country-specific code (without changing any generated code: less code to repeat, less to support)
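To illustrate the "common core, thin country layer" idea, here is a minimal sketch (in Python rather than a common jar, and with all class and method names invented): the shared infrastructure lives in a base class that the scaffold would generate, and each country adapter only overrides the conversion logic.

```python
class BaseAdapter:
    """Shared infrastructure/communication layer (would come from the common jar)."""

    def handle(self, request):
        # generic pipeline: validate, convert, forward to System2
        self.validate(request)
        converted = self.convert(request)
        return self.send(converted)

    def validate(self, request):
        if "payload" not in request:
            raise ValueError("missing payload")

    def convert(self, request):
        raise NotImplementedError  # the only part each country overrides

    def send(self, converted):
        # shared transport code to System2 would live here
        return {"status": "sent", "body": converted}


class UkAdapter(BaseAdapter):
    """Country-specific service: only the conversion logic differs."""

    def convert(self, request):
        return {"countryCode": "GB", "data": request["payload"]}
```

Each country adapter is deployed as its own service, so a bug in one country's `convert` cannot take the other countries down.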
Otherwise, if you feel safe (e.g., code-reviewing your changes and making sure you do not touch other countries' code), then one adapter is OK.
Another solution is to start with one adapter and split it into more if that becomes critical (you feel unsafe about the potential damage/cost if it fails).
In general, it seems it all boils down to the same problem: WHEN to split a "monolith" into pieces. The answer is always: when being as big as it is starts causing problems. But if you know your system well, you may know in advance that WHEN is now (or not).

Too many REST API calls in microservices

Say there are two services,
service A and service B.
Service A needs data from service B to process a request. To avoid tight coupling, we make a REST API call to service B instead of directly querying service B's database.
Doesn't making an HTTP call to service B for every request increase the response time?
I have seen another solution: cache the data at service A. I have the following questions:
What if the data changes rapidly?
What if the data is critically important, such as user account balance details, and there has to be strong consistency?
What about data duplication and data consistency?
By introducing the REST call, aren't we introducing a point of failure? What if service B is down?
Also, as requests to service A for that particular API increase, service B's load increases as well.
Please help me with this.
These are many questions at once; let me give a few comments in no particular order:
If service A needs data from service B, then B is already a single point of failure, so the reliability question is just moved from B's database to B's API endpoint. It's very unlikely that this makes a big difference.
A similar argument goes for latency: a good API layer that includes caching might even decrease average latency.
Once more, the same with load: the data dependency of A on B already implies load on B's database. And again, a good API layer with caching might even help with the load.
So while the decoupling (from tight to loose) brings a lot of advantages, load and reliability are not necessarily on the disadvantages list.
A few words about caching:
Read caching can help a lot with load: typically, a request from A to B should indicate the version of the requested entity that is available in A's cache (possibly none, of course). Endpoint B can then simply check whether the entity has changed, and if not, stop all processing and just return an "unchanged" message. B can keep the information about which entities have changed in the recent past in a much smaller data store than the entities themselves, most likely in RAM or even in-process, which speeds things up quite noticeably.
Such a mechanism is much easier to introduce in an API endpoint for B than in the database itself, so querying the API can scale much better than querying the DB.
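A minimal sketch of that version-check mechanism (all names and data are made up): B keeps a small in-memory map of current versions and only loads the full entity when the caller's cached version is stale.

```python
ENTITIES = {"user-1": {"version": 3, "balance": 100}}  # B's full data store
CURRENT_VERSIONS = {"user-1": 3}  # small, in-RAM map of current versions


def get_entity(entity_id, cached_version=None):
    """What B's endpoint does for a conditional request from A."""
    current = CURRENT_VERSIONS.get(entity_id)
    if current is None:
        return ("not_found", None)
    if cached_version == current:
        # cheap path: no entity load, tiny "unchanged" response
        return ("unchanged", None)
    # expensive path only when the caller's copy is stale (or absent)
    return ("ok", ENTITIES[entity_id])
```

This is essentially what HTTP conditional requests (ETag / If-None-Match returning 304 Not Modified) give you for free at the API layer.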
I guess the first question you should ask yourself is whether A and B really are two different services - what was the reason for partitioning them in the first place? After all, they seem to be coupled both temporally and by data.
One of the reasons to separate a service into two executables might be that they can change independently or serve different access paths, in which case you may want to consider them different aspects of the same service. This may seem like a distinction without a difference, but it matters when you look at the whole picture, decide which parts of the system may know about the internal structures of others, and defend the system against deteriorating into a big ball of mud where every "service" can access any other "service's" data and they are all dependent on each other.
If these two components are indeed different services, you may also consider moving to a model where service B actively publishes data changes. That way service A can cache the relevant parts of B's data; B is still the source of truth, and A is decoupled from B's availability (depending on the expiration of the data).

Looking for the best notation for a UML class diagram

I have the following problem to resolve. Given the following UML diagram:
I need to complete the diagram doing the following steps:
a) When an employee has a skill, the relationship between the employee and the skill shows years of experience.
b) A worker may have another employee as a manager, and a worker who is a manager must manage five or more workers. Given a manager, you can determine which workers he manages, but given a worker, you cannot determine who his manager is.
c) An activity can have at most one preceding activity and any number of subsequent activities. Using these roles, we can show how the activities are ordered. Given an activity, you can only determine its subsequent activities (if it has any), but not its preceding activity (if any).
d) A worker is not simply associated with a set of skills: a worker has skills. In particular, every worker should have three or more skills, and any number of workers can have the same skill.
e) A project is not simply associated with a set of activities: a project contains activities. Specifically, a project must have one or more activities, and an activity must belong to exactly one project.
f) Projects and activities are a specific type of job.
My solution is shown in the following picture, but because I am new at this I would like to check whether it is fine.
Thank you in advance!
Looks good in most parts. Honestly, I don't understand the later parts of c).
Your -boss relation is wrong. Your workers should not have a privately known boss. Instead, it is only the other way around: the boss has - let's call them - -slaves. If you put in a private -boss, it actually means that the worker can navigate to its private boss, which is explicitly not wanted. Only the boss shall know the ones he is responsible for; so the navigable end is on the boss's side. As a thought, since only the boss should have those five or more employees, it could be an idea to create a separate Boss class, like this:
Note that this might also have drawbacks, since Boss is now actually a different class than Employee, but it seems to fit the requirements.
Point f) calls for a generalization, so you would need a generalization towards SpecificJob. This would be an arrow with an open (hollow) triangle, not the one you used:
This actually reads: Project and Activity are specific kinds of Job, as they both inherit from the latter.

CQRS, Event-Sourcing and Web-Applications

As I read through some CQRS resources, there is a recurring point I don't get. For instance, let's say a client emits a command. This command is integrated by the domain, so it can refresh its domain model (DM). On the other hand, the command is persisted in an event store. That is the most common scenario.
1) When we say the DM is refreshed, I suppose data is persisted in the underlying database (if any). Am I right? Otherwise we would be dealing with a memory-transient model, which, I suppose, would not be a good thing? (State is not supposed to remain in memory on the server side outside a client request.)
2) If data is persisted, I suppose the read model that relies on it is automatically updated, as each client that requests it generates a new "state/context" in the application (in the case of a web application or a RESTful architecture)?
3) If the command is persisted, does that mean we are dealing with event sourcing (by construction, when we use CQRS)? Does event sourcing invalidate the database update process? (If state is reconstructed from the event store, maintaining the database seems useless.)
Does CQRS only apply to multi-database systems (where data is propagated to separate databases)? And if it deals with memory-transient models, does that fit well with web applications or RESTful services?
1) As already said, the only things that are really stored are the events.
The only thing commands do is perform consistency checks before raising events. In pseudo-code:
public void BorrowBook(BorrowableBook dto) {
    if (dto is valid)
        RaiseEvent(new BookBorrowedEvent(dto))
    else
        throw exception
}

public void Apply(BookBorrowedEvent evt) {
    this.aProperty = evt.aProperty;
    ...
}
The current state is retrieved by sequentially applying the events. Because of this, you have to pay great attention in the design phase, since there are common pitfalls to avoid (maybe you have already read it, but let me suggest this article by Martin Fowler).
So far so good, but this is just event sourcing. CQRS comes into play if you decide to use a different database to persist the state of an aggregate.
In my project we have a projection that every x minutes applies the new events (from the event store) to the aggregate and saves the result to a separate MongoDB instance (the presentation layer accesses this DB for reading). This model is clearly eventually consistent, but in this way you really separate the Command (write) side from the Query (read) side.
2) If you have decided to divide the write model from the read model, there are various options to keep them synchronized:
Every x seconds, apply the events since the last checkpoint (some solutions offer snapshots to avoid reapplying heavy event streams)
A projection that subscribes to events and updates the read model as soon as an event is raised
3) The only things stored are the events. In fact, we have an event store, not a command store :)
Is the database useless? It depends! How many events do you need to reapply to bring the aggregate to its current state?
Three? Then maybe you don't need a database for the read model.
The thing to grok is that the ONLY things stored are the events*. The domain model is rebuilt from the events.
So yes, the domain model is memory-transient, as you say, in that no representation of the domain model is stored* - only the events that happened to the domain to put the model into its current state.
When an element of the domain model is loaded, what happens is that a new instance of the element is created, and then the events that affect that instance are replayed one after the other, in the right order, to put the element into the correct state.
You could keep instances of your domain objects around, subscribed to new events, so that they are kept up to date without being rebuilt from all the events every time. But usually it is quick enough to just load all the events from the database and apply them every time, in the same way that you might load the instance from the database on every call to your web service.
*Unless you have snapshots of your domain objects to reduce the number of events you need to load/process.
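A hedged sketch of that replay mechanism, with invented event and class names: create a fresh instance, then apply each stored event in order.

```python
class Book:
    """Aggregate whose state exists only as the result of its events."""

    def __init__(self):
        self.borrowed = False
        self.borrower = None

    def apply(self, event):
        # one handler per event type, mirroring the Apply method in the pseudo-code
        if event["type"] == "BookBorrowed":
            self.borrowed = True
            self.borrower = event["by"]
        elif event["type"] == "BookReturned":
            self.borrowed = False
            self.borrower = None


def load_book(events):
    """Fresh instance + replay in order = current state."""
    book = Book()
    for event in events:
        book.apply(event)
    return book
```

A snapshot optimization would simply start `load_book` from a stored recent state instead of a fresh `Book()`, replaying only the events recorded after the snapshot.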
Persistence of data is not strictly needed. It might be sufficient to have enough copies in enough different locations (GigaSpaces). So no, a database is not required. This was (at least a few years ago) used in production by the Dutch eBay equivalent.

GWT: Type of Container

I see that there are two ways of transferring objects from server to client:
Use the same domain object (Contact.java) as used in the service layer. (I do not use Hibernate.)
Use a HashMap to send the domain object's field values in the form of a Map, with the help of the BeanUtilsBean class; for multiple objects, use a List of such Maps. Similarly, use the Map to submit form values from client to server.
Is there any performance advantage of option 1 over option 2?
Is there a way to hide the class name/package name that is sent to the browser if we use option 1?
Thanks!
You have to understand that whatever option you choose, it will need to be converted to JavaScript (+ some wrappers, etc.) - this takes more time and space/bandwidth (note: I haven't done any benchmarks; this is just a [reasonable] conclusion I came to ;)) than, say, JSON. But if you use JSON, you have to recreate the object on the server side, so it's not a silver bullet either. In the end, it all depends on how much of an issue performance is for you - for more insight, see this question.
I'd go with option 1: just leave it to the GWT team to pack your domain objects and transfer them between client and server. In the future (GWT 2.1) we'll have some really nice things, including a more lightweight transfer protocol - see this year's presentation from Google I/O on architecting GWT apps; it's worth keeping in mind.
PS: It's always good to do benchmarks yourself in these kinds of situations - your configuration, the type of objects, etc. might yield different results than expected.