CIM/WBEM Packages which Provide SMI-S Compliant Client APIs

How can I determine whether a CIM/WBEM package (e.g. OpenPegasus, OpenWBEM, pyWBEM, SBLIM) provides SMI-S compliant client APIs for developing a management application?
These are all CIM compliant, but I could not find out whether SMI-S is supported.
And how can SMI-S client API support be added to a CIM-compliant CIM/WBEM package?

Your question is unfortunately of the form "given that all apples are fruit, how can I verify that the specific piece of fruit I am holding is an apple?"... it's not an easy question to answer... unless you have a lab full of equipment to test the genome of the fruit in front of you, or you purchased it from a reputable dealer and it came pre-certified as an apple.
CIM is the base standard.
WBEM is a specification, based on CIM, which lays out some additional specifics.
SMI-S is another spec, based on WBEM, laying out a number of additional specifics.
So from the start, OpenPegasus & OpenWBEM are not automatically SMI-S compatible... only through the creation of SMI-S compatible profiles & providers can they be.
When it comes to determining whether an SMI-S provider/API/etc. is actually compliant with the specs, that depends on what your requirements are and how much time & money you have to invest.
Like many protocols, it can sometimes be enough to simply see if it works well enough for your purposes and test with different configurations from different vendors along the way... one way to do that is to attend a SNIA plugfest: http://www.snia.org/forums/smi/tech_programs/lab_program
Given that SNIA owns the SMI-S standard, they also have a program for verifying compliance with it: http://www.snia.org/ctp/ (though, like many standards-based verification programs, it will cost some $$$).

anukalp,
Any client which claims to support CIM operations should be able to do profile discovery.
The clients I am aware of (the Pegasus client and the Java client from the SBLIM project) are all capable of this.
As a starting point, you should enumerate the RegisteredProfile instances in the interop namespace and then follow the CIM_ElementConformsToProfile association to reach the implementation namespace.
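For example, with pyWBEM (one of the packages mentioned in the question) the discovery looks roughly like the sketch below; the host, credentials, and interop namespace name are placeholder assumptions that vary by provider:

    import pywbem

    # Connection details are placeholders; many providers use
    # 'root/interop' or, on OpenPegasus, 'root/PG_InterOp' instead.
    conn = pywbem.WBEMConnection(
        'https://smi-s-provider.example.com:5989',
        ('user', 'password'),
        default_namespace='interop',
        no_verification=True)  # skips certificate checks; lab use only

    # 1. Enumerate the registered profiles in the interop namespace.
    for profile in conn.EnumerateInstances('CIM_RegisteredProfile'):
        print(profile['RegisteredName'], profile['RegisteredVersion'])

        # 2. Follow the CIM_ElementConformsToProfile association to the
        #    conforming managed elements in the implementation namespace.
        for element in conn.AssociatorNames(
                profile.path,
                AssocClass='CIM_ElementConformsToProfile',
                Role='ConformantStandard'):
            print('    implemented by:', element)

If the provider implements SMI-S you should see profile names such as 'Array' or 'Server' in the output; an empty result usually means the interop namespace name is wrong or the provider registers no profiles.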
Hope this helps.

Related

Can one prove that a website/service uses a specific open source codebase?

Suppose there is a deployed, public website/web service. They say it is an instance of an open source codebase. Can they prove it to me?
For example, take the useful www.s3auth.com, an HTTP auth gateway for S3 that lets you password protect static sites hosted on S3 (thanks to @yegor256). Yegor graciously open-sourced the service to assuage privacy concerns:
I made this software open source mostly to guarantee to my users that the server doesn't store their private data anywhere, but rather acts only as a pass-through service. As a result, the software is on GitHub.
Can he somehow prove to users that www.s3auth.com actually uses the exact codebase on GitHub?
Short answer
In the general case: no, not without a complete and external audit of the software, hardware, and network infrastructure during a specific period of time.
Explanation
Ensuring that a piece of software conforms to a specification is not at all a trivial task. Note that even the requirement "actually uses the exact codebase on GitHub" is not clear:
- Includes any part of the repository
- Includes a full tag
- Includes a full tag at a point in time
- Includes a full tag at a point in time and uses some of the functionality
- Includes a full tag at a point in time and uses a significant part of the functionality
- Includes a full tag at a point in time, uses a significant part of the functionality, and there is no additional function that substantially modifies the behavior
- Includes a full tag at a point in time, uses a significant part of the functionality, and there is no additional function at all
- etc.
An auditor should check:
- the code, to ensure that the requirement is accomplished
- the build process, to verify that the deliverable is the expected one and does not include or remove parts
- the hardware infrastructure, to ensure the software is deployed, unaltered, and used as is
- the network infrastructure, to verify that the deployed version is really the same one that users are getting
Any change to the code or infrastructure will invalidate the audit results.
The auditor should be external, to ensure independence, and should in turn be audited by a regulatory body that certifies that it is capable of performing the process.
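As a small illustration of just the build-verification step (the file names are hypothetical, and equal hashes only show that two artifacts match, not what is actually running behind the public endpoint), one could compare the checksum of a locally reproduced build against the artifact the operator claims to deploy:

    import hashlib

    def sha256_of(path):
        # Stream the file in chunks and return its SHA-256 hex digest.
        h = hashlib.sha256()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(8192), b''):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical artifacts: one built locally from the tagged source,
    # one obtained from the operator of the deployed service.
    local = sha256_of('build/service.war')
    deployed = sha256_of('downloaded/service.war')
    print('match' if local == deployed else 'MISMATCH')

Even a match covers only the first two items of the checklist above; it says nothing about the hardware or network infrastructure.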
I may be beating around the bush here: I want to illustrate that asserting that a piece of software meets a specification is difficult, expensive, and sometimes not useful. Of course, this process is not needed if both parties agree on something simpler.
These kinds of audits exist in the real world, especially in the world of security. For example, FIPS 140 and Common Criteria evaluations certify that Hardware Security Modules, or even software packages, meet certain security requirements. I also know of some trusted certifiers that attest that a site showed specific content at a point in time (usually applied to e-government).

Erlang/OTP architecture: RESTful protocol for SOAish services

Let us imagine we have an orders processing system for a pizza shop to design and build.
The requirements are:
R1. The system should be client- and use-case-agnostic, which means that the system can be accessed by a client which was not taken into account during the initial design. For example, if the pizza shop later decides that many of its customers use Samsung Bada smartphones, writing a client for Bada OS will not require rewriting the system's API or the system itself; or, for instance, if it turns out that using iPads instead of Android devices is somehow better for delivery drivers, then it would be easy to create an iPad client without affecting the system's API in any way;
R2. Reusability, which means that the system can be easily reconfigured without rewriting much code if the business process changes. For example, if the pizza shop later starts accepting payments online along with accepting cash on delivery (accepting a payment before taking an order vs. accepting a payment on delivery), then it would be easy to adapt the system to the new business process;
R3. High-availability and fault-tolerance, which means that the system should be online and should accept orders 24/7.
So, in order to meet R3 we could use Erlang/OTP and have the following architecture:
The problem here is that this kind of architecture has a lot of "hard-coded" functionality in it. If, for example, the pizza shop moves from accepting cash payments on delivery to accepting online payments before the order is placed, then it will take a lot of time and effort to rewrite the whole system and modify the system's API.
Moreover, if the pizza shop needs some enhancements to its CRM client, then again we would have to rewrite the API, the clients, and the system itself.
So, the following architecture is aimed to solve those problems and thus to help meeting R1, R2 and R3:
Each 'service' in the system is a Webmachine webserver with a RESTful API. Such an approach has the following benefits:
- all the goodness of Erlang/OTP, since each Webmachine is an Erlang application, which can be supervised and can be put into an Erlang release;
- a service-oriented architecture with all the benefits of SOA;
- easy adaptability to changes in the business process;
- ease of adding new clients and new functions to clients (e.g. to the CRM client), because a client can use the RESTful APIs of all the services in the system instead of one 'central' API (service composability in terms of SOA).
So, essentially, the system architecture proposed in the second picture is a Service Oriented Architecture where each service has a RESTful API instead of a WSDL contract and where each service is an Erlang/OTP application.
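To make the client-agnosticism of R1 concrete: any client that speaks HTTP could be added later without touching the system. A hypothetical order submission (the URL, resource name, and JSON fields are assumptions for illustration, not part of the actual design) might look like this:

    import json
    import urllib.request

    # Hypothetical payload and endpoint of the 'Accept orders' service.
    order = {'pizza': 'margherita', 'quantity': 2, 'payment': 'on_delivery'}
    req = urllib.request.Request(
        'http://orders.example.com/api/orders',
        data=json.dumps(order).encode('utf-8'),
        headers={'Content-Type': 'application/json'},
        method='POST')

    with urllib.request.urlopen(req) as resp:
        created = json.load(resp)
        print('order accepted, id =', created.get('id'))

A Bada, iPad, or CRM client would all issue the same kind of request, which is exactly what R1 asks for.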
And here are my questions:
1. Picture 2: Am I trying to reinvent the wheel here? Should I just stick with the pure Erlang/OTP architecture instead? ("Pure Erlang" means Erlang applications packed into a release, talking to each other via gen_server:call and gen_server:cast function calls.)
2. Can you name any disadvantages of the suggested approach? (Picture 2)
3. Do you think it would be easier to maintain and grow (R1 and R2) a system like this (Picture 2) than a purely Erlang/OTP one?
4. Couldn't the security of such a system (Picture 2) be an issue, since there are many entry points open to the web (the RESTful APIs of all the services) instead of just one entry point (Picture 1)?
5. Is it OK to have several 'orchestrating modules' in such a system, or does some better practice exist? (The "Accept orders", "CRM" and "Dispatch orders" services in Picture 2.)
6. Does pure Erlang/OTP (Picture 1) have any advantages over this approach (Picture 2) in terms of message passing and the limitations of the protocol? (Partly discussed in my previous similar question, gen_server:call vs HTTP RESTful calls.)
The thing to keep in mind regarding SOA is that the architecture is not about the technology (REST, WS*). So you can get a good SOA in place with endpoints of several types if/when needed (using what I call an Edge component: separating business logic from other concerns like communications and protocols).
Also, it is important to note that a service boundary is a trust boundary, so when you cross it you may need to authenticate and authorize, cross a network, etc. Additionally, separation into layers (like data and logic) shouldn't drive the way you partition your services.
So from what I am reading in your questions, I'd probably partition the services into more coarse-grained services (see below). Communication within the boundary of a service can be whatever you like, whereas communication across services uses a public API (REST or Erlang-native is up to you, but the point is that it is managed, versioned, secured, etc.). Again, a service may have endpoints in multiple technologies to facilitate different users. (Sometimes you'd use an ESB to mediate between services and protocols, but the need for that depends on the size and complexity of your system.)
Regarding your specific questions:
1. As noted above, I think there's a place to expose more public APIs than just a single entry point, but I am not sure that exposing each and every capability as a service with a public API is the right way to go.
2 & 3. The disadvantages of exposing every little thing are management overhead and decreased performance (e.g. you'd have to authenticate on these calls). You get "nano-services": services whose overhead is more than their utility.
4. One thing to add about the security aspect is that the fact that a service has a REST API does not have to translate to having that API available to the general public. Deployment-wise, you can keep it behind a firewall and restrict access to it to known addresses, etc.
5. It is OK to have several orchestrating modules, though if you get beyond a few you should probably consider an orchestration module (an ESB or an orchestration engine); alternatively, you can use event-based integration and get choreography-based integration, which is more flexible (but somewhat less manageable).
6. The first option has the advantage of easy development and probably better performance (if that's an issue). The hard-coded integration layer can prove harder to maintain over time. The Erlang services, if you wrote them right, should be able to evolve independently if you keep API integration and message passing between them (luckily, Erlang makes it relatively easy to get this right through its inherent features, e.g. immutability).
I'd introduce a third way that is rather more cost-effective and change-reactive. The architecture definitely should be service-oriented, because you have explicit services. But there's no requirement to expose each service as a RESTful or WSDL-defined one. I'm not an Erlang developer, but I believe there's a way to invoke local and remote processes by messaging, and thus avoid unnecessary serialisation/deserialisation activity for internal calls. But one day you will be faced with a new integration issue. For example, you will have to integrate an accounting or logistics system. Then, if you designed the architecture well with regard to SOA principles, most of the effort will be related to exposing the existing service through a RESTful front-end wrapper, with no effort spent refactoring existing connections to other services. But the issue is to keep the domains of responsibility clean: I mean that each service should be responsible for the activity it was originally designed for.
The security issue you mentioned is a known one. You should have authentication/authorization in all exposed services, using tokens for example.
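A minimal sketch of such token checking at a service boundary might look like the following; the HMAC scheme, the shared secret, and the token layout are illustrative assumptions, not a prescription:

    import hashlib
    import hmac

    SHARED_SECRET = b'replace-with-a-real-secret'

    def make_token(client_id):
        # Issue a token binding a client id to an HMAC-SHA256 signature.
        sig = hmac.new(SHARED_SECRET, client_id.encode(), hashlib.sha256).hexdigest()
        return client_id + ':' + sig

    def verify_token(token):
        # Recompute the signature and compare in constant time.
        client_id, _, sig = token.partition(':')
        expected = hmac.new(SHARED_SECRET, client_id.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expected)

    # Each exposed service would verify the token from, say, an
    # Authorization header before doing any work.
    token = make_token('crm-client')
    assert verify_token(token)

In a real deployment you would more likely use an established scheme (e.g. OAuth bearer tokens or signed JWTs) rather than rolling your own.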

OSGi Specs and RFCs

I can only find and download the OSGi specs (e.g. the core spec and the enterprise spec) from its website. What about the so-called OSGi RFCs? Are they publicly accessible, and how do they relate to the specs? Thanks!
From this osgi.org page:
Each Expert Group works on items defined in documents known as Requests for Proposals (RFPs), which set the requirements for the technical development.
RFPs may be created by anyone but are always reviewed by the Requirements Committee to ensure they meet real-world needs and complement the larger objectives of the OSGi Alliance.
Assuming the RFP is accepted, the relevant Expert Group develops Requests for Comments (RFCs), which define the technical solution to the RFP.
The Expert Group also develops Reference Implementations and Test Cases to support the RFC where this is appropriate.
The Member Area of the OSGi web site contains much more information and detail on specific activities, including drafts and final versions of RFPs and RFCs, final but pre-release versions of specifications and other technical documents, minutes, schedules and calendars of Expert Group meetings, and other important information. This information is only available to members.
So only the members can access those RFCs.
Regarding "draft specifications":
From time to time the expert groups of the OSGi Alliance publish some draft RFCs under a special license (the Distribution and Feedback License) for a public review in order to receive comments from non-OSGi members and other organisations.
The download page to access these draft specs is http://www.osgi.org/Specifications/Drafts .
Keeping the RFCs non-public was a decision made to protect the IPR as well as to keep the resulting specifications as clear as possible. Sometimes one or more RFCs are combined into one specification; sometimes an RFC amends an existing spec. The RFCs are basically work in progress.
There are some RFCs the OSGi Alliance decided to publish, and those are the ones you can access. One example is RFC 112 (Bundle Repository), which is a stand-alone spec, complete in itself.

Is it expected to disclose all the frameworks / open source software used in a project to a client

I was taken aback today when I was confronted about the use of validation code from the CSLA framework. It felt like I was being reprimanded for not disclosing the use of the framework to the client.
Is this not the same as using libraries such as jQuery etc?
You absolutely should acknowledge what you're using, IMO.
Some clients may have particularly strict legal requirements (whether for legitimate reasons or not; they're the client, and it's not up to you to judge their lawyers), and detailing any third-party software you're using to create a product for them seems only reasonable.
What reason could you have for not wanting to be open with your client?
This depends on the license of the open source code you are using. Many of them require you to acknowledge the use in some credits section, others require you to redistribute the source code, etc. You should read the license and act accordingly.
It depends on the project, the kind of client, and whatever contracts you had. However, for a typical consultant delivering code to a customer, I would say no; it is very strange that you would be reprimanded for not bothering them with details such as the use of CSLA. That's pretty odd.
It is the same; I have a feeling that you would have been reprimanded for using jQuery as well. There are enterprises that frown upon the use of open source for various reasons.
They boil down to:
- the type of license and what it forces the user to do
- the availability of support in some commercial form
- the need to 'share-alike' the results
You should know your customer's or employer's stance on this. If they don't have a stance, then you have to discuss it on a case-by-case basis.
I usually tell people I use a lot of open source and, by watching the response I get, I know the path to follow. If they jump and scream at the mention of open source and the lack of support and whatnot, I tend to ask for budget to buy commercial components, or present good cases as to why the open source version of X is better than the commercial alternatives.
It very much depends on the type of project and the type of client. The real problem here is that you were surprised, which indicates a misalignment of expectations. What reason did the client give for its interest in CSLA specifically?
If your client needs to know or cares about which technology you use, then you should specify everything as part of the project documentation. If the choices are clearly described, then it is easier to have a discussion about them, if required. Documentation also gives you a way to ask (literally) for 'sign-off', if that is the way you work.
From your question it is not clear whether the problem was the choice of framework, or not having informed the customer.
Even on projects with minimal documentation, if the customer owns the code, then I always deliver at least a high-level architecture document that includes the names and exact versions of every software component used, along with a brief description of what each is for and why it was selected. This is also the correct place to address any license issues.

Service and/or Commodity

Do we create services when we write programs, or are they commodities?
Are we like window washers, in that our programs (actions) provide a service to users?
Or are we like carpenters, in that our programs (products) are sold and used by their new owners?
Or should this be seen from different angles: the act of programming being a service, and the resulting program being a product?
The above has a direct impact on the following question: is it theft or fraud when you copy software that you have no rights to? Theft is the physical removal of an object of value from the possession of another; fraud is the representation of a falsehood in such a way that it leads to economic loss for the victim (the representation here being your assumed right to copy).
It also impacts questions of causal liability: if the program you wrote to crack passwords is used by others to rob a bank, are you an accomplice? If your program is a service, then it could be argued that you are; if a commodity, then you 'should' be in the clear.
Or should each program be treated as a unique instance, based purely on the intentions of that program's originator, as to whether it should be treated as a service or a commodity? Or should the user's intentions be used?
How does this reflect on the open-source world, where many programs are available that seemingly infringe on commercial rights, e.g. copy-protection and DRM circumvention?
(This impacts us all every time that we write code.)
It's both, actually.
Sometimes you sell a product which just works; it's a commodity. A notepad program, for example, is a commodity: you don't enter into any relationship with its author. Most small tools fall into this category.
Sometimes you develop a custom application tailored to a specific user, or you integrate an existing product with their legacy applications and adjust it to work for their specific situation. That's definitely a service, and you are usually in a long-term relationship with the customer. Most 'big'/expensive programs fall into this category. You could buy an MS SharePoint Server license as a commodity, but in most scenarios you will most likely also buy the services of someone to make it work for you.