How to identify ITIL phases

How do you identify the phases in an ITIL implementation, and how do you know that effective processes are in place to manage the significant investment in IT infrastructure?

You have to create processes for your organization following the ITIL guidelines and, of course, define a set of KPIs so you can measure the effectiveness of those processes.
It's not a linear path: you have to create processes according to your organization's culture and maturity, and sometimes it can be a long road.

The phases in ITIL relate to the stage of the new service being introduced, not to the stage of an ITIL implementation. If you are referring to the maturity of an organization, you should consider CMMI. To understand whether processes are in place to manage the IT infrastructure, start with a gap analysis: identify the current state and the gap relative to best practices.

Related

AnyLogic - simulation model documentation

Not referring to the built-in tool "Create Documentation...": are there any best practices and/or examples of how to document an AnyLogic model?
Even though I am slightly unsatisfied with the documentation of Java programs, that is my current "inspiration". I try to create flowcharts that describe the model's input, processing, and output, and, most importantly but hardest to break down in terms of complexity, the agent interactions.
Curious to hear about any experiences!
Best

Chaos engineering best practice

I studied the principles of chaos and looked for some open-source projects, such as chaosblade, open-sourced by Alibaba, and Mangle, by VMware.
These tools are both fault-injection tools and do no analysis of the system under test.
According to the principles of chaos, we should
1. Start by defining ‘steady state’ as some measurable output of a system that indicates normal behavior.
2. Hypothesize that this steady state will continue in both the control group and the experimental group.
3. Introduce variables that reflect real-world events like servers that crash, hard drives that malfunction, network connections that are severed, etc.
4. Try to disprove the hypothesis by looking for a difference in steady state between the control group and the experimental group.
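The four steps above can be sketched as a tiny experiment harness. This is a minimal illustration, not any particular tool's API: `sample_request` and `inject_fault` are hypothetical callables you would supply for your own system, and "success rate of probe requests" is just one possible steady-state metric.

```python
def measure_steady_state(sample_request, n=100):
    """Step 1: define steady state as the success rate over n probe
    requests. `sample_request` is any callable returning True on success."""
    results = [sample_request() for _ in range(n)]
    return sum(results) / n

def run_experiment(sample_request, inject_fault, tolerance=0.05):
    """Steps 2-4: hypothesize the steady state holds, inject a fault,
    then try to disprove the hypothesis by comparing the baseline
    measurement against the measurement taken under the fault."""
    baseline = measure_steady_state(sample_request)      # control measurement
    inject_fault()                                       # step 3: e.g. kill a node
    under_fault = measure_steady_state(sample_request)   # experimental measurement
    return {
        "baseline": baseline,
        "under_fault": under_fault,
        # hypothesis holds if steady state did not degrade beyond tolerance
        "hypothesis_holds": baseline - under_fault <= tolerance,
    }
```

In practice the two measurements would come from your monitoring system rather than from in-process probes, but the comparison logic is the same.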
So how do we do step 4? Should we use a monitoring system to watch some major metrics and check the status of the system after fault injection?
Are there any good suggestions or best practices?
So how do we do step 4? Should we use a monitoring system to watch some major metrics and check the status of the system after fault injection?
As always, the answer is: it depends. It depends on how you want to measure your hypothesis, on the hypothesis itself, and on the system. But it normally makes a lot of sense to introduce metrics to improve observability.
If your hypothesis is something like "Our service can process 120 requests per second, even if one node fails", then you could measure that via metrics, yes, but you could also measure it via the requests you send and the responses you receive back. It is up to you.
But if your hypothesis is "I get a response for a request that was sent before a node goes down", then it makes more sense to verify this directly with the requests and responses.
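That second kind of hypothesis can be checked directly in the test driver rather than through metrics. A minimal sketch, assuming `send_request` returns a Future-like object with a `.result(timeout)` method and `kill_node` performs the injection (both names are hypothetical placeholders for your own client and fault tooling):

```python
def probe_during_failure(send_request, kill_node, timeout=5.0):
    """Verify that a request sent *before* a node goes down still
    gets a response afterwards."""
    pending = send_request()   # request is in flight before the fault
    kill_node()                # inject the fault
    try:
        pending.result(timeout=timeout)
        return True            # response arrived: hypothesis holds
    except Exception:
        return False           # timeout or error: hypothesis disproved
```

The point is that the assertion lives next to the request/response pair, so you don't need to correlate monitoring data with the injection window afterwards.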
On our project we use, for example, chaostoolkit, which lets you specify the hypothesis in JSON or YAML together with the actions to probe it.
So you can say: I have a steady state X, and if I do Y, then steady state X should still hold. The toolkit is also able to verify metrics if you want it to.
The Principles of Chaos sit a bit above the actual testing: they reflect the philosophy of designed vs. actual system and system-under-injection vs. baseline, but they are a bit too abstract to apply in everyday testing. They are a way of reasoning, not a work-process methodology.
I think the "control group vs. experimental group" wording is an especially doubtful part: you stage a test (an injection) in a controlled environment and try to catch whether there is a user-facing incident, an SLA breach of any kind, or a degradation. I do not see where a control group comes in if you test on a stand or a dedicated environment.
We use a very linear variety of chaos methodology, which is:
find failure points in the system (based on architecture, critical user scenarios, and the history of incidents)
design chaos test scenarios (maybe a single attack or a more elaborate sequence)
run the tests, register the results, and reuse the green ones for new releases
open tasks to fix red tests, and verify the solutions when they become available
One may say we are actually using the Principles of Chaos in steps 1 and 2, but we tend to think of chaos testing as a quite linear and simple process.
Mangle 3.0 was released with an option for analysis using a resiliency score. Detailed documentation is available at https://github.com/vmware/mangle/blob/master/docs/sre-developers-and-users/resiliency-score.md

What are the Parameters on which RTOS are compared?

I want to compare two or more RTOSes (e.g. Keil RTX, µC/OS-III, and FreeRTOS), but I do not know which parameters to compare them on, e.g. memory footprint, certification, etc.
On which points do we compare RTOSes?
You need to compare them on the parameters that are important to your application and meeting its requirements. Those may include for example:
Context switch time
Message passing performance
Scalability
RAM footprint
ROM footprint
Heap usage
OS primitives (queues, mutexes, event flags, semaphores, timers, etc.)
Scheduling algorithms (priority-preemptive, round-robin, cooperative)
Per developer cost
Per unit royalty cost
Licence type/terms
Source or object code provided
Availability of integrated middleware libraries (filesystem, USB, CAN, TCP/IP, etc.)
Safety certification
Platform/target support
RTOS aware debugger support
RTOS/scheduling monitor/debug tools availability
Vendor support
Community support
Documentation quality
The possible parameters are many, and only you can determine what is useful and important to your project.
I suggest selecting about five parameters important to your project, and then analysing each option using the Kepner-Tregoe method. For each parameter you assign a weight based on its relative importance, score each candidate against each parameter, and then sum score × weight for an overall score. The method takes some of the subjectivity out of the selection and, perhaps more importantly, provides evidence of your decision-making process when you have to justify it to your boss.
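The weighted-sum core of that method is simple enough to sketch in a few lines. The criterion names and scores below are purely illustrative, not real ratings of any RTOS:

```python
def kepner_tregoe(weights, scores):
    """Weighted-sum decision analysis in the Kepner-Tregoe style.
    weights: {criterion: weight}
    scores:  {option: {criterion: score}}
    Returns (option, total) pairs ranked best-first."""
    totals = {
        option: sum(weights[c] * s for c, s in crit_scores.items())
        for option, crit_scores in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative only: weight RAM footprint 5, context-switch time 4,
# and score two hypothetical candidates "A" and "B" from 1-10.
ranking = kepner_tregoe(
    {"ram_footprint": 5, "context_switch": 4},
    {"A": {"ram_footprint": 8, "context_switch": 6},   # 5*8 + 4*6 = 64
     "B": {"ram_footprint": 6, "context_switch": 9}},  # 5*6 + 4*9 = 66
)
```

With those made-up numbers, option B edges out A (66 vs. 64), which also shows how sensitive the outcome can be to the weights you choose.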

How to represent part of BPMN workflow that is automated by system?

I am documenting a user workflow where part of the flow is automated by a system (e.g. if the order quantity is less than 10, approve the order immediately rather than sending it to staff for review).
I have swim lanes that go from person to person, but I'm not sure where to fit this system task/decision path. What's the best practice? Possibly a dumb idea, but I'm inclined to create a new swim lane called "System".
Any thoughts?
The approach of detaching the system task into a separate lane is quite possible, as the BPMN 2.0 specification does not explicitly define the meaning of lanes; it says something like this:
Lanes are used to organize and categorize Activities within a Pool.
The meaning of the Lanes is up to the modeler. BPMN does not specify
the usage of Lanes. Lanes are often used for such things as internal
roles (e.g., Manager, Associate), systems (e.g., an enterprise
application), an internal department (e.g., shipping, finance), etc.
So you are completely free to fill them with whatever you want. However, your case is quite straightforward and doesn't require such separation at all. According to your description, we have a typical conditional activity, which can be expressed via a Service Task or a Sub-Process. These are two different approaches, and they carry different semantics.
According to the BPMN specification, a Service Task is a task that uses some sort of service, which could be a Web service or an automated application. That is, it is usually used when the modeller doesn't want to decompose a process and intends to outsource it to some external tool or agent.
Another cup of tea is the Sub-Process, which is typically used when you want to wrap a complex piece of workflow for reuse, or when that piece of workflow can be decomposed into sub-elements.
In your use case a Sub-Process is the thing of choice. It is highly adjustable, transparent, and maintainable. For example, inside the Sub-Process you can use a business rules engine for your condition parameter (order quantity) and flexibly adjust its value on the fly.
You can learn the difference between these approaches in greater detail from this blog.
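The routing rule behind that gateway is worth keeping outside the diagram. A minimal sketch, assuming a hypothetical `route_order` helper backing the exclusive gateway, with the threshold held in configuration (e.g. supplied by a business rules engine) so it can change without redrawing the model:

```python
# Hypothetical: in a real deployment this value would come from a
# business rules engine or configuration, not a hard-coded constant.
AUTO_APPROVE_THRESHOLD = 10

def route_order(quantity):
    """Decide which BPMN branch an order takes at the gateway."""
    if quantity < AUTO_APPROVE_THRESHOLD:
        return "auto_approve"    # automated branch (Service Task / Sub-Process)
    return "manual_review"       # human branch (User Task in a staff lane)
```

Keeping the decision logic in one adjustable place is exactly what makes the Sub-Process approach "flexible on the fly", as described above.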
There is a technique of expressing system tasks/decisions via a dedicated participant/lane, where all system tasks are collocated in a "system" lane.
System tasks (Service Tasks in BPMN) are usually performed on behalf of an actor, so in my opinion it is useful to position them in that actor's lane.
Such a design usually also helps keep the diagram easy to read by limiting the number of transitions between the "user" lanes and the "system" lane.

Steps to consider during planning of new project

What are the points I must remember during the planning phase of a project to lay a really firm foundation?
Thanks
Edit: I mean more specifically related to coding (I don't mean budgets, etc.).
For example: where can we use generics, reflection, or similar concepts in C#?
During the planning phase you need to:
Define the problem you're solving
Validate that the problem actually exists
Define a solution with your customer (this is more of a starting point; I recommend constant user feedback throughout your lifecycle, but you need to start somewhere)
Define the scope of the project, including features, cost/budget, and time
Communicate, communicate, communicate
1) Know your deadlines
2) Know your budget
If you let either of these get away from you, you are setting yourself up for a disaster.
Check out Steve McConnell's book on software estimation. It will help you consider all areas before getting started, for if you have to estimate something, you should know what has to be done.
You should also consider reading Code Complete.