I was searching the net for tutorials on Electric Cloud but found nothing, and could not find good blogs dealing with it either. Can somebody point me in the right direction?
Also, we are planning on using Electric Cloud for executing Perl scripts in parallel. We are not going to build software; we are trying to test our hardware in parallel by executing the same Perl script in parallel using ElectricCommander. But I think ElectricCommander might not be the right tool given its cost. Can you suggest some of the pros and cons of using ElectricCommander for this, and any other features which might be useful for our testing?
Thanks...
RE #1: All of the ElectricCommander documentation is located in the Electric Cloud Knowledge Base located at https://electriccloud.zendesk.com/entries/229369-documentation.
ElectricCommander can also be a valuable application to drive your tests in parallel. Here are just a few aspects for consideration:
Subprocedures: With EC, you can just take your existing scripts, drop them into a procedure definition and call that procedure multiple times (concurrently) in a single procedure invocation. If you want, you can further decompose your scripts into more granular subprocedures. This will drive reuse, lower cost of administration, and it will enable your procedures to run as fast as possible (see parallelism below).
Parallelism: Enabling a script to run in parallel is literally as simple as checking a box within EC. I'm not just referring to running 2 procedures at the same time without risk of data collision; I'm referring to the ability to run multiple steps within a procedure concurrently. Coupled with the subprocedure capability mentioned above, this enables your procedures to run as fast as possible, as you can nest subprocedures within other subprocedures and run everything in parallel where the tests allow it (see the sketch after this list).
Root-cause Analysis: Tests can generate an immense amount of data, but often only the failures, warnings, etc. are relevant (tell me what's broken). EC can be configured to look for very specific strings in your test output and will produce diagnostics based on that configuration. So if your test produces a thousand lines of output but only 5 lines reference errors, EC will automatically highlight those 5 lines for you. This makes it much easier for developers to quickly identify the root cause.
Results Tracking: ElectricCommander's properties mechanism allows you to store any piece of information that you determine to be relevant. These properties can be associated with any object in the system whether it be the procedure itself or the job that resulted from the invocation of a procedure. Coupled with EC's reporting capabilities, this means that you can produce valuable metrics indicating your overall project health or throughput without any constraint.
Defect Tracking Integration: With EC, you can automatically file bugs in your defect tracking system when tests fail or you can have EC create a "defect triage report" where developers/QA review the failures and denote which ones should be auto-filed by EC. This eliminates redundant data entry and streamlines overall software development.
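To make the subprocedure and parallelism points concrete, here is a minimal sketch that scripts a procedure definition through the ectool CLI from Python. The project, procedure, step names, and the perl script are hypothetical, and the createStep flags are from memory of the ectool command set, so check them against your EC version's reference before relying on them.

```python
import subprocess

def ectool(*args):
    """Invoke the ectool CLI (assumed to be on PATH and already logged in)."""
    return subprocess.run(["ectool", *args], check=True,
                          capture_output=True, text=True).stdout

project, procedure = "HardwareTest", "TestAllBoards"   # hypothetical names

ectool("createProcedure", project, procedure)

# One step per device under test, each wrapping the same existing perl
# script; --parallel 1 asks EC to run the steps concurrently.
for device in ("board-01", "board-02", "board-03"):
    ectool("createStep", project, procedure, f"test-{device}",
           "--parallel", "1",
           "--command", f"perl run_hw_test.pl --device {device}")
```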
In short, EC will behave exactly the way you want it to; it will not force you to change your process to fit the tool. As far as cost goes, Electric Cloud provides a version known as ElectricCommander Workgroup Edition for cost-sensitive customers. It is available for a small annual subscription fee and is something that you may want to follow up on.
I hope this helps. Feel free to contact your account manager or me directly if you have additional questions (dfarhang#electric-cloud.com).
Maybe you could execute the same perl script on several machines by using r-commands, or cron, or something similar.
To further address the parallel aspect of your question:
The command-line interface lets you write scripts to construct procedures, including this kind of subprocedure with parallel steps. So you are not limited, in the number of parallel steps, to what you wrote previously: you can write a procedure which dynamically sizes itself to (for example) the number of steps you would like to run in parallel, or the number of resources you have to run steps in parallel.
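For instance, a driver script could size the procedure to however many resources are registered at run time. A sketch under the same assumptions as the one above; the XML element names in ectool's getResources output are also assumptions, so inspect the real response first.

```python
import subprocess
import xml.etree.ElementTree as ET

def ectool(*args):
    return subprocess.run(["ectool", *args], check=True,
                          capture_output=True, text=True).stdout

# Size the procedure to the resources currently registered with the server.
# (ectool replies in XML; the <resource>/<resourceName> element names here
# are assumptions -- check the actual output of `ectool getResources`.)
resources = [r.findtext("resourceName")
             for r in ET.fromstring(ectool("getResources")).iter("resource")]

project, procedure = "HardwareTest", "TestAllBoardsDynamic"  # hypothetical
ectool("createProcedure", project, procedure)

# One parallel step per available resource, pinned to that resource.
for i, res in enumerate(resources):
    ectool("createStep", project, procedure, f"test-{i}",
           "--parallel", "1",
           "--resourceName", res,
           "--command", "perl run_hw_test.pl")
```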
Case in point: I have a build which invokes a lot of REST API calls and processes the results. I would like to split the monolithic step that does all that into 3 steps:
1. initial data acquisition - gets data from the REST API. Plain objects, no reference loops or duplicate references.
2. data massaging - enriches the data from (1) with all kinds of useful information. May result in duplicate references (the same object is referenced from multiple places) or reference loops.
3. data processing
The catch is that there is a lot of data, and converting it to JSON takes too much time for my taste. I have not checked the Export-Clixml cmdlet, but I think it would be slow too.
If I wrote the code in C# I would use some kind of binary serialization, which should be sophisticated enough to handle reference loops and duplicate references.
Please note that serialization would write to the build staging directory and would be deserialized almost immediately, as soon as the next step runs.
I wonder what my options are in PowerShell?
EDIT 1
I would like to clarify what I mean by steps. This is a build running on a CI build server. Each step runs in a separate shell and is reported individually on the build page. There is no memory sharing between the steps. The only way to communicate between the steps is through build variables or the file system. Of course, using a database is also possible, but it is overkill.
Build variables are set using a certain API and are exposed to the subsequent steps as environment variables. As such, they are quite limited in length.
So I am talking about communicating through the file system. I am sacrificing performance here for the sake of build granularity: instead of having one monolithic step I want to have 3 smaller steps. This way the build is more transparent and communicates clearly what it is doing. But I have to temporarily persist payloads between steps. If it is possible to minimize the overhead, then the benefits are worth it. If the performance is going to degrade significantly, then I will keep the monolithic step.
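To make the 3-step pattern concrete, here is a sketch of it in Python, whose pickle module is a binary serializer that handles duplicate references and reference loops by design (it memoizes objects by identity). The staging path and data are made up; in PowerShell the closest built-in analogue to benchmark would be Export-Clixml/Import-Clixml, which I believe also track object references.

```python
import pickle
from pathlib import Path

STAGING = Path("build-staging")          # illustrative staging directory
STAGING.mkdir(exist_ok=True)

# --- step 1: initial data acquisition (plain objects, no cycles) ---
def acquire():
    data = {"items": [{"id": 1}, {"id": 2}]}
    (STAGING / "acquired.pkl").write_bytes(pickle.dumps(data))

# --- step 2: data massaging (may introduce shared refs and cycles) ---
def massage():
    data = pickle.loads((STAGING / "acquired.pkl").read_bytes())
    first = data["items"][0]
    data["featured"] = first          # duplicate reference: same object twice
    first["parent"] = data            # reference loop
    (STAGING / "massaged.pkl").write_bytes(pickle.dumps(data))

# --- step 3: data processing (would run in a separate shell/process) ---
def process():
    data = pickle.loads((STAGING / "massaged.pkl").read_bytes())
    assert data["featured"] is data["items"][0]   # identity preserved
    assert data["items"][0]["parent"] is data     # cycle preserved

if __name__ == "__main__":
    acquire(); massage(); process()
```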
I am trying to run Dymola on a cluster so that everyone in my research group can submit model simulation jobs. Is it possible to run Dymola on a cluster and utilize the power of HPC?
I can use some flags to make Dymola run in parallel on a many-core computer, but how do I run a parallel simulation across many computers?
Parallelization on a single computer:
Parallelizing a Modelica model is possible, but the model needs to be either:
- suitable by nature (which doesn't happen too often, at least in my experience); for some examples where it works well, see e.g. here
- modified manually by the modeler to allow parallelization, e.g. by introducing delay blocks (see here, or some similar approach here)
Often Dymola will output "No Parallelization" in the translation log, presumably because the model does not allow (efficient) parallelization. The manual also states: "It should be noted that for many kinds of models the internal dependencies don't allow efficient parallelization for getting any substantial speed-up."
I'm not an expert on this, but to my understanding HPC depends on massive parallelization. Therefore, models generated by Dymola do not seem to be a very good fit for HPC clusters.
Dymola on multiple computers:
Running a single Dymola-simulation on multiple computers in parallel is not possible as far as I know.
I think there are several answers to this question.
The flags under Translation all refer to parallelized code inside a single simulation executable. If you mention HPC, I do not think you need to consider this.
To run multiple simulations on a single multi-core computer there is built-in support in Dymola. The relevant function is simulateModelMulti, etc. The Sweep Parameters feature uses this automatically.
There is no built-in support to distribute the simulation across several computers in a cluster. However, if you generate your dymosim.exe with the Binary Model Export option enabled, it can be run on other computers. You need to distribute dymosim.exe, dsin.txt, and any data files you read across the cluster. I'm sure your HPC cluster has tools for that.
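Building on that last point: once dymosim.exe, dsin.txt, and the data files are copied out, fanning out runs is ordinary process management. Below is a minimal Python sketch, assuming one prepared directory per run (each holding the exported dymosim and a dsin.txt with that run's parameters) and that dymosim reads dsin.txt from its working directory; submitting one such invocation per node through your cluster's scheduler is left to your HPC tooling.

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

# One prepared directory per simulation variant. The layout and the way
# dymosim is invoked are assumptions -- check how your exported dymosim
# expects to be run.
RUN_DIRS = [Path(f"run_{i:03d}") for i in range(8)]

def simulate(run_dir: Path) -> int:
    """Run one dymosim simulation in its own directory; returns exit code."""
    result = subprocess.run(["./dymosim.exe"], cwd=run_dir,
                            capture_output=True, text=True)
    return result.returncode

if __name__ == "__main__":
    # On a single many-core node; on a cluster, submit one invocation per
    # node via the scheduler instead.
    with ProcessPoolExecutor() as pool:
        for run_dir, rc in zip(RUN_DIRS, pool.map(simulate, RUN_DIRS)):
            print(run_dir, "exit code:", rc)
```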
I added a Red Hat Linux 64 instance and installed the JDK successfully. Then I used SSH to copy the MarkLogic 9 installation package to the instance, and the install finished. When I start the MarkLogic service, the following messages appear. (P.S.: this is my first time installing MarkLogic.)
Instance is not managed
Waiting for device mounted to come online : /dev/xvdf
Volume /dev/sdf has failed to attach - aborting
Warning: ec2-startup did not complete successfully
Check the error logs for details
Starting MarkLogic: [FAILED]
And the following is the log info:
2017-11-27 11:16:39 ERROR [HandleAwsError # awserr.go.48] [instanceID=i-06sdwwa33d24d232df [HealthCheck] error when calling AWS APIs. error details - NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Using the Source of Infinite Wisdom, I googled for "Install MarkLogic ec2 aws".
Not far down I find [https://docs.marklogic.com/guide/ec2.pdf][1].
Good document to read.
If you choose to ignore the "STOP: Before you do Anything!" suggestion (literally "STOP" in all caps) on the first page, you can go further and find that ML needs a data volume, and that using the root volume is A Bad Idea (it's too small, will crash your system when it fills up, and will vanish if your instance terminates). So if you choose not to use the recommended CloudFormation script for your first experience, you will need to manually create and attach a data volume, among other things.
[1]: https://docs.marklogic.com/guide/ec2.pdf
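If you do go the manual route rather than the CloudFormation script, creating and attaching a data volume looks roughly like this with boto3. This is a sketch: the region, zone, and size are placeholders, the /dev/sdf device name just mirrors the error in the question, and you would still need to format and mount the volume, plus fix the instance credentials behind the NoCredentialProviders error.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Create a data volume in the SAME availability zone as the instance.
vol = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100,
                        VolumeType="gp2")
ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])

# Attach it where the startup scripts expect it (/dev/sdf here, matching
# the error in the question; newer kernels expose it as /dev/xvdf).
ec2.attach_volume(VolumeId=vol["VolumeId"],
                  InstanceId="i-06sdwwa33d24d232df",  # the asker's instance
                  Device="/dev/sdf")
```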
The size and compute power of the host systems running ML are orthogonal to the deployment and orchestration methods.
They are entirely different issues. Yes, you should start with the sample CloudFormation scripts... but not due to size and performance; rather because they were built to make a successful first-time experience as painless as possible. You would have had your ML server up and running in less time than it took to post a question to Stack Overflow asking why it wasn't.
Totally unrelated, except for the built-in set of instance types for the AMIs (1).
Which configurations are possible vs. recommended vs. supported is all largely dependent on workload and performance expectations.
MarkLogic can run on resource-constrained systems. Whether and how well it works requires the same methodology to answer for micro and mega systems: workload, data size and format, query and data processing code used, performance requirements, working set, hardware, software, VM, networking, storage... While it is designed to support large enterprise workloads well, there are also very constrained platforms and workloads in use in production systems. A typical low-end laptop can run ML fine... for some use cases, where others may need a cluster of a dozen or a hundred high-end monsters.
(1) "Supported instance types" with Marketplace AMIs: yes, these do NOT include entry-level EC2 instance types, last I looked. The rationale is similar to why the standard scripts make it hard to abuse the root volume as a data volume. Not because it cannot be done, but rather as an attempt to provide the best chance of a successful first-time experience to the targeted market segment, constrained by having only one chance to do it and knowing nothing at all about the intended use: a blind educated guess, coupled with a lot of testing and support history about how people get things wrong no matter how much you lead them.
While "micro" systems can be made to work successfully in some specialized use cases, they usually don't do as well, as easily, or as reliably, and can't handle as large a variety of whatever-you-throw-at-them, without careful workload-specific tuning and sophisticated application code.
Similarly, there is a reason the docs make it as clear as humanly possible, even annoyingly so, that you should start with the CloudFormation templates, short of refusing to run without them.
Can ML run on platform X, with Y memory, Z hypervisor, on Docker or VMware or VirtualBox or a brand-Acme RAID controller?
Very likely, with some definition of "run" and configured for those exact constraints.
Very unlikely, for arbitrary definitions of "run" with no thought or effort put into matching the deployment to the environment.
Will it be easy to set up by someone who's never done it before, and run "my program" at "my required speeds" out of the box, with no problems, no optimizations, performance analysis, data refactoring, or custom queries?
For a reasonably large set of initial use cases, for at least a reasonable and quick POC: very likely, if you follow the installation guide, with perhaps a few parameter adjustments.
Is that the best it can do? Absolutely not.
But it's very close, given absolutely no knowledge of the user's actual application, technical experience, workloads, budget, IT staff, dev and QA team, requirements, business policies, future needs, staffing, or phase of the moon.
My recommendation: read the EC2 docs.
Do what they say.
Try it out with a realistic set of data and applications for your use.
Test, measure, experiment, learn.
THEN and ONLY THEN worry about whether it will work on a t2.micro or an m4.16xlarge (or clusters thereof).
That is the beginning, not the end.
The end is never: you can and should treat continual analysis and improvement of IT configurations as part of ongoing operating procedures.
Minimizing cost is a systemic problem with many dimensions.
On AWS it's FREE to change; it's EXPENSIVE to not plan for change.
Change is cheap.
Experimentation is cheap.
Choose instance types, storage, networking, etc. last, not first.
Consider total cost of ownership. Question requirements: do you NEED that dev system running Sunday at 3am? Can QA tolerate occasional failures in exchange for 90% cost savings? Can you avoid overcommitment by auto-scaling?
Do you need five 9's, or are three 9's enough? Can ingest be offloaded to non-production systems with cheaper storage? Can a middle tier be used... or removed, to move work to the most cost-effective components? Is labor or IT more costly?
Instance type is actually one of the least relevant components of total cost of ownership.
IBM i V6.1
When using System i Navigator and you click System Values, the following displays.
By default, "Do not allow parallel processing" is selected.
What will the impact be on processing in programs when you choose multiple processes? We have a lot of RPG IV programs and SQL queries being executed, and I think it will increase performance.
Basically, I want to turn this on in the production environment, but I am not sure if I will break anything by doing this, for example the input or output of different programs running in parallel, or data getting out of sequence.
I did do some research:
https://publib.boulder.ibm.com/iseries/v5r2/ic2924/index.htm?info/rzakz/rzakzqqrydegree.htm
I understand each option, but I do not know the risk of changing it from the default to multiple processes.
First off, in order to get the most out of *MAX and *OPTIMIZE, you'd need a system with more than one core (enabled for IBM i / DB2), along with the DB2 Symmetric Multiprocessing (SMP) (57xx-SS1 option 26) licensed program installed, thus allowing the system to use SMP for queries and index builds.
For *IO, the system can use multiple tasks via simultaneous multithreading (SMT), even on a single-core POWER5 or higher box. SMT is enabled via the Processor multitasking (QPRCMLTTSK) system value.
You're unlikely to "break" anything by changing the value, as long as your applications don't make bad assumptions about result set ordering. For example, CPYxxxIMPF makes use of SQL behind the scenes; with anything but *NONE you might end up with the rows in your DB2 table in a different order from the rows in the import file.
You will most certainly increase CPU usage. This is not a bad thing, unless you're currently pushing 90%+ CPU usage regularly. If you're only using 50% of your CPU, it's probably a good thing to make use of SMT/SMP to provide better response time, even if it increases CPU utilization to 60%.
Having said that, here's a story of it being a problem... http://archive.midrange.com/midrange-l/200304/msg01338.html
Note that in the above case, the OP was pre-building work tables at sign on in order to minimize the wait when it was time to use them. Great idea 20 years ago with single threaded systems. Today, the alternative would be to take advantage of SMP/SMT and build only what's needed when needed.
As you note in a comment, this kind of change is difficult to test in non-production environments, since workloads in DEV and TEST are different. So it's important to collect good performance data before and after the change. You might also consider moving it in stages: *NONE --> *IO --> *OPTIMIZE, and then *MAX if you wish. I'd spend at least a month at each level if you have periodic month-end jobs.
I would like to identify and analyze the different machine instructions executed, and the clock cycles required for each of them, throughout a run of some code.
Is there any way to do this simply? Dynamic binary translation might be a way, but I am looking for an easier mechanism.
Thanks in advance
If you are programming, consider using a performance analysis tool such as a profiler, e.g. Intel VTune (http://en.wikipedia.org/wiki/VTune) or oprofile.
It is much less common for most programmers to have access to a cycle-accurate simulator, although in the embedded space it is quite common.
Dynamic binary translation is probably NOT a good way to measure your program at individual-instruction granularity. DBT tools like http://www.pintool.org/ do allow you to insert code to read timers, but doing this around individual instructions is way too slow, and the instrumentation adds too much overhead. Doing it at function granularity can be okay; basic-block granularity (i.e. every branch) is borderline.
Bottom line: try a profiling tool like VTune first. Then go looking for a cycle-accurate simulator.
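If VTune isn't available, Linux perf can give you instruction and cycle counts cheaply (aggregate totals, not per-instruction timing). A small sketch wrapping it from Python; the binary name is a placeholder, perf must be installed with permission to read the hardware counters, and the CSV field layout can vary slightly across perf versions.

```python
import subprocess

def count_instructions_and_cycles(cmd):
    """Run cmd under `perf stat`; return {'instructions': n, 'cycles': n}.

    Aggregate hardware-counter totals for the whole run; per-instruction
    timing needs a cycle-accurate simulator, as discussed above.
    """
    # perf prints statistics to stderr; -x, switches it to CSV output,
    # where field 0 is the count and field 2 is the event name.
    result = subprocess.run(
        ["perf", "stat", "-x", ",", "-e", "instructions,cycles", *cmd],
        capture_output=True, text=True)
    counts = {}
    for line in result.stderr.splitlines():
        fields = line.split(",")
        if len(fields) > 2 and fields[2] in ("instructions", "cycles"):
            if fields[0].isdigit():       # skip "<not counted>" etc.
                counts[fields[2]] = int(fields[0])
    return counts

print(count_instructions_and_cycles(["./a.out"]))  # placeholder binary
```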