Multiple slave agents in UVM - SystemVerilog

I am planning to implement multiple slave agents and a single master agent in my environment.
Can anybody show me an example of how to do this in UVM? And how do we start a sequence item on a particular slave sequencer from the test case?

This is pretty typical. Without more details, the general outline is:
In build_phase:
Create and configure the agents just as you have described.
Both the master and the slave agents will be configured as active.
A slave agent is typically a reactive agent that responds to stimulus from the DUT. In that case, the sequence items in the slave driver are initiated by DUT activity, so they aren't under direct test-case control.

Related

Rundeck ansible inventory: static instead of dynamic

I deployed Rundeck (rundeck/rundeck:4.2.0), importing and discovering my inventory with the Ansible Resource Model Source. I have 300 nodes, of which statistically ~150 are accessible/online; the rest are offline (IoT devices). All of this works fine.
My challenge is that when creating jobs I can only assign nodes that are online, while I want to assign ALL nodes (including the offline ones) and keep retrying the job for the failed ones only. Only this way can I track the completeness of my deployment. Ideally I would love Rundeck to be intelligent enough to automatically deploy the job as soon as a node comes back online.
Any ideas/hints on how to achieve that?
Thanks,
The easiest way is to use the health checks feature (only available on PagerDuty Process Automation On-Prem, formerly "Rundeck Enterprise"); that way you can use a node filter that targets only "healthy" (up) nodes.
Using this approach (e.g. configuring a command health check against all nodes), you can dispatch your jobs only to "up" nodes out of the global set of nodes. This is possible by using .* as the node filter and !healthcheck:status: HEALTHY as the exclude node filter. If any "offline" node turns on, the filter/exclude filter picks it up automatically.
For the Ansible/Rundeck integration, it works using the environment variable ANSIBLE_HOST_KEY_CHECKING=False, or by setting host_key_checking=false in the ansible.cfg file (in the [defaults] section).
That way you can see all Ansible hosts among your Rundeck nodes, your commands/jobs are dispatched only to online nodes, and if any "offline" node changes its status, the filter picks it up.

How to build a scenario where ZooKeeper jitter causes multiple rapid elections

I have two programs, fc (failover controller) and web (web server), and I use ZooKeeper for high availability.
fc is deployed on two servers; the two fc instances use Apache Curator's LeaderSelector to elect a master, and the master starts a web process that provides the service. In order not to give up leadership, I use a while(true) loop at the end of takeLeadership().
But there is one particular situation: our customer deploys ZooKeeper on three VMware ESXi virtual machines, and they snapshot the three VMs (including VM memory) every day.
One day a strange thing happened: fc1 became master, and a few milliseconds later fc2 became master as well. The time difference between the two was very short. This triggered a bug in our program: we had two masters.
To fix this, we added an AtomicBoolean flag that is set when the ZooKeeper connection state becomes LOST or SUSPENDED, and we use this flag to decide whether to exit takeLeadership().
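Roughly, the guard looks like this (a simplified sketch built around Curator's LeaderSelectorListenerAdapter; the class name and the start/stop helper methods are only illustrative):

import java.util.concurrent.atomic.AtomicBoolean;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.leader.LeaderSelectorListenerAdapter;
import org.apache.curator.framework.state.ConnectionState;

public class FcLeaderListener extends LeaderSelectorListenerAdapter {
    // Set to true when the ZooKeeper connection becomes SUSPENDED or LOST.
    private final AtomicBoolean connectionLost = new AtomicBoolean(false);

    @Override
    public void takeLeadership(CuratorFramework client) throws Exception {
        connectionLost.set(false);
        startWebProcess(); // illustrative: launch the web server as master
        try {
            // Hold leadership until the connection-state flag tells us to give it up.
            while (!connectionLost.get()) {
                Thread.sleep(100);
            }
        } finally {
            stopWebProcess(); // illustrative: make sure only one master serves traffic
        }
    }

    @Override
    public void stateChanged(CuratorFramework client, ConnectionState newState) {
        if (newState == ConnectionState.SUSPENDED || newState == ConnectionState.LOST) {
            connectionLost.set(true);
        }
        // The adapter also throws CancelLeadershipException for these states,
        // which interrupts takeLeadership().
        super.stateChanged(client, newState);
    }

    private void startWebProcess() { /* ... */ }
    private void stopWebProcess()  { /* ... */ }
}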
Now I want to test this two-master case. How can I build a scenario where ZooKeeper jitter causes multiple rapid elections?
I have tried the following, but could not reproduce it:
frequently restarting the ZooKeeper services;
using tcpkill to cut the connection from one fc to the ZooKeeper port.

Uber Cadence production setup

I was looking for a microservice orchestrator and came across Uber Cadence. I have gone through the documentation and also used it in the development setup.
I had a few questions for production scenarios:
Is it recommended to have a dedicated tasklist for the workflow and the different activities used by it? Or, should we use a single tasklist for all? Does this decision impact the scalability or performance?
When we add a new worker machine, is it common practice to run all the workers for different activities/workflows on the same machine? Example:
Worker.Factory factory = new Worker.Factory("samples-domain");
Worker helloWorkflowWorker = factory.newWorker("HelloWorkflowTaskList");
helloWorkflowWorker.registerWorkflowImplementationTypes(HelloWorkflowImpl.class);
Worker helloActivityWorker = factory.newWorker("HelloActivityTaskList");
helloActivityWorker.registerActivitiesImplementations(new HelloActivityImpl());
Worker upperCaseActivityWorker = factory.newWorker("UpperCaseActivityTaskList");
upperCaseActivityWorker.registerActivitiesImplementations(new UpperCaseActivityImpl());
factory.start();
Or should we run each activity/workflow worker on a dedicated machine?
On a single worker machine, how many workers can we create for a given activity? For example, if we have the activity HelloActivityImpl, should we create multiple workers for it on the same worker machine?
I have not found any documentation for a production setup. For example, how do you install and configure the Cadence service in production? It would be great if someone could direct me to the right material for this.
In some of the video tutorials it was mentioned that, for high availability, we can set up the Cadence service across multiple data centers. How do I configure the Cadence service for that?
Unless you need separate flow control and rate limiting for a set of activities, there is no reason to use more than one task queue per worker process.
As I mentioned in (1), I would rewrite your code as:
Worker.Factory factory = new Worker.Factory("samples-domain");
Worker worker = factory.newWorker("HelloWorkflow");
worker.registerWorkflowImplementationTypes(HelloWorkflowImpl.class);
worker.registerActivitiesImplementations(new HelloActivityImpl(), new UpperCaseActivityImpl());
factory.start();
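If some activities did need their own flow control or rate limiting (point 1), a sketch of that variant would just add one more worker on a dedicated task list, reusing the task-list and class names from your example:

Worker.Factory factory = new Worker.Factory("samples-domain");

// One worker handles the workflow plus the ordinary activities.
Worker mainWorker = factory.newWorker("HelloWorkflow");
mainWorker.registerWorkflowImplementationTypes(HelloWorkflowImpl.class);
mainWorker.registerActivitiesImplementations(new HelloActivityImpl());

// Only the activities that need separate flow control / rate limiting
// get a dedicated task list and worker.
Worker rateLimitedWorker = factory.newWorker("UpperCaseActivityTaskList");
rateLimitedWorker.registerActivitiesImplementations(new UpperCaseActivityImpl());

factory.start();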
There is no reason to create more than one worker for the same activity.
Not sure about Cadence. Here is the Temporal documentation that shows how to deploy to Kubernetes.
This documentation is not yet available. We at Temporal are working on it.
You can also use the Cadence Helm chart: https://hub.helm.sh/charts/banzaicloud-stable/cadence
I am actively working with the Cadence team to produce operations documentation for the community. It will be useful for those who don't want to run on K8s, like myself. I will come back later as we make progress.
Current draft version: https://docs.google.com/document/d/1tQyLv2gEMDOjzFibKeuVYAA4fucjUFlxpojkOMAIwnA
It will be published to cadence-docs soon.

Can one private agent from VSTS be installed on multiple VMs?

I have installed an agent on a VM and configured a CI build pipeline. The pipeline is triggered and works perfectly fine.
Now I want to use the same build pipeline and the same agent, but a different VM. Is this possible?
How will builds be executed, and onto which VM will the source be copied?
Thank you.
Like the others, I'm not sure what you're trying to do, and I also think that using the same agent across multiple machines is not possible.
But if you need to alternate or choose easily between VMs, you could set up an individual agent queue for each of the VMs used in this scenario, with one agent in each pool. That way you can choose the agent pool at queue time via the agent queue dropdown field. But that only works if you're triggering manually, not in a typical CI scenario. In that case you would have to edit the definition to enforce a particular VM each time you want to swap VMs.
No. Private agents are supposed to have a unique name and are assigned to an agent pool/queue. They poll the VSTS/Azure DevOps server to see whether they have a job to do, and then execute it. If you clone a machine with the same private build agent, then in theory whichever agent picks up the job will execute it, but that is only theory; I really don't know how the agent queues would handle it.
It depends on what you want to do.
If you want to spread the workload, e.g. two build servers with builds going to whichever build server isn't busy, then you would create one agent pool/queue. Create a private agent on one server and register it to that pool, then on the second server unregister the (cloned) agent and re-register it, adding it to the SAME pool.
If you want to do work on two servers at exactly the same time, such as a deployment to two servers at once, then you would create a 'Deployment Group' and add both servers to it. You would unregister both agents from the agent pool/queue. From your 'Deployment Group', copy the PowerShell script snippet and run it on each machine. This way you can use the deployment group in your release pipeline and run deployments in parallel, which takes less time.
You could set up a variable in the pipeline so you can specify the name of the VM at build-time.
Also, once you have one or more agents, you would add them to an agent pool. When a build runs, one agent from the pool is chosen and used.

Is there a way to make jobs in Jenkins mutually exclusive?

I have a few jobs in Jenkins that use Selenium to modify a database through a website's front end. If some of these jobs run at the same time, errors due to dirty reads can result. Is there a way to force certain jobs in Jenkins to be unable to run at the same time? I would prefer not to have to place or pick up a lock on the database, which could be read or modified by any number of users who are also testing.
You want the Throttle Concurrent Builds plugin, which lets you define global and per-node semaphores.
The Locks and Latches plugin is being deprecated in favor of Throttle Concurrent Builds.
I've tried both the Locks & Latches plugin and the Port Allocator plugin as ways to achieve what you're trying to do. Neither worked reliably for me. Locks & Latches worked some of the time, but I'd occasionally get hung jobs. Using Port Allocator as a hack will work unless you have multiple Jenkins nodes, but the configuration overhead is kind of high. What I've ultimately settled upon is another hack, but it works reliably and uses core Jenkins features (no plugins):
create a slave node running on the same box as the master (or not, if you have lots of boxes);
give this slave a single executor (this is the key);
tie the 2 (or n) jobs that must not run together to this new slave node;
optionally set the slave's usage to 'tied jobs only' if your other jobs would be messed up by happening to run on the new slave.
Since the slave has only one executor, the jobs tied to it can never run together.
If you regard the database as a shared resource that can only be used exclusively, then this fits the use case of the Lockable Resources plugin.
It is being actively developed and improved and is very versatile.