How to define the start order in a group of processes using supervisord?

If I have:
[group:foo]
programs=bar,baz
And:
[program:bar]
command=/path/to/bar
priority=200
As well as:
[program:baz]
command=/path/to/baz
priority=150
does the program priority determine the start order, i.e. baz then bar?

Yes. Lower priorities indicate programs that start first and shut down last, both at startup and when aggregate commands are used in the various clients (e.g. "start all"/"stop all"); higher priorities indicate programs that start last and shut down first. In your example, baz (priority 150) starts before bar (priority 200), and bar shuts down before baz.


In AnyLogic, is it possible to send an agent from one storage to another directly?

I have two storages (storageA and storageB) and I want to move an agent (pallet) from one to the other via forklifts. I have set up the following.
A pallet is created at a node and is moved to storageA via 'store'. This part works fine. The pallet is then moved to storageB via 'store1' after a delay. This is when the following error occurs:
Exception during discrete event execution:
root.store1.seizeTrans.freeSpaceSendTo:
Path not found! {agent=2, source={level=level, pos=(1673.3333333333333, 3245.0, 0.0)}, target={level=level, pos=(1857.25, 3160.4845, 0.0)}}
It works if I replace 'store1' with a retrieve block and send the pallet to a node first. However, I would like to send the pallet directly to another storage rather than via another location. Is this possible?
Please let me know if I have not provided enough information.
Thanks
Unfortunately you can't do that, as far as I know. The solution I use is the following, which is not a super robust solution but has been OK in applications so far:
1. Place a retrieve block between your delay and your store1.
2. Use the agent you pick up as the destination.
3. In the "on seize" action of the retrieve block, do: agent.transporter=unit;
4. On the store1 block, put the highest priority on the task.
5. On the store1 block, use the resource custom transporter choice: agent.transporter.equals(unit)
6. The dispatching policy in store1 should be "nearest to the agent". Doing all of the above ensures that the resource keeps doing the task no matter what; using only the dispatch policy, your model will work 99.999999% of the time. The problem occurs only if another task with a higher priority arrives at the exact moment the transporter is released in the retrieve block, which is rare, but can happen.
I had the same question today, which is how I landed here. Luckily, after only the second step written above, the whole process already worked for my case. We can move an agent from one storage to another by simply setting the destination of the 'retrieve' block to the coordinates of the agent, with movement set to 'independently' instead of by fleets or resources. After that, we put the 'store' block.
Destination is: (x,y,z)
X: agent.getX()
Y: agent.getY()
Z: agent.getZ()
Note that after agents are retrieved to a specified coordinate, fleets no longer seem to follow the paths in the network.

How to force the next node to visit in a VRPTW without changing time windows

Let's say I have 3 pickup-and-delivery pairs (6 nodes), each with its own time window.
0-Node_start #Index(0)
1-Pickup, 2-Delivery #Index(1,2)
3-Pickup, 4-Delivery #Index(3,4)
5-Pickup, 6-Delivery #Index(5,6)
7-Node_end #Index(7)
How can I force my vehicle to go from the start node to index 3, then continue with the rest of the route directly, without changing the time window of the node at index 3 or setting the travel time from node 0 to node 3 to 0? This should be possible regardless of the time taken from index 0 to index 3, as long as the time windows allow it.
Also, not sure if this is important in this case, but I use FirstSolutionStrategy.GLOBAL_CHEAPEST_ARC
I have found a solution that works for my case; hopefully it will work for others too. I used NextVar:
consecutive_locations = [[1, 3], [7, 9]]
for first, second in consecutive_locations:
    # Map node numbers to the model's internal indices via the RoutingIndexManager.
    routing.solver().Add(routing.NextVar(manager.NodeToIndex(first))
                         == manager.NodeToIndex(second))
I used a loop because I have multiple vehicles; each vehicle has a specific starting point and a location I want it to visit right after that starting point. Solving takes longer though; I think the FirstSolutionStrategy might be the issue (not sure).
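In the same spirit, if the forced stop must come immediately after a vehicle's start node, a minimal sketch (assuming the usual OR-Tools Python setup, with a RoutingIndexManager named manager and a RoutingModel named routing; the vehicle and node numbers are illustrative) could be:
# Hypothetical sketch: pin node 3 as the first stop of vehicle 0.
start_index = routing.Start(0)       # internal index of vehicle 0's start node
first_stop = manager.NodeToIndex(3)  # internal index of node 3
routing.solver().Add(routing.NextVar(start_index) == first_stop)
This only fixes the successor relation, so the solver still respects the time window at node 3 and the actual travel time from the start node.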

System call to count the number of system calls in xv6

I want to create a system call that reports the number of times every system call has been invoked since a certain switch was tripped. That is, I want to define a variable (let's name it 'counting'). 'counting' is toggled between 0 for OFF and 1 for ON by a different system call, but leave that aside for now. When 'counting' is ON, I want my system call to print a list of all the system calls and the number of times each has been invoked since 'counting' was last set to ON. If 'counting' is OFF, I want this system call to display no list, or just a message such as "Counting hasn't started yet" or "Counting has not been turned on". How can I proceed with this?

How does Solaris SMF determine if something is to be in maintenance or to be restarted?

I have a daemon process that I wrote being executed by SMF. The problem is that when an error occurs, my failure-handling code runs and the daemon then needs to restart from scratch. Right now it exits with sys.exit(0) (Python), but SMF keeps throwing it into maintenance mode.
I've worked with SMF enough to know that it sometimes auto-restarts certain services (and lets others fail and have you deal with them like this). How do I classify this process as one that needs to auto-restart? Is it an SMF setting, a method of failing, what?
Manpage
Solaris uses a combination of startd/critical_failure_count and startd/critical_failure_period as described in the svc.startd manpage:
startd/critical_failure_count
startd/critical_failure_period
The critical_failure_count and critical_failure_period properties together specify the maximum number of service failures allowed in a given time interval before svc.startd transitions the service to maintenance. If the number of failures exceeds critical_failure_count in any period of critical_failure_period seconds, svc.startd will transition the service to maintenance.
Defaults in the source code
The defaults can be found in the source; the value depends on whether the service is "wait style":
if (instance_is_wait_style(inst))
        critical_failure_period = RINST_WT_SVC_FAILURE_RATE_NS;
else
        critical_failure_period = RINST_FAILURE_RATE_NS;
The defaults are either 5 failures/10 minutes or 5 failures/second:
#define RINST_START_TIMES 5 /* failures to consider */
#define RINST_FAILURE_RATE_NS 600000000000LL /* 1 failure/10 minutes */
#define RINST_WT_SVC_FAILURE_RATE_NS NANOSEC /* 1 failure/second */
These variables can be set as properties in the SMF manifest:
<service_bundle type="manifest" name="npm2es">
  <service name="site/npm2es" type="service" version="1">
    ...
    <property_group name="startd" type="framework">
      <propval name="critical_failure_count" type="integer" value="10"/>
      <propval name="critical_failure_period" type="integer" value="30"/>
      <propval name="ignore_error" type="astring" value="core,signal"/>
    </property_group>
    ...
  </service>
</service_bundle>
TL;DR
After checking against the startd values: if the service is "wait style", it will be throttled to a maximum restart rate of 1/sec until it no longer exits with a non-cfg error; if the service is not "wait style", it will be put into maintenance mode.
Presuming a normal service manifest, I would suspect that you're dropping into maintenance because SMF is restarting you "too quickly" (which is a bit arbitrarily defined). svcs -xv should tell you if that is the case. If it is, SMF is restarting you, you're exiting again rapidly, and it has decided to give up until the problem is fixed (and you've manually svcadm clear'd it).
I'd wondered if exiting 0 (and indicating success) may cause further confusion, but it doesn't appear that it will.
I don't think Oracle Solaris allows you to tune what SMF considers "too quickly".
You have to create a service manifest, which is more complicated than it sounds. The white paper below has example manifests and documents the manifest structure:
http://www.oracle.com/technetwork/server-storage/solaris/solaris-smf-manifest-wp-167902.pdf
As it turns out, I had two pkills in a row to make sure everything was terminated correctly. The second one, naturally, was exiting with something other than 0, since there was nothing left for it to kill. Changing the script to end with an exit 0 solved the problem.

Getting the IO count

I am using the Xen hypervisor and am trying to get the I/O count of the VMs running on top of it. Can someone suggest a way or a tool to get the I/O count? I tried xenmon and virt-top: virt-top doesn't give any value and xenmon always shows 0. Any suggestions for getting the number of read or write calls made by a VM, or the read and write (block I/O) bandwidth of a particular VM? Thanks!
You can read this directly from sysfs on most systems. You want to open the following directory:
/sys/devices/xen-backend
And look for directories starting with vbd-
The nomenclature is:
vbd-{domain_id}-{vbd_id}/statistics
Inside, you'll find what you need, which is:
br_req - Number of barrier requests
oo_req - Number of 'out of' requests (no room left in the list to service a given request)
rd_req - Number of read requests
rd_sect - Number of sectors read
wr_req - Number of write requests
wr_sect - Number of sectors written
The br_req value is an aggregate count of things like write barriers, aborts, etc.
Note: for this to work, the kernel has to be told to export Xen attributes via sysfs, but most Xen packages have this enabled. Additionally, the location in sysfs might differ with earlier versions of Xen.
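To illustrate, a minimal Python sketch that walks those statistics directories and prints the counters could look like the following (the path and counter names are the ones listed above; they can differ between Xen versions, so treat this as a starting point):
import glob
import os

# Hypothetical sketch: dump per-VBD I/O counters from the Xen backend sysfs tree.
for stats_dir in glob.glob('/sys/devices/xen-backend/vbd-*/statistics'):
    vbd = os.path.basename(os.path.dirname(stats_dir))  # e.g. vbd-{domain_id}-{vbd_id}
    counters = {}
    for name in ('rd_req', 'rd_sect', 'wr_req', 'wr_sect', 'oo_req', 'br_req'):
        path = os.path.join(stats_dir, name)
        if os.path.exists(path):  # not every counter exists on every Xen version
            with open(path) as f:
                counters[name] = int(f.read().strip())
    print(vbd, counters)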
Have you tried xentop?
There is also bwm-ng (check your distro). It shows block utilization per disk (real/virtual). If you know the name of the virtual disk attached to the VM, then you can use bwm-ng to get those stats.