How do you take user input? - Q#

How do you take user input as an integer or string?

You can do this as part of a Main method in your C# or Python driver code, or you can use the new support for Q# command-line executables called out in the recent release (0.11.2004.2825). If you follow the link for the quantum random number generator sample and scroll down, you'll see an example there of using @EntryPoint() to denote the Q# operation that should be used to generate the entry point code. It also causes any arguments to that operation to automatically become command-line parameters for the built executable. You can try this in the sample by updating the code to take max as an argument, like this:
@EntryPoint()
operation SampleRandomNumber(max : Int) : Int {
    Message($"Sampling a random number between 0 and {max}: ");
    return SampleRandomNumberInRange(max);
}
Then, when you run the sample via dotnet run, you'll see that it now requires --max as a command-line parameter and handles the translation into the correct input type for the Q# operation. You can then pass the parameter like this to get the same behavior as the original sample: dotnet run --max 50
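If you go the driver-code route instead, here is a minimal Python sketch of the same argument handling. The qsharp call in the trailing comment is illustrative only (it assumes the qsharp Python package and the sample's operation name), not taken from the sample:

```python
import argparse

# Parse --max ourselves, mirroring what the Q# @EntryPoint() support
# generates automatically for a command-line executable.
def parse_max(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("--max", type=int, required=True)
    return parser.parse_args(argv).max

if __name__ == "__main__":
    max_value = parse_max(["--max", "50"])
    print(max_value)  # 50
    # A real driver would now invoke the Q# operation with max_value, e.g.
    # via the qsharp Python package (shown for illustration only):
    # import qsharp
    # result = SampleRandomNumberInRange.simulate(max=max_value)
```

The same pattern works for string-typed inputs by changing `type=int` to `type=str`.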
Hope that helps!

Related

How to change verbosity of uvm components after certain condition

I am trying to change the UVM verbosity of the simulation after certain conditions are satisfied. Verbosity options for the different components are passed on the command line via +uvm_set_verbosity. Once the conditions are met, the simulation should run with the command-line +uvm_set_verbosity options; until then, it runs with low verbosity for all components.
Looking through the UVM library code, it appears there is a function called m_set_cl_msg_args(). It calls three other functions that consume the command-line arguments +uvm_set_verbosity, +uvm_set_action, and +uvm_set_severity.
So what I did was get the uvm_root instance from the uvm_coreservice singleton, then use get_children() from the uvm_component class to recursively collect a queue of all the uvm_components in the simulation, and finally call m_set_cl_msg_args() on each of them.
My code looks like:
begin
  uvm_root r;
  uvm_coreservice_t cs_t;
  uvm_component array_uvm[$];

  cs_t = uvm_coreservice_t::get();
  r = cs_t.get_root();
  r.get_children(array_uvm);
  foreach (array_uvm[i])
    array_uvm[i].m_set_cl_msg_args();
end
This code compiles, but it does not change the verbosity. Any ideas? I am able to print all the components in array_uvm, so I am guessing that
array_uvm[i].m_set_cl_msg_args();
is the wrong call.
Does anyone have another suggestion for changing verbosity at run time?
You should never use functions in the UVM that are not documented in the language reference manual. They can (and do) change in any revision. I'm guessing +uvm_set_verbosity only works at time 0 by default.
There is already a documented function that does what you want:
uvm_top.set_report_verbosity_level_hier()
I suggest using +UVM_VERBOSITY=UVM_LOW to start your test, and then define your own switch for activating the conditional setting.
If you want a specific component, use component_h.set_report_verbosity_level() (add the _hier suffix to set all of its children as well).
You can use the UVM command-line processor's get_arg_values() method to specify the name of the component(s) you want to set, and then use uvm_top.find() to get a handle to each component.
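Putting those pieces together, a minimal sketch (the +set_verb= plusarg name and the UVM_HIGH level are illustrative choices, not fixed by the answer above):

```systemverilog
// Somewhere in a component or test, once your trigger condition fires:
string paths[$];
uvm_cmdline_processor clp = uvm_cmdline_processor::get_inst();

// +set_verb=<component path> is a made-up switch for this example.
if (clp.get_arg_values("+set_verb=", paths) > 0) begin
  foreach (paths[i]) begin
    uvm_component c = uvm_top.find(paths[i]);
    if (c != null)
      c.set_report_verbosity_level_hier(UVM_HIGH); // or a level you parsed
  end
end
```

This stays entirely within documented UVM API: uvm_cmdline_processor, uvm_top.find(), and set_report_verbosity_level_hier().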

ShardManagement Binary ShardingKey Max Value

I am currently setting up my shard map manager, shard map, etc. via the PowerShell script provided here.
While the example uses int as the ShardingKey type, I have a binary(16) ShardingKey. Does anyone know how to determine and pass a high key of the maximum binary(16) value to Add-RangeMapping?
The Add-RangeMapping function calls the Range(low, high) constructor. To create a range with the maximum high value, it would need to call the Range(low) constructor instead, which creates a range whose high end is unbounded. So you'll have to edit the Add-RangeMapping function appropriately (or just call directly into the library code without using Add-RangeMapping).
The PowerShell helper scripts don't cover the complete Elastic Database Client Library surface area so you may find a few more gaps.
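For reference, a sketch of what that direct library call could look like in C#. It assumes a RangeShardMap<byte[]> named rangeShardMap and a Shard named shard already exist; the single-argument Range constructor is the one the answer above refers to:

```csharp
using Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement;

// Assumes rangeShardMap (a RangeShardMap<byte[]>) and shard already exist.
byte[] low = new byte[16];               // all-zeros low key for binary(16)
var openEnded = new Range<byte[]>(low);  // single-arg ctor: no high key, range is unbounded above
rangeShardMap.CreateRangeMapping(openEnded, shard);
```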

Echoing output in Netlogo (from extensions)

I am trying to make an extension in NetLogo. It is easy to write a method that invokes a function returning a value using org.nlogo.api.Reporter, but sometimes functions have side effects and write output to the standard output channel. How can I arrange for that output to be written or redirected to the NetLogo command center? For example, the Netprologo extension allows me to invoke SWI-Prolog. If I invoke the predicate write(6), NetLogo receives the predicate's result, true or false, but the 6 itself (the side effect) is not written in the NetLogo command center.
I believe Workspace.outputObject is what you're looking for. Check out the code for print and show for a couple of examples of how to use it. In both cases, the args(0).report(context) part is just the thing to be printed.

Managing multiple anylogic simulations within an experiment

We are developing an ABM under AnyLogic 7 and are at the point where we want to make multiple simulations from a single experiment. Different parameters are to be set for each simulation run so as to generate results for a small suite of standard scenarios.
We have an experiment that auto-starts without the need to press "Run". Subsequent presses of Run increment the experiment counter and rerun the model.
What we'd like is a way to have the auto-run, or single press of Run, launch a loop of simulations. Within that loop would be the programmatic adjustment of the variables linked to passed parameters.
EDIT: One wrinkle is that some parameters are strings. The Optimization and Parameter Variation experiments don't lend themselves to enumerating a set of strings to be used across a set of simulation runs; you can only set a single string per parameter for all the simulation runs within one experiment.
We've used the help sample "Running a Model from Outside Without Presentation Window" to add the auto-run capability to the experiment's initial setup block of code. What's needed is a way to wait for Run 0 to complete, then dispatch Run 1, 2, etc.
Pointers to tutorial models with such features, or to a snip of code for the experiment's java blocks are much appreciated.
Maybe I don't understand your need, but this certainly sounds like a case for a "Parameter Variation" experiment. You can specify which parameters should be varied in which steps, and running the experiment automatically starts as many simulation runs as needed, all without animation.
Hope that helps.
Like you, I was confronted with this problem. My aim was to use parameter variation with a model where the variation was over non-numeric data, and I knew the number of runs to start.
I succeeded in this task with the help of a Custom Experiment.
First, I built an experiment typed as 'multiple run' and created my GUI (the user was able to select the string values used in each run).
Then I created a new Java class which inherits from the previous 'multiple run' experiment.
This class (called MyMultipleRunClass) contains:
- an override of the getMaximumIterations method from the default experiment, to report the correct number of iterations to the default AnyLogic callback; the iteration index was also used to retrieve my parameter value from an array,
- an implementation of the static method start:
public static void start() {
    prepareBeforeExperimentStart_xjal(MyMultipleRunClass.class);
    MyMultipleRunClass ex = new MyMultipleRunClass();
    ex.setCommandLineArguments_xjal(null);
    ex.setup(null);
}
The experiment to run is then the 'empty' custom experiment, which automatically starts the other multiple-run experiment through the presented subclass.
A shorter path may exist, but from my point of view AnyLogic is used correctly here (no tricks with non-exposed interfaces) and it works as expected.

MATLAB Engine: engEvalString() won't return if given incomplete input

I'm using the MATLAB Engine C interface on OS X. I noticed that if engEvalString() is given an incomplete MATLAB input such as
engEvalString(ep, "x=[1 2");
or
engEvalString(ep, "for i=1:10");
then the function simply never returns. The quickest way to test this is using the engdemo.c example which will prompt for a piece of MATLAB code and evaluate it (i.e. you can type anything).
My application lets the user enter arbitrary MATLAB input and evaluate it, so I can't easily protect against incomplete input. Is there a workaround? Is there a way to prevent engEvalString() from hanging in this situation or is there a way to check an arbitrary piece of code for correctness/completeness before I actually pass it to MATLAB?
As you noted, it seems this bug is specific to Mac and/or Linux (I couldn't reproduce it on my Windows machine). As a workaround, wrap the calls in eval, evalc, or evalin:
engEvalString(ep, "eval('x = [1,2')")
Furthermore, an undocumented feature of those functions is that they take a second input that is evaluated in case an error occurs in the first one. For example:
ERR_FLAG = false;
eval('x = [1,2', 'x=nan; ERR_FLAG=true;')
You can trap errors that way by querying the value of a global error flag, and still avoid the bug above...
This was confirmed by support to be a bug in the MATLAB Engine interface on OS X (it's not present in Windows). Workarounds are possible by using the MATLAB functions eval, evalc, or similar. Instead of directly passing the code to engEvalString(), wrap it in these first.
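A minimal sketch of doing that wrapping on the C side before calling engEvalString(). MATLAB escapes a single quote inside a single-quoted string by doubling it, so the helper doubles embedded quotes while copying; the function name and buffer handling are this example's own choices:

```c
#include <stdio.h>
#include <string.h>

/* Wrap arbitrary user code in MATLAB's eval('...') so that incomplete
 * input such as "x=[1 2" produces a MATLAB parse error instead of
 * hanging the engEvalString() call. Embedded single quotes are doubled,
 * which is how MATLAB escapes them inside single-quoted strings. */
void wrap_in_eval(const char *code, char *out, size_t outsz) {
    size_t j = 0;
    const char *prefix = "eval('";
    for (const char *p = prefix; *p != '\0' && j + 1 < outsz; ++p)
        out[j++] = *p;
    for (const char *p = code; *p != '\0' && j + 3 < outsz; ++p) {
        if (*p == '\'')
            out[j++] = '\'';   /* double the embedded quote */
        out[j++] = *p;
    }
    if (j + 2 < outsz) {
        out[j++] = '\'';       /* closing quote of eval's string */
        out[j++] = ')';
    }
    out[j] = '\0';
}
```

Usage: instead of engEvalString(ep, user_code), call wrap_in_eval(user_code, buf, sizeof buf) and pass buf, so "x=[1 2" is sent as eval('x=[1 2'), which returns promptly with an error rather than blocking.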