Seizing multiple resource sets by Java - AnyLogic

I have a generic process that varies in the type of carriers involved: e.g., Process A uses Carrier 1, Process B uses Carriers 2 and 3, and so on. Hence I have a collection col_Carrier to store the types of Carriers involved in each process. However, as the number of Carriers can vary, I need a way for the model to dynamically read all the contents of col_Carrier instead of hardcoding col_Carrier.get(index).
I tried various methods, but I can't figure out the correct Java code that lets me read all the items within col_Carrier for this scenario. I've tried:
{{
col_Carrier
}}
This gives the error "Type mismatch: cannot convert ArrayList to ResourcePool".
{{
(ResourcePool) I : col_Carrier
}}
This gives two errors: "The field Agent.i is not visible" and "Syntax error on token ":", :: expected".
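Both attempts fail because the double braces are read by Java as a nested array initializer, whose inner elements must already be ResourcePool values; an ArrayList (first attempt) or a for-each fragment (second attempt) is rejected there. Assuming col_Carrier is declared as a collection of ResourcePool elements (ArrayList<ResourcePool>) and the field expects a ResourcePool[][] (an array of alternative resource sets), a sketch of the conversion would be:

{ col_Carrier.toArray(new ResourcePool[0]) } // one alternative set holding every pool in the collection

This builds a single set containing every pool currently in the collection, however many there are. If the goal is just to iterate over the pools in plain Java code, the for-each form the second attempt was aiming for needs the for keyword:

for (ResourcePool pool : col_Carrier) {
    traceln(pool.toString()); // e.g., inspect each pool
}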

Related

bwboundaries() C++ code generation error?

I have enabled dynamic memory allocation in Simulation Target > Advanced, with the default threshold of 0. Then I attempt to run this code:
B={{nan}}; %% defined so that it can be declared as a variable-size variable
coder.varsize('B',[50 1]);
B={B;bwboundaries(BW,'noholes')} %% not B{:}=bwboundaries(), as multiple inputs are not allowed
Y=B{2}
boundary=Y{2}
This is the error message:
=== Simulation (Elapsed: 1:21 min) ===
Error:Simulink does not have enough information to determine output sizes for this block. If you think the errors below are inaccurate, try specifying types for the block inputs and/or sizes for the block outputs.
Error:This assignment writes a 'cell' value into a 'double' type in element '{1}{1}'. Code generation does not support changing types through assignment. Check preceding assignments or input type specifications for type mismatches.
Function 'Image Processing System/corner_detection' (#95.612.613), line 21, column 1:
"B"
Error:Errors occurred during parsing of MATLAB function 'flightControlSystem/Image Processing System/corner_detection'
Error:Simulink cannot determine sizes and/or types of the outputs for block 'flightControlSystem/Image Processing System/corner_detection' due to errors in the block body, or limitations of the underlying analysis. The errors might be inaccurate. Fix the indicated errors, or explicitly specify sizes and/or types for all block outputs.
How can I modify the code so it works?

Unique symbol value on type level

Is it possible to have some kind of unique symbol value on the type level that could be used to distinguish (tag) some record, without the need to supply a unique string value?
In JS there is Symbol, often used for such things. But I would like to have it without using Effect, in a pure context.
It could even be something like accessing the fully qualified module name (which is unique enough for the task), but I'm not sure whether this is a relevant/possible thing in the PureScript context.
Example:
Say there is some module that exposes:
type Worker value state =
  { tag :: String
  , work :: value -> state -> Effect state
  }
makeWorker :: forall value state. { tag :: String, work :: value -> state -> Effect state } -> Worker value state
performWork :: forall value state. Worker value state -> value -> Effect Unit
This module is used to manage the state of workers: it passes them a value and the current state, gets back an Effect producing the new state, and puts it in a state map whose keys are the tags.
Users of the module:
In one module:
worker = makeWorker { tag: "WorkerOne", work }
-- Then this tagged `worker` is used to performWork:
-- performWork worker "Some value"
In another module we use worker with another tag:
worker = makeWorker { tag: "WorkerTwo", work }
So it would be nice if there were no need to supply a unique string ("WorkerOne", "WorkerTwo") as a tag, and some "generated" unique value could be used instead. But the constraint is that the worker should be created at the top level of the module, in a pure context.
The semantics of PureScript are pure and pretty much incompatible with this sort of thing: the same expression always produces the same result. The results can be represented differently at a lower level, but in the language semantics they're the same.
And this is a feature, not a bug. In my experience, more often than not, a requirement like yours is an indication of a flawed design somewhere upstream.
An exception to this rule is FFI: if you have to interact with the underlying platform, there is no choice but to play by that platform's rules. One example I can give is React, which uses JavaScript's implicit object identity as a way to tell components apart.
So the bottom line is: I urge you to reconsider the requirement. Chances are, you don't really need it. And even if you do, manually specified strings might actually be better than automatically generated ones, because they may help you troubleshoot later.
But if you really insist on doing it this way, good news: you can cheat! :-)
You can generate your IDs effectfully and then wrap them in unsafePerformEffect to make it look pure to the compiler. For example:
import Effect.Unsafe (unsafePerformEffect)
import Data.UUID (toString, genUUID)
workerTag :: String
workerTag = toString $ unsafePerformEffect genUUID

Drools RETE algorithm confusion

I am having an issue understanding the RETE algorithm's beta nodes, JoinNode and NotNode.
The documentation says:
There are two two-input nodes, JoinNode and NotNode, and both are
types of BetaNodes. BetaNodes are used to compare 2 objects, and their
fields, to each other. The objects may be the same or different types.
By convention, we refer to the two inputs as left and right. The left
input for a BetaNode is generally a list of objects; in Drools this is
a Tuple. The right input is a single object. Two Nodes can be used to
implement 'exists' checks. BetaNodes also have memory. The left input
is called the Beta Memory and remembers all incoming tuples. The right
input is called the Alpha Memory and remembers all incoming objects.
I understand alpha nodes (the various literal conditions of DRL rules), but the documentation above for BetaNodes confuses me a bit.
Say the DRL condition below corresponds to the diagram above:
$person : Person( favouriteCheese == $cheddar )
Questions: 1) What exactly are the left and right inputs to the two-input beta nodes, as explained in the documentation above? I believe it's referring to facts and rules, where the tuples would be facts?
2) Would a NotNode basically be a DRL condition matching a literal condition with not?
Updated question on 6 Sep 17:
3) I believe the diagram above represents a JoinNode; how would a NotNode be represented if the workflow above were altered to suit a NotNode?
The condition corresponding to the diagram would be
Cheese( $name: name == "Cheddar" )
Person( favouriteCheese == $name )
Once there is a match, a new tuple consisting of the matching Cheese and Person is composed and can act as a new tuple for further matches if there is a third pattern in the condition.
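If it helps to see those two memories outside of Drools, here is a plain-Java sketch of the behaviour just described (illustrative names only, not Drools' actual API): the left input remembers tuples, the right input remembers single facts, and each arrival on one side is joined against everything remembered on the other.

import java.util.*;

class JoinNodeSketch {
    record Cheese(String name) {}
    record Person(String favouriteCheese) {}

    final List<List<Object>> betaMemory  = new ArrayList<>(); // left: remembers incoming tuples
    final List<Person>       alphaMemory = new ArrayList<>(); // right: remembers incoming facts

    void insertLeft(List<Object> tuple) {        // a tuple arrives from the node above
        betaMemory.add(tuple);
        for (Person p : alphaMemory) tryJoin(tuple, p);
    }

    void insertRight(Person person) {            // a fact arrives from the alpha network
        alphaMemory.add(person);
        for (List<Object> t : betaMemory) tryJoin(t, person);
    }

    void tryJoin(List<Object> tuple, Person person) {
        Cheese c = (Cheese) tuple.get(tuple.size() - 1);
        if (person.favouriteCheese().equals(c.name())) {
            List<Object> joined = new ArrayList<>(tuple);
            joined.add(person);                  // the longer tuple that propagates downstream
            System.out.println("joined tuple: " + joined);
        }
    }

    public static void main(String[] args) {
        JoinNodeSketch node = new JoinNodeSketch();
        node.insertLeft(List.<Object>of(new Cheese("Cheddar")));
        node.insertRight(new Person("Cheddar")); // prints the joined (Cheese, Person) tuple
    }
}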
A not-node would be one that asserts the non-existence of facts matching a pattern; in DRL, something like not Cheese( name == "Cheddar" ). It would fire only once, when no matching fact exists, rather than once per matching fact.
You might find a much better description of "rete" on the web.

How do purely functional compilers annotate the AST with type info?

In the syntax analysis phase, an imperative compiler can build an AST out of nodes that already contain a type field that is set to null during construction, and then later, in the semantic analysis phase, fill in the types by assigning the declared/inferred types into the type fields.
How do purely functional languages handle this, where you do not have the luxury of assignment? Is the type-less AST mapped to a different kind of type-enriched AST? Does that mean I need to define two types per AST node, one for the syntax phase, and one for the semantic phase?
Are there purely functional programming tricks that help the compiler writer with this problem?
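For concreteness, the mutable baseline described in the question might look like this in Java (a sketch with made-up names, not any particular compiler's code); the answers below show how to get the same effect without the destructive update:

import java.util.List;

class ImperativeAst {
    record Type(String name) {}                    // stand-in for a real type representation

    static class ExprNode {
        final String op;
        final List<ExprNode> children;
        Type type;                                 // null after parsing

        ExprNode(String op, List<ExprNode> children) {
            this.op = op;
            this.children = children;
        }
    }

    // the semantic-analysis pass fills the null slots by assignment
    static void annotate(ExprNode n) {
        for (ExprNode c : n.children) annotate(c);
        n.type = new Type("Int");                  // stand-in for real inference
    }
}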
I usually rewrite a source AST (or one already lowered by several steps) into a new form, replacing each expression node with a pair (tag, expression).
Tags are unique numbers or symbols, which are then used by the next pass, which derives type equations from the AST. E.g., a + b will yield something like { numeric(Tag_a). numeric(Tag_b). equals(Tag_a, Tag_b). equals(Tag_e, Tag_a). }.
Then the type equations are solved (e.g., by simply running them as a Prolog program), and, if successful, all the tags (which are variables in this program) become bound to concrete types; if not, they are left as type parameters.
In the next step, the previous AST is rewritten again, this time replacing tags with all the inferred type information.
The whole process is a sequence of pure rewrites; there is no need to change anything in the AST destructively. A typical compilation pipeline may take a couple dozen rewrites, some of them changing the AST datatype.
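Here is a rough Java rendering of that tag-and-equations idea (my own illustrative sketch, not the author's pipeline; needs Java 16+ for records): tags are integers, the equations for a + b are collected into a list, and a small union-find plays the role of the Prolog-style solver.

import java.util.*;

public class TagDemo {
    record Numeric(int tag) {}                  // numeric(t): tag t must be a numeric type
    record Equals(int a, int b) {}              // equals(a, b): tags a and b have the same type

    static int find(int[] p, int t) { return p[t] == t ? t : (p[t] = find(p, p[t])); }

    public static void main(String[] args) {
        // Tags for e = a + b: Tag_a = 0, Tag_b = 1, Tag_e = 2.
        List<Object> equations = List.of(
            new Numeric(0), new Numeric(1),     // numeric(Tag_a). numeric(Tag_b).
            new Equals(0, 1),                   // equals(Tag_a, Tag_b).
            new Equals(2, 0));                  // equals(Tag_e, Tag_a).

        // "Solving": equals-equations merge equivalence classes of tags...
        int[] parent = {0, 1, 2};
        for (Object eq : equations)
            if (eq instanceof Equals e) parent[find(parent, e.a())] = find(parent, e.b());

        // ...and numeric-equations bind whole classes to a concrete type.
        Map<Integer, String> solution = new HashMap<>();
        for (Object eq : equations)
            if (eq instanceof Numeric n) solution.put(find(parent, n.tag()), "Int");

        // Tags whose class stays unbound would remain type parameters.
        for (int t = 0; t < parent.length; t++)
            System.out.println("Tag_" + t + " :: " + solution.getOrDefault(find(parent, t), "type parameter"));
    }
}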
There are several options to model this. You may use the same kind of nullable data fields as in your imperative case:
data Exp = Var Name (Maybe Type) | ...
parse :: String -> Maybe Exp -- types are Nothings here
typeCheck :: Exp -> Maybe Exp -- turns Nothings into Justs
or even, using a more precise type
data Exp ty = Var Name ty | ...
parse :: String -> Maybe (Exp ())
typeCheck :: Exp () -> Maybe (Exp Type)
I can't speak to how it is supposed to be done, but I did this in F# for a C# compiler here.
The approach was basically: build an AST from the source, leaving things like type information unconstrained - so AST.fs is basically the AST, with strings for the type names, function names, etc.
As the AST starts to be compiled to (in this case) .NET IL, we end up with more type information (we create the types in the source - let's call these type stubs). This then gives us the information needed to create method stubs (the code may have signatures that include type stubs as well as built-in types). From here we have enough type information to resolve any of the type names or method signatures in the code.
I store that in the file TypedAST.fs. I do this in a single pass; however, the approach may be naive.
Now that we have a fully typed AST, you could compile it, fully analyze it, or do whatever you like with it.
So, in answer to the question "Does that mean I need to define two types per AST node, one for the syntax phase, and one for the semantic phase?", I can't say definitively that this is the case, but it is certainly what I did, and it appears to be what MS have done with Roslyn (although they have essentially decorated the original tree with type info, IIRC).
"Are there purely functional programming tricks that help the compiler writer with this problem?"
Given that the ASTs are essentially mirrored in my case, it would be possible to make the tree generic and transform it, but the code might end up (even more) horrendous.
i.e.
type 'ty AST =
| MethodInvoke of 'ty * Name * 'ty list
| ...
As when dealing with relational databases, in functional programming it is often a good idea not to put everything into a single data structure.
In particular, there may not be a data structure that is "the AST".
Most probably, there will be data structures that represent parsed expressions. One possible way to deal with type information is to assign a unique identifier (say, an integer) to each node of the tree during parsing, and to keep a suitable data structure (such as a hash map) that associates those node ids with types. The job of the type-inference pass, then, is just to create this map.
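A small Java sketch of that side-table idea (hypothetical names; Java 16+): the parser stamps each node with an id, and the inference pass only builds a map from ids to types, leaving the tree itself untouched.

import java.util.*;

class SideTable {
    interface Exp { int id(); }
    record Lit(int id, int value) implements Exp {}
    record Add(int id, Exp left, Exp right) implements Exp {}

    // the "annotation" is this map; no field of the tree is ever mutated
    static void infer(Exp e, Map<Integer, String> types) {
        if (e instanceof Lit l) {
            types.put(l.id(), "Int");
        } else if (e instanceof Add a) {
            infer(a.left(), types);
            infer(a.right(), types);
            types.put(a.id(), "Int");   // both operands were handled above
        }
    }

    public static void main(String[] args) {
        Exp e = new Add(2, new Lit(0, 1), new Lit(1, 2)); // ids assigned during parsing
        Map<Integer, String> types = new HashMap<>();
        infer(e, types);
        System.out.println(types);      // {0=Int, 1=Int, 2=Int}
    }
}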

Recommended macros to add functionality to Clojure's defrecord constructor?

defrecord in Clojure allows for defining simple data containers with custom fields.
e.g.
user=> (defrecord Book [author title ISBN])
user.Book
The resulting minimal constructor takes only positional arguments, with no additional functionality such as field defaults, field validation, etc.
user=> (Book. "J.R.R Tolkien" "The Lord of the Rings" 9780618517657)
#:user.Book{:author "J.R.R Tolkien", :title "The Lord of the Rings", :ISBN 9780618517657}
It is always possible to write functions wrapping the default constructor to get more complex construction semantics - using keyword arguments, supplying defaults and so on.
This seems like the ideal scenario for a macro to provide expanded semantics. What macros have people written and/or recommend for richer defrecord construction?
Examples of support for full and partial record constructor functions and support for eval-able print and pprint forms:
http://david-mcneil.com/post/765563763/enhanced-clojure-records
http://github.com/david-mcneil/defrecord2
David is a colleague of mine and we are using this defrecord2 extensively in our project. I think something like this should really be part of Clojure core (details might vary considerably of course).
The things we've found to be important are:
Ability to construct a record with named (possibly partial) parameters: (new-foo {:a 1})
Ability to construct a record by copying an existing record and making modifications: (new-foo old-foo {:a 10})
Field validation - if you pass a field outside the declared record fields, throw an error. Of course, this is actually legal and potentially useful, so there are ways to make it optional. Since it would be rare in our usage, it's far more likely to be an error.
Default values - these would be very useful but we haven't implemented it. Chas Emerick has written about adding support for default values here: http://cemerick.com/2010/08/02/defrecord-slot-defaults/
Print and pprint support - we find it very useful to have records print and pprint in a form that is eval-able back to the original record. For example, this allows you to run a test, swipe the actual output, verify it, and use it as the expected output. Or to swipe output from a debug trace and get a real eval-able form.
Here is one that defines a record with default values and invariants. It creates a constructor that can take keyword arguments to set the fields' values.
(defconstrainedrecord Foo [a 1 b 2]
  [(every? number? [a b])])
(new-Foo)
;=> #user.Foo{:a 1, :b 2}
(new-Foo :a 42)
;=> #user.Foo{:a 42, :b 2}
And like I said... invariants:
(new-Foo :a "bad")
; AssertionError
But they only make sense in the context of Trammel.