Is there a method to randomly generate agents created from a database? (AnyLogic)

I am currently generating agents at a source node with parameters read from a database. These agents in the model are packages of different types: the packages share the same list of parameter names (package name, volume, etc.) but have differing parameter values. The issue is that I need these packages to be generated randomly, but they are currently generated in the sequence in which they are listed in the DB. Is there any way to amend the code to achieve this?
Snippet of current code:
{
    DbPackages _result_xjal = new DbPackages();
    _result_xjal.setParametersToDefaultValues();
    _result_xjal.packageDb = self.databaseTable.getValue( "package_db", String.class );
    _result_xjal.tester = self.databaseTable.getValue( "tester", String.class );
    _result_xjal.handler = self.databaseTable.getValue( "handler", String.class );
    ...
    return _result_xjal;
}

This can be achieved using the following code. This example is for a table with 3 records:
List<Tuple> vals = selectFrom(db_table)
    .offset(uniform_discr(0, 2)) // <- this randomly offsets the output
    .orderBy(db_table.db_name.asc())
    .list();
Tuple t = vals.get(0); // <- this line picks the first record in the result
traceln("Got value: {%s, %s, %d}",
    t.get(db_table.db_name),
    t.get(db_table.db_value1),
    t.get(db_table.db_value2)
);
So suppose the input db_table looks like this:
db_name  db_value1  db_value2
a        foo1       1
b        bar10      10
c        foo100     100
Then the above code will produce this output:
Got value: {c, foo100, 100}
Got value: {b, bar10, 10}
Got value: {a, foo1, 1}
Got value: {a, foo1, 1}
Got value: {c, foo100, 100}
Got value: {a, foo1, 1}
Got value: {c, foo100, 100}
Got value: {c, foo100, 100}
Got value: {c, foo100, 100}
Got value: {c, foo100, 100}
Got value: {b, bar10, 10}
Got value: {b, bar10, 10}
Got value: {a, foo1, 1}
As evident, each call picks a random record out of the 3 in the database and prints its contents.

You can read from the database with code, shuffle the list to randomize it, and then generate the agents with their characteristics:
List<Tuple> x = selectFrom(db).list();
Collections.shuffle(x); // shuffle(x, someRandom) takes an explicit java.util.Random for reproducibility
for (Tuple t : x)
    add_agents(t.get(db.whatever));

Problems generating interval information

Given a binary function over time, I am trying to extract information about the intervals occurring in this function.
E.g. I have the states a and b, and the following function:
a, a, b, b, b, a, b, b, a, a
Then I would want facts of the form interval(Start, Length, Value), like this:
interval(0, 2, a)
interval(2, 3, b)
interval(5, 1, a)
interval(6, 2, b)
interval(8, 2, a)
Here is what I got so far:
time(0..9).
duration(1..10).
value(a;b).
1 { function(T, V) : value(V) } 1 :- time(T).
interval1(T, Length, Value) :-
    time(T), duration(Length), value(Value),
    function(Ti, Value) : Ti >= T, Ti < T + Length, time(Ti).
:- interval1(T, L, V), function(T + L, V).
#show function/2.
#show interval1/3.
This actually works reasonably well, but still not correctly. This is my output when I run it with clingo 4.5.4:
function(0,b)
function(1,a)
function(2,b)
function(3,a)
function(4,b)
function(5,a)
function(6,b)
function(7,a)
function(8,b)
function(9,a)
interval1(0,1,b)
interval1(1,1,a)
interval1(2,1,b)
interval1(3,1,a)
interval1(4,1,b)
interval1(5,1,a)
interval1(6,1,b)
interval1(7,1,a)
interval1(8,1,b)
interval1(9,1,a)
interval1(9,10,a)
interval1(9,2,a)
interval1(9,3,a)
interval1(9,4,a)
interval1(9,5,a)
interval1(9,6,a)
interval1(9,7,a)
interval1(9,8,a)
interval1(9,9,a)
which has only one bug: all the intervals starting at T == 9 (except the one where L == 1) should not be there, since they run past the end of the time range.
So I tried to add the following constraint to get rid of those:
:- interval1(T, L, V), not time(T + L - 1).
which in my mind translates to "it is prohibited to have an interval such that T + L - 1 is not a time".
But now clingo said the problem was unsatisfiable.
So I tried another solution, which should do the same but in a slightly less general way:
:- interval1(T, L, V), T + L > 10.
This also made the whole thing unsatisfiable.
I really don't understand this; I'd expect both of those rules simply to get rid of the intervals that run past the end of the function.
So why do they completely kill all models?
Also, during my experiments, I replaced the function rule with:
function(
0, a;
1, a;
2, b;
3, b;
4, b;
5, b;
6, a;
7, b;
8, a;
9, a
).
This made the whole thing unsatisfiable even without the problematic constraints. Why is that?
So yeah ... I guess I fundamentally misunderstood something, and I would be really grateful if someone could tell me what exactly that is.
Best Regards
Uzaku
The programs with constraints are inconsistent because in ASP any program which contains both the fact a. and the constraint :- a. is inconsistent. You are basically saying that a is true and, at the same time, that a cannot be true.
In your case, for example, you have a rule which says that interval1(9,10,a) is true for some function, and, on the other hand, you have a constraint which says that interval1(9,10,a) cannot be true, so you get an inconsistency.
A way to get rid of the undesired intervals would be, for example, to add an extra atom to the definition of an interval, e.g.:
interval1(T, Length, Value) :-
    time(T), duration(Length), value(Value),
    time(T + Length - 1), % I added this
    function(Ti, Value) : Ti >= T, Ti < T + Length, time(Ti).
Now the program is consistent.
I couldn't reproduce the inconsistency for the specific function you have provided. For me, the following is consistent:
time(0..9).
duration(1..10).
value(a;b).
%1{ function(T, V): value(V) }1 :- time(T).
function(0,a).
function(1,a).
function(2,b).
function(3,b).
function(4,b).
function(5,b).
function(6,a).
function(7,b).
function(8,a).
function(9,a).
interval1(T, Length, Value) :-
    time(T), duration(Length), value(Value),
    time(T + Length - 1),
    function(Ti, Value) : Ti >= T, Ti < T + Length, time(Ti).
#show function/2.
#show interval1/3.
This is what I get in the output:
$ clingo test 0
clingo version 4.5.4
Reading from test
Solving...
Answer: 1
function(0,a) function(1,a) function(2,b) function(3,b) function(4,b) function(5,b) function(6,a) function(7,b) function(8,a) function(9,a) interval1(0,1,a) interval1(1,1,a) interval1(0,2,a) interval1(6,1,a) interval1(8,1,a) interval1(9,1,a) interval1(8,2,a) interval1(2,1,b) interval1(3,1,b) interval1(2,2,b) interval1(4,1,b) interval1(3,2,b) interval1(2,3,b) interval1(5,1,b) interval1(4,2,b) interval1(3,3,b) interval1(2,4,b) interval1(7,1,b)
SATISFIABLE
Models : 1
Calls : 1
Time : 0.002s (Solving: 0.00s 1st Model: 0.00s Unsat: 0.00s)
CPU Time : 0.000s
We are getting more intervals than needed, since some of them are not maximal, but I am leaving this for you to think about :)
Hope this helps.

Custom index comparator in MongoDB

I'm working with a dataset composed of probabilistically encrypted elements that are indistinguishable from random samples; sequential encryptions of the same number therefore result in different ciphertexts. However, these are still comparable through a special function that applies algorithms like SHA256 to compare two ciphertexts.
I want to add a list of the described ciphertexts to a MongoDB database and index it using a tree-based structure (e.g. an AVL tree). I can't simply apply the database's default indexing because, as described, the records must be compared using the special function.
An example: Suppose I have a database db and a collection c composed by the following document type:
{
    "_id": ObjectId,
    "r": string
}
Moreover, let F(int,string,string) be the following function:
F(h,l,r) = ( SHA256(l | r) + h ) % 3
where the operator | is a standard concatenation function.
I want to execute the following query in an efficient way, i.e. on a collection with some suitable indexing:
db.c.find( { F(h,l,r) : { $eq: 0 } } )
for h and l chosen arbitrarily, but not constant. That is: suppose I want to find all records satisfying F(h1,l1,r) = 0 for some pair (h1, l1). Later, I want to do the same using (h2, l2) such that h1 != h2 and l1 != l2. h and l may assume any integer value.
How can I do that?
You can execute this query using the $where operator, but $where can't use an index, so query performance depends on the size of your dataset.
db.c.find({ $where: function() { return F(1, "bb", this.r) == 0; } })
Before executing the code above, you need to store your function F on the MongoDB server:
db.system.js.save({
    _id: "F",
    value: function(h, l, r) {
        // the body of function F
    }
})
Links:
store javascript function on server
I've tried a solution that stores the result of the function in your collection, so I changed the schema as below:
{
    "_id": ObjectId,
    "r": {
        "_key": F(H, L, value),
        "value": String
    }
}
The field r._key holds the value of F(h,l,r) for constant h and l, and the field r.value holds the original r field.
So you can create an index on the field r._key, and your query condition becomes:
db.c.find( { "r._key" : 0 } )

Combine Records in Purescript

Given I have the following records in PureScript:
let name = {name: "Jim"}
let age = {age: 37}
is it possible to combine those two records somehow in a generic way?
Something like:
name `comb` age
such that I get the following record:
{name: "Jim", age: 37}
Somehow it seems to be possible with the Eff row type, but I'm curious whether it would be possible with "normal" records. I'm new to PureScript and its record syntax.
Thanks a lot.
EDIT:
It seems that currently the official package for handling record manipulations is purescript-record - you can find Builder.purs there which provides merge and build functions:
> import Data.Record.Builder (build, merge)
> name = {name: "Jim"}
> age = {age: 37}
> :t (build (merge age) name)
{ name :: String
, age :: Int
}
API NOTE:
This API looks overcomplicated at first glance - especially when you compare it to a simple unionMerge name age call (unionMerge is introduced at the end of this answer). The reason behind Builder's existence (and so this API) is performance. I can assure you that this:
> build (merge name >>> merge age) {email: "someone@example.com"}
creates only one new record. But this:
> unionMerge name (unionMerge age {email: "someone@example.com"})
creates two records during execution.
What is even more interesting is how Builder, build and merge are implemented - Builder is a newtype wrapper around a function (and its composition is just function composition), and build is just function application on a copied version of the record:
newtype Builder a b = Builder (a -> b)
build (Builder b) r1 = b (copyRecord r1)
In merge, an unsafeMerge is performed:
merge r2 = Builder \r1 -> unsafeMerge r1 r2
So why are we gaining anything here? Because we can be sure that intermediate results can't escape the function scope and that every value is consumed exactly once in the builder chain. Therefore we can perform all transformations "in place" in a mutable manner. In other words, this intermediate value:
> intermediate = unionMerge name {email: "someone@example.com"}
> unionMerge age intermediate
can't be "extracted" from here:
> build (merge name >>> merge age) {email: "someone@example.com"}
and it is consumed exactly once, by the next builder, namely merge age.
TYPESYSTEM COMMENT:
It seems that the PureScript type system can handle this now, thanks to the Union type class from Prim:
The Union type class is used to compute the union of two rows
of types (left-biased, including duplicates).
The third type argument represents the union of the first two.
Which has this "magic type" (source: slide 23):
Union r1 r2 r3 | r1 r2 -> r3, r1 r3 -> r2
OLD METHOD (still valid but not preferred):
There is the purescript-records package, which exposes unionMerge, and it does exactly what you want (in the new psci we don't have to use let):
> import Data.Record (unionMerge)
> name = {name: "Jim"}
> age = {age: 37}
> :t (unionMerge age name)
{ name :: String
, age :: Int
}
note: When this answer was accepted, it was true, but now we do have the row constraints it mentions, and a library for manipulating records that includes merges/unions: https://github.com/purescript/purescript-record
It's not possible to do this at the moment, as we don't have a way of saying that a row lacks some label or other. It is possible to have an open record type:
something :: forall r. { name :: String | r } -> ...
But this only allows us to accept a record with name and any other labels, it doesn't help us out if we want to combine, extend, or subtract from records as it stands.
The issue with combining arbitrary records is we'd have a type signature like this:
comb :: forall r1 r2. { | r1 } -> { | r2 } -> ???
We need some way to say the result (???) is the union of r1 and r2, but also we'd perhaps want to say that r1's labels do not overlap with r2's.
In the future this may be possible via row constraints.

Meteor indexing for efficient sorting

I want to sort a collection of documents on the backend with Meteor, in a publication. I want to make sure I index the collection properly, and I couldn't find any concrete guidance on how to do this for the fields I want.
Basically I want to find all the documents in collection "S" that have field "a" equal to a specific string value. Then after I have all those documents, I want to sort them by three fields: b, c (descending), and d.
Would this be the right way to index it:
S._ensureIndex({ a: 1 });
S._ensureIndex({ b: 1, c: -1, d: 1 });
S.find({ a: "specificString"}, { sort: { b: 1, c: -1, d: 1 } });
As a bonus question: field "d" is the time added in milliseconds, so it is highly unlikely that two documents would have all 3 fields the same. Should I also include the unique option in the index?
S._ensureIndex({ b: 1, c: -1, d: 1 }, { unique: true });
Will adding the unique option help at all with sorting and indexing performance?
I was also wondering what the general performance of an indexed sort is in Mongo. Their docs say indexed sorts don't need to be done in memory, because the index is simply read in order. Does this mean the sort is instant?
Thanks for all your help :)
I was able to find my own answer here:
http://docs.mongodb.org/manual/tutorial/sort-results-with-indexes/#sort-and-non-prefix-subset-of-an-index
Basically the correct way to index for my example here is:
S._ensureIndex({ a: 1, b: 1, c: -1, d: 1 });
S.find({ a: "specificString"}, { sort: { b: 1, c: -1, d: 1 } });
-- As for my other subquestions:
Because you are using the index here, the sort itself is essentially free. However, do keep in mind that a 4-field index can be expensive in other ways.
Without looking this up for sure, my hunch is that including unique: true would not make a difference in sort performance, since the index already covers the sort.

How does MongoDB pick a first property it will sort by?

db.numbers.find().sort( { a : 1, b : 1, c : 1 })
If I execute this command, MongoDB will sort the numbers collection by property 'a'; if 'a' is the same on two docs, it will sort them by property 'b', and if that is the same too it will go on to 'c'. I hope I got that right; correct me if not.
But how does it pick property 'a' first when the sort spec is just a JS object? Does it iterate over the sort object's properties using for (var prop in ...), and whichever comes first is also the first to be sorted by?
Internally, MongoDB doesn't use JSON, it uses BSON. While JSON is technically unordered, BSON (per the specification) is ordered. This is how MongoDB knows that in {a:1, b:1, c:1} the keys are ordered "a, b, c": the underlying representation is ordered as well.
As @Sammaye posted above in the comments, the JavaScript dictionary must be created with key priority in mind.
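This key ordering is easy to check in any modern JavaScript engine, where plain objects preserve the insertion order of (non-numeric) string keys:

```javascript
// Plain JS objects preserve insertion order for string keys, so a sort
// spec object keeps the priority in which its keys were written.
const spec = { a: 1, b: 1, c: 1 };
console.log(Object.keys(spec)); // -> [ 'a', 'b', 'c' ]

const reordered = { c: 1, a: 1, b: 1 };
console.log(Object.keys(reordered)); // -> [ 'c', 'a', 'b' ]
```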
Hence, if you do something like this:
db.numbers.find().sort({
a: 1,
b: 1,
c: 1
});
your results will be sorted first by a, then by b, then by c.
If you do this, however:
db.numbers.find().sort({
c: 1,
a: 1,
b: 1
});
your results will be sorted first by c, then by a, then by b.
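The multi-key sort described above behaves like a lexicographic comparator. As a plain JavaScript sketch (just an in-memory illustration, not MongoDB's actual implementation):

```javascript
// Hypothetical in-memory equivalent of .sort({a: 1, b: 1, c: 1}):
// compare by a first, fall back to b on ties, then to c.
const docs = [
  { a: 2, b: 1, c: 3 },
  { a: 1, b: 2, c: 1 },
  { a: 1, b: 1, c: 2 },
];

docs.sort((x, y) => (x.a - y.a) || (x.b - y.b) || (x.c - y.c));

console.log(docs);
// -> [ { a: 1, b: 1, c: 2 }, { a: 1, b: 2, c: 1 }, { a: 2, b: 1, c: 3 } ]
```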
By using those keys you mentioned:
db.numbers.find().sort({
a: 1,
b: 1,
c: 1
});
MongoDB sorts by property a, then b, and then c.
Basically, MongoDB scans from the beginning of the datafile (CMIIW). Hence, if MongoDB finds two or more documents with the same values for your sort keys (a, b, and c), it returns the one found earlier in the datafile first.