How to create a Network Port during run time (AnyLogic)?

Is it possible to create a Network Port during run time (similar to creating Nodes and Paths, etc)? If so, what is the code/syntax?
Assuming I already have a path called 'path1', I tried the following:
NetworkPort nwp1 = new NetworkPort( this, SHAPE_DRAW_2D3D, true, new PathEnd(path1, PathEndType.END) );
But it gives the following error:
Description: Cannot instantiate the type NetworkPort.
Thanks

It seems this is not possible. There is an API function createPort() but it is deprecated.
However, you may (!) be able to "agentify" your port: create an agent type with a port in it and instantiate it as needed dynamically.
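A minimal sketch of that idea, assuming an agent type PortAgent that contains a Network Port shape and an initially empty population portAgents placed on Main (the names and the placement call are illustrative, not taken from the question):
// Illustrative only: "PortAgent" is a custom agent type containing a Network Port shape,
// and "portAgents" is an (initially empty) population of it declared on Main.
PortAgent pa = add_portAgents();   // function generated by AnyLogic for the population
pa.setXY(200, 300);                // move the agent (and its embedded port) to where it is needed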

Related

Deploying an application with Application Parameters through FabricClient

I am currently deploying multiple application instances through the FabricClient. A simple implementation of this would be:
var appDesc = new ApplicationDescription(new Uri(appName), appType, appVersion);
await fabricClient.ApplicationManager.CreateApplicationAsync(appDesc);
Whenever this code is executed the new application is started with its default parameters. It is possible to add name/value pairs to the ApplicationDescription through its constructor, but I would prefer to use the ApplicationParameters.xml files. Is there a way to tell the new application to use an ApplicationParameters.xml file for its parameters?
Try creating the application with the ApplicationDescription(Uri, String, String, NameValueCollection) overload.
The last parameter is the list of parameters passed to the application.
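A rough sketch of that approach which also loads the values from an existing ApplicationParameters.xml file; the file path and the XML parsing are assumptions based on the usual <Parameters><Parameter Name="..." Value="..."/></Parameters> layout of that file:
// Sketch: read the Parameter elements from an ApplicationParameters.xml file
// and pass them to the overload that accepts a NameValueCollection.
using System;
using System.Collections.Specialized;
using System.Fabric.Description;
using System.Linq;
using System.Xml.Linq;

var parameters = new NameValueCollection();
var doc = XDocument.Load(@"ApplicationParameters\Cloud.xml"); // example path
foreach (var p in doc.Descendants().Where(e => e.Name.LocalName == "Parameter"))
{
    parameters.Add((string)p.Attribute("Name"), (string)p.Attribute("Value"));
}

var appDesc = new ApplicationDescription(new Uri(appName), appType, appVersion, parameters);
await fabricClient.ApplicationManager.CreateApplicationAsync(appDesc);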

How to update information in an existing node instead of creating a new one using Dgraph?

I am writing a Golang application that uses Dgraph for persisting objects. From the documentation, I infer that a new UID, and hence a new node, is created every time I mutate an object / run the code.
Is there a way to update the same node's data instead of creating a new node?
I tried changing the UID field to use "_:name", but even this creates a new node every time the application is run. I want to update the existing node if it is already present in the DB instead of creating a new node for it.
Unfortunately the docs aren't very beginner friendly yet :/
To modify/mutate existing data you run a set operation and supply an RDF triple of the form <uid> <predicate> "value", i.e. <objectYouWantToModify> <attributeYouWantToModify> "quotedStringValue". If the predicate is an edge rather than an attribute, the value has to be another <uid>.
For example, a full mutation would be:
{
  set {
    <0x2> <name> "modified-name" .
  }
}
The . terminates the triple, and there is an optional fourth parameter you can use to also assign a label.
Check https://www.w3.org/TR/n-quads/ for further details.
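Since the question is about Go, here is a sketch of the same mutation using the dgo client (connection details and the 0x2 UID are placeholders; in practice you would first query for the UID of the node you want to update rather than using a blank node like "_:name", which always allocates a new UID):
package main

import (
	"context"
	"log"

	"github.com/dgraph-io/dgo"
	"github.com/dgraph-io/dgo/protos/api"
	"google.golang.org/grpc"
)

func main() {
	conn, err := grpc.Dial("localhost:9080", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dg := dgo.NewDgraphClient(api.NewDgraphClient(conn))

	// Reuse the existing node's UID instead of a blank node,
	// so the predicate is updated in place rather than creating a new node.
	mu := &api.Mutation{
		SetNquads: []byte(`<0x2> <name> "modified-name" .`),
		CommitNow: true,
	}
	if _, err := dg.NewTxn().Mutate(context.Background(), mu); err != nil {
		log.Fatal(err)
	}
}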

What causes MARSHALLINGERROR when creating a znode?

I am doing a simple createAsync() with the ZooKeeperNetEx NuGet package, and it throws an exception triggered by a MARSHALLINGERROR.
Here is the two-line summary (between these lines, the connection to ZooKeeper was successfully confirmed):
var Zoo = new ZooKeeper("localhost:50002", 5000, new ClusterWatcher());
. . .
var parentNode = Zoo.createAsync("/election", null, null, CreateMode.PERSISTENT).Result
I do not get it. ClusterWatcher is my own class derived from Watcher, of course. Yes, I am writing this in C#, but this is such a simple matter that I would not think it mattered. The host machine is running Windows 10 Pro, if that matters.
This exception can be triggered by not specifying the ACL (you seem to pass null). In Java you can pass the predefined list ZooDefs.Ids.OPEN_ACL_UNSAFE (for example, or one of the others in that class); the C# client will likely have a similarly named constant.
In the Java client library this is a convenience constant that is defined as:
/**
 * This is a completely open ACL.
 */
public final ArrayList<ACL> OPEN_ACL_UNSAFE = new ArrayList<ACL>(
        Collections.singletonList(new ACL(Perms.ALL, ANYONE_ID_UNSAFE)));
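In ZooKeeperNetEx the API mirrors the Java client fairly closely, so the fix should look roughly like this (the exact name and location of the constant in the .NET port is an assumption based on that mirroring):
// Sketch: pass an explicit ACL list instead of null.
var parentNode = await Zoo.createAsync(
    "/election",
    new byte[0],                   // or whatever data the node should carry
    ZooDefs.Ids.OPEN_ACL_UNSAFE,   // open ACL, mirroring the Java constant
    CreateMode.PERSISTENT);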

Scala Netty: is there any way to share a ReplayingDecoder?

I am looking to open multiple connections using a Netty client bootstrap in order to parse messages coming from multiple sources. The messages all have the same format; however, due to the amount of data that needs to be processed, I must run each connection on a separate thread (this assumes Netty creates a thread per client channel, which I couldn't find a reference for; if that's not the case, how would this be achieved?).
This is the code that I use to connect to the data server:
var b = new Bootstrap()
  .group(group)
  .channel(classOf[NioSocketChannel])
  .handler(RawFeedChannelInitializer)
var ch1 = b.clone().connect(host, port).sync().channel();
var ch2 = b.clone().connect(host, port).sync().channel();
The initializer adds a RawPacketDecoder, which extends ReplayingDecoder.
The code works well without @Sharable when opening a single connection, but for the purposes of my application I must connect to the same server multiple times.
This results in the runtime error "@Sharable annotation is not allowed", pointing at my RawPacketDecoder class.
I am not entirely sure how to get past this issue, short of reimplementing my decoder in Scala as an instantiable class based directly on ByteToMessageDecoder.
Any help would be greatly appreciated.
Note: I am using Netty 4.0.32.Final.
I found the solution in a Stack Exchange answer.
My issue was that I was using an object-based ChannelInitializer (a singleton), and neither ReplayingDecoder nor ByteToMessageDecoder is sharable.
My initializer was created as a scala object, so only a single instance existed. Changing the initializer to a scala class and instantiating it for each bootstrap clone solved the problem. I modified the bootstrap code above as follows:
var b = new Bootstrap()
  .group(group)
  .channel(classOf[NioSocketChannel])
  //.handler(RawFeedChannelInitializer)
var ch1 = b.clone().handler(new RawFeedChannelInitializer()).connect(host, port).sync().channel();
var ch2 = b.clone().handler(new RawFeedChannelInitializer()).connect(host, port).sync().channel();
I am not sure whether this ensures the multithreading I wanted, but it does allow the data access to be split across multiple connections to the feed server.
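For reference, the class-based initializer is just a plain ChannelInitializer that builds a fresh (non-sharable) decoder per channel; a sketch, where RawPacketDecoder and any other handlers are the OP's own code:
import io.netty.channel.ChannelInitializer
import io.netty.channel.socket.SocketChannel

// A class rather than a scala object, so each bootstrap clone can get its own instance
// and every channel gets its own ReplayingDecoder.
class RawFeedChannelInitializer extends ChannelInitializer[SocketChannel] {
  override def initChannel(ch: SocketChannel): Unit = {
    ch.pipeline().addLast(new RawPacketDecoder()) // fresh decoder per channel
    // ...add the remaining handlers here
  }
}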
Edit/Update: after performing additional research on the subject, I have determined that Netty does in fact assign each new channel its own thread from the event loop group; this was verified by printing the active thread count to the console after the creation of each channel:
println("No. of active threads: " + Thread.activeCount());
The output shows an incremental number as channels are created and associated with their respective threads.
By default, NioEventLoopGroup uses 2 * (number of CPU cores) threads, as defined here:
DEFAULT_EVENT_LOOP_THREADS = Math.max(1, SystemPropertyUtil.getInt(
        "io.netty.eventLoopThreads",
        Runtime.getRuntime().availableProcessors() * 2));
This value can be overridden to something else by setting
val group = new NioEventLoopGroup(16)
and then using the group to create/setup the bootstrap.

The `orm` hook is taking too long to load

I am using two database adapters with Sails: one for MongoDB and the second for MySQL. Whenever I run the command "sails lift" for the first time, it gives an error:
error: Error: The hook `orm` is taking too long to load.
Make sure it is triggering its `initialize()` callback, or else set `sails.config.orm._hookTimeout` to a higher value (currently 20000)
at tooLong [as _onTimeout] (C:\Users\KAMI\AppData\Roaming\npm\node_modules\sails\lib\app\private\loadHooks.js:92:21)
at Timer.listOnTimeout [as ontimeout] (timers.js:110:15)
When I rerun sails without any changes, it gives no error. How can I avoid this error every time? This is my first experience with Sails.js, so any help will be appreciated.
I ran into this problem last night because of a slow internet connection between my laptop and the DB server. My solution was to create a new file in the config directory called orm.js (name doesn't really matter).
Then add the following code:
// config/orm.js
module.exports.orm = {
  _hookTimeout: 60000 // I used 60 seconds as my new timeout
};
I also found I had to change my pubsub timeout but that may not be necessary for you.
// config/pubsub.js
module.exports.pubsub = {
  _hookTimeout: 60000 // I used 60 seconds as my new timeout
};
Note: The other answer recommends changing the sails files inside the node_modules folder. This is almost always a bad idea because any npm update could revert your changes.
It is likely best to do this on a per-environment basis. Under the config/env directory you will have files such as development.js and production.js. Then enter, inside module.exports of each:
module.exports = {
  hookTimeout: 40000
};
Notice that there is no need for an underscore in front of the attribute name here.
I realise this is quite an old question, but I also had the same problem. I was convinced it wasn't my connection.
My solution is to change the migration option for your models; you have a choice of three:
safe - never auto-migrate my database(s). I will do it myself (by hand)
alter - auto-migrate, but attempt to keep my existing data (experimental)
drop - wipe/drop ALL my data and rebuild models every time I lift Sails
Go to config/models.js and in there put:
migrate: 'safe'
or whatever option from above you want to use.
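So config/models.js would contain something like:
// config/models.js
module.exports.models = {
  migrate: 'safe' // or 'alter' / 'drop'
};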
There are two approaches, which we can call:
1. System-wide method (as @arcseldon suggested):
Add the hookTimeout key to the project's config/env/development.js or config/env/production.js file. Almost all hooks (except a few, such as moduleloader) will then pick up this timeout value and use it.
2. Hook-specific method (as @davepreston suggested):
Create a [module-name].js file in the project's config folder and add a _hookTimeout key to it. This assigns the timeout value only to that specific module. (Be careful to follow the specific structure Sails expects for its config files.)
Go to your node_modules folder and browse to \sails\lib\app\private.
In your case you should go to this folder:
C:\Users\KAMI\AppData\Roaming\npm\node_modules\sails\lib\app\private
Then open the file named loadHooks.js and go to the line that says:
var timeoutInterval = (sails.config[hooks[id].configKey || id] && sails.config[hooks[id].configKey || id]._hookTimeout) || sails.config.hookTimeout || 20000;
Change the last value on this line from 20000 to some higher value, save the file, then run your application with "sails lift" as you normally do.
NB: you may need to try a few higher values instead of 20000 until you reach one that works for you. My application successfully lifted when I changed the value to 50000.
Go to the config/models.js file and uncomment migrate: 'alter'.
While running sails lift, you can also pass the timeout on the command line:
sails lift hookTimeout=75000
You can also try adding defaults: { timeout: 30000 } to your hook.
Reference: https://sailsjs.com/documentation/concepts/extending-sails/hooks/hook-specification/defaults
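A minimal sketch of a project hook following that suggestion (the hook name and path are made up; depending on your Sails version, the key it actually checks may be _hookTimeout, as in the loadHooks.js line quoted above):
// api/hooks/my-slow-hook/index.js (hypothetical project hook)
module.exports = function mySlowHook(sails) {
  return {
    defaults: {
      timeout: 30000 // per the suggestion above
    },
    initialize: function (cb) {
      // ...slow setup work (e.g. waiting on a remote connection)...
      return cb();
    }
  };
};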