Agent should be located in the same network as the destination node - AnyLogic

I am trying to retrieve a pallet from the Storage and place it in a truck for shipment. The pallet is successfully retrieved and placed at a point where the forklifts are responsible for picking it up and loading it onto a truck, but after they pick it up and are ready to move, I am getting an error:
java.lang.RuntimeException: root.[10]: Agent should be located in the same network as the destination node. Current node: Loading Zone, destination node: Loading Point Node
[This is the Error message]
https://i.stack.imgur.com/YJV4o.png
[Logic part]
https://i.stack.imgur.com/hBp1n.png
[calling Truck]
https://i.stack.imgur.com/iyKbP.png
[seize]
https://i.stack.imgur.com/pZemg.png
[Forklifters]
https://i.stack.imgur.com/FjVSZ.png
[RetailTrucks]
https://i.stack.imgur.com/s84F2.png
[MoveTo]
https://i.stack.imgur.com/U5n9f.png

Kafka connector task status differs when queried against different Kafka Connect nodes in a clustered environment

We have a 3-node Kafka Connect cluster running version 5.5.4 in distributed mode. We are observing a strange issue with a connector's task status: REST calls to node 1 and node 2 return different results.
The first node returned this result:
{
  "connector": {
    "state": "RUNNING",
    "worker_id": "x.com:8083"
  },
  "name": "connector",
  "type": "source",
  "tasks": []
}
Yes, the tasks array is empty, whereas the other node returned this result:
{
  "connector": {
    "state": "RUNNING",
    "worker_id": "x.com:8083"
  },
  "name": "connector...",
  "type": "source",
  "tasks": [
    {
      "id": 0,
      "state": "RUNNING",
      "worker_id": "x.com:8083"
    }
  ]
}
As mentioned in this doc https://docs.confluent.io/home/connect/userguide.html#kconnect-internal-topics, I have configured group.id, config.storage.topic, offset.storage.topic and status.storage.topic with identical values in all 3 nodes.
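For reference, a sketch of the relevant fragment of each worker's distributed config (the topic names below are illustrative; what matters is that all three nodes use identical values for these four settings):

```properties
group.id=connect-cluster
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-statuses
```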
I went through the connect-statuses-0 data directory, and the file sizes for the log, index, and timestamp files are all identical on node 1 and node 2. I don't know what the .snapshot file is, but I see only one (owned by root user/group) on the first node, whereas I see two on the second node: one owned by root user/group and the other owned by our custom-created user. I'm not sure whether this has anything to do with the problem.
Please guide me in identifying the root cause of this problem. If I need to check any configuration, please let me know.

ejabberd clustering problems and solutions

Setup Details
2 ejabberd nodes with PostgreSQL as the database (OS: Ubuntu 16.04)
I am trying to cluster the two ejabberd nodes as described in
https://docs.ejabberd.im/admin/guide/clustering/
After starting the master node, the following steps were performed on the slave node:
copy .erlang.cookie to the slave node
copy ejabberd.yml from master to slave.
The slave started successfully but shows the error below.
=====Error=========
Eshell V9.2 (abort with ^G)
(ejabberd@gim-Veriton-M6650G)1> 18:29:41.856 [notice] Changed loghwm of /usr/local/var/log/ejabberd/error.log to 100
18:29:41.856 [notice] Changed loghwm of /usr/local/var/log/ejabberd/ejabberd.log to 100
18:29:41.857 [info] Application lager started on node 'ejabberd@gim-Veriton-M6650G'
18:29:41.860 [info] Application crypto started on node 'ejabberd@gim-Veriton-M6650G'
18:29:41.865 [info] Application sasl started on node 'ejabberd@gim-Veriton-M6650G'
18:29:41.871 [info] Application asn1 started on node 'ejabberd@gim-Veriton-M6650G'
18:29:41.871 [info] Application public_key started on node 'ejabberd@gim-Veriton-M6650G'
18:29:41.880 [info] Application ssl started on node 'ejabberd@gim-Veriton-M6650G'
18:29:41.881 [info] Application p1_utils started on node 'ejabberd@gim-Veriton-M6650G'
18:29:41.883 [info] Application fast_yaml started on node 'ejabberd@gim-Veriton-M6650G'
18:29:41.888 [info] Application fast_tls started on node 'ejabberd@gim-Veriton-M6650G'
18:29:41.892 [info] Application fast_xml started on node 'ejabberd@gim-Veriton-M6650G'
18:29:41.895 [info] Application stringprep started on node 'ejabberd@gim-Veriton-M6650G'
18:29:41.899 [info] Application xmpp started on node 'ejabberd@gim-Veriton-M6650G'
18:29:41.903 [info] Application cache_tab started on node 'ejabberd@gim-Veriton-M6650G'
18:29:41.910 [info] Application eimp started on node 'ejabberd@gim-Veriton-M6650G'
18:29:41.910 [info] Loading configuration from /usr/local/etc/ejabberd/ejabberd.yml
18:29:41.913 [error] CRASH REPORT Process <0.67.0> with 0 neighbours exited with reason: no case clause matching <<>> in ejabberd_config:get_config_option_key/2 line 473 in application_master:init/4 line 134
18:29:41.913 [info] Application ejabberd exited with reason: no case clause matching <<>> in ejabberd_config:get_config_option_key/2 line 473
(ejabberd@gim-Veriton-M6650G)1>
I've also tried re-creating the Mnesia DB, but it didn't help.
ejabberdctl status shows ejabberd is not running on that node.
Can someone please look into the issue and help?
Finally I found the solution to the problem.
The issue was with the node names: the master's node name is a fully qualified name,
but the slave node's name had no domain.
I also added both node names to the /etc/hosts file.
For ejabberd clustering, please follow the steps below.
Before starting, configure proper entries in the /etc/hosts files of both nodes,
i.e. the nodes should resolve each other using their host names.
Set the ejabberd node name in the ejabberdctl.cfg file; the nodes should have different node names.
1. Configure ejabberd on the master node with a proper node name (either an FQDN or just a name of your convenience).
2. Configure the slave node with the same config as the master, i.e. both nodes should have the same configuration in the ejabberd.yml file.
3. Copy .erlang.cookie from the master node to the slave; the ejabberd user should be able to read the cookie file.
4. Start the master node in live mode (ejabberdctl live).
5. Start the slave node in live mode.
6. Check the cookie value in the Erlang console of both nodes using the command 'erlang:get_cookie().'; both nodes should have the same value.
7. If both nodes have the same value, execute "ejabberdctl --no-timeout join_cluster ejabberd@nodename" on the slave.
Change ejabberd@nodename according to your environment.
In my case I ran ejabberd as the 'ejabberd' user with the node name ejabberd@cluster-node1 (if you want, you can also use an FQDN like ejabberd@example.com).
8. If the above command executed without any error, the nodes are in a cluster.
9. Confirm the cluster from the Erlang console of either node using the command mnesia:info().; you will get the node details under "running_db_nodes".
10. Hurray, you are done!
For load balancing the cluster you can use HAProxy.
Please refer to https://blog.onefellow.com/post/76702632637/haproxy-and-ejabberd for details.
I've not done load balancing using a hardware load balancer; I still need to check on that.
If anyone has done that, please do post here.
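Condensing the steps above into a sketch (host names, IP addresses, and the cookie path are illustrative; the cookie location depends on how ejabberd was installed):

```
# /etc/hosts on both nodes, so the nodes resolve each other by name
192.168.1.10 cluster-node1
192.168.1.11 cluster-node2

# on the slave, after copying the master's .erlang.cookie:
ejabberdctl --no-timeout join_cluster ejabberd@cluster-node1
```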

Service Fabric - Warning: Failed to create infrastructure coordinator

I have deployed a Service Fabric cluster using ARM templates, and when I go to the Explorer it shows that all nodes are healthy, but in the system tree my 2 node types have a warning saying:
Unhealthy event: SourceId='System.InfrastructureService',
Property='CoordinatorStatus', HealthState='Warning',
ConsiderWarningAsError=false. Failed to create infrastructure
coordinator:
Microsoft.WindowsAzure.ServiceRuntime.Management.DeploymentManagementEndpointNotFoundException:
Could not find the deployment management endpoint: ManagementUri at
Microsoft.WindowsAzure.ServiceRuntime.Management.DeploymentManagementServer.CreateChannelFactory()
at
Microsoft.WindowsAzure.ServiceRuntime.Management.DeploymentManagementServer.Initialize(IDeploymentManagementServer
server) at
Microsoft.WindowsAzure.ServiceRuntime.Management.DeploymentManagementClient..ctor(IDeploymentManagementServer
server) at
Microsoft.WindowsAzure.ServiceRuntime.Management.DeploymentManagementClient.CreateInstanceImpl(IDeploymentManagementServer
server) at
System.Fabric.InfrastructureService.ManagementClientFactory.Create()
at
System.Fabric.InfrastructureService.WindowsAzureInfrastructureCoordinatorFactory.Create()
at
System.Fabric.InfrastructureService.ServiceFactory.CreateCoordinatorByReflection(String
assemblyName, String factoryTypeName, Object[]
factoryCreateMethodArgs) at
System.Fabric.InfrastructureService.DelayLoadCoordinator.d__c.MoveNext()
Any idea? I would really appreciate it.
Thanks
It could be that your ARM template has a mismatch in the durabilityLevel setting. There are 2 places it needs to be set for each node type:
a. the VM extension resource section
b. the Service Fabric cluster resource section.
Please check whether both of those sections have the same value for each node type, e.g. "durabilityLevel": "Gold".
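A heavily abbreviated sketch of the two template sections in question (resource and node-type names are illustrative; most required properties are omitted for brevity):

```json
{
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "properties": {
        "virtualMachineProfile": {
          "extensionProfile": {
            "extensions": [
              {
                "properties": {
                  "type": "ServiceFabricNode",
                  "settings": {
                    "durabilityLevel": "Gold"
                  }
                }
              }
            ]
          }
        }
      }
    },
    {
      "type": "Microsoft.ServiceFabric/clusters",
      "properties": {
        "nodeTypes": [
          {
            "name": "nodetype1",
            "durabilityLevel": "Gold"
          }
        ]
      }
    }
  ]
}
```

The two "durabilityLevel" values must match for each node type.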

[ejabberd w/ smack]: how to successfully create a leaf node inside a pubsub collection node

A registered user created a collection node on my ejabberd server using the Smack library and the following config:
PubSubManager psMgr = new PubSubManager(conn, "pubsub.mydomain");
ConfigureForm CForm = new ConfigureForm(DataForm.Type.submit);
CForm.setAccessModel(AccessModel.open);            // anyone can access
CForm.setDeliverPayloads(true);                    // allow payloads with notifications
CForm.setNotifyDelete(true);                       // notify subscribers when the node is deleted
CForm.setPersistentItems(true);                    // save published items in storage on the server
CForm.setPresenceBasedDelivery(false);             // notify subscribers even when offline
CForm.setPublishModel(PublishModel.open);          // anyone can publish to this node
CForm.setNodeType(NodeType.collection);
CForm.setChildrenAssociationPolicy(ChildrenAssociationPolicy.all);
CForm.setChildrenMax(65536);
psMgr.createNode("/collection_node", CForm);
This collection node is created fine. Note that the children association policy is 'all'.
Now, if a different user, registered on the same server, tries to create a leaf node inside this collection node, the server returns a 'forbidden - auth' error.
ConfigureForm form = new ConfigureForm(DataForm.Type.submit);
form.setNodeType(NodeType.leaf);
form.setCollection("/collection_node");
psMgr.createNode("/collection_node/leaf_node", form);
I have these plugins enabled in my ejabberd server for the pubsub module: ["collections", "dag", "flat", "hometree", "pep"].
Can anyone please suggest why the leaf node creation fails even though the collection node grants 'all' for associating child nodes with itself?
Smack version: 4.1.2
ejabberd version: (for some weird reason) shows 0.0. [However, the server was installed from the source code available at https://github.com/processone/ejabberd/archive/master.zip in Nov 2015, with Erlang (OTP 17.1) installed at the same time, so it should be pretty much the latest unless I screwed up something during installation.]

javax.jcr.ItemExistsException when moving node in JCR

I am trying to move nodes from one path to another and am getting this exception:
com.aem.tagmodels.MoveNodes Source is --> /content/dam/geometrixx/portraits/scott_reynolds.jpg
10.12.2014 16:38:27.952 *INFO* [127.0.0.1 [1418209707948] GET /content/AEMProject/Test/jcr:content/par/session_op.html HTTP/1.1] com.aem.tagmodels.MoveNodes Destination is --> /content/dam/geometrixx/drm
10.12.2014 16:38:27.952 *INFO* [127.0.0.1 [1418209707948] GET /content/AEMProject/Test/jcr:content/par/session_op.html HTTP/1.1] com.aem.tagmodels.MoveNodes Session --> session-38784
10.12.2014 16:38:27.952 *ERROR* [127.0.0.1 [1418209707948] GET /content/AEMProject/Test/jcr:content/par/session_op.html HTTP/1.1] com.aem.tagmodels.MoveNodes Error is javax.jcr.ItemExistsException: /content/dam/geometrixx/drm
I have checked that there is no node inside drm with the same name as scott_reynolds.jpg. Below is my code snippet.
session.getWorkspace().move(source,destination);
session.save();
Thanks
It seems that you used the parent node, /content/dam/geometrixx/drm, as the destination. This destination exists, and that's why you are getting the exception: the move() method expects the complete new path as its second parameter:
Strictly speaking, the destAbsPath parameter is actually an absolute path to the parent node of the new location, appended with the new name desired for the moved node.
(from the Javadoc)
You should use the full path, the parent followed by the new name, for instance /content/dam/geometrixx/drm/scott_reynolds.jpg.
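The fix amounts to building the full destination path (parent plus the moved node's name) before calling move(). A minimal sketch, assuming a valid JCR Session named session is available (the class and helper names here are hypothetical, and the repository calls are commented out because they need a live repository):

```java
public class MoveNodeFix {
    // move() wants the parent path plus the new node name, not just the parent
    public static String destAbsPath(String source, String destParent) {
        String nodeName = source.substring(source.lastIndexOf('/') + 1);
        return destParent + "/" + nodeName;
    }

    public static void main(String[] args) {
        String source = "/content/dam/geometrixx/portraits/scott_reynolds.jpg";
        String destination = destAbsPath(source, "/content/dam/geometrixx/drm");
        System.out.println(destination);
        // → /content/dam/geometrixx/drm/scott_reynolds.jpg

        // With a live session:
        // session.getWorkspace().move(source, destination);
        // session.save();
    }
}
```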