RMI - remote object, rmiregistry

Is it true that every remote object must be registered in rmiregistry? Can we get one object from rmiregistry, call a method on it and, as a result, get a reference (not a serialized copy) to another remote object which isn't registered in rmiregistry?

Is it true that every remote object must be registered in rmiregistry?
No.
Can we get one object from rmiregistry, call a method on it and, as a result, get a reference (not a serialized copy) to another remote object which isn't registered in rmiregistry?
Yes.
Remote methods can return remote objects. The Registry is only a bootstrap mechanism to get you started, i.e. to provide you with an initial stub. After that you can do anything you like.
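For illustration, here is a minimal sketch of that pattern, with hypothetical names: only the factory is bound in the registry, and its remote method hands back a stub for a second remote object that was never registered.

import java.rmi.Remote;
import java.rmi.RemoteException;

interface SessionFactory extends Remote {
    // Session also extends Remote and is exported on the server, so the caller
    // receives a stub (a remote reference), not a serialized copy.
    Session login(String user) throws RemoteException;
}

interface Session extends Remote {
    String whoAmI() throws RemoteException;
}

On the client you would look up only the factory, e.g. SessionFactory factory = (SessionFactory) Naming.lookup("rmi://host/SessionFactory"); Session session = factory.login("alice"); the Session stub arrives purely as the return value of the remote call and never touches the registry.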

Related

Serverside viewmodel to use server's name in RestPost

I use a serverside job (viewmodel) to periodically inform another, non-MDriven service via RestPost.
And I have a simple question: how do I get the MDrivenServer computer's name in the viewmodel?
Is there something like "string.currentuser" but for the computer's name?
Thank you!
You do not have access to the computer's name from within the serverside job. But you can give your MDrivenServer variables that you can access.
This way you can have a variable vComputerName:string defined in your server and assign it a value once and for all: vComputerName:='TheCorrectName'.
If your ServerSideViewModel has a matching variable vComputerName:string, it will be set prior to execution.

Distinguish between create and update event of Managed Objects

I'm using Apama in Cumulocity. Whenever a managed object (device) is created in Cumulocity, I would like to provide it with some initial parameters, in this case the required interval within which the device needs to report to Cumulocity before it is considered unavailable.
My problem is that in Apama I don't seem to have any way to distinguish between create and update events. So if I receive a managed object, add some parameters to it and send it back to the managed object channel, I end up in a loop.
I can of course do some check after the event has been received, but I would prefer to filter on only the create events of managed objects and not perform any IF checks.
Is there any way I can filter on only create events? What is the difference between the CHANNEL and the UPDATE_CHANNEL? It doesn't seem to make a difference which one I use.
My current code looks as follows. What I want to achieve is to avoid the IF statement and filter directly on create events in the listener.
monitor InitializeDevice {
    action onload() {
        monitor.subscribe(ManagedObject.CHANNEL);
        on all ManagedObject(type = "c8y_MQTTDevice") as mo {
            log "###Received managed object. Content is: " + mo.toString() at INFO;
            if (mo.params.hasKey("c8y_RequiredAvailability")) {
                // Assuming an interval has already been set, do nothing.
                log "###Received managed object with required availability fragment. Doing nothing." at INFO;
            }
            else {
                // Set the response interval on the managed object
                dictionary<string,any> params := mo.params;
                dictionary<string,any> paramssub := new dictionary<string,any>;
                paramssub.add("responseInterval",3);
                params.add("c8y_RequiredAvailability",paramssub);
                mo.params := params;
                log "###Added required interval to managed object. Content is: " + mo.toString() at INFO;
                send mo to ManagedObject.UPDATE_CHANNEL;
            }
        }
    }
}
When I execute this monitor and create a new managed object, this is what is printed to the logs:
2019-05-27 16:15:07.310 INFO [12648] - InitializeDevice [6] ###Received managed object. Content is: com.apama.cumulocity.ManagedObject("5708279","c8y_MQTTDevice","some-device",[],[],[],[],[],[],{},{"c8y_IsDevice":any(dictionary<any,any>,{}),"owner":any(string,"some-owner")})
2019-05-27 16:15:07.310 INFO [12648] - InitializeDevice [6] ###Added required interval to managed object. Content is: com.apama.cumulocity.ManagedObject("5708279","c8y_MQTTDevice","some-device",[],[],[],[],[],[],{},{"c8y_IsDevice":any(dictionary<any,any>,{}),"c8y_RequiredAvailability":any(dictionary<string,any>,{"responseInterval":any(integer,3)}),"owner":any(string,"some-owner")})
2019-05-27 16:15:07.310 INFO [12648] - InitializeDevice [6] ###Received managed object. Content is: com.apama.cumulocity.ManagedObject("5708279","c8y_MQTTDevice","some-device",[],[],[],[],[],[],{},{"c8y_IsDevice":any(dictionary<any,any>,{}),"c8y_RequiredAvailability":any(dictionary<string,any>,{"responseInterval":any(integer,3)}),"owner":any(string,"some-owner")})
2019-05-27 16:15:07.310 INFO [12648] - InitializeDevice [6] ###Received managed object with required availability fragment. Doing nothing.
2019-05-27 16:15:08.244 INFO [7868] - InitializeDevice [6] ###Received managed object. Content is: com.apama.cumulocity.ManagedObject("5708279","c8y_MQTTDevice","some-device",[],[],[],[],[],[],{},{"c8y_Availability":any(dictionary<any,any>,{any(string,"lastMessage"):any(dictionary<any,any>,{any(string,"date"):any(integer,27),any(string,"day"):any(integer,1),any(string,"hours"):any(integer,16),any(string,"minutes"):any(integer,15),any(string,"month"):any(integer,4),any(string,"seconds"):any(integer,7),any(string,"time"):any(integer,1558966507220),any(string,"timezoneOffset"):any(integer,-120),any(string,"year"):any(integer,119)}),any(string,"status"):any(string,"AVAILABLE")}),"c8y_Connection":any(dictionary<any,any>,{any(string,"status"):any(string,"DISCONNECTED")}),"c8y_IsDevice":any(dictionary<any,any>,{}),"c8y_RequiredAvailability":any(dictionary<any,any>,{any(string,"responseInterval"):any(integer,3)}),"owner":any(string,"some-owner")})
2019-05-27 16:15:08.244 INFO [7868] - InitializeDevice [6] ###Received managed object with required availability fragment. Doing nothing.
Is there any way to filter directly on create events? Why do I receive two print statements after the update?
Thanks
Mathias
After some investigation it seems that there isn't any way to distinguish the Creation and Update messages. So the code you are using at the moment is probably the only way to do this.
Edited:
But for the second part of the question:
"Why do I receive two print statements after the update?"
1. c8y sends the managed object to MO.CHANNEL -> Apama monitor
2. the monitor adds c8y_RequiredAvailability and sends the managed object with the update to MO.UPDATE_CHANNEL -> c8y
3. c8y sends the updated managed object containing c8y_RequiredAvailability -> Apama monitor
4. c8y sends the managed object + c8y_Availability -> Apama monitor
So 3 is the confirmation of your update, and 4 is c8y asynchronously sending the final update with availability on the MO.CHANNEL.
To be explicit - MO.CHANNEL is where created and updated objects arrive into Apama; sending on that channel shouldn't have an effect. MO.UPDATE_CHANNEL is the request channel where you send updates, which may then trigger further messages on MO.CHANNEL as c8y processes them.

Sharing objects with all verticle instances

My application, an API server, is intended to be organized as follows:
MainVerticle is called on startup and should create all necessary objects for the application to work, mainly a MongoDB connection pool (MongoClient.createShared(...)) and a global configuration object available instance-wide. It also starts the HTTP listener, i.e. several instances of an HttpVerticle.
HttpVerticle is in charge of receiving requests and, based on the command xxx in the payload, executing the XxxHandler.handle(...) method.
Most of the XxxHandler.handle(...) methods will need to access the database. In addition, some will also deploy additional verticles with parameters from the global conf. For example, LoginHandler.handle(...) will deploy a verticle to keep the user's state while they're connected, and this verticle will be undeployed when the user logs out.
I can't figure out how to get the global configuration object while being in XxxHandler.handle(...) or in a "sub"-verticle. Same for the mongo client.
Q1: For configuration data, I tried to use SharedData. In `MainVerticle.start()` I have:
LocalMap<String, String> lm = vertx.sharedData().getLocalMap("conf");
lm.put("var", "val");
and in `HttpVerticle.start()` I have:
LocalMap<String, String> lm = vertx.sharedData().getLocalMap("conf");
log.debug("var={}", lm.get("var"));
but the log output is var=null... What am I doing wrong?
Q2: Besides this basic example with a <String, String> map type, what if the value is a mutable object like JsonObject, which is actually what I would need?
Q3: Finally how to make the instance of the mongo client available to all verticles?
Instead of getLocalMap() you should be using getClusterWideMap(). Then you should be able to operate on shared data across the whole cluster and not just in one verticle.
Be aware that the shared operations are async and the code might look like (code in Groovy):
vertx.sharedData().getClusterWideMap( 'your-name' ){ AsyncResult<AsyncMap<String,String>> res ->
    if( res.succeeded() )
        res.result().put( 'var', 'val', { log.info "put succeeded: ${it.succeeded()}" } )
}
You should be able to use any Serializable objects in your map.
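If you are writing Java rather than Groovy, a minimal sketch of the same pattern might look like this; the map name "conf", the key "global" and the verticle wiring are illustrative assumptions, not part of the answer above.

import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.JsonObject;
import io.vertx.core.shareddata.AsyncMap;

public class MainVerticle extends AbstractVerticle {
    @Override
    public void start() {
        // Ask for a cluster-wide map; the result arrives asynchronously.
        vertx.sharedData().<String, JsonObject>getClusterWideMap("conf", res -> {
            if (res.succeeded()) {
                AsyncMap<String, JsonObject> map = res.result();
                // JsonObject is cluster-serializable, so it can be stored directly.
                map.put("global", new JsonObject().put("var", "val"),
                        ar -> System.out.println("put succeeded: " + ar.succeeded()));
            } else {
                res.cause().printStackTrace();
            }
        });
    }
}

Any other verticle can then obtain the same map by name and call get("global", ...) in its handler; note that getClusterWideMap requires Vert.x to be started in clustered mode.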

Geode: Serialization Exception & Could not create an instance of a class on server

I have the client side working and everything looks alright: I can save the object and retrieve it back from the client side.
However, if I try to use gfsh or the data browser to view the data, I get this exception in gfsh:
Message : Could not create an instance of a class com.room.backend.soa.spring.cache.geode.test.domain.Book
Result : false
and this one in the data browser:
java.lang.Exception - Query could not be executed due to - org.apache.geode.pdx.PdxSerializationException - Could not create an instance of a class com.room.backend.soa.spring.cache.geode.test.domain.Book
My code is like this
ClientCacheFactoryBean gemfireCache = new ClientCacheFactoryBean();
gemfireCache.setClose(true);
gemfireCache.setProperties(gemfireProperties);
gemfireCache.setPdxSerializer(pdxSerializer);
pdxSerializer is
ReflectionBasedAutoSerializer serializer =
    new ReflectionBasedAutoSerializer(
        "com.room.backend.soa.spring.cache.geode.test.domain.*",
        "com.room.backend.soa.spring.cache.geode.test.domain.Book",
        "com.room.backend.soa.spring.cache.geode.test.domain.Book#identity=id.*",
        "com.room.backend.soa.spring.cache.geode.test.domain.#identity=id.*"
    );
You need to run the gfsh>configure pdx --read-serialized=true command. This should be done after starting the locator, but before starting the servers. Please refer to this docs page for details.
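As a rough sketch of that ordering (the locator and server names here are placeholders, not from the question):

gfsh>start locator --name=locator1
gfsh>configure pdx --read-serialized=true
gfsh>start server --name=server1 --locators=localhost[10334]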

CQ.Util.eval function returning null object when the request comes from the web server

I have the below function in search.jsp. I'm using an object abc to get the Search&Promote account details dynamically.
var abc= CQ.Util.eval(spPath+'/jcr:content.json');
Here spPath is the Search&Promote configuration path (e.g. /etc/cloudservices/search-promote/abc-search-promote).
If I print the value of abc on the publish instance (http://publish:4503/en-US/home.html), it returns an object, whereas it returns null if I access the page through the web server (http://test:80/en-US/home.html).
Can you kindly suggest what might possibly be wrong?