How to bind different transport configs to a datareader in OpenDDS

Can different transports (shmem, tcp) be bound to different datawriters/datareaders within one publisher/subscriber in OpenDDS?
I'm not sure whether OpenDDS supports this with InfoRepo discovery or only with static discovery.
I use:
TheTransportRegistry->bind_config("tcp1", datawriter1);
TheTransportRegistry->bind_config("shmem1", datawriter2);
but it doesn't seem to work; the writers still use the publisher's transport config.

Yes, it should be possible, but it needs a bit more setup. After they are created, writers and readers (as well as any DDS::Entity) have an enable function that must be called before they can be used. By default this is called automatically by create_datawriter and create_datareader. This matters because readers and writers can't change their transport config after they're enabled. You have to disable the autoenable_created_entities property in the parent entity's QoS, create the reader or writer, call bind_config, and finally call enable manually. Section 3.2.16 of the OpenDDS Developer's Guide talks a bit about this, but doesn't have an example, so here's a snippet that I tested, with the error checks and unrelated arguments omitted:
// Turn off automatic enabling so the transport config can still be changed
DDS::PublisherQos pub_qos;
participant->get_default_publisher_qos(pub_qos);
pub_qos.entity_factory.autoenable_created_entities = false;
DDS::Publisher_var publisher =
  participant->create_publisher(pub_qos, /*...*/);
// The writer is created disabled, so binding a transport config is still allowed
DDS::DataWriter_var datawriter1 = publisher->create_datawriter(/*...*/);
TheTransportRegistry->bind_config("tcp1", datawriter1);
datawriter1->enable();
You can also set this QoS on the domain participant or the service participant instead of the publisher or subscriber, but that requires manually enabling all the entities, which includes the publishers, subscribers, and topics, so I'm not sure I recommend that.

Related

Project Reactor and Server Side Events

I'm looking for a solution that will have the backend publish an event to the frontend as soon as a modification is done on the server side. To be more precise, I want to emit a new List of objects as soon as one item is modified.
I've tried implementing this in a Spring Boot project that uses Reactive Web and MongoDB, whose @Tailable cursor publishes an event as soon as the capped collection is modified. The problem is that capped collections have some limitations and are not really compatible with what I want to do. The thing is, I cannot update an existing element if the new one has a different size (as I understand it, this is illegal because you cannot make a rollback).
I honestly don't even know if it's doable, but maybe I'm lucky and I'll run into a rocket scientist right here who will prove otherwise.
Thanks in advance!!
*** EDIT:
Sorry for the vague question. Yes, I'm more focused on the HOW, using the Spring Reactive framework.
When I had a similar need - to inform the frontend that something was done on the backend side - I used a message queue.
The backend published a message to the queue and the frontend consumed it.
But I am not sure if that is what you're looking for.
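For illustration, here is a minimal sketch of that pattern, assuming Spring's STOMP-over-WebSocket support (spring-boot-starter-websocket plus an @EnableWebSocketMessageBroker config); the /topic/items destination and the class names are made up for the example:
import java.util.List;
import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.stereotype.Service;

@Service
public class ItemChangePublisher {

    private final SimpMessagingTemplate messagingTemplate;

    public ItemChangePublisher(SimpMessagingTemplate messagingTemplate) {
        this.messagingTemplate = messagingTemplate;
    }

    // Call this after an item is modified; every frontend subscribed to
    // /topic/items (e.g. via STOMP over SockJS) receives the new list.
    public void publish(List<?> updatedItems) {
        messagingTemplate.convertAndSend("/topic/items", updatedItems);
    }
}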
If you are using WebFlux with Spring Reactor, I think you can simply have the client send a request with 'text/event-stream' or 'application/stream+json' as the accepted content type, and have an API that can produce those content types. This gives you the SSE model without too much effort.
@GetMapping(value = "/stream", produces = {MediaType.TEXT_EVENT_STREAM_VALUE, MediaType.APPLICATION_STREAM_JSON_VALUE, MediaType.APPLICATION_JSON_UTF8_VALUE})
public Flux<Message> get(HttpServletRequest request) {
    // Body omitted in the original answer; it would return a reactive stream
    // of Message objects, e.g. from a reactive repository (name assumed)
    return messageRepository.findAll();
}
Just as an idea - maybe you need to use WebSocket technology here; see the sketch below.
The frontend side (I assume it's a client-side application that runs in a browser, written in React, Angular or something like that) can establish a WebSocket connection with the backend server.
When the process on the backend finishes, a message can be sent from the backend to the frontend.
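For what it's worth, a minimal reactive WebSocket setup in Spring WebFlux might look like the sketch below; the eventSink bean and the /ws/updates path are assumptions for illustration, not part of the original answer:
import java.util.Map;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.reactive.handler.SimpleUrlHandlerMapping;
import org.springframework.web.reactive.socket.WebSocketHandler;
import org.springframework.web.reactive.socket.server.support.WebSocketHandlerAdapter;
import reactor.core.publisher.Sinks;

@Configuration
public class WebSocketConfig {

    @Bean
    public WebSocketHandler updatesHandler(Sinks.Many<String> eventSink) {
        // Forward every event the backend emits to the connected client
        return session -> session.send(eventSink.asFlux().map(session::textMessage));
    }

    @Bean
    public SimpleUrlHandlerMapping webSocketMapping(WebSocketHandler updatesHandler) {
        // Map /ws/updates to the handler; order -1 takes precedence over the default mappings
        SimpleUrlHandlerMapping mapping = new SimpleUrlHandlerMapping();
        mapping.setUrlMap(Map.of("/ws/updates", updatesHandler));
        mapping.setOrder(-1);
        return mapping;
    }

    @Bean
    public WebSocketHandlerAdapter handlerAdapter() {
        // Required so WebFlux dispatches WebSocket handshakes to the handler above
        return new WebSocketHandlerAdapter();
    }
}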
You can emit changes by hand. For example, in the endpoint:
private final AtomicLong counter = new AtomicLong(); // event id sequence (missing from the original snippet)
public final Sinks.Many<SimpleInfoEvent> infoEventSink = Sinks.many().multicast().onBackpressureBuffer();

@RequestMapping(path = "/sseApproach", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<ServerSentEvent<SimpleInfoEvent>> sse() {
    return infoEventSink.asFlux()
        .map(e -> ServerSentEvent.builder(e)
            .id(counter.incrementAndGet() + "")
            .event(e.getClass().getName())
            .build());
}
Then, anywhere in your code, emit the data:
infoEventSink.tryEmitNext(new SimpleInfoEvent("any custom event"));
Watch out for threads and things like subscribeOn and publishOn, but basically (when not using any third-party code) this should work well enough.

Sharing objects with all verticles instances

My application, an API server, is meant to be organized as follows:
MainVerticle is called on startup and should create all the necessary objects for the application to work: mainly a MongoDB connection pool (MongoClient.createShared(...)) and a global configuration object available instance-wide. It also starts the HTTP listener, several instances of an HttpVerticle.
HttpVerticle is in charge of receiving requests and, based on the command xxx in the payload, executing the XxxHandler.handle(...) method.
Most of the XxxHandler.handle(...) methods will need to access the database. In addition, some will also deploy additional verticles with parameters from the global conf. For example, LoginHandler.handle(...) will deploy a verticle to keep user state while the user is connected, and this verticle will be undeployed when the user logs out.
I can't figure out how to get the global configuration object while being in XxxHandler.handle(...) or in a "sub"-verticle. Same for the mongo client.
Q1: For configuration data, I tried to use SharedData. In `MainVerticle.start()` I have:
LocalMap<String, String> lm = vertx.sharedData().getLocalMap("conf");
lm.put("var", "val");
and in `HttpVerticle.start()` I have:
LocalMap<String, String> lm = vertx.sharedData().getLocalMap("conf");
log.debug("var={}", lm.get("var"));
but the log output is var=null... What am I doing wrong?
Q2: Besides this basic example with a <String, String> map type, what if the value is a mutable object like JsonObject, which is actually what I would need?
Q3: Finally, how do I make the instance of the mongo client available to all verticles?
Instead of getLocalMap() you should be using getClusterWideMap(). Then you should be able to operate on shared data across the whole cluster and not just within one verticle.
Be aware that the shared-data operations are async and the code might look like this (code in Groovy):
vertx.sharedData().getClusterWideMap( 'your-name' ){ AsyncResult<AsyncMap<String,String>> res ->
    if( res.succeeded() )
        res.result().put( 'var', 'val', { log.info "put succeeded: ${it.succeeded()}" } )
}
You should be able to use any Serializable objects in your map.
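Since the question is in Java, the same call might look roughly like this in Java (note that cluster-wide maps are only available when Vert.x is started in clustered mode):
import io.vertx.core.shareddata.AsyncMap;

// Java equivalent of the Groovy snippet above
vertx.sharedData().<String, String>getClusterWideMap("conf", res -> {
    if (res.succeeded()) {
        AsyncMap<String, String> map = res.result();
        map.put("var", "val", ar ->
            System.out.println("put succeeded: " + ar.succeeded()));
    }
});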

Watson Conversation Service - Quit Parameter for Slots/ Entities

There is a new feature in the Conversation service where you can define slots/entities for specific intents to extract the relevant information from the user input, like currencies or specific string inputs. Those slots can be made mandatory in case you need them to proceed, and the user will be asked for the missing slots until they provide them.
Is it possible to define something like a quit parameter so that I can easily interrupt this conversation? The general documentation does not provide any information regarding this problem.
https://console.bluemix.net/docs/services/conversation/entities.html#defining-entities
You can do it by adding a node-level handler which will listen for your cancellation intent and fill the slots with dummy values.
You can read more about this approach in the documentation: https://console.bluemix.net/docs/services/conversation/dialog-build.html (paragraph "Handle requests to exit the process")

Service Fabric ServicePartitionResolver ResolveAsync

I am currently using the ServicePartitionResolver to get the http endpoint of another application within my cluster.
var resolver = ServicePartitionResolver.GetDefault();
var partition = await resolver.ResolveAsync(serviceUri, partitionKey ?? ServicePartitionKey.Singleton, CancellationToken.None);
var endpoints = JObject.Parse(partition.GetEndpoint().Address)["Endpoints"];
return endpoints[endpointName].ToString().TrimEnd('/');
This works as expected. However, if I redeploy my target application and its port changes on my local dev box, the source application still returns the old endpoint (which is now invalid). Is there a cache somewhere that I can clear? Or is this a bug?
Yes, they are cached. If you know that the partition is no longer valid, or if you receive an error, you can call the resolver.ResolveAsync() overload that takes the earlier ResolvedServicePartition previousRsp, which triggers a refresh. As the documentation puts it:
"This api-overload is used in cases where the client knows that the resolved service partition that it has is no longer valid."
See this article too.
Yes. They are cached. There are two solutions to overcome this.
The simplest code change is to replace var resolver = ServicePartitionResolver.GetDefault(); with var resolver = new ServicePartitionResolver();. This forces the service to create a new ServicePartitionResolver object every time, whereas GetDefault() returns the cached object.
[Recommended] The right way of handling this is to implement a custom CommunicationClientFactory that derives from CommunicationClientFactoryBase, then initialize a ServicePartitionClient and call InvokeWithRetryAsync. It is documented clearly in Service Communication, in the "Communication clients and factories" section.

Alternate way of configuring data sources in quartz scheduler properties file

We are configuring the Quartz Scheduler data sources as specified in the documentation, that is, by providing all the details without encrypting the database details. As a result, the database details are exposed to other users, and anyone who has access to the file system can easily get hold of them.
So, are there any other ways to provide the data source details using the API, or to encrypt the database details and provide them as part of the quartz.properties file?
On the class StdSchedulerFactory you can call the method initialize(Properties props) to set the needed properties via the API. Then you don't need a properties file. (See: StdSchedulerFactory API)
Example:
public Scheduler createSchedulerWithProperties(Properties props)
        throws SchedulerException {
    StdSchedulerFactory factory = new StdSchedulerFactory(props);
    return factory.getScheduler();
}
But then you have to set all the properties of the SchedulerFactory, including those that get default values with the default constructor. (Search for 'quartz.properties' inside 'quartz-2.2.X.jar' to get Quartz's default property values.)
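For example, to address the encryption concern from the question, the database details could be decrypted at startup and passed in programmatically so they never sit in quartz.properties in plain text. The property keys below are the standard Quartz JDBC job-store settings; decrypt(...) and the encrypted* variables are placeholders for whatever secret-management mechanism you use:
// Build the Quartz configuration in code instead of a plain-text file
Properties props = new Properties();
props.put("org.quartz.scheduler.instanceName", "MyScheduler");
props.put("org.quartz.threadPool.threadCount", "5");
props.put("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
props.put("org.quartz.jobStore.dataSource", "myDS");
props.put("org.quartz.dataSource.myDS.driver", "com.mysql.jdbc.Driver");
props.put("org.quartz.dataSource.myDS.URL", decrypt(encryptedUrl));
props.put("org.quartz.dataSource.myDS.user", decrypt(encryptedUser));
props.put("org.quartz.dataSource.myDS.password", decrypt(encryptedPassword));

Scheduler scheduler = createSchedulerWithProperties(props);
scheduler.start();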