My objective is to run multiple applications/services with some metadata embedded in them, so that I can query those applications/services by that metadata. Is this possible?
I was looking at the following post, and the answer hints at this possibility, but gives no specific details on how to achieve it.
The primary piece of "metadata" you get is the service/application instance name. That's what I talked about in my other post. The way that works is by creating each service/application instance with a name that contains information clients can use when resolving them. Clients can then query Service Fabric for named application/service instances and connect to a specific one. A service/application instance name is a URI, so you can use a path hierarchy to categorize information.
Continuing with the audio/video example: Let's extend that example so we have an application that can perform specific tasks for specific media formats for audio or video. Each combination of task + media format is a unique named service instance, resulting in a deployment that looks something like this:
Application:
fabric:/avapp
Services:
fabric:/avapp/video/encoding/mp4
fabric:/avapp/video/encoding/h264
fabric:/avapp/video/captioning/english
fabric:/avapp/video/captioning/czech
fabric:/avapp/audio/encoding/aac
fabric:/avapp/audio/encoding/mp3
etc.
Now clients can query Service Fabric to discover what services are available:
FabricClient fabricClient = new FabricClient();
System.Fabric.Query.ServiceList services = await fabricClient.QueryManager.GetServiceListAsync(new Uri("fabric:/avapp"));
Then you can simply query the list of services with LINQ. For example, if I want to see all services that do video encoding:
services.Where(x => x.ServiceName.AbsolutePath.Contains("video/encoding"));
And then you can resolve an address for a specific service to connect to it:
ServicePartitionResolver resolver = ServicePartitionResolver.GetDefault();
ResolvedServicePartition servicePartition = await resolver.ResolveAsync(new Uri("fabric:/avapp/video/encoding/h264"), new ServicePartitionKey(1), cancellationToken);
ResolvedServiceEndpoint endpoint = servicePartition.GetEndpoint();
There's a bit more to the address resolution part (see here), but that's the general idea.
Application instances also allow you to set custom application parameters (key-value pairs) that can be set per instance at creation time. They don't show up in the application name, but you get that information back when you ask Service Fabric for a list of running application instances. That can potentially also be used as metadata by clients when they need to decide what application to connect to.
Update: More info on application instance parameters:
When you create a new application instance you can supply a set of key-value pairs in the application description. Then when you query Service Fabric for application instances you get back a list of Application result objects that have said parameters. This also shows up in Visual Studio, in your application project, where you have environment-specific application parameter files. Visual Studio extracts those key-value pairs from the XML files and uses them in the application description when it creates an instance of your application.
I was wondering if there is any way to get the names of all the functions a deployed chaincode contains, along with the arguments each of them expects and their return types, so that the client application can use this information to minimize inconsistencies when calling them.
You can use one of the Fabric SDKs (e.g. the Node SDK) to submit a transaction with the argument 'org.hyperledger.fabric:GetMetadata'.
This will return a buffer containing your smart contract metadata, which will have information about your transactions, their arguments, etc.
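For reference, here is a rough equivalent sketch using the fabric-gateway-java SDK; the wallet directory, identity label, connection profile, channel name, and chaincode name are all placeholders, and this only works for chaincode written with the contract API (which is what serves the GetMetadata system transaction):

import java.nio.file.Path;
import java.nio.file.Paths;
import org.hyperledger.fabric.gateway.Contract;
import org.hyperledger.fabric.gateway.Gateway;
import org.hyperledger.fabric.gateway.Network;
import org.hyperledger.fabric.gateway.Wallet;
import org.hyperledger.fabric.gateway.Wallets;

public class ChaincodeMetadata {
    public static void main(String[] args) throws Exception {
        // Placeholder wallet directory and connection profile for your network.
        Wallet wallet = Wallets.newFileSystemWallet(Paths.get("wallet"));
        Path connectionProfile = Paths.get("connection.json");

        Gateway.Builder builder = Gateway.createBuilder()
                .identity(wallet, "user1")
                .networkConfig(connectionProfile);

        try (Gateway gateway = builder.connect()) {
            Network network = gateway.getNetwork("mychannel");
            Contract contract = network.getContract("mychaincode");

            // Evaluate the system transaction; the result is a JSON document
            // describing the contract's transactions, parameters, and return types.
            byte[] metadata = contract.evaluateTransaction("org.hyperledger.fabric:GetMetadata");
            System.out.println(new String(metadata));
        }
    }
}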
Are there any recommendations on querying remote state stores between application instances that are deployed in Kubernetes? Our application instances are deployed with 2 or more replicas.
Based on the documentation:
https://kafka.apache.org/10/documentation/streams/developer-guide/interactive-queries.html#id7
streams.allMetadataForStore("word-count")
    .stream()
    .map(streamsMetadata -> {
        // Construct the (fictitious) full endpoint URL to query the current remote application instance
        String url = "http://" + streamsMetadata.host() + ":" + streamsMetadata.port() + "/word-count/alice";
        // Read and return the count for 'alice', if any.
        return http.getLong(url);
    })
    .filter(s -> s != null)
    .findFirst();
Will streamsMetadata.host() result in the pod IP? And if it does, will the call from this pod to another be allowed? Is this the correct approach?
streamsMetadata.host()
This method returns whatever you configured via the application.server configuration parameter. I.e., each application instance (in your case, each pod) must set this config to provide the information about how it is reachable (e.g., its IP and port). Kafka Streams distributes this information to all application instances for you.
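For example, a minimal sketch of that configuration, assuming the pod IP is injected as a POD_IP environment variable (e.g. via the Kubernetes Downward API) and that your query endpoint listens on port 8080; the application id and bootstrap servers are likewise placeholders:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class WordCountStreamsConfig {
    static Properties streamsConfig() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "word-count-app"); // placeholder app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");  // placeholder broker
        // Advertise how THIS instance can be reached by the other instances.
        // Kafka Streams distributes this host:port pair in its metadata.
        props.put(StreamsConfig.APPLICATION_SERVER_CONFIG,
                System.getenv("POD_IP") + ":8080");
        return props;
    }
}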
You also need to configure your pods accordingly to allow sending/receiving query requests via the specified port. This part is additional code you need to write yourself, i.e., some kind of "query routing layer". Kafka Streams has only built-in support to query local state and to distribute the metadata about which state is hosted where; there is no built-in remote query support.
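A hedged sketch of such a routing layer, using the Kafka 1.0-era metadataForKey and store APIs; the store name "word-count" matches the example above, while WordCountQueryRouter and httpGetLong are hypothetical names:

import java.io.IOException;
import java.net.URL;
import java.util.Scanner;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.apache.kafka.streams.state.StreamsMetadata;

public class WordCountQueryRouter {

    private final KafkaStreams streams;
    private final String localEndpoint; // must equal this instance's application.server value

    public WordCountQueryRouter(KafkaStreams streams, String localEndpoint) {
        this.streams = streams;
        this.localEndpoint = localEndpoint;
    }

    // Return the count for a word, querying the local store when this
    // instance owns the key and forwarding over HTTP otherwise.
    public Long countFor(String word) {
        // Ask Kafka Streams which instance hosts the key.
        StreamsMetadata owner = streams.metadataForKey(
                "word-count", word, Serdes.String().serializer());
        String ownerEndpoint = owner.host() + ":" + owner.port();

        if (ownerEndpoint.equals(localEndpoint)) {
            // Key is on this instance: query the local state store directly.
            ReadOnlyKeyValueStore<String, Long> store =
                    streams.store("word-count", QueryableStoreTypes.keyValueStore());
            return store.get(word);
        }
        // Key lives on another instance: forward the query over HTTP.
        // That instance must expose an HTTP endpoint like this one; the
        // endpoint itself is the part you have to build yourself.
        return httpGetLong("http://" + ownerEndpoint + "/word-count/" + word);
    }

    private Long httpGetLong(String url) {
        try (Scanner s = new Scanner(new URL(url).openStream())) {
            return s.hasNextLong() ? s.nextLong() : null;
        } catch (IOException e) {
            throw new RuntimeException("Remote query to " + url + " failed", e);
        }
    }
}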
An example implementation (WordCountInteractiveQueries) of such a query routing layer can be found on GitHub: https://github.com/confluentinc/kafka-streams-examples
I would also recommend checking out the docs and this blog post:
https://docs.confluent.io/current/streams/developer-guide/interactive-queries.html
https://www.confluent.io/blog/unifying-stream-processing-and-interactive-queries-in-apache-kafka/
A customer has implemented an OPC-UA server and has provided some documentation for us to access it. The only information we have is the endpoint to contact the server at and the tags that the data points are linked to.
I have to implement a client without having access to the server to test it against. Is this enough information to go on? I imagine we would at least need a namespace URI. From what I understand, in order to use a function such as translateBrowsePathsToNodeIds I would also need to know some namespace IDs.
For instance, in python-opcua it would be something like:
mynode = client.uaclient.translate_browsepaths_to_nodeids(ua.QualifiedName("StaticData", 3))
(which somehow is not working, but that's another question)
It doesn't help that the client examples I find all seem to use hardcoded namespace IDs.
TranslateBrowsePathToNodeIds is generally used when programming against type definitions where you know what the path of BrowseNames will be because they are defined by the type definition of each node in the path.
If this doesn't sound like your situation, then you should push back and ask for the documentation to include the NodeIds of all the nodes you need to access.
Let's say there are two (or more) RESTful microservices serving JSON. Service (A) stores user information (name, login, password, etc.) and service (B) stores messages to/from that user (e.g. sender_id, subject, body, rcpt_ids).
Service (A) on /profile/{user_id} may respond with:
{id: 1, name:'Bob'}
{id: 2, name:'Alice'}
{id: 3, name:'Sue'}
and so on
Service (B) responding at /user/{user_id}/messages returns a list of messages destined for that {user_id} like so:
{id: 1, subj:'Hey', body:'Lorem ipsum', sender_id: 2, rcpt_ids: [1,3]},
{id: 2, subj:'Test', body:'blah blah', sender_id: 3, rcpt_ids: [1]}
How does the client application consuming these services handle putting the message listing together such that names are shown instead of sender/rcpt ids?
Method 1: Pull the list of messages, then pull profile info for each id listed in sender_id and rcpt_ids? That may require hundreds of requests and could take a while. It is rather naive and inefficient, and may not scale with complex apps.
Method 2: Pull the list of messages, extract all user ids, and make a bulk request for all relevant users separately... this assumes such a service endpoint exists. There is still a delay between getting the message listing, extracting the user ids, sending the request for bulk user info, and awaiting the bulk user info response.
Ideally I want to serve out a complete response set in one go (messages and user info). My research brings me to merging responses at the service layer... a.k.a. Method 3: the API Gateway technique.
But how does one even implement this?
I can obtain the list of messages, extract the user ids, make a call behind the scenes to obtain the user data, merge the result sets, and then serve this final result up... This works OK with two services behind the scenes... but what if the message listing depends on more services? What if I needed to query multiple services behind the scenes, further parse their responses, query more services based on secondary (tertiary?) results, and then finally merge... where does this madness stop? How does this affect response times?
And I've now effectively created another "client" that combines all microservice responses into one mega-response... which is no different from Method 1 above... except at the server level.
Is that how it's done in the "real world"? Any insights? Are there any open source projects that are built on such API Gateway architecture I could examine?
The solution we used for this problem was denormalization of data, with events for keeping it updated.
Basically, a microservice keeps a subset of the data it requires from other microservices locally, so that it doesn't have to call them at run time. This data is managed through events: when other microservices are updated, they fire an event with the id as context, which can be consumed by any microservice that has an interest in it. This way the data remains in sync (of course, it requires some form of failure handling for events). This seems like a lot of work, but it helps us with any future decisions regarding consolidation of data from different microservices. Our microservice always has all the data available locally to process any request without a synchronous dependency on other services.
In your case, i.e. for showing names with a message, you can keep an extra property for names in Service (B). Whenever a name is updated in Service (A), it fires an update event with the id of the updated name. Service (B) then consumes the event, fetches the relevant data from Service (A), and updates its database. This way, even if Service (A) is down, Service (B) will still function, albeit with some stale data that will eventually become consistent when Service (A) comes back up, and you will always have some name to show in the UI.
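A minimal sketch of that idea in plain Java; the event transport (Kafka, RabbitMQ, etc.) and the Service (A) client are stubbed out, and all names are hypothetical:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Runs inside Service (B): keeps a local, denormalized copy of user names.
public class UserNameProjection {

    private final Map<Integer, String> namesById = new ConcurrentHashMap<>();
    private final UserServiceClient serviceA; // hypothetical client for Service (A)

    public UserNameProjection(UserServiceClient serviceA) {
        this.serviceA = serviceA;
    }

    // Invoked whenever Service (A) publishes a "user updated" event
    // carrying only the id of the user that changed.
    public void onUserUpdated(int userId) {
        // Fetch the fresh data from Service (A) and update the local copy.
        namesById.put(userId, serviceA.fetchName(userId));
    }

    // Used when rendering messages: no synchronous call to Service (A),
    // and a stale-but-present name when Service (A) is down.
    public String nameFor(int userId) {
        return namesById.getOrDefault(userId, "unknown");
    }

    // Hypothetical interface; implement it with your HTTP client of choice.
    public interface UserServiceClient {
        String fetchName(int userId);
    }
}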
https://enterprisecraftsmanship.com/2017/07/05/how-to-request-information-from-multiple-microservices/
You might want to perform response aggregation strategies on your API gateway. I've written an article on how to do this with ASP.NET Core and Ocelot, but there should be a counterpart for other API gateway technologies:
https://www.pogsdotnet.com/2018/09/api-gateway-response-aggregation-with.html
You need to write another service, called an Aggregator, which will internally call both services, get the responses, merge/filter them, and return the desired result. This can easily be achieved in a non-blocking way using Mono/Flux in Spring Reactive.
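A minimal sketch of such an aggregator using Spring WebFlux; the service base URLs, the DTO shapes, and the bulk /profiles?ids=... endpoint on Service (A) are assumptions for illustration (recipient names could be resolved the same way as sender names):

import java.util.List;
import java.util.stream.Collectors;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

public class MessageAggregator {

    // Hypothetical DTOs matching the JSON shapes shown in the question.
    record Profile(int id, String name) {}
    record Message(int id, String subj, String body, int sender_id, List<Integer> rcpt_ids) {}
    record EnrichedMessage(int id, String subj, String body, String senderName) {}

    private final WebClient serviceA = WebClient.create("http://service-a"); // profiles
    private final WebClient serviceB = WebClient.create("http://service-b"); // messages

    public Mono<List<EnrichedMessage>> messagesFor(int userId) {
        return serviceB.get()
                .uri("/user/{id}/messages", userId)
                .retrieve()
                .bodyToFlux(Message.class)
                .collectList()
                .flatMap(messages -> {
                    // One bulk call for every user id referenced by the messages
                    // (assumes Service (A) exposes such an endpoint).
                    String ids = messages.stream()
                            .map(m -> String.valueOf(m.sender_id()))
                            .distinct()
                            .collect(Collectors.joining(","));
                    return serviceA.get()
                            .uri("/profiles?ids={ids}", ids)
                            .retrieve()
                            .bodyToFlux(Profile.class)
                            .collectMap(Profile::id, Profile::name)
                            .map(names -> messages.stream()
                                    .map(m -> new EnrichedMessage(m.id(), m.subj(), m.body(),
                                            names.getOrDefault(m.sender_id(), "unknown")))
                                    .toList());
                });
    }
}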
An API Gateway often does API composition.
But this is a typical engineering problem when you have microservices implementing the database-per-service pattern.
The API Composition and Command Query Responsibility Segregation (CQRS) patterns are useful ways to implement such queries.
Ideally I want to serve out a complete response set in one go (messages and user info).
The problem you've described is what Facebook recognized years ago and decided to tackle by creating an open-source specification called GraphQL.
But how does one even implement this?
It has already been implemented in various popular programming languages, so you can give it a try in the programming language of your choice.
Is there a way to request the cluster id from a cluster name?
Context:
In our context the default cluster allows child clusters to see, but not update, the data. I would like to notify the user that it is looking at data provided by a different cluster than its own.
Example
Cluster Product
#15:0 // cluster name: product. Product provided by the head office
#115:0 // cluster name: productmybranch. Product provided by the branch
The branch sees the product from the head office but should not update it; the ORoles take care of that.
How can I know that a branch user is currently looking at a product from the head office?
I was looking for a request like this (the query below does not work):
select metadata:cluster from cluster:productmybranch
Or any way via the Java API.
(running on orientdb 2.1.0)
This might help you, at least with knowing that it isn't directly possible:
Can I access Cluster/s name for objects during a query on OrientDB?
Scott
I have done it through the Java API. The call to getClusterIdByName just looks up the class/table name in a ConcurrentHashMap that the OrientDB Java API uses.
// Acquire a pooled object-database connection.
OObjectDatabaseTx objectDB = OObjectDatabasePool.global().setup(1, 4).acquire(dbName, dbUser, dbPassword);
// Resolve the cluster id for a given class/cluster name.
int clusterId = objectDB.getClusterIdByName("ClassName");
objectDB.close();
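Building on that, here is a hedged sketch against the OrientDB 2.x document API (the URL, credentials, and RID are placeholders) showing how to tell whether a loaded record lives in the branch's own cluster or came from another one, such as the head office:

import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;
import com.orientechnologies.orient.core.id.ORecordId;
import com.orientechnologies.orient.core.record.impl.ODocument;

public class ClusterCheck {
    public static void main(String[] args) {
        ODatabaseDocumentTx db =
                new ODatabaseDocumentTx("remote:localhost/mydb").open("admin", "admin");
        try {
            // Load a product record; its RID encodes the cluster it lives in.
            ODocument product = db.load(new ORecordId("#15:0"));
            int recordCluster = product.getIdentity().getClusterId();     // e.g. 15
            String clusterName = db.getClusterNameById(recordCluster);    // e.g. "product"
            int branchCluster = db.getClusterIdByName("productmybranch"); // e.g. 115
            if (recordCluster != branchCluster) {
                // The record comes from another cluster, e.g. the head office;
                // this is the point where you could notify the user.
                System.out.println("Record comes from cluster: " + clusterName);
            }
        } finally {
            db.close();
        }
    }
}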