Apache Camel and Drools Fusion Integration

Has anyone tried integrating Apache Camel with Drools Fusion, or just Drools?
Following is my use case.
Get data from an external service using REST.
Filter the data (using rules defined in Drools).
The data from the external service could also be a stream of information (e.g., a Twitter feed, or the real-time location of a user).
Any help or pointers would be appreciated.
Thanks.

Drools has a Camel component. Using it is not much different from using any other Camel component.
source: https://github.com/droolsjbpm/droolsjbpm-integration/tree/master/drools-camel
binary (in the droolsjbpm-integration bundle): http://www.jboss.org/drools/downloads.html
The only thing to be "aware" of is that Drools can treat Camel messages as:
commands
regular facts
as-is objects, and re-route them
Some articles:
http://blog.athico.com/search?q=camel
The documentation unfortunately only describes the "command" use case (1):
http://docs.jboss.org/drools/release/5.4.0.Beta2/droolsjbpm-integration-docs/html/ch01.html
Some test cases you can use as examples for the use cases (2) and (3) above:
https://github.com/droolsjbpm/droolsjbpm-integration/tree/master/drools-camel/src/test/java/org/drools/camel/component
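To make use cases (2)/(3) a bit more concrete for your REST scenario, here is a rough sketch that polls a REST service from a Camel route and inserts each payload into a Drools stateful session as a regular fact. It deliberately skips the drools-camel endpoint so the moving parts are visible; it assumes the Drools 5.x knowledge API, and the endpoint URL, rule file name and fact handling are placeholders, not a definitive recipe:

```java
import org.apache.camel.builder.RouteBuilder;
import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;

public class RestToDroolsRoute extends RouteBuilder {

    // One long-lived session so Fusion-style temporal rules can see the whole stream.
    private final StatefulKnowledgeSession ksession = buildSession();

    private static StatefulKnowledgeSession buildSession() {
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newClassPathResource("filter-rules.drl"), ResourceType.DRL);
        KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
        kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());
        return kbase.newStatefulKnowledgeSession();
    }

    @Override
    public void configure() {
        // Poll the external REST service every 5 seconds; a streaming source
        // (e.g. Twitter) would simply be a different from(...) endpoint.
        from("timer:poll?period=5000")
            .to("http://example.com/api/data")          // placeholder REST endpoint
            .convertBodyTo(String.class)
            .process(exchange -> {
                // In a real flow you would unmarshal the body into a domain
                // class first; the rules then filter/route the inserted facts.
                ksession.insert(exchange.getIn().getBody(String.class));
                ksession.fireAllRules();
            });
    }
}
```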
Hope this helps.

Related

Recommended way to handle REST parameters in Spring cloud function

I really like the way Spring Cloud Function decouples the business logic from the runtime target (local or cloud) and makes it easy to integrate with serverless providers.
I plan to use SCF with AWS Lambda behind an API Gateway to design the backend of a system.
However, I am not completely clear on the recommended way to handle REST-related parameters such as query params, headers, path, etc. inside the Spring Cloud Functions.
As per our initial analysis, we could derive two possible approaches:
When enabling “Lambda proxy integration” in API Gateway, Query params and other information are available as Message headers inside the SCF.
We can use “Mapping templates” in API Gateway to map all the required information into a JSON body and deserialize as a POJO to take input directly into the SCF.
This way, the SCF does not need to bother about how the required data is passed to the API.
What is the recommended way to achieve this? Are we missing something that would enable us to do this in a better way?
I don't think you are missing anything feature-wise, except perhaps that it might also be convenient to work with composite functions, e.g. marshal|transform, where marshal is a Function<Message<?>, ?> and transform is the business logic. The marshal function could be generic (and convert to some sort of canonical form), and be provided as an autoconfiguration in a shared library (for instance).
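As a rough sketch of that idea (the header name "name" and the CanonicalRequest/Greeting types are placeholders, and whether query params actually arrive as those headers depends on the AWS adapter and proxy configuration):

```java
import java.util.function.Function;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;

@Configuration
public class FunctionsConfig {

    // Generic "marshal" step: pull REST details out of the Message headers and
    // build a canonical request object. Could live in a shared library.
    @Bean
    public Function<Message<String>, CanonicalRequest> marshal() {
        return message -> new CanonicalRequest(
                (String) message.getHeaders().getOrDefault("name", "world"),
                message.getPayload());
    }

    // Pure business logic, unaware of API Gateway / Lambda specifics.
    @Bean
    public Function<CanonicalRequest, Greeting> transform() {
        return request -> new Greeting("Hello, " + request.name());
    }

    public record CanonicalRequest(String name, String body) { }
    public record Greeting(String message) { }
}
```

The two beans would then be composed with spring.cloud.function.definition=marshal|transform, keeping the business function free of any REST plumbing.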

Multiple GET REST APIs on different fields of a table or one REST API in DDD

I want to provide functionality for clients of my service to get data based on different fields, or sometimes a combination of fields. E.g.
getByA
getByB
getByC
getByAandB
getByAandC
In domain-driven design, while designing the GET APIs, which of the following two should I do:
Should I create an individual GET API for each such functionality I want to provide?
Should I create one GET API covering all the possible gets by using all these fields as query parameters? E.g.
get?A=?&B=?&C=?
Which one is the better way to do this? Any suggestions on best practice?
There is a middle path between using individual GET APIs for each of these queries and creating one GET API.
You could use the Specification pattern: expose one GET API, but translate the incoming query parameters into a Domain Specification Object before passing it on to the Domain layer for querying. You typically do this transformation in your View Controller, before invoking the Application Service.
Martin Fowler and Eric Evans have published a great paper on using Specifications: https://martinfowler.com/apsupp/spec.pdf
As the paper states, "The central idea of Specification is to separate the statement of how to match a candidate, from the candidate object that it is matched against."
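As a rough illustration of that idea (the Item type, field names and factory below are placeholders mirroring the A/B/C fields from the question, not a full implementation of the paper's composite specifications):

```java
import java.util.ArrayList;
import java.util.List;

// "How to match a candidate", kept separate from the candidate itself.
interface Specification<T> {
    boolean isSatisfiedBy(T candidate);

    default Specification<T> and(Specification<T> other) {
        return candidate -> isSatisfiedBy(candidate) && other.isSatisfiedBy(candidate);
    }
}

// Placeholder domain object carrying the question's A/B/C fields.
record Item(String a, String b, String c) { }

final class ItemSpecifications {
    // View-Controller-side translation: build one specification from whichever
    // query parameters were actually supplied (null means "not filtered on").
    static Specification<Item> fromQueryParams(String a, String b, String c) {
        List<Specification<Item>> parts = new ArrayList<>();
        if (a != null) parts.add(item -> a.equals(item.a()));
        if (b != null) parts.add(item -> b.equals(item.b()));
        if (c != null) parts.add(item -> c.equals(item.c()));
        return parts.stream().reduce(item -> true, Specification::and);
    }
}
```

The Application Service (or repository) then only ever receives a single Specification<Item>, regardless of which of the A/B/C parameters the client combined.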
Note:
You are fine if you are using this pattern for the query side as you have outlined in your question, and avoid reusing it in different contexts. For example, DO NOT use a specification object on both the query side and the command side if you are using (or plan to use) CQRS. You would be creating a central dependency between two parts that NEED to be kept separate.
Specifications are handy when you want to represent a domain concept. Evaluate your queries (getByAandB and getByAandC) to draw out the question you are asking of the domain (for example, ask your domain expert to describe the data they are trying to fetch).

How to connect Esper CEP engine with DDS

I believe I am missing something related to the DDS concept. My idea is to use an EsperIO adapter, data flow, or plug-in to insert incoming events from DDS into an Esper engine, but I can't see it clearly.
Can somebody help? (Thanks in advance.)
The step-by-step would be:
1) receive event data from DDS, i.e. from a Java DataReader
2) build an event object that Esper can understand; for this use a JavaBean-style class for example
3) send event object into Esper
There is no need to build an adapter or use EsperIO. The API to feed events into Esper is really simple. For API code see http://www.espertech.com/esper/longer-case-study/ or Esper docs.
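A minimal sketch of steps 2) and 3), assuming the classic (pre-8.x) Esper API and a made-up SensorReading event type; step 1) would be your DDS DataReader listener calling sendEvent from its callback:

```java
import com.espertech.esper.client.Configuration;
import com.espertech.esper.client.EPServiceProvider;
import com.espertech.esper.client.EPServiceProviderManager;

public class DdsToEsper {

    // Step 2: a JavaBean-style event class Esper can introspect.
    public static class SensorReading {
        private final String sensorId;
        private final double value;

        public SensorReading(String sensorId, double value) {
            this.sensorId = sensorId;
            this.value = value;
        }
        public String getSensorId() { return sensorId; }
        public double getValue() { return value; }
    }

    public static void main(String[] args) {
        Configuration config = new Configuration();
        config.addEventType("SensorReading", SensorReading.class);
        EPServiceProvider epService = EPServiceProviderManager.getDefaultProvider(config);

        // A trivial EPL statement just to show something reacting to the events.
        epService.getEPAdministrator()
                .createEPL("select sensorId, value from SensorReading where value > 100")
                .addListener((newEvents, oldEvents) ->
                        System.out.println("match: " + newEvents[0].get("sensorId")));

        // Step 3: this is the only call your DDS DataReader callback needs to make.
        epService.getEPRuntime().sendEvent(new SensorReading("sensor-1", 120.0));
    }
}
```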

Conditional routing in Apache NiFi

I'm using NiFi to get data from an Oracle database and put some of this data in Kafka (using the processor PutKafka).
Example: route the data to Kafka only if the attribute "id" contains "aaabb".
Is that possible in Apache NiFi? How can I do it?
This should definitely be possible; the flow might be something like this...
1) ExecuteSQL or QueryDatabaseTable to get the data from the database; these produce Avro
2) ConvertAvroToJSON processor to convert the Avro to JSON
3) EvaluateJsonPath to extract the id field into an attribute
4) RouteOnAttribute to route flow files where the id attribute contains "aaabb"
5) PutKafka to deliver any of the matching results from RouteOnAttribute
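For step 4), the check itself would be a NiFi Expression Language expression set as a dynamic property on RouteOnAttribute; the property name "matched" below is an arbitrary choice:

```
RouteOnAttribute
  Routing Strategy : Route to Property name
  matched          : ${id:contains('aaabb')}
```

Flow files routed to the "matched" relationship are then connected to PutKafka; the "unmatched" relationship can be auto-terminated or sent elsewhere.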
To add on to Bryan's example flow, I wanted to point you to some great documentation that should help introduce you to Apache NiFi.
Firstly, I would suggest checking out the NiFi documentation. It is very good and should help a lot. In addition to providing details on each of the processors Bryan mentioned, it also has general documentation for every type of user.
For a basic introduction to building a NiFi flow, check out this video.
For example templates, check out this repo. It has an Excel file at its root level with a description and a list of processors for each template.

A RESTful container contains many things, but how do I know how to interact with it and what it contains?

What I meant was: How do we know what requests a particular URI of the container accepts and what parameters we can use?
For example:
the container URI: http://example.com/containers/container1
-> Now I want to know a way to access the metadata of the container. How do I do it?
The main reason I am trying to ask this question is I am working on migration of Fedora Commons from 3 to 4. And I am confused by many different schemas and notations. In some places, they use http://something.com/smthng/fcr:metadata.
In other places, they use http://something.com/smthng/metadata. Sometimes the fedora namespace will work in the URI and sometimes it won't. I am confused.
I want a way to discover all the accepted conventions on a URI.
I really don't think you have to go beyond Wikipedia's Uniform Resource Identifier definition to understand the standards for URIs, URLs and URNs.
However, the question is more likely about the Linked Data Platform. If you go to Concept Mapping - Fedora 3 to 4, the links for Fedora 4 go to the W3C Linked Data Platform (LDP) recommendation. That basically states how the REST API works to query RDF data.
LDP containers are a way to partition RDF data so you can query the container and get a list of RDF resources. I don't think there is a way to query their metadata. The set of available containers is defined in the data itself, and containers are not required; i.e., the data may be entirely defined with resources, and containers are just a way to partition RDF data. If you have SPARQL access to the data, one idea is to query the data looking for LDP containers. Then you can send REST requests to get that data.
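If you do have SPARQL access, the container lookup could be as simple as the sketch below (Apache Jena is used here purely as an example client, and the endpoint URL is a placeholder):

```java
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;

public class FindLdpContainers {
    public static void main(String[] args) {
        // Look for anything typed as an LDP container; depending on the data you
        // may also need the subclasses ldp:BasicContainer / ldp:DirectContainer.
        String sparql =
            "PREFIX ldp: <http://www.w3.org/ns/ldp#> " +
            "SELECT ?container WHERE { ?container a ldp:Container }";

        // Placeholder endpoint URL; point this at your repository's SPARQL service.
        try (QueryExecution qe =
                 QueryExecutionFactory.sparqlService("http://example.com/sparql", sparql)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.getResource("container").getURI());
            }
        }
    }
}
```

Each URI returned can then be dereferenced with a normal GET (asking for Turtle or JSON-LD, for example) to see what the container holds.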
(BTW, an RDF text serialization is a text-based representation of RDF graphs. Using a text serialization allows users to exchange data in a standard format. RDF standards include RDF/XML, Turtle, N-Triples, and JSON-LD.)