Is it possible to get an insertLogical fact from REST, or add a custom REST URL, in Drools 6.2?

I have a rule that insertLogicals another fact in its RHS. In Drools 6.2 we can deploy the rules in a container and then fire the rules on that container. When I run the POST request for fireAllRules (batch-execution), I can only get back the facts that I inserted myself. There seems to be no way to access the logically inserted fact. Even getObjects expects a fact handle, and since I did not insert the fact, there is no way to get it. Is there an option to retrieve a fact inserted in the RHS?
The other option I thought of was to add another REST URL exposed from within the container. This URL could fire the rules locally from within the container and pass back custom objects. Is this possible?

A simple solution for your situation could be to define a query in your DRL that returns the logically inserted fact.
Using a batch-execution command you can then execute that query and retrieve its results.
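For instance (the fact type FeeAssigned, the query name, and the session lookup are placeholders for your own model, not anything Drools-defined), the DRL query might look like:

```
query "getLogicalFacts"
    $f : FeeAssigned()
end
```

and the REST batch-execution payload could then be something along these lines:

```xml
<batch-execution lookup="defaultKieSession">
  <fire-all-rules/>
  <query out-identifier="logicalFacts" name="getLogicalFacts"/>
</batch-execution>
```

The query results should come back in the response body under the logicalFacts out-identifier, so you never need a fact handle for the logically inserted object.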
Hope it helps,

Related

Ansible: Check (GET) before Applying (POST/PUT) if applying is idempotent

I am writing roles in Ansible which use the ansible.builtin.uri module to work against the API of the service.
As I don't want to POST/PUT every time I run the playbook, I check whether the item I want to create already exists.
Does this make sense? In the end I introduce an extra GET step to decide whether to skip the POST/PUT, where the POST/PUT itself would simply set what I want in the end.
For example, I wrote an Ansible role which gives a user a role in Nexus.
Every time I run the role, it first checks whether the user already has the role and, if not, grants it.
If I didn't check first and the user already had the role, the request would simply apply it again.
But as I would like to know exactly what's going to happen, I believe it is better to check explicitly before applying.
What is the best practice for my scenario, and are there any reasons against checking first rather than applying the changes directly?
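A sketch of the check-then-apply pattern described above, assuming hypothetical Nexus endpoints and variables (nexus_url, user_id, role_id, and the shape of the roles field in the response are all made up for illustration):

```yaml
- name: Look up the user's current roles
  ansible.builtin.uri:
    url: "{{ nexus_url }}/service/rest/v1/users/{{ user_id }}"
    method: GET
    status_code: [200, 404]
  register: user_lookup

- name: Grant the role only when it is missing
  ansible.builtin.uri:
    url: "{{ nexus_url }}/service/rest/v1/users/{{ user_id }}/roles"
    method: POST
    body_format: json
    body:
      role: "{{ role_id }}"
  when: role_id not in (user_lookup.json.roles | default([]))
```

The when guard is what makes the role report "changed" only when it actually does something, which is the usual argument for check-then-apply over blindly re-POSTing against an API that is not itself idempotent.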

Is there a way to prevent Spring Cloud Gateway from reordering query parameters?

Spring Cloud Gateway appears to be reordering my query parameters so that duplicate parameters are grouped together.
I'm trying to route some requests hitting one of our endpoints on to a third-party system. These requests include query parameters that need to be in a specific order (including some duplicate parameters), or the third-party system returns a 500 error. Although the initial request arrives with the parameters in the proper order, Spring Cloud Gateway reorders them, grouping the duplicates at the first occurrence of the parameter.
Example:
http://some-url.com/a/path/here?foo=bar&anotherParam=paramValue2&aThirdParam=paramValue3&foo=bar
Becomes:
http://some-url.com/a/path/here?foo=bar&foo=bar&anotherParam=paramValue2&aThirdParam=paramValue3
The last parameter was moved next to the first because they have the same name.
What I actually need is for the query parameters to be passed through unchanged.
The issue lies in the UriComponentsBuilder used by RouteToRequestUrlFilter.
UriComponentsBuilder.fromUri(uri) builds up a map of query params. Because this is a LinkedMultiValueMap, you see the reordering of the query params.
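The effect is easy to reproduce without Spring at all. In this sketch a plain LinkedHashMap<String, List<String>> stands in for Spring's LinkedMultiValueMap: all values for foo live under a single key, so re-serializing the map necessarily emits the duplicates adjacently.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class QueryReorderDemo {

    // Parse a query string into a name -> values map, the way a
    // multi-value map keyed by parameter name stores it internally.
    static Map<String, List<String>> parse(String query) {
        Map<String, List<String>> params = new LinkedHashMap<>();
        for (String pair : query.split("&")) {
            String[] kv = pair.split("=", 2);
            params.computeIfAbsent(kv[0], k -> new ArrayList<>()).add(kv[1]);
        }
        return params;
    }

    // Re-serialize the map: values sharing a name come out adjacent,
    // because they are all stored under one key.
    static String serialize(Map<String, List<String>> params) {
        StringBuilder sb = new StringBuilder();
        params.forEach((name, values) -> values.forEach(v -> {
            if (sb.length() > 0) sb.append('&');
            sb.append(name).append('=').append(v);
        }));
        return sb.toString();
    }

    public static void main(String[] args) {
        String in = "foo=bar&anotherParam=paramValue2&aThirdParam=paramValue3&foo=bar";
        // prints foo=bar&foo=bar&anotherParam=paramValue2&aThirdParam=paramValue3
        System.out.println(serialize(parse(in)));
    }
}
```

This is why the second foo=bar ends up next to the first one: the original relative order of duplicate names is lost the moment the query string is parsed into the map.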
Note that RFC3986 contains the following
The query component contains non-hierarchical data that, along with data in the path component (Section 3.3), serves to identify a resource within the scope of the URI’s scheme and naming authority (if any).
Therefore I don't think this needs a fix in Spring Cloud Gateway.
To fix it in your own gateway, you'll need to add a custom filter which kicks in after the RouteToRequestUrlFilter, by setting its order to RouteToRequestUrlFilter.ROUTE_TO_URL_FILTER_ORDER + 1.
Take a look at RouteToRequestUrlFilter to see how the exchange is adapted to go to the downstream URI.
Hope that helps! :)

Renewing instances in Autofac

I know that the context of this issue is a bit specific, but I'll do my best to explain it. I'm performing a fairly big import from another e-commerce platform into nopCommerce.
nopCommerce uses Autofac as its dependency injection container. Importing one product into nopCommerce involves some queries over nopCommerce tables and finally an insertion into the products table. These steps are repeated many times, and the Entity Framework context keeps growing, as it has to track more and more entities while trying to detect changes and figure out how many objects it has to persist.
What I want to do is renew the context in every iteration of the loop, so that it only tracks the entities associated with the current iteration. Obviously I want to achieve this while modifying the nopCommerce core as little as possible. In the container configuration, it is explicitly set that EF context instances are provided per HTTP request (something I want to avoid, as I need a new instance per iteration).
An easy way to do it would be:
foreach job in jobs:
    eject all instances from the container
    service1 = Container.RequestInstance<SomeServiceINeed>
    service2 = Container.RequestInstance<SomeServiceINeed2>
    DoTheJob
The thing is, I don't know how to accomplish this with Autofac. I have been trying to create a new ContainerBuilder and update the existing container, but _context.GetHashCode() always returns the same value, i.e. I keep getting the same instance.
Any idea about the best way to do it?
EDIT:
As it was suggested in the comments, I've tried to get the instances inside a lifetime scope. Basically:
using (var lifeTime = EngineContext.Current.ContainerManager.Container.BeginLifetimeScope())
{
    service1 = lifeTime.Resolve<SomeServiceINeed>();
    service2 = lifeTime.Resolve<SomeServiceINeed2>();
    // ...
}
But I get this exception:
No scope with a Tag matching 'AutofacWebRequest' is visible from the scope in
which the instance was requested. This generally indicates that a component
registered as per-HTTP request is being requested by a SingleInstance() component
(or a similar scenario.) Under the web integration always request dependencies from
the DependencyResolver.Current or ILifetimeScopeProvider.RequestLifetime,
never from the container itself.
The services I'm trying to resolve obviously also depend on a lot of different repositories and other services that are already wired up in the container (at app start). Some of them are configured as per-HTTP-request.
Thanks a lot!

Marklogic REST API search for latest document version

We need to restrict a MarkLogic search to the latest version of managed documents, using MarkLogic's REST API. We're using MarkLogic 6.
Using straight XQuery, you can use dls:documents-query() as an additional-query option (see
Is there any way to restrict marklogic search on specific version of the document).
But the REST API requires XML, not arbitrary XQuery. You can turn ordinary cts queries into XML easily enough (execute <some-element>{cts:word-query("hello world")}</some-element> in Query Console).
If I try that with dls:documents-query() I get this:
<cts:properties-query xmlns:cts="http://marklogic.com/cts">
<cts:registered-query>
<cts:id>17524193535823153377</cts:id>
</cts:registered-query>
</cts:properties-query>
Apart from being less than totally transparent... how safe is that number? We'll need to put it in our query options, so it's not something we can regenerate every time we need it. I've looked at two different installations here and the number's the same, but is it guaranteed to be the same, and will it ever change? On, for example, a MarkLogic upgrade?
Also, assuming the number is safe, will the registered-query always be there? The documentation says that registered queries may be cleared by the system at various times, but it's talking about user-defined registered queries, and I'm not sure how much of that applies to internal queries.
Is this even the right approach? If we can't do this we can always set up collections and restrict the search that way, but we'd rather use dls:documents-query if possible.
The number is a registered query id, and is deterministic. That is, it will be the same every time the query is registered. That behavior has been invariant across a couple of major releases, but is not guaranteed. And as you already know, the server can unregister a query at any time. If that happens, any query using that id will throw an XDMP-UNREGISTERED error. So it's best to regenerate the query when you need it, perhaps by calling dls:documents-query again. It's safest to do this in the same request as the subsequent search.
So I'd suggest extending the REST API with your own version of the search endpoint. Your new endpoint could add dls:documents-query to the input query. That way the registered query would be generated in the same request with the subsequent search. For ML6, http://docs.marklogic.com/6.0/guide/rest-dev/extensions explains how to do this.
The call to dls:documents-query() makes sure the query is actually registered (on the fly if necessary), but that won't work from the REST API. You could extend the REST API with a custom extension as Mike suggested, but you could also use the following:
cts:properties-query(
  cts:and-not-query(
    cts:element-value-query(
      xs:QName("dls:latest"),
      "true",
      (),
      0
    ),
    cts:element-query(
      xs:QName("dls:version-id"),
      cts:and-query(())
    )
  )
)
That is the query that dls:documents-query() registers. It might not be future-proof, though, so check it at each upgrade. You can find the definition of the function in /Modules/MarkLogic/dls.xqy.
HTH!

Exposing database query parameters via REST interface

I have the basics of a REST service done, with "standard" list and GET/POST/PUT/DELETE verbs implemented around my nouns.
However, the client base I'm working with also wants to have more powerful operations. I'm using Mongo DB on the back-end, and it'd be easy to expose an "update" operation. This page describes how Mongo can do updates.
It'd be easy to write a page that takes a couple of JSON/XML/whatever arguments for the "criteria" and the "objNew" parts of the Mongo update function. Maybe I make a page like http://myserver.com/collection/update that takes a POST (or PUT?) request, with a request body that contains that data. Scrub the input for malicious querying and to enforce security, and we're done. Piece of cake.
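For illustration only (the endpoint is the one proposed above, and the body fields simply mirror the criteria/objNew pair of Mongo's update function; none of this is a fixed API), such a request might look like:

```
POST /collection/update HTTP/1.1
Content-Type: application/json

{
  "criteria": { "qty": { "$gt": 10 } },
  "objNew":   { "$set": { "onReorder": true } }
}
```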
My question is: what's the "best" way to expose this in a RESTful manner? Obviously the approach I described above isn't kosher, because "update" isn't a noun. This sort of thing seems much more suitable for a SOAP/RPC approach, but the rest of the service already uses REST over HTTP, and I don't want users to have to make two different types of calls.
Thoughts?
Typically, I would handle this as:
url/collection
url/collection/item
GET collection: Returns a representation of the collection resource
GET collection/item: Returns a representation of the item resource
(optional URI params for content-types: json, xml, txt, etc)
POST collection/: Creates a new item (if via XML, I use XSD to validate)
PUT collection/item: Update an existing item
DELETE collection/item: Delete an existing item
Does that help?
Since, as you're aware, it isn't a good fit for REST, you're just going to have to do your best and invent a convention to make it work. Mongo's update functionality is so far removed from REST that I'd actually allow PUTs on the collection. Ignore the parameters in my examples; I haven't thought too hard about them.
PUT collection?set={field:value}
PUT collection?pop={field:1}
Or:
PUT collection/pop?field=1