MicroProfile LRA - How to define custom participant URI on a WildFly JEE application? - wildfly

When the MicroProfile LRA coordinator and its participants run in different Docker containers, a custom URI needs to be defined for each participant.
Otherwise the LRA coordinator tries to call the participants' compensate/complete endpoints using "localhost"-based URIs.
Is it possible, in a WildFly environment, to define a custom URI for a participant?
And in general, is it possible to define how participants register with an LRA?

By default, the participant endpoints are derived from the caller information provided by the JAX-RS UriInfo class, which contains the base URI used to call the participant. The idea is that the coordinator should be able to call the participant on the same URI that the original request to the participant came in on.
I've created a simple Docker image https://hub.docker.com/repository/docker/xstefank/uriinfo-wildfly which, on the path /uriinfo/ping, returns the base URI that will be used for the participant URLs.
Calling this locally gives you Base URI is http://localhost:8080/uriinfo/, which is expected when called from the local machine. Deploying this image to OpenShift/Kubernetes and exposing a route for the application gives you Base URI is http://uriinfo-wildfly-testing.6923.rh-us-east-1.openshiftapps.com/uriinfo/. And finally, calling it from a different pod deployed in the same project gives you Base URI is http://uriinfo-wildfly:8080/uriinfo/. This corresponds to the actual calls:
$ curl localhost:8080/uriinfo/ping
Base URI is http://localhost:8080/uriinfo/
$ curl http://uriinfo-wildfly-testing.6923.rh-us-east-1.openshiftapps.com/uriinfo/ping
Base URI is http://uriinfo-wildfly-testing.6923.rh-us-east-1.openshiftapps.com/uriinfo/
# called from different pod in the same openshift project
$ curl http://uriinfo-wildfly:8080/uriinfo/ping
Base URI is http://uriinfo-wildfly:8080/uriinfo/
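For reference, the resource behind /uriinfo/ping is presumably not much more than a plain JAX-RS endpoint that echoes UriInfo#getBaseUri(); a minimal sketch (the class and path names are illustrative):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.UriInfo;

@Path("/ping")
public class PingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String ping(@Context UriInfo uriInfo) {
        // getBaseUri() is the same value the LRA participant registration
        // derives its compensate/complete URLs from
        return "Base URI is " + uriInfo.getBaseUri();
    }
}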
Another option is to manually register the URLs you require through the NarayanaLRAClient#joinLRA methods, which take either individual URLs for the different LRA endpoints or a base URI from which the URLs are derived as described above.

Related

Create Service broker for Play Framework Api

I have created a sample Play Framework API which has one endpoint:
http://play-demo-broker.cfapps.io/say?number=20
It just returns the number that is passed.
I am able to deploy the service successfully. Next I want this service to act as a service broker.
To do so, I want to register it using the command below:
cf create-service-broker play-demo-broker admin admin http://play-demo-broker.cfapps.io --space-scoped
This command is giving me the error below:
The service broker rejected the request. Status Code: 404 Not Found
I am not sure what is causing this issue, as there is not much information available on setting up a Play Framework service broker.
The Play Framework is implemented on top of Akka, and Akka rejects paths that are not implemented.
If I am not mistaken, the cf create-service-broker command accesses the / endpoint. If you implemented only the say?number=20 endpoint, then by default all other paths, such as the empty path, are rejected by Akka.
In order to open that endpoint you need to add it to the routes file.
For example you can add:
GET / controllers.ControllerName.GetEmptyPath
And implement the GetEmptyPath method in ControllerName.
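A minimal Java controller along those lines could look like this (the controller and method names simply mirror the route above; whether the action must be static depends on your Play version and routes generator):

package controllers;

import play.mvc.Controller;
import play.mvc.Result;

public class ControllerName extends Controller {

    // Answers GET / so that probes of the root path no longer fall through
    // to Akka's default rejection (HTTP 404)
    public Result GetEmptyPath() {
        return ok("OK");
    }
}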

Is it possible to have a single frontend select between backends (defined dynamically)?

I am currently looking into deploying Traefik/Træfik on our Service Fabric cluster.
Basically I have a setup with any number of applications (services), each defined with a tenant name, and each of these services is in fact a separate web UI.
I am trying to figure out if I can configure a single frontend to target a backend so I don't have to define a new frontend each time I deploy a new UI app. Something like:
[frontend.tenantui]
rule = "HostRegexp:localhost,{tenantName:[a-z]+}.example.com"
backend = "fabric:/WebApp/{tenantName}"
The idea is to have it such that I can just deploy new UI services without updating the frontend configuration.
I am currently using the Service Fabric provider for my backend services, but I am open to using the file provider or something else if that is required.
Update:
The service manifest contains labels that let Traefik create backends and frontends.
The labels are defined for one service, let's call it WebUI as an example. Now when I deploy an instance of WebUI it gets a label and Traefik understands it.
Then I deploy ANOTHER instance with a DIFFERENT set of parameters. It's still the WebUI service and it uses the same manifest, so it gets the same labels and the same routing. But what I would really want is to give it a label containing some sort of rule so I could route based on the name of the service instance (determined at runtime, not at design time). Specifically, I would like the runtime part to be part of the domain name (hence the suggestion of a HostRegexp-style rule).
I don't think it is possible to use the matched group from the HostRegexp to determine the backend.
A possibility would be to use the Property Manager API to dynamically set the frontend rule for the service instance after creating it. Also, see this for a complete example on using the API.

Publicly exposing a WCF restful service via http from Service Fabric

I am trying to expose a WCF-based RESTful service via HTTP and am so far unsuccessful. I'm trying on my local machine first to prove it works. I found a suggestion here that I remove my local cluster and then manually run this PowerShell command from the SF SDK folder as administrator to recreate it with the machine name binding: .\DevClusterSetup.ps1 -UseMachineName
It created the cluster successfully. I can use the SF Explorer and see in the cluster manifest that entries in the NodeList show the machine name rather than localhost. This seems good.
But the first problem I notice is that if I expand my way through SF Explorer down to the node my app is running on I see an endpoints entry but the URL is not what I'd expect. I am seeing this: http://surfacelap/d5be9425-3247-4290-b77f-1a90f728fb8d/39cda0f7-cef4-4c7f-8af2-d05786a834b0-131111019607641260
Is that what I should see even though I have an endpoint set up? I did not expect the GUID and other numbers in the path. This makes me suspect that SF is not seeing my service as being publicly accessible and is instead maybe only set up for internal access within the app. If I dig down into my service manifest I see this as expected:
<Resources>
  <Endpoints>
    <Endpoint Name="ResolverEndpoint" Protocol="http" Type="Input" Port="80" />
  </Endpoints>
</Resources>
But how do I know if the service itself is mapped to it? When I use the crazy long URL above and try a simple method of my service I get an HTTP 202 response and no response data, as expected. If I then change the method name to one that doesn't exist I get the same thing, not the expected HTTP 404. I've tried using both my machine name and localhost. Same result.
So clearly I'm doing something wrong. Below is my CreateServiceInstanceListeners override. In it you can see I use "ResolverEndpoint" as my endpoint resource name, which matches the service manifest:
protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    return new[]
    {
        new ServiceInstanceListener((context) =>
            new WcfCommunicationListener<IResolverV2>(
                serviceContext: context,
                wcfServiceObject: new ResolverServiceV2(),
                listenerBinding: new WebHttpBinding(WebHttpSecurityMode.None),
                endpointResourceName: "ResolverEndpoint"
            )
        )
    };
}
What am I doing wrong here?
Here's a way to get it to work: https://github.com/loekd/ServiceFabric.WcfCalc
Essential changes to your code are the use of the public name of your cluster as the endpoint URL and an additional WebHttpBehavior on that endpoint.
The endpoint resource specified in the service manifest is shared by all of the replica listeners in the process that use the same endpoint resource name. So if your service has more than one partition, it is possible that replicas from different partitions end up in the same process. In order to differentiate the messages addressed to different partitions, the listener adds the partition ID and an additional instance GUID to the path.
If you are going to have a singleton-partition service and know that there will not be more than one replica in the same process, you can directly supply the EndpointAddress you want the listener to open at. Use the CodePackageActivationContext API to get the port from the endpoint resource name, the node name or IP address from the NodeContext, and then provide the path you want the listener to open at.
Here is the code in the WcfCommunicationListener that constructs the Listen Address.
private static Uri GetListenAddress(
    ServiceContext serviceContext,
    string scheme,
    int port)
{
    return new Uri(
        string.Format(
            CultureInfo.InvariantCulture,
            "{0}://{1}:{2}/{5}/{3}-{4}",
            scheme,
            serviceContext.NodeContext.IPAddressOrFQDN,
            port,
            serviceContext.PartitionId,
            serviceContext.ReplicaOrInstanceId,
            Guid.NewGuid()));
}
Please note that you can then have only one application, one service, and one partition on a node, so when you are testing locally keep the instance count of that service at 1. When deployed to the actual cluster you can use an instance count of -1.

URL Mapping based on Resource resolver in AEM

We have the following website structure:
content
  mysite
    en
      home
        testlevel1page
          testlevel2page
Now the requirement is to map:
http://www.mysite.com/ --> /content/mysite/en/home.html
http://www.mysite.com/testlevel1page/ --> /content/mysite/en/home/testlevel1page.html
http://www.mysite.com/testlevel1page/testlevel2page/ --> /content/mysite/en/home/testlevel1page/testlevel2page.html
How can we achieve this through resource resolver?
Under the /etc/map/http directory, add a node "www.mysite.com" and give this a sling:internalRedirect property of /content/mysite/en/home.
As per the Sling documentation, this will "prefix the URI paths of the requests sent to this domain with the string", i.e. in this case insert "/content/mysite/en/home" after the domain name for any incoming requests to "www.mysite.com".
Optionally, if you place this under /etc/map.publish/http, it will only be applied to instances whose Sling run mode is set to publish.
(As the rule is under a node called 'http', it won't be applied to secure requests. If you need to cater for 'https' too, you could copy the http node or, preferably, create a regex; this isn't as common a use case, but there is more info in the docs linked above.)
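Roughly, the resulting mapping node could look like this (a sketch; sling:Mapping is the node type commonly used under /etc/map, adjust to your setup):

/etc/map/http/www.mysite.com
    jcr:primaryType = "sling:Mapping"
    sling:internalRedirect = "/content/mysite/en/home"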

Client identifier in jboss httpinvoker (auditing)

I am using the httpinvoker in JBoss 4.0.4 (a little old) for EJB invocations.
Since there are so many clients that make calls to my server, I want to identify the client for each call on the server.
Is there a way to do this with JBoss httpinvoker?
I could imagine adding a header to identify my client in each HTTP request, but cannot find a way to add a header in httpinvoker.
Auditing builds on a name, and thus on an authentication scheme somehow.
Therefore I suggest using the standard client authentication infrastructure to solve your problem. This works for RMI as well (it's not bound to HTTP), and the user ID is even passed down into your EJBs.
Server
Put the EJB in a security-domain (ejb.jar: META-INF/jboss.xml)
You could use the application-policy other, which uses just the UsersRolesLoginModule (conf/login-config.xml); this is the default policy and it is already configured.
Add users.properties and roles.properties to your ejb.jar file (top level package): These are used by the UsersRolesLoginModule
For each user, add his name and a (dummy) password to users.properties
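For illustration, both files are plain key=value properties (the user names and the dummy password below are made up):

# users.properties: user name = (dummy) password
clientA=dummy
clientB=dummy

# roles.properties: user name = comma-separated roles (see Addendum 2 below)
clientA=connect
clientB=connect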
Client
Create a callback class which implements javax.security.auth.callback.CallbackHandler: this callback is used when the authentication needs the user name and password (see the sketch after these steps).
Create a javax.security.auth.login.LoginContext; pass the callback handler as the 2nd argument; call login() on the instance of the LoginContext
Connect normally to the EJB server using an InitialContext
Add -Djava.security.auth.login.config=.../jboss-4/client/auth.conf when you start the client
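A rough sketch of the client side, assuming the standard JBoss ClientLoginModule is configured under the "other" entry of client/auth.conf (the user name "clientA" and the dummy password are made up):

import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.callback.UnsupportedCallbackException;
import javax.security.auth.login.LoginContext;

public class AuditedClient {

    static class ClientCallbackHandler implements CallbackHandler {
        public void handle(Callback[] callbacks) throws UnsupportedCallbackException {
            for (Callback cb : callbacks) {
                if (cb instanceof NameCallback) {
                    ((NameCallback) cb).setName("clientA");              // identifies this client
                } else if (cb instanceof PasswordCallback) {
                    ((PasswordCallback) cb).setPassword("dummy".toCharArray());
                } else {
                    throw new UnsupportedCallbackException(cb);
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // "other" must match an entry in the login configuration passed via
        // -Djava.security.auth.login.config=...
        LoginContext loginContext = new LoginContext("other", new ClientCallbackHandler());
        loginContext.login();
        // ... now look up and call the EJB through an InitialContext as usual
    }
}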
This way a user ID is passed from the client to the EJB (as part of the standard authentication process). Now, in the EJB methods, you can get the user ID by calling getCallerPrincipal() on the SessionContext instance. I have tested this against JBoss 4.2.3.
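On the server side this could look roughly as follows, assuming an EJB3 session bean (on EJB 2.x you would receive the SessionContext via setSessionContext instead of injection; the bean name is made up):

import javax.annotation.Resource;
import javax.ejb.SessionContext;
import javax.ejb.Stateless;

@Stateless
public class AuditedServiceBean {

    @Resource
    private SessionContext sessionContext;

    public void doWork() {
        // The name supplied by the client's login callback arrives here
        String clientId = sessionContext.getCallerPrincipal().getName();
        // ... write clientId plus the call details to your audit log
    }
}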
Additional information: JBoss client authentication
Addendum 1:
Using RMI or HTTP, the password is not transported in a secure way. In this case just use a dummy password; this is OK for auditing.
On the other hand, if you use RMI over SSL or HttpInvoker over HTTPS, you could change to a real and secure authentication quickly.
Addendum 2:
I am not sure if it works without defining roles. Possibly you have to:
Add a line in roles.properties for each user: add a connect role, for example
Add role definitions in ejb-jar.xml as well: security-role-ref for each EJB, and security-role and method-permission in the assembly-descriptor
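The descriptor entries could look roughly like this (a sketch; the bean and role names are made up):

<ejb-jar>
  <enterprise-beans>
    <session>
      <ejb-name>AuditedService</ejb-name>
      <!-- ... -->
      <security-role-ref>
        <role-name>connect</role-name>
        <role-link>connect</role-link>
      </security-role-ref>
    </session>
  </enterprise-beans>
  <assembly-descriptor>
    <security-role>
      <role-name>connect</role-name>
    </security-role>
    <method-permission>
      <role-name>connect</role-name>
      <method>
        <ejb-name>AuditedService</ejb-name>
        <method-name>*</method-name>
      </method>
    </method-permission>
  </assembly-descriptor>
</ejb-jar>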
Update
As there is already a login module, there might be another possibility:
If you have the source code of the login module, you could possibly use an additional TextInputCallback to get extra information from the client (in your case a user ID). That information could be used to create a custom Principal. Within the EJB, the result of getCallerPrincipal() could then be cast to the custom principal.