How/where are actors defined in Oracle ATG for REST API

My ATG installation has REST MVC enabled and a few actor chains registered in ActorChainRestRegistry.properties.
Looking through the Oracle ATG REST API documentation, I found a reference to atg.commerce.sitemap.CatalogSitemapGenerator, which I would like to be able to use.
I notice that the other entries in ActorChainRestRegistry list an Actor in their actor chains, but I can't figure out where those actors are defined.
In short, how can I expose atg.commerce.sitemap.CatalogSitemapGenerator as a REST API endpoint?

Somewhere in the config path, define a BlehActor.properties and a BlehActor.xml (replace Bleh with your actor name, of course). The name should match the actor name in the URL you register in ActorChainRestRegistry.properties.
In the properties file, define:
$class=atg.service.actor.ActorChainService
definitionFile=/your/config/location/BlehActor.xml
BlehActor.xml contains your chain definitions.
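For concreteness, here is a rough sketch of what the chain definition and registry entry could look like for the sitemap case. This is an illustration only: the chain id, component id, and output names are made up, and the actor-template elements should be checked against the actor chain schema for your ATG version.

<actor-template default-chain-id="generate">
  <actor-chain id="generate" transient="true">
    <component id="sitemapGenerator"
               name="/atg/commerce/sitemap/CatalogSitemapGenerator"
               component-var="generator">
      <output id="enabled" name="enabled" value="${generator.enabled}"/>
    </component>
  </actor-chain>
</actor-template>

Then register the chain's URL (the Nucleus path of the actor plus the chain id) in ActorChainRestRegistry.properties, e.g.:
registeredUrls+=/your/config/location/BlehActor/generate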

Related

Add global custom values to Play Framework logger

I have a cluster of different Akka actors, all using Logback as the logger. In the pure Akka actors I can do this during app initialization:
MDC.put("role", role)
role being a string representing the process's main role (like "worker"); all the logs then carry this additional context value, which helps investigation.
One of the roles is a frontend and uses the Play framework to publish a REST API. In that case I do not define an object extending App, and I do not know how/where to set global values like that, so that all logs emitted in the Play application are marked with the role (and the other values I want to add).
Play is a multi-threaded application, so using MDC here is not going to work reliably. The best thing you can do is use the SLF4J Marker API, whose markers can be passed between threads.
Play 2.6.x will support the Marker API directly, but in the meantime you can use SLF4J directly and leverage the Logstash Logback Encoder to create a rich Marker that contains your role and other information.
import net.logstash.logback.marker.Markers.append
private val logger = org.slf4j.LoggerFactory.getLogger(this.getClass)
val logstashMarker = append("name", "value")
logger.debug(logstashMarker, "My message")
Then you can pass logstashMarker around, for example as an implicit parameter, without worrying about thread-local information.
Note that Play handles requests on a thread pool, so any "global" information you have in Akka that you want in Play has to be extracted and attached per request -- for maximum convenience you can put that information in a WrappedRequest using action composition or by adding a filter.
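A minimal sketch of that last suggestion, assuming a Play 2.5-style ActionBuilder (the MarkedRequest/MarkedAction names and the "frontend" role value are illustrative):

import net.logstash.logback.marker.Markers.append
import org.slf4j.Marker
import play.api.mvc._
import scala.concurrent.Future

// Wraps every request with a pre-built Logstash marker carrying the process role.
class MarkedRequest[A](val marker: Marker, request: Request[A])
  extends WrappedRequest[A](request)

object MarkedAction extends ActionBuilder[MarkedRequest] {
  def invokeBlock[A](request: Request[A],
                     block: MarkedRequest[A] => Future[Result]): Future[Result] =
    block(new MarkedRequest(append("role", "frontend"), request))
}

// In a controller, the marker then travels with the request rather than a thread:
// def index = MarkedAction { request =>
//   logger.info(request.marker, "handling request")
//   Ok("done")
// }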

Typesafe RPC shared between client/server but with REST methods

I would like to know if there is a way to join RPC (so the client knows what it can call and the server knows what it should respond to) and HTTP REST (so any other client, without a shared codebase, can make a call).
There are a lot of HTTP libraries for Scala (akka-http, http4s, etc.) and there is a good RPC lib, autowire. But I see no way to connect them. I know autowire is protocol-agnostic, but that is a drawback here, because I would like routing to happen in the HTTP layer (e.g. akka-http), not the RPC layer (autowire).
I would like to know if this is possible. If it is, is there any implementation ongoing?
endpoints is a work in progress in this direction (note: I am the author of this library). It provides the means to define an API made of HTTP endpoints (which verb, URL, etc. to use), and then it provides implementations that use such APIs as a client or as a server. It is compatible with Scala.js, so you can share your API definition between the client side and the server side of your application and benefit from statically type-checked remote calls.
It is designed to give you full control over the usage of HTTP features (e.g. caching, headers, authentication, etc.).
Here is a basic API definition with two endpoints:
// POST /my-resources
val create: Endpoint[CreateMyResource, MyResource] =
  endpoint(post(path / "my-resources", jsonRequest[CreateMyResource]), jsonResponse[MyResource])

// GET /my-resources/:id
val read: Endpoint[String, Option[MyResource]] =
  endpoint(get(path / "my-resources" / segment[String]), option(jsonResponse[MyResource]))
You can then use it as follows from the client side to perform actual calls:
val eventuallyResource: Future[MyResource] =
  create(CreateMyResource("foo", 42))

val eventuallyResource2: Future[Option[MyResource]] =
  read("abc123")

Manipulating path mapping in AWS API gateway integration

I would like to modify a URL parameter /resource/{VaRiAbLe} in an API Gateway to S3 mapping so that it actually points to /my-bucket/{variable}. That is, it should accept mixed-case input and map it to a lower-case object name. Mapping path variables to S3 integrations is simple enough, but I can't seem to get a lower-case mapping working.
Reading through the documentation for mapping parameters, it looks like path parameters are treated as simple string values (not templated values), so defining a mapping as method.request.path.variable.toLowerCase() won't work.
Does anyone have any ideas on how to implement this mapping? The options I can see:
Map path variables to a JSON body, and then call another API method that actually does the S3 call?
Bite the bullet, and implement a Lambda function to do the S3 get for me?
Find another api method for S3 that accepts a JSON body that I can use to get the data?
Update: using orchestrated calls
Following the info from Jack, I figured I should try the orchestrated call, since the traffic volume is low enough that I'm sure I won't be able to keep a Lambda warm.
As a proof of concept, I added two methods to my resource (sitting at /resource/{variable}): GET and POST. The GET method chains to the POST, which does the actual retrieval of the data.
POST method configuration
This is a vanilla S3 proxying method, where you set the URL path parameter for {variable} to method.request.body.variable.
GET method configuration
This is an HTTPS proxying method. You'll need the URL of the POST method, so you'll need to deploy the API to get it. The only other configuration needed here is a body mapping template like:
{
  "variable" : "$input.params('variable').toLowerCase()",
  "something" : "$input.params('something')"
}
This should be enough to get it working.
The downside is that I'm adding an extra method (POST) to the API for that resource, which could confuse its consumers. It should be possible to put the POST on the /resource resource instead, which would at least make a bit more sense from an API design standpoint.
Depending on how frequently this API will be called, I'd either go with the Lambda proxy or chain two API Gateway methods together. If the API is called frequently enough to keep a Lambda function warm (say once a minute), then go with Lambda. If not, go with the orchestrated API call.
The orchestrated API call would be interesting, I'd be happy to help with that if you have questions.
As far as I know, the only S3 API for getting object data is the GET documented in their API reference.
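For comparison, here is a minimal sketch of the Lambda option in Scala on the JVM runtime, using the AWS SDK for Java v1. It is an illustration only: the bucket name and the event shape are made up, and a real API Gateway Lambda proxy integration would pass a full request event rather than a bare map of path parameters.

import com.amazonaws.services.lambda.runtime.{Context, RequestHandler}
import com.amazonaws.services.s3.AmazonS3ClientBuilder
import scala.io.Source

class LowercaseGet extends RequestHandler[java.util.Map[String, String], String] {
  private val s3 = AmazonS3ClientBuilder.defaultClient()

  override def handleRequest(event: java.util.Map[String, String], ctx: Context): String = {
    // Accept the mixed-case path variable and map it to the lower-case object key.
    val key = event.get("variable").toLowerCase
    val obj = s3.getObject("my-bucket", key)
    try Source.fromInputStream(obj.getObjectContent).mkString
    finally obj.close()
  }
}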

How to resolve Autofac per-request service from custom attribute

I have configured my EF context like so:
b.RegisterAssemblyTypes(webAssembly, coreAssembly)
.Where(t => t.IsAssignableTo<DbContext>())
.InstancePerLifetimeScope();
I would like to use it from a custom authorization attribute that hits the database using my EF context. This means no constructor injection, so I resort to CommonServiceLocator:
var csl = new AutofacServiceLocator(container);
ServiceLocator.SetLocatorProvider(() => csl);
...
var user = await ServiceLocator.Current
.GetInstance<SiteContext>().FindAsync(id);
I am finding that this fails with a "multiple connections not supported" error if the browser issues two simultaneous requests to routes using this attribute. It seems this might be due to what is mentioned in this answer. My guess is that ServiceLocator resolves from the root scope rather than the web-request scope, and the two requests conflict (either request in isolation works fine).
This seems confirmed by the fact that, when I change to InstancePerRequest(), I get this from any invocation of the attribute:
Autofac.Core.DependencyResolutionException No scope with a Tag matching 'AutofacWebRequest' is visible from the scope in which the instance was requested. This generally indicates that a component registered as per-HTTP request is
being requested by a SingleInstance() component (or a similar scenario.) Under the web integration always request dependencies from the DependencyResolver.Current or ILifetimeScopeProvider.RequestLifetime, never from the container itself.
So it seems like maybe ServiceLocator is simply not the way to go.
How do I resolve the request-scoped SiteContext from inside the attribute (using a service-locator pattern)?
Your issue derives from the fact that you are trying to put behavior inside an Attribute. Attributes are for defining metadata on code elements and assemblies, not for behavior.
Microsoft's marketing of action filter attributes has led people implementing DI down the wrong path by putting both the filter and the attribute into the same class. As described in the post on passive attributes, the solution is to break a filter attribute into two classes:
An attribute that contains no behavior to mark code elements with meta-data.
A globally-registered filter that scans for the attribute and executes the desired behavior if present.
See the following for more examples:
Constructor Dependency Injection WebApi Attributes
Unity Inject dependencies into MVC filter class with parameters
Implementing passive attributes with dependencies that should be resolved by a DI container
Dependency Injection in Attributes: don’t do it!
Injecting dependencies into ASP.NET MVC 3 action filters. What's wrong with this approach?
How can I test for the presence of an Action Filter with constructor arguments?
Another option is to use IFilterProvider to resolve the filters as in IFilterProvider and separation of concerns.
Once you get your head around the fact that Attributes should not be doing anything themselves, using them with DI is rather straightforward.

What is the difference between BasicHttpRequest and HttpGet, HttpPost, etc. in Apache HttpClient 4.3?

I am creating HTTP requests using Apache HttpClient version 4.3.4. I see there are classes like HttpGet, etc., and there is also a class BasicHttpRequest. I am not sure which one to use.
What's the difference, and which one should be used under which conditions?
BasicHttpRequest is provided by the core library. As its name suggests, it is pretty basic: it enforces no particular method name or type, nor does it attempt to validate the request URI. The URI parameter can be any arbitrary garbage; HttpClient will dutifully transmit it to the server as-is if it is unable to parse it into a valid URI.
The HttpUriRequest variety, on the other hand, enforces a specific method type and requires a valid URI. Another important feature is that HttpUriRequests can be aborted at any point during their execution.
By default, you should always use classes that implement HttpUriRequest.
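A minimal sketch contrasting the two (Scala against the HttpClient 4.3.x Java API; the URLs are placeholders):

import org.apache.http.impl.client.HttpClients
import org.apache.http.client.methods.HttpGet
import org.apache.http.message.BasicHttpRequest
import org.apache.http.util.EntityUtils

object RequestDemo extends App {
  val client = HttpClients.createDefault()

  // HttpUriRequest flavour: typed method, validated URI, abortable mid-flight
  // (e.g. get.abort() from another thread).
  val get = new HttpGet("http://example.com/")
  val response = client.execute(get)
  try println(EntityUtils.toString(response.getEntity))
  finally response.close()

  // Core flavour: method name and request line are taken on faith, no validation.
  val raw = new BasicHttpRequest("GET", "/any arbitrary request line")

  client.close()
}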
I was just browsing the 4.3.6 javadoc trying to locate your BasicHttpRequest and was unable to find it. Do you have a reference to the javadoc for this class?
I would be under the impression that BasicHttpRequest is a base class providing operations and attributes common to more than one kind of HTTP request. It may be extremely generic for extension purposes.
As to the first part of your question: use HttpGet, HttpPost, etc. for their specific operations. If you only need to GET information, use HttpGet; if you need to post a form or a document body, use HttpPost. If you are attempting to use methods like HEAD, PUT, or DELETE, use the corresponding HttpXXX class.