Can I configure Autofac to use dependencies from my Azure Functions bindings? - inversion-of-control

We use Autofac for our API projects. Porting our functions over has been on our to-do list since Azure Functions announced support for Autofac. We already share a lot of services / repositories, but currently we new() them up in the function body, which is a bit verbose.
I took a swing at it today, but am finding that I'm just being more verbose elsewhere. Specifically, I realised that I was writing a lot of code to replicate the functionality offered by bindings.
As an example, take the [CosmosDB()] binding attribute, which basically gives one a working, authenticated ("ready-to-eat") DocumentClient in a single line.
When I'm using Autofac, it's necessary to manually read several settings from the config, initialise a KeyVault client, and so on.
Can I have my cake and eat it? Is there some way to register dependencies from my bindings so that they are available for Autofac to supply to my services etc.?

For Functions created in Azure Functions V2, you should be able to register your dependencies via dependency injection. Here's a blog post on how to do this for Autofac: http://dontcodetired.com/blog/post/Azure-Functions-Dependency-Injection-with-Autofac

Related

What is the difference between aws-sdk/clients/appsync and aws-appsync?

Can anyone please tell me what the difference is between:
aws-sdk/clients/appsync, and aws-appsync
According to the docs, aws-sdk/clients/appsync exists because including the whole aws-sdk is too large when we only need AppSync, so we use aws-sdk/clients/appsync instead.
However, aws-appsync seems to do the same thing. When I create a client using aws-sdk/clients/appsync, there is no hydrated() function, while aws-appsync has one.
So
Why can't we simplify everything by just using aws-sdk/clients/<any_aws_package> and break everything that did not use it?
What is the point of separating something that is similar?
Thank you very much in advance for all your help.
difference between: aws-sdk/clients/appsync, and aws-appsync
GraphQL originated at Facebook, and React/React Native apps used react-apollo to talk to GraphQL-based APIs.
AWS AppSync is the AWS offering for GraphQL. aws-sdk/clients/appsync is the JS SDK used to invoke the various AppSync management/control APIs (like creating a GraphQL API, creating a data source, etc.). I'm not sure whether it also provides APIs to consume the GraphQL-based APIs themselves.
aws-appsync is the way to consume AppSync-backed GraphQL APIs. It plays well with react-apollo, which has since moved to apollo-client.
There is another high-level JS library from AWS, Amplify, which can also be used to consume AppSync-backed GraphQL APIs.
aws-sdk/clients/appsync is used because just including aws-sdk is too large
The initial release of the aws-sdk (GitHub repo here) contained the clients for all the AWS services and more. You install aws-sdk and you can talk to pretty much all the AWS services by initialising the clients for them. Obviously this was not so good in terms of packaging JS bundles, and the library was also not very modular.
The latest version, V3 (rewritten in TypeScript), which is now GA, moves the clients into individual npm packages, i.e. you only install the client you need, e.g. npm i @aws-sdk/client-appsync instead of npm i aws-sdk. Learn more about it here. If you are getting started, use the V3 packages.
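As an illustration, here is a minimal, hedged sketch of the V3 style. It assumes you want to call the AppSync control-plane ListGraphqlApis action; ListGraphqlApisCommand is the usual V3 command naming for that action.

// npm i @aws-sdk/client-appsync
// Only the AppSync client is installed and bundled, not the whole aws-sdk.
const { AppSyncClient, ListGraphqlApisCommand } = require("@aws-sdk/client-appsync");

const client = new AppSyncClient({ region: "us-east-1" });

async function listApis() {
  // Control-plane call: lists the GraphQL APIs in this account/region.
  const { graphqlApis } = await client.send(new ListGraphqlApisCommand({}));
  console.log(graphqlApis);
}

listApis().catch(console.error);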
Why can't we simplify everything by just using aws-sdk/clients/<any_aws_package> and break everything that did not use it?
It is the case with the V3 JS SDK.
What is the point of separating something that is similar?
They are not similar. One is used to consume the GraphQL-based APIs (provided by AppSync) and the other is used to manage AppSync itself. The V3 packages follow this separation very strictly, while the earlier versions may include some utility code/high-level abstractions along with the core AWS APIs.

Is it possible to run a webserver with apache openwhisk?

I'm working on a project at my university, and we want to use an Apache OpenWhisk service for our student projects. I already set up the OpenWhisk service and added some components like Java, NodeJS and Python, and everything works fine. My next step is to set up a web server, so our students can use these instances to publish their websites (written in HTML, PHP, JavaScript). I have already searched for this topic but haven't found anything.
Hopefully someone can help me.
There are several ways to accomplish this and it depends on how far you want to go in the serverless direction.
For example, this repo packages an existing web server inside a function: https://github.com/jthomas/express_example. Another variation on the same idea is https://github.com/IBM/expressjs-openwhisk.
If you want the students to implement a serverless web application from scratch, then generally every API endpoint becomes a function using web actions: https://github.com/apache/openwhisk/blob/master/docs/webactions.md
You can also use web actions to serve static content (HTML, JS, CSS) by inlining those files and returning them as part of the function result. This is not ideal and should be done from a CDN instead. OpenWhisk itself does not offer object storage/CDN support, but you can use S3 or Google Cloud buckets to accomplish the same.
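For example, a minimal Node.js web action that inlines a small HTML page and returns it as the function result could look roughly like this (a sketch; the action and file names are made up):

function main(params) {
  // Inlined static content; for real sites serve this from a CDN / object store instead.
  const html = "<!DOCTYPE html><html><body><h1>Hello from OpenWhisk</h1></body></html>";
  return {
    statusCode: 200,
    headers: { "Content-Type": "text/html" },
    body: html
  };
}

// Deployed as a web action, e.g.: wsk action create homepage homepage.js --web true
// It is then reachable over plain HTTP under the API host's web action URL space.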
Some serverless platforms like Netlify or Nimbella might be suitable. The latter provides an integrated CDN + OpenWhisk to implement entire web applications including JAMstack.

How to separate routes, handlers, 3rd-party interfaces and business logic in real world Go project

After reading the official guide on how to structure projects and going through various examples and projects (1, 2, 3 to name a few), I can't help wondering whether my REST API server app is structured properly.
What is the API meant to do?
POST /auth/sign-in
Accepts a username and password and issues a JWT (JSON Web Token).
GET /auth/sign-out
Adds the JWT to a blacklist to invalidate the auth session.
GET /resources
Retrieves a list of all resources.
POST /resources (requires valid JWT authentication)
Accepts a JSON body, creates a new resource and sends out an email and notification to everyone about the new resource.
What my project looks like
Currently I'm not creating any libraries. Everything sits in the main package, with the overarching server setup, routes, etc. all done in main.go. I didn't go for the MVC pattern found à la Rails or Django, to avoid overcomplicating things just for the sake of it. Also, my impression was that it doesn't really comply with the recommended structure for commands and libraries described in the guide mentioned above.
auth.go # generates, validates JWT, etc
auth-handler.go # Handles sign-in/out requests; includes middleware for required authentication
mailer.go # Provides methods to send out transactional email, loads correct template etc.
main.go # set up routes, runs server; inits mailer and notification instance for the request context
models.go # struct definition for User, Resource
notifications.go # Provides methods to publish push notifications
resource-handler.go # Handles request for resources, uses mailer and notifications instances for the POST request
What should it look like?
Should routes be separated? What about middleware? And how do you deal with interfaces to 3rd-party code – imagine mailer.go in the outlined sample app talking to Mandrill and notifications.go talking to AWS SNS?
I can share a bit from my own experience.
In application code:
As opposed to library code, separating into packages and sub-packages is less important - as long as you don't have too much complexity in your code. I mostly design apps as integrating self-contained libraries, so the app code itself is usually rather small. In general, try to avoid package separation if you don't really need it. But don't just slap tons of code into one package - that's also bad.
But don't have general packages like "util"; they soon start to accumulate baggage and suck. I have a separate repo for generic utils reusable across projects, and under it each utility API is a sub-package, e.g. github.com/me/myutils/countrycodes, github.com/me/myutils/set, github.com/me/myutils/whatevs.
Regardless of the package structure, the most important thing is to separate internal APIs from handler code. The handler code should be a very thin layer that handles input and calls an internal, self-contained API that can be tested without handlers or tied to other handlers. Looks like you're doing this. Then you can separate your internal API into another package or not, it doesn't really matter.
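As a rough sketch of that separation (the names here are made up for illustration): the handler only decodes input and delegates to an internal type that can be unit-tested without any HTTP machinery.

package main

import (
	"encoding/json"
	"net/http"
)

// ResourceService is the internal, self-contained API; it knows nothing about HTTP.
type ResourceService struct{}

func (s *ResourceService) Create(name string) error {
	// Business logic (validation, persistence, notifications, ...) lives here.
	return nil
}

// createResourceHandler is the thin layer: decode input, call the internal API,
// translate the result into an HTTP response.
func createResourceHandler(svc *ResourceService) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var req struct {
			Name string `json:"name"`
		}
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		if err := svc.Create(req.Name); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusCreated)
	}
}

func main() {
	http.Handle("/resources", createResourceHandler(&ResourceService{}))
	http.ListenAndServe(":8080", nil)
}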
When you're deciding what parts of the code should be separated into libraries, think in terms of code reuse. If this code will be used only by your app, there's no point in that.
I like to wrap integrations with third-party APIs in an interface that is defined in a separate package. For example, if you have something like sending emails with AWS SES, I'd create a package called github.com/my_org/mailer with an abstract interface, and under it a github.com/my_org/mailer/ses package that implements the SES integration. The application code imports the mailer package and its interface, and only in main do I inject the usage of SES and wire things together.
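A condensed, single-file sketch of that layout (in a real project the interface would live in github.com/my_org/mailer and the implementation in github.com/my_org/mailer/ses; the SES call itself is stubbed out here to keep the example self-contained):

package main

import "fmt"

// Mailer is the abstract interface application code depends on.
// In the real layout it would live in the github.com/my_org/mailer package.
type Mailer interface {
	Send(to, subject, body string) error
}

// sesMailer stands in for the implementation that would live in
// github.com/my_org/mailer/ses and wrap the AWS SES SDK.
type sesMailer struct{}

func (m *sesMailer) Send(to, subject, body string) error {
	// The actual AWS SES call would go here; stubbed out for the sketch.
	fmt.Printf("sending %q to %s via SES\n", subject, to)
	return nil
}

// notifyUser is application code: it only ever sees the Mailer interface.
func notifyUser(m Mailer, to string) error {
	return m.Send(to, "New resource", "A new resource was created")
}

// main is the only place that knows about the concrete SES implementation
// and injects it into the rest of the application.
func main() {
	var m Mailer = &sesMailer{}
	if err := notifyUser(m, "user@example.org"); err != nil {
		fmt.Println("send failed:", err)
	}
}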
Re middleware: I usually keep it in the same package as the API itself.

Is MEF a Service locator?

I'm trying to design the architecture for a new LOB MVVM project utilising Caliburn Micro and NHibernate, and am now at the point of looking into DI and IoC.
A lot of the examples for bootstrapping Caliburn Micro use MEF as the DI/IoC mechanism.
What I'm struggling with is that MEF seems to be reasonably popular, but the idea of the MEF [Import] attributes smells to me like another flavour of Service Locator.
Am I missing something about MEF whereby almost all the examples I've seen are not using it correctly, or have I completely misunderstood something about how it's used whereby it sidesteps the whole Service Locator issue?
MEF is not a Service Locator on its own. It can be used to implement a Service Locator (CompositionInitializer in the Silverlight version is effectively a Service Locator built into MEF), but it can also do dependency injection directly.
While the attributes may "smell" to you, they alone don't cause this to be a service locator, as you can use [ImportingConstructor] to inject data at creation time.
Note that the attributes are actually not the only way to use MEF - it can also work via direct registration or convention based registration instead (which is supported in the CodePlex drops and .NET 4.5).
I suppose if you were to just new up parts that had property imports and try to use them, then you could run into some of the same problems described here: Service Locator is an Anti-Pattern
But in practice you get your parts from the container, and if you use [Import] without the additional allowDefault property, then the part is required and the container will blow up on you if you ask for the part doing the import. It will blow up at runtime, yes, but unlike a run-of-the-mill service locator, it's fairly straightforward to do static analysis of your MEF container using a test framework. I've written about it a couple of times here and here.
It hasn't been a problem for me in practice, for a couple of reasons:
I get my parts from the container.
I use composition tests.
I'm writing applications, not framework code.

Why should I prefer OSGi Services over exported packages?

I am trying to get my head around OSGi Services. The main question I keep asking myself is: What's the benefit of using services instead of working with bundles and their exported packages?
As far as I know, it seems the concept of late binding has something to do with it. Bundle dependencies are wired together at bundle start, so they are pretty fixed, I guess. But with services it seems to be almost the same. A bundle starts and registers services or binds to services. Of course services can come and go whenever they want, and you have to keep track of these changes. But the core idea doesn't seem that different to me.
Another aspect to this seems to be that services are more flexible. There could be many implementations for one specific Interface. On the other hand there can be a lot of different implementations for a specific exported package too.
In another text I read that the disadvantage of using exported packages is that they make the application more fragile than services. The author wrote that if you remove one bundle from the dependency graph other dependencies would no longer be met, thus possibly causing a domino effect on the whole graph. But couldn't the same happen if a service would go offline? To me it looks like service dependencies are no better than bundle dependencies.
So far I could not find a blog post, book or presentation that could clearly describe why services are better than just exposing functionality by exporting and importing packages.
To sum my questions up:
What are the key benefits of using OSGi Services that make them superior to exporting and importing packages?
Addition
I have tried to gather further information about this issue and come up with some kind of comparison between plain export/import of packages and services. Maybe this will help us to find a satisfying answer.
Start/Stop/Update
Both bundles (hence packages) and services can be started and stopped. In addition, they can be updated, in a manner of speaking. Services are also tied to the bundle life cycle itself. But in this case I just mean whether you can start and stop services or bundles (so that the exported packages "disappear").
Tracking of changes
ServiceTracker and BundleTracker make it possible to track and react to changes in the availability of bundles and services.
Specific dependencies to other bundles or services.
If you want to use an exported package you have to import it.
Import-Package: net.jens.helloworld
If net.jens.helloworld provided a service, I would also need to import the package in order to get the interface.
So in both cases there would be some sort of "tight coupling" to a more or less specific package.
Ability to have more than one implementation
Specific packages can be exported by more than one bundle. There could be a package net.jens.twitterclient which is exported by bundle A and bundle B. The same applies to services. The interface net.jens.twitterclient.TwitterService could be published by bundle A and B.
To sum this up, here is a short comparison (exported packages / services):
Start/Stop/Update: YES / YES
Tracking of changes: YES / YES
Specific dependencies: YES / YES
More than one implementation: YES / YES
So there is no difference.
Additionally it seems that services add more complexity and introduce another layer of dependencies (see image below).
(Image: bundle vs. service dependency comparison) http://img688.imageshack.us/img688/4421/bundleservicecomparison.png
So if there is no real difference between exported packages and services what is the benefit of using services?
My explanation:
The use of services seems more complex. But services themselves seem to be more lightweight. It should make a difference (in terms of performance and resources) whether you start/stop a whole bundle or just start and stop a specific service.
From an architectural standpoint I also guess that bundles could be viewed as the foundation of the application. A foundation shouldn't change often in terms of starting and stopping bundles. The functionality is provided by services of these bundles in some kind of dynamic layer above the "bundle layer". This "service layer" could be subject to frequent changes. For example, the service for querying a database is unregistered if the database goes offline.
What's your opinion? Am I starting to get the whole point of services or am I still thinking the wrong way? Are there things I am missing that would make services far more attractive over exported packages?
It's quite simple:
Bundles just provide classes you can use. Using imports/exports you can shield visibility and avoid (for example) versioning conflicts.
Services are instances of classes that satisfy a certain contract (interfaces).
So, when using services you don't have to care about the origin of an implementation, nor about implementation details. They may even change while you are using a certain service.
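For illustration, here is a hedged sketch of what the consumer side looks like with the low-level API, using the TwitterService interface named in the question; in practice you would usually let a DI framework or Declarative Services do this wiring for you.

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.util.tracker.ServiceTracker;

import net.jens.twitterclient.TwitterService;

// Consumer bundle: it only imports the interface package and asks the service
// registry for *an* implementation; it never sees the concrete class or
// which bundle provided it.
public class Activator implements BundleActivator {

    private ServiceTracker<TwitterService, TwitterService> tracker;

    @Override
    public void start(BundleContext context) {
        tracker = new ServiceTracker<>(context, TwitterService.class, null);
        tracker.open();

        TwitterService service = tracker.getService(); // may be null if none is registered yet
        if (service != null) {
            // use the service; the implementation behind it may be swapped at runtime
        }
    }

    @Override
    public void stop(BundleContext context) {
        tracker.close();
    }
}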
When you rely only on the bundle layer of OSGi, you easily introduce cross-cutting dependencies on concrete implementations, which you usually never want. (Read below about DI.)
This is not an OSGi thing only - just good practice.
In non-OSGi worlds you may use Dependency Injection (DI) frameworks like Guice, Spring or similar. OSGi has the service layer built into the framework and lets higher-level frameworks (Spring, Guice) use this layer - so in the end you usually don't use the OSGi service API directly, but DI adapters from user-friendly frameworks (Spring --> Spring DM, Guice --> Peaberry, etc.).
HTH,
Toni
I'd recommend purchasing this book. It does an excellent job explaining services and walking through the construction of a non-trivial application that makes use of OSGi Services.
http://equinoxosgi.org/
My company routinely builds 100+ bundle applications using services. The primary benefits we gain from using services are:
1) Loose coupling of producer/consumer implementation
2) Hot swappable service providers
3) Cleaner application architecture
When you start with OSGi, it is always easier to begin with an export-package approach; it certainly feels more Java-like. But when your application starts growing and you need a bit of dynamicity, services are the way to go.
Export-Package only does the resolution at startup, whereas services are resolved on an ongoing basis (which you may or may not want). From a support point of view, having services can be very scary (Is it deterministic? How can I replicate problems?), but it is also very powerful.
Peter Kriens explains why he thinks that services are a paradigm shift in the same way OO was in its time; see µServices and Duct Tape.
In all my OSGi experience I haven't yet had the occasion to implement complex services (i.e. more than one layer), and annotations certainly seem the way to go. You can also use Spring Dynamic Modules to ease the pain of dealing with service trackers (and there are many other options, like iPOJO and Blueprint).
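For reference, a minimal hedged sketch using the standard Declarative Services annotations (GreeterService and its method are made-up names; the point is that DS registers the component as a service and handles the lifecycle for you):

import org.osgi.service.component.annotations.Component;

// Hypothetical service contract; in a real application this interface would
// live in its own exported API package.
interface GreeterService {
    String greet(String name);
}

// Declarative Services registers this component under GreeterService when the
// bundle starts and unregisters it when the bundle stops - no activator or
// ServiceTracker boilerplate in your own code.
@Component(service = GreeterService.class)
public class GreeterServiceImpl implements GreeterService {

    @Override
    public String greet(String name) {
        return "Hello, " + name;
    }
}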
Let's consider the two following scenarios:
Scenario 1: Bundle A offers a service which performs arithmetic addition: add(x, y) returns x + y. To achieve this, it exports a mathOpe package with an IAddition interface, and registers a service within the service registry. Bundles B, C, D, ... consume this service. (A code sketch follows the comparison below.)
Scenario 2: Bundle A exports the mathOpe package, where we find a class Addition exposing an operation add(x, y) that returns x + y. Bundles B, C, D, ... import the package mathOpe.
Comparison of scenario 1 vs. scenario 2:
Just one implementation instance vs. many instances (Feel free to make it static!)
Dynamic service management (start, stop, update) vs. no management; the consumer owns the implementation (the "service")
Flexible (we can imagine a remote service over a network) vs. not flexible
... among others.
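To make scenario 1 concrete, here is a rough sketch of the provider side using the low-level registration API (assuming IAddition is a single-method interface, int add(int x, int y), in the exported mathOpe package):

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

import mathOpe.IAddition;

// Bundle A (scenario 1): it exports the mathOpe package containing IAddition
// and registers a single implementation instance in the service registry.
public class Activator implements BundleActivator {

    private ServiceRegistration<IAddition> registration;

    @Override
    public void start(BundleContext context) {
        IAddition addition = (x, y) -> x + y; // the one shared instance consumers will use
        registration = context.registerService(IAddition.class, addition, null); // no service properties
    }

    @Override
    public void stop(BundleContext context) {
        // Bundles B, C, D, ... see the service disappear from the registry.
        registration.unregister();
    }
}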
PS: I am not an OSGi expert, nor a Java one; this answer only reflects my understanding of the phenomena :)
I think this excellent article could answer a lot of your questions: OSGi, and How It Got That Way.
The main advantage of using a service instead of the implementation class is that the bundle offering the service will do the initialization of the class itself.
The bundle that uses the service does not need to know anything about how the service is initialized.
If you do not use a service, you will always have to call some kind of factory to create the service instance. This factory will leak details of the service that should remain private.