A couple of quick questions:
I would like to use an external REST API service (e.g. AgileCRM). I would like to use the REST connector within a model so that I can perform CRUD operations against AgileCRM's API. Is this possible? If so, which model should be the base (e.g. PersistedModel, Model, etc.)?
I would like to merge data from AgileCRM and a PersistedModel (e.g. MySQL). Should I do this via relationships or inheritance? If inheritance, which should be the parent model? Ideally I would use all the data from AgileCRM (represented as a model in LB, if possible) and add information from a local MySQL database.
Do you have any thoughts on wrapping an API service (e.g. AgileCRM) as a connector type (e.g. a REST connector for AgileCRM, based on the generic REST connector)? AgileCRM has many features, but their CRUD methods operate slightly differently from how LB interacts with data sources.
This is a really old question (sorry it never got answered), but it's also very broad in some portions. I would recommend asking shorter, more specific questions, and making multiple StackOverflow questions for them.
That said, here's some brief answers for people reading this entry:
Yes, this is possible. Check out the REST connector; there is a sketch of a datasource definition after these answers.
I would probably use multiple parent models that are internal and then a single exposed REST model (not "persisted") that collates that data together.
Sure, you could do that. Writing a connector isn't too difficult; check out our docs on building a connector.
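For the first point, here is a minimal sketch of what a datasource definition for the REST connector could look like (in the style of LoopBack 3's datasources.local.js). The AgileCRM base URL and the getContact operation are illustrative assumptions, not AgileCRM's actual API:

module.exports = {
  agilecrm: {
    connector: 'rest',
    options: {
      headers: { accept: 'application/json' },
    },
    operations: [
      {
        // Template for one remote operation; {id} is filled in from
        // the function arguments declared below.
        template: {
          method: 'GET',
          url: 'https://example.agilecrm.com/dev/api/contacts/{id}', // assumed URL
        },
        // Exposes dataSource.getContact(id) to your models.
        functions: { getContact: ['id'] },
      },
    ],
  },
};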
I am working on an enterprise-level system and am trying to understand if my idea is super inefficient.
Our company is looking to use GraphQL, and we want to use it as a way to assist the front-end client in retrieving data, but also as a data abstraction over our raw data. What I mean is:
If we have GraphQL closer to the client as one instance (that GraphQL server would sit in front of our domain REST services), but then we also have GraphQL sitting atop the data layer, does that present any issues?
I know the question might arise: "Why don't you have GraphQL over the domain services, and GraphQL over the data, but then federate those into a gateway and have clients pull from there!" But one of the tenets we are sticking to at our company is that there must be an abstraction over our data. So we either abstract that data via a REST API (which we do now), or we have GraphQL over the data to act as the abstraction.
So given that "data abstraction" requirement, I want to understand if there are any issues with the two "hops"/instances of GraphQL in the end-to-end flow?
This is a common pattern. We used it for our backend services, which exposed GraphQL at the domain layer and then used Prisma for the data layer.
I have two recommendations from our experience.
Try, as best as possible, to auto-generate both your resolvers and your data API using a language-specific tool.
Do testing against the domain layer to make sure that nothing from the data layer slips through. It will be tempting to do simple "pass through" requests as the two schemas will often start off synchronized, and you may wind up accidentally passing through data you don't want going to the client.
(Shameless plug!) For the second one, Meeshkan does this sort of testing in an automated fashion, and there are plenty of testing frameworks you can use to execute hand-written tests as well (e.g. Cucumber).
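To make the second recommendation concrete, here is a sketch (in TypeScript) of a domain-layer resolver that maps fields explicitly instead of passing the data-layer record through. The data-layer URL, query, and field names (customerRecord, internalScore) are made-up assumptions:

type Customer = { id: string; displayName: string };

async function resolveCustomer(_: unknown, args: { id: string }): Promise<Customer> {
  // Hop 2: the domain layer queries the data-layer GraphQL endpoint.
  const res = await fetch('http://data-layer.internal/graphql', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({
      query: 'query ($id: ID!) { customerRecord(id: $id) { id firstName lastName internalScore } }',
      variables: { id: args.id },
    }),
  });
  const { data } = await res.json();
  const record = data.customerRecord;
  // Map fields explicitly so internal data (e.g. internalScore)
  // never slips through to the client-facing schema.
  return { id: record.id, displayName: `${record.firstName} ${record.lastName}` };
}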
There is:
a requirement to have a key-values pairs storage shared between multiple services
a simple table in DynamoDB
very simple logic of key-value pairs creation
Intuitively I want to put the DynamoDB table behind a REST service that will implement all the simple logic I have. Unfortunately, this means adding a lot of reliability and performance challenges to the solution, since making my service as good, resilient, and performant as DynamoDB isn't easy.
For a while now I have been thinking about creating a shared library for this purpose. The library would implement the logic and connect directly to the DynamoDB table. I don't anticipate many changes, either in the DynamoDB table or in the logic the library will implement.
What are the possible pros and cons of both approaches?
A service is simply a packaging and deployment selection for a library. Both are absolutely valid depending on your particular needs.
I'm curious why you feel the need to wrap dynamodb at all? Is there some particular domain logic you would like to place on top of it to constrain it? DynamoDB is already a restful service... Putting your own restful service on top of it may be advantageous, but you would have to convince me of the value of doing so. If you have particular business logic that requires you constraining the functionality, packaging it as a shared library has certain advantages, especially if you can encapsulate that business logic and separate it from the implementation of DynamoDB.
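For what it's worth, here is a minimal TypeScript sketch of the shared-library approach using the AWS SDK v3; the table name, key attribute, and the "creation logic" are assumptions standing in for your real rules:

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, GetCommand, PutCommand } from '@aws-sdk/lib-dynamodb';

const doc = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const TABLE = 'shared-kv'; // assumed table name

// The simple creation logic lives here, so every consumer of the
// library applies the same rules while talking to DynamoDB directly.
export async function putValue(key: string, value: string): Promise<void> {
  if (key.length === 0) throw new Error('key must not be empty'); // stand-in validation
  await doc.send(new PutCommand({ TableName: TABLE, Item: { pk: key, value } }));
}

export async function getValue(key: string): Promise<string | undefined> {
  const out = await doc.send(new GetCommand({ TableName: TABLE, Key: { pk: key } }));
  return out.Item?.value;
}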
I am assuming that updating the shared library will not be in your control, and that clients (library users) will update whenever it suits them.
If the above assumption is true, you should go with a REST service, for a few reasons:
Your REST API may use a cache instead of calling DynamoDB every time.
You might want to change the schema of the data you put in DynamoDB.
You may switch to another database altogether.
You may have some validation logic, which will certainly evolve over time.
I'm working on a PHP/JS-based project, where I'd like to introduce Domain-Driven Design in the backend. I find that commands and queries express my public domain better than CRUD, so I'd like to build my HTTP-based API following the CQS principle. It is not quite CQRS, since I want to use the same model for the command and the query side, but many principles are the same. For API documentation I use Swagger.
I found an article which exposes CQRS through REST resources (https://www.infoq.com/articles/rest-api-on-cqrs). They use 5LMT to distinguish commands, which Swagger does not support. Moreover, don't I lose the benefit of the intention-revealing interface which CQS provides by putting it into a resource-oriented REST API? I didn't find any articles or products which expose commands and queries directly through an HTTP-based backend.
So my question is: is it a good idea to expose commands and queries directly through the API? It would look something like this:
POST /api/module1/command1
GET /api/module1/query1
...
It wouldn't be REST, but I don't see how REST brings anything beneficial to the table. Maintaining REST resources would introduce yet another model. Moreover, having commands and queries in the URL still lets us use features like routing frameworks and access logs.
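For concreteness, a sketch of what exposing commands and queries directly could look like with Express in TypeScript; module1, command1, and query1 are the placeholder names from above, and the two handler functions are hypothetical stand-ins for the domain layer:

import express from 'express';

const app = express();
app.use(express.json());

// Hypothetical domain handlers.
async function handleCommand1(payload: unknown): Promise<void> { /* mutate state */ }
async function handleQuery1(params: unknown): Promise<unknown> { return {}; }

// Commands: POST, side effects, no meaningful response body.
app.post('/api/module1/command1', async (req, res) => {
  await handleCommand1(req.body);
  res.status(202).end();
});

// Queries: GET, no side effects, return data.
app.get('/api/module1/query1', async (req, res) => {
  res.json(await handleQuery1(req.query));
});

app.listen(3000);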
The commands and queries are an implementation detail. This is apparent from the fact that they wouldn't exist at all if you had chosen an alternative style.
A RESTful API usually (if done right) follows the conceptual domain model. The conceptual domain model is not an implementation detail, because it is in your users' heads and is a source of requirements for your system.
Thus, a RESTful API is usually much easier to understand, because the clients (developers) have to understand the conceptual domain model anyway, and RESTful interfaces follow the concepts of such a model. The same is not true for a commands-and-queries-based API.
So we have a trade-off.
You already identified the drawbacks of building a RESTful API around the commands and queries, and I pointed out the drawbacks of your suggestion. My advice would be the following:
If you're building an API that other teams or even customers consume, then go the RESTful way. It will be much easier for the clients to understand your API.
If, on the other hand, the API is only an internal one that is e.g. used by a JS front-end that your team builds, and you have no external clients on the API, then your suggestion of exposing the commands and queries directly can be a shortcut that's worth the (above-mentioned) drawbacks.
If you take the shortcut, be honest with yourself and acknowledge it as such. This means that as soon as your requirements change and you gain external clients, you should probably build a RESTful API.
don't I lose the benefit of the intention-revealing interface which CQS provides by putting it into a resource-oriented REST API?
Intention revealing for whom? A client side programmer? A server side programmer? In the same team/org that maintains the domain model? Outside of that team/org? Someone on the internet who would access your API naively by just probing a starting URI with an OPTIONS request? Someone who would have access to the full API documentation with URIs and payloads structure?
REST is orthogonal to CQRS. In the end, no matter how you expose your resources on the web, domain notions will be reflected somewhere, whether in the URIs, the payloads, or the media types. I don't think using DDD or CQRS should influence the way you design your API that much.
We have a web2py application that we want to connect to an EmberJS client. The idea is to use the responsive capabilities of EmberJS to keep the client updated while writing minimal code.
We have (REST) primitives which are in charge of creating/updating the underlying datastore (CouchDB). These primitives are sometimes complex and cover corner cases, involving the creation of several documents, connecting them, validating configuration parameters, ... This is implemented in the backend. We would like to avoid duplicating the full modelling of the data in our EmberJS application, and to avoid duplicating the logic implemented by those primitives.
I have some questions:
does it make sense in EmberJS to just model a subset of the data in the documents? We would just create models for the small set of properties that the user is able to interact with. The client would not see the full CouchDB documents, just the data necessary for display and interaction.
is it possible to connect EmberJS to a REST interface, without having to fully model the underlying data in the database?
does it make sense in EmberJS to just model a subset of the data in the documents?
Yes. There is no need to create Ember models for objects/properties that the user will not need to interact with.
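For example, a sketch of a model that declares only the user-facing subset of a document (written with current Ember Data syntax, which postdates this question; the model and attribute names are assumptions):

import Model, { attr } from '@ember-data/model';

export default class ArticleModel extends Model {
  // The CouchDB document may carry many more fields (revisions,
  // relations to other documents, configuration); only what the UI
  // needs is declared here.
  @attr('string') declare title: string;
  @attr('string') declare status: string;
}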
is it possible to connect EmberJS to a REST interface, without having to fully model the underlying data in the database?
Definitely, that is possible; it's a fairly common use case. The best way to get started is by building a small MVP that works with just a couple of models. Once you've got that wired up, it will be easy to add more domain objects.
The tricky part (especially at first) will be mapping your REST endpoints to the ember-data REST adapter. The adapter will work out of the box with some REST endpoints - see the REST Adapter - but connecting a CouchDB datastore will probably require some customization. The tools for this are still evolving; have a look at the ember-data integration tests to see what is available.
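As a starting point, a sketch of a customized REST adapter (again in current Ember Data syntax; the host, namespace, and path rule are assumptions):

import RESTAdapter from '@ember-data/adapter/rest';

export default class ApplicationAdapter extends RESTAdapter {
  host = 'https://api.example.com'; // assumed backend host
  namespace = 'api/v1';             // assumed URL prefix

  // Endpoints that don't follow the adapter's pluralized defaults
  // can be handled by overriding how paths are built.
  pathForType(modelName: string): string {
    return modelName; // e.g. keep singular resource paths
  }
}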
I really like OData (WCF Data Services). In past projects I have coded up so many Web-Services just to allow different ways to read my data.
OData gives great flexibility for the clients to have the data as they need it.
However, in a discussion today, a co-worker pointed out that the way we are doing OData is little more than giving the client application a direct connection to the database.
Here is how we are setting up our WCF Data Service (Note: this is the traditional way)
Create an Entity Framework (EF) Data Model of our database
Publish that model with WCF Data Services
Add Security to the OData feed
(This is where it is better than a direct connection to the SQL Server)
My co-worker (correctly) pointed out that all our clients will be coupled to the database now. (If a table or column is refactored then the clients will have to change too)
EF offers a bit of flexibility in how your data is presented and can be used to hide some minor database changes that don't affect the client apps. But I have found it to be quite limited. (See this post for an example.) I have found that the POCO templates (while nice for allowing separation of the model and the entities) also do not offer very much flexibility.
So, the question: what do I tell my co-worker? How do I set up my WCF Data Services so they use business-oriented contracts (as they would if every read operation used a standard WCF SOAP-based service)?
Just to be clear, let me ask this a different way: how can I decouple EF from WCF Data Services? I am fine with making up my own contracts and using AutoMapper to convert between them. But I would like not to go directly from EF to OData.
NOTE: I still want to use EF as my ORM. Rolling my own ORM is not really a solution...
If you use your own custom classes instead of the classes generated directly by EF, you will also change the provider for WCF Data Services. This means you will no longer pass the EF context as the generic parameter to the DataService base class. That is fine if you have read-only services, but once you expect any data modifications from clients, you will have a lot of work to do.
Data services based on an EF context support data modifications. All other data services use the reflection provider, which is read-only by default until you implement IUpdatable on your custom "service context" class.
Data services are a technology for quickly creating services that expose your data. They are coupled to their context, and it is the responsibility of the context to provide the abstraction. If you want to make quick and easy services, you are dependent on the features supported by EF mapping. You can make some abstractions in the EDMX, and you can make projections (DefiningQuery, QueryView), etc., but all these features have limitations (for example, projections are read-only unless you use stored procedures for modifications).
Data services are not the same as providing a connection to the database. There is one very big difference: a connection to the database ensures only access and execution permissions, but it does not ensure data security. WCF Data Services offers data security because you can create interceptors that add filters to queries so users retrieve only the data they are allowed to see, or that check whether they are allowed to modify the data. That is the difference you can point out to your colleague.
As for abstraction: do you want a quick and easy solution or not? You can inject an abstraction layer between the service and the ORM, but then you need to implement the interface mentioned above and test it.
The simplest approach:
DO NOT PUBLISH YOUR TABLES ;)
Make a separate schema
Add views to this schema
Put those views into EF and publish them.
The views are decoupled from the tables and thus can be simplified and refactored separately.
Standard approach, also for reporting.
Apart from enabling more granular data authorisation (based on certain field values, etc.), OData also makes your data accessible via open standards like JSON/XML over HTTP using OAuth. This is very useful for web/mobile applications. You could create a web service to expose your data, but that would warrant a change every time your clients' data requirements change (e.g. extra fields needed), whereas OData allows this via OData queries. In a big enterprise this is also useful for designing security at the infrastructure level, as it only allows text-based (HTTP) calls, which can be inspected/verified for security threats via network firewalls.
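To illustrate that point, here is a sketch of a client reshaping its data requirements purely through OData query options, with no server-side change; the service URL and field names are assumptions:

async function fetchLondonCustomers() {
  const url =
    'https://example.com/odata/Customers' +
    '?$select=Name,City' +            // only the fields needed today
    "&$filter=City eq 'London'" +     // server-side filtering
    '&$top=20&$orderby=Name';         // paging and ordering
  const res = await fetch(url, { headers: { accept: 'application/json' } });
  const body = await res.json();
  return body.value; // OData JSON responses wrap results in 'value'
}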
You have some other options for your OData client. Have a look at Simple.OData.Client, described in this article: http://www.codeproject.com/Articles/686240/reasons-to-consume-OData-feeds-using-Simple-ODa
And in case you are familiar with Simple.Data microORM, there is an OData adapter for it:
https://github.com/simplefx/Simple.OData/wiki
UPDATE: My recommendations concern the client side, while your question is about setting up your server side, so of course they are not exactly what you asked. I will leave my answer, however, so you are aware of the client alternatives.