ServiceStack/TypeScript: typescript-ref ignores namespaces, causing duplicate classes

I am learning NativeScript + Angular2 with ServiceStack in C# as the backend.
For the app, I generated TypeScript classes using the typescript-ref command, for use with the JsonServiceClient in ServiceStack:
>typescript-ref http://192.168.0.147:8080 RestApi
It all looked sweet and dandy, until I discovered that it seems to ignore the fact that the ServiceStack Services and Response DTOs are in different namespaces on the .NET side.
I have different branches of services, where the handlers for each service might differ slightly between the branches. This works well in ServiceStack; the Login and other handlers work just as expected.
So, on the app/NativeScript/Angular2 side, I used typescript-ref and generated restapi.dtos.ts. The problem is that it skips the namespace difference and just creates duplicate classes instead (as seen in VSCode).
The backend WS in ServiceStack is built in this "branched" fashion so I don't have to start different services on different ports, but rather gather all of them on one port and keep it simple and clear.
Can my problem be remedied?

You should never rely on .NET namespaces in your public-facing Services Contract. Namespaces are supported in .NET clients but not in any other language, so each DTO needs to be uniquely named.
In general your DTOs should be unique within your entire SOA boundary, so that there's only one Test DTO mapping to a single schema definition. This ensures that when it's sent through a Service Gateway, resolved through Service Discovery, published to an MQ Server, etc., it only ever maps to a single DTO contract.
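As a minimal sketch of what that means in practice (the DTO names, routes and properties here are hypothetical, not from the question): two Request DTOs that differ only by namespace collide once typescript-ref strips the namespaces, and renaming them uniquely resolves the duplicates.

    using ServiceStack;

    // Problem: the same class name in two namespaces. Fine for .NET
    // clients, but typescript-ref emits two conflicting Login classes.
    namespace RestApi.BranchA
    {
        [Route("/branchA/login")]
        public class Login : IReturn<LoginResponse> { public string UserName { get; set; } }
        public class LoginResponse { public string SessionId { get; set; } }
    }

    // Fix: give each branch's DTOs globally unique names, e.g.:
    namespace RestApi.BranchB
    {
        [Route("/branchB/login")]
        public class BranchBLogin : IReturn<BranchBLoginResponse> { public string UserName { get; set; } }
        public class BranchBLoginResponse { public string SessionId { get; set; } }
    }

You can keep the branch-per-namespace layout on the .NET side; only the DTO names themselves need to be globally unique.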

Related

Blazor w/ Entity Framework Core - compile error

I have the following setup but am unable to finish building as I get an obscure error related to line 439 in file Blazor.MonoRuntime.targets (MSB3073).
Does this essentially mean that Entity Framework Core will in no way work with Blazor preview 6?
Details:
Asp.net Hosted Blazor
AspNetCore.Blazor (3.0.0-preview6.19307.2)
Microsoft.EntityFrameworkCore (3.0.0-preview6.19304.10)
Microsoft.EntityFrameworkCore.Design (3.0.0-preview6.19304.10)
Microsoft.EntityFrameworkCore.SqlServer (3.0.0-preview6.19304.10)
Resolved via a hack solution!
Somehow I was able to resolve everything and make things run end-to-end. I believe the big, critical things were:
* Ensure that the Blazor client AND server projects do not directly reference Entity Framework
* Do not let the Blazor client reference (directly or indirectly) the project with the generated entities. To get access to the models, I just created a duplicate of the generated entities (and removed the "partial" from the classes that were generated)
Some clarification is needed here:
You cannot use Entity Framework on the Client project of Blazor. Entity Framework is a server technology.
You may use Entity Framework on the Server project of your application.
Communication between your client side and your server hosting side is ordinarily done via HTTP calls (the HttpClient service), but you may also employ SignalR.
To enable HTTP calls you should expose HTTP routing endpoints. This can be done by using Web API with the required endpoints. Your exposed Web API methods (controller methods) can access the database directly (or indirectly, if you define repositories, services, etc.) via Entity Framework objects, and return the queried data to the calling methods (HttpClient methods).
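A minimal sketch of such a server-side controller (AppDbContext and Employee are placeholder names, not from the question):

    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.EntityFrameworkCore;

    [ApiController]
    [Route("api/[controller]")]
    public class EmployeesController : ControllerBase
    {
        private readonly AppDbContext _db;  // EF Core DbContext: lives in the server project only

        public EmployeesController(AppDbContext db) => _db = db;

        [HttpGet]  // GET api/employees
        public async Task<List<Employee>> Get() =>
            await _db.Employees.AsNoTracking().ToListAsync();
    }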
Note that in my answer I particularly relate to Blazor Client-side apps, but it is mostly true with regards to Blazor server-side apps. I may just add here that in Blazor server-side apps you don't have to use Web Api since Blazor is executed on the server. In such a case, you can define a normal service to retrieve the data from the database, and pass it to the calling methods (no HttpClient involved here).
The Shared project is intended to contain objects that can be used by both the front end and the back end. This is the place where you can define your model objects. For instance, you can define an Employee class that the server uses to retrieve the data and pass it to the client, and in the client you can define a list of Employee objects that will store the retrieved data. In short, you don't have to define two types of objects, one appropriate to the server and one appropriate to the client (say your client is an Angular app).
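For illustration, a sketch of that split (MyApp.Shared is a placeholder project name; the client call assumes the System.Net.Http.Json extensions available in current Blazor, while the early previews used Http.GetJsonAsync<T>() instead):

    // Shared project: the one model both sides compile against.
    namespace MyApp.Shared
    {
        public class Employee
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }
    }

    // Blazor client (e.g. in a component's @code block, with HttpClient
    // injected as Http): fetch the list the Web API controller returns.
    var employees = await Http.GetFromJsonAsync<List<MyApp.Shared.Employee>>("api/employees");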
Hope this helps.

What are some methods to document contracts between microservices?

In an HTTP-driven microservices architecture, each service might have a number of public endpoints that return JSON, for example, to a client or an API gateway intermediary. These services could also accept POSTs with JSON bodies of a certain shape, or query strings of a certain shape, etc.
What are some good options for documenting or programmatically keeping track of these "contracts" between services? I.e., if service A's /getThing endpoint has been refactored to return different data, is there a documentation tool or methodology that would facilitate updating the API gateway to adapt to this change?
For programmatic management of contracts, if you are using the spring-cloud stack then you should look into spring-cloud-contract. It lets you keep track of the latest version of the contracts for your REST endpoints, and if any change occurs in an API endpoint it notifies you by breaking the contract and failing the test cases built around it.
Say, for example, service A's /getThing endpoint has been refactored to return different data; then all services calling this endpoint will fail at build time.
However, this methodology won't facilitate updating the API gateway to adapt to the change, as there might be different logic you want to perform for every new version of your endpoints.
You can also create REST Docs snippets from these endpoint contracts (check out Spring REST Docs), and you can use Swagger for documenting your endpoints.
For NodeJS, check here.
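As a point of comparison outside the Spring world, here is a hedged sketch of the Swagger/OpenAPI route in ASP.NET Core (.NET 6+ minimal hosting) using the Swashbuckle.AspNetCore package; the /getThing endpoint and its payload are invented for illustration:

    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddEndpointsApiExplorer();
    builder.Services.AddSwaggerGen();   // generates the OpenAPI document

    var app = builder.Build();
    app.UseSwagger();                   // serves /swagger/v1/swagger.json
    app.UseSwaggerUI();                 // serves the interactive doc page

    app.MapGet("/getThing", () => new { Id = 1, Name = "thing" });
    app.Run();

Consumers (or the API gateway team) can then diff the generated OpenAPI document between releases to spot contract changes.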

Client per MicroService vs Generic Client | Who is responsible for microservice client?

I have a microservice architecture with 10 microservices, and each of those provides a client. Inside that client, which is managed/controlled by the microservice team, we just receive the parameters and pass them to a generic HTTP invoker, which takes the endpoint and N params and then makes the call.
All microservices use HTTP and Web API (I guess the technology doesn't matter).
To me it doesn't make sense for the microservice team to provide a client; that should be the responsibility of the consumer. Whether they want to create some abstractions or invoke it directly is their problem, not the microservice's. And the way I see it, a web API is a contract. So I think we should delete all clients on the microservice side (passing the responsibility to consumers) and create a service layer on the consumer's side that uses the generic invoker to reach the endpoints.
The image below represents all the components; the red line defines the boundaries of who is responsible for what:
* The gateway has an Adapter Layer
* The Adapter Layer references the microservice client package
* The microservice client package references the generic HTTP invoker package
The argument for the other side is that we might have N consumers, all repeating the client code; if the microservice provides a client, we have a unique, central place to control that.
Which approach is correct? Is the client a responsibility of the microservice or of the consumer?
This is an internal product.
I have a similar setup at work, with several microservices (~40) and a dozen teams. I was asked the same question several times, and my answer is the consumer is responsible for consuming. If the API works as designed and expected, there is no point in making the providing team responsible for anything.
The team that provides the service (team A) may provide a client if they want (when in doubt, without warranty). The consuming team (team B) may use that client if they want (taking on all the risks that entails).
The only contract should be the API; everything else should be a goodie a team may provide on top. If team A has to provide a client, why do they provide an API at all?
Given that both teams are loosely coupled and may use different technologies (or, say, different Spring Framework versions), providing a client library to the other team tends to bring more problems than it solves. In a Java + Spring Boot world, for example, you get into dependency problems very fast, especially if you include several clients from different service-providing teams that evolve differently over time.
And even worse: what if the client library of team A makes the system of team B unstable and introduces bugs? Who is responsible for fixing that now?
If you want to reduce the work needed for your consuming teams because re-coding the client is so much work, your API may be too complex and/or your microservice may no longer be a microservice at all.
Imagine using HATEOAS on a RESTful API: writing a client for that is just a few lines of code, even with an included API browser, documentation and whatnot. See e.g. spring-rest-docs, hal-browser, swagger and various other technologies that make reading/browsing/documenting an API and implementing a client a breeze.
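To make the "few lines of code" claim concrete, here is a hedged sketch of a consumer-owned typed client in C# (the service route and the Thing DTO are made up for illustration):

    using System.Net.Http;
    using System.Net.Http.Json;
    using System.Threading.Tasks;

    public record Thing(string Id, string Name);

    // The entire "client library" a consumer needs: one thin wrapper
    // per endpoint actually consumed, owned and versioned by the consumer.
    public class ThingApiClient
    {
        private readonly HttpClient _http;
        public ThingApiClient(HttpClient http) => _http = http;

        public Task<Thing?> GetThingAsync(string id) =>
            _http.GetFromJsonAsync<Thing>($"/things/{id}");
    }

If this is all a consumer has to write, a shared client library buys very little compared to the coupling it introduces.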
Above cases are described with two teams, imagine that with 10. We had a "client library" provided by one team, used by 4 other teams. You can guess how fast it became a complete mess until it was just deleted :)

SOA service vs other kinds of services

What is the difference between an SOA service and other kinds of services, like an application or domain service?
Have a look here: http://www.bennadel.com/blog/2385-application-services-vs-infrastructure-services-vs-domain-services.htm
Short answer
DDD Domain Services operate on Domain Entities, usually where the work that needs to be done spans multiple Aggregate Roots.
DDD Application Services drive workflow. For example, if you want to do some work on a domain entity, the Application Service would be responsible for fetching the entity from the data store, calling the domain service to do the work, doing some work via an integration service if needed, and then lastly persisting the change.
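A hedged sketch of that workflow in C# (Order, IOrderRepository and PricingService are illustrative names, not from any particular framework):

    using System;
    using System.Threading.Tasks;

    public class Order { public decimal Total { get; set; } }

    public interface IOrderRepository
    {
        Task<Order> GetAsync(Guid id);
        Task SaveAsync(Order order);
    }

    // Domain Service: pure domain logic, no persistence concerns.
    public class PricingService
    {
        public void ApplyDiscount(Order order, decimal percent) =>
            order.Total -= order.Total * percent / 100m;
    }

    // Application Service: drives the workflow around the domain work.
    public class OrderApplicationService
    {
        private readonly IOrderRepository _orders;
        private readonly PricingService _pricing;

        public OrderApplicationService(IOrderRepository orders, PricingService pricing)
        {
            _orders = orders;
            _pricing = pricing;
        }

        public async Task ApplyDiscountAsync(Guid orderId, decimal percent)
        {
            var order = await _orders.GetAsync(orderId); // 1. fetch from the data store
            _pricing.ApplyDiscount(order, percent);      // 2. delegate the domain work
            await _orders.SaveAsync(order);              // 3. persist the change
        }
    }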
This is an interesting question, since SOA is such a broad and overloaded term.
If we take SOA to mean any implementation that results in a mechanism to reach 'services', then even application and domain services will form part of SOA services. Application and domain services even fall within the realm of microservices, although application services are usually surfaced through some integration mechanism.
I like to think of these things in terms of 'reachability'. Wikipedia:
In graph theory, reachability refers to the ability to get from one vertex to another within a graph
So, it depends on how reachable your code is. A bunch of domain services could, theoretically, form a service-oriented architecture.
The only difference is in how you surface your services.

WF4 workflow versions where service contract changes

I just successfully implemented a WF4 "versioning" system using WCF's Routing Service. I had a version1 workflow service to which I added a new Decision activity and saved it as a version2 service. So now I have 2 endpoints (with identical service contracts, i.e. all Receive activities are the same for both services) and a router that checks the content of a message (a "versionId" string on the object that all of my Receives accept as an argument) to decide which endpoint to hit.
My question is: while this works fine when no changes are made to the service contract, how do I handle the need to add or remove methods from my service contract and create a version3 service? My original thought was that when I add the service reference to my client, I use the latest workflow service's endpoint to get the latest service contract. Then, in the config file, I change the endpoint I connect to so that it points at the router's endpoint. But this won't work if v1 and v2 have a different contract than v3: my proxy will have v3's methods and forget all about v1 and v2.
Any ideas of how to handle this? Should I create an actual service contract interface in my workflow solution (instead of just supplying a ServiceContractName in my Receive activities)?
If the WCF contract changes, your client will need to be aware of the additional operations and when to call them. In some applications I have used the active bookmarks (each of which contains the WCF operation name) from the persistence store to have the client app adapt to the workflow dynamically, checking the enabled bookmarks and enabling/disabling UI controls based on that. The client will still have to be updated when new operations are added to a new version of the workflow.
While WCF was young, I heard a few voices arguing that endpoint versioning (for web services, that is) should be accomplished by using a folder structure. I never got to the point of trying it out myself, but just analyzing the consequences of such a strategy, it seems to me a splendid solution. I have no production experience of WCF, but I am about to launch a rather comprehensive solution using version 4.0 of .NET (ASP.NET, WCF, WF...), and at this stage I would argue that using a folder structure to separate versions of endpoints is a good approach.
The essence of such a strategy would be to never change or remove the contract of an endpoint (a specific version) until you are 100% sure that it is not used any more. As your services evolve, you would just add new contracts and endpoints. This could lead to code duplication if one is not as structured a developer as one should be, but by introducing a service facade the duplication would be insignificant.
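A hedged sketch of what side-by-side contracts plus a facade could look like in WCF (the contract names, namespaces and operations are invented for illustration):

    using System.ServiceModel;

    // v1 contract: published once, then never changed or removed
    // until you are sure no client still uses it.
    [ServiceContract(Namespace = "http://example.com/orders/v1")]
    public interface IOrderServiceV1
    {
        [OperationContract]
        string SubmitOrder(string versionId, string payload);
    }

    // v3 contract: lives at its own endpoint (e.g. .../v3/OrderService)
    // and adds operations without touching v1/v2.
    [ServiceContract(Namespace = "http://example.com/orders/v3")]
    public interface IOrderServiceV3
    {
        [OperationContract]
        string SubmitOrder(string versionId, string payload);

        [OperationContract]  // new in v3, absent from v1 and v2
        string CancelOrder(string orderId);
    }

    // The facade implements every still-supported version, so the
    // shared logic lives in one place and duplication stays small.
    public class OrderServiceFacade : IOrderServiceV1, IOrderServiceV3
    {
        public string SubmitOrder(string versionId, string payload) => "ok";  // shared logic
        public string CancelOrder(string orderId) => "cancelled";             // v3-only logic
    }

Each versioned contract would then be exposed on its own endpoint (the folder-structure idea), while the router keeps dispatching on versionId.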
I have been through the same situation. You can maintain versions with the help of a custom implementation: save the workflow service URLs in a database and invoke them as desired.
The following post shows how a client can call the WF service using such a URL:
http://rajeevkumarsrivastava.blogspot.in/2014/08/consume-workflow-service-45-from-client.html
Hope this helps.