How to record API calls to mock in SwaggerHub?

I want to mock API calls from my application and host the mock, so my tests can work without calls to the real API. There is a service called restbird which does exactly that, but it is far from ideal for me. If you want to collaborate, you have to host the service yourself. It also has bugs, such as not displaying the history of calls, or sending server errors for no reason. I want a service more robust than this one.
The only service that seems like it might be a good fit is SwaggerHub: it seems robust, it has virtual servers, and overall it is very popular. The only problem is that I cannot find a way to record API calls from my application. So how can I record API calls for SwaggerHub?

There is currently no functionality within SwaggerHub itself to record API calls made from the Swagger UI module within the tool. This is a limitation of the open-source Swagger UI tool.
What I can recommend is the Swagger Inspector tool. Swagger Inspector can be used to make API calls from a client, save both the request and the response, and even generate an OpenAPI file for you based on the requests/responses. If you create an account and sign in, you can even save your API calls to a collection to use later.
Swagger Inspector: https://inspector.swagger.io/builder

It may also be worth considering ReadyAPI's Virtualization module to handle this use case. With ReadyAPI Virtualization you can record transactions from a browser, build mock services from the recorded transactions or an existing API definition, and then host the mock service using VirtServer.
ReadyAPI is part of SmartBear's API lifecycle products, so there are integrations between the two tools. For instance, you can port APIs from SwaggerHub into ReadyAPI directly, and you can use mock services built in ReadyAPI to do dynamic mocking in SwaggerHub.
You can find more information about ReadyAPI Virtualization here: https://smartbear.com/product/ready-api/api-virtualization/
I realise this is a very late response to this thread, but hopefully this information comes in handy.


JWT authentication for jBASE RESTful API

We are in the process of designing a front-end application with Angular which will call a jBASE server through RESTful APIs. The APIs are created with a jBASE component called jAgent.
Does jAgent support creating and verifying JWTs?
If not, what is the best way to handle authentication/authorization for the Angular application?
If we need to use JWTs, do we have to use an authentication middleware application (.NET Core or node.js) for that?
Great question! At the moment there is no JWT handler within jAgent, and our recommendation is to implement this, along with advanced web server/API gateway functionality, by way of other applications like HAProxy or Kong.
Expanding jAgent functionality to include things like this is something we're still considering, but keep in mind that the power of jBASE lies in its native interactions with the host OS. Since there is no virtual OS layer, it can be easier to plug in off-the-shelf tools to fill in additional functionality, which gives you the flexibility to bring your own tooling.
In summary:
- Does jAgent support JWTs? Not at the moment.
- Best way to handle authentication/authorization? Use an off-the-shelf package to act as your API gateway.
- Do you need a middleware application? Subject to the package you choose.
That relegates jAgent to management of the API layer as it exists on the PICK/jBASE side, while the off-the-shelf package manages your API security layer.
One other note for you: I noticed that you included a link to the old jBASE docs hosted on HelpJuice. It's worth mentioning that we've migrated those docs to docs.zumasys.com. You'll find the docs there to be more up to date and completely open sourced; part of the migration included their move to a GitHub repo, where we're happy to take community contributions.
For reference, the article you mentioned is available at https://docs.zumasys.com/jbase/connectivity/jagent/introduction-to-jagent-rest-services/.
Update:
One of our engineers has a program that will use openssl to generate the tokens for you, which you can find at https://github.com/patrickp/wjwt.
You will need openssl installed on the machine and available in the PATH.
The WJWT.TEST program shows the usage. The important piece is SECRET.KEY, which is the internal key you use to sign the payloads.
When a user first authenticates, you create the token with SIGN. Claims are any items/fields you wish to store. Do NOT put sensitive data in here, as it is viewable by anybody. The concept is that we sign the payload with our key and give the token back to the client. On future calls the client sends the token; we pull it and call the VERIFY function, which re-signs the payload and checks that the signatures match. This validates that the payload was not manipulated.
Behaviour such as token expiration you would build into your own code.
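If it helps to see the idea outside jBASE BASIC, here is a minimal Node.js (16+) sketch of the same HS256 sign/verify scheme; the key, claims, and helper names are illustrative and not part of WJWT:

```javascript
const crypto = require('crypto');

// Stands in for WJWT's SECRET.KEY: the internal key used to sign payloads.
const SECRET_KEY = 'replace-with-your-internal-key';

// HMAC-SHA256 in the URL-safe base64 alphabet that JWTs use.
const hmac = (data) =>
  crypto.createHmac('sha256', SECRET_KEY).update(data).digest('base64url');

const b64url = (obj) => Buffer.from(JSON.stringify(obj)).toString('base64url');

// SIGN: header.payload.signature
function sign(claims) {
  const header = b64url({ alg: 'HS256', typ: 'JWT' });
  const payload = b64url(claims); // readable by anybody: never put secrets here
  return `${header}.${payload}.${hmac(`${header}.${payload}`)}`;
}

// VERIFY: re-sign what the client sent and check the signatures match.
function verify(token) {
  const [header, payload, signature] = token.split('.');
  // Production code should compare with crypto.timingSafeEqual.
  return hmac(`${header}.${payload}`) === signature;
}

const token = sign({ user: 'jdoe', role: 'admin' });
console.log(verify(token));       // true
console.log(verify(token + 'x')); // false: signature no longer matches
```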
Long term we plan to take this library and refactor the code into our MVDB Toolkit library with more functionality. That library is something we provide to jBASE customers at no additional charge.

What are some methods to document contracts between microservices?

In an HTTP-driven microservices architecture, each service might have a number of public endpoints that return JSON, for example, to a client or an API gateway intermediary. These services could also accept POSTs with JSON bodies of a certain shape, or query strings of a certain shape, etc.
What are some good options for documenting or programmatically keeping track of these "contracts" between services? I.e., if service A's /getThing endpoint has been refactored to return different data, is there a documentation tool or methodology that would facilitate updating the API gateway to adapt to this change?
For programmatic management of contracts, if you are using the spring-cloud stack then you should look into spring-cloud-contract, which lets you easily keep track of the latest version of the contracts for your REST endpoints. If any change occurs in an API endpoint, it will notify you by breaking the contract and failing the test cases built around it.
Let's say, for example, that service A's /getThing endpoint has been refactored to return different data; then all services calling this endpoint will fail at build time.
However, this methodology won't facilitate updating the API gateway to adapt to the change, as there might be different logic you want to perform for every new version of your endpoints.
You can also create REST Docs snippets from these endpoint contracts; check out Spring REST Docs. And you can use Swagger for documenting your endpoints.
For Node.js, check here.
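As a rough, framework-free illustration of the same consumer-driven idea in Node.js (18+, for the global fetch): the /getThing endpoint comes from the question above, while the field names and base URL are made up. spring-cloud-contract does considerably more (stub generation, versioning), but the build-breaking principle is the same.

```javascript
const assert = require('assert');

// The response shape consumers of /getThing rely on (field names are made up).
const getThingContract = { id: 'number', name: 'string' };

// Provider-side check: fail the build if /getThing drifts from the contract.
async function verifyProvider(baseUrl) {
  const res = await fetch(`${baseUrl}/getThing`);
  assert.strictEqual(res.status, 200);
  const body = await res.json();
  for (const [field, type] of Object.entries(getThingContract)) {
    assert.strictEqual(typeof body[field], type,
      `contract broken: ${field} should be a ${type}`);
  }
}

verifyProvider('http://localhost:3000').catch((err) => {
  console.error(err.message);
  process.exit(1); // breaks the build, as spring-cloud-contract would
});
```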

Public valid REST Api with wolkenkit.io

I am currently evaluating the framework "wolkenkit" [1] for use in an application. Within this application I will have a user interface for tenant-based data management. Only authenticated users will have access to this application.
Additionally, there should be a public REST API following common standards, callable by the public (tenant security done by submitting a tenant-based API key in the request headers).
As far as I have found out, the wolkenkit REST API does not seem to fit these standards in terms of HTTP verbs.
But as wolkenkit overall appears to me to be a really flexible and easy-to-use framework, I wonder how to implement such a public API on top of it.
Would it be a valid approach, e.g., to create a separate web application which internally connects to the wolkenkit backend? What about the additional performance overhead then?
[1] https://www.wolkenkit.io/
In addition to the answer of mattwagl, I would like to point out a few things that you may be interested in.
First of all, since wolkenkit is based on CQRS, the application has a separate API for writing and reading. That means that if you send a command (whose intent is to change state), it goes to the write API. If you subscribe to events or run a query, it goes to the read API.
This in turn means that if you send a command, it's up to the write side to respond to it. As the write side is not meant to return application state, all it says is basically: "Thanks, I have received the command." To get the actual result you have to wait for the appropriate event, which means subscribing to the read API.
In the wolkenkit documentation there is a nice diagram which shows this in a clear way.
If you now add a separate REST API (one which actually fulfills the requirements of REST), this means that you need to handle waiting for the result internally. In other words: clients in wolkenkit are always meant to be asynchronous; REST is not. Hence it's your job to handle the asynchronous behavior of the wolkenkit APIs in your REST API. I think that this is the hardest part.
Once you have done this, you will have a synchronous REST API, and of course it will have some overhead. But since that overhead is limited to passing through and translating network requests, it should be negligible.
Oh, and finally, there is another thing to watch out for: since REST as originally conceived relies on the HTTP verbs to transport semantics, you need to map GET / POST / PUT / DELETE to the semantic commands of wolkenkit. As long as this can be done 1:1, everything's fine; problems start when there are multiple commands that (technically speaking) do an UPDATE.
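To make both points concrete, here is a rough sketch of such a bridging layer using Express. Note that sendCommand and awaitEvent are hypothetical helpers standing in for the wolkenkit client API, not real wolkenkit functions:

```javascript
const express = require('express');
const { randomUUID } = require('crypto');

// Hypothetical helpers wrapping the wolkenkit APIs:
// sendCommand() posts to the write API, awaitEvent() resolves once the
// matching domain event shows up on the read API (or rejects on timeout).
const { sendCommand, awaitEvent } = require('./wolkenkit-bridge');

const app = express();
app.use(express.json());

// One REST verb mapped to one semantic command, made synchronous by
// waiting for the resulting event before answering the HTTP request.
app.post('/things', async (req, res) => {
  const aggregateId = randomUUID();
  try {
    await sendCommand('issueThing', aggregateId, req.body); // write side: "received"
    const event = await awaitEvent(aggregateId, 5000);      // read side: the result
    res.status(201).json(event.data);
  } catch {
    res.status(504).json({ error: 'timed out waiting for the domain event' });
  }
});

app.listen(3000);
```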
PS: I'm also one of the developers of wolkenkit.
PPS: However you are going to solve this, I would be highly interested to hear from you! It would be great if you could share your experiences with us, as you are most probably not the last one with this idea. If you want to contact us, the easiest way is via Slack.
wolkenkit applications can be accessed using an HTTP and a WebSocket API. These APIs are both provided by the tailwind module that wolkenkit uses under the hood. In the tailwind repo you can find a very simple documentation of the available HTTP routes.
You're right, the wolkenkit HTTP API is not a classic REST API. It's more RPC-style, which in our experience is a good fit for applications. There are only 3 routes that your clients/tenants need to support:
- /v1/command (POST) is used for issuing commands. The commands you post should follow the command schema.
- /v1/events (POST) can be used for streaming events to clients. These events will follow the event schema.
- /v1/read/:modelType/:modelName (POST) is used to read models.
You can simply use HTTPie to test these routes.
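For example, assuming a wolkenkit app listening locally and Node 18+ for the global fetch, issuing a command might look roughly like this; the body fields are a guess at the command schema, so check the tailwind documentation for the real shape:

```javascript
// Field names in the body are assumptions about the command schema.
async function issueCommand() {
  const res = await fetch('http://localhost:3000/v1/command', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({
      context: { name: 'exampleContext' },                          // assumed
      aggregate: { name: 'exampleAggregate', id: 'some-aggregate-id' },
      name: 'exampleCommand',
      data: {},
    }),
  });
  console.log(res.status, await res.text());
}

issueCommand();
```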
Authentication for these APIs is currently done using OpenID Connect. There's a very detailed article on how to set up authentication using Auth0. I'm not quite sure whether this fits your use case, but you could basically use any authentication service that follows this standard or that is able to issue JWT tokens.
Finally, you could also build your own JavaScript client SDK that runs inside browsers, as a module that uses wolkenkit-client-js under the hood. This SDK can use the same API as any other client to connect to your application.
Hope this helps.
PS: Please note that I am one of the authors of wolkenkit.

Where to write tests for a Frontend/Backend application?

I want to write a web application with a simple frontend-backend (REST API) architecture.
It's not clear to me where and how to write tests.
Frontend: should I write tests mocking API responses and testing only UX/UI?
Backend: should I write API call tests here, and eventually more fine-grained unit tests on classes?
But in this way I'm afraid that the frontend tests are not aware of the real API responses (because they mock them independently of the backend).
On the other side, if I don't mock API responses and use real responses from the backend, how can the frontend client prepare the DB to get the data it wants?
It seems to me that I need 3 kinds of tests:
- UX/UI testing: the frontend is working with a set of mock responses
- API testing: the API is giving the correct answers given a set of data
- Integration testing: the frontend is working by really calling the backend with a set of data (generated by whom?)
Are there frameworks or tools to make this as painless as possible?
It seems very complicated to me (if the API spec changes, I must rewrite a lot of tests).
Any suggestions welcome.
Well, you are basically right. There are 3 types of tests in this scenario: backend logic, frontend behaviour and integration. Let's take them one by one:
Backend tests
You are testing mainly the business logic of the application. However, you must test the entire application stack: domain, application layer, infrastructure, presentation (API). These layers require both unit testing and integration testing, plus some pure black-box testing from the user's perspective. But this is a complex problem of its own; the full answer would be extremely long. If you are interested in techniques for testing applications in general, please start another thread.
Frontend behaviour
Here you test whether the frontend app uses the API the right way. You mock the backend layer and write mostly unit tests. Now, as you noticed, there can be some problems regarding the real API contract. However, there are ways to mitigate those problems. First, a link to one such solution: https://github.com/spring-cloud/spring-cloud-contract. Now, some explanation. The idea is simple: the API contract is driven by the consumer. In your case, that would be the frontend app. The frontend team cooperates with the backend team to create a sensible API, meeting all of the client's needs. The frontend tests are therefore guaranteed to use the "real API". When the client tests change, the contract changes, so the backend must refactor to the new requirements.
As a side note: you don't really need to use any concrete framework. You can follow the same methodology if you apply some discipline to your team. Just remember: the consumer defines the contract first.
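For instance, a frontend unit test can pin the agreed response shape as a shared fixture, so the mock cannot silently drift from the negotiated contract; the function and fixture names below are illustrative:

```javascript
const assert = require('assert');

// The response shape both teams agreed on; in practice this fixture would
// live in a shared repo so neither side can change it unilaterally.
const agreedGetThingResponse = { id: 1, name: 'example' };

// Hypothetical frontend function under test: it takes fetch as a parameter
// so the test can substitute a mock.
async function fetchThingName(fetchImpl) {
  const res = await fetchImpl('/getThing');
  const body = await res.json();
  return body.name;
}

// The mock returns the contract fixture, not an ad-hoc response, so the
// frontend test cannot drift away from the agreed API.
const mockFetch = async () => ({ json: async () => agreedGetThingResponse });

fetchThingName(mockFetch).then((name) => {
  assert.strictEqual(name, 'example');
  console.log('frontend handles the agreed contract');
});
```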
Integration tests
To make sure everything works together, you also need some integration/e2e tests. You set up a real test instance of your backend app, then perform integration tests against the real server instead of fake mock responses. However, you don't need to (and should not) duplicate the tests from the other layers. You want to test whether everything is integrated properly, so you don't test any real logic. You just choose some happy paths, maybe some failure scenarios, and perform these tests from the user's perspective. So you assume nothing about the state of the backend app and simulate user interaction: add a new product, modify a product, fetch the updated product, or check a single authentication point. These tests are not really about business logic, but about whether the real API test server communicates properly with the frontend app.
Talking about the tools, it depends on your language preferences and on how the app is being built.
For example, if your team feels comfortable with JavaScript, it could be interesting to use frameworks like Playwright or WebdriverIO (better if you plan to test mobile) for UI and integration tests. These frameworks can work together with others more specialised in API testing, like PactumJS, with the plus that they can share some functions.
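As an illustration, a Playwright happy-path test for the "add new product" scenario from above might look like this (URL and selectors are placeholders):

```javascript
// Happy-path e2e test against a real test instance of the whole stack.
const { test, expect } = require('@playwright/test');

test('user can add a new product', async ({ page }) => {
  await page.goto('http://localhost:4200/products'); // assumed test instance
  await page.getByRole('button', { name: 'Add product' }).click();
  await page.getByLabel('Name').fill('Test product');
  await page.getByRole('button', { name: 'Save' }).click();
  // Passing here means frontend, API, and DB are wired together correctly.
  await expect(page.getByText('Test product')).toBeVisible();
});
```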
If you organise the tests correctly, you will not have much extra work when some API specs change.

WF4 workflow versions where service contract changes

I just successfully implemented a WF4 "versioning" system using WCF's Routing Service. I had a version1 workflow service to which I added a new Decision activity and saved it as a version2 service. So now I have 2 endpoints (with identical service contracts, i.e. all Receive activities are the same for both services) and a router that checks the content of a message (a "versionId" string on the object that all of my Receives accept as an argument) to decide which endpoint to hit.
My question is, while this works fine when no changes are made to the service contract, how do I handle the need to add or remove methods from my service contract and create a version3 service? My original thought was, when I add the service reference to my client, I use the latest workflow service's endpoint to get the latest service contract. Then, in the config file, I change the endpoint I connect to so that it points at the router's endpoint. But this won't work if v1 and v2 have a different contract than v3. My proxy will have v3's methods and forget all about v1 and v2.
Any ideas of how to handle this? Should I create an actual service contract interface in my workflow solution (instead of just supplying a ServiceContractName in my Receive activities)?
If the WCF contract changes, your client will need to be aware of the additional operations and when to call them. In some applications I have used the active bookmarks (each of which contains the WCF operation) from the persistence store to have the client app adapt to the workflow dynamically, checking the enabled bookmarks and enabling/disabling UI controls based on that. The client will still have to be updated when new operations are added in a new version of the workflow.
When WCF was young, I heard a few voices arguing that endpoint versioning (for web services, that is) should be accomplished by using a folder structure. I never got to the point of trying it out myself, but just analyzing the consequences of such a strategy, it seems to me a splendid solution. I have no production experience with WCF, but I am about to launch a rather comprehensive solution using version 4.0 of .NET (ASP.NET, WCF, WF...), and at this stage I would argue that using a folder structure to separate versions of endpoints is a good solution.
The essence of such a strategy is to never change or remove the contract of an endpoint (a specific version) until you are 100% sure that it is not used any more. As your services evolve, you just add new contracts and endpoints. This could lead to code duplication if one is not as structured a developer as one should be, but by introducing a service facade the duplication would be insignificant.
I have been through the same situation. You can maintain versions with the help of a custom implementation: save the workflow service URLs in a database and invoke them as desired.
The link below shows how a client can call a WF service by URL.
http://rajeevkumarsrivastava.blogspot.in/2014/08/consume-workflow-service-45-from-client.html
Hope this helps