Springdoc (Swagger) grouping configuration behind a proxy - Kubernetes

I'm using the newest springdoc library to create one common endpoint with all Swagger configurations in one place. There are a bunch of microservices deployed in Kubernetes, so having the documentation in one place would be convenient. The easiest way to do that is by using something like this (https://springdoc.org/faq.html#how-can-i-define-groups-using-applicationyml):
springdoc:
  api-docs:
    enabled: true
  swagger-ui:
    disable-swagger-default-url: true
    urls:
      - name: one-service
        url: 'http://one.server/v3/api-docs'
      - name: second-service
        url: 'http://second.server/v3/api-docs'
and it works great; I can pick a service from the list in the upper-right corner.
The problem is that it doesn't work through a proxy. According to the documentation I need to set some headers (https://springdoc.org/faq.html#how-can-i-deploy-springdoc-openapi-ui-behind-a-reverse-proxy), and that works for a single service called directly. But when I try the grouping described above, the headers are not passed on to one-service or second-service, and they generate documentation pointing to localhost.
I suspect I need to use grouping (https://springdoc.org/faq.html#how-can-i-define-multiple-openapi-definitions-in-one-spring-boot-project), but I'm missing a good example of how to achieve a similar effect (grouping documentation from different microservices). The examples show only one external address, or grouping of local endpoints. I hope that, using this approach, I'll be able to pass the headers.

The springdoc.swagger-ui.urls.* properties are meant to reference external /v3/api-docs URLs, for example if you want to aggregate all the endpoints of other services inside one single application.
This will not inherit the proxy configuration; it will use the server URLs defined in each service's api-docs (http://one.server/v3/api-docs and http://second.server/v3/api-docs).
If you want a proxy in front of your services, it is up to each service to expose the correct server URLs.
If you want it to work out of the box, you can use a solution that handles proxying and routing, such as spring-cloud-gateway.
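For example, a minimal sketch of the per-service side (assuming the proxy sends the standard X-Forwarded-* headers, as in the springdoc FAQ entry linked above) is to enable forwarded-header processing in each microservice, so its generated api-docs advertise the external URL instead of localhost:
# in each microservice's application.yml; assumes the proxy sets
# X-Forwarded-Host/X-Forwarded-Proto on the way in
server:
  forward-headers-strategy: framework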

Transfer client configuration between environments

For securing a frontend application, I created a new Keycloak client with a custom configuration:
mapper which includes "client roles"
scope configuration
client-specific roles (composite and non-composite roles)
This works fine in the local development setup. Now we need to transfer this configuration to the other environments, like the develop/preproduction/production stages.
As far as I understand, Keycloak offers the following exports:
Complete realm
Specific client
It looks as if both approaches have some major drawbacks. Either I would need to overwrite the complete realm (which I definitely don't want to do in production), or I can import the basic client configuration, which is missing all the roles.
And as soon as we add more roles later on, for example, we would need to re-configure all stages manually.
Is there some "good practice" for dealing with that? Does Keycloak offer some kind of "sync" between stages?
This is a hard question to answer.
It comes down to comparing API calls vs. UI configuration.
Disadvantages of API calls: I prefer API calls, but it takes time to figure out the right API functions; the call order matters; some properties missing in a parent have to be set in detail on a child; the API URL paths have a complicated structure (e.g. id/property/id/property); and it requires deeper knowledge of Keycloak.
Advantages of API calls: finer tuning; fast; easy to organize from top to bottom (e.g. configure the client, auth resources, auth scopes, policies and permissions in the other environment); and you can transfer 100% of the configuration.
Disadvantages of UI configuration: not flexible; mismatched IDs cause errors; you can't update/add partial data (e.g. a client's resource exported without its scopes has to be set by a separate API call); you can't move 100% of the configuration from the source to the target environment; and it is prone to human error.
Advantages of UI configuration: easy and quick, even if manual.
My preference is API calls: using Postman (a single API call, or running a collection for a sequence of API calls; at the local and develop stages you can run simple unit tests and check HTTP statuses) and curl calls from a Bash shell for the higher stages. If you check the state of the target first, you can handle scenario-based transfers (e.g. if something is already configured, skip it).
One more tip: if you open the debug tools with F12 in Chrome or Firefox, you can see the admin API calls in the Network tab. This saves time figuring out the API methods and the payload/response JSON.
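For illustration, a rough curl sketch of that transfer (the host names, realm, client and credentials are placeholders, and the paths assume a newer Keycloak without the /auth prefix; the endpoints are from the Keycloak Admin REST API):
# 1. get an admin token from the source environment (response JSON contains access_token)
curl -s -X POST "https://keycloak-dev.example.com/realms/master/protocol/openid-connect/token" \
  -d "grant_type=password&client_id=admin-cli&username=admin&password=secret"
# 2. export the client representation by its clientId (returns a JSON array)
curl -s -H "Authorization: Bearer $SRC_TOKEN" \
  "https://keycloak-dev.example.com/admin/realms/myrealm/clients?clientId=my-frontend" > client.json
# 3. create it in the target environment (take the single object out of the array
#    and strip the generated "id" first; client roles need separate calls to
#    /admin/realms/myrealm/clients/{id}/roles)
curl -s -X POST -H "Authorization: Bearer $DST_TOKEN" -H "Content-Type: application/json" \
  -d @client.json "https://keycloak-prod.example.com/admin/realms/myrealm/clients"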

Assign names to applications without Service Fabric

I have an application in Service Fabric and I'm going to deploy another one.
I wonder if it's possible to assign different names to each application.
With one application, I access it using the address:
http://sf-spartan.eastus.cloudapp.azure.com
Can I configure access to look like this?
http://application1.sf-spartan.eastus.cloudapp.azure.com
or
http://sf-spartan.eastus.cloudapp.azure.com/application1
Sure, have a look here. Use the ApplicationName argument to define it.
Every application instance you create must in fact have a unique name.
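For illustration, a hypothetical PowerShell deployment that sets the name explicitly (the type name and version are placeholders):
New-ServiceFabricApplication -ApplicationName fabric:/application1 -ApplicationTypeName Application1Type -ApplicationTypeVersion 1.0.0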
You can reach your application instance through its url by using a reverse proxy. (either the built-in one, or a custom one like Traefik)
Usually, the application and service name are part of the url, e.g.:
http://mycluster.eastus.cloudapp.azure.com:19081/MyApp/MyService
This does require a web based communication listener.
Even more info here.

Routing using OSRM for multiple profiles - does profile in the URL actually do anything?

OSRM comes with 3 profiles for different modes of transport: cycle, foot and car.
According to the following post which was made 1 year ago, OSRM does not support multiple profiles:
OSM routing (OSRM): do I need to duplicate all data for different profiles?
Yet in the official documentation there is a profile argument as part of the URL called for retrieving a route from a running OSRM instance:
http://project-osrm.org/docs/v5.6.4/api/#general-options
The path would look something like this:
http://router.project-osrm.org/route/v1/driving/
Without driving, foot or cycle in the URL, a route won't be retrieved, so one of them is required by the API. Yet if I compile a route for car on the server but then use /foot/ in the URL, it still retrieves a car-based route, completely ignoring 'foot'.
Could anybody from OSRM explain why something as useful as multiple-profile support has been withdrawn, and what the point of driving is in the above URL, seeing as it is ignored anyway and the route just uses the profile attached to the running OSRM instance?
The solution to the problem of multiple profiles appears to be to host parallel copies of the routing machine, one per profile, on different IPs. So again, what is the point of 'profile' in the URL?
Could anybody from OSRM explain why something as useful as multiple profile support has been withdrawn
The support has never been there. You will need to run separate OSRM instances for each profile.
The URL option is merely there to make it easier to stick an nginx in front of your OSRM instances and distribute requests to the correct instance based on the profile string.
We might implement multiple profiles in the same OSRM instance in the future, but this is still far out.
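For example, a hypothetical nginx fragment along those lines (the ports and the three-instance layout are made up):
# inside a server { } block: route each profile string to its own OSRM instance
location /route/v1/driving/ { proxy_pass http://127.0.0.1:5000; }
location /route/v1/foot/    { proxy_pass http://127.0.0.1:5001; }
location /route/v1/cycle/   { proxy_pass http://127.0.0.1:5002; }
Since proxy_pass without a URI part forwards the original path unchanged, each instance still receives the full /route/v1/<profile>/ request, which is fine given that each instance ignores the profile segment anyway.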

Zuul Hystrix stream without using serviceId

I have a few Zuul routes defined and everything works great. The Hystrix stream, however, is empty, and according to this, the reason is that I am not using a serviceId.
So I would like some help on how I could work around this. I do not have Eureka (and do not wish to start using it for this simple app). Is there some way to get the Hystrix stream with a url instead of a serviceId?
Any help will be much appreciated.
Here are example routes I have configured. The URL placeholders come from my profile specific configs.
zuul.routes.v1stores.path=/v1/stores
zuul.routes.v1stores.url=${target.url}
zuul.routes.v1order.path=/v1/order/**
zuul.routes.v1order.url=${target.url}/v1/order
Currently, using the url attribute sets up Zuul to NOT use Hystrix. You need to use Ribbon to access the Hystrix functionality in Zuul. To do so, you could do something like this (see the docs):
zuul.routes.v1order.path=/v1/order/**
zuul.routes.v1order.serviceId=v1order
v1order.ribbon.NIWSServerListClassName=com.netflix.loadbalancer.ConfigurationBasedServerList
v1order.ribbon.listOfServers=${target.url}
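One extra line worth noting: the equivalent example in the Spring Cloud docs for using Ribbon without Eureka also disables Ribbon's Eureka support:
ribbon.eureka.enabled=false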

Loopback.io and CouchDB connector

I am exploring the opportunity to build a CouchDB connector for Loopback.io.
I know CouchDB has a REST interface, but for some reason, when I put the baseURL of my local CouchDB server into a REST connector in LoopBack, I get an error back about some headers missing in the request from CouchDB.
Since some useful functions could be added to exploit views and so on, I am exploring creating loopback-connector-couchdb.
So the easy question is: what methods does a connector need to implement to map exactly to the standard API endpoints offered by Loopback.io for a model?
Basic example:
POST /models (with payload body) --> all good on the "create" function of the connector
DELETE /models/{id} --> I get an error saying that the destroyAll function is NOT implemented (correct) but the destroy function IS implemented instead...
what is the difference between HEAD /models/{id} and GET /models/{id}/exists in terms of the functions called?
I try to verify the existence of the model created (successfully) in CouchDB via ID and use GET /models/{id}/exists, and instead of the connector's "exists" function being called, a function called "count" is called.
It is as if some but not all functions are mapped to the connector (note, I am not using the DataAccessObject property of the connector, as that seems to be more for additional methods, so to speak... and one of the methods does work!)
...I am confused!
Thanks for any guidance. I am trying to follow this, but I can't easily map the standard API endpoints to the minimum functions of the connector (see point 2 above, for instance):
Building a connector - Loopback.io documentation
I would suggest playing with the API explorer to figure out your endpoints.
Create a sample LoopBack project via slc loopback
Create some models via slc loopback:model
Start the app via slc run
Browse to localhost:3000/explorer
In there you can see all the endpoints that are automatically generated by LoopBack. For example, if you click the GET endpoint for a model, it will show the query as GET /api/<modelname>.
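To your point 2 about mapping endpoints to connector methods, here is a hypothetical TypeScript-flavored skeleton. The method names follow loopback-datasource-juggler's built-in connectors, but treat the exact signatures as assumptions to verify against your LoopBack version; the comments reflect the behavior you observed (exists is built on count, and DELETE /models/{id} goes through destroyAll with an id filter):
// hypothetical connector skeleton; signatures mirror the juggler memory
// connector, but verify them against your LoopBack version
export function initialize(dataSource: any, callback?: () => void): void {
  dataSource.connector = new CouchConnector(dataSource.settings);
  if (callback) callback();
}

class CouchConnector {
  constructor(private settings: any) {}

  // POST /models                     -> create
  create(model: string, data: any, cb: (err: Error | null, id?: string) => void): void { /* ... */ }

  // GET /models and GET /models/{id} -> all (findById becomes an id filter)
  all(model: string, filter: any, cb: (err: Error | null, results?: any[]) => void): void { /* ... */ }

  // GET /models/{id}/exists and HEAD /models/{id} -> count
  // (exists is implemented on top of count, which is why you saw count called;
  //  note the legacy argument order, with "where" last)
  count(model: string, cb: (err: Error | null, n?: number) => void, where?: any): void { /* ... */ }

  // DELETE /models/{id}              -> destroyAll with an id in "where"
  destroyAll(model: string, where: any, cb: (err: Error | null) => void): void { /* ... */ }

  // PUT /models/{id}                 -> updateAttributes
  updateAttributes(model: string, id: string, data: any, cb: (err: Error | null) => void): void { /* ... */ }
}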