Generate Multiple Clients for Quarkus - openapi

tl;dr: I want to specify multiple OpenAPI documents and generate a client for each.
I am trying to leverage https://github.com/quarkiverse/quarkus-openapi-generator/ to build clients for multiple systems.
I would like to be able to specify more than one OpenAPI document (ideally from a URL rather than a file in my repo) and generate a client for each system. Is there a way to do this? I am currently able to create and use a single client.
Note: tried to tag quarkiverse, but lack the 'reputation'.

You can place multiple OpenAPI spec files in the openapi directory under src/main/.
or
You can dynamically provide OpenAPI spec files using the OpenApiSpecInputProvider interface (see https://github.com/quarkiverse/quarkus-openapi-generator/#generating-files-via-inputstream). The link in the documentation seems to be broken; here you can find an example implementation.
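Since that link may not resolve, here is a minimal sketch of what such a provider could look like, pulling specs from URLs at build time so they never have to live in your repo. The type and method names (OpenApiSpecInputProvider, SpecInputModel, CodeGenContext) and the service-file registration follow the extension's README; treat the package names and signatures as assumptions to verify against the version of quarkus-openapi-generator you use, and the URLs and file names below are placeholders.

import java.io.IOException;
import java.net.URL;
import java.util.List;

import io.quarkiverse.openapi.generator.deployment.codegen.OpenApiSpecInputProvider;
import io.quarkiverse.openapi.generator.deployment.codegen.SpecInputModel;
import io.quarkus.deployment.CodeGenContext;

// Supplies OpenAPI documents to the code generator at build time,
// one SpecInputModel per client you want generated.
// Register it via a service file:
// META-INF/services/io.quarkiverse.openapi.generator.deployment.codegen.OpenApiSpecInputProvider
public class RemoteSpecsProvider implements OpenApiSpecInputProvider {

    @Override
    public List<SpecInputModel> read(CodeGenContext context) {
        try {
            return List.of(
                    // each entry becomes its own generated client; names/URLs are placeholders
                    new SpecInputModel("billing-api.yaml",
                            new URL("https://example.com/specs/billing-api.yaml").openStream()),
                    new SpecInputModel("inventory-api.yaml",
                            new URL("https://example.com/specs/inventory-api.yaml").openStream()));
        } catch (IOException e) {
            throw new RuntimeException("Unable to download OpenAPI specs", e);
        }
    }
}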

Related

Is the 'compose objects' API for Google Cloud Storage available through the C# client libraries?

I need to upload very large files (approaching hundreds of gigs). I am planning to upload smaller parts (out-of-order and in parallel) and then use the Object Compose API to compose them into one large Object (hierarchically if necessary). However, I cannot see any appropriate API in the C# client libraries (Google.Cloud.Storage.V1 2.4.0-beta03) which would do the same as the JSON API at https://cloud.google.com/storage/docs/json_api/v1/objects/compose. Do I have to use the JSON API?
I'll answer my own question. As of this date, the source code (https://github.com/googleapis/google-cloud-dotnet, commit 1b2de06f70e31382ef4d6de9062b30ce64dfb463) contains no meaningful mention of compose or concatenate except as a possible TODO comment.
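That said, composing parts via the JSON API directly is a single HTTP call, so it can be done even without client-library support. A rough sketch follows (shown in Java simply to illustrate the raw request; the bucket, part names and access token are placeholders, and the request shape should be double-checked against the compose reference linked in the question):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ComposeParts {
    public static void main(String[] args) throws Exception {
        // Placeholders: substitute your bucket, part names, and an OAuth2 access token.
        String bucket = "my-bucket";
        String destination = "large-file.bin";
        String accessToken = "<oauth2-access-token>";

        // Compose request body: up to 32 source objects per call, so very large
        // uploads are composed hierarchically over several passes.
        String body = """
                {
                  "sourceObjects": [
                    {"name": "large-file.bin.part-000"},
                    {"name": "large-file.bin.part-001"}
                  ],
                  "destination": {"contentType": "application/octet-stream"}
                }
                """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://storage.googleapis.com/storage/v1/b/" + bucket
                        + "/o/" + destination + "/compose"))
                .header("Authorization", "Bearer " + accessToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}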

Change CKAN API Interface - are there limitations on the API?

I've looked around the site to see if there are any people who have changed the CKAN API interface so that instead of uploading documents and databases, users can type directly into the site, but I haven't found any use cases.
Currently, we have a page where people upload data sets through Excel forms that they've filled out, but we want to make it a bit more user-friendly by changing the API so that they can fill out a form on the page rather than downloading the template, filling it out and then uploading it.
Does CKAN have the ability to support this? If so, are there any examples or use cases of websites that use forms rather than uploads?
This is certainly possible.
I'm not aware of any existing extensions that provide that functionality, but you can check the official list of CKAN extensions to see if there's anything that fulfills your needs.
If there is no existing extension that suits you then you could write your own, see the extension guide for details on how to do that.
Adding an API function to CKAN's API is possible, but probably not what you want in this case: the web UI usually does not interact with CKAN via the API but via Flask/Pylons controllers. Hence, you would add your own controller, which first serves your form and then processes the submitted inputs.
You can take a look at the ckanext-pages extension, which does exactly that (for editing static pages instead of datasets, but your code would be similar).
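Whichever component ends up serving the form, the submitted values ultimately turn into a package_create call against CKAN's Action API. As a rough illustration of that call over plain HTTP (the site URL, API key and dataset fields below are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateDatasetFromForm {
    public static void main(String[] args) throws Exception {
        // Placeholders: your CKAN site and the API key of a user allowed to create datasets.
        String ckanUrl = "https://ckan.example.org";
        String apiKey = "<api-key>";

        // Minimal package_create payload built from the form fields
        // ("name" must be a unique, URL-friendly identifier).
        String body = """
                {
                  "name": "my-first-dataset",
                  "title": "My first dataset",
                  "notes": "Entered via a web form instead of an Excel upload"
                }
                """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(ckanUrl + "/api/3/action/package_create"))
                .header("Authorization", apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}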

Java interfaces to connect to SharePoint

I have to connect to SharePoint resources using a Java API for tasks like uploading new documents with custom metadata to SP, fetching the document list filtered on that custom metadata, and updating existing documents along with their custom metadata in SP.
Main Actions:
Storing and updating documents with custom metadata
Fetching documents based on document metadata (using custom metadata filtering)
Please highlight some Java APIs that can do the above tasks.
Check the APIs from GitHub below.
Sharepoint Java API
Java Sharepoint REST API
You can also take a look at this project I've developed, where you'll find a working API that provides some common operations to perform against a SharePoint site. It covers all the use cases you want to achieve and more, and it is very easy to use:
https://github.com/kikovalle/PLGSharepointRestAPI-java
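If you would rather not depend on a wrapper library, the underlying SharePoint REST endpoints can also be called directly from Java. Below is a rough sketch of a metadata-filtered query against a document library; the site URL, list title, the "Department" column and the bearer token are placeholders, and how you obtain the token depends on your SharePoint environment (NTLM, app-only OAuth, etc.):

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class SharePointMetadataQuery {
    public static void main(String[] args) throws Exception {
        // Placeholders: your site, document library, custom metadata column, and token.
        String siteUrl = "https://contoso.sharepoint.com/sites/docs";
        String listTitle = "Documents";
        String filter = URLEncoder.encode("Department eq 'Finance'", StandardCharsets.UTF_8);
        String accessToken = "<access-token>";

        // OData query on the library's list items, filtered on a custom metadata column.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(siteUrl + "/_api/web/lists/getbytitle('" + listTitle
                        + "')/items?$select=Title,FileLeafRef&$filter=" + filter))
                .header("Authorization", "Bearer " + accessToken)
                .header("Accept", "application/json;odata=verbose")
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}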

Bluemix Embedded Reports REST architecture

When one wishes to use Bluemix Embedded Reports, one first creates a package and then a report definition. After that, one is supposed to use the REST APIs that are documented using Swagger here:
https://erservice-impl.ng.bluemix.net/ers/swagger-ui/
Unfortunately, I am unable to find any architectural definitions for these APIs. To elaborate on this notion, there are APIs to get connections, packages, definitions, reports, models, datasources and visualizations ... however I am unable to find any documentation describing when I would use what. In addition, some fundamental APIs such as those relating to operations for "reports" seem to want a "reportId", and I am lost on how to retrieve or obtain one of those. Other mysteries are the concept of "What are report links?" and the semantics of obtaining a "report instance". For a report "rendered in a format" ... what are the allowable formats, and when would I use one vs. another?
Again ... the REST API isn't bad and Swagger provides useful syntax documentation, but without the associated semantic comprehension it leaves the reader cold on quite how to use the technology.
I am hoping that there is additional documentation either existing somewhere or else planned for release as soon as practicable. If anyone knows where to find such or has additional information on how to interpret the semantics of the APIs, that would be a fantastic answer to the question.
Some information around the REST API, particularly around running of reports, is available on the documentation page for the service, found here: https://console.ng.bluemix.net/docs/services/EmbeddableReporting/index.html#gettingstartedtemplate
Though the full API is provided in Swagger, users are expected to use only three resources: connection, definitions, and reports. The other endpoints deal with the management of report artifacts and their related resources (datasources, models, packages).
The first step in using ERS is to define datasources and report specifications (definitions) within the admin dashboard. Then, each definition will be given an ID that you can copy/paste into your RESTful calls.
1. Connect to ERS using basic auth and the /connection endpoint. This sends back cookies (including a JSESSIONID) that you are expected to send with all other calls.
POST /connection
with an empty JSON body {} and basic auth headers.
2. Run a report in a particular format (two flavours):
2.1 For 'vanilla' reports with no special options or parameters, you can use the shortcut call, which both creates a report resource and runs it in the format you choose:
GET /definitions/{definition_id}/reports/{format}
where definition_id is taken from the admin dashboard, and format is one of html, phtml (partial html, for embedding; most common), pdf, json, xml, csv.
2.2 For more complex cases, you need to first create a report instance (this holds state for the report being run; you can do a next-page or check parameter values and options). Then you can run the report in a format.
POST /definitions/{definition_id}/reports
with a body containing your options/parameters. You can also send an empty JSON body ({}) for all the defaults. This returns a JSON payload with a reportId and a location to run the report from.
GET /reports/{report_id}/{format}
You might also want to look at the sample that is included in the documentation (in JavaScript, Java and Node) to see how to do this in an app. The documentation mentioned above also has curl examples.
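For example, the connect-then-run flow above looks roughly like this with Java's built-in HTTP client (the base URL, credentials and definition ID are placeholders; check the exact base path against the Swagger doc):

import java.net.CookieManager;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class RunEmbeddedReport {
    public static void main(String[] args) throws Exception {
        // Placeholders: ERS base URL, service credentials, and a definition ID
        // copied from the admin dashboard.
        String baseUrl = "https://erservice-impl.ng.bluemix.net/ers"; // verify the base path
        String credentials = Base64.getEncoder()
                .encodeToString("user:password".getBytes());
        String definitionId = "<definition-id>";

        // The cookie manager keeps the JSESSIONID returned by /connection and
        // replays it on the report call, as described in the steps above.
        HttpClient client = HttpClient.newBuilder()
                .cookieHandler(new CookieManager())
                .build();

        // Step 1: POST /connection with basic auth and an empty JSON body.
        HttpRequest connect = HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/connection"))
                .header("Authorization", "Basic " + credentials)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{}"))
                .build();
        client.send(connect, HttpResponse.BodyHandlers.discarding());

        // Step 2.1: shortcut call that creates and runs the report as partial HTML.
        HttpRequest run = HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/definitions/" + definitionId + "/reports/phtml"))
                .build();
        HttpResponse<String> report = client.send(run, HttpResponse.BodyHandlers.ofString());
        System.out.println(report.body());
    }
}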

Document versioning with MarkLogic REST API

We're currently using MarkLogic's dls functions to handle document versioning, and are trying to switch over to use the REST API. The document endpoint doesn't use versioning by default, and I can't figure out a way to get it to. I'm referring to the dls functions for keeping multiple document versions, btw, not the new "content versioning" the REST API documentation mentions. In fact, the only reference to document versions in the REST API docs seems to be a line saying that content versioning isn't the same thing.
The only solution we've been able to come up with is to write a custom endpoint that duplicates everything the existing document endpoint's PUT does, plus document management. I'd rather avoid that if possible, especially when looking at MarkLogic 7's partial document updates. We're using MarkLogic 6 now, if it matters, but it doesn't look like 7 has any new features related to this.
Is there a way to do this using MarkLogic's existing endpoints?
You can write a REST API extension that automates the DLS operations. See http://docs.marklogic.com/guide/rest-dev/extensions. You will largely end up duplicating a lot of the same things, but this will plug into the existing endpoints.
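Once such an extension is installed, clients reach it through the standard /v1/resources endpoint, so the rest of your REST usage stays unchanged. A hypothetical example call (the extension name "dls-docs" and its rs: parameters are made up; the extension itself would be the XQuery module you write to wrap the dls functions):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class DlsManagedUpdate {
    public static void main(String[] args) throws Exception {
        // Hypothetical resource extension named "dls-docs" wrapping checkout/update/checkin;
        // parameters passed to resource extensions are prefixed with "rs:".
        String host = "http://localhost:8011";
        String docUri = "/reports/2014/q1.xml";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(host + "/v1/resources/dls-docs?rs:uri=" + docUri
                        + "&rs:annotation=monthly-update"))
                .header("Content-Type", "application/xml")
                .PUT(HttpRequest.BodyPublishers.ofFile(Path.of("q1.xml")))
                .build();

        // Authentication (typically digest) against the REST server is omitted for brevity.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}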
Yes, MarkLogic 7 added content versioning to make refreshing of caches easier. And unfortunately, the DLS library hasn't been integrated into the REST API so far. You can file a feature request with support if you like.
In the meantime, the best suggestion I can give is to use a separate route to do document updates using DLS (your current route or a limited custom endpoint that only supports the DLS functions you need for doc updates), and do everything else (as far as possible) using the existing REST API. You can look at this other Stack Overflow question to see how to limit searches to the latest doc versions:
Marklogic REST API search for latest document version
HTH!
A member of MarkLogic has put together a REST extension to provide better DLS support in the REST API. Hopefully that makes working with DLS over the MarkLogic REST API a lot easier:
https://github.com/sanjuthomas/marklogic-dls-rest-extension
HTH!