What is the purpose of two separate META files after building a distribution? - perl

When building a new distribution, two meta files are generated: one in YAML format and the other in JSON. As far as I know, these are only used by CPAN clients or other applications that want access to a meta file (for whatever reason). I'm trying to work out why an app would need access to both...
Are these two formats generated separately purely for convenience? I.e., Developer 1 prefers JSON and therefore codes his apps to read the distribution's META.json file, while Developer 2 hates JSON and would rather reach for the YAML version?
Or is there some technical reason both would be needed by a single client/app that I'm overlooking?

The reason is that there have been two versions of the CPAN meta spec, with the more recent version specifying JSON instead of YAML. The YAML files are kept around in order to maintain compatibility with older tools that expect them, but any future metadata features will be added to the JSON version.
David Golden has some discussion of the change from YAML to JSON in his blog post announcing version 2 of the spec.

Related

Generate Multiple Clients for Quarkus

tl;dr: I want to specify OpenAPI documents and create multiple clients.
I am trying to leverage https://github.com/quarkiverse/quarkus-openapi-generator/ to build clients for multiple systems.
I would like to be able to specify more than one OpenAPI document (ideally from a URL, not within my repo) to generate clients for each system. Is there a way to do this? I am currently able to create a single client and use it.
Note: tried to tag quarkiverse, but lack the 'reputation'.
You can place multiple OpenAPI spec files in the openapi directory under src/main/ (see the configuration sketch after this answer).
or
You can dynamically provide OpenAPI spec files using the OpenApiSpecInputProvider interface (see https://github.com/quarkiverse/quarkus-openapi-generator/#generating-files-via-inputstream). The link in the documentation seems to be broken; here you can find an example implementation.
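To make the first approach concrete, here is a minimal configuration sketch. The spec file names and base packages below are hypothetical, and the property pattern (characters such as dots in the file name replaced by underscores) follows the extension's README, so verify it against the version you are using:

src/main/openapi/petstore.json
src/main/openapi/inventory.yaml

# application.properties: one generated client per spec file
quarkus.openapi-generator.codegen.spec.petstore_json.base-package=org.acme.petstore
quarkus.openapi-generator.codegen.spec.inventory_yaml.base-package=org.acme.inventory

Each spec then gets its own generated client under its own base package. Pulling specs from a URL at build time, rather than keeping them in the repo, is exactly what the OpenApiSpecInputProvider hook mentioned above is for.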

Change version of REST API instance in MarkLogic

When creating a REST API instance (an application), a version (appearing as a prefix) then has to be included in the URL when calling it.
Is there a way to manage several versions of an API at the same time? Are we able to change the version number, and how is it changed?
The only link I have found is https://docs.marklogic.com/guide/rest-dev/intro#id_64988, but it is not very clear to me.
Thank you for your help.
As the link says, "The version number is only updated when resource addresses and/or parameters have changed. It is not updated when resource addresses and/or parameters are added or removed."
In other words, the REST API will increment the version if it ever becomes necessary to rename or restructure the addresses of resources. Ideally, that will never need to happen. If incrementing becomes necessary, the goal will be to maintain a deprecated interface at the old address for one release, if possible.
In addition to David's good suggestion, you could also build your own version numbers into the name of the resource service extension if it's better to support multiple versions of an extension in a single modules database.
If the goal is to have versions of your REST extensions and use the version number in that process, then I think you could deploy multiple sets of your code in different modules databases (one per version), dynamically switch modules databases based on the version, and then rewrite the URL so it plays well with MarkLogic's REST API.
http://developer.marklogic.com/features/enhanced-http
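To make the extension-name approach concrete, here is a minimal Java sketch. The host, port, and extension names (invoice-v1, invoice-v2) are hypothetical, and authentication is omitted (MarkLogic REST instances typically use digest auth); the point is that both versions coexist behind the stable /v1/ REST prefix:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class VersionedExtensionCall {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Both versions of the resource extension are installed side by
        // side; the version is carried in the extension name, not in the
        // REST API prefix, which stays at /v1/.
        for (String ext : new String[]{"invoice-v1", "invoice-v2"}) {
            HttpRequest req = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8003/v1/resources/" + ext + "?rs:id=123"))
                    .GET()
                    .build();
            HttpResponse<String> resp =
                    client.send(req, HttpResponse.BodyHandlers.ofString());
            System.out.println(ext + " -> " + resp.statusCode());
        }
    }
}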

How can I deploy form / subform (i.e. display only) changes on Notes databases?

I have been asked by a client to assist in making the web frontends of a number of Lotus / IBM Notes databases, used for critical LOB functions, compatible with modern browsers.
As it stands, the web frontends of these databases only work in IE7, and even then they're temperamental at best. The JS uses IE-specific extensions, everything is in tables, and they render poorly on pretty much every browser available today. With IE7 no longer in support, they want to modernise these interfaces.
I have very little experience with Notes, but as an exploratory exercise I've managed to open up the databases in Domino Designer, add a few Stylesheet / Script resources, include them in the $$HTMLHead variable, and rework one Form to use a frontend framework, which looks good.
Obviously working on live applications is out of the question, so my thinking is to take a copy of the NSF files, and make the changes on the copies. My question is: how can I then deploy only the form / subform / resource changes to the 'live' NSF files?
Deployment:
In your new modified database:
In the Database properties, specify that the database file is a master template (give it a name).
In the production database:
First, make a backup! Copy (design only) to a new copy of the prod database.
In the Database properties, specify that it inherits its design from a master template (same name).
On the prod database, run a design refresh.
More details: https://www.ibm.com/support/knowledgecenter/SSVRGU_9.0.1/com.ibm.designer.domino.main.doc/H_ABOUT_REFRESHING_A_DESIGN.html
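If you prefer to push the refresh from the server rather than through the Designer/Notes client menus, the Domino Design server task performs the same template-to-database refresh. A minimal sketch (run at the server console; which databases are covered depends on your server configuration):

load design

The Design task also typically runs on the server's overnight schedule, so a refreshed template propagates to inheriting production databases on the next run.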
Sorry to state the obvious, but since you have a Notes client and a Domino server, you have quite extensive documentation at your disposal in the form of databases located in the /help/ directory. Make sure they are full-text-indexed.
And since we are on the subject of templates, Domino comes with a host of ready-made, ready-to-use apps that you can customize and cannibalize. Look for discussion9.ntf for starters.
You may want to start here, then go there, and finally that will give you the keys to build world-class web apps on Domino.
Last thing: if you are on V9, the Designer help is crap. Grab a copy of the 8.5 version. Seriously.
If you want to build a modern web based front-end to existing Domino data, take a look at the following presentations:
http://www.slideshare.net/TexasSwede/ad102-break-out-of-the-box
and
http://www.slideshare.net/TexasSwede/break-out-of-the-box-part-2
As others already said, you should create a template and then just refresh/replace the design of the production database using that template.
You may want to consider working with an experienced Notes/Domino developer for that project; there are quite a few caveats and workarounds you need to know about...

Bluemix Embedded Reports REST architecture

When one wishes to use Bluemix Embedded Reports, one first creates a package and then a report definition. After that, one is supposed to use the REST APIs that are documented using Swagger here:
https://erservice-impl.ng.bluemix.net/ers/swagger-ui/
Unfortunately, I am unable to find any architectural documentation for these APIs. To elaborate: there are APIs to get connections, packages, definitions, reports, models, datasources, and visualizations ... however, I am unable to find any documentation describing when I would use what. In addition, some fundamental APIs, such as the operations for "reports", seem to want a "reportId", and I am lost on how to obtain one of those. Other mysteries are the concept of "What are report links?" and the semantics of obtaining a "report instance". For a report "rendered in a format" ... what are the allowable formats, and when would I use one versus another?
Again ... the REST API isn't bad, and Swagger provides useful syntax documentation, but without the associated semantics it leaves the reader cold on exactly how to use the technology.
I am hoping that there is additional documentation either existing somewhere or else planned for release as soon as practicable. If anyone knows where to find such or has additional information on how to interpret the semantics of the APIs, that would be a fantastic answer to the question.
Some information around the REST API, particularly around running of reports, is available on the documentation page for the service, found here: https://console.ng.bluemix.net/docs/services/EmbeddableReporting/index.html#gettingstartedtemplate
Though the full API is provided in Swagger, users are expected to use only three resources: connection, definitions, and reports. The other endpoints deal with the management of report artifacts and their related resources (datasources, models, packages).
The first step in using ERS is to define datasources and report specifications (definitions) within the admin dashboard. Then, each definition will be given an ID that you can copy/paste into your RESTful calls.
1. Connect to ERS using basic auth and the /connection endpoint. This sends back cookies (including a JSESSIONID) that you are expected to send with all other calls.
POST /connection
with an empty JSON body {} and basic auth headers
2. Run a report in a particular format (two flavours):
2.1 For 'vanilla' reports with no special options or parameters, you can use the shortcut call, which both creates a report resource and runs it in the format you choose:
GET /definitions/{definition_id}/reports/{format}
where definition_id is taken from the admin dashboard, and format is one of html, phtml (partial HTML, for embedding; the most common), pdf, json, xml, or csv.
2.2 For more complex cases, you first need to create a report instance (this holds state for the report being run; you can do a next-page, or check parameter values and options). Then you can run the report in a format.
POST /definitions/{definition_id}/reports
with a body containing your options/parameters. You can also send an empty JSON body ({}) for all the defaults. This returns a JSON payload with a reportId and a location to run the report from.
GET /reports/{report_id}/{format}
You might also want to look at the sample included in the documentation (in JavaScript, Java, and Node) to see how to do this in an app. The documentation mentioned above also has curl examples.
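Putting the pieces together, here is a minimal Java sketch of the shortcut flavour (2.1). The credentials and definition ID are placeholders to be taken from your own service instance and admin dashboard, and the base path (/ers/v1) is an assumption; check the Swagger UI above for the actual root:

import java.net.CookieManager;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ErsQuickStart {
    // Placeholders: substitute your service credentials and a definition
    // ID copied from the ERS admin dashboard. BASE is an assumed root.
    static final String BASE = "https://erservice-impl.ng.bluemix.net/ers/v1";
    static final String AUTH = "Basic " + Base64.getEncoder()
            .encodeToString("user:password".getBytes(StandardCharsets.UTF_8));

    public static void main(String[] args) throws Exception {
        // A cookie manager retains the JSESSIONID returned by /connection
        // so it is sent automatically on every subsequent call.
        HttpClient client = HttpClient.newBuilder()
                .cookieHandler(new CookieManager())
                .build();

        // Step 1: authenticate once, with an empty JSON body.
        HttpRequest connect = HttpRequest.newBuilder()
                .uri(URI.create(BASE + "/connection"))
                .header("Authorization", AUTH)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{}"))
                .build();
        System.out.println("connection: "
                + client.send(connect, HttpResponse.BodyHandlers.ofString()).statusCode());

        // Step 2 (shortcut): create and run the report in one call, asking
        // for phtml (partial HTML, suitable for embedding).
        HttpRequest run = HttpRequest.newBuilder()
                .uri(URI.create(BASE + "/definitions/DEFINITION_ID/reports/phtml"))
                .GET()
                .build();
        System.out.println(client.send(run, HttpResponse.BodyHandlers.ofString()).body());
    }
}

For the stateful flavour (2.2), you would instead POST to /definitions/{definition_id}/reports, read the reportId out of the JSON response, and then GET /reports/{report_id}/{format}.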

CCDA to FHIR xml

Is it possible to convert a complete C-CDA XML document to a FHIR-based XML? I would like to convert a complete C-CDA XML document to FHIR-compatible XML through the Mirth Connect interface.
I would like to have sample messages that show how a complete C-CDA is transformed to FHIR-based XML. I googled and ended up with no answers. It would be great if you guys could help me.
Strictly speaking, C-CDA is Consolidated CDA. It is an IG, an Implementation Guide.
In simple terms, there are various IGs for generating a CDA document; HITSP/C83, for one, is an example, and there are several others. The main problem with all these separate IGs is that they are not uniform. C-CDA was created to bring uniformity to the data. This presentation here is a good place to start. Basically, it says you need to have at least four mandatory sections in your CCD, with the rest of the sections optional. It entirely depends on your use case.
Secondly, you need to download a copy of a valid C-CDA file from this site. Let's take the inpatient summary document.
That will be your target document; consider it a template.
Third, you need to tell your engineering team, or if you are the developer yourself, you need to build the logic to extract information and place it into that template. This is an iterative process: every time, you need to validate the document you develop against the validator (the site given above).
Until the validator reports 0 errors, your document is not ready.
So, there is no ready-made code or logic that you can just plug in and play to start producing C-CDA documents.