unable to upload StructureDefinitions when Validation-Requests-Enabled (DSTU3) - hapi-fhir

I am experimenting with the automatic validation feature of the HAPI FHIR server. I am using the hapi-fhir-jpaserver-starter running in a Docker container. For compatibility reasons I am forced to stick with DSTU3 for the moment. The behavior I observe is the following:
If request validation is off (i.e. the environment variable HAPI_FHIR_VALIDATION_REQUESTSENABLED is unset), I can upload ValueSet and StructureDefinition resources. When uploading e.g. Patient or Observation resources, I can use the .../$validate REST call to validate them. This works as expected.
If request validation is on (HAPI_FHIR_VALIDATION_REQUESTSENABLED set to true), then uploading StructureDefinitions that refer to existing ValueSet resources (via binding.valueSetReference) fails with messages like: This context is for FHIR version "DSTU3" but the class "org.hl7.fhir.r4.model.ValueSet" is for version "R4". Validation of uploaded resources like Patient or Observation works as expected: these resources carry a reference to my own StructureDefinitions and are validated against them, and resources with errors are not persisted.
My current workaround is to disable validation and upload the ValueSet and StructureDefinition resources first. After a restart with HAPI_FHIR_VALIDATION_REQUESTSENABLED=true, the server works as expected and correctly validates all resources that are subsequently uploaded.
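A minimal sketch of that upload step (Python with the requests library; the base URL and file names are assumptions based on the starter's defaults, not a definitive recipe):

import json
import requests

# Assumed default endpoint of hapi-fhir-jpaserver-starter; adjust to your container setup.
BASE_URL = "http://localhost:8080/fhir"
HEADERS = {"Content-Type": "application/fhir+json"}

# Hypothetical files holding the conformance resources to upload.
with open("my-valueset.json") as f:
    valueset = json.load(f)
with open("my-structuredefinition.json") as f:
    structure_definition = json.load(f)

# Upload the ValueSet first, then the StructureDefinition that binds to it,
# so the valueSetReference can be resolved once validation is enabled again.
for resource in (valueset, structure_definition):
    url = f"{BASE_URL}/{resource['resourceType']}/{resource['id']}"
    response = requests.put(url, headers=HEADERS, data=json.dumps(resource))
    response.raise_for_status()
    print(response.status_code, url)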
Is there a way to either avoid the errors above or exclude StructureDefinition and ValueSet resources from validation for an individual upload request?
Any help will be appreciated.
-wolfgang

Related

Pulumi DigitalOcean: different name for droplet

I'm creating a droplet in DigitalOcean with Pulumi. I have the following code:
name = "server"
droplet = digitalocean.Droplet(
name,
image=_image,
region=_region,
size=_size,
)
The server gets created successfully on DigitalOcean but the name in the DigitalOcean console is something like server-0bbc405 (upon each execution, it's a different name).
Why isn't it just the name I provided? How can I achieve that?
This is a result of auto-naming, which is explained here in the Pulumi docs:
https://www.pulumi.com/docs/intro/concepts/resources/names/#autonaming
The extra characters tacked onto the end of the resource name allow you to use the same "logical" name (your "server") across multiple stacks without risk of a collision (as cloud providers often require resources of the same kind to be named uniquely). Auto-naming looks a bit strange at first, but it's incredibly useful in practice, and once you start working with multiple stacks, you'll almost surely appreciate it.
That said, you can generally override this name by providing a name in your list of resource arguments:
...
name = "server"
droplet = digitalocean.Droplet(
name,
name="my-name-override", # <-- Override auto-naming
image="ubuntu-18-04-x64",
region="nyc2",
size="s-1vcpu-1gb")
...which would yield the following result:
+ pulumi:pulumi:Stack: (create)
    ...
    + digitalocean:index/droplet:Droplet: (create)
        ...
        name : "my-name-override"  # <-- As opposed to "server-0bbc405"
        ...
...but again, it's usually best to go with auto-naming for the reasons specified in the docs. Quoting here:
It ensures that two stacks for the same project can be deployed without their resources colliding. The suffix helps you to create multiple instances of your project more easily, whether because you want, for example, many development or testing stacks, or to scale to new regions.
It allows Pulumi to do zero-downtime resource updates. Due to the way some cloud providers work, certain updates require replacing resources rather than updating them in place. By default, Pulumi creates replacements first, then updates the existing references to them, and finally deletes the old resources.
Hope it helps!

How to introduce versioning for endpoints for akka http

I have 5 controllers in akka-http, and each controller has 5 endpoints (routes). Now I need to introduce versioning for those: all endpoints should be prefixed with /version1.
For example, if there was an endpoint /xyz, it should now be /version1/xyz.
One way is to add a pathPrefix, but it would need to be added to each controller.
Is there a way to add it in one common place so that it applies to all endpoints?
I am using akka-http with Scala.
You can create a base route that accepts paths like /version1/... and delegates to the internal routes, which carry no path prefix themselves.
val version1Route = path("xyz") {
  ...
}

val version2Route = path("xyz") {
  ...
}

val route = pathPrefix("version1") {
  version1Route
} ~ pathPrefix("version2") {
  version2Route
}
Indirect Answer
Aleksey Isachenkov's answer is the correct direct solution.
One alternative is to put versioning in the hostname instead of the path. Once you have "version1" of your Route values in source control, you can tag that check-in as "version1", deploy it into production, and then use DNS entries to set the service name to version1.myservice.com.
Then, once newer functionality becomes necessary, you update your code and tag it in source control as "version2". Release this updated build and use DNS to set the name as version2.myservice.com, while still keeping the version1 instance running. This results in two active services running independently.
The benefits of this method are:
Your code does not continuously grow longer as new versions are released.
You can use logging to figure out whether a version hasn't been used in a long time, and then just kill that running instance of the service to end-of-life that version.
You can use DNS to define your current "production" version by having production.myservice.com point to whichever version of the service you want. For example: once you've released version24.myservice.com and tested it for a while you can update the production.myservice.com pointer to go to 24 from 23. The old version can stay running for any users that don't want to upgrade, but anybody who wants the latest version can always use "production".

BUG: VSTS Release definition Rest API PUT call removes phases

I am trying to GET a release definition (RD) and then call a PUT operation on the release object after updating some variables in it.
The PUT operation is successful and the variables get updated in the RD, but all the other phases in the environment get removed, except the first phase.
My RD has only one environment; I have not tried this operation with more than one environment.
Please suggest how I can update the RD through a REST call without losing data.
URLs tried for GET:
The URL below does not return Deployphases, but the PUT is successful with the phases deleted:
https://xxxxxxx.vsrm.visualstudio.com/xxxxxxx/_apis/Release/definitions/2016?api-version=4.1-preview.1
The URL below does return Deployphases, but the PUT fails with an error saying that Deployphases should not be used and the Deploy step should be used instead:
https://xxxxxxx.vsrm.visualstudio.com/xxxxxxx/_apis/Release/definitions/2016
URLs tried for PUT:
The behavior is the same for both URLs:
https://xxxxxxxx.vsrm.visualstudio.com/xxxxxxx/_apis/Release/definitions?api-version=4.1-preview.1
https://xxxxxxxx.vsrm.visualstudio.com/xxxxxxx/_apis/Release/definitions/2016?api-version=4.1-preview.1
It's not a bug; you should use api-version=4.0-preview.3:
https://xxxx.vsrm.visualstudio.com/xxxx/_apis/Release/definitions?api-version=4.0-preview.3
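A rough sketch of that get-update-put round trip (Python with the requests library; the organization, project, token, definition id, and variable name are placeholders, not values from the question):

import requests

# Placeholders: fill in your own account, project, PAT, and definition id.
ORG = "xxxxxxx"
PROJECT = "xxxxxxx"
PAT = "<personal-access-token>"
DEFINITION_ID = 2016

base = f"https://{ORG}.vsrm.visualstudio.com/{PROJECT}/_apis/Release/definitions"
auth = ("", PAT)                          # basic auth: empty user name, PAT as password
params = {"api-version": "4.0-preview.3"}

# GET the complete definition, including every phase of each environment.
definition = requests.get(f"{base}/{DEFINITION_ID}", auth=auth, params=params).json()

# Update a variable in place; everything else in the definition is sent back
# unchanged, so no phases should be lost on the PUT.
definition["variables"]["MyVariable"]["value"] = "new-value"   # hypothetical variable

response = requests.put(base, auth=auth, params=params, json=definition)
response.raise_for_status()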

Unable to run experiment on Azure ML Studio after copying from different workspace

My simple experiment reads from an Azure Storage table, selects a few columns, and writes to another Azure Storage table. This experiment runs fine in the workspace (let's call it Workspace1).
Now I need to move this experiment as-is to another workspace (call it Workspace2) using PowerShell and need to be able to run the experiment there.
I am currently using this library: https://github.com/hning86/azuremlps
Problem:
When I copy the experiment using 'Copy-AmlExperiment' from Workspace1 to Workspace2, the experiment and all its properties get copied except the Azure Table account key.
Now, this experiment runs fine if I manually enter the account key for the Import/Export modules on studio.azureml.net.
But I am unable to do this via PowerShell. If I export (Export-AmlExperimentGraph) the copied experiment from Workspace2 as JSON, insert the AccountKey into the JSON file, and import it (Import-AmlExperiment) back into Workspace2, the experiment fails to run.
In PowerShell I get an "Internal Server Error: 500".
While running on studio.azureml.net, I get the notification: "Your experiment cannot be run because it has been updated in another session. Please re-open this experiment to see the latest version."
Is there any way to move an experiment with external dependencies to another workspace and run it?
Edit: I think the problem has something to do with how the experiment handles the AccountKey. When I enter it manually, it is converted into a JSON array comprising RecordKey and IndexInRecord. But when I upload the JSON experiment with the AccountKey, it remains unchanged and does not get resolved into RecordKey and IndexInRecord.
For me, publishing the experiment as a private experiment in the Cortana gallery is one of the most useful options. Only people with the link can see and add the experiment from the gallery. In the link below I've explained the steps I followed.
https://naadispeaks.wordpress.com/2017/08/14/copying-migrating-azureml-experiments/
When the experiment is copied, the password is wiped for security reasons. If you want to programmatically inject it back, you have to set another metadata field to signal that what you are setting is a plain-text password, not an encrypted one. If you export the experiment in JSON format, you can easily figure this out.
I think I found the reason why you are unable to get the credentials back in.
Export the JSON graph into your local disk, then update whatever parameter has to be updated.
Also, you will notice that the credentials are stored as 'Placeholders' instead of 'Literals', so it makes sense to change them to Literals.
You can do this by traversing the JSON to find the relevant parameters you need to update.
Here is a brief illustration of changing the Placeholder to a Literal.
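As a small additional aid (a sketch, not a definitive fix), here is a Python snippet that walks the exported JSON and prints the path of every value equal to "Placeholder", so you can see which entries need to become Literals. The file name is an assumption, and the exact field names to edit depend on your exported graph:

import json

def find_value(node, target, path="$"):
    """Yield the JSON path of every value equal to `target`."""
    if isinstance(node, dict):
        for key, value in node.items():
            yield from find_value(value, target, f"{path}.{key}")
    elif isinstance(node, list):
        for index, value in enumerate(node):
            yield from find_value(value, target, f"{path}[{index}]")
    elif node == target:
        yield path

# Assumed file name: the graph exported with Export-AmlExperimentGraph.
with open("experiment.json") as f:
    graph = json.load(f)

for location in find_value(graph, "Placeholder"):
    print(location)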

Cleanup for running spec files in series in Protractor

I am running multiple specs using a Protractor configuration file as follows:
...
specs: ['abc.js', 'xyz.js'],
...
After abc.js has finished, I want to reset my app to an initial state from which the next spec, xyz.js, can kick off.
Is there a well defined way of doing so in Protractor? I'm using Jasmine as a test framework.
You can use something like this:
specs: ['*.js']
But I recommend distinguishing the spec files with a suffix, such as abc-spec.js and xyz-spec.js. Your specs entry would then look like this:
specs: ['*-spec.js']
This avoids the config file itself being 'run'/tested if you put it in the same folder as your tests/spec files.
There is also a downside: the tests will run in 0 -> 9 and A -> Z order, e.g. abc-spec.js will run first, then xyz-spec.js. If you want to define a custom execution order, you can prefix your spec files' names, for instance 00-xyz-spec.js and 01-abc-spec.js.
To restart the app, sadly there is no built-in way (source), so you need to work around it. Use something like
browser.get('http://localhost:3030/');
browser.waitForAngular();
whenever you need to reload your app. It will force the page to be reloaded. But if your app uses cookies, you will also need to clear them in order to reset the app completely.
I used a different approach and it worked for me. Inside my first spec I add a logout test case which logs out of the app and, on reaching the login page, clears the cookies before logging in again, using the following:
browser.driver.manage().deleteAllCookies();
The flag named restartBrowserBetweenTests can also be specified in a configuration file. However, this comes with a valid warning from the Protractor team:
// If [set to] true, protractor will restart the browser between each test.
// CAUTION: This will cause your tests to slow down drastically.
If the speed penalty is of no concern, this could help.
If the above doesn't help and you absolutely want to be sure that the state of the app (and browser!) is clean between specs, you need to roll your own script which gathers all your *_spec.js files and calls protractor --specs [currentSpec from a spec list/test suite].
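For instance, a small sketch of such a wrapper in Python rather than a shell script (the config file name and spec locations are assumptions; adjust them to your project layout):

import glob
import subprocess
import sys

CONFIG = "protractor.conf.js"                  # assumed name of your Protractor config
specs = sorted(glob.glob("specs/*_spec.js"))   # assumed location of the spec files

# Run each spec file in its own Protractor (and therefore browser) session,
# so no state can leak from one spec file to the next.
for spec in specs:
    print(f"Running {spec} in a fresh session...")
    result = subprocess.run(["protractor", CONFIG, "--specs", spec])
    if result.returncode != 0:
        sys.exit(result.returncode)            # stop on the first failing spec file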