How to introduce versioning for endpoints for akka http - scala

I have 5 controllers in akka-http, and each controller has 5 endpoints (routes). Now I need to introduce versioning for them: all endpoints should be prefixed with /version1.
For example, if there was an endpoint /xyz, it should now be /version1/xyz.
One of the ways is to add a pathPrefix, but then it needs to be added in every controller.
Is there a way to add it in one common place so that it applies to all endpoints?
I am using akka-http with scala.

You can create a base route that accepts paths like /version1/... and delegates to internal routes that carry no prefix themselves:
val version1Route = path("xyz") {
  ...
}

val version2Route = path("xyz") {
  ...
}

val route =
  pathPrefix("version1") {
    version1Route
  } ~ pathPrefix("version2") {
    version2Route
  }
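To keep the prefix in a single common place, you can also concatenate the routes of all five controllers and wrap the result once. A minimal sketch, assuming each controller exposes its endpoints as an unprefixed routes value (the controller names here are hypothetical):

import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route

// Each controller keeps its routes unprefixed; the version prefix
// is applied exactly once, at this common entry point.
val allControllerRoutes: Route = concat(
  controller1.routes,
  controller2.routes,
  controller3.routes,
  controller4.routes,
  controller5.routes)

val route: Route =
  pathPrefix("version1") {
    allControllerRoutes
  }

When a /version2 appears later, only this one wrapper has to change; the controllers themselves stay untouched.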

Indirect Answer
Aleksey Isachenkov's answer is the correct direct solution.
One alternative is to put versioning in the hostname instead of the path. Once you have "version1" of your Route values in source control, you can tag that check-in as "version1", deploy it into production, and then use DNS entries to set the service name to version1.myservice.com.
Then, once newer functionality becomes necessary you update your code and tag it in source-control as "version2". Release this updated build and use DNS to set the name as version2.myservice.com, while still keeping the version1 instance running. This would result in two active services running independently.
The benefits of this method are:
- Your code does not continuously grow longer as new versions are released.
- You can use logging to figure out whether a version hasn't been used in a long time and then just kill that running instance of the service to end-of-life the version (see the sketch after this list).
- You can use DNS to define your current "production" version by having production.myservice.com point to whichever version of the service you want. For example, once you've released version24.myservice.com and tested it for a while, you can update the production.myservice.com pointer from 23 to 24. The old version can stay running for any users who don't want to upgrade, but anybody who wants the latest version can always use "production".
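For the logging point above, here is a minimal sketch of how the serving hostname could be recorded per request with akka-http's extractHost and extractLog directives (withVersionLogging and innerRoute are hypothetical names):

import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route

// Wraps a route so that every request logs the Host header it arrived on;
// grepping the logs later shows whether versionN.myservice.com is still in use.
def withVersionLogging(innerRoute: Route): Route =
  extractHost { host =>
    extractLog { log =>
      log.info(s"request served via host: $host")
      innerRoute
    }
  }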

Related

Pulumi DigitalOcean: different name for droplet

I'm creating a droplet in DigitalOcean with Pulumi. I have the following code:
name = "server"
droplet = digitalocean.Droplet(
name,
image=_image,
region=_region,
size=_size,
)
The server gets created successfully on DigitalOcean but the name in the DigitalOcean console is something like server-0bbc405 (upon each execution, it's a different name).
Why isn't it just the name I provided? How can I achieve that?
This is a result of auto-naming, which is explained here in the Pulumi docs:
https://www.pulumi.com/docs/intro/concepts/resources/names/#autonaming
The extra characters tacked onto the end of the resource name allow you to use the same "logical" name (your "server") with multiple stacks without risk of a collision (cloud providers often require resources of the same kind to be named uniquely). Auto-naming looks a bit strange at first, but it's incredibly useful in practice, and once you start working with multiple stacks, you'll almost surely appreciate it.
That said, you can generally override this name by providing a name in your list of resource arguments:
...
name = "server"
droplet = digitalocean.Droplet(
name,
name="my-name-override", # <-- Override auto-naming
image="ubuntu-18-04-x64",
region="nyc2",
size="s-1vcpu-1gb")
...which would yield the following result:
+ pulumi:pulumi:Stack: (create)
    ...
    + digitalocean:index/droplet:Droplet: (create)
        ...
        name : "my-name-override"  # <-- As opposed to "server-0bbc405"
        ...
...but again, it's usually best to go with auto-naming, for the reasons specified in the docs. Quoting here:
- It ensures that two stacks for the same project can be deployed without their resources colliding. The suffix helps you to create multiple instances of your project more easily, whether because you want, for example, many development or testing stacks, or to scale to new regions.
- It allows Pulumi to do zero-downtime resource updates. Due to the way some cloud providers work, certain updates require replacing resources rather than updating them in place. By default, Pulumi creates replacements first, then updates the existing references to them, and finally deletes the old resources.
Hope it helps!

Unable to upload StructureDefinitions when Validation-Requests-Enabled (DSTU3)

I am experimenting with the automatic validation feature of the HAPI FHIR server. I am using the hapi-fhir-jpaserver-starter running in a Docker container. For compatibility reasons I am forced to stick with DSTU3 for the moment. The behavior I observe is the following:
If request validation is off (environment variable HAPI_FHIR_VALIDATION_REQUESTSENABLED unset), I can upload ValueSet and StructureDefinition resources. When uploading e.g. Patient or Observation resources, I can use the .../$validate REST call to validate them. Works as expected.
If request validation is on (HAPI_FHIR_VALIDATION_REQUESTSENABLED set to true), then uploading StructureDefinitions that refer to already-present ValueSet resources (via binding.valueSetReference) fails with messages like: This context is for FHIR version "DSTU3" but the class "org.hl7.fhir.r4.model.ValueSet" is for version "R4". Validation of uploaded resources like Patient or Observation works as expected: these resources are marked with a reference to my own StructureDefinitions and are validated against them, and resources with errors will not be persisted.
My current workaround is to disable validation, upload the ValueSet and StructureDefinition resources, and then restart with HAPI_FHIR_VALIDATION_REQUESTSENABLED=true; after that the server works as expected and correctly validates all resources being uploaded.
Is there a way to either avoid the errors above or to exclude StructureDefinition and ValueSet resources from validation for an individual upload request?
Any help will be appreciated.
-wolfgang

VSTS: Built-in variable for organization name?

In many of the calls described in the Azure DevOps REST API documentation, I need to supply the name of the organization, e.g.:
https://vsrm.dev.azure.com/{organization}/{project}/_apis/release/releases?api-version=5.0-preview.8
The project I can get from System.TeamProject. I would have expected something similar for the organization name, something like:
System.TeamFoundationCollectionName
This does not seem to be available. I've even printed out all of my environment variables on the agent and don't see anything that fits the need exactly. Sure, I can parse it out of one of the other values, but that feels fragile, since MS likes to change the format of its URLs.
I also can't hard code the organization name because this release definition will live in multiple organizations and we don't want to have to manually update it for each. How are others solving this problem?
Try using System.TeamFoundationServerUri and System.TeamFoundationCollectionUri to build your API requests. They have the organization included in them.
https://learn.microsoft.com/en-us/azure/devops/pipelines/release/variables?view=vsts&tabs=batch
Edit: SYSTEM_TEAMFOUNDATIONSERVERURI/BUILD_PROJECTNAME/_apis/release/releases?api-version=5.0-preview.8
It looks like there is currently no such variable for the organization. Also, these variables return the old URL (xxx.visualstudio.com) rather than the new URL (dev.azure.com/xxx), so if you use System.TeamFoundationCollectionUri the API should work without the {organization} segment:
$(System.TeamFoundationCollectionUri){project}/_apis/release/releases?api-version=5.0-preview.8
In PowerShell, do this:
# Where SYSTEM_TEAMFOUNDATIONCOLLECTIONURI=https://some_org_name.visualstudio.com/
([System.Uri]$Env:SYSTEM_TEAMFOUNDATIONCOLLECTIONURI).Host.split('.')[-3] # returns 'some_org_name'
Now, just assign that to a variable and use it anywhere you like. "SYSTEM_TEAMPROJECT" is the Project Name, so no need to do any parsing there. It is already available.

neo4j spatial findGeometriesWithinDistance REST

Using neo4j 1.9 and neo4j-spatial for 1.9, I am trying to get the findGeometriesWithinDistance REST call working.
I can confirm that the install has worked and that the function exists, but using the HTTP console I get a "Node 0 does not exist" error. The REST request I make is exactly as in the docs, yet instead of returning nodes I get this error.
What is going on that requires node 0 to exist and hence causes the error?
For info, the REST findGeometriesInBBox works fine.
On Further Investigation...
We use py2neo to interact with the DB. In particular, we make use of the GregorianCalendar functionality (see here). When that is removed from our logic, findGeometriesWithinDistance works fine.
Looking into it further, there are comments in the py2neo code that say #retain a handle to the root node (see the first code example here).
Does this "handle" do something with the node of index 0 so we can't use it?
Did you accidentally clean out your database?
I.e. remove node 0, which was the reference node that neo4j-spatial connected its root elements to (in 1.9)?

Is it possible to use variables in a ClearCase config spec?

For example, instead of writing the following:
element * .../my_branch_01/LATEST
element * .../base_branch/LATEST -mkbranch my_branch_01
I would want to write something like this:
MY_BRANCH=my_branch_01
element * .../%MY_BRANCH%/LATEST
element * .../base_branch/LATEST -mkbranch %MY_BRANCH%
Is this even possible? What is the correct syntax?
The only native way to do this in ClearCase is to use an attribute within a config spec.
According to the version selector rules, you can make a "selection by query" rule, based for instance on an attribute:
element * ...{MY_ATTRIBUTE_NAME=="aValue"}
would select the LATEST version on any branch carrying an attribute 'MY_ATTRIBUTE_NAME' set to 'aValue'.
That means you need to change the attribute value on the old branch, put it on the new branch, and 'cleartool setcs' your view again; you should then get new content based on the new version selection.
Not very straightforward, but it could work, except for the mkbranch part (which needs a fixed name).
Regarding GeekCyclist's answer, a few comments:
The solution to include a common config spec can work for a Base ClearCase setup, but:
- it needs to be in a share available to all concerned developers;
- the setcs is indeed necessary: it causes the view_server to flush its caches and reevaluate the current config spec, which is stored in the file config_spec in the view storage directory. This includes:
  - evaluating time rules with nonabsolute specifications (for example, now, Tuesday)
  - reevaluating -config rules, possibly selecting different derived objects than previously
  - re-reading files named in include rules
- all the other developers need to be notified when the common included config spec file changes (there is no native notification mechanism in ClearCase).
If you need to have one "environment" (i.e. one "view" or workspace) with variable content (depending on a different branch), you can define a symbolic link (or a Windows subst) pointing to different views (each with its own config spec).
That way, you only have to change the link (or the subst'ed path) in order to change the config spec associated with a given fixed path.
It's been a while since I worked in ClearCase (we switched to Subversion), but if I recall correctly there is no way to do this natively in ClearCase.
You could use or write a script generator that would create your spec file and then include that in the actual spec:
element * CHECKEDOUT
include scripted_file_output
Then run
cleartool setcs -current
The problem with this approach is that I believe the include spec would need to be regenerated and the cleartool setcs run whenever you change the value of MY_BRANCH.
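For illustration, a minimal sketch of such a generator, written here in Scala to match the first question (any scripting language would do; the default branch name, argument handling, and output file name are assumptions):

import java.nio.file.{Files, Paths}

object GenerateSpecFragment extends App {
  // The branch that would otherwise be hard-coded in the config spec.
  val myBranch = if (args.nonEmpty) args(0) else "my_branch_01"

  // Emit the element rules with the branch name substituted.
  val fragment =
    s"""element * .../$myBranch/LATEST
       |element * .../base_branch/LATEST -mkbranch $myBranch
       |""".stripMargin

  // Write the fragment to the file named in the include rule above.
  Files.write(Paths.get("scripted_file_output"), fragment.getBytes("UTF-8"))
}

Rerunning the generator and then cleartool setcs -current picks up the new branch name without editing the main config spec by hand.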