Conftest policy for Kubernetes manifests: checking that images come from a specific registry

I'm using conftest for validating policies on Kubernetes manifests.
The policy below validates that images in StatefulSet manifests come from a specific registry, reg_url:
package main

deny[msg] {
    input.kind == "StatefulSet"
    not regex.match("^reg_url/.+", input.spec.template.spec.initContainers[0].image)
    msg := "images must come from artifactory"
}
Is there a way to enforce such policy for all kubernetes resources that have image field somewhere in their description? This may be useful for policy validation on all helm chart manifests, for instance.
I'm looking for something like:
package main
deny[msg] {
    input.kind == "*" // all resources
    not regex.match("^reg_url/.+", input.*.image) // any nested image field
    msg := "images must come from artifactory"
}

You could do this using something like the walk built-in function. However, I would recommend against it, because:
You'd need to scan every attribute of every request/resource (expensive).
You can't know for sure that e.g. "image" means the same thing across all current and future resource manifests, including CRDs.
I'd probably just stick with checking for a match of resource kind here, and include any resource type known to have an image attribute with a shared meaning.
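That said, if you do want to experiment with the generic approach, a walk-based sketch might look like the following (reg_url is a placeholder for your registry host, and the rule assumes any key literally named "image" should be checked):

```rego
package main

# Generic sketch: walk every nested value in the input and flag any
# field named "image" whose value does not start with reg_url/.
# Note: this scans the whole document and treats every "image" key
# the same way, which is exactly the caveat mentioned above.
deny[msg] {
    walk(input, [path, value])
    path[count(path) - 1] == "image"
    not regex.match("^reg_url/.+", value)
    msg := sprintf("image %v does not come from artifactory", [value])
}
```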

Related

Pulumi DigitalOcean: different name for droplet

I'm creating a droplet in DigitalOcean with Pulumi. I have the following code:
name = "server"
droplet = digitalocean.Droplet(
    name,
    image=_image,
    region=_region,
    size=_size,
)
The server gets created successfully on DigitalOcean but the name in the DigitalOcean console is something like server-0bbc405 (upon each execution, it's a different name).
Why isn't it just the name I provided? How can I achieve that?
This is a result of auto-naming, which is explained here in the Pulumi docs:
https://www.pulumi.com/docs/intro/concepts/resources/names/#autonaming
The extra characters tacked onto the end of the resource name allow you to use the same "logical" name (your "server") across multiple stacks without risk of a collision (as cloud providers often require resources of the same kind to be named uniquely). Auto-naming looks a bit strange at first, but it's incredibly useful in practice, and once you start working with multiple stacks, you'll almost surely appreciate it.
That said, you can generally override this name by providing a name in your list of resource arguments:
...
name = "server"
droplet = digitalocean.Droplet(
    name,
    name="my-name-override",  # <-- Override auto-naming
    image="ubuntu-18-04-x64",
    region="nyc2",
    size="s-1vcpu-1gb",
)
...which would yield the following result:
+ pulumi:pulumi:Stack: (create)
...
+ digitalocean:index/droplet:Droplet: (create)
...
name : "my-name-override" # <-- As opposed to "server-0bbc405"
...
...but again, it's usually best to go with auto-naming for the reasons specified in the docs. Quoting here:
It ensures that two stacks for the same project can be deployed without their resources colliding. The suffix helps you to create multiple instances of your project more easily, whether because you want, for example, many development or testing stacks, or to scale to new regions.
It allows Pulumi to do zero-downtime resource updates. Due to the way some cloud providers work, certain updates require replacing resources rather than updating them in place. By default, Pulumi creates replacements first, then updates the existing references to them, and finally deletes the old resources.
Hope it helps!

unable to upload StructureDefinitions when Validation-Requests-Enabled (DSTU3)

I am experimenting with the automatic validation feature of HAPI-Fhir Server. I am using the hapi-fhir-jpaserver-starter running in a docker container. For compatibility reasons I am forced to stick at DSTU3 for the moment. My observed behavior is the following:
If request-validation is off (controlled via the env variable HAPI_FHIR_VALIDATION_REQUESTSENABLED being unset), I can upload ValueSet and StructureDefinition resources. When uploading e.g. Patient or Observation resources, I can use the .../$validate REST call to validate them. Works as expected.
If request-validation is on (HAPI_FHIR_VALIDATION_REQUESTSENABLED set to true), then uploading StructureDefinitions that refer to ValueSet resources already present (via binding.valueSetReference) fails with messages like This context is for FHIR version "DSTU3" but the class "org.hl7.fhir.r4.model.ValueSet" is for version "R4". Validation of uploaded resources like Patient or Observation works as expected: these resources carry a reference to my own StructureDefinitions and are validated against them, and resources with errors are not persisted.
My current workaround is to disable validation, upload the ValueSet and StructureDefinition resources, and then restart with HAPI_FHIR_VALIDATION_REQUESTSENABLED=true; the server then works as expected and correctly validates all uploaded resources.
Is there a way to either avoid the errors above or prevent StructureDefinition or ValueSet resources from validation for an individual upload-request?
Any help will be appreciated.
-wolfgang

How to introduce versioning for endpoints for akka http

I have 5 controllers in akka-http, and each controller has 5 endpoints (routes). Now I need to introduce versioning for them: all endpoints should be prefixed with /version1.
For example, if there was an endpoint xyz, it should now be /version1/xyz.
One way is to add a pathPrefix, but it would need to be added to each controller.
Is there a way to add it in a common place so that it applies to all endpoints?
I am using akka-http with scala.
You can create a base route, that accepts paths like /version1/... and refers to internal routes without path prefix.
val version1Route = path("xyz") {
  ...
}

val version2Route = path("xyz") {
  ...
}

val route = pathPrefix("version1") {
  version1Route
} ~ pathPrefix("version2") {
  version2Route
}
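If the routes live in separate controllers, the same idea still gives you one common place: concatenate the controllers' routes and wrap them in a single prefix. A sketch (the per-controller route names below are hypothetical):

```scala
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route

// Hypothetical per-controller routes, written without any version prefix
val userRoutes: Route = path("users") { complete("user list") }
val orderRoutes: Route = path("orders") { complete("order list") }

// One common place: every endpoint is now served under /version1/...
val apiRoute: Route = pathPrefix("version1") {
  concat(userRoutes, orderRoutes)
}
```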
Indirect Answer
Aleksey Isachenkov's answer is the correct direct solution.
One alternative is to put versioning in the hostname instead of the path. Once you have "version1" of your Route values in source-control then you can tag that checkin as "version1", deploy it into production, and then use DNS entries to set the service name to version1.myservice.com.
Then, once newer functionality becomes necessary you update your code and tag it in source-control as "version2". Release this updated build and use DNS to set the name as version2.myservice.com, while still keeping the version1 instance running. This would result in two active services running independently.
The benefits of this method are:
Your code does not continuously grow longer as new versions are released.
You can use logging to figure out if a version hasn't been used in a long time and then just kill that running instance of the service to End-Of-Life the version.
You can use DNS to define your current "production" version by having production.myservice.com point to whichever version of the service you want. For example: once you've released version24.myservice.com and tested it for a while you can update the production.myservice.com pointer to go to 24 from 23. The old version can stay running for any users that don't want to upgrade, but anybody who wants the latest version can always use "production".

Grafana templating merge variables

I'm looking for a way to merge two templating variables in Grafana (data source: Prometheus).
My use case is:
I have my first variable:
deployment = label_values(kube_deployment_labels{namespace="$namespace"}, deployment)
and a second one:
statefulset = label_values(kube_statefulset_labels{namespace="$namespace"}, statefulset)
What I'm looking for is a single dropdown menu (selector), because in my dashboard I want to be able to select a deployment or a statefulset, but not both at the same time.
I've tried two different approaches:
1) With Prometheus, using a query like this:
kube_deployment_labels{namespace="$namespace"} or kube_statefulset_labels{namespace="$namespace"}
But in this case I'm not able to extract the labels (which could be "deployment" or "statefulset").
2) It doesn't seem possible to merge two template variables in Grafana like this:
$deployment,$statefulset
Maybe I've missed something...
Thanks,
Matt
I do it by creating two separate variables and giving them the same label name.
Since the label name is the same, only one dropdown will be shown.
https://grafana.com/docs/grafana/latest/variables/templates-and-variables/#basic-variable-options
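For example (the shared display label "Workload" here is just an assumption), the two variables from the question could be configured as:

```
# Variable 1
Name:  deployment
Label: Workload
Query: label_values(kube_deployment_labels{namespace="$namespace"}, deployment)

# Variable 2
Name:  statefulset
Label: Workload
Query: label_values(kube_statefulset_labels{namespace="$namespace"}, statefulset)
```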

Grafana templates for Prometheus

I am trying to update the node-exporter-full Grafana dashboard with some of our internal labels as templates. We have labels for "pod" and "servertype" which can be used to get a subset of "nodes" to list at the top of the dashboard.
I can add "pod" like:
label_values(pod)
Then I can reference "pod" in the node query as follows:
label_values(node_boot_time{job="clients",pod="$pod"}, instance)
This works. If I want to add servertype in the middle, how would I pull a list of "servertype" values based on the selected "pod"?
I already know the "node" can be filtered with:
label_values(node_boot_time{job="clients",pod="$pod",servertype="$servertype"}, instance)
The answer was pretty simple once I reread the documentation. I'm currently using the up metric and it is working fine, though there may be a better solution.
Note that node_boot_time has been renamed to node_boot_time_seconds.
Refer to this link for all the metric name changes since node_exporter 0.16.0:
https://github.com/prometheus/node_exporter/issues/830
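Putting this together, a chained setup (using the job name from the question; the exact label names are assumptions about your setup) might look like:

```
# "servertype" variable, chained off the selected pod, using the up metric
label_values(up{job="clients",pod="$pod"}, servertype)

# "node" variable, chained off both pod and servertype
label_values(node_boot_time_seconds{job="clients",pod="$pod",servertype="$servertype"}, instance)
```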