Can one rake namespace reference a variable in another?

I have two rake namespaces and each has some variables in it, e.g.:
namespace :htmlNs do
  html_main = src_dir_html + "/main.html"
  # ... tasks here can use html_main ...
end
Obviously tasks in this namespace can use this variable. But how do I refer to html_main from tasks in other namespaces?
I should emphasise that I am not talking about dependencies on tasks in other namespaces, which work fine with the ns:task syntax.
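For what it's worth, a minimal sketch of one common workaround (the directory value is assumed for illustration): rake namespaces scope task names, not Ruby variables, so a local defined inside the namespace block is invisible outside it, while a constant defined at the top level of the Rakefile is visible everywhere.
SRC_DIR_HTML = "src/html"                  # assumed value, for illustration
HTML_MAIN    = SRC_DIR_HTML + "/main.html" # a constant, so visible everywhere

namespace :htmlNs do
  task :build do
    puts HTML_MAIN   # usable inside its "own" namespace ...
  end
end

namespace :otherNs do
  task :check do
    puts HTML_MAIN   # ... and from any other namespace
  end
end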

Related

AWS ECS Task Definition: How do I reference an environment variable in another environment variable?

I would like to be able to define environment variables in an AWS ECS task definition like the ones below:
TEST_USER: admin
TEST_PATH: /home/$TEST_USER/workspace
When I echo TEST_PATH:
Actual value: /home/$TEST_USER/workspace
Expected value: /home/admin/workspace
You can't do that. I don't think Docker in general supports evaluating environment variables like that before exposing them in the container.
If you are using something like CloudFormation or Terraform to create your task definitions, you would use a variable in that tool to store the value, and create the ECS environment variables using that CloudFormation/Terraform variable.
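For instance, a sketch of the Terraform route (resource names and values are hypothetical): the expansion happens in Terraform, so ECS only ever sees the final string.
locals {
  test_user = "admin"
  test_path = "/home/${local.test_user}/workspace"  # expanded by Terraform, not Docker
}

resource "aws_ecs_task_definition" "example" {
  family                = "example"
  container_definitions = jsonencode([{
    name  = "app"
    image = "busybox"
    environment = [
      { name = "TEST_USER", value = local.test_user },
      { name = "TEST_PATH", value = local.test_path },
    ]
  }])
}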
Otherwise you could edit the entrypoint script of your Docker image to do the following when the container starts up:
export TEST_PATH="/home/$TEST_USER/workspace"
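A fuller entrypoint sketch (assuming the image's original command is passed through as arguments):
#!/bin/sh
# Derive TEST_PATH from TEST_USER at container start, when the
# environment is finally known, then hand off to the real command.
export TEST_PATH="/home/$TEST_USER/workspace"
exec "$@"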

Refer to resource created by module TERRAFORM

I've got a question about Terraform and references to resources. Long story short: I have a module to create an AKS cluster (attachment), and I create the cluster from one folder. In another folder I have a different module to manage Kubernetes itself: creating namespaces, deployments, etc. How can I refer to this cluster from the other folder?
As Marko mentioned, you would use outputs within the same root module; however, if you are applying one plan and then the other, you would likely need to use data sources.
Typically in my root modules I have a data-source.tf file with any pre-existing resources that I need to reference in the root module.
data "kubernetes_service" "example" {
metadata {
name = "terraform-example"
}
}
With the above data source defined, you can use it like this in your root module: data.kubernetes_service.example.attribute-you-want-to-retrieve
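For example, a sketch of retrieving one attribute (the exact attribute path is assumed from the provider docs, where spec is exposed as a list):
output "example_cluster_ip" {
  # The data source's spec block is a list, hence the [0] index.
  value = data.kubernetes_service.example.spec[0].cluster_ip
}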
Here's a good reference: https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/data-sources/service

environmental variables in bosh deployment

I would like a job J from a release R in a BOSH deployment to start with a certain environment variable E set, which is not available in the job's properties for configuration.
Can this be specified in the deployment file or when calling the bosh cli?
Unfortunately, I am pretty sure this is not possible. BOSH does not understand environment variables. Instead, it executes an ERB template with the properties configured in the manifest. For example, this job template from log-cache is executed with the properties from a manifest, along with defaults from the job spec.
If you need to have a particular environment variable set for testing/development, you can bosh ssh onto an instance where you are going to run the job and then mutate the generated file. Given the CF deployment example: bosh ssh doppler/0, then modify the generated bpm.yml in /var/vcap/jobs/log-cache/config/bpm.yml. This is a workaround for debugging and development; if you need to set a field in a manifest, reach out to the release author and open an issue, or PR the ability to set the environment variable as a property by adding it to the job spec.
(note the versions used in the example are just from HEAD and may not actually work)
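For reference, bpm configs accept an env map per process, so the mutated bpm.yml might gain a stanza like this (the process layout and variable name are illustrative):
processes:
- name: log-cache
  executable: /var/vcap/packages/log-cache/log-cache
  env:
    SOME_DEBUG_FLAG: "true"   # hypothetical variable E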

Why prefix kubernetes manifest files with numbers?

I'm trying to deploy Node.js code to a Kubernetes cluster, and I'm seeing that in my reference (provided by the maintainer of the cluster) that the yaml files are all prefixed by numbers:
00-service.yaml
10-deployment.yaml
etc.
I don't think that this file format is specified by kubectl, but I found another example of it online: https://imti.co/kibana-kubernetes/ (but the numbering scheme isn't the same).
Is this a Kubernetes thing? A file naming convention? Is it to keep files ordered in a folder?
This is to handle the resource creation order. There's an open issue in Kubernetes:
https://github.com/kubernetes/kubernetes/issues/16448#issue-113878195
tl;dr kubectl apply -f k8s/* should handle the order but it does not.
However, apart from the namespace, I cannot imagine a case where the order would matter. Every relation except the namespace is handled by label selectors, so it fixes itself once all resources are deployed. You can just have 00-namespace.yaml and everything else without prefixes. Or skip the prefixes entirely unless you really hit the issue (I never have).
When you execute kubectl apply on a directory of files, they are applied alphabetically. Prefixing the files with an increasing number lets you control the order in which they are applied. But in nearly all cases the order shouldn't matter.
Sequencing also helps readability, user-friendliness and, not least, maintainability: looking at the file names, one can tell in which order the deployment needs to be performed. For example, a Deployment that consumes a ConfigMap would fail if it is applied before the ConfigMap is created.
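A minimal illustration of that last point (all names are hypothetical): with files such as 00-configmap.yaml and 10-deployment.yaml, lexical order guarantees the ConfigMap below exists before the Deployment that reads it; applied the other way round, the pod would sit in CreateContainerConfigError until the ConfigMap appears.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  GREETING: hello
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "echo $GREETING && sleep 3600"]
        envFrom:
        - configMapRef:
            name: app-config   # pod start fails if this is missing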

Handling OpenShift secrets in a safe way after extraction into environment variables

So I have configured an OpenShift 3.9 build configuration such that environment variables are populated from an OpenShift secret at build-time. I am using these environment variables for setting passwords up for PostgreSQL roles in the image's ENTRYPOINT script.
Apparently these environment variables are baked into the image: not just the build image, but also the resulting database image. (I can see their values when issuing set inside the running container.) On the one hand this seems necessary, because the ENTRYPOINT script needs access to them, and it executes only at image run-time (not build-time). On the other hand it is somewhat disconcerting, because as far as I know anyone who obtained the image could now extract those passwords. Unsetting the environment variables after use would not change that.
So is there a better way (or even a best practice) for handling such situations more securely?
UPDATE At this stage I see two possible ways forward (better choice first):
Configure the DeploymentConfig such that it mounts the secret as a volume (rather than having the BuildConfig populate environment variables from it); see the sketch after this list.
Store PostgreSQL password hashes (rather than the verbatim passwords) in the secret.
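A sketch of the first option (secret and mount names are hypothetical): the DeploymentConfig's pod template mounts the secret as files, so nothing is baked into the image at build time.
spec:
  template:
    spec:
      volumes:
      - name: db-credentials
        secret:
          secretName: db-credentials   # the existing OpenShift secret
      containers:
      - name: postgresql
        volumeMounts:
        - name: db-credentials
          mountPath: /run/secrets/db   # read by the ENTRYPOINT at run-time
          readOnly: true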
As was suggested in a comment, what made sense was to shift provisioning the environment variables from the secret out of the BuildConfig and into the DeploymentConfig. For reference:
oc explain bc.spec.strategy.dockerStrategy.env.valueFrom.secretKeyRef
oc explain dc.spec.template.spec.containers.env.valueFrom.secretKeyRef
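Those paths correspond to a DeploymentConfig fragment along these lines (secret and key names are hypothetical):
spec:
  template:
    spec:
      containers:
      - name: postgresql
        env:
        - name: POSTGRESQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials   # secret name
              key: password          # key within the secret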