Unable to get ENV variables in GoCD Kubernetes using YAML config - kubernetes

GoCD Version: 19.12.0
I'm trying to use environment variables defined in the Kubernetes deployment (system level) in my GoCD YAML config, in order to pass the GitHub authentication when pulling the material.
I've confirmed that I'm able to clone the repository using a personal access token (via https://[TOKEN]@github.com/[COMPANY]/[REPO].git).
This, of course, also works if I do the same in the actual YAML git field.
The GoCD secrets in K8s:
apiVersion: v1
data:
  GITHUB_ACCESS_KEY: base64EncodedKey
kind: Secret
type: Opaque
The GoCD deployment gets the secrets:
...
spec:
  containers:
    - env:
        - name: GOCD_PLUGIN_INSTALL_kubernetes-elastic-agents
          value: https://github.com/gocd/kubernetes-elastic-agents/releases/download/v3.4.0-196/kubernetes-elastic-agent-3.4.0-196.jar
        - name: GOCD_PLUGIN_INSTALL_docker-registry-artifact-plugin
          value: https://github.com/gocd/docker-registry-artifact-plugin/releases/download/v1.1.0-104/docker-registry-artifact-plugin-1.1.0-104.jar
        - name: GITHUB_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              key: GITHUB_ACCESS_KEY
              name: gocd-server
...
I've exec'd into the pod and echoed the variable, which returns the decoded value.
The YAML:
format_version: 9
pipelines:
  db-docker-build:
    group: someGroup
    label_template: ${COUNT}-${git[:8]}
    lock_behavior: unlockWhenFinished
    display_order: 1
    materials:
      git:
        git: 'https://$GITHUB_ACCESS_KEY@github.com/[COMPANY]/[REPO].git'
        shallow_clone: true
        auto_update: true
        branch: master
...
I'd half expect that to work, but it doesn't; it actually just uses the literal string $GITHUB_ACCESS_KEY as the value. The jobs defined in the pipeline stages run on an elastic agent pod which also has the required secrets defined. I've tried a few variations:
Setting an environment variable:
environment_variables:
  GIT_KEY: ${GITHUB_ACCESS_KEY}
and then using that variable:
git: 'https://$GIT_KEY@github.com/[COMPANY]/[REPO].git'
Setting the environment variable and using no quotes:
environment_variables:
  GIT_KEY: ${GITHUB_ACCESS_KEY}
and then using that variable:
git: https://${GIT_KEY}@github.com/[COMPANY]/[REPO].git
No quotes - git: https://$GITHUB_ACCESS_KEY@github.com/[COMPANY]/[REPO].git
No quotes with brackets - git: https://${GITHUB_ACCESS_KEY}@github.com/[COMPANY]/[REPO].git
I've seen in some YAML documentation that it is recommended to use encrypted_password for the GitHub password, but this seems unnecessary since the GUI hides the token, and since it's running in Kubernetes with secrets.

The team and I researched this a little further and found a workaround. Most issues and articles explain what is written in the docs: you really need access to /bin/bash -c in order to get at the variables.
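In practice that means the token can be expanded inside a job task, where a shell is available, just not in the material URL. A minimal sketch of such a task, reusing the placeholders from the question and assuming the gocd-yaml exec task syntax:
tasks:
  - exec:
      command: /bin/bash
      arguments:
        - -c
        - git clone "https://${GITHUB_ACCESS_KEY}@github.com/[COMPANY]/[REPO].git"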
The YAML plugin creator also uses secure, encrypted variables to store sensitive data, which is fine, but for our team it means that a lot of Kubernetes features go unused.
The workaround:
Use the GUI to create a pipeline in GoCD, enter the GitHub link, add a username and the personal access token for that user as the password, and test that the connection is OK. Once created, go to Admin -> Pipelines, click Download pipeline configuration and select YAML.
The generated YAML has the token encrypted with the GoCD server's private key.
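For reference, the downloaded material section ends up looking roughly like the sketch below; the username and cipher text are placeholders, and this assumes a YAML plugin version that supports username/encrypted_password on git materials:
materials:
  git:
    git: https://github.com/[COMPANY]/[REPO].git
    username: ci-user
    encrypted_password: "AES:...:..."
    branch: master
    shallow_clone: true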

Related

Flux Terraform controller not picking the correct Terraform state

I have a Terraform controller for Flux running with a GitHub provider; however, it seems to be picking up the wrong Terraform state, so it keeps trying to recreate the resources again and again (and fails because they already exist).
This is how it is configured
apiVersion: infra.contrib.fluxcd.io/v1alpha1
kind: Terraform
metadata:
  name: saas-github
  namespace: flux-system
spec:
  interval: 2h
  approvePlan: "auto"
  workspace: "prod"
  backendConfig:
    customConfiguration: |
      backend "s3" {
        bucket         = "my-bucket"
        key            = "my-key"
        region         = "eu-west-1"
        dynamodb_table = "state-lock"
        role_arn       = "arn:aws:iam::11111:role/my-role"
        encrypt        = true
      }
  path: ./terraform/saas/github
  runnerPodTemplate:
    metadata:
      annotations:
        iam.amazonaws.com/role: pod-role
  sourceRef:
    kind: GitRepository
    name: infrastructure
    namespace: flux-system
Locally, running terraform init with a state.config file that has a similar/same configuration works fine and detects the current state properly:
bucket = "my-bucket"
key = "infrastructure-github"
region = "eu-west-1"
dynamodb_table = "state-lock"
role_arn = "arn:aws:iam::111111:role/my-role"
encrypt = true
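For reference, the local run presumably passes that file as a partial backend configuration, something like:
terraform init -backend-config=state.config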
Reading the documentation I also saw a configPath that could be used, so I tried to point it to the state file, but then I got the error:
Failed to initialize kubernetes configuration: error loading config file couldn't get version/kind; json parse error
Which is weird, as if it tries to load a Kubernetes configuration rather than a Terraform one, or at least expects a JSON file, which is not the case for my state configuration.
I'm running Terraform 1.3.1 both locally and on the tf runner pod.
On the runner pod I can see the generated_backend_config.tf with the same configuration, and .terraform/terraform.tfstate also points to the bucket.
The only suspicious thing in the logs that I could find is this:
- Finding latest version of hashicorp/github...
- Finding integrations/github versions matching "~> 4.0"...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/github v5.9.1...
- Installed hashicorp/github v5.9.1 (signed by HashiCorp)
- Installing integrations/github v4.31.0...
- Installed integrations/github v4.31.0 (signed by a HashiCorp partner, key ID 38027F80D7FD5FB2)
- Installing hashicorp/aws v4.41.0...
- Installed hashicorp/aws v4.41.0 (signed by HashiCorp)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Warning: Additional provider information from registry
The remote registry returned warnings for
registry.terraform.io/hashicorp/github:
- For users on Terraform 0.13 or greater, this provider has moved to
integrations/github. Please update your source in required_providers.
It seems that it installs two GitHub providers, one from hashicorp and one from integrations... I have changed Terraform/provider versions during development, and I have removed any reference to the hashicorp one, but this warning still appears.
However, it also happens locally, where it reads the correct state, so I don't think it is related.
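One thing that might explain the duplicate download (an assumption, not something confirmed by the logs): any module that declares github resources without its own required_providers block falls back to the implied hashicorp/github source. A sketch of the block the warning asks for, using the constraint from the log above:
terraform {
  required_providers {
    github = {
      source  = "integrations/github"
      version = "~> 4.0"
    }
  }
}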

Validate K8s YAML Files in a Git repo

I have a set of K8s YAML descriptors as part of a project and I'm using kustomize to build them. I'm also using GitOps to do pull-based deployments to my K8s cluster.
I now want to add some tests for my YAML files so that, if there are any errors, I can avoid or prevent Flux from pulling my changes into the cluster. Basically, I want something like unit tests for my YAML files. I came across Kubeval and it could serve my purpose well; I'm just not sure how to use it.
Anyone already tried this? I want to basically do the following:
As soon as I push some YAML files into my repo, Kubeval kicks in and validates all the YAML files in a set of folders that I specify
If all the YAML files pass lint validation, then I want to proceed to the next stage where I call kustomize to build the deployment YAML
If the YAML files fail lint validation, then my CI fails and nothing else should happen
Any ideas on how I could do this?
Since my project is hosted on GitHub, I was able to get what I want using GitHub Actions and kube-tools.
So basically here is what I did!
In my GitHub project, I added a workflow file under project-root/.github/workflows/main.yml
The contents of my main.yml are:
name: ValidateKubernetesYAML
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Kubeval
        uses: stefanprodan/kube-tools@v1.2.0
        with:
          kubectl: 1.16.2
          kustomize: 3.4.0
          helm: 2.16.1
          helmv3: 3.0.0
          command: |
            echo "Run kubeval"
            kubeval -d base,dev,production --force-color --strict --ignore-missing-schemas
Now when someone issues a pull request into master, this validation kicks in, and if it fails the changes do not get promoted into the master branch, which is what I want!
Here is the output of such a validation:
Run kubeval
WARN - Set to ignore missing schemas
PASS - base/application/plant-simulator-deployment.yaml contains a valid Deployment
PASS - base/application/plant-simulator-ingress-service.yaml contains a valid Ingress
PASS - base/application/plant-simulator-namespace.yaml contains a valid Namespace
PASS - base/application/plant-simulator-service.yaml contains a valid Service
WARN - base/kustomization.yaml containing a Kustomization was not validated against a schema
PASS - base/monitoring/grafana/grafana-deployment.yaml contains a valid Deployment
PASS - base/monitoring/grafana/grafana-service.yaml contains a valid Service
PASS - base/monitoring/plant-simulator-monitoring-namespace.yaml contains a valid Namespace
PASS - base/monitoring/prometheus/config-map.yaml contains a valid ConfigMap
PASS - base/monitoring/prometheus/prometheus-deployment.yaml contains a valid Deployment
PASS - base/monitoring/prometheus/prometheus-roles.yaml contains a valid ClusterRole
PASS - base/monitoring/prometheus/prometheus-roles.yaml contains a valid ServiceAccount
PASS - base/monitoring/prometheus/prometheus-roles.yaml contains a valid ClusterRoleBinding
PASS - base/monitoring/prometheus/prometheus-service.yaml contains a valid Service
PASS - dev/flux-patch.yaml contains a valid Deployment
WARN - dev/kustomization.yaml containing a Kustomization was not validated against a schema
PASS - production/flux-patch.yaml contains a valid Deployment
WARN - production/kustomization.yaml containing a Kustomization was not validated against a schema
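A variation on the same idea, in case you also want to validate the fully rendered manifests rather than the individual files: kubeval reads from stdin, so you can pipe the kustomize output into it (folder names are the ones from the example above):
kustomize build dev | kubeval --strict --ignore-missing-schemas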

How to use pipeline variable inside property file from git

In an Azure pipeline I download a Kubernetes deployment.yml property file which contains the following content.
spec:
  imagePullSecrets:
    - name: some-secret
  containers:
    - name: container-name
      image: pathtoimage/data-processor:$(releaseVersion)
      imagePullPolicy: Always
      ports:
        - containerPort: 8088
      env:
My intention is to get the value from the pipeline variable $(releaseVersion). But it seems the Kubernetes task doesn't allow this value to be read from a pipeline variable.
I tried using the inline configuration type and it works. That means if I copy the same configuration as inline content into the Kubernetes task configuration, it works.
Is there any way I can make it work for configuration coming from a file?
As I understand it, you want to replace a variable in the deployment.yml file's content while the build executes.
You can use the Replace Tokens task (note: the token in this task's name is not the same thing as a PAT token). This task supports replacing values in project files with environment variables when setting up VSTS Build/Release processes.
Install Replace Tokens from the marketplace first, then add the Replace Tokens task to your pipeline.
Configure the .yml file path in the Root directory. For me, the target file is under the Drop folder. Then point out which file you want to operate on and replace.
For more arguments to configure, you can check this doc, which I have referred to before: https://github.com/qetza/vsts-replacetokens-task#readme
Note: Please execute this task before the Deploy to Kubernetes task, so that the change can be applied to the Kubernetes cluster.
There is also another sample blog you can refer to.
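If you are on a YAML-based pipeline, the same task can be added as a step. A minimal sketch, where the root directory is a placeholder and the token prefix/suffix are assumptions chosen to match the $(releaseVersion) placeholder in the file:
steps:
  - task: replacetokens@3
    inputs:
      rootDirectory: '$(System.DefaultWorkingDirectory)/drop'
      targetFiles: '**/deployment.yml'
      tokenPrefix: '$('
      tokenSuffix: ')'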
You should have it as part of your pipeline, to substitute environment variables inside the deployment template
Something along the lines of:
- sed -i "s/\$(releaseVersion)/${RELEASE_VERSION_IN_BUILD_RUNNER}/" deployment.yml
- kubectl apply -f deployment.yml
You can set the variables in your pipeline: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch

Kubernetes secrets encryption

I have pods deployed to a Kubernetes cluster (hosted with Google Cloud Kubernetes). Those pods use some secrets, which are plain text files. I added the secrets to the yaml file and deployed the deployment. The application is working fine.
Now, let's say that someone compromises my code and somehow gets access to all my files on the container. In that case, the attacker can find the secrets directory and print all the secrets written there. It's plain text.
Question:
Why is it more secure to use Kubernetes Secrets instead of just plain text?
There are different levels of security, and as @Vishal Biyani says in the comments, it sounds like you're looking for a level of security you'd get from a project like Sealed Secrets.
As you say, out of the box Secrets don't give you encryption at the container level. But they do give you controls on access through kubectl and the Kubernetes APIs. For example, you could use role-based access control so that specific users can see that a secret exists without seeing (through the k8s APIs) what its value is.
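A minimal sketch of what that looks like with RBAC (all names are placeholders): only subjects bound to this Role can read secret values in the namespace through the API.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-secrets
  namespace: default
subjects:
  - kind: User
    name: jane
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io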
In case you can create the secret using a command instead of having it in the yaml file, for example:
kubectl create secret generic cloudsql-user-credentials --from-literal=username=[your user] --from-literal=password=[your pass]
you can also read it back with:
kubectl get secret cloudsql-user-credentials -o yaml
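If you only need a single field, you can also pull and decode it directly (key names as in the example above):
kubectl get secret cloudsql-user-credentials -o jsonpath='{.data.username}' | base64 --decode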
I also use the secret at two levels; the first is the Kubernetes one:
env:
  - name: SECRETS_USER
    valueFrom:
      secretKeyRef:
        name: cloudsql-user-credentials
        key: username
SECRETS_USER is an env var, and I use its value with jasypt:
spring:
  datasource:
    password: ENC(${SECRETS_USER})
On app start-up you use the param: -Djasypt.encryptor.password=encryptKeyCode
java -cp ~/.m2/repository/org/jasypt/jasypt/1.9.2/jasypt-1.9.2.jar org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI input="encryptKeyCode" password=[pass user] algorithm=PBEWithMD5AndDES

Kubernetes - different settings per environment

We have an app that runs on GKE Kubernetes which expects an auth url (to which the user will be redirected via their browser) to be passed as an environment variable.
We are using different namespaces per environment.
So our current pod config looks something like this:
env:
  - name: ENV
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: AUTH_URL
    value: https://auth.$(ENV).example.org
And it all works amazingly: we can have as many dynamic environments as we want, we just do apply -f config.yaml and it works flawlessly without changing a single config file and without any third-party scripts.
Now for production we want to use a different domain, so the general pattern https://auth.$(ENV).example.org does not work anymore.
What options do we have?
Since configs are in git repo, create a separate branch for prod environment
Have a default ConfigMap and a specific one for prod environment, and run it via some script (if exists prod-config.yaml then use that, else use config.yaml) - but with this approach we cannot use kubectl directly anymore
Move this config to application level, and have separate config file for prod env - but this kind of goes against 12factor app?
Other...?
This seems like an ideal opportunity to use Helm!
It's really easy to get started: simply install Tiller into your cluster.
Helm gives you the ability to create "charts" (which are like packages) that can be installed into your cluster. You can template these really easily. As an example, you might have your config.yaml look like this:
env:
  - name: AUTH_URL
    value: {{ .Values.auth.url }}
Then, within the helm chart you have a values.yaml which contains defaults for the url, for example:
auth:
  url: https://auth.namespace.example.org
You can use the --values option with helm to specify per-environment values.yaml files, or even use the --set flag to override individual values when running helm install.
Take a look at the documentation here for information about how values and templating work in Helm. It seems perfect for your use case.
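A sketch of how that looks on the command line, using Helm 2 syntax to match the Tiller-era answer above (release, chart and file names are placeholders):
helm install ./my-chart --name my-app -f values.prod.yaml
helm upgrade my-app ./my-chart --set auth.url=https://auth.example.org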
jaxxstorms' answer is helpful, I just want to add what that means to the options you proposed:
Since configs are in git repo, create a separate branch for prod environment.
I would not recommend separate branches in GIT since the purpose of branches is to allow for concurrent editing of the same data, but what you have is different data (different configurations for the cluster).
Have a default ConfigMap and a specific one for prod environment, and run it via some script (if exists prod-config.yaml then use that, else use config.yaml) - but with this approach we cannot use kubectl directly anymore
Using Helm will solve this more elegantly. Instead of a script you use helm to generate the different files for different environments. And you can use kubectl (using the final files, which I would also check into GIT btw.).
Move this config to application level, and have separate config file for prod env - but this kind of goes against 12factor app?
This is a matter of opinion but I would recommend in general to split up the deployments by applications and technologies. For example when I deploy a cluster that runs 3 different applications A B and C and each application requires a Nginx, CockroachDB and Go app-servers then I'll have 9 configuration files, which allows me to separately deploy or update each of the technologies in the app context. This is important for allowing separate deployment actions in a CI server such as Jenkins and follows general separation of concerns.
Other...?
See jaxxstorms' answer about Helm.