OpenShift - How to share a secret between namespaces? - kubernetes

SSH secrets are required to clone a private repo from GitHub in Origin.
I created a project, added the SSH secret to the build config, and everything worked fine.
Now I am creating a template, so users will create a new project and use my template to deploy their applications.
Here, after creating the project, the build does not start because of the missing SSH secret. Is it possible to share an SSH secret between namespaces, so that I can create the SSH secret once in an OpenShift project and users can use it straight away, without assigning the secret to the build config after creating every project?
EDIT: Is this possible in Kubernetes?
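Secrets are namespace-scoped objects in both OpenShift and Kubernetes, so a build config cannot reference a secret in another namespace directly; the usual workaround is to copy the secret into each namespace that needs it. A minimal sketch with kubectl (namespace and secret names are placeholders; the same pipeline works with oc):

# copy a secret into another namespace, stripping namespace-specific metadata
kubectl get secret sshsecret -n source-ns -o json \
  | jq 'del(.metadata.namespace, .metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp)' \
  | kubectl apply -n target-ns -f -

This copy step could be run as part of project provisioning so users get the secret without any manual assignment.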

Related

how to use gitlab to share files and folders between projects

I have a question to ask, but I'll explain my plan/requirements first:
- I have started at a new company.
- I have been tasked with migrating a lot of microservices running on Swarm to Kubernetes; there are about 50 microservices running now.
- Right now we are using Consul as the key/value store for configuration files.
- Due to a lot of mistakes in designing the infrastructure, our Swarm cluster is not stable (failing overlays and so on).
- Developers want sub-versioning on the configuration as well, but in a specific way: one project for all config files, and they don't want to go through build stages.
- There are some applications that read live configuration (changes occur regularly).
So I need to centralize the configuration and create a project for this task. I store the Kubernetes manifests, GitLab CI files, and app configurations there. When I include the CI files in the target project, I can't access the config and Kubernetes manifests (submodules are not acceptable to the developers). I'm planning to use Helm instead of kubectl for deployment. My biggest challenge is to provide the configuration live (as the developer pushes it, it is applied to the ConfigMap).
Am I on the right track? Any suggestions on how to achieve my goal?
I expect to be able to deploy projects and use multiple files and folders from other projects.
Create a CI file like this in your devops repo; this job should commit the config file to your devops repo when the config changes:
commit-config-file-to-devops-repo:
  script: "command to commit config file to your devops repo"
  only:
    refs:
      - master
    changes:
      - path/some-config-file.json
      - configs/*
Change the default CI file location to point to the CI file in your devops repo:
https://192.168.64.188/help/ci/pipelines/settings#custom-cicd-configuration-path
my/path/.my-custom-file.yml@mygroup/another-project
Set up the pipeline so that it applies the config to k8s when the file is committed.
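For instance, the devops repo's pipeline could regenerate and re-apply the ConfigMap whenever a config file changes. A minimal sketch, assuming a configs/ directory and a cluster already reachable from the runner (the image and names here are placeholders):

apply-config-to-k8s:
  image: bitnami/kubectl:latest
  script:
    # rebuild the ConfigMap from the files and apply it idempotently
    - kubectl create configmap app-config --from-file=configs/ --dry-run=client -o yaml | kubectl apply -f -
  only:
    refs:
      - master
    changes:
      - configs/*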
Personally, I use Argo CD to sync the Helm chart to k8s, but you can do it your own way.
Reading live configuration is normally not recommended, because changing the config may cause errors. When using k8s, it is better to create a ConfigMap, inject the config into environment variables, and then use the rollout mechanism to restart the app. However, if you are using a ConfigMap volume, the mounted config file is updated automatically when you change the config:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#mounted-configmaps-are-updated-automatically
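A minimal sketch of such a mounted ConfigMap (all names here are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest  # placeholder image
      volumeMounts:
        - name: config
          # files under this path are refreshed by the kubelet when the ConfigMap changes
          mountPath: /etc/app
  volumes:
    - name: config
      configMap:
        name: app-config  # e.g. the ConfigMap applied by the pipeline above

Note that the refresh is eventually consistent (it can take up to a kubelet sync period plus cache propagation), and it does not happen for keys mounted via subPath.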

Gitlab Kubernetes Agent how to restrict access by namespace or environment

I'm trying to move from the certificate-based GitLab Kubernetes integration, which was deprecated, to the new agent-based Kubernetes integration.
I use the CI/CD workflow, created a separate project for the GitLab Kubernetes agents, and registered them there.
The question is how to restrict the usage of the registered agents in other projects.
Previously, with the certificate-based approach, one could set a target namespace in the project settings, and one could also set an environment for the integrated cluster, to use it with protected environments.
Now the Kubernetes context is simply available in other projects under the same group, and once you have access to the CI/CD files you can do whatever you want and deploy anywhere.
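One knob the agent does expose is which projects or groups may use it at all: the agent's config file in the configuration project lists them under ci_access. A minimal sketch, assuming the file lives at .gitlab/agents/<agent-name>/config.yaml (project paths are placeholders):

# .gitlab/agents/<agent-name>/config.yaml
ci_access:
  projects:
    - id: mygroup/app-project     # only this project receives the Kubernetes context
  # groups:
  #   - id: mygroup/deploy-team   # or authorize a whole group instead

This limits who receives the context, but it does not by itself restrict which namespaces an authorized pipeline can touch; namespace-level restrictions would have to come from the RBAC of the identity the agent runs as (or, on tiers that support it, the agent's impersonation settings).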

Programmatically Connecting a GitHub repo to a Google Cloud Project

I'm working on a Terraform project that will set up all the GCP resources needed for a large project spanning multiple GitHub repos. My goal is to be able to recreate the cloud infrastructure from scratch completely with Terraform.
The issue I'm running into is that, in order to set up build triggers with Terraform within GCP, the GitHub repo that sets off the trigger first needs to be connected. Currently, I've only been able to do that manually via the Google Cloud Build dashboard. I'm not sure whether this is possible via Terraform or with a script, but I'm looking for any solution that lets me automate it. Once the projects are connected, updating everything with Terraform works fine.
TL;DR: How can I programmatically connect a GitHub repo to a GCP project instead of using the dashboard?
Currently there is no way to programmatically connect a GitHub repo to a Google Cloud project; this must be done manually via the Google Cloud console.
My workaround is to manually connect an "admin" project, build containers and save them to that project's Artifact Registry, and then deploy the containers from that registry in the programmatically generated projects.

How can I use Gitlab's Container Registry for Helm Charts with ArgoCDs CI/CD Mechanism?

My situation is as follows:
- I have a Kubernetes cluster with a couple of nodes.
- I have Argo CD installed on the cluster and working great.
- I am using GitLab for my repo and build pipelines.
- I have another repo for storing my Helm charts.
- I have Docker images being built in GitLab and pushed to my GitLab registry.
- I have Argo CD able to point to my Helm chart repo and sync the Helm chart with my k8s cluster.
- I have Helm chart archive files pushed to my GitLab repo.
While this is a decent setup, it's not ideal.
The first problem I faced with using a Helm chart Git repo is that I can't (or don't know how to) differentiate my staging environment from my production environment. Since I have a dev environment and a prod environment in my cluster, Argo CD syncs both environments with the Helm chart repo. I could get around this with separate charts for each environment, but that isn't a valid solution.
The second problem I faced, while trying to get around the above problem, is that I can't get Argo CD to pull Helm charts from a GitLab OCI registry. I made my build pipeline push the Helm chart archive file to my GitLab container registry with the tag dev-latest or prod-latest, which is great, just what I want. The problem is that Argo CD, as far as I can tell, can't pull from GitLab's container registry.
How do I go about getting my pipeline automated with GitLab as my repo and build pipeline, Helm for packaging my application, and Argo CD for syncing my Helm application with my k8s cluster?
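For the environment-differentiation problem, a common pattern is to keep one chart and define one Argo CD Application per environment, each pointing at its own values file. A minimal sketch, assuming a chart at charts/myapp with values-dev.yaml and values-prod.yaml (all names and URLs are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.example.com/mygroup/helm-charts.git
    path: charts/myapp
    helm:
      valueFiles:
        - values-dev.yaml   # the prod Application points at values-prod.yaml instead
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp-dev
  syncPolicy:
    automated: {}

The prod Application is identical apart from the name, destination namespace, and values file, so both environments share one chart.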
Regarding "I can't get Argo CD to pull Helm charts from a GitLab OCI registry": you might be interested in GitLab 14.1 (July 2021):
Build, publish, and share Helm charts
Helm defines a chart as a Helm package that contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster.
For organizations that create and manage their own Helm charts, it’s important to have a central repository to collect and share them.
GitLab already supports a variety of other package manager formats.
Why not also support Helm? That’s what community member and MVP from the 14.0 milestone Mathieu Parent asked several months ago before breaking ground on the new GitLab Helm chart registry. The collaboration between the community and GitLab is part of our dual flywheel strategy and one of the reasons I love working at GitLab. Chapeau Mathieu!
Now you can use your GitLab project to publish and share packaged Helm charts.
Simply add your project as a remote, authenticating with a personal access, deploy, or CI/CD job token.
Once that’s done you can use the Helm client or GitLab CI/CD to manage your Helm charts.
You can also download the charts using the API or the user interface.
What’s next? First, we’d like to present additional metadata for charts.
Then we’ll start dogfooding the feature by using it as a replacement for https://charts.gitlab.io/.
So, try out the feature and let us know how it goes by commenting in the epic GitLab-#6366.
See Documentation and issue.
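To make the workflow concrete, here is a minimal sketch of publishing and consuming a chart through the GitLab Helm package registry (the host, project ID, and token are placeholders; "stable" is the channel name):

# add the project's Helm repo, authenticating with a personal access token
helm repo add my-charts https://gitlab.example.com/api/v4/projects/1234/packages/helm/stable \
  --username <username> --password <personal_access_token>

# publish a packaged chart with the chartmuseum push plugin
helm plugin install https://github.com/chartmuseum/helm-push
helm cm-push mychart-0.1.0.tgz my-charts

# consume it
helm repo update
helm install my-release my-charts/mychart

Note that this is the index-based Helm repository API rather than an OCI registry, so for the use case above you could point Argo CD at this URL as an ordinary Helm repository.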

Storing secrets and credentials securely in GitLab

I am wondering if it's possible to store credentials like passwords, tokens, and keys safely in my GitLab project.
Currently there are a bunch of Java files with some passwords stored in them for testing purposes. However, I don't want to push this information to my repo for security reasons. I tried using environment variables in the project, but they only seem to work in the .gitlab-ci.yml file.
My question is: does anyone use a vault like HashiCorp's Vault or BlackBox to encrypt sensitive information?
Thanks
You can check out GitLab 12.9 (March 2020) which comes with:
HashiCorp Vault GitLab CI/CD Managed Application
GitLab wants to make it easy for users to have modern secrets management. We are now offering users the ability to install Vault within a Kubernetes cluster as part of the GitLab CI managed application process.
This will support the secure management of keys, tokens, and other secrets at the project level in a Helm chart installation.
See documentation and issue.
See also GitLab 13.4 (September 2020)
For Premium/Silver only:
Use HashiCorp Vault secrets in CI jobs
In GitLab 12.10, GitLab introduced functionality for GitLab Runner to fetch and inject secrets into CI jobs. GitLab is now expanding the JWT Vault Authentication method by building a new secrets syntax in the .gitlab-ci.yml file. This makes it easier for you to configure and use HashiCorp Vault with GitLab.
https://about.gitlab.com/images/13_4/vault_ci.png -- Use HashiCorp Vault secrets in CI jobs
See Documentation and Issue.
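The secrets syntax mentioned above lets a job declare which Vault secret it needs. A minimal sketch of a .gitlab-ci.yml job (the Vault path and variable names are placeholders, and the JWT auth method must already be configured on the Vault side):

read-secret-job:
  secrets:
    DATABASE_PASSWORD:
      # field "password" at path "production/db" in the secrets engine mounted at "ops"
      vault: production/db/password@ops
  script:
    # by default the variable holds the path of a temporary file containing the secret
    - export DB_PASS="$(cat "$DATABASE_PASSWORD")"

GitLab Runner fetches the secret at job start, so the value never has to live in the repository or in project settings.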
If environment variables in GitLab don't cover your case, then you are effectively asking whether it is possible to store secrets in Git itself. I have not done this myself, but I found this post about it:
https://embeddedartistry.com/blog/2018/03/15/safely-storing-secrets-in-git/
The author suggests three ways of storing secrets in Git:
- BlackBox
- git-secret
- git-crypt
The author was using BlackBox but was planning to migrate to git-crypt. From a quick look at it, git-crypt seems like something I could use myself.
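A minimal sketch of the git-crypt flow (the file pattern and GPG identity are placeholders):

# one-time setup in the repository
git-crypt init

# mark which files are stored encrypted
echo 'secrets/** filter=git-crypt diff=git-crypt' >> .gitattributes

# grant a collaborator access via their GPG key
git-crypt add-gpg-user alice@example.com

git add .gitattributes secrets/
git commit -m "Encrypt secrets with git-crypt"

Files matching the pattern are transparently encrypted in the repository and decrypted in a collaborator's working tree after git-crypt unlock.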