I am wondering if it's possible to store credentials like passwords, tokens and keys safely in my GitLab project.
Currently there are a bunch of Java files with some passwords stored in them for testing purposes. However, I don't want to push this information to my repo for security reasons. I tried using environment variables in the project, but they only seem to work in the .gitlab-ci.yml file.
My question is: does anyone use a vault like HashiCorp's Vault, or a tool like BlackBox, to encrypt sensitive information?
Thanks
You can check out GitLab 12.9 (March 2020) which comes with:
HashiCorp Vault GitLab CI/CD Managed Application
GitLab wants to make it easy for users to have modern secrets management. We are now offering users the ability to install Vault within a Kubernetes cluster as part of the GitLab CI managed application process.
This will support the secure management of keys, tokens, and other secrets at the project level in a Helm chart installation.
See documentation and issue.
See also GitLab 13.4 (September 2020)
For Premium/Silver only:
Use HashiCorp Vault secrets in CI jobs
In GitLab 12.10, GitLab introduced functionality for GitLab Runner to fetch and inject secrets into CI jobs. GitLab is now expanding the JWT Vault Authentication method by building a new secrets syntax in the .gitlab-ci.yml file. This makes it easier for you to configure and use HashiCorp Vault with GitLab.
(Image: "Use HashiCorp Vault secrets in CI jobs", https://about.gitlab.com/images/13_4/vault_ci.png)
See Documentation and Issue.
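For illustration, the new syntax looks roughly like this; a minimal sketch, assuming Vault's JWT auth is already configured for your GitLab instance, and using a placeholder Vault URL and a KV secrets engine mounted at kv (all names and paths here are hypothetical):

```yaml
# .gitlab-ci.yml sketch: fetch a Vault secret into a CI job (Premium feature)
read_secret:
  variables:
    VAULT_SERVER_URL: https://vault.example.com:8200  # placeholder URL
  secrets:
    DATABASE_PASSWORD:
      # field 'password' at path 'production/db' in the 'kv' secrets engine
      vault: production/db/password@kv
  script:
    # by default the secret arrives as a file-type variable: the variable
    # holds the path to a temporary file containing the value
    - cat "$DATABASE_PASSWORD"
```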
If you don't want to use environment variables in GitLab, then you are effectively asking whether it is possible to store secrets, in encrypted form, in the repository itself. I have not done this myself, but I found this post about it:
https://embeddedartistry.com/blog/2018/03/15/safely-storing-secrets-in-git/
The author suggests three ways of storing secrets in git:
Blackbox
git-secret
git-crypt
The author was using BlackBox, but was going to migrate to git-crypt. From a quick look at it, git-crypt looks like something that I could use myself.
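For reference, basic git-crypt usage looks something like this (a minimal sketch; the repo path, file patterns, and GPG key ID are placeholders):

```sh
# Initialize git-crypt in an existing repository
cd my-repo
git-crypt init

# Tell git which files to transparently encrypt, via .gitattributes
cat >> .gitattributes <<'EOF'
secrets/** filter=git-crypt diff=git-crypt
*.key filter=git-crypt diff=git-crypt
EOF

# Grant access to a collaborator by their GPG key ID (placeholder)
git-crypt add-gpg-user 0xABCD1234

git add .gitattributes
git commit -m "Manage secrets with git-crypt"

# After cloning, a collaborator with a matching GPG key decrypts with:
git-crypt unlock
```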
Related
I want to deploy a web API on Google Cloud, and for test purposes I would just put the API key in the app.yaml file as an environment variable. Is this a security issue?
It's generally problematic to persist secrets to files. Even if app.yaml were inaccessible from the runtime service, you'd still face the risk of the key being exposed in build logs, or of inadvertently committing app.yaml to e.g. GitHub.
For "testing", you can run generally run an App Engine locally. This isn't a perfect replica of the production service but it should be sufficient for testing.
A solution for managing secrets is e.g. Google's Secret Manager. SDKs (encouraged) and the underlying REST API (discouraged) are available.
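For a quick feel of Secret Manager, here is a sketch using the gcloud CLI (the secret name and value are placeholders; in application code you would use the SDK instead):

```sh
# Create a secret and add a version holding the API key
gcloud secrets create my-api-key --replication-policy="automatic"
printf 'PLACEHOLDER_KEY_VALUE' | gcloud secrets versions add my-api-key --data-file=-

# Read the latest version at deploy/run time instead of hardcoding it in app.yaml
gcloud secrets versions access latest --secret=my-api-key
```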
My situation is as follows:
have a kubernetes cluster with a couple of nodes
have argocd installed on the cluster and working great
using gitlab for my repo and build pipelines
have another repo for storing my helm charts
have docker images being built in gitlab and pushed to my gitlab registry
have argocd able to point to my helm chart repo and sync the helm chart with my k8s cluster
have helm chart archive files pushed to my gitlab repo
While this is a decent setup, it's not ideal.
The first problem I faced with using a Helm chart git repo is that I can't (or don't know how to) differentiate my staging environment from my production environment. Since I have a dev environment and a prod environment in my cluster, Argo CD syncs both environments with the Helm chart repo. I could get around this with separate charts for each environment, but that isn't a valid solution.
The second problem I faced, while trying to get around the first one, is that I can't get Argo CD to pull Helm charts from a GitLab OCI registry. I set up my build pipeline to push the Helm chart archive file to my GitLab container registry with the tag dev-latest or prod-latest, which is great, just what I want. The problem is that Argo CD, as far as I can tell, can't pull from GitLab's container registry.
How do I go about automating my pipeline with GitLab as my repo and build pipeline, Helm for packaging my application, and Argo CD for syncing my Helm application with my k8s cluster?
"I can't get Argo CD to pull Helm charts from a GitLab OCI registry."
You might be interested in GitLab 14.1 (July 2021), which comes with:
Build, publish, and share Helm charts
Helm defines a chart as a Helm package that contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster.
For organizations that create and manage their own Helm charts, it’s important to have a central repository to collect and share them.
GitLab already supports a variety of other package manager formats.
Why not also support Helm? That’s what community member and MVP from the 14.0 milestone Mathieu Parent asked several months ago before breaking ground on the new GitLab Helm chart registry. The collaboration between the community and GitLab is part of our dual flywheel strategy and one of the reasons I love working at GitLab. Chapeau Mathieu!
Now you can use your GitLab project to publish and share packaged Helm charts.
Simply add your project as a remote, authenticating with a personal access token, deploy token, or CI/CD job token.
Once that’s done you can use the Helm client or GitLab CI/CD to manage your Helm charts.
You can also download the charts using the API or the user interface.
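Concretely, usage looks something like this (a sketch; the username, token, project ID, and channel name are placeholders, and pushing requires the helm cm-push plugin):

```sh
# Add the project's chart repository ('stable' is the channel name)
helm repo add --username <username> --password <personal_access_token> \
  gitlab https://gitlab.example.com/api/v4/projects/<project_id>/packages/helm/stable

# Publish a packaged chart (via the cm-push plugin)
helm cm-push mychart-0.1.0.tgz gitlab

# Consume it
helm repo update
helm install my-release gitlab/mychart
```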
What’s next? First, we’d like to present additional metadata for charts.
Then we’ll start dogfooding the feature by using it as a replacement for https://charts.gitlab.io/.
So, try out the feature and let us know how it goes by commenting in the epic GitLab-#6366.
See Documentation and issue.
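As for differentiating dev and prod with a single chart, a common Argo CD pattern is one Application per environment, each pointing at its own values file; a hypothetical sketch (the repo URL, paths, and names are placeholders):

```yaml
# Argo CD Application for the dev environment; a second Application
# (e.g. myapp-prod) would reference values-prod.yaml and the prod namespace.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.example.com/group/helm-charts.git
    path: charts/myapp
    targetRevision: main
    helm:
      valueFiles:
        - values-dev.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: dev
```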
Operation failed. Check pod logs for install-runner for more details.
I am getting this error while trying to install GitLab runner.
What I have done so far
successfully installed Kubernetes cluster
created a demo project in Gitlab
provided details to GitLab for Kubernetes cluster
Then, while trying to install the runner, it shows a failure.
What am I missing here? [please check the attached image]
I was facing the same issue. In my case, it was because I had not set "RBAC-enabled cluster" to true. I deleted the integration, checked "RBAC-enabled cluster" when I re-integrated, and it worked.
Runner logs:
kubectl logs install-runner -n gitlab-managed-apps
Error: query: failed to query with labels: secrets is forbidden: User "system:serviceaccount:gitlab-managed-apps:default" cannot list resource "secrets" in API group "" in the namespace "gitlab-managed-apps"
Reference:
gitlab issue
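Given that the error shows the default service account in gitlab-managed-apps lacking permission to list secrets, one possible workaround sketch (not an official fix, and it grants broad rights to that service account, so use with care):

```sh
# Grant the gitlab-managed-apps default service account cluster-admin
# so the runner install job can read secrets in its namespace
kubectl create clusterrolebinding gitlab-managed-apps-default-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=gitlab-managed-apps:default
```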
Warning, with GitLab 13.11 (April 2021):
One-click GitLab Managed Apps will be removed in GitLab 14.0
We are deprecating one-click install of GitLab Managed Apps.
Although they made it very easy to get started with deploying to Kubernetes from GitLab, the overarching community feedback was that they were not flexible or customizable enough for real-world Kubernetes applications.
Instead, our future direction will focus on installing apps on Kubernetes via GitLab CI/CD in order to provide a better balance between ease-of-use and expansive customization.
We plan to remove one-click Managed Apps completely in GitLab version 14.0.
This will not affect how existing managed applications run inside your cluster, however, you’ll no longer have the ability to modify those applications via the GitLab UI.
We recommend cluster administrators plan to migrate any existing managed applications by reinstalling them either manually or via CI/CD. Migration instructions will be available in our documentation later.
For users of alerts on managed Prometheus, in GitLab version 14.0, we will also remove the ability to setup/modify alerts from the GitLab UI. This change is necessary because the existing solution will no longer function once managed Prometheus is removed.
Deprecation date: May 22, 2021
I have a private GitLab instance with multiple projects and GitLab CI enabled. The infrastructure is provided by Google Cloud Platform, and the GitLab Pipeline Runner is configured in a Kubernetes cluster.
This setup works very well for basic pipelines running tests etc. Now I'd like to start with CD, and to do that I need a manual acceptance step on the pipeline, which means the person reviewing it needs access to the current state of the app.
What I'm thinking is having a kubernetes deployment for the pipeline that would be executed once you try to access it (so we don't waste cluster resources) and would be destroyed once the reviewer accepts the pipeline or after some threshold.
So the deployment would be executed in the same cluster as Gitlab Runner (or different?) and would be accessible by unique URI (we're mostly talking about web-server apps) e.g. https://pipeline-58949526.git.mydomain.com
While in theory, it all makes sense to me, I don't really know how to set this up properly.
Does anyone have a similar setup? Is my view on this topic too simple? Let me know!
Thanks
If you want to see how to automate CI/CD with multiple environments on GKE, using GitOps for promotion between environments and Preview Environments on Pull Requests, you might want to check out my recent talk on Jenkins X at DevOxx UK, where I do a live demo of this on GKE.
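In GitLab terms, those per-change preview environments map to dynamic review environments; a minimal .gitlab-ci.yml sketch, assuming placeholder job names, deploy scripts, and domain:

```yaml
deploy_review:
  stage: deploy
  script:
    - ./deploy-review.sh    # placeholder: deploys this ref to the cluster
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.git.mydomain.com
    on_stop: stop_review
    auto_stop_in: 1 week    # tear down automatically after a threshold

stop_review:
  stage: deploy
  script:
    - ./teardown-review.sh  # placeholder: removes the deployment
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  when: manual              # the reviewer stops it once done
```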
SSH secrets are required to clone a private repo from GitHub in Origin.
I created a project, added SSH Secrets to the build config, all went fine.
Now I am creating a template, so users will create new project and use my template to deploy their applications.
Here, after creating the project, the build does not start because of the missing SSH secret. Is it possible to share an SSH secret between namespaces? Then I could create one SSH secret in the OpenShift project, and users could use it straight away without assigning secrets to the build config after creating every project.
EDIT: Is this possible in Kubernetes?
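For what it's worth, secrets are namespaced in both OpenShift and Kubernetes, so they can't be referenced across namespaces directly; a common workaround is to copy the secret into each namespace, sketched here with placeholder names:

```sh
# Copy an existing SSH secret from one namespace to another (requires jq;
# kubectl works the same way as OpenShift's oc here). Strips the
# namespace-specific metadata before re-creating the object.
kubectl get secret my-ssh-secret -n shared-project -o json \
  | jq 'del(.metadata.namespace, .metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp)' \
  | kubectl apply -n user-project -f -
```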