How to set the shared home for JIRA Data Center when it is installed using a Helm chart?

I installed the JIRA Data Center Helm chart and had Kubernetes provision the shared home dynamically by setting the create flag to true, as shown in this link.
https://github.com/atlassian/data-center-helm-charts/blob/main/src/main/charts/jira/values.yaml#L219
This created the AWS EFS access point as expected. The pod also shows the "JIRA_SHARED_HOME" environment variable on the Jira container. However, the JIRA software doesn't seem to honor the environment variable: the log (/var/atlassian/jira/logs/atlassian-jira.log) shows that jira.shared.home is set to the same value as jira.local.home, which is /var/atlassian/application-data/jira.
I expected jira.shared.home to be set to /var/atlassian/application-data/shared-home, as the link below indicates.
https://github.com/atlassian/data-center-helm-charts/blob/main/src/main/charts/jira/values.yaml#L249
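For reference, a minimal sketch of the values I am referring to, using the key names from the linked values.yaml (the storage size here is just an example):
volumes:
  sharedHome:
    persistentVolumeClaim:
      # Have the chart create the PVC for the shared home dynamically
      create: true
      resources:
        requests:
          storage: 1Gi
    # Where the shared home is mounted inside the Jira container
    mountPath: "/var/atlassian/application-data/shared-home"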

I found a workaround based on the following documentation, which requires a cluster.properties file in the Jira local home.
https://developer.atlassian.com/server/jira/platform/configuring-a-jira-cluster/#jira-setup
The Helm chart allows creating config/properties files from Kubernetes ConfigMaps.
https://github.com/atlassian/data-center-helm-charts/blob/main/src/main/charts/jira/values.yaml#L765
Create a ConfigMap k8s object:
apiVersion: v1
kind: ConfigMap
metadata:
  name: jira-configs
data:
  cluster.properties: |
    jira.shared.home=/var/atlassian/application-data/shared-home
Add the following to the JIRA Data Center Helm chart values.yaml:
additionalFiles:
  - name: jira-configs
    type: configMap
    key: cluster.properties
    mountPath: /var/atlassian/application-data/jira
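To confirm the workaround took effect, I check the mounted file and the log from inside the pod (the pod name is a placeholder):
kubectl exec -it <jira-pod> -- cat /var/atlassian/application-data/jira/cluster.properties
kubectl exec -it <jira-pod> -- grep jira.shared.home /var/atlassian/jira/logs/atlassian-jira.log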

Related

How to generate kubernetes configmap from the Quarkus `application.properties` with its `helm extension`?

How to have the properties of Quarkus application.properties be available as configmap or environment variable in the Kubernetes container?
Quarkus provides Helm and Kubernetes extensions that generate resources (YAML) during the build, which can be used to deploy the application to Kubernetes. However, these extensions do not explain how to generate a ConfigMap to hold the application properties set in application.properties, and the documentation site does not give directions on it either.
This is the purpose of the Kubernetes Config extension. Basically, after adding the Kubernetes Config, Kubernetes, and Helm extensions to your Maven/Gradle configuration, you first need to enable it by adding the following properties to your application.properties:
quarkus.kubernetes-config.enabled=true
quarkus.kubernetes-config.config-maps=app-config
With these two properties, Quarkus will try to load the ConfigMap named app-config at startup as a config source.
Where is the ConfigMap named app-config? You need to write it yourself and put the application properties in it, for example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  application.properties: |
    hello.message=Hello %s from configmap
and then add this content to the file src/main/kubernetes/kubernetes.yml. (Note that the file must be named kubernetes.yml and live in the folder src/main/kubernetes.) More information is in this link.
The Kubernetes extension will aggregate the resources in src/main/kubernetes/kubernetes.yml into the generated target/kubernetes/kubernetes.yml (you will notice your ConfigMap is there).
And finally, the Helm extension will inspect the target/kubernetes folder and create the Helm chart templates accordingly.
You can check out a complete example at this link.
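As a rough sketch of the overall workflow, assuming a Maven build (the exact Helm output layout under target/helm can vary between versions of the extensions):
# Build the project; the Kubernetes, Kubernetes Config, and Helm extensions run during packaging
./mvnw clean package
# Inspect the aggregated manifests; the ConfigMap from src/main/kubernetes/kubernetes.yml should appear here
cat target/kubernetes/kubernetes.yml
# Inspect the generated Helm chart output
ls target/helm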

Application not showing in ArgoCD when applying yaml

I am trying to set up ArgoCD for GitOps. I used the ArgoCD Helm chart to deploy it to my local Docker Desktop Kubernetes cluster, and I am trying to use the app-of-apps pattern for ArgoCD.
The problem is that when I apply the YAML to create the root app, nothing happens.
Here is the YAML (created by the command helm template apps/ -n argocd from my public repo https://github.com/gajewa/gitops):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    server: http://kubernetes.default.svc
    namespace: argocd
  project: default
  source:
    path: apps/
    repoURL: https://github.com/gajewa/gitops.git
    targetRevision: HEAD
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
The resource is created, but nothing actually happens in the Argo UI. No application is visible. So I tried to create the app via the web UI, even pasting the YAML in there. The application is created in the web UI and it seems to synchronise and see the repo with the YAML templates of Prometheus and Argo, but it doesn't actually create the Prometheus application in ArgoCD, and the Prometheus part of the root app is forever progressing.
Here are some screenshots: the main page with the root application (where argo-cd and prometheus should also be visible but aren't), and the root app view, where something is created for each template but Argo apparently can't create Kubernetes deployments/pods etc. from it.
I thought maybe the CRD definitions are not present in the k8s cluster but I checked and they're there:
λ kubectl get crd
NAME                       CREATED AT
applications.argoproj.io   2021-10-30T16:27:07Z
appprojects.argoproj.io    2021-10-30T16:27:07Z
I've run out of things to check for why the apps aren't actually deployed. I was going by this tutorial: https://www.arthurkoziel.com/setting-up-argocd-with-helm/
The problem is that you have to add the following to the metadata of your manifest file. Just change the namespace to the one your ArgoCD was deployed in (the default is argocd):
metadata:
  namespace: argocd
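Applied to the manifest from the question, this would look something like the sketch below; only the namespace line in metadata is new, everything else is unchanged:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd   # must match the namespace ArgoCD is running in
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    server: http://kubernetes.default.svc
    namespace: argocd
  project: default
  source:
    path: apps/
    repoURL: https://github.com/gajewa/gitops.git
    targetRevision: HEAD
  syncPolicy:
    automated:
      prune: true
      selfHeal: true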
From another SO post:
https://stackoverflow.com/a/70276193/13641680
It turns out that, at the moment, ArgoCD can only recognize application declarations made in the ArgoCD namespace.
Related GitHub Issue

kubernetes config map data value externalisation

I'm installing fluent-bit in our k8s cluster. I have the Helm chart for it in our repo, and Argo is doing the deployment.
Among the resources in the Helm chart is a ConfigMap with a data value as below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit
  labels:
    app: fluent-bit
data:
  ...
  output-s3.conf: |
    [OUTPUT]
        Name s3
        Match *
        bucket bucket/prefix/random123/test
        region ap-southeast-2
  ...
My question is: how can I externalize the value for the bucket so it's not hardcoded (please note that the bucket value contains random numbers)? The S3 bucket is created by a separate app that runs on the same master node, so the randomly generated bucket name is available as an environment variable, e.g. doing "echo $s3bucketName" on the node gives the actual value.
I have tried the following in the ConfigMap, but it didn't work and just gets set as-is when inspected on the pod:
bucket $(echo $s3bucketName)
Using Helm, I know it can be achieved with something like the below, and the value can then be populated from the environment variable with something like helm --set. But the deployment happens automatically through ArgoCD, so there is no obvious place to run a helm --set command (or please let me know if otherwise).
bucket {{.Values.s3.bucket}}
TIA
Instead of using helm install, you can use helm template ... --set ... > out.yaml to locally render your chart into a YAML file. This file can then be processed by Argo.
Docs
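For example, something along these lines; the chart path, release name, and s3.bucket value key are assumptions based on the question:
helm template fluent-bit ./fluent-bit-chart --set s3.bucket="$s3bucketName" > out.yaml
# commit out.yaml (or the directory containing it) to the repository that ArgoCD watches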
With FluentBit you should be able to use environment variables such as:
output-s3.conf: |
  [OUTPUT]
      Name s3
      Match *
      bucket ${S3_BUCKET_NAME}
      region ap-southeast-2
You can then set the environment variable in your Helm values. Depending on the chart you are using and how values are passed, you may have to perform a different setup, but for example, using the official Fluent Bit chart with a values-prod.yml like:
env:
  - name: S3_BUCKET_NAME
    value: "bucket/prefix/random123/test"
Using ArgoCD, you probably have a Git repository where Helm values files are defined (like values-prod.yml) and/or an ArgoCD Application defining values directly. For example, if you have an ArgoCD Application defined such as:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  # [...]
spec:
  source:
    # ...
    helm:
      # Helm values files for overriding values in the helm chart
      valueFiles:
        # You can update this file
        - values-prod.yaml
      # Helm values
      values: |
        # Or update values here
        env:
          - name: S3_BUCKET_NAME
            value: "bucket/prefix/random123/test"
  # ...
You should be able to update either values-prod.yml in the repository used by ArgoCD, or the values: block directly, with your environment variable.
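Once the chart is synced, a quick way to confirm the variable landed on the container spec (the pod name is a placeholder):
kubectl get pod <fluent-bit-pod> -o jsonpath='{.spec.containers[0].env}'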

GitOps (Flux) install of standard Jenkins Helm chart in Kubernetes via HelmRelease operator

I've just started working with Weaveworks' Flux GitOps system in Kubernetes. I have regular deployments (deployments, services, volumes, etc.) working fine, and I'm now trying to deploy a Helm chart for the first time.
I've followed the instructions in this tutorial: https://github.com/fluxcd/helm-operator-get-started and have its sample service working after making a few small changes. So I believe that I have all the right tooling in place, including the custom HelmRelease K8s operator.
I want to deploy Jenkins via Helm, which if I do manually is as simple as this Helm command:
helm install --set persistence.existingClaim=jenkins --set master.serviceType=LoadBalancer jenkins stable/jenkins
I want to convert this to a HelmRelease object in my Flux-managed GitHub repo. Here's what I've got, per what documentation I can find:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: jenkins
  namespace: jenkins
  updating-applications/
  fluxcd.io/ignore: "false"
spec:
  releaseName: jenkins
  chart:
    git: https://github.com/helm/charts/tree/master
    path: stable/jenkins
    ref: master
  values:
    persistence:
      existingClaim: jenkins
    master:
      serviceType: LoadBalancer
I have this in the file 'jenkins/jenkins.yaml' at the root of the location in my Git repo that Flux is monitoring. Adding this file does nothing: I get no new K8s objects, no HelmRelease object, and no new Helm release when I run "helm list -n jenkins".
I see some mention of having to have 'image' tags in my 'values' section, but since I don't need to specify any images in my manual call to Helm, I'm not sure what I would add in terms of 'image' tags. I've also seen examples of HelmRelease definitions that don't have 'image' tags, so it seems that they aren't absolutely necessary.
I've played around with adding a few annotations to my 'metadata' section:
annotations:
  # fluxcd.io/automated: "true"
  # per: https://blog.baeke.info/2019/10/10/gitops-with-weaveworks-flux-installing-and-updating-applications/
  fluxcd.io/ignore: "false"
But none of that has helped to get things rolling. Can anyone tell me what I have to do to get the equivalent of the simple Helm command at the top of this post working with Flux/GitOps?
Have you tried checking the logs on the fluxd and flux-helm-operator pods? I would start there to see what error message you're getting. One thing I'm seeing is that you're using HTTPS for Git. You may want to double-check, but I don't recall ever seeing any documentation configuring chart pulls via Git to use anything other than SSH. Moreover, I'd recommend just pulling that chart from the stable Helm repository anyhow:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: jenkins
  namespace: jenkins
  annotations: # not sure what updating-applications/ was?
    fluxcd.io/ignore: "false" # pretty sure this is false by default and can be omitted
spec:
  releaseName: jenkins
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com/
    name: jenkins
    version: 1.9.16
  values:
    persistence:
      existingClaim: jenkins
    master:
      serviceType: LoadBalancer
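For the log check mentioned at the start of this answer, something like the following should work if you kept the defaults from the helm-operator-get-started tutorial (the flux namespace and deployment names are assumptions):
kubectl -n flux logs deployment/flux
kubectl -n flux logs deployment/flux-helm-operator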

Standard way of keeping Dockerhub credentials in Kubernetes YAML resource

I am currently implementing a CI/CD pipeline using Docker, Kubernetes, and Jenkins for my microservices deployment, and I am testing the pipeline using a public repository that I created on Dockerhub.com. When I tried the deployment using a Kubernetes Helm chart, I was able to add all my credentials in the values.yaml file (the default file for configuration when creating a Helm chart).
Confusion
Now I have removed my Helm chart, and I am only using plain deployment and service YAML files. So how can I add my Dockerhub credentials here?
Do I need to use an environment variable? Or do I need to create a separate YAML file for the credentials and reference it in Deployment.yaml?
If I am using the imagePullSecrets way, how can I create a separate YAML file for the credentials?
From the Kubernetes point of view (see Pull an Image from a Private Registry), you can create a Secret and add the necessary information to your YAML (Pod/Deployment).
Steps:
1. Create a Secret by providing credentials on the command line:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
2. Create a Pod that uses your Secret (example pod):
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
    - name: private-reg-container
      image: <your-private-image>
  imagePullSecrets:
    - name: regcred
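To answer the "separate YAML file" part of the question: one common option is to have kubectl render the Secret manifest for you and keep that file (note the credentials are only base64-encoded in the resulting file, not encrypted):
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email> --dry-run=client -o yaml > regcred.yaml
The generated regcred.yaml has roughly this shape and can be referenced from imagePullSecrets as above:
apiVersion: v1
kind: Secret
metadata:
  name: regcred
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded docker config>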
You can pass the Dockerhub creds as environment variables in Jenkins only, and imagePullSecrets are to be made as per the Kubernetes docs; as they are one-time things, you can add them directly to the required clusters.