I am trying to add JFrog Artifactory to Spinnaker so that Spinnaker will be able to fetch the Helm chart and make the deployment. I am trying this command but it's not working:
hal config artifact helm account add my-helm-account \
--username-password-file $USERNAME_PASSWORD_FILE
When I run the pipeline it shows me this error
Status: 500, URL: http://spin-clouddriver.spinnaker:7002/artifacts/fetch/, Message: Failed to download index.yaml file in
You'll need to provide the --repository flag as well. I'm guessing that spin-clouddriver URL is the default if a repository isn't specified.
The final hal command may look like this:
hal config artifact helm account add my-helm-account --username-password-file $USERNAME_PASSWORD_FILE --repository https://my.artifactory.com/artifactory/helm
Reference for the command: https://spinnaker.io/docs/reference/halyard/commands/#hal-config-artifact-helm-account-add
I'm new to the Terraform and Helm world! I need to set up Istio on an AWS EKS cluster. I was able to set up the EKS cluster using Terraform, and I'm thinking of installing Istio on top of it by writing Terraform modules. However, I found that we can also set up Istio on top of EKS using a Helm chart.
Can someone help me answer a few queries:
1. Should I install Istio using Terraform? If yes, is there any Terraform module available, or how can I write one?
2. Should I install Istio using a Helm chart? If yes, what are the pros and cons of it?
3. I need to write a pipeline to install Istio on the EKS cluster. Should I use a combination of both Terraform and Helm as the provider?
Thank you very much for your time. Appreciate all your help!
To extend @Chris's 3rd option of terraform + helm provider: as of version 1.12.0+, Istio officially has a working Helm repo (istio helm install), and that, combined with Terraform's helm provider (Terraform helm provider), allows an easy setup that is configured only by Terraform:
provider "helm" {
kubernetes {
// enter the relevant authentication
}
}
locals {
istio_charts_url = "https://istio-release.storage.googleapis.com/charts"
}
resource "helm_release" "istio-base" {
repository = local.istio_charts_url
chart = "base"
name = "istio-base"
namespace = var.istio-namespace
version = "1.12.1"
create_namespace = true
}
resource "helm_release" "istiod" {
repository = local.istio_charts_url
chart = "istiod"
name = "istiod"
namespace = var.istio-namespace
create_namespace = true
version = "1.12.1"
depends_on = [helm_release.istio-base]
}
resource "kubernetes_namespace" "istio-ingress" {
metadata {
labels = {
istio-injection = "enabled"
}
name = "istio-ingress"
}
}
resource "helm_release" "istio-ingress" {
repository = local.istio_charts_url
chart = "gateway"
name = "istio-ingress"
namespace = kubernetes_namespace.istio-ingress-label.id
version = "1.12.1"
depends_on = [helm_release.istiod]
}
This was the last step that was missing to make this production ready: it is no longer necessary to keep the Helm charts locally with a null_resource. If you wish to override the default Helm values, Artifact Hub shows them nicely: choose the relevant chart and see its values.
As @Matt Schuchard mentioned, this is a bit of an opinion-based question, so I will answer based on my understanding.
Question number 1.
To answer your question, Should I install Istio using Terraform?: yes, if you follow DevOps practices then you should write everything as code, so I would recommend doing that.
As per the second part of your question, If yes, is there any Terraform module available?: no, from what I see there is currently no Istio module for Terraform, only a Helm one.
As for the last part of the first question, How can I write a Terraform module?: I would recommend starting with the Terraform documentation. There is also a tutorial for creating a module.
Question number 2.
To answer your question, Should I install Istio using a Helm chart?: it depends on your use case; you can do it either with Helm or with istioctl/the Istio operator.
As for the following question, If yes, what are the pros and cons of it?: I'm not sure the current Helm chart is production ready. According to the Istio documentation, Providing the full configuration in an IstioOperator CR is considered an Istio best practice for production environments, so from what I understand you should rather use the operator than Helm. Also worth noting that the Helm chart went unused for several versions; it was brought back to life in version 1.8.
Question number 3.
As per the last question, I need to write a pipeline to install Istio on the EKS cluster. Should I use a combination of both Terraform and Helm as the provider?: it depends; it could be Terraform and Helm, but from what I see it's also possible to do that with Terraform and the Istio operator, and there is an example. So it's rather up to you to decide which path you will take.
I would also recommend taking a look at this reddit thread. You might find a few useful comments there about installing Istio with Terraform in a prod environment.
I have been researching this over the last months and want to add my findings to @Jakob's answer:
First, there is an official answer to the pros/cons of the different installation methods, so I will not say anything about that:
https://istio.io/latest/faq/setup/#install-method-selection
Basically all of them can be done with terraform in a certain way.
terraform + istioctl with terraform null_resource provider
This is basically the istioctl install -f <file> command. You can create a template file and run the istioctl install command with the null_resource provider.
resource "local_file" "setup_istio_config" {
content = templatefile("${path.module}/istio-operator.tmpl", {
enableHoldAppUntilProxyStarts = var.hold_app_until_proxy_starts
})
filename = "istio-operator.yaml"
}
resource "null_resource" "install_istio" {
provisioner "local-exec" {
command = "istioctl install -f \"istio-operator.yaml\" --kubeconfig ../${var.kubeconfig}"
}
depends_on = [local_file.setup_istio_config]
}
Pros:
Very easy setup
Cons:
How to upgrade using istioctl upgrade -f <file> has to be solved somehow
istioctl must be installed in different versions when handling multiple clusters with different Istio versions
The right istioctl version must be chosen on setup
I guess you can solve the upgrade process somehow, but the whole process is not really "infrastructure as code" enough. I didn't look into it further, because it doesn't seem to be good practice.
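For completeness, a hedged sketch of one way the upgrade could be wired up, with a second null_resource re-running istioctl (the content-based trigger and the -y flag are my assumptions, not from the original answer):

resource "null_resource" "upgrade_istio" {
  # Re-run whenever the rendered operator config changes (assumption)
  triggers = {
    config = local_file.setup_istio_config.content
  }
  provisioner "local-exec" {
    command = "istioctl upgrade -f \"istio-operator.yaml\" --kubeconfig ../${var.kubeconfig} -y"
  }
  depends_on = [null_resource.install_istio]
}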
terraform + istio operator with terraform null_resource provider and kubectl provider
Similarly, the Istio operator setup initializes the operator pod and takes an istio-operator.yaml to set up Istio for you.
resource "null_resource" "init_operator" {
provisioner "local-exec" {
command = "istioctl operator init --kubeconfig ../${var.kubeconfig}"
}
}
resource "kubectl_manifest" "setup_istio" {
yaml_body = <<YAML
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
name: istio-setup
namespace: istio-system
spec:
profile: default
hub: gcr.io/istio-release
tag: 1.9.2
components:
ingressGateways:
- name: istio-ingressgateway
enabled: true
meshConfig:
defaultConfig:
holdApplicationUntilProxyStarts: ${var.hold_app_until_proxy_starts}"
YAML
depends_on = [null_resource.init_operator]
}
It would be a good idea to wait for some seconds between the init and applying the config.
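A hedged way to add that delay purely in Terraform, assuming the hashicorp/time provider (the 30s duration is an arbitrary choice):

resource "time_sleep" "wait_for_operator" {
  create_duration = "30s"
  depends_on      = [null_resource.init_operator]
}

# Then point the manifest at the sleep instead of the init directly:
#   depends_on = [time_sleep.wait_for_operator]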
Here is a good article about doing this with Azure's aks:
https://medium.com/@vipinagarwal18/install-istio-on-azure-kubernetes-cluster-using-terraform-214f6d3f611
Pros:
Easy to set up
Easy to upgrade Istio using the kubectl provider
As long as helm is in alpha, this might be the best approach.
terraform + helm with terraform helm provider
Istio provides some charts for the different components when you download istioctl. Those can be used for installing it with Helm.
resource "helm_release" "istio_base" {
name = "istio-base"
chart = "./manifests/charts/base"
namespace = "istio-system"
}
Cons:
Not ready for production
Bonus
istio manifest + helm
Some time ago I read an article on how to use the Istio manifest from istioctl manifest generate in combination with Helm to install and manage Istio. This approach needs some custom code, but it could be done with Terraform and the helm provider as well.
Please read: https://karlstoney.com/2021/03/04/ci-for-istio-mesh/index.html
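For illustration, a hedged sketch of the rendering step only, with a null_resource (the file names are assumptions); the generated plain Kubernetes YAML can then be packaged into a chart or applied by other tooling:

resource "null_resource" "render_istio_manifest" {
  provisioner "local-exec" {
    # Render the full Istio install manifest to plain Kubernetes YAML
    command = "istioctl manifest generate -f istio-operator.yaml > istio-manifest.yaml"
  }
}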
Conclusion
Installing Istio with Terraform works but seems to be a bit dirty at the moment. Once the Helm setup is stable, I guess this would be the best approach, and with the helm provider it can be composed with Terraform creation of other resources. Terraform certainly misses an Istio provider, but I don't think they will create one in the foreseeable future.
For all those who found @Benda's solution to the point, here is the working template for the same. Since I faced a couple of issues with that template, I compiled it for my own use case. I hope it's helpful.
provider "helm" {
kubernetes {
config_path = "~/.kube/config"
}
}
provider "kubernetes" {
config_path = "~/.kube/config"
}
locals {
istio_charts_url = "https://istio-release.storage.googleapis.com/charts"
}
resource "kubernetes_namespace" "istio_system" {
metadata {
name = "istio-system"
labels = {
istio-injection = "enabled"
}
}
}
resource "helm_release" "istio-base" {
repository = local.istio_charts_url
chart = "base"
name = "istio-base"
namespace = kubernetes_namespace.istio_system.metadata.0.name
version = ">= 1.12.1"
timeout = 120
cleanup_on_fail = true
force_update = false
}
resource "helm_release" "istiod" {
repository = local.istio_charts_url
chart = "istiod"
name = "istiod"
namespace = kubernetes_namespace.istio_system.metadata.0.name
version = ">= 1.12.1"
timeout = 120
cleanup_on_fail = true
force_update = false
set {
name = "meshConfig.accessLogFile"
value = "/dev/stdout"
}
depends_on = [helm_release.istio-base]
}
resource "helm_release" "istio-ingress" {
repository = local.istio_charts_url
chart = "gateway"
name = "istio-ingress"
namespace = kubernetes_namespace.istio_system.metadata.0.name
version = ">= 1.12.1"
timeout = 500
cleanup_on_fail = true
force_update = false
depends_on = [helm_release.istiod]
}
PS: Please make sure you enable ports 15017 and 15021 in the master firewall rule so that the istio-ingress pod can start properly.
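If the cluster is EKS, a hedged sketch of such a rule (the security group variables are hypothetical; on GKE the equivalent would be a google_compute_firewall rule on the master range):

resource "aws_security_group_rule" "istio_webhook_ports" {
  for_each = toset(["15017", "15021"])

  description              = "Allow the control plane to reach Istio webhook/health ports"
  type                     = "ingress"
  from_port                = each.value
  to_port                  = each.value
  protocol                 = "tcp"
  security_group_id        = var.node_security_group_id    # hypothetical: worker node SG
  source_security_group_id = var.cluster_security_group_id # hypothetical: control plane SG
}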
I am managing my k8s cluster using Terraform and have Tiller version 0.10.4.
Now I made some changes in my Terraform file, so when I run terraform init I am getting the following error:
error initializing local helm home: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Failed to fetch https://kubernetes-charts.storage.googleapis.com/index.yaml : 403 Forbidden
So I changed the stable URL in my Terraform file, and now it looks something like this:
data "helm_repository" "stable" {
name = "stable"
url = "https://charts.helm.sh/stable"
}
provider "kubernetes" {
config_path = "kubeconfig.yaml"
}
provider "helm" {
install_tiller = true
version = "0.10.4"
service_account = "tiller"
namespace = "kube-system"
kubernetes {
config_path = "kubeconfig.yaml"
}
}
But I am still getting the same error.
The old Google-based chart storage system has been decommissioned. But also, Helm 2 is no longer supported at all, and Helm 3 does not use Tiller. You can find a static mirror of the old charts repo on GitHub if you go poking, but you need to upgrade to Helm 3 anyway, so just do that instead.
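For reference, a hedged sketch of what the config might look like after the upgrade (assuming a Helm-3-era helm provider, where the Tiller attributes and the helm_repository data source no longer exist; the chart name is a placeholder):

provider "helm" {
  # Helm 3: no Tiller, so no install_tiller / service_account / namespace here
  kubernetes {
    config_path = "kubeconfig.yaml"
  }
}

resource "helm_release" "example" {
  name       = "example"
  repository = "https://charts.helm.sh/stable" # repository URL now goes directly on the release
  chart      = "some-chart"                    # placeholder chart name
}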
I am somewhat new to Kubernetes, and I am trying to learn about deploying Airflow to Kubernetes.
My objective is to deploy an "out-of-the-box" (or at least close to that) Airflow deployment on Kubernetes. I have created the Kubernetes cluster via Terraform (on EKS) and would like to deploy Airflow to the cluster. I found that Helm can help me deploy Airflow more easily relative to other solutions.
Here is what I have tried so far (snippet and not complete code):
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
load_config_file = false
}
provider "helm" {
kubernetes {
config_path = "~/.kube/config"
}
}
data "helm_repository" "airflow" {
name = "airflow"
url = "https://airflow-helm.github.io/charts"
}
resource "helm_release" "airflow" {
name = "airflow-helm"
repository = data.helm_repository.airflow.metadata[0].name
chart = "airflow-chart"
}
I am not necessarily fixed on using Terraform (I just thought it might be easier and wanted to keep state), so I am also happy to discover other solutions that will help me deploy Airflow with all the pods needed.
You can install it using Helm from the official repository, but there is a lot of additional configuration to consider. The Airflow config is described in the chart's values.yaml. You can take a look at this article to check an example configuration.
For installation using Terraform you can take a look at this article, where both the Terraform config and the Helm chart's values are described in detail.
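A minimal hedged sketch of that approach (assuming the chart in the airflow-helm repo is named "airflow" and a hypothetical values file; with helm provider 1.0+ the repository URL goes directly on the release):

resource "helm_release" "airflow" {
  name             = "airflow"
  repository       = "https://airflow-helm.github.io/charts"
  chart            = "airflow" # chart name in that repo (assumption)
  namespace        = "airflow"
  create_namespace = true

  # Hypothetical values file holding the chart configuration
  values = [file("airflow-values.yaml")]
}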
I'm able to install Grafana using the stable/grafana chart, using Terraform and the Helm provider. I'm trying to configure Grafana with a new grafana.ini file, which should be possible using a set block; however, it doesn't appear to pick up the configuration at all.
I've also tried using the helm_release resource's values key to merge in the same config in YAML format (with a top-level grafana.ini key), also with no success.
What I'm trying to achieve is a file containing my config, in INI or YAML format, passed to the Grafana Helm chart so I can configure Grafana correctly (ultimately I need to configure OAuth providers via the config) using Terraform.
Relevant config snips below.
Chart https://github.com/helm/charts/tree/master/stable/grafana
Terraform v0.12.3
provider.helm v0.10.2
provider.kubernetes v1.8.0
grafana.ini
[security]
admin_user = username
main.tf (excerpt)
resource "helm_release" "grafana" {
chart = "stable/grafana"
name = "grafana"
set {
name = "grafana.ini"
value = file("grafana.ini")
}
}
I eventually found the correct way of merging the values key - it turns out (no surprise) I had the format of grafana.ini wrong when converting to YAML. Here's the working config:
config.yaml
grafana.ini:
  default:
    instance_name: my-server
  auth.basic:
    enabled: true
main.tf
resource "helm_release" "grafana" {
chart = "stable/grafana"
name = "grafana"
values = [file("config.yaml")]
}