Is it possible to create a TLS Kubernetes secret using Azure Key Vault data resources in Terraform?

I have a certificate file and a private key file that I am using to implement tls encrypted traffic for several different k8s pods running under an NGINX ingress load balancer. This works fine (i.e. the web apps are visible and show as secure in a browser) if I create the kubernetes.io/tls secret in either of these ways:
Use kubectl: kubectl create secret tls my-tls-secret --key <path to key file> --cert <path to cert file>.
Reference those files locally in terraform:
resource "kubernetes_secret" "my_tls_secret" {
metadata {
name = "my-tls-secret"
}
type = "kubernetes.io/tls"
data = {
"tls.crt" = file("${path.module}/certfile.cer"),
"tls.key" = file("${path.module}/keyfile.key")
}
}
However, neither of these methods is ideal: #1 turns my terraform plan/apply into a 2-step process, and with #2 I don't want to commit the key file to source control for security reasons.
So, my question is: is there a way to do this by using some combination of Azure Key Vault data resources (i.e. keys, secrets or certificates)?
I have tried the following:
Copying/pasting the cert and key into Key Vault secrets (I have also tried base64-encoding the values before pasting them into the Key Vault and wrapping the tls.crt and tls.key values in base64decode() in the Terraform):
data "azurerm_key_vault_secret" "my_private_key" {
name = "my-private-key"
key_vault_id = data.azurerm_key_vault.mykv.id
}
data "azurerm_key_vault_secret" "my_certificate" {
name = "my-certificate"
key_vault_id = data.azurerm_key_vault.mykv.id
}
resource "kubernetes_secret" "my_tls_secret" {
metadata {
name = "my-tls-secret"
}
type = "kubernetes.io/tls"
data = {
"tls.crt" = data.azurerm_key_vault_secret.my_certificate.value,
"tls.key" = data.azurerm_key_vault_secret.my_private_key.value
}
}
Importing the cert as an Azure Key Vault certificate and accessing its attributes like so:
data "azurerm_key_vault_certificate_data" "my_certificate_data" {
name = "my-certificate"
key_vault_id = data.azurerm_key_vault.mykv.id
}
resource "kubernetes_secret" "my_tls_secret" {
metadata {
name = "my-tls-secret"
}
type = "kubernetes.io/tls"
data = {
"tls.crt" = data.azurerm_key_vault_certificate_data.my_certificate_data.pem,
"tls.key" = data.azurerm_key_vault_certificate_data.my_certificate_data.key
}
}
which results in an error in the NGINX ingress log of:
[lua] certificate.lua:253: call(): failed to convert private key from PEM to DER: PEM_read_bio_PrivateKey() failed, context: ssl_certificate_by_lua*, client: xx.xx.xx.xx, server: 0.0.0.0:443
Both of these attempts resulted in failure and the sites ended up using the default/fake/acme Kubernetes certificate, so they are shown as insecure in a browser.
I could potentially store the files in a storage container and wrap my terraform commands in a script that pulls the cert/key from the storage container first and then uses working method #2 from above (roughly as sketched below), but I'm hoping there's a way to avoid that which I'm just missing. Any help would be greatly appreciated!
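Roughly, that wrapper would look something like this (the storage account and container names here are just placeholders):
az storage blob download --account-name mystorageacct --container-name tls-files \
  --name certfile.cer --file ./certfile.cer
az storage blob download --account-name mystorageacct --container-name tls-files \
  --name keyfile.key --file ./keyfile.key
terraform apply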

The first attempt from the original post (storing the cert and key as Key Vault secrets) works - the key point I was missing was how I was getting the cert/key into Azure Key Vault. As mentioned in the post, I was copy/pasting the text from the files into the web portal's secret creation UI. Something was getting lost in translation doing it this way. The right way to do it is to use the Azure CLI, like so:
az keyvault secret set --vault-name <vault name> --name my-private-key --file <path to key file>
az keyvault secret set --vault-name <vault name> --name my-certificate --file <path to cert file>
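With the secrets stored that way, the data sources from the first attempt above work as-is. To sanity-check that the certificate made it into the cluster intact, something like this should work (secret name from the question):
kubectl get secret my-tls-secret -o jsonpath='{.data.tls\.crt}' | base64 --decode | openssl x509 -noout -subject -dates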

Related

InfluxDB2 on Kubernetes not using existing admin password/token secret

I'm installing InfluxDB2 on a Kubernetes cluster (AWS EKS) and in the helm chart I specify an existing secret name "influxdb-auth" for admin user credentials. When I try to log in to the web admin interface, it does not accept the password or token from that secret. If I don't specify an existing secret, it automatically creates a secret "influxdb2-auth" and I can retrieve and use the password successfully, but it will not use the existing secret. Also, when I specify the existing secret "influxdb-auth", it does not create a secret "influxdb2-auth", so I can't retrieve the password it has generated. I have tried naming the existing secret "influxdb2-auth" but that also did not work. Any ideas on what the problem might be?
Section from values.yaml:
## Create default user through docker entrypoint
## Defaults indicated below
##
adminUser:
  organization: "test"
  bucket: "default"
  user: "admin"
  retention_policy: "0s"
  ## Leave empty to generate a random password and token.
  ## Or fill any of these values to use fixed values.
  password: ""
  token: ""
  ## The password and token are obtained from an existing secret. The expected
  ## keys are `admin-password` and `admin-token`.
  ## If set, the password and token values above are ignored.
  existingSecret: influxdb-auth
To anyone coming here from the future: make sure you run
echo $(kubectl get secret influxdb-influxdb2-auth -o "jsonpath={.data['admin-password']}" --namespace monitoring | base64 --decode)
after the first installation. The first time InfluxDB2 starts it runs its setup task; subsequent helm install/upgrade runs seem to save a new password in the secret which isn't the one on the file system.
I had to delete the contents of the InfluxDB PVC and rerun the installation.
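For the original problem, note that the values.yaml comments above say the existing secret must expose the keys admin-password and admin-token, so it would need to be created along these lines (namespace and values are placeholders):
kubectl create secret generic influxdb-auth --namespace monitoring \
  --from-literal=admin-password='<password>' \
  --from-literal=admin-token='<token>'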

Azure Terraform Unable to Set secret in KeyVault during deployment

I am facing a blocker for which I can't seem to find a practical solution.
I am using Azure Terraform to create a storage account, and I would like, during the release pipeline, to be able to set the connection string of this storage account as a secret in an existing Key Vault.
So far I am able to retrieve secrets from this Key Vault, as I am using a managed identity which has the following permissions on the Key Vault:
keys: get, list
secrets: get, list, set
certificates: get, list
The workflow in my Terraform is as follows:
Retrieve the Key Vault data:
data "azurerm_key_vault" "test" {
name = "test"
resource_group_name = "KeyVault-test"
}
Retrieve the user assigned identity data:
data "azurerm_user_assigned_identity" "example" {
name = "mng-identity-example"
resource_group_name = "managed-identity-example"
}
Once I have those two data sources, I try to create the secret as follows:
resource "azurerm_key_vault_secret" "secretTest" {
key_vault_id = data.azurerm_key_vault.test.id
name = "secretTest"
value = azurerm_storage_account.storageaccount.primary_connection_string
}
When the release pipeline runs this Terraform, it fails with an Access Denied error,
which is fully understandable, as this Terraform run does not have permission to set or retrieve the secret.
And this is the part on which I am blocked.
Can anyone help me understand how I can use my managed identity to set this secret?
I looked into the Terraform documentation but couldn't find any steps or explanation.
Thank you so much for your help and time, and please ask if you need more info.
Please make sure that the service principal you are using to log in to Azure with Terraform has the same permissions you assigned to the managed identity.
provider "azurerm" {
features {}
subscription_id = "00000000-0000-0000-0000-000000000000"
client_id = "00000000-0000-0000-0000-000000000000" ## This Client ID needs to have the permissions in Keyvault access policy which you have provided to the managed identity.
client_secret = var.client_secret
tenant_id = "00000000-0000-0000-0000-000000000000"
}
OR
If you are using a Service Connection to connect the DevOps pipeline to Azure and use it in Terraform, then you need to give that DevOps service connection (service principal) the permissions in the access policy.
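For reference, granting those Key Vault permissions to the pipeline's service principal can be done with the Azure CLI, roughly like this (vault name and client ID are placeholders):
az keyvault set-policy --name <vault-name> \
  --spn <service-principal-client-id> \
  --secret-permissions get list set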

Authenticating to GKE master in Python

I need to authenticate to a Kubernetes cluster provisioned in GKE using the Kubernetes Python client and the Google Cloud python client. I would prefer not to shell out to gcloud for several reasons:
relying on the system shell gcloud in a Python script when I have a native Google Cloud library is inelegant
it requires the system to have gcloud
I would have to switch users to the relevant ServiceAccount and switch back
It incurs the cost of starting/joining another process
As such, the workflow of gcloud container clusters get-credentials (which delegates to gcloud config config-helper) will not suffice to get me the API key I need. How do I get the equivalent output with the Google Cloud Python API?
Here is what I have so far:
import kubernetes.client
import googleapiclient.discovery
import base64

# get the cluster object from GKE
gke = googleapiclient.discovery.build('container', 'v1', credentials=config['credentials'])
name = f'projects/{config["project_id"]}/locations/{config["location"]}/clusters/{config["name"]}'
gke_clusters = gke.projects().locations().clusters()
gke_cluster = gke_clusters.get(name=name).execute()

# set up Kubernetes Config
kube_config = kubernetes.client.Configuration()
kube_config.host = 'https://{0}/'.format(gke_cluster['endpoint'])
kube_config.verify_ssl = True
#kube_config.api_key['authenticate'] = "don't know what goes here"

# regrettably, the Kubernetes client requires `ssl_ca_cert` to be a path, not the literal cert, so I will write it here.
kube_config.ssl_ca_cert = 'ssl_ca_cert'
with open(kube_config.ssl_ca_cert, 'wb') as f:
    f.write(base64.decodebytes(gke_cluster['masterAuth']['clusterCaCertificate'].encode()))

# use Kubernetes client to do something
kube_client = kubernetes.client.ApiClient(configuration=kube_config)
kube_v1 = kubernetes.client.CoreV1Api(kube_client)
kube_v1.list_pod_for_all_namespaces(watch=False)
Below is a solution that pulls the access token out of the googleapiclient, rather than copy-pasting things manually.
import googleapiclient.discovery
from tempfile import NamedTemporaryFile
import kubernetes
import base64

def token(*scopes):
    credentials = googleapiclient._auth.default_credentials()
    scopes = [f'https://www.googleapis.com/auth/{s}' for s in scopes]
    scoped = googleapiclient._auth.with_scopes(credentials, scopes)
    googleapiclient._auth.refresh_credentials(scoped)
    return scoped.token

def kubernetes_api(cluster):
    config = kubernetes.client.Configuration()
    config.host = f'https://{cluster["endpoint"]}'
    config.api_key_prefix['authorization'] = 'Bearer'
    config.api_key['authorization'] = token('cloud-platform')

    with NamedTemporaryFile(delete=False) as cert:
        cert.write(base64.decodebytes(cluster['masterAuth']['clusterCaCertificate'].encode()))
        config.ssl_ca_cert = cert.name

    client = kubernetes.client.ApiClient(configuration=config)
    api = kubernetes.client.CoreV1Api(client)
    return api

def run(cluster):
    """You'll need to give whichever account `googleapiclient` is using the
    'Kubernetes Engine Developer' role so that it can access the Kubernetes API.

    `cluster` should be the dict you get back from `projects.zones.clusters.get`
    and the like."""
    api = kubernetes_api(cluster)
    print(api.list_pod_for_all_namespaces())
Figuring this out took longer than I care to admit. @Ivan's post helped a lot.
In order to authenticate to a GKE cluster, you can use a service account to connect to a project and then a generated secret key from GKE to authenticate to a cluster. Here are the steps:
Create a service account in GCP. Go to IAM > Service Accounts > create a service account. Give it the Project Owner role. Once the SA is created, create a key and download it as JSON.
Upload key.json to the folder where you have your .py script.
Get the API token. This is your main question; you can get it by reading a token file (a one-line shortcut is shown at the end of this answer):
First run kubectl get secrets
You will get 'default-token-xxxxx'
Run kubectl describe secrets default-token-xxxxx (replace xxxxx with your token name).
The token parameter displayed is your "API key". Copy it into your script.
Create the script. It is a bit different than yours for a few reasons: you need to authenticate to the project first with a service account, then you need to pass the API token, and you also need to get the SSL certificate when authenticating to the GKE master.
import base64, pprint
import kubernetes.client
import googleapiclient.discovery
from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_file("key.json")
gke = googleapiclient.discovery.build('container', 'v1', credentials=credentials)
name = 'projects/your_project/locations/your_zone/clusters/your_gke_cluster'
gke_clusters = gke.projects().locations().clusters()
gke_cluster = gke_clusters.get(name=name).execute()

kube_config = kubernetes.client.Configuration()
kube_config.host = 'https://{}'.format(gke_cluster['endpoint'])
kube_config.verify_ssl = True
kube_config.api_key['authorization'] = 'your_api_token'
kube_config.api_key_prefix['authorization'] = 'Bearer'
kube_config.ssl_ca_cert = 'ssl_ca_cert'
with open(kube_config.ssl_ca_cert, 'wb') as f:
    f.write(base64.decodebytes(gke_cluster['masterAuth']['clusterCaCertificate'].encode()))

kube_client = kubernetes.client.ApiClient(configuration=kube_config)
kube_v1 = kubernetes.client.CoreV1Api(kube_client)
pprint.pprint(kube_v1.list_pod_for_all_namespaces())
Specific fields:
your_project - from GCP
your_zone - where the GKE cluster is created
your_gke_cluster - the GKE cluster name
your_api_token - what you get in step 3.
This should be enough to authenticate you to a GKE cluster.
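As a shortcut for step 3, the token can also be pulled in a single command (replace the secret name with yours):
kubectl get secret default-token-xxxxx -o "jsonpath={.data.token}" | base64 --decode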

HashiCorp Vault CLI returns 403 when trying to use kv

I set up Vault backed by a Consul cluster. I secured it with HTTPS and am trying to use the CLI on a separate machine to get and set secrets in the kv engine. I am using version 1.0.2 of both the CLI and the Vault server.
I have logged in with the root token so I should have access to everything. I have also set my VAULT_ADDR appropriately.
Here is my request:
vault kv put secret/my-secret my-value=yea
Here is the response:
Error making API request.
URL: GET https://{my-vault-address}/v1/sys/internal/ui/mounts/secret/my-secret
Code: 403. Errors:
* preflight capability check returned 403, please ensure client's policies grant access to path "secret/my-secret/"
I don't understand what is happening here. I am able to set and read secrets in the kv engine without a problem from the Vault UI. What am I missing?
This was a result of me not reading the documentation.
The request was failing because there was no secret engine mounted at that path.
You can check your secret engine paths by running vault secrets list -detailed
This showed that my kv secrets engine was mounted at the path kv, not secret as I was trying to use.
Therefore running vault kv put kv/my-secret my-value=yea worked as expected.
You can enable a secrets engine at a specific path:
vault secrets enable -path=kv kv
https://www.vaultproject.io/intro/getting-started/secrets-engines
You need to change secret/my-secret to whichever path you mounted when you enabled the kv secrets engine.
For example, if you enable the secret engine like this:
vault secrets enable -version=2 kv-v2
You should use the kv-v2 path instead of secret:
vault kv put kv-v2/my-secret my-value=yea
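Alternatively, if you want the secret/ path from the question to work as-is, you can mount a kv engine there yourself (this assumes nothing else is already mounted at that path):
vault secrets enable -path=secret -version=2 kv
vault kv put secret/my-secret my-value=yea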

Install a certificate in a Service Fabric Cluster without a private key

I need to install a certificate in a Service Fabric cluster that I created using an ARM template. I was able to install a certificate with the private key using the following helper PowerShell command:
> Invoke-AddCertToKeyVault
https://github.com/ChackDan/Service-Fabric/tree/master/Scripts/ServiceFabricRPHelpers
Once this certificate is in Azure Key Vault I can modify my ARM template to install the certificate automatically on the nodes in the cluster:
"osProfile": {
"secrets": [
{
"sourceVault": {
"id": "[parameters('vaultId')]"
},
"vaultCertificates": [
{
"certificateStore": "My",
"certificateUrl": "https://mykeyvault.vault.azure.net:443/secrets/fabrikam/9d1adf93371732434"
}
]
}
]
}
The problem is that the Invoke-AddCertToKeyVault is expecting me to provide a pfx file assuming I have the private key.
The script is creating the following JSON blob:
$jsonBlob = @{
    data     = $base64
    dataType = 'pfx'
    password = $Password
} | ConvertTo-Json
I modified the script to remove password and change dataType to 'cer' but when I deployed the template in Azure it said the dataType was no longer valid.
How can I deploy a certificate to a service fabric cluster that does not include the private key?
1) SF does not really care if you used .cer or .pfx. All SF needs is for the certificate to be available in the local cert store in the VM.
2) The issue you are running into is that the CRP agent, which installs the cert from the Key Vault into the local cert store in the VM, only supports .pfx today.
So now you have two options:
1) Create a .pfx file without a private key and use it (an openssl alternative is also sketched after these options).
Here is how to do it via C# (or PowerShell):
Load the certificate into an X509Certificate2 object.
Then use the Export method with X509ContentType = Pfx:
https://msdn.microsoft.com/en-us/library/24ww6yzk(v=vs.110).aspx
2) Deploy the .cer using a custom VM extension. Since .cer is only a public key cert there should be no privacy requirements. You can just upload the cert to a blob, and have a custom script extension download it and install it on the machine.
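For option 1, if you'd rather not write C#, an openssl sketch should also produce a certificate-only .pfx; this assumes the source cert can be converted to PEM first (file names are placeholders):
# convert a DER-encoded .cer to PEM first if needed
openssl x509 -inform der -in fabrikam.cer -out fabrikam.pem
# build a pfx that contains only the public certificate, no private key
openssl pkcs12 -export -nokeys -in fabrikam.pem -out fabrikam.pfx -passout pass:<password>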