Pulumi kubernetes secret not creating all data keys - kubernetes

I've declared a Kubernetes secret in Pulumi like:
const tlsSecret = new k8s.core.v1.Secret('tlsSecret', {
  metadata: {
    name: 'star.builds.qwil.co'
  },
  data: {
    'tls.crt': tlsCert,
    'tls.key': tlsKey
  }
});
However, I'm finding that when the secret is created, only tls.key is present. When I look at the Diff View for the Pulumi deployment on app.pulumi.com, I see the following:
  tlsSecret (kubernetes:core:Secret)
+ kubernetes:core/v1:Secret (create)
    [urn=urn:pulumi:local::ledger::kubernetes:core/v1:Secret::tlsSecret]
    apiVersion: "v1"
    data      : {
        tls.key: "[secret]"
    }
    kind      : "Secret"
    metadata  : {
        labels: {
            app.kubernetes.io/managed-by: "pulumi"
        }
        name  : "star.builds.qwil.co"
    }
Why is only tls.key being picked up even though I've also specified a tls.crt?

Turns out the variable tlsCert was falsy (I was loading it from config with the wrong key for Config.get()). Pulumi was smart and didn't create a data entry with an empty string.
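For anyone else hitting this, here is a minimal sketch of the pitfall and one way to guard against it (the config key names here are just illustrations, not the actual keys from the question):

import * as pulumi from '@pulumi/pulumi';

const config = new pulumi.Config();

// config.get() returns undefined when the key doesn't exist, so a typo in the
// key name silently produces a falsy value and the data entry gets dropped.
const tlsCert = config.get('tlsCert');

// config.require() / config.requireSecret() fail fast instead, which surfaces
// a wrong key name at preview time rather than producing a half-empty secret.
const tlsKey = config.requireSecret('tlsKey');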

Related

In CDK, can I wait until a Helm-installed operator is running before applying a manifest?

I'm installing the External Secrets Operator alongside Apache Pinot into an EKS cluster using CDK. I'm running into an issue that I think is being caused by CDK attempting to create a resource defined by the ESO before the ESO has actually gotten up and running. Here's the relevant code:
// install Pinot
const pinot = cluster.addHelmChart('Pinot', {
  chartAsset: new Asset(this, 'PinotChartAsset', { path: path.join(__dirname, '../pinot') }),
  release: 'pinot',
  namespace: 'pinot',
  createNamespace: true
});

// install the External Secrets Operator
const externalSecretsOperator = cluster.addHelmChart('ExternalSecretsOperator', {
  chart: 'external-secrets',
  release: 'external-secrets',
  repository: 'https://charts.external-secrets.io',
  namespace: 'external-secrets',
  createNamespace: true,
  values: {
    installCRDs: true,
    webhook: {
      port: 9443
    }
  }
});

// create a Fargate Profile
const fargateProfile = cluster.addFargateProfile('FargateProfile', {
  fargateProfileName: 'externalsecrets',
  selectors: [{ 'namespace': 'external-secrets' }]
});

// create the Service Account used by the Secret Store
const serviceAccount = cluster.addServiceAccount('ServiceAccount', {
  name: 'eso-service-account',
  namespace: 'external-secrets'
});
serviceAccount.addToPrincipalPolicy(new iam.PolicyStatement({
  effect: iam.Effect.ALLOW,
  actions: [
    'secretsmanager:GetSecretValue',
    'secretsmanager:DescribeSecret'
  ],
  resources: [
    'arn:aws:secretsmanager:us-east-1:<MY-ACCOUNT-ID>:secret:*'
  ]
}));
serviceAccount.node.addDependency(externalSecretsOperator);

// create the Secret Store, an ESO Resource
const secretStoreManifest = getSecretStoreManifest(serviceAccount);
const secretStore = cluster.addManifest('SecretStore', secretStoreManifest);
secretStore.node.addDependency(serviceAccount);
secretStore.node.addDependency(fargateProfile);

// create the External Secret, another ESO resource
const externalSecretManifest = getExternalSecretManifest(secretStoreManifest);
const externalSecret = cluster.addManifest('ExternalSecret', externalSecretManifest);
externalSecret.node.addDependency(secretStore);
externalSecret.node.addDependency(pinot);
Even though I've set the ESO as a dependency of the Secret Store, when I try to deploy this I get the following error:
Received response status [FAILED] from custom resource. Message returned: Error: b'Error from server (InternalError): error when creating "/tmp/manifest.yaml": Internal error occurred:
failed calling webhook "validate.clustersecretstore.external-secrets.io": Post "https://external-secrets-webhook.external-secrets.svc:443/validate-external-secrets-io-v1beta1-clusterse
cretstore?timeout=5s": no endpoints available for service "external-secrets-webhook"\n'
If I understand correctly, this is the error you'd get if you try to add a Secret Store before the ESO is fully installed. I'm guessing that CDK does not wait until the ESO's pods are running before attempting to apply the manifest. Furthermore, if I comment out the lines that create the Secret Store and External Secret, do a cdk deploy, uncomment those lines and then deploy again, everything works fine.
Is there any way around this? Some way I can retry applying the manifest, or to wait a period of time before attempting the apply?
The addHelmChart method has a property wait that is set to false by default. Setting it to true tells CDK not to mark the installation as complete until all of its K8s resources are in a ready state.
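For example, re-using the chart definition from the question with only the wait flag added (a sketch, not tested against your stack):

// With wait: true, the chart is not reported as installed until its resources
// (including the webhook Deployment) are ready, so the dependent SecretStore
// manifest is only applied once the external-secrets webhook has endpoints.
const externalSecretsOperator = cluster.addHelmChart('ExternalSecretsOperator', {
  chart: 'external-secrets',
  release: 'external-secrets',
  repository: 'https://charts.external-secrets.io',
  namespace: 'external-secrets',
  createNamespace: true,
  wait: true,
  values: {
    installCRDs: true,
    webhook: {
      port: 9443
    }
  }
});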

No known RoleID Vault Agent

I'm using the Vault AppRole auth method to fetch secrets from Vault. Below is my Vault agent ConfigMap.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  config-init.hcl: |
    "auto_auth" = {
      "method" = {
        "config" = {
          "role" = "test"
          "role_id_file_path" = "roleid"
          "secret_id_file_path" = "secretid"
          "remove_secret_id_file_after_reading" = false
        }
        "type" = "approle"
      }
      "sink" = {
        "config" = {
          "path" = "/home/vault/.token"
        }
        "type" = "file"
        "wrap_ttl" = "30m"
      }
    }
    "vault" = {
      "address" = "https://myvault.com"
    }
    "exit_after_auth" = true
    "pid_file" = "/home/vault/.pid"
Then I'm referencing the above configmap in the deployment file.
annotations:
  vault.hashicorp.com/agent-inject: 'true'
  vault.hashicorp.com/agent-configmap: 'my-configmap'
But I get the below error:
vault-agent-init 2022-07-20T10:43:13.306Z [ERROR] auth.handler: error getting path or data from method: error="no known role ID" backoff=4m51.03s
Create a secret in Kubernetes containing the RoleID and SecretID and pass it using the below annotation:
vault.hashicorp.com/extra-secret: "secret-file"
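For example, something along these lines (the secret name and key names are illustrative; the secret name must match the annotation value). As far as I know, the injector mounts the extra secret at /vault/custom inside the agent container, so role_id_file_path and secret_id_file_path in the ConfigMap should point there (e.g. /vault/custom/roleid).

# Create the Kubernetes secret holding the AppRole credentials
kubectl create secret generic secret-file \
  --from-literal=roleid=<your-role-id> \
  --from-literal=secretid=<your-secret-id>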

Creation of resource cert-manager/letsencrypt failed because the Kubernetes API server reported that the apiVersion for this resource does not exist

I am having issues installing the cert-manager Helm chart and setting up a LetsEncrypt cluster issuer using Pulumi in our Azure Kubernetes cluster. We are using Kubernetes version 1.21.2 and cert-manager 1.5.3.
When running pulumi up before any of the resources exist I get the following error:
kubernetes:cert-manager.io/v1:ClusterIssuer (cert-manager-letsencrypt):
error: creation of resource cert-manager/letsencrypt failed because the Kubernetes API server reported that the apiVersion for this resource does not exist. Verify that any required CRDs have been created: no matches for kind "ClusterIssuer" in version "cert-manager.io/v1"
error: update failedaToolsCertManager cert-manager
I can confirm that the cluster issuer hasn't been created by running kubectl get clusterissuer.
When running pulumi up again it succeeds and the letsencrypt ClusterIssuer is correctly created.
I don't want to have to run pulumi up consecutive times to reach a successful deployment. Can anyone see what the issue is here?
C# stack definition:
// Create new namespace
var certManagerNamespace = new Namespace("cert-manager",
    new NamespaceArgs()
    {
        Metadata = new ObjectMetaArgs
        {
            Name = "cert-manager"
        }
    },
    options);

// Install cert-manager using Helm
var certManagerChart = new Chart("cert-manager",
    new ChartArgs
    {
        Chart = "cert-manager",
        Version = "1.5.3",
        Namespace = certManagerNamespace.Metadata.Apply(m => m.Name),
        Values =
        {
            ["installCRDs"] = "true"
        },
        FetchOptions = new ChartFetchArgs
        {
            Repo = "https://charts.jetstack.io"
        }
    },
    options);

// Create ClusterIssuer using LetsEncrypt
var clusterIssuer = new ClusterIssuer($"{name}-letsencrypt",
    new ClusterIssuerArgs()
    {
        ApiVersion = "cert-manager.io/v1",
        Kind = "ClusterIssuer",
        Metadata = new ObjectMetaArgs()
        {
            Name = "letsencrypt",
            Namespace = "cert-manager",
        },
        Spec = new ClusterIssuerSpecArgs()
        {
            Acme = new ClusterIssuerSpecAcmeArgs()
            {
                Email = "administrator#blahblah.com",
                Server = "https://acme-v02.api.letsencrypt.org/directory",
                PrivateKeySecretRef = new ClusterIssuerSpecAcmePrivateKeySecretRefArgs()
                {
                    Name = "letsencrypt"
                },
                Solvers =
                {
                    new ClusterIssuerSpecAcmeSolversArgs()
                    {
                        Http01 = new ClusterIssuerSpecAcmeSolversHttp01Args()
                        {
                            Ingress = new ClusterIssuerSpecAcmeSolversHttp01IngressArgs()
                            {
                                Class = "nginx"
                            }
                        }
                    }
                }
            }
        }
    },
    new CustomResourceOptions()
    {
        DependsOn = certManagerChart,
        Provider = options.Provider
    });
The cluster issuer definition from Pulumi:
+ kubernetes:cert-manager.io/v1:ClusterIssuer: (create)
    [urn=urn:pulumi:preprod::MyAks::kubernetes:cert-manager.io/v1:ClusterIssuer::cert-manager-letsencrypt]
    [provider=urn:pulumi:preprod::MyAks::k8sx:service:MyAks$pulumi:providers:kubernetes::k8s-provider::5191350f-c03b-4796-bc48-053584e2c996]
    apiVersion: "cert-manager.io/v1"
    kind      : "ClusterIssuer"
    metadata  : {
        labels   : {
            app.kubernetes.io/managed-by: "pulumi"
        }
        name     : "letsencrypt"
        namespace: "cert-manager"
    }
    spec      : {
        acme: {
            email              : "administrator#blahblah.com"
            privateKeySecretRef: {
                name: "letsencrypt"
            }
            server             : "https://acme-v02.api.letsencrypt.org/directory"
            solvers            : [
                [0]: {
                    http01: {
                        ingress: {
                            class: "nginx"
                        }
                    }
                }
            ]
        }
    }

Create kubernetes secret for docker registry - Terraform

Using kubectl we can create a docker registry authentication secret as follows:
kubectl create secret docker-registry regsecret \
  --docker-server=docker.example.com \
  --docker-username=kube \
  --docker-password=PW_STRING \
  --docker-email=my#email.com
How do I create this secret using Terraform? I saw this link; it has a data block. In my Terraform flow the Kubernetes instance is being created in Azure, I get the required data from there, and I created something like the below:
resource "kubernetes_secret" "docker-registry" {
metadata {
name = "registry-credentials"
}
data = {
docker-server = data.azurerm_container_registry.docker_registry_data.login_server
docker-username = data.azurerm_container_registry.docker_registry_data.admin_username
docker-password = data.azurerm_container_registry.docker_registry_data.admin_password
}
}
It seems that this is wrong, as the images are not being pulled. What am I missing here?
If you run the following command
kubectl create secret docker-registry regsecret \
  --docker-server=docker.example.com \
  --docker-username=kube \
  --docker-password=PW_STRING \
  --docker-email=my#email.com
it will create a secret like the following:
$ kubectl get secrets regsecret -o yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJkb2NrZXIuZXhhbXBsZS5jb20iOnsidXNlcm5hbWUiOiJrdWJlIiwicGFzc3dvcmQiOiJQV19TVFJJTkciLCJlbWFpbCI6Im15QGVtYWlsLmNvbSIsImF1dGgiOiJhM1ZpWlRwUVYxOVRWRkpKVGtjPSJ9fX0=
kind: Secret
metadata:
  creationTimestamp: "2020-06-01T18:31:07Z"
  name: regsecret
  namespace: default
  resourceVersion: "42304"
  selfLink: /api/v1/namespaces/default/secrets/regsecret
  uid: 59054483-2789-4dd2-9321-74d911eef610
type: kubernetes.io/dockerconfigjson
If we decode .dockerconfigjson we will get
{"auths":{"docker.example.com":{"username":"kube","password":"PW_STRING","email":"my#email.com","auth":"a3ViZTpQV19TVFJJTkc="}}}
So, how can we do that using terraform?
I created a file config.json with the following data:
{"auths":{"${docker-server}":{"username":"${docker-username}","password":"${docker-password}","email":"${docker-email}","auth":"${auth}"}}}
Then in main.tf file
resource "kubernetes_secret" "docker-registry" {
metadata {
name = "regsecret"
}
data = {
".dockerconfigjson" = "${data.template_file.docker_config_script.rendered}"
}
type = "kubernetes.io/dockerconfigjson"
}
data "template_file" "docker_config_script" {
template = "${file("${path.module}/config.json")}"
vars = {
docker-username = "${var.docker-username}"
docker-password = "${var.docker-password}"
docker-server = "${var.docker-server}"
docker-email = "${var.docker-email}"
auth = base64encode("${var.docker-username}:${var.docker-password}")
}
}
Then run
$ terraform apply
This will generate the same secret. Hope it helps.
I would suggest creating an azurerm_role_assignment to give AKS access to the ACR:
resource "azurerm_role_assignment" "aks_sp_acr" {
scope = azurerm_container_registry.acr.id
role_definition_name = "AcrPull"
principal_id = var.service_principal_obj_id
depends_on = [
azurerm_kubernetes_cluster.aks,
azurerm_container_registry.acr
]
}
Update
You can create the service principal in the Azure portal or with the az CLI and use client_id, client_secret and object_id in Terraform.
Get the client_id and object_id by running az ad sp list --filter "displayName eq '<name>'". The secret has to be created in the Certificates & secrets tab of the service principal. See this guide: https://pixelrobots.co.uk/2018/11/first-look-at-terraform-and-the-azure-cloud-shell/
Just set all three as variables, e.g. for the object ID:
variable "service_principal_obj_id" {
default = "<object-id>"
}
Now use the credentials with aks:
resource "azurerm_kubernetes_cluster" "aks" {
...
service_principal {
client_id = var.service_principal_app_id
client_secret = var.service_principal_password
}
...
}
And set the object id in the acr as described above.
Alternative
You can create the service principal with Terraform (this only works if you have the necessary permissions): https://www.terraform.io/docs/providers/azuread/r/service_principal.html, combined with a random_password resource:
resource "azuread_application" "aks_sp" {
name = "somename"
available_to_other_tenants = false
oauth2_allow_implicit_flow = false
}
resource "azuread_service_principal" "aks_sp" {
application_id = azuread_application.aks_sp.application_id
depends_on = [
azuread_application.aks_sp
]
}
resource "azuread_service_principal_password" "aks_sp_pwd" {
service_principal_id = azuread_service_principal.aks_sp.id
value = random_password.aks_sp_pwd.result
end_date = "2099-01-01T01:02:03Z"
depends_on = [
azuread_service_principal.aks_sp
]
}
You need to assign the role "Contributor" to the SP and can then use it directly with AKS / ACR.
resource "azurerm_role_assignment" "aks_sp_role_assignment" {
scope = var.subscription_id
role_definition_name = "Contributor"
principal_id = azuread_service_principal.aks_sp.id
depends_on = [
azuread_service_principal_password.aks_sp_pwd
]
}
Use them with aks:
resource "azurerm_kubernetes_cluster" "aks" {
...
service_principal {
client_id = azuread_service_principal.aks_sp.app_id
client_secret = azuread_service_principal_password.aks_sp_pwd.value
}
...
}
and the role assignment:
resource "azurerm_role_assignment" "aks_sp_acr" {
scope = azurerm_container_registry.acr.id
role_definition_name = "AcrPull"
principal_id = azuread_service_principal.aks_sp.object_id
depends_on = [
azurerm_kubernetes_cluster.aks,
azurerm_container_registry.acr
]
}
Update secret example
resource "random_password" "aks_sp_pwd" {
length = 32
special = true
}

terraform-kubernetes-provider how to create secret from file?

I'm using the terraform kubernetes-provider and I'd like to translate something like this kubectl command into TF:
kubectl create secret generic my-secret --from-file mysecret.json
It seems, however, that the secret resource's data field expects only a TF map.
I've tried something like
data "template_file" "my-secret" {
template = "${file("${path.module}/my-secret.json")}"
}
resource "kubernetes_secret" "sgw-config" {
metadata {
name = "my-secret"
}
type = "Opaque"
data = "{data.template_file.my-secret.template}"
}
But it complains that this is not a map. So, I can do something like this:
data = {
  "my-secret.json" = "${data.template_file.my-secret.template}"
}
But this will write the secret with a top-level field named my-secret.json and when I volume mount it, it won't work with other resources.
What is the trick here?
As long as the file is UTF-8 encoded, you can use something like this:
resource "kubernetes_secret" "some-secret" {
metadata {
name = "some-secret"
namespace = kubernetes_namespace.some-ns.metadata.0.name
labels = {
"sensitive" = "true"
"app" = "my-app"
}
}
data = {
"file.txt" = file("${path.cwd}/your/relative/path/to/file.txt")
}
}
If the file is a binary one, you will get an error like:
Call to function "file" failed: contents of
/your/relative/path/to/file.txt are not valid UTF-8; use the
filebase64 function to obtain the Base64 encoded contents or the other
file functions (e.g. filemd5, filesha256) to obtain file hashing
results instead.
I tried encoding the file in base64 but then the problem is that the resulting text will be re-encoded in base64 by the provider. So I guess there is no solution for binary files at the moment...
I'll edit with what I find next for binaries.
This might be a bit off-topic, but I've been facing a similar problem, except that the file might not be present, in which case terraform [plan|apply] fails.
To be exact: I needed to duplicate a secret from one namespace to another one.
I realized that by using hashicorp/external provider.
The steps are pretty simple:
1. Load the data by calling an external program
2. Refer to the data in the kubernetes_secret resource
The program should accept (and process) JSON on STDIN and produce valid JSON on STDOUT in response to the parameters passed in via the STDIN JSON.
Example shell script:
#!/bin/bash
set -e
/bin/echo -n '{ "token": "'
kubectl get -n consul secrets/hashicorp-consul-bootstrap-acl-token --template={{.data.token}}
/bin/echo -n '"}'
Terraform source:
data "external" "token" {
program = ["sh", "${path.module}/consul-token.sh"]
}
resource "kubernetes_secret" "consul-token" {
depends_on = [data.external.token]
metadata {
name = "consul-token"
namespace = "app"
}
data = {
token = base64decode(data.external.token.result.token)
}
}
and requirements:
terraform {
  required_providers {
    external = {
      source  = "hashicorp/external"
      version = ">= 2.0.0"
    }
  }
}
Just use
https://www.terraform.io/docs/providers/kubernetes/r/config_map.html#binary_data
resource "kubernetes_config_map" "example" {
metadata {
name = "my-config"
}
binary_data = {
"my_payload.bin" = "${filebase64("${path.module}/my_payload.bin")}"
}
}
I believe you can use the binary_data attribute in the secret now, e.g.:
binary_data = {
  "my_payload.bin" = "${filebase64("${path.module}/my_payload.bin")}"
}
reference:
https://github.com/hashicorp/terraform-provider-kubernetes/pull/1228
https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/secret#binary_data
Basically you need to provide a map like this:
resource "kubernetes_secret" "sgw-config" {
metadata {
name = "my-secret"
}
type = "Opaque"
data {
"key1" = "value1"
"key2" = "value2"
}
}
You can refer to your internal variables using:
resource "kubernetes_secret" "sgw-config" {
metadata {
name = "my-secret"
}
type = "Opaque"
data {
"USERNAME" = "${var.some_variable}"
"PASSWORD" = "${random_string.root_password.result}"
}
}
It seems if you run the command kubectl create secret generic my-secret --from-file mysecret.json
and then
$ kubectl get secrets my-secret -o yaml
apiVersion: v1
data:
  my-secret.json: ewogICA.....
kind: Secret
metadata:
  creationTimestamp: "2019-03-25T18:20:43Z"
  name: my-secret
  namespace: default
  resourceVersion: "67026"
  selfLink: /api/v1/namespaces/default/secrets/my-secret
  uid: b397a29c-4f2a-11e9-9806-000c290425d0
type: Opaque
it stores it similarly with the filename as the single key. When I mount this in a volume/volumeMount it works as expected. I was afraid that it wouldn't but when I create the secret using the --from-file argument, this is exactly how it stores it.