HashiCorp Vault won't let me delete a policy even using the root token

I am trying to delete a policy.
After logging in with the root token, I do the following:
$ vault policy delete testttt
Error deleting testttt: Error making API request.
URL: DELETE https://vault.local:8200/v1/sys/policies/acl/testttt
Code: 400. Errors:
* failed to delete policy: AccessDenied: Access Denied
status code: 403, request id: VB6YWECETDJ5KB7Q, host id:
S0FJvs41pSbzTmP1lDr/aVSOPjeRVz4Vk/ofkFHu8jvNjfzk6ARnY33qzP/usqmpVDExwLlsF44=
My config file looks like this:
storage "s3" {
access_key = "XXXX"
secret_key = "XXXX"
bucket = "XXXX-vault"
region = "eu-central-1"
}
listener "tcp" {
address = "0.0.0.0:8200"
tls_cert_file = "/etc/vault.d/fullchain.pem"
tls_key_file = "/etc/vault.d/privkey.pem"
}
api_addr = "http://0.0.0.0:8200"
cluster_addr = "https://0.0.0.0:8201"
ui = true
Something seems totally off, as after using the root token in the UI, I also just see this:
null is not an object (evaluating 'l.userRootNamespace')
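Note that the AccessDenied / status code 403 in the CLI error above is returned by the S3 storage backend rather than by Vault's own policy checks, which suggests the access keys in the storage stanza cannot delete objects from the bucket. Below is a minimal, hypothetical sketch of the bucket permissions the S3 backend generally needs; the user, policy, and bucket names are placeholders and the exact action list should be verified against the Vault storage documentation.
# Hypothetical sketch: IAM permissions for the credentials used by Vault's S3 backend.
# User, policy, and bucket names are placeholders.
resource "aws_iam_user_policy" "vault_storage" {
  name = "vault-s3-storage"
  user = "vault"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["s3:ListBucket"]
        Resource = ["arn:aws:s3:::XXXX-vault"]
      },
      {
        Effect   = "Allow"
        Action   = ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"]
        Resource = ["arn:aws:s3:::XXXX-vault/*"]
      }
    ]
  })
}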

Related

Terraform kubectl provider error: failed to create kubernetes rest client for read of resource

I have a Terraform config that (among other resources) creates a Google Kubernetes Engine cluster on Google Cloud. I'm using the kubectl provider to add YAML manifests for a ManagedCertificate and a FrontendConfig, since these are not part of the kubernetes or google providers.
This works as expected when applying the Terraform config from my local machine, but when I try to execute it in our CI pipeline, I get the following error for both of the kubectl_manifest resources:
Error: failed to create kubernetes rest client for read of resource: Get "http://localhost/api?timeout=32s": dial tcp 127.0.0.1:80: connect: connection refused
Since I'm only facing this issue during CI, my first guess is that the service account is missing the right scopes, but as far as I can tell, all scopes are present. Any suggestions and ideas are greatly appreciated!
The provider is trying to connect to localhost, which means you either need to provide a proper kubeconfig file or set the cluster credentials dynamically in Terraform.
You didn't mention how you are setting up authentication, but here are two ways.
Poor way
resource "null_resource" "deploy-app" {
  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    command     = <<-EOT
      kubectl apply -f myapp.yaml --kubeconfig ./temp/kube-config.yaml
    EOT
  }
  # runs on every apply, which is bad
  triggers = {
    always_run = timestamp()
  }
  depends_on = [
    local_file.kube_config
  ]
}

resource "local_file" "kube_config" {
  content  = var.my_kube_config # pass the kubeconfig in from a CI variable
  filename = "${path.module}/temp/kube-config.yaml"
}
Proper way
data "google_container_cluster" "cluster" {
  name = "your_cluster_name"
}

data "google_client_config" "current" {}

provider "kubernetes" {
  host  = data.google_container_cluster.cluster.endpoint
  token = data.google_client_config.current.access_token
  cluster_ca_certificate = base64decode(
    data.google_container_cluster.cluster.master_auth[0].cluster_ca_certificate
  )
}

data "kubectl_file_documents" "app_yaml" {
  content = file("myapp.yaml")
}

resource "kubectl_manifest" "app_installer" {
  for_each  = data.kubectl_file_documents.app_yaml.manifests
  yaml_body = each.value
}
If the cluster is in the same module, then the provider should be:
provider "kubernetes" {
  load_config_file = "false"
  host             = google_container_cluster.my_cluster.endpoint
  # master_auth values are base64-encoded, so decode them before use
  client_certificate     = base64decode(google_container_cluster.my_cluster.master_auth.0.client_certificate)
  client_key             = base64decode(google_container_cluster.my_cluster.master_auth.0.client_key)
  cluster_ca_certificate = base64decode(google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)
}
Fixed the issue by adding load_config_file = false to the kubectl provider config. My provider config now looks like this:
data "google_client_config" "default" {}

provider "kubernetes" {
  host                   = "https://${endpoint from GKE}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(CA certificate from GKE)
}

provider "kubectl" {
  host                   = "https://${endpoint from GKE}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(CA certificate from GKE)
  load_config_file       = false
}

Terraform Error creating Topic: googleapi: Error 403: User not authorized to perform this action

Googleapi: Error 403: User not authorized to perform this action
provider "google" {
project = "xxxxxx"
region = "us-central1"
}
resource "google_pubsub_topic" "gke_cluster_upgrade_notifications" {
name = "cluster-notifications"
labels = {
foo = "bar"
}
message_storage_policy {
allowed_persistence_regions = [
"region",
]
}
}
# create the storage bucket for our scripts
resource "google_storage_bucket" "source_code" {
name = "xxxxxx-bucket-lh05111992"
location = "us-central1"
force_destroy = true
}
# zip up function source code
data "archive_file" "function_script_zip" {
type = "zip"
source_dir = "./function/"
output_path = "./function/main.py.zip"
}
# add function source code to storage
resource "google_storage_bucket_object" "function_script_zip" {
name = "main.py.zip"
bucket = google_storage_bucket.source_code.name
source = "./function/main.py.zip"
}
resource "google_cloudfunctions_function" "gke_cluster_upgrade_notifications" {---
-------
}
The service account has the owner role attached.
I also tried:
1. export GOOGLE_APPLICATION_CREDENTIALS={{path}}
2. credentials = "${file("credentials.json")}", placing the JSON file in the Terraform root folder.
It seems that the account being used is missing some permissions (e.g. pubsub.topics.create) needed to create the Cloud Pub/Sub topic. The owner role should be sufficient to create the topic, as it contains the necessary permissions (you can check this here). Therefore, a wrong service account might be set in Terraform.
To address these IAM issues I would suggest:
Use the Policy Troubleshooter.
Impersonate the service account and make the API call using the CLI with the --verbosity=debug flag, which will provide helpful information about the missing permissions (see the provider sketch below for pinning Terraform itself to a specific identity).
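As a complement, one way to be sure which identity Terraform is actually running as is to point the google provider at it explicitly, either through a key file (as the question already attempts) or through impersonation. This is only a minimal sketch; the service account email is hypothetical, and impersonation requires the caller to hold the Service Account Token Creator role on that account.
provider "google" {
  project = "xxxxxx"
  region  = "us-central1"

  # Option A: a key file belonging to the intended service account
  # credentials = file("credentials.json")

  # Option B: keep your own credentials but impersonate the deployer account
  # (the account name below is a placeholder)
  impersonate_service_account = "terraform-deployer@xxxxxx.iam.gserviceaccount.com"
}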

Add secret to freshly created Azure AKS using Terraform Kubernetes provider fails

I am creating a kubernetes cluster with the Azure Terraform provider and trying to add a secret to it. The cluster gets created fine but I am getting errors with authenticating to the cluster when creating the secret. I tried 2 different Terraform Kubernetes provider configurations. Here is the main configuration:
variable "client_id" {}
variable "client_secret" {}
resource "azurerm_resource_group" "rg-example" {
name = "rg-example"
location = "East US"
}
resource "azurerm_kubernetes_cluster" "k8s-example" {
name = "k8s-example"
location = azurerm_resource_group.rg-example.location
resource_group_name = azurerm_resource_group.rg-example.name
dns_prefix = "k8s-example"
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_B2s"
}
service_principal {
client_id = var.client_id
client_secret = var.client_secret
}
role_based_access_control {
enabled = true
}
}
resource "kubernetes_secret" "secret_example" {
metadata {
name = "mysecret"
}
data = {
"something" = "super secret"
}
depends_on = [
azurerm_kubernetes_cluster.k8s-example
]
}
provider "azurerm" {
version = "=2.29.0"
features {}
}
output "host" {
value = azurerm_kubernetes_cluster.k8s-example.kube_config.0.host
}
output "cluster_username" {
value = azurerm_kubernetes_cluster.k8s-example.kube_config.0.username
}
output "cluster_password" {
value = azurerm_kubernetes_cluster.k8s-example.kube_config.0.password
}
output "client_key" {
value = azurerm_kubernetes_cluster.k8s-example.kube_config.0.client_key
}
output "client_certificate" {
value = azurerm_kubernetes_cluster.k8s-example.kube_config.0.client_certificate
}
output "cluster_ca_certificate" {
value = azurerm_kubernetes_cluster.k8s-example.kube_config.0.cluster_ca_certificate
}
Here is the first Kubernetes provider configuration using certificates:
provider "kubernetes" {
version = "=1.13.2"
load_config_file = "false"
host = azurerm_kubernetes_cluster.k8s-example.kube_config.0.host
client_certificate = azurerm_kubernetes_cluster.k8s-example.kube_config.0.client_certificate
client_key = azurerm_kubernetes_cluster.k8s-example.kube_config.0.client_key
cluster_ca_certificate = azurerm_kubernetes_cluster.k8s-example.kube_config.0.cluster_ca_certificate
}
And the error I'm receiving:
kubernetes_secret.secret_example: Creating...
Error: Failed to configure client: tls: failed to find any PEM data in certificate input
Here is the second Kubernetes provider configuration using HTTP Basic Authorization:
provider "kubernetes" {
version = "=1.13.2"
load_config_file = "false"
host = azurerm_kubernetes_cluster.k8s-example.kube_config.0.host
username = azurerm_kubernetes_cluster.k8s-example.kube_config.0.username
password = azurerm_kubernetes_cluster.k8s-example.kube_config.0.password
}
And the error I'm receiving:
kubernetes_secret.secret_example: Creating...
Error: Post "https://k8s-example-c4a78c03.hcp.eastus.azmk8s.io:443/api/v1/namespaces/default/secrets": x509: certificate signed by unknown authority
ANALYSIS
I checked the outputs of azurerm_kubernetes_cluster.k8s-example and the data seems valid (username, password, host, etc.). Maybe I need an SSL certificate on my Kubernetes cluster, but I am not certain, as I'm new to this. Can someone help me out?
According to this issue in hashicorp/terraform-provider-kubernetes, you need to use base64decode(). The example the author used:
provider "kubernetes" {
host = "${google_container_cluster.k8sexample.endpoint}"
username = "${var.master_username}"
password = "${var.master_password}"
client_certificate = "${base64decode(google_container_cluster.k8sexample.master_auth.0.client_certificate)}"
client_key = "${base64decode(google_container_cluster.k8sexample.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(google_container_cluster.k8sexample.master_auth.0.cluster_ca_certificate)}"
}
That author said they got the same error as you if they left out the base64decode. You can read more about that function here: https://www.terraform.io/docs/configuration/functions/base64decode.html
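Applied to the AKS outputs from the question, a minimal sketch of the same fix is the first provider block with base64decode() wrapped around the three certificate values (the AKS kube_config attributes are exposed base64-encoded):
provider "kubernetes" {
  version                = "=1.13.2"
  load_config_file       = "false"
  host                   = azurerm_kubernetes_cluster.k8s-example.kube_config.0.host
  # kube_config returns these values base64-encoded, so decode them before use
  client_certificate     = base64decode(azurerm_kubernetes_cluster.k8s-example.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.k8s-example.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.k8s-example.kube_config.0.cluster_ca_certificate)
}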

Vault server token login doesn't work as per lease time

We are using HashiCorp Vault with Consul and the filesystem as storage. We are facing a login issue: my token duration is infinity, but every few hours Vault still ends up sealed and I have to log in with the token again. Why is that?
config.hcl:
ui = true

storage "consul" {
  address = "127.0.0.1:8500"
  path    = "vault"
}

backend "file" {
  path = "/mnt/vault/data"
}

listener "tcp" {
  address     = "127.0.0.1:8200"
  tls_disable = 1
}

telemetry {
  statsite_address = "127.0.0.1:8125"
  disable_hostname = true
}
Token lookup:
Key                  Value
---                  -----
token                **********
token_accessor       ***********
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]

Managing GKE and its deployments with Terraform

I can use Terraform to deploy a Kubernetes cluster on GKE.
Then I have set up the provider for Kubernetes as follows:
provider "kubernetes" {
host = "${data.google_container_cluster.primary.endpoint}"
client_certificate = "${base64decode(data.google_container_cluster.primary.master_auth.0.client_certificate)}"
client_key = "${base64decode(data.google_container_cluster.primary.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(data.google_container_cluster.primary.master_auth.0.cluster_ca_certificate)}"
}
By default, Terraform interacts with Kubernetes as the client user, which has no permission to create (for example) deployments. So I get this error when I try to apply my changes with Terraform:
Error: Error applying plan:
1 error(s) occurred:
* kubernetes_deployment.foo: 1 error(s) occurred:
* kubernetes_deployment.foo: Failed to create deployment: deployments.apps is forbidden: User "client" cannot create deployments.apps in the namespace "default"
I don't know how I should proceed now. How should I give these permissions to the client user?
If the following fields are added to the provider, I am able to perform deployments, although after reading the documentation it seems these credentials are used for HTTP basic authentication with the cluster, which is insecure if done over the internet.
username = "${data.google_container_cluster.primary.master_auth.0.username}"
password = "${data.google_container_cluster.primary.master_auth.0.password}"
Is there a better way of doing this?
You can use the service account that is running Terraform:
data "google_client_config" "default" {}
provider "kubernetes" {
host = "${google_container_cluster.default.endpoint}"
token = "${data.google_client_config.default.access_token}"
cluster_ca_certificate = "${base64decode(google_container_cluster.default.master_auth.0.cluster_ca_certificate)}"
load_config_file = false
}
OR
Give permissions to the default "client" user.
But you need valid authentication on the GKE cluster provider to run this :/ oops, circular dependency here.
resource "kubernetes_cluster_role_binding" "default" {
metadata {
name = "client-certificate-cluster-admin"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "cluster-admin"
}
subject {
kind = "User"
name = "client"
api_group = "rbac.authorization.k8s.io"
}
subject {
kind = "ServiceAccount"
name = "default"
namespace = "kube-system"
}
subject {
kind = "Group"
name = "system:masters"
api_group = "rbac.authorization.k8s.io"
}
}
It looks like the user that you are using is missing the required RBAC role for creating deployments. Make sure that user has the correct verbs for the deployments resource. You can take a look at these Role examples to get an idea; a minimal Terraform sketch is shown below.
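This sketch uses the Terraform kubernetes provider to stay consistent with the rest of the question; the role name and verb list are assumptions, so trim them to what the client user actually needs (and note the circular-dependency caveat from the answer above, since applying it requires working cluster credentials).
# Hypothetical namespaced Role and RoleBinding granting the "client" user
# the verbs needed to manage Deployments in the default namespace.
resource "kubernetes_role" "deployment_manager" {
  metadata {
    name      = "deployment-manager"
    namespace = "default"
  }
  rule {
    api_groups = ["apps"]
    resources  = ["deployments"]
    verbs      = ["get", "list", "watch", "create", "update", "patch", "delete"]
  }
}

resource "kubernetes_role_binding" "deployment_manager" {
  metadata {
    name      = "deployment-manager-binding"
    namespace = "default"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Role"
    name      = kubernetes_role.deployment_manager.metadata[0].name
  }
  subject {
    kind      = "User"
    name      = "client"
    api_group = "rbac.authorization.k8s.io"
  }
}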
You need to provide both (the basic-auth credentials and the certificates). Check this example of how to integrate the Kubernetes provider with the Google provider.
Example of how to configure the Kubernetes provider:
provider "kubernetes" {
host = "${var.host}"
username = "${var.username}"
password = "${var.password}"
client_certificate = "${base64decode(var.client_certificate)}"
client_key = "${base64decode(var.client_key)}"
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
}