I have the below policy in Vault:
path "/secrets/global/*" { capabilities = ["read", "create", "update", "delete", "list"] }
Will this policy grant me access to all the paths under global, like:
/secrets/global/common/*
/secrets/global/notsocommoon/app1/*
/secrets/global/notsocommoon/app1/module1/*
Yes. Vault will grant all the listed capabilities on /secrets/global/ and its child paths.
As we can add multiple paths to the same policy, if we want to restrict a particular path to fewer capabilities, we can do that like this:
#mypolicy.hcl
path "/secrets/global/*" { capabilities = ["read", "create", "update", "delete", "list"] }
path "/secrets/global/myteam/passwords/*" { capabilities = ["read"] }
The scenario is: we need to create resources in Azure through Terraform Cloud only, using service principals, and the Azure portal shouldn't allow creation of resources by any other means.
(No manual creation in the Azure portal, and also none via developer tools such as VS Code.)
The following is the code I tried:
resource "azurerm_policy_definition" "policych" {
name = "PolicyTestSP"
policy_type = "Custom"
mode = "Indexed"
display_name = "Allow thru SP"
metadata = <<METADATA
{
"category": "General"
}
METADATA
policy_rule = <<POLICY_RULE
{
"if": {
"not": {
"field": "type",
"equals": "https://app.terraform.io/"
}
},
"then": {
"effect": "deny"
}
}
POLICY_RULE
}
resource "azurerm_policy_assignment" "policych" {
name = "Required servive-principles"
display_name = "Allow thru SP"
description = " Require SP Authentication'"
policy_definition_id = "${azurerm_policy_definition.policych.id}"
scope = "/subscriptions/xxxx-xxxx-xxxx-xxxx"
}
I have an Azure Key Vault (KV) that holds shared secrets and a cert that need to be pulled into different deployments.
E.g. DEV, TEST, UAT, and Production all have their own key vaults BUT need access to the shared KV for a wildcard SSL cert.
I've tried a number of approaches, but each has errors. I'm doing something similar for the KV within the deployment resource group without issues.
Is it possible to have this and then use it as a module? Something like this...
sharedKV.bicep
var kvResourceGroup = 'project-shared-rg'
var subscriptionId = subscription().id
var name = 'project-shared-kv'

resource project_shared_kv 'Microsoft.KeyVault/vaults@2021-06-01-preview' existing = {
  name: name
  scope: resourceGroup(subscriptionId, kvResourceGroup)
}
And then use it like:
template.bicep
module shared_kv './sharedKeyVault/template.bicep' = {
  name: 'sharedKeyVault'
}

resource add_secret 'Microsoft.KeyVault/vaults/secrets@2021-06-01-preview' = {
  name: '${shared_kv.name}/mySecretKey'
  properties: {
    contentType: 'string'
    value: 'secretValue'
    attributes: {
      enabled: true
    }
  }
}
If you need to target a different resourceGroup (and/or subscription) than the rest of the deployment, the module's scope property needs to target that RG/sub, e.g.:
module shared_kv './sharedKeyVault/template.bicep' = {
  scope: resourceGroup(kvSubscription, kvResourceGroupName)
  name: 'sharedKeyVault'
  params: {
    subId: kvSubscription
    rg: kvResourceGroupName
    ...
  }
}
Ideally, the sub/rg for the KV would be passed in to the module rather than hardcoded (which you probably knew, but just in case...)
I run a Kubernetes cluster on AWS managed by Terraform. I'd like to automatically restart the pods in the cluster at some regular interval, maybe weekly. Since the entire cluster is managed by Terraform, I'd like to run the auto-restart command through Terraform as well.
At first I assumed that Kubernetes would have some kind of TTL for its pods, but that does not seem to be the case.
Elsewhere on SO I've seen the ability to run auto restarts using a cron job managed by Kubernetes (e.g. How to schedule pods restart). Terraform has a relevant resource -- the kubernetes_cron_job -- but I can't fully understand how to set it up with the permissions necessary to actually run.
Would appreciate some feedback!
Below is what I've tried:
resource "kubernetes_cron_job" "deployment_restart" {
metadata {
name = "deployment-restart"
}
spec {
concurrency_policy = "Forbid"
schedule = "0 8 * * *"
starting_deadline_seconds = 10
successful_jobs_history_limit = 10
job_template {
metadata {}
spec {
backoff_limit = 2
active_deadline_seconds = 600
template {
metadata {}
spec {
service_account_name = var.service_account.name
container {
name = "kubectl"
image = "bitnami/kubectl"
command = ["kubectl rollout restart deploy"]
}
}
}
}
}
}
}
resource "kubernetes_role" "deployment_restart" {
metadata {
name = "deployment-restart"
}
rule {
api_groups = ["apps", "extensions"]
resources = ["deployments"]
verbs = ["get", "list", "patch", "watch"]
}
}
resource "kubernetes_role_binding" "deployment_restart" {
metadata {
name = "deployment-restart"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "Role"
name = kubernetes_role.deployment_restart.metadata[0].name
}
subject {
kind = "ServiceAccount"
name = var.service_account.name
api_group = "rbac.authorization.k8s.io"
}
}
This was based on a combination of Granting RBAC roles in k8s cluster using terraform and How to schedule pods restart.
Currently getting the following error:
Error: RoleBinding.rbac.authorization.k8s.io "deployment-restart" is invalid: subjects[0].apiGroup: Unsupported value: "rbac.authorization.k8s.io": supported values: ""
As per the official documentation, rolebinding.subjects.apiGroup for ServiceAccount subjects should be empty.
$ kubectl explain rolebinding.subjects.apiGroup
KIND:     RoleBinding
VERSION:  rbac.authorization.k8s.io/v1

FIELD:    apiGroup

DESCRIPTION:
     APIGroup holds the API group of the referenced subject. Defaults to "" for
     ServiceAccount subjects. Defaults to "rbac.authorization.k8s.io" for User
     and Group subjects.
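So the fix is to drop (or empty) the api_group on the ServiceAccount subject. A minimal sketch of the corrected binding, keeping the same names as above (a namespace may also be needed on the subject, depending on where the ServiceAccount lives):
resource "kubernetes_role_binding" "deployment_restart" {
  metadata {
    name = "deployment-restart"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Role"
    name      = kubernetes_role.deployment_restart.metadata[0].name
  }
  subject {
    kind = "ServiceAccount"
    name = var.service_account.name
    # api_group is omitted here: for ServiceAccount subjects it must be the empty group "".
  }
}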
I want to assign an RBAC rule to a user, granting access to all resources except the 'create' and 'delete' verbs on the 'namespace' resource, using Terraform.
Currently we have the rule stated below:
rule {
  api_groups = ["*"]
  resources  = ["*"]
  verbs      = ["*"]
}
As we can find in the Role and ClusterRole documentation, permissions (rules) are purely additive - there are no "deny" rules:
Role and ClusterRole
An RBAC Role or ClusterRole contains rules that represent a set of permissions. Permissions are purely additive (there are no "deny" rules).
The list of possible verbs can be found in the Kubernetes authorization documentation.
You need to provide all verbs that should be applied to the resources contained in the rule.
Instead of:
verbs = ["*"]
provide the required verbs, e.g.:
verbs = ["get", "list", "patch", "update", "watch"]
As an example, I've created an example-role Role and an example_role_binding RoleBinding.
The example_role_binding RoleBinding grants the permissions defined in the example-role Role to user john.
NOTE: For details on using the following resources, see the kubernetes_role and kubernetes_role_binding resource documentation.
resource "kubernetes_role" "example_role" {
metadata {
name = "example-role"
namespace = "default"
}
rule {
api_groups = ["*"]
resources = ["*"]
verbs = ["get", "list", "patch", "update", "watch"]
}
}
resource "kubernetes_role_binding" "example_role_binding" {
metadata {
name = "example_role_binding"
namespace = "default"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "Role"
name = "example-role"
}
subject {
kind = "User"
name = "john"
api_group = "rbac.authorization.k8s.io"
}
}
Additionally, I've created the test_user.sh Bash script to quickly check if it works as expected:
NOTE: You may need to modify the variables namespace, resources, and user to fit your needs.
$ cat test_user.sh
#!/bin/bash
namespace=default
resources="pods deployments"
user=john

echo "=== NAMESPACE: ${namespace} ==="
for verb in create delete get list patch update watch; do
  echo "-- ${verb} --"
  for resource in ${resources}; do
    echo -n "${resource}: "
    kubectl auth can-i ${verb} ${resource} -n ${namespace} --as=${user}
  done
done
$ ./test_user.sh
=== NAMESPACE: default ===
-- create --
pods: no
deployments: no
-- delete --
pods: no
deployments: no
-- get --
pods: yes
deployments: yes
-- list --
pods: yes
deployments: yes
...
I'm using Terraform to deploy API Gateway/Lambda and already have the appropriate logs in CloudWatch. However, I can't seem to find a way to set the retention on the logs via Terraform using my currently deployed resources (below). It looks like the log group resource is where I'd do it, but I'm not sure how to point the log stream from API Gateway at the new log group. I must be missing something obvious ... any advice is very much appreciated!
resource "aws_api_gateway_account" "name" {
cloudwatch_role_arn = "${aws_iam_role.cloudwatch.arn}"
}
resource "aws_iam_role" "cloudwatch" {
name = "#{name}_APIGatewayCloudWatchLogs"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "apigateway.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
resource "aws_iam_policy_attachment" "api_gateway_logs" {
name = "#{name}_api_gateway_logs_policy_attach"
roles = ["${aws_iam_role.cloudwatch.id}"]
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs"
}
resource "aws_api_gateway_method_settings" "name" {
rest_api_id = "${aws_api_gateway_rest_api.name.id}"
stage_name = "${aws_api_gateway_stage.name.stage_name}"
method_path = "${aws_api_gateway_resource.name.path_part}/${aws_api_gateway_method.name.http_method}"
settings {
metrics_enabled = true
logging_level = "INFO"
data_trace_enabled = true
}
}
Yes, you can use the Lambda log group name to create the log group resource before you create the Lambda function, or you can import the existing log groups.
resource "aws_cloudwatch_log_group" "lambda" {
name = "/aws/lambda/${var.env}-${join("", split("_",title(var.lambda_name)))}-Lambda"
retention_in_days = 7
lifecycle {
create_before_destroy = true
prevent_destroy = false
}
}
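The same pre-creation pattern can cover the API Gateway side of the question: API Gateway writes execution logs to a log group named API-Gateway-Execution-Logs_{rest-api-id}/{stage-name}, so defining that group yourself lets you control its retention. A hedged sketch reusing the resource names from the question (the log group resource label here is illustrative):
resource "aws_cloudwatch_log_group" "api_gateway_execution" {
  # API Gateway uses exactly this naming convention for a stage's execution logs.
  name              = "API-Gateway-Execution-Logs_${aws_api_gateway_rest_api.name.id}/${aws_api_gateway_stage.name.stage_name}"
  retention_in_days = 7
}
If API Gateway has already created that group, terraform import aws_cloudwatch_log_group.api_gateway_execution <log-group-name> brings the existing group under Terraform's management instead.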