I have a file with the content below:
- groups:
    - system:bootstrappers
    - system:nodes
  rolearn: arn:aws:iam::1234566:role/radeks-project-us-east-1-NodeInstanceRole
  username: system:node:{{EC2PrivateDNSName}}
I want to rewrite this file so that eks and iammappings become the first two lines, like this:
eks:
  iammappings:
    - groups:
        - system:bootstrappers
        - system:nodes
      rolearn: arn:aws:iam::1234566:role/radeks-project-us-east-1-NodeInstanceRole
      username: system:node:{{EC2PrivateDNSName}}
I tried yq merge, but it didn't work for me. Please let me know how to do this.
There is a specific tool for parsing YAML in bash, namely yq, analogous to jq.
Link - https://github.com/mikefarah/yq
You have to modify the source YAML file as follows:
- groups:
    - system:bootstrappers
    - system:nodes
- rolearn: arn:aws:iam::1234566:role/radeks-project-us-east-1-NodeInstanceRole
- username: system:node:{{EC2PrivateDNSName}}
Otherwise yq won't accept it as a proper YAML file.
Next, to get the job done, use the following command:
yq p -i file.yaml 'eks.iammappings'
The above command uses the prefix function and updates the file in place. The contents of the file will then be as follows:
eks:
  iammappings:
    - groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: arn:aws:iam::1234566:role/radeks-project-us-east-1-NodeInstanceRole
    - username: system:node:{{EC2PrivateDNSName}}
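Note that yq p is yq v3 syntax. If you are on yq v4, where the commands were replaced by expressions, something like the following (an untested sketch) should give the same result:
# wrap the whole document under eks.iammappings, editing file.yaml in place
yq -i '{"eks": {"iammappings": .}}' file.yaml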
I have a cloudbuild.yaml file where I'm trying to use the helm image.
Inside my step I want to have access to secrets from GCP Secret Manager, but I cannot use them in the regular way, similarly to this case.
Is it possible to use the "helm step" with secrets from GCP Secret Manager?
Something like this:
- name: gcr.io/$PROJECT_ID/helm
  entrypoint: 'bash'
  args:
    - -c
    - |
      helm upgrade $_NAME ./deployment/charts/$_NAME --namespace $_NAMESPACE --set secret.var3="$$VAR3"
[EDIT]
To be more precise, here is what my cloudbuild looks like and how it should work.
When I use the "helm step" in the classic way:
steps:
  - name: gcr.io/$PROJECT_ID/helm
    args:
      - upgrade
      - "$_NAME"
      - "./deployment/charts/$_NAME"
      - "--namespace"
      - "$_NAMESPACE"
      - "--set"
      - "secret.var3=$$VAR3"
    env:
      - "CLOUDSDK_COMPUTE_ZONE=$_GKE_LOCATION"
      - "CLOUDSDK_CONTAINER_CLUSTER=$_GKE_CLUSTER"
    secretEnv: ['VAR3']
    id: Apply deploy
substitutions:
  _GKE_LOCATION: europe-west3-b
  _GKE_CLUSTER: cluster-name
  _NAME: "test"
  _NAMESPACE: "test"
availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_ID/secrets/test-var-3/versions/latest
      env: 'VAR3'
options:
  substitution_option: 'ALLOW_LOOSE'
the step works fine, but my variable VAR3 ends up equal to the literal "$VAR3" rather than the value behind it, so following the documentation I tried something like this:
steps:
  - name: gcr.io/$PROJECT_ID/helm
    entrypoint: 'helm'
    args:
      - |
        upgrade $_NAME ./deployment/charts/$_NAME --namespace $_NAMESPACE --set secret.var3="$$VAR3"
    env:
      - "CLOUDSDK_COMPUTE_ZONE=$_GKE_LOCATION"
      - "CLOUDSDK_CONTAINER_CLUSTER=$_GKE_CLUSTER"
    secretEnv: ['VAR3']
    id: Apply deploy
substitutions:
  _GKE_LOCATION: europe-west3-b
  _GKE_CLUSTER: cluster-name
  _NAME: "test"
  _NAMESPACE: "test"
availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_ID/secrets/test-var-3/versions/latest
      env: 'VAR3'
options:
  substitution_option: 'ALLOW_LOOSE'
but then I got an error:
UPGRADE FAILED: Kubernetes cluster unreachable: Get
"http://localhost:8080/version?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
You forgot to use secretEnv as shown in the example.
Example:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args: ['-c', 'docker login --username=$$USERNAME --password=$$PASSWORD']
    secretEnv: ['USERNAME', 'PASSWORD']
Read more about it: https://cloud.google.com/build/docs/securing-builds/use-secrets#access-utf8-secrets
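Applied to the helm step from the question, that pattern would look roughly like the sketch below. One caveat: overriding the entrypoint with bash likely bypasses whatever the gcr.io/$PROJECT_ID/helm image normally does with the CLOUDSDK_* variables to fetch cluster credentials, which would explain the "Kubernetes cluster unreachable ... localhost:8080" error. The sketch is untested and assumes gcloud is available in that image, so it fetches the credentials explicitly first:
steps:
  - name: gcr.io/$PROJECT_ID/helm
    entrypoint: 'bash'
    args:
      - -c
      - |
        # fetch kubeconfig credentials explicitly, since the bash entrypoint skips the image's own startup logic
        gcloud container clusters get-credentials "$_GKE_CLUSTER" --zone "$_GKE_LOCATION"
        # $$VAR3 is escaped so Cloud Build leaves it for bash, which resolves it from secretEnv
        helm upgrade $_NAME ./deployment/charts/$_NAME --namespace $_NAMESPACE --set secret.var3="$$VAR3"
    secretEnv: ['VAR3']
    id: Apply deploy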
I'm certain I'm just looking for the right syntax here and need help finding it. I'm trying to create a configuration where the Terraform code is all in one repo (CodeRepo) but the .tfvars files are in another repo (ConfigRepo). The pipeline is run from CodeRepo. In the YAML, the relevant parts are:
parameters:
  - name: Environment
    type: string
    values:
      - dev
      - qa
      - prod
  - name: applyOrDestroy
    type: string
    values:
      - apply
      - destroy
  - name: Role
    type: string
    values:
      - iam-role-automation
      - iam-role-testing
      - iam-role-debug
resources:
  repositories:
    - repository: ConfigRepository
      type: git
      name: TechOps/ConfigRepo
terraform -chdir="modules/${{parameters.Role}}" ${{parameters.applyOrDestroy}} -auto-approve -input=false -var-file ConfigRepository/vars/${{parameters.Environment}}/terraform.tfvars
The contents of the ConfigRepo are a directory named "vars", which has a subdirectory named "dev", which contains the terraform.tfvars file.
What syntax should I use after -var-file in the Terraform command above so that it reads the terraform.tfvars file from the ConfigRepo?
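Not an authoritative answer, but one common approach is to add explicit checkout steps for both repositories with a path: (relative to $(Agent.BuildDirectory)), which removes any guessing about where the resource repository lands, and then pass an absolute path to -var-file. The paths and workingDirectory below are illustrative assumptions, not values from the question:
steps:
  - checkout: self
    path: s/CodeRepo              # hypothetical explicit location for the Terraform code repo
  - checkout: ConfigRepository
    path: s/ConfigRepo            # hypothetical explicit location for the tfvars repo
  - script: |
      terraform -chdir="modules/${{ parameters.Role }}" ${{ parameters.applyOrDestroy }} \
        -auto-approve -input=false \
        -var-file "$(Agent.BuildDirectory)/s/ConfigRepo/vars/${{ parameters.Environment }}/terraform.tfvars"
    workingDirectory: $(Agent.BuildDirectory)/s/CodeRepo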
I have a playbook and only want to run this play on the first master node. I tried moving the list into the role, but that did not seem to work. Thanks for your help!
## master node only changes
- name: Deploy change kubernetes Master
  remote_user: tyboard
  become: true
  roles:
    - role: gd.kubernetes.master.role
      files_location: ../files
  delegate_to: "{{ groups['masters'][0] }}"
ERROR! 'delegate_to' is not a valid attribute for a Play
The error appears to be in '/mnt/win/kubernetes.playbook/deploy-kubernetes.yml': line 11, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
## master node only changes
- name: Deploy change kubernetes Master
  ^ here
In one playbook, create a new group with this host in the first play and use it in the second play. For example,
shell> cat playbook.yml
- name: Create group with masters.0
  hosts: localhost
  gather_facts: false
  tasks:
    - add_host:
        name: "{{ groups.masters.0 }}"
        groups: k8s_master_0

- name: Deploy change kubernetes Master
  hosts: k8s_master_0
  remote_user: tyboard
  become: true
  roles:
    - role: gd.kubernetes.master.role
      files_location: ../files
(not tested)
Fix the role name
If files_location is a variable that should be used in the role's scope, put it under vars. For example:
roles:
  - role: gd.kubernetes.master.role
    vars:
      files_location: ../files
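A simpler alternative, assuming masters is a group defined in your inventory, is to target its first member directly with a host subscript pattern instead of building a helper group (a sketch, not tested):
- name: Deploy change kubernetes Master
  hosts: masters[0]          # subscript pattern: first host in the masters group
  remote_user: tyboard
  become: true
  roles:
    - role: gd.kubernetes.master.role
      vars:
        files_location: ../files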
I would like to give a certain team access to the system:masters group in RBAC. My team (AWSReservedSSO_Admin_xxxxxxxxxx in the example below) already has it, and it works when I only add that one rolearn. However, when I apply the configmap below with the additional rolearn, users under the AWSReservedSSO_Dev_xxxxxxxxxx role still get this error when trying to access the cluster: error: You must be logged in to the server (Unauthorized)
(Note: we are using AWS SSO, so the IAM roles are assumed.)
---
apiVersion: v1
kind: ConfigMap
data:
  mapRoles: |
    - rolearn: arn:aws:iam::xxxxxxxxxxx:role/eks-node-group
      groups:
        - system:bootstrappers
        - system:nodes
      username: system:node:{{EC2PrivateDNSName}}
    - rolearn: arn:aws:iam::xxxxxxxxxxx:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_Admin_xxxxxxxxxx
      groups:
        - system:masters
      username: admin
    - rolearn: arn:aws:iam::xxxxxxxxxxx:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_Dev_xxxxxxxxxx
      groups:
        - system:masters
      username: admin
metadata:
  name: aws-auth
  namespace: kube-system
I'm not sure how you are assuming the roles ❓ and your configuration looks fine, but the reason could be that you are mapping the same username to two different roles. AWS IAM only allows a user to assume one role at a time; basically, as an AWS IAM user, you can't assume multiple IAM roles at the same time.
You can try different usernames and see if that works for you.
---
apiVersion: v1
kind: ConfigMap
data:
  mapRoles: |
    - rolearn: arn:aws:iam::xxxxxxxxxxx:role/eks-node-group
      groups:
        - system:bootstrappers
        - system:nodes
      username: system:node:{{EC2PrivateDNSName}}
    - rolearn: arn:aws:iam::xxxxxxxxxxx:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_Admin_xxxxxxxxxx
      groups:
        - system:masters
      username: admin
    - rolearn: arn:aws:iam::xxxxxxxxxxx:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_Dev_xxxxxxxxxx
      groups:
        - system:masters
      username: admin2
metadata:
  name: aws-auth
  namespace: kube-system
The other aspect that you may be missing is the 'Trust Relationship' 🤝 in your arn:aws:iam::xxxxxxxxxxx:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_Dev_xxxxxxxxxx role that allows admin to assume the role.
✌️☮️
Thanks Rico. When you sign in with SSO, you are assuming a role in STS. You can verify this by running aws sts get-caller-identity.
You were right that the username was wrong, but it didn't solve the whole issue.
It took a long time, but my teammate finally found the solution in this guide.
The problem was the ARN for the IAM Role:
rolearn: arn:aws:iam::xxxxxxxxxxx:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_Dev_xxxxxxxxxx
The aws-reserved/sso.amazonaws.com/ part needs to be removed from the ARN. So in the end, combined with Rico's suggested username fix:
---
apiVersion: v1
kind: ConfigMap
data:
  mapRoles: |
    - rolearn: arn:aws:iam::xxxxxxxxxxx:role/eks-node-group
      groups:
        - system:bootstrappers
        - system:nodes
      username: system:node:{{EC2PrivateDNSName}}
    - rolearn: arn:aws:iam::xxxxxxxxxxx:role/AWSReservedSSO_Admin_xxxxxxxxxx
      groups:
        - system:masters
      username: admin
    - rolearn: arn:aws:iam::xxxxxxxxxxx:role/AWSReservedSSO_Dev_xxxxxxxxxx
      groups:
        - system:masters
      username: admin2
metadata:
  name: aws-auth
  namespace: kube-system
The issue is finally fixed, and SSO users assuming the role can run kubectl commands!
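As a sanity check for anyone hitting the same problem, comparing what STS reports for your SSO session against the mapped rolearn makes the mismatch visible (account IDs and the session name below are placeholders):
# Shows the role the SSO session actually assumed, something like
#   arn:aws:sts::xxxxxxxxxxx:assumed-role/AWSReservedSSO_Dev_xxxxxxxxxx/user.name
aws sts get-caller-identity

# The matching aws-auth entry must reference the IAM role ARN without the
# aws-reserved/sso.amazonaws.com/ path segment:
#   rolearn: arn:aws:iam::xxxxxxxxxxx:role/AWSReservedSSO_Dev_xxxxxxxxxx
kubectl -n kube-system get configmap aws-auth -o yaml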
I have two jobs, build and publish, and I want publish to trigger after build is done. So I am using an external resource, gcs-resource (https://github.com/frodenas/gcs-resource).
Following is my pipeline.yml:
---
resource_types:
  - name: gcs-resource
    type: docker-image
    source:
      repository: frodenas/gcs-resource
resources:
  - name: proj-repo
    type: git
    source:
      uri: <my uri>
      branch: develop
      username: <username>
      password: <password>
  - name: proj-gcr
    type: docker-image
    source:
      repository: asia.gcr.io/myproject/proj
      tag: develop
      username: _json_key
      password: <my password>
  - name: proj-build-output
    type: gcs-resource
    source:
      bucket: proj-build-deploy
      json_key: <my key>
      regexp: Dockerfile
jobs:
  - name: build
    serial_groups: [proj-build-deploy]
    plan:
      - get: proj
        resource: proj-repo
      - task: build
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: node, tag: 10.13.0}
          inputs:
            - name: proj
          run:
            path: sh
            args:
              - -exc
              - |
                <do something>
      - put: proj-build-output
        params:
          file: proj/Dockerfile
          content_type: application/octet-stream
  - name: publish
    serial_groups: [proj-build-deploy]
    plan:
      - get: proj-build-output
        trigger: true
        passed: [build]
      - put: proj-gcr
        params:
          build: proj-build-output
I am using the external resource proj-build-output to trigger the next job. I can run the individual jobs without any problem; however, the publish job doesn't automatically get triggered after the build job completes.
Am I missing something?
The regexp of the gcs-resource is misconfigured:
...
regexp: Dockerfile
...
whereas regexp, like in the original S3 resource it comes from, expects:
regexp: the pattern to match filenames against within GCS. The first grouped match is used to extract the version, or if a group is explicitly named version, that group is used.
The example configuration at https://github.com/frodenas/gcs-resource#example-configuration shows its correct usage:
regexp: directory_on_gcs/release-(.*).tgz
This is not specific to the GCS or S3 resource; Concourse needs a "version" to move artifacts from jobs to storage and back. It is one of the fundamental concepts of Concourse. See https://web.archive.org/web/20171205105324/http://concourse.ci:80/versioned-s3-artifacts.html for an example.
As Marco mentioned, the problem was with versioning.
I solved my issue using these two steps:
1. Enabled versioning on my GCS bucket: https://cloud.google.com/storage/docs/object-versioning#_Enabling
2. Replaced regexp with versioned_file as described in the docs: https://github.com/frodenas/gcs-resource#file-names
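For reference, the changed resource definition would then look roughly like this (same placeholders as in the original pipeline):
resources:
  - name: proj-build-output
    type: gcs-resource
    source:
      bucket: proj-build-deploy
      json_key: <my key>
      versioned_file: Dockerfile   # requires object versioning to be enabled on the bucket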