GCP Service Account Credentials Ignored? - gcloud

Until yesterday I was able to run an application on GCP that listens to Pub/Sub and writes data to Bigtable, but as of today my authentication no longer seems to be valid.
Here's how I proceed:
I create on the fly a service account:
gcloud iam service-accounts create ${SERVICE_ACCOUNT} \
--display-name "$(whoami) dev account" --project ${PROJECT_ID}
I create a JSON key file for this account:
gcloud iam service-accounts keys create \
"auth-${SERVICE_ACCOUNT}@${PROJECT_ID}.json" --iam-account=${IAM_ACCOUNT} \
--project ${PROJECT_ID}
I create a Kubernetes secret from this JSON file:
kubectl create secret generic ingester-key \
--from-file=key.json="auth-${SERVICE_ACCOUNT}@${PROJECT_ID}.json"
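For reference, the usual way an application consumes such a secret is by mounting it as a volume and pointing GOOGLE_APPLICATION_CREDENTIALS at the mounted key file. A minimal sketch of that wiring (the pod name, image, and mount path below are hypothetical, not taken from my setup):

# Minimal sketch: mount the "ingester-key" secret and point
# GOOGLE_APPLICATION_CREDENTIALS at the mounted key file.
# Pod name, image, and mount path are hypothetical.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: ingester
spec:
  containers:
  - name: ingester
    image: gcr.io/PROJECT_ID/ingester:latest   # placeholder image
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /var/secrets/google/key.json
    volumeMounts:
    - name: google-cloud-key
      mountPath: /var/secrets/google
      readOnly: true
  volumes:
  - name: google-cloud-key
    secret:
      secretName: ingester-key
EOF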
I bind this account to the relevant roles; the project's IAM policy bindings are shown below (a sketch of the equivalent gcloud commands follows the list):
bindings:
- members:
  - serviceAccount:marcello-dev@noisy-turtle-20171031.iam.gserviceaccount.com
  - serviceAccount:service-480932822351@container-engine-robot.iam.gserviceaccount.com
  role: roles/bigtable.admin
- members:
  - serviceAccount:marcello-dev@noisy-turtle-20171031.iam.gserviceaccount.com
  role: roles/bigtable.user
- members:
  - serviceAccount:service-480932822351@container-engine-robot.iam.gserviceaccount.com
  role: roles/container.serviceAgent
- members:
  - serviceAccount:480932822351-compute@developer.gserviceaccount.com
  - serviceAccount:480932822351@cloudservices.gserviceaccount.com
  - serviceAccount:service-480932822351@containerregistry.iam.gserviceaccount.com
  role: roles/editor
- members:
  - user:marcello@XXXEDITEDXXX
  role: roles/owner
- members:
  - serviceAccount:service-480932822351@container-engine-robot.iam.gserviceaccount.com
  role: roles/pubsub.admin
- members:
  - serviceAccount:marcello-dev@noisy-turtle-20171031.iam.gserviceaccount.com
  role: roles/pubsub.editor
- members:
  - serviceAccount:marcello-dev@noisy-turtle-20171031.iam.gserviceaccount.com
  role: roles/storage.admin
- members:
  - serviceAccount:marcello-dev@noisy-turtle-20171031.iam.gserviceaccount.com
  role: roles/storage.objectAdmin
- members:
  - serviceAccount:marcello-dev@noisy-turtle-20171031.iam.gserviceaccount.com
  role: roles/storage.objectCreator
- members:
  - serviceAccount:marcello-dev@noisy-turtle-20171031.iam.gserviceaccount.com
  role: roles/storage.objectViewer
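(Bindings like the ones above are typically created with gcloud projects add-iam-policy-binding. A minimal sketch reusing the variables from the earlier steps; repeat for each role you need:)

# Grant the dev service account a couple of the roles listed above,
# then inspect the resulting policy (that is where the YAML above comes from).
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member "serviceAccount:${SERVICE_ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role roles/bigtable.user

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member "serviceAccount:${SERVICE_ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role roles/pubsub.editor

gcloud projects get-iam-policy ${PROJECT_ID}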
I get cluster credentials for kubectl:
gcloud container clusters get-credentials ingester-cluster --zone us-east1-b --project noisy-turtle-20171031
Then I launch the application on the Kubernetes cluster (not shown here for brevity), and in the log file I print out the credentials I am using, to verify that the service account is correct:
"Account: ServiceAccountCredentials{clientId\u003d117494744145141605372, clientEmail\u003dmarcello-dev#noisy-turtle-20171031.iam.gserviceaccount.com, privateKeyId\u003dad79da59c0a75c2b358d530d63d9a8898523f3cb, transportFactoryClassName\u003dcom.google.auth.oauth2.OAuth2Utils$DefaultHttpTransportFactory, tokenServerUri\u003dhttps://accounts.google.com/o/oauth2/token, scopes\u003d[], serviceAccountUser\u003dnull}"
and find that clientEmail matches my service account.
A few cycles later, the application crashes:
Exception in thread "main" java.io.IOException: Failed to listTables
...
Caused by: io.grpc.StatusRuntimeException: PERMISSION_DENIED: Access denied. Missing IAM permission: bigtable.tables.list.
Any ideas?

I found the missing part: after step 2, you need to activate the service account with its key file:
gcloud auth activate-service-account ${IAM_ACCOUNT} --key-file="auth-${SERVICE_ACCOUNT}@${PROJECT_ID}.json"
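To double-check that the activation took effect, something like the following can help (a sketch; it only verifies which account is active and which roles the project policy grants it):

# Confirm the service account is now the active gcloud credential.
gcloud auth list

# List the roles bound to it on the project.
gcloud projects get-iam-policy ${PROJECT_ID} \
  --flatten="bindings[].members" \
  --filter="bindings.members:${IAM_ACCOUNT}" \
  --format="table(bindings.role)"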

Related

IAM permission to retrieve LaunchTemplate's LatestVersionNumber attribute in AWS::AutoScaling::AutoScalingGroup

While developing a CloudFormation template, I am following the principle of least privilege, so I provide CloudFormation a role to assume that has the minimal set of privileges.
The template contains an AWS::AutoScaling::AutoScalingGroup which is based on an AWS::EC2::LaunchTemplate:
ECSAutoScalingGroup:
  DependsOn: ECSCluster
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    VPCZoneIdentifier: !Ref Subnets
    LaunchTemplate:
      LaunchTemplateId: !Ref ECSLaunchTemplate
      Version: !GetAtt ECSLaunchTemplate.LatestVersionNumber
    MinSize: 1
    MaxSize: 2
    DesiredCapacity: 1
    ...
ECSLaunchTemplate:
  Type: AWS::EC2::LaunchTemplate
  Metadata:
    AWS::CloudFormation::Init:
      ...
  Properties:
    LaunchTemplateName: test-template
    LaunchTemplateData:
      ...
When I create a CloudFormation stack out of this template, I get the following error on the ECSAutoScalingGroup resource:
Failed to retrieve attribute [LatestVersionNumber] for resource
[ECSLaunchTemplate]: You are not authorized to perform this operation.
(Service: AmazonEC2; Status Code: 403; Error Code:
UnauthorizedOperation; Request ID:
e0f01fd0-ee2a-4260-94f4-3c65177d05ee; Proxy: null)
Which IAM policy should I add to the IAM role that is assumed by CloudFormation? Clearly, if I give it AdministratorAccess, it succeeds. However, I would like to follow the principle of least privilege.
Any ideas?
Thanks.
Answering my own question here. One should add these two actions to the IAM role policy:
ec2:DescribeLaunchTemplates
ec2:DescribeLaunchTemplateVersions
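For example, attached as an inline policy to the CloudFormation role, it could look roughly like this (the role and policy names are placeholders, not from my stack):

# Hypothetical role/policy names; only the two Describe* actions are granted.
aws iam put-role-policy \
  --role-name my-cloudformation-role \
  --policy-name launch-template-read \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeLaunchTemplates",
        "ec2:DescribeLaunchTemplateVersions"
      ],
      "Resource": "*"
    }]
  }'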

How can I assign the same RBAC role to two different IAM roles to access a cluster in EKS?

I would like to give a certain team access to the system:masters group in RBAC. My team (AWSReservedSSO_Admin_xxxxxxxxxx in the example below) already has it, and it works when I only add that one rolearn. But when I apply the configmap below with the additional rolearn, users under the AWSReservedSSO_Dev_xxxxxxxxxx role still get this error when trying to access the cluster: error: You must be logged in to the server (Unauthorized)
(note: we are using AWS SSO, so the IAM roles are assumed):
---
apiVersion: v1
kind: ConfigMap
data:
  mapRoles: |
    - rolearn: arn:aws:iam::xxxxxxxxxxx:role/eks-node-group
      groups:
        - system:bootstrappers
        - system:nodes
      username: system:node:{{EC2PrivateDNSName}}
    - rolearn: arn:aws:iam::xxxxxxxxxxx:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_Admin_xxxxxxxxxx
      groups:
        - system:masters
      username: admin
    - rolearn: arn:aws:iam::xxxxxxxxxxx:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_Dev_xxxxxxxxxx
      groups:
        - system:masters
      username: admin
metadata:
  name: aws-auth
  namespace: kube-system
I'm not sure how you are assuming the roles ❓ and your configuration looks fine, but the reason could be that you are mapping the same username to two different roles. AWS IAM only allows a user to assume one role at a time; basically, as an AWS IAM user, you can't assume multiple IAM roles at the same time.
You can try with different usernames and see if that works for you.
---
apiVersion: v1
kind: ConfigMap
data:
  mapRoles: |
    - rolearn: arn:aws:iam::xxxxxxxxxxx:role/eks-node-group
      groups:
        - system:bootstrappers
        - system:nodes
      username: system:node:{{EC2PrivateDNSName}}
    - rolearn: arn:aws:iam::xxxxxxxxxxx:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_Admin_xxxxxxxxxx
      groups:
        - system:masters
      username: admin
    - rolearn: arn:aws:iam::xxxxxxxxxxx:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_Dev_xxxxxxxxxx
      groups:
        - system:masters
      username: admin2
metadata:
  name: aws-auth
  namespace: kube-system
The other aspect that you may be missing is the 'Trust Relationship' 🤝 in your arn:aws:iam::xxxxxxxxxxx:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_Dev_xxxxxxxxxx role that allows admin to assume the role.
✌️☮️
Thanks Rico. When you sign in with SSO, you are assuming a role in STS. You can verify this by running aws sts get-caller-identity.
You were right that the username was wrong, but it didn't solve the whole issue.
It took a long time, but my teammate finally found the solution for this in this guide.
The problem was the ARN for the IAM Role:
rolearn: arn:aws:iam::xxxxxxxxxxx:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_Dev_xxxxxxxxxx
This part, aws-reserved/sso.amazonaws.com/, needs to be removed from the role ARN. So in the end, combined with Rico's suggested username fix:
---
apiVersion: v1
kind: ConfigMap
data:
  mapRoles: |
    - rolearn: arn:aws:iam::xxxxxxxxxxx:role/eks-node-group
      groups:
        - system:bootstrappers
        - system:nodes
      username: system:node:{{EC2PrivateDNSName}}
    - rolearn: arn:aws:iam::xxxxxxxxxxx:role/AWSReservedSSO_Admin_xxxxxxxxxx
      groups:
        - system:masters
      username: admin
    - rolearn: arn:aws:iam::xxxxxxxxxxx:role/AWSReservedSSO_Dev_xxxxxxxxxx
      groups:
        - system:masters
      username: admin2
metadata:
  name: aws-auth
  namespace: kube-system
The issue is finally fixed, and SSO users assuming the role can run kubectl commands!
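For anyone verifying the same fix, a quick check sequence looks roughly like this (assuming the ConfigMap above is saved as aws-auth.yaml):

# Apply the corrected aws-auth ConfigMap.
kubectl apply -f aws-auth.yaml

# As a user who assumed the Dev SSO role, confirm the STS identity...
aws sts get-caller-identity

# ...and that kubectl is now authorized against the cluster.
kubectl get nodes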

Unable to sign in with default config file generated with Oracle cloud

I have generated a config file with Oracle Cloud for Kubernetes. The generated file keeps throwing the error "Not enough data to create auth info structure." What are some methods for fixing this?
I created a new Oracle Cloud account and set up a small Kubernetes cluster (only 2 nodes, using quick setup). When I upload the generated config file to the Kubernetes dashboard, it throws the error "Not enough data to create auth info structure".
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURqRENDQW5TZ0F3SUJBZ0lVZFowUzdXTTFoQUtDakRtZGhhbWM1VkxlRWhrd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1hqRUxNQWtHQTFVRUJoTUNWVk14RGpBTUJnTlZCQWdUQlZSbGVHRnpNUTh3RFFZRFZRUUhFd1pCZFhOMAphVzR4RHpBTkJnTlZCQW9UQms5eVlXTnNaVEVNTUFvR0ExVUVDeE1EVDBSWU1ROHdEUVlEVlFRREV3WkxPRk1nClEwRXdIaGNOTVRrd09USTJNRGt6T0RBd1doY05NalF3T1RJME1Ea3pPREF3V2pCZU1Rc3dDUVlEVlFRR0V3SlYKVXpFT01Bd0dBMVVFQ0JNRlZHVjRZWE14RHpBTkJnTlZCQWNUQmtGMWMzUnBiakVQTUEwR0ExVUVDaE1HVDNKaApZMnhsTVF3d0NnWURWUVFMRXdOUFJGZ3hEekFOQmdOVkJBTVRCa3M0VXlCRFFUQ0NBU0l3RFFZSktvWklodmNOCkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFLSDFLeW5lc1JlY2V5NVlJNk1IWmxOK05oQ1o0SlFCL2FLNkJXMzQKaE5lWjdzTDFTZjFXR2k5ZnRVNEVZOFpmNzJmZkttWVlWcTcwRzFMN2l2Q0VzdnlUc0EwbE5qZW90dnM2NmhqWgpMNC96K0psQWtXWG1XOHdaYTZhMU5YbGQ4TnZ1YUtVRmdZQjNxeWZYODd3VEliRjJzL0tyK044NHpWN0loMTZECnVxUXp1OGREVE03azdwZXdGN3NaZFBSOTlEaGozcGpXcGRCd3I1MjN2ZWV0M0lMLzl3TXN6VWtkRzU3MnU3aXEKWG5zcjdXNjd2S25QM0U0Wlc1S29YMkRpdXpoOXNPZFkrQTR2N1VTeitZemllc1FWNlFuYzQ4Tk15TGw4WTdwcQppbEd2SzJVMkUzK0RpWXpHbFZuUm1GU1A3RmEzYmFBVzRtUkJjR0c0SXk5QVZ5TUNBd0VBQWFOQ01FQXdEZ1lEClZSMFBBUUgvQkFRREFnRUdNQThHQTFVZEV3RUIvd1FGTUFNQkFmOHdIUVlEVlIwT0JCWUVGUFprNlI0ZndpOTUKOFR5SSt0VWRwaExaT2NYek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ0g2RVFHbVNzakxsQURKZURFeUVFYwpNWm9abFU5cWs4SlZ3cE5NeGhLQXh2cWZjZ3BVcGF6ZHZuVmxkbkgrQmZKeDhHcCszK2hQVjJJZnF2bzR5Y2lSCmRnWXJJcjVuLzliNml0dWRNNzhNL01PRjNHOFdZNGx5TWZNZjF5L3ZFS1RwVUEyK2RWTXBkdEhHc21rd3ZtTGkKRmd3WUJHdXFvS0NZS0NSTXMwS2xnMXZzMTMzZE1iMVlWZEFGSWkvTWttRXk1bjBzcng3Z2FJL2JzaENpb0xpVgp0WER3NkxGRUlKOWNBNkorVEE3OGlyWnJyQjY3K3hpeTcxcnFNRTZnaE51Rkt6OXBZOGJKcDlNcDVPTDByUFM0CjBpUjFseEJKZ2VrL2hTWlZKNC9rNEtUQ2tUVkFuV1RnbFJpTVNiRHFRbjhPUVVmd1kvckY3eUJBTkkxM2QyMXAKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://czgkn3bmu4t.uk-london-1.clusters.oci.oraclecloud.com:6443
  name: cluster-czgkn3bmu4t
contexts:
- context:
    cluster: cluster-czgkn3bmu4t
    user: user-czgkn3bmu4t
  name: context-czgkn3bmu4t
current-context: context-czgkn3bmu4t
kind: ''
users:
- name: user-czgkn3bmu4t
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - ce
      - cluster
      - generate-token
      - --cluster-id
      - ocid1.cluster.oc1.uk-london-1.aaaaaaaaae3deztchfrwinjwgiztcnbqheydkyzyhbrgkmbvmczgkn3bmu4t
      command: oci
      env: []
If you could help me resolve this, I would be extremely grateful.
You should be able to solve this by downloading a v1 kubeconfig, i.e. by specifying the --token-version=1.0.0 flag on the create-kubeconfig command:
oci ce cluster create-kubeconfig <options> --token-version=1.0.0
Then use that kubeconfig in the dashboard.
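Put together, the command would look roughly like this (using the cluster OCID from the kubeconfig above; the output path is just an example):

# Regenerate the kubeconfig with v1 tokens so the dashboard can consume it.
oci ce cluster create-kubeconfig \
  --cluster-id ocid1.cluster.oc1.uk-london-1.aaaaaaaaae3deztchfrwinjwgiztcnbqheydkyzyhbrgkmbvmczgkn3bmu4t \
  --file $HOME/.kube/config \
  --region uk-london-1 \
  --token-version 1.0.0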

error: the server doesn't have resource type "svc"

Admins-MacBook-Pro:~ Harshin$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
error: the server doesn't have a resource type "services"
I am following this document:
https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html?refid=gs_card
and I hit this error while trying to test my configuration in step 11 of "Configure kubectl for Amazon EKS".
apiVersion: v1
clusters:
- cluster:
    server: ...
    certificate-authority-data: ....
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "kunjeti"
        # - "-r"
        # - "<role-arn>"
      # env:
        # - name: AWS_PROFILE
        #   value: "<aws-profile>"
Change "name: kubernetes" to actual name of your cluster.
Here is what I did to work it through....
1.Enabled verbose to make sure config files are read properly.
kubectl get svc --v=10
2.Modified the file as below:
apiVersion: v1
clusters:
- cluster:
    server: XXXXX
    certificate-authority-data: XXXXX
  name: abc-eks
contexts:
- context:
    cluster: abc-eks
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "abc-eks"
        # - "-r"
        # - "<role-arn>"
      env:
        - name: AWS_PROFILE
          value: "aws"
I have faced a similar issue; this is not a direct solution but a workaround: use AWS CLI commands to create the cluster rather than the console. As per the documentation, the user or role that creates the cluster will have master access.
aws eks create-cluster --name <cluster name> --role-arn <EKS Service Role> --resources-vpc-config subnetIds=<subnet ids>,securityGroupIds=<security group id>
Make sure that the EKS service role has IAM access (I have given full access; however, AssumeRole will do, I guess).
The EC2 machine role should have eks:CreateCluster and IAM access. Worked for me :)
I had this issue and found it was caused by the default key setting in ~/.aws/credentials.
We have a few AWS accounts for different customers plus a sandbox account for our own testing and research. So our credentials file looks something like this:
[default]
aws_access_key_id = abc
aws_secret_access_key = xyz
region=us-east-1
[cpproto]
aws_access_key_id = abc
aws_secret_access_key = xyz
region=us-east-1
[sandbox]
aws_access_key_id = abc
aws_secret_access_key = xyz
region=us-east-1
I was messing around in our sandbox account, but the [default] section was pointing to another account.
Once I put the keys for sandbox into the default section, the "kubectl get svc" command worked fine.
It seems we need a way to tell aws-iam-authenticator which keys to use, the same as --profile in the AWS CLI.
I guess you should uncomment the "env" item and point it to the relevant profile in your ~/.aws/credentials, because aws-iam-authenticator requires the exact AWS credentials.
Refer this document: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
To have the AWS IAM Authenticator for Kubernetes always use a specific named AWS credential profile (instead of the default AWS credential provider chain), uncomment the env lines and substitute with the profile name to use.
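As a quick sanity check of which credentials are actually in play (the profile name here is just the sandbox example from the credentials file above):

# See which AWS identity the default credential chain resolves to.
aws sts get-caller-identity

# Pin a specific profile for the current shell and re-test.
export AWS_PROFILE=sandbox
aws sts get-caller-identity
kubectl get svc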

kubectl pull image from gitlab unauthorized: HTTP Basic: Access denied

I am trying to configure GitLab CI to deploy an app to Google Compute Engine. I have successfully pushed the image to the GitLab registry, but after applying the Kubernetes deployment config I see the following error in kubectl describe pods:
Failed to pull image "registry.gitlab.com/proj/subproj/api:v1": rpc error: code = 2
desc = Error response from daemon: {"message":"Get https://registry.gitlab.com/v2/proj/subproj/api/manifests/v1: unauthorized: HTTP Basic: Access denied"}
Here is my gitlab-ci deployment job:
docker:
  stage: docker_images
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - docker build -t registry.gitlab.com/proj/subproj/api:v1 -f Dockerfile .
    - docker push registry.gitlab.com/proj/subproj/api:v1
  only:
    - master
  dependencies:
    - build_java

k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json # Google Cloud service account key
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone us-central1-c
    - gcloud config set project proj
    - gcloud config set container/use_client_certificate True
    - gcloud container clusters get-credentials proj-cluster
    - kubectl delete secret registry.gitlab.com --ignore-not-found
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com/v1/ --docker-username="$CI_REGISTRY_USER" --docker-password="$CI_REGISTRY_PASSWORD" --docker-email=some@gmail.com
    - kubectl apply -f cloud-kubernetes.yml
and here is cloud-kubernetes.yml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: proj
  labels:
    app: proj
spec:
  type: LoadBalancer
  ports:
    - port: 8082
      name: proj
      targetPort: 8082
      nodePort: 32756
  selector:
    app: proj
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: projdeployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: proj
    spec:
      containers:
        - name: projcontainer
          image: registry.gitlab.com/proj/subproj/api:v1
          imagePullPolicy: Always
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: "cloud"
          ports:
            - containerPort: 8082
      imagePullSecrets:
        - name: registry.gitlab.com
I have followed this article
There is a workaround: the image can be pushed to Google Container Registry and then pulled from GCR without extra pull credentials. We can push the image to GCR without the gcloud CLI by using the JSON key file. So .gitlab-ci.yaml could look like:
docker:
  stage: docker_images
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - docker build -t registry.gitlab.com/proj/subproj/api:v1 -f Dockerfile .
    - docker push registry.gitlab.com/proj/subproj/api:v1
    - docker tag registry.gitlab.com/proj/subproj/api:v1 gcr.io/proj/api:v1
    - docker login -u _json_key -p "$GOOGLE_KEY" https://gcr.io
    - docker push gcr.io/proj/api:v1
  only:
    - master
  dependencies:
    - build_java

k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json # Google Cloud service account key
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone us-central1-c
    - gcloud config set project proj
    - gcloud config set container/use_client_certificate True
    - gcloud container clusters get-credentials proj-cluster
    - kubectl apply -f cloud-kubernetes.yml
And the image in cloud-kubernetes.yml should be:
gcr.io/proj/api:v1
You must use --docker-server=$CI_REGISTRY, the same value you use for docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY.
Also note that your Docker registry secret must be in the same namespace as the Deployment/ReplicaSet/DaemonSet/StatefulSet/Job.
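So the secret-creation line in the k8s-deploy job would become something like this (a sketch reusing the CI variables already defined above; the default namespace is assumed):

# Create the pull secret against the same registry host used for docker login.
kubectl create secret docker-registry registry.gitlab.com \
  --docker-server="$CI_REGISTRY" \
  --docker-username="$CI_REGISTRY_USER" \
  --docker-password="$CI_REGISTRY_PASSWORD" \
  --docker-email=some@gmail.com \
  --namespace=default   # assumed; must match the Deployment's namespace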