AttributeError: 'tuple' object has no attribute 'authorize' - GCP Create Service Account with Workload Identity Federation - GitHub

I am trying to create a service account using Python in GCP. This works fine when I've set the env var GOOGLE_APPLICATION_CREDENTIALS to a JSON credentials file and used the following code:
GoogleCredentials.get_application_default()
However, the following code fails in CI (GitHub Actions using Workload Identity Federation):
import google
import googleapiclient.discovery
import os
from util import get_service_name

environment = os.getenv('ENVIRONMENT')

def create_service_account(requested_project_id):
    project_id = requested_project_id
    credentials = google.auth.default()
    service = googleapiclient.discovery.build(
        'iam', 'v1', credentials=credentials)
    service_account_name = f'svc-{get_service_name()}'
    service_accounts = service.projects().serviceAccounts().list(
        name='projects/' + project_id).execute()
    service_account_exists = False
    for account in service_accounts['accounts']:
        if service_account_name in account['name']:
            service_account_exists = True
            service_account = account
            break
    if not service_account_exists:
        service_account = service.projects().serviceAccounts().create(
            name='projects/' + project_id,
            body={
                'accountId': service_account_name,
                'serviceAccount': {
                    'displayName': service_account_name
                }
            }).execute()
    print(f'{"Already Exists" if service_account_exists else "Created"} service account: ' + service_account['email'])
    return service_account
Fails with the error:
File "/opt/hostedtoolcache/Python/3.9.0/x64/lib/python3.9/site-packages/googleapiclient/_helpers.py", line 131, in positional_wrapper
return wrapped(*args, **kwargs) File "/opt/hostedtoolcache/Python/3.9.0/x64/lib/python3.9/site-packages/googleapiclient/discovery.py", line 298, in build
service = build_from_document( File "/opt/hostedtoolcache/Python/3.9.0/x64/lib/python3.9/site-packages/googleapiclient/_helpers.py", line 131, in positional_wrapper
return wrapped(*args, **kwargs) File "/opt/hostedtoolcache/Python/3.9.0/x64/lib/python3.9/site-packages/googleapiclient/discovery.py", line 600, in build_from_document
http = _auth.authorized_http(credentials) File "/opt/hostedtoolcache/Python/3.9.0/x64/lib/python3.9/site-packages/googleapiclient/_auth.py", line 119, in authorized_http
return credentials.authorize(build_http()) AttributeError: 'tuple' object has no attribute 'authorize'
I am using the following GitHub Action to authenticate with Google:
- name: Authenticate to Google Cloud To Create Service Account
  uses: google-github-actions/auth@v0.4.3
  with:
    workload_identity_provider: 'projects/xxx/locations/global/workloadIdentityPools/github-actions-identity-pool/providers/github-provider'
    service_account: 'svc-iam-creator-dev@acme-dev-tooling.iam.gserviceaccount.com'
Can anyone help?

You have two problems. This line of code is failing:
credentials = google.auth.default()
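As an aside, the AttributeError itself is explained by the return type: google.auth.default() returns a (credentials, project_id) tuple rather than a bare credentials object, so the discovery client ends up calling .authorize on a tuple. A minimal illustration:

import google.auth

# google.auth.default() returns a pair, not a credentials object;
# it must be unpacked before being passed to build().
credentials, project_id = google.auth.default()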
Problem 1 - Generate a Google OAuth Access Token
Change the GitHub Actions Step to:
- name: Authenticate to Google Cloud To Create Service Account
  uses: google-github-actions/auth@v0.4.3
  with:
    token_format: 'access_token'    # your Python code needs an access token
    access_token_lifetime: '300s'   # make this value small, but long enough to complete the job
    workload_identity_provider: 'projects/xxx/locations/global/workloadIdentityPools/github-actions-identity-pool/providers/github-provider'
    service_account: 'svc-iam-creator-dev@acme-dev-tooling.iam.gserviceaccount.com'
Problem 2 - Creating Credentials
This line will not work because the credentials are not available from ADC (Application Default Credentials).
credentials = google.auth.default()
Pass the access token generated by Workload Identity Federation to your program from the GitHub Actions output:
${{ steps.auth.outputs.access_token }}
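For example, a sketch of the follow-up step (the step and script names here are assumptions, and the auth step above needs an explicit id: auth for its output to be referenceable):

# Hypothetical follow-up step; assumes the auth step above has `id: auth`.
- name: Create Service Account
  env:
    ACCESS_TOKEN: ${{ steps.auth.outputs.access_token }}
  run: python create_service_account.py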
Create the credentials from the access token:
credentials = google.oauth2.credentials.Credentials(access_token)
service = googleapiclient.discovery.build('iam', 'v1', credentials=credentials)
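Putting it together, a minimal self-contained sketch (the ACCESS_TOKEN environment variable name comes from the illustrative workflow step above, not from anything in the original setup):

import os

import google.oauth2.credentials
import googleapiclient.discovery

# Read the short-lived token exported by the workflow step (assumed name).
access_token = os.environ['ACCESS_TOKEN']

# Build OAuth2 credentials from the raw access token and use them with the
# IAM discovery client, avoiding google.auth.default() entirely.
credentials = google.oauth2.credentials.Credentials(access_token)
service = googleapiclient.discovery.build('iam', 'v1', credentials=credentials)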

Related

GCP: using image from one account's Artifact Registry on other account

Hello, and I wish you a great time!
I've got the following Terraform service account definition:
resource "google_service_account" "gke_service_account" {
project = var.context
account_id = var.gke_account_name
display_name = var.gke_account_description
}
I use it in a GCP Kubernetes node pool:
resource "google_container_node_pool" "gke_node_pool" {
name = "${var.context}-gke-node"
location = var.region
project = var.context
cluster = google_container_cluster.gke_cluster.name
management {
auto_repair = "true"
auto_upgrade = "true"
}
autoscaling {
min_node_count = var.gke_min_node_count
max_node_count = var.gke_max_node_count
}
initial_node_count = var.gke_min_node_count
node_config {
machine_type = var.gke_machine_type
service_account = google_service_account.gke_service_account.email
metadata = {
disable-legacy-endpoints = "true"
}
# Needed for correctly functioning cluster, see
# https://www.terraform.io/docs/providers/google/r/container_cluster.html#oauth_scopes
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/servicecontrol",
"https://www.googleapis.com/auth/cloud-platform"
]
}
}
However, the current setup requires the prod and dev environments to live on separate GCP accounts while using the same image from the prod Artifact Registry.
For now I have a JSON key file for a service account in prod that has access to its registry. Is there a clean way to use the JSON file as a second service account for Kubernetes, or to update the current k8s service account with the JSON file so it gains the additional permissions on the remote registry?
I've seen solutions like putting it in a secret or using a cross-account service account, but that's not how I want to resolve it, since we have some internal restrictions.
I hope someone has faced a similar task and has a solution to share - it'll save me real time.
Thanks in advance!
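One keyless approach, sketched here as an option rather than a confirmed fix: grant the dev cluster's existing node service account reader access on the prod project's registry via IAM, which matches the "give the current service account additional permissions" idea above (the prod project id below is hypothetical):

# Hypothetical prod project id; gives the dev GKE node service account
# pull access to the prod Artifact Registry without any JSON key.
resource "google_project_iam_member" "dev_nodes_read_prod_registry" {
  project = "acme-prod"
  role    = "roles/artifactregistry.reader"
  member  = "serviceAccount:${google_service_account.gke_service_account.email}"
}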

Terraform Error creating Topic: googleapi: Error 403: User not authorized to perform this action

Googleapi: Error 403: User not authorized to perform this action
provider "google" {
project = "xxxxxx"
region = "us-central1"
}
resource "google_pubsub_topic" "gke_cluster_upgrade_notifications" {
name = "cluster-notifications"
labels = {
foo = "bar"
}
message_storage_policy {
allowed_persistence_regions = [
"region",
]
}
}
# create the storage bucket for our scripts
resource "google_storage_bucket" "source_code" {
name = "xxxxxx-bucket-lh05111992"
location = "us-central1"
force_destroy = true
}
# zip up function source code
data "archive_file" "function_script_zip" {
type = "zip"
source_dir = "./function/"
output_path = "./function/main.py.zip"
}
# add function source code to storage
resource "google_storage_bucket_object" "function_script_zip" {
name = "main.py.zip"
bucket = google_storage_bucket.source_code.name
source = "./function/main.py.zip"
}
resource "google_cloudfunctions_function" "gke_cluster_upgrade_notifications" {---
-------
}
The service account has the owner role attached.
I also tried:
1. export GOOGLE_APPLICATION_CREDENTIALS={{path}}
2. credentials = "${file("credentials.json")}", placing the JSON file in the Terraform root folder.
It seems that the account being used is missing some permissions (e.g. pubsub.topics.create) to create the Cloud Pub/Sub topic. The owner role should be sufficient to create the topic, as it contains the necessary permissions (you can check this here). Therefore, a wrong service account might be set in Terraform.
To address these IAM issues I would suggest:
1. Use the Policy Troubleshooter.
2. Impersonate the service account and make the API call using the CLI with the --verbosity=debug flag, which will provide helpful information about the missing permissions; see the sketch below.
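For the impersonation check, a minimal sketch (the service account email is a placeholder for whichever account Terraform is configured to use):

# Try the same operation while impersonating the service account;
# --verbosity=debug surfaces exactly which permission is denied.
gcloud pubsub topics create cluster-notifications \
    --project=xxxxxx \
    --impersonate-service-account=terraform@xxxxxx.iam.gserviceaccount.com \
    --verbosity=debug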

enabling oauth2 with pgadmin and gitlab

I've deployed pgAdmin on Kubernetes and I'm trying to enable OAuth2 as per the pgAdmin docs.
This is the OAuth config I've passed in:
AUTHENTICATION_SOURCES = ['oauth2', 'internal']
OAUTH2_CONFIG = [
    {
        # The name of the oauth provider, ex: github, google
        'OAUTH2_NAME': 'gitlab',
        # The display name, ex: Google
        'OAUTH2_DISPLAY_NAME': 'Gitlab',
        # Oauth client id
        'OAUTH2_CLIENT_ID': 'my-client-id-here',
        # Oauth secret
        'OAUTH2_CLIENT_SECRET': 'my-client-secret-here',
        # URL to generate a token,
        # Ex: https://github.com/login/oauth/access_token
        'OAUTH2_TOKEN_URL': 'https://gitlab.com/oauth/token',
        # URL used for authentication,
        # Ex: https://github.com/login/oauth/authorize
        'OAUTH2_AUTHORIZATION_URL': 'https://gitlab.com/oauth/authorize',
        # Oauth base url, ex: https://api.github.com/
        'OAUTH2_API_BASE_URL': 'https://gitlab.com/api/v4/',
        # Name of the userinfo endpoint, ex: user
        'OAUTH2_USERINFO_ENDPOINT': 'user',
        # Font-awesome icon, ex: fa-github
        'OAUTH2_ICON': 'fa-gitlab',
        # UI button colour, ex: #0000ff
        'OAUTH2_BUTTON_COLOR': '#E24329',
    }
]
OAUTH2_AUTO_CREATE_USER = True
OAUTH2_AUTO_CREATE_USER = True
I've added the application on Gitlab. The redirect URIs are:
https://pgadmin.nonprod.example.io/oauth2/authorize
http://pgadmin.nonprod.example.io/oauth2/authorize
I've given the application the following scopes:
api
openid
profile
email
I'm testing it locally with the pgadmin ingress and my local minikube cluster. I keep getting the following error when I click the 'Sign in with Gitlab' button:
{
  success: 0,
  errormsg: "403 Client Error: Forbidden for url: https://gitlab.com/api/v4/user",
  info: "",
  result: null,
  data: null
}
I believe I have all the necessary GitLab permissions and can't figure out what I'm doing wrong.
I think that in this case we can just use the OIDC endpoint to fetch userinfo. For GitLab it is: https://gitlab.com/oauth/userinfo. Therefore, you do not need the api scope, just openid email profile.
So the following configuration actually works for me:
AUTHENTICATION_SOURCES = ['oauth2', 'internal']
OAUTH2_CONFIG = [
    {
        'OAUTH2_NAME': 'gitlab',
        'OAUTH2_DISPLAY_NAME': 'Gitlab',
        'OAUTH2_CLIENT_ID': 'my-client-id-here',
        'OAUTH2_CLIENT_SECRET': 'my-client-secret-here',
        'OAUTH2_TOKEN_URL': 'https://gitlab.com/oauth/token',
        'OAUTH2_AUTHORIZATION_URL': 'https://gitlab.com/oauth/authorize',
        'OAUTH2_API_BASE_URL': 'https://gitlab.com/oauth/',
        'OAUTH2_USERINFO_ENDPOINT': 'userinfo',
        'OAUTH2_SCOPE': 'openid email profile',
        'OAUTH2_ICON': 'fa-gitlab',
        'OAUTH2_BUTTON_COLOR': '#E24329',
    }
]
OAUTH2_AUTO_CREATE_USER = True
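To sanity-check the userinfo endpoint outside pgAdmin, a small sketch using Python's requests package (the bearer token is a placeholder for one issued by GitLab's OAuth flow):

import requests

# Placeholder token: substitute an access token issued by GitLab.
resp = requests.get(
    'https://gitlab.com/oauth/userinfo',
    headers={'Authorization': 'Bearer YOUR_ACCESS_TOKEN'},
)
print(resp.status_code, resp.json())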

Hashicorp Vault won't let me delete a Policy even using the root token

I am trying to delete a policy.
After logging in with the root token, I do the following:
$ vault policy delete testttt
Error deleting testttt: Error making API request.
URL: DELETE https://vault.local:8200/v1/sys/policies/acl/testttt
Code: 400. Errors:
* failed to delete policy: AccessDenied: Access Denied
  status code: 403, request id: VB6YWECETDJ5KB7Q, host id: S0FJvs41pSbzTmP1lDr/aVSOPjeRVz4Vk/ofkFHu8jvNjfzk6ARnY33qzP/usqmpVDExwLlsF44=
My config file looks like this:
storage "s3" {
access_key = "XXXX"
secret_key = "XXXX"
bucket = "XXXX-vault"
region = "eu-central-1"
}
listener "tcp" {
address = "0.0.0.0:8200"
tls_cert_file = "/etc/vault.d/fullchain.pem"
tls_key_file = "/etc/vault.d/privkey.pem"
}
api_addr = "http://0.0.0.0:8200"
cluster_addr = "https://0.0.0.0:8201"
ui = true
Something seems totally off, as after using the root token in the UI, I also just see this:
null is not an object (evaluating 'l.userRootNamespace')

Cannot import certificates for EDGE while REGIONAL is active

I am trying to use a certificate issued in eu-central-1 for my API Gateway, which is regional and lives in the same region.
My Terraform code is as follows:
// ACM certificate
provider "aws" {
  region = "eu-central-1"
  alias  = "eu-central-1"
}

resource "aws_acm_certificate" "certificate" {
  provider          = "aws.eu-central-1"
  domain_name       = "*.kumite.xyz"
  validation_method = "EMAIL"
}

// API Gateway
resource "aws_api_gateway_rest_api" "kumite_writer_api" {
  name = "kumite_writer_api"

  endpoint_configuration {
    types = ["REGIONAL"]
  }
}

resource "aws_api_gateway_domain_name" "domain_name" {
  certificate_arn = aws_acm_certificate.certificate.arn
  domain_name     = "recorder.kumite.xyz"

  endpoint_configuration {
    types = ["REGIONAL"]
  }
}
Unfortunately, I constantly get this error:
Error: Error creating API Gateway Domain Name: BadRequestException: Cannot import certificates for EDGE while REGIONAL is active.
What am I missing here? My API Gateway is REGIONAL, not EDGE, so the error doesn't make sense to me...
Change certificate_arn to regional_certificate_arn.
From the documentation (emphasis mine):
When referencing an AWS-managed certificate, the following arguments are supported:
certificate_arn - (Optional) The ARN for an AWS-managed certificate. AWS Certificate Manager is the only supported source. Used when an edge-optimized domain name is desired. Conflicts with certificate_name, certificate_body, certificate_chain, certificate_private_key, regional_certificate_arn, and regional_certificate_name.
regional_certificate_arn - (Optional) The ARN for an AWS-managed certificate. AWS Certificate Manager is the only supported source. Used when a regional domain name is desired. Conflicts with certificate_arn, certificate_name, certificate_body, certificate_chain, and certificate_private_key.
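Applied to the code in the question, only the one attribute changes:

resource "aws_api_gateway_domain_name" "domain_name" {
  # REGIONAL endpoints take the certificate via regional_certificate_arn.
  regional_certificate_arn = aws_acm_certificate.certificate.arn
  domain_name              = "recorder.kumite.xyz"

  endpoint_configuration {
    types = ["REGIONAL"]
  }
}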