I am trying to publish a message on a topic, but the publish fails. I am using the Laravel framework, and my subscription is of the push type.
I installed the client library with $ composer require google/cloud-pubsub from https://github.com/googleapis/google-cloud-php-pubsub.
I followed this guide: https://cloud.google.com/pubsub/docs/publisher#php
use Google\Cloud\PubSub\PubSubClient;

function publish_message($projectId, $topicName, $message)
{
    $pubsub = new PubSubClient([
        'projectId' => $projectId,
    ]);
    $topic = $pubsub->topic($topicName);
    $topic->publish(['data' => $message]);
    print('Message published' . PHP_EOL);
}
I am getting this error (screenshot): https://i.stack.imgur.com/XXHZ5.png
Your question would benefit from a more detailed explanation.
As it is, the code you show is the same code as published by Google.
Assuming (!?) Google's code works (probable but not certain), your code should work.
Since we know your code doesn't work, it's probably something else.
I suspect you've missed one or more of the following steps (possibly the last ones):
created a Google Cloud Platform project ($projectId)?
enabled the Pub/Sub API?
created a Pub/Sub topic [and >=1 subscriber] ($topicName)?
created (service account) credentials permitted to publish to this topic?
set GOOGLE_APPLICATION_CREDENTIALS to point to the account's key?
How are you running the code?
If possible please also print the ClientException that you show in the image.
Update
I tested Google's code and it works for me:
BILLING=[[YOUR-BILLING]]
PROJECT=[[YOUR-PROJECT]]
TOPIC_NAME=[[YOUR-TOPIC-NAME]]
gcloud projects create ${PROJECT}
gcloud beta billing projects link ${PROJECT} \
--billing-account=${BILLING}
# Enable Pub/Sub and create a topic and a subscription, both named ${TOPIC_NAME}
gcloud services enable pubsub.googleapis.com \
--project=${PROJECT}
gcloud pubsub topics create ${TOPIC_NAME} \
--project=${PROJECT}
gcloud pubsub subscriptions create ${TOPIC_NAME} \
--topic=${TOPIC_NAME} \
--project=${PROJECT}
# Create service account ${ROBOT} and key `./${ROBOT}.json`
# Grant the account `publisher` permissions
ROBOT=[[YOUR-ACCOUNT-NAME]]
gcloud iam service-accounts create ${ROBOT} \
--project=${PROJECT}
gcloud iam service-accounts keys create ./${ROBOT}.json \
--iam-account=${ROBOT}@${PROJECT}.iam.gserviceaccount.com \
--project=${PROJECT}
gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${ROBOT}@${PROJECT}.iam.gserviceaccount.com \
--role=roles/pubsub.publisher
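To double-check the binding before touching any code, the same get-iam-policy pattern used later in this thread applies (a sanity check, not a required step):
gcloud projects get-iam-policy ${PROJECT} \
--flatten='bindings[].members' \
--format='value(bindings.role)' \
--filter="bindings.members:${ROBOT}@${PROJECT}.iam.gserviceaccount.com"
#=>
roles/pubsub.publisher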
Then -- and apologies, I'm no PHP developer -- here's what I did:
composer.json:
{ "require": { "google/cloud-pubsub": "1.24.1" } }
pubsub.php:
<?php
require_once 'vendor/autoload.php';

use Google\Cloud\PubSub\PubSubClient;

// Expects env vars PROJECT_ID, TOPIC_NAME and GOOGLE_APPLICATION_CREDENTIALS
$projectId = getenv("PROJECT_ID");
$topicName = getenv("TOPIC_NAME");

$pubsub = new PubSubClient([
    "projectId" => $projectId
]);
$topic = $pubsub->topic($topicName);
$topic->publish(["data" => "Hello Freddie!"]);
print("Message published" . PHP_EOL);
?>
Then:
export GOOGLE_APPLICATION_CREDENTIALS=./${ROBOT}.json
export PROJECT_ID=${PROJECT}
export TOPIC_NAME
php pubsub.php
NOTE the code implicitly relies on GOOGLE_APPLICATION_CREDENTIALS to authenticate against the service; see Application Default Credentials.
yields:
Message published
And:
gcloud pubsub subscriptions pull ${TOPIC_NAME} \
--project=${PROJECT} \
--format="value(message.data)"
Hello Freddie!
I am guessing the issue you are facing is because you missed the authentication step. Have you created a service account and downloaded the JSON key file to authenticate? If so, double-check that this line points at the key file:
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-credentials.json
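A quick way to sanity-check the key before re-running the app (a minimal sketch, assuming the gcloud CLI is installed and the variable is already exported):
# confirm the variable is set and the key file exists
echo "${GOOGLE_APPLICATION_CREDENTIALS}"
test -f "${GOOGLE_APPLICATION_CREDENTIALS}" && echo "key file found"
# confirm the key itself is usable by activating it
gcloud auth activate-service-account --key-file="${GOOGLE_APPLICATION_CREDENTIALS}"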
Related
I want to connect a service principal with a certificate to authorize using PySpark. I can see the Scala code at this link: https://github.com/Azure/azure-event-hubs-spark/blob/master/docs/use-aad-authentication-to-connect-eventhubs.md
I have the client_id, tenant_id, and certificate details. Could someone please share the equivalent PySpark code?
You add the Azure AD service principal to the Azure Databricks workspace using the SCIM API 2.0. Authentication using PySpark isn't available.
To authenticate using a service principal, you need to follow the steps below.
As you already have the clientID and tenantID, the service principal is already created.
Create the Azure Databricks personal access token
You’ll use an Azure Databricks personal access token (PAT) to authenticate against the Databricks REST API. To create a PAT that can be used to make API requests:
Go to your Azure Databricks workspace.
Click the user icon in the top-right corner of the screen and click User Settings.
Click Access Tokens > Generate New Token.
Copy and save the token value.
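Before moving on, you can sanity-check the token against the REST API (a hedged example; the placeholders match the commands below, and a successful call returns a JSON list of clusters, possibly empty):
curl -X GET 'https://<per-workspace-url>/api/2.0/clusters/list' \
--header 'Authorization: Bearer <personal-access-token>'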
Add the service principal to the Azure Databricks workspace
You add the Azure AD service principal to a workspace using the SCIM API 2.0. You must also give the service principal permission to launch automated job clusters. You can grant this through the allow-cluster-create permission. Open a terminal and run the following command to add the service principal and grant the required permissions:
curl -X POST 'https://<per-workspace-url>/api/2.0/preview/scim/v2/ServicePrincipals' \
--header 'Content-Type: application/scim+json' \
--header 'Authorization: Bearer <personal-access-token>' \
--data-raw '{
  "schemas": [
    "urn:ietf:params:scim:schemas:core:2.0:ServicePrincipal"
  ],
  "applicationId": "<application-id>",
  "displayName": "test-sp",
  "entitlements": [
    {
      "value": "allow-cluster-create"
    }
  ]
}'
Replace `<per-workspace-url>` with the unique per-workspace URL for your Azure Databricks workspace.
Replace `<personal-access-token>` with the Azure Databricks personal access token.
Replace `<application-id>` with the Application (client) ID for the Azure AD application registration.
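To verify the service principal was added, you can list the service principals back (same placeholders as above):
curl -X GET 'https://<per-workspace-url>/api/2.0/preview/scim/v2/ServicePrincipals' \
--header 'Authorization: Bearer <personal-access-token>'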
Create an Azure Key Vault-backed secret scope in Azure Databricks
Secret scopes provide secure storage and management of secrets. You'll store the secret associated with the service principal in a secret scope. You can store secrets in an Azure Databricks secret scope or an Azure Key Vault-backed secret scope. These instructions describe the Azure Key Vault-backed option:
Create an Azure Key Vault instance in the Azure portal.
Create the Azure Databricks secret scope backed by the Azure Key Vault instance.
Step 1: Create an Azure Key Vault instance
In the Azure portal, select Key Vaults > + Add and give the key vault a name.
Click Review + create.
After validation completes, click Create.
After creating the key vault, go to the Properties page for the new key vault.
Copy and save the Vault URI and Resource ID.
Step 2: Create an Azure Key Vault-backed secret scope
Azure Databricks resources can reference secrets stored in an Azure key vault by creating a Key Vault-backed secret scope. To create the Azure Databricks secret scope:
Go to the Azure Databricks Create Secret Scope page at https://<per-workspace-url>/#secrets/createScope. Replace `<per-workspace-url>` with the unique per-workspace URL for your Azure Databricks workspace.
Enter a Scope Name.
Enter the Vault URI and Resource ID values for the Azure key vault you created in Step 1: Create an Azure Key Vault instance.
Click Create.
Save the client secret in Azure Key Vault
In the Azure portal, go to the Key vaults service.
Select the key vault created in Step 1: Create an Azure Key Vault instance.
Under Settings > Secrets, click Generate/Import.
Select the Manual upload option and enter the client secret in the Value field.
Click Create.
Grant the service principal read access to the secret scope
You’ve created a secret scope and stored the service principal’s client secret in that scope. Now you’ll give the service principal access to read the secret from the secret scope.
Open a terminal and run the following command:
curl -X POST 'https://<per-workspace-url>/api/2.0/secrets/acls/put' \
--header 'Authorization: Bearer <personal-access-token>' \
--header 'Content-Type: application/json' \
--data-raw '{
  "scope": "<scope-name>",
  "principal": "<application-id>",
  "permission": "READ"
}'
Replace `<per-workspace-url>` with the unique per-workspace URL for your Azure Databricks workspace.
Replace `<personal-access-token>` with the Azure Databricks personal access token.
Replace `<scope-name>` with the name of the Azure Databricks secret scope that contains the client secret.
Replace `<application-id>` with the Application (client) ID for the Azure AD application registration.
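You can confirm the grant by listing the ACLs on the scope (a hedged check with the same placeholders):
curl -X GET 'https://<per-workspace-url>/api/2.0/secrets/acls/list?scope=<scope-name>' \
--header 'Authorization: Bearer <personal-access-token>'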
Create a job in Azure Databricks and configure the cluster to read secrets from the secret scope
You’re now ready to create a job that can run as the new service principal. You’ll use a notebook created in the Azure Databricks UI and add the configuration to allow the job cluster to retrieve the service principal’s secret.
Go to your Azure Databricks landing page and select Create Blank Notebook. Give your notebook a name and select SQL as the default language.
Enter SELECT 1 in the first cell of the notebook. This is a simple command that just displays 1 if it succeeds. If you have granted your service principal access to particular files or paths in Azure Data Lake Storage Gen 2, you can read from those paths instead.
Go to Workflows and click the + Create Job button. Give the job a name, click Select Notebook, and select the notebook you just created.
Click Edit next to the Cluster information.
On the Configure Cluster page, click Advanced Options.
On the Spark tab, enter the following Spark Config:
fs.azure.account.auth.type.acmeadls.dfs.core.windows.net OAuth
fs.azure.account.oauth.provider.type.acmeadls.dfs.core.windows.net org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider
fs.azure.account.oauth2.client.id.acmeadls.dfs.core.windows.net <application-id>
fs.azure.account.oauth2.client.secret.acmeadls.dfs.core.windows.net {{secrets/<secret-scope-name>/<secret-name>}}
fs.azure.account.oauth2.client.endpoint.acmeadls.dfs.core.windows.net https://login.microsoftonline.com/<directory-id>/oauth2/token
Replace `<secret-scope-name>` with the name of the Azure Databricks secret scope that contains the client secret.
Replace `<application-id>` with the Application (client) ID for the Azure AD application registration.
Replace `<secret-name>` with the name associated with the client secret value in the secret scope.
Replace `<directory-id>` with the Directory (tenant) ID for the Azure AD application registration.
Transfer ownership of the job to the service principal
A job can have exactly one owner, so you’ll need to transfer ownership of the job from yourself to the service principal. To ensure that other users can manage the job, you can also grant Can Manage permissions to a group. In this example, we use the Permissions API to set these permissions.
Open a terminal and run the following command:
curl -X PUT 'https://<per-workspace-url>/api/2.0/permissions/jobs/<job-id>' \
--header 'Authorization: Bearer <personal-access-token>' \
--header 'Content-Type: application/json' \
--data-raw '{
  "access_control_list": [
    {
      "service_principal_name": "<application-id>",
      "permission_level": "IS_OWNER"
    },
    {
      "group_name": "admins",
      "permission_level": "CAN_MANAGE"
    }
  ]
}'
Replace `<per-workspace-url>` with the unique per-workspace URL for your Azure Databricks workspace.
Replace `<personal-access-token>` with the Azure Databricks personal access token.
Replace `<application-id>` with the Application (client) ID for the Azure AD application registration.
Replace `<job-id>` with the ID of the job you created above.
The job will also need read permissions to the notebook. Run the following command to grant the required permissions:
curl -X PUT 'https://<per-workspace-url>/api/2.0/permissions/notebooks/<notebook-id>' \
--header 'Authorization: Bearer <personal-access-token>' \
--header 'Content-Type: application/json' \
--data-raw '{
  "access_control_list": [
    {
      "service_principal_name": "<application-id>",
      "permission_level": "CAN_READ"
    }
  ]
}'
Replace `<per-workspace-url>` with the unique per-workspace URL for your Azure Databricks workspace.
Replace `<notebook-id>` with the ID of the notebook associated with the job. To find the ID, go to the notebook in the Azure Databricks workspace and look for the numeric ID that follows notebook/ in the notebook's URL.
Replace `<personal-access-token>` with the Azure Databricks personal access token.
Replace `<application-id>` with the Application (client) ID for the Azure AD application registration.
You can now test the job. You run jobs with a service principal the same way you run jobs as a user, either through the UI, API, or CLI.
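For the API route, a hedged example of triggering the job with the Jobs run-now endpoint of the Jobs API 2.1 (the `<job-id>` is the numeric ID of the job you created above):
curl -X POST 'https://<per-workspace-url>/api/2.1/jobs/run-now' \
--header 'Authorization: Bearer <personal-access-token>' \
--header 'Content-Type: application/json' \
--data-raw '{
  "job_id": <job-id>
}'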
A service account ($SERVICE_ACCOUNT_A) from one Google Cloud Platform (GCP) project ($PROJECT_A) is unable to interact with a Google Kubernetes Engine (GKE) cluster ($GKE_CLUSTER_B) within another GCP project ($PROJECT_B); where:
$PROJECT_A is the name of the project $SERVICE_ACCOUNT_A lives within
$SERVICE_ACCOUNT_A is of the form some-name@some-project-name.iam.gserviceaccount.com
$PROJECT_B is the name of the project the $GKE_CLUSTER_B cluster lives within
$GKE_CLUSTER_B is a GKE cluster name, not context, of the form: some_cluster
$SERVICE_ACCOUNT_A is unable to interact with a $GKE_CLUSTER_B despite possessing roles from $PROJECT_B containing permissions that should allow it to do so.
I.e., first I created a custom role $ROLE:
gcloud iam roles create $ROLE \
--description="$ROLE_DESCRIPTION" \
--permissions=container.clusters.get,container.clusters.list \
--project=$PROJECT_B \
--title="$ROLE_TITLE"
#=>
Created role [$ROLE].
description: $ROLE_DESCRIPTION
etag: . . .
includedPermissions:
- container.clusters.get
- container.clusters.list
name: projects/$PROJECT_B/roles/$ROLE
stage: . . .
title: $ROLE_TITLE
then I associated $ROLE, from $PROJECT_B, with $SERVICE_ACCOUNT_A:
gcloud projects add-iam-policy-binding $PROJECT_B \
--member=serviceAccount:$SERVICE_ACCOUNT_A \
--role=projects/$PROJECT_B/roles/$ROLE
#=>
Updated IAM policy for project [$PROJECT_B].
auditConfigs:
. . .
and I am able to see $ROLE under $SERVICE_ACCOUNT_A:
gcloud projects get-iam-policy $PROJECT_B \
--flatten='bindings[].members' \
--format='value(bindings.role)' \
--filter="bindings.members:${SERVICE_ACCOUNT_A}"
#=>
projects/$PROJECT_B/roles/$ROLE
with the proper permissions:
gcloud iam roles describe $ROLE \
--flatten='includedPermissions' \
--format='value(includedPermissions)' \
--project=$PROJECT_B
#=>
container.clusters.get
container.clusters.list
but I am still unable to get $SERVICE_ACCOUNT_A to interact with $GKE_CLUSTER_B.
Why?
You need to enable the Kubernetes Engine API for $PROJECT_A, even if $PROJECT_A doesn't have or need a GKE cluster. You can enable it in the Cloud Console or from the CLI, as sketched below.
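A minimal sketch of the CLI route (the same gcloud services enable pattern appears elsewhere in this thread):
gcloud services enable container.googleapis.com \
--project=$PROJECT_A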
You can confirm this by creating a new JSON key for $SERVICE_ACCOUNT_A:
gcloud iam service-accounts keys create \
./some-key.json \
--iam-account="${SERVICE_ACCOUNT_A}" \
--key-file-type="json"
#=>
created key [$KEY_ID] of type [json] as [./some-key.json] for [$SERVICE_ACCOUNT_A]
activate the service account:
gcloud auth activate-service-account \
"${SERVICE_ACCOUNT_A}" \
--key-file=./some-key.json
#=>
Activated service account credentials for: [$SERVICE_ACCOUNT_A]
confirm it's active:
gcloud auth list
Credentialed Accounts
ACTIVE ACCOUNT
. . .
* $SERVICE_ACCOUNT_A
your#account.user
. . .
To set the active account, run:
$ gcloud config set account `ACCOUNT`
and attempt to interact with $GKE_CLUSTER_B:
gcloud container clusters list --project=$PROJECT_B
#=>
ERROR: (gcloud.container.clusters.list) ResponseError: code=403, message=Kubernetes Engine API has not
been used in project $PROJECT_A_ID before or it is disabled. Enable it by visiting
https://console.developers.google.com/apis/api/container.googleapis.com/overview?project=$PROJECT_A_ID
then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our
systems and retry.
where $PROJECT_A_ID is a numeric id of the form: xxxxxxxxxxxxx.
Visit the address returned along with the 403 above and enable the Kubernetes Engine API. $SERVICE_ACCOUNT_A should now be able to interact with GKE clusters within $PROJECT_B:
gcloud container clusters list \
--project=$PROJECT_B \
--format='value(name)'
#=>
. . .
some_cluster
. . .
including $GKE_CLUSTER_B.
I am trying to integrate CircleCI with Google Cloud Kubernetes Engine.
I created a service account with the Kubernetes Engine Developer and Storage Admin roles.
I created the CircleCI YAML file and configured CI.
Part of my YAML file includes:
docker:
  - image: google/cloud-sdk
    environment:
      - PROJECT_NAME: 'my-project'
      - GOOGLE_PROJECT_ID: 'my-project-112233'
      - GOOGLE_COMPUTE_ZONE: 'us-central1-a'
      - GOOGLE_CLUSTER_NAME: 'my-project-bed'
steps:
  - checkout
  - run:
      name: Setup Google Cloud SDK
      command: |
        apt-get install -qq -y gettext
        echo $GCLOUD_SERVICE_KEY > ${HOME}/gcloud-service-key.json
        gcloud auth activate-service-account --key-file=${HOME}/gcloud-service-key.json
        gcloud --quiet config set project ${GOOGLE_PROJECT_ID}
        gcloud --quiet config set compute/zone ${GOOGLE_COMPUTE_ZONE}
        gcloud --quiet container clusters get-credentials ${GOOGLE_CLUSTER_NAME}
Everything runs perfectly except that the last command:
gcloud --quiet container clusters get-credentials ${GOOGLE_CLUSTER_NAME}
It keeps failing with the error:
ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Required "container.clusters.get" permission(s) for "projects/my-project-112233/zones/us-central1-a/clusters/my-project-bed". See https://cloud.google.com/kubernetes-engine/docs/troubleshooting#gke_service_account_deleted for more info.
I tried to give the CI account the Project Owner role, but I still got that error.
I tried to disable and re-enable the Kubernetes Engine API, but it didn't help.
Any idea how to solve this? I have been trying to solve it for 4 days...
This is an old thread, but here is how this issue is handled today if you are using Cloud Build:
Granting Cloud Build access to GKE
To deploy the application to your Kubernetes cluster, Cloud Build needs the Kubernetes Engine Developer Identity and Access Management role.
Get Project Number:
PROJECT_NUMBER="$(gcloud projects describe ${PROJECT_ID} --format='get(projectNumber)')"
Add IAM Policy bindings:
gcloud projects add-iam-policy-binding ${PROJECT_NUMBER} \
--member=serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \
--role=roles/container.developer
I believe it's not the CI service account but the k8s service account used to manage your GKE cluster; its email should look like this (somebody must have deleted it):
k8s-service-account@<project-id>.iam.gserviceaccount.com
You can re-create it and give it project owner permissions (see the sketch after the steps below).
Step 1 : gcloud init
Step 2 : Select [2] Create a new configuration
Step 3 : Enter configuration name. Names start with a lower case letter and
contain only lower case letters a-z, digits 0-9, and hyphens '-': kubernetes-service-account
Step 4 : Choose the account you would like to use to perform operations for
this configuration:[2] Log in with a new account
Step 5 : Do you want to continue (Y/n)? y
Step 6 : Copy and paste the link into a browser and log in with the ID used to create your Google Cloud account
Step 7 : Copy the verification code provided by Google after login and paste it into the console.
Step 8 : Pick cloud project to use:
Step 9: Do you want to configure a default Compute Region and Zone? (Y/n)? y
Step 10 : Please enter numeric choice or text value (must exactly match list item): 8
Your Google Cloud SDK is configured and ready to use!
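For completeness, a hedged sketch of re-creating the account itself via the CLI (the account name comes from this answer; project owner is what this answer suggests, though a narrower role is generally safer):
gcloud iam service-accounts create k8s-service-account \
--project=<project-id>
gcloud projects add-iam-policy-binding <project-id> \
--member=serviceAccount:k8s-service-account@<project-id>.iam.gserviceaccount.com \
--role=roles/owner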
The details of the above mentioned errors are explained in this help center article.
To add the Kubernetes Engine service account (if you don't have it), run the following command in order to properly recreate the Kubernetes service account with the "Kubernetes Engine Service Agent" role:
gcloud services enable container.googleapis.com
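You can then confirm the API is enabled (a quick check, assuming gcloud is pointed at the right project):
gcloud services list --enabled | grep container.googleapis.com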
In my case, these 2 steps solved my issue:
In the command
gcloud container clusters get-credentials my-cluster-1 --zone=asia-south1-a --project=thelab-240901
the --project flag should have the project ID value, not the project name.
In your Travis account, go to your project repository -> More options -> Settings -> Environment Variables. Now make sure you have only one set of encrypted_iv and encrypted_key environment variables, as follows:
If you have encrypted different service accounts (JSON key files), this could add more than one set of encrypted_iv and encrypted_key environment variables. So delete all those variables and freshly create the encrypted keys, for example by using travis encrypt-file --pro yourServiceAccountJSONFile.json --add
I had this problem using gcloud with my main owner account (!)
What fixed it was including --zone and --project params in the command to get the kubectl credentials.
I faced this issue in different scenarios; listing them below in the hope it helps someone.
1. If you did a fresh installation of google-cloud-sdk, then you must log in with gcloud using the command below.
gcloud auth login
The above command will open your browser and ask you to log in with your GCP account.
2. Sometimes provisioning is not reflected immediately. Hence I revoked my permissions and granted access again (in this case my role is Owner). Then it worked.
I was getting the same error when trying to connect to my newly created cluster:
gcloud container clusters get-credentials <foo-cluster> --zone europe-central2-a --project <foo-project>
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Required "container.clusters.get" permission(s) for "projects/foo-project/zones/europe-central2-a/clusters/foo-cluster".
I tried a few things:
I enabled Kuberentes API - no success
I added a key to service account and loged in using downloaded key:
gcloud auth activate-service-account --key-file=<path-to-the-downloaded-json-file>/foo-project-xxxx.json
Activated service account credentials for: [xxxx-compute#developer.gserviceaccount.com]
I run:
gcloud components update
However, I had a problem retrieving data; all kubectl commands were giving a TLS handshake timeout. For example, kubectl get namespace was giving the error:
Unable to connect to the server: net/http: TLS handshake timeout
This is when I tried again:
gcloud container clusters get-credentials <foo-cluster> --zone europe-central2-a --project <foo-project>
and it worked:
Fetching cluster endpoint and auth data.
kubeconfig entry generated for foo-project.
I am trying to create a Service Account with 'roles/container.admin' and I get an error saying that the role is not supported for this resource.
$ gcloud iam service-accounts add-iam-policy-binding sa-ci-vm@PROJECT-ID.iam.gserviceaccount.com --member='serviceAccount:sa-ci-vm@PROJECT-ID.iam.gserviceaccount.com' --role='roles/container.admin'
ERROR: (gcloud.iam.service-accounts.add-iam-policy-binding) INVALID_ARGUMENT: Role roles/container.admin is not supported for this resource.
If I create a Service Account from the CONSOLE UI I can add this role without a problem.
You have to use gcloud projects to add roles for a service account at a project level as shown here.
This works for me:
gcloud projects add-iam-policy-binding PROJECT_ID \
--member serviceAccount:sa-ci-vm#PROJECT-ID.iam.gserviceaccount.com \
--role roles/container.admin
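To double-check the binding afterwards, the same get-iam-policy pattern used earlier in this thread applies (a sanity check, assuming the names from the command above):
gcloud projects get-iam-policy PROJECT_ID \
--flatten='bindings[].members' \
--format='value(bindings.role)' \
--filter='bindings.members:sa-ci-vm@PROJECT-ID.iam.gserviceaccount.com'
#=>
roles/container.admin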
I got the same error. You have to give the fully qualified name (the full path) of the role:
gcloud iam service-accounts add-iam-policy-binding SERVICEACCOUNT --member=SERVICEACCOUNT_EMAIL --role=projects/PROJECTNAME/roles/ROLENAME
I have installed Kubernetes with minikube on Ubuntu 16.04.
I want to know how I can integrate OpenID Connect based authentication with it. I am new to Kubernetes, so any suggestion on how to configure it would help.
I am currently accessing the dashboard with the "minikube dashboard" command, but I don't seem to find any role-specific login. The K8S guide has the below config section:
kubectl config set-credentials USER_NAME \
--auth-provider=oidc \
--auth-provider-arg=idp-issuer-url=( issuer url ) \
--auth-provider-arg=client-id=( your client id ) \
--auth-provider-arg=client-secret=( your client secret ) \
--auth-provider-arg=refresh-token=( your refresh token ) \
--auth-provider-arg=idp-certificate-authority=( path to your ca certificate ) \
--auth-provider-arg=id-token=( your id_token ) \
--auth-provider-arg=extra-scopes=( comma separated list of scopes to add to "openid email profile", optional )
Can someone tell me how I can get values for
1. Issuer URL
2. Refresh token
3. Id-token
4. Extra-scope
I assume the client ID and client secret are the ones we get when Google credentials are created. Please correct me if I'm wrong.
The Kubernetes Authentication docs try to explain the different "authn" plugins. One of these is "OpenID Connect", which requires that you start up an "Identity Provider".
So when you tell kubectl to use --auth-provider=oidc, that's what you're using. The idp-issuer-url will point at your Identity Provider's HTTPS URL. They give different examples of implementations of this. CoreOS has one called Dex.
Their repo has some examples under: ./examples
An example of using LDAP connector plugin for dex is here
For more information about how Authentication is done in Kubernetes (e.g.: "What is authn?" "What is authz", etc...), there is a great presentation by Eric Chiang here.
So to answer your question:
Q: how i can get values for:
Issuer URL
Refresh token
Id-token
Extra-scope
A: Set up Dex, then authenticate to it using the "Login" app (with some backend such as LDAP, as in the example). It then redirects you to a page with a ~/.kube/config file containing a user which has all of these items; a sketch of the resulting kubectl command is below.
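To make that concrete, a hedged sketch of wiring the returned values into kubectl once Dex is running (every value here is a hypothetical placeholder; the issuer URL, client ID/secret, and tokens come from your Dex deployment and its login app):
# all values below are hypothetical placeholders from an assumed Dex deployment
kubectl config set-credentials dex-user \
--auth-provider=oidc \
--auth-provider-arg=idp-issuer-url=https://dex.example.com \
--auth-provider-arg=client-id=example-app \
--auth-provider-arg=client-secret=<client-secret-from-dex-config> \
--auth-provider-arg=refresh-token=<refresh-token-from-login-app> \
--auth-provider-arg=idp-certificate-authority=/path/to/dex-ca.pem \
--auth-provider-arg=id-token=<id-token-from-login-app>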