How to add a long TXT record set to gcloud DNS in the terminal?

In the Cloud Console panel, to add a long TXT record set (over 255 characters) you need to divide the value into multiple quoted strings, like this (in the example I split it by word, but you have to split it every 255 characters):
original string = "lorem ipsum"
gcloud format = "lorem ""ipsum"
But if I try to add a long TXT record via the terminal, I get an error if I don't split it, and if I do split it I get multiple rrdatas, which I don't want.
Example without splitting (fails with error 400, invalid rdata):
gcloud dns record-sets transaction add "{rdata}" --name mail._domainkey.example.com. --ttl 300 --type TXT --zone {zone}
Example with splitting (this creates multiple rdata values instead of one):
gcloud dns record-sets transaction add "{rdata}" "{rdata1}" --name mail._domainkey.example.com. --ttl 300 --type TXT --zone {zone}
Thanks for any clue.
[edit]
Google doc: https://cloud.google.com/sdk/gcloud/reference/dns/record-sets/transaction/add
Examples:
This gives an rdata value error:
gcloud dns record-sets transaction add --name mail._domainkey.example.com. "v=DKIM1; h=sha256; k=rsa; s=email; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuStUNDJvpZcJpS4awOyD/I4H910zxR1JbrW4DhvHLcIr+ry5TkRvHp+H66LnYyOZVsU8G8S0u8Sftv1kK633M+HJLc0GBaFEFYEpyEwAUvs20S7FoPThz0ZFfxEDTPyww00BWv5BSIUA0HPPpKLHDlDtFr2r/rt9S7IGOLQ0oMm5bDgHZR1UXPbAsFSpcWAkLf5i8SDD0UPVqauqThYKWk0CsVcdF3L7OGQIVK21eVlHVb23e7FBfwO6tDZJQnpaV3LdLSzWPYSB2VeaQAIpZfaKMzmJJW/v0pnQZ2UD9WWaj3X4a+1VVLfx97CqiQigqMpbcrzrnhHGz6pwjEPwowIDAQAB" --ttl 300 --type TXT --zone {zone}
This creates multiple rdata values:
gcloud dns record-sets transaction add --name mail._domainkey.example.com. "v=DKIM1; h=sha256; k=rsa; s=email; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuStUNDJvpZcJpS4awOyD/I4H910zxR1JbrW4DhvHLcIr+ry5TkRvHp+H66LnYyOZVsU8G8S0u8Sftv1kK633M+HJLc0GBaFEFYEpyEwAUvs20S7FoPThz0ZFfxEDTPyww00BWv5BSIUA0HPPpKLHDlDtFr2r/rt9S7IGOLQ0oMm5bDgHZR1UXPbAsFSpcWAkLf5i8SDD0UPVqauqT" "hYKWk0CsVcdF3L7OGQIVK21eVlHVb23e7FBfwO6tDZJQnpaV3LdLSzWPYSB2VeaQAIpZfaKMzmJJW/v0pnQZ2UD9WWaj3X4a+1VVLfx97CqiQigqMpbcrzrnhHGz6pwjEPwowIDAQAB" --ttl 300 --type TXT --zone {zone}
[SOLUTION]
I found the solution:
'"string1""string2""string3"'
Wrapping everything in single quotes makes the shell pass it as a single argument, so gcloud treats it as one rrdata made up of multiple quoted strings. Like this:
gcloud dns record-sets transaction add --name mail._domainkey.example.com. '"v=DKIM1; h=sha256; k=rsa; s=email; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuStUNDJvpZcJpS4awOyD/I4H910zxR1JbrW4DhvHLcIr+ry5TkRvHp+H66LnYyOZVsU8G8S0u8Sftv1kK633M+HJLc0GBaFEFYEpyEwAUvs20S7FoPThz0ZFfxEDTPyww00BWv5BSIUA0HPPpKLHDlDtFr2r/rt9S7IGOLQ0oMm5bDgHZR1UXPbAsFSpcWAkLf5i8SDD0UPVqauqT" "hYKWk0CsVcdF3L7OGQIVK21eVlHVb23e7FBfwO6tDZJQnpaV3LdLSzWPYSB2VeaQAIpZfaKMzmJJW/v0pnQZ2UD9WWaj3X4a+1VVLfx97CqiQigqMpbcrzrnhHGz6pwjEPwowIDAQAB"' --ttl 300 --type TXT --zone {zone}

The solution is two double quotes between the chunks inside a single-quoted argument, e.g. '"string1""string2""string3"', as posted by @Mariano DAngelo.
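To avoid splitting the value by hand, the chunking can be scripted. A minimal sketch, assuming bash with fold/sed/paste available, and with placeholder values for the rdata and zone:
# Split the value into 255-char chunks, double-quote each chunk,
# and join everything into one shell argument (one rrdata).
RDATA='v=DKIM1; h=sha256; k=rsa; s=email; p=...'   # placeholder: the full unsplit value
CHUNKED=$(printf '%s' "$RDATA" | fold -w 255 | sed 's/.*/"&"/' | paste -sd ' ' -)
gcloud dns record-sets transaction add "$CHUNKED" \
  --name mail._domainkey.example.com. --ttl 300 --type TXT --zone "$ZONE"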

Related

How to get gcloud dataproc create flags in a Spark job?

I want to get the flags used when creating a Dataproc cluster from within a Spark job.
For example, I created my cluster using this command line:
gcloud dataproc clusters create cluster-name \
--region=region \
--bucket=bucket-name \
--temp-bucket=bucket-name \
other args ...
In my Scala Spark job I want to get the bucket name and the other arguments. How can I do that? I know that if I want to get the arguments of my job, I can do this:
val sc = sparkSession.sparkContext
val conf_context = sc.getConf.getAll  // all Spark configuration key/value pairs
conf_context.foreach(println)
Any help, please?
Thanks
Dataproc also publishes some attributes, including the bucket name, to GCE instance Metadata. You can also specify your own Metadata. See https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/metadata.
These will be available to you through the Metadata server. For example, if you want to read the bucket name, you can run:
curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/attributes/dataproc-bucket
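You can also attach your own key at cluster creation time and read it back the same way. A minimal sketch, with hypothetical names (my-cluster, my-bucket-name):
# Attach a custom metadata key when creating the cluster...
gcloud dataproc clusters create my-cluster --region us-central1 --metadata my-bucket-name=my-bucket
# ...then read it back from any node of that cluster.
curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/attributes/my-bucket-name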
You can use the gcloud dataproc clusters describe shell command to get details about the cluster:
gcloud dataproc clusters describe $clusterName --region $clusterRegion
To get the bucket name from this command, you can use grep:
BUCKET_NAME=$(gcloud dataproc clusters describe $clusterName \
--region $clusterRegion \
| grep 'configBucket:' \
| sed 's/.* //')
You should be able to execute this from Scala; see this post for how to do it.

Error executing access token command "/google/google-cloud-sdk/bin/gcloud config-helper --format=json"

I'm trying to follow this step-by-step guide to deploy Airflow on Kubernetes (https://github.com/EamonKeane/airflow-GKE-k8sExecutor-helm), but in this part of the execution I run into problems, as follows:
Researching the topic, I have not found anything that solves my problem so far. Does anyone have any suggestions on what to do?
SQL_ALCHEMY_CONN=postgresql+psycopg2://$AIRFLOW_DB_USER:$AIRFLOW_DB_USER_PASSWORD@$KUBERNETES_POSTGRES_CLOUDSQLPROXY_SERVICE:$KUBERNETES_POSTGRES_CLOUDSQLPROXY_PORT/$AIRFLOW_DB_NAME
echo $SQL_ALCHEMY_CONN > /secrets/airflow/sql_alchemy_conn
# Create the fernet key, which is needed to decrypt the database
FERNET_KEY=$(dd if=/dev/urandom bs=32 count=1 2>/dev/null | openssl base64)
echo $FERNET_KEY > /secrets/airflow/fernet-key
kubectl create secret generic airflow \
--from-file=fernet-key=/secrets/airflow/fernet-key \
--from-file=sql_alchemy_conn=/secrets/airflow/sql_alchemy_conn
Unable to connect to the server: error executing access token command
"/google/google-cloud-sdk/bin/gcloud config config-helper --format=json":
err=exit status 1 output= stderr=ERROR: gcloud crashed (BadStatusLine): ''
If you would like to report this issue, please run the following command: gcloud feedback
To check gcloud for common problems, please run the following command: gcloud info --run-diagnostics
I solved this by opening a new Cloud Shell tab and connecting to the cluster again:
gcloud container clusters get-credentials testcluster1 --zone=your_zone
Example:
Get the name and location of your cluster:
gcloud container clusters list
then:
gcloud container clusters get-credentials demo --zone=us-west1-a

Using Cloud Shell to Access a Private Kubernetes Cluster in GCP

The following link https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters talks about setting up a private GKE cluster in a separate custom VPC. The Terraform code that creates the cluster and VPCs is available from https://github.com/rajtmana/gcp-terraform/blob/master/k8s-cluster/main.tf. Cluster creation completed, and I wanted to run some kubectl commands from the Google Cloud Shell. I used the following commands:
$ gcloud container clusters get-credentials mservice-dev-cluster --region europe-west2
$ gcloud container clusters update mservice-dev-cluster \
> --region europe-west2 \
> --enable-master-authorized-networks \
> --master-authorized-networks "35.241.216.229/32"
Updating mservice-dev-cluster...done.
ERROR: (gcloud.container.clusters.update) Operation [<Operation
clusterConditions: []
detail: u'Patch failed'
$ gcloud container clusters update mservice-dev-cluster \
> --region europe-west2 \
> --enable-master-authorized-networks \
> --master-authorized-networks "172.17.0.2/32"
Updating mservice-dev-cluster...done.
Updated [https://container.googleapis.com/v1/projects/protean-XXXX/zones/europe-west2/clusters/mservice-dev-cluster].
To inspect the contents of your cluster, go to:
https://console.cloud.google.com/kubernetes/workload_/gcloud/europe-west2/mservice-dev-cluster?project=protean-XXXX
$ kubectl config current-context
gke_protean-XXXX_europe-west2_mservice-dev-cluster
$ kubectl get services
Unable to connect to the server: dial tcp 172.16.0.2:443: i/o timeout
When I give the public IP of the Cloud Shell, it says that a public IP is not allowed, with the error message given above. If I give the internal IP of the Cloud Shell starting with 172, the connection times out as well. Any thoughts? Appreciate the help.
Google suggests creating a VM within the same network as the cluster, accessing it via SSH from Cloud Shell, and running the kubectl commands from there:
https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies
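A minimal sketch of that approach, with hypothetical names (proxy-vm, your-network, your-subnet):
# Create a small VM in the cluster's VPC to act as a jump host.
gcloud compute instances create proxy-vm --zone europe-west2-a --network your-network --subnet your-subnet --machine-type e2-small
# SSH into it from Cloud Shell (install kubectl on the VM if needed)...
gcloud compute ssh proxy-vm --zone europe-west2-a
# ...then, on the VM, fetch credentials and run kubectl as usual.
gcloud container clusters get-credentials mservice-dev-cluster --region europe-west2
kubectl get services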
Try to perform the following
gcloud container clusters get-credentials [CLUSTER_NAME]
And confirm that kubectl is using the right credentials:
gcloud auth application-default login
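As a quick sanity check afterwards (standard commands, nothing specific to this setup):
kubectl config current-context   # should name the intended cluster
kubectl cluster-info             # should reach the cluster's master endpoint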

How to create an SSH tunnel in gcloud, but keep getting API error

I am trying to set up Datalab from my Chromebook using the following tutorial https://cloud.google.com/dataproc/docs/tutorials/dataproc-datalab. However, when trying to set up an SSH tunnel using the following guidelines https://cloud.google.com/dataproc/docs/concepts/accessing/cluster-web-interfaces#create_an_ssh_tunnel I keep receiving the following error.
ERROR: (gcloud.compute.ssh) Could not fetch resource:
- Project 57800607318 is not found and cannot be used for API calls. If it is recently created, enable Compute Engine API by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=57800607318 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
The error message would lead me to believe my "Compute Engine API" is not enabled. However, I have double-checked, and "Compute Engine API" is enabled.
Here is what I am entering into the Cloud Shell:
gcloud compute ssh ${test-cluster-m} \
--project=${datalab-test-229519} --zone=${us-west1-b} -- \
-4 -N -L ${8080}:${test-cluster-m}:${8080}
The ${} syntax is for accessing local environment variables. You set them in the previous step with:
export PROJECT=project;export HOSTNAME=hostname;export ZONE=zone;export PORT=number
In this case that would be:
export PROJECT=datalab-test-229519;export HOSTNAME=test-cluster-m;export ZONE=us-west1-b;export PORT=8080
Either try this:
gcloud compute ssh test-cluster-m \
--project datalab-test-229519 --zone us-west1-b -- \
-D 8080 -N
Or access the environment variables with:
gcloud compute ssh ${HOSTNAME} \
--project=${PROJECT} --zone=${ZONE} -- \
-D ${PORT} -N
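Once the tunnel is up, the linked tutorial connects a browser through the SOCKS proxy to reach the cluster's web interfaces. A sketch, assuming Chrome on Linux and the port used above:
# Launch a separate browser profile that routes its traffic through the tunnel.
/usr/bin/google-chrome --proxy-server="socks5://localhost:8080" --user-data-dir=/tmp/test-cluster-m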
Also check that the VM you are trying to access is running.

Google Dataproc Agent reports failure when using initialization script

I am trying to set up a cluster with an initialization script, but I get the following error:
[BAD JSON: JSON Parse error: Unexpected identifier "Google"]
In the log folder the init script output log is absent.
This seems rather strange, as it seemed to work last week, and the error message does not seem related to the init script, but rather to the input arguments for the cluster creation. I used the following command:
gcloud beta dataproc clusters create <clustername> --bucket <bucket> --zone <zone> --master-machine-type n1-standard-1 --master-boot-disk-size 10 --num-workers 2 --worker-machine-type n1-standard-1 --worker-boot-disk-size 10 --project <projectname> --initialization-actions <gcs-uri of script>
Apparently changing
#!/bin/sh
to
#!/bin/bash
and removing all "sudo" occurrences did the trick.
This particular error occurs most often when the initialization script is in a Cloud Storage (GCS) bucket to which the project running the cluster does not have access.
I would recommend double-checking that the project being used for the cluster has read access to the bucket.
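One way to check is with gsutil, run with the same project credentials the cluster uses (a sketch, with hypothetical bucket and script names):
# Can the script object be listed and read?
gsutil ls gs://your-init-bucket/init-script.sh
gsutil cat gs://your-init-bucket/init-script.sh | head -n 1
# Inspect the bucket's IAM policy for the project's service accounts.
gsutil iam get gs://your-init-bucket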