Installing Cloud SQL Proxy on Dataproc - google-cloud-dataproc

I'm trying to install the Cloud SQL Proxy on a Dataproc cluster using the initialization action given as an example here. I removed all the Hive metastore parts because I don't need them. The proxy is installed, but I can't connect to my instance. The created cloud-sql-proxy.service file looks fine:
> cat /usr/lib/systemd/system/cloud-sql-proxy.service
[Unit]
Description=Google Cloud SQL Proxy
After=local-fs.target network-online.target
After=google.service
Before=shutdown.target
[Service]
Type=simple
ExecStart=/usr/local/bin/cloud_sql_proxy -dir=/var/run/cloud_sql_proxy -instances_metadata=attributes/additional-cloud-sql-instances
[Install]
WantedBy=multi-user.target
Also, if I try to get the value of attributes/additional-cloud-sql-instances it looks fine:
> /usr/share/google/get_metadata_value attributes/additional-cloud-sql-instances
> myproject-12345:europe-west1:my-db-instance=tcp:3333
Yet I can't connect to the instance. When I stop the service and start it again I see an error message:
> /usr/local/bin/cloud_sql_proxy -dir=/var/run/cloud_sql_proxy -instances_metadata=attributes/additional-cloud-sql-instances
2017/07/03 09:23:44 Ready for new connections
2017/07/03 09:23:44 Error on receiving new instances from metadata: metadata: GCE metadata "attributes/additional-cloud-sql-instances" not defined
Am I doing something wrong?
In the meantime I can get this to work by using -instances=myproject-12345:europe-west1:my-db-instance=tcp:3333 instead of using the metadata key, but I wonder why it doesn't work as provided in the example.

There's a subtle difference in how the Cloud SQL proxy interprets flags and how get_metadata_value interprets its input.
When using get_metadata_value, the attribute to read is assumed to be relative to
http://metadata.google.internal/computeMetadata/v1/instance/
while in the case of the Cloud SQL proxy, all paths should be relative to:
http://metadata.google.internal/computeMetadata/v1/
So if your intention is to have the Cloud SQL proxy read instance metadata, pass in:
-instances_metadata=instance/attributes/additional-cloud-sql-instances
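If you want to see the difference for yourself, you can query the metadata server directly from the VM. This is only a minimal sketch (standard-library Python, run on the Dataproc node) using the attribute name from the question; it just illustrates which prefix each tool expects.

import urllib.request

# Base URL of the GCE metadata server; every request needs the
# Metadata-Flavor header or the server rejects it.
BASE = "http://metadata.google.internal/computeMetadata/v1/"

def read_metadata(path):
    req = urllib.request.Request(BASE + path, headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# get_metadata_value resolves its argument relative to .../v1/instance/,
# while the Cloud SQL proxy resolves -instances_metadata relative to .../v1/,
# so the proxy flag needs the extra leading "instance/" component:
print(read_metadata("instance/attributes/additional-cloud-sql-instances"))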

Related

Run Arango Shell (Arangosh) on a Kubernetes pod

I have set up an Arango instance on Kubernetes nodes, which were installed on a VM, as described in the ArangoDB docs (ArangoDB on Kubernetes). Keep in mind, I skipped the ArangoLocalStorage and ArangoDeploymentReplication steps. I can see 3 pods each for agents, coordinators and dbservers in get pods.
The arango-cluster-ea service, however, shows the external IP as pending. I can use the master node's IP address and the service port to access the web UI, connect to the DB and make changes. But I am not able to access the Arango shell, nor am I able to use my Python code to connect to the DB. I am using the master node IP and the service port shown for arango-cluster-ea in services to try to make the Python code connect to the DB. Similarly, for arangosh, I am trying the command:
kubectl exec -it *arango-cluster-crdn-pod-name* -- arangosh --server.endpoint tcp://masternodeIP:8529
In the case of Python, since the Connection call is in a try block, it falls through to the except block. In the case of arangosh, it opens the Arango shell with the error:
Cannot connect to tcp://masternodeIP:port
thus not connecting to the DB.
Any leads about this would be appreciated.
Posting this community wiki answer to point to the GitHub issue where this question was resolved.
Feel free to edit/expand.
Link to github:
Github.com: Arangodb: Kube-arangodb: Issues: 734
Here's how my issue got resolved:
To connect with arangosh, what worked for me was to use ssl instead of tcp in front of the localhost:8529 host:port combination in --server.endpoint. Here's the command that worked:
kubectl exec -it _arango_cluster_crdn_podname_ -- arangosh --server.endpoint ssl://localhost:8529
For the web browser, since my external access was based on a NodePort-type service, I used the master node's IP and the 30000-range port number that was generated (in my case, it was 31200).
For Python, in the case of PyArango's Connection class, it worked when I used the arango-cluster-ea service. I put the following line in the connection call:
conn = Connection(arangoURL='https://arango-cluster-ea:8529', verify=False, username='root', password='XXXXX')
The verify=False flag is important so that SSL certificate validation is skipped; otherwise it will throw an error again.
Hopefully this helps somebody else facing a similar issue.
I've tested the following solution and managed to successfully connect to the database via:
arangosh from localhost:
Connected to ArangoDB 'http+ssl://localhost:8529, version: 3.7.12 [SINGLE, server], database: '_system', username: 'root'
Python code
from pyArango.connection import *
conn = Connection(arangoURL='https://ABCD:8529', username="root", password="password", verify=False)
db = conn.createDatabase(name="school")
Additional resources:
Arangodb.com: Tutorials: Tutorial Python
Arangodb.com: Docs: Stable: Tutorials Kubernetes

Connect Cloud SQL for MySql from Cloud Run

I'm trying to connect to Google Cloud SQL for MySQL from Google Cloud Run. I've been through many posts on SO and also the Google documentation, but I don't seem to be able to get it working. I admit that I have never containerized any application and I am predominantly a data person.
Note: I already have a Cloud SQL instance created, which I can connect to through Cloud Shell.
1) My code is based on the Google documentation at
https://cloud.google.com/sql/docs/mysql/connect-run
and the github code at
https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/cloud-sql/mysql/sqlalchemy/main.py
2) Build the image
gcloud builds submit --tag gcr.io/my-project/my-test-image
3) Deploy the image
gcloud run deploy --image gcr.io/my-project/my-test-image
4) When I try to invoke the service in Cloud Shell using the following, I get "Service Unavailable" as the output:
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" https://my-service.run.app
Can anyone please help me to resolve this?
Changes:
I had not provided the MySQL instance name. I corrected it, and now I'm getting the error below:
[INFO] Listening at: http://0.0.0.0:8080 (1)
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/pymysql/connections.py", line 571, in connect
    sock.connect(self.unix_socket)
FileNotFoundError: [Errno 2] No such file or directory
I don't understand why it's listening on 0.0.0.0:8080. I removed the host and port from my code and I still get this info message, and the error I get is the FileNotFoundError. I'm not sure which file it needs. Does it need some kind of connections file? If so, what's the format of that file?
I wrote a tutorial on how to connect to Cloud SQL (MySQL) from Cloud Run (Java), but the flow is the same: link
Please make sure you understand point 13, where you have to pass the instance connection name with the --add-cloudsql-instances flag; this will start the Cloud SQL proxy for you (the sketch below shows how the application then connects over the socket).
13. Configure the service for use with Cloud Run
gcloud run services update run-mysql --add-cloudsql-instances run-to-sql:europe-west2:database-external --set-env-vars CLOUD_SQL_CONNECTION_NAME=run-to-sql:europe-west2:database-external,DB_USER=user_name,DB_PASS=user_password,DB_NAME=user_database
SO link
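For completeness, here is a minimal sketch of what the application side can look like once the instance is attached with --add-cloudsql-instances. It assumes the environment variables from the command above and SQLAlchemy 1.4+ with PyMySQL installed in the container; the socket under /cloudsql/ is the file the FileNotFoundError in the question is failing to find.

import os
import sqlalchemy

# Values come from the env vars set with --set-env-vars above (assumption).
connection_name = os.environ["CLOUD_SQL_CONNECTION_NAME"]
db_user = os.environ["DB_USER"]
db_pass = os.environ["DB_PASS"]
db_name = os.environ["DB_NAME"]

# Cloud Run exposes the attached instance as a unix socket under /cloudsql/,
# so PyMySQL must connect through that socket rather than a host and port.
engine = sqlalchemy.create_engine(
    sqlalchemy.engine.url.URL.create(
        drivername="mysql+pymysql",
        username=db_user,
        password=db_pass,
        database=db_name,
        query={"unix_socket": f"/cloudsql/{connection_name}"},
    )
)

with engine.connect() as conn:
    print(conn.execute(sqlalchemy.text("SELECT 1")).scalar())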

Deploy API REST IBM Hyperledger Composer Blockchain (bad flag in substitute command: 'U' ERROR)

I'm getting this error when trying to deploy a card to a working blockchain in the cloud; any idea? Thanks in advance. I'm using a Mac and following the guide (Kubernetes installed and configured correctly, I think):
https://ibm-blockchain.github.io/interacting/
./create/create_composer-rest-server.sh --paid --business-network-card /Users/sm/jsblock/tutorial-network/PeerAdmin#fabric-network.card
Configured to setup a paid storage on ibm-cs
Preparing yaml file for create composer-rest-server
sed: 1: "s/%COMPOSER_CARD%//User ...": bad flag in substitute command: 'U'
Creating composer-rest-server pod
Running: kubectl create -f /Users/sm/jsblock/ibm-container-service/cs-offerings/scripts/../kube-configs/composer-rest-server.yaml
error: no objects passed to create
Composer rest server created successfully
There is an error in that document, and you are also specifying the PeerAdmin card when you need to use a network admin card.
There are 2 sets of 'parallel' documents for paid and free clusters. The command you are using has --paid in it in error. If you remove --paid and use the network admin card, I think it will solve the problem.
Your command will look something like this: ./create/create_composer-rest-server.sh --business-network-card admin#YOUR-network
I tried again with the install process (the deployed blockchain instance seemed to work well): https://ibm-blockchain.github.io/simple/
But I noticed that the script:
./create_all.sh
...never ended (mostly internal Kubernetes problems, I guess). So I reset the installation:
./delete_all.sh
./create_all.sh
I tried again, and now everything is OK. I could get my awesome REST API talking with my Hyperledger blockchain, deployed on IBM Cloud. Ready to develop something amazing.

Connection to an S3 instance using a service-connector

I'm trying to create a service-connector to my s3 instance like this:
cf service-connector 13001 mybucketname.ds31s3.swisscom.com:443
But I get the following error:
Server-Error 403: Check of security groups failed (no access)
I have created my service key according to this documentation.
Connecting to my MongoDB works perfectly using a service connector.
You can access Swisscom's S3 directly without the service connector.
The error message suggests that your current org and space do not have access to the S3. This is usually the case if there is no app binding for that service in the current space. Please check whether you created your service key in the right org and space.
There was a misconfiguration due to security changes. We fixed the issue, so connecting to s3 with the service-connector should now work.

installing kubernetes on coreos with rkt and automated script

I'm trying to install Kubernetes with rkt on my real (not virtual) CoreOS servers at home using the scripts at https://github.com/coreos/coreos-kubernetes/tree/master/multi-node/generic and I have some questions.
My etcd2 is using TLS keys; I can't see anywhere in the script where I can define where the certificates are located.
Can I supply a domain instead of an IP for ADVERTISE_IP and CONTROLLER_ENDPOINT?
When I tried to install Kubernetes manually, I needed to start the rkt API service. The documents don't state that it's needed here; does that mean I don't need it if I use these scripts, or is it just something that's missing from the documents?
thanks!
Update
Rob, thank you so much for your response. I wasn't clear enough regarding etcd2. I already have etcd2 with TLS installed and properly configured on my CoreOS servers, so I configured my etcd servers in the controller-install.sh file:
export ETCD_ENDPOINTS="https://coreos-2.tux-in.com:2379,https://coreos-3.tux-in.com:2379"
but when I run the controller-install.sh script, it keeps repeating the following output:
Waiting for etcd...
Trying: https://coreos-2.tux-in.com:2379
Trying: https://coreos-3.tux-in.com:2379
Trying: https://coreos-2.tux-in.com:2379
Trying: https://coreos-3.tux-in.com:2379
...
So I was guessing it's because I didn't define the etcd-related TLS certificates in the controller script, and that is why it's stuck in that phase.
On my MacBook Pro laptop I have the following alias configured:
alias myetcdctl="~/apps/etcd-v3.0.8-darwin-amd64/etcdctl --endpoint=https://coreos-2.tux-in.com:2379 --ca-file=/Users/ufk/Projects/coreos/tux-in/etcd/certs/certs-names/ca.pem --cert-file=/Users/ufk/Projects/coreos/tux-in/etcd/certs/certs-names/etcd1.pem --key-file=/Users/ufk/Projects/coreos/tux-in/etcd/certs/certs-names/etcd1-key.pem --timeout=10s"
So when I run myetcdctl member list, I get:
8832ce6a269a7dac: name=ccff826d5f564c67abf35467306f80a0 peerURLs=https://coreos-3.tux-in.com:2380 clientURLs=https://coreos-3.tux-in.com:2379 isLeader=true
a2c0ac9708ef90fc: name=dc38bc8f20e64940b260d3f7b260430d peerURLs=https://coreos-2.tux-in.com:2380 clientURLs=https://coreos-2.tux-in.com:2379 isLeader=false
So I'm guessing that I don't really have a problem there.
Any ideas?
Thanks!
My etcd2 is using TLS keys; I can't see anywhere in the script where I can define where the certificates are located.
These scripts don't start an etcd server. You will need to set one up manually and will be able to use TLS and as many nodes as you would like. This isn't clear in the current form of the document; I will attempt a PR to fix it.
Can I supply a domain instead of an IP for ADVERTISE_IP and CONTROLLER_ENDPOINT?
Only CONTROLLER_ENDPOINT can be a domain name.
When I tried to install Kubernetes manually, I needed to start the rkt API service. The documents don't state that it's needed here; does that mean I don't need it if I use these scripts, or is it just something that's missing from the documents?
These scripts include and start the rkt API service. As you can see below, it also has a Restart parameter set (source):
[Unit]
Before=kubelet.service
[Service]
ExecStart=/usr/bin/rkt api-service
Restart=always
RestartSec=10
[Install]
RequiredBy=kubelet.service