I have created a Cloud SQL instance as shown below -
I have also allowed all networks (0.0.0.0/0) under authorized networks. But when I try to connect to it using gcloud sql connect <ip address> -u <username> I get this error -
ERROR: (gcloud.sql.connect) HTTPError 400: Invalid request: instance name (34.69.175.233).
When I run the same command with --log-http, like this: gcloud sql connect <ip address> -u <username> --log-http, the log shows this -
{
  "error": {
    "code": 400,
    "message": "Invalid request: instance name (34.69.175.233).",
    "errors": [
      {
        "message": "Invalid request: instance name (34.69.175.233).",
        "domain": "global",
        "reason": "invalid"
      }
    ]
  }
}
I am not able to identify what exactly the problem is.
Note first that gcloud sql connect expects the instance name, not its IP address; that is why the error reports the IP as an invalid instance name. More importantly, the gcloud sql connect command is not supported for Cloud SQL for SQL Server at this time, and you may instead connect via a local client or the proxy. I have sent feedback to the Cloud SQL documentation team asking them to clarify in the gcloud documentation that this command is not applicable to Cloud SQL for SQL Server until the gcloud command gets implemented.
You can try with "cloud_sql_proxy -instances=[connection-name]=tcp:0.0.0.0:[port]"
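For example, a minimal sketch (the connection name, port, and password are placeholders, and sqlcmd stands in for whatever local SQL Server client you use):
./cloud_sql_proxy -instances=myproject:us-central1:myinstance=tcp:0.0.0.0:1433 &
# once the proxy logs "Ready for new connections", connect with a local client:
sqlcmd -S 127.0.0.1,1433 -U sqlserver -P '<PASSWORD>'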
When trying to get a psql shell (not using an IAM user) I am receiving:
> gcloud alpha sql connect pg-instance --database mydb --user myuser --project my-project
Starting Cloud SQL Proxy: [/Users/me/google-cloud-sdk/bin/cloud_sql_proxy -instances my-project:us-central1:pg-instance=tcp:9470 -credential_file /Users/me/.config/gcloud/legacy_credentials/me@me.com/adc.json]
2022/03/15 14:47:59 Rlimits for file descriptors set to {Current = 8500, Max = 9223372036854775807}
2022/03/15 14:47:59 using credential file for authentication; path="/Users/me/.config/gcloud/legacy_credentials/me@me.com/adc.json"
2022/03/15 14:48:00 Listening on 127.0.0.1:9470 for my-project:us-central1:pg-instance
2022/03/15 14:48:00 Ready for new connections
Connecting to database with SQL user [myuser].Password:
psql: error: connection to server at "127.0.0.1", port 9470 failed: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
I had the same error message when connecting to Postgres (Cloud SQL) using a service account.
In my setup I ran cloud_sql_proxy inside a Docker container.
In order to make it work I had to add the extra configuration defined in step #9 of https://cloud.google.com/sql/docs/sqlserver/connect-docker#connect-client
docker run -d \
  -v <PATH_TO_KEY_FILE>:/config \
  -p 127.0.0.1:5432:5432 \
  gcr.io/cloudsql-docker/gce-proxy:1.33.1 /cloud_sql_proxy \
  -instances=<INSTANCE_CONNECTION_NAME>=tcp:0.0.0.0:5432 -credential_file=/config
The missing bits were the host IP in the port mapping and 0.0.0.0 in the cloud_sql_proxy command.
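With the container running, a quick way to verify the proxy end-to-end is to connect through the mapped port (a sketch; the user and database names are placeholders):
psql "host=127.0.0.1 port=5432 user=myuser dbname=mydb"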
There are a few things I would like to point out. The best starting point for me would be the About connection options page; both the Overview and the Before you begin sections are very helpful for getting a full picture of the process and of how to properly configure the user. The most important part, though, is the Connection Options section. For the message connection to server at "127.0.0.1", I'm guessing it is a private IP, but please make sure this section is covered before you start to debug.
In your case, the logs are saying there was an error in the connection to the server…
I used the Troubleshoot guide, which includes the Diagnose issues link, to get to the Debug connection issues page, which has a lot of useful information on how to debug any connectivity issue.
Generally, connection issues fall into one of the following three areas:
Connecting - are you able to reach your instance over the network?
Authorizing - are you authorized to connect to the instance?
Authenticating - does the database accept your database credentials?
Each of those can be further broken down into different paths for investigation.
Once you determine the connection method, there are different questions that will help guide you through the possible troubleshooting paths.
If these guides don't get you to a solution, please make sure to update your question with the results, the steps you followed, and any extra information, to help us assist you further. This would be a good example, as it has the same log error, and this other question shows that there are a few different troubleshooting paths for this specific log message, plus they have useful information for you.
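As a concrete first step for the Connecting area in your setup, you can confirm the proxy's local listener is up (9470 is the port from your output); if it responds, the failure is on the proxy-to-instance leg rather than on your client:
nc -vz 127.0.0.1 9470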
I have this scenario:
a HOST machine running Debian that runs Docker containers.
a CentOS Docker container that has CodeReady Containers (CRC) installed. CRC works inside the container, via the command line, without problems.
I want to access, from the host machine, the CRC web console that runs on https://console-openshift-console.apps-crc.testing (on a specific IP in the container's hosts file).
I found this RedHat guide for accessing CRC remotely.
Applied to Docker containers, I made the following changes to haproxy.conf:
global
    log 127.0.0.1 local0
    debug

defaults
    log global
    mode http
    timeout connect 5000
    timeout check 5000
    timeout client 30000
    timeout server 30000

frontend apps
    bind CONTAINER_IP:80
    bind CONTAINER_IP:443
    option tcplog
    mode tcp
    default_backend apps

backend apps
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server webserver1 CRC_IP:6443 check

frontend api
    bind CONTAINER_IP:6443
    option tcplog
    mode tcp
    default_backend api

backend api
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server webserver1 CRC_IP:6443 check
I enabled forwarding for the container:
$ sysctl net.ipv4.conf.all.forwarding=1
$ sudo iptables -P FORWARD ACCEPT
and I also started CRC behind a proxy:
$ crc config set http-proxy http://example.proxy.com:<port>
$ crc config set https-proxy http://example.proxy.com:<port>
$ crc config set no-proxy <comma-separated-no-proxy-entries>
I can successfully call the URL https://console-openshift-console.apps-crc.testing from the host machine (which has dnsmasq properly configured as its DNS resolver)!
but I get this error:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
Notes:
when CRC starts I have a warning: WARN Wildcard DNS resolution for apps-crc.testing does not appear to be working
even trying to log in with oc on the host machine via the command line fails with an error message with status "Forbidden": Error from server (InternalError): Internal error occurred: unexpected response: 403.
Where is the problem? I can't figure it out.
For those interested, this is the project's Git repository on GitHub.
This message means that the user "system:anonymous" does not have permission to access the cluster. Have you logged in to the CRC cluster as described in the documentation?
3.3. Accessing the OpenShift cluster
oc login -u developer https://api.crc.testing:6443
This is the final message when you run crc start:
To access the cluster, first set up your environment by following 'crc oc-env' instructions.
Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443'.
To login as an admin, run 'oc login -u kubeadmin -p xxxx-xxxx-xxxxx-xxxx https://api.crc.testing:6443'.
Therefore, you first have to run the following to make the oc client available on the command line:
crc oc-env
Then you have to log in with the oc client. In my installation it was:
oc login -u developer https://api.crc.testing:6443
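Putting it together in a bash shell (crc oc-env prints export statements, so eval applies them to the current session; the developer password is the one shown by crc start):
eval $(crc oc-env)
oc login -u developer -p developer https://api.crc.testing:6443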
I'm having issues when trying to connect to my Cloud SQL instance. I created a SQL Server instance, downloaded the Cloud SQL proxy, and everything seems to start correctly, but I keep getting the following error:
errors parsing config:
invalid "instance-connection-name": unsupported network: unix
I'm specifying the tcp port to use, but it still complains about UNIX. Here is the command I'm using when trying to connect (I replaced the actual instance connection name for privacy/security):
./cloud_sql_proxy.exe -instances=[instance-connection-name]=tcp:3306
Any help would be appreciated.
Thanks!
I tried this and it works:
Rename cloud_sql_proxy_xxx to cloud_sql_proxy
Open cmd in your cloud_sql_proxy's location
Run the following command: cloud_sql_proxy -instances=[project:region:instance-name]=tcp:1433, replacing the bracketed part with your own connection name, without the [ ]
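For example, with a hypothetical connection name filled in, the resolved command looks like this:
cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:1433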
From Connecting to Cloud SQL for SQL Server using the Cloud SQL Proxy:
Depending on your language and environment, you can start the proxy using either TCP sockets or Unix sockets.
TCP sockets:
Copy your instance connection name from the Instance details page
For example: myproject:us-central1:myinstance.
If you are using a service account to authenticate the proxy, note the location on your client machine of the private key file that was created when you created the service account.
Start the proxy.
Some possible proxy invocation strings:
a) Using Cloud SDK authentication:
./cloud_sql_proxy -instances=<INSTANCE_CONNECTION_NAME>=tcp:1433
The specified port must not already be in use, for example, by a local database server.
b) Using a service account and explicit instance specification (recommended for production environments):
./cloud_sql_proxy -instances=<INSTANCE_CONNECTION_NAME>=tcp:1433 \
-credential_file=<PATH_TO_KEY_FILE> &
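If you go the service account route, the key file referenced by -credential_file can be created with gcloud (a sketch; the service account email is a placeholder and must already exist, with the Cloud SQL Client role granted):
gcloud iam service-accounts keys create key.json \
    --iam-account=proxy-sa@my-project.iam.gserviceaccount.com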
I have created a Mongo container using only the base mongo:3.6.4 official Docker image and deployed it to my OpenShift OKD cluster, but I cannot connect to this MongoDB instance using a Mongo client from outside the cluster.
I can access the pod at http://mongodb.my.domain and successfully get the "It looks like you are trying to access MongoDB over HTTP on the native driver port." message.
When using the terminal on the pod I can successfully log in using:
mongo "mongodb://mongoadmin:pass@localhost" --authenticationDatabase admin
But when trying to connect from outside OKD the connection fails.
My client needs to pass through a proxy before it can access the OKD pods, and I do have a .der certificate file, but I am unsure if this is related to the issue.
Some commands I have tried:
mongo "mongodb://mongoadmin:pass#mongodb.my.domain:80" --authenticationDatabase admin
mongo --ssl "mongodb://mongoadmin:pass#mongodb.my.domain:80" --authenticationDatabase admin
I expected to be able to connect successfully but instead get this error message:
MongoDB shell version v3.4.20
connecting to: mongodb://mongoadmin:pass@mongodb.my.domain:80
2019-05-15T11:32:25.514+0100 I NETWORK [thread1] recv(): message len 1347703880 is invalid. Min 16 Max: 48000000
2019-05-15T11:32:25.514+0100 E QUERY [thread1] Error: network error while attempting to run command 'isMaster' on host 'mongodb.my.domain:80' :
connect@src/mongo/shell/mongo.js:240:13
@(connect):1:6
exception: connect failed
I am unsure if it is an issue with how I am using my MongoDB client or potentially with some proxy settings on my OKD cluster. Any help would be appreciated.
The problem here is that external OpenShift routes aren't great at handling database connections. When you attempt to connect to the Mongo pod via the route, the route will accept the connection and transmit it to the Mongo service. I believe this transmission wraps the connection in an HTTP wrapper, which Mongo doesn't like to handle. The OKD documentation highlights that path-based route traffic should be HTTP-based, which will cause the connection to fail.
You can see evidence of this when trying to connect to a MongoDB database and it returns "It looks like you are trying to access MongoDB over HTTP on the native driver port." to the browser. The user relief.malone explains this and has proposed a couple of solutions / workarounds in their answer to this question.
To add to relief.malone's answer, I would suggest that you port-forward from the MongoDB pod to your local machine for development/debugging. In production, you could deploy an application to OKD that references the MongoDB service via its internal DNS name, which will look something like this: mongodb.project_namespace.svc:27017. This way you will avoid the route interfering with the connection.
The OpenShift OKD documentation on port-forwarding isn't that informative, but since oc runs the kubectl command under the hood, you can read this Kubernetes guide for some more information.
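A minimal port-forward sketch for the development case (the pod name is hypothetical; look yours up with the first command, and the credentials are the ones from your question):
oc get pods
oc port-forward <mongodb-pod-name> 27017:27017
# then, from another terminal on the same machine:
mongo "mongodb://mongoadmin:pass@127.0.0.1:27017" --authenticationDatabase admin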
According to the general instructions for the github.com API and the explanation of the create command,
curl -u "krichter722" https://api.github.com # works (returns JSON response)
curl -d '{"name":"test"}' https://api.github.com/user/repos/
should work and create a repository, but the second command fails with
{
  "message": "Not Found",
  "documentation_url": "https://developer.github.com/v3"
}
I found Using `curl` to create a repo on GitHub.com with two-factor authentication, which resolves an issue caused by missing parts in the request for two-factor authentication.
Other questions, like "Bad Credentials" when attempting to create a GitHub repo through the CLI using curl, indicate that the URL is correct (in that case the creation fails due to bad credentials, according to the error message).
You can do it as follows:
curl -u "$username:$token" https://api.github.com/user/repos -d '{"name":"'$repo_name'"}'
You can find your personal access token under GitHub Settings -> Applications; replace $username with your username and $repo_name with your repository name.
Note: you might need to create a personal access token if you haven't used one before.
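Alternatively, you can send the token in an Authorization header instead of with -u (a sketch; the token value is a placeholder and needs the repo scope to create repositories):
curl -H "Authorization: token <personal-access-token>" \
  -d '{"name":"test"}' https://api.github.com/user/repos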