I am using Docker Compose to spin up an API, along with Dapr for state management and a local Cosmos DB emulator.
When I specify the local Cosmos DB emulator as the state store in my Dapr component, the Dapr container fails to start with the error message:
level=warning msg="error initializing state store cosmosdb (state.azure.cosmosdb/v1): Post "https://localhost:8081//dbs": dial tcp 127.0.0.1:8081: connect: connection refused"
If I change the Dapr component to point at my Cosmos DB instance in Azure, the Dapr container works fine, so I know this is an issue with Cosmos DB locally. I'm new to Dapr and Cosmos DB, so I'm not sure what I'm missing.
Since you are using HTTPS, do you have a self-signed certificate? This might be caused by an SSL exception, as the emulator uses a self-signed certificate that clients must trust.
Also, check whether your local Azure Cosmos DB Emulator is running by navigating to this URL:
https://localhost:8081/_explorer/index.html
Check out some related issues: here, here, and here. Also try closing and restarting the emulator.
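One more thing worth checking when everything runs under Docker Compose: inside the Dapr sidecar's container, localhost refers to that container itself, not to the machine running the emulator, which would produce exactly this connection-refused error. A component sketch along those lines, assuming the emulator runs on the Docker host and using the emulator's well-known default key (the database and collection names are placeholders):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.azure.cosmosdb
  version: v1
  metadata:
  - name: url
    value: https://host.docker.internal:8081   # not localhost from inside a container
  - name: masterKey
    value: "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw=="
  - name: database
    value: mydb            # placeholder
  - name: collection
    value: mycollection    # placeholder
```

On Linux hosts, host.docker.internal may need to be mapped via extra_hosts in the compose file, or the Dapr container placed on the same compose network as the emulator and pointed at the emulator's service name instead.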
Related
I have a Cloud SQL for PostgreSQL instance set up on GCP with public IP enabled. Previously I was able to connect via the Cloud SQL proxy, but now I am getting the following error:
errors parsing config:
googleapi: Error 503: Policy checks are unavailable., backendError
If I try to list SQL instances using gcloud sql instances list, that also throws the same error. According to https://status.cloud.google.com/ there is no disruption in any service.
The Postgres server is running with no issues, and other tools accessing the database directly are also working; the issue is only with accessing the database locally through the Cloud SQL proxy.
Both Spinnaker and Bitbucket are located inside a private subnet. Spinnaker is deployed on Kubernetes. When I try to test the connection, it gives this error:
http://spinnaker-mydomain.com:8084/webhooks/git/bitbucket
java.lang.IllegalStateException: Request cannot be executed; I/O
reactor status: STOPPED
When I try to test the connection from telnet, it also closes with connection refused.
I'm wondering whether it has something to do with authentication, or whether there is a problem with the Bitbucket service.
If you're using Bitbucket on-prem (Bitbucket Server, formerly Stash), use the URL:
http://spinnaker-mydomain.com:8084/webhooks/git/stash
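To check which endpoint answers, you could probe the path from inside the cluster's network; this is just an illustrative invocation against the host name from the question, with an empty JSON body:

```
# Hypothetical probe; Bitbucket Server payloads go to /stash rather than /bitbucket
curl -v -X POST -H "Content-Type: application/json" -d '{}' \
  http://spinnaker-mydomain.com:8084/webhooks/git/stash
```

A connection refused here (as opposed to an HTTP error response) would point at networking or the gate service being down rather than authentication.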
I am trying to connect ThingWorx to an Azure Database for PostgreSQL instance. I have two Azure accounts. When I create the PostgreSQL database in one of the accounts and connect, it works fine.
But when I try to connect to the Azure PostgreSQL database in the second account, the connection fails and I get the following error:
Unable to Invoke Service GetStudentData on Database_Functions : FATAL: Client from Azure Virtual Networks is not allowed to access the server. Please make sure your Virtual Network is correctly configured.
Apparently, the PostgreSQL server was behind a Virtual Network service endpoint, and a service endpoint tag was enabled.
To solve the problem, disable the service endpoint and add the public IP to the Connection security section.
It seems to be a firewall rule problem, not related to ThingWorx itself.
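If you manage the server from the CLI, the firewall rule from that last step can also be added with az; the resource group, server name, rule name, and IP below are all placeholders:

```
az postgres server firewall-rule create \
  --resource-group myResourceGroup \
  --server-name mypgserver \
  --name AllowThingWorxIP \
  --start-ip-address 203.0.113.10 \
  --end-ip-address 203.0.113.10
```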
I'm running a Sails.js application on Google App Engine that uses a Google Cloud SQL for PostgreSQL instance. I'm getting a connection refused error when I deploy the application. This is my Sails.js connection config:
postgresGoogle: {
  adapter: 'sails-postgresql',
  socketPath: '/cloudsql/' + process.env.INSTANCE_CONNECTION_NAME,
  user: 'xxxxx',
  password: 'xxxxx',
  database: 'postgres'
}
If I add the host, it throws a timeout error. Does anyone know the proper way to configure a Sails.js connection with GCP PostgreSQL?
Where exactly is your Sails.js application running? Is it on App Engine Flex? I would recommend deploying to App Engine Flex, as described here, and then connecting to PostgreSQL from the Flex environment. Otherwise, are you using any of the optional steps described in this link for the connection?
Solved
As of 16 December 2022...
I finally got Sails.js to work today with GCP Cloud SQL. If you follow the tutorials from Google, you have either Unix sockets or TCP options to try, and sadly neither works with out-of-the-box sails-postgresql.
My workaround was to connect via a VPC connector with a dedicated IP address. This way I can connect to Cloud SQL using a regular Postgres connection string, directly to the DB.
https://cloud.google.com/appengine/docs/legacy/standard/python/outbound-ip-addresses
Then I whitelisted the new dedicated IP in the Cloud SQL security settings and enforced SSL with valid certificates.
It may not be best practice for now per Google's docs, but it works.
I'm trying to deploy a Service Fabric application to an unsecured Azure Service Fabric cluster. When I open the publish window in VS 2017 I get the following. If my cluster is unsecured, shouldn't I be able to publish it without configuring the certificate?
I tried publishing anyway and I got:
Try accessing the cluster via PowerShell to get a more detailed error. Usually, errors like this are caused by a firewall blocking port 19000.
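From PowerShell, two quick checks might look like this (the cluster address is a placeholder): Test-NetConnection tells you whether the port is reachable at all, and Connect-ServiceFabricCluster needs only the endpoint on an unsecured cluster.

```
# Is port 19000 reachable, or is a firewall in the way?
Test-NetConnection -ComputerName mycluster.westus.cloudapp.azure.com -Port 19000

# Connect to an unsecured cluster (requires the Service Fabric SDK installed)
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.westus.cloudapp.azure.com:19000"
```

If the first command fails, fix the network security group or firewall rules before worrying about the Visual Studio publish profile.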