I am using the following statement to connect to an Elasticsearch server:
use Search::Elasticsearch;
# Connect to localhost:9200:
my $e = Search::Elasticsearch->new();
Is there a way to check if the Elasticsearch service is alive and kicking using this module? Also, if it is running on a remote server, how do we check that the service is running?
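A minimal sketch of one way to do this, assuming the client's ping() method and the nodes parameter (the remote host name below is a placeholder):
use strict;
use warnings;
use Search::Elasticsearch;
# Point the client at a remote node instead of localhost:9200;
# 'es.example.com:9200' is a placeholder for your server.
my $e = Search::Elasticsearch->new( nodes => ['es.example.com:9200'] );
# ping() returns true when a node responds and throws a NoNodes error
# when the cluster is unreachable, so wrap it in eval.
my $alive = eval { $e->ping };
print $alive ? "Elasticsearch is up\n" : "Elasticsearch is unreachable\n";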
I'm trying to implement a small API in Docker, and I need that API to write to a database which is hosted on the same server but running on Windows Server 2006.
I can't change the OS on the server because that server also works as a gateway for Power BI.
Should I mount the volume (I'm guessing to C:/mongodb/data), or should I do the insert via localhost?
These are my limitations:
Host: running Windows Server 2006 (can't change this).
App: a container running in Windows Subsystem for Linux (it has to run on Linux because I need async functions and I only have knowledge of Python/Node.js), but it has to persist its data in the Mongo database running on the host.
Mongo database: it has to run on Windows Server because a Power BI Gateway is consuming its data.
Keeping with diagrams, maybe this will help explain it in a better way.
As far as I understand, your system is as in the picture: you want to write data to MongoDB. There should be a network bridge connecting the host and the Linux environment, so you can access MongoDB via the bridge IP. Running another MongoDB in the container and mounting the data directory from the host is not reliable, because the data may conflict.
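A rough sketch of the bridge approach, run from inside the Linux container (192.168.1.10 and apidb are placeholders for the host's bridge IP and your database name):
# Check that the host's MongoDB is reachable over the bridge network.
mongo "mongodb://192.168.1.10:27017/apidb" --eval "db.runCommand({ ping: 1 })"
For this to work, the MongoDB service on the Windows host has to bind to that interface (bindIp in mongod.cfg) and the firewall has to allow port 27017.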
I'm having issues when trying to connect to my Cloud SQL instance. I created a SQL Server instance, downloaded the Cloud SQL Proxy, and everything seems to start connecting, but I keep getting the following error:
errors parsing config:
invalid "instance-connection-name": unsupported network: unix
I'm specifying the TCP port to use, but it still complains about Unix sockets. Here is the command I'm using when trying to connect (I replaced the actual instance connection name for privacy/security):
./cloud_sql_proxy.exe -instances=[instance-connection-name]=tcp:3306
Any help would be appreciated.
Thanks!
I tried this and it works:
Rename cloud_sql_proxy_xxx to cloud_sql_proxy
Open cmd in your cloud_sql_proxy's location
Run the following command: cloud_sql_proxy -instances=[project:region:instance-name]=tcp:1433 (without the square brackets)
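For example, with the sample connection name used in the documentation quoted below (myproject:us-central1:myinstance), the concrete invocation would look like:
cloud_sql_proxy -instances=myproject:us-central1:myinstance=tcp:1433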
From Connecting to a Cloud SQL for SQL Server using a Cloud SQL Proxy:
Depending on your language and environment, you can start the proxy using either TCP sockets or Unix sockets.
TCP sockets:
Copy your instance connection name from the Instance details page
For example: myproject:us-central1:myinstance.
If you are using a service account to authenticate the proxy, note the location on your client machine of the private key file that was created when you created the service account.
Start the proxy.
Some possible proxy invocation strings:
a) Using Cloud SDK authentication:
./cloud_sql_proxy -instances=<INSTANCE_CONNECTION_NAME>=tcp:1433
The specified port must not already be in use, for example, by a local database server.
b) Using a service account and explicit instance specification (recommended for production environments):
./cloud_sql_proxy -instances=<INSTANCE_CONNECTION_NAME>=tcp:1433 \
-credential_file=<PATH_TO_KEY_FILE> &
I have created a Mongo container using only the base mongo:3.6.4 official docker image and deployed it to my OpenShift OKD cluster, but cannot connect to this MongoDB instance using a Mongo client from outside the cluster.
I can access the pod at http://mongodb.my.domain and successfully get the "It looks like you are trying to access MongoDB over HTTP on the native driver port." message.
When using the terminal on the pod I can successfully log in using:
mongo "mongodb://mongoadmin:pass#localhost" --authenticationDatabase admin
But when trying to connect from outside OKD, the connection fails.
My client needs to pass through a proxy before it can access the OKD pods, and I do have a .der certificate file, but I am unsure whether this is related to the issue.
Some commands I have tried:
mongo "mongodb://mongoadmin:pass#mongodb.my.domain:80" --authenticationDatabase admin
mongo --ssl "mongodb://mongoadmin:pass#mongodb.my.domain:80" --authenticationDatabase admin
I expected to be able to connect successfully but instead get this error message:
MongoDB shell version v3.4.20
connecting to: mongodb://mongoadmin:pass@mongodb.my.domain:80
2019-05-15T11:32:25.514+0100 I NETWORK [thread1] recv(): message len 1347703880 is invalid. Min 16 Max: 48000000
2019-05-15T11:32:25.514+0100 E QUERY [thread1] Error: network error while attempting to run command 'isMaster' on host 'mongodb.my.domain:80' :
connect@src/mongo/shell/mongo.js:240:13
@(connect):1:6
exception: connect failed
I am unsure if it is an issue with how I am using my MongoDB client or potentially some proxy settings on my OKD cluster. Any help would be appreciated.
The problem here is that external OpenShift routes aren't great at handling database connections. When you attempt to connect to the Mongo pod via the route, the route accepts the connection and forwards it to the Mongo service. I believe this forwarding wraps the connection in an HTTP wrapper, which Mongo doesn't like to handle. The OKD documentation highlights that path-based route traffic should be HTTP-based, which is why the connection fails.
You can see evidence of this when trying to connect to a MongoDB database and it returns "It looks like you are trying to access MongoDB over HTTP on the native driver port." to the browser. The user relief.malone explains this and has proposed a couple of solutions / workarounds in their answer to this question.
To add to relief.malone's answer, I would suggest that you port-forward from the MongoDB pod to your local machine for development/debugging. In production, you could deploy an application to OKD that references the MongoDB service via its internal DNS name, which will look something like this: mongodb.project_namespace.svc:27017. This way you will avoid the route interfering with the connection.
The OpenShift OKD documentation on port forwarding isn't that informative, but since oc runs the kubectl command under the hood, you can read this Kubernetes guide to get some more information.
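A rough sketch of the port-forward approach (the pod name is a placeholder; the credentials are the ones from the question):
# Forward local port 27017 to the MongoDB pod running inside the cluster.
oc port-forward mongodb-1-abcde 27017:27017
# In a second terminal, connect through the forwarded port.
mongo "mongodb://mongoadmin:pass@localhost:27017" --authenticationDatabase admin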
I'm making a connection between a Google Compute Engine instance and a Google Cloud SQL instance, using the Cloud SQL Proxy.
Using this tutorial, I have managed to establish a connection by running this command:
./cloud_sql_proxy -instances=<INSTANCE_CONNECTION_NAME>=tcp:3306
However, when I quit the terminal instance I used to enter the above command, the connection is lost.
How can I keep the connection alive throughout?
If you want the process of cloud_sql_proxy to run as long as the Google Compute Engine (GCE) instance is running, just make the process run in the background.
For that you just add the '&' character at the end of your command, so it would go like this:
./cloud_sql_proxy -instances=<INSTANCE_CONNECTION_NAME>=tcp:3306&
This way, as long as you don't stop the GCE instance, you can SSH into it and connect to your Cloud SQL instance (with INSTANCE_CONNECTION_NAME) through the Cloud SQL Proxy.
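One caveat not covered above: a process started with a bare '&' can still receive SIGHUP when the SSH session that launched it ends. A common workaround, if you need the proxy to outlive the session, is to detach it with nohup:
# Keeps the proxy running after the SSH session ends; output goes to a log file.
nohup ./cloud_sql_proxy -instances=<INSTANCE_CONNECTION_NAME>=tcp:3306 > cloud_sql_proxy.log 2>&1 &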
I have a distributed MongoDB setup and I'm trying to configure it with Icinga2, using the following link as a reference:
https://admin-docs.com/databases/mongodb/mongodb-administration/monitor-mongodb-using-icinga/
As mine is a distributed setup, Icinga should connect to MongoDB with a hostname parameter, as in:
mongo --host ipaddress
Without this, the Icinga2 dashboard shows the following error for all the MongoDB monitoring services:
CRITICAL - Connection to Mongo server on 127.0.0.1:27017 has failed
How do I configure my Icinga2 setup to use hostname in the command?
Finally got it working. It was pretty simple: I just had to set the value of the mongodb_address variable using the following:
apply Service "Mongodb Connection" {
  check_command = "mongodb"
  command_endpoint = host.vars.client_endpoint
  vars.mongodb_address = "$address$"
  assign where host.vars.client_endpoint && host.vars.os == "MongoOnLinux"
}
Here, $address$ is a built-in variable for the host's IP address.
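For reference, a hypothetical Host object that this apply rule would match might look like the following (the host name, address, and endpoint value are placeholders):
object Host "mongo-node-1" {
  address = "10.0.0.5"
  vars.os = "MongoOnLinux"
  vars.client_endpoint = "mongo-node-1"
}
With such a definition, $address$ resolves to 10.0.0.5, so the mongodb check connects to that node instead of 127.0.0.1.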