Yesterday a strange behaviour appeared:
under a small load all queries take a long time and then the site returns the error
"Connection to MongoDB failed. Operation now in progress"
In mongostat we see about 10-30 connections (very few, since we
usually work with 400-500).
But when I run "netstat -na | grep 27017" I see a very large number of
TCP connections (> 150):
http://pastebin.com/3ghtwkVd
Why does MongoDB close the connections while the TCP connections stay open?
We don't use persistent connections and we always call Mongo::close()
at the end of our scripts.
The site runs on a cloud system similar to Amazon EC2 (we don't observe any
network issues).
10.1.1.16 - MongoDB
10.1.1.7 - Apache
1Gbit/s between servers
OS: Debian 6 Squeeze
MongoDB: 1.8.2 (we had the same problem with 1.6.6)
Apache 2
PHP 5.3.6
PHP mongo driver 1.1.0 (connection pooling in 1.2.x is very bad for
us)
It looks like your driver (PHP in this case) does not actually close the TCP connection even when you close it with the close method.
If you're using PHP as an Apache module, try a graceful Apache reload so that the PHP module is unloaded and loaded again. That way the destructors are called and the connections are closed.
If you run PHP as a FastCGI app, restart it (kill/exec) and invoke it again.
File a bug if necessary.
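For the mod_php case on a Debian system like the one described above, the graceful reload is typically just (the exact command can differ between setups):
apache2ctl graceful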
I have a Flask app (technically Dash) that uses long-ish DB queries of up to 5 minutes. When I run it with the development server (python app2.py), the app and the DB queries run fine. But when I run the app with Gunicorn, no matter how I tweak the settings, the DB query times out and the app does not run correctly.
I know that, unlike the Flask development server, Gunicorn generally tries to avoid long I/O requests (like a DB query), but no matter how I change the timeout and worker-type settings, nothing seems to fix the problem. I tried increasing the number of workers and changing the worker type to gevent, since my research suggested it is better at handling I/O requests, but there was no change in behavior. Does anyone know what would solve this or where to even look? Below is the config I'm using to run Gunicorn, and also the failure message in the log when the DB query times out and spins endlessly. Also, I am running this on Ubuntu Server, using SQLAlchemy in my Flask app to connect to a PostgreSQL DB. Thanks, and let me know if you need any more details!
gunicorn --bind 127.0.0.1:8050 --workers 4 --worker-class gevent --timeout 600 app2:server
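For reference, the same settings can be expressed as a Gunicorn Python config file (the file name gunicorn.conf.py is just a convention and not part of the original setup):

# gunicorn.conf.py -- equivalent of the command line above
bind = "127.0.0.1:8050"
workers = 4
worker_class = "gevent"   # gevent only helps if the DB driver yields to the event loop
timeout = 600             # seconds of worker silence before the master kills the worker

It would then be run as gunicorn -c gunicorn.conf.py app2:server.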
[29512] [CRITICAL] WORKER TIMEOUT (pid:29536)
[29512] [WARNING] Worker with pid 29536 was terminated due to signal 9
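For context, a minimal sketch of the kind of app being described; the module name app2.py, the Dash 2.x-style imports, the connection string and the query are all assumptions, not details from the original setup:

# app2.py -- hypothetical minimal version of the setup described above
import dash
from dash import html
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:password@dbhost/mydb")  # placeholder DSN

def run_long_query():
    # This is the call that can block for up to ~5 minutes.
    with engine.connect() as conn:
        return conn.execute(text("SELECT * FROM big_table")).fetchall()

app = dash.Dash(__name__)
server = app.server            # the WSGI object that "app2:server" points Gunicorn at

def serve_layout():
    rows = run_long_query()    # runs on each page load, inside a Gunicorn worker
    return html.Div(f"{len(rows)} rows loaded")

app.layout = serve_layout

The timeout question then comes down to how long a single call like run_long_query() is allowed to block the worker that serves the page.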
The Goal
I need to get data from a MongoDB instance, updated every 15 minutes, to build into a Power BI report.
The Gear
I am connected from my Windows machine via SSH to an RHEL server (server a). This server is running the Power BI connector (mongosqld), which is connected to my MongoDB instance running on a different server (server b). I'm also running MySQL on server b. My Power BI connector is installed on server b.
Exactly where I'm at
I am using the steps listed here (and all the associated pages) and have tried everything listed short of writing a config file, as the fact that things are working on mongosqld's end makes me think I don't need it... and if I can't get it working manually, having a config file won't exactly help.
https://docs.mongodb.com/bi-connector/current/connect/powerbi/
Using:
mongosqld --mongo-uri="mongodb://10.xxx.xxx.xx" --auth --mongo-username="ThisGuy" --mongo-password="test"
I successfully map the schema and show an active connection in the command window. I can also access my database from compass using an authorization enabled URL.
When I set up an ODBC connector I use the IP of server a, the user and password from my URL, and port 3307. Nothing shows up in the dropdown, and when I click 'test' I get the following message:
Connection Failed
[MongoDB][ODBC 1.4(w) Driver]Can't connect to MySQL server on '10.xxx.xxx.xxx' (10060)
I have also tried 3306, 27017, and 27015. Just to be safe I also added firewall rules for all traffic on these ports. I've tried this many times, including (just for the hell of it, and I'm kind of new to this stuff) the ip of server b, the ip of my machine, the credentials for MySQL, basically any combination of these things that I can think of.
In Power BI, my ODBC driver shows up, and when it is selected in the dropdown it asks for a username and password. I have tried both the Mongo credentials and the MySQL ones; not sure which I should be using?
Regardless, I get the following error inside Power BI:
Details: "ODBC: ERROR [HY000] [MySQL][ODBC 1.4(w) Driver]Can't connect to MySQL server on '10.xxx.xxx.xxx' (10061)
ERROR [HY000] [MySQL][ODBC 1.4(w) Driver]Can't connect to MySQL server on '10.xxx.xxx.xxx' (10061)"
Thoughts
I don't control either server. Although I have root access, being new to this tech and to the company I am wary of screwing up anything that a co-worker will have to fix. I read in a different SO thread that maybe I need to downgrade the version of MySQL running on the server and that it could fix the problem, but I don't think it will actually help, and I am afraid I might screw up something else on the server if I do this:
The C Authentication plugin was developed against MySQL 5.7.18 Community Edition (64-bit), and tested with MySQL 5.7.18 Community Edition and the latest version of MongoDB Connector for BI. The plugin is not compatible with MySQL Server or Connector/ODBC driver version 8 and later.
https://dba.stackexchange.com/questions/219550/access-denied-when-connecting-to-mongosqld-with-mysql
Maybe the problem is that server b is listening to server a on port 3307, and there is another, unknown port (not mentioned above) that my ODBC driver should actually be talking to? I'm not sure how to test for this when you are a step removed like this.
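One way to check that, without ODBC in the picture, is a plain TCP connect test against the candidate ports, run from the machine the ODBC driver runs on; a rough sketch (the address is a placeholder):

# port_probe.py -- rough check of which candidate ports accept a TCP connection
import socket

HOST = "10.xxx.xxx.xxx"            # placeholder: whatever address the ODBC DSN points at
for port in (3306, 3307, 27015, 27017):
    try:
        with socket.create_connection((HOST, port), timeout=3):
            print(f"{port}: open")
    except OSError as exc:
        print(f"{port}: {exc}")    # a timeout here corresponds roughly to the 10060 above

If 3307 only answers when probed from server b itself, mongosqld may be bound to localhost only (worth checking its --addr setting), which would explain why a remote ODBC connection never gets through.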
So that's it. I'm really stuck and would love some help. I am going to try the downgrade tomorrow if nothing else shakes loose, and will keep this thread updated.
Thank you for reading
I am beginning to explore MongoDB and wish to write a small program/script that uses a TCP socket to create a document in my local MongoDB community edition server. I would like to access MongoDB (which is now locally installed and running on my laptop) via a TCP socket.
I have installed MongoDB 4.2.3 community edition (with Compass.) As far as I can tell, it is running.
I can run mongo.exe shell:
C:\Program Files\MongoDB\Server\4.2\bin>mongo.exe
and the "show dbs" command yields what I would expect given that no documents or other data have been uploaded:
show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
Now, I would like to access mongodb via a TCP socket opened by my own (very small/simple) program so I can experiment with generating commands and observing responses (such as "show dbs").
When I telnet to localhost:27017 (using Windows 10 telnet client) telnet appears to connect to a socket (screen switches from "Connecting to localhost..." to a blank screen after a few seconds.)
As I am a beginner with MongoDB, I would appreciate a pointer as to how I can achieve my goal of using a small program I write to interact with MongoDB server.
Thank you, and I am happy to supply additional details as needed (and of course, would be grateful to a pointer to an example or other learning material that would help me proceed.)
Dave
MongoDB uses a custom wire protocol described here
If you are able to send binary values via telnet, you could probably make that work (I've no intention of trying)
You would probably find it simpler to use one of the pre-made drivers
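As an illustration of what speaking the wire protocol yourself involves, here is a rough sketch of a single OP_MSG round trip (a "ping" command) against a local mongod; it borrows the BSON encoder that ships with the pymongo package rather than hand-rolling BSON, and it is an outline rather than production code:

# wire_ping.py -- rough sketch of one OP_MSG round trip to a local mongod
# Assumes pymongo >= 3.9 is installed, purely for its BSON encoder/decoder.
import socket
import struct
from bson import encode, decode

def read_exact(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("server closed the connection")
        buf += chunk
    return buf

def op_msg(command):
    body = b"\x00" + encode(command)          # section kind 0 + BSON body document
    length = 16 + 4 + len(body)               # header (16 bytes) + flagBits (4) + body
    header = struct.pack("<iiii", length, 1, 0, 2013)  # requestID=1, responseTo=0, opCode=OP_MSG
    return header + struct.pack("<I", 0) + body        # flagBits = 0

sock = socket.create_connection(("localhost", 27017))
sock.sendall(op_msg({"ping": 1, "$db": "admin"}))

(reply_len,) = struct.unpack("<i", read_exact(sock, 4))
reply = read_exact(sock, reply_len - 4)
# Skip the remaining 12 header bytes, the 4 flagBits bytes and the 1 section-kind byte.
print(decode(reply[17:]))                     # expect something like {'ok': 1.0}
sock.close()

The pre-made drivers do essentially this, plus the handshake, error handling and connection pooling, so once the curiosity is satisfied something like pymongo's MongoClient is the easier road.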
I'm setting up a server with PostgreSQL running as a service. I can use nmap to get the current PostgreSQL version:
nmap -p 5432 -sV [IP]
It returns:
PORT STATE SERVICE VERSION
5432/tcp open postgresql PostgreSQL DB 9.3.1
Is there a way to hide the PostgreSQL version from nmap scanning? I've searched, but everything I find is about hiding OS detection.
Thank you.
There's only one answer here: Firewall it.
If you have your Postgres port open, you will be probed. If you can be probed, your service can be disrupted. Most databases are not intended to be open to the public like this; they are not hardened against denial-of-service attacks.
Maintain a very narrow white-list of IPs that are allowed to connect to it, and whenever possible use a VPN or an SSH tunnel to connect to Postgres instead of doing it directly. This has the additional advantage of encrypting all your traffic that would otherwise be plain-text.
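As a sketch of what that looks like in PostgreSQL's own configuration, two independent knobs (the address is a placeholder and file locations vary by distribution):

# postgresql.conf -- listen only where connections are actually needed
listen_addresses = 'localhost'        # or the one private interface the app uses

# pg_hba.conf -- allow only the application host, with password authentication
host    all    all    10.1.2.3/32    md5

Anything not matched by a pg_hba.conf line is rejected, so the effect is a deny-by-default white-list at the database level, in addition to the firewall.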
You have a few options, but first understand how Nmap does it: PostgreSQL database server responds to a malformed handshake with an error message containing the line number in the source code where the error occurred. Nmap has a list of possible PostgreSQL versions and the line number where the error happens in that particular version. The source file in question changes frequently enough that Nmap can usually tell the exact version in use, or at least a range of 2 or 3 version numbers.
So what options do you have?
Do nothing. Why does it matter if someone can tell what version of PostgreSQL you are running? Keep it up to date and implement proper security controls elsewhere and you have nothing to worry about.
Restrict access. Use a firewall to limit access to the database system to only trusted hosts. Configure PostgreSQL to listen only on localhost if network communication is not required. Isolate the system so that unauthorized users can't even talk to it.
Patch the source and rebuild. Change PostgreSQL so that it does not return the source line where the error happened. Or just add a few hundred blank lines to the top of postmaster.c so Nmap's standard fingerprints can't match. But realize you'll have to do this every time there's a new version or security patch.
I have a web app that uses postgresql 9.0 with some plperl functions that call custom libraries of mine. So, when I want to start fresh as if just released, my build process for my development area does basically this:
dumps data and roles from production
drops dev data and roles
restores production data and roles onto dev
restarts postgresql so that any cached versions of my custom libraries are flushed and newly-changed ones will be picked up
applies my dev delta
vacuums
Since switching my app's stack from win32 to CentOS, I now sometimes get an error when my build script tries to apply the delta (it seems to happen only if I haven't run this build process in "a while" -- perhaps at least a day):
psql: could not connect to server: No such file or directory
Is the server running locally and accepting connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
Specifically, what's failing to execute at the shell level is this:
psql --host=$host -U $superuser -p $port -d $db -f "$delta_filename.sql"
If, immediately after seeing this error, I try to connect to the dev database with psql, I can do so with no trouble. Also, if I just re-run the build script, it works fine the second time, every time I've encountered this. Acceptable workaround, but is the underlying cause something to be concerned about?
So far in my attempts to debug this, I inserted a step just after the server restart (which of course reports OK shutdown, OK startup) whereby I check the results of service postgresql-dev status in a loop, waiting 2 seconds between tries if it fails. On my latest build script run, said loop succeeds on the first try--status returns "is running"--but then applying the delta still fails with the above connection error. Again, second try succeeds, as does connecting via psql outside the script just after it fails.
My next debug attempt was to sleep for 5 seconds before the first status check and see what happens. So far this seems to solve the problem.
So why is PostgreSQL not listening on the socket for up to 5 seconds after it starts [OK] and reports a running status, unless it has "recently" been restarted?
The status check only checks whether the process is running. It doesn't check whether you can connect. There can be any amount of time between starting the process and the process being ready to accept connections. It's usually a few seconds, but it could be longer. If you need to cope with this, you need to script it so that it checks whether it is possible to connect before proceeding. You could argue that the CentOS package should do this for you, but it doesn't.
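A sketch of that connect-check, here as a small Python helper using psycopg2 (the DSN is a placeholder; the same idea works as a psql retry loop in the build script itself):

# wait_for_postgres.py -- block until the server actually accepts connections
import time
import psycopg2  # third-party driver, assumed to be available

def wait_for_postgres(dsn, attempts=30, delay=1.0):
    for _ in range(attempts):
        try:
            psycopg2.connect(dsn).close()
            return True                     # the socket exists and the server answered
        except psycopg2.OperationalError:
            time.sleep(delay)               # still starting up; try again shortly
    return False

if __name__ == "__main__":
    ok = wait_for_postgres("host=localhost port=5432 dbname=postgres user=postgres")
    raise SystemExit(0 if ok else 1)        # exit status usable from the build script

Newer PostgreSQL releases also ship a pg_isready utility that performs essentially this check.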
Actually, I think in your case there is no reason to do a full restart. Unless you are loading libraries with shared_preload_libraries, it is sufficient to restart the connection to pick up new libraries.