mongodump fails connecting to mongodb after version downgrade - mongodb

Since switching to mongodb-clients 2.6.10, mongodump no longer works. With the previous version, 3.4.7, everything worked fine. It is a dedicated MongoDB database-as-a-service in the CF AppCloud, where nothing has been changed. Unfortunately, it is not possible to use version 3.4.7 again.
Does anyone have an idea why it no longer works?
vcap#host:~$ mongodump -u XXX -p XXX -d XXX --authenticationDatabase XXX -h kubernetes-service-node.service.consul:XXX,kubernetes-service-node.service.consul:XXX,kubernetes-service-node.service.consul:XXX
Result: https://jsfiddle.net/yz1kp68p/

Judging from the error, it probably has nothing to do with the mongodump version. Can you connect to the database at all (i.e. with the mongo shell instead of mongodump)? My guess is that the app either isn't bound (cf bind-service) to the database or hasn't been restaged (cf restage) after being bound - both are necessary to enable firewall access from the app to the database. Also, why can't you use a newer mongodump version anymore? That sounds like what needs to be addressed in the first place.
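For example, a basic connectivity check with the shell could look like this (host, port, and credentials are placeholders, mirroring the XXX values in the question):
mongo kubernetes-service-node.service.consul:XXX/XXX -u XXX -p XXX --authenticationDatabase XXX
If that also hangs or errors, the problem is connectivity or credentials rather than mongodump itself.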

I successfully installed mongo-tools from the Ubuntu artful repository to get a mongodump version that supports the SCRAM-SHA-1 authentication mechanism. The dumper app now works without problems.
Installing mongodb-clients from the artful repository did not work in my case, but mongo-tools did.
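For reference, the switch was roughly as follows (a sketch assuming an Ubuntu system with the artful repositories enabled):
sudo apt-get remove mongodb-clients    # drop the outdated 2.6.x client
sudo apt-get install mongo-tools       # mongodump and friends with SCRAM-SHA-1 support
mongodump --version                    # confirm the newer version is on the PATH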

Related

Tables could not be fetched - Error loading schema content

I open Workbench and connect to a local database on XAMPP, and when I open the connection the schema shows the error message:
"tables could not be fetched"
Run this command in a terminal:
mysql_upgrade -u root -p
Run this command in a terminal:
sudo /opt/lampp/bin/mysql_upgrade
And as per the comment by @jonathan-delean, you might need to run this instead:
sudo /opt/lampp/bin/mysql_upgrade -u root -p
For XAMPP, this worked for me - run this in a terminal:
sudo /Applications/XAMPP/xamppfiles/bin/mysql_upgrade
Disconnect then reconnect to your db.
First, locate the directory where XAMPP is installed.
On Linux you can just type this in a terminal:
whereis xampp
In my case (btw I use Arch, jk) it was located at /opt/lampp/bin. If you're using Windows, you may find it in a different location, such as C:\Program Files\xampp\bin.
Next, locate the file mysql_upgrade and execute it as an administrator or with sudo.
If you're using Linux:
cd /opt/lampp/bin then sudo ./mysql_upgrade
According to the MySQL documentation:
Each time you upgrade MySQL, you should execute mysql_upgrade, which looks for incompatibilities with the upgraded MySQL server: It upgrades the system tables in the mysql schema so that you can take advantage of new privileges or capabilities that might have been added. It upgrades the Performance Schema, INFORMATION_SCHEMA, and sys schema. It examines user schemas.
So I believe mysql_upgrade should resolve the problem. It worked for me before.
More on mysql_upgrade here:
4.4.5 mysql_upgrade — Check and Upgrade MySQL Tables
That's because the latest XAMPP uses MariaDB while MySQL Workbench expects a MySQL database, so they are not fully compatible, which can raise errors like this one. You can try downgrading to one of the previous XAMPP versions.
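A quick way to check which server your XAMPP actually ships (path assumed for a default Linux install):
/opt/lampp/bin/mysql --version
Recent XAMPP builds report MariaDB in that output, while Workbench targets MySQL.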
For macOS users:
sudo /Applications/XAMPP/bin/mysql_upgrade
I created another connection in MySQL Workbench, and the fetching problem was resolved for me.
I had this problem today; the reason was:
Error Code: 1356 View 'test.xyz' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
After dropping that view (actually several views) the error was solved.
I am currently working with MySQL Workbench 8.0.28 and MySQL 8.0.28.
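To hunt down broken views like that, something along these lines can help (the names test and test.xyz are just taken from the error message above):
mysql -u root -p -e "SELECT table_schema, table_name FROM information_schema.views;"
mysql -u root -p -e "SELECT * FROM test.xyz LIMIT 1;"
mysql -u root -p -e "DROP VIEW test.xyz;"
A broken view fails the SELECT with error 1356; drop it once identified.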
For macOS users, run this in a terminal:
sudo /Applications/XAMPP/bin/mysql_upgrade
This worked for me.
See YouTube video: MySQL 8 - The message "Tables Could Not Be Fetched"
https://www.youtube.com/watch?v=phi6o8B7kKI
Either a table or view or function used in code has been dropped; hence the "...could not be fetched".
This worked for me:
sudo mysql_upgrade --force
As @Brittany Layne Rapheal says, that command fixes the issue; it is also recommended to give execute permission to that file.
So you should run this command first:
sudo chmod +x /Applications/XAMPP/xamppfiles/bin/mysql_upgrade
And then this:
sudo /Applications/XAMPP/xamppfiles/bin/mysql_upgrade --force
--force is necessary so that mysql_upgrade runs even if it thinks the tables have already been upgraded.

mongodump hangs when using --uri parameter

I am either misusing mongodump or it has a bug, but I'm not sure which. I typically use mongo connection strings in my applications and scripts, e.g.:
mongo mongodb://username:ps@myhostname/dbname (this works)
The mongodump tool supposedly supports URI strings, but every time I try to use one, it starts and then does nothing:
mongodump --uri mongodb://username:ps@myhostname/dbname (this runs but then stops and does nothing, with no CPU usage)
I've tried using -vvvvv and there is no interesting data shown.
If I do the exact same thing using the "old" parameters, it works, but then I'd have to parse URIs, and that makes me sad:
mongodump --host myhostname --username username --password ps -d dbname (this works)
1) Am I doing this wrong?
2) If this is a bug, where would I file a ticket?
3) Is there a tool that would parse a mongodb:// URI back into pieces so that I can keep using URIs in my automation stack?
$ mongodump --version
mongodump version: r3.6.8
git version: 6bc9ed599c3fa164703346a22bad17e33fa913e4
Go version: go1.8.5
os: linux
arch: amd64
compiler: gc
OpenSSL version: OpenSSL 1.1.0f 25 May 2017
db.version() in a connected shell also returns 3.6.8
I ran into this same issue and, likewise, was quite sad. However, I'm happy again, because I realized you MUST append the following two options to your connection string:
?ssl=true&authSource=admin
Pop those bad boys on your URI and you should be smooth sailing.
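In other words, a URI along these lines (credentials and host are placeholders, as in the question):
mongodump --uri "mongodb://username:ps@myhostname/dbname?ssl=true&authSource=admin"
Note the quotes: in a shell, an unquoted & would background the command.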

Getting empty folder when using mongodump to back up my MongoDB

Basically, I have a problem with using mongodump to back up my MongoDB.
This is the general syntax I use in SSH:
mongodump -d myDatabaseName -o ~/backups/my_backup
This is the resulting message:
Fri Apr 22 20:39:57.304 DATABASE: myDatabaseName to /root/backups/my_backup/myDatabaseName
This simply generates an empty folder with no files in it whatsoever. The actual database is fairly large, so I'm not sure what's going on.
I would also like to add that my mongodump client and my MongoDB server are the same version (2.4.9).
I'm not sure how to go about fixing this. Any help is appreciated.
This is a similar question to Mongodump getting blank folders.
There was no accepted answer as of writing this one. Here is what I did to resolve my issue, and I believe it will help you as well.
The default mongodb-clients deb package on Ubuntu is the issue. I removed it and installed the mongodb-org-tools package from mongodb.com: https://docs.mongodb.com/master/tutorial/install-mongodb-on-ubuntu/?_ga=2.36209632.1945690590.1499275806-1594815486.1499275806
They have other install instructions for your specific OS if you are not on Ubuntu https://www.mongodb.com/download-center?jmp=nav#community
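On Ubuntu, that boils down to something like this (assuming the MongoDB repository from the linked instructions has already been added):
sudo apt-get remove mongodb-clients       # remove the stock Ubuntu client
sudo apt-get install mongodb-org-tools    # mongodump/mongorestore built by MongoDB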
Try adding the port your mongod listens on, as in:
mongodump --port your_number -c the_collection -d the_database
Make sure that you have the exact name of the database. If you spell it wrong, this could happen. To confirm, connect to your mongo database and run show dbs to see the list of database names. Then make sure that your -d <databasename> parameter matches one of the names in that list.
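A quick way to verify both the name and connectivity before dumping (the port and database name are placeholders):
mongo --port 27017 --eval "db.adminCommand('listDatabases')"
mongodump --port 27017 -d myDatabaseName -o ~/backups/my_backup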

Copy staging database from heroku to nitrous development

I've found a number of similar problems - and even added an answer to one similar non-duplicate. But I can't see how to solve this problem. Here's the core problem:
I have a staging server on Heroku. I want to copy the staging server database to development on Nitrous to solve a problem. Nitrous has Postgres 9.2.4, and Heroku has Postgres 9.3.3.
My boss is away on holiday, and I have no authority to upgrade the Heroku staging service to a paid plan in which I can fork (and then use the forked Heroku database as a remote database for development).
I have used heroku pg:push to send development databases to staging in earlier work. No problem. But I cannot use heroku pg:pull - it fails, saying:
pg_dump: server version: 9.3.3; pg_dump version: 9.2.4
pg_dump: aborting because of server version mismatch
I have tried rake db:structure:dump - it fails for the same version-mismatch reasons. I'd vaguely hoped that it used the pg gem and would magically work, ignoring rev levels. Hey, if you're ignorant enough, magic does work, sometimes.
I have a Nitrous box for development because the office firewall blocks, well, pretty much everything but 25, 80 and 443. All the useful ports like 22, 5432, 3000, etc, are blocked. So I develop on Nitrous. It's pretty neat. But it never occurred to me that Nitrous would have an old version of Postgres, and no apparent way to update it. Especially given that Nitrous often emphasises using Heroku.
I've tried using the more basic commands:
pg_dump -h ec2-XX-XXX-XX-XXX.compute-1.amazonaws.com -p 5432 -Fc --no-acl --no-owner --compress 3 -o -U ${DBNAME} > dumpfile.gz
But that fails (heroku pg:pull probably uses this command, under the hood) for the same reasons - version mismatch.
I realise that if I'd known more when I started, I could have requested that Heroku used 9.2. But I have data now, in a 9.3.3 instance, and I want that data, not the data I would have had, if only a time machine was available to me, and I could cope with the trousers of time paradoxes.
Possible solutions? Is there another web IDE that has PG 9.3? Is there a flag that I can't find that lets PG Dump 9.2 work with an up-rev DB? Is there a way to upgrade Nitrous to 9.3? At least for the critical pg_dump command?
Browser-based IDEs' versions of Postgres (as of 2014/08/13):
nitrous - 9.2
koding - 9.1
cloud9: 9.3 (Yay! - Pick me! Pick me!)
I spent another couple of hours and worked out a solution, using a different browser based IDE. Cloud9 offers Postgres 9.3, pre-installed in a new VM.
You'll need to register your Cloud9 ID with Heroku (find the SSH keys in the Cloud9 console and paste them into your SSH keys in Heroku). And you'll need to sign in to Heroku from Cloud9.
Then use pg_dump and pg_restore on Cloud9, using Heroku databases as the source and target.
pg_dump -h ec2-XX-XX-XX-XX.compute.amazonaws.com -p 5432 --no-owner --no-acl -Fc -o -U ${HEROKU_STAGING_DATABASE_USER} ${HEROKU_STAGING_DATABASE_NAME} > staging.dump
pg_restore -h ec2-XX-XX-XX-XX.compute.amazonaws.com -p 5432 --no-owner --no-acl --clean --verbose -d ${HEROKU_DEV_DATABASE_NAME} -U ${HEROKU_DEV_DATABASE_USER} < staging.dump
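Before running those, it may be worth confirming that the Cloud9 client tools really are 9.3, since a version mismatch is the original problem:
pg_dump --version
pg_restore --version
Both should report 9.3.x to match the Heroku server.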
In your dev environment, make sure you update config/database.yml (or whatever it is your web app needs) to use the remote Heroku DB service.
Simples. Eventually.
I ran into precisely this problem, and solved it with blind magic by:
Downloading a recent.dump file from the Heroku postgres dashboard
Moving that file into the Nitrous box (and into the app directory)
Following the instructions here:
https://stackoverflow.com/a/11391586/3850418
pg_restore -O -d MY_APPNAME_DEV recent.dump
I got a bunch of warnings, but it seemed to have worked, at least enough for my dev/testing purposes.

How should I fix this PostgreSQL installation?

I realized that PostgreSQL was already running on my laptop (Mac OS X) before I installed it from the Postgres site. So when I used the installer, I got PostgreSQL and logged in to the postgres user account that was created.
In the terminal I wrote
psql -U postgres
and provided my password. I got logged in, but it said:
WARNING: psql version 9.0, server version 9.1.
Some psql features might not work.
How should I go about fixing this so that I can access the database properly without any issues?
The warning comes from psql, the PostgreSQL interactive terminal. Nothing bad will happen.
As you have two versions of PostgreSQL installed in parallel, you would need two versions of psql. Maybe you even have them on disk. But when you type the command psql, your system will default to one of those, not knowing beforehand which database server version you are going to connect to.
You can type the explicit path to the psql version you want. Find the full path of all variants with this shell command (works with Linux, not tested with Mac OS X):
which -a psql
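Once you know the full paths, you can invoke the matching client directly; the path below is illustrative for a Debian-style layout and will differ on Mac OS X:
/usr/lib/postgresql/9.1/bin/psql -U postgres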
If you did not also install psql version 9.1 along with your PostgreSQL 9.1 server, you will have to install it first, of course.
If you are not going to use PostgreSQL 9.0 any more, you can uninstall it to remove ambiguities.
In Debian you can also set the default of multiple alternatives with:
update-alternatives
But in Debian you also have a wrapper that calls the matching psql dynamically if you specify the database cluster like this:
psql --cluster 9.1/main
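To see which clusters exist in the first place, Debian and Ubuntu also ship pg_lsclusters in the same wrapper package:
pg_lsclusters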
Not sure about Mac OS X.
You have installed the PostgreSQL 9.1 server (server side) and the 9.0 client (client side). Maybe you have installed the 9.1 client too, but it is not on your PATH, so you have to find it - or, if you don't have it, install it.