Why is Heroku telling me "Your app has no databases" when it clearly does?

I know the subject of migrating Heroku databases has lots of documentation, but I have yet to find my answer, and nobody seems to be mentioning the error I'm getting.
I developed my app using the basic/free version of Heroku, where I get my two random dictionary words and a number. I've got a Rails app running in this instance, populated with data. It's what I've used to demo to management.
My company now has paid space on Heroku, including Postgres. I've gotten my application deployed to this new space, including an empty Postgres database (I've run migrations), and now I would like to move my data over from the free/shared space to my paid space.
I believe this is the page of directions I'm supposed to be following:
https://devcenter.heroku.com/articles/migrating-from-shared-database-to-heroku-postgres
But when I get to this step:
heroku pgbackups:capture --expire -a [my_app]
I get the error in my question, "Your app has no databases." I've done the necessary steps, added the pgbackups add-on and so forth. If I execute this command against my newly created paid app (with the empty database), it works fine. But running it for my old/free/shared-db version gets the error.
I get that it does not have a paid database, no. If I go to http://postgres.heroku.com it doesn't even show up. But I've got data living in a database somewhere in Heroku world, and I'd like to get at it. The documentation does lead me to believe that these are the instructions for getting off the 5 MB shared space, which is what I'm on.

I didn't take some corner cases into account in an update I wrote for the client. A later version fixed it, as I think you figured out. Sorry.
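For anyone following the same article, a sketch of the capture-and-restore flow once the client is updated; my-old-app and my-new-app are placeholder app names, and the restore pattern is roughly the one the migration article describes:

# after updating the heroku client to a release containing the pgbackups fix:
heroku pgbackups:capture --expire --app my-old-app
# restore the captured backup into the new app's primary database
heroku pgbackups:restore DATABASE `heroku pgbackups:url --app my-old-app` --app my-new-app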

Related

Error CloudKit Dashboard - There was a problem loading the environment's status

Good day!
In CloudKit Dashboard I get the error:
There was a problem loading the environment’s status
This happens when I select the action "Deploy Schema to Production..." for the "Development" schema.
I have a released application using CloudKit (so there are two working schemas, Development and Production). Before the release of the application, the Development schema in CloudKit Dashboard was deployed to Production (Deploy Schema to Production).
Now I needed to make changes to the schema.
A new field and indexes for it, as well as indexes for an existing field, have been added to the Development schema.
Now I am trying to deploy the schema from Development to Production in CloudKit Dashboard (so that my changes show up in Production), and this error keeps occurring. Is there any other way to update the Production schema, or to fix this error?
There can be a lot of strange errors in the CloudKit dashboard. Here are a few suggestions:
Try again later (and always do a hard refresh when you do). Sometimes the error is temporary.
Try in a different web browser. Support for Chrome has improved lately, but there were times when Safari was the only way to make certain things happen.
Create a new CloudKit container, rebuild your schema, and then try to deploy. I've had certain bugs never go away within a particular container and I've had to start over fresh.
If the issue persists, submit a Feedback to the CloudKit team. They have fixed things within a day or two for me in the past.
Aside from that, your particular error isn't terribly descriptive and it's most likely something on Apple's end.
I found out that the dashboard runs into a client-side timeout of one minute, while the server takes longer. You can kill the timeout with:
// setTimeout ids come from an incrementing counter, so a dummy timeout
// reveals the highest id issued so far; clearing every lower id also
// cancels the dashboard's one-minute timeout.
var id = window.setTimeout(function() {}, 0);
while (id--) {
    window.clearTimeout(id); // does nothing if no timeout with this id is pending
}
For details, see https://stackoverflow.com/a/67862078/

Have you ever experienced connection issues with a Postgres database based on just the db name?

I've been idly bashing away at an issue with Postgres for months now. I have a bit of software (custom in-house stuff) that on 24 out of 25 servers runs a certain process absolutely fine, no issues whatsoever.
On the 25th server, though, the process wouldn't quite complete properly: it would fail at the final hurdle, which was a simple date change.
It had been a back-burner type issue, so I hadn't committed much time to working it out until management started to get antsy; at that point I spent most of yesterday bashing away at it.
Obvious checks were done first:
Postgres version (9.6)
Software version
Windows patches (Server 2019)
GPOs
NTFS permissions
etc
All checked out as matching across every server. We went through the Postgres and in-house software logs at length, and had one of the developers build a standalone executable for the process with a ridiculous amount of logging. Still no dice, no indicators. Procmon and Wireshark captures told the same story: nothing clear at all as to what was going on.
So then we took a backup of the database, loaded it in under a different name for testing, and ran the process, only to find that it now worked fine on the cloned database. This led us to think there was maybe a formatting issue of some kind in the database, the idea being that a backup and restore would shake things around. So we went back to the live database, backed it up again, deleted the DB from Postgres, and restored from the backup.
No dice. Still broken.
Cue some serious confusion. We'd done essentially the same thing when cloning live to test, yet we were still getting the same fault at the end of the process.
After some head scratching and more prodding around in the logs, I hit upon the idea of taking a fresh backup of the live DB, deleting the database, restoring the backup under a different name, pointing the live software install at the newly named live DB, and testing the process again.
It works!
For clarity, the database names are plain alphabetic only: upper case and lower case, no numbers, no symbols, fewer than 15 characters in length.
I'm at a loss as to why it's now working and I'd love to get some input from the community.
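For anyone who wants to try the same workaround, a minimal sketch of the rename-by-restore dance using the standard client tools; the user and database names here are placeholders, not the real ones:

# take a custom-format backup of the live database
pg_dump -Fc -U postgres -d LiveDB -f LiveDB.dump
# remove the misbehaving database and restore it under a new name
dropdb -U postgres LiveDB
createdb -U postgres LiveDBRenamed
pg_restore -U postgres -d LiveDBRenamed LiveDB.dump
# then point the application's connection settings at LiveDBRenamed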

Google Cloud SQL Migration Job stuck on Running

I've got a database on Google Cloud SQL that is used by our application running on Kubernetes in GKE.
The MySQL instance is running 5.6, and I need to update it to 5.7, so I tried using the new migration jobs.
I've set up the connection profile and all the required permissions for the source DB, then followed the instructions to make a continuous migration.
The job says it's running, migrating the ~450 GB database. After about a day, it's still running, the storage used seems to have stopped growing, and the replication delay is at 0. The source database is not currently in use (that's why I'm using it to try this out before doing the same with a more important DB).
According to this, if the dump phase is done, I should be able to promote the instance, but the promote button remains greyed out, and there's no way to check the running state (it only says "running", and I don't see any way to check if it's dumping, on CDC, or anything else).
The documentation seems a bit lacking, and I couldn't find anything by googling around. Has anyone been using this?
In short, my questions are:
Why can't I promote the instance?
And how can I check what phase the migration is in?
Here's a screencap of my job:
link because SO doesn't let me embed images yet
Thanks.
P.S.: the tag that the documentation says should be used on Stack Overflow is google-cloud-database-migration-service, which is too long and Stack Overflow doesn't allow it, so I used google-cloud-sql instead :/
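In case it helps anyone else poking at the same thing, the job can also be inspected from the CLI; a sketch assuming a job named my-mysql-job in us-central1 (both placeholders), and the exact output fields may vary by gcloud version:

# show the job's state and phase (e.g. FULL_DUMP vs CDC)
gcloud database-migration migration-jobs describe my-mysql-job --region=us-central1 --format="yaml(state,phase)"
# once the job is promotable, promotion can also be triggered from the CLI
gcloud database-migration migration-jobs promote my-mysql-job --region=us-central1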
I am seeing an issue like this, but possibly more frustrating. After a week for a 2TB database, storage resets to near-zero and the full dump restarts, without any errors or indication of what happened.

PostgreSQL user password keeps getting cleared

I am having a big problem that is quite difficult to search for.
I have an Ubuntu server, on which I have installed:
GitLab (holds all our projects)
PostgreSQL (independent of the GitLab database; used for a personal project)
Tomcat with a web app (Spring Boot; this uses PostgreSQL)
This server is still for testing; it is used for specific things (I mean, its use and access are limited and controlled).
I am having the following problem:
Very frequently, almost every day, the postgres user on the PostgreSQL server "loses" its password, without anyone changing it manually; it just happens spontaneously. I notice because the application stops responding, and when I access PostgreSQL I see that the postgres user has no password.
I have looked in many places and can't find anything; I really don't know where else to look. If this has happened to you, or you have any information about it, I would be grateful if you could share it.
------More information added----------
I was looking at the postgres logs from before the authentication stopped working, and I see this.
There are times when no one could have been using the Spring Boot server,
--2020-01-17 00:30:21.286
And also the two log entries that appear before that moment. Could it be something that is deleting my password?
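A sketch of how the missing password can be confirmed from the shell, assuming local superuser access (pg_authid is only readable by superusers):

# rolpassword IS NULL means no password is currently stored for the role
psql -U postgres -c "SELECT rolname, rolpassword IS NULL AS no_password FROM pg_authid WHERE rolname = 'postgres';"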
Thank you.
PostgreSQL does not randomly delete its own passwords, and I really doubt Tomcat or GitLab do either. Indeed, they shouldn't even have access to the server as the postgres user or any other superuser, and so couldn't do this even if they wanted to.
It seems likely that there is an intruder in your system. After gaining access, intruders commonly create their own user with their own password; disabling your normal superuser's login is then a common way to try to prevent you from regaining control and kicking them out. Do any users exist that you do not recognize?
The bit of the log file you posted clearly shows someone trying to guess your password, starting at 2:58. You aren't logging IP addresses (%h), so it doesn't show where they are coming from. It doesn't show that they succeeded, but unless you have log_connections = on, it wouldn't show successes.
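A sketch of how to enable the logging mentioned above and audit the existing roles; this assumes PostgreSQL 9.4 or newer (for ALTER SYSTEM) and local superuser access:

# log client IPs (%h) on every line, and log each successful connection
psql -U postgres -c "ALTER SYSTEM SET log_line_prefix = '%m [%p] %h %u@%d ';"
psql -U postgres -c "ALTER SYSTEM SET log_connections = on;"
# both settings take effect on a configuration reload
psql -U postgres -c "SELECT pg_reload_conf();"
# list all roles, to spot any you don't recognize
psql -U postgres -c "SELECT rolname, rolsuper FROM pg_roles ORDER BY rolname;"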

Authentication Fail with MongoDB Compass Community

I've just created a new MongoDB account and I'm now trying to connect to the free cluster I created via the MongoDB Compass Community application, but I'm getting an 'Authentication Fail' error.
This is what I've checked so far:
From my MongoDB Clusters section, I clicked on the Connect (…) button, which gives you various options. From there, I selected 'Connect with MongoDB Compass' and copied the connection string.
This was detected as expected by Compass, and the information was filled in automatically in all the relevant fields. I also filled in the password by copy/pasting it into the relevant field; I'm 100% sure it is correct.
I checked that the username used was indeed set up as an admin and it is.
I checked my Authentication database was correct and it is.
I've checked that my public IP was added to the whitelist and it is. The only thing I've noticed is that when I added my public IP address, it added a /32 at the end. Is that the port?
But I'm not quite sure what else to test for to resolve this problem.
Any suggestions?
Thanks.
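One way to rule Compass itself out is to test the same credentials from the shell; a sketch with a placeholder cluster host and username (newer installs ship mongosh, which takes the same arguments). Incidentally, the /32 is not a port: it's CIDR notation meaning the whitelist entry covers exactly one IP address.

# try the same user against the same cluster outside Compass; it will prompt for the password
mongo "mongodb+srv://cluster0.abcde.mongodb.net/test" --username myAdminUser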
I eventually found out what the problem was after speaking to someone from the MongoDB support team!
Everything was done correctly except for one thing: I was being impatient after changing my cluster user's password. It can take up to 2 minutes for the system to be updated and therefore for Compass to be allowed access.
Once I waited a couple of minutes, I was able to login as expected in Compass.
I still can't quite believe I wasted so much time on such a simple issue but the main thing is that it is resolved.
I did send them some feedback as a lot of things could have been done a lot better:
Highlight it better in their documentation (e.g. in red).
Make the "warning" message displayed on the webpage after updating the user details more obvious. It was right in my face, yet I never spotted it appear or disappear, because once I'd updated the user details on the website I'd swap immediately to Compass to try to log in. By the time I was done, well over 2 minutes would have elapsed and the message would be long gone, so it's not very useful the way it is currently done.
Instead of just saying 'Authentication Fail', which is correct, the message could read differently when the system knows the user is being updated, e.g. 'Authentication Fail - Please try again in a few minutes as we're updating this user's details'. Something like that, anyway.
So, remember to be patient when changing your user's details in MongoDB; if you are, then yes, you will have a database up and running in the cloud in 5 minutes or less! :)