Google SQL instance stuck after operation "restore from backup" - google-cloud-sql

After I started restoring the database from an automated backup generated on Mar 13, 2019, the SQL instance has been stuck in this state indefinitely:
"Restoring from backup. This may take a few minutes. While this operation is running, you may continue to view information about the instance."
The database size is very small, less than 1 MB.

For future users who run into a problem like this, here is how to handle it:
If you have a Google Cloud support package, file a support ticket directly with support for the quickest response.
Otherwise, please file a private GCP issue describing the problem, remembering to include the project ID and instance name.
However, Cloud SQL instances are monitored for stuck states like this, so the issue will often resolve itself within a few hours.
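If you want to check on the stuck operation yourself first, here is a rough sketch that shells out to the gcloud CLI from Python (the project and instance names are placeholders, and gcloud must be installed and authenticated):

    import json
    import subprocess

    # Placeholder names; substitute your own project and instance.
    PROJECT = "my-project"
    INSTANCE = "my-instance"

    # List recent operations for the instance; a stuck restore shows up
    # with status RUNNING long after it should have finished.
    result = subprocess.run(
        ["gcloud", "sql", "operations", "list",
         f"--instance={INSTANCE}", f"--project={PROJECT}", "--format=json"],
        capture_output=True, text=True, check=True,
    )
    for op in json.loads(result.stdout):
        print(op.get("name"), op.get("operationType"), op.get("status"))

The operation name printed here is worth including in the support ticket or issue report.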

Related

GCP Cloud SQL Point-in-time recovery

I am working on DR for Cloud SQL. I found that we can enable point-in-time recovery for Cloud SQL and restore the data to a particular point in time in case of any data corruption.
In the documentation, I found that we have to create a clone after enabling point-in-time recovery.
Creating a clone will create a new IP address for the cloned database. Will the admin credentials change when we create a clone of the database, or will they stay the same?
As mentioned in the link:
Point-in-time recovery allows you to recover an instance to a specific point in time. For example, if an operator 'fat finger' error causes a loss of data, you can recover a database to the state it was in just before the error occurred. It's also great for testing your application and diagnosing issues, since you can clone your live data to a testing database. See the point-in-time-recovery docs for more information.
Admin credentials will be the same for both databases. For more information, refer to the link above, where recovery with Google Cloud SQL (PostgreSQL) is explained.
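For reference, a point-in-time clone can be scripted the same way; here is a minimal sketch (the instance names and timestamp are placeholders, and it assumes a gcloud version that supports the --point-in-time flag):

    import subprocess

    # Placeholder names and timestamp. The clone gets a new IP address,
    # but the users and their credentials carry over from the source.
    subprocess.run(
        ["gcloud", "sql", "instances", "clone",
         "source-instance", "cloned-instance",
         "--point-in-time", "2019-03-13T10:00:00Z"],
        check=True,
    )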

How to take backup of Tableau Server Repository (PostgreSQL)

We are using version 2018.3 of Tableau Server. Server stats such as user logins are being logged into the PostgreSQL repository DB, and the same data is cleared regularly after one week.
Is there any API available in Tableau to connect to the DB and back up the data somewhere like HDFS, or anywhere on a Linux server?
Kindly let me know if there is any other way besides an API as well.
Thanks.
You can enable access to the underlying PostgreSQL repository database with the tsm command. Here is a link to the documentation for your (older) version of Tableau:
https://help.tableau.com/v2018.3/server/en-us/cli_data-access.htm#repository-access-enable
It would be good security practice to limit access to only the (whitelisted) machines that need it, to create or use an existing read-only account to access the repository, and ideally to disable access when your admin programs are complete (i.e., enable access, run your query, disable access).
This way you can have any SQL client code you wish query the repository, create a mirror, create reports, run auditing procedures, whatever you like.
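For example, once repository access has been enabled (with something like tsm data-access repository-access enable), a minimal Python sketch using psycopg2 and the documented defaults (port 8060, database "workgroup", the built-in "readonly" account) could look like this; treat the host, password, and table name as assumptions to verify against your own setup:

    import psycopg2

    # Connection details assume the documented repository defaults:
    # port 8060, database "workgroup", and the "readonly" account.
    conn = psycopg2.connect(
        host="tableau-server.example.com",  # your Tableau Server host
        port=8060,
        dbname="workgroup",
        user="readonly",
        password="<password set when enabling access>",
    )

    with conn, conn.cursor() as cur:
        # http_requests is one of the repository tables that is purged
        # periodically; copy it somewhere durable before the cleanup runs.
        cur.execute("SELECT * FROM http_requests LIMIT 10")
        for row in cur.fetchall():
            print(row)
    conn.close()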
Personally, before writing significant custom code, I'd first see if the info you want is already available another way: in one of the built-in admin views, via the REST API, through the public-domain LogShark or TabMon tools, with the Server Management Add-on (for more recent versions of Tableau), or possibly the new Data Catalog.
I know at least one server admin who somehow clones the whole Postgres repository database periodically so he can analyze stats offline. Not sure what approach he uses to clone. So you have several options.

BizTalk Databases Missing, Not sure what to do

I was testing my code changes, meaning undeploying/redeploying applications in BizTalk, and then all of the BizTalk databases disappeared (BAMArchive, BAMPrimaryImport, BizTalkDTADb, BizTalkMgmtDb, BizTalkMsgBoxDb, BizTalkRuleEngineDb, BTAHL7). This is my test environment; however, I did not have any backups of these databases (yes, I have learned my lesson).
I tried restoring the databases from another test environment and then updating the server names and whatnot within the tables. I tried stopping/deleting some applications in the console, but more errors come up.
I am assuming that the GUIDs/keys of the deployed applications in TESTSERVER1 and TESTSERVER2 are different, and therefore they won't delete properly.
I am currently getting this error: "Schema referenced by Map 'XXXXX' has been deleted. The local, cached version of the BizTalk Server group configuration is out of date. You must refresh the BizTalk Server group configuration before making further changes. (Microsoft.BizTalk.Administration.SnapIn)"
When I try to refresh the BizTalk Group in the console, I get the above error as well as "The application does not exist".
I tried truncating the tables that contain this data, but there are too many references to make it worth the trouble.
I have also tried restoring the SSO key and updating the services (BizTalk, SSO, and a few more). When I try to start the BizTalk service (BizTalk Group: BizTalkServerApplication), it says the service has started and then stopped.
So a few questions:
What should I do? I hope a reinstallation of BizTalk is the last resort.
How did the databases disappear in the first place? The undeployment scripts have nothing to do with the databases, only the applications.
Sorry if the solution is obvious; I am by no means a BizTalk developer, just a stressed junior BI developer on a Friday night.
If you have already lost the BizTalk environments (applications undeployed + DBs lost), the best choice is to reinstall your environment and set up a backup right afterwards. But try to understand the source of the problem first, in the Windows and SQL Server logs.

MongoDB replica set in Azure "Waiting for role to start... Calling OnRoleStart()"

I have a problem trying to implement a mongodb replica set as a worker role instance in Windows Azure. In the Windows Azure portal, one of the instances is shown as busy with the status:
Waiting for role to start... Calling OnRoleStart()
I have checked all the settings and everything seems to be OK. What could the problem be?
Denis Markelov's blog post helped me solve this problem. The solution is mainly his, however I had to take an extra step to get it to work and thought others might find it useful.
Solution from blog:
Windows Azure reuses virtual machines for roles, so after a fresh deployment you can find files on the hard drive that were created during previous sessions. If MongoDB was terminated improperly, there might be a lock file (a "persisted mutex" analogue) because of which MongoDB refuses to start. It is located on the drive with the label "WindowsAzureDrive" (say it is F:), at the path:
F:\data\mongod.lock
In the case of production use, this situation might require recovery procedures, but if you are just in the process of initial setup, it is safe to remove this file, letting MongoDB start again.
I was having this problem and did as suggested; however, I was still seeing the same error. So I took a look at the log file at
C:\Resources\Directory\.MongoDB.WindowsAzure.MongoDBRole.MongodLogDir\mongod.txt
And saw that another file was also giving an error. In order to fix the problem, you also have to delete the file local.ns in the same directory as mongod.lock.
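In script form, the cleanup looks something like this (the drive letter is whatever your WindowsAzureDrive maps to, and deleting these files is only appropriate on non-production data):

    import os

    # Adjust to wherever the WindowsAzureDrive is mounted in your deployment.
    DATA_DIR = r"F:\data"

    # Stale files left behind by an unclean mongod shutdown. Removing them is
    # safe during initial setup; on production data, prefer mongod --repair.
    for name in ("mongod.lock", "local.ns"):
        path = os.path.join(DATA_DIR, name)
        if os.path.exists(path):
            os.remove(path)
            print(f"Removed stale file: {path}")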

Database migrations: manage with build script or automatic on app startup?

I'm in the process of developing a deployment system for a new web app and I'm wondering where the best point in the process to manage database migrations is (the question of how to do the migrations is another problem entirely).
It seems there are two ways to go:
1. Use a migration script that can either be run manually from the command line or as part of the automatic deployment/build process.
2. Run the migrations when the app starts up (I'm using ASP.NET, so this can be done easily enough without causing a long-running user request).
Does anyone have any suggestions/insight/experience with these approaches? Any other suggestions?
I can see why #1 might be more attractive - it gives me complete control over when the DB is updated. However, I quite like #2 as it allows me to quickly iterate between deployments and reduces the manual process. #2 could also be used on my development machine to allow even quicker iterations. Hmm, starting to think having both might be a good thing...
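For what it's worth, option #2 can be as simple as a startup hook that applies any pending, numbered migration scripts. Here is a minimal sketch; the schema_version table and the file naming convention are hypothetical, and sqlite3 stands in for whatever database driver you actually use:

    import sqlite3  # stand-in for your real database driver
    from pathlib import Path

    MIGRATIONS_DIR = Path("migrations")  # e.g. 001_create_users.sql, 002_add_index.sql

    def migrate_on_startup(conn):
        """Apply any migration scripts newer than the recorded schema version."""
        conn.execute(
            "CREATE TABLE IF NOT EXISTS schema_version (version INTEGER NOT NULL)")
        row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
        current = row[0] or 0
        for script in sorted(MIGRATIONS_DIR.glob("*.sql")):
            number = int(script.name.split("_")[0])
            if number > current:
                conn.executescript(script.read_text())
                conn.execute("INSERT INTO schema_version VALUES (?)", (number,))
                conn.commit()

    # Called once when the app starts, before serving any requests.
    migrate_on_startup(sqlite3.connect("app.db"))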
We have a sales-force system with ~100 clients, and we update the database at application startup (true, ours is a desktop application). I like this approach; it's safe and iterative when the starting point is nondeterministic (is the client database new, or only updated to version x.y.z?).
But on the server side I prefer your option #1: we create a SQL script on our virtual machine (based on a copy of the original database) and run that script against the real server.
So IMHO:
Disconnected clients: startup, iterative scripts
Server: script created on a VM based on a copy of the actual, real database
I'm interested in this problem too, and have found some (half-)frameworks such as RikMigrations. After some googling, a good starting place on DB versioning/migration frameworks is the .NET Database Migration Tool Roundup. Not necessarily for the documentation, but the team blogs can be interesting.
I like option #1 better, as it seems much more flexible. Instead of actually performing migrations on each app start, I would verify that the database schema version matches what the code expects and, if not, throw a warning or error about a mismatched database schema.
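A minimal sketch of that check, assuming the schema version is recorded in a schema_version table like the one sketched above (all names here are hypothetical):

    EXPECTED_SCHEMA_VERSION = 42  # the version this build of the code expects

    def assert_schema_version(conn):
        """Fail fast at startup if the database schema does not match the code."""
        row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
        actual = row[0] or 0
        if actual != EXPECTED_SCHEMA_VERSION:
            raise RuntimeError(
                f"Schema mismatch: database is at version {actual}, but the code "
                f"expects {EXPECTED_SCHEMA_VERSION}. Run the migrations first.")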
I'd prefer option #1 for a number of reasons. First, integration tests usually require your DB schema to be up to date, and launching a website just to upgrade the schema is a huge time-waster. Second, you cannot change the database schema while your site is running (say, to add a couple of indexes to speed things up).
As for the production side of things, upgrading your database transactionally as part of an MSI-style installation is much better than attempting to upgrade at each app startup, since otherwise you can end up with desynchronized database and application versions.
And if you're looking for the migration framework, take a look at Wizardby.
If the application ever has to run on a customer's machine, then migrating at startup can prevent a lot of support calls, assuming you can do a seamless migration without user intervention (and I hope you aren't normally running your web app with permission to modify the database).
If the application always runs under your control, automatic migration is less of an issue, but it can still be a good feature, especially if you want to minimize downtime and manual deployment steps.