New Relic gives nice database analyses; however, it seems to track only the web app's own transactions.
I have independently managed servers that query and load data into my Heroku PostgreSQL database. Is there a way I can get diagnostics and analysis of the database activity that includes all connections to it?
New Relic application monitoring will only collect data on database queries that are part of a web transaction or background task being monitored. If you're using one of New Relic's supported languages to query your database, you may be able to track that code as a background task (see https://newrelic.com/docs/features/monitoring-background-processes). If you would like a general monitoring plugin for your PostgreSQL database, you could check out the PostgreSQL plugin for New Relic (created and supported by Boundless): http://newrelic.com/plugins/boundless/109.
You should also try Heroku PG Extras: https://github.com/heroku/heroku-pg-extras. It will give you information about cache hit rates, index usage, long-running queries, and more.
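Once the plugin is installed, a few of its commands look roughly like this (a sketch only; exact command names can vary between plugin versions, and your-app-name is a placeholder):

```bash
# Install the pg-extras plugin into the Heroku CLI
heroku plugins:install heroku-pg-extras

# Ratio of index/table reads served from memory vs. disk
heroku pg:cache-hit --app your-app-name

# Queries with the longest cumulative execution time (relies on pg_stat_statements)
heroku pg:outliers --app your-app-name

# Queries that have been running for a long time
heroku pg:long-running-queries --app your-app-name
```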
Related
I am using the Rundeck Docker container. It ran well for two months and then suddenly crashed. I lost all the data for a project that I had created using the CLI. Is there a way to change the default path for storing all project-related data, including job definitions, resources, etc.?
Out of the box, Rundeck stores project/job data in its internal H2 database. That database is only for testing purposes and will probably fail with a lot of data (storing projects on the filesystem is deprecated right now). The best approach is to use a "real" database like MySQL, PostgreSQL, or Oracle, so that Rundeck stores all project/job data in a robust backend.
Check out these MySQL, PostgreSQL, and Oracle Docker environment examples.
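As a rough sketch of the MySQL case, a docker-compose fragment could look like the following. The environment variable names are the ones documented for the official rundeck/rundeck image; hostnames, credentials, and image tags below are placeholders you'd adapt to your setup:

```yaml
services:
  rundeck:
    image: rundeck/rundeck:4.17.0
    environment:
      # Point Rundeck's dataSource at MySQL instead of the embedded H2 database
      RUNDECK_DATABASE_DRIVER: org.mariadb.jdbc.Driver
      RUNDECK_DATABASE_URL: jdbc:mysql://mysql:3306/rundeck?autoReconnect=true&useSSL=false
      RUNDECK_DATABASE_USERNAME: rundeck
      RUNDECK_DATABASE_PASSWORD: rundeckpassword
  mysql:
    image: mysql:8
    environment:
      MYSQL_DATABASE: rundeck
      MYSQL_USER: rundeck
      MYSQL_PASSWORD: rundeckpassword
      MYSQL_ROOT_PASSWORD: root
```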
Of course, having a backup policy for your instance is also ideal for keeping all of your instance data safe.
I am seeing some slow performance on a couple of my queries that run against my Db2 on Cloud instance. When I had a local Db2, I would run reorg and runstats to see if I could improve performance. Now, with Db2 on Cloud, I believe I can run them using admin_cmd; however, if they are already being run automatically on my database objects there is no point, but I am not sure how to tell.
Yes, Db2 on Cloud does run reorgs and runstats automatically. We do recommend running them manually if you do a lot of data loads, to improve performance.
As you stated, Db2 on Cloud is a managed (as a Service) database offering. But that covers the general administration, not application-specific work. Backup/restore can be done without any application insight, but creating indexes, running runstats, or performing reorgs is application-specific.
Runstats can be invoked using admin_cmd. The same is true for running reorg on tables and indexes.
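For reference, invoking them manually through ADMIN_CMD looks something like this (the schema and table names are placeholders):

```sql
-- Refresh statistics for a table, including distribution stats and detailed index stats
CALL SYSPROC.ADMIN_CMD('RUNSTATS ON TABLE MYSCHEMA.ORDERS WITH DISTRIBUTION AND DETAILED INDEXES ALL');

-- Reorganize the table, then its indexes
CALL SYSPROC.ADMIN_CMD('REORG TABLE MYSCHEMA.ORDERS');
CALL SYSPROC.ADMIN_CMD('REORG INDEXES ALL FOR TABLE MYSCHEMA.ORDERS');
```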
We're encountering issues with using Flyway for database migrations with multiple nodes in parallel, backed by a PostgreSQL database behind PgBouncer with transaction pooling.
The problem is that when multiple nodes start up at the same time, Flyway takes an exclusive lock, but this appears to be a session-level lock, which isn't supported by PgBouncer transaction pooling (as multiple nodes may share the same session). This then stops each node from starting up, because they've locked each other out.
Is there anything we can change or configure in Flyway to support this? We'd prefer not to switch away from transaction pooling if possible, as that's our main motivation for using PgBouncer.
At the moment, Flyway doesn't support PgBouncer, so you're seeing errors because of that lack of support. There are no workarounds from the developers currently. I'd suggest opening an issue on the Community GitHub; that's the best way to get changes in.
As a workaround, we're currently configuring two data sources for our application: one through PgBouncer as normal, and another with a single connection that's used solely for Flyway, bypassing PgBouncer and connecting directly to the PostgreSQL back end.
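A minimal sketch of that idea using Flyway's fluent Java API (Flyway 5+); the hostnames, database, and credentials are placeholders, and the key point is only that the migration data source skips PgBouncer:

```java
import org.flywaydb.core.Flyway;

public class MigrateOnStartup {
    public static void main(String[] args) {
        // Hypothetical direct connection to the PostgreSQL backend, bypassing PgBouncer,
        // so Flyway's session-level locking behaves as intended during migration.
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:postgresql://postgres-primary:5432/appdb", "app", "secret")
                .load();
        flyway.migrate();

        // The application's normal connection pool would still point at PgBouncer,
        // e.g. jdbc:postgresql://pgbouncer:6432/appdb (not shown here).
    }
}
```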
I have a web application that executes queries against an RDS Postgres database. For this application we use trunk-based development, and our developers can and should deploy anything on the master branch directly to production. During the day, when we operate under a low workload, we can't see any performance degradation on the database, but at night (we operate a courier service), when we experience a huge workload, we can have some performance degradation...
My question is: How should I monitor this kind of behaviour?
I don't want to require running a stress test before every deploy to production.
I would like to have a tool that can monitor our database and report something like: "Take care! You have a new query (or a slow query) on your database caused by Pull Request 1234".
If you are on RDS for PostgreSQL 10, or can upgrade to that version, then you can use Performance Insights to monitor your running instance, to see which queries are generating load on your instance, and what wait states those queries are in. You can find more info here: https://aws.amazon.com/rds/performance-insights/
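If the instance was created without it, Performance Insights can typically be switched on with a modify call; a sketch using the AWS CLI, where the instance identifier and retention period are placeholders for your own values:

```bash
# Enable Performance Insights on an existing RDS instance
aws rds modify-db-instance \
    --db-instance-identifier my-postgres-instance \
    --enable-performance-insights \
    --performance-insights-retention-period 7 \
    --apply-immediately
```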
Full disclosure: I am the Product Manager for Amazon Aurora PostgreSQL, which was the first db engine to support Performance Insights.
The simple solution is to use the pg_stat_statements extension. It can show you the queries that consumed the most run time altogether at one glance.
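A minimal sketch, assuming the extension is available on your instance (on RDS it is controlled through the parameter group via shared_preload_libraries; note that on PostgreSQL 13+ the column is named total_exec_time rather than total_time):

```sql
-- One-time setup in the target database
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top 10 statements by total time spent, with call counts and average time per call
SELECT query,
       calls,
       total_time,
       total_time / calls AS avg_time_ms
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;
```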
I have read that you can replicate a Cloud SQL database to MySQL. Instead, I want to replicate from a MySQL database (that the business uses to keep inventory) to Cloud SQL so it can have up-to-date inventory levels for use on a web site.
Is it possible to replicate from MySQL to Cloud SQL? If so, how do I configure that?
This is something that is not yet possible in Cloud SQL.
I'm using DBSync to do it, and it's working fine.
http://dbconvert.com/mysql.php
The Sync version does what you want.
It works well with App Engine and Cloud SQL. You must authorize external connections first.
This is a rather old question, but it might be worth noting that this now seems possible by Configuring External Masters.
The high-level steps are:
Create a dump of the data from the master and upload the file to a storage bucket (a sketch of this step is shown after the list)
Create a master instance in Cloud SQL
Set up a replica of that instance using the external master's IP, username, and password. Also provide the dump file location
Set up additional replicas if needed
Voilà!
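As an illustration of the first step, creating a dump that records the binlog position and copying it to a Cloud Storage bucket might look roughly like this; the host, user, database, and bucket names are placeholders, and the exact dump options Cloud SQL expects are spelled out in the Configuring External Masters documentation:

```bash
# Dump the source database with a consistent snapshot, recording the binlog
# coordinates the Cloud SQL replica will start replicating from
mysqldump --host=onprem-mysql.example.com --user=repl_admin --password \
          --databases inventory \
          --single-transaction --master-data=1 --set-gtid-purged=OFF \
          | gzip > inventory.sql.gz

# Upload the dump to a Cloud Storage bucket so Cloud SQL can import it
gsutil cp inventory.sql.gz gs://my-replication-bucket/inventory.sql.gz
```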