System metrics not support for postgresql. Skipping processing - postgresql

I am trying to get some sample data from all the tables in my Postgres DB. I am running the profiler job from Airflow and getting the error below:
{system.py:65} INFO - System metrics not support for postgresql. Skipping processing.
Any input would be great. Thank you.
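In the meantime, sample rows can be pulled from every table directly, outside the profiler. A minimal sketch using psycopg2, with placeholder connection details and assuming the tables live in the public schema:

```python
import psycopg2

# Placeholder credentials: replace with your own connection details.
conn = psycopg2.connect(host="localhost", dbname="mydb", user="me", password="secret")

with conn, conn.cursor() as cur:
    # List ordinary tables in the public schema.
    cur.execute(
        "SELECT table_name FROM information_schema.tables "
        "WHERE table_schema = 'public' AND table_type = 'BASE TABLE'"
    )
    tables = [row[0] for row in cur.fetchall()]

    for table in tables:
        # TABLESAMPLE SYSTEM takes a rough percentage of pages; very small
        # tables may return nothing, in which case drop the TABLESAMPLE clause.
        cur.execute(f'SELECT * FROM "{table}" TABLESAMPLE SYSTEM (1) LIMIT 5')
        print(table, cur.fetchall())

conn.close()
```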

Related

Grafana (v. 8.4.1) not connecting to InfluxDB (v.2.1.1) database

I have three Docker containers running. The first runs a Python script that writes data from a sensor to the InfluxDB emon_data bucket, which runs in a second container. This works perfectly, and I can run queries and create dashboards within InfluxDB. The third container runs Grafana. The data source setting in Grafana that establishes the connection to InfluxDB seems to be correct, as it confirms having a connection to the data source - see picture.
However, when I go to set up a dashboard in Grafana, it keeps throwing an error stating that the database cannot be found - see picture.
I have tried to find information on this error but am not finding much, and what I do find seems to apply to much older versions of InfluxDB and Grafana. Any suggestions or pointers on how to resolve this would be much appreciated.
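For what it's worth, one way to double-check that the bucket, token, and org Grafana is given are actually usable is to query the bucket directly with the influxdb-client Python library. A minimal sketch, where the URL, token, and org values are placeholders for your own setup:

```python
from influxdb_client import InfluxDBClient

# Placeholder connection details: adjust URL, token, and org for your setup.
client = InfluxDBClient(url="http://influxdb:8086", token="my-token", org="my-org")

# Run a small Flux query against the bucket Grafana should be pointing at.
query = 'from(bucket: "emon_data") |> range(start: -1h) |> limit(n: 5)'
tables = client.query_api().query(query)

for table in tables:
    for record in table.records:
        print(record.get_time(), record.get_measurement(), record.get_value())

client.close()
```

If a query like this works but Grafana still reports a missing database, the Grafana data source may be configured for the InfluxQL query language, which with InfluxDB 2.x needs a DBRP mapping; the Flux option only needs the organization, token, and default bucket.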

AWS DMS Task failed with error: Error executing source loop; Stream component failed at subtask 0

I want to migrate my Postgres DB hosted in the Citus Cloud service to AWS RDS Aurora Postgres.
I am using the AWS DMS service. I have created a task but am getting the following errors:
Last failure message Last Error Stream Component Fatal error. Task
error notification received from subtask 0, thread 0
[reptask/replicationtask.c:2860] [1020101] Error executing source
loop; Stream component failed at subtask 0, component
st_0_QOIS7XIGJDKNPY6RXMGYRLJQHY2P7IQBWIBA5NQ; Stream component
'st_0_QOIS7XIGJDKNPY6RXMGYRLJQHY2P7IQBWIBA5NQ' terminated
[reptask/replicationtask.c:2868] [1020101] Stop Reason FATAL_ERROR
Error Level FATAL
Frankly speaking, I am not able to understand what is wrong here, so any help is appreciated.
CloudWatch logs:
I changed the migration type to Full load and it worked, so the problem is with ongoing replication, which the Citus Cloud service doesn't support.
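For reference, the same change of migration type can be made from code with boto3; this is just a sketch, with the region and task ARN below as placeholders:

```python
import boto3

dms = boto3.client("dms", region_name="us-east-1")  # region is a placeholder

task_arn = "arn:aws:dms:us-east-1:123456789012:task:EXAMPLE"  # placeholder ARN

# The task must be stopped before it can be modified;
# in practice, wait for it to reach the "stopped" status first.
dms.stop_replication_task(ReplicationTaskArn=task_arn)

# Switch from full-load-and-cdc to a plain full load.
dms.modify_replication_task(
    ReplicationTaskArn=task_arn,
    MigrationType="full-load",
)

dms.start_replication_task(
    ReplicationTaskArn=task_arn,
    StartReplicationTaskType="reload-target",
)
```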
I had a similar error to this using Aurora PostgreSQL v14.5 and AWS DMS. I was using a DMS Full load + CDC job (using pglogical behind the scenes) to migrate from one table to another (on the same system).
The issue was resolved by rolling back my PostgreSQL version from 14.5 to 13.7.

IBM iSeries - getting information about job waits and DB via SQL sys views/tables

On IBM iSeries, is there any way to get information about
job waits
logical database I/O operations
from any of the DB system views or tables?
I mean something similar to using QSYS2.ACTIVE_JOB_INFO for, e.g., CPU information?
I went through the IBM documentation but could not find it there.
Thank you
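To illustrate the kind of query I have in mind, here is a sketch run through pyodbc from Python; the driver name, system name, credentials, and especially the column names (which vary by IBM i release) are assumptions on my part, not something I have confirmed:

```python
import pyodbc

# Placeholder connection details for the IBM i Access ODBC driver.
conn = pyodbc.connect(
    "DRIVER=IBM i Access ODBC Driver;SYSTEM=myibmi;UID=myuser;PWD=mypassword"
)

# DETAILED_INFO => 'ALL' exposes additional elapsed-activity columns;
# the column names below are assumptions and may differ by IBM i release.
sql = """
SELECT JOB_NAME,
       ELAPSED_CPU_PERCENTAGE,
       ELAPSED_TOTAL_DISK_IO_COUNT
FROM TABLE(QSYS2.ACTIVE_JOB_INFO(DETAILED_INFO => 'ALL')) AS X
ORDER BY ELAPSED_TOTAL_DISK_IO_COUNT DESC
FETCH FIRST 10 ROWS ONLY
"""

for row in conn.cursor().execute(sql):
    print(row.JOB_NAME, row.ELAPSED_CPU_PERCENTAGE, row.ELAPSED_TOTAL_DISK_IO_COUNT)

conn.close()
```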

Zombie giant unkillable task blocks Druid at restart

I'm running Apache Druid 0.17, deployed with nohup ./bin/start-nano-quickstart > mylog.log. I am using S3 as the deep storage, I have the Parquet extension enabled, and everything works fine. I could correctly ingest several small Spark-partitioned Parquet datasources from S3. All the remaining configurations are untouched.
As I tried loading a giant datasource to test the performance and resource usage, the task died after a couple of hours because of an OutOfMemoryError (which was expected).
2020-02-07T17:32:20,519 INFO [task-runner-0-priority-0] org.apache.druid.segment.realtime.appenderator.BaseAppenderatorDriver - New segment[arc_2016-09-29T12:00:00.000Z_2016-09-29T13:00:00.000Z_2020-02-07T17:22:45.965Z] for sequenceName[index_parallel_arc_chgindko_2020-02-07T14:59:32.337Z].
Terminating due to java.lang.OutOfMemoryError: GC overhead limit exceeded
Now every time I restart Druid, it starts that giant task, and it is impossible to kill it. Even when the task apparently dies or turns to a waiting status, the CPU usage is about 140% and I cannot submit new tasks to Druid. I tried to access the Derby database manually to find and remove the task, but I was not successful, and that approach is really nasty anyway. I know that I can change the database in the configuration so that next time I get a fresh Druid, but that is not a good solution either, as I would lose all the other datasources. How can I get rid of this long-running zombie task?
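One thing worth trying before touching the metadata store is the Overlord's task shutdown API. A minimal sketch with Python requests, assuming the quickstart router on localhost:8888 (adjust host/port for your deployment) and a placeholder task id:

```python
import requests

# Assumed router/Overlord address for the quickstart; adjust for your deployment.
BASE = "http://localhost:8888"

# List tasks that are currently running, waiting, or pending.
for state in ("runningTasks", "waitingTasks", "pendingTasks"):
    for task in requests.get(f"{BASE}/druid/indexer/v1/{state}").json():
        print(state, task["id"])

# Ask the Overlord to shut a specific task down (task id below is a placeholder;
# use the id printed by the listing above).
task_id = "index_parallel_arc_example_task_id"
resp = requests.post(f"{BASE}/druid/indexer/v1/task/{task_id}/shutdown")
print(resp.status_code, resp.text)
```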

Is there a way to connect Apache airflow with Superset?

Can we connect Superset with Airflow, so that when a task completes, Airflow can trigger Superset to fetch the data from a data source?
Superset is a visualization tool that can display dashboards for multiple database types, while Airflow is an orchestration tool for managing jobs.
What Airflow can do is have a task load data into a database that Superset can then consume.
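To make that concrete, here is a sketch of a DAG that refreshes a table Superset could read as a dataset; the connection id, table names, SQL, and schedule are placeholders, and it assumes the Airflow Postgres provider package is installed:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.postgres.operators.postgres import PostgresOperator

# Placeholder connection id and tables: Superset would be pointed at the same
# database and query sales_summary as a regular dataset.
with DAG(
    dag_id="load_sales_summary",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    refresh_summary = PostgresOperator(
        task_id="refresh_summary",
        postgres_conn_id="analytics_db",
        sql="""
            INSERT INTO sales_summary (day, total)
            SELECT order_date, SUM(amount)
            FROM orders
            WHERE order_date = '{{ ds }}'
            GROUP BY order_date
        """,
    )
```

Superset itself is not triggered by Airflow here; its charts simply query the same table, so the next dashboard refresh picks up whatever the DAG wrote.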