Get DTC Active Transactions (PowerShell)

I'm writing some PowerShell scripts so we can monitor our SQL Server instances through Nagios, and I need to get the DTC Active Transactions count in PowerShell so I can output it to Nagios. Is this possible? If so, how do I do it?
I'm very much a Windows/PowerShell n00b, so sorry if this is a basic question. Most of the counters I need seem to be available with 'Get-Counter', but this one doesn't seem to be.

You could query the performance counters directly from Nagios using check_nrpe instead:
$USER1$/check_nrpe -H 192.168.1.123 -p 5666 -c CheckCounter -a "Counter:DTCTx=\Distributed Transaction Coordinator\Active Transactions" ShowAll MaxWarn=100 MaxCrit=150
This assumes that $USER1$ points to your Nagios libexec folder.
You'd need to set MaxWarn and MaxCrit to thresholds that meet your own alerting requirements.
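If you do want the value from within PowerShell itself, Get-Counter can read the same counter path used in the check_nrpe example above. This is only a minimal sketch; the output format is an assumption, adapt it to whatever your Nagios plugin expects:
# Read the DTC "Active Transactions" counter once (counter path taken from the answer above).
$sample = Get-Counter -Counter '\Distributed Transaction Coordinator\Active Transactions'
$active = [int]$sample.CounterSamples[0].CookedValue
# Emit a simple label=value pair; adjust the output and exit codes for your plugin.
Write-Output "DTCTx=$active"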

How to properly create my own custom items and triggers for Zabbix 4

:)
I have Zabbix 4.4.1 installed on Ubuntu 19.10.
I have a PostgreSQL plugin configured and working properly, so it checks my database metrics.
I have a table with a timestamp column, insert_time, and I want to check it for the last inserted row.
If the last inserted row has an insert time of more than 5 minutes ago it should produce a warning, and more than 10 minutes ago it should produce an error.
I'm new to Zabbix.. all I did so far is from googling, not sure if that's the way to go.. it's probably not, because it's not working :)
OK, so the first thing I did was create a bash script at /etc/zabbix/mytools called get-last-insert-time.sh.
I perform the query and send the output to zabbix_sender with the following template:
#!/bin/bash
PGPASSWORD=<PASSWORD> psql -U <USER> <DB> -t -c "<RELEVANT QUERY>" | awk '{$1=$1};1' | tr -d "\n" | xargs -I {} /usr/bin/zabbix_sender -z $ZABBIXSERVER -p $ZABBIXPORT -s $ZABBIXAGENT -k "my.pgsql.cdr.last_insert_time" -o {}
Is there a way to test this step? How can I make sure that zabbix_sender receives that information? Is there some kind of.. zabbix_sender sniffer of some sort? :)
Next.. I created a configuration file at /etc/zabbix/zabbix_agentd.d called get-last-insert-time.conf with the following template:
UserParameter=my_pgsql_cdr_last_insert_time,/etc/zabbix/mytools/get-last-insert-time.sh;echo $?
Here the key is my_pgsql_cdr_last_insert_time, while the key in zabbix_sender is my.pgsql.cdr.last_insert_time. As far as I understand these should be two different keys.
why?!
then I created a template and attached it to the relevant host and I created 2 items for it:
item for insert time with the key my.pgsql.cdr.last_insert_time and of type Zabbix Trapper
a Zabbix Agent item with the key my_pgsql_cdr_last_insert_time Type of information: text.
Is that the right type of information for a timestamp?
now on Overview -> latest data I see:
CDR last insert time with no data
and Run my database trappers insert time that is... the text is disabled? it's in gray.. and there is also no data.
So before I begin to create an alert: what did I do wrong?
any information regarding this issue would be greatly appreciated.
update
Thanks Jan Garaj for this valuable information.
I was expecting that creating such a trigger should be easier than what I found on Google, glad to see I was correct.
I edited my bash script to return seconds since epoch; since it comes from PostgreSQL it returns a float, so I configured the items as float. I do see in Latest data that the items receive the proper values.
I created triggers, and I made sure that the warning trigger depends on the critical trigger so they won't both appear at the same time.
For example I created this trigger {cdrs:pgsql.cdr.last_insert_time.fuzzytime(300)}=0, so if the last insert time is more than 5 minutes old it should return a critical error. The problem is that it returns a critical error.. always! Even when it shouldn't. I couldn't find a way to debug this. So besides actually getting the triggers to work properly, everything else is well configured.
Any ideas?
update 2
When I configured the script to return a timestamp, I changed it to a different timezone instead of leaving it as it is, which actually compared the data with the current time + 2 hours in the future :)
I found that out while going to Latest data, checking the timestamp and converting it to the actual time. So everything works now, thanks a lot!
It looks overcomplicated, because you are mixing the sender approach with the agent approach. A simpler approach - agent only:
UserParameter=pgsql.cdr.last_insert_time,/etc/zabbix/mytools/get-last-insert-time.sh
The script /etc/zabbix/mytools/get-last-insert-time.sh returns the last insert Unix timestamp only, e.g. 1574111464 (no new line, and don't use zabbix_sender in the script). Keep in mind that the Zabbix agent usually runs as the zabbix user, so you need to configure proper script (exec) permissions and, if needed, environment variables.
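A minimal sketch of such a script, assuming the rows live in a table called cdr (the table name is an assumption; the insert_time column and the connection placeholders come from the question):
#!/bin/bash
# Print only the Unix timestamp of the newest row - no zabbix_sender, no trailing newline.
# <PASSWORD>, <USER> and <DB> are placeholders; the cdr table name is illustrative.
PGPASSWORD=<PASSWORD> psql -U <USER> -d <DB> -A -t -c \
  "SELECT extract(epoch FROM max(insert_time))::bigint FROM cdr" | tr -d '[:space:]'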
Test it with zabbix_get from the Zabbix server, e.g.:
zabbix_get -s <HOST IP> -p 10050 -k "pgsql.cdr.last_insert_time"
For any issue on the agent side: increase the agent log level and watch the agent logs.
When you have the agent part sorted out, create a template with item key pgsql.cdr.last_insert_time and type Numeric (unsigned). The trigger can use the fuzzytime(60) function.

psql client failing to import dump file - the system cannot find the specified file

I'm attempting to import an SQL dump in PgAdmin 4 using the psql client - however, the error message returned is: The system cannot find the file specified.
Here is a screenshot of my psql client (screenshot not reproduced here).
The file films.sql is currently stored on my desktop, but I suspect the default location that the psql client accesses is not my desktop? Is there any way to set the location the client looks in, in order to resolve this?
The SQL file is viewable here: https://github.com/datacamp/courses-intro-to-sql/tree/master/datasets
I simply want to get the database on my local machine so that I don't need to store queries in an online learning platform. It would be best if this database is available locally to query and practice on.
I've attempted to execute the whole SQL file as a query on the films database, but this does not seem to be working either and returns 'Asynchronous query execution/operation underway.
Query returned successfully in 388 msec.' - However, it seems that the asynchronous query never completes when I refresh the database.
Please can someone help?
Just give the path to your file:
psql -d my_database -f /path/to/the/file.sql
psql -d my_database -f C:/path/to/the/file.sql
Depending on whether you are on a unix/linux machine or Windows.
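For example, with the dump on a Windows desktop as in the question (the user name in the path and the -U role are placeholders, not details from the question):
psql -U postgres -d films -f "C:/Users/<your-user>/Desktop/films.sql"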
Oh, and if you aren't familiar with file paths you may want to take a step back and become more familiar with general computer terminology before diving into a RDBMS. Your learning will be much easier if you have a solid foundation to build upon.
I suspect this question might be moot for the asker at this point, but for anyone else stumbling upon it like I did: the interactive connection info prompts are provided by a batch script (in Windows, I'd guess there's an analogous shell script for Unix) called runpsql.bat, which then just passes your inputs as commandline arguments to the psql.exe executable. I was getting this error because I had migrated my Postgres installation and the batch script was calling a nonexistent path for psql.exe, hence The system cannot find the file specified. I edited runpsql.bat to point to the correct location of psql.exe and that resolved the issue. So for OP, I would look into PgAdmin4 and see where it's (presumably) calling runpsql.bat, then make sure that that calls psql.exe with the correct path.

How to view a log of all writes to MongoDB

I'm able to see queries in MongoDB, but so far I haven't been able to see what writes are being performed on a MongoDB database.
My application code doesn't have any write commands in it. Yet, when I load test my app, I'm seeing a whole bunch of writes in mongostat. I'm not sure where they're coming from.
Aside from logging writes (which I'm unable to do), are there any other methods that I can use to determine where those writes are coming from?
You have a few options that I'm aware of:
a) If you suspect that the writes are going to a particular database, you can set the profiling level to 2 to log all operations (see the system.profile sketch after this list):
use [database name]
db.setProfilingLevel(2)
...
// disable when done
db.setProfilingLevel(0)
b) You can start the database with various levels of verbosity using -v
-v [ --verbose ] be more verbose (include multiple times for more
verbosity e.g. -vvvvv)
c) You can use mongosniff to sniff the port
d) If you're using replication, you could also check the local.oplog.rs collection
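Once profiling is enabled as in (a), the profiled operations land in that database's system.profile collection. A hedged sketch of pulling out just the write operations in the mongo shell:
use [database name]
// Show the 10 most recent write operations recorded by the profiler.
db.system.profile.find({ op: { $in: ["insert", "update", "remove"] } }).sort({ ts: -1 }).limit(10).pretty()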
I've tried all of jeffl's suggestions, and one of them was able to show me the writes: mongosniff. Thanks jeffl!
Here are the commands that I used to install mongosniff on my Ubuntu 10 box, in case someone else finds this useful:
git clone git://github.com/mongodb/mongo.git
cd mongo
git checkout r2.4.6
apt-get install scons libpcap-dev g++
scons mongosniff
build/linux2/normal/mongo/mongosniff --source NET lo 27017
I made a command line tool to see the logs, and also to activate the profiler first without needing other client tools: "mongotail".
To activate the log profiling to level 2:
mongotail databasename -l 2
Then to show the latest 10 queries:
mongotail databasename
Also you can use the tool with the -f option, to see the changes in "real time".
mongotail databasename -f
And finally, filter the result with egrep to find a particular operation, e.g. to show only write operations:
mongotail databasename -f | egrep "(INSERT|UPDATE|REMOVE)"
See documentation and installation instructions in: https://github.com/mrsarm/mongotail

Remotely restarting services on several servers

I have around 1000 servers on which I need to restart the SNMP service. Is there an easy method to do this via a script or a batch file?
Do you have any sort of collection of the IPs and the root users and passwords (or SSH keys)?
If so, you could use a for loop to cycle through them (the implementation depends on the way they're stored), select the username and password with regular expression filtering or by field, and use expect to provide the password; a simple key-based sketch is shown below.
If you don't have a collection like that, it seems that you'll have to build a database of them, and it may just be easier to do it manually, but it may be worth creating the database anyway in case you ever need to do this again.
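A minimal sketch of that loop, assuming a plain-text host list (hosts.txt, one address per line), key-based SSH as root, and a systemd-managed service named snmpd - all of these names are assumptions, not details from the question:
#!/bin/bash
# Restart snmpd on every host listed in hosts.txt (one IP or hostname per line).
# -n keeps ssh from swallowing the rest of the host list on stdin.
while read -r host; do
  ssh -n -o ConnectTimeout=10 "root@${host}" 'systemctl restart snmpd' \
    || echo "failed: ${host}" >&2
done < hosts.txt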
You should take a look at the Ansible provisioning tool.
The steps should be something like this:
Install Ansible: sudo apt-get install ansible (on ubuntu)
Define your server groups at /etc/ansible/hosts
[snmpservers]
myhostnames[01:10000].example.com
Restart the service on all servers
ansible snmpservers -m service -a "name=snmp state=restarted"

App to monitor PostgreSQL queries in real time?

I'd like to monitor the queries getting sent to my database from an application. To that end, I've found pg_stat_activity, but more often than not, the rows which are returned read "<IDLE> in transaction". I'm either doing something wrong, am not fast enough to see the queries come through, am confused, or all of the above!
Can someone recommend the most idiot-proof way to monitor queries running against PostgreSQL? I'd prefer some sort of easy-to-use UI based solution (example: SQL Server's "Profiler"), but I'm not too choosy.
PgAdmin offers a pretty easy-to-use server status tool
(Tools -> Server Status)
With PostgreSQL 8.4 or higher you can use the contrib module pg_stat_statements to gather query execution statistics of the database server.
Run the SQL script of this contrib module, pg_stat_statements.sql (on Ubuntu it can be found in /usr/share/postgresql/<version>/contrib), in your database and add this sample configuration to your postgresql.conf (requires a restart):
shared_preload_libraries = 'pg_stat_statements'   # the module must be preloaded
custom_variable_classes = 'pg_stat_statements'
pg_stat_statements.max = 1000
pg_stat_statements.track = top # top,all,none
pg_stat_statements.save = off
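Once the module is active, the collected statistics can be read from the pg_stat_statements view; a hedged example query (column names as in the 8.4-9.x versions of the module, newer releases call the timing column total_exec_time):
-- Top 10 statements by total execution time.
SELECT query, calls, total_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;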
To see what queries are executed in real time you might want to just configure the server log to show all queries or queries with a minimum execution time. To do so set the logging configuration parameters log_statement and log_min_duration_statement in your postgresql.conf accordingly.
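For example, in postgresql.conf (the threshold value is illustrative):
# Log every statement ...
log_statement = 'all'                 # none | ddl | mod | all
# ... or only statements that run longer than a threshold, in milliseconds:
log_min_duration_statement = 500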
pg_activity is what we use.
https://github.com/dalibo/pg_activity
It's a great tool with a top-like interface.
You can install and run it on Ubuntu 21.10 with:
sudo apt install pg-activity
pg_activity
If you are using Docker Compose, you can add this line to your docker-compose.yaml file:
command: ["postgres", "-c", "log_statement=all"]
now you can see postgres query logs in docker-compose logs with
docker-compose logs -f
or if you want to see only postgres logs
docker-compose logs -f [postgres-service-name]
https://stackoverflow.com/a/58806511/10053470
I haven't tried it myself unfortunately, but I think that pgFouine can show you some statistics.
Although it seems it does not show queries in real time but rather generates a report afterwards, perhaps it still satisfies your demand?
You can take a look at
http://pgfouine.projects.postgresql.org/