How to run vowpal wabbit in daemon mode

I want to train vowpal wabbit in daemon mode. I found the Stack Overflow question Read data from memory in Vowpal Wabbit?, but I did not understand how to specify the model file name. What I am doing is running
vw --save_resume -f ob/e/nsefut/VWDaemon/model.vw --quiet --daemon --port 26542
and then sending examples. What I did get from the link is that I have to send tags starting with "save" to make vw understand that it's training data. So I sent it as
echo '2 save| b:1.0 c:2.8 ' | netcat localhost 26542
But I can't locate the model file. It would be really great if there were a tutorial for this.
edit:
Also, in between training in daemon mode, I want to be able to see the coefficients learned up to that point.

You must use echo 'save' | netcat localhost 26542 to instruct vw to dump the current regressor coefficients into the model file. As for obtaining the coefficient values, please refer to this answer. In short: you can't.
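Pulling the pieces from the question and the answer together, the workflow looks roughly like this (paths and port are the ones used in the question):
# start the training daemon with save_resume and a model file
vw --save_resume -f ob/e/nsefut/VWDaemon/model.vw --quiet --daemon --port 26542
# send training examples (label, then features after the bar)
echo '2 | b:1.0 c:2.8' | netcat localhost 26542
# tell the daemon to dump the current model into the file given with -f
echo 'save' | netcat localhost 26542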


Can gem5 only simulate executable binaries? How to run a full-system gem5 simulation

I am trying to simulate the effect of hardware changes, such as the cache, on application performance. However, what I want to run are arbitrary applications such as NodeJS, a bash shell, Java...
build/X86/gem5.opt \
configs/example/se.py \
--cmd /usr/bin/node \
--options /path/to/my/node.js
(1) Is this the correct way? Or do I have to feed an executable binary?
Using this command, though, I got the error:
fatal: syscall epoll_create1 (#291) unimplemented.
I found similar Q1 Q2
(2) If I did (1) right, how can I fix the errors? There may be more than one unimplemented syscall. The Q2 answer says to try the gem5 full-system model. I have little experience with gem5, so can you give me an example of using the gem5 full-system model to run node, bash, or any other application that is not a standalone binary but is launched from the command line?
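For reference, full-system simulation uses configs/example/fs.py instead of se.py. A rough sketch follows; the kernel and disk-image paths are placeholders you have to supply yourself, and the option names are those of the stock fs.py script, so double-check them against your gem5 version:
build/X86/gem5.opt \
configs/example/fs.py \
--kernel /path/to/vmlinux \
--disk-image /path/to/linux-image.img
# then attach to the simulated console, e.g. with util/term: m5term localhost 3456
and run node, bash, or whatever you need from inside the simulated system's shell.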

How to properly create my own custom items and triggers for Zabbix 4

:)
I have Zabbix 4.4.1 installed on Ubuntu 19.10.
I have a postgresql plugin configured and working properly so it checks my database metrics.
I have a table with a timestamp column named insert_time, and I want to check it for the last inserted row.
If the last inserted row has an insert time older than 5 minutes, I want to produce a warning, and older than 10 minutes, an error.
I'm new to Zabbix. Everything I have done so far came from googling, and I'm not sure it's the right way to go. It probably isn't, because it's not working :)
OK, so the first thing I did was create a bash script at /etc/zabbix/mytools/get-last-insert-time.sh.
I perform the query and send the output to zabbix_sender with the following template:
#!/bin/bash
PGPASSWORD=<PASSWORD> psql -U <USER> <DB> -t -c "<RELEVANT QUERY>" | awk '{$1=$1};1' | tr -d "\n" | xargs -I {} /usr/bin/zabbix_sender -z $ZABBIXSERVER -p $ZABBIXPORT -s $ZABBIXAGENT -k "my.pgsql.cdr.last_insert_time" -o {}
Is there a way to test this step? How can I make sure that zabbix_sender's data actually reaches the server? Is there some kind of... zabbix_sender sniffer of some sort? :)
Next, I created a configuration file at /etc/zabbix/zabbix_agentd.d called get-last-insert-time.conf with the following template:
UserParameter=my_pgsql_cdr_last_insert_time,/etc/zabbix/mytools/get-last-insert-time.sh;echo $?
Here the key is my_pgsql_cdr_last_insert_time, while the key in zabbix_sender is my.pgsql.cdr.last_insert_time. As far as I understand, these should be two different keys.
Why?!
Then I created a template, attached it to the relevant host, and created 2 items for it:
an item for the insert time with the key my.pgsql.cdr.last_insert_time and of type Zabbix Trapper
a Zabbix Agent item with the key my_pgsql_cdr_last_insert_time and Type of information: Text.
Is that the right type of information for a timestamp?
Now in Overview -> Latest data I see:
CDR last insert time with no data
and Run my database trappers insert time, which is... the text is disabled? It's in gray, and there is also no data.
So before I begin to create an alert: what did I do wrong?
Any information regarding this issue would be greatly appreciated.
Update
Thanks Jan Garaj for this valuable information.
I was expecting that creating such a trigger should be easier than what I found on Google; glad to see I was correct.
I edited my bash script to return seconds since epoch. Since the value comes from PostgreSQL it is a float, so I configured the items as float. I do see in Latest data that the items receive the proper values.
I created the triggers and made sure that the warning trigger depends on the critical trigger so they won't both fire at the same time.
For example, I created this trigger: {cdrs:pgsql.cdr.last_insert_time.fuzzytime(300)}=0, so that if the last insert time is more than 5 minutes old it returns a critical error. The problem is that it returns a critical error... always! Even when it shouldn't. I couldn't find a way to debug this. So besides actually getting the triggers to work properly, everything else is well configured.
Any ideas?
Update 2
When I configured the script to return a timestamp, I converted it to a different timezone instead of leaving it as it was, which meant the value was effectively compared against the current time plus 2 hours in the future :)
I found that out by going to Latest data, checking the timestamp and converting it to an actual time. So everything works now, thanks a lot!
It looks overcomplicated, because you are mixing the sender approach with the agent approach. A simpler approach is agent only:
UserParameter=pgsql.cdr.last_insert_time,/etc/zabbix/mytools/get-last-insert-time.sh
The script /etc/zabbix/mytools/get-last-insert-time.sh should return only the Unix timestamp of the last insert, e.g. 1574111464 (no newline, and don't use zabbix_sender in the script). Keep in mind that the Zabbix agent usually runs as the zabbix user, so you need to configure proper script (exec) permissions and, if needed, environment variables.
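A minimal sketch of such a script, assuming the insert_time column from the question (credentials, database and table names are placeholders):
#!/bin/bash
# print only the epoch of the newest insert_time, with no trailing newline
PGPASSWORD=<PASSWORD> psql -U <USER> -d <DB> -t -A -c \
  "SELECT EXTRACT(EPOCH FROM MAX(insert_time))::bigint FROM <TABLE>;" | tr -d '[:space:]'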
Test it with zabbix_get from the Zabbix server, e.g.:
zabbix_get -s <HOST IP> -p 10050 -k "pgsql.cdr.last_insert_time"
For any issue on the agent side: increase the agent log level and watch the agent logs.
When you have the agent part sorted, create a template with the item key pgsql.cdr.last_insert_time and type Numeric (unsigned). The trigger can use the fuzzytime(60) function.
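For example, with the host name cdrs from the question's update and the 5/10-minute thresholds the question asks for, a warning/critical pair could look like:
Warning (nothing inserted for more than 5 minutes): {cdrs:pgsql.cdr.last_insert_time.fuzzytime(300)}=0
Critical (nothing inserted for more than 10 minutes): {cdrs:pgsql.cdr.last_insert_time.fuzzytime(600)}=0
with the warning trigger made dependent on the critical one so only one of them fires at a time.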

How to load fish configuration from a remote repository?

I have a zillion machines in different places (home network, cloud, ...) and I use fish on each of them. The problem is that I have to synchronize their configuration every time I change something in there.
Is there a way to load the configuration from a remote repository? (= a place where it would be stored, not necessarily git but ideally I would manage them in GitHub). In such a case I would just have a one liner everywhere.
I do not care too much about startup time; loading the config each time would be acceptable.
I cannot push the configuration to the machines (via Ansible for instance) - not all of them are reachable directly - but all of them can reach the Internet.
There are two parts to your question. Part one is not specific to fish. For systems I use on a regular basis I use Dropbox. I put my ~/.config/fish directory in a Dropbox directory and symlink to it. For machines I use infrequently, such as VMs I use for investigating problems unique to a distro, I use rsync to copy from my main desktop machine. For example,
rsync --verbose --archive --delete -L --exclude 'fishd.*' krader@macpro:.config .
Note the exclusion of the fishd.* pattern. That's part two of your question and is unique to fish. Files in your ~/.config/fish directory matching that pattern are the universal variable storage and are currently unique to each machine. We want to change that -- see https://github.com/fish-shell/fish-shell/issues/1912. The problem is that that file contains the color theme variables. So copying your color theme requires exporting those vars on one machine:
set -U | grep fish_color_
Then do set -U on the new machine for each line of output from the preceding command. Obviously, if you have other universal variables you want synced, you should just do set -U and import all of them.
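A minimal sketch of that import step, assuming the output of the command above was saved to a file called colors.txt (and that each variable holds a single token, as the fish_color_* values usually do):
# re-import the exported fish_color_* universal variables on the new machine
while read -l name value
    set -U $name $value
end < colors.txt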
Disclaimer: I wouldn't choose this solution myself. Using a cloud storage client as Kurtis Rader suggested, or a periodic cron job to pull changes from a git repository (+ symlinks), seems a lot easier and more robust.
On systems where you can't or don't want to sync with your cloud storage, you can download just the configuration file, using curl for example. Some precious I/O time can be saved by utilizing HTTP cache-control mechanisms. With or without cache control, you will still need to open a connection to a remote server each time (or every X runs, or after every Y amount of time), and that already wastes quite some time.
Following is a suggestion for such a fish script, to get you started:
#!/usr/bin/fish
set -l TMP_CONFIG /tmp/shared_config.fish
curl -s -o $TMP_CONFIG -D $TMP_CONFIG.headers \
    -H "If-None-Match: \"$SHARED_CONFIG_ETAG\"" \
    https://raw.githubusercontent.com/woj/dotfiles/master/fish/config.fish
if test -s $TMP_CONFIG
    mv $TMP_CONFIG ~/.config/fish/conf.d/shared_config.fish
    set -U SHARED_CONFIG_ETAG (sed -En 's/ETag: "(\w+)"/\1/p' $TMP_CONFIG.headers)
end
Notes:
Warning: Not tested nearly enough
Assumes fish v2.3 or higher.
sed behavior varies from platform to platform.
Replace woj/dotfiles/master/fish/config.fish with the repository, branch and path that apply to your case.
You can run this from a cron job, but if you insist on updating the configuration file on every init, change the script to place the configuration in a path that's not already automatically loaded by fish, e.g.:
mv $TMP_CONFIG ~/.config/fish/shared_config.fish
and in your config.fish run this whole script file, followed by a
source ~/.config/fish/shared_config.fish

How to view a log of all writes to MongoDB

I'm able to see queries in MongoDB, but I've tried without success to see what writes are being performed on a MongoDB database.
My application code doesn't have any write commands in it. Yet, when I load test my app, I'm seeing a whole bunch of writes in mongostat. I'm not sure where they're coming from.
Aside from logging writes (which I'm unable to do), are there any other methods that I can use to determine where those writes are coming from?
You have a few options that I'm aware of:
a) If you suspect that the writes are going to a particular database, you can set the profiling level to 2 to log all queries
use [database name]
db.setProfilingLevel(2)
...
// disable when done
db.setProfilingLevel(0)
b) You can start the database with various levels of verbosity using -v
-v [ --verbose ] be more verbose (include multiple times for more
verbosity e.g. -vvvvv)
c) You can use mongosniff to sniff the port
d) If you're using replication, you could also check the local.oplog.rs collection
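If you go with option (a), a rough sketch of pulling the recorded write operations back out of the profiler collection (field names follow the standard system.profile schema; verify against your MongoDB version):
use [database name]
// the 10 most recent write operations captured by the profiler
db.system.profile.find({ op: { $in: ["insert", "update", "remove"] } }).sort({ ts: -1 }).limit(10)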
I've tried all of jeffl's suggestions, and one of them was able to show me the writes: mongosniff. Thanks jeffl!
Here are the commands that I used to install mongosniff on my Ubuntu 10 box, in case someone else finds this useful:
git clone git://github.com/mongodb/mongo.git
cd mongo
git checkout r2.4.6
apt-get install scons libpcap-dev g++
scons mongosniff
build/linux2/normal/mongo/mongosniff --source NET lo 27017
I made a command-line tool to see the logs, and also to activate the profiler first without the need for other client tools: "mongotail".
To activate the log profiling to level 2:
mongotail databasename -l 2
Then to show the latest 10 queries:
mongotail databasename
Also you can use the tool with the -f option, to see the changes in "real time".
mongotail databasename -f
And finally, filter the result with egrep to find a particular operation, e.g. to show only write operations:
mongotail databasename -f | egrep "(INSERT|UPDATE|REMOVE)"
See documentation and installation instructions in: https://github.com/mrsarm/mongotail

Get DTC Active Transactions (Powershell)

I'm writing some PowerShell scripts so we can monitor our SQL Server instances through Nagios. I need to get the DTC Active Transactions count in PS so I can output it to Nagios. Is this possible? If so, how do I do it?
I'm very much a Windows/PowerShell n00b, so sorry if this is a basic question. Most of the parameters I need seem to be available with 'Get-Counter', but this one doesn't seem to be.
You could query the performance counters directly from Nagios using check_nrpe instead:
$USER1$/check_nrpe -H 192.168.1.123 -p 5666 -c CheckCounter -a "Counter:DTCTx=\Distributed Transaction Coordinator\Active Transactions" ShowAll MaxWarn=100 MaxCrit=150
This assumes that $USER1$ points to your Nagios libexec folder.
You'd need to set MaxWarn and MaxCrit to thresholds that meet your own alerting requirements.
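If you also want to read the value directly in PowerShell, a minimal sketch using the same counter path as above (assuming the MSDTC performance counters exist on the host):
# read the current value of the DTC Active Transactions counter
(Get-Counter '\Distributed Transaction Coordinator\Active Transactions').CounterSamples[0].CookedValue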