After setting db to some arbitrary value, I am not able to perform any operation in the shell.
Is it a known bug or expected behavior?
> use tutorial
switched to db tutorial
> db
tutorial
> db = 5
5
> db
5
> show dbs
Fri Mar 23 17:18:40 TypeError: db.getMongo is not a function shell/utils.js:1235
>
> use tutorial
Fri Mar 23 17:18:55 TypeError: db.getMongo is not a function shell/utils.js:1167
> db = 'tutorial'
tutorial
> show dbs
Fri Mar 23 17:19:38 TypeError: db.getMongo is not a function shell/utils.js:1235
The Mongo interactive shell is a JavaScript shell, and hence it obeys all the rules of JavaScript. You are overriding the db variable that was initialized during startup.
> a = db
SocialStreams
> db = "Hello"
Hello
> db.help()
Fri Mar 23 12:08:13 TypeError: db.help is not a function (shell):1
> db = a
SocialStreams
> db.help()
DB methods:
db.addUser(username, password[, readOnly=false])
db.auth(username, password)
db.cloneDatabase(fromhost)
...
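If you did not keep a reference to the original db (the a = db trick above), you can usually recover without restarting the shell by opening a fresh connection. A minimal sketch, assuming the default host and port and the tutorial database:
> db = connect("localhost:27017/tutorial")
connecting to: localhost:27017/tutorial
tutorial
> db.help()
DB methods:
...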
Related
I have a question: how can I set up the telegraf.conf file to collect logs from the "zimbra.log" file?
I tried to use this config, but it does not work.
I want to send these logs to Grafana.
One line of "zimbra.log", for example:
Oct 1 10:20:46 webmail postfix/smtp[7677]: BD5BAE9999: to=user@mail.com, relay=mo94.cloud.mail.com[92.97.907.14]:25, delay=0.73, delays=0.09/0.01/0.58/0.19, dsn=2.0.0, status=sent (250 2.0.0 Ok: queued as 4C25fk2pjFz32N5)
And I do not understand exactly how "grok_patterns =" works.
[[inputs.tail]]
  files = ["/var/log/zimbra.log"]
  from_beginning = false
  grok_patterns = ['%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST} %{DATA:program}(?:\[%{POSINT}\])?: %{GREEDYDATA:message}']
  name_override = "zimbra_access_log"
  grok_custom_pattern_files = []
  grok_custom_patterns = '''
    TS_UNIX %{MONTH}%{SPACE}%{MONTHDAY}%{SPACE}%{HOUR}:%{MINUTE}:%{SECOND}
    TS_CUSTOM %{MONTH}%{SPACE}%{MONTHDAY} %{HOUR}:%{MINUTE}:%{SECOND}
  '''
  grok_timezone = "Local"
  data_format = "grok"
I have copied your example line into a log file called Prueba.txt which contains the following lines:
Oct 3 00:52:32 webmail postfix/smtp[7677]: BD5BAE9999: to=user@mail.com, relay=mo94.cloud.mail.com[92.97.907.14]:25, delay=0.73, delays=0.09/0.01/0.58/0.19, dsn=2.0.0, status=sent (250 2.0$
Oct 13 06:25:01 webmail systemd-logind[949]: New session 229478 of user zimbra.
Oct 13 06:25:02 webmail zmconfigd[27437]: Shutting down. Received signal 15
Oct 13 06:25:02 webmail systemd-logind[949]: Removed session c296.
Oct 13 06:25:03 webmail sshd[28005]: Failed password for invalid user julianne from 120.131.2.210 port 10570 ssh2
I have been able to parse the data with this configuration of the tail input plugin:
[[inputs.tail]]
  files = ["Prueba.txt"]
  from_beginning = true
  data_format = "grok"
  grok_patterns = ['%{TIMESTAMP_ZIMBRA} %{GREEDYDATA:source} %{DATA:program}(?:\[%{POSINT}\])?: %{GREEDYDATA:message}']
  grok_custom_patterns = '''
    TIMESTAMP_ZIMBRA (\w{3} \d{1,2} \d{2}:\d{2}:\d{2})
  '''
  name_override = "log_frames"
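Before pointing this at the real log, you can verify the pattern with telegraf's test mode, which parses the input once and prints the resulting metrics to stdout. A sketch, assuming the configuration above is saved as telegraf.conf in the current directory:
telegraf --config telegraf.conf --test
Each line of Prueba.txt that matches the pattern should be printed as a log_frames metric.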
You need to match the input string with regular expressions. For that there are some predefined patterns, such as GREEDYDATA = .*, that you can use to match your input (other examples are NUMBER = (?:%{BASE10NUM}) and BASE16NUM = (?<![0-9A-Fa-f])(?:[+-]?(?:0x)?(?:[0-9A-Fa-f]+))). You can also define your own patterns in grok_custom_patterns. Take a look at this website with some patterns: https://streamsets.com/documentation/datacollector/latest/help/datacollector/UserGuide/Apx-GrokPatterns/GrokPatterns_title.html
In this case I defined a TIMESTAMP_ZIMBRA pattern that matches inputs like Oct 3 00:52:32 and Oct 03 00:52:33 alike.
Here is the metric collected by Prometheus:
# HELP log_frames_delay Telegraf collected metric
# TYPE log_frames_delay untyped
log_frames_delay{delays="0.09/0.01/0.58/0.19",dsn="2.0.0",host="localhost.localdomain",message="BD5BAE9999:",path="Prueba.txt",program="postfix/smtp",relay="mo94.cloud.mail.com[92.97.907.14]:25",source="webmail",status="sent (250 2.0.0 Ok: queued as 4C25fk2pjFz32N5)",to="user@mail.com"} 0.73
P.S.: Ensure that telegraf has access to the log files.
I want to make changes to some EDBs and then find out how the IDBs changed as a result.
The docs say that the query stage comes after the final stage and "has access to the effects of stage FINAL". But if I run
query '_(id) <- ^level(id; _).'
(where level is an IDB) I get
block block_1Z7PZ61E: line 2: error: predicate level is an IDB, therefore deltas for it will not be available until stage final. (code: STAGE_INITIAL_IDB_DELTA)
^level(id; _).
^^^^^^^^^^^^^
1 ERROR
BloxCompiler reported 1 error in block 'block_1Z7PZ61E'
I also tried following the diff predicate example. But this
query '_(id) <- (level\level#prev)(id; _).'
causes a syntax error:
block block_1Z80EFSZ: line 2: error: illegal character 'U+005C' (code: ILLEGAL_CHARACTER)
(level\level#prev)(id; _).
^
block block_1Z80EFSZ: line 2: error: unexpected token 'level' (code: UNEXPECTED_TOKEN)
(level\level#prev)(id; _).
^^^^^
block block_1Z80EFSZ: line 2: error: unexpected token ')' (code: UNEXPECTED_TOKEN)
(level\level#prev)(id; _).
^
block block_1Z80EFSZ: line 2: error: unexpected token ';' (code: UNEXPECTED_TOKEN)
(level\level#prev)(id; _).
^
4 ERRORS
BloxCompiler reported 4 errors in block 'block_1Z80EFSZ'
Unfortunately the diff predicate functionality is not yet released (as of Dec 2016). We've noted that as a documentation bug to fix. The feature is expected in LB 4.4 in Q1 2017.
Here's a simple example for printing or querying the values in the IDB predicate after changes to the EDB. You don't need to use delta logic in this case.
[dan@glycerine Fri Dec 30 09:55:54 ~/Temp]
$ cat foo.logic
foo(x), foo_id(x: y) -> int(y).
edb[x] = y -> foo(x), int(y).
idb[x] = y -> foo(x), int(y).
idb[x] = y <- edb[x] + 10 = y.
[dan@glycerine Fri Dec 30 09:56:33 ~/Temp]
$ lb create --overwrite foo && lb addblock foo --name foo --file foo.logic && lb exec foo '+foo(x), +foo_id[x] = 1, +edb[x] = 5.'
created workspace 'foo'
added block 'foo'
[dan@glycerine Fri Dec 30 09:57:23 ~/Temp]
$ lb print foo edb
[10000000005] 1 5
[dan@glycerine Fri Dec 30 09:57:28 ~/Temp]
$ lb print foo idb
[10000000005] 1 15
[dan@glycerine Fri Dec 30 09:57:32 ~/Temp]
$ lb exec foo '^edb[x] = 15 <- foo_id[x] = 1.'
[dan@glycerine Fri Dec 30 09:57:54 ~/Temp]
$ lb print foo idb
[10000000005] 1 25
[dan@glycerine Fri Dec 30 09:57:55 ~/Temp]
$ lb query foo '_[x] = edb[x].'
/--------------- _ ---------------\
[10000000005] 1 15
\--------------- _ ---------------/
I can think of a couple of options for watching changes more as they happen. In one case you could create a watcher predicate and add values to it like this:
watcher(x, y) -> int(x), int(y).
^watcher(id, y) <- foo_id[x] = id, ^idb[x] = y.
That could be enhanced to have its own entity and timestamp...
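A sketch of how that might look in practice, assuming the declaration and rule above are saved in watcher.logic and added to the workspace from the earlier example (the block name and printed values are illustrative):
$ lb addblock foo --name watch --file watcher.logic
added block 'watch'
$ lb exec foo '^edb[x] = 25 <- foo_id[x] = 1.'
$ lb print foo watcher
1 35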
Another way, if you are using web services to make the EDB updates, is to use
lb web-server monitor -w foo idb
to watch a list of predicates.
I have two simple indexes:
First, 01.conf:
searchd
{
    listen = 9301
    listen = 9401:mysql41
    pid_file = /var/run/sphinxsearch/searchd01.pid
    log = /var/log/sphinxsearch/searchd01.log
    query_log = /var/log/sphinxsearch/query01.log
    binlog_path = /var/lib/sphinxsearch/data/test/01
}
source base
{
    type = mysql
    sql_host = localhost
    sql_db = test
    sql_user = root
    sql_pass = toor
    sql_query_pre = SET NAMES utf8
    sql_attr_uint = group_id
}
source test : base
{
    sql_query = \
        SELECT id, group_id, UNIX_TIMESTAMP(date_added) AS date_added, title, content \
        FROM documents WHERE id % 2 = 0
}
index test
{
    source = test
    path = /var/lib/sphinxsearch/data/test/01
}
The second looks like the first, but with "02" instead of "01" in the filename and inside.
And distributed index in 00.conf:
searchd
{
    listen = 9305
    listen = 9405:mysql41
    pid_file = /var/run/sphinxsearch/searchd00.pid
    log = /var/log/sphinxsearch/searchd00.log
    query_log = /var/log/sphinxsearch/query00.log
    binlog_path = /var/lib/sphinxsearch/data/test
}
index test
{
    type = distributed
    agent = 127.0.0.1:9301:test
    agent = 127.0.0.1:9302:test
}
And I try to use the distributed index:
sudo searchd --config /etc/sphinxsearch/d/00.conf --stop
sudo searchd --config /etc/sphinxsearch/d/01.conf --stop
sudo searchd --config /etc/sphinxsearch/d/02.conf --stop
sudo searchd --config /etc/sphinxsearch/d/01.conf
sudo searchd --config /etc/sphinxsearch/d/02.conf
sudo indexer --all --rotate --config /etc/sphinxsearch/d/01.conf
sudo indexer --all --rotate --config /etc/sphinxsearch/d/02.conf
sudo searchd --config /etc/sphinxsearch/d/00.conf
Unfortunately I obtain the following output:
...
using config file '/etc/sphinxsearch/d/00.conf'...
listening on all interfaces, port=9305
listening on all interfaces, port=9405
precached 0 indexes in 0.000 sec
Why?
And when I try to search something with the distributed index (9305):
no enabled local indexes to search.
The MySQL indexes work perfectly if I use them with ports 9301 and 9302 respectively. But searching in the distributed index returns nothing.
UPDATE
# tail /var/log/sphinxsearch/searchd00.log
[Thu Sep 29 23:43:04.599 2016] [ 2353] binlog: finished replaying /var/lib/sphinxsearch/data/test/binlog.001; 0.0 MB in 0.000 sec
[Thu Sep 29 23:43:04.599 2016] [ 2353] binlog: finished replaying total 4 in 0.000 sec
[Thu Sep 29 23:43:04.599 2016] [ 2353] accepting connections
[Thu Sep 29 23:43:24.336 2016] [ 2353] caught SIGTERM, shutting down
[Thu Sep 29 23:43:24.472 2016] [ 2353] shutdown complete
[Thu Sep 29 23:43:24.473 2016] [ 2352] watchdog: main process 2353 exited cleanly (exit code 0), shutting down
[Thu Sep 29 23:43:24.634 2016] [ 2404] watchdog: main process 2405 forked ok
[Thu Sep 29 23:43:24.635 2016] [ 2405] listening on all interfaces, port=9305
[Thu Sep 29 23:43:24.635 2016] [ 2405] listening on all interfaces, port=9405
[Thu Sep 29 23:43:24.636 2016] [ 2405] accepting connections
UPDATE2
Hmm... It seems the problem is in querying data from Sphinx. Also, I renamed the distributed index to test1. The following code works well:
# mysql -h 127.0.0.1 -P 9405
mysql> select * from test1 where match ('one|two');
+------+----------+
| id | group_id |
+------+----------+
| 1 | 1 |
| 2 | 1 |
+------+----------+
2 rows in set (0,00 sec)
I think the problem was in the old version of sphinxapi.php that I used.
precached 0 indexes in 0.000 sec
Well, that in itself is normal. There are no local indexes to 'precache'. A distributed index has no index files to 'load' or (pre)cache.
... but searchd should still be running at the end of that. I think searchd should start up OK.
Try also checking
/var/log/sphinxsearch/searchd00.log
which might have some more detail.
Although I suppose it's possible Sphinx will not start up without any real indexes (i.e. you can't have JUST a distributed index), so you could just add a fake index to that config, as sketched below.
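For example, a minimal local stub that needs no SQL source is an empty real-time index. A sketch, assuming a Sphinx version with RT index support (the index name, path, and fields are illustrative):
index fake_local
{
    type = rt
    path = /var/lib/sphinxsearch/data/test/fake_local
    rt_field = content
    rt_attr_uint = group_id
}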
Is it possible to read a znode with a space in it through the Zookeeper CLI?
I have 2 values under /regions ('us-west 1' and 'us-east'):
[zk: localhost:2181(CONNECTED) 11] get /regions/
us-west 1 us-east
I can read 'us-east'.
[zk: localhost:2181(CONNECTED) 11] get /regions/us-east
null
cZxid = 0xa
ctime = Tue Jul 10 12:41:49 IST 2012
mZxid = 0xa
mtime = Tue Jul 10 12:41:49 IST 2012
pZxid = 0x1b
cversion = 9
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 0
numChildren = 9
But not 'us-west 1'
[zk: localhost:2181(CONNECTED) 11] get /regions/us-west 1
Node does not exist: /regions/us-west
I tried options like '%20', '\ ', '+', etc. for the space, but nothing worked.
Please try zookeepercli, as follows:
$ zookeepercli --servers srv1,srv2,srv3 -c create "/demo_only 1" "the value"
$ zookeepercli --servers srv1,srv2,srv3 -c get "/demo_only 1"
the value
zookeepercli is free and open source.
Disclaimer: I'm the author of this tool.
It looks like you will not be able to do that from the ZK command-line client. The Zookeeper Java client (which is probably the one you are using) separates commands (e.g. get) from their parameters (e.g. /regions/us-west 1 in your case) by splitting on whitespace, as you can see in the code provided with the client (e.g. zookeeper-3.3.5\src\java\main\org\apache\zookeeper\ZooKeeperMain.java):
public boolean parseCommand(String cmdstring) {
    String[] args = cmdstring.split(" ");
    if (args.length == 0) {
        return false;
    }
    command = args[0];
    cmdArgs = Arrays.asList(args);
    return true;
}
Since they are splitting by a " ", unless you discover a way to overcome that split call by some sort of escaping when calling the get command, you won't be able to retrieve those nodes using this client. The code above interprets the 1 in your get call as if it were the watch parameter, as you can see in the get command syntax:
get path [watch]
I recommend using a different character, like '_', instead of the whitespace in znode names. If that is not an option, you will either need to modify the ZK Java client yourself, or use another client.
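With underscores, for example, the interactive console behaves as expected. A sketch (the path and data are illustrative):
[zk: localhost:2181(CONNECTED) 12] create /regions/us-west_1 "west data"
Created /regions/us-west_1
[zk: localhost:2181(CONNECTED) 13] get /regions/us-west_1
west data
...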
There's an open issue in JIRA for this feature, but a workaround is to pass a command on the command-line instead of using the interactive console:
$ zkCli.sh -c get "/regions/us-west 1"
I installed SNMP on my Linux box in order to monitor load average, disk space, etc.:
yum install net-snmp net-snmp-utils
vim /etc/snmp/snmpd.conf # added "rocommunity public"
/etc/init.d/snmpd start
Walking the SNMP tree returns very few entries, with no load average, disk space, etc.:
snmpwalk -v 1 -c public localhost
SNMPv2-MIB::sysDescr.0 = STRING: Linux XXX 2.6.35.4-rscloud #8 SMP Mon Sep 20 15:54:33 UTC 2010 x86_64
SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (232884) 0:38:48.84
SNMPv2-MIB::sysContact.0 = STRING: Root <root@localhost> (configure /etc/snmp/snmp.local.conf)
SNMPv2-MIB::sysName.0 = STRING: XXX
SNMPv2-MIB::sysLocation.0 = STRING: Unknown (edit /etc/snmp/snmpd.conf)
SNMPv2-MIB::sysORLastChange.0 = Timeticks: (0) 0:00:00.00
SNMPv2-MIB::sysORID.1 = OID: SNMPv2-MIB::snmpMIB
SNMPv2-MIB::sysORID.2 = OID: TCP-MIB::tcpMIB
SNMPv2-MIB::sysORID.3 = OID: IP-MIB::ip
SNMPv2-MIB::sysORID.4 = OID: UDP-MIB::udpMIB
SNMPv2-MIB::sysORID.5 = OID: SNMP-VIEW-BASED-ACM-MIB::vacmBasicGroup
SNMPv2-MIB::sysORID.6 = OID: SNMP-FRAMEWORK-MIB::snmpFrameworkMIBCompliance
SNMPv2-MIB::sysORID.7 = OID: SNMP-MPD-MIB::snmpMPDCompliance
SNMPv2-MIB::sysORID.8 = OID: SNMP-USER-BASED-SM-MIB::usmMIBCompliance
SNMPv2-MIB::sysORDescr.1 = STRING: The MIB module for SNMPv2 entities
SNMPv2-MIB::sysORDescr.2 = STRING: The MIB module for managing TCP implementations
SNMPv2-MIB::sysORDescr.3 = STRING: The MIB module for managing IP and ICMP implementations
SNMPv2-MIB::sysORDescr.4 = STRING: The MIB module for managing UDP implementations
SNMPv2-MIB::sysORDescr.5 = STRING: View-based Access Control Model for SNMP.
SNMPv2-MIB::sysORDescr.6 = STRING: The SNMP Management Architecture MIB.
SNMPv2-MIB::sysORDescr.7 = STRING: The MIB for Message Processing and Dispatching.
SNMPv2-MIB::sysORDescr.8 = STRING: The management information definitions for the SNMP User-based Security Model.
SNMPv2-MIB::sysORUpTime.1 = Timeticks: (0) 0:00:00.00
SNMPv2-MIB::sysORUpTime.2 = Timeticks: (0) 0:00:00.00
SNMPv2-MIB::sysORUpTime.3 = Timeticks: (0) 0:00:00.00
SNMPv2-MIB::sysORUpTime.4 = Timeticks: (0) 0:00:00.00
SNMPv2-MIB::sysORUpTime.5 = Timeticks: (0) 0:00:00.00
SNMPv2-MIB::sysORUpTime.6 = Timeticks: (0) 0:00:00.00
SNMPv2-MIB::sysORUpTime.7 = Timeticks: (0) 0:00:00.00
SNMPv2-MIB::sysORUpTime.8 = Timeticks: (0) 0:00:00.00
Found the problem:
I needed to reset the .conf file completely:
echo "rocommunity public" > /etc/snmp/snmpd.conf
and I needed to specify the exact branch to walk (.1.3.6.1.4.1.2021.10 is the load average table in UCD-SNMP-MIB):
snmpwalk -v 1 -c public localhost .1.3.6.1.4.1.2021.10
For others like me who might search for this:
It is due to the rocommunity being restricted to basic access in the default snmpd config by the line:
rocommunity public default -V systemonly
To give full access to localhost, just uncomment the preceding line, which is:
rocommunity public localhost
Or you can give all computers full access, like the OP did, via:
rocommunity public
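After editing /etc/snmp/snmpd.conf, restart the daemon and re-walk the branch to confirm the change took effect. A sketch, using the load-average subtree from above (UCD-SNMP-MIB::laLoad corresponds to .1.3.6.1.4.1.2021.10.1.3):
/etc/init.d/snmpd restart
snmpwalk -v 1 -c public localhost UCD-SNMP-MIB::laLoad
This should now return the 1, 5, and 15 minute load averages instead of an empty result.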