Heka writing debug output to file every 15 seconds? - hekad
I'm trying to figure out Heka configuration, and I made this test config:
[hekad]
maxprocs = 4
[Dashboard]
type = "DashboardOutput"
address = ":4352"
ticker_interval = 15
[testfile]
type = "LogstreamerInput"
log_directory = "/tmp"
file_match = 'test\.log'
[filewriter]
type = "FileOutput"
path = "/tmp/output.log"
perm = "666"
message_matcher = "TRUE"
encoder = "PayloadEncoder"
[PayloadEncoder]
append_newlines = false
prefix_ts = true
ts_from_message = false
Next I write to the log with

while true; do date >> /tmp/test.log; sleep 1; done

and I would expect /tmp/output.log to be filled with the same info. Yet regardless of whether test.log is written to or not, the output log gets filled with entries like:
[2016/Jan/05:11:01:51 +0200] [2016/Jan/05:11:02:06 +0200] {"encoders":[{"Name":"filewriter-PayloadEncoder"}],"globals":[{"InChanCapacity":{"representation":"count","value":100},"InChanLength":{"representation":"count","value":100},"Name":"inputRecycleChan"},{"InChanCapacity":{"representation":"count","value":100},"InChanLength":{"representation":"count","value":100},"Name":"injectRecycleChan"},{"InChanCapacity":{"representation":"count","value":30},"InChanLength":{"representation":"count","value":0},"Name":"Router","ProcessMessageCount":{"representation":"count","value":260}}],"inputs":[{"Name":"testfile","testfile-bytes":{"representation":"count","value":84419},"testfile-filename":{"representation":"","value":"/tmp/test.log"}}],"outputs":[{"InChanCapacity":{"representation":"count","value":30},"InChanLength":{"representation":"count","value":0},"LeakCount":{"representation":"count","value":0},"MatchAvgDuration":{"representation":"ns","value":1506},"MatchChanCapacity":{"representation":"count","value":30},"MatchChanLength":{"representation":"count","value":0},"Name":"Dashboard"},{"InChanCapacity":{"representation":"count","value":30},"InChanLength":{"representation":"count","value":0},"LeakCount":{"representation":"count","value":0},"MatchAvgDuration":{"representation":"ns","value":550},"MatchChanCapacity":{"representation":"count","value":30},"MatchChanLength":{"representation":"count","value":0},"Name":"filewriter"}],"splitters":[{"Name":"testfile-TokenSplitter-1"}]}
[2016/Jan/05:11:02:06 +0200] [2016/Jan/05:11:02:21 +0200] {"encoders":[{"Name":"filewriter-PayloadEncoder"}],"globals":[{"InChanCapacity":{"representation":"count","value":100},"InChanLength":{"representation":"count","value":100},"Name":"inputRecycleChan"},{"InChanCapacity":{"representation":"count","value":100},"InChanLength":{"representation":"count","value":100},"Name":"injectRecycleChan"},{"InChanCapacity":{"representation":"count","value":30},"InChanLength":{"representation":"count","value":0},"Name":"Router","ProcessMessageCount":{"representation":"count","value":262}}],"inputs":[{"Name":"testfile","testfile-bytes":{"representation":"count","value":84419},"testfile-filename":{"representation":"","value":"/tmp/test.log"}}],"outputs":[{"InChanCapacity":{"representation":"count","value":30},"InChanLength":{"representation":"count","value":0},"LeakCount":{"representation":"count","value":0},"MatchAvgDuration":{"representation":"ns","value":1506},"MatchChanCapacity":{"representation":"count","value":30},"MatchChanLength":{"representation":"count","value":0},"Name":"Dashboard"},{"InChanCapacity":{"representation":"count","value":30},"InChanLength":{"representation":"count","value":0},"LeakCount":{"representation":"count","value":0},"MatchAvgDuration":{"representation":"ns","value":550},"MatchChanCapacity":{"representation":"count","value":30},"MatchChanLength":{"representation":"count","value":0},"Name":"filewriter"}],"splitters":[{"Name":"testfile-TokenSplitter-1"}]}
What is this, why is it written, and how do I disable it?
Update:
I've removed ticker_interval from DashboardOutput, yet the problem persists.
Apparently DashboardOutput overrides ticker_interval from the default of 0 to 5, so to get rid of those lines, ticker_interval = 0 has to be set explicitly in its config.
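For reference, a minimal sketch of the corrected Dashboard section (the only change from the config above is the explicit ticker_interval = 0; that 0 disables the periodic reports is the behavior described here, not something taken from the Heka docs):

[Dashboard]
type = "DashboardOutput"
address = ":4352"
# 0 overrides DashboardOutput's implicit 5-second default, so it stops
# injecting the periodic report messages that filewriter was matching
ticker_interval = 0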
Related
Save job output from SDSF into a PDS using ISPF functions in REXX
We periodically run jobs and we need to save the output into a PDS and then parse the output to extract parts of it to save into another member. It needs to be done by issuing a REXX command using the percent sign and the REXX member name as an SDSF command line. I've attempted to code a REXX exec to do this, but it gets an error when trying to invoke an ISPF service, saying the ISPF environment has not been established. But this is SDSF running under ISPF. My code has this in it (copied from several sources and modified):

parse arg PSDSFPARMS "(" PUSERPARMS
parse var PSDSFPARMS PCURRPNL PPRIMPNL PROWTOKEN PPRIMCMD .
PRIMCMD=x2c(PPRIMCMD)
RC = isfquery()
if RC <> 0 then
  do
    Say "** SDSF environment does not exist, exec ending."
    exit 20
  end
RC = isfcalls("ON")
Address SDSF "ISFGET" PPRIMPNL "TOKEN('"PROWTOKEN"')" ,
  " (" VERBOSE ")"
LRC = RC
if LRC > 0 then call msgrtn "ISFGET"
if LRC <> 0 then Exit 20
JOBNAME = value(JNAME.1)
JOBNBR = value(JOBID.1)
SMPDSN = "SMPE.*.OUTPUT.LISTINGS"
LISTC. = ''
SMPODSNS. = ''
SMPODSNS.0 = 0
$ = outtrap('LISTC.')
MSGVAL = msg('ON')
address TSO "LISTC LVL('"SMPDSN"') ALL"
MSGVAL = msg(MSGVAL)
$ = outtrap('OFF')
do LISTCi = 1 to LISTC.0
  if word(LISTC.LISTCi,1) = 'NONVSAM' then
    do
      parse var LISTC.LISTCi . . DSN
      SMPODSNS.0 = SMPODSNS.0 + 1
      i = SMPODSNS.0
      SMPODSNS.i = DSN
    end
  IX = pos('ENTRY',LISTC.LISTCi)
  if IX <> 0 then
    do
      IX = pos('NOT FOUND',LISTC.LISTCi,IX + 8)
      if IX <> 0 then
        do
          address ISPEXEC "SETMSG MSG(IPLL403E)"
          EXITRC = 16
          leave
        end
    end
end
LISTC. = ''
if EXITRC = 16 then exit 0
address ISPEXEC "TBCREATE SMPDSNS NOWRITE" ,
  "NAMES(TSEL TSMPDSN)"

I execute this code by typing %SMPSAVE next to the spool output line on the "H" SDSF panel, and it runs fine until it gets to this point in the REXX:

114 *-* address ISPEXEC "TBCREATE SMPDSNS NOWRITE" ,
        "NAMES(TSEL TSMPDSN)"
>>> "TBCREATE SMPDSNS NOWRITE NAMES(TSEL TSMPDSN)"
ISPS118S SERVICE NOT INVOKED. A VALID ISPF ENVIRONMENT DOES NOT EXIST.
+++ RC(20) +++

Does anyone know why it says I don't have a valid ISPF environment and how I can get around this? I've done quite a bit in the past with REXX, including writing REXX code to handle line commands, but this is the first time I've tried to use ISPEXEC commands within this code. Thank you, Alan
SphinxSearch - indexer config option problem
I'm using Sphinx search to create indexes and search data in my PostgreSQL database. I have two questions about it.

If I run the command

/usr/bin/indexer --config /etc/sphinxsearch/sphinx.conf --rotate --all

I get this output from 'show tables;':

Index                 Type
dist_title_de         distributed
word_title_de         local
word_titlestemmed_de  local
rt_title_de           rt

But if I run the command

/usr/bin/indexer --config /etc/sphinxsearch/sphinx_another_conf_file.conf --rotate --all

then I get the same output on the terminal, but I don't see the new indexes in 'show tables;'. It seems like the '--config' option of indexer is not working and the only name honored is sphinx.conf. That's problematic, because if I want to reindex I have to change the sphinx.conf file.

Second question: is it possible to 'add' a new index without deleting the old ones? Currently I use Sphinx like this (every day):

Get new data (datasource1, datasource2, ..., datasource8)
Index --rotate --all (index data from 8 datasources)
Search some info on indexes
Write it to db

But now I want something like:

Get new data from datasource1
Index datasource1
Get new data from datasource2
Index datasource2 (without deleting index datasource1)
Search something in index datasource1
...
Get new data from datasource8 (without deleting indexes)
Index datasource8, etc.

By 'without deleting indexes' I mean that now, if I use the command from the top of this post, I 'lose' my indexes and get only the new ones (from sphinx.conf).

My sphinx.conf (only 1 datasource):

source src_title_de
{
    type = pgsql
    sql_host = #######
    sql_user = #######
    sql_pass = #######
    sql_db = #######
    sql_port = 3306 # optional, default is 3306
    sql_query = \
        SELECT id, group_id, (date_extraction::TIMESTAMP) AS date_extraction, title \
        FROM sphinx_test
    sql_ranged_throttle = 0
}

index word_title_de
{
    source = src_title_de
    path = /var/lib/sphinxsearch/data/word_title_de
    docinfo = extern
    dict = keywords
    mlock = 0
    morphology = none
    stopwords = /var/lib/sphinxsearch/data/stopwords.txt
    wordforms = /var/lib/sphinxsearch/data/wordforms_de.txt
    min_word_len = 1
}

index word_titlestemmed_de : word_title_de
{
    path = /var/lib/sphinxsearch/data/word_titlestemmed_de
    morphology = stem_en
}

index dist_title_de
{
    type = distributed
    local = word_title_de
    local = word_titlestemmed_de
    agent = localhost:9313:remote1
    agent = localhost:9314:remote2,remote3
    agent_connect_timeout = 1000
    agent_query_timeout = 3000
}

index rt_title_de
{
    type = rt
    path = /var/lib/sphinxsearch/data/rt_title_de
    rt_field = title
    rt_field = content
    rt_attr_uint = gid
}

indexer
{
    mem_limit = 128M
}

searchd
{
    listen = 9312:sphinx
    listen = 9306:mysql41
    log = /var/log/sphinxsearch/searchd.log
    query_log = /var/log/sphinxsearch/query.log
    read_timeout = 5
    client_timeout = 300
    max_children = 30
    persistent_connections_limit = 30
    pid_file = /var/run/sphinxsearch/searchd.pid
    seamless_rotate = 1
    preopen_indexes = 1
    unlink_old = 1
    mva_updates_pool = 1M
    max_packet_size = 8M
    max_filters = 256
    max_filter_values = 4096
    max_batch_queries = 32
    workers = threads # for RT to work
}

My second file for the 8 datasources is the same as the above, with the 'source src_title_de', 'index word_title_de', 'index word_titlestemmed_de' and 'index rt_title_de' sections copy-pasted for the other countries, and the data table changed in 'sql_query'.
On your first question: the --config option only applies to that indexer run. I.e. the --all causes it to index (or try to index) all the plain indexes mentioned in that file. But when it sends the signal to reload (which is what --rotate does), searchd just reloads its CURRENT config file, NOT the one you told indexer about. To get searchd to use a new config file, you would have to stop searchd and start it again with the new config file. So change sphinx.conf directly, rather than using a 'second' file.

Actually, the second question has the same answer: change sphinx.conf directly, rather than using a 'second' file. I.e. add your new index to sphinx.conf and use indexer to 'build' it. When indexer has finished, it will tell searchd to 'reload', which will cause searchd to load the new config file AND the new index just built.
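A sketch of both workflows in shell form (the index name word_title_fr is hypothetical, and the restart commands assume searchd is run directly rather than via an init script):

# Switching searchd to a different config file requires a restart;
# --rotate alone only makes it re-read its current config.
searchd --config /etc/sphinxsearch/sphinx.conf --stopwait
searchd --config /etc/sphinxsearch/sphinx_another_conf_file.conf

# Day to day: add the new index definition to sphinx.conf itself,
# then build only that index; existing indexes are left untouched.
/usr/bin/indexer --config /etc/sphinxsearch/sphinx.conf --rotate word_title_fr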
Odoo 12: Not enough time limit to finish the backup?
I use auto_backup to back up the production database every day. It was working well until now. Now the backup can't finish: I get half the size of the .zip file and it is impossible to restore it. Normally the backup takes about 15 minutes. I think it's related to the Odoo configuration. Here it is:

workers = 3
longpolling_port = 8072
limit_memory_soft = 2013265920
limit_memory_hard = 2415919104
limit_request = 8192
limit_time_cpu = 600
limit_time_real = 3600
limit_time_real_cron = 3600
proxy_mode = True

Can you help me? I have another question: what does limit_time_real_cron = -1 mean, if limit_time_real_cron = 0 is unlimited?
Try to increase limit_time_cpu.
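For instance, a sketch of the relevant odoo.conf lines (the raised values are assumptions for illustration, not measured requirements; since auto_backup runs as a scheduled action, limit_time_real_cron is likely the other limit worth checking):

# CPU-time budget per worker; dumping and zipping the database is
# CPU-bound, so this is the limit the answer suggests raising
limit_time_cpu = 1800
# wall-clock budgets; the cron variant applies to scheduled actions
limit_time_real = 3600
limit_time_real_cron = 3600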
Storage values using os.stat(filename)
I'm trying to create an EDUCATIONAL PURPOSES ONLY virus. I do not plan on spreading it. Its purpose is to grow a file to the point your storage is full and slow your computer down. It prints the size of the file every 0.001 seconds. With that, I also want to know how fast it is growing the file. The following code doesn't seem to let it run:

class Vstatus():
    def _init_(Status):
        Status.countspeed == True
        Status.active == True
        Status.growingspeed == 0

import time
import os

#Your storage is at risk of over-expansion. Please do not let this file run forever, as your storage will fill continuously.
#This is for educational purposes only.

while Vstatus.Status.countspeed == True:
    f = open('file.txt', 'a')
    f.write('W')
    fsize = os.stat('file.txt')
    Key1 = fsize
    time.sleep(1)
    Key2 = fsize
    Vstatus.Status.growingspeed = (Key2 - Key1)
    Vstatus.Status.countspeed = False

while Vstatus.Status.active == True:
    time.sleep(0.001)
    f = open('file.txt', 'a')
    f.write('W')
    fsize = os.stat('file.txt')
    print('size:' + fsize.st_size.__str__() + ' at a speed of ' + Vstatus.Status.growingspeed + 'bytes per second.')

This is for Educational Purposes ONLY. The main error I keep getting when running the file is:

TypeError: unsupported operand type(s) for -: 'os.stat_result' and 'os.stat_result'

What does this mean? I thought os.stat returned an integer. Can I get a fix on this?
Vstatus.Status.growingspeed = (Key2 - Key1)

You can't subtract os.stat objects. Your code also has some other problems: your loops will run sequentially, meaning that your first loop will try to estimate how quickly the file is being written to without writing anything to the file.

import time  # Imports at the top
import os

class VStatus:
    def __init__(self):  # double underscores around __init__
        self.countspeed = True  # Assignment, not equality test
        self.active = True
        self.growingspeed = 0

status = VStatus()  # Make a VStatus instance

# You need to do the speed estimation and file appending in the same loop
with open('file.txt', 'a+') as f:  # Only open the file once
    start = time.time()  # Get the current time
    starting_size = os.fstat(f.fileno()).st_size
    while status.active:  # Access the attribute of the VStatus instance
        size = os.fstat(f.fileno()).st_size  # Send file descriptor to stat
        f.write('W')  # Writing more than one character at a time will be your biggest speed up
        f.flush()  # make sure the byte is written
        if status.countspeed:
            diff = time.time() - start
            if diff >= 1:  # More than a second has gone by
                status.countspeed = False
                status.growingspeed = (os.fstat(f.fileno()).st_size - starting_size)/diff  # get rate of growth
        else:
            print(f"size: {size} at a speed of {status.growingspeed}")
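As a footnote to the comment about writing more than one character at a time, a minimal sketch of the chunked variant (the 4096-byte chunk size and iteration count are arbitrary assumptions):

# Each write call appends 4096 bytes instead of one, so the file grows
# thousands of times faster for the same per-call overhead.
chunk = 'W' * 4096
with open('file.txt', 'a') as f:
    for _ in range(1000):
        f.write(chunk)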
xDebug session never starts
I've just changed my OS to Windows 7 64-bit. I have Apache 2.2, PHP 5.3 (32-bit) TS and Eclipse 3.7 (64-bit) with PDT installed on my machine. The xDebug section in my php.ini:

zend_extension = "C:\Program Files (x86)\PHP\ext\php_xdebug-2.1.4-5.3-vc9.dll"
xdebug.auto_trace = 0
xdebug.collect_includes = 1
xdebug.collect_params = 0
xdebug.collect_return = 0
xdebug.default_enable = 1
xdebug.extended_info = 1
xdebug.idekey = "STATION24$"
xdebug.max_nesting_level = 100
xdebug.profiler_append = 0
xdebug.profiler_enable = 0
xdebug.profiler_enable_trigger = 1
xdebug.profiler_output_dir = "C:\WINDOWS\temp"
xdebug.profiler_output_name = "xdebug_profile.%p"
xdebug.remote_autostart = 0
xdebug.remote_enable = 1
xdebug.remote_handler = "dbgp"
xdebug.remote_host = "localhost"
xdebug.remote_log = 1
xdebug.remote_mode = "req"
xdebug.remote_port = 9001
xdebug.show_exception_trace = 0
xdebug.show_local_vars = 0
xdebug.show_mem_delta = 1
xdebug.trace_format = 0
xdebug.trace_output_dir = "C:\WINDOWS\Temp"
xdebug.trace_output_name = "trace.%c"
xdebug.var_display_max_depth = 5

In Eclipse I've configured the PHP->Debug section as shown in the screenshots. Now, when I try to launch a debug session, Eclipse freezes at starting the session. I'd read about this problem in the past; people say it happens because some other application is using the xDebug port (in my case 9001), but I've checked, and no other application uses this port, only xDebug does. My firewall is disabled, so no application can be blocking the connection to xDebug. And one more thing: "debug as CLI application" works well, only "debug as Web application" does not work. I don't know what to do, please help.
If you run into problems and you don't know whether it's the Xdebug side or the IDE's side that is not working, then note that from Xdebug 2.2.0RC1 onwards the remote debug log will also log connection issues.
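A sketch of how that log is typically enabled in php.ini (the path is an assumption; note that xdebug.remote_log expects a file name rather than a 0/1 flag, unlike the value in the config above):

xdebug.remote_enable = 1
xdebug.remote_port = 9001
; the DBGp handshake and any connection failures get appended here
xdebug.remote_log = "C:\WINDOWS\temp\xdebug_remote.log"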
Just a thought: you mention PHP TS. Maybe you should use a TS build of Xdebug too, then? (Although I read everywhere that lately you'd better not use TS at all, so you might want to switch to a non-TS PHP instead.)
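If in doubt, a quick way to check which build is actually running, from the Windows command line (the exact wording of the output line varies across PHP versions):

php -i | findstr /C:"Thread Safety"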