Why is replaying this log file causing a segmentation fault? - kdb

KDB+ 3.6 2018.07.30 Copyright (C) 1993-2018 Kx Systems
l64/ 16()core
I am trying to replay a log file of 1.7 GB.
I get the following error:
m1 -6341068259609826952
wsfull
Sorry, this application or an associated library has encountered a fatal error and will exit.
If known, please email the steps to reproduce this error to tech@kx.com
with a copy of the kdb+ startup banner and the info printed below.
Thank you.
SIGSEGV: Fault address 0x3752e00000c93
The upd function is simply: upd:upsert
q).Q.w[]
used| 667888
heap| 67108864
peak| 67108864
wmax| 0
mmap| 0
mphy| 270882623488
syms| 3459
symw| 228104
None of the -11! variants (-1, -2) work (see the reference sketch below); replay works fine for the first ~2000 records.
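For reference, the documented -11! streaming-execute variants are (a sketch; `:log is a placeholder path):
-11!`:log        / replay every chunk, applying value to each
-11!(-1;`:log)   / same as -11!`:log
-11!(n;`:log)    / replay only the first n chunks
-11!(-2;`:log)   / intact log: chunk count; corrupt log: (valid chunks;valid bytes)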

There were some checks added on 2019.11.04 around this; from the release notes:
added further integrity checks to streaming execute (-11!x) to avoid wsfull or segfault on corrupted/incomplete log files. e.g.
`:log set();h:hopen`:log;h enlist(`upd;1#0xff);hclose h;`:log1 1: (-5_(read1`:log)),0xffffffff0000ffffff;-11!`:log1 / wsfull or segfaulted
I would try again with a more recent version of kdb+, although you'll still have some issues with the replay.
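On a build with those checks, a minimal recovery sketch is to keep only the valid prefix (assuming -11!(-2;...) returns a 2-item list on the corrupt file; file names are placeholders):
upd:upsert                                        / same upd as in the question
r:-11!(-2;`:badlog)                               / corrupt log: (valid chunks;valid bytes)
if[2=count r;`:goodlog 1: read1(`:badlog;0;r 1)]  / write just the valid bytes
-11!`:goodlog                                     / replay the recovered prefix
Note this holds the valid prefix in memory; for very large logs you could copy it across in smaller slices instead.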
Jason

Related

kernel - postgres segfault error 15 in libc-2.19.so

Yesterday we had a crash of PostgreSQL 9.5.14 running on Debian 8 (Linux xxxxxx 3.16.0-7-amd64 #1 SMP Debian 3.16.59-1 (2018-10-03) x86_64 GNU/Linux) - a segmentation fault. The database closed all connections and reinitialized itself, staying ~1 minute in recovery mode.
PostgreSQL log:
2018-10-xx xx:xx:xx UTC [580-2] LOG: server process (PID 16461) was
terminated by signal 11: Segmentation fault
kern.log:
Oct xx xx:xx:xx xxxxxxxx kernel: [117977.301353] postgres[16461]:
segfault at 7efd3237db90 ip 00007efd3237db90 sp 00007ffd26826678 error
15 in libc-2.19.so[7efd322a2000+1a1000]
According to the libc documentation (https://support.novell.com/docs/Tids/Solutions/10100304.html), error code 15 means:
NX_EDEADLK 15 resource deadlock would occur - which does not tell me much.
Could you please tell me if we can do something to avoid this problem in the future? This server is, of course, a production one.
All packages are currently up to date. Upgrading PG is unfortunately not an option. The server runs on Google Compute Engine.
error code 15 means: NX_EDEADLK 15
No, it doesn't mean that. This answer explains how to interpret 15 here.
It's bits 0, 1, 2, 3 set => protection fault, write access, user mode, use of reserved bit. Most likely your postgres process attempted to write to some wild pointer.
if we can do something to avoid this problem in the future?
The only thing you can do is find the bug and fix it, or upgrade to a release of postgres where that bug is already fixed (and hope that no new ones were introduced).
To understand where the bug might be, you should check whether a core dump was produced (if not, enable them). If you have the core, use gdb /path/to/postgres /path/to/core, and then the where GDB command. That will give you the crash stack trace, which may allow you to find similar bug reports.
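A minimal sketch of that workflow (the binary path is a typical Debian location, an assumption here; the core path is a placeholder):
# allow core dumps in the shell that starts the server, then reproduce the crash
ulimit -c unlimited
# check where the kernel writes core files
cat /proc/sys/kernel/core_pattern
# print the crash stack trace non-interactively
gdb -batch -ex where /usr/lib/postgresql/9.5/bin/postgres /path/to/core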

TYPO3 - Extension ke_search - Bug in scheduler

I'm using :
TYPO3 6.2
ke_search 2.2
Everything works fine except the indexing process. I mean:
If I index manually (with the backend module), it's OK, no error messages.
If I run the scheduler indexing task manually, it's OK, no error messages.
If I run the scheduler with the php typo3/cli_dispatch.phpsh scheduler command, then I get this error:
Fatal error: Allowed memory size of 16777216 bytes exhausted (tried to
allocate 87 bytes) in
/path_to_my_website/typo3/sysext/core/Classes/Cache/Frontend/VariableFrontend.php on line 99
For your information :
my PHP memory_limit setting is 128M (yet the error reports a 16777216-byte, i.e. 16M, limit; see the check below).
Other tasks are OK.
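Worth noting: since 16777216 bytes is 16M, not 128M, the CLI run is probably reading a different php.ini than the web server. A quick check (standard PHP CLI flags; a sketch):
# which ini files the CLI actually loads
php --ini
# the effective CLI memory limit
php -r 'echo ini_get("memory_limit"), PHP_EOL;'
# one-off override for the scheduler run
php -d memory_limit=256M typo3/cli_dispatch.phpsh scheduler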
After this error appears on my console, the scheduler task is locked.
I can't figure out what's wrong.
EDIT: I flushed frontend caches + general caches + system caches. If I run the scheduler via the console one more time, this is the new error I get:
Fatal error: Allowed memory size of 16777216 bytes exhausted (tried to
allocate 12288 bytes) in
/path_to_my_website/typo3/sysext/core/Classes/Database/QueryGenerator.php
on line 1265
EDIT 2: if I disable all my indexer configurations, everything goes well. But if I enable even one configuration -> PHP error.
Here is one of the indexer files:

Yiimp pool reject all blocks

I have set up my YIIMP pool, but it seems that all blocks are rejected; I think it is a blocknotify problem.
14:54:03: BTCRUBLE 213314 - diff 1.592820338 job e to 1/1/1 clients, hash 165.101/114.019 in 0.1 ms
14:54:05: *** REJECTED :( BTC RUBLE block 213314 1 txs
2018-02-02 14:54:05: REJECTED BTCRUBLE block 213314
14:54:23: BTC RUBLE 213314 not reporting
14:54:24: BTCRUBLE 213315 - diff 1.592820338 job f to 1/1/1 clients, hash 157.281/114.019 in 0.1 ms
14:54:25: *** REJECTED :( BTC RUBLE block 213315 1 txs
2018-02-02 14:54:25: REJECTED BTCRUBLE block 213315
My conf file of wallet is like this:
rpcuser=btcrublerpc
rpcpassword=mypassword
rpcport=4921
rpcthreads=8
rpcallowip=127.0.0.1
# onlynet=ipv4
maxconnections=12
daemon=1
gen=0
When I add this blocknotify part, I get the error "blocknotify not found":
alertnotify=echo %s | mail -s "BTC RUBLE alert!" myemail@gmail.com
blocknotify=blocknotify 94.177.204.50:3433 1425 %s
Can someone help, please? I can pay to get it working.
Thanks a lot!
To answer your question: in your blocknotify call, did you put /var/stratum in front of blocknotify? Example: blocknotify=/var/stratum/blocknotify 94.177.204.50:3433 1425 %s
Rejected blocks have nothing to do with blocknotify; that is merely a notification sent whenever a block has been found. It has no impact whatsoever on mining.
The problem you are facing (your blocks being rejected) could be caused by a misconfiguration in the Yiimp coin admin, or by a coin conf file that is not properly configured.
You need to thoroughly check BTCRUBLE's Settings and Daemon tabs in the Yiimp coin admin.
The coin's conf seems fine, but perhaps you need to add this parameter:
server=1
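Putting both suggestions together, the relevant wallet conf would look like this (a sketch; the /var/stratum path is the assumption from the first answer):
server=1
daemon=1
rpcuser=btcrublerpc
rpcpassword=mypassword
rpcport=4921
rpcallowip=127.0.0.1
blocknotify=/var/stratum/blocknotify 94.177.204.50:3433 1425 %s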

How to fill data up to a size on multiple disks?

I am creating 4 mountpoint disks on Windows. I need to copy files up to a threshold value (say 50 GB).
I tried vdbench. It works fine, but it throws an exception at the end.
compratio=4
dedupratio=1
dedupunit=256k
* Host Definition section
hd=default,user=Administator,shell=vdbench,jvms=1
hd=localhost,system=localhost
********************************************************************************
* Storage Definition section
fsd=fsd1,anchor=C:\UnMapTest-Volume1\disk1\,depth=1,width=1,files=1,size=5g
fsd=fsd2,anchor=C:\UnMapTest-Volume2\disk2\,depth=1,width=1,files=1,size=5g
fwd=fwd1,fsd=fsd*,operation=write,xfersize=1m,fileio=sequential,fileselect=random,threads=10
rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=1h,interval=1
Below is the exception from vdbench; because of it, my calling script fails.
05:29:14.287 Message from slave localhost-0:
05:29:14.289 file=C:\UnMapTest-Volume1\disk1\\vdb.1_1.dir\vdb_f0001.file,busy=true
05:29:14.290 Thread: FwgThread write C:\UnMapTest-Volume1\disk1\ rd=rd1 For loops: None
05:29:14.291
05:29:14.292 last_ok_request: Thu Dec 28 05:28:57 PST 2017
05:29:14.292 Duration: 16.92 seconds
05:29:14.293 consecutive_blocks: 10001
05:29:14.294 last_block: FILE_BUSY File busy
05:29:14.294 operation: write
05:29:14.295
05:29:14.296 Do you maybe have more threads running than that you have
05:29:14.296 files and therefore some threads ultimately give up after 10000 tries?
05:29:14.300 *
05:29:14.301 ******************************************************
05:29:14.302 * Slave localhost-0 aborting: Too many thread blocks *
05:29:14.302 ******************************************************
05:29:14.303 *
05:29:21.235
05:29:21.235 Slave localhost-0 prematurely terminated.
05:29:21.235
05:29:21.235 Slave aborted. Abort message received:
05:29:21.235 Too many thread blocks
05:29:21.235
05:29:21.235 Look at file localhost-0.stdout.html for more information.
05:29:21.735
05:29:21.735 Slave localhost-0 prematurely terminated.
05:29:21.735
java.lang.RuntimeException: Slave localhost-0 prematurely terminated.
at Vdb.common.failure(common.java:335)
at Vdb.SlaveStarter.startSlave(SlaveStarter.java:198)
at Vdb.SlaveStarter.run(SlaveStarter.java:47)
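The abort message above points at a likely cause: the fwd runs threads=10, but each fsd defines only files=1, so the ten threads compete for the same file until they give up. A vdbench-side fix would be to match threads to files (a sketch of the changed line only):
fwd=fwd1,fsd=fsd*,operation=write,xfersize=1m,fileio=sequential,fileselect=random,threads=1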
I am using PowerShell on a Windows machine. If another tool such as Diskspd has a way to fill data up to a threshold, please let me know.
I found the answer myself. I did this using Diskspd.exe, as shown below.
The following command fills 50 GB of data in the mentioned disk folder:
.\diskspd.exe -c50G -b4K -t2 C:\UnMapTest-Volume1\disk1\testfile1.dat
It is much simpler than vdbench for my requirement.
Caution: it does not write real data, so the array-side disk usage does not show up as the full file size.
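To fill all four mountpoints in one go, a PowerShell loop works (a sketch; the Volume3/Volume4 paths are assumed to follow the same pattern as the question's first two):
# fill a 50 GB test file on each of the four mountpoint disks
1..4 | ForEach-Object {
    .\diskspd.exe -c50G -b4K -t2 "C:\UnMapTest-Volume$_\disk$_\testfile$_.dat"
}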

OpenOCD multiple STLinks

I need to connect to 2 STM32s over 2 ST-Links at the same time. I found this issue described here; however, the solution doesn't work for me.
ST-Link ID1: 55FF6B067087534923182367
ST-Link ID2: 49FF6C064983574951291787
OpenOCD cfg file:
source [find interface/stlink-v2.cfg]
hla_serial "55FF6B067087534923182367"
source [find target/stm32f4x.cfg]
# use hardware reset, connect under reset
reset_config srst_only srst_nogate
I get:
$ openocd.exe -f stm32f4_fmboard.cfg
Open On-Chip Debugger 0.10.0
Licensed under GNU GPL v2
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
Info : auto-selecting first available session transport "hla_swd". To override use 'transport select <transport>'.
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
adapter speed: 2000 kHz
adapter_nsrst_delay: 100
none separate
srst_only separate srst_nogate srst_open_drain connect_deassert_srst
Info : Unable to match requested speed 2000 kHz, using 1800 kHz
Info : Unable to match requested speed 2000 kHz, using 1800 kHz
Info : clock speed 1800 kHz
Error: open failed
in procedure 'init'
in procedure 'ocd_bouncer'
I do not know if this is solved, but:
pi#raspberrypi:~/prog/bootloader $ st-info --probe
Found 1 stlink programmers
serial: 363f65064b46323613500643
openocd: "\x36\x3f\x65\x06\x4b\x46\x32\x36\x13\x50\x06\x43"
flash: 0 (pagesize: 0)
sram: 0
chipid: 0x0000
descr: unknown device
This tool shows the serials of attached ST-Links, and it has an option called openocd. When I put hla_serial "\x36\x3f\x65\x06\x4b\x46\x32\x36\x13\x50\x06\x43" in the file, it works for me. Your way does not, and it also does not work when given as a command-line argument. It works only as I described, in the cfg file.
The format of the configuration file seems to have changed recently. The following applies to Open On-Chip Debugger 0.10.0+dev-00634-gdb070eb8 (2018-12-30-23:05).
Find out the serial number with lsusb, st-link, or with ls -l /dev/serial/by-id. The latter yields (with two STLink/V2.1 adapters connected):
total 0
lrwxrwxrwx 1 root root 13 Nov 30 14:31 usb-STMicroelectronics_STM32_STLink_066CFF323535474B43125623-if02 -> ../../ttyACM0
lrwxrwxrwx 1 root root 13 Dec 30 23:55 usb-STMicroelectronics_STM32_STLink_0672FF485457725187052924-if02 -> ../../ttyACM1
The specification in the .cfg file is now plain hex; do not use the C string syntax any longer. To select the latter device, simply write:
#hla_serial "066CFF323535474B43125623"
hla_serial "0672FF485457725187052924"