Core file Issue - coredump

Does anyone have any idea about the core file below, which I debugged with the dbx debugger? I don't understand why this core file is generated. Please help me with this.
For information about new features see `help changes'
To remove this message, put `dbxenv suppress_startup_message 7.6' in your .dbxrc
Reading mhost.new
core file header read successfully
Reading ld.so.1
Reading librt.so.1
Reading libclntsh.so.9.0
Reading libm.so.2
Reading libnsl.so.1
Reading libsocket.so.1
Reading libgen.so.1
Reading libdl.so.1
Reading libthread.so.1
Reading libc.so.1
Reading libaio.so.1
Reading libmd.so.1
Reading libwtc9.so
Reading libsched.so.1
Reading libc_psr.so.1
WARNING!!
A loadobject was found with an unexpected checksum value.
See `help core mismatch' for details, and run `proc -map'
to see what checksum values were expected and found.
dbx: warning: Some symbolic information might be incorrect.
t#1 (l#1) program terminated by signal SEGV (no mapping at the fault address)
0xff3be704: elf_find_sym+0x0114: ldsb [%l0 + %l4], %o2
(dbx) where
current thread: t#1
=>[1] elf_find_sym(0xffbfbbd8, 0xffbfbc68, 0xffbfbc64, 0xf194, 0xfe5986d2, 0xff3f0358), at 0xff3be704
[2] _lookup_sym(0xff3f7360, 0xffbfbbd8, 0xffbfbc68, 0xffbfbc64, 0x0, 0xff3f0358), at 0xff3bbb7c
[3] lookup_sym(0xffbfbc6c, 0xffbfbc68, 0xffbfbc64, 0xff3f7360, 0x1, 0xfe5986d2), at 0xff3bbe6c
[4] elf_bndr(0x84d, 0xff391d38, 0xfe5c2124, 0xfe5986d2, 0xff3f42f0, 0x0), at 0xff3d207c
[5] elf_rtbndr(0xfe5c2124, 0xfe6c3800, 0x1c00, 0x0, 0x0, 0x0), at 0xff3b84fc
[6] 0xfe6bf3c4(0x0, 0x1cc4, 0xfe6c3800, 0xfe6c5180, 0xff352a00, 0x1c00), at 0xfe6bf3c4
[7] _exithandle(0xfe6c5400, 0xfe6c3800, 0x1c00, 0x0, 0x0, 0x0), at 0xfe5c2124
[8] exit(0x0, 0xffbfbe4c, 0xffbfbeb4, 0x139800, 0xff350100, 0x0), at 0xfe5b0550

SEGV indicates an access to undefined memory. Here it occurred in elf_find_sym, i.e. inside the runtime linker while resolving a symbol during exit processing (note the _exithandle and exit frames at the bottom of the stack). If debug symbols had been included, the stack trace would also show source files and line numbers.
Use gcc -g <files>... to include debug symbol and line number information.
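For example (a hypothetical build line; the source file names are placeholders for your own):
gcc -g -o mhost.new main.c util.c
With -g, the where output above would show source files and line numbers next to each frame instead of only raw addresses.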

A proper stack trace would help in finding out what might have caused the crash. It looks like this core was generated on a Solaris platform, and you may be analyzing it on a different system.
In that case it would be good to collect all the dependent libraries from the environment where the core was generated, extract them locally, and map the local directories using the pathmap subcommand in dbx.
For example, if a library is present in the failing environment under /home/app/lib, and on the local environment where the core is analyzed it is under /home/user/app/lib:
(dbx) pathmap /home/app/lib /home/user/app/lib
If there are multiple such paths, each directory needs to be mapped to its respective local directory. Once all the paths are mapped, you can run the following:
(dbx) debug executable-name corefile-name
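A complete session might then look like this (the second mapping is a hypothetical extra path to illustrate mapping several directories, and the core file is assumed to be named core):
(dbx) pathmap /home/app/lib /home/user/app/lib
(dbx) pathmap /home/app/lib64 /home/user/app/lib64
(dbx) debug mhost.new core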
Alternatively, you can also try the mdb debugger, or the pstack command on the core file.
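For example (assuming the executable and a core file named core are in the current directory):
$ pstack core
$ mdb mhost.new core
> $c
pstack prints the stack of every thread in the core file; in mdb, the $c command prints a stack backtrace.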

Related

STM32CubeIDE data mismatch

I am trying to work with the TouchGFX Designer demo on my STM32G071RB and the X-NUCLEO-GFX01M2.
The example project works fine if I flash it via TouchGFX designer.
When I import it into STM32CubeIDE, after setting up the external loader found in the TouchGFX directory, I get the following error:
Erasing memory corresponding to segment 0:
Erasing internal memory sectors [0 34]
Erasing memory corresponding to segment 1:
Erasing external memory sector 0
Download in Progress:
File download complete
Time elapsed during download operation: 00:00:02.135
Verifying ...
Error: Data mismatch found at address 0x90000000 (byte = 0x00 instead of 0xD2)
Error: Download verification failed
Shutting down...
Exit.
I've also tried using STM32CubeProgrammer, but I get the same error.
How can I fix this?

WinDbg for memory analysis using mimikatz: [ERROR] [CRYPTO] Acquire keys

I'm using WinDbg version 6.12 with mimilib.dll for debugging memory. All works fine until I get the following output in the UI:
0:000> !mimikatz
DPAPI Backup keys
=================
Current prefered key:
Compatibility prefered key:
SekurLSA
========
[ERROR] [CRYPTO] Acquire keys
Note: the memory dump is of lsass.
Does this have anything to do with symbols or the respective DLL in /system32? Kindly suggest.
It turned out I was using the x86 version of WinDbg where x64 was required.
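For reference, the vertarget command in WinDbg shows the target machine and its architecture; it should match the bitness of both WinDbg and the mimilib.dll you load:
0:000> vertarget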

Error: No chunks found for a file with Mongo gridFS

An error has crashed my application server and I can't seem to figure out what could be causing the issue. My application is built with Meteor and hosted on modulus.io. Here are my application logs:
Error: no chunks found for file, possibly corrupt
at /mnt/data/2/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:817:20
at /mnt/data/2/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:594:7
at /mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:758:35
at Cursor.close (/mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:989:5)
at Cursor.nextObject (/mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:758:17)
at commandHandler (/mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:727:14)
at /mnt/data/2/node_modules/mongodb/lib/mongodb/db.js:1916:9
at Server.Base._callHandler (/mnt/data/2/node_modules/mongodb/lib/mongodb/connection/base.js:448:41)
at /mnt/data/2/node_modules/mongodb/lib/mongodb/connection/server.js:481:18
at [object Object].MongoReply.parseBody (/mnt/data/2/node_modules/mongodb/lib/mongodb/responses/mongo_reply.js:68:5)
[2015-03-29T22:05:57.573Z] Application CRASH detected. Exit code 8.
Most probably this is a MongoDB bug with GridFS (it has since been fixed):
Writing two or more different files concurrently from different node
processes using the GridStore.writeFile command results in some files
not being correctly written (ending up with a number of corrupt files
in the gridstore). Ending up with corrupt files even with all
writeFile calls being successful and no indication of error.
writeFile occasionally fails with error "chunks out of order", but
this happens very rarely (something like 1 failed writeFile for 100
corrupt files or more).
Based on the comments in the discussion, the problem should be fixed if you update MongoDB (the existing GridFS files should be removed, as they are corrupt).
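If you want to see which files are affected before deleting them, the sketch below counts the chunks behind every GridFS file document (this is my own illustration, assuming the default fs.files/fs.chunks collections and the callback-style driver from the stack trace above; the connection string is a placeholder):

var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/mydb', function (err, db) {
  if (err) throw err;
  // Walk every GridFS file document and count its chunks.
  db.collection('fs.files').find().toArray(function (err, files) {
    if (err) throw err;
    var pending = files.length;
    if (pending === 0) return db.close();
    files.forEach(function (file) {
      db.collection('fs.chunks').count({ files_id: file._id }, function (err, n) {
        if (err) throw err;
        if (n === 0) {
          // No chunks at all: this entry would trigger "no chunks found".
          console.log('no chunks for', file.filename, String(file._id));
        }
        if (--pending === 0) db.close();
      });
    });
  });
});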
Error: no chunks found for file, possibly corrupt
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:808:20
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:586:5
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/collection/query.js:164:5
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/cursor.js:778:35
I had a similar occurrence, but it turned out the file sought in a GridFS read stream had actually been deleted - so in my case it wasn't corrupt, but gone! Above is a log from when that happened.
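In that situation it helps to check the fs.files document before opening the read stream, so a deleted file is reported as missing rather than corrupt (a sketch, reusing the db handle from the example above; fileId is a placeholder for the _id you are about to read):

db.collection('fs.files').findOne({ _id: fileId }, function (err, doc) {
  if (err) throw err;
  if (!doc) {
    // The file document is gone: the file was deleted, not corrupted.
    return console.log('file no longer exists:', String(fileId));
  }
  // Safe to open a GridStore / read stream for doc._id here.
});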

FATAL org.apache.hadoop.conf.Configuration - error parsing conf file: org.xml.sax.SAXParseException

I'm trying to run Pig locally, installed using Homebrew, to test a script. However, I get the following error when I attempt to run a simple dump from the interactive prompt pig -x local:
2012-07-16 23:20:40,447 [Thread-7] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
[Fatal Error] :63:85: Character reference "&#2" is an invalid XML character.
2012-07-16 23:20:40,688 [Thread-7] FATAL org.apache.hadoop.conf.Configuration - error parsing conf file: org.xml.sax.SAXParseException: Character reference "&#2" is an invalid XML character.
The same load/dump works fine on Elastic MapReduce.
I can't find any XML config files, and I've tried with both versions 0.9.2 and 0.10.0.
What am I missing?
Edit: I just checked a direct download (vs. Homebrew) and it doesn't seem to work either.
You should check that your Hadoop configuration files contain valid configuration data.
Have a look in your hadoop/conf directory, inside the following files (you can validate each one as shown after the list):
hdfs-site.xml
mapred-site.xml
core-site.xml
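A quick way to spot a malformed file is to run each one through an XML validator, e.g. (assuming xmllint is available):
$ xmllint --noout core-site.xml
It prints nothing for well-formed XML, and reports the line and column of the first error otherwise.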
I finally worked out what the problem was. I ended up having to use dtruss -p on the pig/java process. This revealed a temporary directory and dynamically generated XML files. Once the temporary directory was discovered, it all fell quickly into place.
It was picking up the proxy excludes from my network connections, which had, as far as I can tell, &#2 (http://www.fileformat.info/info/unicode/char/02/index.htm) embedded in it. How this invalid value came to be in my network preferences in the first place, I haven't the faintest clue.
The value was then being pulled into dynamically generated files, for example /tmp/hadoop-vertis/mapred/staging/vertis-1005847898/.staging/job_local_0001/job.xml.
The offending lines:
<property><name>ftp.nonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
<property><name>socksNonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
<property><name>http.nonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
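You can confirm the bad character reference is really in a generated file by searching for it directly (the path is the one mentioned above):
$ grep -n '&#2' /tmp/hadoop-vertis/mapred/staging/vertis-1005847898/.staging/job_local_0001/job.xml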

InterSystems Caché: Unexpected error occurred: <WIDE CHAR>

I am trying to load an old CACHE.DAT database into InterSystems Caché (2012.1.1 win32 evaluation). I've managed to create a namespace and database, and I'm able to query some of the database tables.
However, for other tables, I get the following error:
ERROR #5540: SQLCODE -400 Message: Unexpected error occurred: <WIDE CHAR>
The documentation tells me that this means a multibyte character was read where a one-byte character was expected. I suspect this might mean that the original database was in UTF-16, while my new installation is using UTF-8.
My question is: is there a way to convert the database, to configure Caché so that it can handle this, or to deal with the problem in another way?
Maybe the original database was created on a Unicode installation and your current installation is 8-bit. In that case Caché reads a multibyte character where a 1-byte character is expected.
Can you post your cboot.log from the mgr directory?
For example, here are the first lines of my cboot.log:
Start of Cache initialization at 02:51:00PM on Apr 7, 2012
Cache for Windows (x86-64) 2012.2 (Build 549U) Sun Apr 1 2012 17:34:18 EDT
Locale setting is rusw
Source directory is c:\intersystems\ensemble12\mgr\utils\
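As a side note (my own suggestion, not something from this thread): if I recall correctly, you can check whether an installation is Unicode from a Caché terminal with
USER> write $system.Version.IsUnicode()
which returns 1 on a Unicode installation and 0 on an 8-bit one.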