JFR.dump fails. "No data in specified interval" - trace

I get the following error message when executing jcmd 115 JFR.dump name=continuous_recording:
115:
Dump failed. No data found in the specified interval.
I started the recording with the following configuration:
-XX:StartFlightRecording=disk=true,dumponexit=true,
filename=/home/site/diagnostics/recording.jfr,
maxsize=1024m,maxage=1d,name=continuous_recording
It could be that the buffer has not yet filled the minimum chunk size. But the JFR.check command does not provide that information.
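(For reference, the recording state itself can be confirmed with something like the command below; verbose=true also lists the event settings, but as noted, none of this shows the buffer fill level.)
jcmd 115 JFR.check verbose=true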
Update:
I can get a dump from the Java app if I run JFR.dump without specifying the name of the recording. I tried wrapping the recording name in quotes (escaped and unescaped) and got the same error as before.
005c736ce3ee:/home# jcmd 115 JFR.dump filename="home/6_10_dump1.jfr"
Picked up JAVA_TOOL_OPTIONS: -Djava.net.preferIPv4Stack=true
115:
Dumped recording, 155.8 MB written to: /home/6_10_dump1.jfr

What version of the JDK are you using? The chunk should not need to be filled up.
There is a bug [1], present in JDK 11 and later, that occurs if you specify the dump filename at startup but don't specify one when you dump the recording.
Try this as a workaround:
$ jcmd 115 JFR.dump filename=recording.jfr
[1] https://bugs.openjdk.java.net/browse/JDK-8220657

Azul support helped figure this out. If you don't specify the filename when starting the VM, then the following works:
-XX:StartFlightRecording=disk=true,name=continuous_recording,dumponexit=true,maxsize=1024m,maxage=1d
This bug was filed and fixed as JDK-8220657. Azul said they will backport this to Zulu 8 and 11 in the future.
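With that startup line, the original dump command (jcmd 115 JFR.dump name=continuous_recording) works. Once a build containing the JDK-8220657 fix is picked up, combining the recording name with an explicit output file should work again too, for example (the path here is only illustrative):
jcmd 115 JFR.dump name=continuous_recording filename=/home/site/diagnostics/manual_dump.jfr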

Related

"image is too large" keeps on happening to openbmc image for Raspberrypi platform

Could someone please give me advice on making an OpenBMC image for the Raspberry Pi platform?
Before trying, I looked through related documents and believed an OpenBMC image could work on a Raspberry Pi,
such as OpenBMC with Raspberry Pi (2 or 3) and build bmcweb?
and https://kevinleeblog.github.io/project1/2019/11/25/openbmc-for-raspberry-pi-zero/.
So I followed these instructions and tried the following steps.
#1: Git clone openbmc.git to my local PC.
tm#tm-VB1:~/Rpi4-64$ git clone https://github.com/openbmc/openbmc.git
Logs snipped, but they show no problem.
Receiving objects: 100% (182121/182121), 84.10 MiB | 5.55 MiB/s, done.
Resolving deltas: 100% (96860/96860), done.
#2: Set TEMPLATECONF for the Raspberry Pi.
tm#tm-VB1:~/Rpi4-64$ export TEMPLATECONF=meta-evb/meta-evb-raspberrypi/conf
tm#tm-VB1:~/Rpi4-64$ echo $TEMPLATECONF
meta-evb/meta-evb-raspberrypi/conf
#3: Set up the environment with "openbmc-env".
tm#tm-VB1:~/Rpi4-64/openbmc$ . openbmc-env
### Initializing OE build env ###
Logs snipped, but they show no problem. As you know, the script automatically creates a subdirectory, build, under openbmc.
Common targets are:
obmc-phosphor-image
tm#tm-VB1:~/Rpi4-64/openbmc/build$
#4: Change the directory and edit local.conf for my Raspberry Pi platform.
tm#tm-VB1:~/Rpi4-64/openbmc/build$ cat ./conf/local.conf
Unchanged parts of the log snipped.
MACHINE ??= "raspberrypi4-64" <<< Change here for my platform.
DL_DIR ?= "/home/tm/Yocto/downloads" <<< Add here for build-time reduction at retry.
SSTATE_DIR ?= "/home/tm/Yocto/sstate-cache" <<< Add here for build-time reduction at retry.
#5: Change the FLASH_SIZE variable based on the following suggestion: https://github.com/openbmc/openbmc/issues/3590
tm#tm-VB1:~/Rpi4-64/openbmc/meta-phosphor/classes$ cat image_types_phosphor.bbclass
Snip the log.
# Flash characteristics in KB unless otherwise noted
FLASH_SIZE ?= "131072" <<< I changed only this variable from 32768 to 131072.
#6: bitbake starts.
tm#tm-VB1:~/Rpi4-64/openbmc$ bitbake obmc-phosphor-image
Then an ERROR happened.
ERROR: Logfile of failure stored in: /home/tm/Rpi/openbmc/build/tmp/work/raspberrypi-openbmc-linux-gnueabi/obmc-phosphor-image/1.0-r0/temp/log.do_generate_static.2055074
DEBUG: Executing python function do_generate_static
DEBUG: Executing shell function do_mk_static_nor_image
32768+0 records in
32768+0 records out
33554432 bytes (34 MB, 32 MiB) copied, 0.09147 s, 367 MB/s
DEBUG: Shell function do_mk_static_nor_image finished
DEBUG: Considering file size=495980 name=/home/tm/Rpi/openbmc/build/tmp/deploy/images/raspberrypi/u-boot.bin
DEBUG: Spanning start=0K end=512K
DEBUG: Compare needed=495980 available=524288 margin=28308
484+1 records in
484+1 records out
495980 bytes (496 kB, 484 KiB) copied, 0.00120141 s, 413 MB/s
DEBUG: Considering file size=8266960 name=/home/tm/Rpi/openbmc/build/tmp/deploy/images/raspberrypi/fitImage-obmc-phosphor-initramfs-raspberrypi-raspberrypi
DEBUG: Spanning start=512K end=4864K
>>>DEBUG: Compare needed=8266960 available=4456448 margin=-3810512
ERROR: Image '/home/tm/Rpi/openbmc/build/tmp/deploy/images/raspberrypi/fitImage-obmc-phosphor-initramfs-raspberrypi-raspberrypi' is too large!
DEBUG: Python function do_generate_static finished
It said margin=-3810512.
Now, my 2nd try.
I removed the whole openbmc directory and repeated the same steps as above,
but this time I changed FLASH_SIZE from 32768 to 262144.
The result was the same, as shown below.
ERROR: obmc-phosphor-image-1.0-r0 do_generate_static: Image '/home/tm/Rpi4/openbmc/build/tmp/deploy/images/raspberrypi4/u-boot.bin' is too large!
ERROR: Logfile of failure stored in: /home/tm/Rpi4/openbmc/build/tmp/work/raspberrypi4-openbmc-linux-gnueabi/obmc-phosphor-image/1.0-r0/temp/log.do_generate_static.2061792
ERROR: Task (/openbmc/meta-phosphor/recipes-phosphor/images/obmc-phosphor-image.bb:do_generate_static) failed with exit code '1'
NOTE: Tasks Summary: Attempted 3915 tasks of which 2633 didn't need to be rerun and 1 failed.
Summary: 1 task failed:
/openbmc/meta-phosphor/recipes-phosphor/images/obmc-phosphor-image.bb:do_generate_static
Summary: There were 2 WARNING messages shown.
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
tm#tm-VB1:~/Rpi4/openbmc/build$ cat /home/tm/Rpi4/openbmc/build/tmp/work/raspberrypi4-openbmc-linux-gnueabi/obmc-phosphor-image/1.0-r0/temp/log.do_generate_static.2061792
DEBUG: Executing python function do_generate_static
DEBUG: Executing shell function do_mk_static_nor_image
32768+0 records in
32768+0 records out
33554432 bytes (34 MB, 32 MiB) copied, 0.177223 s, 189 MB/s
DEBUG: Shell function do_mk_static_nor_image finished
DEBUG: Considering file size=548224 name=/home/tm/Rpi4/openbmc/build/tmp/deploy/images/raspberrypi4/u-boot.bin
DEBUG: Spanning start=0K end=512K
>>>DEBUG: Compare needed=548224 available=524288 margin=-23936
ERROR: Image '/home/tm/Rpi4/openbmc/build/tmp/deploy/images/raspberrypi4/u-boot.bin' is too large!
DEBUG: Python function do_generate_static finished
tm#tm-VB1:~/Rpi4/openbmc/build$
It said margin=-23936.
OK, the image is too large. So, my 3rd try.
I removed the whole openbmc directory and repeated the same steps as above,
but this time I changed FLASH_SIZE from 32768 to 9437184.
The result was the same, as shown below.
ERROR: obmc-phosphor-image-1.0-r0 do_generate_static: Image '/home/tm/Rpi4/openbmc/build/tmp/deploy/images/raspberrypi4/u-boot.bin' is too large!
ERROR: Logfile of failure stored in: /home/tm/Rpi4/openbmc/build/tmp/work/raspberrypi4-openbmc-linux-gnueabi/obmc-phosphor-image/1.0-r0/temp/log.do_generate_static.2058361
ERROR: Task (/openbmc/meta-phosphor/recipes-phosphor/images/obmc-phosphor-image.bb:do_generate_static) failed with exit code '1'
NOTE: Tasks Summary: Attempted 3935 tasks of which 0 didn't need to be rerun and 1 failed.
Summary: 1 task failed:
/openbmc/meta-phosphor/recipes-phosphor/images/obmc-phosphor-image.bb:do_generate_static
Summary: There were 4 WARNING messages shown.
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
tm#tm-VB1:~/Rpi4/openbmc$
tm#tm-VB1:~/Rpi4/openbmc$ cat /home/tm/Rpi4/openbmc/build/tmp/work/raspberrypi4-openbmc-linux-gnueabi/obmc-phosphor-image/1.0-r0/temp/log.do_generate_static.2058361
DEBUG: Executing python function do_generate_static
DEBUG: Executing shell function do_mk_static_nor_image
32768+0 records in
32768+0 records out
33554432 bytes (34 MB, 32 MiB) copied, 0.173685 s, 193 MB/s
DEBUG: Shell function do_mk_static_nor_image finished
DEBUG: Considering file size=548224 name=/home/tm/Rpi4/openbmc/build/tmp/deploy/images/raspberrypi4/u-boot.bin
DEBUG: Spanning start=0K end=512K
>>>DEBUG: Compare needed=548224 available=524288 margin=-23936
ERROR: Image '/home/tm/Rpi4/openbmc/build/tmp/deploy/images/raspberrypi4/u-boot.bin' is too large!
DEBUG: Python function do_generate_static finished
tm#tm-VB1:~/Rpi4/openbmc$
It said the same margin as in the 256 MB case.
My 4th try.
I removed the whole openbmc directory and repeated the same steps as above.
I changed MACHINE ??= "raspberrypi4-64" to "raspberrypi2",
and this time I changed FLASH_SIZE from 32768 to 33554432.
The result was the same as before.
My 5th try.
I removed the whole openbmc directory and repeated the same steps as above.
I used MACHINE ??= "raspberrypi2",
and this time I changed FLASH_SIZE from 32768 to 67108864.
The result was the same as before.
After trying several variations, it always said "image is too large", even though I changed FLASH_SIZE to a much larger value.
So I am wondering if I have missed some important configuration, or whether some parameter other than FLASH_SIZE needs to be changed to fix this.
By the way, I tried romulus and that build succeeded.
My environment is ubuntu-20.04.2.0-desktop-amd64.
I would really appreciate it if someone could kindly give me advice on making this work.
Interesting. I don't have a quick fix for you, but I did notice that the partition that is oversized is the u-boot partition. u-boot is a smaller, separate binary installed on the machine. It looks as if your u-boot build is over 512K and the partition is set for 512K. Your flash size is massive:
FLASH_SIZE = 9437184, which is more than a gig (because FLASH_SIZE is in KB).
If I were you, I would first try to build an older version of OpenBMC for the Raspberry Pi. (It used to work, so you just need to find a commit from before u-boot grew too big.) Use git to move back a month at a time until you find one that works.
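For example (the date is only a placeholder and the branch is assumed to be master; keep walking the date back until the build succeeds):
git checkout $(git rev-list -n 1 --before="2021-01-01" master)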
If that does not work, I would try to modify the partition table.
Here is where you are failing: the do_generate_static size check in meta-phosphor's image_types_phosphor.bbclass (the log you posted).
Building the u-boot image itself looks fine; it is only that layout check that rejects it.
Increasing the kernel offset makes it build, but the other targets in OpenBMC will not be happy with this solution, so maybe meta-raspberrypi will have to override the partition table (if u-boot cannot be shrunk).
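As a sketch of what such an override could look like (I am assuming the FLASH_* offset variable names below match what image_types_phosphor.bbclass uses in your tree, and the values are placeholders sized from your failing log, so double-check the bbclass before copying), e.g. in local.conf:
FLASH_SIZE = "131072"
FLASH_UBOOT_OFFSET = "0"
FLASH_KERNEL_OFFSET = "1024"
FLASH_ROFS_OFFSET = "9216"
FLASH_RWFS_OFFSET = "98304"
That would give u-boot 1 MB instead of 512 KB and roughly 8 MB for the kernel/initramfs, which is enough for the sizes in the log you posted.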
Whatever you do, open an issue on the GitHub and share your changes. Also use the Discord and Gerrit.
I just replicated this issue. We should fix it.

pyglet "Unable to share contexts" exception when running PsychoPy demo twice

PsychoPy looks like just what I need. But I want to use my own development environment (a straightforward IPython prompt combined with the editor of my choice) instead of the provided IDE.
The trouble is that you seem to have to quit Python and relaunch after every PsychoPy run. If for example I cd to the ...../demos/coder/stimuli directory and type run gabor.py it runs fine, but if I then type run gabor.py again I get this exception from pyglet:
C:\snap\PsychoPy2\lib\site-packages\pyglet\window\win32\__init__.pyc in _create(self)
259 if not self._wgl_context:
260 self.canvas = Win32Canvas(self.display, self._view_hwnd, self._dc)
--> 261 self.context.attach(self.canvas)
262 self._wgl_context = self.context._context
263
C:\snap\PsychoPy2\lib\site-packages\pyglet\gl\win32.pyc in attach(self, canvas)
261 self._context = wglext_arb.wglCreateContextAttribsARB(canvas.hdc,
262 share, attribs)
--> 263 super(Win32ARBContext, self).attach(canvas)
C:\snap\PsychoPy2\lib\site-packages\pyglet\gl\win32.pyc in attach(self, canvas)
206 raise RuntimeError('Share context has no canvas.')
207 if not wgl.wglShareLists(share._context, self._context):
--> 208 raise gl.ContextException('Unable to share contexts')
209
210 def set_current(self):
ContextException: Unable to share contexts
Is there some sort of pyglet.cleanup() I can call (analogous to pygame.quit()) to allow PsychoPy scripts to run more than once in the same session? Or another way of avoiding this problem?
I'm using the Standalone PsychoPy distro version 1.81.02, untouched. The problem is not specific to IPython; it can also be demonstrated from the plain Python prompt if you disable sys.exit and type execfile('gabor.py') twice:
C:\snap\PsychoPy2\Lib\site-packages\PsychoPy-1.81.02-py2.7.egg\psychopy\demos\coder\stimuli>python
Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> import sys; sys.exit = lambda x:x
>>> execfile('gabor.py')
0.6560 WARNING Movie2 stim could not be imported and won't be available
1.6719 WARNING Monitor specification not found. Creating a temporary one...
>>>
>>> execfile('gabor.py')
Traceback (most recent call last):
[snip]
File "C:\snap\PsychoPy2\lib\site-packages\pyglet\gl\win32.py", line 208, in attach
raise gl.ContextException('Unable to share contexts')
pyglet.gl.ContextException: Unable to share contexts
I don't know how to undo all the pyglet/PsychoPy initialisation; neither is really designed for you to do this, so there would be some work here. But I'm not sure it's a good idea anyway to run scripts the way you are doing.
The PsychoPy app itself gets around this by launching each script in a new process. It means that you know the namespace is clean on each run. Running your script on top of the previous one can lead to some really hard-to-find bugs because you don't know in what state the previous script left the memory, graphics card and namespace.
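If you want similar isolation from your own IPython prompt, a minimal sketch (assuming gabor.py sits in the current directory) would be to launch each run in a fresh interpreter, so all the pyglet/OpenGL state is discarded with the process:
import subprocess, sys
# run the demo in its own Python process; nothing from a previous run can leak into the next
subprocess.call([sys.executable, 'gabor.py'])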
cheers
Jon

LVCGame, VBS and HLA setup: Error 127 out of LVCGame

I am trying to set up VBS to talk HLA to a legacy app using LVCGame.
I am currently using OpenRTI in my project. I have LVCGame pointed at a directory with the following DLLs:
RTI-NG.dll
OpenRTI.dll (copied from another folder)
libRTI-NG.dll (copy of RTI-NG.dll and renamed)
FedTime.dll
The relevant lines of my vbsClient.config:
Plugins = HLA-1.3.dll : HLA-1.3\Project\HLA.config
I am getting the following error out of LVCGame:
2014-12-31 10:48:03 INFO (LVCGAME::LVCGame::init) Initialised.
2014-12-31 10:48:03 ERROR (LVCGAME::LVCGame::start) 'class LVCGAME::UTILS::Exception' (src\LVCGame.cpp, line 766): Couldn't load plugin .\lib\HLA-1.3.dll. Error Code: 127
I did send an email to VBS support, but if I had any idea what a 127 error was, maybe I could get further.
I found that another free RTI implementation, Portico, is mentioned as working with LVCGame. Giving that a try produced pretty similar results: I get an error 126 instead of a 127.
2015-01-02 09:47:47 INFO (LVCGAME::UTILS::IOUtils::extractDllLoadPath) Using DLL path specified as 'D:\Program Files (x86)\Portico\portico-2.0.1\bin\vc10'.
2015-01-02 09:47:47 INFO (LVCGAME::LVCGame::init) Initialised.
2015-01-02 09:47:47 ERROR (LVCGAME::LVCGame::start) 'class LVCGAME::UTILS::Exception' (src\LVCGame.cpp, line 766): Couldn't load plugin .\lib\HLA-1.3.dll. Error Code: 126
2015-01-02 09:47:47 ERROR (LVCGAME::LVCGame::start) LVCGame start failed!
So eventually I found that Portico was creating a log file in my VBS directory\logs. From there I was able to see that Portico, at least, was not able to find my .fed file; I had the path messed up.
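For what it's worth, error codes 126 and 127 are standard Windows DLL-loading errors and can be decoded from a command prompt; 126 means a module (the DLL or one of its dependencies) could not be found, and 127 means an expected exported procedure was missing:
net helpmsg 126
net helpmsg 127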

Windbg - !clrstack

I'm attempting to debug a manual dump file of a 64-bit w3wp process with 64-bit WinDbg (version 6.10). The dump was taken with taskmgr. I can't get anything from the !clrstack command. Here is what I'm getting:
!loadby sos clr
!runaway
User Mode Time
Thread Time
17:cf4 0 days 5:37:42.455
~17s
ntdll!ZwDelayExecution+0xa:
00000000`776208fa c3 ret
!clrstack
GetFrameContext failed: 1
What is GetFrameContext failed: 1?
Use the !dumpstack command instead of !clrstack. It usually works.
Try getting the "native" call stack with "k" and see what that gets you. Sometimes the stack isn't quite right, and the !ClrStack extension is pretty sensitive.
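For example, from the thread of interest (thread 17 from the !runaway output above; commands only, output will vary):
~17s
k
!dumpstack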

Analyzing a .dmp crash file?

I tried traditional ways and answers to analyze my .DMP files. WinDbg doesn't take it; it outputs:
Could not find the C:\program files\softwaredir\dumps\dumpname_0313.rsa.dmp
File, Win32 error 0n87
It's a Multi Theft Auto: San Andreas crash dump (.rsa.dmp).
Why doesn't WinDbg take it? Does it have to do with a certain dump type?
If anyone wants to try opening it, here is the dump file with the issue.
I would like to know if you have a solution to the opening problem, or any other tool to open the dump.
But I really need the exception that caused the crash, so I'll need advice either on how to open it or on how I can fix it.
For case 2 (if it can't be solved for me here), the crash memory locations are:
Version 1.3.5-release 6078.0.000
Time Tue Jan 21 03:13:18 2014
Module C:\Program Files (x86)\MTA San Andreas 1.3\mods\deathmatch\client.dll
Code 0xC0000005
Offset 0x0009E796
EAX 00000000 EBX 30994AB0 ECX 21E82218 EDX 0028F71C ESI 3098E520
EDI 6FBBCCC9 EBP 0028F7BC ESP 0028F6F4 EIP 1B00E796 FLG 00210246
CS 0023 DS 002B SS 002B ES 002B FS 0053 GS 002B
This is what I did to generate and analyze a dump file:
Downloaded ProcDump (a free Sysinternals tool)
Created a folder c:\dumps
Added ProcDump to PATH
Ran procdump -ma -i c:\dumps from a command prompt to install it as the Just-In-Time debugger
Opened the dump file in Visual Studio
Using this method we were able to resolve a difficult bug and determine what exception was causing my machine to crash.
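If the dump does eventually open in WinDbg, a common first pass for pulling out the faulting exception (a sketch only; output depends on the dump) is:
.ecxr
!analyze -v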