Why does !heap -p -a give the correct size while !heap -i fails? Is the block corrupted?
The image upload failed from the Android app, so I uploaded it to Reddit instead:
https://www.reddit.com/r/windbg/comments/i9iq7w/why_does_heap_p_a_command_show_the_block_size_but/?utm_medium=android_app&utm_source=share
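Since the screenshot only survives via the Reddit link, for context these are the two commands being compared (the heap block address is a placeholder, not a value from the original session):

0:000> !heap -p -a <address of heap block>
0:000> !heap -i <address of heap block>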
I'm very new to raster2pgsql, so please bear with me. I'm trying to load a 60 MB .tif (from the High Resolution Settlement Layer project) into my PostGIS-enabled database with the following command:
raster2pgsql -s 5235 -C -F [path to the .tif] public.hrsl_lka | psql -h localhost -U postgres -p 5432 -d project
However, I get the following error:
ERROR: insert_records: Could not allocate memory for INSERT statement
ERROR: process_rasters: Could not convert raster tiles into INSERT or COPY statements
ERROR: Unable to process rasters
However, loading smaller .tifs of around 3 MB from other sources into the same database works fine.
Is there a size limit with raster2pgsql? I'm on PostgreSQL 12.4.
With many thanks,
Gregor
Have you tried setting the tile size with -t?
According to the documentation:
-t: Tile size - expressed as width x height. If not provided, a default is worked out automatically in the range of 32-100 so it best matches the raster dimensions. It is worth remembering that when importing multiple files, tiles will be computed for the first raster and then applied to others.
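For example, a fixed tile size can be passed explicitly; the 256x256 below is just an illustrative value, not a recommendation from the documentation. Smaller tiles generally mean smaller individual INSERT/COPY statements, which seems relevant to the allocation error above:

raster2pgsql -s 5235 -t 256x256 -C -F file.tif public.hrsl_lka | psql -d db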
Alternatively, you can let the script compute it for you by setting -t to auto, e.g.
raster2pgsql -s 5235 -t auto -C -F file.tif public.hrsl_lka | psql -d db
Related answer: Are there limitations using a PostGIS out-db raster?
I use the openocd script below to dump the flash memory of an STM32 microcontroller.
mkdir -p dump
openocd -f board/stm3241g_eval_stlink.cfg \
    -c "init" \
    -c "reset halt" \
    -c "dump_image dump/image.bin 0x08000000 0x100000" \
    -c "shutdown"

FILENAME=dump/image.bin
FILESIZE=$(stat -c%s "$FILENAME")
echo "Size of $FILENAME = $FILESIZE bytes."
The script is supposed to read the whole flash, which is 1 MB in my case, but it only rarely does. Usually it stops reading before the end.
Why don't I get the full 1 MB each time I execute this script? What is causing openocd to stop dumping before the end of the memory?
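As a side note: since the dump does sometimes complete, a retry loop can at least work around the intermittency until the root cause is found. A minimal sketch, reusing only the commands from the script above (1 MB = 0x100000 = 1048576 bytes):

EXPECTED=1048576
FILENAME=dump/image.bin
until [ "$(stat -c%s "$FILENAME" 2>/dev/null || echo 0)" -eq "$EXPECTED" ]; do
    openocd -f board/stm3241g_eval_stlink.cfg \
        -c "init" \
        -c "reset halt" \
        -c "dump_image $FILENAME 0x08000000 0x100000" \
        -c "shutdown"
done
echo "Size of $FILENAME = $(stat -c%s "$FILENAME") bytes."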
You can use dfu-util to reflash your STM32 micros.
On Ubuntu/Debian distros you can install dfu-util with apt:
$ sudo apt-get install dfu-util
$ sudo apt-get install fwupd
Boot your board in DFU mode (check the datasheet). Once in DFU mode, you should see something similar to this:
$ lsusb | grep DFU
Bus 003 Device 076: ID 0483:df11 STMicroelectronics STM Device in DFU Mode
Once booted in DFU mode, reflash your binary:
$ sudo dfu-util -d 0483:df11 -a 0 -s 0x08000000:leave -D build/$(PROJECT).bin
With the -d option you choose the vendorid:productid pair, as listed by lsusb in DFU mode.
With the -a 0 option you select alternate setting 0; check the settings available as in the following example:
$ sudo dfu-util -l
Found DFU: [0483:df11] ver=2200, devnum=101, cfg=1, intf=0, alt=1, name="#Option Bytes /0x1FFFF800/01*016 e", serial="FFFFFFFEFFFF"
Found DFU: [0483:df11] ver=2200, devnum=101, cfg=1, intf=0, alt=0, name="#Internal Flash /0x08000000/064*0002Kg", serial="FFFFFFFEFFFF"
As you can see, alt=0 is for internal flash memory.
With the -s option you specify the flash memory address where your binary is written. Check the memory map in your datasheet.
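Incidentally, dfu-util can also read flash back out with -U (upload from the device into a file), which is closer to what the original dump script was doing. A hedged sketch, assuming the same IDs and a 1 MB internal flash:

$ sudo dfu-util -d 0483:df11 -a 0 -s 0x08000000:0x100000 -U readback.bin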
Hope this helps! :-)
I am trying to emulate a Raspberry Pi Zero W with QEMU, based on an image I used on a real Raspberry Pi Zero W.
The command I am using is:
sudo qemu-system-arm \
-kernel ./qemu-rpi-kernel/kernel-qemu-4.9.59-stretch \
-append "root=/dev/sda2 panic=1 rootfstype=ext4 rw" \
-hda pi_zero_kinetic_raspbian.qcow \
-cpu arm1176 -m 512 \
-M versatilepb \
-no-reboot \
-serial stdio \
-net nic -net user \
-net tap,ifname=vnet0,script=no,downscript=no
But QEMU complains:
Error: unrecognized/unsupported machine ID (r1 = 0x00000183)
So I added this option:
-dtb linux/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts
But in this case I get:
qemu-system-arm: Unable to copy device tree in memory
Couldn't open dtb file qemu-rpi-kernel/tools/linux/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts
So I tried to compile the dts in order to get the dtb with:
dtc -O dtb -o bcm2835-rpi-zero-w.dtb bcm2835-rpi-zero-w.dts
But the compilation fails and I get:
Error: bcm2835-rpi-zero-w.dts:13.1-9 syntax error
FATAL ERROR: Unable to parse input tree
I couldn't find any tutorial about the Pi Zero, and all the tutorials about the first Raspberry Pi seem to be outdated. I am not sure that compiling the dtb on my own is the way to go.
Any input would be appreciated, thanks!
This isn't going to work, because the QEMU option "-M versatilepb" says "emulate a VersatilePB development board", which will not run a kernel that is intended to boot on the Pi Zero. The versatilepb board does not have devices in the places that a Pi Zero DTB file says they are, so if you provide the kernel with a Pi Zero DTB then the kernel is going to crash immediately because it can't find anything where it expects.
In general Arm devboards are not like x86 -- they are all different, and you can't just boot a kernel intended for one on a different one. This is in fact what the "unrecognized machine ID" error is telling you -- it's from the guest kernel, and it's saying "I can't boot on this board".
You need to either:
- use -M versatilepb and pass QEMU a kernel and dtb intended for that machine, not some other one
- use some other -M option and a kernel and dtb that work with it (for instance we support 'raspi2' now for a Raspberry Pi 2 board model, with some notable caveats including "no USB, no networking")
Also, as you seem to have discovered, -dtb wants a DTB file (the compiled binary), not a DTS file (the source).
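To illustrate the second option, a raspi2 invocation might look roughly like the sketch below. The kernel, DTB, and disk image names are placeholders for files that actually match a Pi 2 Raspbian install, and the caveats above (no USB, no networking) still apply:

qemu-system-arm \
    -M raspi2 \
    -kernel kernel7.img \
    -dtb bcm2709-rpi-2-b.dtb \
    -append "root=/dev/mmcblk0p2 rootfstype=ext4 rw panic=1" \
    -drive file=raspbian.img,format=raw,if=sd \
    -serial stdio \
    -no-reboot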
I am trying to create an image (.iso) of my installed CentOS 5.11 system. For this I am using Mondo Rescue, with the command:
mondoarchive -Oi -9 -L -d /tmp/centos_iso -I / -T /tmp/centos_iso -p centos_image -s 4480m -E "/tmp|/var/spool/squid|/var/log"
The image is generated and appears to be functional.
To install the ISO on another machine I'm using the "Nuke" option on the Mondo Rescue boot screen.
When attempting to perform the installation, the following error occurs during the process:
Kernel panic - not syncing: No init found. Try passing init= options to kernel.
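For what it's worth, the panic message itself points at a first thing to try: appending an explicit init= to the kernel arguments at the restore media's boot prompt. The path below is the usual CentOS 5 default and is an assumption, not something verified against this image:

boot: nuke init=/sbin/init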
I am working on a PostgreSQL extension in C that segfaults, so I want to look at the core dump file on my OS X Lion box. However, there are no core files in /cores or anywhere else that I can find. It appears that they are enabled in the system but are limited to a size of 0:
> sysctl kern.coredump
kern.coredump: 1
> ulimit -c
0
I tried setting ulimit -c unlimited in the shell session I'm using to start and stop PostgreSQL, and it seems to stick:
> ulimit -c
unlimited
And yet no matter what I do, no core files. I am starting PostgreSQL with pg_ctl -c, where the -c tells PostgreSQL to generate core dumps. But the system has nothing. How can I get Lion to dump core files?
The /cores/ directory is not necessarily there in Lion, and if it's not there, you won't get cores. You should be able to set the ulimit (as you have), run a program like cat(1), quit with SIGQUIT (control-backslash), and get a core dump:
lion:~ user$ ulimit -c unlimited
lion:~ user$ cat
^\
^\
Quit: 3 (core dumped)
lion:~ user$ ls -l /cores/
total 716584
-r-------- 1 user user 366891008 Jun 21 23:35 core.1263
lion:~ user$
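If /cores/ does not exist on your machine, one thing worth trying (an assumption on my part, not verified on a fresh Lion install) is recreating it with the stock ownership and permissions:

$ sudo mkdir /cores
$ sudo chown root:admin /cores
$ sudo chmod 1775 /cores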
Technical Note TN2124 (http://developer.apple.com/library/mac/#technotes/tn2124/), as suggested by Yuji in https://stackoverflow.com/a/3783403/225077, is helpful.