I am trying to get networking working on a Yocto image under QEMU ARM64 on Windows.
I have proven that it works with Ubuntu Server on ARM following this example:
https://gist.github.com/billti/d904fd6124bf6f10ba2c1e3736f0f0f7
so I'm trying to start the image with the same networking as the example above:
qemu-system-aarch64 -m 2G -cpu cortex-a57 -M virt -kernel Image-qemuarm64-tt.bin -drive file=tt-qemuarm64-tt.rootfs.ext4 -nographic -append "root=/dev/vda" -device virtio-net-device, netdev=net0 -netdev user,hostfwd=tcp:127.0.0.1:2222-:22,id=net0
but I get the warning below and the image won't start:
WARNING: Image format was not specified for 'tt-qemuarm64-tt.rootfs.ext4' and probing guessed raw.
         Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
         Specify the 'raw' format explicitly to remove the restrictions.
qemu-system-aarch64: netdev=net0: drive with bus=0, unit=0 (index=0) exists
Does anybody know what the issue is, based on the info above?
You have an extra space character between -device virtio-net-device, and netdev=net0. This means that QEMU treats netdev=net0 as a separate command line argument, which is to say a disk image filename. It then complains because it thinks you've specified an image for the first disk drive in two conflicting ways.
TLDR: remove that extra space.
The warning about 'raw' format is separate; you can silence it by changing your -drive option to explicitly tell QEMU your disk is a raw image, with -drive file=whatever,format=raw.
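Putting both fixes together, the command from the question would look something like this (same files, just the stray space removed and format=raw added):
qemu-system-aarch64 -m 2G -cpu cortex-a57 -M virt -kernel Image-qemuarm64-tt.bin -drive file=tt-qemuarm64-tt.rootfs.ext4,format=raw -nographic -append "root=/dev/vda" -device virtio-net-device,netdev=net0 -netdev user,hostfwd=tcp:127.0.0.1:2222-:22,id=net0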
Docker version 18.03.1-ce, build 9ee9f40
I'm using PowerShell to build a big project on Windows.
When issuing the command
docker save docker.elastic.co/kibana/kibana > deploy/kibana.docker
I'm getting a 1.4 GB file.
The same command run in CMD produces a 799 MB image.
The same command run in Bash produces a 799 MB image.
CMD and Bash take less than a minute to save the image, while PowerShell takes about 10 minutes.
I did not manage to find any normal explanation of this phenomenon in docker or MS docs.
Right now the "solution" is
Write-Output "Saving images to files"
cmd /c .\deploy-hack.cmd
But I want to find the actual underlying reason for this.
PowerShell doesn't support outputting / passing raw byte streams through - any output from an external program such as docker is parsed line by line into strings, and the strings are then re-encoded on output to a file (if necessary).
It is the overhead of parsing, decoding and re-encoding that explains the performance degradation.
Windows PowerShell's > redirection operator produces UTF16-LE ("Unicode") files by default (whereas PowerShell Core uses UTF8), i.e., files that use (at least) 2 bytes per character. Therefore, it produces files that are twice the size of raw byte input[1], because each byte is interpreted as a character that receives a 2-byte representation in the output.
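A small way to see the doubling in action, purely illustrative (the file names here are made up):
# PowerShell's > re-encodes the external program's output as UTF-16LE (plus a BOM):
cmd /c "echo hello" > ps-redirect.txt
# Letting cmd.exe do the redirection itself writes the raw single-byte output instead:
cmd /c "echo hello > cmd-redirect.txt"
# Compare the sizes: the PowerShell-redirected file is roughly twice as large
(Get-Item ps-redirect.txt, cmd-redirect.txt) | Select-Object Name, Length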
Your best bet is to use docker save with the -o / --output option to specify the output file (see the docs):
docker save docker.elastic.co/kibana/kibana -o deploy/kibana.docker
[1] Strictly speaking, how PowerShell interprets output from external programs depends on the value of [console]::OutputEncoding, which, if set to UTF8 (chcp 65001 on Windows), could situationally interpret multiple bytes as a single character. However, on Windows PowerShell the default is determined by the (legacy) system locale's OEM code page, which is typically a single-byte encoding.
I have an STM32F404 board and I am trying to flash it. I am following this tutorial.
In the project Makefile
$(PROJ_NAME).elf: $(SRCS)
	$(CC) $(CFLAGS) $^ -o $@
	$(OBJCOPY) -O ihex $(PROJ_NAME).elf $(PROJ_NAME).hex
	$(OBJCOPY) -O binary $(PROJ_NAME).elf $(PROJ_NAME).bin

burn: proj
	$(STLINK)/st-flash write $(PROJ_NAME).bin 0x8000000
The bin file is generated using OBJCOPY and then flashed using the Make target burn.
My questions:
Question 1: What does OBJCOPY=arm-none-eabi-objcopy do in this case? I opened the man page but I didn't fully understand; can anyone explain it simply?
Question 2: Flashing the bin file gives the expected result (LEDs blinking). However, the LEDs do not blink when flashing the elf file with $(STLINK)/st-flash write $(PROJ_NAME).elf 0x8000000, so why?
Question 1: What does OBJCOPY=arm-none-eabi-objcopy do in this case? I opened the man page but I didn't fully understand; can anyone explain it simply?
It assigns the value arm-none-eabi-objcopy to the make variable OBJCOPY.
When make executes this command:
$(OBJCOPY) -O binary $(PROJ_NAME).elf $(PROJ_NAME).bin
the actual command that runs is
arm-none-eabi-objcopy -O binary tim_time_base.elf tim_time_base.bin
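If you want to verify the expansion yourself without actually building anything, make can print the commands it would run (a sketch assuming PROJ_NAME is tim_time_base as above):
# Dry run: print the expanded recipe without executing it (-B forces it even if the .elf is up to date)
make -n -B tim_time_base.elf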
Question 2: Flashing the bin file gives the expected result (LEDs blinking). However, the LEDs do not blink when flashing the elf file with $(STLINK)/st-flash write $(PROJ_NAME).elf 0x8000000, so why?
The tim_time_base.elf is an ELF file -- it has metadata associated with it. Run arm-none-eabi-readelf -h tim_time_base.elf to see some of this metadata.
But when your processor jumps to location 0x8000000 after reset, it is expecting to find executable instructions, not metadata. When it finds "garbage" it doesn't understand, it probably just halts. It certainly doesn't find instructions to blink the lights.
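One way to see the difference for yourself, assuming the artifacts produced by the Makefile above (xxd is only used here to hex-dump the first bytes):
# The .bin starts with the raw vector table (initial stack pointer, then the reset handler address)
xxd -l 16 tim_time_base.bin
# The .elf starts with the ELF header: the magic bytes 7f 45 4c 46 ("\x7fELF"), not executable code
xxd -l 16 tim_time_base.elf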
In case someone wants to use the DFU (Device Firmware Upgrade) function, this tutorial teaches how to use the binary file to be loaded via USB, when the STM32 is operating as USB Host (or maybe OTG):
STM32 USB training - 11.3 USB MSC DFU host labs
This tutorial is part of a series of videos that are highly recommended for understanding a little better how the STM32 USB ports work and how to use them (the videos are provided by the STM32 manufacturer itself; I recommend watching all the videos on this channel):
MOOC - STM32 USB training
Note: The example code from the STM32 tutorials is available in the descriptions of the videos themselves.
The binary file (*.bin) can be obtained with the command that the colleague above (Employed Russian) explained, and that command can also be adapted to produce a file containing the comparison value for CRC usage; some details can be seen in the following posts:
Hands-on: CRC Checksum Generation
Srec_cat could be used to generate CRC checksum and put it into HEX
file. To simplify the process, please put srec_cat.exe into the root
of project folder.
Some tips and solutions about this CRC usage (Windows/Linux)
Unfortunately the amount of code is too big to post here directly, but I leave the code related to the other answer below:
arm-none-eabi-objcopy -O ihex "${BuildArtifactFileBaseName}.elf" "${BuildArtifactFileBaseName}.hex" && ..\checksum.bat ${BuildArtifactFileBaseName}.hex
Contents of the checksum.bat file:
#!/bin/bash
# Windows [Dos comment: REM]:
#..\srec_cat.exe %1 -Intel -fill 0xFF 0x08000000 0x080FFFFC -STM32 0x080FFFFC -o ROM.hex -Intel
# Linux [Linux comment: #]:
srec_cat $1 -Intel -fill 0xFF 0x08000000 0x080FFFFC -STM32 0x080FFFFC -o ROM.hex -Intel
Note: In this case, the file to be written is ROM.hex (you will need to configure STM32CubeIDE to be able to do this operation; the IDE uses the *.elf file, see how to do it in the tips above).
This other tutorial deals with using the file with *.DFU extension:
DFU - DfuSe
The key benefits of the DFU Bootloader are: No specific tools such as
JTAG, ST-LINK or USB-to-UART cable are needed. The ability to program
an "empty" STM32 device in a newly-assembled board via USB. And easy
upgrading of the STM32 firmware during development or pre-production.
This need for a HEX file fits naturally with the ROM.hex file generated with the CRC value above, so the workflow is practically a continuation:
You must generate a .DFU file from an .HEX or .S19 file; to do this,
use the DFU File Manager.
But it seems that using the *.DFU file is not as standalone as using the *.BIN file, so I found this other code that converts the HEX file (generated with CRC) into a *.BIN file, which can be used with a USB stick, as per the tutorial cited at the beginning of this answer (11.3 USB MSC DFU host):
objcopy --input-target=ihex --output-target=binary code00.hex code00.bin
Source
It sounds a little confusing, but the steps are as follows (a combined command-line sketch appears after the list):
1- The STM32CubeIDE generates the *.elf file.
2- After compilation, the *.elf file is converted to *.hex.
3- CRC value is added in *.hex file via srec_cat application.
4- Now the *.hex file is converted to *.bin.
5- The BIN file is then stored on a USB flash drive.
6- STM32 updates firmware using USB flash drive file.
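Putting steps 2 through 4 together on the command line might look roughly like this (the file names are placeholders; the srec_cat arguments are the ones from the checksum script above):
# Step 2: convert the ELF produced by the IDE to Intel HEX
arm-none-eabi-objcopy -O ihex firmware.elf firmware.hex
# Step 3: fill unused flash with 0xFF and append the CRC value at the end of flash
srec_cat firmware.hex -Intel -fill 0xFF 0x08000000 0x080FFFFC -STM32 0x080FFFFC -o ROM.hex -Intel
# Step 4: convert the CRC-carrying HEX back to a raw binary for the USB flash drive
arm-none-eabi-objcopy --input-target=ihex --output-target=binary ROM.hex firmware.bin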
To use the *.BIN file it is necessary that the STM32 is already programmed to load the BIN file. If it is not programmed (the STM32 is empty, virgin, or the program was not made to load the BIN file), it will be necessary to use an ST-Link or another programmer, or perhaps to make use of the DFU method described in the tutorial above (DFU - DfuSe).
XV6 has 2 GB for user space and 2 GB for kernel space. If I want to change it to 3 GB for user space and 1 GB for kernel space, how should I implement this modification?
I tried modifying KERNBASE and PHYSTOP in memlayout.h and then modifying the start address in the linker script kernel.ld, but it failed.
Your approach is not wrong. Are you running xv6 using QEMU? If so, modify the Makefile and increase the memory to 4GB or more.
There is a place around line 215 of the file where the memory is set using the -m option. The default is 512 MB.
QEMUOPTS = -drive file=fs.img,index=1,media=disk,format=raw -drive file=xv6.img,index=0,media=disk,format=raw -smp $(CPUS) -m 512 $(QEMUEXTRA)
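For example, increasing the guest RAM to 4 GB would only change the -m value (a sketch; the rest of the line stays the same):
QEMUOPTS = -drive file=fs.img,index=1,media=disk,format=raw -drive file=xv6.img,index=0,media=disk,format=raw -smp $(CPUS) -m 4096 $(QEMUEXTRA)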
Then modify the memlayout.h and the kernel.ld file.
It should probably work then. If xv6 does not work, please tell me which part failed, and also show the modified memlayout.h and kernel.ld files.
I'm running the gsutil cp command in parallel (with the -m option) on a directory with 25 4 GB JSON files (which I am also compressing with the -z option).
gsutil -m cp -z json -R dir_with_4g_chunks gs://my_bucket/
When I run it, it will print to the terminal that it is copying all but one of the files. By this I mean that it prints one of these lines per file:
Copying file://dir_with_4g_chunks/a_4g_chunk [Content-Type=application/octet-stream]...
Once the transfer for one of them is complete, it says that it'll be copying the last file.
The result of this is that one file only starts to copy when one of the others finishes copying, significantly slowing down the process.
Is there a limit to the number of files I can upload with the -m option? Is this configurable in the boto config file?
I was not able to find the .boto file on my Mac (as per jterrace's answer above), so instead I specified these values using the -o switch:
gsutil -m -o "Boto:parallel_thread_count=4" cp directory1/* gs://my-bucket/
This seemed to control the rate of transfer.
From the description of the -m option:
gsutil performs the specified operation using a combination of
multi-threading and multi-processing, using a number of threads and
processors determined by the parallel_thread_count and
parallel_process_count values set in the boto configuration file. You
might want to experiment with these values, as the best value can vary
based on a number of factors, including network speed, number of CPUs,
and available memory.
If you take a look at your .boto file, you should see this generated comment:
# 'parallel_process_count' and 'parallel_thread_count' specify the number
# of OS processes and Python threads, respectively, to use when executing
# operations in parallel. The default settings should work well as configured,
# however, to enhance performance for transfers involving large numbers of
# files, you may experiment with hand tuning these values to optimize
# performance for your particular system configuration.
# MacOS and Windows users should see
# https://github.com/GoogleCloudPlatform/gsutil/issues/77 before attempting
# to experiment with these values.
#parallel_process_count = 12
#parallel_thread_count = 10
I'm guessing that you're on Windows or Mac, because the default values for non-Linux machines are 24 threads and 1 process. This would result in copying 24 of your files first, then the last 1 file afterward. Try experimenting with increasing these values to transfer all 25 files at once.
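For instance, on a Mac or Windows machine you might uncomment and raise the two lines in your .boto file so that all 25 files can start at once (the exact numbers are just a guess; see the linked issue before pushing them higher):
parallel_process_count = 1
parallel_thread_count = 25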
I want to read a block in a zpool storage pool using the dd command. Since zpool doesn't create a device file like other volume managers such as VxVM, I don't know which block device to use for reading. Is there any way to read block-by-block data in a zpool?
You can probably use the zdb command. Here is a pdf about it, and the help output.
http://www.bruningsystems.com/osdevcon_draft3.pdf
# zdb --help
zdb: illegal option -- -
Usage: zdb [-CumdibcsDvhL] poolname [object...]
zdb [-div] dataset [object...]
zdb -m [-L] poolname [vdev [metaslab...]]
zdb -R poolname vdev:offset:size[:flags]
zdb -S poolname
zdb -l [-u] device
zdb -C
Dataset name must include at least one separator character '/' or '#'
If dataset name is specified, only that dataset is dumped
If object numbers are specified, only those objects are dumped
Options to control amount of output:
-u uberblock
-d dataset(s)
-i intent logs
-C config (or cachefile if alone)
-h pool history
-b block statistics
-m metaslabs
-c checksum all metadata (twice for all data) blocks
-s report stats on zdb's I/O
-D dedup statistics
-S simulate dedup to measure effect
-v verbose (applies to all others)
-l dump label contents
-L disable leak tracking (do not load spacemaps)
-R read and display block from a device
Below options are intended for use with other options (except -l):
-A ignore assertions (-A), enable panic recovery (-AA) or both (-AAA)
-F attempt automatic rewind within safe range of transaction groups
-U <cachefile_path> -- use alternate cachefile
-X attempt extreme rewind (does not work with dataset)
-e pool is exported/destroyed/has altroot/not in a cachefile
-p <path> -- use one or more with -e to specify path to vdev dir
-P print numbers parsable
-t <txg> -- highest txg to use when searching for uberblocks
Specify an option more than once (e.g. -bb) to make only that option verbose
Default is to dump everything non-verbosely
Unfortunately, I don't know how to use it.
# zdb
tank:
version: 28
name: 'tank'
...
vdev_tree:
...
children[0]:
...
children[0]:
...
path: '/dev/label/bank1d1'
phys_path: '/dev/label/bank1d1'
...
So I took the array indexes 0 0 to get my first disk (bank1d1) and did this command. It did something. I don't know how to read the output.
zdb -R tank 0:0:4e00:200 | strings
Have fun... try not to destroy anything. Here is your warning from the man page:
The zdb command is used by support engineers to diagnose failures and
gather statistics. Since the ZFS file system is always consistent on
disk and is self-repairing, zdb should only be run under the direction
by a support engineer.
And please tell us what you actually were looking for. Was Alan right that you wanted to do backups?
You can read from the underlying raw devices in the pool, but as far as I can tell there's no concept of a single contiguous block device representing the whole pool.
A ZFS pool is not the single contiguous run of sectors that 'classic' volume managers present. ZFS's internal structure is closer to a tree, which would be somewhat challenging to represent as a flat array of blocks.
Ben Rockwood's blog post "zdb: Examining ZFS At Point-Blank Range" may help getting better idea of what's under the hood.
I have no idea what might be useful about doing so, but you can certainly read blocks from the underlying devices used by the pool. They are shown by the zpool status command. If you are really asking about zvols instead of zpools, they are accessible under /dev/zvol/rdsk/pool-name/zvol-name. If you want to look at internal zpool data, you probably want to use zdb.
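As a rough illustration of reading from an underlying device (the device name is the one from the zdb output above; the offset and count are made up):
# List the pool's member devices
zpool status tank
# Read one 512-byte sector from a member device at an arbitrary offset and hex-dump it
dd if=/dev/label/bank1d1 bs=512 skip=12345 count=1 | od -x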
If you want to backup ZFS filesystems you should be using the following tools:
'zfs snapshot' to create a stable snapshot of the filesystem
'zfs send' to send a copy of the snapshot to somewhere else
'zfs receive' to go back from a snapshot to a filesystem.
'dd' is almost certainly not the tool you should be using. In your case you could 'zfs send' and redirect the output into a file on your other filesystem.
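A minimal sketch of that flow, with a made-up dataset name and target path:
# Take a stable snapshot of the filesystem
zfs snapshot tank/data@backup1
# Serialize the snapshot into a file on another filesystem
zfs send tank/data@backup1 > /otherfs/tank-data-backup1.zfs
# Later, turn the stream back into a filesystem
zfs receive tank/restored < /otherfs/tank-data-backup1.zfs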
See chapter 7 of the ZFS administration guide for more details.