Why is 38 written as 9 + 9 + 9 + 12 - 1 in the xv6 RISC-V source code?

I was trying to add a few system calls to the xv6 source code developed at MIT, and while reading this resource (https://pdos.csail.mit.edu/6.S081/2020/xv6/book-riscv-rev1.pdf), on page 26, they consider the maximum possible virtual address that xv6 RISC-V supports.
When I tried to look at the source code for the definition of MAXVA (MAXimum Virtual Address), I came across the following code snippet.
// one beyond the highest possible virtual address.
// MAXVA is actually one bit less than the max allowed by
// Sv39, to avoid having to sign-extend virtual addresses
// that have the high bit set.
#define MAXVA (1L << (9 + 9 + 9 + 12 - 1))
I was intrigued by the expression '9 + 9 + 9 + 12 - 1' instead of simply writing '38'. I tried to look up the underlying reasoning for this but did not find anything related. Is this some kind of optimization? If so, at what level is this relevant and where else could this be relevant?
I have some experience in assembly language programming and understand the basics of how C code is translated into assembly and what the final assembly for the bit shift might look like (using x86 salq and sarq, or RISC-V slli). Any hints/thoughts would be appreciated as well.

The answer can be guessed from this comment in vm.c:
// The risc-v Sv39 scheme has three levels of page-table
// pages. A page-table page contains 512 64-bit PTEs.
// A 64-bit virtual address is split into five fields:
// 39..63 -- must be zero.
// 30..38 -- 9 bits of level-2 index.
// 21..29 -- 9 bits of level-1 index.
// 12..20 -- 9 bits of level-0 index.
// 0..11 -- 12 bits of byte offset within the page.
The three 9s correspond to the three 9-bit page-table indexes (one per level).
The 12 corresponds to the 12-bit byte offset within the page.
For the compiler, writing 9 + 9 + 9 + 12 - 1 is exactly the same as writing 38, but for a human reader who is aware of the bit fields, the former is clearer: it spells out how the virtual address is composed.
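To make the split concrete, here is a minimal C sketch of how the three 9-bit indexes and the 12-bit offset can be pulled out of an Sv39 virtual address. The macro names are chosen for the example (xv6's riscv.h defines similar helpers), so treat it as an illustration rather than a quote from the source:
#include <stdint.h>

#define PGSHIFT 12                  // bits 0..11: byte offset within the page
#define PXMASK  0x1FFUL             // 9 bits: one page-table index
#define PXSHIFT(level) (PGSHIFT + 9 * (level))
// Extract the page-table index for the given level (0, 1 or 2).
#define PX(level, va) ((((uint64_t)(va)) >> PXSHIFT(level)) & PXMASK)
PX(2, va) picks bits 30..38, PX(1, va) bits 21..29 and PX(0, va) bits 12..20, exactly the fields listed in the comment, and MAXVA = 1L << 38 is one beyond the highest address xv6 uses, keeping bit 38 (and hence the sign-extension bits 39..63) clear.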

Related

OTP size for STM32F746XX

I found a strange mismatch in the reference manual for the OTP memory size:
According to the manual (RM0385, page 78), the OTP memory starts at address 0x1FF0 F000 and ends at address 0x1FF0 F41F (in AXIM mode), yet it is declared as a memory area with a size of 1024 bytes:
But if I calculate 0x1FF0F41F - 0x1FF0F000 + 1 I get a total of 1056 bytes OTP memory?!
Same addresses are set in the latest CMSIS header V1.17.0 (https://raw.githubusercontent.com/STMicroelectronics/STM32CubeF7/master/Drivers/CMSIS/Device/ST/STM32F7xx/Include/stm32f746xx.h):
#define FLASH_OTP_BASE 0x1FF0F000UL
#define FLASH_OTP_END 0x1FF0F41FUL
Is this a typo in the manual or is my calculation wrong?
I don't think it's a typo, and your calculation is correct.
My explanation is as follows:
Chapter 3.6 of the same Reference Manual shows the organization of the one-time programmable (OTP) part of the OTP area, and explains:
The OTP area is divided into 16 OTP data blocks of 64 bytes and one lock OTP block of 16 bytes. The OTP data and lock blocks cannot be erased. The lock block contains 16 bytes LOCKBi (0 ≤ i ≤ 15) to lock the corresponding OTP data block (blocks 0 to 15). (...)
So it consists of 1024 data bytes and 16 lock bytes. I guess the lock bytes are implemented as special "registers" and are not considered part of the sector. It's more a matter of definition and does not really matter that much.
There probably is a 1024-byte sector to write the (one-time programmable) data to, but the address range is slightly larger because the lock bits have to be addressed as well.
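For illustration, assuming the layout quoted from chapter 3.6 (16 data blocks of 64 bytes, followed directly by the 16 lock bytes), the addresses could be derived like this. The helper names are made up for the example and the lock-byte base is an assumption; only FLASH_OTP_BASE comes from the CMSIS header:
#include <stdint.h>

#define FLASH_OTP_BASE 0x1FF0F000UL   /* from stm32f746xx.h */

/* Base address of OTP data block i (0 <= i <= 15), 64 bytes each. */
static inline uint32_t otp_block_addr(unsigned i) { return FLASH_OTP_BASE + i * 64U; }

/* Address of lock byte LOCKBi, assumed to sit right after the 1024 data bytes. */
static inline uint32_t otp_lock_addr(unsigned i)  { return FLASH_OTP_BASE + 16U * 64U + i; }
Under this assumption the data bytes occupy 0x1FF0F000-0x1FF0F3FF and the lock bytes 0x1FF0F400-0x1FF0F40F.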

CANopen dbc edit - selecting non-sequential bytes for 16-bit data

I am trying to write a .dbc file for a CANopen data log (an example of one of the lines I am trying to use is below):
Time 884.163000, ID:2a1, Data Bytes (0)7b (1)00 (2)95 (3)68 (4)e5 (5)8e (6)49 (7)54
I have written .dbc files using both Motorola and Intel endianness with CANdb++, covering 16-bit data over 2 bytes, but this has always been with sequential bytes, i.e. (2),(3) or (5),(6).
The bytes I need to use for the particular data in the above example are (3) and (7) in Intel format (54,68 in this case). I have written a .dbc for just byte 3, shown in the snip below.
BO_ 673 Rig_Pressure: 4 Vector__XXX
SG_ Pressure_Multiplex M : 15|8@0+ (1,0) [0|0] "" Vector__XXX
SG_ Pressure m0 : 31|8@0- (0.45,58) [0.399999999999999|115.15] "Bar" Vector__XXX
I am asking if there is a way to modify the text file (or use CANdb++) to specify each bit, or pick 2 non-sequential bytes in the .dbc, by modifying the bit start, something like
31|8@0- to 31|8@0- 63|8@0-
I am far from a computer programmer; I much prefer GUI-based programs and am only starting out learning Python, so please be gentle!!!
Thank you!
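For reference, if the combination ends up being done outside the .dbc (e.g. in a small post-processing step), picking two non-adjacent bytes out of the 8 data bytes and joining them into one 16-bit value could look like the C sketch below. It assumes byte (7) holds the low byte and byte (3) the high byte, which is only a guess based on the "(54,68 in this case)" remark above:
#include <stdint.h>

/* Hypothetical helper: combine two non-adjacent CAN data bytes into one
   16-bit raw value; the scaling/offset from the .dbc would then be applied. */
static uint16_t combine_bytes(const uint8_t data[8])
{
    return (uint16_t)((data[3] << 8) | data[7]);   /* 0x68, 0x54 -> 0x6854 */
}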

Interpreting the I2C register map for the ISL12022

I am trying to program an ISL12022M RTC and am having trouble interpreting the register map (self-taught with little experience). The documentation says that the RTC registers (SC, MN, HR, DT, MO, YR, DW) are BCD representations. In order to allow write capability into the RTC registers, the WRTC bit (bit 6 of address 08h) is set to '1'. The map looks like this:
The FAQ example from the Intersil site tells me that to set the WRTC bit I need to send DEh (slave address) 08h (register address) and 41 (Enable WRTC bit, other bits remain in default). Why not hex? Why 41 and not 40? And what does SC22 in SC bit 6, SC21 in bit 5, etc. mean?
Datasheet
Example
I've read the documentation until I can't see anymore and I've searched until I am just getting more confused. Any help is appreciated.
Well, it looks like these values in the map are nibbles. The range for the first register (seconds) is 0 - 59. When represented in BCD, 4 bits are needed for the digit in the ones place and 3 bits are needed for the digit in the tens place. So bits 0 - 3 belong to the first nibble; bit 0 = SC (register name) 1 (first nibble) 0 (first bit), i.e. SC10. Bits 4, 5 and 6 belong to the second nibble; bit 4 = SC (register name) 2 (second nibble) 0 (first bit), i.e. SC20. Bit 7 is not needed.
The example sheet from Intersil has a typo; the WRTC value needs to be 40h or 41h.
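A small C sketch of the BCD packing described above, plus the register write from the FAQ example (slave address DEh, register 08h). The i2c_write() function is a placeholder for whatever I2C routine the target platform provides:
#include <stdint.h>

/* Pack a 0-59 value into the RTC's BCD layout:
   bits 0-3 = ones digit, bits 4-6 = tens digit. */
static uint8_t bin_to_bcd(uint8_t v) { return (uint8_t)(((v / 10) << 4) | (v % 10)); }
static uint8_t bcd_to_bin(uint8_t b) { return (uint8_t)((b >> 4) * 10 + (b & 0x0F)); }

/* Placeholder: send one register write on the bus. */
extern void i2c_write(uint8_t slave_addr, uint8_t reg_addr, uint8_t value);

void isl12022_enable_write(void)
{
    /* Set WRTC (bit 6) in register 08h; 0x40 if all other bits stay at 0. */
    i2c_write(0xDE, 0x08, 0x40);
}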

CJ1W-CT021 Card Error Omron PLC

I got this error on a CJ1W-CT021 card. It happened all of a sudden after it had been running the program for some time. I found it by going to the IO Table and Unit Setup, clicking on parameters for that card, and finding two settings in red:
Output Control Mode and And/Or Counter Output Patterns. This was their reading:
Output Control Mode = 0x40 No Applicable Set Data
And/Or Counter Output Patterns = 0x64 No Applicable Set Data
I have no idea how or why these would change; they should have been:
Output Control Mode = Range Mode
And/Or Counter Output Patterns = Logically Or
I have added some new code, but nothing big or even really used, as I had the outputs of the new rungs jumped out. One thing I thought might cause this is that every cycle of the program it was checking the value of an encoder connected to this card. Maybe checking it too often? Anyhow, if anyone has any idea what these do or how they would change, please post.
Thanks
Glen
EDIT: I wanted to add the bits I used; I don't think any are part of this card's internal I/O, but I may be wrong.
Work bits 66.01 - 66.06 , 60.02 - 60.07 , 160.12, 160.01 - 160.04, 161.02, 161.03
and
Data Bits (D)20720, 20500, 20600, 20000, 20590, 20040
I would check sections 4-1 through 4-2-4 of the CT021 manual - make sure you aren't writing to reserved memory locations used for configuration data of the CT021 unit.
EDIT:
1) Check page 26 of the above manual to see the location of the machine switch settings. The bottom dial sets the '1's digit and the top dial sets the '10's digit (i.e. the machine number can be 0-99);
2) Per page 94, D-Memory is allocated from D20000 + (N × 100) (400 words), where N is equal to the machine number.
I would guess that your machine number is set to 0 (ie: both dials at '0'), 5, or 6. In the case of machine number '0', this would make the reserved DM range D20000 -> D20399. In this case (see pages 97, 105) D20000 would contain configuration data for Output Control Mode (bits 00-07) and Counter Output Patterns (bits 08-15). It looks like you are writing 0x6440 to D20000 (or D20500, D20600 for machine number 5 or 6, respectively) and are corrupting the configuration data.
If your machine number is 0 then stay away from D20000-D20399 unless you are directly trying to modify the counter's configuration state (ie: don't use them in your program!).
If the machine number is 1, then likewise for D20100-D20499, etc. If you have multiple counters, their ranges can overlap, so they should always be set with machine numbers that are 4 apart from each other.
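To make the arithmetic from the answer concrete, here is a small C sketch (illustrative only; the function names are made up) that computes the reserved DM range for a given machine number and checks whether a DM address used in the program falls inside it:
#include <stdbool.h>

/* Reserved DM range for a CJ1W-CT021 with machine number n:
   D20000 + (n * 100), 400 words long (per page 94 of the manual). */
static int dm_reserved_start(int n) { return 20000 + n * 100; }
static int dm_reserved_end(int n)   { return dm_reserved_start(n) + 399; }

/* True if DM address 'addr' (e.g. 20000 for D20000) collides with the
   configuration area of machine number n. */
static bool dm_collides(int addr, int n)
{
    return addr >= dm_reserved_start(n) && addr <= dm_reserved_end(n);
}
For machine number 0 this flags D20000 and D20040 from the list above.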

How to calculate the number of data/error blocks of a QR code at version > 3

I am working on a QR code encoding/decoding project.
I have read through ISO/IEC 18004 (2006) and some tutorials ( http://www.thonky.com/guides/
http://www.matchadesign.com/_blog/Matcha_Design_Blog/post/QR_Code_Demystified_-_Part_1/
http://www.swetake.com/qr/qr1_en.html
)
The ISO documentation and those very nice tutorials helped me a lot. But there is still one thing I can't understand: how to calculate the number of data/error blocks when creating a QR code at version 3 or higher.
The image below is from the ISO/IEC 18004 – 2006:
A version 7-H (H is the error correction level) symbol has 66 data codewords and 130 error correction codewords. Both are split into 5 blocks.
The document says that the number of blocks n (in this case n = 5) can be obtained from Table 9 (ISO 18004) according to the version and error correction level, but I can't seem to arrive at that number. Please show me how to get it.
Now I've got it. All the information needed for block splitting is actually in Table 9 of the ISO/IEC 18004 document; I had just read it carelessly.
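For anyone else tripping over the same thing: the block structure is not computed, it is looked up per version and error correction level. A minimal C sketch of a table-driven lookup for the 7-H case discussed above (the row values are quoted from memory of Table 9 and commonly derived tables, so double-check them against the standard):
#include <stdio.h>

/* One error-correction block group: 'count' blocks, each with 'data' data
   codewords and 'ec' error correction codewords. */
struct block_group { int count; int data; int ec; };

/* Version 7, level H: 4 blocks of (13 data + 26 ec) and 1 block of (14 data + 26 ec). */
static const struct block_group v7_h[] = { {4, 13, 26}, {1, 14, 26} };

int main(void)
{
    int blocks = 0, data = 0, ec = 0;
    for (unsigned i = 0; i < sizeof v7_h / sizeof v7_h[0]; i++) {
        blocks += v7_h[i].count;
        data   += v7_h[i].count * v7_h[i].data;
        ec     += v7_h[i].count * v7_h[i].ec;
    }
    /* Prints: 5 blocks, 66 data codewords, 130 ec codewords. */
    printf("%d blocks, %d data codewords, %d ec codewords\n", blocks, data, ec);
    return 0;
}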