How to calculate the number of data/error blocks of a QR code at version > 3 - encoding

I am working on a QR code encoding/decoding project.
I have read through ISO/IEC 18004 (2006) and some tutorials ( http://www.thonky.com/guides/
http://www.matchadesign.com/_blog/Matcha_Design_Blog/post/QR_Code_Demystified_-_Part_1/
http://www.swetake.com/qr/qr1_en.html
)
The ISO documentation and those very nice tutorials helped me a lot. But there's still one thing I can't understand: how to calculate the number of data/error blocks when creating a QR code at version 3 or higher.
The example below is from ISO/IEC 18004 (2006): a version 7-H symbol (H is the error correction level) has 66 data codewords and 130 error correction codewords, and both are split into 5 blocks.
The document says that the number of blocks n (in this case n = 5) can be obtained from Table 9 (ISO 18004) according to the version and error correction level, but I can't seem to derive that number. Please show me how to calculate it.

Now I've got it. All the information needed for block splitting is actually in Table 9 of the ISO/IEC 18004 document; I had simply read it carelessly.
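For anyone else who lands here: the block counts are not derived from a formula, they are simply tabulated per version and error correction level. A minimal C sketch, hard-coding the version 7-H row of Table 9 (which, to my reading, lists 4 blocks of (c=39, k=13) plus 1 block of (c=40, k=14)):

    #include <stdio.h>

    /* Each Table 9 row is one or two groups of identical blocks:
       count, total codewords c, and data codewords k per block. */
    struct block_group { int count; int total_codewords; int data_codewords; };

    int main(void)
    {
        /* Version 7-H entry (taken from Table 9, not computed). */
        struct block_group v7h[] = { {4, 39, 13}, {1, 40, 14} };
        int blocks = 0, data = 0, ec = 0;
        for (int g = 0; g < 2; g++) {
            blocks += v7h[g].count;
            data   += v7h[g].count * v7h[g].data_codewords;
            ec     += v7h[g].count * (v7h[g].total_codewords - v7h[g].data_codewords);
        }
        printf("blocks=%d data=%d ec=%d\n", blocks, data, ec); /* 5, 66, 130 */
        return 0;
    }

The totals match the worked example above: 5 blocks, 66 data codewords, 130 error correction codewords.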


Why is 38 written as 9 + 9 + 9 + 12 - 1 in xv6 RISC-V source code

I was trying to add a few system calls to the xv6 source code developed at MIT, and while reading this resource (https://pdos.csail.mit.edu/6.S081/2020/xv6/book-riscv-rev1.pdf), on page 26, they consider the maximum possible virtual address that xv6 RISC-V supports.
When I tried to look at the source code for the definition of MAXVA (MAXimum Virtual Address), I came across the following code snippet.
// one beyond the highest possible virtual address.
// MAXVA is actually one bit less than the max allowed by
// Sv39, to avoid having to sign-extend virtual addresses
// that have the high bit set.
#define MAXVA (1L << (9 + 9 + 9 + 12 - 1))
I was intrigued by the expression '9 + 9 + 9 + 12 - 1' instead of simply writing '38'. I tried to look up the underlying reasoning but did not find anything related. Is this some kind of optimization? If so, at what level is it relevant, and where else could it be relevant?
I have some experience in assembly language programming and understand the basics of how C code is translated to assembly and what the final assembly for bit shifting might look like (using x86 salq and sarq, or RISC-V slli). Any hints/thoughts would be appreciated as well.
The answer can be deduced from vm.c:
// The risc-v Sv39 scheme has three levels of page-table
// pages. A page-table page contains 512 64-bit PTEs.
// A 64-bit virtual address is split into five fields:
// 39..63 -- must be zero.
// 30..38 -- 9 bits of level-2 index.
// 21..29 -- 9 bits of level-1 index.
// 12..20 -- 9 bits of level-0 index.
// 0..11 -- 12 bits of byte offset within the page.
The three 9s correspond to the three 9-bit indexes used to walk the page tables, and the 12 corresponds to the byte offset within a page.
To the compiler, writing 9 + 9 + 9 + 12 - 1 is the same as writing 38; to a human reader who is aware of the bit fields, the former is much clearer.
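To make the split concrete, here is a minimal C sketch; the PGSHIFT/PXMASK/PX names mirror the ones in xv6's riscv.h, but treat this as illustrative rather than a copy of xv6:

    #include <stdio.h>
    #include <inttypes.h>

    #define PGSHIFT 12                 /* bits of byte offset within a page */
    #define PXMASK  0x1FFULL           /* 9-bit page-table index mask */
    #define PX(level, va) (((uint64_t)(va) >> (PGSHIFT + 9 * (level))) & PXMASK)
    #define MAXVA (1ULL << (9 + 9 + 9 + 12 - 1))

    int main(void)
    {
        uint64_t va = MAXVA - 1;       /* highest valid virtual address */
        printf("L2=%" PRIu64 " L1=%" PRIu64 " L0=%" PRIu64 " offset=%" PRIu64 "\n",
               PX(2, va), PX(1, va), PX(0, va),
               va & ((1ULL << PGSHIFT) - 1));
        return 0;
    }

This prints L2=255 L1=511 L0=511 offset=4095: the level-2 index tops out at 255 rather than 511, which is exactly the high bit given up by the '- 1' in MAXVA.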

CANopen DBC edit - selecting non-sequential bytes for 16-bit data

I am trying to write a .dbc file for a CANopen data log (an example of one of the lines I am trying to use is below):
Time 884.163000, ID:2a1, Data Bytes (0)7b (1)00 (2)95 (3)68 (4)e5 (5)8e (6)49 (7)54
I have written .dbc files using both Motorola and Intel byte order in CANdb++, covering 16-bit data over 2 bytes, but this has always been with sequential bytes, i.e. (2),(3) or (5),(6).
The bytes I need to use for the particular data in the above example are (3) and (7) in Intel format (54,68 in this case). I have written a .dbc for just byte (3), shown in the snippet below:
BO_ 673 Rig_Pressure: 4 Vector__XXX
SG_ Pressure_Multiplex M : 15|8@0+ (1,0) [0|0] "" Vector__XXX
SG_ Pressure m0 : 31|8@0- (0.45,58) [0.399999999999999|115.15] "Bar" Vector__XXX
I am asking if there is a way to modify the text file (or use CANdb++) to specify each bit, or to pick 2 non-sequential bytes in the .dbc, something like changing the start bit from
31|8@0- to 31|8@0- 63|8@0-
I am far from a computer programmer; I much prefer GUI-based programs and am only starting out learning Python, so please be gentle!
Thank you!
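In case it helps anyone reading this: as far as I know, a DBC signal is defined by a single start bit and a length, i.e. a contiguous bit field, so two non-adjacent bytes generally cannot be described by one SG_ line. One workaround is to define the two bytes as separate 8-bit signals in the .dbc and combine them in post-processing. A minimal C sketch of the combining step, assuming byte (3) is the low byte and byte (7) the high byte (swap the shift if the device packs them the other way round), and reusing the factor/offset from the SG_ line purely for illustration:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Data bytes from the example log line above. */
        uint8_t data[8] = {0x7b, 0x00, 0x95, 0x68, 0xe5, 0x8e, 0x49, 0x54};

        /* Combine non-adjacent bytes (3) and (7) into one 16-bit raw value. */
        uint16_t raw = (uint16_t)((data[7] << 8) | data[3]);

        /* Apply factor/offset as a DBC signal would (values illustrative only). */
        double pressure = raw * 0.45 + 58;
        printf("raw=0x%04x pressure=%.2f Bar\n", (unsigned)raw, pressure);
        return 0;
    }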

PowerPC e500 / P1020: read 64-bit value (2x32-bit registers) in an atomic way

I have just started working with the P1020 PowerPC IC and have my first problem. I was looking into the P1020 reference manual and the e500 PowerPC documentation and cannot find an answer to my question.
How can I read a 64-bit value, composed of the 32-bit lower TBL and 32-bit upper TBU registers of the Time Base module, and prevent a race condition? Is it guaranteed that the value will be correct (are the registers latched)? Is there an assembler instruction that can read both registers atomically? Where can I find this kind of info in the docs?
Thanks
The PowerPC architecture document has a specific section on exactly this - see section 2.2.1.2 “Reading the Time Base in 32-Bit Mode” (on page 60) of https://wiki.alcf.anl.gov/images/f/fb/PowerPC_-Assembly-_IBM_Programming_Environment_2.3.pdf .
In short: you want to read the upper portion of the timebase, then the lower, then the upper again, and compare the two reads of the upper. If they're not equal, then your reads spanned a carry, so perform all three reads again.
As the document describes in assembly:
loop:
    mftbu rx      # load from TBU
    mftb  ry      # load from TBL
    mftbu rz      # load from TBU
    cmpw  rz, rx  # see if 'old' = 'new'
    bne   loop    # loop if carry occurred
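Wrapped in C, the same pattern might look like the sketch below; it assumes GCC-style inline assembly on a 32-bit PowerPC target, and on e500 cores the mftb/mftbu mnemonics may need to be written as their equivalent mfspr forms:

    #include <stdint.h>

    /* Read the 64-bit time base on a 32-bit PowerPC without tearing. */
    static inline uint64_t read_timebase(void)
    {
        uint32_t hi, lo, hi2;
        do {
            __asm__ volatile("mftbu %0" : "=r"(hi));  /* upper 32 bits (TBU) */
            __asm__ volatile("mftb  %0" : "=r"(lo));  /* lower 32 bits (TBL) */
            __asm__ volatile("mftbu %0" : "=r"(hi2)); /* upper again */
        } while (hi != hi2); /* a carry rippled from TBL into TBU: retry */
        return ((uint64_t)hi << 32) | lo;
    }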

Extract date and time from two 16-bit Modbus registers

I'm using an ElNet energy & power meter that communicates with my processor via the Modbus RTU protocol.
There are two 16-bit ElNet registers that contain information about the date and time (separately) in a Win format (registers 85-86, page 6 of this document). I'm able to read these two registers; however, I'm unable to extract the date and time from them.
For example, the date register contains the decimal value 17841 for today's date (31/07/2015). Can anyone explain how to convert 17841 into 31/07/2015?
I have the same problem with the time: my time register contains the decimal value 55296. Can you help me extract the time from this number?
This thread addresses the same problem:
HEX/Decimal to date and time from modbus
However, I'm not sure I understand the extraction algorithm applied there. I'm working on a processor with the code written in C or C++.
Thank you very much for your time and effort to help me.
Sincerely,
Bojan.
The MS-DOS date/time format is described here: https://archive.is/2bVlz (was http://proger.i-forge.net/MS-DOS_date_and_time_format/OFz but is gone)
It makes sense for the 17256 value mentioned in the other question, as it translates to 2013-11-08. Here is how to do it:
Register bit description: 0bYYYYYYYMMMMDDDDD
Register value: 17256 =   0b0100001101101000
Year mask:                0b1111111000000000
Year part:                0b0100001000000000
Year part >> 9:           0b0000000000100001 = 33 years after 1980
Month mask:               0b0000000111100000
Month part:               0b0000000101100000
Month part >> 5:          0b0000000000001011 = 11
Day mask:                 0b0000000000011111
Day part:                 0b0000000000001000 = 8
Unfortunately your register value 17841 does not make sense, as it translates to 2014-13-17 (That is month 13).
Are you sure that:
- you read the correct register? (Change the time setting in the instrument and see what happens to the register value.)
- you are not mixing up the two bytes in the register?
- the time setting is correct?
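If the register format checks out, the extraction in C is just shifts and masks. A minimal sketch of the decoding described above (the function names are mine; the time word follows the MS-DOS layout 0bHHHHHMMMMMMSSSSS, with seconds stored in 2-second units):

    #include <stdio.h>
    #include <stdint.h>

    /* Date word: 0bYYYYYYYMMMMDDDDD, years counted from 1980. */
    static void decode_dos_date(uint16_t d, int *y, int *m, int *day)
    {
        *y   = ((d >> 9) & 0x7F) + 1980; /* bits 15-9 */
        *m   = (d >> 5) & 0x0F;          /* bits 8-5  */
        *day = d & 0x1F;                 /* bits 4-0  */
    }

    /* Time word: 0bHHHHHMMMMMMSSSSS. */
    static void decode_dos_time(uint16_t t, int *h, int *min, int *s)
    {
        *h   = (t >> 11) & 0x1F;         /* bits 15-11 */
        *min = (t >> 5) & 0x3F;          /* bits 10-5  */
        *s   = (t & 0x1F) * 2;           /* bits 4-0, two-second units */
    }

    int main(void)
    {
        int y, m, d;
        decode_dos_date(17256, &y, &m, &d);
        printf("%04d-%02d-%02d\n", y, m, d); /* prints 2013-11-08 */
        return 0;
    }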

Length of string results in Google Apps

I have a function in a Google Apps spreadsheet that takes a dollar figure, multiplies it by 100, and then returns the length of the resulting string...
function checkWrite(amount) {
  amount = amount * 100;        // scale dollars to cents
  amount = amount + '';         // coerce the number to a string
  var aLength = amount.length;
  return aLength;
}
The function returns the following results:
amount   length
4        3
4.01     3
4.02     18
4.03     3
4.04     3
4.05     3
4.06     18
... and so on. I get many correct answers (3) and some random wrong ones (18).
Can anyone shed some light on what's going on? I'm not great at coding, but I'm pretty sure that results should be consistent anyway.
Most likely the numbers are not rounded as you'd expect: floating-point values are not represented exactly the way you're thinking. To check, return the generated string representation as well, e.g. change the last line to return aLength + ' ' + amount;
Anyway, what is this function used for? If I had any idea what you use it for, I could suggest an alternative solution.
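The effect is easy to reproduce outside Apps Script. A minimal C sketch, assuming IEEE-754 doubles (which is what JavaScript numbers are):

    #include <stdio.h>

    int main(void)
    {
        /* 4.02 has no exact binary representation, so scaling by 100
           lands just below 402 instead of exactly on it. */
        printf("%.17g\n", 4.02 * 100); /* prints 401.99999999999994 */
        return 0;
    }

The 18-character string 401.99999999999994 is exactly what your .length call is measuring in the 4.02 case.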