How do I loop over the search results for a byte string and offset the resultant pointer (in WinDbg)?

I'm attempting to search for an arbitrarily long byte string in WinDbg and print out the address if an integer in the vicinity meets some criteria.
Pseudo-register $t0 contains the starting address I want to search.
Here's something that, based on the Windows docs, seemed like it could work (though it clearly doesn't):
.foreach (place { s -[1] #$t0 L?30000 00 00 00 00 00 20 00 00 }) { .if ( (place +0x8) <= 0x1388) { .printf "0x%x\n", place } }
Search
First, the search command doesn't quite work correctly. I only want the address of the match (not the data).
s -[1] #$t0 L?30000 00 00 00 00 00 20 00 00
The docs say that the 1 flag will only return the address. When I issue that command, WinDbg replies
^ Syntax error in 's -1 #$t0 L?30000 00 00 00 00 00 20 00 00 '
If I leave out the -1, it finds two matches.
What am I doing wrong here?
Condition
I don't think the condition is behaving the way I want. I want to look at the third dword starting at place, i.e. place+8, and verify that it's smaller than 5000 (decimal). The .if inside the .foreach isn't printing a meaningful value for place (i.e. the address returned from the search). I think it's dereferencing place first and comparing the value of that integer to 5000. How do I look at the value of, say, *(int*)(place+8)?
Documentation?
The docs are not helping me very much. They only have sparse examples, none of which correspond to what I need.
Is there better documentation somewhere besides MS's Hardware Dev Center?

You can start writing JavaScript for a more legible way of scripting.
The old way:
0:000> s -b vect l?0x1000 4d
00007ff7`8aaa0000 4d 5a 90 00 03 00 00 00-04 00 00 00 ff ff 00 00 MZ..............
00007ff7`8aaa00d4 4d 90 80 d2 df f9 82 d3-4d 90 80 d2 52 69 63 68 M.......M...Rich
00007ff7`8aaa00dc 4d 90 80 d2 52 69 63 68-4c 90 80 d2 00 00 00 00 M...RichL.......
0:000> s -[1]b vect l?0x1000 4d
0x00007ff7`8aaa0000
0x00007ff7`8aaa00d4
0x00007ff7`8aaa00dc
Using JavaScript:
function search(addr, len)
{
    var index = []
    // read len single bytes starting at addr
    var mem = host.memory.readMemoryValues(addr, len)
    for (var i = 0; i < len; i++)
    {
        if (mem[i] == 0x4d)   // 'M'
        {
            index.push(addr + i)
        }
    }
    return index
}
Executed, it will return addresses which you can manipulate further:
0:000> dx -r1 #$scriptContents.search(0x00007ff78aaa0000,1000)
#$scriptContents.search(0x00007ff78aaa0000,1000) : 140701160046592,140701160046804,140701160046812
length : 0x3
[0x0] : 0x7ff78aaa0000
[0x1] : 0x7ff78aaa00d4
[0x2] : 0x7ff78aaa00dc
Improving the script a little to find something based on the first result: we will try to find the index of the "Rich" string that follows the character 'M'.
The modified script:
function search(addr, len)
{
    var index = []
    var Rich = []
    var result = []
    var mem = host.memory.readMemoryValues(addr, len)
    for (var i = 0; i < len; i++)
    {
        if (mem[i] == 0x4d)   // 'M'
        {
            index.push(addr + i)
            // read the dword 4 bytes past the 'M'
            var temp = host.memory.readMemoryValues(addr + i + 4, 1, 4)
            host.diagnostics.debugLog(temp + "\t")
            if (temp == 0x68636952)   // "Rich" in little-endian
            {
                Rich.push(addr + i)
            }
        }
    }
    result.push(index)
    result.push(Rich)
    return result
}
Result: only the third occurrence of the char "M" is followed by the "Rich" string.
0:000> dx -r2 #$scriptContents.search(0x00007ff78aaa0000,1000)
3 3548576223 1751345490 #$scriptContents.search(0x00007ff78aaa0000,1000) : 140701160046592,140701160046804,140701160046812,140701160046812
length : 0x2
[0x0] : 140701160046592,140701160046804,140701160046812
length : 0x3
[0x0] : 0x7ff78aaa0000
[0x1] : 0x7ff78aaa00d4
[0x2] : 0x7ff78aaa00dc
[0x1] : 140701160046812
length : 0x1
[0x0] : 0x7ff78aaa00dc
0:000> s -b vect l?0x1000 4d
00007ff7`8aaa0000 4d 5a 90 00 03 00 00 00-04 00 00 00 ff ff 00 00 MZ..............
00007ff7`8aaa00d4 4d 90 80 d2 df f9 82 d3-4d 90 80 d2 52 69 63 68 M.......M...Rich
00007ff7`8aaa00dc 4d 90 80 d2 52 69 63 68-4c 90 80 d2 00 00 00 00 M...RichL.......
Load the extension jsprovider.dll: .load jsprovider
Write a script, say foo.js.
Load the script: .scriptload ...\path\foo.js
Execute any function inside the js you wrote with dx #$scriptContents.myfunc(myargs)
See below, using cdb just for ease of copy-paste; WinDbg works just the same.
F:\>type mojo.js
function hola_mojo ()
{
host.diagnostics.debugLog("hola mojo this is javascript \n")
}
F:\>cdb -c ".load jsprovider;.scriptload .\mojo.js;dx #$scriptContents.hola_mojo();q" cdb | f:\usr\bin\grep.exe -A 6 -i reading
0:000> cdb: Reading initial command '.load jsprovider;.scriptload .\mojo.js;dx #$scriptContents.hola_mojo();q'
JavaScript script successfully loaded from 'F:\mojo.js'
hola mojo this is javascript
#$scriptContents.hola_mojo()
quit:

If I read this part of the documentation
s [-[[Flags]Type]] Range Pattern
correctly, you cannot leave out Type when specifying flags. That's because the flags are inside two square brackets. Otherwise it would have been noted as s [-[Flags][Type]] Range Pattern.
Considering this, the example works:
0:000> .dvalloc 2000
Allocated 2000 bytes starting at 00ba0000
0:000> eb 00ba0000 01 02 03 04 05 06 07 08 09
0:000> eb 00ba1000 01 02 03 04 05 06 07 08 09
0:000> s -[1]b 00ba0000 L?2000 01 02 03 04 05 06 07 08
0x00ba0000
0x00ba1000
Also note that you'll have a hidden bug for the use of place: it should be ${place}. By default, that will work with the address (line break for readability on SO):
0:000> .foreach (place {s -[1]b 00ba0000 L?2000 01 02 03 04 05 06 07 08 })
{ .if ( (${place} +0x8) < 0xba1000) { .printf "0x%x\n", ${place} } }
0xba0000
In order to read a DWord from that address, use the MASM dwo() operator (line break for readability on SO):
0:000> .foreach (place {s -[1]b 00ba0000 L?2000 01 02 03 04 05 06 07 08 })
{ .if ( (dwo(${place} +0x8)) < 0xba1000)
{ .printf "0x%x = 0x%x\n", ${place}, dwo(${place}+8) } }
0xba0000 = 0x9
0xba1000 = 0x9

Related

Analyze peculiar avcC atom structure

I need some help to understand the avcC atom structure of a particular mp4 sample I am trying to analyze.
Hex dump:
00 00 00 38 61 76 63 43 01 64 00 1F FF E1 00 1C 67 64 00 1F AC D9 80
50 05 BB 01 6A 02 02 02 80 00 00 03 00 80 00 00 1E 07 8C 18 CD 01 00
05 68 E9 7B 2C 8B FD F8 F8 00 00 00 00 13 63 6F 6C 72
This is what I understand from the above:
00 00 00 38 Size of avcC atom
61 76 63 43 avcC signature
01 configurationVersion
64 AVCProfileIndication
00 profile_compatibility
1F AVCLevelIndication
FF 111111b + lengthSizeMinusOne
E1 111b + numOfSequenceParameterSets (in this case, 1 SPS)
00 1C SPS length (in this case, 28 bytes)
67 64 00 1F AC D9 80 50 05 BB 01 6A 02 02 02 80 00 00 03 00 80 00 00 1E 07 8C 18 CD SPS data (28 bytes as per above)
01 numOfPictureParameterSets (in this case, 1 PPS)
00 05 PPS length
This is where the problem begins. Based on the PPS length given by the previous bytes, the next 5 bytes should be the PPS data: 68 E9 7B 2C 8B
However according to the avcC header, the total length of the atom is 56 bytes (0x38), which means that the following 4 bytes should be included: FD F8 F8 00
But the problem is that the PPS length is given as 5 bytes (0x05). So what exactly are these final 4 bytes?
Then follows the header of the colr atom:
00 00 00 13 size of colr atom
63 6F 6C 72 colr signature
Which I have checked and is indeed 19 bytes in length (0x13).
The problem is with the avcC atom and with that particular mp4 sample I am analyzing (I've checked other samples too and they didn't have this peculiarity).
You can find the sample here.
EDIT
mp4info tool from the bento4 suite reports the following as the avcC atom's size: 8+48
And mp4dump reports:
AVC SPS: [6764001facd9805005bb016a02020280000003008000001e078c18cd]
AVC PPS: [68e97b2c8b]
So it correctly reports the total size of the atom as 56 bytes (0x38) based on what is found in the avcC header, but the SPS/PPS data are analyzed the same way as above. I still don't understand what the final 4 bytes are or where they belong.
I didn't get any answer, but fortunately a bit more careful reading of ISO 14496-15 solved this issue:
if (profile_idc == 100 || profile_idc == 110 ||
    profile_idc == 122 || profile_idc == 144)
{
    bit(6) reserved = '111111'b;
    unsigned int(2) chroma_format;
    bit(5) reserved = '11111'b;
    unsigned int(3) bit_depth_luma_minus8;
    bit(5) reserved = '11111'b;
    unsigned int(3) bit_depth_chroma_minus8;
    unsigned int(8) numOfSequenceParameterSetExt;
    for (i = 0; i < numOfSequenceParameterSetExt; i++) {
        unsigned int(16) sequenceParameterSetExtLength;
        bit(8*sequenceParameterSetExtLength) sequenceParameterSetExtNALUnit;
    }
}
Apparently a sequence of 4+ bytes may exist at the end of an avcC atom depending on the profile used. In my sample above the profile is 100 (0x64), hence it meets the criteria. So the last 4 bytes are:
FD = bits 111111 are reserved, remaining 01 means chroma subsampling 4:2:0
F8 = bits 11111 are reserved, remaining 000 means luma bit depth is 8
F8 = bits 11111 are reserved, remaining 000 means chroma bit depth is 8
00 = zero SPS extensions
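For what it's worth, here is a minimal C sketch (my own illustration, not from any mp4 tooling) that decodes those four trailing bytes the same way:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* the four trailing bytes from the sample above */
    const uint8_t tail[4] = {0xFD, 0xF8, 0xF8, 0x00};

    uint8_t chroma_format    = tail[0] & 0x03;        /* 01 -> 4:2:0   */
    uint8_t bit_depth_luma   = (tail[1] & 0x07) + 8;  /* 000 -> 8 bits */
    uint8_t bit_depth_chroma = (tail[2] & 0x07) + 8;  /* 000 -> 8 bits */
    uint8_t num_sps_ext      = tail[3];               /* 0 extensions  */

    printf("chroma_format=%u luma_depth=%u chroma_depth=%u sps_ext=%u\n",
           chroma_format, bit_depth_luma, bit_depth_chroma, num_sps_ext);
    return 0;
}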

How to retrieve details of the console port used by BIOS using efivars?

As part of the installation of Linux, I would like to set the "console device properties" (for example, console=ttyS0,115200n1) via the kernel cmdline for an Intel-based platform.
There is no VGA console, only serial consoles via a COM interface.
On these systems, the BIOS already has the required settings to interact using the appropriate serial port.
I see that EFI has the variables ConIn, ConOut, and ErrOut, which I am able to see from /sys/firmware/efi but am unable to decode the contents of.
Is it possible to identify which COM port is being used by the BIOS by examining the EFI variables?
An example of the EFI var on my box:
root@linux:~# efivar -p -n 8be4df61-93ca-11d2-aa0d-00e098032b8c-ConOut
GUID: 8be4df61-93ca-11d2-aa0d-00e098032b8c
Name: "ConOut"
Attributes:
Non-Volatile
Boot Service Access
Runtime Service Access
Value:
00000000 02 01 0c 00 d0 41 03 0a 00 00 00 00 01 01 06 00 |.....A..........|
00000010 00 1a 03 0e 13 00 00 00 00 00 00 c2 01 00 00 00 |................|
00000020 00 00 08 01 01 03 0a 18 00 9d 9a 49 37 2f 54 89 |...........I7/T.|
00000030 4c a0 26 35 da 14 20 94 e4 01 00 00 00 03 0a 14 |L.&5.. .........|
00000040 00 53 47 c1 e0 be f9 d2 11 9a 0c 00 90 27 3f c1 |.SG..........'?.|
00000050 4d 7f 01 04 00 02 01 0c 00 d0 41 03 0a 00 00 00 |M.........A.....|
00000060 00 01 01 06 00 00 1f 02 01 0c 00 d0 41 01 05 00 |............A...|
00000070 00 00 00 03 0e 13 00 00 00 00 00 00 c2 01 00 00 |................|
00000080 00 00 00 08 01 01 03 0a 18 00 9d 9a 49 37 2f 54 |............I7/T|
00000090 89 4c a0 26 35 da 14 20 94 e4 01 00 00 00 03 0a |.L.&5.. ........|
000000a0 14 00 53 47 c1 e0 be f9 d2 11 9a 0c 00 90 27 3f |..SG..........'?|
000000b0 c1 4d 7f ff 04 00 |.M.... |
root@linux:~#
The contents of the ConOut variable are described in the UEFI specification - current version (2.8B):
3.3 - globally defined variables:
| Name | Attribute | Description |
|---------|------------|------------------------------------------------|
| ConOut | NV, BS, RT | The device path of the default output console. |
For information about device paths, we have:
10 - Protocols — Device Path Protocol:
Apart from the initial description of device paths, table 44 shows you the Generic Device Path Node structure, from which we can start decoding the contents of the variable.
The type of the first node is 0x02, telling us this node describes an ACPI device path, 0x000C bytes in length. Now jump down to 10.3.3 - ACPI Device Path and table 52, which tells us 1) that this is the right table (subtype 0x01) and 2) that the default ConOut has a _HID of 0x0A0341D0 (i.e., PNP0A03) and a _UID of 0.
The next node has a type of 0x01 - a Hardware Device Path, described further in 10.3.2, in this case table 46 (SubType is 0x01) for a PCI device path.
The next node describes a Messaging Device Path of type UART and so on...
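In case you want to decode the blob programmatically, here is a rough C sketch of the node walk; the 4-byte header matches the spec's generic node layout, but find_uart and the offsets inside the UART node are my own reading of the tables and assume a little-endian host:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint8_t  Type;     /* 0x01 hardware, 0x02 ACPI, 0x03 messaging, 0x7F end */
    uint8_t  SubType;
    uint16_t Length;   /* total node length, little-endian */
} DevPathNode;
#pragma pack(pop)

void find_uart(const uint8_t *buf, size_t len)
{
    size_t off = 0;
    while (off + sizeof(DevPathNode) <= len) {
        const DevPathNode *n = (const DevPathNode *)(buf + off);
        if (n->Length < sizeof(DevPathNode))
            break;                               /* malformed node */
        if (n->Type == 0x03 && n->SubType == 0x0E) {
            /* UART node: 4 reserved bytes, an 8-byte baud rate, then
               data bits, parity, and stop bits (1 byte each) */
            uint64_t baud;
            memcpy(&baud, buf + off + 8, sizeof baud);
            printf("UART node at offset %zu, baud rate %llu\n",
                   off, (unsigned long long)baud);
        }
        off += n->Length;                        /* advance to next node */
    }
}

Running that over the dump above should report a baud rate of 115200 (00 c2 01 00 ... little-endian) for each of the two UART nodes.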
Still, this only tells you what UEFI considers to be its default console; SPCR is what an operating system is supposed to look at for serial consoles. Unfortunately, on x86 the Linux kernel handily ignores SPCR except for earlycon. I guess this is what you're trying to work around. It might be good to start some discussion on kernel development lists about whether to fix that and have x86 work like ARM64.
In my case, since I know that the console port is a serial I/O port, I could get the details as follows:
a. Get hold of the /sys/firmware/acpi/tables/SPCR table.
b. Read the Address field at offsets 44-52. Actually, the low two bytes of that field suffice.
Reference:
a. https://learn.microsoft.com/en-us/windows-hardware/drivers/serports/serial-port-console-redirection-table states:
Base Address (byte length 12, byte offset 40): The base address of the Serial Port register set described using the ACPI Generic Address Structure. 0 = console redirection disabled.
Note:
COM1 (0x3F8) would be:
Integer Form: 0x 01 08 00 00 00000000000003F8
Viewed in Memory: 0x01080000F803000000000000
COM2 (0x2F8) would be:
Integer Form: 0x 01 08 00 00 00000000000002F8
Viewed in Memory: 0x01080000F802000000000000
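Here's a sketch of step (b) in C, assuming the table is at least 52 bytes long and that the address space ID at offset 40 indicates system I/O space:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    FILE *f = fopen("/sys/firmware/acpi/tables/SPCR", "rb");
    if (!f) { perror("SPCR"); return 1; }

    uint8_t tbl[80];
    size_t got = fread(tbl, 1, sizeof tbl, f);
    fclose(f);
    if (got < 52) return 1;      /* need the whole Generic Address Structure */

    uint8_t space_id = tbl[40];  /* 0x01 = system I/O space */
    uint64_t addr = 0;
    for (int i = 0; i < 8; i++)  /* little-endian 8-byte address at offset 44 */
        addr |= (uint64_t)tbl[44 + i] << (8 * i);

    printf("address space %u, base address 0x%llx\n",
           space_id, (unsigned long long)addr);  /* e.g. 0x3f8 -> ttyS0 */
    return 0;
}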

Where are files marked as assume-unchanged in Git? [duplicate]

I like to modify config files directly (like .gitignore and .git/config) instead of remembering arbitrary commands, but I don't know where Git stores the file references that get passed to "git update-index --assume-unchanged file".
If you know, please do tell!
It says where in the command - git update-index
So you can't really be editing the index as it is not a text file.
Also, to give more detail on what is stored with the git update-index --assume-unchanged command, see the Using “assume unchanged” bit section in the manual
As others said, it's stored in the index, which is located at .git/index.
After some detective work, I found that it is stored in the "assume valid" bit of each index entry.
Therefore, before understanding what follows, you should first understand the global format of the index, as explained in my other answer.
Next, I will explain how I verified that the "assume valid" bit is the culprit:
empirically
by reading the source
Empirical
Time to hd it up.
Setup:
git init
echo a > b
git add b
Then:
hd .git/index
Gives:
00000000 44 49 52 43 00 00 00 02 00 00 00 01 54 e9 b6 f3 |DIRC........T...|
00000010 2d 4f e1 2f 54 e9 b6 f3 2d 4f e1 2f 00 00 08 05 |-O./T...-O./....|
00000020 00 de 32 ff 00 00 81 a4 00 00 03 e8 00 00 03 e8 |..2.............|
00000030 00 00 00 00 e6 9d e2 9b b2 d1 d6 43 4b 8b 29 ae |...........CK.).|
00000040 77 5a d8 c2 e4 8c 53 91 00 01 62 00 c9 a2 4b c1 |wZ....S...b...K.|
00000050 23 00 1e 32 53 3c 51 5d d5 cb 1a b4 43 18 ad 8c |#..2S<Q]....C...|
00000060
Now:
git update-index --assume-unchanged b
hd .git/index
Gives:
00000000 44 49 52 43 00 00 00 02 00 00 00 01 54 e9 b6 f3 |DIRC........T...|
00000010 2d 4f e1 2f 54 e9 b6 f3 2d 4f e1 2f 00 00 08 05 |-O./T...-O./....|
00000020 00 de 32 ff 00 00 81 a4 00 00 03 e8 00 00 03 e8 |..2.............|
00000030 00 00 00 00 e6 9d e2 9b b2 d1 d6 43 4b 8b 29 ae |...........CK.).|
00000040 77 5a d8 c2 e4 8c 53 91 80 01 62 00 17 08 a8 58 |wZ....S...b....X|
00000050 f7 c5 b3 e1 7d 47 ac a2 88 d9 66 c7 5c 2f 74 d7 |....}G....f.\/t.|
00000060
By comparing the two indexes and looking at the global structure of the index, we see that the only differences are:
byte number 0x48 (9th on the 0x40 line) changed from 00 to 80. That is our flag: the first bit of the cache entry flags.
the 20 bytes from 0x4C to 0x5F. This is expected, since that is a SHA-1 over the entire index.
This has also taught me that the SHA-1 of the index entry (bytes 0x34 to 0x47) does not take the flags into account, since it did not change between the two indexes. This is probably why the flags are placed after that SHA-1, which only covers what comes before it.
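To make the byte offsets concrete, here is a small C sketch (my own illustration, not Git code; it only handles a version-2 index and only looks at the first entry) that tests the assume-valid bit directly:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    FILE *f = fopen(".git/index", "rb");
    if (!f) { perror("index"); return 1; }

    uint8_t buf[0x4A];
    size_t got = fread(buf, 1, sizeof buf, f);
    fclose(f);
    if (got < sizeof buf) return 1;

    /* 12-byte header, then per entry: 40 bytes of stat data and a
       20-byte SHA-1, followed by a 16-bit big-endian flags field,
       which for the first entry sits at file offset 0x48 */
    uint16_t flags = (uint16_t)((buf[0x48] << 8) | buf[0x49]);
    printf("assume-valid bit: %s\n", (flags & 0x8000) ? "set" : "clear");
    return 0;
}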
Source code
Now let's see if that is coherent with the source code of Git 2.3.
First, look at the source of update-index and grep for assume-unchanged.
This leads to the following lines:
{OPTION_SET_INT, 0, "assume-unchanged", &mark_valid_only, NULL,
    N_("mark files as \"not changing\""),
    PARSE_OPT_NOARG | PARSE_OPT_NONEG, NULL, MARK_FLAG},
{OPTION_SET_INT, 0, "no-assume-unchanged", &mark_valid_only, NULL,
    N_("clear assumed-unchanged bit"),
    PARSE_OPT_NOARG | PARSE_OPT_NONEG, NULL, UNMARK_FLAG},
so the value is stored in mark_valid_only. Grep for it, and you'll find that it is only used in one place:
if (mark_valid_only) {
    if (mark_ce_flags(path, CE_VALID, mark_valid_only == MARK_FLAG))
        die("Unable to mark file %s", path);
    return;
}
CE means Cache Entry.
By quickly inspecting mark_ce_flags, we see that:
if (mark)
    active_cache[pos]->ce_flags |= flag;
else
    active_cache[pos]->ce_flags &= ~flag;
So the function basically sets or unsets the CE_VALID bit, depending on mark_valid_only, which is a tri-state:
mark: --assume-unchanged
unmark: --no-assume-unchanged
do nothing: the default value 0 of the option set at {OPTION_SET_INT, 0
Next, by grepping under builtin/, we see that no other place sets the value of CE_VALID, so --assume-unchanged must be the only command that sets it.
The flag is, however, used in many places in the source code, which is to be expected since it has many side effects, and it is always used like:
ce->ce_flags & CE_VALID
so we conclude that it is part of the ce_flags field of struct cache_entry.
The index is specified at cache.h because one of its functions is to be a cache for creating commits faster.
By looking at the definition of CE_VALID under cache.h and surrounding lines we have:
#define CE_STAGEMASK (0x3000)
#define CE_EXTENDED (0x4000)
#define CE_VALID (0x8000)
#define CE_STAGESHIFT 12
So we conclude that it is the very first bit of that 16-bit integer (0x8000), right next to CE_EXTENDED, which is consistent with my earlier experiment.

Mifare Desfire Wrapped Mode: How to calculate CMAC?

When using Desfire native wrapped APDUs to communicate with the card, which parts of the command and response must be used to calculate CMAC?
After successful authentication, I have the following session key:
Session Key: 7CCEBF73356F21C9191E87472F9D0EA2
Then when I send a GetKeyVersion command, card returns the following CMAC which I'm trying to verify:
<< 90 64 00 00 01 00 00
>> 00 3376289145DA8C27 9100
I have implemented the CMAC algorithm according to "NIST Special Publication 800-38B" and made sure it is correct. But I don't know which parts of the command and response APDUs must be used to calculate the CMAC.
I am using TDES, so the MAC is 8 bytes.
I have been looking at the exact same issue for the last few days and I think I can at least give you some pointers. Getting everything 'just so' has taken some time and the documentation from NXP (assuming you have access) is a little difficult to interpret in some cases.
So, as you probably know, you need to calculate the CMAC (and update your init vec) on transmit as well as receive. You need to save the CMAC each time you calculate it as the init vec for the next crypto operation (whether CMAC or encryption etc).
When calculating the CMAC for your example the data to feed into your CMAC algorithm is the INS byte (0x64) and the command data (0x00). Of course this will be padded etc as specified by CMAC. Note, however, that you do not calculate the CMAC across the entire APDU wrapping (i.e. 90 64 00 00 01 00 00) just the INS byte and data payload is used.
On receive you need to take the data (0x00) and the second status byte (also 0x00) and calculate the CMAC over that. It's not important in this example but order is important here. You use the response body (excluding the CMAC) then SW2.
Note that only half of the CMAC is actually sent - CMAC should yield 16 bytes and the card is sending the first 8 bytes.
There were a few other things that held me up, including:
I was calculating the session key incorrectly - it is worth double checking this if things are not coming out as you'd expect
I interpreted the documentation to say that the entire APDU structure is used to calculate the CMAC (hard to read them any other way tbh)
I am still working on calculating the response from a Write Data command correctly. The command succeeds but I can't validate the CMAC. I do know that Write Data is not padded with CMAC padding but just zeros - not yet sure what else I've missed.
Finally, here is a real example from communicating with a card from my logs:
1. Authentication is complete (AES) and the session key is determined to be F92E48F9A6C34722A90EA29CFA0C3D12; the init vec is zeros.
2. I'm going to send the Get Key Version command (as in your example), so I calculate the CMAC over 6400 and get 1200551CA7E2F49514A1324B7E3428F1 (which is now my init vec for the next calculation).
3. Send 90640000010000 to the card and receive 00C929939C467434A8 (status is 9100).
4. Calculate the CMAC over 00 00 and get C929939C467434A8A29AB2C40B977B83 (and update the init vec for the next calculation).
5. The first half of our CMAC from step 4 matches the 8 bytes received from the card in step 3.
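To make the ordering concrete, here is the same exchange as a C sketch; cmac_update is a hypothetical stand-in for your NIST SP 800-38B implementation, chained through the init vec as described above:

#include <stdint.h>

/* hypothetical helper: CMAC over len bytes of msg, chained from iv,
   writing the 16-byte result back into iv */
void cmac_update(const uint8_t key[16], uint8_t iv[16],
                 const uint8_t *msg, int len);

void get_key_version_example(const uint8_t session_key[16])
{
    uint8_t iv[16] = {0};                 /* init vec is zeros after auth */

    const uint8_t tx[] = {0x64, 0x00};    /* INS + data only, no APDU wrapping */
    cmac_update(session_key, iv, tx, 2);  /* -> 1200551CA7E2F495...           */

    const uint8_t rx[] = {0x00, 0x00};    /* response body, then SW2          */
    cmac_update(session_key, iv, rx, 2);  /* -> C929939C467434A8...; the card
                                             sends only the first 8 bytes    */
}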
Sorry for my English, it's terrible :) but it's not my native language. I'm Russian.
Check the MSB (bit 7) of array[0], then shift the whole array left by one bit, and XOR the last byte if that MSB was 1.
(Alternatively, save the MSB of array[0] and, after shifting, put that bit into array[15] at the end, as the LSB bit.)
The proof is here:
https://www.nxp.com/docs/en/application-note/AN10922.pdf
Try this way:
Zeros <- 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
SessionKey <- 00 01 02 03 E3 27 64 0C 0C 0D 0E 0F 5C 5D B9 D5
Data <- 6F 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00
First you have to encrypt 16 bytes of zeros with the SessionKey:
enc_aes_128_ecb(Zeros);
And you get EncryptedData:
EncryptedData <- 3D 08 A2 49 D9 71 58 EA 75 73 18 F2 FA 6A 27 AC
Check whether bit 7 [MSB - LSB] of EncryptedData[0] == 1; if so, set i to true:
bool i = false;
if (EncryptedData[0] & 0x80) {
    i = true;
}
Then shift all of EncryptedData left by 1 bit:
ShiftLeft(EncryptedData, 16);
Now, when i == true, XOR the last byte [15] with 0x87:
if (i) {
    ShiftedEncryptedData[15] ^= 0x87;
}
7A 11 44 93 B2 E2 B1 D4 EA E6 31 E5 F4 D4 4F 58
Save it as KEY_1.
Again, check whether bit 7 [MSB - LSB] of ShiftedEncryptedData[0] == 1:
i = false;
if (ShiftedEncryptedData[0] & 0x80) {
    i = true;
}
Then shift all of ShiftedEncryptedData left by 1 bit:
ShiftLeft(ShiftedEncryptedData, 16);
Now, when i == true, XOR the last byte [15] with 0x87:
if (i) {
    ShiftedEncryptedData[15] ^= 0x87;
}
F4 22 89 27 65 C5 63 A9 D5 CC 63 CB E9 A8 9E B0
Save it as KEY_2.
Now we take our Data (6F 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00).
As Michael says, pad the command with 0x80 0x00...
XOR Data with KEY_2 if the command was padded, or with KEY_1 if it wasn't.
If you have more than 16 bytes (32, for example), you only XOR the last 16 bytes.
Then encrypt it:
enc_aes_128_ecb(Data);
Now you have the CMAC:
CD C0 52 62 6D F6 60 CA 9B C1 09 FF EF 64 1A E3
Zeros <- 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
SessionKey <- 00 01 02 03 E3 27 64 0C 0C 0D 0E 0F 5C 5D B9 D5
Key_1 <- 7A 11 44 93 B2 E2 B1 D4 EA E6 31 E5 F4 D4 4F 58
Key_2 <- F4 22 89 27 65 C5 63 A9 D5 CC 63 CB E9 A8 9E B0
Data <- 6F 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00
CMAC <- CD C0 52 62 6D F6 60 CA 9B C1 09 FF EF 64 1A E3
C/C++ function:
void ShiftLeft(byte *data, byte dataLen){
for (int n = 0; n < dataLen - 1; n++) {
data[n] = ((data[n] << 1) | ((data[n+1] >> 7)&0x01));
}
data[dataLen - 1] <<= 1;
}
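Putting the steps together, here is a sketch of the single-block case; enc_aes_128_ecb is the assumed in-place AES-ECB helper (keyed with the SessionKey) used throughout this answer:

typedef unsigned char byte;

void enc_aes_128_ecb(byte *data);          /* assumed helper, encrypts
                                              16 bytes with the SessionKey */
void ShiftLeft(byte *data, byte dataLen);  /* defined above */

/* data: one 16-byte block, already padded with 0x80 0x00... if needed,
   overwritten with the CMAC; padded selects KEY_2 (1) or KEY_1 (0) */
void cmac16(byte *data, int padded)
{
    byte k[16] = {0};            /* encrypt zeros with the session key */
    enc_aes_128_ecb(k);

    int msb = k[0] & 0x80;       /* derive KEY_1 */
    ShiftLeft(k, 16);
    if (msb) k[15] ^= 0x87;

    if (padded) {                /* derive KEY_2 from KEY_1 */
        msb = k[0] & 0x80;
        ShiftLeft(k, 16);
        if (msb) k[15] ^= 0x87;
    }

    for (int n = 0; n < 16; n++) /* XOR the last block with the subkey */
        data[n] ^= k[n];
    enc_aes_128_ecb(data);       /* the result is the CMAC */
}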
Have a nice day :)

How do I find the value of a field at a given bit offset?

So somehow from the following hex data (03 00 21 04 80 04) the values below were obtained.
Can anybody tell me how I can do this and how it was achieved?
Band = 3 (40,6)
Duplex_Mode = 0 (46,1)
Result = 0 (47,1)
Reserved_1 = 0 (48,8)
Min_Search_Half_Frames = 1 (56,5)
Min_Search_Half_Frames_Early_Abort = 1 (61,5)
Max_Search_Half_Frames = 1 (66,5)
Max_PBCH_Frames = 0 (71,5)
Number_of_Blocked_Cells = 0 (76,3)
Number_PBCH_Decode_Attemp_Cells = 1 (79,3)
Number_of_Search_Results = 1 (82,4)
Reserved_2 = 0 (86,2)
The parameters in parentheses are the Offset/Length, I am told. I don't understand how, based on that information, I should be able to unpack this payload.
So I have written
my $data = pack ('C*', map hex, split /\s+/, "03 00 21 04 80 04");
($tmp1, $Reserved_1, $tmp2) = unpack("C C V", $data);
And now I need help: how do I unpack the table values above from $tmp1 and $tmp2?
EDIT: Hex Data = "00 00 00 7F 08 03 00 21 04 80 04 FF D7 FB 0C EC 01 44 00 61 1D 00 00 10 3B 00 00 FF D7 FB 0C 00 00 8C 64 00 00 EC 45"
Thanks!
You might want to define a set of bitmasks, and use bitwise AND operations to unpack your data.
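For example, here is a C sketch of a generic (bit offset, bit length) extractor. It assumes LSB-first bit numbering within each byte and offsets counted from the start of the full payload in the EDIT; both assumptions are mine, but they reproduce the sample values:

#include <stdio.h>
#include <stdint.h>

/* extract bit_len bits starting at absolute bit offset bit_off,
   LSB-first within each byte */
uint32_t get_bits(const uint8_t *buf, int bit_off, int bit_len)
{
    uint32_t v = 0;
    for (int i = 0; i < bit_len; i++) {
        int b = bit_off + i;
        v |= (uint32_t)((buf[b / 8] >> (b % 8)) & 1) << i;
    }
    return v;
}

int main(void)
{
    /* the payload from the EDIT, truncated to the bytes we need */
    const uint8_t d[] = {0x00, 0x00, 0x00, 0x7F, 0x08, 0x03,
                         0x00, 0x21, 0x04, 0x80, 0x04};
    printf("Band = %u\n", get_bits(d, 40, 6));                     /* 3 */
    printf("Min_Search_Half_Frames = %u\n", get_bits(d, 56, 5));   /* 1 */
    printf("Number_of_Search_Results = %u\n", get_bits(d, 82, 4)); /* 1 */
    return 0;
}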