how to set max-functions in pcie device tree? (about the syntax "max-functions = /bits/ 8 <8>;") - linux-device-driver

In linux-6.15.68, in Documentation/devicetree/bindings/pci/rockchip-pcie-ep.txt, I see these explanations. (please see the marked line.)
Optional Property:
- num-lanes: number of lanes to use
- max-functions: Maximum number of functions that can be configured (default 1).
pcie0-ep: pcie@f8000000 {
    compatible = "rockchip,rk3399-pcie-ep";
    #address-cells = <3>;
    #size-cells = <2>;
    rockchip,max-outbound-regions = <16>;
    clocks = <&cru ACLK_PCIE>, <&cru ACLK_PERF_PCIE>,
             <&cru PCLK_PCIE>, <&cru SCLK_PCIE_PM>;
    clock-names = "aclk", "aclk-perf",
                  "hclk", "pm";
    max-functions = /bits/ 8 <8>; // <---- see this line
    num-lanes = <4>;
    reg = <0x0 0xfd000000 0x0 0x1000000>, <0x0 0x80000000 0x0 0x20000>;
    <skip>
In the example dts, what does "max-functions = /bits/ 8 <8>;" mean?
I found in Documentation/devicetree/bindings/pci/snps,dw-pcie-ep.yaml, it says
max-functions:
  $ref: /schemas/types.yaml#/definitions/uint32
  description: maximum number of functions that can be configured
But I don't know how to read the $ref document.
ADD
I found this.
The storage size of an element can be changed using the /bits/ prefix. The /bits/ prefix allows for the creation of 8, 16, 32, and 64-bit elements. The resulting array will not be padded to a multiple of the default 32-bit element size.
e.g. interrupts = /bits/ 8 <17 0xc>;
e.g. clock-frequency = /bits/ 64 <0x0000000100000000>;
Does this mean 17 and 0xc are both 8-bit values, and when it is compiled to a dtb it keeps the 8-bit format? The Linux code will parse the dtb file, so does the dtb contain the format information too?

The Device Tree Compiler v1.4.0 onwards supports some extra syntaxes for specifying property values that are not present in The Devicetree Specification up to at least version v0.4-rc1. These extra property value syntaxes are documented in the Device Tree Compiler's Device Tree Source Format and include:
A number in an array between angle brackets can be specified as a character literal such as 'a', '\r' or '\xFF'.
The size of elements in an array between angle brackets can be set using the prefix /bits/ and a bit-width of 8, 16, 32, or 64, defaulting to 32-bit integers.
The binary Flattened Devicetree (DTB) Format contains no explicit information on the type of a property value. A property value is just a string of bytes. Numbers (and character literals) between angle brackets in the source are converted to bytes in big-endian byte order in accordance with the element size in bits divided by 8. For example:
<0x11 'a'> is encoded in the same way as the bytestring [00 00 00 11 00 00 00 61].
/bits/ 8 <17 0xc> is encoded in the same way as the bytestring [11 0c].
It is up to the reader of the property value to "know" what type it is expecting. For example, the Rockchip AXI PCIe endpoint controller driver in the Linux kernel ("drivers/pci/controller/pcie-rockchip-ep.c") "knows" that the "max-functions" property should have been specified as a single byte and attempts to read it using the statement err = of_property_read_u8(dev->of_node, "max-functions", &ep->epc->max_functions);. (It is probably encoded as a single byte property for convenience so that it can be copied directly into the u8 max_functions member of a struct pci_epc.)
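As a rough sketch of the reading side (not the actual Rockchip driver code; "np" is assumed here to be the endpoint controller's device node), a driver reads such a single-byte property like this:

#include <linux/of.h>
#include <linux/types.h>

/*
 * Minimal sketch: read the 8-bit "max-functions" property.
 * Because the DTS used "/bits/ 8", the property value is exactly one
 * byte in the DTB, which is what of_property_read_u8() expects.
 */
static u8 read_max_functions(struct device_node *np)
{
    u8 max_functions = 1;  /* the binding's documented default */

    if (of_property_read_u8(np, "max-functions", &max_functions))
        max_functions = 1;  /* property absent or not a single byte */

    return max_functions;
}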

Related

Problems writing string with ASCII value > 127 to serial port with Powershell script

My problem is to retrieve real-time data from an inverter (Voltronic family).
The inverter has a server and, if correctly asked, can send back information according to a communication protocol.
The communication is done through the serial port.
In particular a string similar to "XXXX" + <crc> + CR has to be sent, and the relevant data are sent back.
In my case the only string I need to send is "QPIGS". In this case I would get back a lot of information that will allow me to produce a sort of control desk.
Since the string I need is always and only this, I made an off-line calculation of the <crc> that I need to complete the request.
The <crc> value is composed of two bytes, "·©". The first is the "mid point", hex b7, and the second is the "copyright sign", hex a9.
So the complete string should be "QPIGS·©". If I add the CR in PowerShell ("`r"), the complete string should be "QPIGS·©`r".
The script is very simple:
$port= new-Object System.IO.Ports.SerialPort COM1,2400,None,8,one
$port.ReadTimeout = 1000
$port.open()
$str = "QPIGS·©`r"
$port.WriteLine($str)
Start-Sleep -Milliseconds 300
while ($x = $port.ReadExisting())
{
Write-Host $x
}
$port.Close()
But unfortunately it didn't work.
The inverter recognises the string, but it doesn't match what it was expecting, so it sends back a NACK response. The exchange happens but is not successful.
In order to investigate more deeply I used a serial port sniffer to see what was really sent to the inverter, and I found that the following was sent:
175 15/10/2022 17:06:29 IRP_MJ_WRITE DOWN 51 50 49 47 53 3f 3f 0a QPIGS??. 8 8 COM1
instead of what I was expecting
175 15/10/2022 17:06:29 IRP_MJ_WRITE DOWN 51 50 49 47 53 b7 a9 0d QPIGS·©. 8 8 COM1
It seems that the two <crc> bytes are ignored and substituted with two ?, hex 3f.
I imagine it is an encoding problem... but I can't find a solution.
Thanks for your help.
Tip of the hat to CherryDT for his comments and links to relevant related posts.
the complete string should be "QPIGS·©" [ + "`r" for a CR]
If you send strings to a serial port, its character encoding matters.
The default encoding is ASCII, which means that only Unicode characters in the 7-bit ASCII subrange can be sent, which excludes · (MIDDLE DOT, U+00B7) and © (COPYRIGHT SIGN, U+00A9) - that is, any Unicode character whose code point is greater than 0x7f (127) is "lossily" converted to a verbatim ASCII-range ? character, 0x3f (63).
You have two basic options:
Avoid string processing altogether and send an array of bytes: Convert the QPIGS substring to an array of (ASCII-range) byte values and append byte values 0xb7 and 0xa9:
Because .NET strings are Unicode strings (encoded as UTF-16LE code units), you can take advantage of the fact that the code-point range 0x0 - 0x7f coincides with the ASCII code-point range, so you can simply cast ASCII-range characters to [byte[]] (via a [char[]] cast):
# Results in the following byte array:
# [byte[]] (0x51, 0x50, 0x49, 0x47, 0x53, 0xb7, 0xa9, 0xd)
[byte[]] $bytes = [char[]] 'QPIGS' + 0xb7, 0xa9, [char] "`r"
$port.Write($bytes, 0, $bytes.Count)
Use the port's .Encoding property to specify a character encoding in which the string "QPIGS·©`r" results in the desired byte values:
In this case you need the Windows-1252 encoding, where · is represented as byte value 0xb7, and © as 0xa9, and all ASCII-range characters are represented by their usual byte values:
$port.Encoding = [System.Text.Encoding]::GetEncoding(1252)
$port.Write("QPIGS·©`r")

CoreFoundation UTF-16 un-paired surrogate

I'm trying to encode from UTF-16 to, say, UTF-32 using the Apple Core Foundation API:
cfString = CFStringCreateWithBytes(nullptr, str, strLen, kCFStringEncodingUTF16, FALSE);
auto range = CFRangeMake(0, CFStringGetLength(cfString));
CFStringGetBytes(cfString, range, kCFStringEncodingUTF32, 0, false, buffer, bufferSize, usedSize);
Most of the time that works, until the input buffer contains an unpaired surrogate, say U+DF9F; Core Foundation will simply return the output without the ill-formed characters.
So to be somewhat Unicode compliant, I have to detect that situation manually and follow the Unicode documentation to substitute the standard replacement character U+FFFD: http://www.unicode.org/versions/Unicode6.0.0/ch03.pdf
The same happens with other encodings: for example, with a 0x80 byte in the middle of UTF-8 input, CFStringCreateWithBytes always returns nullptr instead of pointing to the invalid character.
Is that expected behaviour or undefined behaviour of Core Foundation, or maybe there is a way to make CF report malformed input somehow?
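To illustrate what I mean by handling it manually, a rough pre-pass over the UTF-16 code units might look like this (sketch only; units and count stand for my input buffer in native byte order):

#include <stddef.h>
#include <stdint.h>

/*
 * Sketch: replace every unpaired surrogate code unit with U+FFFD
 * before handing the buffer to CFStringCreateWithBytes().
 */
static void replace_unpaired_surrogates(uint16_t *units, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        uint16_t u = units[i];

        if (u >= 0xD800 && u <= 0xDBFF) {
            /* High surrogate: only valid if a low surrogate follows. */
            if (i + 1 < count && units[i + 1] >= 0xDC00 && units[i + 1] <= 0xDFFF)
                i++;                    /* well-formed pair, keep both units */
            else
                units[i] = 0xFFFD;      /* unpaired high surrogate */
        } else if (u >= 0xDC00 && u <= 0xDFFF) {
            units[i] = 0xFFFD;          /* stray low surrogate */
        }
    }
}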
UPDATE:
I did exactly the following:
UInt8 str[] = {0x41, 0x00, 0x9f, 0xdf}; // corresponding to Unicode 'A' + an unpaired surrogate (UTF-16LE)
CFStringRef mystr = CFStringCreateWithBytes(nullptr, str, 4, kCFStringEncodingUTF16, false);
After that mystr has a length of 2 characters according to CFStringGetLength(), so it looks like the invalid character gets processed.
std::vector<char> str(7);
CFStringGetCString(mystr, &*str.begin(), str.size(), kCFStringEncodingUTF8);
That gives me false, so no conversion to UTF-8 is possible, and the Xcode debug watches show nothing for the string mystr.
So the output is nothing for UTF-8 and for the C string. OK, after that I checked conversion to UTF-32 with the get-bytes routine:
result = CFStringGetBytes(mystr, range, kCFStringEncodingUTF32BE, 0, false, buffer, bufferSize, usedSize);
That gives me usedSize=4, result=1, and the output contains 0x0041, so only the 'A' symbol was converted. That is why I'm thinking no substitution happened for the malformed surrogate.

perl bitwise AND and bitwise shifting

I was reading some example code snippet for the module Net::Pcap::Easy, and I came across this piece of code
my $l3protlen = ord substr $raw_bytes, 14, 1;
my $l3prot = $l3protlen & 0xf0 >> 2; # the protocol part
return unless $l3prot == 4; # return unless IPv4
my $l4prot = ord substr $packet, 23, 1;
return unless $l4prot == '7';
After doing a total hex dump of the raw packet $raw_bytes, I can see that this is an Ethernet frame, not a TCP/UDP packet. Can someone please explain what the above code does?
For parsing the frame, I looked up this page.
Now onto the Perl...
my $l3protlen = ord substr $raw_bytes, 14, 1;
Extract the 15th byte (character) from $raw_bytes, and convert to its ordinal value (e.g. a character 'A' would be converted to an integer 65 (0x41), assuming the character set is ASCII). This is how Perl can handle binary data as if it were a string (e.g. passing it to substr) but then let you get the binary values back out and handle them as numbers. (But remember TMTOWTDI.)
In the Ethernet frame, the first 14 bytes are the MAC header (6 bytes each for the destination and source MAC addresses, followed by the 2-byte EtherType, which was probably 0x0800 for IPv4 - you could have checked this). Following this, the 15th byte is the start of the Ethernet data payload: the first byte of this contains the Version (upper 4 bits) and the Header Length in DWORDs (lower 4 bits).
Now it looks to me like there is a bug in the next line of this sample code, but it may well normally work by a fluke!
my $l3prot = $l3protlen & 0xf0 >> 2; # the protocol part
In Perl, >> has higher precedence than &, so this will be equivalent to
my $l3prot = $l3protlen & (0xf0 >> 2);
or if you prefer
my $l3prot = $l3protlen & 0x3c;
So this extracts bits 2 - 5 from the $l3protlen value: the mask value 0x3c is 0011 1100 in binary. So for example a value of 0x86 (in binary, 1000 0110) would become 0x04 (binary 0000 0100).
In fact a 'normal' IPv4 value is 0x45, i.e. protocol type 4, header length 5 dwords. Mask that with 0x3c and you get... 4! But only by fluke: you have tested the top 2 bits of the length, not the protocol type!
This line should surely be
my $l3prot = ($l3protlen & 0xf0) >> 4;
(note brackets for precedence and a shift of 4 bits, not 2). (I found this same mistake in the CPAN documentation so I guess it's probably quite widely spread.)
return unless $l3prot == 4; # return unless IPv4
For IPv4 we expect this value to be 4 - if it isn't, jump out of the function right away. (So the wrong code above gives the result which lets this be interpreted as an IPv4 packet, but only by luck.)
my $l4prot = ord substr $packet, 23, 1;
Now extract the 24th byte and convert to ordinal value in the same way. This is the Protocol byte from the IP header:
return unless $l4prot == '7';
We expect this to be 7 - if it isn't jump out of the function right away. (According to IANA, 7 is "Core-based trees"... but I guess you know which protocols you are interested in!)
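As an aside, you can see the precedence trap outside Perl too. C gives >> the same higher precedence relative to &, so this little sketch (not part of the original script; the two bytes are just the first IPv4 header byte with header lengths 5 and 15) mirrors the buggy and corrected expressions:

#include <stdio.h>

int main(void)
{
    /* 0x45: version 4, IHL 5 (the common case); 0x4f: version 4, IHL 15 */
    unsigned char bytes[] = { 0x45, 0x4f };

    for (int i = 0; i < 2; i++) {
        int b = bytes[i];
        /* "b & 0xf0 >> 2" parses as "b & (0xf0 >> 2)", i.e. "b & 0x3c" */
        printf("byte 0x%02x: buggy -> %d, fixed -> %d\n",
               b, b & 0xf0 >> 2, (b & 0xf0) >> 4);
    }
    return 0;
}

For 0x45 both expressions happen to print 4 (the fluke described above); for 0x4f the buggy one prints 12 while the fixed one still prints 4.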

Convert text to binary and store in a single array in matlab

I need to convert the given text (not in file format) into binary values and store them in a single array that is to be given as input to another function in MATLAB.
Example:
Hi how are you ?
It is to be converted into binary and stored in an array. I have used the dec2bin() function but I did not succeed in getting the required output.
Sounds a bit like a trick question. In MATLAB, a character array (string) is just a different representation of 16-bit unsigned character codes.
>> str = 'Hi, how are you?'
str =
Hi, how are you?
>> whos str
  Name      Size            Bytes  Class    Attributes
  str       1x16               32  char
Note that the 16 characters occupy 32 bytes, or 2 bytes (16-bits) per character. From the documentation for char:
Valid codes range from 0 to 65535, where codes 0 through 127 correspond to 7-bit ASCII characters. The characters that MATLAB® can process (other than 7-bit ASCII characters) depend upon your current locale setting. To convert characters into a numeric array, use the double function.
Now, you could use double as it recommends to get the character codes into double arrays, but a minimal representation would simply involve uint16:
int16bStr = uint16(str)
To split this into bytes, typecast into 8-bit integers:
typecast(int16bStr,'uint8')
which yields 32 uint8 values (bytes), which are suitable for conversion to binary representation with dec2bin, if you want to see the binary (but these arrays are already binary data).
If you don't expect anything other than ASCII characters, just throw out the extra bits from the start:
>> int8bStr = uint8(str)
int8bStr =
    72  105   44   32  104  111  119   32   97  114  101   32  121  111  117   63
>> binStr = reshape(dec2bin(int8bStr.'),1,[])
binStr =
110011101110111001111111111111110000001001001011111011000000 <...snip...>

digits in long to base64 characters

I am working on a small task which requires some base64 encoding. I am trying to do it in my head but getting lost.
I have a 13-digit number in Java long format, say: 1294705313608, 1294705313594, 1294705313573.
I do some processing with it; basically I take this number, append some stuff to it, put it in a byte array and then convert it to base64 using:
String b64String = new sun.misc.BASE64Encoder().encodeBuffer(bArray);
Now, I know that for my original number the first 3 digits would never change. So 129 is constant in the numbers above. I want to find out how many chars corresponding to those digits would not change in the resulting base64 string.
Code to serialize long to the byte array. I ignore the first 2 bytes since they are always 0:
bArray[0] = (byte) (time >>> 40);
bArray[1] = (byte) (time >>> 32);
bArray[2] = (byte) (time >>> 24);
bArray[3] = (byte) (time >>> 16);
bArray[4] = (byte) (time >>> 8);
bArray[5] = (byte) (time >>> 0);
Thanks.
Notes:
I know that base64 takes 6 bits and makes one character out of them. So if the first 3 digits of the long do not change, how many chars would not change in base64?
This is NOT a HW assignment, but I am not very familiar with encoding.
1290000000000 is 10010110001011001111110111100100000000000 in binary.
1299999999999 is 10010111010101110000010011100011111111111 in binary.
Both are 41 bits long, and they differ after the first 7 bits. Your shift places bits 41-48 in the first byte, which will always be 00000001. The following byte will always be 00101100, 00101101, or 00101110. So you've got the leading 14 bits in common across all your possible array values, which (at 6 bits per encoded base64 char) means 2 characters in common in the encoded string.
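Here's a quick sketch that bears that out (not sun.misc.BASE64Encoder itself, just a derivation of the first two output characters from the same 6-byte big-endian layout as your bArray, using the standard base64 alphabet):

#include <stdio.h>
#include <stdint.h>

static const char B64[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

/* Derive only the first two base64 characters of the encoded 6-byte array. */
static void first_two_chars(uint64_t t, char out[3])
{
    uint8_t b0 = (uint8_t)(t >> 40);   /* same as bArray[0] */
    uint8_t b1 = (uint8_t)(t >> 32);   /* same as bArray[1] */

    out[0] = B64[b0 >> 2];                          /* bits 48-43 */
    out[1] = B64[((b0 & 0x3) << 4) | (b1 >> 4)];    /* bits 42-37 */
    out[2] = '\0';
}

int main(void)
{
    char lo[3], hi[3];

    first_two_chars(1290000000000ULL, lo);
    first_two_chars(1299999999999ULL, hi);
    printf("%s %s\n", lo, hi);   /* prints "AS AS": the first two characters never change */
    return 0;
}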
Appears you're on the right track. I think what you want to do is convert a long to a byte array, then convert the byte array to Base64.
How do I convert Long to byte[] and back in java shows you how to convert it to bytes.