Creating a 4-kilobyte data structure in SystemVerilog
How do I divide this 4 KB space into 128-bit locations?
Use a struct type in SystemVerilog. For example, a 512-bit data structure built from 128-bit parts:
struct packed {
bit [127:0] part1;
bit [127:0] part2;
bit [127:0] part3;
bit [127:0] part4;
} largePart_512;
Note that you normally access the fields as largePart_512.part1, largePart_512.part2, and so on. Because the struct is packed, you can also slice the whole thing as a 512-bit vector; the first declared member sits in the most significant bits:
part1 - largePart_512[511:384]
part2 - largePart_512[383:256]
part3 - largePart_512[255:128]
part4 - largePart_512[127:0]
Create a memory with each word 128 bits wide and the depth equal to the 4 KB capacity divided by 16 bytes per word, i.e. 256 words:
logic [127:0] mem [4096*8/128]; // 4096 bytes * 8 bits / 128 bits per word = 256 words
I am MQTT-ing a string from a Raspberry Pi (sitting in a field, supported by LTE internet, costing me $10/500 MB/month) to an MQTT broker. I am using the paho-mqtt client in Python to do this for me. The string looks something like "MM:DD:YYY HH:MM:SS, X1, X2, X3, , , , X24", and I am sending a new string every 30 seconds. X1 to X24 are floating-point numbers from 0 to 700 with 2-digit precision. I think this will cost me a lot of internet when I deploy it to 24/7 use. Is my data format good? What other data formats should I look at?
You can represent the Unix time as a 4-byte IEEE 754 float, and each of your readings as another 4-byte IEEE 754 float. So your time and 24 floats can be packed into 100 bytes with Python's struct.pack(). That looks like this:
import struct
import time
import random
# Synthesize some sample data - a time and 24 floats 0..700
data = [time.time()] + [ random.uniform(0, 700) for _ in range(24)]
# Pack as 25 IEEE754 floats of 4 bytes each
payload = struct.pack('!25f', *data)
print(len(payload)) # prints 100 (bytes)
Currently, you seem to be using:
19 bytes for your time and
around 7 bytes for each float including separators
So that's roughly 19 + 24*7 ≈ 190 bytes as you currently have it.
If you multiplied your floats by 100 and made them integers, you could encode each one as a 16-bit unsigned value (i.e. half the space of a 4-byte float); 0..65535 would then represent 0..655, which falls just short of your data range of 0..700. So, rather than 100, use a scale factor of 65535/700, i.e. 93.62. That gives 4 bytes for the time plus 24 samples of 2 bytes each, for a total of 52 bytes:
# Scale the data to the range 0..65535 and make into integers
smallerData = [data[0]] + [ int(93.62*data[i]) for i in range(1,25)]
payload = struct.pack('!f24H', *smallerData)
print(len(payload)) # prints 52 (bytes)
Obviously all the numbers above exclude MQTT protocol overhead.
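For completeness, here is a minimal sketch of the receiving side, assuming the subscriber gets the 52-byte payload produced above and undoes the same 93.62 scale factor:
import struct
import time
import random

# Build a sample 52-byte payload exactly as in the sender sketch above
data = [time.time()] + [random.uniform(0, 700) for _ in range(24)]
payload = struct.pack('!f24H', data[0], *[int(93.62 * x) for x in data[1:]])

# Receiving side: unpack one float32 timestamp and 24 uint16 samples,
# then undo the 65535/700 (= 93.62) scaling
fields = struct.unpack('!f24H', payload)
timestamp = fields[0]
samples = [x / 93.62 for x in fields[1:]]
print(round(timestamp), [round(s, 2) for s in samples[:3]])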
#include <stdio.h>

typedef unsigned char *byte_pointer;

/* Print each byte of an object in hex, starting at the lowest address */
void show_bytes(byte_pointer start, size_t len)
{
    size_t i;
    for (i = 0; i < len; i++)
        printf(" %.2x", start[i]);
    printf("\n");
}

void show_int(int x)
{
    show_bytes((byte_pointer) &x, sizeof(int));
}

void show_float(float x)
{
    show_bytes((byte_pointer) &x, sizeof(float));
}

void show_pointer(void *x)
{
    show_bytes((byte_pointer) &x, sizeof(void *));
}

int main(void)
{
    int a = 0x12345678;
    byte_pointer ap = (byte_pointer) &a;
    show_bytes(ap, 3);   /* print only the first 3 bytes of a */
    return 0;
}
(Solutions according to the CS:APP book)
Big endian: 12 34 56
Little endian: 78 56 34
I know systems have different conventions for storage allocation but if two systems use the same convention but are different endian why are the hex values different?
Endian-ness is an issue that arises when we use more than one storage location for a value/type, which we do because some things won't fit in a single storage location.
As soon as we use multiple storage locations for a single value that gives rise to the question of: What part of the value will we store in each storage location?
The first byte of a two-byte item will have a lower address than the second byte; in particular, the address of the second byte will be at +1 from the address of the first byte.
Storing a two-byte item in two bytes of storage, do we store the most significant byte first and the least significant byte second, or vice versa?
We choose to use directly consecutive bytes for the two bytes of the two-byte item, so no matter which (endian) way we choose to store such an item, we refer to the whole two-byte item by the lower address (the address of its first byte).
We can express these storage choices with formulas; here item[0] refers to the first byte while item[1] refers to the second byte.
item[0] = value >> 8 // also value / 256
item[1] = value & 0xFF // also value % 256
value = (item[0]<<8) | item[1] // also item[0]*256 | item[1]
--vs--
item[0] = value & 0xFF // also value % 256
item[1] = value >> 8 // also value / 256
value = item[0] | (item[1]<<8) // also item[0] | item[1]*256
The first set of formulas is for big endian, and the second for little endian.
By these formulas, it doesn't matter in what order we access memory (item[0] first, then item[1], or vice versa, or both at the same time, which is common in hardware), as long as the formulas for one endianness are used consistently.
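Here is a minimal Python sketch of both sets of formulas, using the 16-bit value 0x1234 as an example:
value = 0x1234

# Big endian: most significant byte at the lower address (item[0])
big = [value >> 8, value & 0xFF]       # [0x12, 0x34]
# Little endian: least significant byte at the lower address (item[0])
little = [value & 0xFF, value >> 8]    # [0x34, 0x12]

# Reassembling with the matching formula recovers the same value either way
assert (big[0] << 8) | big[1] == value
assert little[0] | (little[1] << 8) == value
print([hex(b) for b in big], [hex(b) for b in little])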
If the item in question is a four-byte value, then there are 4! = 24 possible byte orderings, though only two of them are truly sensible.
For efficiency, the hardware offers us multibyte memory access in one instruction (and with one reference, namely to the lowest address of the multibyte item), and therefore, the hardware itself needs to define and consistently use one of the two possible/reasonable orderings.
If the hardware did not offer multibyte memory access, then the ordering would be entirely up to the software program itself to define (accessing memory one byte at a time), and the program could choose big or little endian, even differently for each variable, as long as it consistently accesses the multiple bytes of memory in the same manner to reassemble the values stored there.
In a similar manner, when we define a structure of multiple items (e.g. struct point { int x; int y; }), software chooses whether x comes first or y comes first in memory ordering. However, since programmers (and compilers) will still choose to use hardware instructions to access individual fields such as x in one go, the hardware's endianness still applies within each field.
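As a small Python sketch of that last point, here is the struct point { int x; int y; } example packed with x = 1 and y = 2; the field order (x before y) is fixed by the layout we chose, while the byte order within each int depends on endianness:
import struct

# x = 1, y = 2 laid out as two consecutive 4-byte ints, x first
print(struct.pack('<ii', 1, 2).hex())  # little endian: '0100000002000000' (LSB of x at the lowest address)
print(struct.pack('>ii', 1, 2).hex())  # big endian:    '0000000100000002' (MSB of x at the lowest address)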
I am having a hard time getting a valid value out of the HR characteristics. I am clearly not handling the values properly in Dart.
Example Data:
List<int> value = [22, 56, 55, 4, 7, 3];
Flags Field:
I convert the first item in the main byte array to binary to get the flags
22 = 10110 (as binary)
this leads me to believe that it is U16 (bit[0] == 1)
HR Value:
Because it is 16 bit I am trying to get the bytes at indexes 1 and 2. I then try to buffer them into a ByteData. From there I convert them to Uint16 with the endianness set to little. This is giving me a value of 14136. Clearly I am missing something fundamental about how this is supposed to work.
Any help in clearing up what I am not understanding about how to process the 16 bit BLE values would be much appreciated.
Thank you.
/*
Constructor - constructs the heart rate value from a BLE message
*/
HeartRate(List<int> values) {
  var flags = values[0];
  var s = flags.toRadixString(2);
  List<String> flagsArray = s.split("");
  int offset = 0;
  // Determine whether it is U16 or not
  if (flagsArray[0] == "0") {
    // Since it is Uint8 I will only get the first value
    var hr = values[1];
    print(hr);
  } else {
    // Since UTF 16 is two bytes I need to combine them
    // Create a buffer with the first two bytes after the flags
    var buffer = new Uint8List.fromList(values.sublist(1, 3)).buffer;
    var hrBuffer = new ByteData.view(buffer);
    var hr = hrBuffer.getUint16(0, Endian.little);
    print(hr);
  }
}
Your updated data looks much better. Here's how to decode it, and the process you'd use to figure this out yourself from scratch.
Determine the format
The Bluetooth site has been reorganized recently (~2020), and in particular they got rid of some of the document viewers, which makes things much harder to find and read IMO. All the documentation is in the Heart Rate Service (HRS) document, linked from the main GATT page, but for just parsing the format, the best source I know of is the XML for org.bluetooth.characteristic.heart_rate_measurement. (Since the reorganization, I don't know how you can find this page without searching for it. It doesn't seem to be linked anymore.)
Byte 0 - Flags: 22 (0001 0110)
Bits are numbered from LSB (0) to MSB (7).
Bit 0 - Heart Rate Value Format: 0 => UINT8 beats per minute
Bits 1-2 - Sensor Contact Status: 11 => Supported and detected
Bit 3 - Energy Expended Status: 0 => Not present
Bit 4 - RR-Interval: 1 => One or more values are present
The meaning of RR-intervals is explained in the HRS document, linked above. It sounds like you just want the heart rate value, so I won't go into them here.
Byte 1 - UINT8 BPM: 56
Since Bit 0 of flags was 0, this is the beats per minute. 56.
Bytes 2-5 - UINT16 RR Intervals: 55, 4, 7, 3
You probably don't care about these, but there are two UINT16 values here (there can be an arbitrary number of RR-Interval values). BLE is always little-endian, so [55, 4] is 1,079 (55 + (4<<8)), and [7, 3] is 775 (7 + (3<<8)).
I believe the docs are a little confusing on this one. The XML suggests that these values are in seconds, but the comments say "Resolution of 1/1024 second." The normal way to express this would be <BinaryExponent>-10</BinaryExponent>, and I'm certain that's what they meant. So these would be:
RR0: 1.05s (1079/1024)
RR1: 0.76s (775/1024)
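Here is a minimal Python sketch of that decoding logic, applied to the example packet [22, 56, 55, 4, 7, 3] (the variable names are mine, not from the spec):
value = [22, 56, 55, 4, 7, 3]

flags = value[0]
hr_is_uint16 = flags & 0x01          # bit 0: heart rate value format
rr_present = (flags >> 4) & 0x01     # bit 4: RR-intervals present
offset = 1

if hr_is_uint16:
    # UINT16 heart rate, little endian
    bpm = value[offset] | (value[offset + 1] << 8)
    offset += 2
else:
    # UINT8 heart rate (in this example, bit 0 of the flags is 0)
    bpm = value[offset]
    offset += 1

# Energy Expended is skipped here because bit 3 of the flags is 0 in this example
rr_intervals = []
if rr_present:
    # Remaining bytes are little-endian UINT16 RR-intervals in units of 1/1024 s
    while offset + 1 < len(value):
        raw = value[offset] | (value[offset + 1] << 8)
        rr_intervals.append(raw / 1024)
        offset += 2

print(bpm, rr_intervals)   # 56 [1.0537109375, 0.7568359375]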
I know that the offset is: block size = 2^n (offset = n). But I have seen that when block size = 8 bytes we do 8 = 2^n, so offset = n = 3 bits, which is correct; but when block size = 1 word, I have seen 1 = 2^n (offset = n = 0). Don't we need to convert the word to bytes if we know that the cache has 32-bit memory addresses? (So we have 1 word = 32 bits = 4 bytes, 4 = 2^n, and the offset is 2 in that case.)
You are right; it is intuitive that one should know that a word is 4 bytes on a 32-bit processor and 8 bytes on a 64-bit one.
The byte offset can also be found this way: assuming the address size is 32 bits, then
byte_offset = 32 - tag_bits - set_bits.
To solve problems of this kind it's good to know some useful parameters and equations.
Parameter to know
C = cache capacity
b = block size
B = number of blocks
N = degree of associativity
S = number of sets
tag_bits
set_bits (also called index)
byte_offset
v = valid bits
Equations to know
B = C/b
S = B/N
b = 2^(byte_offset)
S = 2^(set_bits)
Memory Address
|___tag________|____set___|___byte offset_|
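As a worked example, here is a small Python sketch with assumed parameters (a 32 KB, 4-way set-associative cache with 8-byte blocks and 32-bit addresses; the numbers are only for illustration):
from math import log2

address_bits = 32           # assumed 32-bit addresses
C = 32 * 1024               # cache capacity: 32 KB (assumed)
b = 8                       # block size: 8 bytes, as in the question
N = 4                       # degree of associativity (assumed 4-way)

B = C // b                  # number of blocks = 4096
S = B // N                  # number of sets   = 1024
byte_offset = int(log2(b))  # b = 2^byte_offset -> 3 bits
set_bits = int(log2(S))     # S = 2^set_bits    -> 10 bits
tag_bits = address_bits - set_bits - byte_offset   # 32 - 10 - 3 = 19 bits

print(tag_bits, set_bits, byte_offset)   # 19 10 3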
I am receiving EEG data from a 24 bit ADC over serial. The ADC data is transmitting in 3 bytes from MSB to LSB. The full packet is 21 bytes:
The first byte is the start byte - 0xFF (255 in decimal)
Then packet number byte.
Then the next 3 bytes are the 24 bit ADC value broken into MSB LSB2 LSB1
I can parse the data fine, but re-constructing a 2's complement signed int32 number is causing issues. The values I am getting out certainly don't reflect what the ADC should be giving out.
Below are the lines to read and parse the 504 samples (which give me 24 ADC values, since 504 samples / 21 bytes per packet = 24 values). I have tried uint8 instead of uchar with similar results (when I try int8 I get an 'invalid specified precision' error).
comEEGSMT = serial(com, 'BaudRate', 3000000);
fopen(comEEGSMT);
rawData(1:504) = fread(comEEGSMT, 504, 'uchar');   % read 24 packets of 21 bytes
fclose(comEEGSMT);
startPackets = find(rawData == 255);               % locate the 0xFF start bytes
bytes = rawData([startPackets+2 startPackets+3 startPackets+4]);   % MSB, LSB2, LSB1
I have tried the following method to reconstruct the value:
ADC_value = bytes(:,1)*256^2 + bytes(:,2)*256 + bytes(:,3);
and the following line is the formula to convert the above number to volts:
ADC_value_volts = ADC_value*(5/3)*(1/(2^32));
The values are in the range of 4000 - 8000 microvolts with large jumps in value. The values SHOULD be in the range of 200 - 600 microvolts with small changes.
I have found other questions relating to similar issues, but have had no success trying the proposed solutions such as in the link below:
https://uk.mathworks.com/matlabcentral/answers/137965-concatenate-3-bytes-array-of-real-time-serial-data-into-single-precision
Any help would be very much appreciated as I've been stuck on this for quite long.
Thanks Mark
Starting with ADC_value as int32 with value 0, then:
ADC_value |= MSB << 16;
ADC_value |= LSB2 << 8;
ADC_value |= LSB1;
And then, to find out the corresponding volts value, supposing your ADC has a reference voltage VREF, in volts (e.g. 5.0V):
ADC_value_volts = (ADC_value * VREF)/2^24
since your converter is 24 bits, not 32.
Note the above expressions are in C language equivalent, not Matlab.
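Here is a minimal Python sketch of that reconstruction, including the two's-complement sign extension a signed 24-bit value needs; the sample byte values and VREF = 5.0 V are assumptions for illustration only, using the VREF/2^24 formula above:
# One 24-bit sample split into three bytes, MSB first (assumed example values)
MSB, LSB2, LSB1 = 0xFF, 0xFE, 0x0C

raw = (MSB << 16) | (LSB2 << 8) | LSB1   # unsigned 0..2^24-1

# The ADC output is 24-bit two's complement, so sign-extend when the top bit is set
if raw & 0x800000:
    raw -= 1 << 24

VREF = 5.0                               # assumed reference voltage in volts
volts = raw * VREF / (1 << 24)
print(raw, volts)                        # -500 -0.000149...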
EDIT:
The ADC data sheet tells us the PGA gain can be set to the following values: 1, 2, 4, 6, 8, 12, 24, one value at a time for each channel.
The FSR (full-scale range) of measurement is (2*VREF)/Gain = 5/3 for Gain = 6 (eq. (5), page 23), so this must be accounted for in the expression computing the volts values. (These can be verified if you have access to the hardware and can make some measurements.)
The data coming out of the ADC is already in two's-complement binary form, 24 bits.
The odd thing is that the data sheet counts bits starting from 1, not 0, which is why it speaks of shifting by "17" instead of 16; in code this is in fact a shift of 16 (see fig. 47, page 42).
So the formula for ADC_value_volts should be:
ADC_value_volts = (ADC_value * FSR/(2^23))/3 (1 LSB = FSR/(2^23), pg. 37)
If there are other calculations or modifications from the original design, these must be explained by the provider.
If the provider is not helpful, it may be worth switching to another one...