I'm using the LSM303AGR accelerometer (datasheet) with the STM32 I2C HAL library, but I can't figure out the sensitivity.
Here is my code to set configuration registers:
void LSM303AGR_Init() {
    uint8_t Data[2] = {0};

    Data[0] = 0x20;    // Accelerometer control register 1 address
    Data[1] = 0x57;    // ODR = 100 Hz
    HAL_I2C_Master_Transmit(&hi2c1, 0x19 << 1, Data, 2, 50);

    Data[0] = 0x23;
    Data[1] = 0x20;    // +/-8 g and normal mode
    HAL_I2C_Master_Transmit(&hi2c1, 0x19 << 1, Data, 2, 50);
}
The first I2C write, to register 0x20, is supposed to put the sensor in normal mode with an output data rate of 100 Hz; the second write is supposed to set the full scale to +/-8 g.
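For reference, decoding those values against the datasheet register maps: 0x57 in CTRL_REG1_A (0x20) is 0b01010111, i.e. ODR[3:0] = 0101 (100 Hz), LPen = 0, and all three axes enabled; 0x20 in CTRL_REG4_A (0x23) is 0b00100000, i.e. FS[1:0] = 10 (+/-8 g) with HR = 0, which together with LPen = 0 selects normal mode.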
Also, here is my code to read the XYZ 16-bit values and convert them to mg (15.63 is the sensitivity per the datasheet):
void LSM303AGR_AccReadXYZ(float* pData) {
    HAL_I2C_Master_Transmit(&hi2c1, (0x19 << 1) | 0x01, &accXYZregAutoRead, 1, 50);
    HAL_I2C_Master_Receive(&hi2c1, (0x19 << 1) | 0x01, buffer, 6, 50);

    for (int i = 0; i < 3; i++) {
        pData[i] = (float)((int16_t)((uint16_t)buffer[2*i+1] << 8) | buffer[2*i]) / 15.63; // Readings in mg
    }
}
I know for a fact that I am writing to and reading from the correct registers, based on my debugging. However, with the setup above I get readings of about 250 mg on a tabletop (on the z-axis, of course; the others are about zero) using a sensitivity of 15.63, but when I change 15.63 to 3.9 (datasheet pg 13) I get about 1000 mg on the z-axis, which is correct! The problem is, my registers are set to +/-8 g (datasheet pg 49) and normal power mode (datasheet pg 47), and according to the datasheet the sensitivity should be 15.63, not 3.9!
Any help would be much appreciated!
You are using normal mode, so your data is 10-bit and you should shift your reads right by 6 (the values are left-justified).
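For example, a minimal sketch of the corrected conversion (reusing the buffer/pData names from the question; 15.63 mg/LSB is the question's datasheet sensitivity for +/-8 g in normal mode):

for (int i = 0; i < 3; i++) {
    // Assemble the left-justified 16-bit value, then arithmetic-shift it
    // right by 6 to recover the signed 10-bit normal-mode reading.
    int16_t raw = (int16_t)(((uint16_t)buffer[2*i+1] << 8) | buffer[2*i]);
    pData[i] = (float)(raw >> 6) * 15.63f; // mg: multiply by sensitivity, not divide
}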
I have used STM32CubeMX/IDE to generate a USB HID project for the STM32F3DISCOVERY board.
The USB BTABLE register is zero, indicating that the BTABLE is at the start of the Packet Memory Area.
(I zero the whole PMA at program start, to avoid stale values.)
Just before the execution of the __HAL_RCC_USB_CLK_ENABLE macro (in HAL_PCD_MspInit() in usbd_conf.c) the values of the BTABLE (at index zero onwards) in the PMA are:
After that macro is executed, the values are:
The macro expands to:
do { \
    volatile uint32_t tmpreg; \
    ((((RCC_TypeDef *) ((0x40000000UL + 0x00020000UL) + 0x00001000UL))->APB1ENR) |= ((0x1UL << (23U))));\
    /* Delay after an RCC peripheral clock enabling */ \
    tmpreg = ((((RCC_TypeDef *) ((0x40000000UL + 0x00020000UL) + 0x00001000UL))->APB1ENR) & ((0x1UL << (23U))));\
    (void)tmpreg; \
} while(0U)
How does this macro cause the BTABLE to be initialised?
(I need pma[12] to be 0x100 instead of 0x0 as I want to use endpoint 3 for the HID interface in a composite device. I am using this simple HID device to test the use of a different endpoint. Changing 0x81 to 0x83 in USBD_LL_Init() and #define HID_EPIN_ADDR are not sufficient to change the value of pma[12]. The incorrect TX pointer at pma[12] is used and corrupt data is observed in wireshark.)
Update:
If I add code to manually set pma[12] to 0x100:
HAL_StatusTypeDef HAL_PCDEx_PMAConfig(PCD_HandleTypeDef *hpcd,
                                      uint16_t ep_addr,
                                      uint16_t ep_kind,
                                      uint32_t pmaadress)
...
    /* Here we check if the endpoint is single or double buffer */
    if (ep_kind == PCD_SNG_BUF)
    {
        /* Single buffer */
        ep->doublebuffer = 0U;
        /* Configure the PMA */
        ep->pmaadress = (uint16_t)pmaadress;

        // correct PMA BTABLE
        uint32_t *btable = (uint32_t *) USB_PMAADDR; // Test this.
        if (ep->is_in) {
            btable[ep->num * 4] = pmaadress;
        }
    }
The value at pma[12] does get set, but it later gets overwritten:
__HAL_RCC_USB_CLK_ENABLE() enables the clock for the USB block. Before it is enabled, all of the peripheral's locations read as zeroes. After the clock is enabled, the actual PMA content becomes visible: whatever was written there before reset, or random garbage left after power-up. So executing __HAL_RCC_USB_CLK_ENABLE() has nothing to do with your problem.
I don't know where the TX buffer address for endpoint 3 gets overwritten, but I guess the Cube sets it when it decides to send data on the endpoint. I am not familiar with the Cube; does it have an API to send a USB packet?
Also, double-check that your pma array has the right definition. On F1, and likely F3, there is a 2-byte value at each 32-bit location.
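For reference, a sketch of how such a pma view lines up with the BTABLE layout on these parts (illustrative only, assuming BTABLE = 0 as in the question):

volatile uint32_t *pma = (volatile uint32_t *)USB_PMAADDR;

/* Each 16-bit PMA half-word occupies one 32-bit slot from the CPU side,
 * so endpoint n's BTABLE entry starts at index n*4:
 *   pma[n*4 + 0] = ADDR_TX,  pma[n*4 + 1] = COUNT_TX,
 *   pma[n*4 + 2] = ADDR_RX,  pma[n*4 + 3] = COUNT_RX
 * which is why pma[12] is ADDR_TX for endpoint 3. */
uint16_t ep3_addr_tx = (uint16_t)pma[3 * 4];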
UPD: Sorry, I saw this question first, but your real problem is why the TX address gets overwritten or is not set up correctly.
I need to collect voice pieces from a continuous audio stream. I need to later process the piece of the user's voice that has just been said (not for speech recognition). What I am focusing on is only segmenting the voice based on its loudness.
If, after at least 1 second of silence, the voice becomes loud enough for a while and then silent again for at least 1 second, I treat that as a sentence and segment the voice there.
I just know I can get raw audio data from the AudioClip created by Microphone.Start(). I want to write some code like this:
void Start()
{
    audio = Microphone.Start(deviceName, true, 10, 16000);
}

void Update()
{
    audio.GetData(fdata, 0);
    for (int i = 0; i < fdata.Length; i++)
    {
        u16data[i] = Convert.ToUInt16(fdata[i] * 65535);
    }
    // ... Process u16data
}
But what I'm not sure is:
Every frame, when I call audio.GetData(fdata, 0), do I get the latest 10 seconds of sound data if fdata is big enough, or less than 10 seconds if it is not?
fdata is a float array, and what I need is a 16 kHz, 16-bit PCM buffer. Is it right to convert the data like u16data[i] = fdata[i] * 65535?
What is the right way to detect loud moments and silent moments in fdata?
No, you have to read starting at the current position within the AudioClip, using Microphone.GetPosition:
Get the position in samples of the recording.
and pass the obtained index to AudioClip.GetData:
Use the offsetSamples parameter to start the read from a specific position in the clip
fdata = new float[clip.samples * clip.channels];
var currentIndex = Microphone.GetPosition(null);
audio.GetData(fdata, currentIndex);
I don't understand what exactly you are converting this for. fdata will contain
floats ranging from -1.0f to 1.0f (AudioClip.GetData)
so if for some reason you need to get values between short.MinValue (= -32768) and short.MaxValue (= 32767) then yes, you can do that using
u16data[i] = Convert.ToUInt16(fdata[i] * short.MaxValue);
note however that Convert.ToUInt16(float):
value, rounded to the nearest 16-bit unsigned integer. If value is halfway between two whole numbers, the even number is returned; that is, 4.5 is converted to 4, and 5.5 is converted to 6.
you might rather want to use Mathf.RoundToInt first so that a value of e.g. 4.5 also rounds up.
u16data[i] = Convert.ToUInt16(Mathf.RoundToInt(fdata[i] * short.MaxValue));
Your naming, however, suggests that you are actually trying to get unsigned values of type ushort (aka UInt16). For this you cannot have negative values! So you have to shift the float values up in order to map the range (-1.0f | 1.0f) to the range (0.0f | 1.0f) before multiplying by ushort.MaxValue (= 65535):
u16data[i] = Convert.ToUInt16(Mathf.RoundToInt((fdata[i] + 1) / 2 * ushort.MaxValue));
What you receive from AudioClip.GetData are the amplitude values of the audio track, between -1.0f and 1.0f.
so a "loud" moment would be where
Mathf.Abs(fdata[i]) >= aCertainLoudThreshold;
a "silent" moment would be where
Mathf.Abs(fdata[i]) <= aCertainSilentThreshold;
where aCertainSilentThreshold might e.g. be 0.2f and aCertainLoudThreshold might e.g. be 0.8f.
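To make the question's segmentation rule concrete, here is a language-agnostic sketch in C of threshold-based segmentation over windows of samples (the window handling, the 0.02f RMS threshold, and all names are illustrative assumptions; the question's 16 kHz mono format is assumed, and the leading-silence condition is handled by the same silent-run counter):

#include <math.h>
#include <stdbool.h>

#define SAMPLE_RATE 16000
#define SILENCE_RMS 0.02f          /* assumed silence threshold */
#define MIN_SILENCE SAMPLE_RATE    /* 1 second of silence, in samples */

/* Returns true when a window of samples counts as silent (RMS loudness). */
static bool window_is_silent(const float *samples, int n)
{
    float sum = 0.0f;
    for (int i = 0; i < n; i++)
        sum += samples[i] * samples[i];
    return sqrtf(sum / (float)n) < SILENCE_RMS;
}

/* Feed consecutive windows; reports a segment boundary once the voice
 * has been loud and then silent for at least MIN_SILENCE samples. */
static bool feed_window(const float *samples, int n)
{
    static int  silent_run  = 0;      /* consecutive silent samples */
    static bool heard_voice = false;

    if (window_is_silent(samples, n)) {
        silent_run += n;
        if (heard_voice && silent_run >= MIN_SILENCE) {
            heard_voice = false;      /* end of a sentence */
            return true;
        }
    } else {
        heard_voice = true;
        silent_run  = 0;
    }
    return false;
}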
I am trying to interface an A71CH with a Raspberry Pi 3 over I2C. The device requires repeated starts, and when a read request is made, the first byte it sends is always the length of the whole message. When making a read, instead of reading a fixed-size message, I want to read that first byte and then have the master NACK the slave once the number of bytes it indicates have been received. I used the following code, but could not get the results I expected, because it only reads one byte and then sends a NACK, as you can see below.
struct i2c_rdwr_ioctl_data packets;
struct i2c_msg messages[2];
int r = 0;
int i = 0;

if (bus != I2C_BUS_0) // change if bus 0 is not the correct bus
{
    printf("axI2CWriteRead on wrong bus %x (addr %x)\n", bus, addr);
}

messages[0].addr  = axSmDevice_addr;
messages[0].flags = 0;
messages[0].len   = txLen;
messages[0].buf   = pTx;

// NOTE:
// By setting the 'I2C_M_RECV_LEN' bit in 'messages[1].flags' one ensures
// the I2C Block Read feature is used.
messages[1].addr   = axSmDevice_addr;
messages[1].flags  = I2C_M_RD | I2C_M_RECV_LEN | I2C_M_IGNORE_NAK;
messages[1].len    = 256;
messages[1].buf    = pRx;
messages[1].buf[0] = 1;

// NOTE:
// By passing the two message structures via the packets structure as
// a parameter to the ioctl call one ensures a Repeated Start is triggered.
packets.msgs  = messages;
packets.nmsgs = 2;

// Send the request to the kernel and get the result back
r = ioctl(axSmDevice, I2C_RDWR, &packets);
Is there any way to make variable-sized I2C reads? What can I do to make this work? Thanks for looking.
The Raspberry Pi doesn't support SMBus Block Reads; the only way to overcome this is to bit-bang the protocol on GPIO pins. As @Ian Abbott mentioned above, I managed to modify the pigpio bbI2CZip function to fit my needs by checking the first byte of the received message and updating the read length afterwards.
I had a similar issue with the rpi3. I wanted to read exactly 32 bytes of data from a register on a slave device, but i2c_smbus_read_block_data() was returning -71 and errno 71 EPROTO.
The solution was to use i2c_smbus_read_i2c_block_data() instead of i2c_smbus_read_block_data().
/* Until kernel 2.6.22, the length is hardcoded to 32 bytes. If you
ask for less than 32 bytes, your code will only work with kernels
2.6.23 and later. */
extern __s32 i2c_smbus_read_i2c_block_data(int file, __u8 command, __u8 length,
__u8 *values);
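For illustration, a minimal usage sketch of that call (the bus number, device address 0x48, and command byte 0x00 are assumptions; depending on your i2c-tools version the helper lives in <i2c/smbus.h> from libi2c, or as a static inline in older <linux/i2c-dev.h>):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>
#include <i2c/smbus.h>

int main(void)
{
    int file = open("/dev/i2c-1", O_RDWR);            /* bus 1 on the Pi */
    if (file < 0 || ioctl(file, I2C_SLAVE, 0x48) < 0) /* assumed address */
        return 1;

    __u8 values[32];
    /* Read exactly 32 bytes starting at (assumed) register 0x00. */
    __s32 n = i2c_smbus_read_i2c_block_data(file, 0x00, 32, values);
    if (n < 0)
        perror("i2c_smbus_read_i2c_block_data");
    else
        printf("read %d bytes\n", n);
    return 0;
}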
A question/problem for anyone experienced with Xilinx Vivado HLS and FPGA design:
I need help reducing the utilization numbers of a design within the confines of HLS (i.e. can't just redo the design in an HDL). I am targeting the Zedboard (Zynq 7020).
I'm trying to implement 2048-bit RSA in HLS, using the Tenca-Koç multiple-word radix-2 Montgomery multiplication algorithm (more algorithm details here).
I wrote this algorithm in HLS and it works in simulation and in C/RTL cosim. My algorithm is here:
#define MWR2MM_m 2048 // Bit-length of operands
#define MWR2MM_w 8    // Word size
#define MWR2MM_e 257  // Number of words per operand

// Type definitions
typedef ap_uint<1> bit_t;              // 1-bit scan
typedef ap_uint< MWR2MM_w > word_t;    // 8-bit words
typedef ap_uint< MWR2MM_m > rsaSize_t; // m-bit operand size

/*
 * Multiple-word radix 2 Montgomery multiplication using carry-propagate adder
 */
void mwr2mm_cpa(rsaSize_t X, rsaSize_t Yin, rsaSize_t Min, rsaSize_t* out)
{
    // extend operands by 2 extra words of 0
    ap_uint<MWR2MM_m + 2*MWR2MM_w> Y = Yin;
    ap_uint<MWR2MM_m + 2*MWR2MM_w> M = Min;
    ap_uint<MWR2MM_m + 2*MWR2MM_w> S = 0;

    ap_uint<2> C = 0; // two carry bits
    bit_t qi = 0;     // an intermediate result bit

    // Store concatenations in a temporary variable to eliminate HLS compiler
    // warnings about shift count. It needs w+2 bits to hold the sum of three
    // w-bit words plus the carry, so that range(9,8) is in bounds.
    ap_uint<MWR2MM_w + 2> temp_concat = 0;

    // scan X bit by bit
    for (int i = 0; i < MWR2MM_m; i++)
    {
        qi = (X[i]*Y[0]) xor S[0];

        // C gets the top two bits of temp_concat, the j'th word of S gets the bottom 8 bits
        temp_concat = X[i]*Y.range(MWR2MM_w-1,0) + qi*M.range(MWR2MM_w-1,0) + S.range(MWR2MM_w-1,0);
        C = temp_concat.range(9,8);
        S.range(MWR2MM_w-1,0) = temp_concat.range(7,0);

        // scan Y and M word by word, for each bit of X
        for (int j = 1; j <= MWR2MM_e; j++)
        {
            temp_concat = C + X[i]*Y.range(MWR2MM_w*j+(MWR2MM_w-1), MWR2MM_w*j) + qi*M.range(MWR2MM_w*j+(MWR2MM_w-1), MWR2MM_w*j) + S.range(MWR2MM_w*j+(MWR2MM_w-1), MWR2MM_w*j);
            C = temp_concat.range(9,8);
            S.range(MWR2MM_w*j+(MWR2MM_w-1), MWR2MM_w*j) = temp_concat.range(7,0);
            S.range(MWR2MM_w*(j-1)+(MWR2MM_w-1), MWR2MM_w*(j-1)) = (S.bit(MWR2MM_w*j), S.range( MWR2MM_w*(j-1)+(MWR2MM_w-1), MWR2MM_w*(j-1)+1));
        }
        S.range(S.length()-1, S.length()-MWR2MM_w) = 0;
        C = 0;
    }

    // if the final partial sum is greater than the modulus, bring it back into range
    if (S >= M)
        S -= M;
    *out = S;
}
Unfortunately, the LUT utilization is huge.
This is problematic because I need to be able to fit multiple of these blocks in hardware as AXI4-Lite slaves.
Could someone please provide a few suggestions as to how I can reduce the LUT utilization, WITHIN THE CONFINES OF HLS?
I've already tried the following:
Experimenting with different word lengths
Switching the top-level inputs to arrays so they are mapped to BRAM (i.e. not using ap_uint<2048>, but instead ap_uint<MWR2MM_w> foo[MWR2MM_e]); see the sketch after this list
Experimenting with all sorts of directives: compartmentalizing into multiple inline functions, dataflow architecture, resource limits on lshr, etc.
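For reference, a sketch of the array-based top level from the second bullet, reusing the typedefs above (the RESOURCE pragmas are an assumption about how to force BRAM mapping, not a verified fix):

// Word-serial top level: operands arrive as arrays of w-bit words, which
// HLS can map onto BRAM instead of building 2048-bit-wide shift/mux logic.
void mwr2mm_cpa_words(word_t X[MWR2MM_e],
                      word_t Y[MWR2MM_e],
                      word_t M[MWR2MM_e],
                      word_t S[MWR2MM_e])
{
#pragma HLS RESOURCE variable=Y core=RAM_1P_BRAM
#pragma HLS RESOURCE variable=M core=RAM_1P_BRAM
#pragma HLS RESOURCE variable=S core=RAM_1P_BRAM
    // ... same word-by-word loop as above, but indexing Y[j], M[j], S[j]
    // and extracting bit (i % MWR2MM_w) of X[i / MWR2MM_w], instead of
    // taking .range() slices of 2064-bit-wide registers.
}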
However, nothing really drives the LUT utilization down in a meaningful way. Is there a glaringly obvious way that I could reduce the utilization that is apparent to anyone?
In particular, I've seen papers on implementations of the mwr2mm algorithm that only use one DSP block and one BRAM. Is this even worth attempting to implement using HLS? Or is there no way that I can actually control the resources that the algorithm is mapped to without describing it in HDL?
Thanks for the help.
I'm looking into developing an iPhone app that will potentially involve a "simple" analysis of audio it receives from the standard phone mic. Specifically, I am interested in the highs and lows the mic picks up; everything in between is really irrelevant to me. Is there an app that does this already (just so I can see what it's capable of)? And where should I look to get started on such code? Thanks for your help.
Look in the Audio Queue framework. This is what I use to get a high water mark:
AudioQueueRef audioQueue; // Imagine this is correctly set up
UInt32 dataSize = sizeof(AudioQueueLevelMeterState) * recordFormat.mChannelsPerFrame;
AudioQueueLevelMeterState *levels = (AudioQueueLevelMeterState *)malloc(dataSize);
float channelAvg = 0;
OSStatus rc = AudioQueueGetProperty(audioQueue, kAudioQueueProperty_CurrentLevelMeter, levels, &dataSize);
if (rc) {
    NSLog(@"AudioQueueGetProperty(CurrentLevelMeter) returned %d", (int)rc);
} else {
    for (int i = 0; i < recordFormat.mChannelsPerFrame; i++) {
        channelAvg += levels[i].mPeakPower;
    }
}
free(levels);
// This works because one channel always has an mAveragePower of 0.
return channelAvg;
You can get peak power in either dB Free Scale (with kAudioQueueProperty_CurrentLevelMeterDB) or simply as a float in the interval [0.0, 1.0] (with kAudioQueueProperty_CurrentLevelMeter).
Don't forget to activate level metering for AudioQueue first:
UInt32 d = 1;
OSStatus status = AudioQueueSetProperty(mQueue, kAudioQueueProperty_EnableLevelMetering, &d, sizeof(UInt32));
Check the 'SpeakHere' sample code. It will show you how to record audio using the AudioQueue API. It also contains some code to analyze the audio in real time to show a level meter.
You might actually be able to use most of that level meter code to respond to 'highs' and 'lows'.
The AurioTouch example code performs Fourier analysis
on the mic input. Could be a good starting point:
https://developer.apple.com/iPhone/library/samplecode/aurioTouch/index.html
Probably overkill for your application.