I am currently building a system around a digital camera; its resolution is 384x288, 4:2:2 YCbCr. The camera outputs BT.656 with embedded sync (no hardware sync signals).
I configured the STM32 DCMI for embedded sync and it didn't work. Then I started to debug it and found what looks like a weird STM32 DCMI issue:
The STM32 DCMI embedded sync codes for BT.656 appear to be inverted.
STM32 DCMI documentation (out of ST's AN5020):
BT.656 protocol (from here):
I have 2 questions:
Has anyone been able to make BT.656 work on an STM32H7 or any STM32F series device?
The BT.656 resolution for PAL is 720x576, but my camera is 384x288.
What format is the camera actually sending?
Or: how can I work out how many lines are actually used?
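For reference, here is a minimal HAL sketch of the kind of embedded-sync configuration I mean. The four SAV/EAV byte values are taken from the BT.656 field-1 tables, but their mapping onto the DCMI frame/line start/end fields is an assumption and may need swapping, which is exactly the inversion described above:

/* Sketch: DCMI configured for embedded (BT.656) synchronization.
   hdcmi is assumed to be the usual DCMI_HandleTypeDef from CubeMX. */
hdcmi.Instance = DCMI;
hdcmi.Init.SynchroMode      = DCMI_SYNCHRO_EMBEDDED;   /* SAV/EAV codes instead of HSYNC/VSYNC */
hdcmi.Init.PCKPolarity      = DCMI_PCKPOLARITY_RISING;
hdcmi.Init.VSPolarity       = DCMI_VSPOLARITY_LOW;     /* ignored in embedded mode */
hdcmi.Init.HSPolarity       = DCMI_HSPOLARITY_LOW;     /* ignored in embedded mode */
hdcmi.Init.CaptureRate      = DCMI_CR_ALL_FRAME;
hdcmi.Init.ExtendedDataMode = DCMI_EXTEND_DATA_8B;
hdcmi.Init.JPEGMode         = DCMI_JPEG_DISABLE;
/* BT.656 field-1 codes: 0x80 = SAV active, 0x9D = EAV active,
   0xAB = SAV blanking, 0xB6 = EAV blanking. The mapping below is an assumption. */
hdcmi.Init.SyncroCode.LineStartCode  = 0x80;
hdcmi.Init.SyncroCode.LineEndCode    = 0x9D;
hdcmi.Init.SyncroCode.FrameStartCode = 0xAB;
hdcmi.Init.SyncroCode.FrameEndCode   = 0xB6;
if (HAL_DCMI_Init(&hdcmi) != HAL_OK) {
    Error_Handler();
}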
I want to use the MPU6050 interrupt to wake up an STM32: when I move my board, the STM32 should wake up from stop mode.
I have already got the I2C communication between the STM32 and the MPU6050 working, and the STM32 can already enter stop mode.
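A minimal sketch of the wake-up part, assuming the MPU6050 INT pin is wired to an EXTI-capable GPIO (PA0 here is only an example) and that the MPU6050 has already been configured to generate a motion interrupt:

/* Assumed wiring: MPU6050 INT -> PA0 (any EXTI-capable pin works). */
GPIO_InitTypeDef gpio = {0};
__HAL_RCC_GPIOA_CLK_ENABLE();
gpio.Pin  = GPIO_PIN_0;
gpio.Mode = GPIO_MODE_IT_RISING;   /* interrupt on the MPU6050 motion-interrupt edge */
gpio.Pull = GPIO_NOPULL;
HAL_GPIO_Init(GPIOA, &gpio);
HAL_NVIC_SetPriority(EXTI0_IRQn, 0, 0);
HAL_NVIC_EnableIRQ(EXTI0_IRQn);

/* Enter stop mode; any enabled EXTI line (here, the MPU6050 INT pin) wakes the MCU. */
HAL_SuspendTick();
HAL_PWR_EnterSTOPMode(PWR_LOWPOWERREGULATOR_ON, PWR_STOPENTRY_WFI);
/* Execution resumes here after wake-up; restore the clocks and tick. */
SystemClock_Config();
HAL_ResumeTick();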
I'm trying to figure out how to do multi-channel audio in Unity, and it looks like something isn't working correctly.
Here is my setup:
Blank Unity project (Unity 2020.3.0f1)
Set Project Settings > Audio > Default Speaker Mode to "Quad"
Create a 4-channel audio clip with Audacity (1 "click" on each channel, one at a time)
Import the clip into Unity with default settings
I don't have an external sound card (yet) so I'm using VB-Cable as a virtual 4-channel output
VB-Cable provides a Control panel where I can see the levels for each channel.
Playing back the source file with QuickTime, I can see the channel 1, 2, 3, and 4 levels moving as I expect.
However, playing back the imported audio clip in Unity's asset preview merges channel 1 with 3, and 2 with 4. The same thing happens when playing the audio in-game.
Another setup I've tried is the same as above, but setting up my computer to use an "aggregate audio device":
2 outputs from my built-in output
2 outputs from VB-Cable
(I also changed the VB-Cable settings to make it a 2-channel output, so that when combined with the built-in output, I get a 4-channel output device)
With this setup, playing back the clip in QuickTime or in the Unity asset preview works as expected (I hear channels 1 and 2 through the built-in output and "see" channels 3 and 4 on the VB-Cable control panel).
Playing the clip in-game, however, only channels 1, 2, and 3 seem to work; nothing happens on channel 4 (using a Unity audio mixer, I can see channels 1, 2, and 3 "lighting up", but 4 stays silent).
I know changing the audio configuration while Unity is open is not guaranteed to work, so for every test I do, I quit and restart Unity.
I hope I've explained the issue properly. If not, here is a zip file containing the test Unity project (including the 4-channel audio track) and a screen recording of the different tests I've made.
System:
MacBook Pro (Retina, 13-inch, Late 2013)
MacOS 11.6.2
16GB RAM
2.8 GHz Dual-Core Intel Core i7
What I am trying to do is:
Programmatically, I want to switch the USB power of the Raspberry Pi 3B+ or 4 models on and off.
I know that I can write 0 to /sys/devices/platform/soc/3f980000.usb/buspower, but it seems like it does not work as expected.
Let me show you everything I have tried:
Check 1:
Connect a USB device (in my case, a camera) to the RPi
Run the command "lsusb"
My camera is listed properly in the output
Check 2:
The camera is connected and lsusb shows the device
sudo vi /sys/devices/platform/soc/3f980000.usb/buspower, put 0 in it, and save the file
Run lsusb again
The whole list of connected devices is gone, so it looks like it is working
BUT:
When I put a meter on the USB port (as shown in the screenshot), it shows the same current flowing through the USB ports.
Refer to the attached screenshot below to see this issue.
I am looking for a solution, if anyone knows one.
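For completeness, this is the programmatic version of the same write I do with vi above (a minimal C sketch using the same buspower path; like the vi approach, it needs root):

#include <stdio.h>

/* Write "0" (bus off) or "1" (bus on) to the buspower node used above. */
int set_usb_buspower(int on)
{
    FILE *f = fopen("/sys/devices/platform/soc/3f980000.usb/buspower", "w");
    if (f == NULL) {
        perror("open buspower");
        return -1;
    }
    fprintf(f, "%d\n", on ? 1 : 0);
    fclose(f);
    return 0;
}

int main(void)
{
    return set_usb_buspower(0); /* switch the bus off, as in Check 2 */
}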
I am trying to connect an LCD with an AR1100 touch panel to a Raspberry Pi 3 running Android Things. Android does not have a driver for this touch controller; I found this suggested fix:
Create the file:
/system/usr/idc/Vendor_04d8_Product_0c03.idc
and write:
# This is an example of an input device configuration file.
# It might be used to describe the characteristics of a built-in touch screen.
# The device is an external device.
device.internal = 0
# The device should behave as a touch screen device.
touch.deviceType = touchScreen
# The device uses the same orientation as the built-in display.
touch.orientationAware = 1
But after a restart, the touch is still not working. What is wrong?
How can I connect this touch panel to Android Things?
In your Android Things application (not at the Android Things system level) you can add AR1100 touch support via the Raspberry Pi UART by implementing the AR1100 data protocol (pp. 14-15 of the datasheet):
The host should be configured for 9600 BAUD, 8 data bits and 1 Stop
bit.
...
Touch reports always originate from the AR1100 and are
transmitted in response to touch detection. The format of the
touch report is mode-dependent. The measurement resolution for
touch coordinates is 10-bit. The measured values are shifted
(multiplied by 4) and reported in a 12-bit format. In the
reporting protocol, the Least Significant coordinate bits X1:X0
and Y1:Y0 will be zeros. The resulting full-scale range for reported
touch coordinates is 0 to 4095.
...
The ‘standard’, 5-byte touch report is formatted as in Table 4-2:
etc.
You can find an Android Things UART example here.
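To illustrate the parsing step, here is a minimal C sketch of decoding one 5-byte touch report. The bit layout assumed here (a sync/status byte, then X low 7 bits, X high bits, Y low 7 bits, Y high bits) is only my reading of the usual Microchip touch-report format; verify it against Table 4-2 of the datasheet before relying on it. The same bit manipulation ports directly to Java/Kotlin in a UART callback.

#include <stdint.h>

/* Assumed 5-byte report layout (check Table 4-2 of the AR1100 datasheet):
   byte 0: sync bit (MSB = 1) plus pen-down status in bit 0
   byte 1: X bits 6..0,  byte 2: X high bits
   byte 3: Y bits 6..0,  byte 4: Y high bits
   Coordinates come out in the 0..4095 range described above. */
typedef struct {
    int pen_down;
    uint16_t x;
    uint16_t y;
} touch_report_t;

int parse_ar1100_report(const uint8_t buf[5], touch_report_t *out)
{
    if ((buf[0] & 0x80) == 0)       /* first byte must carry the sync bit */
        return -1;
    out->pen_down = buf[0] & 0x01;
    out->x = (uint16_t)((buf[2] & 0x7F) << 7) | (buf[1] & 0x7F);
    out->y = (uint16_t)((buf[4] & 0x7F) << 7) | (buf[3] & 0x7F);
    return 0;
}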
I am decoding an H264 stream using FFmpeg on the iPhone. I know the H264 stream is valid and the SPS/PPS are correct, as VLC, QuickTime, and Flash all decode the stream properly. The issue I am having on the iPhone is best shown by this picture.
It is as if the motion vectors are being drawn. This picture was snapped while there was a lot of motion in the image. If the scene is static then there are dots in the corners. This always occurs with predictive frames. The blocky colors are also an issue.
I have tried various build settings for FFmpeg, such as turning off optimizations, asm, NEON, and many other combinations. Nothing seems to alter the behavior of the decoder. I have also tried the Works with HTML, Love, and Peace releases, and also the latest Git sources. Is there maybe a setting I am missing, or have I inadvertently enabled some debug setting in the decoder?
Edit
I am using sws_scale to convert the image to RGBA. I have tried various different pixel formats with the same results.
sws_scale(convertCtx, (const uint8_t**)srcFrame->data, srcFrame->linesize, 0, codecCtx->height, dstFrame->data, dstFrame->linesize);
I am using PIX_FMT_YUV420P as the source format when setting up my codec context.
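For completeness, a sketch of how convertCtx above would typically be created (the dimensions and scaling flag are placeholders; the destination format matches the RGBA conversion described above):

/* Conversion context: decoder output (YUV420P) -> RGBA for display. */
struct SwsContext *convertCtx = sws_getContext(
    codecCtx->width, codecCtx->height, PIX_FMT_YUV420P,
    codecCtx->width, codecCtx->height, PIX_FMT_RGBA,
    SWS_FAST_BILINEAR, NULL, NULL, NULL);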
What you're looking at is ffmpeg's motion vector visualization. Make sure that none of the following debug flags are set:
avctx->debug & FF_DEBUG_VIS_QP
avctx->debug & FF_DEBUG_VIS_MB_TYPE
avctx->debug_mv
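For example, you can clear them explicitly on the codec context before decoding:

/* Disable the QP/macroblock/motion-vector visualizations. */
avctx->debug &= ~(FF_DEBUG_VIS_QP | FF_DEBUG_VIS_MB_TYPE);
avctx->debug_mv = 0;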
Also, keep in mind that decoding H264 video using the CPU will be MUCH slower and less power-efficient on iOS than using the hardware decoder.