FLIR Lepton 3.5 Banana Pi Zero Armbian I2C issue - i2c

I have (I believe successfully) enabled SPI and I2C on Armbian to communicate with the Lepton. Using the Lepton FLIR libraries (e.g. https://github.com/Myzhar/Lepton3_Jetson/tree/master/grabber_lib/Lepton_SDK) I can communicate with the camera (setting the output etc. returns LEP_OK). However, when I try to reboot the camera (using LEP_RunOemReboot( &port )) I get an I2C error. I checked in the file linux_I2C.c and found that in the function DEV_I2C_MasterReadData(...) the check if(bbb_result != 0 || bytesActuallyRead != bytesToRead) fails, with bbb_result = -1, bytesActuallyRead = 0 and bytesToRead = 2. The other settings return OK, but I have no way to verify them, as the camera seems to get stuck after one frame; that is why I want to reboot it, and that is how I end up in this loop.
Any idea what is going on?
[EDIT] It does not seem to be that issue, as adding sleeps around the write function did nothing.
(The issue seems to be the write in DEV_I2C_MasterReadData(...) returning -1.)
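For reference, here is a minimal sketch of what such a transfer boils down to on Linux i2c-dev, with the error checks spelled out. The bus number and the register offset are placeholders, and the Lepton CCI slave address is assumed to be 0x2A; it only illustrates where a bare -1 (with errno set) like the one seen in DEV_I2C_MasterReadData(...) would come from:
#include <cerrno>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int main()
{
    // Placeholder bus number; on the Banana Pi it depends on which
    // I2C controller was enabled in the device tree overlay.
    int fd = open("/dev/i2c-0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    // Lepton CCI slave address (assumed to be 0x2A).
    if (ioctl(fd, I2C_SLAVE, 0x2A) < 0) { perror("I2C_SLAVE"); return 1; }

    // Select a 16-bit register (0x0002, the status register, purely as an
    // example), then read two bytes back, checking both calls the same way
    // DEV_I2C_MasterReadData() does.
    uint8_t reg[2] = { 0x00, 0x02 };
    if (write(fd, reg, sizeof(reg)) != (ssize_t)sizeof(reg)) {
        // Roughly the bbb_result = -1 / bytesActuallyRead = 0 case: the
        // slave did not ACK, and errno (often EREMOTEIO or ENXIO) says why.
        fprintf(stderr, "write failed: %s\n", strerror(errno));
        return 1;
    }

    uint8_t data[2] = { 0, 0 };
    ssize_t n = read(fd, data, sizeof(data));
    if (n != (ssize_t)sizeof(data)) {
        fprintf(stderr, "read failed (%zd): %s\n", n, strerror(errno));
        return 1;
    }

    printf("status: 0x%02x%02x\n", data[0], data[1]);
    return 0;
}
If the write itself is the call returning -1, it usually means the camera is not acknowledging on the bus at that moment, which points at the bus or the camera's state rather than at the SDK logic.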


How to set AVAudioEngine input and output devices (Swift/macOS)

I've hunted high and low and cannot find a solution to this problem. I am looking for a method to change the input/output devices which an AVAudioEngine will use on macOS.
When simply playing back an audio file the following works as expected:
var outputDeviceID:AudioDeviceID = xxx
let result:OSStatus = AudioUnitSetProperty(outputUnit, kAudioOutputUnitProperty_CurrentDevice, kAudioUnitScope_Global, 0, &outputDeviceID, UInt32(MemoryLayout<AudioObjectPropertyAddress>.size))
if result != 0 {
    print("error setting output device \(result)")
    return
}
However, if I initialize the audio input (with let input = engine.inputNode), then I get an error once I attempt to start the engine:
AVAEInternal.h:88 required condition is false: [AVAudioEngine.mm:1055:CheckCanPerformIO: (canPerformIO)]
I know that my playback code is OK since, if I avoid changing the output device, I can hear the microphone and the audio file, and if I change the output device but don't initialize the inputNode, the file plays to the specified destination.
In addition to this I have been trying to change the input device; I understood from various places that the following should do this:
let result1:OSStatus = AudioUnitSetProperty(inputUnit, kAudioOutputUnitProperty_CurrentDevice, kAudioUnitScope_Output, 0, &inputDeviceID, UInt32(MemoryLayout<AudioObjectPropertyAddress>.size))
if result1 != 0 {
    print("failed with error \(result1)")
    return
}
However, this doesn't work: in most cases it throws an error (10853), although if I select a sound card that has both inputs and outputs it succeeds. It appears that when I attempt to set the output or the input node it is actually setting the device for both.
I would think this meant that an AVAudioEngine instance can only deal with one device; however, it is quite happy working with the default devices (mic and speakers/headphones), so I am confident that isn't the issue. Looking at some solutions online, people simply change the system default input, but that isn't a particularly nice solution.
Does anyone have any ideas as to whether this is possible?
It's worth noting that kAudioOutputUnitProperty_CurrentDevice is the only property available; there is no equivalent kAudioInputUnitProperty_CurrentDevice key, because, as I understand it, both the inputNode and outputNode are classed as "Output Units" (as they both emit sound somewhere).
Any ideas would be much appreciated as this is very very frustrating!!
Thanks
So I filed a support request with Apple on this and another issue, and the response confirms that an AVAudioEngine can only be assigned to a single aggregate device (that is, a device with both input and output channels). The system default units effectively create an aggregate device internally, which is why they work. I've also found an additional issue: if the input device also has output capabilities (and you activate the inputNode), then that device has to be both the input and output device, as otherwise the output appears not to work.
So the answer is that I think there is no answer.

Roblox - while testing, trusses work when I drag them in, but when I use a script to set everything up, I can't climb them

I am using a script to set CanCollide = true and Transparency = 0, but I can't climb the truss. However, when I'm already in testing mode (in Studio) and I drag in the same truss, I can climb it. I am looking at the properties and they are the same: both are anchored and they are touching the same parts, so I don't know why this is happening. Please help, thanks :) By the way, I am making a tycoon and this is the script I am using:
wait(1)
amount = 0 -- cost of model
owner = script.Parent.Parent.Owner
local stun = false
pcall(script.Parent.Head.Touched:connect(function(hit)
    if hit.Parent ~= nil then
        player = game.Players:findFirstChild(hit.Parent.Name)
        if not stun and player ~= nil then
            if player.Name == owner.Value then
                if player:findFirstChild("leaderstats") ~= nil then
                    stats = player:findFirstChild("leaderstats")
                    if stats.Money.Value >= amount then
                        stun = true
                        stats.Money.Value = stats.Money.Value - amount
                        script.Parent.ladder.CanCollide = true
                        script.Parent.ladder.Transparency = (0)
                        script.Parent.Head:Remove()
                        wait(1)
                        stun = false
                    end
                end
            end
        end
    end
end))
Don't worry about the other stuff, it works; it's just this part that matters now:
script.Parent.ladder.CanCollide = true
script.Parent.ladder.Transparency = (0)
script.Parent.Head:Remove()
Please help :( The problem is that the ladder set up by the script can't be climbed, while the same truss dragged in from the Toolbox works. I've done this with many trusses and ladders, with the same result :(
So, one error I noticed in your code is that you remove the Head part that your Touched connection is attached to, which can throw an error if the .Touched event fires a second time.
There are many small inefficiencies in this script, such as .Transparency = (0), or writing script.Parent.ladder instead of using variables, but that is usually not a problem. Something you could try is to use Instance.new() to create the ladder. If this game has FilteringEnabled on (Experimental Mode off), please note that your script won't work as intended if it is a LocalScript; it will only work properly server-side (i.e. a normal Script instead of a LocalScript).

LPUART1 not working on STM32L476 (based on VisualGDB)

Hi, I have developed a board based on the Discovery L476 board (STM32L476VGT6) using mbed, and after porting it to VisualGDB everything works great. The only thing that doesn't work is LPUART1. I hooked it up to PB10 (LPUART1_RX) and PB11 (LPUART1_TX), but whenever I declare the port in my code and download it, the program hangs and doesn't even start:
Serial RS232(PB_11, PB_10);
If I remove this line, the code works great (but I can't use this port).
I changed the pin definitions in PeripheralPins.c so that PB10 and PB11 will function as the LPUART1 RX and TX pins (these are the lines I added):
const PinMap PinMap_UART_RX[] = {
    {PB_10, LPUART_1, STM_PIN_DATA(STM_MODE_AF_PP, GPIO_PULLUP, GPIO_AF8_LPUART1)},
    // ...
};

const PinMap PinMap_UART_TX[] = {
    {PB_11, LPUART_1, STM_PIN_DATA(STM_MODE_AF_PP, GPIO_PULLUP, GPIO_AF8_LPUART1)},
    // ...
};
but it still doesn't work. Any ideas?
See https://github.com/ARMmbed/mbed-os/issues/5389: the LPUART baud rate needs to be within [sys_clk / 4096 ... sys_clk / 3]. The system clock on this device runs at 80 MHz, so the minimum usable rate is roughly 19.5 kbaud, and the usual 9600 default falls below it. You could fix it in the HAL for this board until a real fix is deployed.
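A minimal sketch of the workaround on the application side, assuming an mbed OS 5 build where the Serial constructor accepts a baud-rate argument (so the port is never initialised at the out-of-range 9600 default):
#include "mbed.h"

// LPUART1 on PB_11 (TX) / PB_10 (RX), as remapped in PeripheralPins.c above.
// With sys_clk = 80 MHz the usable window is roughly 19.5 kbaud to 26.7 Mbaud,
// so pass an in-range rate straight to the constructor instead of letting it
// start at the 9600 default.
Serial RS232(PB_11, PB_10, 115200);

int main()
{
    while (true) {
        RS232.printf("hello from LPUART1\r\n");
        wait(1.0f);
    }
}
If that constructor overload is not available in the mbed version being used, the same idea can be applied by patching the default baud rate (or the LPUART init code) in the target's HAL, as the linked issue suggests.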

Linux V4L driver - Polling camera input format

I am unfamiliar with Linux kernel development but I'm tasked with updating a kernel driver so that it will return a status code that can be read by an application. This will require that the driver poll the hardware a couple of times a second to see what camera format is being sent (PAL, NTSC, or none).
However, I'm at a loss on how this would be accomplished. I understand how the driver communicates with the hardware, but I don't understand how to pass this data to the application. Does this type of behavior require the use of an ioctl() call, or is this a read file operation? Also, if the application calls an ioctl() or read function and then needs to wait for the hardware to respond, will this create a performance issue?
For added information, I am working on a 2.6 version of the kernel. I'm working my way through "Linux Device Drivers, 3rd Ed.", but nothing stands out on how to address this specific issue. LDD3 makes it sound like ioctl() is only for sending commands to the driver. And since this is a V4L driver, I think the read file operation will return image data, not the status information I want.
I recommend checking the latest V4L2 API document, hosted on linuxtv.org: http://linuxtv.org/downloads/v4l-dvb-apis/
Userspace applications can call an IOCTL to query the current input format. The following userspace code can be used to query the kernel driver for the current video standard:
(quoting http://www.linuxtv.org/downloads/legacy/video4linux/API/V4L2_API/spec-single/v4l2.html#standard)
Example 1.5. Information about the current video standard
v4l2_std_id std_id;
struct v4l2_standard standard;

if (-1 == ioctl (fd, VIDIOC_G_STD, &std_id)) {
    /* Note when VIDIOC_ENUMSTD always returns EINVAL this
       is no video device or it falls under the USB exception,
       and VIDIOC_G_STD returning EINVAL is no error. */
    perror ("VIDIOC_G_STD");
    exit (EXIT_FAILURE);
}

memset (&standard, 0, sizeof (standard));
standard.index = 0;

while (0 == ioctl (fd, VIDIOC_ENUMSTD, &standard)) {
    if (standard.id & std_id) {
        printf ("Current video standard: %s\n", standard.name);
        exit (EXIT_SUCCESS);
    }
    standard.index++;
}

/* EINVAL indicates the end of the enumeration, which cannot be
   empty unless this device falls under the USB exception. */
if (errno == EINVAL || standard.index == 0) {
    perror ("VIDIOC_ENUMSTD");
    exit (EXIT_FAILURE);
}
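If the goal is specifically to sense what the hardware is receiving right now (PAL, NTSC, or nothing) rather than what the driver is currently configured for, the related VIDIOC_QUERYSTD ioctl may be a better fit, assuming the driver implements it. A rough userspace polling sketch (the device path and polling interval are placeholders):
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main()
{
    int fd = open("/dev/video0", O_RDWR);   /* placeholder device node */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    for (;;) {
        v4l2_std_id detected = V4L2_STD_UNKNOWN;

        /* Ask the driver to sense the incoming signal; not every driver
           implements VIDIOC_QUERYSTD, in which case this returns -1. */
        if (ioctl(fd, VIDIOC_QUERYSTD, &detected) == -1) {
            perror("VIDIOC_QUERYSTD");
        } else if (detected & V4L2_STD_PAL) {
            printf("PAL signal detected\n");
        } else if (detected & V4L2_STD_NTSC) {
            printf("NTSC signal detected\n");
        } else {
            printf("no usable signal detected\n");
        }

        usleep(500000);   /* poll roughly twice a second */
    }
}
On the driver side this corresponds to implementing the querystd operation, so the application only blocks for the duration of a single hardware query on each call.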

IMFSourceReader giving error 0x80070491 for some resolutions

I'm trying to capture video from a 5MP UVC camera using an IMFSourceReader from Microsoft Media Foundation (on Windows 7 x64). Everything works just like the documentation, with no errors on any API calls, until the first callback into OnReadSample(), which has "0x80070491 There was no match for the specified key in the index" as its hrStatus parameter.
When I set the resolution down to 1080p it works fine even though 5MP is the camera's native resolution and 5MP (2592x1944) enumerates as an available format.
I can't find anything in the Microsoft documentation to say that this behaviour is by design, but it seems consistent so far. Has anyone else got IMFSourceReader to work at more than 1080p?
I see the same effects on the Microsoft MFCaptureToFile example when it's forced to select the native resolution:
HRESULT nativeTypeErrorCode = S_OK;
DWORD count = 0;
UINT32 streamIndex = 0;
UINT32 requiredWidth = 2592;
UINT32 requiredheight = 1944;

while ( nativeTypeErrorCode == S_OK )
{
    IMFMediaType * nativeType = NULL;

    nativeTypeErrorCode = m_pReader->GetNativeMediaType( streamIndex, count, &nativeType );
    if ( nativeTypeErrorCode != S_OK ) continue;

    // get the media type
    GUID nativeGuid = { 0 };
    hr = nativeType->GetGUID( MF_MT_SUBTYPE, &nativeGuid );
    if ( FAILED( hr ) ) return hr;

    UINT32 width, height;
    hr = ::MFGetAttributeSize( nativeType, MF_MT_FRAME_SIZE, &width, &height );
    if ( FAILED( hr ) ) return hr;

    if ( nativeGuid == MFVideoFormat_YUY2 && width == requiredWidth && height == requiredheight )
    {
        // found native config, set it
        hr = m_pReader->SetCurrentMediaType( streamIndex, NULL, nativeType );
        if ( FAILED( hr ) ) return hr;
        break;
    }

    SafeRelease( &nativeType );
    count++;
}
Is there some undocumented maximum resolution with Media Foundation?
It turns out that the problem was with the camera I was using, NOT the media streaming framework or UVC cameras generally.
I have switched back to using DirectShow sample grabbing which seems to work ok so far.
I ran into this same problem on Windows 7 with a USB camera module I got from Amazon.com (ELP-USBFHD01M-L21). The default resolution of 1920x1080x30fps (MJPEG) works fine, but when I try to select 1280x720x60fps (also MJPEG, NOT h.264) I get the 0x80070491 error in the ReadSample callback. Various other resolutions work OK, such as 640x480x120fps. 1280x720x9fps (YUY2) also works.
The camera works fine at 1280x720x60fps in Direct Show.
Unfortunately, 1280x720x60fps is the resolution I want to use to do some fairly low latency augmented reality stuff with the Oculus Rift.
Interestingly, 1280x720x60fps works fine with the MFCaptureD3D sample in the Windows 10 technical preview. I tried copying the ksthunk.sys and usbvideo.sys drivers from my Windows 10 installation to my Windows 7 machine, but they failed to load even when I booted in "Disable Driver Signing" mode.
After looking around on the web, it seems like various people with various webcams have run into this problem. I'm going to have to use DirectShow to do my video capture, which is annoying since it is a very old API which can't be used with app store applications.
I know this is a fairly obscure problem, but since Microsoft seems to have fixed it in Windows 10, it would be great if they backported the fix to Windows 7. As it is, I can't use their recommended Media Foundation API because it won't work on most of the machines I have to run it on.
In any case, if you are having this problem, and Windows 10 is an option, try that as a fix.
Max Behensky