Custom Device Tree Overlay Problem (Allwinner sun50i-h6, Orange Pi 3 LTS, Armbian)

I'm trying to find the current syntax for a custom DT overlay for Armbian on an Orange Pi 3 LTS.
This overlay is meant to set a specific GPIO pin (PD15) as an output driven HIGH at boot.
I created the following code based on various references scattered online:
/dts-v1/;
/plugin/;

/ {
    compatible = "allwinner,sun50i-h6";

    fragment#0 {
        target = <&leds>;
        __overlay__ {
            mps_led: led {
                label = "mps_led";
                linux,default-trigger = "default-on";
                pinctrl-names = "default";
                gpios = <&pio 3 15 0>; /* PD15 */
                function = "gpio_out";
                default-state = "on";
                status = "okay"
            };
        };
    };
};
I used the 'leds' device class, as it was recommended to hook into an existing node rather than try to create a new one.
The code compiles from *.dts to *.dtbo and gets added to armbianEnv.txt under user_overlays as expected, but it doesn't work (on boot the LED stays off and the pin stays LOW).
Can anyone please help with correcting the syntax and/or other mistakes in the above to make it work?
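For comparison, here is a minimal sketch of what the fragment might look like once the obvious syntax issues are fixed (untested; it assumes the board DT really does expose a &leds label, as the fragment above already targets). Fragment names use @ rather than #, every property needs a terminating semicolon, and the pinctrl-names and function = "gpio_out" properties look like pinctrl settings rather than gpio-leds child properties, so they are dropped here:
/dts-v1/;
/plugin/;

/ {
    compatible = "allwinner,sun50i-h6";

    fragment@0 {
        target = <&leds>;                /* assumes the base DT labels its gpio-leds node "leds" */
        __overlay__ {
            mps_led: led-mps {
                label = "mps_led";
                gpios = <&pio 3 15 0>;   /* PD15, active high */
                linux,default-trigger = "default-on";
                default-state = "on";
            };
        };
    };
};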

Related

Is it possible to add a new root-level node in a devicetree overlay?

I'm bringing up a Xilinx ZedBoard using the Analog Devices Kuiper image. I want to start with a base device tree without any fpga-axi peripherals and add one (and its associated peripherals) only as needed, using a live overlay. All the examples I have seen only reconfigure existing nodes. Is it possible to add a new device node, and can someone point me to an example so I can see the syntax?
This compiles, but causes a kernel panic when I apply the overlay.
// HSI Generated overlay/pl.dtsi file.
// Enable the axi-gpio interface
/dts-v1/;
/plugin/;

/ {
    fpga_axi#0 {
        compatible = "simple-bus";
        #address-cells = <0x01>;
        #size-cells = <0x01>;
        ranges;
    };
};
dmesg tells me that it's looking for at least one fragment
[ 942.395773] OF: overlay: no fragments or symbols in overlay
[ 942.395792] OF: overlay: init_overlay_changeset() failed, ret = -22
[ 942.395802] create_overlay: Failed to create overlay (err=-22)
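One way to get past that error, sketched from the general overlay syntax rather than anything Kuiper-specific (untested): wrap the new node in a fragment whose target-path is the root, which is the usual way an overlay adds a new root-level child:
/dts-v1/;
/plugin/;

/ {
    fragment@0 {
        target-path = "/";           /* attach the new node at the root; standard overlay syntax */
        __overlay__ {
            fpga_axi {
                compatible = "simple-bus";
                #address-cells = <1>;
                #size-cells = <1>;
                ranges;
            };
        };
    };
};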

How to use the RTC clock with the STM32 using HSE with PLL

I am using the STM32F0xx series and am trying to get the RTC to work. I have an external 8 MHz crystal connected and am using the PLL to create a SYSCLK of 48 MHz. Obviously I would like to use this clock with the RTC. I have tried the following:
//(1) Write access for RTC registers
//(2) Enable init phase
//(3) Wait until it is allowed to modify RTC register values
//(4) Set prescaler
//(5) New time in TR
//(6) Disable init phase
//(7) Disable write access for RTC registers
RTC->WPR = 0xCA;                                     //(1)
RTC->WPR = 0x53;                                     //(1)
RTC->ISR |= RTC_ISR_INIT;                            //(2)
while ((RTC->ISR & RTC_ISR_INITF) != RTC_ISR_INITF)  //(3)
{
    //add a timeout here for a robust application
}
RCC->BDCR = RCC_BDCR_RTCSEL_HSE;
RTC->PRER = 0x007C2E7C;                              //(4)
RTC->TR = RTC_TR_PM | 0x00000001;                    //(5)
RTC->ISR &= ~RTC_ISR_INIT;                           //(6)
RTC->WPR = 0xFE;                                     //(7)
RTC->WPR = 0x64;                                     //(7)
In the main loop there is an infinite loop that turns two LEDs on and off. Without the RTC config this works fine, but as soon as I add the code above the rest of the code breaks. Can I use HSE, and if so, am I using the prescaler correctly?
This example is from actual working code that uses HSE for the RTC on an STM32F429. It uses the STM32 HAL library, but it should give you a clue towards a solution.
Please note that HSE must already be configured and in use as the clock source before this code runs.
Remark: when reading, you should read not just the time but also the date, i.e.:
HAL_RTC_GetTime(&RTChandle, &RTCtime, FORMAT_BIN); //first
HAL_RTC_GetDate(&RTChandle, &RTCdate, FORMAT_BIN); //second, even if you don't need the date
otherwise the registers stay frozen (in that case you see ticks only under the debugger, not in a real run, because the debugger reads both registers)
// enable access to rtc register
HAL_PWR_EnableBkUpAccess();
// 1. 8 MHz oscillator (the source crystal, not the PLL output!) divided by 8 = 1 MHz
__HAL_RCC_RTC_CONFIG(RCC_RTCCLKSOURCE_HSE_DIV8);
RTChandle.Instance = RTC;
RTChandle.Init.HourFormat = RTC_HOURFORMAT_24;
// 2. 1 MHz / 125 = 8000 ticks per second, and 8000 ticks = 1 calendar second
RTChandle.Init.AsynchPrediv = 125 - 1;
RTChandle.Init.SynchPrediv = 8000 - 1;
RTChandle.Init.OutPut = RTC_OUTPUT_DISABLE;
RTChandle.Init.OutPutPolarity = RTC_OUTPUT_POLARITY_HIGH;
RTChandle.Init.OutPutType = RTC_OUTPUT_TYPE_OPENDRAIN;
// do init
HAL_RTC_Init(&RTChandle);
// enable hardware
__HAL_RCC_RTC_ENABLE();
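For the register-level code in the question, here is a rough sketch of the pieces that usually have to come before the sequence shown there (assuming an STM32F0, where HSE feeds the RTC through a fixed /32 divider, so an 8 MHz crystal gives a 250 kHz RTCCLK). Register and bit names are the usual CMSIS ones; this is untested:
// Backup-domain access has to be unlocked before RTC->WPR does anything
RCC->APB1ENR |= RCC_APB1ENR_PWREN;        // clock the PWR block
PWR->CR      |= PWR_CR_DBP;               // allow writes to the backup domain
RCC->BDCR    |= RCC_BDCR_RTCSEL_HSE       // assumption: F0 series, RTCCLK = HSE/32
             |  RCC_BDCR_RTCEN;           // enable the RTC clock

RTC->WPR = 0xCA;                          // unlock the RTC registers
RTC->WPR = 0x53;
RTC->ISR |= RTC_ISR_INIT;
while ((RTC->ISR & RTC_ISR_INITF) == 0) { /* add a timeout in real code */ }

// 250 kHz / (124 + 1) / (1999 + 1) = 1 Hz; PRER is written in two accesses
RTC->PRER = 1999;                         // synchronous prescaler
RTC->PRER |= (124 << 16);                 // asynchronous prescaler
RTC->TR = 0;                              // 00:00:00
RTC->ISR &= ~RTC_ISR_INIT;
RTC->WPR = 0xFF;                          // any wrong key re-locks the registers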

System-wide tap simulation on iOS

I would like to achieve system-wide tap simulation on iOS via a MobileSubstrate plugin. The idea is to be able to simulate touches (single touch at first, then multitouch) at a system-wide level, on iOS 5.1.1.
I've been successful in implementing this article to simulate touches on a specific view; I now would like to be able to simulate them system-wide.
I understand I should use the private GraphicsServices framework to do so, and I've read up on GSEvent (I've also looked at the Veency and MouseSupport source code).
I tried to hook up a view to intercept UIEvents and look at the underlying structure:
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    GSEventRef eventRef = (GSEventRef)[event performSelector:@selector(_gsEvent)];
    GSEventRecord record = *_GSEventGetGSEventRecord(eventRef);
    // breakpoint here to look at record
}
and the result is extremely similar to the old (iOS 3) structure detailed above.
I then tried to fire those events myself (in a standalone app, not a MobileSubstrate tweak for now):
+(void)simulateTouchDown:(CGPoint)point {
    point.x = roundf(point.x);
    point.y = roundf(point.y);

    GSEventRecord record;
    memset(&record, 0, sizeof(record));
    record.type = kGSEventHand;
    record.windowLocation = point;
    record.timestamp = GSCurrentEventTimestamp();

    GSSendSystemEvent(&record);
}
Now, that doesn't work at all (doesn't crash either).
Most of the code I've seen (MouseSupport, Veency) looks like this:
// Create & populate a GSEvent
struct {
    struct GSEventRecord record;
    struct {
        struct GSEventRecordInfo info;
        struct GSPathInfo path;
    } data;
} event;

memset(&event, 0, sizeof(event));
event.record.type = kGSEventHand;
event.record.windowLocation = point;
event.record.timestamp = GSCurrentEventTimestamp();
event.record.infoSize = sizeof(event.data);
event.data.info.handInfo.type = kGSHandInfoTypeTouchDown;
event.data.info.handInfo._0x44 = 0x1;
event.data.info.handInfo._0x48 = 0x1;
event.data.info.pathPositions = 1;
event.data.path.pathIndex = 0x01;
event.data.path.pathIdentity = 0x02;
event.data.path.pathProximity = 0x00;
event.data.path.pathLocation = event.record.windowLocation;

GSSendSystemEvent(&event.record);
Except that:
GSEventRecordInfo is unknown (and I can't find where it might be defined), and
I don't see the point of building a whole event only to pass the record.
Could someone who's been through this on iOS 5 please guide me?
GSEventRecordInfo may have been introduced since iOS 3.2 (that's where the Graphics Services headers you linked to are from). class-dump the framework that's actually on your device on iOS 5.1.1 and see if it's defined there?

How can you get Eclipse CDT to understand MSPGCC (MSP430) includes?

I'm using Eclipse and CDT to work with the mspgcc compiler. It compiles fine, but the code view highlights all my special function registers as unresolved.
I've made a C project where the compiler is "msp430-gcc -mmcu=msp430x2012", set to look for includes in /usr/msp430/include/. I've set the linker to "msp430-gcc -mmcu=msp430x2012", set to look for libraries in /usr/msp430/lib/. I've set the assembler to "msp430-as". I've told Eclipse it's making an ELF, and I've disabled automatic includes discovery so it doesn't find the i686 libraries on my Linux box (stupid Eclipse!).
Here's the code:
#include <msp430.h>
#include <signal.h> //for interrupts

#define RED 1
#define GREEN 64
#define S2VAL 8

void init(void);

int main(void) {
    init();                 //Setup device
    P1OUT = GREEN;          //start with a green LED
    _BIS_SR(LPM4_bits);     //Go into low power mode 4, main stops here
    return(1);              //never reached, suppresses compiler warning
}

interrupt (PORT1_VECTOR) S1ServiceRoutine(void) {
    //we wake the MCU here
    if (RED & P1IN) {
        P1OUT = GREEN;
    } else {
        P1OUT = RED;
    }
    P1IFG = 0;              //clear the interrupt flag or we immediately go again
    //we resume LPM4 here thanks to the RETI instruction
}

void init(void) {
    WDTCTL = WDTPW + WDTHOLD;   //Stop (halt) the watchdog timer
    P1DIR = ~S2VAL;             //Set LED pins as outputs and S2 as input
    P1IES = S2VAL;              //interrupt on high-to-low transition
    P1IE = S2VAL;               //enable interrupt for S1 only
    WRITE_SR(GIE);              //enable maskable interrupts
}
All the variables defined in the mspgcc includes, such as P1OUT and WDTCTL, show up in the Problems view as "not resolved", but remember, it builds just fine. I've even tried explicitly including the header file for my chip (normally msp430-gcc does this via msp430.h and the -mmcu option).
I resolved this issue by explicitly including the msp430g2553.h file
#include <msp430g2553.h>
I resolved the issue by following the instructions here
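The linked instructions aren't shown here, but a common variant of the same fix is to give the CDT indexer (which never sees the -mmcu= flag) the device macro it is missing, so the generic msp430.h resolves to the right part header. A sketch, where the macro name is an assumption you would replace with the one matching your chip:
/* __CDT_PARSER__ is defined only by Eclipse CDT's indexer, never by msp430-gcc,
 * so the real build is unaffected. */
#ifdef __CDT_PARSER__
#define __MSP430G2553__   /* assumption: substitute the macro for your -mmcu part */
#endif
#include <msp430.h>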

How to use multiple USB webcam in Matlab working simultaneously?

I would like to capture live video with two USB webcams (Philips SPC 900NC), but I found that they cannot work simultaneously on my laptop. Either of the two USB webcams can work alone, or together with the webcam built into my laptop.
When I use the Simulink block 'From Video Device', MATLAB gives the error 'Multiple VIDEOINPUT objects cannot access the same device simultaneously.' When I then check the video input devices with the command 'imaqhwinfo', only one of the Philips USB webcams is detected.
I would like to know:
What is the reason for this? Is it a hardware limitation (USB bus bandwidth), or do MATLAB video input objects simply not support multiple identical devices?
What is the solution? Could anyone give me some suggestions?
You may be interested in this link:
http://opencv.willowgarage.com/wiki/faq#How_to_use_2_cameras_.28multiple_cameras.29_with_cvCam_library
which contains:
First, init the cvcam library and get the number of cams by:
int ncams = cvcamGetCamerasCount( ); //returns the number of available cameras in the system
Show a dialog to choose which cameras to use:
int* out;
int nselected = cvcamSelectCamera(&out);
Get the selected cams and enable them.
int cam1 = out[0];
int cam2 = out[1];
cvcamSetProperty(cam1, CVCAM_PROP_ENABLE, CVCAMTRUE);
cvcamSetProperty(cam1, CVCAM_PROP_RENDER, CVCAMTRUE); //We'll render stream from this source
cvNamedWindow("Cam1", 1);
cvcamWindow MyWin1 = (cvcamWindow)cvGetWindowHandle("Cam1");
cvcamSetProperty(cam1, CVCAM_PROP_WINDOW, &MyWin1); // Selects a window for video rendering
//Same code for camera 2
cvcamSetProperty(cam2, CVCAM_PROP_ENABLE, CVCAMTRUE);
cvcamSetProperty(cam2, CVCAM_PROP_RENDER, CVCAMTRUE);
cvNamedWindow("Cam2", 1);
cvcamWindow MyWin2 = (cvcamWindow)cvGetWindowHandle("Cam2");
cvcamSetProperty(cam2, CVCAM_PROP_WINDOW, &MyWin2);
//If you want to open the property dialog for setting the video format parameters, uncomment this line
//cvcamGetProperty(cam1, CVCAM_VIDEOFORMAT, NULL);
//cvcamGetProperty(cam2, CVCAM_VIDEOFORMAT, NULL);
Enable the stereo mode (2 cameras working at the same time)
cvcamSetProperty(cam1, CVCAM_STEREO_CALLBACK, stereocallback); //stereocallback is the function that runs to process every frame pair
cvcamInit();
cvcamStart();
//Your app is working
while (1)
{
    int key = cvWaitKey(5);
    if (key == 27) break;
}
cvcamStop( );
cvcamExit( );
Define the stereocallback function outside of the function above.
void stereocallback(IplImage* image1, IplImage* image2) {
    //Process 2 images here
}
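As a minimal illustration of what the callback body might do (my addition, not from the FAQ): cvcam hands you one synchronized frame from each camera per call, and any per-frame processing goes here; whether in-place edits show up in the rendered windows depends on when cvcam draws them:
void stereocallback(IplImage* image1, IplImage* image2) {
    // sketch: mirror the second camera's frame in place around the vertical axis
    cvFlip(image2, image2, 1);
}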