Hi, I am thinking of connecting a 14-inch or larger LCD touch screen to my Discovery boards. Currently I have STM32F4 and STM32F7 Discovery boards.
I am aware of different methods of connecting them, but I am most interested in LTDC. I have tried using older interfaces such as the Motorola 6800 parallel bus, but the bigger, higher-resolution touch screens available on Alibaba, for example, do not support them.
Please correct my understanding if I am wrong: for bigger screen sizes and higher resolutions, older protocols like the Motorola 6800 bus, the Intel 8080 bus, and SPI won't work, because the bigger touch screens found on the market do not offer them.
Will a parallel RGB connection work with a bigger screen? If yes, can you give me a link for that?
The devices I see come with an LVDS connection. Will they work with STM32 Discovery kits? If yes, how?
Finally, can you suggest some good working tutorials for them, apart from the datasheet?
Thank You !!!
Yes, RGB can be used to drive larger display panels. However, you will need to convert the signal to LVDS.
Assuming you are using the STM32756G-EVAL, you could link the LCD module's connector (CN20) to J1 and J2 (depending on whether your display panel is single or dual LVDS) of Texas Instruments' LDI demo kit transmitter board: http://www.ti.com/general/docs/litabsmultiplefilelist.tsp?literatureNumber=snlu036a
I am a software engineer working on a microcontroller system for a side project. The microcontroller I am using is the SparkFun Pro Micro (based on the RP2040). I am trying to flash the board so that I can write data to the onboard flash memory.
All of the tutorials I have found online suggest starting in boot mode, dragging and dropping the UF2 file, and done!
When I do this, the microcontroller ejects from my computer. Is that meant to happen? It just reboots and then doesn't come back up in bootloader mode.
Once I got MicroPython installed I moved on to writing and flashing code to the board.
I am using the Thonny IDE, which identified the correct board (albeit as a Pico), and then I saved the following file as main.py (taken from the Raspberry Pi Foundation). It prints "toggle", and I believe the output shows that it is being printed from the board, but the light on the board isn't blinking. (Code and output below.)
I considered that the pinout could be different between this board and the Pico, but some research shows they both use pin 25 for the LED control.
All this leads me to believe I am on the right path, but I think I am missing something that is taken for granted in the tutorials. My end goal is to write arbitrary text data to flash storage, but I understand it can only take about 8000-10,000 writes before it becomes unreliable, so I want to test that I can write working code before I use some of those.
Is there something I am missing, or am I not thinking about this in the right way?
When I do this, the microcontroller ejects from my computer. Is that meant to happen? It just reboots and then doesn't come back up in bootloader mode.
Yep.
but the light on the board isn't blinking.
Maybe your LED is busted, because your code is right.
My end goal is to write arbitrary text data to flash storage
That's a terrible idea, unless you just like burning up boards for no good reason. Get an SD card reader, or concoct one out of a solution like this one, and use this sdcard library, which will even mount your card and add it to the syspath. Then you can write all the arbitrary text data you like without burning up your RP2040.
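For reference, here is a minimal sketch of mounting a card with MicroPython's sdcard driver on the RP2040. The SPI bus and pin numbers are assumptions; adjust them to match your reader's wiring.

import os
import machine
import sdcard  # copy sdcard.py from micropython-lib onto the board first

# SPI0 with CS on GPIO 17 -- placeholder pins, change to match your wiring
spi = machine.SPI(0,
                  sck=machine.Pin(18),
                  mosi=machine.Pin(19),
                  miso=machine.Pin(16))
cs = machine.Pin(17, machine.Pin.OUT)

sd = sdcard.SDCard(spi, cs)
os.mount(sd, "/sd")  # the card now shows up in the filesystem

# Append arbitrary text data to a file on the card instead of internal flash
with open("/sd/log.txt", "a") as f:
    f.write("hello from the RP2040\n")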
Blinking this LED was harder than I expected. I ended up finding this sample code from Adafruit and using the commented-out NeoPixel code. The... bright side was that there was way more control over this LED than I had realized.
Don't forget you have to add neopixel.mpy from the bundle to your board.
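For reference, a minimal CircuitPython sketch along those lines might look like this. It assumes neopixel.mpy from the Adafruit bundle has been copied to the board's lib folder, and that the on-board WS2812 is exposed as board.NEOPIXEL in SparkFun's build (an assumption; check your board's pin definitions).

import time
import board
import neopixel  # neopixel.mpy from the Adafruit bundle, copied to /lib

# One WS2812 on the board's NeoPixel pin (assumed to be board.NEOPIXEL)
pixel = neopixel.NeoPixel(board.NEOPIXEL, 1)

while True:
    pixel[0] = (255, 0, 0)  # red
    time.sleep(0.5)
    pixel[0] = (0, 0, 0)    # off
    time.sleep(0.5)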
With the Raspberry Pi Pico W, you can now identify the LED pin with "LED" instead of pin 25 (or another pin). This change is due to pin 25 being used for the Wi-Fi chip on the Pico W. This works on the Pico W as shown below:
from machine import Pin
import time

# "LED" resolves to whatever pin the board definition maps the on-board LED to
led = Pin("LED", Pin.OUT)

while True:
    led.toggle()
    time.sleep(0.5)
I have verified this working on a Raspberry Pi Pico W with MicroPython, using the unstable build rp2-pico-w-20220719-unstable-v1.19.1-181-gc947c2529.uf2.
I believe this is intended to become the standard way to access an on-board LED, since the underlying pin can be changed for different boards without changing source code.
That's not a simple LED connected to pin 25 on the Pro Micro RP2040 - it's a WS2812 RGB LED, sometimes called a NeoPixel. There's a one-wire protocol to drive these devices.
MicroPython has support for NeoPixels built in:
from machine import Pin
from neopixel import NeoPixel
pin = Pin(25, Pin.OUT) # set Pin 25 to output to drive a NeoPixel
np = NeoPixel(pin, 1) # create NeoPixel driver on Pin 25 for a single pixel
np[0] = (255, 0, 0) # set the first pixel to red (R, G, B)
np.write()
See the rp2 Quickref for more details.
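To reproduce the blinking behaviour the original main.py was after, you can wrap this in a loop; a minimal sketch (timing values are arbitrary):

import time
from machine import Pin
from neopixel import NeoPixel

np = NeoPixel(Pin(25, Pin.OUT), 1)  # single on-board WS2812 on GPIO 25

while True:
    np[0] = (255, 0, 0)  # red
    np.write()
    time.sleep(0.5)
    np[0] = (0, 0, 0)    # off
    np.write()
    time.sleep(0.5)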
I recently created an app in Flash CS6 to be used on my iPhone 4.
The app doesn't need to work through the App Store; it's just a tech demo. But when I put the app on my device, all the animations become really slow/choppy.
My iPhone is up to date and I'm using AIR 3.2 (I did try updating to AIR 3.7, but then my application just became a white screen).
I have also tried caching all the movie clips as bitmaps, but it doesn't seem to make a difference.
(The app works fine inside the Flash simulator.)
Please help!
There could be a few reasons why this is happening; I'm taking a punt here, as you haven't given much information on what you're doing.
Your frame rate may have an effect. How are you creating your tweens: timeline animation or tween scripts? I found tween scripting works better, as it is distance over time as opposed to distance over frames.
It also depends on your animation, image sizes, event listeners, etc.
You need to take into consideration the device itself and the resources it has available: free memory and actual storage.
The reason it runs freely on your PC is that there are fewer limitations; your PC has far more compute and resources to throw at your application.
Add this to your code to check your memory usage:
import flash.events.Event;
import flash.system.System;

this.addEventListener(Event.ENTER_FRAME, performMemTest);

function performMemTest(e:Event):void {
    trace(System.totalMemory);
}
I have read that at anywhere around 14 MB and above you may experience problems.
I'm making an application for mobile devices, specifically for iPhone and iPad, and I want to use a different interface for each. How can I do that? Is there a variable that holds the name of the device?
Such as
if(device=="iPhone")
{use this state}
else if (device=="iPad")
{use that state}
??
You can use Capabilities.os to obtain the device's operating system and see if it uses iOS, then use Capabilities.screenResolutionX and Capabilities.screenResolutionY to determine if the resolutions correspond to an iPhone or an iPad.
Look at this document which shows general principles for scaling things based on the screen resolution.
It is not an exact answer to your question but I am guessing that you just want to scale your interfaces based on the screen resolution.
You can also check the screen DPI using Capabilities.screenDPI, which is nice for knowing how spread out those pixels are, along with Capabilities.screenResolutionX and Capabilities.screenResolutionY to get the resolution.
If you really want to know what the operating system is you can check using Capabilities.os, but as for an exact device I'm not sure if there is a method for that.
I need to merge 5 monitors in XNA (something like Eyefinity).
I have two graphics cards (HD 5450), which have DisplayPort connectors, of course,
and 5 flat monitors with a resolution of 1024x768.
I need to merge/group these monitors in XNA, because I want to go fullscreen over all 5 monitors
(fullscreen over multiple monitors).
I just need Visual Studio to detect one graphics device with a resolution of 5120x768.
How should I modify GraphicsDeviceManager / GraphicsAdapter to make this work?
I can't use Eyefinity, because I have two graphics cards, so I'm trying to do "my own Eyefinity" in XNA.
In my app, I have 5 models divided across 5 viewports, which are offset every 1024 px.
Or, how should I make it look like fullscreen? I don't want the window border to be visible, and I want it in the middle of the screen - how do I center it?
Thanks for any answers.
To be honest this is going to be difficult if not impossible to do using XNA. And you'd have to get so far outside of what the XNA framework is providing you that there would be little benefit in the end to even using XNA at that point.
Here's a great thread on the App Hub forums talking about different ways of potentially hacking around the XNA framework to achieve multiple monitor fullscreen using XNA.
http://forums.create.msdn.com/forums/p/5562/571993.aspx
As you can see, no one really had any great suggestions, and by the time you were done you were basically programming at such a low level that you might as well be doing C++ and DirectX. Which is exactly what I would recommend to you.
http://msdn.microsoft.com/en-us/library/windows/desktop/bb206364(v=vs.85).aspx
Using DirectX you can see that you're going to get a game/application running fullscreen with a multiple monitor setup much faster and without having to hack your way into it.
While in graduate school for biomedical engineering, I developed a device we called a "vein finder". It was a simple device that was good enough for our engineering school to patent.
I think it would be very, very easy to use the iPhone camera to develop an iPhone app whereby MDs, nurses, and EMTs could easily identify peripheral veins that are not visible to the naked eye. This would be invaluable for starting IVs and giving bedside medications. The vein finder was especially helpful for patients in shock who had poor venous filling and therefore didn't have veins that "popped into view" with a tourniquet.
It would require using the iPhone light at specific wavelengths... does anybody have any idea if that is possible?
I don't think you have any control over the flash light that illuminates images. You can only turn it on and off. I also doubt you would be able to get the specific wavelengths you need.
My suggestion would be to look into building a peripheral light device which plugs into the headphone jack for power and has the necessary functions for emitting light at different wavelengths. That way, you would be able to get the exact result you require.
You may also need to look into the camera itself, as it may not have the ability to capture light at the wavelengths you require. Hope that helps!
Do you mean overlaying veins over the picture on the camera? It would be easy if the veins are always in the same place... What would your imagined procedure be for using this app?
EDIT: You cannot manipulate the iPhone light to emit different wavelengths; you can only turn it on and off. You would need a separate external light for that.
To work around the fact that you can't set the wavelength of the built in light, you should use an external light along with the iOS device.
Apologies for massive necro!
The LED light of the iPhone is not the problem - most white light sources contain IR or near-IR (in fact, they span the entire visible spectrum). Daylight works very well too.
The problem is the receiver. Most cameras, including those in smartphones, have an IR filter over the sensor, cutting out the useful part of the spectrum. All you need is a digital camera with the IR filter removed that has a viewfinder (where you'll see the veins).