Problems with TouchGFX, STM32F746G-Discovery board and custom touchscreen

I encountered a problem while modifying the STM32F746G-Discovery board.
I modified it like this:
instead of the 4.3" display, I soldered a 5" display (24-bit RGB, i.e. RGB888, 800x480) to the STM32F7 Discovery board
modified the LTDC settings to match the new display (could there be an error here?)
set the LCD-TFT clock to 38.4 MHz according to the datasheet (a clock-configuration sketch is shown below this list)
used the touch controller FT5446 instead of the FT5336
modified the driver code (it's really similar)
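For context, a 38.4 MHz pixel clock can be reached with PLLSAI settings along these lines (a sketch only, assuming the stock Discovery clock tree with HSE = 25 MHz and PLLM = 25, i.e. a 1 MHz PLLSAI input; the exact values in the project may differ):

RCC_PeriphCLKInitTypeDef PeriphClkInitStruct = {0};

PeriphClkInitStruct.PeriphClockSelection = RCC_PERIPHCLK_LTDC;
PeriphClkInitStruct.PLLSAI.PLLSAIN = 384;          /* 1 MHz * 384 = 384 MHz VCO */
PeriphClkInitStruct.PLLSAI.PLLSAIR = 5;            /* 384 / 5 = 76.8 MHz */
PeriphClkInitStruct.PLLSAIDivR = RCC_PLLSAIDIVR_2; /* 76.8 / 2 = 38.4 MHz LCD-TFT clock */
HAL_RCCEx_PeriphCLKConfig(&PeriphClkInitStruct);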
Every time I touch the display, I get disrupted lines in the displayed image, but the touchscreen itself works completely fine. After lifting my finger off the display, the image looks as it should.
See the photo I added.
Does anyone have a hint as to what could be wrong?
Thank you very much!

STM32F302R8 Custom Board Blink an LED Problem

I am a new STM32 user migrating from Atmel/Microchip's SAMD line.
I created my first project by following along with the tutorial here: https://www.youtube.com/watch?v=x_5rYfAyqq0&t=682s. It's a motor driver, with some other hardware outside of the screenshot below, but at the moment I am just trying to get statusLED to blink. I can successfully connect to the board with an ST-Link; when I press debug and then resume, my LED flashes momentarily, which I captured on video and on a scope, shown in the video here.
Strangely, I don't lose the connection to the board or anything, and my program continues to execute, but nothing else happens. As you can see from the code, it's supposed to just blink every 500 ms. Does anyone have an intuition as to what might be going on?
Here's a video showing the momentary flash (the LED is in the bottom-right corner of the board, and I press the debug/resume buttons off camera):
https://photos.app.goo.gl/BfGQbW1SX8EJT5eV8
I am using the internal clock for debug purposes, and only have Trace Asynchronous Sw debug plus the statusLED set as a GPIO output. My only added code is:
HAL_GPIO_WritePin(statusLED_GPIO_Port, statusLED_Pin, GPIO_PIN_SET);
HAL_Delay(500);
HAL_GPIO_WritePin(statusLED_GPIO_Port, statusLED_Pin, GPIO_PIN_RESET);
HAL_Delay(500);
Also here's the board schematic:
STM32F302R8 Board Layout
HAL_Delay depends on the SysTick interrupt being enabled and hooked up to the HAL.
You need to call HAL_Init() from main(), which in turn calls HAL_InitTick().
After that, you need this interrupt handler:
void SysTick_Handler(void)
{
    HAL_IncTick();
}
The example projects in the STM32Cube package will include this.
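For example, a minimal main() might look something like this (a sketch, assuming CubeMX-style generated names; SystemClock_Config() and MX_GPIO_Init() stand in for your own init code):

#include "main.h"

int main(void)
{
    HAL_Init();               /* sets up the HAL and the SysTick time base */
    SystemClock_Config();     /* your clock setup (the internal clock is fine for debugging) */
    MX_GPIO_Init();           /* configures statusLED_Pin as an output */

    while (1)
    {
        HAL_GPIO_TogglePin(statusLED_GPIO_Port, statusLED_Pin);
        HAL_Delay(500);       /* only works once SysTick_Handler() calls HAL_IncTick() */
    }
}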
If you configured your hardware with CubeMX:
Check in your CubeMX configuration whether your GPIO is set up correctly as "GPIO_Output" in either "Open Drain" or "Push Pull" mode. According to the schematic, "Open Drain" would be my recommendation, but "Push Pull" would also work (see the sketch below).
One of my favourite mistakes is clicking one line too low in CubeMX, selecting "GPIO_Analog" for the pin instead of "GPIO_Output", and then searching the code for a long time for a bug that isn't there ;)
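For comparison, here is roughly what the generated init code looks like for the two output modes; only the Mode field differs (a sketch, assuming the usual CubeMX-generated names):

/* enable the port clock first, e.g. __HAL_RCC_GPIOA_CLK_ENABLE() for port A */
GPIO_InitTypeDef GPIO_InitStruct = {0};

GPIO_InitStruct.Pin   = statusLED_Pin;
GPIO_InitStruct.Mode  = GPIO_MODE_OUTPUT_PP;    /* or GPIO_MODE_OUTPUT_OD for open drain */
GPIO_InitStruct.Pull  = GPIO_NOPULL;
GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_LOW;
HAL_GPIO_Init(statusLED_GPIO_Port, &GPIO_InitStruct);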
This turned out to be my having pulled BOOT0 up with the intention of using a bootloader, and then forgetting about it. So I rotated my BOOT0 resistor to make it a pull-down, and everything is working now.
I'm still unsure what configuration led to the brief LED flash in the video, but definitely consider this resolved.

Why is the play button on my title screen not starting the game?

I managed to open the demo game that I need to see/play; however, it looks like the title screen isn't loading correctly. Clicking the "Play" button should allow the user to start the game, but when I click it, nothing happens.
I'm not sure why this is happening, because I downloaded the exact same files as the ones used in the demo, and I also tried deleting and re-downloading the files a couple of times. I also double-checked the console messages, and there aren't any errors or warnings from any scripts. I'll attach a screenshot of what I see and a link to the game files themselves if anyone wants to try it on their end.
Also, if this helps, I'm using Unity version 2018.3.2f1.
Here is a link to the project if you want to try it out yourself (I'd post the code, but I don't want to put a giant block of code up without a clear direction; however, I believe the main menu content is in the "Manager.cs" file): https://drive.google.com/file/d/1ekXt948b612dmyT1AZReUOuzh2XbnSDG/view?usp=sharing
This is what the game looks like if it helps:
After reading through the code in the other scripts, I realized that the error was coming from the specific region being used as a "hitbox" on the screen for the play button. Because I had set my aspect ratio differently from what the developer used, the positioning of the "hitbox" did not line up correctly on my screen. So instead, I had to change the aspect ratio to a fixed resolution with specific canvas sizes (width and height).

Screen recording in Unity3D

How do I record the screen in Unity?
I want to record my screen (gameplay) while my game is running.
I need play/stop and replay controls, the ability to save the recording locally on the device, and the ability to open/load a recording that was made earlier.
In my game there is one camera that captures the native camera feed, plus one 3D model.
I want to record both and use this functionality whenever I want.
Thank you in advance.
This is hard to implement, but not impossible, because every frame (or at some interval) you need to capture a screenshot of your camera view and store it in a list. You need a good interval value: a smaller interval gives a smoother replay but uses more memory, and if the interval is too big the replay will look laggy.
While you play the game your RAM fills up, and the OS will terminate the app, so you need to pay careful attention to memory optimization. Another option is to use assets from the Unity Asset Store.
EZ Replay Manager can be used. (Keep in mind: I haven't tried it yet.)
There are Free and Pro versions.
Check out this open-source project: https://github.com/getsocial-im/getsocial-capture.
By default, our project records the Main Camera's rendered content. C# examples are in the repo.
You can record in 2 modes:
Continuous mode - captures the last X frames.
Manual mode - capture frames on your own when needed, for example to record a timelapse of the level.
Once the recording is done, you can generate a GIF, get the raw bytes, and do whatever you want with them, e.g. let your users share that GIF with friends.
Here's a recording of a game session from the test app; the recorded GIF shows up at the end:
Disclaimer: I worked at GetSocial at the time of writing.
I know of a similar project someone posted on GitHub: https://github.com/thanh-nguyen-kim/Unity_Android_Screen_Recorder
There is a limitation, though: this code only works on Android devices (Android only, not even iOS).
But it is a very powerful recorder: it captures whatever appears on the screen (so it is basically a screen recorder made with Unity), and it also captures your microphone output. Give it a try.
And if you find any other solution, please tell me as well, because it would be very helpful for me; I want to record video with in-game audio and also save it to the gallery.
Unity now has a built-in screen recording tool. It's called Recorder and doesn't require any coding.
In Unity, go to the Window menu, then click on Package Manager.
By default, Packages might be set to "In Project"; select "Unity Registry" instead.
Type "Recorder" in the search box.
Select the Recorder and click Install in the lower right corner of the window.
That's about all you need to get everything set up, and hopefully the options make sense. The main thing to be aware of is that setting "Recording Mode" to "Single" will take a single screenshot (with F10).
NOTE: This is a copy of my answer from a Unity screenshots question

OpenGL ES sprites no longer render after updating to iOS 7.1

I am using a simple 2D sprite class based on this tutorial to render PNG bitmaps to the screen:
http://www.raywenderlich.com/9743/how-to-create-a-simple-2d-iphone-game-with-opengl-es-2-0-and-glkit-part-1
Everything worked fine on both my iPhone 4S running iOS 6.1 and my iPhone 5S running iOS 7. Since I updated to iOS 7.1, and updated my MacBook Air to Mavericks and Xcode 5.1, sprites no longer appear on the screen (I just get an empty white screen, which is the color I cleared the background to). When I build the app with Xcode 5.1 and run it on my iPhone 4S again, it still works.
Does anyone know what could be causing this? Has anyone run into this issue? I am having trouble getting to the source of the problem due to my lack of understanding of OpenGL ES among other things. :) My sprite class is exactly the same as the one in the tutorial.
Let me know if more details/code snippets are required.
I'm not even going to look at the sample code since it doesn't sound like it's the same as your current code. (You said 'based on'.) But I'll tell you how to find your problem.
First, clear to something other than white (like red) as a test.
If the screen turns red, you at least know that your view, your context, and OpenGL in general are working. That knocks out a lot of possible culprits.
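That test is just a one-line change wherever you currently set the clear color (a sketch; for example inside glkView:drawInRect: if you're using GLKit):

glClearColor(1.0f, 0.0f, 0.0f, 1.0f);   /* red instead of white */
glClear(GL_COLOR_BUFFER_BIT);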
Second, use the debug tools, and chances are they will take you right to your problem. You have to run on a tethered device, though, not the simulator. Run your app...
Click on FPS in the debug navigator, then click Analyze and be patient. It will take a one-frame snapshot of what's happening in OpenGL. It has to do a bunch of work to make that happen and it takes about 30 seconds. But then you'll have an interactive view that shows you the frame, lets you step through the draw calls and see which code issues them, and shows the frame as each element is drawn. It's super cool, actually. And it will probably show you an error message (in red).
My guess is that it's no longer loading your sprite images. Something probably changed between iOS 6 and 7, or between the versions of Xcode or OS X. In that case the screen is blank because the textures aren't loaded and therefore nothing is drawn.
EDIT:
I think the analysis will help find your problem. But to offer more possibilities, in my experience when nothing is drawing it's often one of these things:
An overall OpenGL issue - set your clear color to red or blue to test.
Shader didn't compile - always output shader compile errors so you know (see the sketch after this list).
Bad shader math or logic - use gl_FragColor = vec4(0.0, 0.0, 1.0, 1.0) or some other contrasting color to test. Can you see your structures?
The program isn't getting an attribute. Did you remember glEnableVertexAttribArray?
The program isn't getting a uniform. Use the Analyze feature above to check the uniform values and make sure they made it to the shader.
(I'll add more if I think of them.)
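For the shader-compile case, a minimal check looks something like this (a sketch in plain OpenGL ES 2.0 C, assuming you compile the shaders yourself rather than through a helper class):

#include <OpenGLES/ES2/gl.h>
#include <stdio.h>

static GLuint compile_shader(GLenum type, const GLchar *source)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &source, NULL);
    glCompileShader(shader);

    GLint compiled = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
    if (!compiled) {
        GLchar log[512];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        printf("Shader compile failed: %s\n", log);   /* or NSLog on iOS */
        glDeleteShader(shader);
        return 0;
    }
    return shader;
}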

Google Glass camera parameter settings

I'm working on an app that includes a custom camera application on Glass. I want to be able to hard-set different camera parameters, but I'm having difficulty figuring out which ones I actually have access to.
I tried calling parameters.flatten() and got a whole bunch of options that I thought I would be able to use, but when I tried testing them, nothing happened. (For example, when I tried setting the color effect to sepia, the result was still in normal color). Is there any documentation or code I can look at that will tell me which parameter options I actually have?
There are a few open issues on our issue tracker about camera parameters that do not behave as expected:
Issue 302: GDK: Camera effects not registered or take affect
Issue 303: GDK: Request: Additional camera focus modes (with auto focus)
Issue 304: GDK: Camera scene mode does not register or take affect
You may want to follow those so that you can be updated as the GDK evolves.