STM32 with display not showing whole screen

I have an STM32F4 custom-built PCB with a 0.96 inch TFT 80x160 display (https://www.buydisplay.com/0-96-inch-mini-color-tft-lcd-display-module-80x160-ips-tft-st7735) driven by an ST7735 controller, using this driver library: https://controllerstech.com/st7735-1-8-tft-display-with-stm32/
The screen works: it runs the testAll() function, which draws a variety of things to verify the display is functional. The problem is that the whole display area is not used.
Now, from the pictures it could seem like there are some dead pixels at the top and that the display is broken. But this is not the case, since I can apply a rotation (the function declaration is void ST7735_Init(uint8_t rotation)).
Rotation takes a number 0-3.
If I rotate in the init, this is the result.
We can see that the "dead" pixels have moved from the top to the bottom.
Okay, so the display itself is working fine. Must be the code.
In the ST7735.h file there are these lines:
#define ST7735_IS_160X80 1
//#define ST7735_IS_128X128 1
//#define ST7735_IS_160X128 1
#define ST7735_WIDTH 80
#define ST7735_HEIGHT 160
I uncommented the ST7735_IS_160X80 one since that is what I have, and set WIDTH to 80 and HEIGHT to 160.
In the ST7735.c file there are these lines:
int16_t _width = 80;
int16_t _height = 160;
int16_t cursor_x;
int16_t cursor_y;
uint8_t rotation;
uint8_t _colstart;
uint8_t _rowstart;
uint8_t _xstart;
uint8_t _ystart;
After all the STM32 inits this is all the display code I do:
ST7735_Init(2);
fillScreen(BLACK);
testAll();
I left some of them uninitialized for now, but I have also tried with them all set to 0, with the same result.
I must be missing something, but I can't figure out what. Does anyone have any ideas?
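For reference, the usual culprit with these 0.96 inch 80x160 panels is the column/row offset into the controller RAM. The ST7735 drives a 132x162 pixel frame memory, and an 80x160 glass is typically wired to a centered window inside it, offset by (132 - 80) / 2 = 26 columns and (162 - 160) / 2 = 1 row. If _colstart/_rowstart stay at 0, part of what is drawn lands outside the visible glass, which looks exactly like a dead strip that moves when the rotation changes. Below is a minimal sketch of rotation-aware offsets, assuming a driver structured like the one above; the exact offsets vary by panel batch (some use 24 and 0), so treat the numbers as a starting point.

/* Hypothetical helper: pick RAM-window offsets for an 80x160 ST7735 panel.
   The ST7735 RAM is 132x162; a centered 80x160 window implies
   colstart = (132 - 80) / 2 = 26 and rowstart = (162 - 160) / 2 = 1.
   For 90/270 degree rotations the two offsets swap roles. */
void ST7735_SetOffsets(uint8_t rotation)
{
    if ((rotation & 1) == 0) {   /* rotations 0 and 2: portrait */
        _colstart = 26;
        _rowstart = 1;
        _width    = 80;
        _height   = 160;
    } else {                     /* rotations 1 and 3: landscape */
        _colstart = 1;
        _rowstart = 26;
        _width    = 160;
        _height   = 80;
    }
    _xstart = _colstart;
    _ystart = _rowstart;
}

The offsets then have to be added to the x/y coordinates wherever the driver issues the CASET/RASET commands that set the address window; that is what finally shifts the image onto the visible area.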

Related

16X02 LCD not displaying any characters

I want to display some strings on a 16X02 LCD. For the time being, I am implementing the example given in the following link. My 16X02 LCD's backlight is on and bright, but it is not displaying any characters. What should I do now?
https://www.losant.com/blog/how-to-connect-lcd-esp8266-nodemcu
#include <LiquidCrystal_I2C.h>

// Construct an LCD object and pass it the
// I2C address, width (in characters) and
// height (in characters). Depending on the
// actual device, the I2C address may change.
LiquidCrystal_I2C lcd(0x27, 16, 2); // my LCD's I2C address is different from the example

void setup() {
  // The begin call takes the width and height. This
  // should match the numbers provided to the constructor.
  Serial.begin(115200);
  Serial.println("In Setup");
  lcd.begin(16, 2);
  lcd.init();

  // Turn on the backlight.
  lcd.backlight();

  // Move the cursor 5 characters to the right and
  // zero characters down (line 1).
  lcd.setCursor(5, 0);

  // Print HELLO to the screen, starting at 5,0.
  lcd.print("HELLO");

  // Move the cursor to the next line and print
  // WORLD.
  lcd.setCursor(5, 1);
  lcd.print("WORLD");
}

void loop() {
}
I'm assuming you have verified your physical connections and are providing the proper supply voltage.
Are you using the same I2C-to-GPIO expander module (PCF8574)? If not, you may need to modify the LCD library code to match your module.
Also verify that you have set the proper contrast voltage by adjusting the pot. First set it to a value where you can see all the background dots (pixels); once text is visible, you can adjust it to the optimum value.

Unity - Looking through the scope of a gun

Right now I have 2 cameras: the main camera displays the gun in its normal state, and a second camera attached to the gun (the gun is a child of the main camera) which, when toggled, looks through the scope of the gun and increases the field of view.
Here's a visual for a better understanding:
Now if I were to just toggle the second camera on and turn the main camera off, this would work splendidly, but it's not very ideal: you should only have 1 camera per scene.
So I want to Lerp the position of the camera to look through the scope and manually decrease the field of view, so I have written the following script:
[RequireComponent(typeof(Camera))]
public class Zoom : MonoBehaviour {
    private Transform CameraTransform = null;
    public Transform ZoomedTransform;

    private bool zoomed = false;

    void Start () {
        CameraTransform = Camera.main.transform;
    }

    // Update is called once per frame
    void Update () {
        if (Input.GetKey (KeyCode.LeftShift))
        {
            CameraTransform.position = Vector3.Lerp (
                CameraTransform.position,
                CameraTransform.position + ZoomedTransform.position,
                5f * Time.deltaTime
            );
            CameraTransform.Rotate(ZoomedTransform.rotation.eulerAngles);
        }
    }
}
The problem with this is that it doesn't work: when I hit the zoom button, the camera speeds through the scene at the speed of light and it's hard to tell exactly what is going on.
Could anyone give me some insight as to what I'm doing wrong? I think it is something to do with the parent-child relationship, but even when I've tried using static values, I cannot seem to replicate the correct solution.
Hierarchy:
(This answer operates under the assumption that ZoomedTransform is a relative transformation, and not the absolute position of the camera as suspected by 31eee384's answer.)
I think there are a couple issues with your code. I'll tackle them individually so they're easier to understand, but they both relate to the following line:
CameraTransform.position = Vector3.Lerp (CameraTransform.position, CameraTransform.position + ZoomedTransform.position, 5f * Time.deltaTime);
First, let's look at how you're using Vector3.Lerp(). For the third argument of Vector3.Lerp(), you're supplying 5f * Time.deltaTime. What exactly does this value work out to? Well, the standard framerate is about 60 FPS, so Time.deltaTime = ~1/60. Hence, 5f * Time.deltaTime = 5/60 = ~0.0833.
What is Vector3.Lerp() expecting for the third argument, though? According to the documentation, that third argument should be between 0 and 1, and determines whether the returned Vector3 should be closer to the first or second given Vector3. So yes, 5f * Time.deltaTime falls within this range, but no interpolation will occur - because it will always be around ~0.0833, rather than progressing from 0 to 1 (or 1 to 0). Each frame, you're basically always getting back cameraPos + zoomTransform * 0.0833.
The other notable problem is how you're updating the value of CameraTransform.position every frame, but then using that new (increased) value as an argument for Vector3.Lerp() the next frame. (This is a bit like doing int i = i + 1; in a loop.) This is the reason why your camera is flying across the map so fast. Here is what is happening each frame, using the hypothetical result of your Vector3.Lerp() that I calculated earlier (pseudocode):
// Frame 1
cameraPosFrame_1 = cameraPosFrame_0 + zoomTransform * 0.0833;
// Frame 2
cameraPosFrame_2 = cameraPosFrame_1 + zoomTransform * 0.0833;
// Frame 3
cameraPosFrame_3 = cameraPosFrame_2 + zoomTransform * 0.0833;
// etc...
Every frame, zoomTransform * 0.0833 gets added to the camera's position. That ends up being a really fast, non-stop increase in value, so your camera flies across the map.
One way to address these problems is to keep variables that store your camera's initial local position, the zoom progress, and the zoom duration. This way we never lose the original position of the camera, and we can keep track of both how far the zoom has progressed and when to stop it.
[RequireComponent(typeof(Camera))]
public class Zoom : MonoBehaviour {
    private Transform CameraTransform = null;
    public Transform ZoomedTransform;

    private Vector3 startLocalPos;
    private float zoomProgress = 0;
    private float zoomLength = 2; // Number of seconds zoom will take

    private bool zoomed = false;

    void Start () {
        CameraTransform = Camera.main.transform;
        startLocalPos = CameraTransform.localPosition;
    }

    // Update is called once per frame
    void Update () {
        if (Input.GetKey (KeyCode.LeftShift))
        {
            zoomProgress += Time.deltaTime;
            CameraTransform.localPosition = Vector3.Lerp (startLocalPos, startLocalPos + ZoomedTransform.position, zoomProgress / zoomLength);
            CameraTransform.Rotate(ZoomedTransform.rotation.eulerAngles);
        }
    }
}
Hope this helps! Let me know if you have any questions. This answer does ramble a little, so I hope you don't have any trouble getting the important points from it.
Your lerp target is relative to the camera's current position, so it's constantly moving. This is the target you have:
CameraTransform.position + ZoomedTransform.position
This means that as your camera moves to get closer to this position, the camera's new position causes the destination to change. So your camera keeps moving forever.
Your destination should be ZoomedTransform.position. No addition is necessary because position is in world coordinates. (And when you actually need to convert between spaces, check out TransformPoint and similar methods.)
It has been a while since I have done anything in Unity, but I think it is evaluating the Lerp once per frame rather than over actual elapsed time. You will need to drive it from something that accumulates real time rather than a single per-frame call.

Issue reading screen pixels for color detection

I am making a game that involves freehand drawing, with sprites that animate when they pass over the drawing. So I have to use color detection to trigger an event when a sprite encounters a change in color at its position on the screen. For this I am using glReadPixels() with RGBA_8888 and a GLES20 context, reading the value back as red/green/blue components, but it always returns 0 for everything. I have tried changing the pixel format and made many hit-and-trial attempts, but with no success. Can you please help?
My code:
ByteBuffer PixelBuffer = ByteBuffer.allocateDirect(4);
PixelBuffer.order(ByteOrder.nativeOrder());
PixelBuffer.position(0);
GLES20.glReadPixels(100, 100, 1, 1, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, PixelBuffer);
byte b[] = new byte[4];
PixelBuffer.get(b);
Log.e("COLOR", "R:" + PixelBuffer.get(0) + PixelBuffer.get(1) + PixelBuffer.get(2));
Result in Logcat: COLOR R: 000.
I tried using a non-black background, with red at the screen coordinate provided.
Thanks in advance
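Two things are worth checking with glReadPixels: it only returns valid data when called on the thread that owns the GL context, after the frame has actually been rendered (for example at the end of onDrawFrame, not from a UI event handler), and its y coordinate is measured from the bottom-left corner of the surface, not the top-left. Here is a minimal sketch of the equivalent read against the C GLES2 API with an error check added; the function and its parameters are illustrative, not from the original code.

#include <GLES2/gl2.h>
#include <stdio.h>

/* Illustrative sketch: read one pixel back immediately after drawing,
   while the GL context is still current on this thread. */
void read_pixel(int x, int y_from_top, int surface_height)
{
    GLubyte rgba[4] = {0, 0, 0, 0};

    /* glReadPixels uses a bottom-left origin, so convert a top-left y. */
    int y = surface_height - 1 - y_from_top;

    glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, rgba);

    GLenum err = glGetError();
    if (err != GL_NO_ERROR)
        printf("glReadPixels failed: 0x%04x\n", (unsigned)err);
    else
        printf("R:%u G:%u B:%u A:%u\n", rgba[0], rgba[1], rgba[2], rgba[3]);
}

If glGetError reports an error, or the values stay 0 even over a known non-black pixel, the call is almost certainly happening without a current context or before anything has been drawn to the framebuffer being read.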

OpenGL ES 2.0 for iOS - multiple calls to glDrawElements causing EXC_BAD_ACCESS

Several years ago, I wrote a small Cocoa/Obj-C game framework for OpenGL ES 1.1 and iPhone. This was back when iOS 3.x was popular. My OpenGL ES 1.1 / iOS 3.x implementation of this all worked fine. Time passed, and here we are now with iOS 5.1, OpenGL ES 2.0, ARC, blocks, and other things. I decided that it was high time to port the project over to more... modern standards.
EDIT: Solved one of the problems on my own - that of why it was crashing on the simulator. Sort of: I am now able to draw smaller models, but larger ones (like the test police car) still cause an EXC_BAD_ACCESS, even if that is the only, single call to glDrawElements. I was also able to fix drawing multiple meshes on the Simulator; however, I don't know if this will work on-device until tomorrow morning (my 5.0 test device is my friend's iPhone). So I guess the main question is: why are larger models causing an EXC_BAD_ACCESS on the simulator?
Original post below
However, in moving it up to 5.0, I've run into some OpenGL ES 2.0 errors - two of them specifically, although they may be related. The first is simple: if I render my model on a device (iPhone 4S running 5.0.1), it displays, but if I try to display it on the simulator (iPhone Simulator running 5.0), it throws an EXC_BAD_ACCESS in glDrawElements. The second is also simple: I cannot draw multiple meshes. When I draw the model as one big group (one vertex array/index array combo) it draws fine, but when I draw the model as multiple parts (i.e., multiple calls to glDrawElements) it fails and displays a big black screen. The blackness is not from the model being drawn (I have verified this, as outlined below).
To sum it up before the much more detailed part: attempting to render my model on the simulator crashes.
Caveat: It all works fine for small meshes. I have no problem drawing my small, statically-declared cube over and over, even on the simulator. When I say statically-declared, I mean a hard-coded const array of structs that gets bound and loaded into the vertex buffer and a const array of GLushorts bound and loaded into the index array.
Note: when I say 'model' I mean an overall model, possibly made up of multiple vertex and index buffers. In code, this means that a model simply holds an array of meshes or model-groups. A mesh or model-group is a sub-unit of a model, eg one contiguous piece of the model, has one vertex array and one index array, and stores the lengths of both as well. In the case of the model I've been using, the body of the car is one mesh, the windows another, the lights a third. All together, they make up the model.
The model I am using is a police car, has several thousand vertices and faces, and is split into multiple parts (body, lights, windows, etc) - the body is about 3000 faces, the windows about 100, the lights a bit less.
Here are some things to know:
1) My model is loading properly. I have verified this in two ways: printing out the model vertices and manually inspecting them, and displaying each model-group individually as outlined in 2). I'd post images, but with the reputation limit and this being my first question, I can't. I have also re-built the model loader twice from scratch with no change, so I know the vertex and index buffers are in the correct order/format.
2) When I load the model as a single model-group (i.e., one vertex buffer/index buffer) it displays the whole model correctly. When I load the model as multiple model-groups and display any given model-group individually, it displays correctly. When I try to draw multiple model-groups (multiple calls to glDrawElements), the big black screen happens.
3) The black screen is not because of the model being drawn. I verified this by changing my fragment shader to draw every pixel red no matter what. I always clear the color buffer to a medium gray (I clear the depth buffer as well, obviously), but attempting to draw multiple meshes/model-groups results in a black screen. We know it is not the model simply obscuring the view, because it is colored black instead of red. This occurs on the device; I do not know what would happen on the simulator, as I cannot get it to draw there at all.
4) My model will not draw in the simulator, either as a single mesh/model-group or as multiple meshes/model-groups. The application loads properly, but attempting to draw a mesh/model-group results in an EXC_BAD_ACCESS in glDrawElements. The relevant parts of the backtrace are:
thread #1: tid = 0x1f03, 0x10b002b5, stop reason = EXC_BAD_ACCESS (code=1, address=0x94fd020)
frame #0: 0x10b002b5
frame #1: 0x09744392 GLEngine`gleDrawArraysOrElements_ExecCore + 883
frame #2: 0x09742a9b GLEngine`glDrawElements_ES2Exec + 505
frame #3: 0x00f43c3c OpenGLES`glDrawElements + 64
frame #4: 0x0001cb11 MochaARC`-[Mesh draw] + 177 at Mesh.m:81
EDIT: It consistently is able to draw smaller, dynamically-created models (~100 faces), but not the ~3000 faces of the whole model.
I was able to get it to render a much-smaller, less-complicated, but still dynamically loaded, model consisting of 192 faces / 576 vertices. I was able to display it both as a single vertex and index buffer, as well as split up into parts and rendered as multiple smaller vertex and index buffers. Attempting to draw the single-mesh model in the simulator resulted in the EXC_BAD_ACCESS still being thrown, but only on the first frame. If I force it to continue, it displays a very screwed up model, and then every frame after that, it displayed 100% fine exactly as it ought to have.
My shaders are not in error. They compile and display correctly when I use a small, statically declared vertex buffer. However, for completeness I will post them at the bottom.
My code is as follows:
Render loop:
glClearColor(0.65f, 0.65f, 0.65f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

//muShader is a subclass of a shader-handler I've written that tracks the active shader
//and handles attributes/uniforms.
//[muShader use] just does glUseProgram(muShader.program); then
//disables the previous shader's attributes (if needed) and then
//activates its own attributes - in this case it does:
//  glEnableVertexAttribArray(self.position);
//  glEnableVertexAttribArray(self.uv);
//where position and uv are handles to the position and texture coordinate attributes
[self.muShader use];

GLKMatrix4 model = GLKMatrix4MakeRotation(GLKMathDegreesToRadians(_rotation), 0, 1, 0);
GLKMatrix4 world = GLKMatrix4Identity;
GLKMatrix4 mvp = GLKMatrix4Multiply(_camera.projection, _camera.view);
mvp = GLKMatrix4Multiply(mvp, world);
mvp = GLKMatrix4Multiply(mvp, model);

//muShader.modelViewProjection is a handle to the shader's model-view-projection matrix uniform
glUniformMatrix4fv(self.muShader.modelViewProjection, 1, 0, mvp.m);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, self.policeTextureID);
//ditto on muShader.texture
glUniform1i(self.muShader.texture, 0);

for (int i = 0; i < self.policeModel.count; i++)
{
    //I'll expand muShader readyForFormat after this
    [self.muShader readyForFormat:ModelVertexFormat];
    //I'll expand mesh draw after this
    [[self.policeModel meshAtIndex:i] draw];
}
muShader stuff
muShader binding attributes and uniforms
I won't post the whole muShader class, as it is unnecessary; suffice to say that it works, or else it would not display anything at all, ever.
//here is where we bind the attribute locations when the shader is created
-(void)bindAttributeLocations
{
    _position = glGetAttribLocation(self.program, "position");
    _uv = glGetAttribLocation(self.program, "uv");
}

//ditto for uniforms
-(void)bindUniformLocations
{
    _modelViewProjection = glGetUniformLocation(self.program, "modelViewProjection");
    _texture = glGetUniformLocation(self.program, "texture");
}
muShader readyForFormat
-(void)readyForFormat:(VertexFormat)vertexFormat
{
    switch (vertexFormat)
    {
        //... extra vertex formats removed for brevity
        case ModelVertexFormat:
            //ModelVertex is a struct with the following definition:
            //typedef struct {
            //    GLKVector4 position;
            //    GLKVector4 uv;
            //    GLKVector4 normal;
            //} ModelVertex;
            glVertexAttribPointer(_position, 3, GL_FLOAT, GL_FALSE, sizeof(ModelVertex), BUFFER_OFFSET(0));
            glVertexAttribPointer(_uv, 3, GL_FLOAT, GL_FALSE, sizeof(ModelVertex), BUFFER_OFFSET(16));
            break;
        //... extra vertex formats removed for brevity
    }
}
Mesh stuff
setting up the vertex/index buffers
//this is how I set/create the vertex buffer for a mesh/model-group
//vertices is a c-array of ModelVertex structs,
//  created with malloc(count * sizeof(ModelVertex))
//  and freed using free(vertices) - after setVertices is called, of course
-(void)setVertices:(ModelVertex *)vertices count:(GLushort)count
{
    //frees previous data if necessary
    [self freeVertices];
    glGenBuffers(1, &_vertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(ModelVertex) * count, vertices, GL_STATIC_DRAW);
    _vertexCount = count;
}

//this is how I set/create the index buffer for a mesh/model-group
//indices is a c-array of GLushorts,
//  created with malloc(count * sizeof(GLushort))
//  and freed using free(indices) - after setIndices is called, of course
-(void)setIndices:(GLushort *)indices count:(GLushort)count
{
    [self freeIndices];
    glGenBuffers(1, &_indexBuffer);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLushort) * count, indices, GL_STATIC_DRAW);
    _indexCount = count;
}
mesh draw
//vertexBuffer and indexBuffer are handles to a vertex/index buffer
//I have verified that they are loaded properly
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);
glDrawElements(GL_TRIANGLES, _indexCount, GL_UNSIGNED_SHORT, 0);
Shader stuff
Vertex Shader
attribute highp vec4 position;
attribute lowp vec3 uv;

varying lowp vec3 fragmentUV;

uniform highp mat4 modelViewProjection;
uniform lowp sampler2D texture;

void main()
{
    fragmentUV = uv;
    gl_Position = modelViewProjection * position;
}
Fragment shader
varying lowp vec3 fragmentUV;

uniform highp mat4 modelViewProjection;
uniform lowp sampler2D texture;

void main()
{
    gl_FragColor = texture2D(texture, fragmentUV.xy);
    //used below instead to test the aforementioned black screen by setting
    //every pixel of the model being drawn to red;
    //the screen stayed black, so the model wasn't covering the whole screen or anything
    //gl_FragColor = vec4(1,0,0,1);
}
Answered it myself: when using multiple buffer objects, glEnableVertexAttribArray has to be called every time you bind the vertex/index buffer object, rather than simply once per frame (per shader). This was the cause of all of the problems, including the simulator crashing.
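In case it helps anyone hitting the same crash, here is a hedged sketch of what the per-mesh draw ends up looking like under that fix, reusing the attribute handles and ModelVertex layout from the question (in the original structure those handles live in the shader object, so some plumbing is elided). Note that glVertexAttribPointer also has to be re-issued after each glBindBuffer(GL_ARRAY_BUFFER, ...), since it captures whichever array buffer is bound at the moment it is called:

/* Per-mesh draw: attribute state is re-established after every buffer
   bind, not just once per frame or once per shader. */
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);

glEnableVertexAttribArray(_position);
glEnableVertexAttribArray(_uv);
glVertexAttribPointer(_position, 3, GL_FLOAT, GL_FALSE, sizeof(ModelVertex), BUFFER_OFFSET(0));
glVertexAttribPointer(_uv, 3, GL_FLOAT, GL_FALSE, sizeof(ModelVertex), BUFFER_OFFSET(16));

glDrawElements(GL_TRIANGLES, _indexCount, GL_UNSIGNED_SHORT, 0);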

What is the right way to manage image assets for J2ME in NetBeans

I'm using NetBeans to develop a J2ME app that runs across many different devices. The app uses a lot of different image assets. Since the devices have different screen sizes, this means that I need to compile multiple binaries, each with different asset sizes.
So far, I've been using a manual process to control the assets. I have a directory consisting of a bunch of subdirectories, each corresponding to the assets needed for a particular class of device. For example, I have one directory "320_240" that has assets sized for a 320x240 screen, and another "480_360" that has assets sized for a 480x360 screen. The file names are exactly the same, as is the code that loads them. Before I compile, I just copy the proper files into the default package (under src).
This can obviously be improved. I already have different project configurations representing the different screen sizes, so I'd like to make the assets switch automatically, too. As a relative novice with NetBeans, I'm not sure what the best way to do this is.
FWIW, here's the best I've come up with yet:
1) Create asset.LABEL packages under src, where LABEL corresponds to the device class (e.g. "320_240", "480_360")
2) Put the images for each class into the proper src/asset/LABEL directory
3) Create a static final String assetDir that gets set to "/asset/LABEL/" according to the currently selected project config
4) Load the images using Image.createImage(assetDir + "image.png")
5) For each configuration, only include the necessary asset directory in Project->Build->Sources Filtering (I think this is necessary to avoid storing the unused images in the compiled app, correct?)
This still feels a bit hokey, though. This has to be a common problem. Does anyone have a better solution?
Thanks!
If you are using a lot of images, the size of the jar file increases, and you may not be able to install that jar on some low-end devices.
Just use one image and resize it according to the screen width and screen height.
To resize the image, use the method below.
public Image resizeImage(Image src, int screenHeight, int screenWidth) {
    int srcWidth = src.getWidth();
    int srcHeight = src.getHeight();

    Image tmp = Image.createImage(screenWidth, srcHeight);
    Graphics g = tmp.getGraphics();
    int ratio = (srcWidth << 16) / screenWidth;
    int pos = ratio / 2;

    // Horizontal resize
    for (int index = 0; index < screenWidth; index++) {
        g.setClip(index, 0, 1, srcHeight);
        g.drawImage(src, index - (pos >> 16), 0, Graphics.LEFT | Graphics.TOP);
        pos += ratio;
    }

    Image resizedImage = Image.createImage(screenWidth, screenHeight);
    g = resizedImage.getGraphics();
    ratio = (srcHeight << 16) / screenHeight;
    pos = ratio / 2;

    // Vertical resize
    for (int index = 0; index < screenHeight; index++) {
        g.setClip(0, index, screenWidth, 1);
        g.drawImage(tmp, 0, index - (pos >> 16), Graphics.LEFT | Graphics.TOP);
        pos += ratio;
    }

    return resizedImage;
}
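As an aside, the bit shifts in that method are 16.16 fixed-point arithmetic: ratio is the source step per destination pixel scaled by 2^16, and pos >> 16 recovers the integer source index, which is what makes the nearest-neighbour scaling work without floating point (many J2ME VMs have no FPU). Here is a standalone C sketch of the same index math, with illustrative sizes:

/* Illustrative sketch of 16.16 fixed-point nearest-neighbour scaling:
   for each destination index, compute the source index without floats. */
#include <stdio.h>

int main(void)
{
    int srcWidth = 320, dstWidth = 80;        /* example sizes */
    int ratio = (srcWidth << 16) / dstWidth;  /* source step per dest pixel, in 16.16 */
    int pos = ratio / 2;                      /* start half a step in, centring samples */

    for (int index = 0; index < dstWidth; index++) {
        int srcIndex = pos >> 16;             /* integer part = source pixel to copy */
        printf("dest %d <- src %d\n", index, srcIndex);
        pos += ratio;
    }
    return 0;
}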