Waveshare e-ink display - content faded when boxes drawn below

I'm using a Waveshare e-ink display (5x7) attached to a Pi Zero W via a HAT, and I'm building the content from top to bottom.
As you can see from this photo (apologies for the reflection of the conservatory roof), all is fine up to this point:
However, if I then draw one or more boxes below the content, the weather icons fade out from right to left, like so:
The order in which I draw is irrelevant - it happens whether I draw the boxes then the weather data, or vice versa.
The relevant code is as follows:
# Draw one rectangle for the top data
draw.rectangle([(0, 0), (479, 120)], outline=0)
# And another for the tasks
draw.rectangle([(0, 220), (239, 700)], outline=0)
# And a third for something else
draw.rectangle([(241, 220), (479, 700)], outline=0)

# Draw the forecast (in a loop).
# If we have 400 pixels to play with, the forecast covers the next 5 hours,
# so 80 pixels per entry.
i = 0
xoffset = 40
yoffset = 130
forecast = get_forecast()
while i < 5:
    # Get the data
    icon = get_icon(forecast[i]['icon'])
    time = forecast[i]['time']
    temperature = str(forecast[i]['temperature']) + u'\N{DEGREE SIGN}' + "C"
    # Draw the forecast time
    timewidth = forecastfont.getsize(time)[0]
    textx = calculate_offset(xoffset, timewidth, xoffset)
    texty = yoffset
    draw.text((textx, texty), time, font=forecastfont, fill=0)
    # Draw the forecast icon
    iconwidth = weather24.getsize(icon)[0]
    iconx = calculate_offset(xoffset, iconwidth, xoffset)
    icony = yoffset + forecastfont.getsize(time)[1] + 5
    draw.text((iconx, icony), icon, font=weather24, fill=0)
    # Draw the forecast temperature
    tempwidth = temperaturefont.getsize(temperature)[0]
    tempx = calculate_offset(xoffset, tempwidth, xoffset)
    tempy = yoffset + forecastfont.getsize(time)[1] + weather24.getsize(icon)[1] + 5
    draw.text((tempx, tempy), temperature, font=temperaturefont, fill=0)
    # Advance the loop and move the offset
    i += 1
    xoffset += 60
My research suggests that sleeping the display after writing should help, but I'm already doing that:
epd.display(epd.getbuffer(image))
epd.sleep()
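For reference, the full update cycle in Waveshare's Python examples re-initialises the panel before each write. A minimal sketch, assuming the epd7in5_V2 driver from the waveshare_epd library (adjust the import for your panel; image is the PIL image built above):

from waveshare_epd import epd7in5_V2

epd = epd7in5_V2.EPD()
epd.init()                          # wake the panel from deep sleep
epd.Clear()                         # optional: flush ghosting from the previous image
epd.display(epd.getbuffer(image))   # push the rendered PIL image
epd.sleep()                         # return the panel to deep sleep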

This seems to be caused by too much light hitting the screen while it updates. It also happens once the screen is static, but in reverse (the screen darkens under direct light).
It's a bit disappointing, but you may want to move the screen somewhere darker and, in particular, avoid any direct lighting.
I noticed this with a Waveshare 7.5 v2 screen and ESP32 drivers.


Manually write world file (jgw) from Leaflet.js map

I need to export georeferenced images from Leaflet.js on the client side. Exporting an image from Leaflet is not a problem, as there are plenty of existing plugins for this, but I'd like to include a world file with the export so the resulting image can be read into GIS software. I have a working script for this, but I can't seem to nail down the correct parameters for my world file such that the resulting georeferenced image is positioned exactly right.
Here's my current script:
// map is a Leaflet map object
let bounds = map.getBounds(); // Leaflet LatLngBounds
let topLeft = bounds.getNorthWest();
let bottomRight = bounds.getSouthEast();
let width_deg = bottomRight.lng - topLeft.lng;
let height_deg = topLeft.lat - bottomRight.lat;
let width_px = $(map._container).width(); // Width of the map in px
let height_px = $(map._container).height(); // Height of the map in px
let scaleX = width_deg / width_px;
let scaleY = height_deg / height_px;
let jgwText = `${scaleX}
0
0
-${scaleY}
${topLeft.lng}
${topLeft.lat}`
This seems to work well at large scales (i.e. zoomed in to city level or so), but at smaller scales there is some distortion along the y-axis. One thing I noticed is that all the examples of world files I can find (including those produced by QGIS or ArcMap) have x-scale and y-scale parameters that are exactly equal (oppositely signed). In my calculations, these terms differ unless you are sitting right on the equator.
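For intuition on why the degree-based scales can only match at the equator: one degree of longitude spans roughly cos(latitude) times the ground distance of one degree of latitude. A quick sketch in Python, with hypothetical numbers for a view centred near 51.7°N:

import math

# Hypothetical view: 4 deg wide, 3 deg tall, centred near 51.7 N,
# rendered into a 640 x 480 px container (numbers are illustrative).
width_deg, height_deg = 4.0, 3.0
width_px, height_px = 640, 480
lat = 51.7

scale_x = width_deg / width_px    # degrees of longitude per pixel
scale_y = height_deg / height_px  # degrees of latitude per pixel

# Square ground pixels require scale_x = scale_y / cos(lat) in degree
# units, so the two scales only coincide at the equator. Working in
# projected metres (EPSG:3857) sidesteps the mismatch.
print(scale_x, scale_y, scale_y / math.cos(math.radians(lat)))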
Example world file produced from QGIS
0.08984380916303301 // x-scale (size of px in x direction)
0 // rotation parameter 1
0 // rotation parameter 2
-0.08984380916303301 // y-scale (size of px in y direction)
-130.8723208723141056 // x-coord of top left px
51.73651369984968085 // y-coord of top left px
Example world file produced from my calcs
0.021972656250000017
0
0
-0.015362443783773333
-130.91308593750003
51.781435604431195
Example of produced image using my calcs with correct state boundaries overlaid:
Does anyone have any idea what I'm doing wrong here?
The problem was solved by using EPSG:3857 for the world file and ensuring the width and height of the map bounds were also measured in that coordinate system. I had tried using EPSG:3857 for the world file, but measured the width and height of the map bounds using Leaflet's L.map.distance() function. To solve the problem, I instead projected the corner points of the map bounds to EPSG:3857 using L.CRS.EPSG3857.project(), then simply subtracted the X,Y values.
Corrected code is shown below, where map is a Leaflet map object (L.map):
// Get map bounds and corner points in 4326
let bounds = map.getBounds();
let topLeft = bounds.getNorthWest();
let bottomRight = bounds.getSouthEast();
let topRight = bounds.getNorthEast();
// get width and height in px of the map container
let width_px = $(map._container).width()
let height_px = $(map._container).height()
// project corner points to 3857
let topLeft_3857 = L.CRS.EPSG3857.project(topLeft)
let topRight_3857 = L.CRS.EPSG3857.project(topRight)
let bottomRight_3857 = L.CRS.EPSG3857.project(bottomRight)
// calculate width and height in meters using epsg:3857
let width_m = topRight_3857.x - topLeft_3857.x
let height_m = topRight_3857.y - bottomRight_3857.y
// calculate the scale in x and y directions in meters (this is the width and height of a single pixel in the output image)
let scaleX_m = width_m / width_px
let scaleY_m = height_m / height_px
// World files need the CENTRE of the top-left pixel; what we currently have
// is the TOP-LEFT corner of that pixel. Shift right by half a pixel width and
// down by half a pixel height (y decreases southwards in EPSG:3857).
let topLeftCenterPxX = topLeft_3857.x + (scaleX_m / 2)
let topLeftCenterPxY = topLeft_3857.y - (scaleY_m / 2)
// format the text of the worldfile
let jgwText = `${scaleX_m}
0
0
-${scaleY_m}
${topLeftCenterPxX}
${topLeftCenterPxY}`
For anyone else with this problem, you'll know things are correct when your scale-x and scale-y values are exactly equal (but oppositely signed)!
Thanks @IvanSanchez for pointing me in the right direction :)

Why can't I colour my segmented region from the original image

I have the following code:
close all;
star = imread('/Users/name/Desktop/folder/pics/OnTheBeach.png');
blrtype = fspecial('average',[3 3]);
blurred = imfilter(star, blrtype);
[rows,cols,planes] = size(star);
R = star(:,:,1); G = star(:,:,2); B = star(:,:,3);
starS = zeros(rows,cols);
ind = find(R > 190 & R < 240 & G > 100 & G < 170 & B > 20 & B < 160);
starS(ind) = 1;
K = imfill(starS,'holes');
stats = regionprops(logical(K), 'Area', 'Solidity');
ind = ([stats.Area] > 250 & [stats.Solidity] > 0.1);
L = bwlabel(K);
result = ismember(L,find(ind));
Up to this point I load an image, blur to filter out some noise, do colour segmentation to find the specific objects which fall in that range, then create a binary image that has value 1 for the object's colour, and 0 for all other stuff. Finally I do region filtering to remove any clutter that was left in the image so I'm only left with the objects I'm looking for.
Now I want to recolour the original image based on the segmentation mask to change the colour of the starfish. I want to create red, green and blue channels, assign values to them, then lay the mask over the image (to have red starfish, for example).
red = star;
red(starS) = starS(:,:,255);
green = star;
green(starS) = starS(:,:,0);
blue = star;
blue(starS) = star(:,:,0);
out = cat(3, red, green, blue);
imshow(out);
This gives me an error: Index exceeds matrix dimensions.
Error in Project4 (line 28)
red(starS) = starS(:,:,255);
What is wrong with my current approach?
Your code is a bit confusing... I don't understand whether the mask you want to use is starS or result, since both look like 2D indexers. In your second code snippet you used starS, but the mask you posted in your question is result.
Anyway, no matter what your desired mask is, all you have to do is to use the imoverlay function. Here is a small example based on your code:
out = imoverlay(star,result,[1 0 0]);
imshow(out);
and here is the output:
If the opaque mask of imoverlay suggested by Tommaso is not what you're after, you can modify the RGB values of the input to cast a hue over the selected pixels without saturating them. It is only slightly more involved.
I = find(result);
gives you the linear indices of the pixels selected in the 2D mask. However, star is 3D. Those indices will point at the same pixels, but only in the first 2D slice; that is, if I points at pixel (x,y), it is equivalently pointing to pixel (x,y,1), the red component of that pixel. To index (x,y,2) and (x,y,3), the green and blue components, you need to increment I by numel(result) and 2*numel(result). That is, star(I) accesses the red component of the selected pixels, star(I+numel(result)) accesses the green component, and star(I+2*numel(result)) accesses the blue component.
Now that we can access these values, how do we modify their color?
This is what imoverlay does:
I = find(result);
out = star;
out(I) = 255; % red channel
I = I + numel(result);
out(I) = 0; % green channel
I = I + numel(result);
out(I) = 0; % blue channel
Instead, you can increase the brightness of the red proportionally, and decrease the green and blue. This will change the hue, increase saturation, and preserve the changes in intensity within the stars. I suggest the gamma function, because it will not cause strong saturation artefacts:
I = find(result);
out = double(star)/255;
out(I) = out(I).^0.5; % red channel
I = I + numel(result);
out(I) = out(I).^1.5; % green channel
I = I + numel(result);
out(I) = out(I).^1.5; % blue channel
imshow(out)
By increasing the 1.5 and decreasing the 0.5 you can make the effect stronger.
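For readers outside MATLAB, the same per-channel gamma trick translates directly; a NumPy sketch, where img stands in for star (an HxWx3 uint8 image) and mask for result (a boolean 2D mask):

import numpy as np

# img: HxWx3 uint8 RGB image (stands in for `star`);
# mask: HxW boolean array of selected pixels (stands in for `result`).
out = img.astype(np.float64) / 255.0
out[mask, 0] = out[mask, 0] ** 0.5   # brighten the red channel
out[mask, 1] = out[mask, 1] ** 1.5   # darken the green channel
out[mask, 2] = out[mask, 2] ** 1.5   # darken the blue channel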

Matlab radial gradient image

I am attempting to create a radial gradient image like the following using Matlab. The image needs to be of size 640*640*3, as I have to blend it with another image of that size. I have written the following code, but the image that comes out is simply a grey circle on a black background, with no fading around the edges.
p = zeros(640,640,3);
for i = 1:640
    for j = 1:640
        d = sqrt((i-320)^2 + (j-320)^2);
        if d < 640/3
            p(i,j,:) = .5;
        elseif d > 1280/3
            p(i,j,:) = 0;
        else
            p(i,j,:) = (1 + cos(3*pi)*(d-640/3))/4;
        end
    end
end
imshow(p);
Any help would be greatly appreciated as I am new to Matlab.
Change:
p(i,j,:) = (1 + cos(3*pi)*(d-640/3))/4;
to
p(i,j,:) = .5-( (.5-0)*(d-640/3)/(640/3)) ;
This is an example of linear interpolation, where the grey value drops linearly from the inner rim down to the background.
You can try other equations to have different kinds of gradient fading!
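As an aside, the same linear falloff can be computed without the double loop; a vectorised sketch in NumPy (sizes match the 640*640*3 image in the question):

import numpy as np

# Distance of every pixel from the centre (320, 320), computed at once.
y, x = np.ogrid[:640, :640]
d = np.sqrt((x - 320) ** 2 + (y - 320) ** 2)

inner, outer = 640 / 3, 1280 / 3
# 0.5 inside the inner radius, 0 beyond the outer one, linear in between.
g = np.clip(0.5 - 0.5 * (d - inner) / (outer - inner), 0.0, 0.5)
p = np.repeat(g[:, :, None], 3, axis=2)  # replicate grey to 640x640x3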
If you look more closely at your third case (which, by the way, could be a simple else instead of an elseif), you can see that you have
= (1 + cos(3*pi))*...
Since cos(3*pi) = -1, this will always be 0, making all pixels within that range black. I assume you wanted a d somewhere inside the cosine.

How to get the average color of a sprite? (Unity3d)

I'm finishing up an asset that I'm putting on the Asset Store. One of its features requires the average color of the sprite. Right now I have a public Color field, where the user can use the color picker or the color wheel to choose what they think looks like the average color of the sprite. I want the script to calculate the average sprite color automatically, which improves accuracy by removing human error and saves the user the time spent guessing.
There is a post about this on the Unity forums; here is the link. The answer is:
Color32 AverageColorFromTexture(Texture2D tex)
{
    Color32[] texColors = tex.GetPixels32();
    int total = texColors.Length;
    float r = 0;
    float g = 0;
    float b = 0;
    for (int i = 0; i < total; i++)
    {
        r += texColors[i].r;
        g += texColors[i].g;
        b += texColors[i].b;
    }
    return new Color32((byte)(r / total), (byte)(g / total), (byte)(b / total), 0);
}
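Note that GetPixels32 only works when the texture is readable (Read/Write Enabled in the texture's import settings); otherwise Unity throws an exception. If you want to sanity-check the averaging outside Unity, the same computation in a Pillow/NumPy sketch (sprite.png is a hypothetical path to the sprite's source texture):

from PIL import Image
import numpy as np

# Hypothetical path standing in for the sprite's source texture.
pixels = np.asarray(Image.open("sprite.png").convert("RGB"), dtype=np.float64)
average = pixels.reshape(-1, 3).mean(axis=0)  # mean R, G, B over all pixels
print(average)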

laying out images in UIScrollView automatically

I have a list of images retrieved from XML, and I want to populate a UIScrollView with them in an order that looks like this:
1 2 3
4 5 6
7 8 9
10
If there are only 10 images, it will just stop there.
Right now my current code is this:
for (int i = 3; i < [appDelegate.ZensaiALLitems count] - 1; i++) {
    UIButton *zenbutton2 = [UIButton buttonWithType:UIButtonTypeCustom];
    Items *ZensaiPLUitems = [appDelegate.ZensaiALLitems objectAtIndex:i];
    NSURL *ZensaiimageSmallURL = [NSURL URLWithString:ZensaiPLUitems.ZensaiimageSmallURL];
    NSLog(@"FVGFVEFV :%@", ZensaiPLUitems.ZensaiimageSmallURL);
    NSData *simageData = [NSData dataWithContentsOfURL:ZensaiimageSmallURL];
    UIImage *itemSmallimage = [UIImage imageWithData:simageData];
    [zenbutton2 setImage:itemSmallimage forState:UIControlStateNormal];
    zenbutton2.frame = CGRectMake((i*110 + i*110) - 660, 300, 200, 250);
    [zenbutton2 addTarget:self action:@selector(ShowNextZensaiPage) forControlEvents:UIControlEventTouchUpInside];
    [scrollView addSubview:zenbutton2];
}
Notice the CGRectMake: I have to manually assign fixed values to position them.
Is there any way to lay them out without assigning positions manually?
For example, the images would automatically move down a row once the current row has 3 images, and so on for the rest.
If I understand what you are saying, you should be able to write a simple block of code that assigns a position based on the image number.
Something like this (where i is the image number, starting from 0):
- (CGPoint)getImageOrigin:(NSInteger)imageNumber {
    CGFloat leftInset = 30;
    CGFloat xOffsetBetweenOrigins = 80;
    CGFloat topInset = 20;
    CGFloat yOffsetBetweenOrigins = 80;
    int numPerRow = 3;
    CGFloat x = leftInset + (xOffsetBetweenOrigins * (imageNumber % numPerRow));
    CGFloat y = topInset + (yOffsetBetweenOrigins * floorf(imageNumber / numPerRow));
    CGPoint imageOrigin = CGPointMake(x, y);
    return imageOrigin;
}
The origin being calculated here is the upper left corner of each image.
To calculate the x value, I start with the minimum distance from the left side of the screen (leftInset). Then, I add the distance from the left side of one image to the next image, multiplied by the column (imageNumber % numPerRow).
Y is calculated in a similar fashion, but to calculate the row, I use the imageNumber / numPerRow rounded down.
Edit:
You asked me to explain further, so I'll see what I can do.
OK, so I want to be able to input the image number (starting at 0) into my function, and I want the origin (upper left corner point) back.
leftInset is the distance between the left edge of the view, and the left edge of the first image.
xOffsetBetweenOrigins is the distance from the left edge of one image to the left edge of the next image on the same row. So, if I set it to 80 and my image is 50px wide, there will be a 30px gap between two images in the same row.
topInset is like left inset. It is the distance from the top edge of the view to the top edge of the images in the top row.
yOffsetBetweenOrigins is the distance from the top edge of an image to the top edge of the image below it. If I set this to 80, and my image is 50px tall, then there will be a 30px vertical gap between rows.
numPerRow is straightforward. It is just the number of images per row.
To calculate the x value of the upper left corner of the image, I always start with the leftInset, because it is constant. If I am on the first image of a row, that will be the entire x value. If I am on the second image of the row, I need to add xOffsetBetweenOrigins once, and if I am on the third, I need to add it twice.
To do this, I use the modulus (%) operator. It gives me the remainder of a division operation, so when I say imageNumber % numPerRow, I am asking for the remainder of imageNumber/numPerRow.
If I am on the first image (imageNumber = 0), then 3 goes into 0 zero times, and the remainder is 0. If I am on the second image (imageNumber = 1), then I have 1/3; 3 goes into 1 zero times, but the remainder is 1, so I get xOffsetBetweenOrigins*1.
For the y value, I do something similar, but instead of taking the modulus, I simply divide imageNumber/numPerRow and round down. Doing this, I will get 0 for 0, 1, and 2. I will get 1 for 3, 4, and 5.
Edit:
It occurred to me that you might actually have been asking how to use this method. In your code, you would say something like
CGRect newFrame = zenbutton2.frame;
newFrame.origin = [self getImageOrigin:i];
zenbutton2.frame = newFrame;
Another Edit:
Maybe you could try this?
CGPoint origin = [self getImageOrigin:i];
zenbutton2.frame = CGRectMake(origin.x, origin.y, width, height);
If that doesn't work, throw in
NSLog(@"Origin Values: %f,%f", origin.x, origin.y);
to make sure that you are actually getting something back from getImageOrigin.
I think you probably want to wrap your loop in another loop, to get what I'm going to call a 2D loop:
CGFloat imageWidth = 200;   // hypothetical image size
CGFloat imageHeight = 250;
for (int row = 0; row < num_rows; row++) {
    for (int col = 0; col < num_cols; col++) {
        // code before
        zenButton2.frame = CGRectMake(col * imageWidth,
                                      row * imageHeight,
                                      imageWidth,
                                      imageHeight);
        // code after
    }
}
Where the x and y of the CGRectMake() are multiples of the width and height of your image times the row and column respectively. Hope that makes sense.