I have a tiled image that I display in Leaflet. I would like the user to be able to select different options that apply pixel math to these tiles and then use a look-up table to display the corresponding color. A detailed example is given below.
ex:
image name | pixel coordinate | RGB values
tile0.png | (0,0) | [100, 200, 200]
then the user will select option A, which will compute (R + G) / 2 and look up this value in a table to apply a new color.
(100 + 200) / 2 = 150. Look up 150 in the table; it says to change that pixel to [100, 100, 100].
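A minimal sketch of this per-pixel transform in Python (the look-up table and function name here are hypothetical; a real Leaflet tile pipeline would use canvas or Pillow pixel access rather than plain lists):

```python
# Hypothetical look-up table mapping computed value -> replacement RGB
lut = {150: [100, 100, 100]}

def apply_option_a(pixels):
    """Average R and G for each pixel, then recolor via the look-up table."""
    out = []
    for r, g, b in pixels:
        key = (r + g) // 2                    # pixel math for option A
        out.append(lut.get(key, [r, g, b]))   # fall back to the original color
    return out

print(apply_option_a([(100, 200, 200)]))  # -> [[100, 100, 100]]
```

Pixels whose computed value is missing from the table keep their original color in this sketch; you could equally clamp or interpolate, depending on how the table is defined.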
I'm looking for a working example to have a .png picture as the background of a scatter chart.
Currently, I use mark_image to draw the background image:
source = pd.DataFrame.from_records([
    {"x": 0, "y": 0,
     "img": "http://localhost:8888/files/BARTStreamlit/assets/BARTtracksmap.png?_xsrf=2%7Ce13c35be%7Ce013c83479b892363239e5b6a77d97dc%7C1652400559"}
])
tracksmap = alt.Chart(source).mark_image(
    width=500,
    height=500
).encode(
    x='x',
    y='y',
    url='img'
)
tracksmap
Here is the resulting image drawn:
Then I drew the scatter chart:
chart = alt.Chart(maptable).mark_circle(size=60).encode(
    x='x',
    y='y',
    tooltip=['short_name', 'ENTRY']
).interactive()
chart
I have scaled the x and y channel values for the scatter chart to the range [0, 500]; 500 is my guess at the width and height of the background image.
Here is the resulting scatter plot:
Then I combined the two charts with the layer mechanism:
geovizvega = alt.layer(tracksmap, chart)
geovizvega
resulting in the following:
The two charts do not align. I'd like to have the scatter dots aligning with the tracks on the background image. How can I achieve that?
To have them aligned, I might need the background image's top-left corner at coordinates (0, 0). How can I achieve that? (It seems that the x and y channel values for mark_image give the coordinates of the center of the image; with an accurate definition of those channel values, it should be possible to calculate the x and y that put the top-left corner at (0, 0).)
I might also need the precise dimensions of the background image. How do I get them?
My above approach may not be the right one. Please show me a working example.
Yes, if you change the values of x and y in your image plot to something like y=-200 and x=200, the image should be more centered in the scatter plot.
You can also change the anchor point of the image using align and baseline:
import altair as alt
import pandas as pd
source = pd.DataFrame.from_records([
    {"x": 2, "y": 2, "img": "https://vega.github.io/vega-datasets/data/7zip.png"}
])
imgs = alt.Chart(source).mark_image(
    width=100,
    height=100
).encode(
    x='x',
    y='y',
    url='img'
)
imgs + imgs.mark_circle(size=200, color='red', opacity=1)
imgs = alt.Chart(source).mark_image(
    width=100,
    height=100,
    align='right',
    baseline='top'
).encode(
    x='x',
    y='y',
    url='img'
)
imgs + imgs.mark_circle(size=200, color='red', opacity=1)
After this, you would still need to change the dimensions of the chart so that it matches the size of the image; the default is width=400 and height=300. You can get the dimensions of your image in most image-editing software, or with the file <imagename> command (at least on Linux). Even after getting these dimensions, you would have to make some manual adjustments because the axes take up some of the chart's space.
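If you would rather read the dimensions programmatically, a PNG stores its width and height in the IHDR chunk right after the 8-byte file signature. A small sketch using only the Python standard library (no image library assumed):

```python
import struct

def png_dimensions(path):
    """Read width/height from a PNG's IHDR chunk (bytes 16-23 of the file)."""
    with open(path, "rb") as f:
        header = f.read(24)
    if header[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    width, height = struct.unpack(">II", header[16:24])
    return width, height
```

You could then feed these values into `.properties(width=..., height=...)` on the layered chart instead of guessing.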
I have following code:
image_points_to_world_plane (CamParam, Pose, intersection_points_row, intersection_points_col, 'mm', X1, Y1)
distance_pp (X1[2], Y1[2], X1[3], Y1[3], Measure1)
This code returns the values in mm.
The code then continues as shown below, and I need the area of Regions1 in mm²; instead I get it in pixels:
access_channel(Image, ImageMono, 1)
threshold(ImageMono, Region, 0, 100)
fill_up(Region, RegionFillUp)
reduce_domain(ImageMono, RegionFillUp, ImageReduced)
threshold (ImageReduced, Regions1, 230, 255)
connection (Regions1, Connection)
select_shape(Connection, Labels, 'area', 'and', 2000, 99999)
area_center(Labels, AreaLabels, RowLabels, ColumnLabels)
AreaLabels is in px², and I need it in mm², but I couldn't find anything like a region_to_world_plane operator. How can this be done?
Try looking at the operator "image_to_world_plane". It will transform the image so that the pixel size is in metric units (or whatever you prefer). It will also warp the image so that it looks as if the picture was taken directly from overhead. Then any area calculations you perform on this transformed image will be in the units you specified (mm, m, etc).
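Once the image has been rectified to a uniform scale (say some number of millimetres per pixel, a value you choose when calling image_to_world_plane), converting a pixel-count area is just a squared scale factor. A sketch of that arithmetic in Python (the function name is made up for illustration):

```python
def px_area_to_mm2(area_px, mm_per_px):
    """Convert a pixel-count area to mm^2 under a uniform scale."""
    return area_px * mm_per_px ** 2

# e.g. a 2000-pixel region at 0.5 mm per pixel
print(px_area_to_mm2(2000, 0.5))  # -> 500.0
```

Note this only holds after rectification; before it, perspective distortion means the mm-per-pixel scale varies across the image, which is exactly why the operator warps the image to an overhead view first.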
I am pretty new to GIS as a whole. I have a simple flat file in a csv format, as an example:
name, detail, long, lat, value
a, 123, 103, 22, 5000
b, 356, 103, 45, 6000
What I am trying to achieve is to draw a 3D polygon in Mapbox, such as in this example. While the settings might be quite straightforward in Mapbox, where you assign a height and a color based on a data range, that obviously does not work in my case.
I think I am missing other files, such as those mentioned in the blog post, like shapefiles or some other file required to assign 3D layouts to the extrusion.
I need to know what I am missing in configuring a 3D polygon, say a cube, in Mapbox based on the value column in my csv.
So I figured what I was missing was the coordinates that make up the polygons I want to display. These can easily be defined in the GeoJSON file format; if you are interested in the standard, refer here. For the visual I need, I would require:
Points (typically your long and lat coordinates)
Polygon (a square would require 5 vertices; the lines connecting them define your polygon)
Features (your data points)
FeatureCollection (a collection of features)
These are all parts of the GeoJSON format. I used Python and its geojson module, which comes with everything I need for the job.
Using the helper function below, I can compute square/rectangular boundaries based on a single point. The width and height define how big the square/rectangle appears.
from geojson import Polygon

def create_rec(pnt, width=0.00005, height=0.00005):
    pt1 = (pnt[0] - width, pnt[1] - height)
    pt2 = (pnt[0] - width, pnt[1] + height)
    pt3 = (pnt[0] + width, pnt[1] + height)
    pt4 = (pnt[0] + width, pnt[1] - height)
    pt5 = (pnt[0] - width, pnt[1] - height)  # same as pt1, closing the ring
    return Polygon([[pt1, pt2, pt3, pt4, pt5]])  # assign to a Polygon class from geojson
From there it is pretty straightforward to append them to a list of features, wrap that in a FeatureCollection, and output a geojson file:
import csv
from geojson import Point, GeometryCollection, Feature, FeatureCollection, dump

with open('path/coordinates.csv', 'r') as f:
    headers = next(f)
    reader = csv.reader(f)
    data = list(reader)

transform = []
for i in data:
    # 3rd-last value is x (long) and 2nd-last is y (lat)
    point = Point([float(i[-3]), float(i[-2])])
    polygon = create_rec(point['coordinates'])
    # in my case I used a collection to store both points and polygons
    col = GeometryCollection([point, polygon])
    properties = {'Name': i[0]}
    feature = Feature(geometry=col, properties=properties)
    transform.append(feature)

fc = FeatureCollection(transform)
with open('target_doc_u.geojson', 'w') as f:
    dump(fc, f)
The output file target_doc_u.geojson contains all the items listed above, which lets me plot my points and continue following the blog post in Mapbox to assign my fill extrusion.
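For reference, GeoJSON is plain JSON, so the same structure can be assembled with only the standard library; a sketch (the rectangle helper mirrors create_rec above, and the field names follow the GeoJSON spec):

```python
import json

def rec_coords(x, y, w=0.00005, h=0.00005):
    """A closed ring of 5 vertices (first == last) centred on (x, y)."""
    return [[[x - w, y - h], [x - w, y + h], [x + w, y + h],
             [x + w, y - h], [x - w, y - h]]]

def make_feature(name, x, y):
    """A GeoJSON Feature dict with a Polygon geometry and a Name property."""
    return {"type": "Feature",
            "properties": {"Name": name},
            "geometry": {"type": "Polygon", "coordinates": rec_coords(x, y)}}

fc = {"type": "FeatureCollection",
      "features": [make_feature("a", 103.0, 22.0)]}
geojson_text = json.dumps(fc)
```

This skips the geojson module's validation, but the output is accepted by Mapbox all the same.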
I'm using step() in imaq.VideoDevice, but I can't find a description of the format of step()'s output. I am using a thermal infrared camera and want to filter for a specific temperature range.
So I want to call step() on each frame and then search the frame for pixels within a specific thermal range. I obviously need to know the x, y of each pixel, too.
My goal is to filter the pixels of a frame and keep only those within the desired temperature range.
You probably need to get the mapping between temperature and pixel value from your IR camera. Look it up in the documentation; it probably says which temperatures correspond to which pixel values. At that point you just create a mask for each frame, something like this (assuming the values from the IR camera are "grayscale", meaning there is only one channel):
highest_temp = 200; % just a random number
lowest_temp = 50;
my_mask = (im <= highest_temp) & (im >= lowest_temp);
my_mask is a logical array with a 0 (false) where the pixel is outside the range and a 1 (true) where it is inside. If you want to apply the mask to the image, just multiply them together (taking care of types; here I assume the IR camera delivers uint16):
masked_im = im .* uint16(my_mask);
I would also use the trigger function rather than step; if I'm not mistaken, the trigger action takes only one image/frame by default. So make a loop: grab a frame, do your processing, then go to the next iteration, over and over. Hope that helps.
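The masking idea above, sketched in plain Python on nested lists (the temperature bounds are the same made-up numbers; in practice the frame would be a MATLAB or NumPy array):

```python
LOWEST_TEMP, HIGHEST_TEMP = 50, 200   # assumed raw-value bounds

def mask_frame(frame):
    """Zero out pixels outside [LOWEST_TEMP, HIGHEST_TEMP]."""
    return [[v if LOWEST_TEMP <= v <= HIGHEST_TEMP else 0 for v in row]
            for row in frame]

frame = [[10, 60], [150, 250]]
print(mask_frame(frame))  # -> [[0, 60], [150, 0]]
```

The row/column positions of surviving pixels are just their list indices, which answers the "X, Y of each pixel" part of the question.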
Answer:
step() outputs an array indexed as row × column × pixel_color,
where pixel_color index 1 is the amount of red in the pixel,
pixel_color index 2 is the amount of green in the pixel,
and pixel_color index 3 is the amount of blue in the pixel.
For example, for the color of the pixel at X, Y = 5, 10 (row 5, column 10):
amount of red = (5, 10, 1)
amount of green is = (5, 10, 2)
amount of blue is = (5, 10, 3)
Example usage that displays a frame with a red column and a green row:
% Get a video frame:
load('handshakeStereoParams.mat');
videoFileLeft = 'handshake_left.avi';
readerLeft = vision.VideoFileReader(videoFileLeft, 'VideoOutputDataType', 'uint8');
frameLeft = readerLeft.step();
live_scene_player = vision.VideoPlayer('Position', [20, 600, 850, 500], 'Name','LEFT');
% Make a green horizontal stripe at row 10 of the image:
frameLeft(10,:,1)=0;   % remove red from stripe
frameLeft(10,:,2)=255; % turn on all green
frameLeft(10,:,3)=0;   % remove blue from stripe
% Make a red vertical stripe at column 10 of the image:
frameLeft(:,10,1)=255; % turn on all red
frameLeft(:,10,2)=0;   % remove green from stripe
frameLeft(:,10,3)=0;   % remove blue from stripe
% display it:
step( live_scene_player, frameLeft); % originally from frameLeftRect
I have performed rgb2gray on an image and then ran Sobel edge detection on it.
Then I did:
faceEdges = faceNoNoise(:,:) > 50; %binary threshold
This sets the outline of the image (a picture of a face) to black and white: a value of 1 is a white pixel, 0 is a black pixel. Someone said I could use this:
mouthsquare = rectangle('position',[recX-mouthBoxBuffer, recY-mouthBoxBuffer, recXDiff*2+mouthBoxBuffer/2, recYDiff*2+mouthBoxBuffer/2],... % see the change in coordinates
'edgecolor','r');
numWhite = sum(sum(mouthsquare));
He said to use two sum()s because together they sum over the rows and columns of the pixels contained within the rectangle. But numWhite always returns 178 plus some decimals.
If you have a 2D matrix M (this being, for example, an image), the way to count how many elements have the value 1 is:
count_1 = sum(M(:)==1)
or
count_1 = sum(reshape(M,1,[])==1)
If the target values are not exactly 1 but lie within a threshold of, say, ±0.02, then one should ask for:
count_1_pm02 = sum((M(:)>=0.98) & (M(:)<=1.02))
or the equivalent using reshape.
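The same counts, sketched in Python on a flat list (in NumPy, `(M == 1).sum()` would be the direct analogue of the MATLAB expressions above):

```python
def count_equal(values, target=1):
    """Count elements exactly equal to target."""
    return sum(1 for v in values if v == target)

def count_near(values, target=1, tol=0.02):
    """Count elements within +/- tol of target."""
    return sum(1 for v in values if abs(v - target) <= tol)

m = [0, 1, 1, 0.99, 1.03]
print(count_equal(m))  # -> 2
print(count_near(m))   # -> 3
```

As in the MATLAB version, the exact-equality form is only safe for integer or binary data; for floats, prefer the tolerance form.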