Combine elevation data with different snow cover maps - merge

I'm using R and I want to calculate at which elevation snow cover is declining the most over time.
To answer this question I classified the snow cover over ten years (one raster per year). I also have an elevation map (DEM) cropped to the research area.
My problem now is that I have no idea how to bring these two datasets together.
Here is my progress:
#The crop of the elevation data:
#reproject the ROI polygon into the CRS of the DEM (the result is the reprojected ROI, not the DEM)
DEM_transform <- st_transform(ROI, crs = crs(DEM))
#crop and mask the DEM to the ROI
DEM_crop <- crop(DEM, extent(DEM_transform))
DEM_mask <- mask(DEM_crop, DEM_transform)
plot(DEM_mask)
#The classification of the snow cover (shown for one year)
v_2022 <- getValues(ndsi_2022)    #NDSI values as a plain vector
i_2022 <- which(!is.na(v_2022))   #remember which cells are not NA
v_2022 <- na.omit(v_2022)
E_2022 <- kmeans(v_2022, 2, iter.max = 100, nstart = 10)
kmeans_raster_2022 <- raster(ndsi_2022)         #empty raster on the same grid
kmeans_raster_2022[i_2022] <- E_2022$cluster    #write the cluster labels back
#note: kmeans numbers its clusters arbitrarily, so check which cluster is snow
plot(kmeans_raster_2022, col = c("grey90", "grey25"), legend = FALSE)
legend("topright", title = "Legend", legend = c("Snow cover", "Snow-free"),
       fill = c("grey90", "grey25"), cex = 1.5)
In the end I want to be able to say something like: between 2015 and 2020 the snow cover of the research area declined by 30% at altitudes above 3000 m.
I would really appreciate some help with this issue.
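One possible way to combine the two (a sketch, not tested on your data): resample each classified snow raster onto the DEM grid, slice the DEM into elevation bands, and tabulate the snow fraction per band and per year. The function below assumes the raster package, that rasters such as kmeans_raster_2015 and kmeans_raster_2020 exist as in the code above, that everything shares one CRS, and that cluster value 1 means snow (kmeans labels are arbitrary, so verify that first):
#fraction of snow-covered pixels per elevation band for one classified year
snow_fraction_by_band <- function(kmeans_raster, dem, breaks = seq(0, 5000, by = 500)) {
  snow <- resample(kmeans_raster, dem, method = "ngb")   #put both on the same grid
  band <- cut(values(dem), breaks = breaks)              #elevation band of every cell
  is_snow <- values(snow) == 1                           #assumption: cluster 1 = snow
  tapply(is_snow, band, mean, na.rm = TRUE)              #snow fraction per band
}
f2015 <- snow_fraction_by_band(kmeans_raster_2015, DEM_mask)
f2020 <- snow_fraction_by_band(kmeans_raster_2020, DEM_mask)
round(100 * (f2020 - f2015), 1)   #change in snow cover (percentage points) per band
The per-band differences then support exactly the kind of statement you want, e.g. how much the snow fraction above 3000 m changed between 2015 and 2020.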

Related

Fixing the Plot Shape signals on a highly profitable Pinescript strategy

I want to fix the attached Pine Script (version 5) code for a TradingView strategy I built. The strategy is highly profitable, but the Buy, Sell, Exit Buy and Exit Sell signals are not occurring on the chart at the right points.
If someone can help me fix the attached code it would be great, as the strategy works well in trending markets and could easily be implemented in a trading bot.
The strategy is as follows:
I have two exponential moving averages plotted on the chart, an EMA of 144 and an EMA of 169. Plotted together they look like a tunnel.
Buy condition: if the candle closes above both EMAs, a Buy plot shape should appear on that candle.
Exit Buy condition: if the candle's low breaks below both EMAs, even if not on a closing basis, an Exit Buy plot shape should appear on that candle.
Sell condition: if the candle closes below both EMAs, a Sell plot shape should appear on that candle.
Exit Sell condition: if the candle's high breaks above both EMAs, even if not on a closing basis, an Exit Sell plot shape should appear on that candle.
The most important requirement is that these shapes must not repeat: each shape should appear only on the specific candle where its condition is met.
I have attached the current Pine Script code that I drafted, but it isn't working properly.
//@version=5
indicator("WavyCrorepati v2.0", overlay=true)
tunnel1 = ta.ema(close, 144)
tunnel2 = ta.ema(close, 169)
plot(tunnel1, color=color.white, linewidth=2)
plot(tunnel2, color=color.white, linewidth=2)
// state: 10 = in long, 5 = exited long, -10 = in short, -5 = exited short
// 'var' keeps the value across bars instead of resetting it to 0 on every bar
var int intradelong = 0
var int intradeshort = 0
// BUY
long = (close > tunnel1 and close > tunnel2) or (ta.crossover(close, tunnel1) and ta.crossover(close, tunnel2))
exitlong = low < tunnel1 and low < tunnel2
if long
    intradelong := 10
if exitlong
    intradelong := 5
// SHORT
short = (close < tunnel1 and close < tunnel2) or (ta.crossunder(close, tunnel1) and ta.crossunder(close, tunnel2))
exitshort = high > tunnel1 and high > tunnel2
if short
    intradeshort := -10
if exitshort
    intradeshort := -5
// PLOT SHAPES: compare against the previous bar's state so each shape fires only once
plotshape(long and intradelong[1] != 10, style=shape.labelup, color=color.green, location=location.belowbar, size=size.small, text="B", textcolor=color.white)
plotshape(exitlong and intradelong[1] == 10, style=shape.diamond, color=color.white, location=location.belowbar, size=size.tiny)
plotshape(short and intradeshort[1] != -10, style=shape.labeldown, color=color.red, location=location.abovebar, size=size.small, text="S", textcolor=color.yellow)
plotshape(exitshort and intradeshort[1] == -10, style=shape.diamond, color=color.yellow, location=location.abovebar, size=size.tiny)
plot(intradelong, color=color.green, display=display.status_line)
plot(intradeshort, color=color.red, display=display.status_line)

Atmospheric correction for Sentinel-2 imagery in Google Earth Engine

I want to apply atmospheric correction to Sentinel-2 imagery in Google Earth Engine (GEE). I saw the Sam Murphy code, which is written in Python, but unfortunately it did not work for me. I tried the dark pixel subtraction method using the code below (JavaScript), but it results in a totally dark image over my region of interest.
I am new to both Earth Engine and JavaScript. Has anyone tried dark pixel subtraction or any other atmospheric correction on Sentinel-2 imagery in GEE (preferably with code written in JavaScript)?
var toa = maskedComposite1;
var thresh = 0.5;
var dark = findDarkPixels(toa, thresh);
print(dark);

// Function to find dark pixels from a threshold on the sum of the NIR, SWIR1 & SWIR2 bands
// Returns a classified image with a binary [0,1] 'dark' band
//   toa: Sentinel-2 image converted to surface radiance
//   thresh: threshold (0.2 - 0.5) value for the sum of the NIR, SWIR1 & SWIR2 bands
function findDarkPixels(toa, thresh) {
  var darkPixels1 = toa.select(['B8', 'B11', 'B12']);
  var darkPixels = darkPixels1.reduce(ee.Reducer.sum()).lt(thresh);
  var filtered = darkPixels.focal_mode(0.1, 'square', 'pixels');
  Map.addLayer(filtered, {}, 'darkPixel');
  return filtered.rename(['dark']);
}
If you do not need a specific atmospheric correction method, you can use the Level-2A Sentinel-2 data already available in GEE. Here is the link to the dataset info. The atmospheric correction for this dataset is performed by sen2cor. Note the time period the data are available for, as Level-2A data are not available for the entire archive.
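For illustration, a minimal sketch of loading that Level-2A collection, written with the rgee R bindings rather than in the JavaScript Code Editor (the asset ID COPERNICUS/S2_SR is the Level-2A surface reflectance collection; the dates, point and cloud threshold here are made-up placeholders):
library(rgee)
ee_Initialize()
#Level-2A surface reflectance, already atmospherically corrected by sen2cor
s2_sr <- ee$ImageCollection("COPERNICUS/S2_SR")$
  filterDate("2020-07-01", "2020-07-31")$
  filterBounds(ee$Geometry$Point(c(8.54, 47.37)))$
  filter(ee$Filter$lt("CLOUDY_PIXEL_PERCENTAGE", 20))
ee_print(s2_sr)
The same chain of filterDate/filterBounds/filter calls works analogously in the JavaScript Code Editor.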

Warp images using motion maps generated by opticalFlowLKDoG (Matlab 2015A)

This question is based on modified MATLAB code from the online documentation for the optical flow system objects in R2015a, as it appears for the opticalFlowLK class.
clc; clearvars; close all;
inputVid = VideoReader('viptraffic.avi');
opticFlow = opticalFlowLKDoG('NumFrames', 3);
inputVid.CurrentTime = 2;   % note: the VideoReader property is CurrentTime, not currentTime
k = 1;
while inputVid.CurrentTime <= 2 + 1/inputVid.FrameRate
    frameRGB{k} = readFrame(inputVid);
    frameGray{k} = rgb2gray(frameRGB{k});
    flow{k} = estimateFlow(opticFlow, frameGray{k});
    k = k + 1;
end
By looking at flow{2}.Vx and flow{2}.Vy I get the motion maps U and V that describe the motion from frameGray{1} to frameGray{2}.
I want to use flow{2}.Vx and flow{2}.Vy directly on the data in frameGray{1} in order to warp frameGray{1} so that it appears visually similar to frameGray{2}.
I tried this code:
% backward warp: sample frame 1 at the positions the flow points back to
[x, y] = meshgrid(1:size(frameGray{1},2), 1:size(frameGray{1},1));
frameGray1Warped = interp2(double(frameGray{1}), x - flow{2}.Vx, y - flow{2}.Vy);
But it doesn't seem to do much at all except degrade the image quality; the objects don't show any real motion towards their locations in frameGray{2}.
I added 3 images showing the 2 original frames followed by frame 1 warped using the motion field to appear similar to frame 2:
It is easy to see that frame 1 warped to 2 is essentially frame 1 with degraded quality, but the cars haven't moved at all. That is, the location of the cars is the same: look at the car closest to the camera with respect to the road separation line near it; it is virtually the same in frame 1 and in frame 1 warped to 2, but quite different in frame 2.
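For intuition, backward warping is nothing more than "fill each output pixel by sampling the source image at (x - Vx, y - Vy)". A minimal nearest-neighbour version in R (illustrative only, with made-up names; the MATLAB code above does the same thing with bilinear interpolation via interp2):
#img: numeric matrix; vx, vy: flow fields of the same size
backward_warp <- function(img, vx, vy) {
  h <- nrow(img); w <- ncol(img)
  out <- matrix(NA_real_, h, w)
  for (y in 1:h) {
    for (x in 1:w) {
      sx <- round(x - vx[y, x])   #where this output pixel comes from in the source
      sy <- round(y - vy[y, x])
      if (sx >= 1 && sx <= w && sy >= 1 && sy <= h) out[y, x] <- img[sy, sx]
    }
  }
  out
}
If the warped result looks like an unchanged but blurrier copy of frame 1, the flow magnitudes being sampled are probably near zero, which is worth checking directly (e.g. with max(abs(flow{2}.Vx(:))) in MATLAB).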

reformatting x axis as date in a heat map in ggplot

I am trying to make a heat map using ggplot and I am having trouble formatting the x axis as a date.
When I run the basic code:
ggplot(kbsdiv13CTN, aes(x = date, y = rev(depth), fill = mean)) +
  geom_tile() +
  scale_fill_gradient(low = "white", high = "black") +
  scale_y_reverse()
The heat map looks OK, but the dates are in the wrong order and the x axis labels are really crowded together.
But then, when I reformat the x axis as a date:
library(scales)   #for date_breaks() and date_format()
kbsdiv$date <- as.Date(kbsdiv$date, format = "%m/%d")
ggplot(kbsdiv13CTN, aes(x = date, y = rev(depth), fill = mean)) +
  geom_tile() +
  scale_fill_gradient(low = "white", high = "black") +
  scale_y_reverse() +
  scale_x_date(breaks = date_breaks("weeks"), labels = date_format("%b"))
the heat map tiles become really narrow and no longer fill the image. They basically look like very thin bars, and it no longer looks like a heat map. I wanted to post a picture, but the site would not let me.
Can someone help?
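One plausible cause (an assumption, since the data aren't shown): once date is a real Date, the sampling dates sit several days apart, and geom_tile() defaults the tile width to the smallest spacing it finds, leaving gaps around most tiles. Giving the tiles an explicit width usually restores the solid heat-map look. Note also that scale_y_reverse() already flips the axis, so rev(depth) inside aes() is probably unnecessary and can misalign rows:
library(ggplot2)
library(scales)
ggplot(kbsdiv13CTN, aes(x = date, y = depth, fill = mean)) +
  geom_tile(width = 7) +   #assumes roughly weekly sampling; adjust to your spacing
  scale_fill_gradient(low = "white", high = "black") +
  scale_y_reverse() +
  scale_x_date(breaks = date_breaks("weeks"), labels = date_format("%b %d"))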

iPhone colour Image analysis

I am looking for ideas about an approach that will let me analyze an image and determine how greenISH or brownISH or whiteISH it is. I am emphasizing ISH here because I am interested in capturing ALL the shades of these colours. So far, I have done the following:
I have my UIImage, I have the CGImageRef, and I can already read the colour of each pixel (its RGB and alpha values). What I don't know is how to quantify this and determine all the green shades, blues, browns, yellows, purples, etc. So I can process each and every pixel and determine its basic RGB, but I need some help quantifying the colours over a whole image.
Thanks for your ideas...
Alex.
One fairly good solution is to switch from RGB colour space to one of the Y colour spaces, such as YUV, YCrCb or any of those. In all cases the Y channel represents brightness and the other two channels together represent colour, relative to brightness. You probably want to factor brightness out, possibly with the caveat that all colours below a certain darkness are to be excluded, so getting Y separately is a helpful first step in itself.
Converting from RGB to YUV is achieved with a simple linear combination. Straight from Wikipedia and a thousand other sources:
y = 0.299*r + 0.587*g + 0.114*b;
u = -0.14713*r - 0.28886*g + 0.436*b;
v = 0.615*r - 0.51499*g - 0.10001*b;
Assuming you're keeping r, g and b in the range [0, 1], your first test might be:
if(y < 0.05)
{
// this colour is very dark, so it's considered to be as
// far as we allow from any colour we're interested in
}
To decide how close your colour then is to, say, green, work out the u and v components of the green you're interested in, as a proportion of the y:
r = b = 0;
g = 1;
y = 0.299*r + 0.587*g + 0.114*b = 0.587;
u = -0.14713*r - 0.28886*g + 0.436*b = -0.28886;
v = 0.615*r - 0.51499*g - 0.10001*b = -0.51499;
proportionOfU = u / y = -0.4921;
proportionOfV = v / y = -0.8773;
Subsequently, work out the proportions of U and V for incoming colours in the same way and compare them (e.g. with 2D planar distance) to the values you computed for the reference colour. Closer values are more similar; how you scale and use that metric depends on your application.
Notice that as y goes toward 0, the computed proportions become increasingly less precise because of the limited range of the input data, and are undefined when y is 0. Conceptually that's because all colours look exactly the same when there's no light on them. Checking that y is above at least a certain minimum value is the pragmatic way of working around this issue. This also means that you're not going to get sensible results if you try to say "how black is this picture?", though again that's because of the ambiguity between a surface that doesn't reflect any light and a surface that doesn't have any light falling upon it.
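As a concrete illustration of the whole recipe, including the darkness cut-off, here is a small sketch in R (the maths is platform-independent; the function and variable names are made up):
#distance from an (r, g, b) colour to a reference colour's (u/y, v/y) pair
proportion_distance <- function(r, g, b, ref_up, ref_vp, min_y = 0.05) {
  y <- 0.299 * r + 0.587 * g + 0.114 * b
  if (y < min_y) return(NA_real_)       #too dark: proportions are unreliable
  u <- -0.14713 * r - 0.28886 * g + 0.436 * b
  v <-  0.615   * r - 0.51499 * g - 0.10001 * b
  sqrt((u / y - ref_up)^2 + (v / y - ref_vp)^2)
}
green_up <- -0.28886 / 0.587   #about -0.4921, from the worked example above
green_vp <- -0.51499 / 0.587   #about -0.8773
proportion_distance(0.1, 0.8, 0.2, green_up, green_vp)   #small value: greenish
proportion_distance(0.8, 0.2, 0.1, green_up, green_vp)   #larger value: not green
Averaging this distance (or a score derived from it) over all pixels gives a per-image "how greenish" measure.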