How can I change this data frame code into a data visualization?
cur.execute(query)  # not needed when using pd.read_sql below
pandas_dataframe = pd.read_sql(query, cond)  # 'cond' must be the open DB connection
display(pandas_dataframe)  # display() renders the frame; it returns None, so don't assign it
How can I change this data frame into a data visualization?
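For what it's worth, a minimal sketch of one common approach, assuming matplotlib is installed; the column names here are purely illustrative placeholders for columns in your query result:
import matplotlib.pyplot as plt

# pandas' built-in .plot() wraps matplotlib; swap in your real column names
pandas_dataframe.plot(x='some_x_column', y='some_y_column', kind='line')
plt.show()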
I use the following code to draw a point cloud in vispy:
# init
view = vispy.scene.widgets.ViewBox()
vis = visuals.Markers()
view.add(vis)

# update data
vis.set_data(data,
             face_color=color,
             edge_color=color,
             size=1)
vispy.app.run()
As you can see, I can only set a fixed point size for the whole data set.
How can I set multiple sizes within one data set?
You should be able to set the size with a numpy array (one element for every marker point):
https://github.com/vispy/vispy/blob/932d6e499791a423822513549ebd825601345c85/vispy/visuals/markers.py#L517-L518
size : float or array
The symbol size in px.
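For instance, here is a minimal, self-contained sketch (the point data is random and purely illustrative) that gives every marker its own size by passing a numpy array:
import numpy as np
from vispy import app, scene

canvas = scene.SceneCanvas(keys='interactive', show=True)
view = canvas.central_widget.add_view()
view.camera = 'turntable'

pos = np.random.normal(size=(1000, 3), scale=0.2)  # illustrative point cloud
sizes = np.random.uniform(1, 10, size=1000)        # one size value per marker point

vis = scene.visuals.Markers()
vis.set_data(pos, face_color='white', edge_color='white', size=sizes)
view.add(vis)

app.run()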
I am pretty new to GIS as a whole. I have a simple flat file in a csv format, as an example:
name, detail, long, lat, value
a, 123, 103, 22, 5000
b, 356, 103, 45, 6000
What I am trying to achieve is to assign a 3D polygon in Mapbox, such as in this example. While the settings might be quite straightforward in Mapbox, where you assign a height and color value based on a data range, it obviously does not work in my case.
I think I am missing other files, such as those mentioned in the blog post, like shapefiles or some other file that is required to assign 3D layouts to the 3D extrusion.
I need to know what I am missing in configuring a 3D polygon, say a cube, in Mapbox based on the value column in my csv.
So I figured out that what I was missing were the coordinates that make up the polygons I want to display. These can easily be defined in the GeoJSON file format; if you are interested in the standard, refer here. For the visual I need, I would require:
Points (typically your long and lat coordinates)
Polygon (a square would require 5 vertices, the lines connecting and defining your polygon)
Features (your data points)
FeatureCollection (a collection of features)
These are all parts of the GeoJSON format; I used Python and its geojson module, which comes with everything I need to do the job.
Using the helper function below, I am able to compute square/rectangular boundaries based on a single point. The height and width define how big the square/rectangle appears (a quick usage example follows the function).
from geojson import Polygon

def create_rec(pnt, width=0.00005, height=0.00005):
    pt1 = (pnt[0] - width, pnt[1] - height)
    pt2 = (pnt[0] - width, pnt[1] + height)
    pt3 = (pnt[0] + width, pnt[1] + height)
    pt4 = (pnt[0] + width, pnt[1] - height)
    pt5 = (pnt[0] - width, pnt[1] - height)  # repeat the first vertex to close the ring
    return Polygon([[pt1, pt2, pt3, pt4, pt5]])  # assign to a Polygon class from geojson
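For instance (the coordinates are purely illustrative), a single long/lat pair yields a closed five-vertex ring:
square = create_rec((103.0, 22.0))    # illustrative long/lat point
print(len(square['coordinates'][0]))  # 5 -- the first vertex is repeated to close the ring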
From there it is pretty straightforward to append them to a list of features, build a FeatureCollection and output it as a GeoJSON file:
import csv
from geojson import Point, GeometryCollection, Feature, FeatureCollection, dump

with open('path/coordinates.csv', 'r') as f:
    headers = next(f)  # skip the header row
    reader = csv.reader(f)
    data = list(reader)

transform = []
for i in data:
    # 3rd last value is x (long) and 2nd last is y (lat)
    point = Point([float(i[-3]), float(i[-2])])
    polygon = create_rec(point['coordinates'])
    # in my case I used a collection to store both points and polygons
    col = GeometryCollection([point, polygon])
    properties = {'Name': i[0]}
    feature = Feature(geometry=col, properties=properties)
    transform.append(feature)

fc = FeatureCollection(transform)
with open('target_doc_u.geojson', 'w') as f:
    dump(fc, f)
The output file target_doc_u.geojson contains all the items listed above, which allows me to plot my points, as well as continue with the blog post in Mapbox to assign my fill extrusion.
I'm trying to train a CNN regression net in TF 1.12, using a TPU v3-8 1.12 instance. The model successfully compiles with XLA and starts the training process, but somewhere after half of the iterations of the first epoch it freezes and does nothing. I cannot find the root of the problem.
def read_tfrecord(example):
    features = {
        'image': tf.FixedLenFeature([], tf.string),
        'labels': tf.FixedLenFeature([], tf.string)
    }
    sample = tf.parse_single_example(example, features)
    image = tf.image.decode_jpeg(sample['image'], channels=3)
    image = tf.reshape(image, tf.stack([540, 540, 3]))
    image = augmentation(image)
    labels = tf.decode_raw(sample['labels'], tf.float64)
    labels = tf.reshape(labels, tf.stack([2, 2, 45]))
    labels = tf.cast(labels, tf.float32)
    return image, labels

def load_dataset(filenames):
    files = tf.data.Dataset.list_files(filenames)
    dataset = files.apply(tf.data.experimental.parallel_interleave(tf.data.TFRecordDataset, cycle_length=4))
    dataset = dataset.apply(tf.data.experimental.map_and_batch(map_func=read_tfrecord, batch_size=BATCH_SIZE, drop_remainder=True))
    dataset = dataset.apply(tf.data.experimental.shuffle_and_repeat(1024, -1))
    dataset = dataset.prefetch(buffer_size=1024)
    return dataset

def augmentation(img):
    image = tf.cast(img, tf.float32) / 255.0
    image = tf.image.random_brightness(image, max_delta=25/255)
    image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
    image = tf.image.random_contrast(image, lower=0.5, upper=1.5)
    image = tf.image.per_image_standardization(image)
    return image

def get_batched_dataset(filenames):
    dataset = load_dataset(filenames)
    return dataset

def get_training_dataset():
    return get_batched_dataset(training_filenames)

def get_validation_dataset():
    return get_batched_dataset(validation_filenames)
The most likely cause is an issue in the data pre-processing function; take a look at the Errors in the middle of training troubleshooting documentation, which could give you some guidance.
I did not catch anything strange in your code.
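As a sanity check, you could also iterate the input pipeline on CPU, outside the TPU, to rule out a hang in the pre-processing itself. A minimal sketch in TF 1.x style, assuming get_training_dataset() and its globals (BATCH_SIZE, training_filenames) are the ones from the question:
import tensorflow as tf

dataset = get_training_dataset()            # input pipeline from the question
iterator = dataset.make_one_shot_iterator()
next_batch = iterator.get_next()

with tf.Session() as sess:
    for step in range(10):                  # a handful of batches is enough
        images, labels = sess.run(next_batch)
        print(step, images.shape, labels.shape)
If this loop stalls as well, the problem is in the data pipeline rather than in the TPU itself.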
Are you using Cloud Storage buckets to work with those images and files? If so, are those buckets in the same region?
You might use Cloud TPU Audit Logs to determine whether the issue is related to the resources in the system or to how you are accessing your data.
Finally, I suggest you take a look at the Training Mask RCNN on Cloud TPU documentation.
The problem statement is that a region of interest is given.
I need to find all the lakes in a polygon-bounded region, using the NDWI index for water bodies, which are at a height of more than 1500 m. Then I need to display the changes in each lake's surface water area from 1984 to 2018 at a 2-year interval, in a table-like structure in Google Earth Engine. I have used Landsat 5 and 7 data.
I have created the following code:
Earth Engine Code
Now I need to display the results for the polygon-marked region in a table-like structure in the following format:
Rows - (Lake 1, Lake 2, Lake 3... Lake n)
Columns - (Surface Area in 1984, Surface Area in 1986, ....2018)
How should I go about doing it?
I answer this question with regard to the code posted in the comments; hopefully the question will be updated with that code.
Filtering: OK.
Just a comment: I wouldn't give an image collection variable the name img, it's just confusing to me, but variable names are up to you.
var mf = ee.Filter.calendarRange(10, 12, 'month');
var img1 = ee.ImageCollection(l5
  .filterDate('1984-01-01','1999-12-31')
  .filterBounds(roi)
  .filter(mf));
var img2 = ee.ImageCollection(l7
  .filterDate('2000-01-01','2018-12-31')
  .filterBounds(roi)
  .filter(mf));
add NDWI: This is your code:
var addNDWI = function(image){
  var ndwi = image.normalizedDifference(['B2', 'B4']).rename('NDWI');
  var ndwiMask = ndwi.gte(0.3);
  return image.addBands(ndwi);
};
var image1 = img1.map(addNDWI);
var image2 = img2.map(addNDWI);
You are not saving ndwiMask, so you won't be able to use it outside of this function. Again, I wouldn't name them image, as they are not images but image collections.
elevation mask: you have to select the elevation band:
var elevMask = elevation.select('elevation').gt(1500);
This mask image will have ones where elevation is greater than 1500 m and zeros elsewhere.
applying masks: in this part you have to remember that Earth Engine uses functional programming, so objects are not mutable. This means that you cannot update the state of an object using a method; you have to capture the output of the method you are calling. Here you need the NDWI mask, so you have to compute it from the NDWI band.
var mask = function(image){
  var ndwiMask = image.select('NDWI').gt(0.3);
  var ndwi_masked = image.updateMask(ndwiMask);
  return ndwi_masked.updateMask(elevMask);
};
var maskedImg = image1.map(mask);  // ImageCollection!
var maskedImg2 = image2.map(mask); // ImageCollection!
Visualizing: as the results are ImageCollections, when you add one to the map EE makes a mosaic, and that is what you will see. Keep that in mind for further processing.
var ndwiViz = {bands: ['NDWI'], min: 0.5, max: 1, palette: ['00FFFF', '0000FF']};
Map.addLayer(maskedImg, ndwiViz, 'Landsat 5 masked collection');
I have 1000+ rows of sine wave data which change with time, and I want to visualize them with the Processing language. My aim is to create an animation which draws a sine wave over time, starting from the middle of the window [height/2]. I also want to show only 1-second periods of that wave; I mean after 1 second the first coordinate should disappear, and so forth.
How can I achieve that?
Thanks
Sample Data :
TIME X Y
0.1333 0 0
0.2666 0.1 0.0999983333
0.3999 0.2 0.1999866669
0.5332 0.3 0.299955002
0.6665 0.4 0.3998933419
0.7998 0.5 0.4997916927
0.9331 0.6 0.5996400648
1.0664 0.7 0.6994284734
The way you'd achieve that is to split this project into tasks:
load & parse data
update time and render data
To make sure part 1 goes smoothly, it's probably best to make sure your data is easy to parse first. The sample data looks like a table/spreadsheet, but it's not formatted with a standard separator (e.g. comma or tab). You can fiddle with things when you parse, but I recommend using clean data first; for example, if you plan on using space as a separator:
TIME X Y
0.1333 0.0 0
0.2666 0.1 0.0999983333
0.3999 0.2 0.1999866669
0.5332 0.3 0.299955002
0.6665 0.4 0.3998933419
0.7998 0.5 0.4997916927
0.9331 0.6 0.5996400648
1.0664 0.7 0.6994284734
Once that's done, you can use loadStrings() to load the data and split() to break a row into 3 elements which can be converted from string to float.
Once you've got values to use, you can store them. You can either create three arrays, each holding a field from the loaded data (one for all the X values, one for all the Y values and one for all the time values), or you can cheat and use a single array of PVector objects. Although PVector is meant for 3D math/linear algebra, you have 2D coordinates, so you can store time as the 3rd 'dimension'/component.
Part two revolves mostly around updating based on time, and this is where millis() comes in handy. You can check the amount of time passed between updates and if it's greater than a certain (delay) value, it's time for another update (of the frame/data row index).
The last part you need to worry about is rendering the data on screen. Luckily, in your sample data the coordinates are normalized (between 0.0 and 1.0), which makes it easy to map them to the sketch dimensions (by using simple multiplication). Otherwise the map() function comes in handy.
Here's a sketch to illustrate the above; data.csv is a text file containing the formatted sample data from above:
PVector[] frames;//keep track of the frame data (position (x,y) and time, stored in PVector's z property)
int currentFrame = 0, totalFrames;//keep track of the current frame and total frames from the csv
int now, delay = 1000;//keep track of time and a delay to update frames

void setup(){
  //handle data
  String[] rows = loadStrings("data.csv");//load data
  totalFrames = rows.length-1;//get total number of lines (-1 = sans the header)
  frames = new PVector[totalFrames];//initialize/allocate frame data array
  for(int i = 1; i <= totalFrames; i++){//start parsing data (from 1, skip header)
    String[] frame = rows[i].split(" ");//chop each row into 3 strings (time,x,y)
    frames[i-1] = new PVector(float(frame[1]),float(frame[2]),float(frame[0]));//parse each row (note i-1 to get back to 0 index, and how the PVector is initialized: 1,2,0 = x,y,time)
  }
  now = millis();//initialize this to keep track of time
  //render setup, up to you
  size(400,400);smooth();fill(0);strokeWeight(15);
}

void draw(){
  //update
  if(millis() - now >= delay){//if the amount of time between the current millis() and the last time we updated is greater than the delay (i.e. every 'delay' ms)
    currentFrame++;//update the frame index
    if(currentFrame >= totalFrames) currentFrame = 0;//reset to 0 if we reached the end
    now = millis();//finally update our timer/stop-watch variable
  }
  PVector frame = frames[currentFrame];//get the data for the current frame
  //render
  background(255);
  point(frame.x * width,frame.y * height);//draw
  text("frame index: " + currentFrame + " data: " + frame,mouseX,mouseY);
}
There are a couple of extra notes needed:
You mentioned moving to the next coordinate after 1 second. From what I can see in your sample data there are roughly 8 updates per second (the time step is about 0.133 s), so a delay of 1000/8 ms would probably work better. It's up to you how you handle timing though.
I assume your full set includes data for a sine wave movement. I've mapped to the full coordinates, but in the render part of the draw() loop you can map however you like (e.g. including a height/2 offset, etc.). Also, if you're not familiar with sine waves, have a look at these Processing resources: Daniel Shiffman's SineWave sample, Ira Greenberg's trig tutorial.