Merge and Dissolve Features in OpenLayers?

Hello, I have a map with complex features. Each feature has four attributes:
Province | Regency | Sub-District | Village
I am using OpenLayers to display my map.
I need to be able to style this map with colors based on the attributes, and to filter the features based on their common attribute values.
Which is the best way to do this: merge or dissolve? Or can I do this with OpenLayers?
For example, I have an option to select the attribute scope used for coloring. When I choose the scope Village:
Province | Regency | Sub-District | Village
A | 101 | X1 | Z1
A | 101 | X2 | Z2
B | 102 | X3 | Z3
B | 102 | X4 | Z4
C | 103 | X5 | Z5
But when I choose the scope Regency, the result should be:
Province | Regency
A | 101
B | 102
C | 103
And if I use merge, do the original features disappear after merging?

OpenLayers has some excellent built-in classes that can help you out quite a bit. I think the classes you are looking for are OpenLayers.Strategy.Filter and OpenLayers.StyleMap.
The Filter strategy lets you attach a Filter object to a layer so that features which do not match the filter are hidden.
The StyleMap lets you assign Style objects to features based on their attributes or on computed attributes (function output).
For both of these, there are great examples online that show the classes in action.

Related

Identify missing Dimension Value in Tableau

I have student name, subject and marks in three columns. My use case is to identify the subjects in which a selected student has not enrolled. I can do it very easily using SQL, but the requirement is to identify them in Tableau. I have tried LODs and traditional filters, but to no avail.
**Sample Data**:
Name Subject Marks
Rob A 90
Rob B 95
Rob C 98
Ted B 86
Ted D 70
**Desired Output**:
Name Subject
Rob D
Ted A
Ted C
If graphical solutions are also an option, you can put the Subjects on e.g. the X axis and the Student Names on the Y axis. Then set the mark type to e.g. text and display the Number of Records in each cell. This way you get a matrix of all Subjects vs. all Students, with 0 or 1 in each cell (intersection); the cells showing 0 are exactly the missing enrolments you are after.

Tableau - Name Rank, by State / Year / Gender

I have a data set downloaded from the Social Security website. The data is in the form below, and contains the popularity (i.e., as defined by Count) of names, by gender, year and state:
State Gender Year Name Count First Letter
AK F 1910 Anna 10 A
AK F 1910 Annie 12 A
AK F 1911 Annie 6 A
AK F 1912 Alice 5 A
AK M 1912 Wilbur 7 W
AK M 1912 Thomas 7 T
Within Tableau, I'd like the top X names by each of these categories (or overall, if no filters are applied). However, when I use a Top filter in a visualization, the underlying data produced by the filter is not what I need: for example, I want the first-ranked name, and to be able to use filters to see how that rank changes by year, gender, and state. I'm thinking this might be accomplished by an LOD expression, but I'm not sure where to start.
You can achieve this without an LOD calculation. Simply change your filters from standard blue dimension filters to context filters: right-click each filter in the Filters pane and click "Add to Context". The filter will be shown in grey. Now all ranks will be calculated after the filters have been applied.
Why does this work? It comes down to Tableau's Order of Operations. A calculation such as rank is a table calculation; as the name implies, it is computed on the entire data table before dimension filters are applied. A context filter, however, creates a temporary table, and table calculations are then computed against that.
Find out more here: https://www.google.com.au/search?q=order+of+operations+tableau&oq=order+of+operations+tableau&aqs=chrome..69i57j0j69i65j0j69i65j0.3475j0j7&sourceid=chrome&ie=UTF-8

How to merge pair-wise table(s) and non-pairwise table(s) in MATLAB?

I am trying to perform a clustering analysis. I have extracted all the possible data and have made several pairwise comparisons. Now I want to know how to merge the data.
Table 1:
entry smth coefficient
entry1 smth1 1.23
entry2 smth1 2.05
entry3 smth2 0.95
entry2 smth4 1.65
Table 2:
smth
smth1 smth2 smth4
smth1 100 59 35
smth2 59 100 82
smth4 35 82 100
Table 3:
entry
entry1 entry2 entry3
entry1 100 82 75
entry2 82 100 59
entry3 75 59 100
I am trying to understand how to program this. I am new to MATLAB; I am practising a lot, so there is definitely some progress, but not enough to work out how to solve my problem.
UPDATE:
To illustrate Table 2: there is some similarity and some difference between the different smths.
And likewise for Table 3: the entries also have some relational distance between them.
I also have the input cases, Table 1. Each row of the table is a unique input case; however, the real table is long, so some of the cases may actually be the same, just named differently. Now, I want to combine pairwise comparison 1, pairwise comparison 2 and, if required, up to pairwise comparison n. Finally, there are also some non-relational parameters (Table 1 has only one, called coefficient) by which I want to scale the position of each pairwise-verified point in space; i.e. I am introducing a new axis corresponding to coefficient and moving the points along that axis, so that there is a distribution I can use in the clustering analysis. In other words: I have axis/plane 1 corresponding to the pairwise comparisons of the smths, and axis/plane 2 corresponding to the pairwise comparisons of the entries. Then I apply these to the cases; for example, in Table 1 entry2 and smth1 each appear more than once. I know the relational distance of entry2 to the other entries, and the relational distance of smth1 to the other smths; now I want to combine them for all the cases given in Table 1, and to move each point along axis 3, the non-relational axis corresponding to the parameter coefficient (see the sketch below).
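Under these assumptions, a minimal MATLAB sketch of the construction might look like this: the per-case feature vector is simply the matching row of each pairwise matrix, plus the coefficient as one extra coordinate. All variable names here are mine, not from the original data.

    % Pairwise matrices with known row/column orderings (Tables 2 and 3).
    entries = {'entry1','entry2','entry3'};
    smths   = {'smth1','smth2','smth4'};
    Sentry  = [100 82 75; 82 100 59; 75 59 100];   % Table 3
    Ssmth   = [100 59 35; 59 100 82; 35 82 100];   % Table 2

    % The cases (Table 1), column by column.
    caseEntry = {'entry1';'entry2';'entry3';'entry2'};
    caseSmth  = {'smth1';'smth1';'smth2';'smth4'};
    coeff     = [1.23; 2.05; 0.95; 1.65];

    % One point per case: concatenate the matching similarity rows and
    % append the non-relational coefficient as the last coordinate.
    n = numel(coeff);
    F = zeros(n, numel(entries) + numel(smths) + 1);
    for k = 1:n
        F(k,:) = [Sentry(strcmp(caseEntry{k}, entries), :), ...
                  Ssmth(strcmp(caseSmth{k}, smths), :), ...
                  coeff(k)];
    end
    % Rows describing the same case end up at (nearly) the same point, so a
    % clustering step (e.g. linkage/cluster, or kmeans) will group them.

Whether the coefficient axis should be scaled relative to the similarity axes is a modelling choice; a common first step is to normalise every column of F before clustering.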

Using Landsat 7 to go from NDVI to Emissivity

I am using Landsat 7 to calculate derived land surface temperature.
I understand the main concepts behind the conversion; however, I am confused about how to factor emissivity into my model.
I am using the Model Builder for my calculations and have created several modules that use the instrument's Gain, Bias Offset, and Landsat K1 and K2 correction variables.
I converted the DN to radiance values as well.
Now, I need to factor in the last and probably the most confusing (for me) part: Emissivity.
I would like to calculate Emissivity using the NDVI.
I have a model procedure built to calculate the NDVI layer: (band4 - band3) / (band4 + band3).
I have also calculated Pv, the fraction of vegetation, given by: Pv = [(NDVI - NDVI_min) / (NDVI_max - NDVI_min)]^2.
Now, using the Vegetation Cover Method, all I need is Ev and Eg.
I do not understand how to find these values in order to calculate the total emissivity value per cell.
Does anyone have an idea how I can incorporate emissivity into my formulation? I am slightly confused about how to derive this value.
I believe emissivity is frequently included as part of the dataset. Alternatively, emissivity databases do exist (such as the ASTER Global Emissivity Database here: https://lpdaac.usgs.gov/about/news_archive/aster_global_emissivity_database_ged_product_release, and others usually maintained by academic departments).
Values of Ev = 0.99 and Eg = 0.97 are used, and the method of selection discussed, on p. 436 here: ftp://atmosfera.cl/pub/elias/Paula/2004_Sobrino_RSE.pdf (J.A. Sobrino et al. Land surface temperature retrieval from LANDSAT TM 5, Remote Sensing of Environment 90, 2004, p. 434–440).
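For the vegetation cover method specifically, a minimal MATLAB sketch using those two values might look as follows. The 0.2/0.5 NDVI thresholds are the ones used in the Sobrino et al. paper (check that they suit your scene), the simple linear mixing below ignores the cavity term, and ndvi is assumed to be your already-computed NDVI grid:

    % Vegetation cover method with Ev = 0.99, Eg = 0.97 (Sobrino et al. 2004).
    ndvi_min = 0.2;   % at/below this value a pixel is treated as bare soil
    ndvi_max = 0.5;   % at/above this value a pixel is treated as full vegetation

    Pv = ((ndvi - ndvi_min) ./ (ndvi_max - ndvi_min)).^2;  % fraction of vegetation
    Pv = min(max(Pv, 0), 1);                               % clamp to [0, 1]

    Ev = 0.99;  Eg = 0.97;
    E  = Ev .* Pv + Eg .* (1 - Pv);   % per-cell total emissivity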
Another approach is taken here: http://fromgistors.blogspot.com/2014/01/estimation-of-land-surface-temperature.html
Estimation of Land Surface Temperature
There are several studies on the calculation of land surface temperature: for instance, using NDVI to estimate land surface emissivity (Sobrino et al., 2004), or using a land cover classification to define the land surface emissivity of each class (Weng et al., 2004).
For instance, the emissivity (ε) values of various land cover types are provided in the following table (from Mallick et al., 2012).
Soil: 0.928
Grass: 0.982
Asphalt: 0.942
Concrete: 0.937
Therefore, the land surface temperature can be calculated as (Weng et al., 2004):
T = TB / [1 + (λ * TB / ρ) * ln ε]
where:
λ = wavelength of emitted radiance
ρ = h * c / σ (1.438 * 10^-2 m K)
h = Planck's constant (6.626 * 10^-34 J s)
σ = Boltzmann constant (1.38 * 10^-23 J/K)
c = velocity of light (2.998 * 10^8 m/s)
The values of λ for the thermal bands of the Landsat satellites are listed in the following table:
| Satellite | Band | Center wavelength (µm) |
| Landsat 4, 5, and 7 | 6 | 11.45 |
| Landsat 8 | 10 | 10.8 |
| Landsat 8 | 11 | 12 |
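As a quick numeric check of the formula in MATLAB (Landsat 7 band 6, so λ = 11.45 µm; the brightness temperature and emissivity values here are hypothetical scalars, whereas in practice they would be per-cell grids):

    % Worked example of T = TB / [1 + (lambda * TB / rho) * ln(emissivity)].
    lambda = 11.45e-6;    % band 6 centre wavelength [m]
    rho    = 1.438e-2;    % h * c / sigma [m K]

    TB = 295;             % hypothetical at-sensor brightness temperature [K]
    E  = 0.97;            % hypothetical emissivity

    T = TB ./ (1 + (lambda .* TB ./ rho) .* log(E))
    % T is about 297 K: an emissivity below 1 raises the retrieved surface
    % temperature relative to the brightness temperature.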
For further reading on emissivity selection, see section 2.3, Emissivity Retrieval, here: https://books.google.com/books?id=XN4uAYlexnsC&lpg=PA51&ots=YQrmDa2S1G&dq=vegetation%20and%20bare%20soil%20emissivity&pg=PA50#v=onepage&q&f=false

Training LIBSVM with multivariate data in MATLAB

My general question is: how does LIBSVM perform multivariate regression?
In detail: I have data for a certain number of links (for example, 3 links). Each link has 3 input variables which, when used in a model, give an output Y. I have data collected on these links at some interval.
LinkId | var1 | var2 | var3 | var4(OUTPUT)
1 | 10 | 12.1 | 2.2 | 3
2 | 11 | 11.2 | 2.3 | 3.1
3 | 12 | 12.4 | 4.1 | 1
1 | 13 | 11.8 | 2.2 | 4
2 | 14 | 12.7 | 2.3 | 2
3 | 15 | 10.7 | 4.1 | 6
1 | 16 | 8.6 | 2.2 | 6.6
2 | 17 | 14.2 | 2.3 | 4
3 | 18 | 9.8 | 4.1 | 5
I need to predict the output for the input (2, 19, 10.2, 2.3).
How can I do that in MATLAB using LIBSVM, with the above data for training? Can I pass the whole data set to svmtrain to create one model, or do I need to train each link separately and use that link's model for prediction? Does it make any difference?
NOTE: Notice that rows with the same link ID have the same var3 value.
This is not really a MATLAB or LIBSVM question but rather a generic SVM-related one.
My general question is: how does LIBSVM perform multivariate regression?
LIBSVM is just a library which, among other things, implements the Support Vector Regression (SVR) model for regression tasks. In short, in the linear case SVR tries to find a hyperplane such that your data points lie within some margin around it (which is quite a dual approach to classical SVM, which tries to separate the data with as large a margin as possible).
In the non-linear case the kernel trick is used (in the same fashion as in SVM), so it is still looking for a hyperplane, but in the feature space induced by the particular kernel, which results in a non-linear regression in the input space.
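For concreteness, the linear ε-SVR training problem (essentially as formulated in the tutorial linked below) is:

    \min_{w,\,b,\,\xi,\,\xi^*} \; \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \left( \xi_i + \xi_i^* \right)
    \text{subject to} \quad y_i - \langle w, x_i \rangle - b \le \varepsilon + \xi_i, \quad
    \langle w, x_i \rangle + b - y_i \le \varepsilon + \xi_i^*, \quad
    \xi_i,\ \xi_i^* \ge 0

Points inside the ε-tube around the hyperplane are not penalised at all; C trades off the flatness of the solution against deviations larger than ε.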
Quite a nice introduction to SVR can be found here:
http://alex.smola.org/papers/2003/SmoSch03b.pdf
How can I do that in MATLAB using LIBSVM, with the above data for training? Can I pass the whole data set to svmtrain to create one model, or do I need to train each link separately and use that link's model for prediction? Does it make any difference? NOTE: Notice that rows with the same link ID have the same var3 value.
You could train a single SVR on the whole data (as it is a regression problem), but:
- it seems that var3 and LinkId are the same variable (1 -> 2.2, 2 -> 2.3, 3 -> 4.1); if that is the case, you should remove the LinkId column;
- are the values of var1 unique ascending integers? If so, they are probably also a useless feature (they do not seem to carry any information; they look like row id numbers);
- you should preprocess your data before applying SVM so that e.g. each column contains values from the [0,1] interval; otherwise some features may become more important than others merely because of their scale.
Now, if you were to create a separate model for each link and follow the clues above, you would end up with one input variable (var2) and one output variable (var4), so I would not recommend that. In general you seem to have a very limited feature set, and it would be valuable to gather more informative features. A minimal training sketch following these clues is shown below.
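A minimal sketch of that setup with LIBSVM's MATLAB interface (svmtrain/svmpredict): -s 3 selects epsilon-SVR and -t 2 an RBF kernel, while the C, gamma and epsilon values below are placeholders you would tune, e.g. by cross-validation.

    % Data from the question: columns are [LinkId var1 var2 var3 var4].
    data = [1 10 12.1 2.2 3  ; 2 11 11.2 2.3 3.1; 3 12 12.4 4.1 1;
            1 13 11.8 2.2 4  ; 2 14 12.7 2.3 2  ; 3 15 10.7 4.1 6;
            1 16  8.6 2.2 6.6; 2 17 14.2 2.3 4  ; 3 18  9.8 4.1 5];

    X = data(:, 2:4);   % var1..var3 (LinkId dropped, per the first point above)
    y = data(:, 5);     % var4, the regression target

    % Scale every column to [0,1]; keep min/range to scale queries identically.
    Xmin = min(X);  Xrng = max(X) - Xmin;
    Xs   = bsxfun(@rdivide, bsxfun(@minus, X, Xmin), Xrng);

    % epsilon-SVR (-s 3) with an RBF kernel (-t 2); placeholder hyperparameters.
    model = svmtrain(y, Xs, '-s 3 -t 2 -c 1 -g 0.5 -p 0.1');

    % Predict the queried point (var1=19, var2=10.2, var3=2.3); the 0 is a
    % dummy label, since the true output is unknown.
    xq = bsxfun(@rdivide, [19 10.2 2.3] - Xmin, Xrng);
    yq = svmpredict(0, xq, model);

Note that var1 = 19 lies outside the training range of var1 (10..18), so after scaling it falls outside [0,1]; that is one more reason to be cautious about how informative var1 really is.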