How to parse a shape file to use in Foundry's map application? - pyspark

I am ingesting data in the form of a shapefile. For example, ice data from https://usicecenter.gov/Products
How do I use these files in Foundry, in particular to display them on a map?

Easy! This is outlined in the documentation on using vector data in transforms.
Clean geospatial data in Foundry is:
- Tabular, so the data can be used in Spark transforms
- Formatted as either valid GeoJSON or a geohash, so the geospatial data can be used in the Foundry Ontology
- Projected using the EPSG:4326 CRS, so that both sides of spatial joins use the same projection and Foundry maps render features correctly
Foundry provides a geospatial-tools pyspark library which makes it easy to clean and convert geospatial data. Further details are in the documentation on data parsing and cleaning, but for this specific example we need to convert the shapefile into a dataframe and then project it to EPSG:4326.
The CRS can be determined from the .prj file, using the method outlined here. For the example of the ice shapefiles:
from osgeo import osr

with open(shapeprj_path, 'r') as f:
    prj_txt = f.read()

srs = osr.SpatialReference()
srs.ImportFromESRI([prj_txt])
print(str(srs.ExportToProj4()))
The output is:
+proj=lcc +lat_0=40 +lon_0=-100 +lat_1=49 +lat_2=77 +x_0=0 +y_0=0 +datum=WGS84 +units=m +no_defs
This is used as the input_crs:
from transforms.api import transform, Input, Output
from geospatial_tools import geospatial
from geospatial_tools.parsers import shapefile_to_dataframe
from geospatial_tools.geom_transformations import normalize_projection


@geospatial()
@transform(
    output=Output("path/to/ice_data_parsed"),
    raw=Input("path/to/ice_data_raw"),
)
def compute(raw, output):
    gdf = shapefile_to_dataframe(raw)
    gdf = normalize_projection(
        input_df=gdf,
        geometry_column="geometry",
        input_crs="+proj=lcc +lat_0=40 +lon_0=-100 +lat_1=49 +lat_2=77 +x_0=0 +y_0=0 +datum=WGS84 +units=m +no_defs",
    )
    output.write_dataframe(gdf)
The output dataset can then be synced to the Ontology and used in the mapping applications.

Related

How to import and read data from a dataset without using transform or transform_df in palantir foundry?

I want to know whether there is any way to import the file without using transform_df or transform in a code repository.
Basically, I want to extract the data from the dataset and return all the values as a list. If I use the transform or transform_df decorators, I won't be able to access that input file when calling the return function.
Are you trying to access the raw files in the dataset? That is possible using the filesystem API. Search your stack's documentation for "Raw File Access", where you can find example Python code. You still use the @transform decorator, except instead of calling .dataframe() you call .filesystem(). Here's some example code:
import csv

with hair_eye_color.filesystem().open('students.csv') as f:
    reader = csv.reader(f, delimiter=',')
    next(reader)  # returns ['id', 'hair', 'eye', 'sex']
    next(reader)  # returns ['1', 'brown', 'brown', 'M']
You can then create a Spark dataframe from the file data and write it to the output, as sketched below.
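Here is a hedged sketch of the full pattern (the dataset paths are placeholders, and it assumes the transform context ctx exposes a Spark session, as in the Foundry transforms API): read the raw CSV through the filesystem API, build a Spark DataFrame from the rows, and write it to the output.
import csv

from pyspark.sql import Row
from transforms.api import transform, Input, Output


@transform(
    out=Output("/path/to/students_parsed"),         # placeholder output dataset
    hair_eye_color=Input("/path/to/students_raw"),  # placeholder input dataset
)
def compute(ctx, hair_eye_color, out):
    with hair_eye_color.filesystem().open('students.csv') as f:
        reader = csv.reader(f, delimiter=',')
        header = next(reader)  # e.g. ['id', 'hair', 'eye', 'sex']
        rows = [Row(**dict(zip(header, values))) for values in reader]
    # Assumes ctx.spark_session is available, as in the transforms API
    out.write_dataframe(ctx.spark_session.createDataFrame(rows))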

How can I get a list of bridges with their location (latitude and longitude) from an OSM file?

Maybe this query is a bit trivial or perhaps laborious, but for a project I need to obtain the bridges that exist in an OSM file along with their location (latitude and longitude).
Reading the OpenStreetMap wiki, I see that there is a procedure using Osmosis, but I do not know whether I will actually get the information in the following format:
Name of the bridge | latitude | longitude
bin/osmosis.bat --rx brandenburg.osm.bz2 --bp file="city.poly" --tf accept-ways highway=motorway_link,motorway --way-key-value keyValueList="bridge.yes" --used-node --write-xml brdg_autob.osm
Thanks in advance
Pablo
The output will be OSM XML and not plaintext.
Also, most bridges in OSM are mapped as ways. A way consists of multiple lat/lons represented as nodes. If you need a single lat,lon pair then you have to calculate the bridge center yourself.
Additionally, not all bridges are tagged as bridge=yes. See bridge in the OSM wiki for a list of commonly used tags, such as bridge=viaduct, bridge=aqueduct, bridge=boardwalk and so on.
You won't get exactly the format you described. However, with a little work you can transform the OSM XML into your format.
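If it helps, here is a rough Python sketch (not from the answer above) that post-processes the filtered OSM XML, assuming the brdg_autob.osm file produced by the Osmosis command in the question, and averages each bridge way's node coordinates into a single "name | latitude | longitude" line:
import xml.etree.ElementTree as ET

root = ET.parse("brdg_autob.osm").getroot()

# Map node id -> (lat, lon) for every node in the extract
nodes = {
    n.get("id"): (float(n.get("lat")), float(n.get("lon")))
    for n in root.findall("node")
}

for way in root.findall("way"):
    tags = {t.get("k"): t.get("v") for t in way.findall("tag")}
    if tags.get("bridge") not in ("yes", "viaduct", "aqueduct", "boardwalk"):
        continue
    coords = [nodes[nd.get("ref")] for nd in way.findall("nd") if nd.get("ref") in nodes]
    if not coords:
        continue
    # Use the average of the way's node coordinates as the bridge "center"
    lat = sum(c[0] for c in coords) / len(coords)
    lon = sum(c[1] for c in coords) / len(coords)
    print(f"{tags.get('name', 'unnamed')} | {lat:.6f} | {lon:.6f}")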

pyspark unindex one-hot encoded and assembled columns

I have the following code, which takes in a mix of categorical and numeric features, string-indexes the categorical features, one-hot encodes them, assembles both the one-hot encoded categorical features and the numeric features into a single vector, runs them through a random forest, and prints the resultant tree. I want the tree nodes to show the original feature names (i.e. Frame_Size etc.). How can I do that? In general, how can I decode one-hot encoded and assembled features?
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, OneHotEncoder, VectorAssembler
from pyspark.ml.classification import RandomForestClassifier as RF

# categorical features: string indexing and one-hot encoding
column_vec_in = ['Commodity','Frame_Size' , 'Frame_Shape', 'Frame_Color','Frame_Color_Family','Lens_Color','Frame_Material','Frame_Material_Summary','Build', 'Gender_Global', 'Gender_LC'] # frame Article_Desc not selected because the cardinality is too high
column_vec_out = ['Commodity_catVec', 'Frame_Size_catVec', 'Frame_Shape_catVec', 'Frame_Color_catVec','Frame_Color_Family_catVec','Lens_Color_catVec','Frame_Material_catVec','Frame_Material_Summary_catVec','Build_catVec', 'Gender_Global_catVec', 'Gender_LC_catVec']
indexers = [StringIndexer(inputCol=x, outputCol=x+'_tmp') for x in column_vec_in]
encoders = [OneHotEncoder(dropLast=False, inputCol=x+"_tmp", outputCol=y) for x,y in zip(column_vec_in, column_vec_out)]
tmp = [[i,j] for i,j in zip(indexers, encoders)]
tmp = [i for sublist in tmp for i in sublist]
#categorical and numeric features
cols_now = ['SODC_Regular_Rate','Commodity_catVec', 'Frame_Size_catVec', 'Frame_Shape_catVec', 'Frame_Color_catVec','Frame_Color_Family_catVec','Lens_Color_catVec','Frame_Material_catVec','Frame_Material_Summary_catVec','Build_catVec', 'Gender_Global_catVec', 'Gender_LC_catVec']
assembler_features = VectorAssembler(inputCols=cols_now, outputCol='features')
labelIndexer = StringIndexer(inputCol='Lens_Article_Description_reduced', outputCol="label")
tmp += [assembler_features, labelIndexer]
# converter = IndexToString(inputCol="featur", outputCol="originalCategory")
# converted = converter.transform(indexed)
pipeline = Pipeline(stages=tmp)
all_data = pipeline.fit(df_random_forest_P_limited).transform(df_random_forest_P_limited)
all_data.cache()
trainingData, testData = all_data.randomSplit([0.8,0.2], seed=0)
rf = RF(labelCol='label', featuresCol='features',numTrees=10,maxBins=800)
model = rf.fit(trainingData)
print(model.toDebugString)
After I run the Spark machine learning pipeline, I want to print out the random forest as a tree. Currently the printed tree refers to features only by index (feature 1, feature 2, etc.).
What I actually want to see is the original categorical feature names instead of feature 1, feature 2, etc. The fact that the categorical features are one-hot encoded and vector-assembled makes it hard for me to unindex/decode the feature names. How can I unindex/decode one-hot encoded and assembled feature vectors in pyspark? I have a vague idea that I have to use IndexToString(), but I am not exactly sure because there is a mix of numeric and categorical features and they are one-hot encoded and assembled.
Export the Apache Spark ML pipeline to a PMML document using the JPMML-SparkML library. A PMML document can be inspected and interpreted by humans (e.g. using Notepad), or processed programmatically (e.g. using other Java PMML API libraries).
The "model schema" is represented by the /PMML/MiningModel/MiningSchema element. Each "active feature" is represented by a MiningField element; you can retrieve their "type definitions" by looking up the corresponding /PMML/DataDictionary/DataField element.
Edit: Since you were asking about PySpark, you might consider using the JPMML-SparkML-Package package for export.
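As an aside, separate from the PMML route above: in PySpark you can also recover the index-to-name mapping from the ML attribute metadata that VectorAssembler attaches to the features column, and use it to relabel the debug string. This is only a rough sketch, assuming the all_data and model variables from the question:
import re

attrs_by_type = all_data.schema["features"].metadata["ml_attr"]["attrs"]
# attrs_by_type groups attributes by kind, e.g. "numeric" and "binary"
# (one-hot columns); each entry carries its vector index and a readable name.
idx_to_name = {
    attr["idx"]: attr["name"]
    for group in attrs_by_type.values()
    for attr in group
}

# Replace "feature N" in the printed tree with the original column name.
readable_tree = re.sub(
    r"feature (\d+)",
    lambda m: idx_to_name.get(int(m.group(1)), m.group(0)),
    model.toDebugString,
)
print(readable_tree)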

What is the fastest way to transform a very large JSON file with Spark?

I have a rather large JSON file (Amazon product data) with a lot of single JSON objects. Those JSON objects contain text that I want to preprocess for a specific training task, and it is the preprocessing that I need to speed up here. One JSON object looks like this:
{
"reviewerID": "A2SUAM1J3GNN3B",
"asin": "0000013714",
"reviewerName": "J. McDonald",
"helpful": [2, 3],
"reviewText": "I bought this for my husband who plays the piano. He is having a wonderful time playing these old hymns. The music is at times hard to read because we think the book was published for singing from more than playing from. Great purchase though!",
"overall": 5.0,
"summary": "Heavenly Highway Hymns",
"unixReviewTime": 1252800000,
"reviewTime": "09 13, 2009"
}
The task would be to extract reviewText from each JSON object and perform some preprocessing like lemmatizing etc.
My problem is that I don't know how I could use Spark to speed this task up on a cluster. I am actually not even sure whether I can read that JSON file as a stream object-by-object and parallelize the main task.
What would be the best way to get started with this?
As you have a single JSON object per line, you can use RDD's textFile to get an RDD[String] of lines. Then use map to parse the JSON objects using something like json4s and extract the necessary field.
Your whole code will look as simple as this (assuming you have a SparkContext as sc):
import org.json4s._
import org.json4s.jackson.JsonMethods._
implicit def formats = DefaultFormats
val r = sc.textFile("input_path").map(l => (parse(l) \ "reviewText").extract[String])
You can use a JSON dataset and then execute a simple SQL query to retrieve the reviewText column's value:
// A JSON dataset is pointed to by path.
// The path can be either a single text file or a directory storing text files.
val path = "path/reviews.json"
val people = sqlContext.read.json(path)
// Register this DataFrame as a table.
people.registerTempTable("reviews")
val reviewTexts = sqlContext.sql("SELECT reviewText FROM reviews")
Built from examples at the SparkSQL docs (http://spark.apache.org/docs/latest/sql-programming-guide.html#json-datasets)
I would load the JSON data into a DataFrame and then select the field I need; you can also use map to apply preprocessing such as lemmatization. A minimal sketch follows.
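Here is a minimal PySpark sketch of that approach (the file paths and the lemmatize helper below are placeholders, not from the question):
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("reviews").getOrCreate()

def lemmatize(text):
    # Placeholder for the real preprocessing (e.g. NLTK or spaCy lemmatization)
    return text.lower() if text else text

lemmatize_udf = udf(lemmatize, StringType())

reviews = spark.read.json("path/reviews.json")  # one JSON object per line
processed = reviews.select(lemmatize_udf("reviewText").alias("reviewText_clean"))
processed.write.parquet("path/reviews_preprocessed")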

postgis shape file import problems

Hi, I'm trying to import a shapefile from
http://www.nyc.gov/html/dcp/html/bytes/bytesarchive.shtml
into a PostGIS database. The above file creates MULTIPOLYGONs when I import it using shp2pgsql.
Then I'm trying to simply determine whether lat/long points are contained in my multipolygons.
However, my SELECTs are not working, and when I print out the points of my the_geom column they seem to be very broken.
select st_astext(geom) from (select (st_dumppoints(the_geom)).* from nybb where borocode =1) foo;
gives the result...
st_astext
------------------------------------------
POINT(1007193.83859999 257820.786899999)
POINT(1007209.40620001 257829.435100004)
POINT(1007244.8654 257833.326199993)
POINT(1007283.3496 257839.812399998)
POINT(1007299.3502 257851.488900006)
POINT(1007320.1081 257869.218500003)
POINT(1007356.64669999 257891.055800006)
POINT(1007385.6197 257901.432999998)
POINT(1007421.94509999 257894.084000006)
POINT(1007516.85959999 257890.406100005)
POINT(1007582.59110001 257884.7861)
POINT(1007639.02150001 257877.217199996)
POINT(1007701.29170001 257872.893099993)
...
For points in NYC, this is very off. What am I doing wrong?
The points are not off. The spatial data referred to is NOT in lat/long; this is why the numbers are different from what you expect. If you need it to be in long/lat, it must be reprojected. See more here: http://postgis.refractions.net/news/20020108/
The projection of the data seems to be the NAD_1983_StatePlane_New_York_Long_Island_FIPS_3104_Feet coordinate system (according to the metadata; see the excerpt below).
<spref>
<horizsys>
<planar>
<planci>
<plance Sync="TRUE">coordinate pair</plance>
<coordrep>
<absres Sync="TRUE">0.000000</absres>
<ordres Sync="TRUE">0.000000</ordres>
</coordrep>
<plandu Sync="TRUE">survey feet</plandu>
</planci>
<mapproj>
<mapprojn Sync="TRUE">Lambert Conformal Conic</mapprojn>
<lambertc>
<stdparll Sync="TRUE">40.666667</stdparll>
<stdparll Sync="TRUE">41.033333</stdparll>
<longcm Sync="TRUE">-74.000000</longcm>
<latprjo Sync="TRUE">40.166667</latprjo>
<feast Sync="TRUE">984250.000000</feast>
<fnorth Sync="TRUE">0.000000</fnorth>
</lambertc>
</mapproj>
</planar>
<geodetic>
<horizdn Sync="TRUE">North American Datum of 1983</horizdn>
<ellips Sync="TRUE">Geodetic Reference System 80</ellips>
<semiaxis Sync="TRUE">6378137.000000</semiaxis>
<denflat Sync="TRUE">298.257222</denflat>
</geodetic>
<cordsysn>
<geogcsn Sync="TRUE">GCS_North_American_1983</geogcsn>
<projcsn Sync="TRUE">NAD_1983_StatePlane_New_York_Long_Island_FIPS_3104_Feet</projcsn>
</cordsysn>
</horizsys>
</spref>
If you work much with spatial data, I suggest that you read more about map projections.
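As a quick sanity check in Python (not part of the original answer; it assumes this State Plane zone corresponds to EPSG:2263 and that pyproj is available), one of the printed points can be reprojected to WGS84 like this:
from pyproj import Transformer

# Assumption: EPSG:2263 = NAD83 / New York Long Island (US survey feet),
# matching the metadata above; EPSG:4326 = WGS84 long/lat.
to_wgs84 = Transformer.from_crs("EPSG:2263", "EPSG:4326", always_xy=True)
lon, lat = to_wgs84.transform(1007193.83859999, 257820.786899999)
print(lon, lat)  # roughly (-73.92, 40.87), i.e. northern Manhattan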
I don't think this is an issue with PostGIS. I checked the input ESRI shapefile nybb.shp with the AvisMap Free Viewer, and the coordinates look just the same there.
However, there is something interesting in the nybb.shp.xml metadata file:
<spdom>
<bounding>
<westbc Sync="TRUE">-74.257465</westbc>
<eastbc Sync="TRUE">-73.699450</eastbc>
<northbc Sync="TRUE">40.915808</northbc>
<southbc Sync="TRUE">40.495805</southbc>
</bounding>
<lboundng>
<leftbc Sync="TRUE">913090.770096</leftbc>
<rightbc Sync="TRUE">1067317.219904</rightbc>
<bottombc Sync="TRUE">120053.526313</bottombc>
<topbc Sync="TRUE">272932.050103</topbc>
</lboundng>
</spdom>
I am not familiar with that toolkit (ESRI ArcCatalog), but most probably you need to rescale your points after the import using that metadata.