filter on data which are numeric - scala

Hi, I have a dataframe with a column CODEARTICLE. Here is the dataframe:
+-----------+-------------+--------------------+--------+---+------+------+-----+---+
|CODEARTICLE|    STRUCTURE|                 DES|TYPEMARK|TYP|IMPLOC|MARQUE|GAMME|TAR|
+-----------+-------------+--------------------+--------+---+------+------+-----+---+
| GENCFFRIST|9999999999998|xxxxxxxxxxxxxxxxx...|       0|  0| Local|      |     |   |
| GENCFFMARC|9999999999998|xxxxxxxxxxxxxxxxx...|       0|  0| Local|      |     |   |
| GENCFFESCO|9999999999998|xxxxxxxxxxxxxxxxx...|       0|  0| Local|      |     |   |
|  GENCFFTNA|9999999999998|xxxxxxxxxxxxxxxxx...|       0|  0| Local|      |     |   |
| GENCFFEMBA|9999999999998|xxxxxxxxxxxxxxxxx...|       0|  0| Local|      |     |   |
|  789600010|9999999999998|xxxxxxxxxxxxxxxxx...|       7|  1| Local|      |     |   |
|  799700040|9999999999998|xxxxxxxxxxxxxxxxx...|       0|  1| Local|      |     |   |
|  799701000|9999999999998|xxxxxxxxxxxxxxxxx...|       0|  1| Local|      |     |   |
|  899980490|9999999999998|xxxxxxxxxxxxxxxxx...|       0|  9| Local|      |     |   |
|  429600010|9999999999998|xxxxxxxxxxxxxxxxx...|       0|  1| Local|      |     |   |
|  559970040|9999999999998|xxxxxxxxxxxxxxxxx...|       0|  0| Local|      |     |   |
|  679500010|9999999999998|xxxxxxxxxxxxxxxxx...|       0|  1| Local|      |     |   |
|  679500040|9999999999998|xxxxxxxxxxxxxxxxx...|       0|  1| Local|      |     |   |
|  679500060|9999999999998|xxxxxxxxxxxxxxxxx...|       0|  1| Local|      |     |   |
+-----------+-------------+--------------------+--------+---+------+------+-----+---+
I would like to keep only the rows having a numeric CODEARTICLE.
// connect to the Oracle table TMP_ARTICLE
val spark = sparkSession.sqlContext
val articles_Gold = spark.load("jdbc",
  Map("url" -> "jdbc:oracle:thin:System/maher#//localhost:1521/XE",
      "dbtable" -> "IPTECH.TMP_ARTICLE"))
  .select("CODEARTICLE", "STRUCTURE", "DES", "TYPEMARK", "TYP", "IMPLOC", "MARQUE", "GAMME", "TAR")

// my attempt, which does not work:
val filteredData = articles_Gold.withColumn("test", 'CODEARTICLE.cast(IntegerType)).filter($"test" !== null)
Thank you a lot.

Use na.drop, passing the column name as the subset to drop on (not as the "how" parameter):
articles_Gold.withColumn("test", 'CODEARTICLE.cast(IntegerType)).na.drop(Seq("test"))

You can use the .isNotNull function on the column in your filter. You don't even need to create another column for your logic; you can simply do the following:
val filteredData = articles_Gold.withColumn("CODEARTICLE", 'CODEARTICLE.cast(IntegerType)).filter('CODEARTICLE.isNotNull)
I hope the answer is helpful.
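For reference, here is a minimal, self-contained sketch of the cast-and-isNotNull approach on a toy dataframe; the sample values are made up and the SparkSession setup is only there to make the snippet runnable:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.IntegerType

val spark = SparkSession.builder().master("local[*]").appName("numeric-filter").getOrCreate()
import spark.implicits._

// Toy data standing in for the CODEARTICLE column of the question.
val articles = Seq("GENCFFRIST", "789600010", "799700040", "GENCFFTNA").toDF("CODEARTICLE")

// Rows whose CODEARTICLE cannot be cast to an integer become null after the
// cast, so isNotNull keeps only the numeric codes.
val numericOnly = articles.filter('CODEARTICLE.cast(IntegerType).isNotNull)

numericOnly.show()
// +-----------+
// |CODEARTICLE|
// +-----------+
// |  789600010|
// |  799700040|
// +-----------+

If the article codes can be longer than nine digits, casting to LongType instead avoids integer overflow turning valid codes into nulls.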

Related

Unable to get the result from the window function

+---------------+--------+
|YearsExperience| Salary|
+---------------+--------+
| 1.1| 39343.0|
| 1.3| 46205.0|
| 1.5| 37731.0|
| 2.0| 43525.0|
| 2.2| 39891.0|
| 2.9| 56642.0|
| 3.0| 60150.0|
| 3.2| 54445.0|
| 3.2| 64445.0|
| 3.7| 57189.0|
| 3.9| 63218.0|
| 4.0| 55794.0|
| 4.0| 56957.0|
| 4.1| 57081.0|
| 4.5| 61111.0|
| 4.9| 67938.0|
| 5.1| 66029.0|
| 5.3| 83088.0|
| 5.9| 81363.0|
| 6.0| 93940.0|
| 6.8| 91738.0|
| 7.1| 98273.0|
| 7.9|101302.0|
| 8.2|113812.0|
| 8.7|109431.0|
| 9.0|105582.0|
| 9.5|116969.0|
| 9.6|112635.0|
| 10.3|122391.0|
| 10.5|121872.0|
+---------------+--------+
I want to find the highest salary from the above data, which is 122391.0.
My code:
val top = Window.partitionBy("id").orderBy(col("Salary").desc)
val res = df1.withColumn("top", rank().over(top))
Result
+---------------+--------+---+---+
|YearsExperience| Salary| id|top|
+---------------+--------+---+---+
| 1.1| 39343.0| 0| 1|
| 1.3| 46205.0| 1| 1|
| 1.5| 37731.0| 2| 1|
| 2.0| 43525.0| 3| 1|
| 2.2| 39891.0| 4| 1|
| 2.9| 56642.0| 5| 1|
| 3.0| 60150.0| 6| 1|
| 3.2| 54445.0| 7| 1|
| 3.2| 64445.0| 8| 1|
| 3.7| 57189.0| 9| 1|
| 3.9| 63218.0| 10| 1|
| 4.0| 55794.0| 11| 1|
| 4.0| 56957.0| 12| 1|
| 4.1| 57081.0| 13| 1|
| 4.5| 61111.0| 14| 1|
| 4.9| 67938.0| 15| 1|
| 5.1| 66029.0| 16| 1|
| 5.3| 83088.0| 17| 1|
| 5.9| 81363.0| 18| 1|
| 6.0| 93940.0| 19| 1|
| 6.8| 91738.0| 20| 1|
| 7.1| 98273.0| 21| 1|
| 7.9|101302.0| 22| 1|
| 8.2|113812.0| 23| 1|
| 8.7|109431.0| 24| 1|
| 9.0|105582.0| 25| 1|
| 9.5|116969.0| 26| 1|
| 9.6|112635.0| 27| 1|
| 10.3|122391.0| 28| 1|
| 10.5|121872.0| 29| 1|
+---------------+--------+---+---+
I also tried partitioning by Salary and ordering by id, but the result was the same.
As you can see, 122391 appears near the bottom, but it should come in the first position given the ordering I used.
Can anybody please help?
Are you sure you need a window function here? The window you defined partitions the data by id, which I assume is unique, so each group produced by the window will only have one row. It looks like you want a window over the entire dataframe, which means you don't actually need one. If you just want to add a column with the max, you can get the max using an aggregation on your original dataframe and cross join with it:
val maxDF = df1.agg(max("salary").as("top"))
val res = df1.crossJoin(maxDF)
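As a follow-up, if all you need is the single value 122391.0 rather than an extra column, a plain aggregation (or an orderBy plus limit) on the question's df1 is enough; a minimal sketch:

import org.apache.spark.sql.functions.{col, max}

// Just the number: aggregate the whole dataframe without any window.
val topSalary = df1.agg(max("Salary")).first().getDouble(0)  // 122391.0

// Or keep the entire top row (YearsExperience 10.3, Salary 122391.0).
val topRow = df1.orderBy(col("Salary").desc).limit(1)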

pySpark window partition sortBy instead of orderBy

This is my current dataset:
+----------+--------------------+---------+--------+
|session_id| timestamp| item_id|category|
+----------+--------------------+---------+--------+
| 1|2014-04-07 10:51:...|214536502| 0|
| 1|2014-04-07 10:54:...|214536500| 0|
| 1|2014-04-07 10:54:...|214536506| 0|
| 1|2014-04-07 10:57:...|214577561| 0|
| 2|2014-04-07 13:56:...|214662742| 0|
| 2|2014-04-07 13:57:...|214662742| 0|
| 2|2014-04-07 13:58:...|214825110| 0|
| 2|2014-04-07 13:59:...|214757390| 0|
| 2|2014-04-07 14:00:...|214757407| 0|
| 2|2014-04-07 14:02:...|214551617| 0|
| 3|2014-04-02 13:17:...|214716935| 0|
| 3|2014-04-02 13:26:...|214774687| 0|
| 3|2014-04-02 13:30:...|214832672| 0|
| 4|2014-04-07 12:09:...|214836765| 0|
| 4|2014-04-07 12:26:...|214706482| 0|
| 6|2014-04-06 16:58:...|214701242| 0|
| 6|2014-04-06 17:02:...|214826623| 0|
| 7|2014-04-02 06:38:...|214826835| 0|
| 7|2014-04-02 06:39:...|214826715| 0|
| 8|2014-04-06 08:49:...|214838855| 0|
+----------+--------------------+---------+--------+
I want to get the difference between the timestamp of the current row and the timestamp of the previous row.
So I converted the timestamp as follows:
data = data.withColumn('time_seconds',data.timestamp.astype('Timestamp').cast("long"))
data.show()
Next, I tried the following:
my_window = Window.partitionBy().orderBy("session_id")
data = data.withColumn("prev_value", F.lag(data.time_seconds).over(my_window))
data = data.withColumn("diff", F.when(F.isnull(data.time_seconds - data.prev_value), 0)
.otherwise(data.time_seconds - data.prev_value))
data.show()
This is what I got:
+----------+-----------+---------+--------+------------+----------+--------+
|session_id| timestamp| item_id|category|time_seconds|prev_value| diff|
+----------+-----------+---------+--------+------------+----------+--------+
| 1|2014-04-07 |214536502| 0| 1396831869| null| 0|
| 1|2014-04-07 |214536500| 0| 1396832049|1396831869| 180|
| 1|2014-04-07 |214536506| 0| 1396832086|1396832049| 37|
| 1|2014-04-07 |214577561| 0| 1396832220|1396832086| 134|
| 10000001|2014-09-08 |214854230| S| 1410136538|1396832220|13304318|
| 10000001|2014-09-08 |214556216| S| 1410136820|1410136538| 282|
| 10000001|2014-09-08 |214556212| S| 1410136836|1410136820| 16|
| 10000001|2014-09-08 |214854230| S| 1410136872|1410136836| 36|
| 10000001|2014-09-08 |214854125| S| 1410137314|1410136872| 442|
| 10000002|2014-09-08 |214849322| S| 1410167451|1410137314| 30137|
| 10000002|2014-09-08 |214838094| S| 1410167611|1410167451| 160|
| 10000002|2014-09-08 |214714721| S| 1410167694|1410167611| 83|
| 10000002|2014-09-08 |214853711| S| 1410168818|1410167694| 1124|
| 10000003|2014-09-05 |214853090| 3| 1409880735|1410168818| -288083|
| 10000003|2014-09-05 |214851326| 3| 1409880865|1409880735| 130|
| 10000003|2014-09-05 |214853094| 3| 1409881043|1409880865| 178|
| 10000004|2014-09-05 |214853090| 3| 1409886885|1409881043| 5842|
| 10000004|2014-09-05 |214851326| 3| 1409889318|1409886885| 2433|
| 10000004|2014-09-05 |214853090| 3| 1409889388|1409889318| 70|
| 10000004|2014-09-05 |214851326| 3| 1409889428|1409889388| 40|
+----------+-----------+---------+--------+------------+----------+--------+
only showing top 20 rows
I was hoping that the session_id would come out in numerical order instead of what this gave me.
Is there any way to make the session_id come out in numerical order (as in 1, 2, 3, ...) instead of (1, 10000001, ...)?
Thank you so much.
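One likely cause of the ordering: if session_id is stored as a string, orderBy sorts it lexicographically, which would explain why 10000001 follows 1 instead of 2. Below is a sketch of the same lag pattern in Scala, for reference; the variable data and the column names mirror the question, and the key step, casting session_id to a numeric type before ordering, applies equally in PySpark:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{coalesce, col, lag, lit}

// Cast the id to a number so the window orders 1, 2, 3, ... rather than
// lexicographically, and derive the epoch seconds as in the question.
val withSeconds = data
  .withColumn("session_id_num", col("session_id").cast("long"))
  .withColumn("time_seconds", col("timestamp").cast("timestamp").cast("long"))

// Same global window as in the question, just ordered on the numeric id
// (and the timestamp, to order rows within a session deterministically).
val w = Window.orderBy("session_id_num", "time_seconds")

val withDiff = withSeconds
  .withColumn("prev_value", lag(col("time_seconds"), 1).over(w))
  .withColumn("diff", coalesce(col("time_seconds") - col("prev_value"), lit(0L)))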

Create new column from existing Dataframe

I have a dataframe and am trying to create a new column from existing columns based on the following conditions:
Group the data by the column named event_type.
Filter only those rows where the column source has the value train, and call the result X.
The values for the new column are X.sum / X.length.
Here is the input dataframe:
+-----+-------------+----------+--------------+------+
| id| event_type| location|fault_severity|source|
+-----+-------------+----------+--------------+------+
| 6597|event_type 11|location 1| -1| test|
| 8011|event_type 15|location 1| 0| train|
| 2597|event_type 15|location 1| -1| test|
| 5022|event_type 15|location 1| -1| test|
| 5022|event_type 11|location 1| -1| test|
| 6852|event_type 11|location 1| -1| test|
| 6852|event_type 15|location 1| -1| test|
| 5611|event_type 15|location 1| -1| test|
|14838|event_type 15|location 1| -1| test|
|14838|event_type 11|location 1| -1| test|
| 2588|event_type 15|location 1| 0| train|
| 2588|event_type 11|location 1| 0| train|
+-----+-------------+----------+--------------+------+
and I want the following output:
+--------------+------------+-----------+
| | event_type | PercTrain |
+--------------+------------+-----------+
|event_type 11 | 7888 | 0.388945 |
|event_type 35 | 6615 | 0.407105 |
|event_type 34 | 5927 | 0.406783 |
|event_type 15 | 4395 | 0.392264 |
|event_type 20 | 1458 | 0.382030 |
+--------------+------------+-----------+
I have tried this code, but it throws an error:
EventSet.withColumn("z" , when($"source" === "train" , sum($"source") / length($"source"))).groupBy("fault_severity").count().show()
Here EventSet is the input dataframe.
The Python (pandas) code that gives the desired output is:
event_type_unq['PercTrain'] = event_type.pivot_table(values='source',index='event_type',aggfunc=lambda x: sum(x=='train')/float(len(x)))
I guess you want to obtain the percentage of train values. So, here is my code:
val df2 = df.select($"event_type", $"source")
  .groupBy($"event_type")
  .pivot($"source")
  .agg(count($"source"))
  .withColumn("PercTrain", $"train" / ($"train" + $"test"))
df2.show
and gives the result as follows:
+-------------+----+-----+------------------+
| event_type|test|train| PercTrain|
+-------------+----+-----+------------------+
|event_type 11| 4| 1| 0.2|
|event_type 15| 5| 2|0.2857142857142857|
+-------------+----+-----+------------------+
Hope this is helpful.
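An alternative sketch that skips the pivot entirely: the percentage of train rows per event_type is just the average of a 0/1 indicator, which also avoids null counts when a category happens to have no test rows. It assumes df is the input dataframe (EventSet in the question):

import org.apache.spark.sql.functions.{avg, col, count, lit, when}

val percTrain = df
  .groupBy("event_type")
  .agg(
    count(lit(1)).as("n"),                                                 // rows per event_type
    avg(when(col("source") === "train", 1).otherwise(0)).as("PercTrain")   // fraction of train rows
  )

percTrain.show()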

Pyspark Join Tables

I'm new to PySpark. I have 'Table A' and 'Table B' and I need to join both to get 'Table C'. Can anyone help me, please?
I'm using DataFrames...
I don't know how to join those tables together in the right way...
Table A:
+--+----------+-----+
|id|year_month| qt |
+--+----------+-----+
| 1| 2015-05| 190 |
| 2| 2015-06| 390 |
+--+----------+-----+
Table B:
+---------+-----+
|year_month| sem |
+---------+-----+
| 2016-01| 1 |
| 2015-02| 1 |
| 2015-03| 1 |
| 2016-04| 1 |
| 2015-05| 1 |
| 2015-06| 1 |
| 2016-07| 2 |
| 2015-08| 2 |
| 2015-09| 2 |
| 2016-10| 2 |
| 2015-11| 2 |
| 2015-12| 2 |
+---------+-----+
Table C:
The join adds columns and also adds rows...
+--+----------+-----+-----+
|id|year_month| qt | sem |
+--+----------+-----+-----+
| 1| 2015-05 | 0 | 1 |
| 1| 2016-01 | 0 | 1 |
| 1| 2015-02 | 0 | 1 |
| 1| 2015-03 | 0 | 1 |
| 1| 2016-04 | 0 | 1 |
| 1| 2015-05 | 190 | 1 |
| 1| 2015-06 | 0 | 1 |
| 1| 2016-07 | 0 | 2 |
| 1| 2015-08 | 0 | 2 |
| 1| 2015-09 | 0 | 2 |
| 1| 2016-10 | 0 | 2 |
| 1| 2015-11 | 0 | 2 |
| 1| 2015-12 | 0 | 2 |
| 2| 2015-05 | 0 | 1 |
| 2| 2016-01 | 0 | 1 |
| 2| 2015-02 | 0 | 1 |
| 2| 2015-03 | 0 | 1 |
| 2| 2016-04 | 0 | 1 |
| 2| 2015-05 | 0 | 1 |
| 2| 2015-06 | 390 | 1 |
| 2| 2016-07 | 0 | 2 |
| 2| 2015-08 | 0 | 2 |
| 2| 2015-09 | 0 | 2 |
| 2| 2016-10 | 0 | 2 |
| 2| 2015-11 | 0 | 2 |
| 2| 2015-12 | 0 | 2 |
+--+----------+-----+-----+
Code:
from pyspark.sql import HiveContext
sqlContext = HiveContext(sc)

lA = [(1, "2015-05", 190), (2, "2015-06", 390)]
tableA = sqlContext.createDataFrame(lA, ["id", "year_month", "qt"])
tableA.show()

lB = [("2016-01", 1), ("2015-02", 1), ("2015-03", 1), ("2016-04", 1),
      ("2015-05", 1), ("2015-06", 1), ("2016-07", 2), ("2015-08", 2),
      ("2015-09", 2), ("2016-10", 2), ("2015-11", 2), ("2015-12", 2)]
tableB = sqlContext.createDataFrame(lB, ["year_month", "sem"])
tableB.show()
It's not really a join, more a Cartesian product (cross join):
Spark 2
import pyspark.sql.functions as psf
tableA.crossJoin(tableB)\
    .withColumn(
        "qt",
        psf.when(tableB.year_month == tableA.year_month, psf.col("qt")).otherwise(0))\
    .drop(tableA.year_month)
Spark 1.6
tableA.join(tableB)\
    .withColumn(
        "qt",
        psf.when(tableB.year_month == tableA.year_month, psf.col("qt")).otherwise(0))\
    .drop(tableA.year_month)
+---+---+----------+---+
| id| qt|year_month|sem|
+---+---+----------+---+
| 1| 0| 2015-02| 1|
| 1| 0| 2015-03| 1|
| 1|190| 2015-05| 1|
| 1| 0| 2015-06| 1|
| 1| 0| 2016-01| 1|
| 1| 0| 2016-04| 1|
| 1| 0| 2015-08| 2|
| 1| 0| 2015-09| 2|
| 1| 0| 2015-11| 2|
| 1| 0| 2015-12| 2|
| 1| 0| 2016-07| 2|
| 1| 0| 2016-10| 2|
| 2| 0| 2015-02| 1|
| 2| 0| 2015-03| 1|
| 2| 0| 2015-05| 1|
| 2|390| 2015-06| 1|
| 2| 0| 2016-01| 1|
| 2| 0| 2016-04| 1|
| 2| 0| 2015-08| 2|
| 2| 0| 2015-09| 2|
| 2| 0| 2015-11| 2|
| 2| 0| 2015-12| 2|
| 2| 0| 2016-07| 2|
| 2| 0| 2016-10| 2|
+---+---+----------+---+
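The same result can also be built as a left join onto the full (id, year_month) grid, which avoids the when/otherwise. Here is a sketch in Scala, for reference, assuming dataframes named tableA(id, year_month, qt) and tableB(year_month, sem) as in the question (Spark 2 style):

import org.apache.spark.sql.functions.{coalesce, col, lit}

// Every id paired with every year_month from tableB.
val grid = tableA.select("id").distinct().crossJoin(tableB)

// Bring in the known quantities and default the missing ones to 0.
val tableC = grid
  .join(tableA, Seq("id", "year_month"), "left")
  .withColumn("qt", coalesce(col("qt"), lit(0)))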

How to create feature vector in Scala? [duplicate]

This question already has an answer here: How to transform the dataframe into label feature vector? (1 answer). Closed 5 years ago.
I am reading a CSV as a dataframe in Scala as below:
+-----------+------------+
|x |y |
+-----------+------------+
| 0| 0|
| 0| 33|
| 0| 58|
| 0| 96|
| 0| 1|
| 1| 21|
| 0| 10|
| 0| 65|
| 1| 7|
| 1| 28|
+-----------+------------+
Then I create the label and feature vector as below:
val assembler = new VectorAssembler()
  .setInputCols(Array("y"))
  .setOutputCol("features")

val output = assembler.transform(daf).select($"x".as("label"), $"features")
output.show()
The output is:
+-----------+------------+
|label | features |
+-----------+------------+
| 0.0| 0.0|
| 0.0| 33.0|
| 0.0| 58.0|
| 0.0| 96.0|
| 0.0| 1.0|
| 0.0| 21.0|
| 0.0| 10.0|
| 1.0| 65.0|
| 1.0| 7.0|
| 1.0| 28.0|
+-----------+------------+
But instead I want the output to be in the format below:
+-----+------------------+
|label| features |
+-----+------------------+
| 0.0|(1,[1],[0]) |
| 0.0|(1,[1],[33]) |
| 0.0|(1,[1],[58]) |
| 0.0|(1,[1],[96]) |
| 0.0|(1,[1],[1]) |
| 1.0|(1,[1],[21]) |
| 0.0|(1,[1],[10]) |
| 0.0|(1,[1],[65]) |
| 1.0|(1,[1],[7]) |
| 1.0|(1,[1],[28]) |
+-----+------------------+
I tried:
val assembler = new VectorAssembler()
  .setInputCols(Array("y").map{ x => "(1,[1]," + x + ")" })
  .setOutputCol("features")
But it did not work.
Any help is appreciated.
This is not how you use VectorAssembler.
You need to give the names of your input columns, i.e.:
new VectorAssembler().setInputCols(Array("features"))
You'll eventually face another issue considering the data you have shared: it's not much of a vector if it only has one point (your features column).
It should be used with 2 or more columns, i.e.:
new VectorAssembler().setInputCols(Array("f1","f2","f3"))
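To make the multi-column case concrete, here is a minimal sketch with made-up column names f1 and f2. Note that the (size,[indices],[values]) text in the desired output is simply how Spark prints a sparse vector, while [v1,v2] is a dense one; both representations behave the same in downstream ML stages.

import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("assembler-example").getOrCreate()
import spark.implicits._

// Hypothetical two-feature dataframe; the column names are made up.
val df = Seq((0.0, 1.0, 21.0), (1.0, 0.0, 33.0), (1.0, 7.0, 28.0)).toDF("label", "f1", "f2")

val assembler = new VectorAssembler()
  .setInputCols(Array("f1", "f2"))
  .setOutputCol("features")

// The features column now holds one vector per row, built from f1 and f2.
assembler.transform(df).select("label", "features").show(false)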