Make absolute work inside filtering in Scala - scala

I want to return a percentage of results from a dataset. Being a noob in Scala, I tried the following:
ds.filter(abs(hash(col("source"))) % 100 < percentage)
but I get abs cannot be applied to (org.apache.spark.sql.Column). I don't want to sample the dataset; I want to select rows based on the hash of a column so that the result is deterministic even when the dataset changes.

This works just fine:
ds.filter(abs(hash(col("source"))) % 100 < percentage)
Probably you have multiple abs functions in your namespace (e.g. from imports like import math._). To be sure, use
ds.filter(org.apache.spark.sql.functions.abs(hash(col("source"))) % 100 < percentage)
But I think this will not guarantee that you get the exact percentage, because hash values may not be equally distributed (think of a dataframe with only one unique value of source: all hash values will be the same, so you get either all records or none). To get the exact percentage, you would need something like:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{count, lit, row_number}

val newDF = df
  .withColumn("rnb", row_number().over(Window.orderBy($"source"))) // or order by hash if you wish
  .withColumn("count", count("*").over())
  .where($"rnb" < lit(fraction) * $"count")

Related

PL/PGSQL function set upper and lower bound to an integer value

I am working on a pgsql function that calculates a score based on some logic. But one of the requirements is that a parameter after calculation should be in the range [100000, 9900000].
I can't figure out how to do this with existing functions; it's obviously possible with if conditions. Any help?
v_running_sum += (30 - v_calcuated_value)* 100000;
I want v_running_sum to be in the range mentioned above. Is there a way to clamp the variable's value: if it is below the lower bound (100,000), set it to 100,000, and likewise for the upper bound?
This is how you can easily do this check, using a range:
SELECT 1 <# int4range(100000, 9900000,'[]');
There are many options for how to implement this in your logic.
----edit----
When the outcome of a calculation should always be something between 100000 and 9900000, you can use this:
SELECT LEAST(GREATEST(var, 100000), 9900000);
Whatever you stick into "var", the result will always be between these boundaries.
If you want a more verbose solution, use a CASE statement:
case when val <= 100000 then 100000
when val >= 9900000 then 9900000
else val end as val

Generating a random sequence using Scala

I need to generate a random sequence of 100 values from 10^-10 to 10^10 and store them in an Array using Scala. I tried the following, but it didn't work:
Array(scala.math.pow(10,-10).doubleValue to scala.math.pow(10,10).intValue by scala.math.pow(10,5).toLong)
Can anyone help me figure out how to do this correctly?
So you need to fill() the array with Random elements.
import scala.util.Random
val rndm = new Random(1911L)
Array.fill(100)(rndm.between(math.pow(10,-10), math.pow(10,10)))
//res0: Array[Double] = Array(6.08868427907728E9
// , 3.29548545155816E9
// , 9.52802903383275E9
// , 7.981295238889314E9
// , 1.9462480080050848E9
// . . .
This works because the 2nd parameter to the fill() method is "by-name", i.e. re-evaluated for every element.
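To see what the by-name parameter buys you, compare passing the expression itself with passing a pre-computed value. A small sketch (the sample outputs are only illustrative):

import scala.util.Random
val rng = new Random()

// By-name argument: rng.nextInt(100) is re-evaluated for every element,
// so each slot gets a fresh draw.
val fresh = Array.fill(3)(rng.nextInt(100))    // e.g. Array(97, 5, 46)

// Pre-computed value: the expression is evaluated once, then repeated.
val once = rng.nextInt(100)
val same = Array.fill(3)(once)                 // e.g. Array(13, 13, 13)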
UPDATE
Things aren't quite as clean if you don't have the .between() method (added in Scala 2.13).
Array.fill(100)(rndm.nextDouble())
.map(_ * math.pow(10,10))
Note that this actually has a floor of 0.0 instead of the desired 0.0000000001. It's very unlikely you'd have an entry that's too small, especially when taking only 100 samples. Still, there are steps you could take to ensure that can't happen, as in the sketch below.
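For instance, one way to guarantee the floor on a pre-2.13 Scala is to clamp each draw. A minimal sketch; clamping is just one option, and redrawing values below the floor would be another:

import scala.util.Random
val rng = new Random(1911L)

// nextDouble() is in [0, 1); scale it to [0, 1e10) and clamp anything
// below the desired floor of 1e-10 up to the floor.
val values: Array[Double] =
  Array.fill(100)(math.max(rng.nextDouble() * math.pow(10, 10), 1e-10))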

How to convert a type Any List to a type Double (Scala)

I am new to Scala and I would like to understand some basic stuff.
First of all, I need to calculate the average of a certain column of a DataFrame and use the result as a double type variable.
After some Internet research I was able to calculate the average and, at the same time, put it into a List of type Any by using the following command:
val avgX_List = mainDataFrame.groupBy().agg(mean("_c1")).collect().map(_(0)).toList
where "_c1" is the second column of my dataframe. This line of code returns a List with type List[Any].
To pass the result into a variable I used the following command:
var avgX = avgX_List(0)
hoping that the var avgX would automatically be of type Double, but obviously that didn't happen.
So now let the questions begin:
What does map(_(0)) do? I know the basic definition of the map() transformation, but I can't find an explanation for this exact argument.
I know that by using the .toList method at the end of the command my result will be a List of type Any. Is there a way I could change this into a List containing Double elements, or even convert this one directly?
Do you think that it would be much more appropriate to pass the column of my Dataframe into a List[Double] and then calculate the average of its elements?
Is the solution I showed above correct in any way for my problem? I know that "it is working" is different from "correct solution".
Summing up, I need to calculate the average of a certain column of a Dataframe and have the result as a double type variable.
Note: I am Greek and I sometimes find it hard to understand some English coding "slang".
map(_(0)) is a shortcut for map( (r: Row) => r(0) ), which is in turn a shortcut for map( (r: Row) => r.apply(0) ). The apply method returns Any, and so you are losing the right type. Try using map(_.getAs[Double](0)) or map(_.getDouble(0)) instead.
Collecting all entries of the column and then computing the average would be highly counterproductive, because you'd have to send huge amounts of data to the master node, and then do all the calculations on this single central node. That would be the exact opposite of what Spark is good for.
You also don't need collect(...).toList, because you can access the 0-th entry directly (it doesn't matter whether you get it from an Array or from a List). Since you are collapsing everything into a single Row anyway, you could get rid of the map step entirely by reordering the methods a little bit:
val avgX = mainDataFrame.groupBy().agg(mean("_c1")).collect()(0).getDouble(0)
It can be written even shorter using the first method:
val avgX = mainDataFrame.groupBy().agg(mean("_c1")).first().getDouble(0)
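For comparison only (not something the answer requires), the same value can be obtained without the empty groupBy() by selecting the aggregate directly. A small sketch assuming the usual functions import:

import org.apache.spark.sql.functions.avg

// Aggregating over the whole DataFrame yields a single row,
// so first().getDouble(0) extracts the mean as a Double.
val avgX: Double = mainDataFrame.select(avg("_c1")).first().getDouble(0)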
Any in Scala can't be directly converted to Double. Use toString and then toDouble on the final captured result, e.g.:
scala> x
res22: Any = 1.0
scala> x.toString.toDouble
res23: Double = 1.0
Note: instead of using map(...).toList, use (0)(0) directly to get the final value from your result set.
Test sample (Scala):
val wa = Array("one", "two", "two")
val wrdd = sc.parallelize(wa, 3).map(x => (x, 1))
val wdf = wrdd.toDF("col1", "col2")
val x = wdf.groupBy().agg(mean("col2")).collect()(0)(0).toString.toDouble
Output:
scala> val x = wdf.groupBy().agg(mean("col2")).collect()(0)(0).toString.toDouble
x: Double = 1.0

scala return matrix of average pixels

Here's the thing: I want to modify (and then return) a matrix of integers that is given as a parameter of the function. The function average (of the class MatrixMotionBlur) gives the average of the pixel itself and its upper, lower and left neighbours, following this formula:
result(x, y) = (M1(x, y)+M1(x-1, y)+M1(x, y-1)+M1(x, y+1)) / 4
This is the code I've implemented so far:
MatrixMotionBlur - Average function
MotionBlurSingleThread - run
The objective here is to apply the "average" method to alter the matrix values and return that matrix. The problem is that the program gives me an error when I try to insert the value into the matrix.
Any ideas how to do this?
The functional way:
val updatedData = data.zipWithIndex.map { case (row, i) =>
  row.indices.map { j =>
    mx.average(i, j) // assuming average(row, col) returns the blurred pixel
  }
}
Note that Seq is an immutable collection type: you can't modify it in place, you can only create a new, modified collection.
By the way, why do you start iterating at 1 rather than 0? Are you sure that's what you want?
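Since the MatrixMotionBlur and MotionBlurSingleThread code isn't shown above, here is a self-contained sketch of the formula over a Vector[Vector[Int]]; the matrix type, the boundary handling (falling back to the pixel itself when a neighbour is out of range) and the function name are illustrative assumptions, not the asker's actual classes:

// result(x, y) = (M1(x, y) + M1(x-1, y) + M1(x, y-1) + M1(x, y+1)) / 4
def motionBlur(m: Vector[Vector[Int]]): Vector[Vector[Int]] = {
  // Read m(x)(y), falling back to the centre pixel when (x, y) is out of range.
  def at(x: Int, y: Int, cx: Int, cy: Int): Int =
    if (x >= 0 && x < m.length && y >= 0 && y < m(x).length) m(x)(y) else m(cx)(cy)

  Vector.tabulate(m.length, m.head.length) { (x, y) =>
    (at(x, y, x, y) + at(x - 1, y, x, y) + at(x, y - 1, x, y) + at(x, y + 1, x, y)) / 4
  }
}

// Example: motionBlur(Vector(Vector(10, 20), Vector(30, 40)))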

Function equivalent to SUM() for multiplication in SQL Reporting

I'm looking for a function or solution to the following:
For a chart in SQL Reporting I need to multiply values from a column A. For summation I would use =SUM(COLUMN_A) for the chart, but what can I use for multiplication? I was not able to find a solution so far.
Currently I am calculating the value of the stacked column as follows:
=ROUND(SUM(Fields!Value_Is.Value)/SUM(Fields!StartValue.Value),3)
Instead of SUM I need something that multiplies the values.
Something like this:
=ROUND(MULTIPLY(Fields!Value_Is.Value)/MULTIPLY(Fields!StartValue.Value),3)
EDIT #1
Okay, I tried to get this thing running.
The expression for the chart looks like this:
=Exp(Sum(Log(IIf(Fields!Menge_Ist.Value = 0, 10^-306, Fields!Menge_Ist.Value)))) / Exp(Sum(Log(IIf(Fields!Startmenge.Value = 0, 10^-306, Fields!Startmenge.Value))))
If I calculate my 'needs' manually, I should get the following result:
In my SQL Report I get the following result:
To make it easier, these are the raw values:
and you have the possibility to group the chart by CW, CQ or CY.
(The values from the first pictures are aggregated Sum values from the raw values by FertStufe)
EDIT #2
I tried your expression, which results in this:
Just to make it clear:
The values in the column
=Value_IS / Start_Value
in the first picture are multiplied against each other
0,99474 x 1,0000 x 0,59041 = 0,58730
Diffusion, calendar week 44 sums:
Startvalue: 1900,00  Value Is: 1890,00  == yield: 0,99474
Waffer unbestrahlt, calendar week 44 sums:
Startvalue: 620,00  Value Is: 620,00  == yield: 1,0000
Pellet, calendar week 44 sums:
Startvalue: 271,00  Value Is: 160,00  == yield: 0,59041
yield Diffusion x yield Wafer x yield Pellet = needed value in chart = 0,58730
EDIT #3
The raw values look like this:
The chart is grouped - like in the image - on these fields:
CY (Calendar year), CM (Calendar month), CW (Calendar week)
You can download the data as xls here:
https://www.dropbox.com/s/g0yrzo3330adgem/2013-01-17_data.xls
The expression I use (copy/paste from the edit window):
=Exp(Sum(Log(Fields!Menge_Ist.Value / Fields!Startmenge.Value)))
I've exported the whole report result to Excel; you can get it here:
https://www.dropbox.com/s/uogdh9ac2onuqh6/2013-01-17_report.xls
It's actually a workaround, but I am pretty sure it is the only solution for this infamous problem :D
This is how I did it:
The product of a set of values equals Exp(∑(Log(X))), so what you should do is:
Exp(Sum(Log(Fields!YourField.Value)))
Who said math was worth nothing? =D
EDIT:
Corrected the formula.
By the way, it's tested.
Addressing Ian's concern:
Exp(Sum(Log(IIf(Fields!YourField.Value = 0, 10^-306, Fields!YourField.Value))))
The idea is to replace 0 with a very small number, since Log(0) is undefined. Just an idea.
EDIT:
Based on your updated question this is what you should do:
Exp(Sum(Log(Fields!Value_IS.Value / Fields!Start_Value.Value)))
I just tested the above code and got the result you hoped for.
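The identity behind the trick is exp(log(a) + log(b) + ...) = a × b × .... A small Scala sketch (not SSRS, purely to illustrate the math) checking it on the yields from the question:

val yields = Seq(0.99474, 1.0, 0.59041)

// Direct product of the yields.
val product = yields.product                      // ≈ 0.58730

// Same value via exp of the sum of logs, which is what the
// Exp(Sum(Log(...))) report expression computes per group.
val viaLogs = math.exp(yields.map(math.log).sum)  // ≈ 0.58730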