PL/pgSQL function: set upper and lower bounds on an integer value - postgresql

I am working on a PL/pgSQL function that calculates a score based on some logic. One of the requirements is that a parameter should end up in the range [100000, 9900000] after the calculation.
I can't figure out how to do this with existing functions; it is obviously possible with IF conditions, but is there a better way?
v_running_sum := v_running_sum + (30 - v_calculated_value) * 100000;
I want v_running_sum to stay in the range mentioned above. Is there any way to clamp the value of the variable to 100,000 when it falls below the lower bound (100,000), and likewise for the upper bound?

This is how you can easily do this check, using a range:
SELECT 1 <# int4range(100000, 9900000, '[]');
There are many options for how to implement this in your logic.
----edit----
When the outcome of a calculation should always be something between 100000 and 9900000, you can use this:
SELECT LEAST(GREATEST(var, 100000), 9900000);
Whatever you stick into "var", the result will always stay within these bounds.

If you want a more verbose solution, use a CASE expression:
case when val <= 100000 then 100000
     when val >= 9900000 then 9900000
     else val
end as val

Related

Postgres: How to increment the index (pointer) to access other rows

I have been trying to understand how to increment the reference to some value.
In C I would simply increment the pointer to retrieve a value in the next array location.
How does this mechanism work in Postgres? Is it possible?
For an example, I have created a table with some data in:
create table mathtest (
  x int,
  y int,
  val int);

insert into mathtest (x, y, val)
values (1,1,10),(2,2,20),(3,3,30),(4,4,40),(5,5,50),(6,6,60),(7,7,70),(8,8,80),(9,9,90),(10,10,100),(11,11,110);
What I want to do is add the val value from the current row to the val values of the rows where x equals the current x plus 2, and the current x plus 4. I realise that I can't assume the next row retrieved will be in a set order, so I can't use 'lead'.
If it were C, I would simply increment the pointer.
The data output needs to be restricted to rows where the modulo of x and y is 0 for certain divisors (this bit works):
select
  x base,
  (x + 2) plus1x,
  (x + 4) plus2x,
  y,
  val
from mathtest
where x % 2 = 0 and y % 3 = 0
This outputs the following:
base  plus1x  plus2x  y  val
   6       8      10  6   60
The output I would like is:
60 + 80 + 100 = 240
I can't conceptualise how to do it; my mind seems to be stuck in procedural C mode!
Whatever I type and try results in an error.
Can anybody help me get over this hurdle?
Welcome to the world of window functions.
You need an explicit ordering, otherwise it makes no sense to speak of the "previous row".
As a simple example, to get the difference from the previous value, you can query like:
SELECT val - lag(val) OVER (ORDER BY x)
FROM mathtest;

Make absolute value work inside filtering in Scala

I want to return a percentage of results from a dataset. Being new to Scala, I tried the following:
ds.filter(abs(hash(col("source"))) % 100 < percentage)
but I am getting abs cannot be applied to (org.apache.spark.sql.Column). I don't want to sample the dataset; I want to select rows based on the hash of a column so that the result is deterministic even when the dataset changes.
This works just fine:
ds.filter(abs(hash(col("source"))) % 100 < percentage)
Probably you have multiple abs functions in your namespace (e.g. from imports like import math._). To be sure, use
ds.filter(org.apache.spark.sql.functions.abs(hash(col("source"))) % 100 < percentage)
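If you would rather not spell out the full path every time, another option is to rename the Spark function at import time. A small sketch; sparkAbs is just an arbitrary alias, assuming the clash really comes from a wildcard import such as import math._:
// rename Spark's abs on import so it no longer collides with scala.math.abs
import org.apache.spark.sql.functions.{abs => sparkAbs, col, hash}
ds.filter(sparkAbs(hash(col("source"))) % 100 < percentage)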
But I think this will not guarantee that you get the exact percentage, because hash values may not be evenly distributed (think of a dataframe with only one unique value of source: the hash values will all be the same, so you get either all records or none). To get the exact percentage, you would need something like:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{row_number, count, lit}
import spark.implicits._ // for the $ column syntax, assuming a SparkSession named spark
val newDF = df
  .withColumn("rnb", row_number().over(Window.orderBy($"source"))) // or order by the hash if you wish
  .withColumn("count", count("*").over())
  .where($"rnb" < lit(fraction) * $"count") // fraction is the desired share, between 0 and 1

How to convert a type Any List to a type Double (Scala)

I am new to Scala and I would like to understand some basic stuff.
First of all, I need to calculate the average of a certain column of a DataFrame and use the result as a double type variable.
After some Internet research I was able to calculate the average and, at the same time, put it into a List of type Any by using the following command:
val avgX_List = mainDataFrame.groupBy().agg(mean("_c1")).collect().map(_(0)).toList
where "_c1" is the second column of my dataframe. This line of code returns a List with type List[Any].
To pass the result into a variable I used the following command:
var avgX = avgX_List(0)
hoping that the var avgX would automatically be of type Double, but that obviously didn't happen.
So now let the questions begin:
What does map(_(0)) do? I know the basic definition of the map() transformation, but I can't find an explanation with this exact argument.
I know that by using the .toList method at the end of the command my result will be a List of type Any. Is there a way to change this into a List that contains Double elements, or even to convert this one directly?
Do you think it would be more appropriate to put the column of my DataFrame into a List[Double] and then calculate the average of its elements?
Is the solution I showed above correct in any way, given my problem? I know that "it is working" is different from "correct solution".
Summing up, I need to calculate the average of a certain column of a Dataframe and have the result as a double type variable.
Note: I am Greek and sometimes find it hard to understand some English coding "slang".
map(_(0)) is a shortcut for map( (r: Row) => r(0) ), which is in turn a shortcut for map( (r: Row) => r.apply(0) ). The apply method returns Any, and so you are losing the right type. Try using map(_.getAs[Double](0)) or map(_.getDouble(0)) instead.
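For example, a minimal sketch reusing the asker's mainDataFrame and "_c1" column, keeping the value typed as Double all the way through:
import org.apache.spark.sql.functions.mean
val avgX_List: List[Double] =
  mainDataFrame.groupBy().agg(mean("_c1")).collect().map(_.getDouble(0)).toList
val avgX: Double = avgX_List(0)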
Collecting all entries of the column and then computing the average would be highly counterproductive, because you'd have to send huge amounts of data to the master node, and then do all the calculations on this single central node. That would be the exact opposite of what Spark is good for.
You also don't need collect(...).toList, because you can access the 0-th entry directly (it doesn't matter whether you get it from an Array or from a List). Since you are collapsing everything into a single Row anyway, you could get rid of the map step entirely by reordering the methods a little bit:
val avgX = mainDataFrame.groupBy().agg(mean("_c1")).collect()(0).getDouble(0)
It can be written even shorter by using first() instead of collect():
val avgX = mainDataFrame.groupBy().agg(mean("_c1")).first().getDouble(0)
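If you prefer a typed Dataset all the way, the same value can be obtained through an encoder. A sketch, assuming a SparkSession named spark is in scope for the implicits:
import spark.implicits._
import org.apache.spark.sql.functions.mean
val avgX: Double = mainDataFrame.select(mean("_c1")).as[Double].first()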
An Any value in Scala can't be directly converted to Double; use toString and then toDouble on the final captured result. For example:
scala> x
res22: Any = 1.0
scala> x.toString.toDouble
res23: Double = 1.0
Note: instead of using map(...).toList, you can use (0)(0) directly to get the final value from your result set.
Test sample (Scala; mean is org.apache.spark.sql.functions.mean):
val wa = Array("one", "two", "two")
val wrdd = sc.parallelize(wa, 3).map(x => (x, 1))
val wdf = wrdd.toDF("col1", "col2")
val x = wdf.groupBy().agg(mean("col2")).collect()(0)(0).toString.toDouble
Output:
scala> val x = wdf.groupBy().agg(mean("col2")).collect()(0)(0).toString.toDouble
x: Double = 1.0

How are min and max of cumulative variables assigned?

I created a routing problem and added some dimension to it. A solution assignment is found and I want to know the cumulative value at each index. I noticed that the CumulVar of an assignment does not only have a Value method but also Min and Max methods. Apparently the cumulative variables are implemented in such a way that they can represent intervals. I can see how setting
slack_max>0
fix_start_cumul_to_zero=False
introduces an ambiguity for the cumulative variables, as there is a choice in how to start and how much slack to add at each stop. But
Question: How are the Min and Max at each index computed?
You can get the Min and Max of a given node index from solution.Min(dimension.CumulVar(index)) and solution.Max(dimension.CumulVar(index)).
Note that you'll get exactly the same Min and Max when slack_max=0, unless you know something I don't ;)
Assuming you are using an output solution object solution and a time dimension time_dimension, the following stores them per vehicle in a dict of (min, max) tuples; adapt the output format however you wish:
time_dict = {}
for vehicle_id in range(num_vehicles):
    vehicle_time_dict = {}
    index = routing.Start(vehicle_id)
    # record the bounds of the start node first
    index_min = solution.Min(time_dimension.CumulVar(index))
    index_max = solution.Max(time_dimension.CumulVar(index))
    vehicle_time_dict[index] = (index_min, index_max)
    while not routing.IsEnd(index):
        index = solution.Value(routing.NextVar(index))
        index_min = solution.Min(time_dimension.CumulVar(index))
        index_max = solution.Max(time_dimension.CumulVar(index))
        vehicle_time_dict[index] = (index_min, index_max)
    time_dict[vehicle_id] = vehicle_time_dict
routing.IsEnd(index) returns True if it is the last index of that vehicle's route (or anything after the last index), so if a route is 10 nodes long:
routing.IsEnd(8) will return False,
routing.IsEnd(9) will return True,
routing.IsEnd(10) will also return True, and so on.

Is there a range data structure in Scala?

I'm looking for a way to handle ranges in Scala.
What I need to do is:
given a set of ranges and a range(A), return the range(B) where range(A) intersect range(B) is not empty
given a set of ranges and a range(A), remove/add range(A) from/to the set of ranges
given range(A) and range(B), create a range(C) = [min(A,B), max(A,B)]
I saw something similar in java - http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/collect/RangeSet.html
Though subRangeSet returns only the intersected values, and not the ranges in the set that range(A) intersects with.
RangeSet rangeSet = TreeRangeSet.create();
rangeSet.add(Range.closed(0, 10));
rangeSet.add(Range.closed(30, 40));
Range range = Range.closed(12, 32);
System.out.println(rangeSet.subRangeSet(range)); //[30,32] (I need [30,40])
System.out.println(range.span(Range.closed(30, 40))); //[12,40]
There is an Interval[A] type in the spire math library. This allows working with ranges of arbitrary types that define an Order. Boundaries can be inclusive, exclusive or omitted. So e.g. (-∞, 0.0] or [0.0, 1.0) would be possible intervals of doubles.
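A rough sketch of how that might look with spire, assuming its Interval.closed constructor and the intersects/union methods behave as documented; you need the spire dependency and its implicit Order instances:
import spire.math.Interval
import spire.implicits._
val a = Interval.closed(0, 10)
val b = Interval.closed(30, 40)
val query = Interval.closed(12, 32)
List(a, b).filter(_.intersects(query)) // intervals whose intersection with query is non-empty
a.union(b)                             // smallest interval covering both, here [0, 40]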
Here is a library intervalset for working with sets of non-overlapping intervals (IntervalSeq or IntervalTrie) as well as maps of intervals to arbitrary values (IntervalMap).
Here is a related question that describes how to use IntervalSeq with DateTime.
Note that if the type you want to use is 64bit or less (basically any primitive), IntervalTrie is extremely fast. See the Benchmarks.
As Tzach Zohar has mentioned in the comment, if all you need is ranges of Int, go for scala.collection.immutable.Range:
val rangeSet = Set(0 to 10, 30 to 40)
val r = 12 to 32
rangeSet.filter(range => range.contains(r.start) || range.contains(r.end) || r.contains(range.start)) // last check covers the case where r fully contains range
If you need it for another underlying type, implement it yourself; it's easy for your use case.
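For reference, here is a minimal sketch of the three operations on top of plain inclusive Int ranges (overlaps, intersecting and span are ad-hoc helper names, not standard library methods):
// true when the two inclusive ranges share at least one value
def overlaps(a: Range, b: Range): Boolean =
  a.start <= b.end && b.start <= a.end
// all ranges in the set whose intersection with a is non-empty
def intersecting(set: Set[Range], a: Range): Set[Range] =
  set.filter(overlaps(_, a))
// C = [min(A, B), max(A, B)]
def span(a: Range, b: Range): Range =
  math.min(a.start, b.start) to math.max(a.end, b.end)
// adding and removing ranges is just ordinary Set arithmetic
val updated = Set(0 to 10, 30 to 40) + (15 to 20) - (0 to 10)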