InfluxDB: divide value by 10 results in zero - Grafana

I have a field called epv1today which holds values that I need to divide by ten.
As long as the value has two digits, the following code example works. As soon as the value has one digit, all I get is a zero value.
Value 21 results in 2.1
Value 2 results in 0 but should be 0.2
Code:
from(bucket: "watt")
  |> range(start: today())
  |> filter(fn: (r) => r["_measurement"] == "<SerialNumber>")
  |> filter(fn: (r) => r["_field"] == "epv1today")
  |> map(fn: (r) => ({r with _value: r._value / 10}))
  |> last()
What am I missing?

The solution I was looking for is:
from(bucket: "watt")
  |> range(start: today())
  |> filter(fn: (r) => r["_measurement"] == "<SerialNumber>")
  |> filter(fn: (r) => r["_field"] == "epv1today")
  |> toFloat()
  |> map(fn: (r) => ({r with _value: r._value / 10.0}))
  |> last()
Note the toFloat() and the division by 10.0 instead of 10: when both operands are integers, Flux performs integer division, so 2 / 10 truncates to 0.
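The same truncation rule applies in most typed languages. As a quick analogy in Scala (illustrative values only, not part of the original query):
val v = 2
v / 10           // 0   -> integer division truncates
v / 10.0         // 0.2 -> one Double operand promotes the result to Double
v.toDouble / 10  // 0.2 -> the equivalent of the toFloat() + 10.0 fix above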

Related

Scala Recursive For Comprehension Prepends Empty List Only Once, Why?

Similar to this post here, I am working on the "Functional Programming in Scala" Anagrams coursework. I could not figure out the combinations function, but found this incredibly elegant solution elsewhere:
def combinations(occurrences: Occurrences): List[Occurrences] =
  List() :: (for {
    (char, max) <- occurrences
    count <- 1 to max
    rest <- combinations(occurrences filter { case (c, _) => c > char })
  } yield List((char, count)) ++ rest)
I understand how the for comprehension works to create the combinations, but what I do not understand is why the empty list is not prepended to every inner list during each recursive call. It's almost as if the compiler skips the prepend statement and only executes the for expression on the right side.
For example, the input combinations(List(('a', 2), ('b', 2))) returns the expected result set:
res1: List[forcomp.Anagrams.Occurrences] = List(List(), List((a,1)), List((a,1), (b,1)), List((a,1), (b,2)), List((a,2)), List((a,2), (b,1)), List((a,2), (b,2)), List((b,1)), List((b,2)))
There is only a single empty list. Looking at the recursive call, I would expect another empty list for each recursion. Would someone be so kind as to explain how this elegant solution works?
There is nothing producing an empty list inside this for comprehension. Even if
combinations(occurrences filter {case (c, _) => c > char})
contained an empty list and returned it in rest <- ... (and it does, for the first element), a value is prepended via List((char, count)) ++ rest, making the result non-empty by design.
So the whole for comprehension must return a List of non-empty Lists, to which a single empty list is then prepended.
This basically builds the solution by induction:
if you have an empty list of occurrences, return List(List()): the empty combination is the only valid solution for this input
if you start with (char, maxOccurrences) :: rest
assume that you have a valid solution for combinations(rest)
then take each such solution and prepend (char, 1) to it,
then take each such solution and prepend (char, 2) to it,
...
then take each such solution and prepend (char, maxOccurrences) to it,
then combine all of these results into one solution
all of these are non-empty because you always prepended something
so you are still missing the empty combination, and you add it explicitly to all the other solutions combined to create a complete solution for (char, maxOccurrences) :: rest
Because you have a valid starting point and a valid way of creating next step from the previous, you know that you can always create a valid solution.
The for comprehension
def combinations(occurrences: Occurrences): List[Occurrences] =
  List() :: (for {
    (char, max) <- occurrences
    count <- 1 to max
    rest <- combinations(occurrences filter { case (c, _) => c > char })
  } yield List((char, count)) ++ rest)
is doing the same thing as
def combinations(occurrences: Occurrences): List[Occurrences] =
  List() :: occurrences.flatMap { case (char, max) =>
    (1 to max).flatMap { count =>
      combinations(occurrences filter { case (c, _) => c > char }).map { rest =>
        (char, count) :: rest
      }
    }
  }
which is the same as
def combinations(occurrences: Occurrences): List[Occurrences] =
  occurrences.map { case (char, max) =>
    (1 to max).map { count =>
      val newOccurrence = (char, count)
      combinations(occurrences filter { case (c, _) => c > char }).map { rest =>
        newOccurrence :: rest
      }
    }
  }.flatten.flatten.::(List())
and this you can easily compare to the induction recipe from above:
def combinations(occurrences: Occurrences): List[Occurrences] =
  occurrences.map { case (char, max) =>
    // for every character on the list of occurrences
    (1 to max).map { count =>
      // you construct (char, 1), (char, 2), ..., (char, max)
      val newOccurrence = (char, count)
      // and for each such occurrence
      combinations(occurrences filter { case (c, _) => c > char }).map { rest =>
        // you prepend it to every result from the smaller subproblem
        newOccurrence :: rest
      }
    }
  }
  // because you would have a List(List(List(List(...)))) here
  // and you need List(List(...)) you flatten it twice
  .flatten.flatten
  // and since you are missing the empty result, you prepend it here
  .::(List())
The solution you posted does exactly the same thing, just in a more compact way: instead of .map().flatten, there are .flatMaps hidden behind the for comprehension.
I think that tracking the calls will help you understand it better. I am going to number the steps to make it easy to go back and forth between them.
(step 1) We call combinations(List(('a', 2), ('b', 2))) and have first:
(char, max) <- ('a', 2)
count <- 1
The result of occurrences filter {case (c, _) => c > char} will be List(('b', 2)).
(step 2) Therefore we now calculate combinations(List(('b', 2))):
(char, max) <- ('b', 2)
count <- 1
Now, the result of occurrences filter {case (c, _) => c > char} will be List().
(step 3) We are going to calculate combinations(List()):
The for comprehension yields List() (there is nothing to iterate over), and List() :: List() results in List(List()). Therefore we are back in step 2:
(back in step 2) We had:
(char, max) <- ('b', 2)
count <- 1
and rest now iterates over List(List()), so rest is bound to List(). So we are going to yield for that option List((char, count)) ++ rest, which is List(('b', 1)) ++ List(), which is List(('b', 1)). We are now in the next iteration of the same call with:
(char, max) <- ('b', 2)
count <- 2
rest will be List() again, and we are now going to yield for that option List((char, count)) ++ rest, which is List(('b', 2)) ++ List(), which is List(('b', 2)). Now we prepend the empty list, and the result of step 2 is: List(List(), List((b,1)), List((b,2))).
(back in step 1) We had:
(char, max) <- ('a', 2)
count <- 1
and now we have:
rest <- List()
So we are yielding List((char, count)) ++ rest which is: List(('a', 1)) ++ List() which is: List(('a', 1)).
Now we continue to:
rest <- List((b,1))
So we are yielding List((char, count)) ++ rest which is: List(('a', 1)) ++ List((b,1)) which is: List(('a', 1), ('b', 1)).
Now we continue to:
rest <- List((b,2))
So we are yielding List((char, count)) ++ rest which is: List(('a', 1)) ++ List((b,2)) which is: List(('a', 1), ('b', 2)). The aggregated result so far is:
List(List(('a', 1)), List(('a', 1), ('b', 1)), List(('a', 1), ('b', 2)))
Now count is going to increase to 2, and we are going to redo the same calculation as in steps 2 and 3, which is going to yield, with the exact same logic:
List(List(('a', 2)), List(('a', 2), ('b', 1)), List(('a', 2), ('b', 2)))
And now (char, max) is going to change to ('b', 2), which will result in (after applying the same logic from steps 2 and 3):
List(List(('b', 1)), List(('b', 2)))
When aggregating it all together with the leading empty list, we get the wanted output.
What really helps to see what I just explained is adding print statements:
def combinations(occurrences: Occurrences): List[Occurrences] = {
  println("Starting with: " + occurrences)
  val result = List() :: (for {
    (char, max) <- occurrences
    count <- 1 to max
    rest <- combinations(occurrences filter { case (c, _) => c > char })
  } yield {
    val result = List((char, count)) ++ rest
    println("Occurrences are: " + occurrences + " Result is: " + result)
    result
  })
  println("Done with: " + occurrences + " results are: " + result)
  result
}
Then the call println("Done: " + combinations(List(('a', 2), ('b', 2)))) results in:
Starting with: List((a,2), (b,2))
Starting with: List((b,2))
Starting with: List()
Done with: List() results are: List(List())
Occurrences are: List((b,2)) Result is: List((b,1))
Starting with: List()
Done with: List() results are: List(List())
Occurrences are: List((b,2)) Result is: List((b,2))
Done with: List((b,2)) results are: List(List(), List((b,1)), List((b,2)))
Occurrences are: List((a,2), (b,2)) Result is: List((a,1))
Occurrences are: List((a,2), (b,2)) Result is: List((a,1), (b,1))
Occurrences are: List((a,2), (b,2)) Result is: List((a,1), (b,2))
Starting with: List((b,2))
Starting with: List()
Done with: List() results are: List(List())
Occurrences are: List((b,2)) Result is: List((b,1))
Starting with: List()
Done with: List() results are: List(List())
Occurrences are: List((b,2)) Result is: List((b,2))
Done with: List((b,2)) results are: List(List(), List((b,1)), List((b,2)))
Occurrences are: List((a,2), (b,2)) Result is: List((a,2))
Occurrences are: List((a,2), (b,2)) Result is: List((a,2), (b,1))
Occurrences are: List((a,2), (b,2)) Result is: List((a,2), (b,2))
Starting with: List()
Done with: List() results are: List(List())
Occurrences are: List((a,2), (b,2)) Result is: List((b,1))
Starting with: List()
Done with: List() results are: List(List())
Occurrences are: List((a,2), (b,2)) Result is: List((b,2))
Done with: List((a,2), (b,2)) results are: List(List(), List((a,1)), List((a,1), (b,1)), List((a,1), (b,2)), List((a,2)), List((a,2), (b,1)), List((a,2), (b,2)), List((b,1)), List((b,2)))
Done: List(List(), List((a,1)), List((a,1), (b,1)), List((a,1), (b,2)), List((a,2)), List((a,2), (b,1)), List((a,2), (b,2)), List((b,1)), List((b,2)))

Combine map, reduceByKey and another map

Data is a collection of tuples in the format: (group, number)
data.map(a => (a._1, (a._2, 1)))
  .reduceByKey((a, b) => (a._1 * b._1, a._2 + b._2))
  .map(a => (a._1, pow(a._2._1, 1.0 / a._2._2)))
As a total newbie in Spark: what is the provided code doing? Can you explain this code to me?
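Read step by step, the pipeline appears to compute the geometric mean per group: each number is paired with a count of 1, reduceByKey multiplies the numbers and sums the counts within each group, and the final map takes the count-th root of the product. A minimal sketch of the same computation on plain Scala collections (Scala 2.13's groupMapReduce stands in for reduceByKey; the sample data is made up):
import scala.math.pow

// hypothetical sample data: (group, number)
val data = Seq(("a", 2.0), ("a", 8.0), ("b", 3.0))

val result = data
  .map { case (group, number) => (group, (number, 1)) }      // pair each number with a count of 1
  .groupMapReduce(_._1)(_._2) { case ((p1, c1), (p2, c2)) =>
    (p1 * p2, c1 + c2)                                       // multiply the numbers, sum the counts
  }
  .map { case (group, (product, count)) =>
    (group, pow(product, 1.0 / count))                       // count-th root of the product
  }
// result: Map(a -> 4.0, b -> 3.0)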

How to group a list of tuples sorted by first element into two lists containing overlapping and non overlapping tuples

I have a sorted list of tuples (sorted by the first element), say for example:
[(1, 6), (5, 9), (6, 8), (11, 12), (16, 19)]
I need to split the list into a list of overlapping and a list of non-overlapping tuples. So the output for the above list would be:
overlapping: [(1, 6), (5, 9), (6, 8)]
non-overlapping: [(11, 12), (16, 19)]
I am trying to use foldLeft, but I am not sure if it is possible that way:
.foldLeft(List[(Long, Long)]()) { (list, tuple) => list match {
  case Nil => List(tuple)
  case head :: tail =>
    if (head._2 >= tuple._1) {
      // Not sure what my logic should be
    } else {
      // Not sure
    }
}}
Input: [(1, 6), (5, 9), (6, 8), (11, 12), (16, 19)]
Output: [(1, 6), (5, 9), (6, 8)] and [(11, 12), (16, 19)]
Here's what I've understood: you want to find each tuple in the input whose Longs are the bounds of a range (so I can use Range, by the way), such that the range doesn't contain any Long from another tuple in the input.
Here's my suggestion:
Seq((1L, 6L), (5L, 9L), (6L, 8L), (11L, 12L), (16L, 19L))
  .map { case (start, end) => start to end }
  .foldLeft(Set[(Long, Long)]() -> Set[(Long, Long)]()) {
    case ((overlapping, nonoverlapping), range) =>
      (overlapping ++ nonoverlapping).find { case (start, end) =>
        range.contains(start) || range.contains(end) || (start to end).containsSlice(range)
      }.fold(overlapping -> (nonoverlapping + (range.start -> range.end))) { matchedTuple =>
        (overlapping + matchedTuple + (range.start -> range.end), nonoverlapping - matchedTuple)
      }
  }
It may not work for tuples like (6, 6) or (10, 0), because they produce single-element or empty ranges; you have to decide how to handle such limit cases if you need them.
Hope it helps.
I agree with Dima that this question is unclear. It's important to note that the approach above will also fail because you return a single list, not one list of overlapping intervals and one of non-overlapping intervals.
A possible approach to this problem -- especially if you're set on using foldLeft -- would be to do something like this:
ls.foldLeft((List[(Int, Int)](), List[(Int, Int)]()))((a, b) => (a, b) match {
  case ((Nil, _), (h1, t1)) => (a._1 ::: List((h1, t1)), a._2)
  case ((head :: tail, _), (h2, t2)) if head._2 >= h2 => (a._1 ::: List((h2, t2)), a._2)
  case ((head :: tail, _), (h2, t2)) => (a._1, a._2 ::: List((h2, t2)))
})
Of course, if we don't address the problem of having several non-overlapping subsets of overlapping intervals, this solution also fails.
I'd say, find overlapping ones first, and then compute the rest. This will do it in linear time.
import scala.annotation.tailrec

@tailrec
def findOverlaps(
  ls: List[(Int, Int)],
  boundary: Int = Int.MinValue,
  out: List[(Int, Int)] = Nil
): List[(Int, Int)] = ls match {
  case (a, b) :: tail if a < boundary =>
    findOverlaps(tail, b max boundary, (a, b) :: out)
  case _ :: Nil | Nil => out.reverse
  case (a, b) :: (c, d) :: tail if b > c =>
    findOverlaps(ls.tail, b max boundary, (a, b) :: out)
  case _ :: tail => findOverlaps(tail, boundary, out)
}

val overlaps = findOverlaps(ls)
val nonOverlaps = ls.filterNot(overlaps.toSet)
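For the input from the question this produces the expected split (a quick check, with ls taken from the question):
val ls = List((1, 6), (5, 9), (6, 8), (11, 12), (16, 19))
// overlaps    == List((1,6), (5,9), (6,8))
// nonOverlaps == List((11,12), (16,19))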

Map within filter in Spark

How can I filter within a map?
Example :
val test1 = sc.parallelize(Array(('a', (1, Some(4))), ('b', (2, Some(5))),
  ('c', (3, Some(6))), ('d', (0, None))))
What I want:
Array(('a', (1, Some(4))), ('b', (2, Some(5))), ('c', (3, Some(6))),
  ('d', (613, None)))
What I tried (I've changed the 0 to 613):
val test2 = test1.filter(value => value._2._1 == 0)
  .mapValues(value => (613, value._2))
But it returns only:
Array(('d', (613, None)))
Use map with pattern matching:
test1.map {
  case (x, (0, y)) => (x, (613, y))
  case z => z
}.collect
// res2: Array[(Char, (Int, Option[Int]))] = Array((a,(1,Some(4))), (b,(2,Some(5))), (c,(3,Some(6))), (d,(613,None)))
test1.map {
  case (a, (0, b)) => (a, (613, b))
  case other => other
}
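The filter in the original attempt drops every row that does not match the predicate, which is why only the ('d', (613, None)) row survived; map with a pattern match visits all rows and rewrites just the matching ones. If you prefer to leave the keys untouched, mapValues on the pair RDD takes the same pattern match applied to the values only (a sketch along the same lines):
test1.mapValues {
  case (0, y) => (613, y) // rewrite the zero marker
  case other  => other    // keep every other value as-is
}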

Aggregate RDD values per key

I have an RDD with a (key, value) structure: (someKey, (measure1, measure2)). I grouped by the key and now I want to aggregate the values for each key.
val RDD1: RDD[(String, (Int, Int))]
RDD1.groupByKey()
the result I need is:
key: avg(measure1), avg(measure2), max(measure1), max(measure2), min(measure1), min(measure2), count(*)
First of all, avoid groupByKey! You should use aggregateByKey or combineByKey; we will use aggregateByKey here. This function transforms the values for each key: RDD[(K, V)] => RDD[(K, U)]. It needs a zero value of type U and knowledge of how to merge within a partition, (U, V) => U, and across partitions, (U, U) => U. I simplified your example a little bit and want to get: key: avg(measure1), avg(measure2), min(measure1), min(measure2), count(*)
val rdd1 = sc.parallelize(List(("a", (11, 1)), ("a", (12, 3)), ("b", (10, 1))))
rdd1
  .aggregateByKey((0.0, 0.0, Int.MaxValue, Int.MaxValue, 0))(
    {
      case ((sum1, sum2, min1, min2, count1), (v1, v2)) =>
        (sum1 + v1, sum2 + v2, v1 min min1, v2 min min2, count1 + 1)
    },
    {
      case ((sum1, sum2, min1, min2, count),
            (otherSum1, otherSum2, otherMin1, otherMin2, otherCount)) =>
        (sum1 + otherSum1, sum2 + otherSum2,
         min1 min otherMin1, min2 min otherMin2, count + otherCount)
    }
  )
  .map {
    case (k, (sum1, sum2, min1, min2, count1)) =>
      (k, (sum1 / count1, sum2 / count1, min1, min2, count1))
  }
  .collect()
giving
(a,(11.5,2.0,11,1,2)), (b,(10.0,1.0,10,1,1))
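The maxima from the original question fit the same pattern: widen the zero value with two Int.MinValue slots and track max alongside min. A sketch extending the answer above to the full requested output (avg, avg, max, max, min, min, count):
rdd1
  .aggregateByKey((0.0, 0.0, Int.MaxValue, Int.MaxValue, Int.MinValue, Int.MinValue, 0))(
    {
      // fold one value (v1, v2) into the running accumulator of its partition
      case ((sum1, sum2, min1, min2, max1, max2, count), (v1, v2)) =>
        (sum1 + v1, sum2 + v2, v1 min min1, v2 min min2, v1 max max1, v2 max max2, count + 1)
    },
    {
      // merge the accumulators of two partitions
      case ((s1, s2, mn1, mn2, mx1, mx2, c), (t1, t2, on1, on2, ox1, ox2, d)) =>
        (s1 + t1, s2 + t2, mn1 min on1, mn2 min on2, mx1 max ox1, mx2 max ox2, c + d)
    }
  )
  .mapValues {
    case (sum1, sum2, min1, min2, max1, max2, count) =>
      (sum1 / count, sum2 / count, max1, max2, min1, min2, count)
  }
  .collect()
// (a,(11.5,2.0,12,3,11,1,2)), (b,(10.0,1.0,10,1,10,1,1))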