How do I collapse an Either in PureScript?

I have a value of type Either String (Either String Int). I would like to collapse it to a value of type Either String Int.
Is there a function provided for this in PureScript?

It is the same as in Haskell: use join (from Control.Bind, re-exported by Prelude), which collapses a nested m (m a) to m a:
import Prelude
import Data.Either
let a = Left "a" :: Either String (Either String Int)
let b = Right (Left "b") :: Either String (Either String Int)
let c = Right (Right 123) :: Either String (Either String Int)
join a -- Left "a"
join b -- Left "b"
join c -- Right 123
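
For comparison with the Scala questions below: Scala's standard library offers the same collapse as Either#flatten (available since Scala 2.12); a minimal sketch:

// The same collapse in Scala 2.12+: Either#flatten removes one level of nesting.
val a: Either[String, Either[String, Int]] = Left("a")
val b: Either[String, Either[String, Int]] = Right(Left("b"))
val c: Either[String, Either[String, Int]] = Right(Right(123))

a.flatten // Left("a")
b.flatten // Left("b")
c.flatten // Right(123)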

Related

overloaded method value select with alternatives

I'm trying to select several columns and cast all of them, but I receive this error:
overloaded method value select with alternatives:
  (col: String, cols: String*)org.apache.spark.sql.DataFrame
  (cols: org.apache.spark.sql.Column*)org.apache.spark.sql.DataFrame
cannot be applied to (org.apache.spark.sql.Column, org.apache.spark.sql.Column, String)
The code is this:
val result = df.select(
col(s"${Constant.CS}_exp.${Constant.DATI_CONTRATTO}.${Constant.NUMERO_CONTRATTO}").cast(IntegerType),
col(s"${Constant.CS}_exp.${Constant.DATI_CONTRATTO}.${Constant.CODICE_PORTAFOGLIO}").cast(IntegerType),
col(s"${Constant.CS}_exp.${Constant.RATEALE}.${Constant.STORIA_DEL_CONTRATTO}"))
The last part of the error message means that the compiler cannot find a select method with a signature that fits your call: select(Column, Column, String).
However, the compiler found 2 possible methods, but they don't fit:
select(col: String, cols: String*)
select(cols: Column*)
(the * means "any number of arguments")
This much I am sure of.
However, I don't understand why you get that error with the code you've given, which is actually select(Column, Column, Column) and fits the select(cols: Column*) signature. For some reason, the compiler considers the last argument to be a String. Maybe some parentheses are misplaced.
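A minimal sketch of the mismatch (the DataFrame and column names are made up for illustration); the static type of the last argument decides which overload can apply:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().master("local").getOrCreate()
val df = spark.range(1).selectExpr("id as a", "id as b", "id as c")

// df.select(col("a"), col("b"), "c") // does not compile: (Column, Column, String)
//                                    // matches neither select(String, String*) nor select(Column*)
val ok = df.select(col("a"), col("b"), col("c")) // compiles: every argument is a Column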
What I do in such cases is split the code to validate the types:
val col1: Column = col(s"${Constant.CS}_exp.${Constant.DATI_CONTRATTO}.${Constant.NUMERO_CONTRATTO}").cast(IntegerType)
val col2: Column = col(s"${Constant.CS}_exp.${Constant.DATI_CONTRATTO}.${Constant.CODICE_PORTAFOGLIO}").cast(IntegerType)
val col3: Column = col(s"${Constant.CS}_exp.${Constant.RATEALE}.${Constant.STORIA_DEL_CONTRATTO}")
val result = df.select(col1, col2, col3)
and check that it compiles.

Compound types vs mixins in Scala

Reading about compound types in Programming Scala, 2nd Edition, I am left with more questions than answers.
When you declare an instance that combines several types, you get a compound type:
trait T1
trait T2
class C
val c = new C with T1 with T2 // c's type: C with T1 with T2
In this case, the type of c is C with T1 with T2. This is an alternative to declaring a type that extends C and mixes in T1 and T2. Note that c is considered a subtype of all three types:
val t1: T1 = c
val t2: T2 = c
val c2: C = c
The question that comes to mind is: why the alternative? If you add something to a language, it is supposed to add some value, otherwise it is useless. So what is the added value of compound types, and how do they compare to mixins, i.e. extends ... with ...?
Mixins and compound types are different notions:
https://docs.scala-lang.org/tour/mixin-class-composition.html
vs.
https://docs.scala-lang.org/tour/compound-types.html
Mixins are traits
trait T1
trait T2
class C
class D extends C with T1 with T2
val c = new D
A special case of that is when an anonymous class is used instead of D:
trait T1
trait T2
class C
val c = new C with T1 with T2 // (*)
Compound types are types
type T = Int with String with A with B with C
The type of c in (*) is a compound type.
The notion of mixins is from the world of classes, inheritance, OOP etc. The notion of compound types is from the world of types, subtyping, type systems, type theory etc.
The authors of "Programming Scala" mean that there is an alternative:
either to introduce D
(then D mixes in two traits, namely T1 and T2, and the type of c is D)
or not
(to use an anonymous class instead of D, in which case the type of c is a compound type).
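
A sketch of where a compound type pays off on its own: you can demand the combination in a method signature without ever declaring a named class that extends it (the names here are illustrative):

trait T1 { def one: Int = 1 }
trait T2 { def two: Int = 2 }
class C

// The parameter type is a compound type: any value that is a C, a T1 and a T2 at once.
def use(x: C with T1 with T2): Int = x.one + x.two

val c = new C with T1 with T2 // anonymous mixin; its type is the compound type
use(c)                        // compiles, because c conforms to C with T1 with T2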

a function that returns multiple values in Scala [duplicate]

This question already has answers here:
Is it possible to have tuple assignment to variables in Scala? [duplicate]
I am a new developer with Spark & Scala, and I want to do an easy thing (I think...):
I have 3 int values
I want to define a function that returns the result of an SQL request (as a DF containing 3 columns)
I want to store the content of each of those 3 columns in my 3 initial variables.
So, my code looks like this:
var a
var b
var c
def myfunction() : (Int, Int, Int) = {
val tmp = spark.sql(""" select col1, col2, col3 from table
LIMIT 1
""")
return (tmp.collect(0)(0), tmp.collect(0)(1), tmp.collect(0)(2))
}
So, the idea is to call my function like this:
a, b, c = myfunction()
I tried a lot of configurations, but I got a different error each time, so I'm confused.
You can just use a destructuring bind. Since your method returns a tuple, you can unpack it using pattern matching:
val (a, b, c) = myfunction()
a, b and c will contain consecutive elements of the tuple.
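Putting it together with the question's code, a sketch (it keeps the question's table and column names, assumes a SparkSession named spark is in scope and that the three columns are integer-typed; note that collect() takes no arguments, so you index into the returned Array[Row] instead):

def myfunction(): (Int, Int, Int) = {
  // collect() returns Array[Row]; take the first row of the single-row result
  val row = spark.sql("select col1, col2, col3 from table LIMIT 1").collect()(0)
  (row.getInt(0), row.getInt(1), row.getInt(2))
}

val (a, b, c) = myfunction() // destructuring bind: a, b and c are vals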

Calculating a variable inside RDD after full outer join in Scala

What I want to do is simple, but I struggle with Scala and RDDs.
The concept is this:
rdd1           rdd2
id count       id count
a  2           a  1
b  1           c  5
               d  3
And the result I am searching for is this:
rdd2
id count
a  3
b  1
c  5
d  3
What I intend to do is perform a full outer join to get common and non-common records, identified by the id field. For now, rdd2 is empty.
rdd1 and rdd2 are:
RDD[(String, org.apache.spark.sql.Row)]
For now, I have the following code:
var rdd3 = rdd1.fullOuterJoin(rdd2).map {
  case (id, left, right) =>
    // TODO
}
How can I calculate that sum between RDDs?
If you are doing a fullOuterJoin, you get the key and two Options passed into the closure (one Option represents the left side, the other one the right side). So the closure could look like this:
val result = rdd1.fullOuterJoin(rdd2).map {
  case (id, (left, right)) =>
    (id, left.getOrElse(0) + right.getOrElse(0))
}
This applies if your RDDs are of type RDD[(String, Int)].
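Since the question's RDDs are actually RDD[(String, Row)], one way to adapt this is to pull the Int out of each Row before joining (assuming the count sits at index 0 of the Row; adjust the index to your schema):

// Extract the count from each Row first, then join plain Ints.
val counts1 = rdd1.mapValues(_.getInt(0))
val counts2 = rdd2.mapValues(_.getInt(0))
val rdd3 = counts1.fullOuterJoin(counts2).map {
  case (id, (left, right)) => (id, left.getOrElse(0) + right.getOrElse(0))
}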

Slick 2 aggregation - how to get a scalar result?

I have a table with an Int column TIME in it:
def time = column[Int]("TIME")
The table is mapped to a custom type. I want to find a maximum time value, i.e. to perform a simple aggregation. The example in the documentation seems easy enough:
val q = coffees.map(_.price)
val q1 = q.min
val q2 = q.max
However, when I do this, the type of q1 and q2 is Column[Option[Int]]. I can call get or getOrElse on this to obtain a result of type Column[Int] (even this seems somewhat surprising to me: is get a member of Column, or is the value converted from Option[Int] to Int and then wrapped in a Column again? Why?), but I am unable to use the scalar value; when I attempt to assign it to an Int, I get an error message saying:
type mismatch;
found : scala.slick.lifted.Column[Int]
required: Int
How can I get the scalar value from the aggregated query?
My guess is that you are not calling the invoker; that's why you get a Column object. Try this:
val q1 = q.min.run
This should return an Option[Int], on which you can then call get or getOrElse.
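
A sketch of the whole round trip under Slick 2.x (the H2 driver, table name, and in-memory database are illustrative; run needs an implicit session in scope):

import scala.slick.driver.H2Driver.simple._

class MyTable(tag: Tag) extends Table[Int](tag, "MY_TABLE") {
  def time = column[Int]("TIME")
  def * = time
}
val myTable = TableQuery[MyTable]
val db = Database.forURL("jdbc:h2:mem:test", driver = "org.h2.Driver")

db.withSession { implicit session =>
  myTable.ddl.create
  // myTable.map(_.time).max has type Column[Option[Int]]; .run executes it as a scalar query
  val maxTime: Option[Int] = myTable.map(_.time).max.run
  val result: Int = maxTime.getOrElse(0) // unwrap on the Scala side, after running
}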