I have a CSV file that contains (FileName, ColumnName, Rule, RuleDetails) as headers.
As per the RuleDetails, I need to get the count of values in the column (INSTALLDATE) that do not match the RuleDetails date format.
I have to pass ColumnName and RuleDetails dynamically.
I tried the code below:
from pyspark.sql.functions import *

DateFields = []
for rec in df_tabledef.collect():
    if rec["Rule"] == "DATEFORMAT":
        DateFields.append(rec["Columnname"])
        DateFormatValidvalues = [str(x) for x in rec["Ruledetails"].split(",") if x]

DateFormatString = ",".join([str(elem) for elem in DateFormatValidvalues])
DateColsString = ",".join([str(elem) for elem in DateFields])

output = (
    df_tabledata.select(DateColsString)
    .where(
        DateColsString
        not in (datetime.strptime(DateColsString, DateFormatString), "DateFormatString")
    )
    .count()
)
display(output)
The expected output is the count of records that do not match the given date format.
For example, if 4 out of 10 records are not in (YYYY-MM-DD) format, then the count should be 4.
I got the below error message when I ran the above code.
I'm new to Scala. My requirement is to delete particular column records from almost 100 tables, so I read the data from a CSV (which is my source), selected that particular column, and converted it into a List.
val csvDF = spark.read.format("csv").option("header", "true").option("delimiter", ",").option("inferSchema", true).option("escape", "\"").option("multiline", "true").option("quotes", "").load(inputPath)
val badrecords = csvDF.select("corrput_id").collect().map(_(0)).toList
Then I read the metadata from the Postgres schema, which gives me the list of all the tables. Here I wrote the two for loops below, which work fine, but the performance was way too bad. How can I improve this?
val query = "(select table_name from information_schema.tables where table_schema = '" + db + "' and table_name not in " + excludetables + ") temp "
val tablesdf = spark.read.jdbc(jdbcUrl, table = query, connectionProperties)
val tablelist = tablesdf.select($"table_name").collect().map(_(0)).toList
println(tablelist)
for (i <- tablelist) {
  val s2 = dbconnection.createStatement()
  for (j <- badrecords) {
    s2.execute("delete from " + db + "." + i + " where corrput_id = '" + j + "' ")
  }
  s2.close()
}
Thanks in advance
If you're looking to improve performance, in my opinion you should focus on optimizing your queries instead! Executing one query per row of a table WILL hurt your performance; something like
" where corrput_id IN " + bad_records.map(str => s" '$str' ").mkString("(", ",", ")")
would be better. As a second point, why don't you just use the Spark APIs? Calling collect on a DataFrame and then processing it in a single thread is a bit like awaiting a Future (you are not using the power you actually have). Spark is made for this kind of work and can do it efficiently, I believe.
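As a rough sketch of the first point (reusing the names from the question - dbconnection, db, tablelist, badrecords - and assuming the list of bad ids is small enough to inline into a single IN clause):

// build the IN clause once, then issue one DELETE per table instead of one per bad id
val inClause = badrecords.map(id => s"'$id'").mkString("(", ", ", ")")
for (table <- tablelist) {
  val stmt = dbconnection.createStatement()
  try {
    stmt.execute(s"delete from $db.$table where corrput_id in $inClause")
  } finally {
    stmt.close()
  }
}

A prepared statement with bound parameters would be safer against quoting issues, but the main win is reducing the round trips from tables × ids to one per table.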
How can I execute lengthy, multiline Hive queries in Spark SQL? Like the query below:
val sqlContext = new HiveContext (sc)
val result = sqlContext.sql ("
select ...
from ...
");
Use """ instead, so for example
val results = sqlContext.sql ("""
select ....
from ....
""");
or, if you want to format code, use:
val results = sqlContext.sql ("""
|select ....
|from ....
""".stripMargin);
You can use triple quotes at the start/end of the SQL code, or (in PySpark) a backslash at the end of each line.
val results = sqlContext.sql ("""
create table enta.scd_fullfilled_entitlement as
select *
from my_table
""");
results = sqlContext.sql (" \
create table enta.scd_fullfilled_entitlement as \
select * \
from my_table \
")
val query = """(SELECT
a.AcctBranchName,
c.CustomerNum,
c.SourceCustomerId,
a.SourceAccountId,
a.AccountNum,
c.FullName,
c.LastName,
c.BirthDate,
a.Balance,
case when [RollOverStatus] = 'Y' then 'Yes' Else 'No' end as RollOverStatus
FROM
v_Account AS a left join v_Customer AS c
ON c.CustomerID = a.CustomerID AND c.Businessdate = a.Businessdate
WHERE
a.Category = 'Deposit' AND
c.Businessdate= '2018-11-28' AND
isnull(a.Classification,'N/A') IN ('Contractual Account','Non-Term Deposit','Term Deposit')
AND IsActive = 'Yes' ) tmp """
It is worth noting that the length is not the issue, just how you write it. For this you can use """ as Gaweda suggested, or simply build the string in a variable, e.g. with a StringBuilder. For example:
val selectElements = Seq("a","b","c")
val builder = StringBuilder.newBuilder
builder.append("select ")
builder.append(selectElements.mkString(","))
builder.append(" where d<10")
val results = sqlContext.sql(builder.toString())
In addition to the above ways, you can use the below-mentioned way as well:
val results = sqlContext.sql("select .... " +
  "from .... " +
  "where .... " +
  "group by ....");
Write your sql inside triple quotes, like """ sql code """
df = spark.sql(f""" select * from table1 """)
This is the same for Scala Spark and PySpark.
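For instance, the equivalent Scala call (assuming the same hypothetical table1) is:

val df = spark.sql(""" select * from table1 """)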
I am currently facing a weird problem.
Whenever a user types something into the search bar that starts with an 's', the request crashes.
What follows is a sample of the SQL code generated by the search engine I programmed for this project.
SELECT Profiles.ProfileID,Profiles.Nickname,Profiles.Email,Profiles.Status,Profiles.Role,Profiles.Credits, Profiles.Language,Profiles.Created,Profiles.Modified,Profiles.Cover,Profiles.Prename, Profiles.Lastname,Profiles.BirthDate,Profiles.Country,Profiles.City,Profiles.Phone,Profiles.Website, Profiles.Description, Profiles.Affair,Scores.AvgScore, coalesce(Scores.NumScore, 0) AS NumScore, coalesce(Scores.NumScorer, 0) AS NumScorer, (
(SELECT count(*)
FROM Likes
JOIN Comments using(CommentID)
WHERE Comments.ProfileID = Profiles.ProfileID)) NumLikes, (
(SELECT count(*)
FROM Likes
JOIN Comments using(CommentID)
WHERE Comments.ProfileID = Profiles.ProfileID) /
(SELECT coalesce(nullif(count(*), 0), 1)
FROM Comments
WHERE Comments.ProfileID = Profiles.ProfileID)) AvgLikes, Movies.MovieID, Movies.Caption, Movies.Description, Movies.Language, Movies.Country, Movies.City, Movies.Kind, Movies.Integration,
(SELECT cast(least(25 + 5.000000 * round((75 * ((0.500000 * SIZE/1024.0/1024.0 * 0.001250) + (0.500000 * Duration/60.0 * 0.050000))) / 5.000000), 100) AS signed int)
FROM Streams
WHERE MovieID = Movies.MovieID
AND Tag = "main"
AND ENCODING = "mp4") AS ChargeMain,
(SELECT cast(least(25 + 10.000000 * round((75 * ((0.200000 * SIZE/1024.0/1024.0 * 0.001000) + (0.800000 * Duration/60.0 * 0.016667))) / 10.000000), 100) AS signed int)
FROM Streams
WHERE MovieID = Movies.MovieID
AND Tag = "notes"
AND ENCODING = "mp4") AS ChargeNotes,
(SELECT coalesce(count(*), 0)
FROM Views
WHERE Views.MovieID = Movies.MovieID
AND Tag = "main") AS MainViews,
(SELECT coalesce(count(*), 0)
FROM Views
WHERE Views.MovieID = Movies.MovieID
AND Tag = "notes") AS NotesViews,
(SELECT coalesce(count(*), 0)
FROM Views
WHERE Views.MovieID = Movies.MovieID
AND Tag = "trailer") AS TrailerViews,
(SELECT coalesce(greatest(
(SELECT coalesce(count(*), 0)
FROM Views
WHERE Views.MovieID = Movies.MovieID
AND Tag = "trailer"),
(SELECT coalesce(count(*), 0)
FROM Views
WHERE Views.MovieID = Movies.MovieID
AND Tag = "main")), 0)) AS MaxMainTrailerViews,
(SELECT avg(Score)
FROM Scores
WHERE Scores.MovieID = Movies.MovieID) AS Score,
(SELECT coalesce(group_concat(cast(Score AS signed int)), "")
FROM Scores
WHERE Scores.MovieID = Movies.MovieID) AS Scores, Movies.Cover, Movies.Locked, Movies.Created, Movies.Modified,
(SELECT coalesce(group_concat(name separator ','),"")
FROM Tags
JOIN TagLinks using(TagID)
WHERE TagLinks.MovieID = Movies.MovieID
ORDER BY name ASC) AS Tags,
(SELECT count(*)
FROM Purchases
WHERE MovieID = Movies.MovieID
AND ProfileID = %s
AND TYPE = "main") AS PurchasedMain,
(SELECT count(*)
FROM Purchases
WHERE MovieID = Movies.MovieID
AND ProfileID = %s
AND TYPE = "notes") AS PurchasedNotes,
(SELECT count(*)
FROM Watchlist
WHERE MovieID = Movies.MovieID
AND ProfileID = %s) AS Watchlist,
(SELECT count(*)
FROM Scores
WHERE MovieID = Movies.MovieID
AND ProfileID = %s) AS Rated,
(SELECT count(*)
FROM Comments
WHERE MovieID = Movies.MovieID
AND Deleted IS NULL) AS Comments,
(SELECT sum(Duration)
FROM Streams
WHERE Streams.MovieID = Movies.MovieID
AND Streams.Tag IN ("main",
"notes")
AND Streams.ENCODING = "mp4") AS Runtime,
(SELECT cast(count(*) AS signed int)
FROM Movies
JOIN Profiles ON Profiles.ProfileID = Movies.ProfileID
WHERE ((Movies.Locked = 0
AND
(SELECT count(*)
FROM Streams
WHERE Streams.MovieID = Movies.MovieID
AND Streams.Status <> "ready") = 0
AND Profiles.Status = "active")
OR (%s = 1)
OR (Movies.ProfileID = %s))
AS Movies,
(SELECT cast(ceil(count(*) / %s) AS signed int)
FROM Movies
JOIN Profiles using(ProfileID)
WHERE ((Movies.Locked = 0
AND
(SELECT count(*)
FROM Streams
WHERE Streams.MovieID = Movies.MovieID
AND Streams.Status <> "ready") = 0
AND Profiles.Status = "active")
OR (%s = 1)
OR (Movies.ProfileID = %s))
AS Pages
FROM Movies
JOIN Profiles using(ProfileID)
LEFT JOIN
(SELECT Movies.ProfileID AS ProfileID,
avg(Scores.Score) AS AvgScore,
count(*) AS NumScore,
count(DISTINCT Scores.ProfileID) AS NumScorer
FROM Scores
JOIN Movies using(MovieID)
GROUP BY Movies.ProfileID) AS Scores using(ProfileID)
WHERE ((Movies.Locked = 0
AND
(SELECT count(*)
FROM Streams
WHERE Streams.MovieID = Movies.MovieID
AND Streams.Status <> "ready") = 0
AND Profiles.Status = "active")
OR (%s = 1)
OR (Movies.ProfileID = %s))
ORDER BY Score DESC LIMIT %s,
%s
After countless hours of investigating and comparing possible user inputs with the generated SQL code, I finally nailed the problem down to some really strange behaviour of the JDBC driver, which I consider a serious bug - yet I am not sure:
I spent another few hours trying to reproduce the problem with as little SQL code as possible and ended up with the following:
SQL("""select * from Movies where "s" like "%s" and MovieID = {a} """)
.on('a -> 1).as(scalar[Long]*)
[SQLException: Parameter index out of range (1 > number of parameters, which is 0).]
SQL("""select * from Movies where "s" like "%samuel" and MovieID = {a} """)
.on('a -> 1).as(scalar[Long]*)
[SQLException: Parameter index out of range (1 > number of parameters, which is 0).]
SQL("""select * from Movies where "s" like "%flower" and MovieID = {a} """)
.on('a -> 1).as(scalar[Long]*)
[OK]
SQL("""select * from Movies where "s" like "%samuel" and MovieID = 1 """)
.on('a -> 1).as(scalar[Long]*)
[OK]
SQL("""select * from Movies where "s" like "%s" and MovieID = "{a}" """)
.on('a -> 1).as(scalar[Long]*)
[OK]
SQL("""select * from Movies where MovieID = {a} and "s" like "%s" """)
.on('a -> 1).as(scalar[Long]*)
[OK]
I believe I see a pattern here:
Whenever there is a %s sequence (quoted or unquoted) anywhere in the SQL code, followed by a non-quoted named parameter with an arbitrary name at an arbitrary distance
from the %s sequence, JDBC (or Anorm) crashes. The crash seems to occur in JDBC; however, it is also possible that Anorm submits invalid values to JDBC.
Do you guys have any suggestions?
I think I have found a lasting solution to the problem in the meantime. Since my SQL generator needs to stay very flexible, I need a way to pass along SQL fragments with their corresponding parameters without evaluating them right away. Instead, the generator must be able to assemble and compose various SQL fragments into bigger fragments at any time - just as it does now - but now with the accompanying, not-yet-evaluated parameters. I came up with this prototype:
DB.withConnection("betterdating") { implicit connection =>
case class SqlFragment(Fragment: String, Args: NamedParameter*)
val aa = SqlFragment("select MovieID from Movies")
val bb = SqlFragment("join Profiles using(ProfileID)")
val cc = SqlFragment("where Caption like \"%{a}\" and MovieID = {b}", 'a -> "s", 'b -> 5)
// combine all fragments
val v1 = SQL(Seq(aa, bb, cc).map(_.Fragment).mkString(" "))
.on((aa.Args ++ bb.Args ++ cc.Args): _*)
// better solution
val v2 = Seq(aa, bb, cc).unzip(frag => (frag.Fragment, frag.Args)) match {
case (frags, args) => SQL(frags.mkString(" ")).on(args.flatten: _*)
}
// works
println(v1.as(scalar[Long].singleOpt))
println(v2.as(scalar[Long].singleOpt))
}
It seems to work great! :-)
I then rewrote the last part of the freetext filter as follows:
// finally transform the expression
// list into a single sql fragment
expressions.zipWithIndex.map { case (expr, index) =>
s"""
(concat(Movies.Caption, " ", Movies.Description, " ", Movies.Kind, " ", Profiles.Nickname, " ",
(select coalesce(group_concat(Tags.Name), "") from Tags join TagLinks using (TagID)
where TagLinks.MovieID = Movies.MovieID)) like "%{expr$index}%")
""" -> (s"expr$index" -> expr)
}.unzip match { case (frags, args) => SqlFragment(frags.mkString(" and "), args.flatten: _*) }
What do you think?
This is how it is being implemented right now:
/**
* This private helper method transforms a content filter string into an sql expression
* for searching within movies, owners, kinds and tags.
* @author Samuel Lörtscher
*/
private def contentFilterToSql(value: String) = {
// trim and clean the parametric value of any possible anomalies
// (those include strange spacing and non closed quotes)
val cleaned = value.trim match {
case trimmed if trimmed.count(_ == '"') % 2 != 0 =>
if (trimmed.last == '"') trimmed.dropRight(1).trim
else trimmed + '"'
case trimmed =>
trimmed
};
// transform the cleaned value into a list of expressions
// (words between quotes are considered being one expression)
// empty expressions between quotes are being removed
// expressions will contain no quotes as they are being stripped during evaluation -
// thus counter measures for sql injection should be obsolete
// (we put an empty space at the end because it makes the lexer algorithm much
// more efficient as it will not need to check for end of file in every iteration)
val expressions = (cleaned + " ").foldLeft((List[String](), "", false)) { case ((list, expr, quoted), char) =>
// perform the lexer operation for the current character
if (char == ' ' && !quoted) (expr :: list, "", false)
else if (char == '"') (expr :: list, "", !quoted)
else (list, expr + char, quoted)
}._1.filter(_.nonEmpty).map(_.trim)
// finally transform the expression
// list into a variable length sql condition statement
expressions.map { expr =>
s"""
(concat(Movies.Caption, " ", Movies.Description, " ", Movies.Kind, " ", Profiles.Nickname, " ",
(select coalesce(group_concat(Tags.Name), "")
from Tags join TagLinks using (TagID) where TagLinks.MovieID = Movies.MovieID)) like "%$expr%")
"""
}.mkString(" and ")
}
Since the number of search expressions is variable, I cannot use Anorm arguments here. :-/
I found a simple solution now, but I am not exactly happy being forced to apply such crappy hacks.
Since putting a %s character sequence in the SQL seems to trigger the bug, I looked for a way to produce the same semantics without directly passing that character sequence. I finally ended up replacing like "%$expr%" with like concat("%", "$expr%"). Since concat is evaluated by the MySQL server engine BEFORE like, it puts the original pattern back together before like processes it - and the %s sequence is never transmitted through the Anorm/JDBC data processors.
// finally transform the expression
// list into a variable length sql condition statement
// (the concat("%", "$expr%") trick is required due to a bug in either Anorm or JDBC
// which results in a crash whenever %s is submitted)
expressions.map { expr =>
s"""
(concat(Movies.Caption, " ", Movies.Description, " ", Movies.Kind, " ", Profiles.Nickname, " ",
(select coalesce(group_concat(Tags.Name), "")
from Tags join TagLinks using (TagID) where TagLinks.MovieID = Movies.MovieID)) like concat("%", "$expr%"))
"""
}.mkString(" and ")
I have the below columns in my table: [col1, col2, key1, col3, txn_id, dw_last_updated]. Of these, txn_id and key1 are the primary key columns. In my dataset I can have multiple records for a combination of (txn_id, key1). From those records, I need to pick the latest one based on dw_last_updated.
I'm using the logic below. I'm consistently hitting memory issues, and I believe it's partly because of groupByKey()... Is there a better alternative for this?
case class Fact(col1: Int,
col2: Int,
key1: String,
col3: Int,
txn_id: Double,
dw_last_updated: Long)
sc.textFile(s3path).map { row =>
  val parts = row.split("\t")
  Fact(parts(0).toInt,
    parts(1).toInt,
    parts(2),
    parts(3).toInt,
    parts(4).toDouble,
    parts(5).toLong)
}.map { t => ((t.txn_id, t.key1), t) }.groupByKey(512).map {
  case ((txn_id, key1), sequence) =>
    val newrecord = sequence.maxBy {
      case Fact(col1, col2, key1, col3, txn_id, dw_last_updated) => dw_last_updated
    }
    (newrecord.col1 + "\t" + newrecord.col2 + "\t" + newrecord.key1 +
      "\t" + newrecord.col3 + "\t" + newrecord.txn_id + "\t" + newrecord.dw_last_updated)
}
Appreciate your thoughts / suggestions...
rdd.groupByKey collects all values per key, requiring enough memory to hold the whole sequence of values for a key on a single node. Its use is discouraged. See [1].
Given that we are interested in only one value per key, the one with the max dw_last_updated, a more memory-efficient way is to use rdd.reduceByKey, where the reduce function picks the later of two records for the same key, using that timestamp as the discriminant.
rdd.reduceByKey{case (record1,record2) => max(record1, record2)}
Applied to your case, it should look like this:
case class Fact(...)
object Fact {
def parse(s:String):Fact = ???
def maxByTs(f1:Fact, f2:Fact):Fact = if (f1.dw_last_updated.toLong > f2.dw_last_updated.toLong) f1 else f2
}
val factById = sc.textFile(s3path).map{row => val fact = Fact.parse(row); ((fact.txn_id, fact.key1),fact)}
val maxFactById = factById.reduceByKey(Fact.maxByTs)
Note that I've defined the utility operations on the Fact companion object to keep the code tidy. I also advise giving a named variable to each transformation step, or logical group of steps; it makes the program more readable.
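If you still need the tab-separated output from your original snippet, one final map over the values gives it back (a sketch based on the Fact fields above):

// format each surviving record the same way as the original groupByKey version did
val outputLines = maxFactById.values.map { f =>
  Seq(f.col1, f.col2, f.key1, f.col3, f.txn_id, f.dw_last_updated).mkString("\t")
}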