Spark Job Server - running Spark SQL - spark-jobserver

It looks like spark.jobserver.context.SQLContextFactory is deprecated. Could somebody help me with an example of how to run Spark SQL with the latest (0.8) version of Spark Job Server?
Thank you.
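For reference, a minimal sketch of what such a job might look like against the newer API, assuming the SparkSessionJob trait from job-server-extras and the spark.jobserver.context.SessionContextFactory context factory described in the 0.8-era spark-jobserver docs (names and signatures should be verified against your exact version):

    import com.typesafe.config.Config
    import org.apache.spark.sql.SparkSession
    import org.scalactic._
    import spark.jobserver.SparkSessionJob
    import spark.jobserver.api.{JobEnvironment, SingleProblem, ValidationProblem}

    object RunSqlJob extends SparkSessionJob {
      type JobData = String          // the SQL text to execute
      type JobOutput = Array[String] // result rows rendered as strings

      // Runs the validated SQL statement on the shared SparkSession
      def runJob(spark: SparkSession, runtime: JobEnvironment, sql: JobData): JobOutput =
        spark.sql(sql).collect().map(_.toString)

      // Requires a "sql" parameter in the job config
      def validate(spark: SparkSession, runtime: JobEnvironment, config: Config):
          JobData Or Every[ValidationProblem] =
        if (config.hasPath("sql")) Good(config.getString("sql"))
        else Bad(One(SingleProblem("No sql config param")))
    }

The context would then be started with context-factory = spark.jobserver.context.SessionContextFactory (again, an assumption based on the 0.8 documentation).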

Related

unsupported frontend protocol 1234.5680: server supports 2.0 to 3.0

I am running Confluence 7.9.1 and Postgres 10. When we start only the Postgres container, it doesn't throw the log below:
unsupported frontend protocol 1234.5680: server supports 2.0 to 3.0
But when we start Confluence 7.9.1, the Postgres container throws the log above.
Does anyone know how we can resolve this? We tried PGGSSENCMODE=disable in the environment, but it didn't help.
Regards,
Samurai
We resolved this by replacing postgresql-42.2.16.jar with the newer postgresql-42.2.18.jar,
as suggested here: https://jira.atlassian.com/browse/CONFSERVER-60515
Thank you for your support.
For those who are using postgresql-42.2.16.jar or earlier and want to quiet this error without upgrading the JDBC jar, you can use the following option in the connection string (note the case sensitivity):
gssEncMode=disable
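For example, appended to a typical Confluence JDBC URL (host and database name here are placeholders):

    jdbc:postgresql://dbhost:5432/confluence?gssEncMode=disable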

DataStage v9.1 - run user-defined SQL query file using ODBC connector

I want to execute multiple lines of DDL and DML commands from a file in DataStage.
I have used the ODBC connector with the write mode set to user-defined SQL, and the SQL statements are available in the file.
But the connector stage is not executing the file. Any guidance would be greatly appreciated.
Thanks
If you can provide more details on how you are using the DDL & DML statements in the file, along with any warning messages, your ODBC configuration, etc., it will help others offer suggestions, and it will save you time in resolving it as well.

What is the way to connect to Hive using Scala code and execute a query against Hive?

I checked out this link but did not find anything useful:
HiveClient Documentation
From plain Scala you can use the Hive JDBC driver: https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-JDBC.
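A minimal sketch of that route, assuming HiveServer2 is running on the default port 10000 (host, database, and credentials are placeholders) and the hive-jdbc jar and its dependencies are on the classpath:

    import java.sql.DriverManager

    object HiveJdbcExample {
      def main(args: Array[String]): Unit = {
        // Register the HiveServer2 JDBC driver
        Class.forName("org.apache.hive.jdbc.HiveDriver")

        val conn = DriverManager.getConnection(
          "jdbc:hive2://hive-host:10000/default", "user", "")
        try {
          val stmt = conn.createStatement()
          val rs = stmt.executeQuery("SELECT key, value FROM src LIMIT 10")
          while (rs.next()) {
            println(s"${rs.getInt(1)}\t${rs.getString(2)}")
          }
        } finally {
          conn.close()
        }
      }
    }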
Another option is to use Spark's Hive context.
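And a sketch of the Spark route for pre-2.0 versions, assuming a Spark build with Hive support and hive-site.xml on Spark's classpath (in Spark 2.x you would use SparkSession.builder().enableHiveSupport() instead):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    val sc = new SparkContext(new SparkConf().setAppName("hive-query"))
    val hiveContext = new HiveContext(sc)

    // Queries run against the Hive metastore configured in hive-site.xml
    hiveContext.sql("SELECT key, value FROM src LIMIT 10").collect().foreach(println)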

Java MongoDB driver can't parse query

I'm developing a project with Hibernate OGM 5 and MongoDB 3, but some queries cannot be parsed. I tested this query in the shell and it works. What's wrong with this query?
com.mongodb.util.JSONParseException:
db.Tree.update({'_id':2},{'$inc':{'totalUserCount':NumberInt(-1)}},{})
^
com.mongodb.util.JSONParser.parse(JSON.java:230)
com.mongodb.util.JSONParser.parse(JSON.java:155)
com.mongodb.util.JSON.parse(JSON.java:92)
com.mongodb.util.JSON.parse(JSON.java:73)
org.hibernate.ogm.datastore.mongodb.query.parsing.nativequery.impl.MongoDBQueryDescriptorBuilder.build(MongoDBQueryDescriptorBuilder.java:71)
FYI, the parsing issue will be fixed in the next release of OGM.
Note that it will support NumberLong but not NumberInt, as NumberInt is not supported by the MongoDB Java driver: https://jira.mongodb.org/browse/JAVA-2185.
The use of functions like NumberInt is not supported at the moment.
I've created an issue for it: https://hibernate.atlassian.net/browse/OGM-1027
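A possible workaround until then (my assumption, not something from the OGM team): a plain numeric literal parses fine with com.mongodb.util.JSON, so dropping the NumberInt wrapper avoids the error:

    db.Tree.update({'_id':2},{'$inc':{'totalUserCount':-1}},{})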

Spark SQL build for Hive?

I have downloaded the Spark 1.3.1 release, package type "Pre-built for Hadoop 2.6 and later".
Now I want to run the Scala code below using the Spark shell, so I followed these steps:
1. bin/spark-shell
2. val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
3. sqlContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
Now the problem is that if I verify it in the Hue browser with
select * from src;
then I get a
table not found exception,
which means the table was not created. How do I configure Hive with the Spark shell to make this work? I want to use Spark SQL, and I also need to read and write data from Hive.
I heard that we need to copy the hive-site.xml file somewhere into the Spark directory.
Can someone please explain the steps for configuring Spark SQL with Hive?
Thanks
Tushar
Indeed, the hive-site.xml direction is correct. Take a look at https://spark.apache.org/docs/latest/sql-programming-guide.html#hive-tables.
Also, it sounds like you want to create a Hive table from Spark; for that, look at "Saving to Persistent Tables" in the same document.
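Concretely, the usual recipe (paths here are an assumption; adjust to your install) is to copy hive-site.xml from your Hive conf directory into $SPARK_HOME/conf/ so spark-shell talks to your real metastore instead of a local one, and then in spark-shell:

    val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)

    // DDL now goes to the metastore defined in hive-site.xml,
    // so the table becomes visible to Hue/Hive as well
    sqlContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")

    // "Saving to Persistent Tables": persists a DataFrame as a Hive table
    val df = sqlContext.sql("SELECT * FROM src")
    df.saveAsTable("src_copy")   // Spark 1.3.x API; later versions use df.write.saveAsTable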