Steam OpenId and Play Framework - scala

I am having trouble using Steam as an OpenID provider. Everything works fine until the callback to my site is made: I see the Steam login page and I can log in with my user, but when the callback executes I get an exception.
I use Play 2.2 and Scala. The code is very similar to the one found in the Play docs:
def loginPost = Action.async { implicit request =>
  OpenID.redirectURL("http://steamcommunity.com/openid",
      routes.Application.openIDCallback.absoluteURL(),
      realm = Option("http://mydomain.com/"))
    .map(url => Redirect(url))
    .recover { case error => Redirect(routes.Application.login) }
}

def openIDCallback = Action.async { implicit request =>
  OpenID.verifiedId.map(info => Ok(info.id + "\n" + info.attributes))
    .recover {
      case error =>
        println(error.getMessage()) // prints null
        Redirect(routes.Application.login)
    }
}
Stacktrace:
Internal server error, for (GET) [/steam/login?openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=error&openid.error=Invalid+claimed_id+or+identity] ->
play.api.Application$$anon$1: Execution exception[[BAD_RESPONSE$: null]]
at play.api.Application$class.handleError(Application.scala:293) ~[play_2.10.jar:2.2.1]
at play.api.DefaultApplication.handleError(Application.scala:399) [play_2.10.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$12$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:165) [play_2.10.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$12$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:162) [play_2.10.jar:2.2.1]
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33) [scala-library-2.10.3.jar:na]
at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:185) [scala-library-2.10.3.jar:na]
Caused by: play.api.libs.openid.Errors$BAD_RESPONSE$: null
at play.api.libs.openid.Errors$BAD_RESPONSE$.<clinit>(OpenIDError.scala) ~[play_2.10.jar:2.2.1]
at play.api.libs.openid.OpenIDClient.verifiedId(OpenID.scala:111) ~[play_2.10.jar:2.2.1]
at play.api.libs.openid.OpenIDClient.verifiedId(OpenID.scala:92) ~[play_2.10.jar:2.2.1]
at controllers.Application$$anonfun$openIDCallback$1.apply(Application.scala:29) ~[classes/:2.2.1]
at controllers.Application$$anonfun$openIDCallback$1.apply(Application.scala:28) ~[classes/:2.2.1]
at play.api.mvc.Action$.invokeBlock(Action.scala:357) ~[play_2.10.jar:2.2.1]
I see the error message openid.error=Invalid+claimed_id+or+identity in the returned URL, but couldn't find anything related to it.
What am I missing? Thanks.

It's because the Play Framework OpenID classes are not properly generating the redirect URL. Print out the value of the url variable from this line in your code:
.map(url => Redirect(url))
It most likely looks something like this:
https://steamcommunity.com/openid/login?openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0
&openid.mode=checkid_setup
&openid.claimed_id=http%3A%2F%2Fsteamcommunity.com%2Fopenid
&openid.identity=http%3A%2F%2Fsteamcommunity.com%2Fopenid
&openid.return_to=http%3A%2F%2Fwww.mydomain.com%2Fsteam%2Flogin
&openid.realm=http%3A%2F%2Fwww.mydomain.com
This is incorrect per the OpenID 2.0 spec, specifically at http://openid.net/specs/openid-authentication-2_0.html#discovered_info:
If the end user entered an OpenID Provider (OP) Identifier, there is no Claimed Identifier. For the purposes of making OpenID Authentication requests, the value "http://specs.openid.net/auth/2.0/identifier_select" MUST be used as both the Claimed Identifier and the OP-Local Identifier when an OP Identifier is entered.
Based on that, the generated redirect url variable should be:
https://steamcommunity.com/openid/login?openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0
&openid.mode=checkid_setup
&openid.claimed_id=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select
&openid.identity=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select
&openid.return_to=http%3A%2F%2Fwww.mydomain.com%2Fsteam%2Flogin
&openid.realm=http%3A%2F%2Fwww.mydomain.com
I've written up an issue for this on the Play Framework issue tracker:
https://github.com/playframework/playframework/issues/3740
In the meantime, as a temporary hack/fix, you can use any number of string-replacement techniques on your url variable to set the openid.claimed_id and openid.identity parameters to the correct http://specs.openid.net/auth/2.0/identifier_select value.
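For example, here is a minimal sketch of that workaround (the fixSteamRedirectUrl helper is hypothetical, and it assumes the URL shape shown above):

// Hypothetical helper: rewrite the claimed_id/identity parameters to the
// identifier_select value required by the OpenID 2.0 spec.
def fixSteamRedirectUrl(url: String): String = {
  val identifierSelect = "http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select"
  url.replaceAll("openid\\.claimed_id=[^&]*", "openid.claimed_id=" + identifierSelect)
     .replaceAll("openid\\.identity=[^&]*", "openid.identity=" + identifierSelect)
}

You would then redirect with Redirect(fixSteamRedirectUrl(url)) instead of Redirect(url).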

Related

How to resolve akka-http issues?

I'm new to Akka HTTP. I added the following route:
path(urlpath / "messages") {
  post {
    decodeRequest {
      withoutSizeLimit {
        entity(asSourceOf[Message]) { source =>
          val storeToDb = Flow[Message].map[Future[Message]](message => (service ask message).mapTo[Message])
          val sendToProviderFlow = Flow[Future[Message]].map[Unit](message => sendToJasminProvider(message))
          val futureResponse = source
            .via(storeToDb)
            .via(sendToProviderFlow)
            .runWith(Sink.ignore)
            .map(_ => "Message Received")
          complete(futureResponse)
        }
      }
    }
  }
}
When I try to run the above route, I receive a couple of errors that I do not know how to fix.
Issues:
Accept error: could not accept new connection java.io.IOException: Too many open files
[1] dead letters encountered. If this is not an expected behavior, this logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
akka.http.scaladsl.model.EntityStreamException: Entity stream truncation
Regarding the first error: it seems that you are opening too many connections, and probably not closing them somewhere. Unix uses file descriptors for both sockets and files, which is why the limit shows up as "Too many open files".
Regarding the second error: your actor (service) is most likely dead. Try to debug your actors' lifecycle, starting with enabling
akka.actor.debug.lifecycle = on
For more details, visit this page:
https://doc.akka.io/docs/akka/current/logging.html#auxiliary-logging-options
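For reference, enabling that in application.conf looks roughly like this (a minimal sketch; lifecycle events are only visible at DEBUG log level):

akka {
  loglevel = "DEBUG"
  actor {
    debug {
      # log actor lifecycle events (starts, stops, restarts)
      lifecycle = on
    }
  }
}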
Does sendToJasminProvider have the type Future[Message] => Unit? If that function has a type like Future[Message] => Future[T], then that is a problem.
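If so, one way to keep those futures inside the stream (a sketch of our own, not from the original answer; the parallelism value is arbitrary) is to resolve each ask with mapAsync instead of mapping over unresolved Futures:

import akka.NotUsed
import akka.pattern.ask
import akka.stream.scaladsl.Flow

// Resolve each ask before emitting downstream: a failed or timed-out Future
// then fails the stream instead of ending up as dead letters. Assumes the
// same implicit ask Timeout as in the original route.
val storeToDb: Flow[Message, Message, NotUsed] =
  Flow[Message].mapAsync(parallelism = 4)(message => (service ask message).mapTo[Message])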

How to write to HDFS using spark programming API if I have authentication details?

I need to write to an external HDFS cluster whose authentication details are available for both simple and Kerberos authentication. For the sake of simplicity, let's assume we are dealing with simple authentication.
This is what I have:
External HDFS cluster connection details (host, port)
Authentication details (user for simple auth)
HDFS location where files need to be written (hdfs://host:port/loc)
Also, other details like format, etc.
Please note the Spark user is not the same as the user specified for HDFS auth.
Now, using the Spark programming API, this is what I am trying to do:
val hadoopConf = new Configuration()
hadoopConf.set("fs.defaultFS", fileSystemPath)
hadoopConf.set("hadoop.job.ugi", userName)

val jConf = new JobConf(hadoopConf)
jConf.setUser(user)
jConf.set("user.name", user)
jConf.setOutputKeyClass(classOf[NullWritable])
jConf.setOutputValueClass(classOf[Text])
jConf.setOutputFormat(classOf[TextOutputFormat[NullWritable, Text]])

outputDStream.foreachRDD(r => {
  val rdd = r.mapPartitions { iter =>
    val text = new Text()
    iter.map { x =>
      text.set(x.toString)
      println(x.toString)
      (NullWritable.get(), text)
    }
  }
  val rddCount = rdd.count()
  if (rddCount > 0) {
    rdd.saveAsHadoopFile(config.outputPath, classOf[NullWritable], classOf[Text],
      classOf[TextOutputFormat[NullWritable, Text]], jConf)
  }
})
Here, I was assuming that if we pass a JobConf with the correct details, it should be used for authentication, and the write should be done as the user specified in the JobConf.
However, the write still happens as the Spark user ("root"), irrespective of the auth details present in the JobConf ("hdfs" as the user). Below is the exception that I get:
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/spark-deploy/out/_temporary/0":hdfs:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1682)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1665)
at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3900)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:978)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
at org.apache.hadoop.ipc.Client.call(Client.java:1475)
at org.apache.hadoop.ipc.Client.call(Client.java:1412)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy40.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:558)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy41.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3000)
... 45 more
Please let me know if there are any suggestions.
This is probably more a comment than an answer, but as it is too long I'll put it here. I haven't tried this because I have no environment to test it in. Please try it and let me know if it works (and if it doesn't, I'll remove this answer).
Looking a bit into the code, it looks like DFSClient creates a proxy using createProxyWithClientProtocol that uses UserGroupInformation.getCurrentUser() (I haven't traced the createHAProxy branch down, but I suspect the same logic there). This info is then sent to the server for authentication.
It means that you need to change what UserGroupInformation.getCurrentUser() returns in the context of your particular call. This is what UserGroupInformation.doAs is supposed to do, so you just need to get a proper UserGroupInformation instance. In the case of simple authentication, UserGroupInformation.createRemoteUser might actually work.
So I suggest trying something like this:
...
val rddCount = rdd.count()
if (rddCount > 0) {
  val remoteUgi = UserGroupInformation.createRemoteUser("hdfsUserName")
  // doAs expects a java.security.PrivilegedExceptionAction, so wrap the
  // save call in one rather than passing a plain function
  remoteUgi.doAs(new PrivilegedExceptionAction[Unit] {
    override def run(): Unit =
      rdd.saveAsHadoopFile(config.outputPath, classOf[NullWritable], classOf[Text],
        classOf[TextOutputFormat[NullWritable, Text]], jConf)
  })
}

ReactiveMongo/Play: `LastError: DatabaseException['<none>']`, while db still updates

Weird problem: while my Play application tries to insert/update records in some MongoDB collections using ReactiveMongo, the operation seems to fail with a mysterious message, but the record actually does get inserted/updated.
More info:
Inserting into the problematic collections from the mongo console works well
Reading from all collections works well
Reading from and writing to other collections in the same db works well
Writing to the problematic collections used to work.
Error message is:
play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[LastError: DatabaseException['<none>']]]
at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:280)
at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:206)
at play.core.server.netty.PlayRequestHandler$$anonfun$2$$anonfun$apply$1.applyOrElse(PlayRequestHandler.scala:100)
at play.core.server.netty.PlayRequestHandler$$anonfun$2$$anonfun$apply$1.applyOrElse(PlayRequestHandler.scala:99)
at scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:344)
at scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:343)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at play.api.libs.iteratee.Execution$trampoline$.execute(Execution.scala:70)
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
Caused by: reactivemongo.api.commands.LastError: DatabaseException['<none>']
Using ReactiveMongo 0.11.14, Play 2.5.4, Scala 2.11.7, MongoDB 3.4.0.
Thanks!
UPDATE - The mystery thickens!
Based on @Yaroslav_Derman's answer, I added a .recover clause, like so:
collectionRef.flatMap(c =>
  c.update(BSONDocument("_id" -> publicationWithId.id.get),
    publicationWithId.asInstanceOf[PublicationItem], upsert = true))
  .map(wr => {
    Logger.warn("Write Result: " + wr)
    Logger.warn("wr.inError: " + wr.inError)
    Logger.warn("*************")
    publicationWithId
  }).recover({
    case de: DatabaseException => {
      Logger.warn("DatabaseException: " + de.getMessage())
      Logger.warn("Cause: " + de.getCause())
      Logger.warn("Code: " + de.code)
      publicationWithId
    }
  })
The recover clause does get called. Here's the log:
[info] application - Saving pub t3
[warn] application - *************
[warn] application - Saving publication Publication(Some(BSONObjectID("5848101d7263468d01ff390d")),t3,2016-12-07,desc,auth,filename,None)
[info] application - Resolving database...
[info] application - Resolving database...
[warn] application - DatabaseException: DatabaseException['<none>']
[warn] application - Cause: null
[warn] application - Code: None
So: no cause, no code, the message is "'<none>'", but still an error. What gives?
I tried to move to 0.12, but that caused some compilation errors across the app, plus I'm not sure it would solve the problem. So I'd like to understand what's wrong first.
UPDATE #2:
Migrated to reactive-mongo 0.12.0. Problem persists.
Problem solved by downgrading to MongoDB 3.2.8. It turns out ReactiveMongo 0.12.0 is not compatible with MongoDB 3.4.
Thanks everyone who looked into this.
For Play ReactiveMongo 0.12.0 you can do it like this:
def appsDb = reactiveMongoApi.database.map(_.collection[JSONCollection](DesktopApp.COLLECTION_NAME))

def save(id: String, user: User, body: JsValue) = {
  val version = (body \ "version").as[String]
  val app = DesktopApp(id, version, user)
  appsDb.flatMap(
    _.insert(app)
      .map(_ => app)
      .recover(processError)
  )
}

def processError[T]: PartialFunction[Throwable, T] = {
  case ex: DatabaseException if ex.code.exists(Set(10054, 10056, 10058, 10107, 13435, 13436)) =>
    // Custom exception which is processed in the error handler
    throw new AppException(ResponseCode.ALREADY_EXISTS, "Entity already exists")
  case ex: DatabaseException if ex.code.exists(Set(10057, 15845, 16550)) =>
    // Custom exception which is processed in the error handler
    throw new AppException(ResponseCode.ENTITY_NOT_FOUND, "Entity not found")
  case ex: Exception =>
    // Custom exception which is processed in the error handler
    throw new InternalServerErrorException(ex.getMessage)
}
You can also add logging in the processError method.
LastError was deprecated in 0.11 and replaced by WriteResult.
LastError does not actually mean an error; it can represent a successful result. You need to check the inError property of the LastError object to detect whether it is a real error. As I see it, the '<none>' error message makes it quite likely that this is not an error.
Here is the example "how it was in 0.10": http://reactivemongo.org/releases/0.10/documentation/tutorial/write-documents.html
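In that style, checking the write result looks roughly like this (a sketch; the collection and document names are assumed):

collection.insert(document).map { lastError =>
  // inError is the authoritative flag; a '<none>' message alone does not
  // mean the write failed
  if (lastError.inError) Logger.error("Write failed: " + lastError)
  else Logger.info("Write succeeded")
}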

Jersey/JAX-RS client throwing exception on HTTP GET

Please note: even though I'm using Groovy here, I think my exception is really about using the Jersey/JAX-RS API correctly.
Given the following code:
ClientConfig clientConfig = new DefaultClientConfig()
clientConfig.getFeatures().put(JSONConfiguration.FEATURE_POJO_MAPPING, Boolean.TRUE)
Client jerseyClient = Client.create(clientConfig)
WebResource webResource = jerseyClient.resource("http://localhost:8080/location/")
Long id = 5L
Address address = webResource.path("address").path(id)
    .accept(MediaType.APPLICATION_JSON)
    .get(Long)
I am getting the following exception:
groovy.lang.MissingMethodException: No signature of method: com.sun.jersey.api.client.WebResource.path() is applicable for argument types: (java.lang.Long) values: [5]
Possible solutions: path(java.lang.String), put(), wait(long), put(com.sun.jersey.api.client.GenericType), put(java.lang.Class), put(java.lang.Object)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:55)
at org.codehaus.groovy.runtime.callsite.PojoMetaClassSite.call(PojoMetaClassSite.java:46)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
at com.me.myapp.Driver.run(Driver.groovy:43)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
<rest omitted for brevity>
I am trying to hit the following REST endpoint:
GET http://localhost:8080/location/address/{id}
Where am I going wrong?
You're calling the path method with a Long, but it can only take a String: the exception's "Possible solutions" list only offers path(java.lang.String), and the offending value [5] is your id. Convert it first, e.g. path(id.toString()) or path(String.valueOf(id)).

Play 2.1 Scala SQLException Connection Timed out waiting for a free available connection

I have been working on this issue for quite a while now and I cannot find a solution...
A web app built with Play Framework 2.2.1, using an H2 db (for dev) and a simple model package.
I am trying to implement a REST JSON endpoint, and the code works... but only once per server instance.
def createOtherModel() = Action(parse.json) { request =>
  request.body \ "name" match {
    case _: JsUndefined =>
      BadRequest(Json.obj("error" -> true,
        "message" -> "Could not match name =(")).as("application/json")
    case name: JsValue =>
      if (name == JsString("content")) {
        request.body \ "value" match {
          case _: JsUndefined =>
            BadRequest(Json.obj("error" -> true,
              "message" -> "Could not match value =(")).as("application/json")
          case value: JsValue =>
            // this breaks the second time
            val session = ThinkingSession.dummy
            val json = Json.obj(
              "content" -> value,
              "thinkingSession" -> session.id
            )
            Ok(Json.obj("content" -> json)).as("application/json")
        }
      } else {
        BadRequest(Json.obj("error" -> true,
          "message" -> "Name was not content =(")).as("application/json")
      }
  }
}
So basically I read the JSON, echo the "value" value, create a model object, and send back its id.
The ThinkingSession.dummy function does this:
def all(): List[ThinkingSession] = {
  // Tried explicitly closing the connection, no difference
  //val conn = DB.getConnection()
  //try {
  //  DB.withConnection { implicit conn =>
  //    SQL("select * from thinking_session").as(ThinkingSession.DBParser *)
  //  }
  //} finally {
  //  conn.close()
  //}
  DB.withConnection { implicit conn =>
    SQL("select * from thinking_session").as(ThinkingSession.DBParser *)
  }
}

def dummy: ThinkingSession = all().head
So this should do a SELECT * FROM thinking_session, build a list of model objects from the result, and return the first element of that list.
This works fine the first time after server start, but the second time I get a
play.api.Application$$anon$1: Execution exception[[SQLException: Timed out waiting for a free available connection.]]
at play.api.Application$class.handleError(Application.scala:293) ~[play_2.10.jar:2.2.1]
at play.api.DefaultApplication.handleError(Application.scala:399) [play_2.10.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$2$$anonfun$applyOrElse$3.apply(PlayDefaultUpstreamHandler.scala:261) [play_2.10.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$2$$anonfun$applyOrElse$3.apply(PlayDefaultUpstreamHandler.scala:261) [play_2.10.jar:2.2.1]
at scala.Option.map(Option.scala:145) [scala-library.jar:na]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$2.applyOrElse(PlayDefaultUpstreamHandler.scala:261) [play_2.10.jar:2.2.1]
Caused by: java.sql.SQLException: Timed out waiting for a free available connection.
at com.jolbox.bonecp.DefaultConnectionStrategy.getConnectionInternal(DefaultConnectionStrategy.java:88) ~[bonecp.jar:na]
at com.jolbox.bonecp.AbstractConnectionStrategy.getConnection(AbstractConnectionStrategy.java:90) ~[bonecp.jar:na]
at com.jolbox.bonecp.BoneCP.getConnection(BoneCP.java:553) ~[bonecp.jar:na]
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:131) ~[bonecp.jar:na]
at play.api.db.DBApi$class.getConnection(DB.scala:67) ~[play-jdbc_2.10.jar:2.2.1]
at play.api.db.BoneCPApi.getConnection(DB.scala:276) ~[play-jdbc_2.10.jar:2.2.1]
My application.conf (db section)
db.default.driver=org.h2.Driver
db.default.url="jdbc:h2:file:database/[my_db]"
db.default.logStatements=true
db.default.idleConnectionTestPeriod=5 minutes
db.default.connectionTestStatement="SELECT 1"
db.default.maxConnectionAge=0
db.default.connectionTimeout=10000
Initially the only thing set in my config was the connection, and the error occurred; I added all the other settings while reading up on the issue on the web.
What is interesting is that with the H2 in-memory db it works once after server start and fails after that, while with the H2 file-system db it only works once, regardless of the server instance.
Can anyone give me some insight into this issue? I have found some stuff on a BoneCP problem and tried upgrading to 0.8.0-rc1, but nothing changed... I am at a loss =(
Try to set a maxConnectionAge and an idle timeout.
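For example, in application.conf (a sketch; the values are illustrative, not tuned recommendations):

db.default.maxConnectionAge=30 minutes
db.default.idleMaxAge=10 minutes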
Turns out the error was somewhere else entirely... it was a good ol' stack overflow; haven't seen one in a long time. I tried down-voting my own question, but that's not possible. ^^