Fetching a file as a String response using Apache CXF - rest

I have a simple REST client using the Apache CXF library. Here is the code snippet:
val wc = WebClient.create(address(host, port) + "/" + resource).`type`("text/plain")
requestParam match {
  case Some(reqParams) => reqParams.foreach((param: (String, String)) => {
    wc.query(param._1, param._2)
  })
  case None => wc
}
println("Actual url is " + wc.getCurrentURI)
wc.get(classOf[String])
What I'm trying to fetch is a simple file named test.txt. I want to read it back as a plain String, which is what I do on the last line of the code snippet above. But I get the following error:
javax.ws.rs.NotAuthorizedException was thrown.
javax.ws.rs.NotAuthorizedException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.cxf.jaxrs.client.AbstractClient.convertToWebApplicationException(AbstractClient.java:462)
at org.apache.cxf.jaxrs.client.WebClient.doInvoke(WebClient.java:860)
at org.apache.cxf.jaxrs.client.WebClient.doInvoke(WebClient.java:831)
at org.apache.cxf.jaxrs.client.WebClient.invoke(WebClient.java:394)
at org.apache.cxf.jaxrs.client.WebClient.get(WebClient.java:573)
How can I return the response from a GET endpoint as a plain String?

I was a bit stupid. My original code actually worked and returned the String I wanted:
wc.get(classOf[String])
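For reference, here is a minimal end-to-end sketch of fetching a text resource as a String with CXF's WebClient. The URL is a placeholder, and the basic-auth overload is only relevant if the server actually requires credentials (which the NotAuthorizedException above might suggest):

import org.apache.cxf.jaxrs.client.WebClient

// Placeholder address; substitute your own host, port and resource path.
val wc = WebClient.create("http://localhost:8080/files/test.txt").accept("text/plain")
val body: String = wc.get(classOf[String])
println(body)

// If the server requires HTTP Basic authentication, WebClient.create also has an
// overload taking a username and password (the last argument is an optional config location):
// val authWc = WebClient.create("http://localhost:8080/files/test.txt", "user", "secret", null)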

Related

How to write to HDFS using spark programming API if I have authentication details?

I need to write to an external HDFS cluster whose authentication details are available for both simple and Kerberos authentication. For the sake of simplicity, let's assume we are dealing with simple authentication.
This is what I have:
External HDFS cluster connection details (host, port)
Authentication details (user for simple auth)
HDFS location where files need to be written (hdfs://host:port/loc)
Also, other details like format, etc.
Please note that the Spark user is not the same as the user specified for HDFS auth.
Now, using the spark programming API, this is what I am trying to do:
val hadoopConf = new Configuration()
hadoopConf.set("fs.defaultFS", fileSystemPath)
hadoopConf.set("hadoop.job.ugi", userName)

val jConf = new JobConf(hadoopConf)
jConf.setUser(user)
jConf.set("user.name", user)
jConf.setOutputKeyClass(classOf[NullWritable])
jConf.setOutputValueClass(classOf[Text])
jConf.setOutputFormat(classOf[TextOutputFormat[NullWritable, Text]])

outputDStream.foreachRDD(r => {
  val rdd = r.mapPartitions { iter =>
    val text = new Text()
    iter.map { x =>
      text.set(x.toString)
      println(x.toString)
      (NullWritable.get(), text)
    }
  }

  val rddCount = rdd.count()
  if (rddCount > 0) {
    rdd.saveAsHadoopFile(config.outputPath, classOf[NullWritable], classOf[Text],
      classOf[TextOutputFormat[NullWritable, Text]], jConf)
  }
})
Here, I was assuming that if we pass a JobConf with the correct details, it would be used for authentication and the write would be done as the user specified in the JobConf.
However, the write still happens as the Spark user ("root") irrespective of the auth details present in the JobConf ("hdfs" as the user). Below is the exception that I get:
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/spark-deploy/out/_temporary/0":hdfs:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1682)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1665)
at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3900)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:978)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
at org.apache.hadoop.ipc.Client.call(Client.java:1475)
at org.apache.hadoop.ipc.Client.call(Client.java:1412)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy40.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:558)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy41.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3000)
... 45 more
Please let me know if there are any suggestions.
This is probably more of a comment than an answer, but as it is too long I'm putting it here. I haven't tried this because I have no environment to test it in. Please try it and let me know if it works (and if it doesn't, I'll remove this answer).
Looking a bit into the code, it appears that DFSClient creates a proxy using createProxyWithClientProtocol, which uses UserGroupInformation.getCurrentUser() (I haven't traced the createHAProxy branch down, but I suspect the same logic applies there). This info is then sent to the server for authentication.
That means you need to change what UserGroupInformation.getCurrentUser() returns in the context of your particular call. This is what UserGroupInformation.doAs is designed to do, so you just need to get hold of a proper UserGroupInformation instance. In the case of simple authentication, UserGroupInformation.createRemoteUser might actually work.
So I suggest trying something like this:
...
val rddCount = rdd.count()
if (rddCount > 0) {
  val remoteUgi = UserGroupInformation.createRemoteUser("hdfsUserName")
  remoteUgi.doAs(new PrivilegedExceptionAction[Unit] {
    override def run(): Unit =
      rdd.saveAsHadoopFile(config.outputPath, classOf[NullWritable], classOf[Text],
        classOf[TextOutputFormat[NullWritable, Text]], jConf)
  })
}
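(The snippet above assumes that org.apache.hadoop.security.UserGroupInformation and java.security.PrivilegedExceptionAction are imported; doAs expects a PrivilegedAction or PrivilegedExceptionAction rather than a plain Scala function, hence the anonymous class.)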

HTTP4S client. How to get the exact request and response body

I am writing a small http4s client
val client = SimpleHttp1Client()
val uri = Uri.fromString(requestUrl).valueOr(throw _)
val task = POST(uri, UrlForm("username" -> userName, "password" -> password)).map { request =>
  println("request: " + request.body)
  request
}
try {
  val response = client.expect[String](task).unsafePerformSync
  println("token: " + response)
  response
} catch {
  case e: Exception => println(e.getMessage); "BadToken"
}
The output looks like this:
[info] Running com.researchnow.nova.shield.NovaShieldSetup
[info] Emit(Vector(ByteVector(44 bytes, 0x757365726e616d653d616268737269766173746176612670617373776f72643d41726)))
[info] Failed: unexpected HTTP status: 400 Bad Request
[info] token: BadToken
How can I convert the binary request body to a String? I want to see the body and headers in clear text.
I had a conversation with the http4s team on Gitter and found the answer. Since Gitter discussions don't show up in Google searches, I am putting the answer here:
val loggedReq = req.copy(body = request.body.observe(scalaz.stream.io.stdOutBytes))
println(loggedReq)
This prints all the headers. If we then consume loggedReq, we get the entire body that is posted:
loggedReq.as[String].run
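Tying this back to the client code in the question, a sketch of logging the outgoing request before performing it might look like this (same scalaz-stream-era http4s API as above; the names reuse the question's client and task):

// Sketch only: echo the request body bytes to stdout as they are consumed,
// then perform the call exactly as before.
val loggedTask = task.map { req =>
  req.copy(body = req.body.observe(scalaz.stream.io.stdOutBytes))
}
val response = client.expect[String](loggedTask).unsafePerformSync
println("token: " + response)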

Jersey/JAX-RS client throwing exception on HTTP GET

Please note: even though I'm using Groovy here, I think my exception is really about using the Jersey/JAX-RS API correctly.
Given the following code:
ClientConfig clientConfig = new DefaultClientConfig()
clientConfig.getFeatures().put(JSONConfiguration.FEATURE_POJO_MAPPING, Boolean.TRUE)
Client jerseyClient = Client.create(clientConfig)
WebResource webResource = jerseyClient.resource("http://localhost:8080/location/")
Long id = 5L
Address address = webResource.path("address").path(id)
        .accept(MediaType.APPLICATION_JSON)
        .get(Long)
I am getting the following exception:
groovy.lang.MissingMethodException: No signature of method: com.sun.jersey.api.client.WebResource.path() is applicable for argument types: (java.lang.Long) values: [5]
Possible solutions: path(java.lang.String), put(), wait(long), put(com.sun.jersey.api.client.GenericType), put(java.lang.Class), put(java.lang.Object)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:55)
at org.codehaus.groovy.runtime.callsite.PojoMetaClassSite.call(PojoMetaClassSite.java:46)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
at com.me.myapp.Driver.run(Driver.groovy:43)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
<rest omitted for brevity>
I am trying to hit the following REST endpoint:
GET http://localhost:8080/location/address/{id}
Where am I going wrong?
You're calling the path method with a Long, but WebResource.path only accepts a String; that is exactly what the MissingMethodException is telling you, since the only matching signature it lists is path(java.lang.String). Convert the id to a String first, for example path(String.valueOf(id)) or path(id.toString()). Once that is fixed, you will likely also want to request the Address type in the final get call rather than Long.

Steam OpenId and Play Framework

I am having trouble using Steam as an OpenID provider. Everything works fine up to the callback to my site: I see the Steam login web page and can log in with my user, but when the callback executes I get an exception.
I use Play 2.2 and Scala. The code is very similar to the one found in the Play docs:
def loginPost = Action.async { implicit request =>
  OpenID.redirectURL("http://steamcommunity.com/openid",
    routes.Application.openIDCallback.absoluteURL(),
    realm = Option("http://mydomain.com/"))
    .map(url => Redirect(url))
    .recover { case error => Redirect(routes.Application.login) }
}

def openIDCallback = Action.async { implicit request =>
  OpenID.verifiedId.map(info => Ok(info.id + "\n" + info.attributes))
    .recover {
      case error =>
        println(error.getMessage()) // prints null
        Redirect(routes.Application.login)
    }
}
Stacktrace:
Internal server error, for (GET) [/steam/login?openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=error&openid.error=Invalid+claimed_id+or+identity] ->
play.api.Application$$anon$1: Execution exception[[BAD_RESPONSE$: null]]
at play.api.Application$class.handleError(Application.scala:293) ~[play_2.10.jar:2.2.1]
at play.api.DefaultApplication.handleError(Application.scala:399) [play_2.10.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$12$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:165) [play_2.10.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$12$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:162) [play_2.10.jar:2.2.1]
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33) [scala-library-2.10.3.jar:na]
at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:185) [scala-library-2.10.3.jar:na]
Caused by: play.api.libs.openid.Errors$BAD_RESPONSE$: null
at play.api.libs.openid.Errors$BAD_RESPONSE$.<clinit>(OpenIDError.scala) ~[play_2.10.jar:2.2.1]
at play.api.libs.openid.OpenIDClient.verifiedId(OpenID.scala:111) ~[play_2.10.jar:2.2.1]
at play.api.libs.openid.OpenIDClient.verifiedId(OpenID.scala:92) ~[play_2.10.jar:2.2.1]
at controllers.Application$$anonfun$openIDCallback$1.apply(Application.scala:29) ~[classes/:2.2.1]
at controllers.Application$$anonfun$openIDCallback$1.apply(Application.scala:28) ~[classes/:2.2.1]
at play.api.mvc.Action$.invokeBlock(Action.scala:357) ~[play_2.10.jar:2.2.1]
I can see the error message openid.error=Invalid+claimed_id+or+identity in the returned URL, but I couldn't find anything related to it.
What am I missing? Thanks.
It's because the Play Framework OpenID classes are not properly generating the redirect URL. Print out the value of the url variable from this line in your code:
.map(url => Redirect(url))
It most likely looks something like this:
https://steamcommunity.com/openid/login?openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0
&openid.mode=checkid_setup
&openid.claimed_id=http%3A%2F%2Fsteamcommunity.com%2Fopenid
&openid.identity=http%3A%2F%2Fsteamcommunity.com%2Fopenid
&openid.return_to=http%3A%2F%2Fwww.mydomain.com%2Fsteam%2Flogin
&openid.realm=http%3A%2F%2Fwww.mydomain.com
This is incorrect per the OpenID 2.0 spec, specifically at http://openid.net/specs/openid-authentication-2_0.html#discovered_info:
If the end user entered an OpenID Provider (OP) Identifier, there is no Claimed Identifier. For the purposes of making OpenID Authentication requests, the value "http://specs.openid.net/auth/2.0/identifier_select" MUST be used as both the Claimed Identifier and the OP-Local Identifier when an OP Identifier is entered.
Based on that, the generated redirect url variable should be:
https://steamcommunity.com/openid/login?openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0
&openid.mode=checkid_setup
&openid.claimed_id=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select
&openid.identity=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select
&openid.return_to=http%3A%2F%2Fwww.mydomain.com%2Fsteam%2Flogin
&openid.realm=http%3A%2F%2Fwww.mydomain.com
I've written up an issue for this on the Play Framework issue tracker:
https://github.com/playframework/playframework/issues/3740
In the meantime, as a temporary hack/fix, you can use any number of string-replacement techniques on your url variable to set the openid.claimed_id and openid.identity parameters to the correct http://specs.openid.net/auth/2.0/identifier_select value.
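For example, a rough sketch of that workaround applied to the loginPost action from the question (the replacement strings assume the URL-encoded shape shown above; adjust them to whatever your printed url actually contains):

// Temporary hack: rewrite the two offending parameters before redirecting.
def loginPost = Action.async { implicit request =>
  val identifierSelect = "http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select"
  OpenID.redirectURL("http://steamcommunity.com/openid",
    routes.Application.openIDCallback.absoluteURL(),
    realm = Option("http://mydomain.com/"))
    .map { url =>
      val fixed = url
        .replace("openid.claimed_id=http%3A%2F%2Fsteamcommunity.com%2Fopenid",
          s"openid.claimed_id=$identifierSelect")
        .replace("openid.identity=http%3A%2F%2Fsteamcommunity.com%2Fopenid",
          s"openid.identity=$identifierSelect")
      Redirect(fixed)
    }
    .recover { case error => Redirect(routes.Application.login) }
}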

AKKA remote (with SSL) can't find keystore/truststore files on classpath

I'm trying to configure an Akka SSL connection to use my keystore and truststore files, and I want it to be able to find them on the classpath.
I tried to set application.conf to:
...
remote.netty.ssl = {
  enable = on
  key-store = "keystore"
  key-store-password = "passwd"
  trust-store = "truststore"
  trust-store-password = "passwd"
  protocol = "TLSv1"
  random-number-generator = "AES128CounterSecureRNG"
  enabled-algorithms = ["TLS_RSA_WITH_AES_128_CBC_SHA"]
}
...
This works fine if the keystore and truststore files are in the current directory. In my application these files get packaged into WAR and JAR archives, and because of that I'd like to read them from the classpath.
I tried to use getResource("keystore") in application.conf as described here without any luck. Config reads it literally as a string.
I also tried to parse the config from a String and force it to read the value:
val conf: Config = ConfigFactory parseString (s"""
...
"${getClass.getClassLoader.getResource("keystore").getPath}"
...""")
In this case it finds the proper path on the classpath, file://some_dir/target/scala-2.10/server_2.10-1.1-one-jar.jar!/main/server_2.10-1.1.jar!/keystore, which is exactly where the file is located (inside the jar). However, the underlying Netty SSL transport can't open the file given this path, and I get:
Oct 03, 2013 1:02:48 PM org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink
WARNING: Failed to initialize an accepted socket.
45a13eb9-6cb1-46a7-a789-e48da9997f0fakka.remote.RemoteTransportException: Server SSL connection could not be established because key store could not be loaded
at akka.remote.netty.NettySSLSupport$.constructServerContext$1(NettySSLSupport.scala:113)
at akka.remote.netty.NettySSLSupport$.initializeServerSSL(NettySSLSupport.scala:130)
at akka.remote.netty.NettySSLSupport$.apply(NettySSLSupport.scala:27)
at akka.remote.netty.NettyRemoteTransport$PipelineFactory$.defaultStack(NettyRemoteSupport.scala:74)
at akka.remote.netty.NettyRemoteTransport$PipelineFactory$$anon$1.getPipeline(NettyRemoteSupport.scala:67)
at akka.remote.netty.NettyRemoteTransport$PipelineFactory$$anon$1.getPipeline(NettyRemoteSupport.scala:67)
at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.registerAcceptedChannel(NioServerSocketPipelineSink.java:277)
at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.run(NioServerSocketPipelineSink.java:242)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.io.FileNotFoundException: file:/some_dir/server/target/scala-2.10/server_2.10-1.1-one-jar.jar!/main/server_2.10-1.1.jar!/keystore (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:97)
at akka.remote.netty.NettySSLSupport$.constructServerContext$1(NettySSLSupport.scala:118)
... 10 more
I wonder if there is any way to configure this in Akka without implementing a custom SSL transport. Maybe I should configure Netty in code?
Obviously I can hardcode the path or read it from an environment variable, but I would prefer a more flexible classpath solution.
I decided to look at akka.remote.netty.NettySSLSupport, at the code the exception is thrown from, and here it is:
def initializeServerSSL(settings: NettySettings, log: LoggingAdapter): SslHandler = {
  log.debug("Server SSL is enabled, initialising ...")

  def constructServerContext(settings: NettySettings, log: LoggingAdapter, keyStorePath: String, keyStorePassword: String, protocol: String): Option[SSLContext] =
    try {
      val rng = initializeCustomSecureRandom(settings.SSLRandomNumberGenerator, settings.SSLRandomSource, log)
      val factory = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm)
      factory.init({
        val keyStore = KeyStore.getInstance(KeyStore.getDefaultType)
        val fin = new FileInputStream(keyStorePath)
        try keyStore.load(fin, keyStorePassword.toCharArray) finally fin.close()
        keyStore
      }, keyStorePassword.toCharArray)
      Option(SSLContext.getInstance(protocol)) map { ctx ⇒ ctx.init(factory.getKeyManagers, null, rng); ctx }
    } catch {
      case e: FileNotFoundException ⇒ throw new RemoteTransportException("Server SSL connection could not be established because key store could not be loaded", e)
      case e: IOException ⇒ throw new RemoteTransportException("Server SSL connection could not be established because: " + e.getMessage, e)
      case e: GeneralSecurityException ⇒ throw new RemoteTransportException("Server SSL connection could not be established because SSL context could not be constructed", e)
    }
It looks like the key-store setting must be a plain file path on the local filesystem, because that path goes straight into a FileInputStream.
Any suggestions would be welcome!
I also got stuck on a similar issue and was getting similar errors. In my case I was trying to hit an HTTPS server with self-signed certificates using akka-http; with the following code I was able to get through:
val trustStoreConfig = TrustStoreConfig(None, Some("/etc/Project/keystore/my.cer")).withStoreType("PEM")
val trustManagerConfig = TrustManagerConfig().withTrustStoreConfigs(List(trustStoreConfig))
val badSslConfig = AkkaSSLConfig().mapSettings(s => s.withLoose(s.loose
.withAcceptAnyCertificate(true)
.withDisableHostnameVerification(true)
).withTrustManagerConfig(trustManagerConfig))
val badCtx = Http().createClientHttpsContext(badSslConfig)
Http().superPool[RequestTracker](badCtx)(httpMat)
At the time of writing this question there was no way to do it, AFAIK. I'm closing this question, but I welcome updates if newer versions provide such functionality or if there are other ways to do it.
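One workaround that keeps the files packaged on the classpath (an untested sketch using only standard Java and Typesafe Config APIs, with the same resource names as the config above): copy the keystore and truststore out of the jar into temporary files at startup and point the Akka config at those paths.

import java.nio.file.{Files, StandardCopyOption}
import com.typesafe.config.ConfigFactory

// Sketch only: extract a classpath resource to a temp file and return its path.
def extractToTempFile(resourceName: String): String = {
  val in = getClass.getClassLoader.getResourceAsStream(resourceName)
  require(in != null, s"$resourceName not found on classpath")
  val tmp = Files.createTempFile(resourceName, ".tmp")
  try Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING) finally in.close()
  tmp.toFile.deleteOnExit()
  tmp.toString
}

val config = ConfigFactory.parseString(
  s"""
     |akka.remote.netty.ssl.key-store = "${extractToTempFile("keystore")}"
     |akka.remote.netty.ssl.trust-store = "${extractToTempFile("truststore")}"
   """.stripMargin).withFallback(ConfigFactory.load())

// Then build the ActorSystem from this config, e.g. ActorSystem("server", config).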