JCoIDocServer with own DestinationDataProvider not working - sapjco3

I want to use my own DestinationDataProvider for a JCoIDocServer.
I have registered my provider with:
Environment.registerDestinationDataProvider
When I call
JCoDestination destination = JCoDestinationManager.getDestination("SAP_DEST_" + connector.name + "_server");
my data provider is called.
But when I use:
JCoIDocServer server = JCoIDoc.getServer("SAP_DEST_" + connector.name + "_server");
my provider is not called (I debugged it), and I get this exception:
com.sap.conn.jco.JCoException: (106) JCO_ERROR_RESOURCE: Server SAP_DEST_TestSap_server does not exist
at com.sap.conn.jco.rt.StandaloneServerFactory.update(StandaloneServerFactory.java:338)
at com.sap.conn.jco.rt.StandaloneServerFactory.getServerInstance(StandaloneServerFactory.java:175)
at com.sap.conn.idoc.jco.JCoIDoc.getServer(JCoIDoc.java:301)
at com.sap.conn.idoc.jco.JCoIDoc$getServer.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
at com.lomnido.service.SapService.$tt__startServer(SapService.groovy:84)
at com.lomnido.service.SapService$_startServer_closure2.doCall(SapService.groovy)
What is the problem here?

It's a simple mistake: in order to get your server, you first have to register a server data provider as well:
com.sap.conn.jco.ext.Environment.registerServerDataProvider(serverDataProvider);
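For completeness, a minimal sketch of the working setup; MyDestinationDataProvider and MyServerDataProvider are hypothetical stand-ins for your own provider implementations:
import com.sap.conn.idoc.jco.JCoIDoc;
import com.sap.conn.idoc.jco.JCoIDocServer;
import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.ext.Environment;

// Register both providers once, before any lookup is made.
Environment.registerDestinationDataProvider(new MyDestinationDataProvider());
Environment.registerServerDataProvider(new MyServerDataProvider());

// Destination lookups are answered by the destination data provider...
JCoDestination destination = JCoDestinationManager.getDestination("SAP_DEST_TestSap_server");

// ...but JCoIDoc.getServer() asks the server data provider for its
// configuration, which is why the lookup failed without one.
JCoIDocServer server = JCoIDoc.getServer("SAP_DEST_TestSap_server");
server.start();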

Related

How to write to HDFS using spark programming API if I have authentication details?

I need to write to an external HDFS cluster, for which I have authentication details for both simple and Kerberos authentication. For the sake of simplicity, let's assume we are dealing with simple authentication.
This is what I have:
External HDFS cluster connection details (host, port)
Authentication details (user for simple auth)
HDFS location where files need to be written (hdfs://host:port/loc)
Also, other details like format, etc.
Please note that the Spark user is not the same as the user specified for HDFS authentication.
Now, using the Spark programming API, this is what I am trying to do:
val hadoopConf = new Configuration()
hadoopConf.set("fs.defaultFS", fileSystemPath)
hadoopConf.set("hadoop.job.ugi", userName)
val jConf = new JobConf(hadoopConf)
jConf.setUser(user)
jConf.set("user.name", user)
jConf.setOutputKeyClass(classOf[NullWritable])
jConf.setOutputValueClass(classOf[Text])
jConf.setOutputFormat(classOf[TextOutputFormat[NullWritable, Text]])
outputDStream.foreachRDD(r => {
  val rdd = r.mapPartitions { iter =>
    val text = new Text()
    iter.map { x =>
      text.set(x.toString)
      println(x.toString)
      (NullWritable.get(), text)
    }
  }
  val rddCount = rdd.count()
  if (rddCount > 0) {
    rdd.saveAsHadoopFile(config.outputPath, classOf[NullWritable], classOf[Text], classOf[TextOutputFormat[NullWritable, Text]], jConf)
  }
})
Here, I was assuming that if we pass a JobConf with the correct details, it would be used for authentication, and the write would be done as the user specified in the JobConf.
However, the write still happens as the Spark user ("root"), irrespective of the auth details present in the JobConf ("hdfs" as the user). Below is the exception I get:
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/spark-deploy/out/_temporary/0":hdfs:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1682)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1665)
at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3900)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:978)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
at org.apache.hadoop.ipc.Client.call(Client.java:1475)
at org.apache.hadoop.ipc.Client.call(Client.java:1412)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy40.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:558)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy41.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3000)
... 45 more
Please let me know if you have any suggestions.
This is probably more of a comment than an answer, but it is too long for one, so I put it here. I haven't tried this because I have no environment to test it in. Please try it and let me know if it works (and if it doesn't, I'll remove this answer).
Looking a bit into the code, it seems that DFSClient creates a proxy using createProxyWithClientProtocol, which uses UserGroupInformation.getCurrentUser() (I haven't traced the createHAProxy branch down, but I suspect the same logic applies there). This info is then sent to the server for authentication.
That means you need to change what UserGroupInformation.getCurrentUser() returns in the context of your particular call. This is what UserGroupInformation.doAs is for, so you just need to get hold of a proper UserGroupInformation instance. And in the case of simple authentication, UserGroupInformation.createRemoteUser might actually work.
So I suggest trying something like this:
...
val rddCount = rdd.count()
if (rddCount > 0) {
  val remoteUgi = UserGroupInformation.createRemoteUser("hdfsUserName")
  // doAs takes a PrivilegedExceptionAction (import java.security.PrivilegedExceptionAction)
  remoteUgi.doAs(new PrivilegedExceptionAction[Unit] {
    override def run(): Unit =
      rdd.saveAsHadoopFile(config.outputPath, classOf[NullWritable], classOf[Text],
        classOf[TextOutputFormat[NullWritable, Text]], jConf)
  })
}

javax.security.sasl.SaslException: Authentication failed: the server presented no authentication mechanisms in Wildfly 10.1

I am new to EJBs, and I am trying to perform remote invocations on stateless and stateful beans that I have deployed on a pod based on WildFly 10.1 in the new OpenShift 3 (Origin). The code that I am using to initialize the client context looks like this:
Properties clientProperties = new Properties();
clientProperties.put("remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED", "false");
clientProperties.put("remote.connections", "default");
clientProperties.put("remote.connection.default.host", "localhost");
clientProperties.put("remote.connection.default.port", "8080");
clientProperties.put("remote.connection.default.username", "****");
clientProperties.put("remote.connection.default.password", "****"); clientProperties.put("remote.connection.default.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS", "false");
clientProperties.put("remote.connection.default.connect.options.org.xnio.Options.SASL_POLICY_NOPLAINTEXT", "false");
EJBClientContext.setSelector(new ConfigBasedEJBClientContextSelector(new
PropertiesBasedEJBClientConfiguration(clientProperties)));
Properties contextProperties = new Properties();
contextProperties.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
contextProperties.put(Context.SECURITY_PRINCIPAL, "****"); //username
contextProperties.put(Context.SECURITY_CREDENTIALS, "****"); //password
Context context = new InitialContext(contextProperties);
String appName = "CloudEAR";
String moduleName = "CloudEjb";
String distinctName = "";
String beanName = "Calculator";
String qualifiedRemoteView = "cloudEJB.view.CalculatorRemote";
String lookupString = "ejb:" + appName + "/" + moduleName + "/" + distinctName + "/" + beanName + "!" + qualifiedRemoteView;
Calculator calculator = (CalculatorRemote) context.lookup(lookupString);
int sum = calculator.sum(10, 10);
And the error message that I get is:
WARN: Could not register a EJB receiver for connection to localhost:8080
javax.security.sasl.SaslException: Authentication failed: the server presented no authentication mechanisms
at org.jboss.remoting3.remote.ClientConnectionOpenListener$Capabilities.handleEvent(ClientConnectionOpenListener.java:378)
at org.jboss.remoting3.remote.ClientConnectionOpenListener$Capabilities.handleEvent(ClientConnectionOpenListener.java:240)
at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
at org.xnio.channels.TranslatingSuspendableChannel.handleReadable(TranslatingSuspendableChannel.java:198)
at org.xnio.channels.TranslatingSuspendableChannel$1.handleEvent(TranslatingSuspendableChannel.java:112)
at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
at org.xnio.ChannelListeners$DelegatingChannelListener.handleEvent(ChannelListeners.java:1092)
at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
at org.xnio.conduits.ReadReadyHandler$ChannelListenerHandler.readReady(ReadReadyHandler.java:66)
at org.xnio.nio.NioSocketConduit.handleReady(NioSocketConduit.java:89)
at org.xnio.nio.WorkerThread.run(WorkerThread.java:567)
at ...asynchronous invocation...(Unknown Source)
at org.jboss.remoting3.EndpointImpl.doConnect(EndpointImpl.java:272)
at org.jboss.remoting3.EndpointImpl.connect(EndpointImpl.java:388)
Initially I tried using the "jboss-ejb-client.properties" file, but that wasn't even able to make the remote connection. Now I am manually creating and configuring the EJBClientContext, which at least successfully connects to the remote server, but the invocation fails because of authentication errors.
I remember that we used to solve this issue by removing the "security realm" attribute in the "standalone.xml" file in older versions of OpenShift; however, I am not able to find that file in the new version anymore. I have been looking at concepts such as secrets, volumes, etc., but I really don't have a clear understanding of how they work. When I create a new secret and try to associate it with my pod, the new deployment procedure fails. I would really appreciate any help.

Grizzly: Illegal attempt to exceed the configured maximum number of headers

I am using Grizzly/Jersey/Jackson in a RESTful web service server application. For certain interactions, a large number of HTTP headers may be returned in a response. By default, Grizzly caps the number of response headers at 100.
Reading the Grizzly HTTP Server Framework overview, it seems that maxResponseHeaders (the maximum number of headers a response may send to a client) can somehow be configured, but it's not clear how that is done when Grizzly is stacked up with Jersey.
Any suggestions on how to set this configuration?
This is how I am currently configuring Grizzly and Jackson:
packages(true, Config.CONFIG_RESOURCE_BASE_PACKAGE);
register(JacksonFeature.class);
register(GrizzlyHttpContainerProvider.class);
register(CustomInjectables.class);
register(RolesAllowedDynamicFeature.class);
register(AccessSecurityFilter.class);
String host = Config.get("webserver.address");
int port = Config.getInteger("webserver.port");
boolean secure = Config.getBoolean("webserver.secure");
if (secure) {
    SSLContextConfigurator sslContextConfigurator = new SSLContextConfigurator();
    sslContextConfigurator.setKeyStoreFile(Config.get("webserver.keystore.location"));
    sslContextConfigurator.setKeyStorePass(Config.get("webserver.keystore.password"));
    boolean clientMode = false;
    boolean needClientAuth = false;
    boolean wantClientAuth = false;
    SSLEngineConfigurator sslEngineConfigurator = new SSLEngineConfigurator(
            sslContextConfigurator, clientMode, needClientAuth, wantClientAuth);
    URI uri = URI.create("https://" + host + ":" + port);
    log.info("Starting web server (secure): " + uri + " ...");
    server = GrizzlyHttpServerFactory.createHttpServer(uri, this, true,
            sslEngineConfigurator, true);
} else {
    URI uri = URI.create("http://" + host + ":" + port);
    log.info("Starting web server: " + uri + " ...");
    server = GrizzlyHttpServerFactory.createHttpServer(uri, this, true);
}
This is the stack trace when I exceed the maximum number of response headers:
Apr 24, 2017 10:28:46 PM org.glassfish.grizzly.filterchain.DefaultFilterChain execute
WARNING: GRIZZLY0013: Exception during FilterChain execution
org.glassfish.grizzly.http.util.MimeHeaders$MaxHeaderCountExceededException: Illegal attempt to exceed the configured maximum number of headers: 100
at org.glassfish.grizzly.http.util.MimeHeaders.createHeader(MimeHeaders.java:396)
at org.glassfish.grizzly.http.util.MimeHeaders.setValue(MimeHeaders.java:498)
at org.glassfish.grizzly.http.HttpServerFilter.prepareResponse(HttpServerFilter.java:944)
at org.glassfish.grizzly.http.HttpServerFilter.encodeHttpPacket(HttpServerFilter.java:834)
at org.glassfish.grizzly.http.HttpCodecFilter.handleWrite(HttpCodecFilter.java:1407)
at org.glassfish.grizzly.filterchain.ExecutorResolver$8.execute(ExecutorResolver.java:111)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:283)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:200)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:132)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:111)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.filterchain.FilterChainContext.write(FilterChainContext.java:890)
at org.glassfish.grizzly.filterchain.FilterChainContext.write(FilterChainContext.java:858)
at org.glassfish.grizzly.http.io.OutputBuffer.flushBuffer(OutputBuffer.java:1029)
at org.glassfish.grizzly.http.io.OutputBuffer.flushBinaryBuffers(OutputBuffer.java:1016)
at org.glassfish.grizzly.http.io.OutputBuffer.flushAllBuffers(OutputBuffer.java:987)
at org.glassfish.grizzly.http.io.OutputBuffer.close(OutputBuffer.java:716)
at org.glassfish.grizzly.http.server.NIOWriterImpl.close(NIOWriterImpl.java:111)
at org.glassfish.grizzly.http.server.util.HtmlHelper.sendErrorPage(HtmlHelper.java:103)
at org.glassfish.grizzly.http.server.Response.sendError(Response.java:1358)
at org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer$ResponseWriter.failure(GrizzlyHttpContainer.java:287)
at org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:486)
at org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:316)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:291)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1140)
at org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer.service(GrizzlyHttpContainer.java:375)
at org.glassfish.grizzly.http.server.HttpHandler$1.run(HttpHandler.java:224)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:591)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:571)
at java.lang.Thread.run(Unknown Source)
The first thing you need to do is keep the server from starting automatically when you call createHttpServer. Currently you are passing true as the last argument, which says the server should auto-start; that is already the default, so the parameter is mainly there to let you pass false, meaning don't auto-start. So set that value to false.
Now that the server is not auto-starting, we can configure it. setMaxResponseHeaders is a setting on the NetworkListener, which you can get from the HttpServer.
final HttpServer server = GrizzlyHttpServerFactory.createHttpServer(...);
final NetworkListener listener = server.getListener("grizzly");
listener.setMaxResponseHeaders(300);
server.start();
Now we manually start the server after configuring it. The one thing I'm not sure about is whether there is a better way to get the listener. I hard-coded "grizzly" because what I did beforehand was just iterate through server.getListeners() and print out all the names, and "grizzly" was the only one available; so that's what I used to test. A sketch of that iteration follows.
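If you would rather not hard-code the listener name, here is a minimal sketch that walks all listeners and raises the cap on each one, continuing from the server variable above (300 is an arbitrary example limit):
// Print every listener's name and raise its response-header cap.
for (NetworkListener l : server.getListeners()) {
    System.out.println("listener: " + l.getName()); // "grizzly" by default
    l.setMaxResponseHeaders(300);
}
server.start();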
Aside from the NetworkListener configuration, there are also other server-related settings you can change through the ServerConfiguration, which you can get with server.getServerConfiguration().

Jersey/JAX-RS client throwing exception on HTTP GET

Please note: even though I'm using Groovy here, I think my exception is really about using the Jersey/JAX-RS API correctly.
Given the following code:
ClientConfig clientConfig = new DefaultClientConfig()
clientConfig.getFeatures().put(JSONConfiguration.FEATURE_POJO_MAPPING, Boolean.TRUE)
Client jerseyClient = Client.create(clientConfig)
WebResource webResource = jerseyClient.resource("http://localhost:8080/location/")
Long id = 5L
Address address = webResource.path("address").path(id)
.accept(MediaType.APPLICATION_JSON)
.get(Long)
I am getting the following exception:
groovy.lang.MissingMethodException: No signature of method: com.sun.jersey.api.client.WebResource.path() is applicable for argument types: (java.lang.Long) values: [5]
Possible solutions: path(java.lang.String), put(), wait(long), put(com.sun.jersey.api.client.GenericType), put(java.lang.Class), put(java.lang.Object)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:55)
at org.codehaus.groovy.runtime.callsite.PojoMetaClassSite.call(PojoMetaClassSite.java:46)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
at com.me.myapp.Driver.run(Driver.groovy:43)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
<rest omitted for brevity>
I am trying to hit the following REST endpoint:
GET http://localhost:8080/location/address/{id}
Where am I going wrong?
You're calling the path method with a Long, but it only accepts a String; that is exactly what the MissingMethodException is telling you: there is no path(java.lang.Long) signature, only path(java.lang.String). Convert your id to a String before passing it in.
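A minimal correction, valid as both Java and Groovy, assuming Address is your JSON-bound class (note that the original .get(Long) would also fail later, since the response should deserialize into Address):
// Pass the id as a String path segment and request the mapped type.
Address address = webResource.path("address")
        .path(String.valueOf(id))
        .accept(MediaType.APPLICATION_JSON)
        .get(Address.class);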

Steam OpenId and Play Framework

I am having trouble using Steam as an OpenID provider. Everything works fine until the callback to my site is made: I see the Steam login web page and I can log in with my user, but when the callback executes I get an exception.
I use Play 2.2 and Scala. The code is very similar to the sample found in the Play docs:
def loginPost = Action.async { implicit request =>
  OpenID.redirectURL("http://steamcommunity.com/openid",
      routes.Application.openIDCallback.absoluteURL(),
      realm = Option("http://mydomain.com/"))
    .map(url => Redirect(url))
    .recover { case error => Redirect(routes.Application.login) }
}

def openIDCallback = Action.async { implicit request =>
  OpenID.verifiedId.map(info => Ok(info.id + "\n" + info.attributes))
    .recover {
      case error =>
        println(error.getMessage()) // prints null
        Redirect(routes.Application.login)
    }
}
Stacktrace:
Internal server error, for (GET) [/steam/login?openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=error&openid.error=Invalid+claimed_id+or+identity] ->
play.api.Application$$anon$1: Execution exception[[BAD_RESPONSE$: null]]
at play.api.Application$class.handleError(Application.scala:293) ~[play_2.10.jar:2.2.1]
at play.api.DefaultApplication.handleError(Application.scala:399) [play_2.10.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$12$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:165) [play_2.10.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$12$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:162) [play_2.10.jar:2.2.1]
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33) [scala-library-2.10.3.jar:na]
at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:185) [scala-library-2.10.3.jar:na]
Caused by: play.api.libs.openid.Errors$BAD_RESPONSE$: null
at play.api.libs.openid.Errors$BAD_RESPONSE$.<clinit>(OpenIDError.scala) ~[play_2.10.jar:2.2.1]
at play.api.libs.openid.OpenIDClient.verifiedId(OpenID.scala:111) ~[play_2.10.jar:2.2.1]
at play.api.libs.openid.OpenIDClient.verifiedId(OpenID.scala:92) ~[play_2.10.jar:2.2.1]
at controllers.Application$$anonfun$openIDCallback$1.apply(Application.scala:29) ~[classes/:2.2.1]
at controllers.Application$$anonfun$openIDCallback$1.apply(Application.scala:28) ~[classes/:2.2.1]
at play.api.mvc.Action$.invokeBlock(Action.scala:357) ~[play_2.10.jar:2.2.1]
I see the error message openid.error=Invalid+claimed_id+or+identity in the returned URL, but I couldn't find anything related to it.
What am I missing? Thanks.
It's because the Play Framework OpenID classes are not properly generating the redirect URL. Print out the value of the url variable from this line in your code:
.map(url => Redirect(url))
It most likely looks something like this:
https://steamcommunity.com/openid/login?openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0
&openid.mode=checkid_setup
&openid.claimed_id=http%3A%2F%2Fsteamcommunity.com%2Fopenid
&openid.identity=http%3A%2F%2Fsteamcommunity.com%2Fopenid
&openid.return_to=http%3A%2F%2Fwww.mydomain.com%2Fsteam%2Flogin
&openid.realm=http%3A%2F%2Fwww.mydomain.com
This is incorrect per the OpenID 2.0 spec, specifically at http://openid.net/specs/openid-authentication-2_0.html#discovered_info:
If the end user entered an OpenID Provider (OP) Identifier, there is no Claimed Identifier. For the purposes of making OpenID Authentication requests, the value "http://specs.openid.net/auth/2.0/identifier_select" MUST be used as both the Claimed Identifier and the OP-Local Identifier when an OP Identifier is entered.
Based on that, the generated redirect url variable should be:
https://steamcommunity.com/openid/login?openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0
&openid.mode=checkid_setup
&openid.claimed_id=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select
&openid.identity=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select
&openid.return_to=http%3A%2F%2Fwww.mydomain.com%2Fsteam%2Flogin
&openid.realm=http%3A%2F%2Fwww.mydomain.com
I've written up an issue for this on the Play Framework issue tracker:
https://github.com/playframework/playframework/issues/3740
In the meantime, as a temporary hack/fix, you can use any number of string-replacement techniques on your url variable to set the openid.claimed_id and openid.identity parameters to the correct http://specs.openid.net/auth/2.0/identifier_select value, for example:
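A minimal sketch of that workaround, written as plain Java string handling (a Scala String is a java.lang.String, so the same calls work on the url value above); the URL-encoded values are copied from the redirect URLs shown earlier:
// Rewrite the two offending parameters before issuing the redirect.
String identifierSelect = "http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select";
String steamOpenId = "http%3A%2F%2Fsteamcommunity.com%2Fopenid";
String fixedUrl = url
        .replace("openid.claimed_id=" + steamOpenId, "openid.claimed_id=" + identifierSelect)
        .replace("openid.identity=" + steamOpenId, "openid.identity=" + identifierSelect);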