oozie-hive beeline not working with kerberos

We have recently migrated from our old HDP cluster (without Kerberos) to a new HDP cluster (with Kerberos). We are facing authentication issues while running our Oozie jobs on the new cluster. Please refer to the workflow.xml below. The first action, 'hive-101', works fine; however, the second action, 'hive-102', fails.
<credentials>
<credential name="hs2-creds" type="hive2">
<property>
<name>hive2.server.principal</name>
<value>${jdbcPrincipal}</value>
</property>
<property>
<name>hive2.jdbc.url</name>
<value>${jdbcURL}</value>
</property>
</credential>
</credentials>
<start to="hive-101"/>
<action name="hive-101" cred="hs2-creds">
<hive2 xmlns="uri:oozie:hive2-action:0.2">
<jdbc-url>${jdbcURL}</jdbc-url>
<password>${hivepassword}</password>
<query>SELECT count(*) FROM table1;</query>
</hive2>
<ok to="hive-102"/>
<error to="fail"/>
</action>
<action name="hive-102" retry-max="${maxretry}" retry-interval="${retryinterval}">
<shell xmlns="uri:oozie:shell-action:0.3">
<exec>beeline</exec>
<argument>jdbc:hive2://zk01.abc.com:2181,zk02.abc.com:2181,zk03.abc.com:2181/${hivedatabase};principal=hive/_HOST@ABC.COM;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2</argument>
<argument>--outputformat=vertical</argument>
<argument>--silent=true</argument>
<argument>-e</argument>
<argument>
SELECT max(id) as mx_id FROM ${hivedatabase}.table1;
</argument>
<capture-output/>
</shell>
<ok to="end"/>
<error to="fail"/>
</action>
Below are the error details:
ERROR transport.TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) ~[?:1.8.0_212]
Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147) ~[?:1.8.0_212]
at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122) ~[?:1.8.0_212]
WARN jdbc.HiveConnection: Failed to connect to nn02.abc.com:10000
WARN jdbc.HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://nn02.abc.com:10000/db_test;principal=hive/_HOST@ABC.COM;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2: GSS initiate failed Retrying 0 of 1
ERROR transport.TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) ~[?:1.8.0_212]
Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147) ~[?:1.8.0_212]

The shell action will run on an arbitrary data node as the Unix user who started the Oozie workflow, and that user won't be automatically authenticated with Kerberos when the shell command runs.
I believe you will have to place a Kerberos keytab for the user on each data node. Your Oozie shell action will then need to run a script that performs a kinit using the keytab before running the beeline command.
From Apache Oozie by Mohammad Kamrul Islam and Aravind Srinivasan
On a nonsecure Hadoop cluster, the shell command will execute as the Unix user who runs the TaskTracker (Hadoop 1) or the YARN container (Hadoop 2). This is typically a system-defined user. On secure Hadoop clusters running Kerberos, the shell commands will run as the Unix user who submitted the workflow containing the action.
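Putting that together, a minimal wrapper script might look like the sketch below. The keytab path and principal are hypothetical placeholders, not from the original post; the JDBC URL mirrors the one in the failing action, with db_test standing in for ${hivedatabase}. The shell action's <exec> would then point at this script instead of calling beeline directly.

#!/bin/bash
# Sketch only: keytab path and principal below are hypothetical placeholders.
set -e

KEYTAB=/etc/security/keytabs/etluser.keytab   # keytab pre-placed on each data node
PRINCIPAL=etluser@ABC.COM                     # the workflow user's principal

# Obtain a TGT before beeline attempts the SASL/GSSAPI handshake.
kinit -kt "$KEYTAB" "$PRINCIPAL"

beeline -u "jdbc:hive2://zk01.abc.com:2181,zk02.abc.com:2181,zk03.abc.com:2181/db_test;principal=hive/_HOST@ABC.COM;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2" \
  --outputformat=vertical \
  --silent=true \
  -e "SELECT max(id) AS mx_id FROM db_test.table1;"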

Related

Failed to read challenge file [Caused by java.io.FileNotFoundException: /jboss/standalone/tmp/auth/local4123__.challenge (No such file or directory)]

I am trying to connect two machines, both running JBoss EAP 7.1.0, using a JMS bridge. Machine 1 acts as the web server and has a WAR file deployed and accessible; Machine 2 acts as the app server and has all the necessary components deployed just fine.
This is the error I am receiving:
WARN [org.apache.activemq.artemis.jms.bridge] (ServerService Thread Pool -- 72) AMQ342010: Failed to connect JMS Bridge N/A: javax.naming.CommunicationException: WFNAM00018: Failed to connect to remote host [Root exception is javax.security.sasl.SaslException: Authentication failed: all available authentication mechanisms failed:
JBOSS-LOCAL-USER: javax.security.sasl.SaslException: ELY05128: [JBOSS-LOCAL-USER] Failed to read challenge file [Caused by java.io.FileNotFoundException: /.../.../jboss/standalone/tmp/auth/local3093626581916142639.challenge (No such file or directory)]]
at org.wildfly.naming.client.remote.RemoteNamingProvider.getPeerIdentityForNaming(RemoteNamingProvider.java:110)
at org.wildfly.naming.client.remote.RemoteNamingProvider.getPeerIdentityForNaming(RemoteNamingProvider.java:53)
at org.wildfly.naming.client.NamingProvider.getPeerIdentityForNamingUsingRetry(NamingProvider.java:105)
at org.wildfly.naming.client.remote.RemoteNamingProvider.getPeerIdentityForNamingUsingRetry(RemoteNamingProvider.java:91)
at org.wildfly.naming.client.remote.RemoteContext.lambda$lookupNative$0(RemoteContext.java:189)
at org.wildfly.naming.client.NamingProvider.performExceptionAction(NamingProvider.java:222)
at org.wildfly.naming.client.remote.RemoteContext.performWithRetry(RemoteContext.java:100)
at org.wildfly.naming.client.remote.RemoteContext.lookupNative(RemoteContext.java:188)
at org.wildfly.naming.client.AbstractFederatingContext.lookup(AbstractFederatingContext.java:74)
at org.wildfly.naming.client.AbstractFederatingContext.lookup(AbstractFederatingContext.java:60)
at org.wildfly.naming.client.WildFlyRootContext.lookup(WildFlyRootContext.java:144)
at javax.naming.InitialContext.lookup(InitialContext.java:417)
at javax.naming.InitialContext.lookup(InitialContext.java:417)
at org.apache.activemq.artemis.jms.bridge.impl.JNDIFactorySupport.createObject(JNDIFactorySupport.java:46)
at org.apache.activemq.artemis.jms.bridge.impl.JNDIDestinationFactory.createDestination(JNDIDestinationFactory.java:32)
at org.apache.activemq.artemis.jms.bridge.impl.JMSBridgeImpl.setupJMSObjects(JMSBridgeImpl.java:1072)
at org.apache.activemq.artemis.jms.bridge.impl.JMSBridgeImpl.start(JMSBridgeImpl.java:398)
at org.wildfly.extension.messaging.activemq.jms.bridge.JMSBridgeService.startBridge(JMSBridgeService.java:114)
at org.wildfly.extension.messaging.activemq.jms.bridge.JMSBridgeService$1.run(JMSBridgeService.java:84)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at org.jboss.threads.JBossThread.run(JBossThread.java:320)
The connection to the target machine (the application server) is being made: the JBoss path in the error is the path on that machine. I verified this by testing with a Windows application server environment, where the path shown was the correct Windows path to the directory where the challenge file should be, so clearly the connection is being made and the directories are being accessed. However, the .challenge file isn't present each time, which understandably causes the error message.
I have scoured SO and JBoss forums for days now and nothing is resolving my issue.
I saw this post: JBOSS-LOCAL-USER: javax.security.sasl.SaslException: Failed to read server challenge
This is the same issue I am facing, but the answer marked as correct doesn't help me very much. The solution in that case was to replace the default ApplicationRealm with a JAAS realm, but I do not know if that is what I need, and I certainly do not currently have one. I researched it, but it seemed not to be applicable to my setup; I could be wrong.
I also tried this solution: https://access.redhat.com/solutions/3209281 (Subscription only access)
This solution was to remove default-user="$local" from here:
<security-realm name="ApplicationRealm">
<authentication>
<local default-user="$local" allowed-users="*" skip-group-loading="true"/>
I did this in the standalone-full.xml files on both machines, and it appeared to make no difference at all.
I have created application users on both machines and given them superuser privileges through the JBoss console, as I figured it was probably a permissions issue when trying to write the file, but this too was to no avail. I have also verified that both users' credentials are correct.
The workaround was to switch to using a core bridge instead of a JMS bridge, as per the recommendation from Justin in the comments.
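For reference, a core bridge is defined on the messaging-activemq subsystem rather than as a jms-bridge resource. A rough sketch of what that might look like on EAP 7.1 follows; the connector name, socket binding, queue names, and credentials are placeholders, and the schema version should be checked against your installation.

<subsystem xmlns="urn:jboss:domain:messaging-activemq:2.0">
<server name="default">
<!-- Connector pointing at the remote broker; the socket binding is a placeholder. -->
<remote-connector name="remote-artemis" socket-binding="remote-artemis"/>
<!-- Core bridge: consumes from a local queue and forwards to an address on the remote broker. -->
<bridge name="my-core-bridge" queue-name="jms.queue.SourceQueue" forwarding-address="jms.queue.TargetQueue" static-connectors="remote-artemis" user="appuser" password="apppassword"/>
</server>
</subsystem>

Unlike the JMS bridge, a core bridge connects broker to broker directly over the Artemis core protocol, so the JNDI lookup (and with it the JBOSS-LOCAL-USER challenge file) is not involved at all.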

GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt) while connecting Polybase with Kerberos

We want to connect our SQL Server 2016 Enterprise via PolyBase to our Kerberized on-prem Hadoop cluster running Cloudera 5.14.
I followed the Microsoft PolyBase guide to configure PolyBase. After working a few days on this topic, I'm not able to continue because of an exception: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
Microsoft has a built-in diagnostic tool for troubleshooting PolyBase and Kerberos connectivity. The Microsoft troubleshooting guide defines 4 checkpoints, and I'm stuck on checkpoint 4.
Short information about the checkpoints (and where I'm successful):
Checkpoint 1: Successful! Authenticated against the KDC and received a TGT.
Checkpoint 2: Successful! According to the troubleshooting guide, PolyBase will attempt to access HDFS and fail because the request did not contain the necessary Service Ticket.
Checkpoint 3: Successful! A second hex dump indicates that SQL Server successfully used the TGT and acquired the applicable Service Ticket for the name node's SPN from the KDC.
Checkpoint 4: Not successful. At this point SQL Server should have been authenticated by Hadoop using the ST (Service Ticket) and granted a session to access the secured resource.
krb5.conf file
[libdefaults]
default_realm = COMPANY.REALM.COM
dns_lookup_kdc = false
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
default_tkt_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
permitted_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
udp_preference_limit = 1
kdc_timeout = 3000
[realms]
COMPANY.REALM.COM = {
kdc = ipadress.kdc.host
admin_server = ipadress.kdc.host
}
[logging]
default = FILE:/var/log/krb5/kdc.log
kdc = FILE:/var/log/krb5/kdc.log
admin_server = FILE:/var/log/krb5/kadmind.log
core-site.xml for Polybase on SQL-Server
<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>ipc.client.connect.max.retries</name>
<value>2</value>
</property>
<property>
<name>ipc.client.connect.max.retries.on.timeouts</name>
<value>2</value>
</property>
<!-- kerberos security information, PLEASE FILL THESE IN ACCORDING TO HADOOP CLUSTER CONFIG -->
<property>
<name>polybase.kerberos.realm</name>
<value>COMPANY.REALM.COM</value>
</property>
<property>
<name>polybase.kerberos.kdchost</name>
<value>ipadress.kdc.host</value>
</property>
<property>
<name>hadoop.security.authentication</name>
<value>KERBEROS</value>
</property>
</configuration>
hdfs-site.xml for Polybase on SQL-Server
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.block.size</name>
<value>268435456</value>
</property>
<!-- Client side file system caching is disabled below for credential refresh and
settting the below cache disabled options to true might result in
stale credentials when an alter credential or alter datasource is performed
-->
<property>
<name>fs.wasb.impl.disable.cache</name>
<value>true</value>
</property>
<property>
<name>fs.wasbs.impl.disable.cache</name>
<value>true</value>
</property>
<property>
<name>fs.asv.impl.disable.cache</name>
<value>true</value>
</property>
<property>
<name>fs.asvs.impl.disable.cache</name>
<value>true</value>
</property>
<property>
<name>fs.hdfs.impl.disable.cache</name>
<value>true</value>
</property>
<!-- kerberos security information, PLEASE FILL THESE IN ACCORDING TO HADOOP CLUSTER CONFIG -->
<property>
<name>dfs.namenode.kerberos.principal</name>
<value>hdfs/_HOST@COMPANY.REALM.COM</value>
</property>
</configuration>
Polybase Exception
[2018-06-22 12:51:50,349] WARN 2872[main] - org.apache.hadoop.security.UserGroupInformation.hasSufficientTimeElapsed(UserGroupInformation.java:1156) - Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
[2018-06-22 12:51:53,568] WARN 6091[main] - org.apache.hadoop.security.UserGroupInformation.hasSufficientTimeElapsed(UserGroupInformation.java:1156) - Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
[2018-06-22 12:51:56,127] WARN 8650[main] - org.apache.hadoop.security.UserGroupInformation.hasSufficientTimeElapsed(UserGroupInformation.java:1156) - Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
[2018-06-22 12:51:58,998] WARN 11521[main] - org.apache.hadoop.security.UserGroupInformation.hasSufficientTimeElapsed(UserGroupInformation.java:1156) - Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
[2018-06-22 12:51:59,139] WARN 11662[main] - org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:676) - Couldn't setup connection for hdfs@COMPANY.REALM.COM to IPADRESS_OF_NAMENODE:8020
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
Log Entry on NameNode
Socket Reader #1 for port 8020: readAndProcess from client IP-ADRESS_SQL-SERVER threw exception [javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: Failure unspecified at GSS-API level (Mechanism level: AES128 CTS mode with HMAC SHA1-96 encryption type not in permitted_enctypes list)]]
Auth failed for IP-ADRESS_SQL-SERVER:60484:null (GSS initiate failed) with true cause: (GSS initiate failed)
The confusing part for me is the log entry from our NameNode, because AES128 CTS mode with HMAC SHA1-96 is already in the list of permitted enctypes, as shown in krb5.conf and in the Cloudera Manager UI.
We appreciate your help!
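As a diagnostic sketch (not from the original post), klist -e shows which encryption types the cached tickets actually carry, which can be compared against the enctypes the NameNode permits. The cache path, principal, and output below are illustrative.

# Run wherever the ticket cache for the PolyBase principal lives (MIT klist shown).
klist -e
# Ticket cache: FILE:/tmp/krb5cc_0
# Default principal: hdfs@COMPANY.REALM.COM
#
# Valid starting     Expires            Service principal
# 06/22/18 12:50:01  06/23/18 12:50:01  krbtgt/COMPANY.REALM.COM@COMPANY.REALM.COM
#         Etype (skey, tkt): aes256-cts-hmac-sha1-96, aes256-cts-hmac-sha1-96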
The problem took care of itself after we restarted the cluster.
I think the problem was that the krb5.conf file in our Hadoop cluster could not be distributed to all nodes because of some running services. There was also a warning in Cloudera Manager about a stale configuration regarding Kerberos.
Many thanks to everyone!

Remote Invocation of EJB in WildFly 10 using JNDI lookup

I'm trying to invoke an EJB from a remote server using a JNDI lookup. I'm using EJB3 with Spring MVC on WildFly 10, and the configuration described in this documentation has been done on my client and remote server:
https://docs.jboss.org/author/display/WFLY10/EJB+invocations+from+a+remote+client+using+JNDI
But I'm still not able to get a connection to the remote server.
1) Created a user under ApplicationRealm and gave it the permissions for a master/slave setup for remote EJB invocation.
2) This is my jboss-ejb-client.properties file, where I have given the WildFly username and password of the host server.
endpoint.name=client-endpoint
remote.connections=one, two
remote.connection.one.host=172.16.25.26
remote.connection.one.port=8080
remote.connection.one.username=ABCD
remote.connection.one.password=ABCD#123
remote.connection.two.host=localhost
remote.connection.two.port=8080
remote.connection.two.username=guest
remote.connection.two.password=guest
# org.jboss.as.logging.per-deployment=true
My exception is:
javax.naming.AuthenticationException: Failed to connect to any server. Servers tried:
[http-remoting://172.16.25.26:8080 (Authentication failed: all available authentication mechanisms failed:
JBOSS-LOCAL-USER: javax.security.sasl.SaslException: Failed to read server challenge [Caused by
java.io.FileNotFoundException: D:\wildfly-10.0.0.Final\standalone\tmp\auth\local3540175271681581878.challenge
(The system cannot find the file specified)]
DIGEST-MD5: javax.security.sasl.SaslException: DIGEST-MD5: Cannot perform callback to acquire realm,
authentication ID or password [Caused by javax.security.auth.callback.UnsupportedCallbackException])]
[Root exception is javax.security.sasl.SaslException: Authentication failed: all available authentication
mechanisms failed:
Please tell me what I am missing here that's causing this exception. Also, what is the significance of the secret key generated while creating the user in WildFly, and where should that key be configured?
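For comparison, a minimal standalone lookup against that configuration might look like the following sketch. The bean and interface names are hypothetical, and it assumes jboss-ejb-client.properties sits on the client classpath along with the WildFly EJB client libraries.

import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;

public class RemoteEjbClient {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Use the EJB client naming extension, which reads jboss-ejb-client.properties.
        props.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
        Context ctx = new InitialContext(props);
        // ejb:<app>/<module>/<distinct-name>/<bean>!<fully-qualified-remote-interface>
        // (the names below are hypothetical)
        Object bean = ctx.lookup("ejb:myapp/myejbmodule//CalculatorBean!com.example.CalculatorRemote");
        System.out.println("Looked up: " + bean);
    }
}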

Configure Event Listener in Keycloak

I'm configuring an event listener in Keycloak on WildFly 9.0.1.
I have created a .jar with two classes that implements a provider, as Keycloak explains in its GitHub example.
In that example, the Keycloak people explain that it's necessary to register the provider by editing standalone/configuration/standalone.xml and adding the module to the providers element.
I added this definition inside the subsystem tag:
<spi name="eventsListener">
<provider name="my-event-listener" enabled="true">
<properties>
<property name="max" value="100" />
</properties>
</provider>
</spi>
When I start the server, it gives me a error like this:
ERROR [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0055: Caught exception during boot: org.jboss.as.controller.persistence.ConfigurationPersistenceException: WFLYCTL0085: Failed to parse configuration
at org.jboss.as.controller.persistence.XmlConfigurationPersister.load(XmlConfigurationPersister.java:131)
at org.jboss.as.server.ServerService.boot(ServerService.java:350)
at org.jboss.as.controller.AbstractControllerService$1.run(AbstractControllerService.java:271)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.xml.stream.XMLStreamException: Unknown keycloak-server subsystem tag: spi
at org.keycloak.subsystem.server.extension.KeycloakSubsystemParser.readElement(KeycloakSubsystemParser.java:55)
at org.keycloak.subsystem.server.extension.KeycloakSubsystemParser.readElement(KeycloakSubsystemParser.java:39)
at org.jboss.staxmapper.XMLMapperImpl.processNested(XMLMapperImpl.java:110)
at org.jboss.staxmapper.XMLExtendedStreamReaderImpl.handleAny(XMLExtendedStreamReaderImpl.java:69)
at org.jboss.as.server.parsing.StandaloneXml.parseServerProfile(StandaloneXml.java:1199)
at org.jboss.as.server.parsing.StandaloneXml.readServerElement_1_4(StandaloneXml.java:457)
at org.jboss.as.server.parsing.StandaloneXml.readElement(StandaloneXml.java:144)
at org.jboss.as.server.parsing.StandaloneXml.readElement(StandaloneXml.java:106)
at org.jboss.staxmapper.XMLMapperImpl.processNested(XMLMapperImpl.java:110)
at org.jboss.staxmapper.XMLMapperImpl.parseDocument(XMLMapperImpl.java:69)
at org.jboss.as.controller.persistence.XmlConfigurationPersister.load(XmlConfigurationPersister.java:123)
... 3 more
FATAL [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0056: Server boot has failed in an unrecoverable manner; exiting. See previous messages for details.
Does someone know what is wrong? I need help.
Thank you.
I think you must put your definition inside <subsystem xmlns="urn:jboss:domain:keycloak-server:1.1">.
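That is, something like the following sketch, reusing the spi definition from the question (the rest of the subsystem content is elided):

<subsystem xmlns="urn:jboss:domain:keycloak-server:1.1">
<!-- ... existing keycloak-server configuration ... -->
<spi name="eventsListener">
<provider name="my-event-listener" enabled="true">
<properties>
<property name="max" value="100"/>
</properties>
</provider>
</spi>
</subsystem>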

dropbox connector, no callback uri, Could not find connector with name 'connector.http.mule.default'

I'm new to Mule and I'm trying to use the Dropbox connector with my web application. I'm trying to make a flow to authorize the current user to upload a file, but the flow doesn't even run. All I did was set up an HTTP connector, then place the Dropbox connector and set it up. I used the graphical interface, but here is the code:
<?xml version="1.0" encoding="UTF-8"?>
<http:listener-config name="HTTP_Listener_Configuration" host="localhost" port="8081" doc:name="HTTP Listener Configuration"/>
<dropbox:config name="Dropbox" appKey="xxxxxxxxxxx" appSecret="xxxxxxxxxxx" doc:name="Dropbox">
<dropbox:oauth-callback-config domain="http://localhost" localPort="8081" path="callback" remotePort="8081"/>
</dropbox:config>
<dropbox:config name="Dropbox1" appKey="xxxxxxxxxxx" appSecret="xxxxxxxxxxx" doc:name="Dropbox">
<dropbox:oauth-callback-config domain="http://localhost" localPort="8081" remotePort="8081" path="callback"/>
</dropbox:config>
<flow name="dropbxFlow">
<http:listener config-ref="HTTP_Listener_Configuration" path="/callback/" doc:name="HTTP"/>
<dropbox:authorize config-ref="Dropbox" doc:name="Dropbox"/>
</flow>
I keep getting this error when I run it:
INFO 2015-05-05 16:09:53,688 [Mule.app.deployer.monitor.1.thread.1] org.mule.lifecycle.AbstractLifecycleManager: Starting model: _muleSystemModel
INFO 2015-05-05 16:09:53,688 [Mule.app.deployer.monitor.1.thread.1] org.mule.construct.FlowConstructLifecycleManager: Starting flow: dropbxFlow
INFO 2015-05-05 16:09:53,688 [Mule.app.deployer.monitor.1.thread.1] org.mule.processor.SedaStageLifecycleManager: Starting service: dropbxFlow.stage1
ERROR 2015-05-05 16:09:53,694 [Mule.app.deployer.monitor.1.thread.1] org.mule.modules.dropbox.process.DefaultHttpCallback: Could not find connector with name 'connector.http.mule.default'
INFO 2015-05-05 16:09:53,694 [Mule.app.deployer.monitor.1.thread.1] org.mule.processor.SedaStageLifecycleManager: Stopping service: dropbxFlow.stage1
ERROR 2015-05-05 16:09:53,890 [Mule.app.deployer.monitor.1.thread.1] org.mule.module.launcher.application.DefaultMuleApplication: null
org.mule.api.DefaultMuleException: Could not find connector with name 'connector.http.mule.default'
This only happens when I use the authorize operation on the Dropbox connector, as it needs a callback URI (which I think is the problem here); the callback HTTP endpoint isn't being set up. Any insight would be greatly appreciated.
These are normal version issues with some of the libraries. I had the same issue while using Dropbox connector 3.3.0, so after a lot of R&D I figured out where the problem actually was. I went to pom.xml and changed the version to 3.3.3, and everything was fine and running with no more issues at all.
<dependency>
<groupId>org.mule.modules</groupId>
<artifactId>mule-module-dropbox</artifactId>
<version>3.3.3</version>
</dependency>
And regarding that "no callback uri": you should have the same redirect URI in your Dropbox developer account and in the Dropbox global element's OAuth configuration.
Let's say you have provided "http://localhost:8081/callback" as the Redirect URI in your Dropbox account; then your OAuth configuration should be as follows:
Domain : localhost
Local Port : 8081
Remote Port : 8081
Path : callback