How can I support Kerberos-based authentication with the ning http client?
I am extending existing code that already supports NTLM auth, and I want to add support for Kerberos, which is used on some of the websites that I need to test.
I want to be able to supply the user and password programmatically; I do not want to use a keytab or set up a krb5 configuration on the system where this is running.
I have the following code block:
import com.ning.http.client.RequestBuilder;
import com.ning.http.client.Realm;
import com.ning.http.client.Realm.AuthScheme;
import com.ning.http.client.Realm.RealmBuilder;
....
Realm myRealm = new RealmBuilder()
    .setScheme(AuthScheme.KERBEROS)
    .setUsePreemptiveAuth(true)
    .setNtlmDomain(getDomain())
    .setNtlmHost(getHost())
    .setPrincipal(getUsername())
    .setPassword(getUserPassword())
    .build();
RequestBuilder rb = new RequestBuilder()
    .setMethod(site.getMethod())
    .setUrl(site.getUrl())
    .setFollowRedirects(site.isFollowRedirects())
    .setRealm(myRealm);
Currently I get the error response:
FAILED: Invalid name provided (Mechanism level: KrbException: Cannot locate default realm)
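My understanding (so treat this as an assumption) is that "Cannot locate default realm" comes from the JDK's Kerberos layer rather than from ning itself: it cannot find a realm and KDC in any krb5 configuration. The JDK can also pick these up from system properties, so one thing I could try before building the realm is the following minimal sketch (both values are placeholders for my environment):

// Sketch only: point the JDK Kerberos machinery at a realm/KDC without
// a krb5.conf file. Both values below are placeholders.
System.setProperty("java.security.krb5.realm", "EXAMPLE.COM");
System.setProperty("java.security.krb5.kdc", "kdc.example.com");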
Does anyone have a good example of how to do this correctly?
I am trying to use the Google Cloud Logging client library to write logs to gcloud; specifically, I'm interested in writing logs that will be attached to a managed resource, in this case a Vertex AI endpoint:
Code sample:
import json
import logging

from google.api_core.client_options import ClientOptions
import google.cloud.logging_v2 as logging_v2
from google.cloud.logging_v2.resource import Resource
from google.oauth2 import service_account

def init_module_logger(module_name: str) -> logging.Logger:
    module_logger = logging.getLogger(module_name)
    # settings and SA_KEY_JSON are defined elsewhere in our config
    module_logger.setLevel(settings.LOG_LEVEL)
    credentials = service_account.Credentials.from_service_account_info(
        json.loads(SA_KEY_JSON)
    )
    client = logging_v2.client.Client(
        credentials=credentials,
        client_options=ClientOptions(api_endpoint="us-east1-aiplatform.googleapis.com"),
    )
    handler = client.get_default_handler(
        resource=Resource(
            type="aiplatform.googleapis.com/Endpoint",
            labels={"endpoint_id": "ENDPOINT_NUMBER_ID",
                    "location": "us-east1"},
        )
    )
    # Assume we have the formatter
    handler.setFormatter(ENRICHED_FORMATTER)
    module_logger.addHandler(handler)
    return module_logger

logger = init_module_logger(__name__)
logger.info("This Fails with 501")
And I am getting:
google.api_core.exceptions.MethodNotImplemented: 501 The GRPC target is not implemented on the server, host: us-east1-aiplatform.googleapis.com, method: /google.logging.v2.LoggingServiceV2/WriteLogEntries. Sent all pending logs.
I thought we needed to enable the API, but I was told it's already enabled and that we have the https://www.googleapis.com/auth/logging.write scope.
What could be causing the error?
As mentioned by @DazWilkin in the comments, the error occurs because the API endpoint us-east1-aiplatform.googleapis.com does not have a method called WriteLogEntries.
That endpoint is used to send requests to Vertex AI services, not to Cloud Logging. The endpoint to use is logging.googleapis.com, as shown in the entries.write method. Refer to this documentation for more info.
The ClientOptions() passed to the client should have logging.googleapis.com as the api_endpoint parameter. If the client_options parameter is not specified, logging.googleapis.com is used by default.
After changing the api_endpoint parameter, I was able to successfully write the log entries. The ClientOptions() is as follows:
client = logging_v2.client.Client(
    credentials=credentials,
    client_options=ClientOptions(api_endpoint="logging.googleapis.com"),
)
Compiler error when using the example provided in the Flink documentation. The Flink documentation provides sample Scala code to set the REST client factory parameters when talking to Elasticsearch: https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/elasticsearch.html.
When trying out this code, I get a compiler error in IntelliJ that says "Cannot resolve symbol restClientBuilder".
I found the following Stack Overflow question, which is EXACTLY my problem except that it is in Java and I am doing this in Scala:
Apache Flink (v1.6.0) authenticate Elasticsearch Sink (v6.4)
I tried copy-pasting the solution code from that question into IntelliJ; the auto-converted code also has compiler errors.
// provide a RestClientFactory for custom configuration on the internally created REST client
// I only show setMaxRetryTimeoutMillis for illustration purposes; the actual code will use an HTTP custom callback
esSinkBuilder.setRestClientFactory(
  restClientBuilder -> {
    restClientBuilder.setMaxRetryTimeoutMillis(10)
  }
)
Then I tried the following (Java code auto-converted to Scala by IntelliJ):
import org.apache.http.auth.AuthScope
import org.apache.http.auth.UsernamePasswordCredentials
import org.apache.http.client.CredentialsProvider
import org.apache.http.impl.client.BasicCredentialsProvider
import org.apache.http.impl.nio.client.HttpAsyncClientBuilder
import org.elasticsearch.client.RestClientBuilder
// provide a RestClientFactory for custom configuration on the internally created REST client
esSinkBuilder.setRestClientFactory((restClientBuilder) => {
  def foo(restClientBuilder) = restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
    override def customizeHttpClient(httpClientBuilder: HttpAsyncClientBuilder): HttpAsyncClientBuilder = {
      // elasticsearch username and password
      val credentialsProvider = new BasicCredentialsProvider
      credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(es_user, es_password))
      httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)
    }
  })
  foo(restClientBuilder)
})
The original code snippet produces the error "cannot resolve RestClientFactory", and the Java-to-Scala conversion shows several other errors.
So basically I need to find a Scala version of the solution described in Apache Flink (v1.6.0) authenticate Elasticsearch Sink (v6.4).
Update 1: I was able to make some progress with some help from IntelliJ. The following code compiles and runs, but there is another problem.
esSinkBuilder.setRestClientFactory(
  new RestClientFactory {
    override def configureRestClientBuilder(restClientBuilder: RestClientBuilder): Unit = {
      restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
        override def customizeHttpClient(httpClientBuilder: HttpAsyncClientBuilder): HttpAsyncClientBuilder = {
          // elasticsearch username and password
          val credentialsProvider = new BasicCredentialsProvider
          credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(es_user, es_password))
          httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)
          httpClientBuilder.setSSLContext(trustfulSslContext)
        }
      })
    }
  }
)
The problem is that I am not sure whether I should be creating the RestClientFactory with new. What happens is that the application connects to the Elasticsearch cluster but then discovers that the SSL cert is not valid, so I had to add the trustfulSslContext (as described here: https://gist.github.com/iRevive/4a3c7cb96374da5da80d4538f3da17cb). This got me past the SSL issue, but now the ES REST client does a ping test, the ping fails, it throws an exception, and the app shuts down. I suspect the ping fails because of the SSL error, and that maybe it is not using the trustfulSslContext I set up as part of the new RestClientFactory. This makes me suspect that I should not have used new, and that there should be a simple way to update the existing RestClientFactory object; basically all of this is happening because of my lack of Scala knowledge.
Happy to report that this is resolved. The code I posted in Update 1 is correct. The ping to ECE was not working for two reasons:
The certificate needs to include the complete chain: the root CA, the intermediate CA, and the cert for the ECE. This helped get rid of the whole trustfulSslContext stuff.
The ECE was sitting behind an HAProxy, and the proxy mapped the hostname in the HTTP request to the actual deployment cluster name in ECE. This mapping logic did not take into account that the Java REST high-level client uses the org.apache.http.HttpHost class, which renders the hostname as hostname:port_number even when the port number is 443. Since the lookup failed because of the ":443" suffix, the ECE returned a 404 error instead of 200 OK (the only way to find this was to look at unencrypted packets at the HAProxy). Once the mapping logic in HAProxy was fixed, the mapping was found and the pings are now successful.
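To illustrate the HttpHost behaviour described above, here is a small standalone sketch (the hostname is a placeholder, not our actual ECE host):

import org.apache.http.HttpHost;

public class HostStringDemo {
  public static void main(String[] args) {
    // HttpHost keeps an explicitly supplied port in its host string,
    // even for the default HTTPS port 443.
    HttpHost host = new HttpHost("ece.example.com", 443, "https");
    System.out.println(host.toHostString()); // prints "ece.example.com:443"
  }
}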
I am trying to run a simple jclouds program. The program is as follows:
String provider = "openstack-nova";
String identity = "Tenant:username"; // tenantName:userName
String credential = "pass";
novaApi = ContextBuilder.newBuilder(provider)
    .endpoint("http://openstack.infosys.tuwien.ac.at/identity/v2.0")
    .credentials(identity, credential)
    .modules(modules)
    .buildApi(NovaApi.class);
regions = novaApi.getConfiguredRegions();
The openstack.infosys host is reached via a SOCKS proxy on port 7777. I have also configured this in Eclipse (Window -> Preferences -> General -> Network Connections -> SOCKS (Manual)). However, every time I run the code I get the following error:
ERROR o.j.h.i.JavaUrlHttpCommandExecutorService - Command not considered safe to retry because request method is POST:
which is then caused by:
Caused by: java.net.SocketTimeoutException: connect timed out
I am able to access the Horizon web interface of the same deployment without any issues.
Can someone please help me understand what the possible problem is?
You need to tell Apache jclouds about your proxy configuration when creating the context. Have a look at these properties, and pass the ones you need to the overrides method of the ContextBuilder (see the sketch after the list):
Proxy type
Proxy host
Proxy port
Proxy user
Proxy password
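A minimal sketch of what that could look like for a SOCKS proxy, reusing the identity and credential variables from the question (the proxy host is a placeholder):

import java.util.Properties;

import org.jclouds.Constants;
import org.jclouds.ContextBuilder;
import org.jclouds.openstack.nova.v2_0.NovaApi;

Properties overrides = new Properties();
overrides.setProperty(Constants.PROPERTY_PROXY_TYPE, "SOCKS");              // "HTTP" or "SOCKS"
overrides.setProperty(Constants.PROPERTY_PROXY_HOST, "proxy.example.com");  // placeholder
overrides.setProperty(Constants.PROPERTY_PROXY_PORT, "7777");
// Only needed if the proxy requires authentication:
// overrides.setProperty(Constants.PROPERTY_PROXY_USER, "proxyUser");
// overrides.setProperty(Constants.PROPERTY_PROXY_PASSWORD, "proxyPass");

NovaApi novaApi = ContextBuilder.newBuilder("openstack-nova")
    .endpoint("http://openstack.infosys.tuwien.ac.at/identity/v2.0")
    .credentials(identity, credential)
    .overrides(overrides)
    .buildApi(NovaApi.class);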
I have installed Oracle ATG v11 with the Commerce Reference Store. When I start up the production server and go to the URL domain/crs/storeus, I see a blank white page and the following error in the console:
Oct 13, 2014 1:56:37 PM com.endeca.infront.site.SiteManager getSite
SEVERE: Unable to retrieve site definition for site id: /storeSiteUS
com.endeca.store.exceptions.PathNotFoundException: No node found at
path: [pages].
at com.endeca.store.configuration.InternalNode.getNode(InternalNode.java:153)
at com.endeca.store.configuration.InternalNode.getNodeInfo(InternalNode.java:221)
at com.endeca.store.configuration.InternalNode.getNode(InternalNode.java:150)
at com.endeca.store.configuration.InternalNode.getNode(InternalNode.java:61)
........................................
**** Error Mon Oct 13 13:00:47 +00:00 2014 1413205247448 /atg/endeca/assembler/droplet/InvokeAssembler A problem occurred
assembling the content for content item /content/Web/Home Pages. The
response received was {#type=ContentSlot,
atg:currentSiteProductionURL=/crs/storeus,
canonicalLink=com.endeca.infront.cartridge.model.NavigationAction#2b35e9c6,
ruleLimit=1, #error=com.endeca.infront.content.ContentException:
com.endeca.navigation.ENEConnectionException: Error establishing
connection to retrieve Navigation Engine request
http://localhost:15000/graph?node=0&profiles=sitegroup.siteGroupUS|NoPriceRange|site.storeSiteUS&offset=0&nbins=0&irversion=640'.
Tried all: '2' addresses, but could not connect over HTTP to server:
'localhost', port: '15000' Check MDEX Logs and specified query
parameters. , contentCollection=/content/Web/Home Pages}. Servicing
the error open parameter.
I am assuming this error is related to Endeca? I have downloaded CAS, Tools and Frameworks with Experience Manager, MDEX, and Platform Services. Do I need to start these, or have I missed a part of the Endeca install?
The value of the configurationPath attribute in the DefaultFileStoreFactory.properties located at \localconfig\atg\endeca\assembler\cartridge\manager may be incorrect.
In OOTB CRS, we normally provide the following value for the configurationPath attribute:
/ToolsAndFrameworks/11.1.0/server/workspace/state/repository/CRS
Could you please verify that the .zip is present at the path provided in DefaultFileStoreFactory.properties.
Just check whether you are able to connect to the URL below:
host:15000/admin?op=stats
If you can connect to this URL, then MDEX is running. You can also log in to Experience Manager and check whether the dgraphs and dgidx are running.
If you cannot connect, check that all the services (tools and HTTP) are running and accessible. You can check the Endeca logs to debug further.
Your DGraph is not (yet) started.
(Hit this URL in your browser and verify: http://localhost:15000/graph?node=0&profiles=sitegroup.siteGroupUS|NoPriceRange|site.storeSiteUS&offset=0&nbins=0&irversion=640&format=xml)
Possible reasons are:
You did not run a baseline update from ATG (from the ProductCatalogSimpleIndexingAdmin dyn/admin component).
You did not promote content (from your Endeca app's control folder).
Your services are not working properly (or not started at all). Check that Platform Services and Tools and Frameworks are started.
The solution is to properly define the value of the configurationPath property in DefaultFileStoreFactory.properties, for example: configurationPath=E:/Endeca/Apps/CRS/data/workbench/application_export_archive/CRS
If your OS is Windows, still define the path in Unix style, as shown above.
I am trying to write a GWT back-end using the RPC model with Java servlets.
Is it possible to open an SSH tunnel within an RPC in order to communicate with a remote SQL database?
The code I am trying to execute is below, using JSch. The error occurs on session.connect();
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.JSchException;
import com.jcraft.jsch.Session;

String host = "xxxxx.xxx.edu";
String user = "username";
String password = "password";
Session session = null;
try {
    // Set the StrictHostKeyChecking property to no to avoid the UnknownHostKey issue
    java.util.Properties config = new java.util.Properties();
    config.put("StrictHostKeyChecking", "no");
    JSch jsch = new JSch();
    session = jsch.getSession(user, host, 22);
    session.setPassword(password);
    session.setConfig(config);
    session.connect();
} catch (JSchException e) {
    e.printStackTrace();
}
The runtime error I get on the session.connect() line is as follows:
com.jcraft.jsch.JSchException: java.security.AccessControlException: access denied (java.net.SocketPermission xxxxx.xxx.edu resolve)
at com.jcraft.jsch.Util.createSocket(Util.java:341)
at com.jcraft.jsch.Session.connect(Session.java:194)
at com.jcraft.jsch.Session.connect(Session.java:162)
at com.front.server.GameServiceImpl.createGame(GameServiceImpl.java:39)
The frustrating part is that I copied and pasted the exact same code into a simple Java program and it works, so I know the code is correct; evidently the Jetty server that GWT creates for local testing has a problem executing it. What else can I do, or what should I be doing in this situation with GWT? Shouldn't the back-end of a GWT application be able to SSH?
I suggest you try running your GWT app in a different web container (Tomcat or JBoss). You can still make use of the debugging functionality by running hosted mode with the -noserver flag, along the lines of the sketch below.
See here
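As a rough sketch of such a launch (module name, startup URL, and classpath entries are placeholders to adjust for your project):

java -cp "src:war/WEB-INF/classes:gwt-dev.jar:gwt-user.jar" \
  com.google.gwt.dev.DevMode \
  -noserver \
  -port 8888 \
  -startupUrl http://localhost:8080/myapp/MyApp.html \
  com.example.MyApp

With -noserver, DevMode only runs the code server for debugging; the web app itself, including your servlets, runs in the external container on port 8080.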