Within Elasticsearch, I configured the JDBC river plugin and it worked fine before. After configuring Shield and assigning an admin user, Elasticsearch is secured and can still be accessed via TransportClient, but when I run the river plugin script I get the following exception:
[pool-3-thread-1] ERROR river.jdbc.RiverPipeline - action [org.xbib.elasticsearch.action.river.jdbc.state.get] is unauthorized for user [ddtuser]
org.elasticsearch.shield.authz.AuthorizationException: action [org.xbib.elasticsearch.action.river.jdbc.state.get] is unauthorized for user [ddtuser]
at org.elasticsearch.shield.authz.InternalAuthorizationService.denial(InternalAuthorizationService.java:247)
BTW, I already modified JDBCFeeder.java to pass shield.user into the settings, but no luck:
Settings clientSettings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", settings.get("elasticsearch.cluster", "elasticsearch"))
        .put("shield.user", "ddtuser:*mypassword*")
        .build();
I'm trying to connect the SPGo plugin in Visual Studio Code to a SharePoint Online site. There are lots of guides for this, for instance this one: https://medium.com/niftit-sharepoint-blog/saying-goodbye-to-sharepoint-designer-ac939a0b79ba
In short, I'm doing it like this:
Open VS Code
Open a local, empty folder
SPGo: Configure workspace (following the guide, ending up with an spgo.json like the one I pasted)
SPGo: Populate local workspace (it asks me for credentials and I enter them O365-style: email and password)
The status bar says "Populating workspace"
After about 10 seconds I get the pasted error in the output window (spgo)
I'm using the newest versions:
Visual Studio Code 1.37.1
SPGo 1.4.3
I have tried various sites in my tenant and I know they are up. I am Site Collection Administrator for the sites. I know the credentials are correct, of course. Changing remoteFolders and publishingScope doesn't affect anything. I assume authenticationType should be "Digest".
SPGo.json:
{
"sourceDirectory": "src",
"sharePointSiteUrl": "https://domain.sharepoint.com/sites/SiteName",
"publishingScope": "Major",
"authenticationType": "Digest",
"remoteFolders": [
"/SitePages/"
]
}
I don't get any files in the local folder, instead I get an error in the output:
================================ ERROR ================================
<s:Fault>
<s:Code>
<s:Value>s:Receiver</s:Value>
<s:Subcode>
<s:Value xmlns:a="http://schemas.microsoft.com/net/2005/12/windowscommunicationfoundation/dispatcher">a:InternalServiceFault</s:Value>
</s:Subcode>
</s:Code>
<s:Reason>
<s:Text xml:lang="en-US">The server was unable to process the request due to an internal error. For more information about the error, either turn on IncludeExceptionDetailInFaults (either from ServiceBehaviorAttribute or from the <serviceDebug> configuration behavior) on the server in order to send the exception information back to the client, or turn on tracing as per the Microsoft .NET Framework SDK documentation and inspect the server trace logs.</s:Text>
</s:Reason>
</s:Fault>
Error Detail:
----------------------
{}
===============================================================================
Sorry I missed this post for so long. First, thanks for the detailed write-up. This is the first time I've seen this specific issue with SPGo, so I don't know for sure what the root cause is.
A couple of questions:
Are you using ADFS Authentication with your Office 365/SharePoint Online instance?
Are you able to use Addin-Only Authentication on this SP Site?
SPGo should be able to work with ADFS in SharePoint Online automatically but, as a fallback, you could use Addin-Only Authentication. In this scenario you would create a ClientId and ClientSecret pair for the SharePoint site collection you are accessing and authenticate using those credentials. The ClientId would act as your username, and the ClientSecret would be your password.
Under the covers, I am using the node-sp-auth package for user authentication. Sergei (s-KaiNet on GitHub) has a great write-up on how to enable Addin-Only Authentication in SharePoint Online on his site, which you can find here.
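If you want to sanity-check an Addin-Only ClientId/ClientSecret pair outside of SPGo, a rough node-sp-auth sketch looks like this (the site URL and credentials are placeholders):

import * as spauth from 'node-sp-auth';

// Addin-Only authentication against SharePoint Online; node-sp-auth is the
// package SPGo uses internally. Replace the placeholders with your own values.
spauth.getAuth('https://domain.sharepoint.com/sites/SiteName', {
    clientId: '<client-id>',
    clientSecret: '<client-secret>'
}).then(auth => {
    // auth.headers contains the Authorization header to attach to REST calls
    console.log(auth.headers);
});

If that call succeeds, the same pair should work as the username/password SPGo prompts for.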
Thanks for using SPGo!
I am trying to connect pyeve with a MongoDB Atlas replica set (https://cloud.mongodb.com/). I've successfully connected DB management tools from the same host, to make sure the deployment is working OK.
One particularity is that with Atlas all users must authenticate against the auth database; I cannot put my users in the application database, so I need to set authSource in MONGO_URI.
Now, when defining the MONGO_URI for the replica set in settings.py, like this:
MONGO_URI = "mongodb://<USER>:<PASS>@my-shard-00-00-tlati.mongodb.net:27017,my-shard-00-01-tlati.mongodb.net:27017,my-shard-00-02-tlati.mongodb.net:27017/<MY_DB>?ssl=true&replicaSet=my-shard-0&authSource=admin"
The authSource=admin parameter seems to be ignored (I've checked by debugging pymongo's auth, and the authentication source used is None).
MONGO_AUTH_SOURCE could be used to set the authorization database, but it has no effect since MONGO_URI takes precedence over the other configuration variables, according to Eve's documentation.
Is this an issue or am I doing it wrong?
Found out that the problem was that I was using version 0.4.1 of flask-pymongo. Updating it to version 0.5.1 fixed the problem.
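If you manage dependencies with pip, the upgrade is just:
pip install --upgrade "flask-pymongo>=0.5.1"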
I am running 7.0.0.CR2 of the workbench and server in a Docker container. At first sight they appear to be working perfectly together. However, when I select the Tasks tab in the workbench I get the following error:
Unable to complete your request. The following exception occurred:
Can't lookup on specified data set: jbpmHumanTasksWithUser.
This led me to this bug: https://issues.jboss.org/browse/JBPM-5432
There they say that this is caused by a user not having the kie-server role. There is no kie-server role in my installation; there is, however, a kie-server group, and the user I am using is a member of this group.
Dockerfile and user and role files can be found here:
https://gist.github.com/martijnburger/c9a1072746d94ffe4beff72830e03ca7
I believe it could be due to a missing login module in your setup. To ensure the role/authentication is passed on to the KIE Server, you need to add a custom login module. Please check this example as a reference: https://github.com/cristianonicolai/kie-wb-dev-docker/blob/master/src/main/resources/standalone-full-kie.xml#L379
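The gist of the linked configuration is adding a KIE-specific login module to the security domain the workbench/server uses, roughly along these lines (a sketch only; the exact module name, flags, and war name depend on your distribution, so defer to the linked standalone-full-kie.xml):

<security-domain name="other" cache-type="default">
    <authentication>
        <login-module code="Remoting" flag="optional">
            <module-option name="password-stacking" value="useFirstPass"/>
        </login-module>
        <login-module code="RealmDirect" flag="required">
            <module-option name="password-stacking" value="useFirstPass"/>
        </login-module>
        <login-module code="org.kie.security.jaas.KieLoginModule" flag="optional" module="deployment.kie-wb.war"/>
    </authentication>
</security-domain>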
I can connect to xx.xx.xx.xx:9200 using the head plugin when X-Pack is not installed.
With X-Pack enabled on my Elasticsearch 5.2.0, xx.xx.xx.xx:9200 requires a login, but head on port 9100 can't connect to xx.xx.xx.xx:9200. Where do I enter the user and password for X-Pack?
I tried this setting in elasticsearch.yml:
http.cors.allow-headers: Authorization
And used this URL to connect to ES:
http://xx.xx.xx.xx:9200/es-head/?auth_user=elastic&auth_password=changeme
But it doesn't work. I got this response message:
"missing authentication token for REST request [/es-head/?auth_user=elastic&auth_password=changeme]"
My ES version: 5.2.0
I made it work.
When I used
http://xx.xx.xx.xx:9200/es-head/?auth_user=elastic&auth_password=changeme
I got the message
Cannot GET /es-pack/
So I changed it to
http://xx.xx.xx.xx:9200/?auth_user=elastic&auth_password=changeme
I want to know why.
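For reference, besides the Authorization header, connecting the head plugin (on port 9100) to an X-Pack-secured cluster usually also requires CORS to be enabled in elasticsearch.yml, along these lines (the allow-origin value here is deliberately permissive and only an example):
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization,X-Requested-With,Content-Type,Content-Length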
I'm unable to browse data with BigSheets on BigInsights on Cloud. When I select a file and change the Reader type, I receive the exception (see screenshot, below).
I was previously able to browse data ok.
Have you logged into BigSheets as the biadmin user? I think there is a permission issue, since you are trying to access a file created by biadmin (/user/biadmin).
One way to confirm is to log in to BigSheets as biadmin and try the same thing.
Alternatively, you can try impersonating the biadmin user and re-running your scenario:
Log in to the Ambari UI, then go to HDFS --> Config --> Advanced --> Custom core-site and update or add these properties:
hadoop.proxyuser.biadmin.groups=*
hadoop.proxyuser.biadmin.hosts=*
The solution for me was to restart the BigX services in the Ambari console.