I need to enable SSO in my Red Hat environment, and I need to know which RPMs need to be installed.
I believe it's a case of configuring AD to support single sign-on against the WebSEAL instance. I am installing WebSEAL 6.1 (Tivoli Access Manager WebSEAL 6.1).
I have no knowledge regarding this. Can anyone brief me and help me with how to proceed and what steps should be taken? What should be the prerequisites?
There is a good writeup on IBM's InfoCenter about how to do this:
TAM 6.0:
http://publib.boulder.ibm.com/infocenter/tivihelp/v2r1/topic/com.ibm.itame.doc_6.0/rev/am60_webseal_admin211.htm?path=5_8_1_6_0_6_0_2_1_10_1_2#spnego-cfg-unix
TAM 6.1.1:
http://publib.boulder.ibm.com/infocenter/tivihelp/v2r1/topic/com.ibm.itame.doc_6.1.1/am611_webseal_admin709.htm?path=5_8_1_3_1_11_1_2#spnego-cfg-unix
SAM 7.0:
http://publib.boulder.ibm.com/infocenter/tivihelp/v2r1/topic/com.ibm.isam.doc_70/ameb_webseal_guide/concept/con_config_win_desktop_sso_unix.html
You have to:
Install IBM Kerberos client for WebSEAL
Create an entry in AD for the Linux server to auth against
Map the Kerberos principal to that AD user (the hardest part)
Enable SPNEGO on WebSEAL
Here are some of my notes that may help. However, I would strongly recommend walking through the instructions on the InfoCenter site, as they are almost exactly right.
For step 1, in the linux_i386 directory, install the IBM Kerberos client using:
rpm -i IBMkrb5-client-1.4.0.2-1.i386.rpm
For step 2, the ktpass command you run on your AD controller should look something like:
ktpass -princ HTTP/WEBSEAL_SERVER_NAME_NOTFQDN@ad-domain.org -pass new_password -mapuser WEBSEAL_SERVER_NAME_NOTFQDN -out c:\WEBSEAL_SERVER_NAME_NOTFQD_HTTP.keytab -mapOp set
Transfer that keytab file to your Linux server.
Also make sure the keytab file on the Linux server is owned by ivmgr:ivmgr with permissions 600; otherwise the WebSEAL process won't be able to read it.
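For example (a sketch only; webseal-host and the /var/pdweb/keytab directory are placeholders for your own server and whatever keytab location your WebSEAL configuration points at):
scp WEBSEAL_SERVER_NAME_NOTFQD_HTTP.keytab root@webseal-host:/var/pdweb/keytab/
chown ivmgr:ivmgr /var/pdweb/keytab/WEBSEAL_SERVER_NAME_NOTFQD_HTTP.keytab
chmod 600 /var/pdweb/keytab/WEBSEAL_SERVER_NAME_NOTFQD_HTTP.keytab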
For step 3, you will need to edit /etc/krb5/krb5.conf and configure the KDC, AD realm, and local DNS name. You can use the config.krb5 utility (mkkrb5clnt on AIX) to help with this:
config.krb5 -r AD-DOMAIN.ORG -c ad-domain.org -s ad-domain.org -d AD-DOMAIN
Edit krb5.conf and change:
[libdefaults]
default_tkt_enctypes = des-cbc-md5 des-cbc-crc
default_tgs_enctypes = des-cbc-md5 des-cbc-crc
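For reference, the relevant pieces of /etc/krb5/krb5.conf end up looking roughly like this (a sketch with placeholder names; adcontroller.ad-domain.org is an assumption, your KDC is normally an AD domain controller):
[libdefaults]
default_realm = AD-DOMAIN.ORG
default_tkt_enctypes = des-cbc-md5 des-cbc-crc
default_tgs_enctypes = des-cbc-md5 des-cbc-crc
[realms]
AD-DOMAIN.ORG = {
    kdc = adcontroller.ad-domain.org:88
    default_domain = ad-domain.org
}
[domain_realm]
.ad-domain.org = AD-DOMAIN.ORG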
From my notes, you can test the Kerberos configuration using the following (this is all documented in the InfoCenter article):
/usr/krb5/bin/kinit webseal@AD-DOMAIN.ORG
Enter the password for the WebSEAL user, then use klist to check things.
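For example, to list the tickets you just obtained:
/usr/krb5/bin/klist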
For step 4, just edit the WebSEAL config file and change:
[spnego]
spnego-auth = https
[authentication-mechanisms]
kerberosv5 = /opt/PolicyDirector/lib/libstliauthn.so
If your clients are configured correctly, then as long as their AD account name matches their TAM account name it will work. You can also have WebSEAL append @DOMAIN.ORG when mapping to a TAM user, which is handy if you are going to have multiple domains set up for SSO. However, you then have to have TAM accounts named user@domain.org within your directory to map to.
You can specify what authentication level SPNEGO comes in at by modifying the [authentication-levels] section of the WebSEAL config file; that level's entry would be level = kerberosv5.
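For example (a sketch only; level 0 must remain unauthenticated, and the exact list depends on which other authentication mechanisms you have enabled):
[authentication-levels]
level = unauthenticated
level = password
level = kerberosv5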
Good luck and have patience. Getting the Kerberos client set up on the Linux box was the most difficult part. It's a bit tricky figuring out when it wants the uppercase DNS domain name, the lowercase DNS domain name, or just the plain AD domain name.
I'm trying to create some kind of user authentication to prevent unwanted access to my Node-RED user interface. I've searched online and found 2 solutions, which for some reason didn't work out. Here they are:
Tried to add the httpNodeAuth: {user:"user", pass:"password"} key to bluemix-settings.js, but after that my dashboard kept prompting me to type a username and password, even after I typed the password defined in the pass:"password" field.
Added the user-defined environment variables NODE_RED_USERNAME : username and NODE_RED_PASSWORD : password. But nothing changed.
Those solutions were suggested here: How could I prohibit anonymous access to my NodeRed UI Dashboard on IBM Cloud(Bluemix)?
Thanks for the help, guys!
Here is a little bit of the 'bluemix-settings.js':
autoInstallModules: true,
// Move the admin UI
httpAdminRoot: '/red',
// Serve up the welcome page
httpStatic: path.join(__dirname,"public"),
//GUI password authentication (ALEX)
httpNodeAuth: {user:"admin",pass:"$2y$12$W2VkVHvBTwRyGCEV0oDw7OkajzG3mdV3vKRDkbXMgIjDHw0mcotLC"},
functionGlobalContext: { },
// Configure the logging output
logging: {
As described in the Node-RED docs here, you need to add a section as follows to the settings.js (or, in the case of Bluemix/IBM Cloud, the bluemix-settings.js file):
...
httpNodeAuth: {user:"user",pass:"$2a$08$zZWtXTja0fB1pzD4sHCMyOCMYz2Z6dNbM6tl8sJogENOMcxWV9DN."},
...
The pass field is a bcrypt hash of the password. There are 2 ways listed in the docs to generate the hash correctly.
If you have a local copy of Node-RED installed you can use the following command:
node-red admin hash-pw
As long as you have a local NodeJS install you can use the following:
node -e "console.log(require('bcryptjs').hashSync(process.argv[1], 8));" your-password-here
You may need to install bcryptjs first with npm install bcryptjs.
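Putting the second option together, the flow looks like this (the hash format below is only an illustration; your output will differ):
npm install bcryptjs
node -e "console.log(require('bcryptjs').hashSync(process.argv[1], 8));" your-password-here
The command prints a hash beginning with $2a$08$..., which is the value you paste into the pass field in bluemix-settings.js.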
I have ownCloud version 9.1.8 running on a Synology. Now I have installed ONLYOFFICE on a local server with a self-signed certificate. It is important to know that the ONLYOFFICE server runs only in the local network, so I cannot get a certificate from e.g. Let's Encrypt: I only have a local server name and not a public server name, so Let's Encrypt cannot verify the server. However, if needed (and if you have a solution that requires it), the server can access the internet.
Now I have the problem that ownCloud gives me the following error message
"Error while downloading the document file to be converted."
when I try to save the URL in the ONLYOFFICE configuration in ownCloud. I guess the problem is that I am using a self-signed certificate. Do you know what I can do? Google does not really help me.
"Error while downloading the document file to be converted."
means that DocumentServer cannot validate your storage's self-signed certificate (ownCloud in your case).
There are 2 possible workarounds:
1) Change "rejectUnauthorized" to false in the /etc/onlyoffice/documentserver/default.json config file
2) Change the default Node.js CA store:
Edit the files:
/etc/supervisor/conf.d/onlyoffice-documentserver-converter.conf
/etc/supervisor/conf.d/onlyoffice-documentserver-docservice.conf
Add the --use-openssl-ca flag to the Node.js parameters in each of those files.
Then you need to add your certificate to the default CA store and restart the ONLYOFFICE services:
supervisorctl restart all
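As a sketch, on a Debian/Ubuntu-based Document Server host (an assumption; adjust for your distribution, and owncloud-selfsigned.crt is a placeholder for your exported certificate) that would be:
cp owncloud-selfsigned.crt /usr/local/share/ca-certificates/
update-ca-certificates
supervisorctl restart all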
In PostgreSQL, whenever I call an API URL over a secure connection with a query like the one below
select *
from http_get('https://url......');
I get an error
SSL certificate problem: unable to get local issuer certificate
For this I have already placed an SSL folder in my Azure database installation directory at the following path
C:\Program Files\PostgreSQL\9.6\ssl\certs
What should I do to get rid of this? Is there any SSL extension available, or do I require configuration changes or any other effort?
Please let me know the possible solutions for it.
A few questions...
First, are you using this contrib module: https://github.com/pramsey/pgsql-http ?
Is the server that serves https://url....... using a self-signed (or invalid) certificate?
If the answer to those two questions is "yes" then you may not be able to use that contrib module without some modification. I'm not sure how limited your access is to PostgreSQL in Azure, but if you can install your own C-based contrib modules there is some hope...
pgsql-http only exposes certain CURLOPT values (see: https://github.com/pramsey/pgsql-http#curl-options), which are settable with http_set_curlopt().
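For reference, those exposed options are set like this (a sketch; assuming CURLOPT_TIMEOUT is among the options listed for your version of the extension):
select http_set_curlopt('CURLOPT_TIMEOUT', '10');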
For endpoints using self-signed certificates, I expect the CURLOPT you'll want to include support for, in order to ignore SSL errors, is CURLOPT_SSL_VERIFYPEER.
If there are other issues like SSL/TLS protocol or cipher mismatches, there are other CURLOPTs that can be patched-in, but those also are not available without customization of the contrib module.
I don't think anything in your
C:\Program Files\PostgreSQL\9.6\ssl\certs
folder has any effect on the http_get() functionality.
If you don't want to get your hands dirty compiling and installing custom contrib modules, you can create an issue on the github page of the maintainer and see if it gets picked up.
You might also take a peek at https://github.com/pramsey/pgsql-http#why-this-is-a-bad-idea because the author of the module makes several very good points to consider.
I am new to this OPC-UA world and Eclipse Milo.
I do not understand how the security works here.
Discussing the client-examples provided by eclipse-milo,
I see a few security-related properties being used to connect to the OPC UA server:
SecurityPolicy,
MessageSecurityMode,
clientCertificate,
clientKeyPair,
setIdentityProvider,
How are the above configurations linked with each other?
I was trying to run client-examples -> BrowseNodeExample.
This example internally runs the ExampleServer.
ExampleServer is configured to run with the Anonymous and UsernamePassword providers. It is also configured to accept SecurityPolicy.None, Basic128Rsa15, Basic256, and Basic256Sha256, with MessageSecurityMode SignAndEncrypt, except for SecurityPolicy.None where MessageSecurityMode is None too.
The problem is that with AnonymousProvider I could connect to the server with every SecurityPolicy and MessageSecurityMode pair mentioned above (without client certificates provided).
But I could not do the same with UsernameProvider: only the SecurityPolicy/MessageSecurityMode pair None/None works.
All other pairs throw a security-checks-failed exception (when a client certificate is provided) or a user-access-denied error (when no client certificate is provided). How do I make this work?
Lastly, it would be really nice if someone could point me to proper user documentation for Eclipse Milo, since I could not find any documentation except the example code, which is not documented.
SecurityPolicy and MessageSecurityMode go hand-in-hand. The security policy dictates the set of algorithms that will be used for signatures and encryption, if any. The message security mode determines whether the messages will be signed, signed and encrypted, or neither in the case where no security is used.
clientCertificate and clientKeyPair must be configured if you plan to use security. You can't use encryption or signatures if you don't have a certificate and private key, after all.
The IdentityProvider is used to provide the credentials that identify the user of the session, if any.
When the ExampleServer starts up, it logs that it's using a temporary security directory, something like this: security temp dir: /var/folders/z5/n2r_tpbn5wd_2kf6jh5kn9_40000gn/T/security. When a client connects using any kind of security, its certificate is not initially trusted by the server, resulting in the Bad_SecurityChecksFailed errors you're seeing. Inside this directory you'll find a folder named rejected, where rejected client certificates are stored. If you move the certificate(s) to the trusted folder, the client should then be able to connect using security.
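To show how those pieces fit together, here is a rough sketch of a client setup modeled on the Milo client examples. It is not a drop-in implementation: package locations and builder methods vary a little between Milo versions, and the application name, URI, and credentials are placeholder values.
import java.security.KeyPair;
import java.security.cert.X509Certificate;
import org.eclipse.milo.opcua.sdk.client.OpcUaClient;
import org.eclipse.milo.opcua.sdk.client.api.config.OpcUaClientConfig;
import org.eclipse.milo.opcua.sdk.client.api.identity.UsernameProvider;
import org.eclipse.milo.opcua.stack.core.types.builtin.LocalizedText;
import org.eclipse.milo.opcua.stack.core.types.structured.EndpointDescription;

public class SecureClientSketch {
    // endpoint: an EndpointDescription discovered from the server whose
    //           SecurityPolicy/MessageSecurityMode match what you intend to use
    //           (e.g. Basic256Sha256 + SignAndEncrypt, or None + None).
    // certificate/keyPair: the client application certificate and its key pair,
    //           required whenever the mode is Sign or SignAndEncrypt.
    public static OpcUaClient connect(EndpointDescription endpoint,
                                      X509Certificate certificate,
                                      KeyPair keyPair) throws Exception {
        OpcUaClientConfig config = OpcUaClientConfig.builder()
            .setApplicationName(LocalizedText.english("example client"))     // placeholder
            .setApplicationUri("urn:example:client")                         // placeholder
            .setEndpoint(endpoint)        // carries the security policy + mode to use
            .setCertificate(certificate)  // used for signing/encryption of the secure channel
            .setKeyPair(keyPair)
            .setIdentityProvider(new UsernameProvider("user1", "password1")) // who the session user is
            .build();

        OpcUaClient client = OpcUaClient.create(config); // older Milo versions: new OpcUaClient(config)
        client.connect().get();
        return client;
    }
}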
I'm trying to deploy a VSTO solution, which are 2 addins for Word and for Outlook, using ClickOnce. Due to our deployment infrastructure/practices, I cannot publish it using Visual Studio, it is instead built on a build server and deployed via a deployment server.
For local development, a self-signed certificate is used. The deployment worked with this self-signed certificate (if the self-signed certificate was installed on the machine), but now I want to add a real company certificate so that the application can be deployed to the users.
During deployment, after the configuration files are poked, they are updated and re-signed with the real certificate. However, this produces the following error during installation:
System.Security.SecurityException: Customized functionality in this application will not work because the certificate used to sign the deployment manifest for <app name> or its location is not trusted. Contact your administrator for further assistance.
at Microsoft.VisualStudio.Tools.Applications.Deployment.ClickOnceAddInTrustEvaluator.VerifyTrustPromptKeyInternal(ClickOnceTrustPromptKeyValue promptKeyValue, DeploymentSignatureInformation signatureInformation, String productName, TrustStatus status)
at Microsoft.VisualStudio.Tools.Applications.Deployment.ClickOnceAddInTrustEvaluator.VerifyTrustUsingPromptKey(Uri manifest, DeploymentSignatureInformation signatureInformation, String productName, TrustStatus status)
at Microsoft.VisualStudio.Tools.Applications.Deployment.ClickOnceAddInTrustEvaluator.VerifyTrustUsingPromptKey(Uri manifest, DeploymentSignatureInformation signatureInformation, String productName)
at Microsoft.VisualStudio.Tools.Applications.Deployment.ClickOnceAddInDeploymentManager.ProcessSHA1Manifest(ActivationContext context, DeploymentSignatureInformation signatureInformation, PermissionSet permissionsRequested, Uri manifest, ManifestSignatureInformationCollection signatures, AddInInstallationStatus installState)
at Microsoft.VisualStudio.Tools.Applications.Deployment.ClickOnceAddInDeploymentManager.VerifySecurity(ActivationContext context, Uri manifest, AddInInstallationStatus installState)
at Microsoft.VisualStudio.Tools.Applications.Deployment.ClickOnceAddInDeploymentManager.InstallAddIn()
The Zone of the assembly that failed was:
MyComputer
The only lead I have is that, after re-signing, the values in the publisherIdentity element are not changed (in both the .vsto and .manifest files); only the Signature element has values corresponding to the new certificate.
The following commands are used to sign the .vsto and .manifest files (as far as I can see from the deployment scripts):
mage.exe -Update "[path to .vsto/.manifest]"
mage.exe -Sign "[path to .vsto/.manifest]" -CertHash [certificateHash]
where [certificateHash] is the thumbprint of the real certificate and is used to look up the certificate in the certificate stores. I'm told this is a security measure so that the certificate file doesn't have to be distributed along with the deployment package.
After signing, the files have their Signature values changed, but the publisherIdentity still has the name and issuerKeyHash of the self-signed certificate.
I tried poking these two values prior to re-signing, but I don't know how to calculate the issuerKeyHash.
Any advice on how to proceed would be much appreciated!
Edit:
I was trying out other mage.exe parameters, like '-TrustLevel FullTrust' (which didn't have any effect) or '-UseManifestForTrust True' along with the Name and Publisher parameters, which yielded this error message (different from the one mentioned above):
************** Exception Text **************
System.InvalidOperationException: You cannot specify a <useManifestForTrust> element for a ClickOnce application that specifies a custom host.
at Microsoft.VisualStudio.Tools.Applications.Deployment.ClickOnceAddInDeploymentManager.GetManifests(TimeSpan timeout)
at Microsoft.VisualStudio.Tools.Applications.Deployment.ClickOnceAddInDeploymentManager.InstallAddIn()
The certificate that the app is signed with isn't trusted by Windows. As a workaround:
Right click on setup.exe,
Select properties then the Digital Signatures tab
Select Vellaichamy/user then click Details
Click View Certificate and Click Install Certificate.
Do not let it automatically choose where to store the cert; install the certificate in the Trusted Root Certification Authorities store. Once the cert is installed, the app should install...
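If you would rather script this than click through the dialogs, something like the following from an elevated command prompt does the same (mycert.cer is a placeholder for the exported certificate file):
certutil -addstore Root mycert.cer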
Take a look at the Granting Trust to Office Solutions article which states the following:
If you sign the solution with a known and trusted certificate, the solution will automatically be installed without prompting the end user to make a trust decision. After a certificate is obtained, the certificate must be explicitly trusted by adding it to the Trusted Publishers list.
For more information, see How to: Add a Trusted Publisher to a Client Computer for ClickOnce Applications.
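As a sketch, adding the certificate to the Trusted Publishers store can also be done from an elevated command prompt (again, mycert.cer is a placeholder for your company certificate exported as a .cer file):
certutil -addstore TrustedPublisher mycert.cer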
Also you may find the Deploying an Office Solution by Using ClickOnce article helpful.
We have found what the problem was. We were using a version of the mage.exe tool from the Windows SDK in a folder named 7A (I don't remember the full paths, sorry). A colleague then found another folder with versions 7A, 8 and 8A. Once we took the .exe from the 8A folder, the installation worked as expected.
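If you need to check which copies of mage.exe are present on a build server, something like the following lists them (the SDK root path shown is typical but may differ on your machine):
where /r "C:\Program Files (x86)\Microsoft SDKs\Windows" mage.exe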
Try copying all the necessary files to the client computer then install. If you can avoid installing from the network drive you might be able to avoid this exception.