Changing the hostname in WSO2 Identity Server

I'm following WSO2 IS's guide to changing the hostname, and I feel like it leaves out a rule that I don't know, or (again) assumes I should already know it.
https://docs.wso2.com/display/IS550/Changing+the+Hostname
I'll do a quick rundown of what I did and put the questions at the end. Each numbered item below corresponds to the respective step in that guide.
I had my HostName/MgtHostName set to somename.something.ca (these are set in carbon.xml; see the snippet below).
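For reference, the hostname entries live in repository/conf/carbon.xml (path and element names as in IS 5.x; verify against your version):
<HostName>somename.something.ca</HostName>
<MgtHostName>somename.something.ca</MgtHostName>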
I put the original wso2carbon.jks in a separate folder as a backup and created a new keystore "wso2carbon.jks" using this command line with adjusted values:
keytool -genkey -alias newcert -keyalg RSA -keysize 2048 -keystore newkeystore.jks -dname "CN=<testdomain.org>, OU=Home,O=Home,L=SL,S=WS,C=LK" -storepass mypassword -keypass mypassword
I was able to export a public key from my keystore by adjusting their command to the appropriate values.
Same as step 3: I changed the values in the command so they fit my alias and public key, and I was able to import the public key. (Both commands are sketched below.)
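For anyone following along, the export/import pair looks roughly like this. The truststore name client-truststore.jks and its password wso2carbon are the usual IS defaults, but verify them for your version:
keytool -export -alias newcert -keystore newkeystore.jks -storepass mypassword -file newcert.pem
keytool -import -alias newcert -file newcert.pem -keystore client-truststore.jks -storepass wso2carbon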
Changed "localhost" to "somename.something.ca" everywhere it existed (identity.xml, authenticators.xml etc...)
Everything ran smoothly and I started my WSO2 IS service. However, I was not able to reach somename.something.ca (the URL does not exist), and when I tried to access my IS, the SAML SSO referred to localhost again (I tried this in Incognito mode too)! I don't know why it kept doing that even after I went into my admin dashboard and changed the SAML SSO identity provider name from localhost to the new hostname.
For now, I just want to be able to refer to localhost by a different name and at least be able to access my identity server from outside my LAN. I should note that I am testing WSO2 IS on a remote desktop (Windows Server) where we do development for the site.
So should I use my IP as my hostname? Does my hostname actually have to exist, or can it just be a placeholder? Does my hostname have to be different from 'localhost' so that I can access it from outside my LAN? If I change the hostname, can I still access the dashboard as 'localhost' AND as my new hostname from the computer where WSO2 IS runs as a service?
I'm sorry for all the questions; I'm new to this stuff, and I think the WSO2 documentation leaves out so much that I need to know to make this work.
Your answers are much appreciated.
EDIT:
Now, with my new hostname set up (after following the guide) and while WSO2 IS is running, my localhost won't load. If I try going to localhost (in Chrome), it takes about 30 seconds and then says "localhost took too long to load."

I'm currently using IIS, so what you need to do is create an active website using IIS Manager. I just created a subdomain and plugged it in for every 'localhost' occurring in my .xml files. That allowed me to change my hostname.
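If you only need the new name to resolve on the test machine itself, a hosts-file entry also works; this is an illustrative entry using the hostname from the question, added to C:\Windows\System32\drivers\etc\hosts:
127.0.0.1    somename.something.ca
Note that this only makes the name resolve locally; reaching the server from outside the LAN still requires real DNS (or the IIS binding above) pointing at the machine.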
Special thanks to gusto2.

Related

How can I resolve the tailscale HTTPS error "SSL_ERROR_RX_RECORD_TOO_LONG"

I just set up MagicDNS and HTTPS on my Tailscale account.
Then I SSH'd into my NAS and issued a TLS certificate with
sudo tailscale cert "machinename.tailnetalias.ts.net"
Response was:
Wrote public cert to machinename.tailnetalias.ts.net.crt
Wrote private key to machinename.tailnetalias.ts.net.key
Now, when I try to access the web interface of my NAS via https:// in a browser, I get an error. Firefox, for example, says "SSL_ERROR_RX_RECORD_TOO_LONG".
What can I do about this?
The tailscale cert command doesn't know where the certificate files should be installed (it doesn't even know what you were planning to do with them). So the first question is: did you move those files somewhere to install them? If not, the certificate producing SSL_ERROR_RX_RECORD_TOO_LONG is likely some other cert file that was already there.
If the tailscale cert files did get installed, I think the next step would be to click on the lock icon in Firefox on the left side of the URL. It shows a bunch of information about the TLS connection, in particular:
whether the certificate had something wrong with it
in the Technical Details section, which TLS version was used (SSL2, SSL3, TLS1.0, TLS1.1, TLS1.2, TLS1.3)
The SSL_ERROR_RX_RECORD_TOO_LONG error was mostly a problem with older TLS versions, 1.1 and before. If the TLS version is one of those, it may be necessary to figure out how to get the NAS to stop offering the older versions and only offer 1.2 and 1.3.
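One quick way to see what the NAS is actually offering on port 443 (a diagnostic sketch, assuming openssl is available, using the hostname from the question):
openssl s_client -connect machinename.tailnetalias.ts.net:443 -servername machinename.tailnetalias.ts.net
The output shows the certificate the server presents and the negotiated TLS version; if the handshake fails immediately, the service may not be speaking TLS on that port at all.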

How to solve authentication failure with CNAME in URL

We have a web application written with Liferay 6.2 and deployed on a Tomcat server. The application is accessed using Integrated Windows Authentication. Everything works fine if the hostname is used directly in the URL.
To hide the actual hostname, a CNAME record was created. When that is used for access, users get repeated prompts for credentials, and authentication is rejected despite entering the correct credentials.
We tried creating an SPN for the CNAME using the command setspn -a "HTTP/<<friendly name>>". Since the connection is made on the standard port 443 over HTTPS, no port number was specified when creating the SPN. However, the repeated authentication prompts still appear. The application runs under a service account; including the service account when creating the SPN could be an option. Please share any suggestions on what else could be tried.
What does "everything works fine" mean? Are you getting prompted and when you enter creds it works correctly, or it does SSO and logs you in without a prompt?
The fact that you're getting prompted means either a) the new CNAME isn't considered to be in the Intranet/Trusted Sites zone (see Internet Options > Security > Local Intranet/Trusted Sites > Sites), or b) the ticket requested for the server failed.
Also, usually you don't register the CNAME as an SPN; you register the A record the CNAME points to as the SPN. My guess is this is causing the failure: the SPN is getting registered to the wrong service account, so the KDC is using the wrong service account key.
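For illustration, registering the SPN against the A-record host and the service account the app actually runs under would look roughly like this (realhost.example.com and MYDOMAIN\svc-liferay are hypothetical names):
setspn -S HTTP/realhost.example.com MYDOMAIN\svc-liferay
setspn -L MYDOMAIN\svc-liferay
-S checks for duplicate SPNs before adding, and -L lists what is registered on the account, so you can spot an entry that landed on the wrong account.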

HTTPS Redirect from non secure server gives Error Message in Browser

We decided to move from a shared hosting platform to an AWS-based hosting environment (Acquia Cloud specifically). This environment doesn't offer email services, so the client kept the shared hosting to continue using it for email (they didn't want to spend the extra $2,400 per year for G Suite email hosting).
In order to achieve this, we worked with the new host to use the shared site as a pass-through, so that email still goes there and web traffic goes to the new server.
The nameservers point to the shared host. We have a DNS CNAME for www.example.com pointed at the new AWS server and the A record pointed at the shared host; it was the only way to keep email running. When we tried pointing the A record at AWS, email went down. This setup was the suggestion from the hosting company.
So now, if they go to http://example.com, https://www.example.com, http://www.example.com, or www.example.com, it all works fine, no problem. However, if they go to https://example.com they get a certificate error in the browser.
When we moved to the new host, the SSL certificate went with it. This causes some search engine issues. I have an .htaccess redirect set up, but the error still appears (see the sketch below for why).
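For context, a typical .htaccess redirect of this kind looks like the following (illustrative; your actual rules may differ). It cannot fix the certificate error, because the browser validates the certificate during the TLS handshake, before the server ever gets a chance to serve the redirect:
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]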
This is the best that I and both hosting companies could come up with, and it's not a great solution.
Is there a solution other than:
carrying an SSL certificate for both accounts, or
moving email to a third-party provider like Gmail?
If there isn't, we'll go with one of these options, but I figured I'd ask first.
The only issue here is that your certificate does not have example.com in its SAN (Subject Alternative Name). By default you should get the bare domain in the SAN, but a few CAs don't include it unless you ask them to. On Windows, just save your certificate file in .crt or .cer format to view the SAN.
Otherwise, you can use the command below if you're on Linux and the certificate is installed on the server:
openssl s_client -connect example.com:443 </dev/null 2>/dev/null | openssl x509 -noout -text | grep DNS
It will list the SAN entries.

Certificate bound to port not accepted

I made a tool that exposes a web interface on localhost. Now I need this web interface to register an HTTPS prefix for a page. For this I'm using BouncyCastle to generate a root certificate and an SSL certificate. This all works well (generating, signing, and binding to the port); IE displays the page over HTTPS without certificate warnings, etc.
However, when a third-party app tries to display the webpage, it fails (unable to load, displaying 'about:blank'). Because it is an embedded web browser, I am not sure what the exact problem is. So, among other things, I tried using Fiddler to pin down the problem, only to find it DOES accept the certificate Fiddler generates.
So what I did was export the Fiddler certificates and remove all custom certificates from the stores. Then I imported the Fiddler certificates into the exact same stores where my generated certificates were. I also made sure that the build-up (all the stuff you can inspect by viewing the certificate properties) is exactly the same. Using Windows MMC and clicking through the certificates, I can see NO difference; even the order is the same. Critical flags and such all match. The only thing that is slightly different: the serial numbers of my certificates are shorter than the ones Fiddler generates.
So what I end up with are 4 certs (I deleted all the originals from Fiddler): 1 SSL and 1 root from Fiddler, and 1 SSL and 1 root from BouncyCastle. The roots are in Trusted Root Certification Authorities and the SSL certs in Personal, both on LocalMachine. Now, when I use netsh to bind the Fiddler cert to the port, it works. When I bind my own certificate to the port, it fails.
I have no idea why, as all the properties look the same to me.
There is one thing, though (again, I have no idea what is going wrong, so this might be irrelevant): on the SSL cert (not the root one) the SKI points to nowhere (or at least I don't see where it points), but this seems to be the case on the Fiddler cert as well. Obviously, for both certs the Authority Key Identifier points to their respective roots. The SKI on the SSL cert is set by
certificateGenerator.AddExtension(X509Extensions.SubjectKeyIdentifier, false, new SubjectKeyIdentifierStructure(subjectKeyPair.Public));
BTW, I use a VM for testing which is reset every time, so I don't think I messed up the cert store somewhere along the way. The tool stays the same; the only thing that changes is the bound certificate. Both are registered to 'localhost'. The results:
            IE      third-party browser
Fiddler's   good    good
Own         good    fail (without message)
Why can two seemingly identical certs behave differently? Is there anything I'm missing in hidden properties or something? And if so, what should I look for?
Ahhhhhhhhhhhhhhh... Minutes after this post I saw the flaw... It had nothing to do with the certificate at all, but with the way it was bound to the port... I used code from Mike Bouck to bind the certificate. This line was causing the problem:
configSslParam.DefaultFlags = (uint)NativeMethods.HTTP_SERVICE_CONFIG_SSL_FLAG.HTTP_SERVICE_CONFIG_SSL_FLAG_NEGOTIATE_CLIENT_CERT;
Changing the flags to 0 made it work....
Wasted hours.... :(
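For anyone binding with netsh instead of the HTTP API directly, the equivalent is to bind without client-certificate negotiation; an illustrative binding (port, thumbprint, and appid are placeholders):
netsh http add sslcert ipport=0.0.0.0:8443 certhash=<thumbprint> appid={00112233-4455-6677-8899-aabbccddeeff} clientcertnegotiation=disable
The HTTP_SERVICE_CONFIG_SSL_FLAG_NEGOTIATE_CLIENT_CERT flag corresponds to clientcertnegotiation=enable, which makes the server request a client certificate during the handshake; an embedded browser with no client certificate to offer can fail in exactly this way.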

Npgsql 3.0.3 error with Power BI Desktop

I'm receiving the following error when connecting to an AWS Postgres database that requires SSL. I recently upgraded from Npgsql 2.3.2 (which was buggy) to 3.0.3, which won't connect. Any suggestions would be appreciated.
DataSource.Error: TlsClientStream.ClientAlertException:
CertificateUnknown: Server certificate was not accepted. Chain status:
A certificate chain could not be built to a trusted root authority.
at TlsClientStream.TlsClientStream.ParseCertificateMessage(Byte[] buf, Int32& pos)
at TlsClientStream.TlsClientStream.TraverseHandshakeMessages()
at TlsClientStream.TlsClientStream.GetInitialHandshakeMessages(Boolean allowApplicationData)
at TlsClientStream.TlsClientStream.PerformInitialHandshake(String hostName, X509CertificateCollection clientCertificates, RemoteCertificateValidationCallback remoteCertificateValidationCallback, Boolean checkCertificateRevocation)
Details:
DataSourceKind=PostgreSQL
I was able to fix the issue by installing the Amazon RDS public certificate on my machine. Once I did this, I was able to connect.
Steps I followed:
Download the AWS RDS public certificate.
Create a .crt file from the downloaded .pem file (a sample conversion is sketched after this list).
Install the certificate (.crt file) on the machine.
Connect!
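The .pem-to-.crt conversion can be done with openssl; a minimal sketch, assuming the downloaded file is named rds-ca-root.pem (the actual filename depends on which RDS certificate bundle you download):
openssl x509 -outform der -in rds-ca-root.pem -out rds-ca-root.crt
Double-clicking the resulting .crt on Windows opens the certificate import wizard; install it into Trusted Root Certification Authorities.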
The Npgsql docs give the solution of changing the Trust Server Certificate connection string parameter from its default of 'false' to 'true'.
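In a connection string that looks like the following (host and credentials here are placeholders):
Host=mydb.example.us-east-1.rds.amazonaws.com;Username=myuser;Password=mypassword;SSL Mode=Require;Trust Server Certificate=true
The catch, as noted below, is that Power BI and Excel don't expose the raw connection string.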
Unfortunately, neither Excel (AFAIK) nor Power BI will let you edit the connection string. So if you can't get the SSL certificate from the DB admin (as suggested in another answer), or the SSL cert has a different server name from the name you connect to (in my case, an IP address), there is not much that can be done.
I can see two ways of fixing this: either Shay & co. from Npgsql (who are doing an excellent job, btw) provide a way for users to change the defaults for the connection string parameters, or Microsoft allows users to pass keywords in the connection dialog of Power BI (and Excel).
Npgsql 2.x didn't validate the server's certificate by default, so self-signed certificates were accepted. The new default is to perform validation, which is probably why your connection is failing. Specify the Trust Server Certificate connection string parameter to get the previous behavior back.
You can read more on the Npgsql security doc page; note also that this change is mentioned in our migration notes.
I had the same issue connecting Power BI to a locally hosted PostgreSQL server, and it turned out to be easy to solve if you can get the right information. Recent Npgsql versions will only connect over SSL if they trust the certificate of the server. As a Windows application, Power BI uses the Windows certificate store to decide what to trust. If you can get the SSL cert for the PostgreSQL server (or the CA cert used to sign it) and tell Windows to trust that certificate, Power BI will trust it too.
In the configuration folder for the PostgreSQL server there is a postgresql.conf file; search it for the ssl settings, one of which gives the location of the SSL cert (see the snippet below). Note: NOT the key file, which contains the private key; only the cert file, which contains the public key. Copy it (or its contents) to the machine running Power BI and import it using Run | mmc | Add Snap-in... | Certificates (Google it).
Look at the server name once you've imported the cert, and connect from Power BI using the same server name (so the cert matches the connection). That solved the problem for me. If PostgreSQL is configured to insist on an SSL connection, you might have to do the same for an ODBC connection too.
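For reference, the relevant postgresql.conf lines typically look like this (server.crt and server.key are the defaults; your file may point elsewhere):
ssl = on
ssl_cert_file = 'server.crt'
ssl_key_file = 'server.key'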
It's not the best way, but it worked for me, if you don't need encryption for security reasons.
Go to the Postgres config file on your DB server and change
ssl = true
to
ssl = false
Then open Power BI Desktop: File -> Options and settings -> Data source settings; in Global you will find your saved connection. Press Edit Permissions and uncheck "Encrypt connections".
Then it will work
WARNING: THIS IS NOT RECOMMENDED IF YOUR DB IS OPEN TO PUBLIC.