I have an application running on Wildfly 10 in a domain setup with more than 10 machines. Clients consume REST web services using SSL authentication. In this scenario we will be adding clients on a daily basis, so it is important to be able to propagate truststore changes to the whole server group.
It's not an option to centralize the truststore on one machine due to concurrency levels.
I would like to know if there is a way to achieve this using the CLI or any other alternatives.
Thanks in advance!
Given that Wildfly does not support reloading the truststore at runtime (see https://access.redhat.com/solutions/482133), you would copy the truststore file to all servers (by hand, by script, by Puppet/Ansible/your DevOps tool of choice) and use the CLI to restart the affected server groups in the domain.
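A minimal script along these lines could push the file out and bounce the group; host names, paths and the server-group name are placeholders for your setup:

# Push the new truststore to every host, then restart the affected server group.
for host in host1 host2 host3; do
  scp /local/path/truststore.jks wildfly@$host:/opt/wildfly/ssl/truststore.jks
done
# Domain-mode CLI: restart all servers in the group so they reload the file.
jboss-cli.sh --connect --controller=domain-master:9990 \
  --command="/server-group=main-server-group:restart-servers"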
See also https://github.com/wildfly/quickstart/tree/10.x/helloworld-war-ssl for an example of implementing SSL auth. Basically, all clients get a certificate from your own CA, which you add to the truststore once. Then use RBAC for the authorization.
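Importing your CA certificate into the truststore is then a one-time keytool call, for instance (alias, file names and password are just examples):

keytool -importcert -alias my-client-ca -file client-ca.pem \
  -keystore truststore.jks -storepass changeit -noprompt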
Looking at the instructions here: https://certbot.eff.org/lets-encrypt/ubuntubionic-haproxy
I'm in a situation where I have two HAProxy instances, each in a Docker container, on different machines. The domain names are the same. This is done for redundancy purposes.
Googling "multiple letsencrypt" or "multiple certbot" just leads to solutions for creating certificates for many domains at the same time.
This is good for subdomains, but it doesn't explain what I'm expected to do if I have more than 1 server running haproxy.
Run certbot on 1 server only, then copy the file over? If so, what about renewing the certificate? Can it no longer be automated?
Also, because of the URLs, certain subdomains will go to one server or the other. But both must be able to serve all the URLs.
Or does this situation call for a different approach entirely? Should I use the manual mode, generate the certificates, and then update them manually?
Thanks for any help.
Eventually I found a solution: you can start certbot on a custom port with --http-01-port, as you can read here: https://eff-certbot.readthedocs.io/en/stable/using.html.
If all your HAProxy instances detect the incoming challenge path "/.well-known/acme-challenge", you can have them forward it to that host/port combination, so all challenges end up at the machine running certbot.
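A rough sketch of what that could look like (host names and the port are assumptions): run certbot in standalone mode on one box, and have every HAProxy route the challenge path to it.

certbot certonly --standalone --preferred-challenges http \
  --http-01-port 8888 -d www.example.com

# haproxy.cfg fragment on every instance
frontend http-in
    bind *:80
    acl is_acme path_beg /.well-known/acme-challenge/
    use_backend acme if is_acme
    default_backend app

backend acme
    server certbot certbot-host:8888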
Then find a way to move the certificate around.
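For HAProxy that could mean concatenating the chain and key and copying the result out, e.g. (paths follow certbot's defaults; the target host is a placeholder):

# HAProxy expects certificate and key in a single PEM file.
cat /etc/letsencrypt/live/www.example.com/fullchain.pem \
    /etc/letsencrypt/live/www.example.com/privkey.pem \
    > /etc/haproxy/certs/www.example.com.pem
scp /etc/haproxy/certs/www.example.com.pem root@other-haproxy:/etc/haproxy/certs/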
I would suggest you go with getssl, which is a "simple" Bash script that takes care of:
deploying the challenge file to the right place on all the required nodes, and even reloading the remote node's web server
copying the generated SSL certificate files to the remote nodes as well
It can use SSH, SFTP or FTPS to transfer files. You can then add a cron job to execute getssl every day, and it will renew the certificate and distribute it when done (a config option lets you control when the certificate is renewed).
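The crontab entry would look something like this (log path is an assumption):

# -u upgrades getssl itself, -a processes all configured domains, -q emails only on errors
23 5 * * * /root/getssl -u -a -q >>/var/log/getssl.log 2>&1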
The REST API for Kafka Connect is not secured or authenticated.
Since it's not authenticated, the configuration for a connector or its tasks is easily accessible by anyone. Since these configurations may contain details about how to access the source system (in the case of a SourceConnector) or the destination system (in the case of a SinkConnector), is there a standard way to restrict access to these APIs?
In Kafka 2.1.0, it is possible to configure HTTP basic authentication for the REST interface of Kafka Connect without writing any custom code.
This became possible thanks to the implementation of the REST extensions mechanism (see KIP-285).
In short, the configuration procedure is as follows:
Add the extension class to the worker configuration file:
rest.extension.classes = org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension
Create a JAAS config file (e.g. connect_jaas.conf) for the application name 'KafkaConnect':
KafkaConnect {
    org.apache.kafka.connect.rest.basic.auth.extension.PropertyFileLoginModule required
    file="/your/path/rest-credentials.properties";
};
Create the rest-credentials.properties file at the above-mentioned path:
user=password
Finally, point the JVM at your JAAS config file, for example by adding a command-line property to java:
-Djava.security.auth.login.config=/your/path/connect_jaas.conf
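For example, when starting the worker with the stock scripts you could pass it via KAFKA_OPTS (paths are placeholders for your install):

export KAFKA_OPTS="-Djava.security.auth.login.config=/your/path/connect_jaas.conf"
bin/connect-distributed.sh config/connect-distributed.properties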
After restarting Kafka Connect, you will be unable to use the REST API without basic authentication.
Please keep in mind that the classes used here are examples rather than production-ready features.
Links:
Connect configuration
BasicAuthSecurityRestExtension
JaasBasicAuthFilter
PropertyFileLoginModule
This is a known area in need of improvement, but for now you should use a firewall on the Kafka Connect machines, plus either an API management tool (Apigee, etc.) or a reverse proxy (HAProxy, nginx, etc.), so that HTTPS is terminated at an endpoint on which you can configure access-control rules. Then have the firewall accept connections only from the secure proxy. With some products the firewall, access control, and SSL/TLS termination functions can all be handled by fewer pieces.
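As an illustration only (names, networks and certificate paths are made up), an nginx front end for the default Connect REST port could look like:

server {
    listen 443 ssl;
    server_name connect.example.com;
    ssl_certificate     /etc/nginx/certs/connect.crt;
    ssl_certificate_key /etc/nginx/certs/connect.key;
    location / {
        allow 10.0.0.0/8;   # admin network only (assumption)
        deny  all;
        proxy_pass http://127.0.0.1:8083;   # Connect REST API, firewalled off otherwise
    }
}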
As of Kafka 1.1.0, you can set up SSL and SSL client authentication for the Kafka Connect REST API. See KIP-208 for the details.
Now you are able to enable certificate-based authentication for client access to the REST API of Kafka Connect.
There is an example here: https://github.com/sudar-path/kc-rest-mtls
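A worker-config sketch along the lines of KIP-208 (paths and passwords are placeholders):

listeners=https://0.0.0.0:8083
listeners.https.ssl.keystore.location=/var/private/ssl/connect.keystore.jks
listeners.https.ssl.keystore.password=keystore-password
listeners.https.ssl.truststore.location=/var/private/ssl/connect.truststore.jks
listeners.https.ssl.truststore.password=truststore-password
listeners.https.ssl.client.auth=required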
Here is my dilemma. Currently we run quite a few servers on the AWS EC2 service. Before my time, server images were configured with the SSL certificate baked in. Now the certificate is about to expire and we need to replace the old one with the new one. I have read the AWS documentation on uploading a new certificate to IAM, but it is very confusing. Is there any way, for example using PowerShell commands, to upload the new certificate to the existing servers?
Thanks in advance.
If you have certificates that are expired on existing instances and NOT on an Elastic Load Balancer, then you need to update each server as needed, on that server.
It is not an IAM-type server certificate.
So you need to touch each server and upgrade. If you have AMIs for each server, you may need to create new AMIs after upgrading the certificate.
See Install certificate with PowerShell on remote server for some suggestion on PowerShell methods of installing a certificate file remotely.
Depending on your budget, you could consider using an ELB even for a single instance and installing the SSL cert there. It makes it easier in the long run to manage certs at the ELB level rather than at the server/AMI level.
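If you do go the ELB route, the renewal can be done with two AWS CLI calls, roughly like this (certificate name, ELB name and the ARN are placeholders):

aws iam upload-server-certificate --server-certificate-name my-cert-2018 \
    --certificate-body file://cert.pem --private-key file://key.pem \
    --certificate-chain file://chain.pem
aws elb set-load-balancer-listener-ssl-certificate \
    --load-balancer-name my-elb --load-balancer-port 443 \
    --ssl-certificate-id arn:aws:iam::123456789012:server-certificate/my-cert-2018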
I'm developing an application on Azure VM and would like to secure it by using the wildcard SSL certificate that I'm already using with my main domain. The SSL cert works with any *.mydomain.com and the application on Azure VM is accessible through myapplication.cloudapp.net
Based on the research that I've done, a CNAME should be the best option (I can't use an A record, since we need to shut the VMs down every week and turn them back on the next week, and we will lose the IP addresses).
My two questions are:
How can I have myapplication.cloudapp.net be shown as subdomain.mydomain.com?
Will doing that make it possible for wildcard SSL certificate to be used for Azure application too?
How can I have myapplication.cloudapp.net be shown as subdomain.mydomain.com?
Yes - this is just the CNAME forwarding plus ensuring that the appropriate SSL certificate is installed on the server.
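In zone-file terms the record would look something like this (names taken from the question; the TTL is arbitrary):

; CNAME pointing the custom subdomain at the cloudapp.net name
subdomain.mydomain.com.    3600    IN    CNAME    myapplication.cloudapp.net.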
Will doing that make it possible for wildcard SSL certificate to be used for Azure application too?
Well, as you're already exposing the application through the VM, this should happen seamlessly.
Just a word of caution: you mention that you're using the certificate on the main domain, but haven't mentioned where you're using it. Be aware that, out of the box, you can only assign one SSL certificate per HTTPS endpoint. You can enable multiple SSL certificates on an endpoint for Azure / IIS using Server Name Indication (SNI), which can be enabled directly or automatically. If you do take this route, remember to configure your SNI bindings first, then apply the default binding - it kinda screws up otherwise.
I have a project that has embedded Jetty with SPNEGO enabled. I would like to be able to run this project locally for development purposes (WITH SPNEGO enabled!).
My question is, is the SPN and keytab associated with a particular server at all or can I use the same set on multiple instances of my service?
Kerberos requires that both the client and server somehow figure out the service principal to use without any prior contact. If you have control of both the client and server, you can use any principal you want, provided you configure both sides to use the same principal.
In the SPNEGO case, the client does the "standard" thing and builds a principal based on the hostname of the server (i.e. if I want to talk to www.foo.com, I'll try requesting an HTTP/www.foo.com service ticket and see if the server accepts it).
I don't know of any way to get the SPNEGO code in the browser to use a fixed service principal. So in this case you'll need a separate keytab for each server.
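If you control the KDC (MIT Kerberos assumed here; the principal name and keytab path are examples), creating the per-host principal and keytab looks roughly like:

# Create an HTTP service principal for the dev host, then export its keytab.
kadmin -q "addprinc -randkey HTTP/dev.example.com"
kadmin -q "ktadd -k /etc/jetty/http-dev.keytab HTTP/dev.example.com"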