Connect to Kafka on Unix from Windows with Kerberos - apache-kafka

I'm quite new to Kafka, so please bear with me. Here is my setup.
I have Kafka hosted on a Unix box, clustered, in a domain, say B.
The client is on Windows, in a domain A, and I am trying to connect to the Kafka cluster hosted on B.
I have the keytab and the krb5 config; both are set up in the environment.
krb5.ini (set as the environment variable KRB5_CONFIG):
[logging]
default = CONSOLE
admin_server = CONSOLE
kdc = CONSOLE
[libdefaults]
renew_lifetime = 7d
clockskew = 324000
forwardable = true
proxiable = true
renewable = true
default_realm = some.something.COM
dns_lookup_realm = true
dns_lookup_kdc = false
default_tgs_enctypes = somethingelse
default_tkt_enctypes = somethingelse
[appdefaults]
renewable = true
[realms]
some.something.COM = {
kdc = some.something.COM
admin_server = some.something.COM
}
I have also set up the JAAS config (Kafka.client.ini in my case, set as the environment variable KAFKA_CLIENT_KERBEROS_PARAMS). Below is the config:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="sample.keytab"
storeKey=true
useTicketCache=true
serviceName="kafka"
principal="svcacc@some.something.COM";
};
I downloaded apache kafka_2.12-0.10.2.1.tgz and am executing this command:
kafka-console-producer.bat --broker-list <broker list> --topic <mytopic> --security-protocol SASL_PLAINTEXT
No matter what I change, I keep getting the error below:
"security-protocol is not a recognised option"
Can someone please help me with this?
I also added the props below in producer.properties, but nothing seems to change. I'm not sure what I'm missing:
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
I even tried setting this property in kafka-console-producer.bat, but with no luck:
set KAFKA_CLIENT_KERBEROS_PARAMS=-Djava.security.auth.login.config=..\..\config\kafka_Connection.ini
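For what it's worth, a sketch of how client settings like these are usually supplied: the console producer in the 0.10.x line has no --security-protocol option; instead, security.protocol and sasl.kerberos.service.name go into a properties file passed via --producer.config (path below is assumed):

```
kafka-console-producer.bat --broker-list <broker list> --topic <mytopic> --producer.config ..\..\config\producer.properties
```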
Looking forward to your inputs. Many thanks. (I have no control over the Kafka server for now, nor will I be able to explain why it's hosted in domain B.)

Disclaimer: I'm not too familiar with Kafka, and that error message does not clearly hint at a Kerberos problem.
But given that this is a cross-realm situation, you will probably hit a Kerberos snag sooner or later...
From the MIT Kerberos documentation about the krb5.conf section [capaths]:
In order to perform direct (non-hierarchical) cross-realm
authentication, configuration is needed to determine the
authentication paths between realms.
A client will use this section to find the authentication path between
its realm and the realm of the server.
In other words, you get a Kerberos TGT (ticket-granting ticket) for principal wtf@USERS.CORP.DMN but need a Kerberos service ticket for kafka/brokerhost.some.where@SERVERS.CORP.DMN. Each realm has its own KDC servers. Your Kerberos client (the Java implementation in this case) must have a way to "hop" from one domain to the other.
Scenario 1 >> both realms are "brother" AD domains with mutual trust, and they use the default hierarchical relationship -- meaning that there is a "father" AD domain named CORP.DMN that is in the path from USERS to SERVERS.
Your krb5.conf should look like this...
[libdefaults]
default_realm = USERS.CORP.DMN
kdc_timeout = 3000
...
...
[realms]
USERS.CORP.DMN = {
kdc = roundrobin.siteA.users.corp.dmn
kdc = roundrobin.bcp.users.corp.dmn
}
SERVERS.CORP.DMN = {
kdc = dc1.servers.corp.dmn
kdc = dc2.servers.corp.dmn
kdc = roundrobin.bcp.servers.corp.dmn
}
CORP.DMN = {
kdc = roundrobin.corp.dmn
kdc = roundrobin.bcp.corp.dmn
}
...assuming you have multiple AD Domain Controllers in each domain, sometimes behind DNS aliases doing round-robin assignment, plus another set of DCs on a separate site for BCP/DRP. It could be simpler than that :-)
Scenario 2 >> there is trust enabled but the relationship does not use the default, hierarchical path.
In that case you must explicitly define that "path" in a [capaths] section, as explained in the Kerberos documentation.
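For the hypothetical realm names used in scenario 1, such a [capaths] section might look like this (the realm on the right of each inner line is the intermediate hop the client is allowed to take):

```
[capaths]
USERS.CORP.DMN = {
SERVERS.CORP.DMN = CORP.DMN
}
SERVERS.CORP.DMN = {
USERS.CORP.DMN = CORP.DMN
}
```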
Scenario 3 >> there is no trust between realms. You are screwed.
Or rather, you must obtain a different user that can authenticate on the same domain as the Kafka broker, e.g. xyz#SERVERS.CORP.DMN.
And maybe use a specific krb5.conf that states default_realm = SERVERS.CORP.DMN (I've seen weird behaviors of some JDK versions on Windows, for example)
Bottom line: you must request assistance from your AD administrators. Maybe they are not familiar with raw Kerberos configuration, but they will know about the trust and about the "paths"; at that point it's just a matter of following the proper krb5.conf syntax.
Or maybe that configuration has already been done by the Linux administrators, in which case you should request an example of their standard krb5.conf to check whether there is cross-domain stuff in there.
And of course you should enable Kerberos debug traces in your Kafka producer:
-Dsun.security.krb5.debug=true
-Djava.security.debug=gssloginconfig,configfile,configparser,logincontext
Just for the record, though not useful here: when using Kerberos over HTTP (SPNEGO) there is an additional flag, -Dsun.security.spnego.debug=true
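On Windows, a sketch of how to pass those debug flags to the console producer: the standard Kafka run scripts pick up extra JVM options from the KAFKA_OPTS environment variable (broker list and topic are placeholders from the question):

```
set KAFKA_OPTS=-Dsun.security.krb5.debug=true -Djava.security.debug=gssloginconfig,configfile,configparser,logincontext
kafka-console-producer.bat --broker-list <broker list> --topic <mytopic>
```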

Related

Airflow- configure smtp with office365 without credentials

When a task/DAG fails I want to send an email to someone, and this does not work. We are using Office365 for this within the organisation and there should not be a need to authenticate with credentials user or password, as it is not done in other running projects. We are using the latest Airflow version released: 2.1.4
I have tried with the configuration in airflow config:
[email]
email_backend = airflow.utils.email.send_email_smtp
email_conn_id = smtp_default
default_email_on_retry = True
default_email_on_failure = True
[smtp]
smtp_host = <the smtp host(Office365)>
smtp_starttls = True
smtp_ssl = False
smtp_port = 25
smtp_mail_from = <the from email>
smtp_timeout = 30
smtp_retry_limit = 5
As I try this I get the following error in the airflow log when a task fails:
WARNING - section/key [smtp/smtp_user] not found in config
...
ERROR - Failed to send email to: ['<my email>']
Therefore I suppose I need to have a user if I use these options in the config.
There is also this information in the log:
PendingDeprecationWarning: Fetching SMTP credentials from configuration variables will be deprecated in a future release. Please set credentials using a connection instead.
I have been looking at this airflow documentation:
https://airflow.apache.org/docs/apache-airflow/stable/howto/email-config.html
But it does not help me understand how I should set up a connection to our SMTP server with Office365. Another problem is that I don't have a user or password. I could possibly get them, but since it works without them in other running projects, I am looking to do something similar.
Does anybody have some guidance in this matter?
Thank you
I have two clients with SMTP set up with authorization; setting it up as in the manual works. For my new client, the mail relay does not need to authenticate with a user or password, so I just configured empty strings as follows and it works.
It will leave a PendingDeprecationWarning in the log.
[smtp]
...
smtp_starttls = False
smtp_ssl = False
smtp_user =
smtp_password =
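To address the PendingDeprecationWarning instead, the credentials can live in an Airflow connection. A sketch, untested against 2.1.4, assuming the standard AIRFLOW_CONN_{CONN_ID} URI mechanism and the smtp_default conn id from the [email] section: an unauthenticated relay could be declared with empty user and password, e.g.

```
export AIRFLOW_CONN_SMTP_DEFAULT='smtp://:@<the smtp host(Office365)>:25'
```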

LOG: connection failed during start up processing: user= database= FATAL: GSSAPI authentication failed for user "postgres"

I am trying to configure Kerberos for GSSAPI. Currently I have two nodes:
one is the KDC server (Windows Server 2016) and the other is the Postgres server (Ubuntu).
I have created Active Directory on the KDC server and created a user with the name
postgres, selecting the option "password will never expire".
Then I installed the MIT Kerberos client.
Here is krb5.ini on the KDC server:
[libdefaults]
default_realm = HIGHGO.CA
# The following krb5.conf variables are only for MIT Kerberos.
kdc_timesync = 1
ccache_type = 4
forwardable = true
proxiable = true
# The following encryption type specification will be used by MIT Kerberos
# if uncommented. In general, the defaults in the MIT Kerberos code are
# correct and overriding these specifications only serves to disable new
# encryption types as they are added, creating interoperability problems.
#
# The only time when you might need to uncomment these lines and change
# the enctypes is if you have local software that will break on ticket
# caches containing ticket encryption types it doesn't know about (such as
# old versions of Sun Java).
# default_tgs_enctypes = des3-hmac-sha1
# default_tkt_enctypes = des3-hmac-sha1
# permitted_enctypes = des3-hmac-sha1
# The following libdefaults parameters are only for Heimdal Kerberos.
fcc-mit-ticketflags = true
[realms]
HIGHGO.CA = {
kdc = kdc.highgo.ca
admin_server = kdc.highgo.ca
}
ATHENA.MIT.EDU = {
kdc = kerberos.mit.edu
kdc = kerberos-1.mit.edu
kdc = kerberos-2.mit.edu:88
admin_server = kerberos.mit.edu
default_domain = mit.edu
}
ZONE.MIT.EDU = {
kdc = casio.mit.edu
kdc = seiko.mit.edu
admin_server = casio.mit.edu
}
CSAIL.MIT.EDU = {
admin_server = kerberos.csail.mit.edu
default_domain = csail.mit.edu
}
IHTFP.ORG = {
kdc = kerberos.ihtfp.org
admin_server = kerberos.ihtfp.org
}
1TS.ORG = {
kdc = kerberos.1ts.org
admin_server = kerberos.1ts.org
}
ANDREW.CMU.EDU = {
admin_server = kerberos.andrew.cmu.edu
default_domain = andrew.cmu.edu
}
CS.CMU.EDU = {
kdc = kerberos-1.srv.cs.cmu.edu
kdc = kerberos-2.srv.cs.cmu.edu
kdc = kerberos-3.srv.cs.cmu.edu
admin_server = kerberos.cs.cmu.edu
}
DEMENTIA.ORG = {
kdc = kerberos.dementix.org
kdc = kerberos2.dementix.org
admin_server = kerberos.dementix.org
}
stanford.edu = {
kdc = krb5auth1.stanford.edu
kdc = krb5auth2.stanford.edu
kdc = krb5auth3.stanford.edu
master_kdc = krb5auth1.stanford.edu
admin_server = krb5-admin.stanford.edu
default_domain = stanford.edu
}
UTORONTO.CA = {
kdc = kerberos1.utoronto.ca
kdc = kerberos2.utoronto.ca
kdc = kerberos3.utoronto.ca
admin_server = kerberos1.utoronto.ca
default_domain = utoronto.ca
}
[domain_realm]
.mit.edu = ATHENA.MIT.EDU
mit.edu = ATHENA.MIT.EDU
.media.mit.edu = MEDIA-LAB.MIT.EDU
media.mit.edu = MEDIA-LAB.MIT.EDU
.csail.mit.edu = CSAIL.MIT.EDU
csail.mit.edu = CSAIL.MIT.EDU
.whoi.edu = ATHENA.MIT.EDU
whoi.edu = ATHENA.MIT.EDU
.stanford.edu = stanford.edu
.slac.stanford.edu = SLAC.STANFORD.EDU
.toronto.edu = UTORONTO.CA
.utoronto.ca = UTORONTO.CA
Created the principal:
setspn -A postgres/pg.highgo.ca@HIGHGO.CA postgres
After creating the principal, I tested it with the following command:
c:\Users\administrator\Desktop>kinit postgres
Password for postgres@HIGHGO.CA:
which is working fine.
This is how I created the keytab:
ktpass -out pgkt.keytab -princ postgres/pg.highgo.ca@HIGHGI.CA
-mapUser enterprisedb -pass Casper#12 -crypto all -ptype KRB5_NT_PRINCIPAL
Then I copied this file to the Postgres server,
replacing the file /etc/krb5.keytab, with the following permissions:
chmod 600 /etc/krb5.keytab
And here are my /etc/hosts entries on Linux and Windows:
192.168.100.112 pg.highgo.ca
192.168.100.114 kdc.highgo.ca
And I have put an entry in postgresql.conf:
krb_server_keyfile = '/etc/krb5.keytab'
And here are the pg_hba.conf entries:
host all all 0.0.0.0/0 gss include_realm=0
After that I tried to access the Postgres server with the following command:
psql -U postgres -d postgress -h 192.168.100.114
In response I got the following error on Windows:
psql: error: could not connect to server: SSPI continuation error: The specified target is unknown or unreachable
(80090303)
And I saw the following logs on Postgres:
2020-08-18 05:49:36.534 PDT [5086] [unknown]@[unknown] LOG: connection failed during start up processing: user= database=
2020-08-18 05:49:36.541 PDT [5087] postgres@postgres FATAL: GSSAPI authentication failed for user "postgress"
2020-08-18 05:49:36.541 PDT [5087] postgres@postgres DETAIL: Connection matched pg_hba.conf line 97: "host all all 0.0.0.0/0 gss include_realm=0 "
I have checked lots of tutorials but have not been able to resolve it.
(Note: the same commands work fine with MD5 authentication.)
Thanks in advance.
This is a common issue experienced in earlier releases of Postgres and EDB Postgres v. 12, since GSSAPI encryption has been added, but a bug existed. The bug has been fixed in commit 79e594cf04754d55196d2ce54fc869ccad5fa9c3, released in v. 12.3. If you can upgrade to v. 12.3, you may be able to work around this issue.
If you require the use of an older client for some reason, be sure to set gssencmode=disable in your connection string or set PGGSSENCMODE=disable in your environment.
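For example, using the host name from the question, disabling GSSAPI encryption from the client side via a libpq connection string looks like:

```
psql "host=pg.highgo.ca user=postgres dbname=postgres gssencmode=disable"
```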
I have resolved it with the help of my colleagues; this was done on a fresh environment.
Steps:
(Note: there is no need for a Kerberos client on the PG server machine (mine is Ubuntu 18.xx).)
Active Directory is set up on Windows 2016 (MYDOMAIN.CA), and
EPAS Server 11 or 12 is installed on both machines.
Active Directory Setup Link
Make sure time zone and time on both machines are the same.
/etc/hosts
IP of Windows machine is 192.168.100.19 and that of Linux is
192.168.100.17.
Also assuming that Windows machine name is “client” so its full name
is “client.mydomain.ca”.
Enter the following in /etc/hosts on linux (Comment out other
entries)
192.168.100.19 client.mydomain.ca client
192.168.100.17 pg.mydomain.ca pg
Enter the following in c:\Windows\System32\Drivers\etc\hosts on
Windows
192.168.100.19 client.mydomain.ca
192.168.100.17 pg.mydomain.ca
Verify the host are communicating with the ping.
Create User in Active Directory (Windows Machine)
Assuming you are logged in as Administrator, In “Server Manager”
click “Tools” and select “Active Directory Users and Computers”
Under your domain “MYDOMAIN.CA” select users to show all users
Right Click Administrator and select “Copy”
Enter “pguser” in “First Name” and “User logon name” fields.
Click Next. Domain “MYDOMAIN.CA” should be shown in combo box against
“User logon name”
Enter password for user and uncheck “Password never expires”
checkbox. -> Click Next -> Click Finish. User account is created.
Double click this user in Users list OR right click this user and
select Properties.
In Account Tab, under Account options check “This account supports
kerberos AES 256 bit encryption” checkbox and click OK.
Log off Windows and login using “pguser” user.
Create Keytab
Windows Machine: Open Command Prompt as Administrator and enter the
following command to create Keytab.
ktpass -out krb5.keytab -mapUser pguser@MYDOMAIN.CA +rndPass -mapOp set +DumpSalt -crypto AES256-SHA1 -ptype KRB5_NT_PRINCIPAL -princ POSTGRES/pg.mydomain.ca@MYDOMAIN.CA
Note that this command should not give any error or warning. If you
see an error or warning and the keytab is generated, this keytab will
not work.
If the keytab is created successfully, you can check by opening
pguser user properties, Account tab that “user logon name” is
changed to postgres/pg.mydomain.ca.
Now you have created keytab file “krb5.keytab”.
Linux Machine: Copy this file to Linux machine as “/etc/krb5.keytab”.
//Suppose file is on Desktop of user edb on Linux machine. su to
become root.
cd /etc/
cp /home/edb/Desktop/krb5.keytab .
chown enterprisedb:enterprisedb krb5.keytab
chmod 600 krb5.keytab
Open postgresql.conf file and set krb_server_keyfile to
“/etc/krb5.keytab” (uncomment this line as it is commented out by
default)
krb_server_keyfile = '/etc/krb5.keytab'
Open pg_hba.conf file and add the following line (Comment out all
other lines except “local all enterprisedb trust/md5” so any remote
user can only connect using gss)
local all enterprisedb trust
host all all 0.0.0.0/0 gss
Restart server.
Create user "pguser@MYDOMAIN.CA".
CREATE USER "pguser@MYDOMAIN.CA" SUPERUSER CREATEDB CREATEROLE;
PSQL command from Windows
Issue this command to connect to the edb database on Linux:
psql -U pguser@MYDOMAIN.CA -d edb -h pg.mydomain.ca
Regards,

Hints on global deadman alerting methods

Kapacitor configuration file contains following comment in [deadman] section:
# NOTE: for this to be of use you must also globally configure at least one alerting method.
But there are no more hints about how to set this global alerting method. Some alert handler sections have a global boolean parameter, but not the basic or old-school ones like snmp, httppost, or even log. Is it not available?
The Kapacitor documentation briefly introduces an [Alert] section. Would it be possible to set a global log event handler there?
From my understanding, this means that in order to use the global configuration for the [deadman] node, you need to set the default parameters for one of the possible Kapacitor [Alert node] handlers (smtp, mqtt, slack, ...).
The list of supported [Alert node] handlers is available in the documentation.
This configuration is done in the Kapacitor configuration file.
Here is an example for the email handler:
[smtp]
# Configure an SMTP email server
# Will use TLS and authentication if possible
# Only necessary for sending emails from alerts.
enabled = true
host = "smtp.host.com"
port = 465
username = "notify@host.com"
password = "password"
# From address for outgoing mail
from = "notify@host.com"
# List of default To addresses.
to = ["dest1@host.com","dest2@host.com"]
# Skip TLS certificate verify when connecting to SMTP server
no-verify = false
# Close idle connections after timeout
idle-timeout = "30s"
# If true the all alerts will be sent via Email
# without explicitly marking them in the TICKscript.
global = false
# Only applies if global is true.
# Sets all alerts in state-changes-only mode,
# meaning alerts will only be sent if the alert state changes.
state-changes-only = false
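Putting the two pieces together, a sketch based on the stock kapacitor.conf (SMTP host and addresses are placeholders): flip global = true on one handler, here [smtp], and enable the global deadman switch:

```
[smtp]
  enabled = true
  host = "smtp.host.com"
  port = 465
  from = "notify@host.com"
  to = ["oncall@host.com"]
  # route every alert, including deadman alerts, via email
  global = true
  state-changes-only = true

[deadman]
  # enable the deadman's switch on all tasks
  global = true
  threshold = 0.0
  interval = "10s"
```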

Error using Google two factor auth with mu4e & Gmail

I have been using Google 2 factor auth for a while, and have several applications configured. One of them is offlineimap (where I download the mail), but when I use mu4e to compose a message, I get the following error:
Sending failed: 534-5.7.9 Application-specific password required.
Learn more at 534-5.7.9
http://support.google.com/accounts/bin/answer.py?answer=185833
I have a ~/.authinfo.gpg configured (and it decrypts successfully manually), and my ~/.offlineimaprc calls get_password_emacs (the example I used can be found here).
I've even attempted to skip the gpg piece to see if it works, using my mu4e Google App Password directly in the ~/.offlineimaprc, but I end up with the same result.
My ~/.authinfo.gpg file: (decrypted here, sensitive info removed)
machine imap.gmail.com login me@gmail.com port 993 password GoogleAppPassword
machine smtp.gmail.com login me@gmail.com port 587 password GoogleAppPassword
My ~/.offlineimaprc file:
[general]
accounts = Gmail
maxsyncaccounts = 3
pythonfile = ~/.offlineimap.py
[Account Gmail]
localrepository = Local
remoterepository = Remote
[Repository Local]
type = Maildir
localfolders = ~/Maildir
[Repository Remote]
remotehost = imap.gmail.com
remoteuser = me@gmail.com
remotepasseval = get_password_emacs("imap.gmail.com", "me@gmail.com", "993")
ssl = yes
maxconnections = 1
realdelete = no
holdconnectionopen = true
keepalive = 60
type = IMAP
and my ~/.offlineimap.py
#!/usr/bin/python
import re, os

def get_password_emacs(machine, login, port):
    s = "machine %s login %s port %s password ([^ ]*)\n" % (machine, login, port)
    p = re.compile(s)
    authinfo = os.popen("gpg -q --no-tty -d ~/.authinfo.gpg").read()
    return p.search(authinfo).group(1)
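To sanity-check the lookup logic outside of offlineimap and gpg, the same pattern can be run against a mock authinfo string (hypothetical credentials, not the real ones):

```python
import re

# Mock decrypted ~/.authinfo content (made-up credentials for illustration)
authinfo = (
    "machine imap.gmail.com login me@gmail.com port 993 password imap-secret\n"
    "machine smtp.gmail.com login me@gmail.com port 587 password smtp-secret\n"
)

def get_password(machine, login, port, authinfo):
    # Same pattern ~/.offlineimap.py builds before searching the decrypted file
    pattern = "machine %s login %s port %s password ([^ ]*)\n" % (machine, login, port)
    match = re.search(pattern, authinfo)
    return match.group(1) if match else None

print(get_password("imap.gmail.com", "me@gmail.com", "993", authinfo))  # imap-secret
print(get_password("smtp.gmail.com", "me@gmail.com", "587", authinfo))  # smtp-secret
```

If this returns None for an entry you expect to match, the authinfo line layout (order of machine/login/port/password) differs from the pattern.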
Can anyone see the issue I'm having? I've validated that the ~/.authinfo.gpg file decrypts successfully, and that my Google App Password is correct.
Thanks for your time.
using my mu4e Google App Password directly in the ~/.offlineimaprc
That's exactly the problem. You shouldn't be using the password directly. For legacy applications that do not accept the second factor token, you need to use the application-specific password, instead. This is a password that you generate from this URL:
https://security.google.com/settings/security/apppasswords
And you use the generated password in lieu of your real password. You should note, however, that these application-specific passwords grant full access to your account. As a result, using app passwords significantly reduces the protection you get from enabling 2-factor on your account.

How do I code Citrix web sites to use a Secure Gateway (CSG)?

I'm using Citrix's sample code as a base and trying to get it to generate ICA files that direct the client to use their Secure Gateway (CSG) provider. My configuration is that the ICA file's server address is replaced with a CSG ticket and traffic is forced to go to the CSG.
The challenge is that both the Citrix App Server (that's providing the ICA session on 1494) and the CSG have to coordinate through a Secure Ticket Authority (STA). That means that my code needs to talk to the STA as it creates the ICA file because STA holds a ticket that the CSG needs embedded into the ICA file. Confusing? Sure! But it's much more secure.
The pre-CSG code looks like this:
AppLaunchInfo launchInfo = (AppLaunchInfo)userContext.launchApp(appID, new AppLaunchParams(ClientType.ICA_30));
ICAFile icaFile = userContext.convertToICAFile(launchInfo, null, null);
I tried adding the SSLEnabled information to the ICA generation, but it was not enough. Here's that code:
launchInfo.setSSLEnabled(true);
launchInfo.setSSLAddress(new ServiceAddress("CSG URL", 443));
Now, it looks like I need to register the STA when I configure my farm:
ConnectionRoutingPolicy policy = config.getDMZRoutingPolicy();
policy.getRules().clear();
//Set the Secure Ticketing Authorities (STAs).
STAGroup STAgr = new STAGroup();
STAgr.addSTAURL(@"http://CitrixAppServerURL/scripts/ctxsta.dll");
//create Secure Gateway connection
SGConnectionRoute SGRoute = new SGConnectionRoute(@"https://CSGURL");
SGRoute.setUseSessionReliability(false);
SGRoute.setGatewayPort(80);
SGRoute.setTicketAuthorities(STAgr);
// add the SGRoute to the policy
policy.setDefault(SGRoute);
This is based on code I found on the Citrix Forums; however, it breaks my ability to connect with the Farm and get my application list!
Can someone point me to an example of code that works? Or a reference document?
The code in the question is basically right, but I was trying too hard to inject configuration into the launching ICA generator.
Note: Using the WebInterface.conf file for guidance is a good way to determine the right config settings. Even if the code is right, the configuration is very touchy!
Most of the Citrix Secure Gateway (CSG) / Secure Ticket Authority (STA) magic happens when the policy for the initial connection to the farm is established. Specifically, in Global.asax.cs, you must have the following blocks of code:
1) you must have a valid STAGroup:
//Set the Secure Ticketing Authorities (STAs).
STAGroup STAgr = new STAGroup();
STAgr.addSTAURL(@"http://[STA URL]/scripts/ctxsta.dll");
2) then you must create a CSG connection (with the STA mapped):
//create Secure Gateway connection
SGConnectionRoute SGRoute = new SGConnectionRoute(@"[CSG FQDN without HTTPS]");
SGRoute.setUseSessionReliability(false);
SGRoute.setGatewayPort(443);
SGRoute.setTicketAuthorities(STAgr);
3) you need to set the policy default
// Create a DMZ routing policy
ConnectionRoutingPolicy policy = config.getDMZRoutingPolicy();
policy.getRules().clear();
policy.setDefault(SGRoute);
4) you need to tell the launchInfo that you want to be CGP enabled:
launchInfo.setCGPEnabled(true);
WARNING: The SSL-enabled setting is a red herring.
There's another way to do this that is cleaner and more configurable. The code can be setup to use the webinterface.conf file that the default Citrix Web Interface uses.
The following code should replace all of the farmConfig, STAGroup, and ConnectionRoutingPolicy mess in the sample above.
InputStream inputStream = new FileInputStream(@"C:\temp\WebInterface.conf");
CtxConfig configInput = new CtxConfig(inputStream);
Map settingsMap = configInput.getSettingsMap();
WIConfiguration wiConfiguration = ConfigurationParser.buildWIConfiguration(settingsMap);
com.citrix.wing.config.Configuration config = new com.citrix.wing.config.Configuration();
config.setGlobalConfig(wiConfiguration.getGlobalConfig());
config.setMPSFarmConfigs(wiConfiguration.getMPSFarmConfigs());
config.setDMZRoutingPolicy(wiConfiguration.getDMZRoutingPolicy());
config.setClientProxyPolicy(wiConfiguration.getClientProxyPolicy());
// Create a StaticEnvironmentAdaptor instance.
WIASPNetStaticAdaptor staticEnvAdaptor = new WIASPNetStaticAdaptor(this);
// Create a WebPNBuilder instance.
WebPNBuilder builder = WebPNBuilder.getInstance();
Application["WebPNBuilder"] = builder;
// Create a WebPN instance from the configuration.
WebPN webPN = builder.createWebPN(config, staticEnvAdaptor);
Application["WebPN"] = webPN;
Another note on this problem from using the JICA client with an internal certificate (non-trusted root).
The JICA client does not let you accept a certificate from a non-trusted root, so it was required to add the certificate to the Java CA store. Adding it to the Windows store does not do any good!
Get your dev root CA, then navigate to the bin directory of the latest Java install (typically under c:\program files\java\jre***).
Execute the following command:
keytool -import -trustcacerts -keystore "..\lib\security\cacerts" -file "c:\temp\root.cer" -alias myroot
I'll let you Google for the password, because you're supposed to changeit [sic].