Keystore: get alias with Unicode encoding

When I use C# to get the certificate subject name, it is returned correctly.
Here is my code:
X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadOnly);
foreach (X509Certificate2 mCert in store.Certificates)
{
    Trace.WriteLine(mCert.SubjectName.Name);
}
Result: C=VN, L=HÀ NỘI, CN=TẬP ĐOÀN VIỄN THÔNG QUÂN ĐỘI (test)...
But in Java, I cannot get the alias with its Unicode characters intact.
Here is my code:
KeyStore ks = KeyStore.getInstance("Windows-MY");
ks.load(null, null);
Enumeration<String> e = ks.aliases();
while (e.hasMoreElements()) {
    String alias = e.nextElement();
    System.out.println(alias);
}
Result is T?P ÐOÀN VI?N THÔNG QUÂN Ð?I (test)
So, how can I get the alias with its Unicode characters in Java, like in C#? Thank you very much.
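One thing worth ruling out (this is a diagnostic sketch, not a confirmed fix) is whether the alias is already damaged when it comes out of the keystore or only gets mangled by the default console encoding of System.out. Dumping the raw code points and printing through an explicit UTF-8 stream shows which one it is:

import java.io.PrintStream;
import java.security.KeyStore;
import java.util.Enumeration;
import java.util.stream.Collectors;

public class AliasDump {
    public static void main(String[] args) throws Exception {
        KeyStore ks = KeyStore.getInstance("Windows-MY");
        ks.load(null, null);

        // Print through an explicit UTF-8 stream instead of the platform default.
        PrintStream out = new PrintStream(System.out, true, "UTF-8");

        Enumeration<String> e = ks.aliases();
        while (e.hasMoreElements()) {
            String alias = e.nextElement();
            out.println(alias);
            // Raw code points: U+003F here means the alias itself already contains '?'.
            out.println(alias.codePoints()
                    .mapToObj(cp -> String.format("U+%04X", cp))
                    .collect(Collectors.joining(" ")));
        }
    }
}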

Related

RSA2048 signature output Base64 length

I am trying to sign a text using a certificate in my personal store. I am supposed to sign it with RSA2048 and then convert the output to a Base64 string. The output I am getting is just 172 characters. Can you help me understand where I am making a mistake? As per my understanding, the output Base64 length should be longer.
// 'my' is the already opened personal certificate store; 'certSubject' and 'text' are defined elsewhere.
RSAParameters privateKey = new RSAParameters();
foreach (X509Certificate2 cert in my.Certificates)
{
    if (cert.Subject.Contains(certSubject))
    {
        using (var rsa = cert.GetRSAPrivateKey())
        {
            UnicodeEncoding encoding = new UnicodeEncoding();
            byte[] data = encoding.GetBytes(text);
            return rsa.SignData(data, HashAlgorithmName.SHA512, RSASignaturePadding.Pkcs1);
        }
    }
}
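As a side note on the length arithmetic only: Base64 turns every 3 bytes into 4 characters, so a 2048-bit (256-byte) signature encodes to 344 characters, while 172 characters corresponds to a 128-byte signature, which would point at a 1024-bit key rather than a 2048-bit one. A small check of that arithmetic (written in Java only for consistency with the other snippets on this page):

import java.util.Base64;

public class SignatureLengthCheck {
    public static void main(String[] args) {
        // An RSA signature is as long as the modulus: 128 bytes for 1024-bit, 256 bytes for 2048-bit.
        System.out.println(Base64.getEncoder().encodeToString(new byte[128]).length()); // 172
        System.out.println(Base64.getEncoder().encodeToString(new byte[256]).length()); // 344
    }
}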

Create WS security headers for REST web service in SoapUI Pro

We are developing a REST web service with WS-Security values passed through as header parameters in the REST request.
I am testing this in SoapUI Pro and want to create a Groovy script to generate these values and then use them in the REST request.
These parameters are the encoded nonce, the created dateTime, and the password digest, which is created by encoding the nonce, the hashed password and the created date and time; i.e. the code should do the same as the Outgoing WS-Security configuration in SoapUI Pro.
I have created a Groovy test script in SoapUI Pro (below). However, when I supply the generated values to the headers I get authorisation errors.
I am able to hash the password correctly and get the same result as my Python script.
The Groovy code for this is:
MessageDigest cript = MessageDigest.getInstance("SHA-1");
cript.reset();
cript.update(userPass.getBytes("UTF-8"));
hashedpw = new String(cript.digest());
This correctly hashes the text 'Password2451!' to í¦è~µ”t5Sl•Vž³t;$.
The next step is to create a password digest of the nonce, the created timestamp and the hashed password. I have the following code for this:
MessageDigest cript2 = MessageDigest.getInstance("SHA-1");
cript2.reset();
cript2.update((nonce+created+hashedpw).getBytes("UTF-8"));
PasswordDigest = new String(cript2.digest());
PasswordDigest = PasswordDigest.getBytes("UTF-8").encodeBase64()
This converts '69999998992017-03-06T16:19:28Zí¦è~µ”t5Sl•Vž³t;$' into w6YA4oCUw6nDicucw6RqxZMIbcKze+KAmsOvBA4oYu+/vQ==.
However the correct value should be 01hCcFQRjDKMT6daqncqhN2Vd2Y=.
The following Python code correctly achieves this conversion:
hashedpassword = sha.new(password).digest()
digest = sha.new(nonce + CREATIONDATE + hashedpassword).digest()
Can anyone tell me where I am going wrong with the Groovy code?
Thanks.
I am changing my answer slightly: in the original I was converting the password digest to a String value, which caused the request to fail validation some of the time because certain bytes did not get converted into the correct string value.
import java.security.MessageDigest;

// Build a 10-digit numeric nonce
int a = 9
nonce = ""
random = new Random()
for (i = 0; i < 10; i++) {
    randomInteger = random.nextInt(a)
    nonce = nonce + randomInteger
}

// Created timestamp in UTC, in the format the service expects
def XRMGDateTime = new Date().format("yyyy-MM-dd'T'HH:mm:ss", TimeZone.getTimeZone('UTC'))

def password = testRunner.testCase.testSuite.getPropertyValue("XRMGPassword")

// The nonce is sent Base64-encoded
EncodedNonce = nonce.getBytes("UTF-8").encodeBase64()

// Hash the password and keep the result as raw bytes
MessageDigest cript = MessageDigest.getInstance("SHA-1")
cript.reset()
cript.update(password.getBytes())
hashedpw = cript.digest()

// PasswordDigest = Base64( SHA-1( nonce + created + SHA-1(password) ) ),
// built entirely from bytes so nothing is corrupted by an intermediate String conversion
MessageDigest cript2 = MessageDigest.getInstance("SHA-1")
cript2.update(nonce.getBytes())
cript2.update(XRMGDateTime.getBytes())
cript2.update(hashedpw)
PasswordDigest = cript2.digest()
EncodedPasswordDigest = PasswordDigest.encodeBase64()

def StringPasswordDigest = EncodedPasswordDigest.toString()
def encodedNonceString = EncodedNonce.toString()

testRunner.testCase.setPropertyValue("passwordDigest", StringPasswordDigest)
testRunner.testCase.setPropertyValue("XRMGDateTime", XRMGDateTime)
testRunner.testCase.setPropertyValue("XRMGNonce", encodedNonceString)
testRunner.testCase.setPropertyValue("Nonce", nonce)
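For reference, here is a minimal plain-Java sketch of the same calculation (the same non-standard scheme where the password is SHA-1 hashed before it goes into the digest), keeping everything as raw bytes until the final Base64 step. The password value is a placeholder:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.text.SimpleDateFormat;
import java.util.Base64;
import java.util.Date;
import java.util.TimeZone;

public class PasswordDigestSketch {
    public static void main(String[] args) throws Exception {
        String password = "changeit"; // placeholder credential

        // 10-digit numeric nonce, as in the Groovy script above
        SecureRandom random = new SecureRandom();
        StringBuilder nonce = new StringBuilder();
        for (int i = 0; i < 10; i++) {
            nonce.append(random.nextInt(10));
        }

        // Created timestamp in UTC
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        String created = fmt.format(new Date());

        // Pre-hash the password and keep the result as raw bytes
        byte[] hashedPw = MessageDigest.getInstance("SHA-1")
                .digest(password.getBytes(StandardCharsets.UTF_8));

        // PasswordDigest = Base64( SHA-1( nonce + created + SHA-1(password) ) )
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        sha1.update(nonce.toString().getBytes(StandardCharsets.UTF_8));
        sha1.update(created.getBytes(StandardCharsets.UTF_8));
        sha1.update(hashedPw);

        String passwordDigest = Base64.getEncoder().encodeToString(sha1.digest());
        String encodedNonce = Base64.getEncoder()
                .encodeToString(nonce.toString().getBytes(StandardCharsets.UTF_8));

        System.out.println("PasswordDigest: " + passwordDigest);
        System.out.println("Nonce:          " + encodedNonce);
        System.out.println("Created:        " + created);
    }
}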

Types of Visual FoxPro encodings

I'm trying to decode some strings in a DBF (created by a FoxPro app), and I'm interested in the encoding/encryption methods of FoxPro.
Here's a sample encoded string: "òÙÛÚÓ ½kê3ù[ƒ˜øžÃ+™Þoa-Kh— Gó¯ý""|øHñyäEük#‰fç9æ×ϯyi±:"
Can somebody tell me the encoding method of this string, or give me any suggestions about FoxPro encoding methods?
Thanks!
It depends on the FoxPro version; the most recent DBF structure (VFP 9) is documented here:
https://msdn.microsoft.com/en-us/library/aa975386%28v=vs.71%29.aspx
It looks like your text could be the result of "_Crypt.vcx", which will take a given string, apply whatever passphrase, and generate an encrypted output string.
VFP ships this class in the "FFC" folder under the default VFP installation path (found via HOME()), such as
C:\PROGRAM FILES (X86)\MICROSOFT VISUAL FOXPRO 9\
Here is a SAMPLE set of code to hook up the _Crypt class, encrypt a string, and then decrypt an encrypted string. Your string appears encrypted (obviously), but unless you know more about the encryption (such as the passphrase/key), you might be a bit stuck and into more research...
lcCryptLib = HOME() + "FFC\_Crypt.vcx"
IF NOT FILE( lcCryptLib )
    MESSAGEBOX( "No crypt class library." )
    RETURN
ENDIF

SET CLASSLIB TO ( lcCryptLib ) ADDITIVE
oCrypt = CREATEOBJECT( "_CryptAPI" )
oCrypt.AddProperty( "myPassKey" )
oCrypt.myPassKey = "Hold property to represent some special 'Key/pass phrase' "

* Place-holder to receive the encrypted value (passed by reference)
lcEncryptedValue = ""
? oCrypt.EncryptSessionStreamString( "Original String", oCrypt.myPassKey, @lcEncryptedValue )

* Show results of the encrypted value
? "Encrypted Value: " + lcEncryptedValue

* Now, get the decrypted string back from the encrypted one
lcDecryptedValue = ""
? oCrypt.DecryptSessionStreamString( lcEncryptedValue, oCrypt.myPassKey, @lcDecryptedValue )
? "Decrypted Value: " + lcDecryptedValue

* Now, try to decrypt your string
lcYourString = [òÙÛÚÓ ½kê3ù[ƒ˜øžÃ+™Þoa-Kh— Gó¯ý""|øHñyäEük#‰fç9æ×ϯyi±:]
lcDecryptedValue = ""
? oCrypt.DecryptSessionStreamString( lcYourString, oCrypt.myPassKey, @lcDecryptedValue )
? "Decrypted Value: " + lcDecryptedValue

ANTLR4: Using non-ASCII characters in token rules

On page 74 of the ANTLR4 book it says that any Unicode character can be used in a grammar simply by specifying its codepoint in this manner:
'\uxxxx'
where xxxx is the hexadecimal value for the Unicode codepoint.
So I used that technique in a token rule for an ID token:
grammar ID;
id : ID EOF ;
ID : ('a' .. 'z' | 'A' .. 'Z' | '\u0100' .. '\u017E')+ ;
WS : [ \t\r\n]+ -> skip ;
When I tried to parse this input:
Gŭnter
ANTLR throws an error, saying that it does not recognize ŭ. (The ŭ character is hex 016D, so it is within the specified range.)
What am I doing wrong, please?
ANTLR is ready to accept 16-bit characters but, by default, many locales will read in characters as bytes (8 bits). You need to specify the appropriate encoding when you read from the file using the Java libraries. If you are using the TestRig, perhaps through the alias/script grun, then use the argument -encoding utf-8 (or whatever encoding applies). If you look at the source code of that class, you will see the following mechanism:
InputStream is = new FileInputStream(inputFile);
Reader r = new InputStreamReader(is, encoding); // e.g., euc-jp or utf-8
ANTLRInputStream input = new ANTLRInputStream(r);
XLexer lexer = new XLexer(input);
CommonTokenStream tokens = new CommonTokenStream(lexer);
...
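On newer ANTLR 4 runtimes, where ANTLRInputStream is deprecated (as a later answer here also notes), the same idea is a one-liner with CharStreams. A minimal sketch, with the file name standing in for your own input and XLexer for your generated lexer:

import java.nio.charset.StandardCharsets;
import org.antlr.v4.runtime.CharStream;
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;

// Read the file as UTF-8 so characters like 'ŭ' arrive as full code points.
CharStream input = CharStreams.fromFileName("input.txt", StandardCharsets.UTF_8);
XLexer lexer = new XLexer(input);
CommonTokenStream tokens = new CommonTokenStream(lexer);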
Grammar:
NAME:
[A-Za-z][0-9A-Za-z\u0080-\uFFFF_]+
;
Java:
import org.antlr.v4.runtime.CharStream;
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.TokenStream;
import com.thalesgroup.dms.stimulus.StimulusParser.SystemContext;

// StimulusLexer and StimulusParser are the ANTLR-generated classes for this grammar;
// Debug is the author's own logging helper.
final class RequirementParser {
static SystemContext parse( String requirement ) {
requirement = requirement.replaceAll( "\t", " " );
final CharStream charStream = CharStreams.fromString( requirement );
final StimulusLexer lexer = new StimulusLexer( charStream );
final TokenStream tokens = new CommonTokenStream( lexer );
final StimulusParser parser = new StimulusParser( tokens );
final SystemContext system = parser.system();
if( parser.getNumberOfSyntaxErrors() > 0 ) {
Debug.format( requirement );
}
return system;
}
private RequirementParser() {/**/}
}
Source:
Lexers and Unicode text
For those having the same problem using ANTLR4 in Java code: ANTLRInputStream being deprecated, here is a working way to pass multi-char Unicode data from a String to the MyLexer lexer:
import java.nio.CharBuffer;
import org.antlr.v4.runtime.CodePointBuffer;
import org.antlr.v4.runtime.CodePointCharStream;
import org.antlr.v4.runtime.CommonTokenStream;

String myString = "\u2013";
CharBuffer charBuffer = CharBuffer.wrap(myString.toCharArray());
CodePointBuffer codePointBuffer = CodePointBuffer.withChars(charBuffer);
CodePointCharStream cpcs = CodePointCharStream.fromBuffer(codePointBuffer);
MyLexer lexer = new MyLexer(cpcs);
CommonTokenStream tokens = new CommonTokenStream(lexer);
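As an aside, on the same runtime versions CharStreams.fromString wraps that whole buffer dance in a single call (the RequirementParser example above already uses it):

import org.antlr.v4.runtime.CharStream;
import org.antlr.v4.runtime.CharStreams;

CharStream cpcs = CharStreams.fromString("\u2013");
MyLexer lexer = new MyLexer(cpcs);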
You can specify the encoding of the file when actually reading it.
For Kotlin/Java that could look like this; there is no need to specify the encoding in the grammar:
val inputStream: CharStream = CharStreams.fromFileName(fileName, Charset.forName("UTF-16LE"))
val lexer = BlastFeatureGrammarLexer(inputStream)
Supported Charsets by Java/Kotlin

KRL: Signing requests with HMAC_SHA1

I made a test suite for the math:hmac_* KRL functions. I compared the KRL results with Python results, and KRL gives me different results.
code: https://gist.github.com/980788 results: http://ktest.heroku.com/a421x68
How can I get valid signatures from KRL? I'm assuming that the Python results are correct.
UPDATE: It works fine unless you want newline characters in the message. How do I sign a string that includes newline characters?
I suspect that your Python SHA library returns a different encoding than is expected by the b64encode library. My library does both the SHA and the Base64 in one call, so I had to do some extra work to check the results.
As you show in your KRL, the correct syntax is:
math:hmac_sha1_base64(raw_string,key);
math:hmac_sha256_base64(raw_string,key);
These use the same libraries that I use for the Amazon module which is testing fine right now.
To test those routines specifically, I used the test vectors from the RFCs (sha1, sha256). We don't support hexadecimal natively, so I wasn't able to use all of the test vectors, but I was able to use a simple one:
HMAC SHA1
test_case = 2
key = "Jefe"
key_len = 4
data = "what do ya want for nothing?"
data_len = 28
digest = 0xeffcdf6ae5eb2fa2d27416d5f184df9c259a7c79
HMAC SHA256
Key = 4a656665 ("Jefe")
Data = 7768617420646f2079612077616e7420666f72206e6f7468696e673f ("what do ya want for nothing?")
HMAC-SHA-256 = 5bdcc146bf60754e6a042426089575c75a003f089d2739839dec58b964ec3843
Here is my code:
global {
    raw_string = "what do ya want for nothing?";
    mkey = "Jefe";
}
rule first_rule {
    select when pageview ".*" setting ()
    pre {
        hmac_sha1 = math:hmac_sha1_hex(raw_string,mkey);
        hmac_sha1_64 = math:hmac_sha1_base64(raw_string,mkey);
        bhs256c = math:hmac_sha256_hex(raw_string,mkey);
        bhs256c64 = math:hmac_sha256_base64(raw_string,mkey);
    }
    {
        notify("HMAC sha1", "#{hmac_sha1}") with sticky = true;
        notify("hmac sha1 base 64", "#{hmac_sha1_64}") with sticky = true;
        notify("hmac sha256", "#{bhs256c}") with sticky = true;
        notify("hmac sha256 base 64", "#{bhs256c64}") with sticky = true;
    }
}
var hmac_sha1 = 'effcdf6ae5eb2fa2d27416d5f184df9c259a7c79';
var hmac_sha1_64 = '7/zfauXrL6LSdBbV8YTfnCWafHk';
var bhs256c = '5bdcc146bf60754e6a042426089575c75a003f089d2739839dec58b964ec3843';
var bhs256c64 = 'W9zBRr9gdU5qBCQmCJV1x1oAPwidJzmDnexYuWTsOEM';
The hex results for SHA-1 and SHA-256 match the test vectors for the simple case.
I tested the Base64 results by decoding the hex results and putting them through the Base64 encoder here.
My results were:
7/zfauXrL6LSdBbV8YTfnCWafHk=
W9zBRr9gdU5qBCQmCJV1x1oAPwidJzmDnexYuWTsOEM=
These match my calculations for HMAC-SHA1 Base64 and HMAC-SHA256 Base64 respectively.
If you are still having problems, could you provide me with the Base64 and SHA results from Python separately so I can identify the disconnect?
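For anyone who wants to cross-check the "Jefe" test vector outside KRL and Python, here is a minimal Java sketch using javax.crypto.Mac; it should print the same hex and Base64 values quoted above:

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class HmacVectorCheck {
    public static void main(String[] args) throws Exception {
        String key = "Jefe";
        String data = "what do ya want for nothing?";
        for (String alg : new String[] { "HmacSHA1", "HmacSHA256" }) {
            Mac mac = Mac.getInstance(alg);
            mac.init(new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), alg));
            byte[] digest = mac.doFinal(data.getBytes(StandardCharsets.UTF_8));

            // Hex and Base64 are just two renderings of the same raw digest bytes.
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            System.out.println(alg + " hex:    " + hex);
            System.out.println(alg + " base64: " + Base64.getEncoder().encodeToString(digest));
        }
    }
}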