I can't encrypt an Avro message using jpgpj - Scala

I am trying to encrypt Avro messages (with their schema) using the jpgpj library, and I get an exception when encrypting:
Exception in thread "main" org.bouncycastle.openpgp.PGPException: no suitable signing key found
at org.c02e.jpgpj.Encryptor.sign(Encryptor.java:982)
at org.c02e.jpgpj.Encryptor.prepareCiphertextOutputStream(Encryptor.java:773)
at org.c02e.jpgpj.Encryptor.encrypt(Encryptor.java:691)
at org.c02e.jpgpj.Encryptor.encrypt(Encryptor.java:662)
at avro.EncryptPayload$.main(EncryptPayload.scala:40)
at avro.EncryptPayload.main(EncryptPayload.scala)
I generated the key pair using these commands:
gpg --gen-key
gpg --armor --output public-key.gpg --export myemail@gmail.com
Then I copied the public-key.gpg file to src/main/resources of the project with this code, and the exception above is thrown. The exception itself is clear; I can see where it comes from in the library sources.
It is not a file-not-found problem: it says the public key cannot be used as a key to sign the message, and that confuses me. What am I doing wrong?

The problem vanished when I changed this line:
encryptor.setSigningAlgorithm(HashingAlgorithm.SHA256)
to this line:
encryptor.setSigningAlgorithm(HashingAlgorithm.Unsigned)
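For reference, here is a minimal sketch of the encrypt-only setup that works with just the exported public key (file names are only illustrative):

import java.io.File
import org.c02e.jpgpj.{Encryptor, HashingAlgorithm, Key}

object EncryptPayload {
  def main(args: Array[String]): Unit = {
    // Only the public key is loaded, so the encryptor can encrypt but never sign.
    val encryptor = new Encryptor(new Key(new File("src/main/resources/public-key.gpg")))
    // Disable signing explicitly; otherwise jpgpj looks for a usable secret key and fails.
    encryptor.setSigningAlgorithm(HashingAlgorithm.Unsigned)
    encryptor.encrypt(new File("payload.avro"), new File("payload.avro.gpg"))
  }
}

If the messages really have to be signed, the Encryptor also needs the secret key (for example one exported with gpg --armor --export-secret-keys) and its passphrase, because a public key alone cannot sign anything.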
EDIT
I have shared the code in a gist.

Related

OpenSource: Encryption of JDBC Password in configuration properties file

I noticed there is a plugin available for the enterprise version (https://download.rundeck.com/plugins/encrypted-datasource-plugin.html); is there an option for users of Rundeck open source to perform the same kind of encryption of the datasource password in the configuration file?
Since many people mentioned writing their own Java programs that leverage the Jasypt utilities, I tried this. I have two jar files (one for encrypt and one for decrypt). I created a directory (since I'm using the rpm-based Rundeck 3.3 installation) called /var/lib/rundeck/lib and added it to the JVM classpath in /etc/sysconfig/rundeckd via: export RDECK_JVM_SETTINGS="-Djava.class.path=/var/lib/rundeck/lib/*". I converted my /etc/rundeck/rundeck-config.properties file to groovy format and updated /etc/sysconfig/rundeck with: export RDECK_CONFIG_FILE="/etc/rundeck/rundeck-config.groovy". However, when I change the /etc/rundeck/rundeck-config.groovy entry for datasource.password to:
datasource.password=MyDecrypt("MyTest123Password")
I get an error in the Rundeck logs after restarting:
[2020-09-08T18:01:03,168] WARN context.AnnotationConfigServletWebServerApplicationContext - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'application': Initialization of bean failed; nested exception is groovy.lang.MissingMethodException: No signature of method: groovy.util.ConfigSlurper$_parse_closure5.MyDecrypt() is applicable for argument types: (String) values: [MyTest123Password]
Any suggestions?
That encryption is only available in Rundeck Enterprise; perhaps the best approach on Rundeck Community is to secure the rundeck-config.properties file through UNIX file permissions.
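For example (a sketch; the service account is assumed to be rundeck:rundeck, adjust to your installation):

chown rundeck:rundeck /etc/rundeck/rundeck-config.properties
chmod 600 /etc/rundeck/rundeck-config.properties

That way only the Rundeck service user (and root) can read the plain-text datasource password.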

Parse error generating server stub with openapitools/openapi-generator-cli using OAS 3.0

I am trying to generate server code using openapitools/openapi-generator-cli, which I installed globally via NPM.
When I run the command:
openapi-generator generate -i MyApi.yaml -g aspnetcore -o ./src
I get the following error:
[main] ERROR i.s.parser.SwaggerCompatConverter - failed to read resource listing
com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'openapi': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
I have also tried converting my spec file to json and encountered the same error.
How can I resolve this error parsing the YAML file?
I ran my spec file through the online editor at http://editor.swagger.io/ and found an error in my YAML: I had forgotten to add a parameters entry for a path that has a parameter in its path template. Once I fixed that, the generator worked correctly.
So this was user error, though the error message could be better.
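For reference, in OAS 3.0 every templated segment in a path must have a matching parameters entry, roughly like this (path and names are just illustrative):

paths:
  /items/{itemId}:
    get:
      parameters:
        - name: itemId
          in: path
          required: true
          schema:
            type: string
      responses:
        '200':
          description: OK

A missing parameters block like this is what the online editor flagged directly, while the generator only surfaced the generic parse failure above.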

Rex and identity files

I'm trying to configure a fi-ware cloud instance using Rex. What these instances (and probably other OpenStack-based systems) provide is an "identity file", a single private key that you can use to connect to them. I have been using variations of this:
user "root";
private_key "/home/jmerelo/.ssh/jj-iv.pem";
public_key "/home/one/public/key.dsa";
key_auth;
group fiware => "130.206.x.y";
desc "Install git";
task "git", group => "fiware", sub {
install "git";
};
where the private key is the one provided by fi-ware, and the public key is, well, whatever I thought of, or nothing.
If no public key is provided, the error is:
[2014-11-30 11:45:45] WARN - Error running task/batch: No public_key file defined. at /home/jmerelo/perl5/perlbrew/perls/perl-5.20.0/lib/site_perl/5.20.0/Rex/Task.pm line 621.
at /home/jmerelo/perl5/perlbrew/perls/perl-5.20.0/lib/site_perl/5.20.0/Rex/TaskList/Base.pm line 273.
which is quite obviously true. But if I try other public keys, the error is:
[2014-11-30 11:48:37] WARN - Error running task/batch: Wrong username/password or wrong key on 130.206.127.211. Or root is not permitted to login over SSH. at /home/jmerelo/perl5/perlbrew/perls/perl-5.20.0/lib/site_perl/5.20.0/Rex/TaskList/Base.pm line 273.
Using
ssh -i ~/.ssh/jj-iv.pem root@130.206.x.y
connects correctly to the instance. So maybe the question is "Can Rex use a single private key to connect to a host?"
Finally, I generated a public key from the private key using, as suggested by the documentation,
$ ssh-keygen -y -f /path/to/your/private.key > public.key
and then used that public.key in the Rexfile.
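The relevant part of the Rexfile then ends up looking roughly like this (paths are illustrative, matching the ones above):

user "root";
private_key "/home/jmerelo/.ssh/jj-iv.pem";
# public key derived from the private key with ssh-keygen -y
public_key "/home/jmerelo/.ssh/public.key";
key_auth;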

Exception while creating sp.xml using ssoadmin

I am facing this exception when I try to create the sp.xml using ssoadm:
com.sun.identity.cli.CLIException: AdminTokenAction: FATAL ERROR: Cannot obtain Application SSO token.
Check AMConfig.properties for the following properties
com.sun.identity.agents.app.username
com.iplanet.am.service.password
at com.sun.identity.cli.LogWriter.log(LogWriter.java:109)
at com.sun.identity.cli.Authenticator.ldapLogin(Authenticator.java:170)
at com.sun.identity.cli.AuthenticatedCommand.ldapLogin(AuthenticatedCommand.java:144)
at com.sun.identity.federation.cli.CreateMetaDataTemplate.handleRequest(CreateMetaDataTemplate.java:113)
at com.sun.identity.cli.SubCommand.execute(SubCommand.java:291)
at com.sun.identity.cli.CLIRequest.process(CLIRequest.java:212)
at com.sun.identity.cli.CLIRequest.process(CLIRequest.java:134)
at com.sun.identity.cli.CommandManager.serviceRequestQueue(CommandManager.java:573)
at com.sun.identity.cli.CommandManager.<init>(CommandManager.java:171)
at com.sun.identity.cli.CommandManager.main(CommandManager.java:148)
I also tried adding something like this in ssoadm.bat:
-D"com.iplanet.am.naming.map.site.to.server=https://lb.example.com:443/openam=http://server1.example.com:8080/openam"
But I still get the same exception...
How can I fix it?
Thanks in advance,
The 'map-to-site' property is only needed if you have a site configured and the host where you run ssoadm is not able to talk to the site URL.
You may set -Dcom.iplanet.services.debug.level=message -Dcom.iplanet.services.debug.directory=WRITABLE_EXISTING_DIRECTORY as JVM options within ssoadm.bat.
You may then look into the debug directory, where you should find a pointer to what's wrong.
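Following the same quoting style as the other options in this question, those two settings could be added to the java command in ssoadm.bat like this (the directory is only a placeholder; it must already exist and be writable):

-D"com.iplanet.services.debug.level=message"
-D"com.iplanet.services.debug.directory=C:\openam\debug"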
The original exception got sorted in my case when I went to the openam/bin folder, opened the ssoadm.bat file in an editor, and added the following two lines to the java command:
-D"javax.net.ssl.trustStore=F:\tomcatsslkeystore" (tomcat keystore path)
-D"javax.net.ssl.trustStorePassword=tomcatsslkeystore" (tomcat keystore password)

FATAL org.apache.hadoop.conf.Configuration - error parsing conf file: org.xml.sax.SAXParseException

I'm trying to run Pig locally (installed using Homebrew) to test a script. However, I get the following error when I attempt to run a simple DUMP from the interactive prompt (pig -x local):
2012-07-16 23:20:40,447 [Thread-7] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
[Fatal Error] :63:85: Character reference "&#2" is an invalid XML character.
2012-07-16 23:20:40,688 [Thread-7] FATAL org.apache.hadoop.conf.Configuration - error parsing conf file: org.xml.sax.SAXParseException: Character reference "&#2" is an invalid XML character.
The same load/dump works fine on Elastic MapReduce.
I can't find any XML config files, and I've tried with both version 0.9.2 and 0.10.0
What am I missing?
Edit: Just checked a direct download (vs. homebrew) and it doesn't seem to work either
You should check that your Hadoop configuration files have correct configuration data.
Have a look in your hadoop/conf directory, inside these files (a minimal well-formed example is sketched below):
hdfs-site.xml
mapred-site.xml
core-site.xml
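For comparison, a minimal well-formed core-site.xml for a purely local setup looks roughly like this (the property value is just the usual local-mode default, not something specific to this problem):

<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>file:///</value>
  </property>
</configuration>

Anything that does not parse as clean XML, or that contains stray control characters, will trigger exactly this kind of SAXParseException.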
I finally worked out what the problem was. I ended up having to use dtruss -p on the pig/java process, which revealed a temporary directory and dynamically generated XML files. Once the temporary directory was discovered, it all fell into place quickly.
Pig was picking up the proxy exclusions from my network connections, which had, as far as I can tell, the character &#2 (http://www.fileformat.info/info/unicode/char/02/index.htm) embedded in them. How this invalid value came to be in my network preferences in the first place, I haven't the faintest clue.
The value was then being pulled into dynamically generated files, for example /tmp/hadoop-vertis/mapred/staging/vertis-1005847898/.staging/job_local_0001/job.xml.
The offending lines:
<property><name>ftp.nonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
<property><name>socksNonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
<property><name>http.nonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>