Trying to use ESAPI but getting a ConfigurationException

ESAPI: WARNING: System property org.owasp.esapi.opsteam is not set
ESAPI: WARNING: System property org.owasp.esapi.devteam is not set
ESAPI: Attempting to load ESAPI.properties via file I/O.
ESAPI: Attempting to load ESAPI.properties as resource file via file I/O.
ESAPI: Not found in org.owasp.esapi.resources directory or file not readable:
ESAPI.properties
ESAPI: Loading validation.properties via file I/O failed.
ESAPI: Attempting to load validation.properties via the classpath.
ESAPI: validation.properties could not be loaded by any means. fail.. Caught java.lang.IllegalArgumentException; exception message was: java.lang.IllegalArgumentException: Failed to load ESAPI.properties as a classloader resource.
ESAPI: SecurityConfiguration for ESAPI.printProperties not found in ESAPI.properties. Using default: false
ESAPI: SecurityConfiguration for Encoder.DefaultCodecList not found in ESAPI.properties. Using default: [org.owasp.esapi.codecs.HTMLEntityCodec, org.owasp.esapi.codecs.PercentCodec, org.owasp.esapi.codecs.JavaScriptCodec]
org.owasp.esapi.errors.ConfigurationException: java.lang.reflect.InvocationTargetException Encoder class (org.owasp.esapi.reference.DefaultEncoder) CTOR threw exception

You can safely ignore those warning messages; they're a red herring. They refer to a more secure configuration option that you can use (although most people don't) when deploying an application that uses ESAPI.
[Aside:
The idea is that it allows you to split the ESAPI.properties file into two files, one controlled by the dev team and the other controlled by the operations (ops) team. Any property found in the one controlled by the ops team overrides an identical property in the dev version.
This feature was developed in the days before DevOps became as prevalent as it is today (and long before things like HashiCorp Vault), so perhaps it doesn't make as much sense now. The intent was to allow the devs to have their own ESAPI.properties file with properties like Encryptor.MasterKey that all the developers can safely share, while the operations team sets a separate version for QA and production deployments. (It of course applies to other properties as well, but I think those were the properties that drove it.)
So that explains the warnings part.]
But your actual problem is that ESAPI cannot find your ESAPI.properties file anywhere. Look at this for an explanation of how ESAPI tries to locate your configuration files:
https://www.javadoc.io/static/org.owasp.esapi/esapi/2.5.1.0/org/owasp/esapi/reference/DefaultSecurityConfiguration.html
If you are still having trouble, what I generally recommend is setting the system property 'org.owasp.esapi.resources' on the 'java' command line.
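For example, assuming your ESAPI.properties and validation.properties live in a directory such as /etc/myapp/esapi (a hypothetical path; substitute your own), the command line might look like:
java -Dorg.owasp.esapi.resources=/etc/myapp/esapi -jar your-app.jar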
If for some reason you don't want to do that, you will have to provide us with more details, like ALL the messages, including the complete exception stack trace.
Hope that helps.

Related

Creating and using a custom kafka connect configuration provider

I have installed and tested Kafka Connect in distributed mode; it works now, connecting to the configured sink and reading from the configured source.
That being the case, I moved on to enhancing my installation. The one area I think needs immediate attention is the fact that, to create a connector, the only available means is through REST calls, which means I need to send my information over the wire, unprotected.
In order to secure this, Kafka introduced the new ConfigProvider seen here.
This is helpful, as it allows you to set properties on the server and then reference them in the REST call, like so:
{
.
.
"property":"${file:/path/to/file:nameOfThePropertyInFile}"
.
.
}
This works really well, just by adding the property file on the server and adding the following config to the distributed.properties file:
config.providers=file # multiple comma-separated provider types can be specified here
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
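For illustration (reusing the placeholder path and key from the snippet above), the referenced file is just a plain properties file readable by the Connect worker:
# /path/to/file
nameOfThePropertyInFile=someSecretValue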
While this solution works, it really does not help to ease my concerns regarding security, as the information has now gone from being sent over the wire to sitting in a repository, in plain text for everyone to see.
The Kafka team foresaw this issue and allowed clients to produce their own configuration providers by implementing the interface ConfigProvider.
I have created my own implementation and packaged it in a jar, giving it the suggested final name:
META-INF/services/org.apache.kafka.common.config.provider.ConfigProvider
and added the following entry in the distributed file:
config.providers=cust
config.providers.cust.class=com.somename.configproviders.CustConfigProvider
However, I am getting an error from Connect stating that a class implementing ConfigProvider with the name:
com.somename.configproviders.CustConfigProvider
could not be found.
I am at a loss now, because the documentation on their site does not explain very well how to configure custom config providers.
Has someone worked on a similar issue and could provide some insight into this? Any help would be appreciated.
I just went through this to set up a custom ConfigProvider recently. The official doc is ambiguous and confusing.
I have created my own implementation and packaged it in a jar, giving it the suggested final name:
META-INF/services/org.apache.kafka.common.config.provider.ConfigProvider
You can name the jar whatever you like, but it needs to be packaged in jar format with a .jar suffix.
Here is the complete step-by-step. Suppose your custom ConfigProvider's fully-qualified name is com.my.CustomConfigProvider.MyClass.
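Before the steps, here is a minimal sketch of what such a provider class might look like (the get() body is a placeholder assumption; a real implementation would read from your secured store):

package com.my.CustomConfigProvider;

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import org.apache.kafka.common.config.ConfigData;
import org.apache.kafka.common.config.provider.ConfigProvider;

public class MyClass implements ConfigProvider {

    @Override
    public void configure(Map<String, ?> configs) {
        // provider-level settings (config.providers.<alias>.param.*) arrive here
    }

    @Override
    public ConfigData get(String path) {
        return get(path, null);
    }

    @Override
    public ConfigData get(String path, Set<String> keys) {
        // placeholder: a real provider would look the requested keys up in a secured store
        Map<String, String> data = new HashMap<>();
        data.put("password", "123");
        return new ConfigData(data);
    }

    @Override
    public void close() {
        // release any resources held by the provider
    }
}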
1. Create a file named META-INF/services/org.apache.kafka.common.config.provider.ConfigProvider. The file content is the fully-qualified class name:
com.my.CustomConfigProvider.MyClass
Include your source code and the above META-INF folder when generating the jar package. If you are using Maven, the META-INF/services folder typically goes under src/main/resources so that it ends up at the root of the jar.
Put your final jar file, say custom-config-provider-1.0.jar, under the Kafka worker plugin folder (default is /usr/share/java; this is the PLUGIN_PATH in the Kafka worker config file).
Upload all the dependency jars to PLUGIN_PATH as well. Use the META-INF/MANIFEST.MF file inside your jar to configure the 'Class-Path' of dependent jars that your code will use.
In the Kafka worker config file, create two additional properties:
CONNECT_CONFIG_PROVIDERS: 'mycustom', // Alias name of your ConfigProvider
CONNECT_CONFIG_PROVIDERS_MYCUSTOM_CLASS:'com.my.CustomConfigProvider.MyClass',
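(Those are environment-variable style names as used in containerized deployments; if your worker is configured with a plain properties file instead, the equivalent entries would be:)
config.providers=mycustom
config.providers.mycustom.class=com.my.CustomConfigProvider.MyClass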
Restart workers
Update your connector config file by POSTing it to the Kafka Connect REST API (e.g. with curl). In the connector config file, you can reference a value inside the ConfigData returned from ConfigProvider.get(path, keys) using syntax like:
database.password=${mycustom:/path/pass/to/get/method:password}
Here the ConfigData wraps a map containing {password: 123}.
If you are still seeing a ClassNotFoundException, your Class-Path is probably not set up correctly.
Note:
• If you are using AWS ECS/EC2, you need to set the worker config through environment variables.
• The worker config and the connector config file are different things.

Workflow step is failing after upgrade from AEM6.1 to AEM6.3

We just upgraded from AEM 6.1 to 6.3. I am trying to execute a workflow but am getting the below error:
07.08.2017 15:20:21.233 *ERROR* [sling-threadpool-cc7c6ae7-7243-4db2-9490-b0810d422592-(apache-sling-job-thread-pool)-282-Granite Workflow Queue(com/adobe/granite/workflow/job/etc/workflow/models/content-request-for-deletion/jcr_content/model)] com.adobe.granite.repository.impl.SlingRepositoryImpl Bundle com.adobe.granite.workflow.core is NOT whitelisted to use SlingRepository.loginAdministrative
07.08.2017 15:20:21.233 *ERROR* [sling-threadpool-cc7c6ae7-7243-4db2-9490-b0810d422592-(apache-sling-job-thread-pool)-282-Granite Workflow Queue(com/adobe/granite/workflow/job/etc/workflow/models/content-request-for-deletion/jcr_content/model)] com.adobe.granite.workflow.core.job.JobHandler Error executing workflow step
java.lang.RuntimeException: Error logging in as service user
at com.adobe.granite.workflow.core.util.ServiceLoginUtil.getWorkflowPayloadSession(ServiceLoginUtil.java:82)
at com.adobe.granite.workflow.core.util.ServiceLoginUtil.getWorkflowPayloadWorkflowSession(ServiceLoginUtil.java:127)
at com.adobe.granite.workflow.core.job.JobHandler.process(JobHandler.java:203)
at org.apache.sling.event.impl.jobs.JobConsumerManager$JobConsumerWrapper.process(JobConsumerManager.java:500)
at org.apache.sling.event.impl.jobs.queues.JobQueueImpl.startJob(JobQueueImpl.java:291)
at org.apache.sling.event.impl.jobs.queues.JobQueueImpl.access$100(JobQueueImpl.java:58)
at org.apache.sling.event.impl.jobs.queues.JobQueueImpl$1.run(JobQueueImpl.java:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.jcr.LoginException: Bundle com.adobe.granite.workflow.core is NOT whitelisted
at org.apache.sling.jcr.base.AbstractSlingRepository2.loginAdministrative(AbstractSlingRepository2.java:378)
at com.adobe.granite.workflow.core.util.ServiceLoginUtil.getWorkflowPayloadSession(ServiceLoginUtil.java:76)
... 9 common frames omitted
Do I need to create a service user? How can I do so?
You will find this link useful: https://issues.apache.org/jira/browse/SLING-5135
loginAdministrative is a deprecated method which you can still use; however, in 6.3 an extra security level was added, so in order to be able to use it you need to create an OSGi configuration for
org.apache.sling.jcr.base.internal.LoginAdminWhitelist.fragment
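For example, a whitelist fragment for the bundle named in the error above could be created as a factory configuration, e.g. a file named org.apache.sling.jcr.base.internal.LoginAdminWhitelist.fragment-workflow.config (the fragment suffix is arbitrary; the property names below are the ones used by the Sling whitelist implementation, so double-check them against your instance):
whitelist.name="workflow"
whitelist.bundles=["com.adobe.granite.workflow.core"]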
This problem occurs when we try to access the resource resolver through administrative services in AEM 6.3 or above. You can remove this error in the following way:
Apache Sling Service User Mapper Service
There are two options in this configuration:
Service mappings: The service mappings configuration can be used here.
You can configure it like this:
Bundle-Symbolic-Name: Sub-Service[Optional] = System-User-Name
Default User: If there is no service mapping corresponding to a bundle, then the bundle will pick the default user and use it as its service authentication user. So if you don't want to provide any service mappings, you can use the default user option, but it is not specific to the bundle.
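For example, a service mapping entry might look like this (the bundle symbolic name, sub-service, and user names here are hypothetical; the user must exist as a system user with the required permissions):
com.myproject.core:mySubService=my-service-user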
Apache Sling Service User Mapper Amendment
This configuration is used when you want to have an individual configuration for a particular project.
If more than one configuration corresponds to a particular bundle, the service user is picked based on ranking (the higher the number, the higher the ranking).
New loginService methods
New methods have been introduced to replace the loginAdministrative methods:
ResourceResolver getServiceResourceResolver(Map authenticationInfo) throws LoginException;
Session loginService(String serviceInfo, String workspace) throws LoginException, RepositoryException;
Note: Each bundle using the ResourceResolverFactory or SlingRepository service actually gets an instance bound to the using bundle. That bundle is used to identify the service.
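A minimal usage sketch (the sub-service name and the way the factory is obtained are assumptions for illustration; it needs a Service User Mapper entry like the one shown above):

import java.util.Collections;
import java.util.Map;

import org.apache.sling.api.resource.LoginException;
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.ResourceResolverFactory;

public class ServiceResolverExample {

    // in a real OSGi component this would be injected via @Reference
    private ResourceResolverFactory resolverFactory;

    public void doWork() throws LoginException {
        Map<String, Object> authInfo = Collections.<String, Object>singletonMap(
                ResourceResolverFactory.SUBSERVICE, "mySubService");
        try (ResourceResolver resolver = resolverFactory.getServiceResourceResolver(authInfo)) {
            // use resolver.getResource(...), resolver.adaptTo(Session.class), etc.
        }
    }
}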

JBoss connectivity issue

I am getting the following error when trying to connect my application to JBoss:
WARN | ISPN004022: Unable to invalidate transport for server:
/127.0.0.1:12222 ERROR | ISPN004017: Could not fetch transport
org.infinispan.client.hotrod.exceptions.TransportException:: Could not
connect to server: /127.0.0.1:12222
I tried searching a lot for a solution. It would be great if someone could help me out with this. Thanks.
You should check the following:
Make sure that your webapp is using the same port as defined in the socket-binding definitions for hotrod in the standalone.xml in the JDG configuration folder (see the client sketch after this list);
Make sure that your webapp is using the proper injection annotations for your RemoteCacheManager class (remember to use the @ApplicationScoped annotation at the class definition and for additional methods used to get the cache instance);
If you are using JBoss and JDG on the same host, you must check declarations of the JBOSS_HOME environment variable. This variable must be assigned to the JDG installation home directory and not the JBoss EAP home (check also port-offset settings at startup if you're using a custom shell script);
If you are not using both products on the same host, check firewall and network settings;
Remember to re-deploy the application always after every modification and check both EAP and JDG console output for warnings and/or errors.
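For reference, a minimal Hot Rod client setup looks roughly like this (a sketch; the host and port are taken from the error above and must match the hotrod socket-binding, including any port-offset, in your JDG standalone.xml, and the cache name is hypothetical):

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class HotRodClientExample {
    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        // host/port must match the server's hotrod endpoint (default port is 11222)
        builder.addServer().host("127.0.0.1").port(12222);
        RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());
        RemoteCache<String, String> cache = cacheManager.getCache("myCache"); // cache defined on the server
        cache.put("key", "value");
        System.out.println(cache.get("key"));
        cacheManager.stop();
    }
}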
The following errors are related (for example):
14:38:42,610 WARN [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (http-127.0.0.1:8080-1) ISPN004022:
Unable to invalidate transport for server: /127.0.0.1:11322
14:38:42,610 ERROR [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (http-127.0.0.1:8080-1) ISPN004017:
Could not fetch transport: java.lang.IllegalStateException: Pool not open

Can't access FTP using Eclipse

I am using the Remote System software on Eclipse. I can successfully log in to my FTP account but when I try to view the directories, I get the following message:
Message: Operation failed due to network I/O error
'java.net.SocketException: Connection reset by peer: socket write
error'
Any ideas are welcome.
Looks like there could be a negotiation issue.
Try the following solution:
I've got the same exception and in my case the problem was in the
renegotiation process. In fact my client closed the connection when
the server tried to change the cipher suite. After digging, it appears
that in JDK 1.6 update 22 the renegotiation process is disabled by
default. If your security constraints can afford this, try to enable
the unsafe renegotiation by setting the
sun.security.ssl.allowUnsafeRenegotiation system property to true.
http://www.oracle.com/technetwork/java/javase/overview/tlsreadme2-176330.html
Setting the System Properties/Mode Configuration
The various modes are set using the corresponding system properties, which must be set before the SunJSSE library is initialized. There are several ways to set these properties:
From the command line:
% java -Dsun.security.ssl.allowUnsafeRenegotiation=true Main
Within the application:
java.lang.System.setProperty("sun.security.ssl.allowUnsafeRenegotiation", "true");
In the Java Deployment environment (Plug-In/Web Start), there are several ways to set the system properties. (See Java Web App and Next Generation Web Browser Plugin for more information.)
Use the Java Control Panel to set the Runtime Environment Property on
a local/per-VM basis. This creates a local deployment.properties file.
Deployers can also distribute an enterprise-wide deployment.properties file by using the deployment.config mechanism. (See Deployment Configuration File and Properties.)
To set a property for a specific applet, use the HTML subtag "java_arguments" within the <applet> tag. (See Java Arguments.)
To set the property in a specific Java Web Start application or applet
using the new Plugin2 (6u10+), use the JNLP "property" sub-element of
the "resources" element. (See Resources Element.)

Ignore a log4net error in PowerShell

I have an issue with a script. Basically I don't use log4net at all and I'm not planning to, but some resource which I access during my script apparently has references to log4net, so I get these messages:
log4net:ERROR XmlConfigurator: Failed to find configuration section
'log4net' in the application's .config file. Check your .config file
for the <log4net> and <configSections> elements. The configuration
section should look like: <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler,log4net" />
I don't really care about this, as it is not a real error; I would prefer to somehow hide these messages from the prompt window. Is this possible?
How can I ignore this information without too much hassle?
This message comes from log4net's internal debugging, and means that no log4net configuration information was found in the config file. What I find strange is that this kind of info is usually opt-in:
There are 2 different ways to enable internal debugging in log4net.
These are listed below. The preferred method is to specify the
log4net.Internal.Debug option in the application's config file.
Internal debugging can also be enabled by setting a value in the application's configuration file (not the log4net configuration file, unless the log4net config data is embedded in the application's config file). The log4net.Internal.Debug application setting must be set to the value true, for example via an <add key="log4net.Internal.Debug" value="true" /> entry under <appSettings>.
This setting is read immediately on startup and will cause all internal debugging messages to be emitted.
To enable log4net's internal debug programmatically you need to set the log4net.Util.LogLog.InternalDebugging property to true.
Obviously the sooner this is set the more debug will be produced.
So either the code of one component uses the code approach, or there is a configuration value set to true. Your options are:
look through the configuration files for a reference to the log4net.Internal.Debug config key; if you find one set to true, set it to false.
add an empty log4net section in the configuration file to satisfy the configurator and prevent it from complaining (see the sketch after this list)
if the internal debugging is set through code, you may be able to redirect console out and the trace appenders (see the link for where the internal debugging writes to), but this really depends on your environment, so you'll need to dig a bit more to find how to catch all outputs. Not really simple.
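For the second option, a minimal sketch of what that declaration could look like in the application's .config file (the section handler type string is the standard log4net one; merge this into the app's existing configuration rather than replacing it):
<configuration>
  <configSections>
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
  </configSections>
  <log4net />
</configuration>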