Proper format for log4j2.xml RollingFile configuration - glassfish-4

I am getting the following exception in my glassfish 4 application that uses log4j2:
SEVERE: ERROR StatusLogger Invalid URL C:/glassfish4/glassfish/domains/domain1/config/log4j2.xml java.net.MalformedURLException: Unknown protocol: c
I have the following section in my log4j2.xml:
<RollingFile name="RollingFile" fileName="C:/glassfish4/glassfish/domains/domain1/logs/ucsvc.log"
filePattern="C:/glassfish4/glassfish/domains/domain1/logs/$${date:yyyy-MM}/ucsvc-%d{MM-dd-yyyy}-%i.log">
I understand that if it's looking for a URL, then "C:/glassfish4/..." is not the correct format.
However, the rolling file part actually works: I see a log file and the rolled log files where I expect them.
If I change to a URL (e.g. file:///C/glassfish4/...) that doesn't work at all.
So should I ignore the exception? (everything seems to be working ok). Or can someone explain the correct format for this section of the configuration?

I have not yet fully determined why the config file works for me (as it does for the OP), but I can confirm that changing the path reference to a file:// URL solves the problem (i.e., gets rid of the error/warning/irritant).
In my IntelliJ Run/Debug configurations, for VM options, I have:
-Dlog4j.configurationFile=file://C:\dev\path\to\log4j2.xml
I can confirm that '\' characters are translated to '/', so no worries there.
EDIT:
Okay, the whole thing works because they (the Apache developers) try really hard to load the configuration, and they do in fact load it from the file specified via the C:\... notation. They just emit a rather misleading exception before continuing to try.
In ConfigurationFactory::getConfiguration:
try {
    **source = getInputFromURI(FileUtils.getCorrectedFilePathUri(config));**
} catch (Exception ex) {
    // Ignore the error and try as a String.
}
if (source == null) {
    final ClassLoader loader = this.getClass().getClassLoader();
    **source = getInputFromString(config, loader);**
The first bolded line tries to load from a URL and fails, throwing the exception. The code then continues and pops into getInputFromString:
try {
    final URL url = new URL(config);
    return new ConfigurationSource(url.openStream(), FileUtils.fileFromURI(url.toURI()));
} catch (final Exception ex) {
    final ConfigurationSource source = getInputFromResource(config, loader);
    if (source == null) {
        try {
            **final File file = new File(config);
            return new ConfigurationSource(new FileInputStream(file), file);**
Here it tries to load the config as a URL again, fails and falls into the catch, tries the classpath resource next, fails again, and finally succeeds on the bolded lines (treating the location as a plain File).
(The code lines I wanted to emphasize are just wrapped in **; I guess the site doesn't permit nested tags. Anyway, you get the meaning.)
It's all a bit of a mess to read, but that's why it works even though you get that nasty-looking (and wholly misleading) exception.
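To see where that misleading message comes from, here is a tiny standalone sketch using plain JDK classes (not log4j code): it reproduces the "Unknown protocol: c" failure for a Windows-style path and shows the File fallback plus a properly formed file: URI. The path is the hypothetical one from the question.
import java.io.File;
import java.net.MalformedURLException;
import java.net.URL;

public class ConfigPathDemo {
    public static void main(String[] args) {
        String config = "C:/glassfish4/glassfish/domains/domain1/config/log4j2.xml";

        // Parsing the bare Windows path as a URL fails: "C:" is taken as the
        // protocol "c", for which no handler exists.
        try {
            new URL(config);
        } catch (MalformedURLException e) {
            System.out.println(e.getMessage()); // unknown protocol: c
        }

        // Treating the same string as a plain File is the branch that finally succeeds above.
        File file = new File(config);

        // And this is the unambiguous way to express that location as a URI,
        // which is what a hand-written file:///C:/... value spells out.
        System.out.println(file.toURI());
    }
}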

Thanks Jon, I was searching all over; this helped!
This is on Intellij 13, Tomcat 7.0.56
-Dlog4j.configurationFile=file://C:\Surendra\workspace\cmdb\resources\elasticityLogging.xml

The problem is not the contents of your log4j2.xml file.
The problem is that log4j2 cannot locate your log4j2.xml config file. If you look carefully at the error, the URL that is reported as invalid is C:/glassfish4/glassfish/domains/domain1/config/log4j2.xml: the config file.
I'm not sure why this is. Are you specifying the location of the config file via the system property -Dlog4j.configurationFile=path/to/log4j2.xml?
Still, if the application and logging work, then perhaps there is no problem. Strange, though. You can get more details about the log4j configuration by specifying <Configuration status="trace"> at the top of your log4j2.xml file. This will print log4j initialization details to the console.
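For reference, a minimal sketch of what that looks like, using the RollingFile appender from the question (layout and rollover policies omitted); status="trace" only affects log4j's own initialization output:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="trace">
  <Appenders>
    <RollingFile name="RollingFile"
                 fileName="C:/glassfish4/glassfish/domains/domain1/logs/ucsvc.log"
                 filePattern="C:/glassfish4/glassfish/domains/domain1/logs/$${date:yyyy-MM}/ucsvc-%d{MM-dd-yyyy}-%i.log">
      <!-- PatternLayout and rollover policies as in your existing file -->
    </RollingFile>
  </Appenders>
  <!-- Loggers section unchanged -->
</Configuration>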

Related

error.CannotStartupOSGIPlatform issue when running BIRT

I'm in the midst of integrating BIRT 4.6.0 into my GWT application. Unfortunately, whenever I run a specific section of the program, I get the following error:
org.eclipse.birt.core.exception.BirtException: error.CannotStartupOSGIPlatform
    at org.eclipse.birt.core.framework.Platform.startup(Platform.java:81)
I've done some searching and one thread mentioned a permissions error, but I am not sure what that entails. What does this mean?
EDIT: I just read another article suggesting it may be an issue with my classpath, but I already added all the JAR files from ReportEngine/lib to my build path. Does anyone know which JAR files I am supposed to include?
The offending code:
public static synchronized IReportEngine getBirtEngine(ServletContext sc) {
    if (birtEngine == null) {
        EngineConfig config = new EngineConfig();
        java.util.HashMap map = config.getAppContext();
        map.put(EngineConstants.APPCONTEXT_CLASSLOADER_KEY, SegnalazioniDbManager.class.getClassLoader());
        config.setAppContext(map);
        IPlatformContext context = new PlatformServletContext(sc);
        config.setPlatformContext(context);
        try {
            Platform.startup(config); // problem begins here
            .....
}
Yes, it is indeed a permission error.
The relevant file is:
WEB-INF/platform/configuration/org.eclipse.osgi/.manager/.fileTableLock
You need to grant access to that file for the user BIRT runs as (typically your application server's user).

Apache Camel File process is resulting in TypeConversion Error

I am using akka-camel to process files. My initial tests were working great; however, when I started passing in actual XML files it started choking on type conversions.
Here is my consumer (very simple, but it chokes at msg.bodyAs[String]):
class FileConsumer extends Consumer {
  def endpointUri = "file:/data/input/actor"
  val processor = context.actorOf(Props[Processor], "processor")

  def receive = {
    case msg: CamelMessage => {
      println("Parent...received %s" format msg)
      processor ! msg.bodyAs[String]
    }
  }
}
Error:
[ERROR] [04/27/2015 12:10:48.617] [ArdisSystem-akka.actor.default-dispatcher-5] [akka://ArdisSystem/user/$a] Error during type conversion from type: org.apache.camel.converter.stream.FileInputStreamCache to the required type: java.lang.String with value org.apache.camel.converter.stream.FileInputStreamCache#4611b35a due java.io.FileNotFoundException: /var/folders/dh/zfqvn9gn7cl6h63d3400y4zxp3xtzf/T/camel-tmp-807558/cos2920459202139947606.tmp (No such file or directory)
org.apache.camel.TypeConversionException: Error during type conversion from type: org.apache.camel.converter.stream.FileInputStreamCache to the required type: java.lang.String with value org.apache.camel.converter.stream.FileInputStreamCache#4611b35a due java.io.FileNotFoundException: /var/folders/dh/zfqvn9gn7cl6h63d3400y4zxp3xtzf/T/camel-tmp-807558/cos2920459202139947606.tmp (No such file or directory)
I am wondering if it has something to do with the actual contents of the XML. The files are not big at all (roughly 70 KB). I doubt I will be able to provide an actual example of the XML itself. I'm just baffled as to why something so small, being converted to a String, is having issues. Other dummy example XML files have worked fine.
EDIT:
One of the suggestions I received was to enable StreamCache, which I did. However, it still doesn't seem to be working. As Ankush commented, the error is confusing; I am not sure if it actually is a stream issue or if it really is a conversion problem.
http://camel.apache.org/stream-caching.html
I added the line below:
camel.context.setStreamCaching(true)
I was finally able to figure out the problem. The issue was not bad data, but the size of the files. To account for this, you need to add additional settings to the Camel context.
http://camel.apache.org/stream-caching.html
The settings I used are below. I will need to research further whether I should just turn off the stream cache, but this is a start.
camel.context.getProperties.put(CachedOutputStream.THRESHOLD, "750000");
Or turn off the stream cache:
camel.context.setStreamCaching(false)
Hope this helps someone else.
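For completeness, here is a minimal sketch of where these settings can live in an akka-camel setup; it assumes an ActorSystem named system, keeps stream caching on, and uses the 750000 threshold from above as an example value:
import akka.actor.ActorSystem
import akka.camel.CamelExtension
import org.apache.camel.converter.stream.CachedOutputStream

object CamelStreamCachingSetup {
  def configure(system: ActorSystem): Unit = {
    val camel = CamelExtension(system)
    // Keep stream caching enabled so the file body can be read more than once.
    camel.context.setStreamCaching(true)
    // Raise the threshold at which Camel spools cached streams to temp files on disk,
    // so bodies up to this size stay in memory.
    camel.context.getProperties.put(CachedOutputStream.THRESHOLD, "750000")
  }
}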
We were having the same issue; commenting out streamCaching() helped:
from(IEricssonConstant.ROUTE_USAGE_DATA_INDIVIDUAL_PROCSESS)
    //.streamCaching()
    .split(new ZipSplitter())
    .stopOnException()
    .streaming()
    .unmarshal().csv()
    .process(new UsageDataCSVRequestProcessor())

GetExportedValues<MyType> returns nothing, I can see the parts

I have a strange MEF problem. I tested this in a test project and it all seems to work pretty well, but for some reason it is not working in the real project.
This is the exporting code:
public void RegisterComponents()
{
    _registrationBuilder = new RegistrationBuilder();
    _registrationBuilder
        .ForTypesDerivedFrom(typeof(MyType))
        .SetCreationPolicy(CreationPolicy.NonShared)
        .Export();

    var catalog = new AggregateCatalog();
    catalog.Catalogs.Add(new AssemblyCatalog(typeof(MyType).Assembly, _registrationBuilder));

    var directoryCatalog = new DirectoryCatalog(PathToMyTypeDerived, _registrationBuilder);
    catalog.Catalogs.Add(directoryCatalog);

    _compositionContainer = new CompositionContainer(catalog);
    _compositionContainer.ComposeParts();

    var exports = _compositionContainer.GetExportedValues<MyType>();
    Console.WriteLine("{0} exports in AppDomain {1}", exports.Count(), AppDomain.CurrentDomain.FriendlyName);
}
The exports count is 0 :( Any ideas why?
In the log file I have many entries like this:
System.ComponentModel.Composition Information: 6 : The ComposablePartDefinition 'SomeOthertype' was ignored because it contains no exports.
Though I would think this is OK, because I wasn't interested in exporting 'SomeOtherType'.
UPDATE: I found this link, but after debugging through it I am not wiser; maybe I am not following it properly.
Thanks for any pointers
Cheers
I just had the same problem and this article helped me a lot.
It describes different reasons why a resolve can fail. One of the more important ones is that the dependency of a dependency of the type you want to resolve is not registered.
What helped me a lot was the trace output that gets written to the Output window when you debug your application. It describes exactly the reasons why a type couldn't be resolved.
Even with this output, you might need to dig a little, because I only got one level deep.
Example:
I wanted to resolve type A and I got a message like this:
System.ComponentModel.Composition Warning: 1 : The ComposablePartDefinition 'Namespace.A' has been rejected. The composition remains unchanged. The changes were rejected because of the following error(s): The composition produced multiple composition errors, with 1 root causes. The root causes are provided below. Review the CompositionException.Errors property for more detailed information.
1) No exports were found that match the constraint:
ContractName Namespace.IB
RequiredTypeIdentity Namespace.IB
Resulting in: Cannot set import 'Namespace.A..ctor (Parameter="b", ContractName="namespace.IB")' on part 'Namespace A'.
Element: Namespace.A..ctor (Parameter="b", ContractName="Namespace.IB") --> Namespace.A --> AssemblyCatalog (Assembly="assembly, Version=0.0.0.0, Culture=neutral, PublicKeyToken=...")
But I clearly saw a part for Namespace.IB. So, in the debugger, I tried to resolve that one, and I got another trace output. This time it told me that my implementation of Namespace.IB couldn't be resolved because one of its imports had a missing export; basically the same message as above, just with different types. And this time, I didn't find a part for that missing import. Now I knew which type was the real problem and could figure out why no registration happened for it.
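The same System.ComponentModel.Composition messages can also be captured without a debugger attached, since they appear to go through a standard trace source of that name (as the prefix of the messages above shows). A minimal App.config sketch; the listener name and log file name are arbitrary:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.diagnostics>
    <sources>
      <!-- MEF's composition diagnostics are written through this trace source -->
      <source name="System.ComponentModel.Composition" switchValue="Information">
        <listeners>
          <add name="mefLog"
               type="System.Diagnostics.TextWriterTraceListener"
               initializeData="mef-composition.log" />
        </listeners>
      </source>
    </sources>
    <trace autoflush="true" />
  </system.diagnostics>
</configuration>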

How do I verify that an NUnit Addin has been loaded?

From what I can tell, the assembly containing the addin must be located in C:\Program Files (x86)\NUnit 2.5.7\bin\net-2.0\addins. I think my assembly is being loaded, because I have to close NUnit-gui before I can replace the assembly in the addins directory. The problem, however, is that I don't see any of the effects of the addin (none of the event handlers are being called).
So how do I verify that my addin has been loaded? I'd love to step through with the debugger, but I'd be perfectly happy with print-line debugging. When I tried doing a File.WriteAllText(), the addin failed to load but gave no reason. Also, how do I debug the loading process?
The NUnit docs are helpful, but they're bare bones at best when it comes to extensibility and there isn't any intellisense available for classes in NUnit.Core.
You should use a tracing library like this one, which you can download here.
Now you can decorate your relevant methods with using statements like this:
using ApiChange.Infrastructure;

class MyAddin
{
    static TypeHashes myType = new TypeHashes(typeof(MyAddin));

    void RelevantMethod()
    {
        using (Tracer t = new Tracer(myType, "RelevantMethod"))
        {
            ....
            if (bLoaded == false)
                t.Error("Could not load addin because of {0}", reason);
        }
    }
}
Then you can enable tracing via the environment variable _TRACE
set _Trace=debugoutput
DebugOutput can be viewed with the Sysinternals tool DbgView (no attaching needed; simply start it and watch the traces).
Or you can trace to a file:
set _Trace=file
The trace file is located next to the executable, e.g. Nunit.exe.txt. If you set _TRACE to some random string, it will print the help to the console and to OutputDebugString.
Why this trace library? It is actually the ONLY one that is able to trace any exception when your method is left. This works when the method contains using statements for tracing like the one above. If it is actually your fault that NUnit chooses to ignore your plugin, you can find out now.
The output will look like this:
* ApiChange.IntegrationTests.Diagnostics.TracingTests.Demo_Show_Leaving_Trace_With_Exception
18:57:46.665 03064/05180 <{{ > ApiChange.IntegrationTests.Diagnostics.TracingTests.SomeMethod
18:57:46.668 03064/05180 <{{ > ApiChange.IntegrationTests.Diagnostics.TracingTests.SomeOtherMethod
18:57:46.670 03064/05180 < }}< ApiChange.IntegrationTests.Diagnostics.TracingTests.SomeOtherMethod Exception thrown: System.NotImplementedException: Hi this a fault
at ApiChange.IntegrationTests.Diagnostics.TracingTests.FaultyMethod()
at ApiChange.IntegrationTests.Diagnostics.TracingTests.SomeOtherMethod()
at ApiChange.IntegrationTests.Diagnostics.TracingTests.SomeMethod()
at ApiChange.IntegrationTests.Diagnostics.TracingTests.Demo_Show_Leaving_Trace_With_Exception()
18:57:46.670 03064/05180 < }}< ApiChange.IntegrationTests.Diagnostics.TracingTests.SomeOtherMethod Duration 2ms
18:57:46.689 03064/05180 < }}< ApiChange.IntegrationTests.Diagnostics.TracingTests.SomeMethod Duration 24ms
That should make it easy to find out why your Addin was not used at all. And you do not need a debugger ;-).
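If all you want is a quick sanity check that NUnit actually calls into your addin, a single trace line in the Install method is enough (viewable in DbgView, as above). A minimal sketch; the class name and description are illustrative:
using NUnit.Core.Extensibility;

[NUnitAddin(Description = "Sample addin used to check loading")]
public class MyAddin : IAddin
{
    // NUnit calls Install once when it loads the addin from the addins directory.
    public bool Install(IExtensionHost host)
    {
        System.Diagnostics.Trace.WriteLine("MyAddin.Install called");
        return true;
    }
}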
Yours,
Alois Kraus

Zend framework: how to migrate a site

I'm trying to copy a site built on ZF from production to a localhost environment. All files and db contents were copied, but I just get a blank screen. No errors, nothing.
Changes made in config.ini: I added an entry for development : production containing:
general.host = "localhost:8888"
db.adapter = PDO_MYSQL
db.params.host = localhost:8888
db.params.username = bla
db.params.password = bla
db.params.dbname = db_name
bootstrap.php
$frontController->registerPlugin(new Initializer('development'));
.htaccess contains a few basic directives, but if I put some random garbage at the top I don't get an Internal Server Error, so I don't think the request even reaches the .htaccess stage.
Did I miss some kind of configuration somewhere?
EDIT:
I have the code below in my bootstrap but still get a blank page. It comes back very quickly; the page barely loads at all.
$frontController->registerPlugin(new Initializer('development'));
$frontController->throwExceptions(true);

// Dispatch the request using the front controller.
try {
    $frontController->dispatch();
} catch (Exception $exception) {
    exit($exception->getMessage());
}
Try adding this line before calling dispatch() on the front controller object:
$frontController->throwExceptions(true);
On production systems, throwing exceptions is almost always disabled; enabling it in development could tell you more about the nature of the problem.
Yes, you probably missed some configuration.
Try setting display_errors=On in php.ini. You should be able to see what is going on.
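If you'd rather not touch php.ini on the dev machine, the same thing can be done at runtime; a minimal sketch for the top of your public/index.php (or bootstrap) while debugging locally, to be removed once the site works:
<?php
// Local debugging only: surface every error instead of a blank page.
error_reporting(E_ALL);
ini_set('display_errors', '1');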
Also, as suggested, try putting $frontController->throwExceptions(true) before calling dispatch().
Regarding the .htaccess file: you need to put AllowOverride All (or anything valid other than None) in your apache.conf/vhost configuration.