I just successfully added the Grizzled-SLF4J logger to my project by following this guide: http://alvinalexander.com/scala/how-to-log-output-file-grizzled-slf4j-scala-simplelogger.properties
But with these properties there is no option to create a dynamic file name:
org.slf4j.simpleLogger.logFile = /tmp/myapp.log
org.slf4j.simpleLogger.defaultLogLevel = info
org.slf4j.simpleLogger.showDateTime = true
org.slf4j.simpleLogger.dateTimeFormat = yyyy'/'MM'/'dd' 'HH':'mm':'ss'-'S
org.slf4j.simpleLogger.showThreadName = true
org.slf4j.simpleLogger.showLogName = true
org.slf4j.simpleLogger.showShortLogName = false
org.slf4j.simpleLogger.levelInBrackets = true
Is there any other logger for Scala projects that allows me to add a dynamic file name, or is there a way to do this using this library? (I see it is just a wrapper for slf4j.)
The slf4j library is really an interface to some underlying logging implementation. You would have log4j, logback, or some other logging implementation do the heavy lifting, with an adapter jar, as explained in the slf4j documentation.
You would then provide the details in the properties file for log4j, for instance, where you can bind in dynamically constructed file names.
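For example, log4j 1.x substitutes JVM system properties into its configuration file, so a sketch like the following (the property name logfile.name is just an illustration, not something the library defines) lets you construct the file name at startup:

# log4j.properties -- ${logfile.name} is resolved from a JVM system property
log4j.rootLogger=INFO, FILE
log4j.appender.FILE=org.apache.log4j.FileAppender
log4j.appender.FILE.File=${logfile.name}
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d{yyyy/MM/dd HH:mm:ss} [%t] %-5p %c - %m%n

You would then launch the JVM with something like -Dlogfile.name=/tmp/myapp-20240101.log, or call System.setProperty("logfile.name", ...) before the first logger is initialized.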
Related
I have an application which refers to a MY_PRODUCT_CONF_DIR/mycustom.properties file that holds some key-value pairs which need to be external to the ear, war, or jars deployed on my WildFly. Earlier, in JBoss 6.1.0, we did it in a tricky way: JBoss 6.1.0 has a collection of URLs visible to the ClassLoader loading the server.
For example (https://repository.jboss.org/org/jboss/jbossas/jboss-as-distribution/6.1.0.Final/, refer to jboss-6.1.0.Final-src\main\src\main\java\org\jboss\Main.java):
// Define a Set URLs to have visible to the CL loading the Server
final Set<URL> urls = new HashSet<URL>();
..........
urls.add(new File(MY_PRODUCT_CONF_DIR).toURI().toURL()); // I have added the DIR
.........
// Make a ClassLoader to be used in loading the server
final URL[] urlArray = urls.toArray(new URL[]{});
final ClassLoader loadingCl = new URLClassLoader(urlArray, tccl);
// Load the server
server = JBossASServer.class.cast(ServerFactory.createServer(DEFAULT_AS_SERVER_IMPL_CLASS_NAME, loadingCl));
In my code, I read the properties file from the ClassLoader:
URLClassLoader ucl = (URLClassLoader) loader;
URL url = ucl.findResource(propertiesResource);
final InputStream inputStream = url.openStream();
..........
So, is there any option to retain this mechanism? Can I add my CONFIG_DIR to the ModuleClassLoader as a URLClassLoader?
Is there any way to keep the properties file external to the ear/jars and the module path?
You can put your properties file in a module and load it that way, without having to use a URLClassLoader for getResource or getInputStream. Another way would be to specify the path to your properties file in some system property and then just do a Properties.load(Files.newInputStream(myPath)).
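A minimal sketch of the second approach, assuming the system property is named conf.dir and the file keeps its mycustom.properties name (both names are illustrative):

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Properties;

// JVM started with e.g. -Dconf.dir=/opt/myproduct/conf
Path confFile = Paths.get(System.getProperty("conf.dir"), "mycustom.properties");
Properties props = new Properties();
try (InputStream in = Files.newInputStream(confFile)) {
    props.load(in); // the key-value pairs stay outside the ear/war/jars
}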
With a simple client app, I make an object and an object repository, connect to a Geode cluster, and then run a @Bean ApplicationRunner to put some data into a remote region.
@ClientCacheApplication(name = "Web", locators = @Locator, logLevel = "debug", subscriptionEnabled = true)
@EnableClusterDefinedRegions
@EnableClusterConfiguration(useHttp = true)
@EnablePdx
public class MyCache {

    private static final Logger log = LoggerFactory.getLogger(MyCache.class);

    @Bean
    ApplicationRunner StartedUp(MyRepository myRepo) {
        log.info("In StartedUp");
        return args -> {
            String guid = UUID.randomUUID().toString().substring(0, 8).toUpperCase();
            MyObject msg = new MyObject(guid, "Started");
            myRepo.save(msg);
            log.info("Out StartedUp");
        };
    }
}
The "save" put fails with
org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer'; nested exception is org.springframework.web.client.ResourceAccessException: I/O error on POST request for "https://localhost:7070/gemfire/v1/regions": Connection refused: connect; nested exception is java.net.ConnectException: Connection refused: connect
The question "Problem creating region and persist region to disk Geode Gemfire Spring Boot" helped. The problem is the @EnableClusterConfiguration(useHttp = true) annotation.
This annotation makes the remote cluster appear to be on localhost. If I remove it altogether, then the put works.
If I remove just the useHttp = true, there is another error:
org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer'; nested exception is org.apache.geode.cache.client.ServerOperationException: remote server on #.#.#.#(Web:9408:loner)### The function is not registered for function id CreateRegionFunction
In a nutshell, the SDG @EnableClusterConfiguration annotation (details available here) enables configuration metadata defined and declared on the client (i.e. a Spring [Boot] Data GemFire/Geode application) to be pushed from the client-side to the cluster (of GemFire/Geode servers).
I say "enable" because it depends on the client-side configuration metadata (i.e. Spring bean definitions you have explicitly or implicitly defined/declared). Explicit configuration is configuration you defined with a bean definition (in XML, or JavaConfig with #Bean, etc). Implicit configuration is auto-configuration or using SDG annotations like #EnableEntityDefinedRegions or #EnableCachingDefinedRegions, etc.
By default, the @EnableClusterConfiguration annotation assumes the cluster of GemFire or Geode servers was configured and bootstrapped with Spring, specifically using the SDG annotation configuration model. When the GemFire or Geode servers are configured and bootstrapped with Spring, SDG goes on to register some provided, canned GemFire Functions that the @EnableClusterConfiguration annotation calls (by default, and as a fallback).
NOTE: See the appendix in the SBDG reference documentation on configuring and bootstrapping a GemFire or Geode server, or even a cluster of servers, with Spring. This certainly simplifies local development and debugging as opposed to using Gfsh. You can do all sorts of interesting combinations: Gfsh Locator with Spring servers, Spring [embedded|standalone] Locator with both Gfsh and Spring servers, etc.
Most of the time, users are using Spring on the client and Gfsh to (partially) configure and bootstrap their cluster (of servers). When this is the case, Spring is generally not on the servers' classpath and the "provided, canned" Functions I referred to above are not present and automatically registered. In that case, you must rely on GemFire/Geode's internal Management REST API (something I know a thing or 2 about, ;-) to send the configuration metadata from the client to the server/cluster. This is why the useHttp attribute on the @EnableClusterConfiguration annotation must be set to true.
This is why you saw the Exception...
org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer';
nested exception is org.apache.geode.cache.client.ServerOperationException: remote server on #.#.#.#(Web:9408:loner)###
The function is not registered for function id CreateRegionFunction
The CreateRegionFunction is the canned Function provided by SDG out of the box, but only when Spring is used to both configure and bootstrap the servers in the cluster.
This generally works well for CI/CD environments, and especially our own test infrastructure, since we typically do not have full installations of either Apache Geode or Pivotal GemFire available to test with in those environments. For one, those artifacts must be resolvable from an artifact repository like Maven Central. The Apache Geode (and especially) Pivotal GemFire distributions are not. The JARs are, but the full distro isn't. Anyway...
Hopefully, all of this makes sense up to this point.
I do have a recommendation if I may.
Given your application class definition begins with...
@ClientCacheApplication(name = "Web", locators = @Locator,
    logLevel = "debug", subscriptionEnabled = true)
@EnableClusterDefinedRegions
@EnableClusterConfiguration(useHttp = true)
@EnablePdx
public class MyCache { ... }
I would highly recommend simply using Spring Boot for Apache Geode (and Pivotal GemFire), i.e. SBDG, in place of SDG directly.
Your application class could then be simplified to:
@SpringBootApplication
@EnableClusterAware
@EnableClusterDefinedRegions
public class MyCache { ... }
You can then externalize some of the hard-coded configuration settings using the Spring Boot application.properties file:
spring.application.name=Web
spring.data.gemfire.cache.log-level=debug
spring.data.gemfire.pool.subscription-enabled=true
NOTE: @EnableClusterAware is a much more robust and capable extension of @EnableClusterConfiguration. See additional details here.
Here are a few resources to get you going:
Project Overview
Getting Started Sample Guide
Use Case Driven Guides/Samples
Useful resources in the Appendix TOC.
Detailed information on SBDG provided Auto-configuration.
Detailed information on Declarative Configuration.
Detailed information on Externalized Configuration.
In general, SBDG, which is based on SDG, SSDG and STDG, is the preferred/recommended starting point for all things Spring for Apache Geode and Pivotal GemFire (or now, Pivotal Cloud Cache).
Hope this helps.
I'm using the latest milestone of spring-cloud-sleuth and I can't seem to get traces emitted through OpenTracing. I have a Tracer bean defined and Spring Boot seems to acknowledge that, but no traces are being emitted.
Is there a way to check if spring-cloud-sleuth is aware of the Tracer bean?
Update
I did see the merged documentation, and I have a Tracer instance registered as a bean, as defined below:
@Bean(name = "tracer")
@Primary
public Tracer lightstepTracer() throws MalformedURLException {
    Options opt = lightstepOptionsBuilder.build();
    log.info("Instantiating LightStep JRETracer.");
    return new JRETracer(opt);
}
I'm not explicitly importing the OpenTracing APIs, because the LightStep tracer pulls them in transitively, but I can try doing that.
I've also explicitly enabled OpenTracing support in my application.yml file.
spring:
  sleuth:
    opentracing:
      enabled: true
If you go to the latest snapshot documentation (or milestone) and search for the word OpenTracing, you will find your answer. It's here: https://cloud.spring.io/spring-cloud-sleuth/single/spring-cloud-sleuth.html#_opentracing
Spring Cloud Sleuth is compatible with OpenTracing. If you have OpenTracing on the classpath, we automatically register the OpenTracing Tracer bean. If you wish to disable this, set spring.sleuth.opentracing.enabled to false.
So it's enough to just have OpenTracing on the classpath, and Sleuth will work out of the box.
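If you want to confirm which Tracer bean Sleuth actually wired up, one quick check (a sketch; the runner name is arbitrary, and it goes inside any @Configuration class) is to inject the Tracer and log its concrete class at startup:

@Bean
ApplicationRunner logTracer(io.opentracing.Tracer tracer) {
    // Prints the concrete Tracer implementation that was registered,
    // e.g. your LightStep JRETracer if the @Primary bean won
    return args -> System.out.println("Tracer in use: " + tracer.getClass().getName());
}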
Although there are already quite a few StackOverflow questions, blog entries, etc. on the web, I still cannot figure out a solution to the problem stated below.
Similar to this question (Injecting EJB within JAX-RS resource on JBoss7), I'd like to inject an EJB instance into a JAX-RS class. I tried with JBoss 5, JBoss 7, and WildFly 8. I either get no injection at all (the field is null), or the server does not deploy (as soon as I try to combine all sorts of annotations).
Adding @Stateless to the JAX-RS resource makes the application server know both classes as beans. However, no injection takes place.
Is there a way to inject EJBs into a REST application? What kind of information (in addition to that contained in the question linked to above) could I provide to help?
EDIT: I created a GitHub project showing code that works (with Glassfish 4.0) and does not work (with JBoss 5).
https://github.com/C-Otto/beantest
Commit 4bf2f3d23f49d106a435f068ed9b30701bbedc9d works using Glassfish 4.0.
Commit 50d137674e55e1ceb512fe0029b9555ff7c2ec21 uses Jersey 1.8, which does not work.
Commit 86004b7fb6263d66bda7dd302f2d2a714ff3b939 uses Jersey 2.6, which also does not work.
EDIT 2: Running the code which I tried on JBoss 5 on Glassfish 4.0 gives:
Exception while loading the app: CDI deployment failure: WELD-001408 Unsatisfied dependencies for type [Ref<ContainerRequest>] with qualifiers [@Default] at injection point [[BackedAnnotatedParameter] Parameter 1 of [BackedAnnotatedConstructor] @Inject org.glassfish.jersey.server.internal.routing.UriRoutingContext(Ref<ContainerRequest>, ProcessingProviders)]
org.jboss.weld.exceptions.DeploymentException: WELD-001408 Unsatisfied dependencies for type [Ref<ContainerRequest>] with qualifiers [@Default] at injection point [[BackedAnnotatedParameter] Parameter 1 of [BackedAnnotatedConstructor] @Inject org.glassfish.jersey.server.internal.routing.UriRoutingContext(Ref<ContainerRequest>, ProcessingProviders)]
at org.jboss.weld.bootstrap.Validator.validateInjectionPointForDeploymentProblems(Validator.java:403)
EDIT 3: The crucial information might be that I'd like a solution that works on JBoss 5.
If you don't want to make your JAX-RS resource an EJB too (@Stateless) and then use @EJB or @Resource to inject it, you can always go with a JNDI lookup (I tend to write a "ServiceLocator" class that gets a service via its class; a sketch follows after the sample code below).
A nice resource to read about the topic:
https://docs.jboss.org/author/display/AS71/Remote+EJB+invocations+via+JNDI+-+EJB+client+API+or+remote-naming+project
A sample code:
HelloLocalHome helloHome;
HelloLocal hello;
try {
    // 1. Retrieve the initial context for JNDI.
    //    No properties are needed when running locally.
    Context context = new InitialContext();
    // 2. Retrieve the home interface using a JNDI lookup via the
    //    java:comp/env bean environment variable specified in web.xml.
    //    Since the client is local, cast it to the correct object type.
    helloHome = (HelloLocalHome) context.lookup("java:comp/env/ejb/HelloBean");
    // 3. Create the local Hello bean instance and return the reference.
    hello = (HelloLocal) helloHome.create();
} catch (NamingException e) {
    // Don't swallow lookup failures silently
    throw new IllegalStateException("JNDI lookup failed", e);
} catch (CreateException e) {
    throw new IllegalStateException("Could not create the Hello bean", e);
}
This is not "injecting" per se, but you don't use "new" either, and you let the application server give you an instance which is managed.
I hope this was useful and I'm not telling you something you already know!
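For completeness, the "ServiceLocator" class mentioned above could look roughly like this (a sketch, assuming the beans are registered under known JNDI names; the class is not part of any framework):

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public final class ServiceLocator {

    private ServiceLocator() {}

    // Looks up an object by its JNDI name and casts it to the requested type
    public static <T> T lookup(Class<T> type, String jndiName) {
        try {
            Context context = new InitialContext();
            return type.cast(context.lookup(jndiName));
        } catch (NamingException e) {
            throw new IllegalStateException("JNDI lookup failed for " + jndiName, e);
        }
    }
}

Usage would then be something like HelloLocal hello = ServiceLocator.lookup(HelloLocal.class, "java:comp/env/ejb/HelloBean");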
EDIT:
This is an excellent example: https://docs.jboss.org/author/display/AS72/EJB+invocations+from+a+remote+client+using+JNDI
EDIT 2:
As you stated in your comment, you'd like to inject it via annotations.
If the JNDI lookup is currently working for you without problems, and if you're using Java EE 6+ (which I'm guessing you are), you can do the following:
#EJB(lookup = "jndi-lookup-string-here")
private RemoteInterface bean;
I'm using Drools 5.4.0.Final.
For logging, I'm using logback in my application.
I tried to update my logback.xml with
<logger name="org.drools" level="debug"/>
But I see nothing in my logs concerning Drools.
I would expect to see some lines of logs concerning the Drools initialization.
You can pass the LOGGER to the StatefulKnowledgeSession:
private static final Logger LOGGER = LoggerFactory.getLogger(Example.class);
private transient StatefulKnowledgeSession ksession;
.
.
.
ksession.setGlobal("logger", LOGGER);
and in your DRL file you have to declare global org.slf4j.Logger logger; then you can use the logger in your rules, as in the sketch below.
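A minimal DRL sketch of that (the rule name and condition are placeholders):

global org.slf4j.Logger logger

rule "Log when fired"
when
    // your conditions here
then
    logger.info("Rule fired");
end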
Drools 5.4.0.Final does not support any logging framework natively. The next version, Drools 5.5.0.Beta1, will. How to use it will also be documented in the manual. See this issue for more info.
Drools 5.5.0.Beta1 will log to the slf4j-api, so you can use logback, log4j, jdk-logging, slf4j-simple, ... You still need to explicitly call KnowledgeRuntimeLoggerFactory.newConsoleLogger() and add that to the event listeners.
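As a sketch of what that explicit call looks like in Drools 5.x (assuming an existing StatefulKnowledgeSession named ksession):

import org.drools.logger.KnowledgeRuntimeLogger;
import org.drools.logger.KnowledgeRuntimeLoggerFactory;

// Attaches a console audit logger to the session's event listeners
KnowledgeRuntimeLogger runtimeLogger = KnowledgeRuntimeLoggerFactory.newConsoleLogger(ksession);
try {
    ksession.fireAllRules();
} finally {
    runtimeLogger.close(); // release the logger when done
}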