What are the best practices for exception handling in Spring Cloud? - spring-cloud

I am new to Spring Cloud and I would like to know the best practices for exception handling. I already use Sleuth and Logback to send exception logs to ELK, but I don't know how to handle exceptions in the individual microservices. Here's what I'm envisioning.
Create a GlobalExceptionHandler in each microservice to catch exceptions and send them to ELK. But this would result in duplicate exceptions being logged: when service2 throws an exception, service1 also catches it, and both service1 and service2 log the same exception.
So I need to check whether the exception has already been logged and ignore it if it has.
I want to make sure my idea is feasible, or hear how you handle this in practice.
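For what it's worth, here is a minimal, hypothetical sketch of the kind of per-service handler described above. The X-Error-Logged marker header is an invented convention, not a Spring Cloud feature: the service that logs the exception first sets it on the error response, and a caller that sees it on a downstream failure can skip logging that exception again.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;

@RestControllerAdvice
public class GlobalExceptionHandler {

    private static final Logger log = LoggerFactory.getLogger(GlobalExceptionHandler.class);

    @ExceptionHandler(Exception.class)
    public ResponseEntity<String> handle(Exception ex) {
        // Sleuth has already put the trace id into the MDC, so this single log line
        // (shipped to ELK by Logback) can be correlated with logs from other services.
        log.error("Unhandled exception", ex);
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                .header("X-Error-Logged", "true") // invented marker header, see note above
                .body("Internal error");
    }
}

A caller could then check for this marker (or for a flag in the error body) before logging an exception caused by a downstream call, which is one way to implement the deduplication described above.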

Related

Log all exceptions in service fabric application

I have a bunch of backend services in Azure Service Fabric, and I want to log any uncaught exceptions to App Insights, along with all my other logs. Is there any way in an Azure Service Fabric app to catch all uncaught exceptions and log them before re-throwing them?
You're using .NET, so you have access to the standard AppDomain way of handling all uncaught exceptions. Use this event.
Add the following lines to your Program.cs, with your logging code inside:
AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
{
    // log the exception here, e.g. e.ExceptionObject as Exception
};
For sending application/service telemetry to Application Insights, I strongly recommend you have a look at App Insights Service Fabric. It works great for:
Sending error and exception info
Populating the application map with all your services and their dependencies (including database)
Reporting on app performance metrics
Tracing service call dependencies end-to-end
Integrating with native as well as non-native SF applications
If you're also interested in monitoring the overall health of your cluster (e.g. CPU/memory and when nodes go up or down), have a look at EventFlow or this GitHub project

JBoss Fuse v6.2 - Tracing

What is the way to do message tracing for each request made to the JBoss Fuse 6.2 server? In my case most of the entry points are CXF REST service with the processing delegated to Camel routes in some cases. I would like to do end-to-end tracing with same message id that can correlate the request processing.
In my project there was a similar requirement: the customer wanted to see the full end-to-end log by grepping the system logs for a transaction id.
I used CXF interceptors and the MDC logging capability for this, as below:
Create common CXF request and response interceptors and add them to all your Camel CXF server/client configurations (a sketch follows after these steps).
In your request interceptor, extract the transaction id from the request (or generate it yourself), then put it into the MDC map. MDC is a thread-local map that log4j, slf4j, etc. use.
Log the request; it will carry your transaction id as a prefix thanks to MDC.
Don't forget to add your MDC key to the logging pattern configuration.
All logs you print until the end of the operation will carry this transaction id.
If you always route with direct-vm or direct this won't be a problem. However, as you may know, with seda, multi-threaded processing, etc. the execution is handled by other threads; since MDC is a thread-local variable, you need to transfer its contents to those threads yourself.
In your response interceptor, log the response message, then clear the MDC values.
If you're using CXF as a client, use the same interceptor approach so client requests/responses are also printed with the transaction id.
See CXF-RS and MDC links as entry points
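To make the steps above concrete, here is a minimal, hypothetical sketch of such a request interceptor. The X-Transaction-Id header name and the txId MDC key are assumptions, not CXF or Fuse conventions:

import java.util.List;
import java.util.Map;
import java.util.UUID;

import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;
import org.slf4j.MDC;

public class TransactionIdInInterceptor extends AbstractPhaseInterceptor<Message> {

    public static final String MDC_KEY = "txId";              // must match the %X{txId} token in your log pattern
    private static final String HEADER = "X-Transaction-Id";  // assumed header name

    public TransactionIdInInterceptor() {
        super(Phase.RECEIVE); // run early, before the service is invoked
    }

    @Override
    @SuppressWarnings("unchecked")
    public void handleMessage(Message message) {
        Map<String, List<String>> headers =
                (Map<String, List<String>>) message.get(Message.PROTOCOL_HEADERS);
        String txId = null;
        if (headers != null && headers.get(HEADER) != null && !headers.get(HEADER).isEmpty()) {
            txId = headers.get(HEADER).get(0);   // id supplied by the caller
        }
        if (txId == null || txId.isEmpty()) {
            txId = UUID.randomUUID().toString(); // no id supplied, generate one
        }
        MDC.put(MDC_KEY, txId);                  // every log line on this thread now carries the id
    }
}

The matching response interceptor would log the response and then call MDC.clear(), and the logging configuration needs the key in its pattern, e.g. %X{txId} in a log4j or logback pattern.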

enable annotations shiro spring

I followed this subject and then created "working" code.
However, an unauthenticated person can still call a method annotated with:
@RequiresAuthentication
No exception is thrown.
(An important detail: I'm not creating a web application. It is a simple console program.)

How to use Apache-Commons DBCP with EclipseLink JPA and Tomcat 7.x

I've been working on a web application, deployed on Tomcat 7, which uses EclipseLink JPA to handle the persistence layer.
Everything works fine in a test environment, but we're having serious issues in production due to a firewall killing inactive connections. Basically, if a connection is inactive for a while, a firewall that sits between the Tomcat server and the DB server kills it, leaving "stale" connections in the pool.
The next time such a connection is used, the code never returns until it gets a "Connection timed out" SQLException (full ex.getMessage() below).
[EL Fine]: 2012-07-13 18:24:39.479--ServerSession(309463268)--Connection(69352859)--Thread(Thread[http-bio-8080-exec-5,5,main])--MY QUERY REPLACED TO POST IT TO SO
[EL Config]: 2012-07-13 18:40:10.229--ServerSession(309463268)--Connection(69352859)--Thread(Thread[http-bio-8080-exec-5,5,main])--disconnect
[EL Info]: 2012-07-13 18:40:10.23--UnitOfWork(1062365884)--Thread(Thread[http-bio-8080-exec-5,5,main])--Communication failure detected when attempting to perform read query outside of a transaction. Attempting to retry query. Error was: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.3.0.v20110604-r9504): org.eclipse.persistence.exceptions.DatabaseException Internal Exception: java.sql.SQLException: Eccezione IO: Connection timed out
I already tried several configurations in persistence.xml, but since I have no access to the firewall configuration I had no luck with those methods. I also tried to use setCheckConnections():
ConnectionPool cp = ((JpaEntityManager)em).getServerSession().getDefaultConnectionPool();
cp.setCheckConnections();
cp.releaseConnection(cp.acquireConnection());
I managed to solve the issue in a test script using testOnBorrow, testWhileIdle and other features that are available in Apache Commons DBCP. I'd like to know how to override EclipseLink's internal connection pool with a custom one, so that I can provide an already configured pool based on DBCP rather than just configuring the internal one via persistence.xml.
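For reference, a minimal sketch of the kind of DBCP pool this refers to (Commons DBCP 1.x API; the driver, URL, credentials and validation query below are placeholders):

import javax.sql.DataSource;
import org.apache.commons.dbcp.BasicDataSource;

public class DbcpPoolFactory {

    // All connection settings below are placeholders; adjust them for your database.
    public static DataSource createPool() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("oracle.jdbc.OracleDriver");  // placeholder driver
        ds.setUrl("jdbc:oracle:thin:@dbhost:1521:SID");     // placeholder URL
        ds.setUsername("user");
        ds.setPassword("secret");
        ds.setValidationQuery("SELECT 1 FROM DUAL");        // cheap query used to validate connections
        ds.setTestOnBorrow(true);      // validate a connection before handing it out
        ds.setTestWhileIdle(true);     // validate idle connections in the background
        ds.setTimeBetweenEvictionRunsMillis(60000);   // run the idle evictor every minute
        ds.setMinEvictableIdleTimeMillis(5 * 60000);  // drop connections idle longer than this
        return ds;
    }
}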
I know I should provide a SessionCustomizer; I'm uncertain which pattern is the correct one to use. Basically I would like to preserve the performance of DBCP in a JPA-like way.
I'm deploying on Tomcat 7. I know that if I switched to GlassFish I wouldn't have this problem, but for consistency with the other webapps on the same server I'd prefer to stay on Tomcat.
What you want is definitely possible, but you might be hitting the limits of the "do it yourself" approach.
This is one of the more difficult things to explain, but there are effectively two ways to configure your EntityManagerFactory. The "do it yourself" approach and the "container" approach.
When you call Persistence.createEntityManagerFactory it eventually delegates to this method of the PersistenceProvider interface implemented by EclipseLink:
EntityManagerFactory createEntityManagerFactory(String emName, Map map)
The deal here is that EclipseLink will then take it upon itself to do all the work, including its own connection creation and handling. This is the "do it yourself" approach. I don't know EclipseLink well enough to know if there is a way to feed it connections using this approach. After two days on Stack Overflow it doesn't seem like anyone else has that info either.
So here is why this "works in GF". When you let the container create the EntityManagerFactory for you by having it injected or looking it up, the container uses a different method on the PersistenceProvider interface implemented by EclipseLink:
EntityManagerFactory createContainerEntityManagerFactory(PersistenceUnitInfo info, Map map)
The long and short of it is that this PersistenceUnitInfo is an interface that the container implements and has these two very key methods on it:
public DataSource getJtaDataSource();
public DataSource getNonJtaDataSource();
With this mode EclipseLink will not try to do its own connection handling and will simply call these methods to get the DataSource from the container. This is really what you need.
There are two possible approaches you could take to solving this:
You could attempt to instantiate the EclipseLink PersistenceProvider implementation yourself and call the createContainerEntityManagerFactory method, passing in your own implementation of the PersistenceUnitInfo interface, and feed the DBCP-configured DataSource instances into EclipseLink that way (a rough sketch follows after these two options). You would need to parse the persistence.xml file yourself and feed that data in through the PersistenceUnitInfo. EclipseLink might also expect a TransactionManager, in which case you'll be stuck unless you hunt down a TransactionManager you can add to Tomcat.
You could use the Java EE 6 certified version of Tomcat, TomEE. DataSources are configured in tomee.xml, created using DBCP with full support for all the options you need, and passed to the PersistenceProvider via the described createContainerEntityManagerFactory call. You then get the EntityManagerFactory injected via @PersistenceUnit or look it up.
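To illustrate the first approach above, here is a rough, hypothetical sketch of a hand-rolled PersistenceUnitInfo that hands a DBCP DataSource to EclipseLink. The unit name, entity class and most defaults are placeholders; a real implementation would mirror your persistence.xml, and no class-file weaving is assumed:

import java.net.URL;
import java.util.Collections;
import java.util.List;
import java.util.Properties;

import javax.persistence.SharedCacheMode;
import javax.persistence.ValidationMode;
import javax.persistence.spi.ClassTransformer;
import javax.persistence.spi.PersistenceUnitInfo;
import javax.persistence.spi.PersistenceUnitTransactionType;
import javax.sql.DataSource;

public class DbcpPersistenceUnitInfo implements PersistenceUnitInfo {

    private final DataSource dataSource;

    public DbcpPersistenceUnitInfo(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public String getPersistenceUnitName() { return "my-unit"; }   // placeholder unit name
    public String getPersistenceProviderClassName() { return "org.eclipse.persistence.jpa.PersistenceProvider"; }
    public PersistenceUnitTransactionType getTransactionType() { return PersistenceUnitTransactionType.RESOURCE_LOCAL; } // no JTA on plain Tomcat
    public DataSource getJtaDataSource() { return null; }
    public DataSource getNonJtaDataSource() { return dataSource; } // the DBCP pool goes here
    public List<String> getMappingFileNames() { return Collections.emptyList(); }
    public List<URL> getJarFileUrls() { return Collections.emptyList(); }
    public URL getPersistenceUnitRootUrl() { return null; }
    public List<String> getManagedClassNames() { return Collections.singletonList("com.example.MyEntity"); } // placeholder entity
    public boolean excludeUnlistedClasses() { return true; }
    public SharedCacheMode getSharedCacheMode() { return SharedCacheMode.UNSPECIFIED; }
    public ValidationMode getValidationMode() { return ValidationMode.NONE; }
    public Properties getProperties() { return new Properties(); }
    public String getPersistenceXMLSchemaVersion() { return "2.0"; }
    public ClassLoader getClassLoader() { return Thread.currentThread().getContextClassLoader(); }
    public void addTransformer(ClassTransformer transformer) { /* no weaving in this sketch */ }
    public ClassLoader getNewTempClassLoader() { return getClassLoader(); }
}

You would then bootstrap EclipseLink yourself instead of going through Persistence.createEntityManagerFactory, along these lines:

// myDbcpDataSource is the configured DBCP pool (e.g. the BasicDataSource sketched in the question)
PersistenceProvider provider = new org.eclipse.persistence.jpa.PersistenceProvider();
EntityManagerFactory emf = provider.createContainerEntityManagerFactory(
        new DbcpPersistenceUnitInfo(myDbcpDataSource), new HashMap<String, Object>());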
If you do attempt to use TomEE, make sure your persistence.xml is updated to explicitly set transaction-type="RESOURCE_LOCAL", because the default is JTA. Even though it's non-compliant to use JTA with the Persistence.createEntityManagerFactory approach, no persistence provider will complain and let you know you're doing something wrong; they all treat it as RESOURCE_LOCAL, ignoring the schema. So when you go to port your app to an actual certified server, it blows up.
Another note on TomEE is that in the current release, you'll have to put your EclipseLink libs in the <tomcat>/lib/ directory. This is fixed in trunk, just not released yet.
I'm not sure how useful these slides will be without the explanation that goes along with them, but the second part of this presentation is a deep dive into how container-managed EntityManagers work, specifically with regard to connection handling and transactions. You can ignore the transaction part, as you aren't using transactions and already have an app in production you're not likely to change dramatically, but it might be interesting for future development.
Best of luck!

A system exception occurred during an invocation on EJB AuthenticationRequestFilter method public com.sun.jersey.spi.container.ContainerRequest

I am using RESTful APIs in my application via Jersey 1.6.
There is also a database involved.
I have created two .war files in my application. They are deployed on GlassFish Server 3.0.1 with no issues (no errors/exceptions).
They make some REST calls to each other for transactions (a transaction is sent in a proper XML format).
When I try to make a transaction it gives me exceptions like
A system exception occurred during an invocation on EJB AuthenticationRequestFilter method public com.sun.jersey.spi.container.ContainerRequest com.mypack1.mypack2.resources.filters.AuthenticationRequestFilter.filter(com.sun.jersey.spi.container.ContainerRequest)
javax.ejb.EJBException
at com.sun.ejb.containers.BaseContainer.processSystemException(BaseContainer.java:5119)
and at the end it says
[#|2011-04-26T12:06:32.356+0530|WARNING|glassfish3.0.1|org.apache.http.impl.client.DefaultHttpClient|_ThreadID=27;_ThreadName=Thread-1;|Authentication error: Unable to respond to any of these challenges: {}|#]
I am sure that the XML I send is correct according to the database entries (including authentication for the particular URL).
Is there anything like "container security"? What might be going wrong in this case?
Thanks in advance.