Hello REST API lovers!
I created a basic DropWizard REST application.
I would like to view metrics, but ONLY MY CUSTOM ONES and not DropWizard's own.
How can I disable DropWizard's health checks and metrics and only view mine (the custom ones)?
I hope that is clear.
If you are only concerned about the "view" part of metrics/healthchecks, you can set a filter that is applied when the data is returned. This can be done at application startup:
environment.getAdminContext().setAttribute(MetricsServlet.METRIC_FILTER, new MetricFilter() {
    @Override
    public boolean matches(final String name, final Metric metric) {
        return true; // your logic here
    }
});
environment.getAdminContext().setAttribute(HealthCheckServlet.HEALTH_CHECK_FILTER, new HealthCheckFilter() {
    @Override
    public boolean matches(final String name, final HealthCheck healthCheck) {
        return true; // your logic here
    }
});
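For example, if you register all of your custom metrics under a common prefix, a filter like the following (just a sketch; the "custom." prefix and the meter name are assumptions) would hide everything DropWizard registers on its own:
// register your own metrics under a prefix you control
environment.metrics().meter("custom.requests");

environment.getAdminContext().setAttribute(MetricsServlet.METRIC_FILTER, new MetricFilter() {
    @Override
    public boolean matches(final String name, final Metric metric) {
        // keep only the metrics you registered yourself
        return name.startsWith("custom.");
    }
});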
If you don't want the metrics/healthchecks at all, you can remove them directly (both calls take the registered name):
environment.healthChecks().unregister("nameOfHealthCheck");
environment.metrics().remove("nameOfMetric");
I would like to implement a custom endpoint class to check Zookeeper health:
http://localhost:8080/actuator/health/zookeeper
PROBLEM: Do I extend AbstractHealthIndicator or implement the HealthIndicator interface?
HealthIndicator interface
public class CustomHealth implements HealthIndicator {

    @Override
    public Health health() {
        int errorCode = check(); // perform some specific health check
        if (errorCode != 0) {
            return Health.down()
                    .withDetail("Error Code", errorCode).build();
        }
        return Health.up().build();
    }

    public int check() {
        // Our logic to check Zookeeper health
        return 0;
    }
}
AbstractHealthIndicator class
public class CustomHealth extends AbstractHealthIndicator {

    @Override
    protected void doHealthCheck(Health.Builder builder) throws Exception {
        // Our logic to check Zookeeper health
    }
}
I'm confused about which approach to use. I believe the logic to check Zookeeper health is simply to declare a CuratorFramework object, call curator.getState(), and build the result from there, and for the endpoint, attach @RestControllerEndpoint to declare the path. Please help!
It is up to you which one to choose; the difference is that AbstractHealthIndicator:
Provides you with the Health.Builder instance, so you don't need to create one manually
Wraps the doHealthCheck(builder) call in a try-catch that returns status DOWN if your health check failed with an exception.
So in general AbstractHealthIndicator is more convenient to use, as you can skip the error handling. Implement the raw HealthIndicator interface when you need to provide custom status details on exception.
For an example of a Zookeeper health indicator, please refer to the existing one provided with spring-cloud-zookeeper: https://github.com/spring-cloud/spring-cloud-zookeeper/blob/master/spring-cloud-zookeeper-core/src/main/java/org/springframework/cloud/zookeeper/ZookeeperHealthIndicator.java
Regarding the endpoint /actuator/health/zookeeper, I suggest you use a new feature introduced in Spring Boot 2.2.0 called Health Indicator Groups: https://spring.io/blog/2019/10/16/spring-boot-2-2-0#health-indicator-groups
In short, if you use component scanning and named your custom health indicator MyZookeeperHealthIndicator, then add the following property to register it under a custom zookeeper group:
management.endpoint.health.group.zookeeper.include=myZookeeper
After that, the custom health details will be displayed under the myZookeeper component at /actuator/health/zookeeper.
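For illustration, a minimal sketch of what such an indicator could look like when built on AbstractHealthIndicator and Curator (the bean name, the injected CuratorFramework bean and the reported detail are assumptions here, not the spring-cloud-zookeeper implementation):

@Component("myZookeeper")
public class MyZookeeperHealthIndicator extends AbstractHealthIndicator {

    private final CuratorFramework curator;

    public MyZookeeperHealthIndicator(CuratorFramework curator) {
        this.curator = curator;
    }

    @Override
    protected void doHealthCheck(Health.Builder builder) throws Exception {
        // report UP only if the Curator client is started and currently connected
        CuratorFrameworkState state = curator.getState();
        if (state == CuratorFrameworkState.STARTED && curator.getZookeeperClient().isConnected()) {
            builder.up().withDetail("state", state);
        } else {
            builder.down().withDetail("state", state);
        }
    }
}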
Check the following docs for more information:
Writing Custom HealthIndicators https://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-features.html#writing-custom-healthindicators
Health Groups https://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-features.html#health-groups
I currently have a Spring Boot app where I can access the health check via actuator.
This app is dependent on another Spring Boot app being available/up, so my question is:
By overriding the health check in the first app, is there an elegant way to do a health check on the second app?
In essence, I just want to use one call and get health check info for both applications.
You can develop your own health indicator by implementing HealthIndicator so that it checks the health of the backend app. In essence that will not be too difficult, because you can just use the RestTemplate you get out of the box, e.g.:
public class DownstreamHealthIndicator implements HealthIndicator {

    private RestTemplate restTemplate;
    private String downStreamUrl;

    @Autowired
    public DownstreamHealthIndicator(RestTemplate restTemplate, String downStreamUrl) {
        this.restTemplate = restTemplate;
        this.downStreamUrl = downStreamUrl;
    }

    @Override
    public Health health() {
        try {
            JsonNode resp = restTemplate.getForObject(downStreamUrl + "/health", JsonNode.class);
            if (resp.get("status").asText().equalsIgnoreCase("UP")) {
                return Health.up().build();
            }
        } catch (Exception ex) {
            return Health.down(ex).build();
        }
        return Health.down().build();
    }
}
If you have a controller in App A, you can introduce a GET endpoint in that controller and point it at App B's health check endpoint. That way, you will have an API endpoint available in App A to check App B's health as well.
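A minimal sketch of that idea (the path, the controller name and App B's URL are assumptions):

@RestController
public class DownstreamHealthController {

    private final RestTemplate restTemplate;

    public DownstreamHealthController(RestTemplateBuilder builder) {
        this.restTemplate = builder.build();
    }

    @GetMapping("/downstream-health")
    public ResponseEntity<String> downstreamHealth() {
        // simply forward App B's health response; adjust the URL to your environment
        return restTemplate.getForEntity("http://app-b:8080/actuator/health", String.class);
    }
}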
I have a use case where I need to fetch the IDs of my entire Solr collection. For that, with SolrJ, I use the streaming API like this:
CloudSolrServer server = new CloudSolrServer("zkHost1:2181,zkHost2:2181,zkHost3:2181");
SolrQuery query = new SolrQuery("*:*");
server.queryAndStreamResponse(query, handler);
where handler is a class that implements StreamingResponseCallback, omitted from my code for brevity.
Now, the Spring Data repositories abstraction gives me the ability to search by pages or by cursors, but I can't seem to find a way to handle the streaming use case.
Is there a workaround?
SolrTemplate allows you to access the underlying SolrClient in a callback style, so you could use that to work around the current limitation.
The result conversion using the MappingSolrConverter available via the SolrTemplate is broken at the moment (I need to check why), but you get the idea of how to do it.
solrTemplate.execute(new SolrCallback<Void>() {

    @Override
    public Void doInSolr(SolrClient solrClient) throws SolrServerException, IOException {
        SolrQuery sq = new SolrQuery("*:*");
        solrClient.queryAndStreamResponse("collection1", sq, new StreamingResponseCallback() {

            @Override
            public void streamSolrDocument(SolrDocument doc) {
                // the bean conversion fails atm
                // ExampleSolrBean bean = solrTemplate.getConverter().read(ExampleSolrBean.class, doc);
                System.out.println(doc);
            }

            @Override
            public void streamDocListInfo(long numFound, long start, Float maxScore) {
                // do something useful
            }
        });
        return null;
    }
});
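For the "fetch all ids" use case from the question, a small variation of the same callback (the id field name and the row count are assumptions) could request only the id field and collect the values while streaming:

final List<String> ids = new ArrayList<>();
solrTemplate.execute(new SolrCallback<Void>() {

    @Override
    public Void doInSolr(SolrClient solrClient) throws SolrServerException, IOException {
        SolrQuery sq = new SolrQuery("*:*");
        sq.setFields("id");             // stream back only the id field
        sq.setRows(Integer.MAX_VALUE);  // otherwise only the default number of rows is returned
        solrClient.queryAndStreamResponse("collection1", sq, new StreamingResponseCallback() {

            @Override
            public void streamSolrDocument(SolrDocument doc) {
                ids.add(String.valueOf(doc.getFieldValue("id")));
            }

            @Override
            public void streamDocListInfo(long numFound, long start, Float maxScore) {
                // nothing to do here
            }
        });
        return null;
    }
});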
I'm having problems with the NetBeans Nodes API.
I have this line of code:
Node n = (new MyNode(X)).getChildren().getNodeAt(Y);
The call to new MyNode(X) with the same X always initializes a MyNode the same way, independent of the context.
When I place it by itself (say, in a menu action), it successfully gets the Yth child, but if I put it in an event where other Node/Children stuff happens, it returns null.
MyNode's Children implementation is a trivial subclass of Children.Keys, which is approximately:
// Node
import org.openide.nodes.AbstractNode;

class MyNode extends AbstractNode {
    MyNode(MyKey key) {
        super(new MyNodeChildren(key));
    }
}

// Children
import java.util.Collections;
import org.openide.nodes.Children;
import org.openide.nodes.Node;

public class MyNodeChildren extends Children.Keys<MyKey> {

    MyKey parentKey;

    MyNodeChildren(MyKey parentKey) {
        super(true); // use lazy behavior
        this.parentKey = parentKey;
    }

    @Override
    protected Node[] createNodes(MyKey key) {
        return new Node[] {new MyNode(key)};
    }

    @Override
    protected void addNotify() {
        setKeys(this.parentKey.getChildrenKeys());
    }

    @Override
    protected void removeNotify() {
        setKeys(Collections.<MyKey>emptySet());
    }
}

// MyKey is trivial.
I assume this has something to do with the lazy behavior of Children.Keys. I have the sources for the API, and I've tried stepping through it, but they're so confusing that I haven't figured anything out yet.
NetBeans IDE 7.0.1 (Build 201107282000) with up-to-date plugins.
Edit: More details
The line with the weird behavior is inside a handler for an ExplorerManager selected-nodes property change. The weird thing is that it still doesn't work when the MyNode instance isn't in the hierarchy that the ExplorerManager is using (it's not even the same class as the nodes in the ExplorerManager), and isn't being used for anything else.
Accessing the nodes instead of the underlying model is actually necessary for my use case (I need to do stuff with the PropertySets); the MyNode example is just a simpler case that still has the problem.
It is recommended to use org.openide.nodes.ChildFactory to create child nodes unless you have a specific need to use one of the Children APIs; for the common cases ChildFactory is sufficient.
One thing to keep in mind when using the Nodes API is that it is only a presentation layer that wraps your model and, used in conjunction with the Explorer API, makes it available to the various view components in the NetBeans Platform, such as org.openide.explorer.view.BeanTreeView.
Using a model called MyModel which may look something like:
public class MyModel {

    private String title;
    private List<MyChild> children;

    public MyModel(String title, List<MyChild> children) {
        this.title = title;
        this.children = children;
    }

    public String getTitle() {
        return title;
    }

    public List<MyChild> getChildren() {
        return Collections.unmodifiableList(children);
    }
}
You can create a ChildFactory<MyModel> that will be responsible for creating your nodes:
public class MyChildFactory extends ChildFactory<MyModel> {

    private List<MyModel> myModels;

    public MyChildFactory(List<MyModel> myModels) {
        this.myModels = myModels;
    }

    protected boolean createKeys(List<MyModel> toPopulate) {
        return toPopulate.addAll(myModels);
    }

    protected Node createNodeForKey(MyModel myModel) {
        return new MyNode(myModel);
    }

    protected void removeNotify() {
        this.myModels = null;
    }
}
Then, implementing MyNode which is the presentation layer and wraps MyModel:
public class MyNode extends AbstractNode {

    public MyNode(MyModel myModel) {
        this(myModel, new InstanceContent());
    }

    private MyNode(MyModel myModel, InstanceContent content) {
        super(Children.create(
                new MyChildrenChildFactory(myModel.getChildren()), true),
                new AbstractLookup(content)); // add a Lookup
        // add myModel to the lookup so you can retrieve it later
        content.add(myModel);
        // set the name used in the presentation
        setName(myModel.getTitle());
        // set the icon used in the presentation
        setIconBaseWithExtension("com/my/resources/icon.png");
    }
}
And now the MyChildrenChildFactory, which is very similar to MyChildFactory except that it takes a List<MyChild> and in turn creates MyChildNode:
public class MyChildrenChildFactory extends ChildFactory<MyChild> {

    private List<MyChild> myChildren;

    public MyChildrenChildFactory(List<MyChild> myChildren) {
        this.myChildren = myChildren;
    }

    protected boolean createKeys(List<MyChild> toPopulate) {
        return toPopulate.addAll(myChildren);
    }

    protected Node createNodeForKey(MyChild myChild) {
        return new MyChildNode(myChild);
    }

    protected void removeNotify() {
        this.myChildren = null;
    }
}
Then an implementation of MyChildNode which is very similar to MyNode:
public class MyChildNode extends AbstractNode {

    public MyChildNode(MyChild myChild) {
        // no children and another way to add a Lookup
        super(Children.LEAF, Lookups.singleton(myChild));
        // set the name used in the presentation
        setName(myChild.getTitle());
        // set the icon used in the presentation
        setIconBaseWithExtension("com/my/resources/child_icon.png");
    }
}
And we will need the children's model, MyChild, which is very similar to MyModel:
public class MyChild {

    private String title;

    public MyChild(String title) {
        this.title = title;
    }

    public String getTitle() {
        return title;
    }
}
Finally, to put it all to use, for instance with a BeanTreeView which would reside in a TopComponent that implements org.openide.explorer.ExplorerManager.Provider:
// somewhere in your TopComponent's initialization code:
List<MyModel> myModels = ...
// defined as a property in your TC
explorerManager = new ExplorerManager();
// this is the important bit and we're using true
// to tell it to create the children asynchronously
Children children = Children.create(new MyChildFactory(myModels), true);
explorerManager.setRootContext(new AbstractNode(children));
Notice that you don't need to touch the BeanTreeView; in fact it can be any view component that is included in the platform. This is the recommended way to create nodes and, as I've stated, nodes are a presentation layer to be used in the various components that are included in the platform.
If you then need to get a child, you can use the ExplorerManager, which you can retrieve from the TopComponent via ExplorerManager.Provider.getExplorerManager() (available because your TopComponent implements ExplorerManager.Provider); this is in fact how a view component itself gets the nodes:
ExplorerManager explorerManager = ...
// the AbstractNode from above
Node rootContext = explorerManager.getRootContext();
// the MyNode(s) from above
Node[] nodes = rootContext.getChildren().getNodes(true);
// looking up the MyModel that we added to the lookup in the MyNode
MyModel myModel = nodes[0].getLookup().lookup(MyModel.class);
However, you must be aware that calling Children.getNodes(true) to get your nodes will cause all of your nodes and their children to be created (they weren't created yet, because we told the factory to create the children asynchronously). This is not the recommended way to access the data; instead you should keep a reference to the List<MyModel> and use that if at all possible. From the documentation for Children.getNodes(boolean):
...in general if you are trying to get useful data by calling this method, you are probably doing something wrong. Usually you should be asking some underlying model for information, not the nodes for children.
Again, you must remember that the Nodes API is a presentation layer and is used as an adapter between your model and your views.
Where this becomes a powerful technique is when using the same ChildFactory in different and diverse views. You can reuse the above code in many TopComponents without any modifications. You can also use a FilterNode if you need to change only a part of the presentation of a node without having to touch the original node.
Learning the Nodes API is one of the more challenging aspects of learning the NetBeans platform API as you have undoubtedly discovered. Once you have some mastery of this API you will be able to take advantage of much more of the platforms built in capabilities.
Please see the following resources for more information on the Nodes API:
NetBeans Nodes API Tutorial
Great introduction to the Nodes API by Antonio Vieiro
Part 5: Nodes API and Explorer & Property Sheet API by Geertjan Wielenga
JavaDocs for the Nodes API
Timon Veenstra on the NetBeans Platform Developers mailing list solved this for me.
Actions on the explorerManager are guarded to ensure consistency. A
node selection listener on an explorer manager for example cannot
manipulate the same explorer manager while handling the selection
changed event because that would require a read to write upgrade. The
change will be vetoed and die a silent death.
Are you adding the MyNode root node to the explorer manager on
initialization, or somewhere else in a listener?
My problem line is in an ExplorerManager selection change listener. I guess the Children.MUTEX lock is getting set by ExplorerManager and preventing the Children.Keys instance from populating its Nodes...?
Anyway, I moved my Node access into an EventQueue.invokeLater(...), so it executes after the selection-changed event finishes, and that fixed it.
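For reference, a minimal sketch of that workaround (X and Y are the placeholders from the snippet at the top of the question):

// defer the node access until the selection-changed event has finished,
// so it no longer runs inside the guarded ExplorerManager section
EventQueue.invokeLater(new Runnable() {
    @Override
    public void run() {
        Node n = new MyNode(X).getChildren().getNodeAt(Y);
        // ... work with n here
    }
});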
I've been using IoC containers for quite some time, but today I found a "pattern" appearing in our code over and over again. To give you some background, I am now working on a web application basically used for data analysis. There is a set of features that requires the user to pick what we call a QueryTypeContext at the very beginning. Once this query type is chosen, other steps may be taken, but they are all performed in this specific QueryTypeContext. In the GUI, picking the QueryTypeContext is represented as opening a new tab with other controls.
When the user is working with a given QueryTypeContext, all AJAX calls to the server include a QueryTypeId that identifies the user's choice and is used to build the QueryTypeContext on the server, which is then used for various data retrieval and manipulation.
What I've found is that many of our controllers (we use ASP.NET MVC) that are constructed by the IoC container have one thing in common. There is an action method that looks somewhat like this:
public class AttributeController : Controller
{
    private readonly IUsefulService _usefulService;

    public AttributeController(IUsefulService usefulService)
    {
        _usefulService = usefulService;
    }

    public ActionResult GetAttributes(QueryTypeContext context)
    {
        var dataDto = _usefulService.Manipulate(context, currentUser);
        return Json(dataDto);
    }

    ...
}
In order to bind the QueryTypeContext to the action argument, we use a custom model binder that pulls some information from the database. Once the service gets the QueryTypeContext as an argument, it passes it (or its properties) down to its collaborators as method arguments, for instance to the data access layer. And so there is a factory class that looks like this:
public interface IDateValueFactory
{
    DateValue CurrentYear(QueryTypeContext context);
    DateValue RollingMonth(int numberOfMonths, QueryTypeContext context);
    DateValue RollingQuarter(int numberOfQuarters, QueryTypeContext context);
}

public class DateValueFactory : IDateValueFactory
{
    private readonly IDateValueDb _dateValueDb;

    public DateValueFactory(IDateValueDb dateValueDb)
    {
        _dateValueDb = dateValueDb;
    }

    public DateValue CurrentYear(QueryTypeContext context)
    {
        var currentYear = _dateValueDb.GetCurrentYear(context.Id);
        return new DateValue(DateValueType.CurrentYear, currentYear, context);
    }

    public DateValue RollingMonth(int numberOfMonths, QueryTypeContext context)
    {
        return new DateValue(DateValueType.RollingMonth, numberOfMonths, context);
    }

    ...
}
As you see, all of these methods take a QueryTypeContext as a parameter; more importantly, they all get the very same instance of QueryTypeContext during their short life (one web request). So I started to wonder if I could refactor this so that, whenever many methods of a service class require a QueryTypeContext as an argument, it would be injected via the constructor instead of being passed around over and over again. For example:
public interface IDateValueFactory
{
    DateValue CurrentYear();
    DateValue RollingMonth(int numberOfMonths);
    DateValue RollingQuarter(int numberOfQuarters);
}

public class DateValueFactory : IDateValueFactory
{
    private readonly IDateValueDb _dateValueDb;
    private readonly QueryTypeContext _context;

    public DateValueFactory(IDateValueDb dateValueDb, QueryTypeContext context)
    {
        _dateValueDb = dateValueDb;
        _context = context;
    }

    public DateValue CurrentYear()
    {
        var currentYear = _dateValueDb.GetCurrentYear(_context.Id);
        return new DateValue(DateValueType.CurrentYear, currentYear, _context);
    }

    public DateValue RollingMonth(int numberOfMonths)
    {
        return new DateValue(DateValueType.RollingMonth, numberOfMonths, _context);
    }

    ...
}
And now the real question:
Is it a good idea to do this sort of thing, or does it violate some design principle I should adhere to?
In order to inject the QueryTypeContext instance, built using information from the HTTP request, I thought about embedding the QueryTypeId in the URIs so it would be available in the RouteData on the server. Then, before the controller is constructed, I could pull it out, build the QueryTypeContext, create a nested IoC container for that request, and inject it into the container. Then whenever some class needed a QueryTypeContext to perform its job, it would simply declare it as a constructor argument.
Anything you can meaningfully push to the constructor as dependencies, you should. Dependencies wired up with constructor injection are implementation details, whereas method parameters are part of your model's API.
It's much easier to refactor dependencies wired through constructors than to change an API, so for maintainability reasons you should prefer as few method parameters as possible.
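As a rough, container-agnostic sketch of how the per-request context could reach those constructors (the provider interface, the IQueryTypeContextDb lookup and the queryTypeId route value are all assumptions here): register a small provider per web request that reads the RouteData, and let services depend on it, or on the QueryTypeContext it produces, via their constructors.

public interface IQueryTypeContextProvider
{
    QueryTypeContext Current { get; }
}

public class RouteDataQueryTypeContextProvider : IQueryTypeContextProvider
{
    private readonly IQueryTypeContextDb _contextDb; // hypothetical data access for contexts

    public RouteDataQueryTypeContextProvider(IQueryTypeContextDb contextDb)
    {
        _contextDb = contextDb;
    }

    public QueryTypeContext Current
    {
        get
        {
            // the QueryTypeId is assumed to be embedded in the URI as a route value
            var routeData = HttpContext.Current.Request.RequestContext.RouteData;
            var queryTypeId = int.Parse((string)routeData.Values["queryTypeId"]);
            return _contextDb.GetById(queryTypeId);
        }
    }
}

With something like this in place, DateValueFactory can take the QueryTypeContext (or the provider) as a constructor argument exactly as in your refactored example.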