CQ5 (CRX) bookstore application error - AEM

I checked out the source code from http://dev.day.com/docs/en/crx/current/getting_started/first_steps_with_crx.html#Step%20Two:%20Check%20out%20CRX%20Bookstore%20Example
When I tried to invoke http://localhost:4502/products.html, the expected result was the products page from the bookstore app.
Instead I got "Cannot serve request to /products.html in /apps/bookstore/components/ranking/ranking.jsp:
What version of the product are you using? On what operating system?
I am using CQ5.5 (CRX 2.3) on Windows 7.
http://code.google.com/p/crxzon/issues/detail?id=4&thanks=4&ts=1362987616

From what I see, you get a NullPointerException at RankingServiceImpl:277 because the repository field is null. The only way I can explain that is that the SCR annotations didn't fire during the build.
That said, I'm actually surprised your bundle started on CQ 5.5 at all, as the dependencies seem to target earlier versions (5.4, I guess). I suggest double-checking that under /system/console/bundles (search for "CRX - Sample Bookstore Demo"). If you're missing imports there, try playing with /src/impl/com.day.crx.sample.bookshop.bnd to update the versions to match CQ 5.5, or run it on CQ 5.4.
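If the imports are indeed the problem, the version bump might look roughly like this in the .bnd file (a sketch only; the actual package names and version ranges in the sample may differ):
# hypothetical Import-Package adjustment for CQ 5.5
Import-Package: com.day.crx.*;version="[2.3,3)", \
    org.apache.sling.api.*;version="[2.2,3)", \
    *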

The annotations in RankingServiceImpl seem to be for an earlier version of CQ and CRX. Here are the changes that I made to get this to work:
import org.apache.felix.scr.annotations.Component;
import org.apache.felix.scr.annotations.Property;
import org.apache.felix.scr.annotations.Reference;
import org.apache.felix.scr.annotations.Service;

/**
 * Default implementation of the ranking service.
 * The ranking is updated through observation (based on OSGi events).
 * The service can be used by clients to get the highest ranked products.
 */
@Component(immediate = true)
@Service(value = RankingService.class)
@Property(name = org.osgi.service.event.EventConstants.EVENT_TOPIC, value = SlingConstants.TOPIC_RESOURCE_ADDED)
public class RankingServiceImpl
        implements RankingService, EventHandler, Runnable {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    private static final String PROPERTY_PREV_RANKING = "lowerRankingRef";
    private static final String PROPERTY_NEXT_RANKING = "higherRankingRef";
    private static final int SHOW_HIGHEST_RANKING = 3;

    /** Flag for stopping the background service. */
    private volatile boolean running = false;

    /** A local queue for handling new orders. */
    protected final BlockingQueue<String> orders = new LinkedBlockingQueue<String>();

    @Reference
    private SlingRepository repository;

    @Reference
    private ResourceResolverFactory resourceResolverFactory;

Related

What is a good way/pattern to use Temporal/Cadence versioning API

The versioning API is powerful. However, with this usage pattern the code quickly gets messy and hard to read and maintain.
Over time, the product needs to move fast to introduce new business requirements. Is there any advice on how to use this API wisely?
I would suggest using a Global Version Provider design pattern in Cadence/Temporal workflows if possible.
Key Idea
The versioning API is very powerful in that it lets you change the behavior of existing workflow executions in a deterministic (backward-compatible) way. In the real world, you may only care about adding new behavior, and be okay with introducing it only to newly started workflow executions. In that case, you use a global version provider to unify the versioning for the whole workflow.
The key idea is that we version the whole workflow (that's why it's called GlobalVersionProvider). Every time we add a new version, we update the version provider to supply that new version.
Example In Java
import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.ImmutableMap;
import io.temporal.workflow.Workflow;
import java.util.HashMap;
import java.util.Map;

public class GlobalVersionProvider {
    private static final String WORKFLOW_VERSION_CHANGE_ID = "global";
    private static final int STARTING_VERSION_USING_GLOBAL_VERSION = 1;
    private static final int STARTING_VERSION_DOING_X = 2;
    private static final int STARTING_VERSION_DOING_Y = 3;
    private static final int MAX_STARTING_VERSION_OF_ALL =
            STARTING_VERSION_DOING_Y;

    // Workflow.getVersion can release a thread and subsequently cause a non-deterministic error.
    // We're introducing this map in order to cache our versions on the first call, which should
    // always occur at the beginning of a workflow.
    private static final Map<String, GlobalVersionProvider> RUN_ID_TO_INSTANCE_MAP =
            new HashMap<>();

    private final int versionOnInstantiation;

    private GlobalVersionProvider() {
        versionOnInstantiation =
                Workflow.getVersion(
                        WORKFLOW_VERSION_CHANGE_ID,
                        Workflow.DEFAULT_VERSION,
                        MAX_STARTING_VERSION_OF_ALL);
    }

    private int getVersion() {
        return versionOnInstantiation;
    }

    public boolean isAfterVersionOfUsingGlobalVersion() {
        return getVersion() >= STARTING_VERSION_USING_GLOBAL_VERSION;
    }

    public boolean isAfterVersionOfDoingX() {
        return getVersion() >= STARTING_VERSION_DOING_X;
    }

    public boolean isAfterVersionOfDoingY() {
        return getVersion() >= STARTING_VERSION_DOING_Y;
    }

    public static GlobalVersionProvider get() {
        String runId = Workflow.getInfo().getRunId();
        GlobalVersionProvider instance;
        if (RUN_ID_TO_INSTANCE_MAP.containsKey(runId)) {
            instance = RUN_ID_TO_INSTANCE_MAP.get(runId);
        } else {
            instance = new GlobalVersionProvider();
            RUN_ID_TO_INSTANCE_MAP.put(runId, instance);
        }
        return instance;
    }

    // NOTE: this should be called at the beginning of the workflow method
    public static void upsertGlobalVersionSearchAttribute() {
        int workflowVersion = get().getVersion();
        Workflow.upsertSearchAttributes(
                ImmutableMap.of(
                        WorkflowSearchAttribute.TEMPORAL_WORKFLOW_GLOBAL_VERSION.getValue(),
                        workflowVersion));
    }

    // Call this API in each replay test to clear the cache
    @VisibleForTesting
    public static void clearInstances() {
        RUN_ID_TO_INSTANCE_MAP.clear();
    }
}
Note that because of a bug in the Temporal/Cadence Java SDK, Workflow.getVersion can release a thread and subsequently cause a non-deterministic error. We're introducing this map in order to cache our versions on the first call, which should always occur at the beginning of the workflow execution.
Call the clearInstances API in each replay test to clear the cache.
Therefore, in the workflow code:
public class HelloWorldImpl implements HelloWorld {
    private GlobalVersionProvider globalVersionProvider;

    @VisibleForTesting
    public HelloWorldImpl(final GlobalVersionProvider versionProvider) {
        this.globalVersionProvider = versionProvider;
    }

    public HelloWorldImpl() {
        this.globalVersionProvider = GlobalVersionProvider.get();
    }

    @Override
    public void start(final Request request) {
        if (globalVersionProvider.isAfterVersionOfUsingGlobalVersion()) {
            GlobalVersionProvider.upsertGlobalVersionSearchAttribute();
        }
        ...
        if (globalVersionProvider.isAfterVersionOfDoingX()) {
            // doing X here
            ...
        }
        ...
        if (globalVersionProvider.isAfterVersionOfDoingY()) {
            // doing Y here
            ...
        }
        ...
    }
}
Best practices with the pattern
How to add a new version
For every new version:
Add the new constant STARTING_VERSION_XXXX
Add a new API public boolean isAfterVersionOfXXX()
Update MAX_STARTING_VERSION_OF_ALL
Apply the new API in the workflow code where you want to add the new logic, as sketched below
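For illustration, a minimal sketch of the first three steps (the DoingZ name is hypothetical):
// Inside GlobalVersionProvider: hypothetical new version "doing Z"
private static final int STARTING_VERSION_DOING_Z = 4;    // new constant
private static final int MAX_STARTING_VERSION_OF_ALL =
        STARTING_VERSION_DOING_Z;                          // updated max

public boolean isAfterVersionOfDoingZ() {                  // new API
    return getVersion() >= STARTING_VERSION_DOING_Z;
}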
Maintain the replay test JSON files in a pattern like HelloWorldWorkflowReplaytest-version-x-description.json. Make sure to always add a new replay test for every new version you introduce to the workflow. When generating the JSON from a workflow execution, make sure it exercises the new code path; otherwise it won't be able to protect determinism. If it requires more than one workflow execution to exercise all branches, then make multiple JSON files for replay.
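A replay test per version file might look like this (a sketch; it assumes the Temporal Java SDK's WorkflowReplayer and the JSON file on the test classpath):
import io.temporal.testing.WorkflowReplayer;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.Test;

public class HelloWorldReplayTest {

    @AfterEach
    public void tearDown() {
        // Clear the run-id cache between replays, as noted above
        GlobalVersionProvider.clearInstances();
    }

    @Test
    public void replaysVersion2DoingX() throws Exception {
        WorkflowReplayer.replayWorkflowExecutionFromResource(
                "HelloWorldWorkflowReplaytest-version-2-doing-x.json", HelloWorldImpl.class);
    }
}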
How to remove an old version
To remove an old code path (version), add a new version that does not execute the old code path. Later on, use a search attribute query like
GlobalVersion >= STARTING_VERSION_DOING_X AND GlobalVersion < STARTING_VERSION_NOT_DOING_X
to find out whether any existing workflow executions are still running with those versions.
Instead of waiting for workflows to close, you can terminate or reset workflows.
Example of deprecating the DoingX code path in the workflow code:
public class HelloWorldImpl implements HelloWorld {
    ...
    @Override
    public void start(final Request request) {
        ...
        if (globalVersionProvider.isAfterVersionOfDoingX()
                && !globalVersionProvider.isAfterVersionOfNotDoingX()) {
            // doing X here
            ...
        }
    }
}
TODO: Example in Golang
Benefits
Prevents the spaghetti code that comes from using the native Temporal versioning API everywhere in the workflow code
Provides a search attribute to find workflows of a particular version. This fills the gap left by the Temporal Java SDK, which is missing the TemporalChangeVersion feature.
Even though the Cadence Java/Golang SDK has CadenceChangeVersion, this global version search attribute is much better for querying, because it's an integer instead of a keyword.
Provides a pattern for maintaining replay tests easily
Provides a way to test different versions despite that missing feature
Cons
There shouldn't be any cons. Using this pattern doesn't stop you from also using the raw versioning API directly in the workflow; you can combine this pattern with others.

Is there a way to log correlation id (sent from microservice) in Postgres

I am setting up the logging for a project and was wondering whether it is possible to send an id to the Postgres database. Later I would collect all logs with Fluentd and use the EFK (Elasticsearch, Fluentd, Kibana) stack to look through the logs. That is why it would be very helpful if I can set the id in the database logs.
You can have your connection set the application_name to something that includes the id you want, and then configure your logging to include the application_name field. Or you could include the id in an SQL comment, something like select /* I am number 6 */ whatever from wherever. But then the id will only be visible for log messages that include the SQL. (The reason for putting the comment after the select keyword is that some clients strip out leading comments, so the server never sees them.)
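For illustration, here is a minimal sketch of the application_name approach with the PostgreSQL JDBC driver (the service name, credentials, and URL are assumptions):
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class CorrelationAwareConnection {
    public static Connection open(String correlationId) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "app");        // assumed credentials
        props.setProperty("password", "secret");
        // The PostgreSQL JDBC driver maps this property to application_name on the server
        props.setProperty("ApplicationName", "order-service cid=" + correlationId);
        return DriverManager.getConnection("jdbc:postgresql://localhost:5432/mydb", props);
    }
}
With log_line_prefix = '%a %m ' in postgresql.conf, every server log line then carries the application_name, and with it the correlation id.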
Thank you @jjane for giving me such a great idea. After some googling I found a page describing how to intercept Hibernate logs.
Then I did some more research on how to add the inspector to Spring Boot, and this is the solution I came up with:
import java.util.regex.Pattern;

import org.hibernate.resource.jdbc.spi.StatementInspector;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;

@Component
public class SqlCommentStatementInspector implements StatementInspector {

    private static final Logger LOGGER =
            LoggerFactory.getLogger(SqlCommentStatementInspector.class);

    private static final Pattern SQL_COMMENT_PATTERN =
            Pattern.compile("\\/\\*.*?\\*\\/\\s*");

    @Override
    public String inspect(String sql) {
        // Slf4jMDCFilter is the project's own filter that puts the correlation id into the MDC
        LOGGER.info("Repo log, Correlation_id: {}, SQL: {}",
                MDC.get(Slf4jMDCFilter.MDC_UUID_TOKEN_KEY), sql);
        // Replace any existing leading comment with one carrying the correlation id
        return String.format("/*Correlation_id: %s*/",
                MDC.get(Slf4jMDCFilter.MDC_UUID_TOKEN_KEY))
                + SQL_COMMENT_PATTERN.matcher(sql).replaceAll("");
    }
}
and its configuration:
import java.util.Map;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.orm.jpa.HibernatePropertiesCustomizer;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HibernateConfiguration implements HibernatePropertiesCustomizer {

    @Autowired
    private SqlCommentStatementInspector myInspector;

    @Override
    public void customize(Map<String, Object> hibernateProperties) {
        hibernateProperties.put("hibernate.session_factory.statement_inspector", myInspector);
    }
}

Problems while connecting to two MongoDBs via Spring

I'm trying to connect to two different MongoDBs with Spring (1.5.2; we included Spring in an internal framework, therefore it is not the latest version yet), and this already works partially but not fully. More precisely, I found some strange behavior which I will describe below, after showing my setup.
So this is what I have done so far:
Project structure
backend
    config
    domain
        customer
        internal
    repository
        customer
        internal
    service
In config I have my Mongo configurations.
I created one base class which extends AbstractMongoConfiguration. This class holds fields for database, host, etc., which are filled with the properties from an application.yml. It also holds a couple of methods for creating a MongoClient and a SimpleMongoDbFactory.
Furthermore, there are two custom configuration classes, one for each MongoDB. Both extend the base class.
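For reference, a minimal sketch of what such a base class might look like (the field and method names are assumptions inferred from the configuration classes below, not the original code, and it assumes the Mongo Java driver 3.x API):
import java.util.Collections;
import java.util.List;

import com.mongodb.Mongo;
import com.mongodb.MongoClient;
import com.mongodb.MongoCredential;
import com.mongodb.ServerAddress;
import org.springframework.data.mongodb.config.AbstractMongoConfiguration;
import org.springframework.data.mongodb.core.SimpleMongoDbFactory;

public abstract class BaseMongoConfig extends AbstractMongoConfiguration {

    // Filled from application.yml via @ConfigurationProperties on the subclasses
    private String host;
    private int port;
    private String database;
    private String username;
    private String password;

    @Override
    public String getDatabaseName() {
        return database;
    }

    @Override
    public Mongo mongo() throws Exception {
        return getMongoClient(getAddress(), getCredentials());
    }

    protected ServerAddress getAddress() {
        return new ServerAddress(host, port);
    }

    protected List<MongoCredential> getCredentials() {
        return Collections.singletonList(
                MongoCredential.createCredential(username, database, password.toCharArray()));
    }

    protected MongoClient getMongoClient(ServerAddress address, List<MongoCredential> credentials) {
        return new MongoClient(address, credentials);
    }

    protected SimpleMongoDbFactory getSimpleMongoDbFactory(MongoClient client, String databaseName) {
        return new SimpleMongoDbFactory(client, databaseName);
    }

    // Setters omitted for brevity; they are needed for the property binding
}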
Here is how they are coded:
Primary Connection
@Primary
@EntityScan(basePackages = "backend.domain.customer")
@Configuration
@EnableMongoRepositories(
        basePackages = {"backend.repository.customer"},
        mongoTemplateRef = "customerDataMongoTemplate")
@ConfigurationProperties(prefix = "customer.mongodb")
public class CustomerDataMongoConnection extends BaseMongoConfig {

    public static final String TEMPLATE_NAME = "customerDataMongoTemplate";

    @Override
    @Bean(name = CustomerDataMongoConnection.TEMPLATE_NAME)
    public MongoTemplate mongoTemplate() {
        MongoClient client = getMongoClient(getAddress(), getCredentials());
        SimpleMongoDbFactory factory = getSimpleMongoDbFactory(client, getDatabaseName());
        return new MongoTemplate(factory);
    }
}
The second configuration class looks pretty similar. Here it is:
@EntityScan(basePackages = "backend.domain.internal")
@Configuration
@EnableMongoRepositories(
        basePackages = {"backend.repository.internal"},
        mongoTemplateRef = InternalDataMongoConnection.TEMPLATE_NAME)
@ConfigurationProperties(prefix = "internal.mongodb")
public class InternalDataMongoConnection extends BaseMongoConfig {

    public static final String TEMPLATE_NAME = "internalDataMongoTemplate";

    @Override
    @Bean(name = InternalDataMongoConnection.TEMPLATE_NAME)
    public MongoTemplate mongoTemplate() {
        MongoClient client = getMongoClient(getAddress(), getCredentials());
        SimpleMongoDbFactory factory = getSimpleMongoDbFactory(client, getDatabaseName());
        return new MongoTemplate(factory);
    }
}
As you can see, I use EnableMongoRepositories to define which repository should use which connection.
My repositories are defined just like it is described in the Spring documentation.
However, here is one example which is located in package backend.repository.customer:
public interface ContactHistoryRepository extends MongoRepository<ContactHistoryEntity, String> {
    ContactHistoryEntity findById(String id);
}
The problem is that my backend always only uses the primary connection with this setup. Interestingly, when I remove the bean name for the MongoTemplate (just @Bean), the backend then uses the secondary connection (InternalDataMongoConnection). This is true for all defined repositories.
My question is: how can I achieve that my backend really takes care of both connections? Did I perhaps miss another parameter/configuration?
Since this is a pretty extensive post, I apologise if I forgot to mention something. Please ask for missing information in the comments.
I found the answer.
In my package structure there was an empty configuration class (from a colleague of mine) with the annotations @Configuration and @EnableMongoRepositories. This triggered the automatic wiring process of Spring Data and therefore led to the problems I reported above.
I simply deleted the class and now it works as it should!

JBoss 7.1 Final - 2 counts of IllegalAnnotationExceptions

I'm trying to migrate an app from JBoss 5.1 to 7.1 and I get an error like this (2 counts of IllegalAnnotationExceptions) which I'm not sure about. If anyone has any idea, please help me.
Update 1:
@Stateless
@Remote(PackageService.class)
@Interceptors(CrossContextSpringBeanAutowiringInterceptor.class)
@WebContext(contextRoot = "/appname_web_services", urlPattern = "/MaintenanceService", authMethod = "", secureWSDLAccess = false)
@WebService(
        name = "MaintenanceService",
        targetNamespace = "http://appname.com/web/services",
        serviceName = "MaintenanceService")
@SOAPBinding(parameterStyle = SOAPBinding.ParameterStyle.WRAPPED)
@HandlerChain(file = "WebServiceHandlerChains.xml")
@TransactionTimeout(10800)
public class MaintenanceServiceBean implements MaintenanceService {

    private static final Logger logger = Logger.getLogger(MaintenanceServiceBean.class);

    @Resource(mappedName = "/ConnectionFactory")
    ConnectionFactory connectionFactory;

    @Resource(mappedName = "topic/manager_system_topic")
    javax.jms.Destination systemTopic;

    @Autowired
    MaintenanceService MigrationService;

    @WebMethod
    public List<Long> getSoftDeletedPackageIds(Long performedBy) throws Exception {
        return MigrationService.getSoftDeletedPackageIds(null);
    }
}
This is the class where I believe it fails.
You are using an interface in your JAXB mappings for which you have not provided enough information for the runtime to be able to bind an actual implementation. Without more code included in your question it's hard to recommend a specific solution, but typically you would annotate the property that refers to the interface with @XmlAnyElement.
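As a rough illustration (the class and field names here are hypothetical, not from your code), the usual shape of the fix is to expose the interface-typed property as @XmlAnyElement(lax = true), so JAXB substitutes whatever known @XmlRootElement implementation shows up at runtime:
import javax.xml.bind.annotation.XmlAnyElement;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement
public class ResponseWrapper {

    private Object payload; // really some interface type JAXB cannot bind on its own

    // lax = true lets JAXB bind any class known to the context whose
    // @XmlRootElement matches the incoming element
    @XmlAnyElement(lax = true)
    public Object getPayload() {
        return payload;
    }

    public void setPayload(Object payload) {
        this.payload = payload;
    }
}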
You can read through this useful tutorial to determine the best solution for your case.

How to dig into this memory leak further with Eclipse MAT

I have an issue where a ScheduledThreadPoolExecutor ends up with 3 million future tasks. I am trying to see what type of task they are so I can go to where that task is scheduled, but I am not sure how to get any info from this screen (I have tried right-clicking those future tasks and selecting various choices in the menu). It seems like something is missing in the GUI, like the links to the actual runnables or something...
Any ideas on how to drill in further?
Some General Stuff
You need to know that if you have a portable heap dump (PHD, see types here), it does not contain actual data (primitives), so you can then make your findings only based on the reference map (which types hold a reference to which other types).
You can give OQL a try. This is an SQL-like language with which you can query your objects.
One example:
select * from java.lang.String s where s.@retainedHeapSize > 10000
This gives back all strings that retain more than ~10 KB.
You can also write your own functions (like this aggregating one here).
As for the current problem
If you check the FutureTask source (JDK 6 shown below):
public class FutureTask<V> implements RunnableFuture<V> {
    /** Synchronization control for FutureTask */
    private final Sync sync;
    ...
    public FutureTask(Callable<V> callable) {
        if (callable == null)
            throw new NullPointerException();
        sync = new Sync(callable);
    }
    ...
    public FutureTask(Runnable runnable, V result) {
        sync = new Sync(Executors.callable(runnable, result));
    }
The actual Runnable is referenced by the Sync object:
    private final class Sync extends AbstractQueuedSynchronizer {
        private static final long serialVersionUID = -7828117401763700385L;

        /** State value representing that task is running */
        private static final int RUNNING = 1;
        /** State value representing that task ran */
        private static final int RAN = 2;
        /** State value representing that task was cancelled */
        private static final int CANCELLED = 4;

        /** The underlying callable */
        private final Callable<V> callable;
        /** The result to return from get() */
        private V result;
        /** The exception to throw from get() */
        private Throwable exception;

        /**
         * The thread running task. When nulled after set/cancel, this
         * indicates that the results are accessible. Must be
         * volatile, to ensure visibility upon completion.
         */
        private volatile Thread runner;

        Sync(Callable<V> callable) {
            this.callable = callable;
        }
So in the GUI open the Sync object (not open in your picture), and then you can check the Runnables.
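If clicking through millions of instances is impractical, an OQL query along these lines should surface the underlying callables directly (a sketch, assuming the JDK 6 field layout shown above; instanceof also matches subclasses such as the scheduled tasks):
select t.sync.callable from instanceof java.util.concurrent.FutureTask t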
I don't know if you can change the code or not, but in general it is better to always limit the size of the queue used by an executor, since this way you can avoid leaks. Or you can use some persisted queue. If you apply a limit, you can define the rejection policy, for example reject, run in caller, and so on. See http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/ThreadPoolExecutor.html for details.
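For illustration, a minimal sketch of a bounded pool with a caller-runs rejection policy (the pool and queue sizes are arbitrary):
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedExecutorExample {
    public static void main(String[] args) {
        // At most 10000 queued tasks; beyond that, the submitting thread runs the
        // task itself, which applies backpressure instead of growing the queue forever.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                4, 8,                                  // core and max pool size
                60L, TimeUnit.SECONDS,                 // keep-alive for threads above core
                new ArrayBlockingQueue<Runnable>(10000),
                new ThreadPoolExecutor.CallerRunsPolicy());

        executor.execute(() -> System.out.println("task ran"));
        executor.shutdown();
    }
}
Note that ScheduledThreadPoolExecutor itself always uses an unbounded internal delay queue, so for scheduled work the limit has to be enforced on the submission side.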