Trying to retrieve JBPM process results from KIE Remote REST API

I am trying to retrieve the result objects created by a business process via the KIE remote API:
if (baseURL != null) {
    System.out.println("[GlobalFlow]-Creating Engine");
    engine = RemoteRuntimeEngineFactory.newRestBuilder()
            .addUrl(baseURL)
            .addUserName(user)
            .addPassword(password)
            .addDeploymentId(deploymentId)
            .addTimeout(10)
            .addExtraJaxbClasses(ClaimRequest.class, EvaluateLabClaimResponse.class, SecondaryCodes.class,
                    ResponseHeaderData.class, ResponseLineData.class, EvaluateLabClaim.class)
            .build();

    KieSession ksession = engine.getKieSession();
    Map<String, Object> params = new HashMap<String, Object>();
    // [create and store a bunch of params...]
    ProcessInstance processInstance = ksession.startProcess("defaultPackage.GlobalFlow", params);
}
Now I want to get the return values that were created by the remote jBPM process. The objects exist on the server side; I just need a way to access them from the client.
How do I do that?
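One candidate approach is the audit (history) log exposed by the remote runtime engine. The sketch below continues the snippet above and assumes the jBPM 6.x client in use exposes getAuditService() on the engine returned by the builder (available in recent 6.x releases), and that string representations of the variable values are good enough; it is a minimal example, not the only option.

// Sketch: read back the process variables recorded for the instance started above.
// Assumes engine and processInstance from the snippet above are in scope.
long processInstanceId = processInstance.getId();

AuditService auditService = engine.getAuditService(); // org.kie.api.runtime.manager.audit
List<? extends VariableInstanceLog> variableLogs =
        auditService.findVariableInstances(processInstanceId);

for (VariableInstanceLog log : variableLogs) {
    // Variable values are logged as their string representations.
    System.out.println(log.getVariableId() + " = " + log.getValue());
}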

KIE Server execution using Java API

I have a simple business process with a rule task executed before and after a REST service work item (see the BPM process diagram).
I also defined the REST work item handler definition in the settings and installed the REST work item handler.
I am using the Java KIE API, calling RuleServicesClient to execute the rules and the BPM process.
KieServices kieServices = KieServices.Factory.get();
CredentialsProvider credentialsProvider = new EnteredCredentialsProvider(USERNAME, PASSWORD);
KieServicesConfiguration kieServicesConfig = KieServicesFactory.newRestConfiguration(KIE_SERVER_URL, credentialsProvider);
// Set the Marshaling Format to JSON. Other options are JAXB and XSTREAM
kieServicesConfig.setMarshallingFormat(MarshallingFormat.JSON);
KieServicesClient kieServicesClient = KieServicesFactory.newKieServicesClient(kieServicesConfig);
// Retrieve the RuleServices Client.
RuleServicesClient rulesClient = kieServicesClient.getServicesClient(RuleServicesClient.class);
List<Command<?>> commands = new ArrayList<>();
KieCommands commandFactory = kieServices.getCommands();
commands.add(commandFactory.newInsert(new RestFlowRequest("Sample"), "SampleRequest"));
commands.add(commandFactory.newStartProcess("RuleFlowSample.DecisionRestBPM"));
//commands.add(commandFactory.newFireAllRules("numberOfFiredRules"));
//ProcessServicesClient processService
// = kieServicesClient.getServicesClient(ProcessServicesClient.class);
//processService.startProcess(CONTAINER_ID,"RuleFlowSample.DecisionRestBPM");
BatchExecutionCommand batchExecutionCommand = commandFactory.newBatchExecution(commands);
ServiceResponse<ExecutionResults> response = rulesClient.executeCommandsWithResults(CONTAINER_ID, batchExecutionCommand);
It fails to execute the REST service task with the following error (see the error thrown by the KIE Server in the screenshot).
If I change the code to start the process using ProcessServicesClient, the business process executes without any issue, but the rules don't execute.
You are using the correct approach with commands.add(commandFactory.newStartProcess("RuleFlowSample.DecisionRestBPM"));
I tried it using the code below (https://github.com/jbossdemocentral/kie-server-client-examples/blob/master/src/main/java/com/redhat/demo/qlb/loan_application/Main.java) and it works fine:
KieServices kieServices = KieServices.Factory.get();
CredentialsProvider credentialsProvider = new EnteredCredentialsProvider(USERNAME, PASSWORD);
KieServicesConfiguration kieServicesConfig = KieServicesFactory.newRestConfiguration(KIE_SERVER_URL, credentialsProvider);
// Set the Marshaling Format to JSON. Other options are JAXB and XSTREAM
kieServicesConfig.setMarshallingFormat(MarshallingFormat.JSON);
KieServicesClient kieServicesClient = KieServicesFactory.newKieServicesClient(kieServicesConfig);
// Retrieve the RuleServices Client.
RuleServicesClient rulesClient = kieServicesClient.getServicesClient(RuleServicesClient.class);
/*
* Create the list of commands that we want to fire against the rule engine. In this case we insert 2 objects, applicant and loan,
* and we trigger a ruleflow (with the StartProcess command).
*/
List<Command<?>> commands = new ArrayList<>();
KieCommands commandFactory = kieServices.getCommands();
//The identifiers that we provide in the insert commands can later be used to retrieve the object from the response.
commands.add(commandFactory.newInsert(getApplicant(), "applicant"));
commands.add(commandFactory.newInsert(getLoan(), "loan"));
commands.add(commandFactory.newStartProcess("loan-application.loan-application-decision-flow"));
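For completeness, here is a minimal sketch of executing the batch and reading the inserted facts back out of the response, continuing the snippet above with the same rulesClient, CONTAINER_ID, and the identifiers passed to newInsert:

// Execute the commands as a batch and retrieve the results by identifier.
BatchExecutionCommand batchCommand = commandFactory.newBatchExecution(commands);
ServiceResponse<ExecutionResults> reply =
        rulesClient.executeCommandsWithResults(CONTAINER_ID, batchCommand);

if (reply.getType() == ServiceResponse.ResponseType.SUCCESS) {
    ExecutionResults results = reply.getResult();
    // "applicant" and "loan" are the identifiers used in the insert commands above.
    Object applicant = results.getValue("applicant");
    Object loan = results.getValue("loan");
    System.out.println("applicant = " + applicant + ", loan = " + loan);
}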
For testing purposes, please remove the REST work item handler, try with a script task instead, and see the result.

Is there a retention policy for a custom state store (RocksDB) with Kafka Streams?

I am setting up a new Kafka Streams application and want to use a custom state store backed by RocksDB. Putting data into the state store, getting a queryable state store from it, and iterating over the data all work fine. However, after about 72 hours I observe data missing from the store. Is there a default retention time on data for state stores in Kafka Streams or in RocksDB?
We are using a custom state store with RocksDB so that we can use the column family feature, which we can't use with the embedded RocksDB implementation in Kafka Streams. I have implemented the custom store using the KeyValueStore interface, and I have my own StoreSupplier, StoreBuilder, StoreType, and StoreWrapper as well.
A changelog topic is created for the application, but no data is going to it yet (I haven't looked into that problem yet).
Putting data into this custom state store and getting a queryable state store from it works fine; however, data goes missing after about 72 hours. I checked both the size of the state store directory and, by exporting the data into files, the number of entries.
I am using SNAPPY compression and UNIVERSAL compaction.
Simple topology:
final StreamsBuilder builder = new StreamsBuilder();
String storeName = "store-name";
List<String> cfNames = new ArrayList<>();

// Hybrid custom store
final StoreBuilder customStore = new RocksDBColumnFamilyStoreBuilder(storeName, cfNames);
builder.addStateStore(customStore);

KStream<String, String> inputStream = builder.stream(
        inputTopicName,
        Consumed.with(Serdes.String(), Serdes.String()));

inputStream.transform(() -> new CurrentTransformer(storeName), storeName);

Topology tp = builder.build();
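As a side note, reading the store back from outside the topology (the "queryable state store" mentioned above) looks roughly like the sketch below. It assumes the custom store ultimately implements ReadOnlyKeyValueStore<String, String> and uses the built-in key-value store type via the pre-2.5 store() API; streamsConfigProps is a placeholder for the application's StreamsConfig properties, and the question's own StoreType would slot into the same place.

// Sketch: interactive-queries lookup of the store registered above.
KafkaStreams streams = new KafkaStreams(tp, streamsConfigProps); // streamsConfigProps: your StreamsConfig properties
streams.start();

ReadOnlyKeyValueStore<String, String> view =
        streams.store(storeName, QueryableStoreTypes.<String, String>keyValueStore());

try (KeyValueIterator<String, String> it = view.all()) {
    while (it.hasNext()) {
        KeyValue<String, String> entry = it.next();
        System.out.println(entry.key + " -> " + entry.value);
    }
}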
Snippet from custom store implementation:
RocksDBColumnFamilyStore(final String name, final String parentDir, List<String> columnFamilyNames) {
    .....
    ......
    final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig()
            .setBlockCache(cache)
            .setBlockSize(BLOCK_SIZE)
            .setCacheIndexAndFilterBlocks(true)
            .setPinL0FilterAndIndexBlocksInCache(true)
            .setFilterPolicy(filter)
            .setCacheIndexAndFilterBlocksWithHighPriority(true)
            .setPinTopLevelIndexAndFilter(true);

    cfOptions = new ColumnFamilyOptions()
            .setCompressionType(CompressionType.SNAPPY_COMPRESSION)
            .setCompactionStyle(CompactionStyle.UNIVERSAL)
            .setMaxWriteBufferNumber(MAX_WRITE_BUFFERS)
            .setOptimizeFiltersForHits(true)
            .setLevelCompactionDynamicLevelBytes(true)
            .setTableFormatConfig(tableConfig);

    columnFamilyDescriptors.add(new ColumnFamilyDescriptor(RocksDB.DEFAULT_COLUMN_FAMILY, cfOptions));
    columnFamilyNames.stream().forEach((cfName) -> columnFamilyDescriptors.add(new ColumnFamilyDescriptor(cfName.getBytes(), cfOptions)));
}
@SuppressWarnings("unchecked")
public void openDB(final ProcessorContext context) {
    Options opts = new Options()
            .prepareForBulkLoad();

    options = new DBOptions(opts)
            .setCreateIfMissing(true)
            .setErrorIfExists(false)
            .setInfoLogLevel(InfoLogLevel.INFO_LEVEL)
            .setMaxOpenFiles(-1)
            .setWriteBufferManager(writeBufferManager)
            .setIncreaseParallelism(Math.max(Runtime.getRuntime().availableProcessors(), 2))
            .setCreateMissingColumnFamilies(true);

    fOptions = new FlushOptions();
    fOptions.setWaitForFlush(true);

    dbDir = new File(new File(context.stateDir(), parentDir), name);
    try {
        Files.createDirectories(dbDir.getParentFile().toPath());
        db = RocksDB.open(options, dbDir.getAbsolutePath(), columnFamilyDescriptors, columnFamilyHandles);
        columnFamilyHandles.stream().forEach((handle) -> {
            try {
                columnFamilyMap.put(new String(handle.getName()), handle);
            } catch (RocksDBException e) {
                throw new ProcessorStateException("Error opening store " + name + " at location " + dbDir.toString(), e);
            }
        });
    } catch (RocksDBException e) {
        throw new ProcessorStateException("Error opening store " + name + " at location " + dbDir.toString(), e);
    }
    open = true;
}
The expectation is that the state store (RocksDB) will retain the data indefinitely until it is manually deleted or the storage disk fails. I am not aware of Kafka Streams having introduced a TTL for state stores yet.
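For context on where retention does exist in Kafka Streams' built-in stores (illustrative store names only, not from the setup above, and assuming the Duration-based Stores API of Kafka Streams 2.1+): a plain persistent key-value store has no retention or TTL, while windowed stores take an explicit retention period after which old windows are dropped. A hedged sketch:

// Plain key-value store: no retention; entries stay until deleted or overwritten.
StoreBuilder<?> kvStore = Stores.keyValueStoreBuilder(
        Stores.persistentKeyValueStore("example-kv-store"),
        Serdes.String(), Serdes.String());

// Windowed store: retention is explicit; windows older than 72 hours are dropped.
StoreBuilder<?> windowedStore = Stores.windowStoreBuilder(
        Stores.persistentWindowStore("example-windowed-store",
                Duration.ofHours(72),  // retention period
                Duration.ofHours(1),   // window size
                false),                // retain duplicates
        Serdes.String(), Serdes.String());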

Is RESTEasy RegisterBuiltin.register necessary when using ClientResponse<T>

I am developing a REST client using a JBoss app server and RESTEasy 2.3.6. I've included the following line at the beginning of my code:
RegisterBuiltin.register(ResteasyProviderFactory.getInstance());
Here's the rest of the snippet:
RegisterBuiltin.register(ResteasyProviderFactory.getInstance());

DefaultHttpClient httpclient = new DefaultHttpClient();
httpclient.getCredentialsProvider().setCredentials(
        new AuthScope(host, port, AuthScope.ANY_REALM),
        new UsernamePasswordCredentials(userid, password));
ClientExecutor executor = createAuthenticatingExecutor(httpclient, host, port);

String uriTemplate = "http://myhost:8080/webapp/rest/MySearch";
ClientRequest request = new ClientRequest(uriTemplate, executor);
request.accept("application/json").queryParameter("query", searchArg);

ClientResponse<SearchResponse> response = null;
List<MyClass> values = null;
try
{
    response = request.get(SearchResponse.class);
    if (response.getResponseStatus().getStatusCode() != 200)
    {
        throw new Exception("REST GET failed");
    }
    SearchResponse searchResp = response.getEntity();
    values = searchResp.getValue();
}
catch (ClientResponseFailure e)
{
    log.error("REST call failed", e);
}
finally
{
    if (response != null)
    {
        response.releaseConnection();
    }
}

private ClientExecutor createAuthenticatingExecutor(DefaultHttpClient client, String server, int port)
{
    // Create AuthCache instance
    AuthCache authCache = new BasicAuthCache();

    // Generate BASIC scheme object and add it to the local auth cache
    BasicScheme basicAuth = new BasicScheme();
    HttpHost targetHost = new HttpHost(server, port);
    authCache.put(targetHost, basicAuth);

    // Add AuthCache to the execution context
    BasicHttpContext localContext = new BasicHttpContext();
    localContext.setAttribute(ClientContext.AUTH_CACHE, authCache);

    // Create the ClientExecutor.
    ApacheHttpClient4Executor executor = new ApacheHttpClient4Executor(client, localContext);
    return executor;
}
The above is a fairly simple client that employs the ClientRequest/ClientResponse<T> technique. This is documented here. The above code does work (I only left out some trivial variable declarations like host and port). It is unclear to me from the JBoss documentation whether I need to run RegisterBuiltin.register first. If I remove the line completely, my code still functions. Do I really need to include the register method call given the approach I have taken? The docs say I need to run this once per VM. Secondly, if I am required to call it, is it safe to call it more than once in the same VM?
NOTE: I do understand there are newer versions of RESTEasy for JBoss; we are not there yet.
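If the register call is kept, one simple pattern for the "once per VM" requirement is an idempotent guard. This is plain Java, not something RESTEasy itself requires; ResteasyBootstrap and registerOnce are illustrative names, shown only as a sketch:

import java.util.concurrent.atomic.AtomicBoolean;
import org.jboss.resteasy.plugins.providers.RegisterBuiltin;
import org.jboss.resteasy.spi.ResteasyProviderFactory;

public final class ResteasyBootstrap {
    private static final AtomicBoolean REGISTERED = new AtomicBoolean(false);

    private ResteasyBootstrap() {}

    // Registers RESTEasy's built-in providers at most once per JVM,
    // no matter how many callers invoke this method.
    public static void registerOnce() {
        if (REGISTERED.compareAndSet(false, true)) {
            RegisterBuiltin.register(ResteasyProviderFactory.getInstance());
        }
    }
}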

JBoss returns org.jboss.remoting3.ProtocolException: Too many channels open

My program encountered an error:
"org.jboss.remoting3.ProtocolException: Too many channels open"
I have searched the internet for solutions to fix this error. Unfortunately, the suggestions from others are not working for me.
Below is the code showing how I call the remote JNDI service and the properties that I have used.
public static void createUser(String loginID) throws Exception {
    Hashtable props = new Hashtable();
    try {
        props.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory");
        props.put(Context.PROVIDER_URL, "remote://localhost:4447");
        props.put("jboss.naming.client.ejb.context", "true");
        props.put(Context.SECURITY_PRINCIPAL, "userJBoss");
        props.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
        context = new InitialContext(props);
        context.lookup("ejb:/createUserOperation/CreateUserGenerator!password.api.CreateUserService");
        .....
        ......
        LOGGER.info("DONE");
    } catch (Exception e) {
        LOGGER.error("ERROR");
    } finally {
        context.close();
    }
}
For certain reasons I am not able to show all the content of the method.
The createUser method is called every time a new user needs to be created. It may be called up to a hundred or a thousand times.
I always close the connection each time the method finishes executing.
Say I call the method 100 times: some of the users are created successfully, whereas some fail, and the error below is shown to me:
2014-12-04 17:23:23,026 - ERROR [Remoting "config-based-naming-client-endpoint" task-4] (org.jboss.ejb.client.remoting.RemotingConnectionEJBReceiver- Line:134) - Failed to open channel for context EJBReceiverContext{clientContext=org.jboss.ejb.client.EJBClientContext@bbaebd6, receiver=Remoting connection EJB receiver [connection=Remoting connection <78e43506>,channel=jboss.ejb,nodename=webdev01]} org.jboss.remoting3.ProtocolException: Too many channels open
Once the error occurs, I have to restart JBoss, and after some time it comes back again.
I would appreciate it if anyone could help with this problem.
Thanks
You are using a mixture of context properties.
This should be enough:
final Properties ejbProperties = new Properties();
ejbProperties.put("remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED", "false");
ejbProperties.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
ejbProperties.put("remote.connections", "1");
ejbProperties.put("remote.connection.1.host", "localhost");
ejbProperties.put("remote.connection.1.port", "4447");
ejbProperties.put("remote.connection.1.username", "ejbuser");
ejbProperties.put("remote.connection.1.password", "ejbuser123!");
final EJBClientConfiguration ejbClientConfiguration = new PropertiesBasedEJBClientConfiguration(ejbProperties);
final ConfigBasedEJBClientContextSelector selector = new ConfigBasedEJBClientContextSelector(ejbClientConfiguration);
EJBClientContext.setSelector(selector);
final Context context = new InitialContext(ejbProperties);
// lookup
Foo proxy = (Foo) context.lookup("ejb:/createUserOperation/CreateUserGenerator!password.api.CreateUserService");
When using org.jboss.ejb.client.naming, an EJBClientContext object is created.
When you close the context, you are closing the InitialContext, not the EJBClientContext.
To close the EJBClientContext:
EJBClientContext.getCurrent().close();
There is a known JBoss bug (EAP 6, AS 7) whereby opening and closing too many InitialContext instances too quickly causes the following error:
ERROR: Failed to open channel for context EJBReceiverContext
Instead of:
final Properties properties = ...
final Context context = new InitialContext( properties );
Try caching the context for a set of properties instead:
private Map<Integer, InitialContext> initialContexts = new HashMap<>();
final Context context = getInitialContext(properties);
private InitialContext getInitialContext(final Properties properties) throws Exception {
    final Integer hash = properties.hashCode();
    InitialContext result = initialContexts.get(hash);
    if (result == null) {
        result = new InitialContext(properties);
        initialContexts.put(hash, result);
    }
    return result;
}
Remember to call close() when the context is no longer necessary.
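Applied to the createUser(...) method from the question, the cached context is looked up instead of created (and closed) per call. A sketch only: CreateUserService stands in for the remote interface behind the lookup string, and create(loginID) is a hypothetical business method.

public static void createUser(String loginID) throws Exception {
    final Properties props = new Properties();
    props.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
    // ... remote.connection.* properties as in the answer above ...

    // Reuse one InitialContext per property set instead of opening a new channel per call.
    final Context context = getInitialContext(props);
    final CreateUserService service = (CreateUserService) context.lookup(
            "ejb:/createUserOperation/CreateUserGenerator!password.api.CreateUserService");
    service.create(loginID); // hypothetical business method
    // Do not close the shared context here; close it once, on application shutdown.
}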

Getting java.lang.NoSuchMethodError exception when using GWT + JasperReports

I have integrated JasperReports on my NetBeans platform and I am able to generate reports using the following code:
Map<String, Object> params = new HashMap<String, Object>();
Connection conn = DriverManager.getConnection("databaseUrl", "userid", "password");
JasperReport jasperReport = JasperCompileManager.compileReport(reportSource);
JasperPrint jasperPrint = JasperFillManager.fillReport(jasperReport, params, conn);
JasperExportManager.exportReportToHtmlFile(jasperPrint, reportDest);
JasperViewer.viewReport(jasperPrint);
This works perfectly.
But now I'm trying to integrate JasperReports with GWT, with GlassFish as my server.
I am getting the Connection object using the following code:
public static Connection getConnection() {
    try {
        String JNDI = "JNDI name";
        InitialContext initCtx = new InitialContext();
        javax.sql.DataSource ds = (javax.sql.DataSource) initCtx.lookup(JNDI);
        Connection conn = (Connection) ds.getConnection();
        return conn;
    } catch (Exception ex) {
        ex.printStackTrace();
    }
    return null;
}
and then
Map<String, Object> params = new HashMap<String, Object>();
JasperReport jasperReport = JasperCompileManager.compileReport(reportSource);
JasperPrint jasperPrint = JasperFillManager.fillReport(jasperReport, params, getConnection());
JasperExportManager.exportReportToHtmlFile(jasperPrint, reportDest);
JasperViewer.viewReport(jasperPrint);
but I always get an error. Here is the stack trace:
com.google.gwt.user.server.rpc.UnexpectedException:
Service method 'public abstract java.lang.Boolean com.client.service.GenerateReport()'
threw an unexpected exception: java.lang.NoSuchMethodError:
net.sf.jasperreports.engine.fonts.SimpleFontFamily.setExportFonts(Ljava/util/Map);
I am implementing this on the server, and I use RPC calls to invoke this method when a button is clicked.
Can you please help me make this work (that is, integrate JasperReports with GWT)?
I would highly appreciate any explanation with some code, as I am just a beginner.
Thanks
Without the aid of error messages, I would say that you have Google App Engine enabled in your Eclipse project preferences. GAE does NOT allow you to write to the file system or make calls to a database.
Try disabling GAE, and things should work fine.