Could not find work item handler for Email - drools

I am trying to send an email alert from a BPM flow in Drools.
Below are the steps I have taken:
Step 1: Added an Email task to the BPM flow and configured it.
Step 2: Configured the work item handler:
new org.jbpm.process.workitem.email.EmailWorkItemHandler("smtp.gmail.com","587","hello@gmail.com","xseregrgr","true")
Step 3: Made the following changes to the standalone.xml configuration file:
<subsystem xmlns="urn:jboss:domain:mail:3.0">
    <mail-session name="default" jndi-name="java:jboss/mail/Default">
        <smtp-server outbound-socket-binding-ref="mail-smtp" username="hello@gmail.com" password="xyz" tls="true"/>
    </mail-session>
</subsystem>
<outbound-socket-binding name="mail-smtp">
    <remote-destination host="smtp.gmail.com" port="587"/>
</outbound-socket-binding>
After all this, during testing we received: Could not find work item handler for Email.
Did I miss anything? Please let me know.

You are missing the WorkItem Definition. You can give it a name like "Gmail email client send", and it should look something like this:
[
    [
        "name" : "Email",
        "displayName" : "Gmail email client send",
        "category" : "jbpm-workitems-email",
        "description" : "",
        "defaultHandler" : "mvel: new org.jbpm.process.workitem.email.EmailWorkItemHandler()",
        "documentation" : "jbpm-workitems-email/index.html",
        "parameters" : [
            "Reply-To" : new StringDataType(),
            "Cc" : new StringDataType(),
            "Bcc" : new StringDataType(),
            "From" : new StringDataType(),
            "To" : new StringDataType(),
            "Body" : new StringDataType(),
            "Attachments" : new StringDataType(),
            "Subject" : new StringDataType(),
            "Debug" : new StringDataType(),
            "Template" : new StringDataType()
        ],
        "mavenDependencies" : [
            "org.jbpm:jbpm-workitems-email:7.34.0.Final"
        ],
        "icon" : "Email.png"
    ]
]
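If you are running the engine embedded instead of in Business Central, the handler also has to be registered against the session under the same "Email" name so it matches this definition. A minimal Java sketch, reusing the placeholder credentials from the question:

import org.jbpm.process.workitem.email.EmailWorkItemHandler;
import org.kie.api.runtime.KieSession;

public class EmailHandlerSetup {

    // Registers the email handler; the "Email" name must match both the
    // WorkItem Definition above and the task name in the BPMN model.
    public static void registerEmailHandler(KieSession ksession) {
        EmailWorkItemHandler handler = new EmailWorkItemHandler(
                "smtp.gmail.com", "587",         // SMTP host and port
                "hello@gmail.com", "xseregrgr",  // placeholder credentials
                "true");                         // startTLS
        ksession.getWorkItemManager().registerWorkItemHandler("Email", handler);
    }
}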

The solution mentioned here works fine with the latest JBPM 7.25 version. Here are some more details about the individual steps.
It is important that you delete everything you added so far related to email sending (e.g. work item handler entries, deployment configs, Email send tasks in the process model). This could be outdated or negatively influence the configuration.
Start by opening your "Project Settings" in Business Central and click "Install" for the "Email" Service Task.
Provide your email provider configuration (host, port, user name, password). E.g. for Gmail this can be found here.
Warning: Please note that for Gmail you need to activate access for "Less secure apps" here to connect to the SMTP server.
With the provided information, JBPM generates the required WorkItemHandler configuration (you need to refresh the page to see it). If necessary, you can update the values here later when you need to change the mail provider configuration.
Next we build a new business process model and add the required Email task.
Finally we configure the Email task with the information for sending the email (e.g. From, To, Subject, Body).
That's all. Now you can deploy and run the process.
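For reference, the WorkItemHandler configuration that Business Central generates ends up in the project's deployment descriptor and looks roughly like this (a sketch; the exact values come from what you entered in the wizard):

<work-item-handlers>
    <work-item-handler>
        <resolver>mvel</resolver>
        <identifier>new org.jbpm.process.workitem.email.EmailWorkItemHandler("smtp.gmail.com", "587", "hello@gmail.com", "xyz", "true")</identifier>
        <parameters/>
        <name>Email</name>
    </work-item-handler>
</work-item-handlers>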

Go to Settings / Service Tasks and install Email.

Related

Setup Apereo Cas Management integrated with CAS server

I want to install Apereo CAS Management (version 6.0) and integrate it with CAS Server (version 6.0).
I have installed it following these steps:
Step 1: I installed CAS Server.
I checked it with the REST API, and it worked.
My server runs at http://203.162.141.7:8080
This is the configuration of my CAS server, which I put at /etc/cas/config. Here is my cas.properties file:
cas.server.name=http://203.162.141.7:8080
cas.server.prefix=${cas.server.name}/cas
logging.config: file:/etc/cas/config/log4j2.xml
server.port=8080
server.ssl.enabled=false
cas.serviceRegistry.initFromJson=false
cas.serviceRegistry.json.location=file:/etc/cas/services-repo
cas.authn.oauth.grants.resourceOwner.requireServiceHeader=true
cas.authn.oauth.userProfileViewType=NESTED
cas.authn.policy.requiredHandlerAuthenticationPolicyEnabled=false
cas.authn.attributeRepository.stub.attributes.email=casuser@example.org
#REST API JSON
cas.rest.attributeName=email
cas.rest.attributeValue=.+example.*
Step 2: I installed cas-management-overlay.
I put my cas-management-overlay's config file at /etc/cas/config too. Here is my management.properties file:
cas.server.name=http://203.162.141.7:8080
cas.server.prefix=${cas.server.name}/cas
mgmt.serverName=http://203.162.141.7:8088
mgmt.adminRoles[0]=ROLE_ADMIN
mgmt.userPropertiesFile=file:/etc/cas/config/users.json
server.port=8088
server.ssl.enabled=false
logging.config=file:/etc/cas/config/log4j2-management.xml
And here is my users.json file:
{
"casuser" : {
"#class" : "org.apereo.cas.mgmt.authz.json.UserAuthorizationDefinition",
"roles" : [ "ROLE_ADMIN" ]
}
}
Then I run ./build.sh.
Finally, I access this link to open cas-management: http://203.162.141.7:8088/cas-management, but it redirects to http://203.162.141.7:8080/cas/login?service=http%3A%2F%2F203.162.141.7%3A8088%2Fcas-management%2F and shows an error.
I don't know where I have gone wrong.
I think since you haven't told the management webapp about the location of the service registry, it can't add itself as a registered service.
Manually add a registered service for http://203.162.141.7:8088/cas-management and you should be able to log in to the management app at that point.
Here is my answer: the cas-management registration file, named /etc/cas/services-repo/casManagement-1.json:
{
    "@class" : "org.apereo.cas.services.RegexRegisteredService",
    "serviceId" : "^https://domain:8088/cas-management.+",
    "name" : "casManagement",
    "id" : 1,
    "evaluationOrder" : 1,
    "allowedAttributes" : [ "cn", "mail" ]
}
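Note that the serviceId regex above assumes an https URL on a placeholder domain; for the plain-http deployment from the question it would have to match the actual management URL, along these lines:

{
    "@class" : "org.apereo.cas.services.RegexRegisteredService",
    "serviceId" : "^http://203.162.141.7:8088/cas-management.+",
    "name" : "casManagement",
    "id" : 1,
    "evaluationOrder" : 1,
    "allowedAttributes" : [ "cn", "mail" ]
}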

Can anyone help me with this error code in Data Fusion

I'm having a go at creating my first Data Fusion pipeline.
The data is going from a CSV file in Google Cloud Storage to BigQuery.
I have created the pipeline and carried out a preview run, which was successful, but trying to run it after deployment resulted in an error.
I pretty much accepted all the default settings, apart from obviously configuring my source and destination.
Error from the log:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
{
    "code" : 403,
    "errors" : [ {
        "domain" : "global",
        "message" : "Required 'compute.firewalls.list' permission for 'projects/xxxxxxxxxxx'",
        "reason" : "forbidden"
    } ],
    "message" : "Required 'compute.firewalls.list' permission for 'projects/xxxxxxxxxx'"
}
After deployment, the run fails.
Do note that as part of creating an instance, you must set up permissions [0]. The role "Cloud Data Fusion API Service Agent" must be granted to the exact service account specified in that document, which has an email address that begins with "cloud-datafusion-management-sa@...".
Doing so should resolve your issue.
[0] : https://cloud.google.com/data-fusion/docs/how-to/create-instance#setting_up_permissions
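As a sketch, granting that role from the command line could look like the following; the project ID is hypothetical, and the exact service account address must be taken from the instance details page or the linked document:

gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:cloud-datafusion-management-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/datafusion.serviceAgent"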

XS project shared to SAP HANA cannot be seen in browser

I have an XS project that I have already shared to HANA packages, but it fails when I open it in the browser. The error shows:
404 - Not found
We could not find the resource you're trying to access.
It might be misspelled or currently unavailable.
My .xsaccess:
{
    "exposed" : true,
    "authentication" : [{"method":"Basic"}],
    "cache_control" : "no-cache, no-store",
    "cors" : {
        "enabled" : false
    }
}
.xsapp:
{}
.xsprivileges:
{
    "privileges" : [
        { "name" : "ProfileOwner", "description" : "Profile Ownership" }
    ]
}
And one question: is it possible that the problem is caused by the user's role or privileges, i.e. authorization? How do I fix this issue? Thanks.
The .xsapp should be an empty file with no content in it. The exposed parameter in the .xsaccess should be enough to expose your project. Make sure that all files are activated in the HANA repository.
If the error were authorization-specific, you would get a 503 error. If the 404 error is an XSEngine page, either your code isn't activated or the package path is incorrect.
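As a quick check, an activated XS resource is reachable in the browser under its repository package path, along these lines (host, instance number, and package are placeholders):

http://<hana_host>:80<instance_number>/<package>/<subpackage>/index.html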

Difficulties understanding the Spark REST API, with the goal of sending RESTful messages from a webpage

For a project I would like to run Spark via a webpage. The goal is to dynamically submit submission requests and status updates. As inspiration I used the following weblink: http://arturmkrtchyan.com/apache-spark-hidden-rest-api
I am sending a REST request to check the Spark submission after submitting the Spark request below.
The request code for a Spark job submission is the following:
curl -X POST http://sparkmasterIP:6066/v1/submissions/create --header "Content-Type:application/json;charset=UTF-8" --data '{
    "action" : "CreateSubmissionRequest",
    "appArgs" : [ "/home/opc/TestApp.jar" ],
    "appResource" : "file:/home/opc/TestApp.jar",
    "clientSparkVersion" : "1.6.0",
    "environmentVariables" : {
        "SPARK_ENV_LOADED" : "1"
    },
    "mainClass" : "com.Test",
    "sparkProperties" : {
        "spark.driver.supervise" : "false",
        "spark.app.name" : "TestJob",
        "spark.eventLog.enabled" : "true",
        "spark.submit.deployMode" : "cluster",
        "spark.master" : "spark://sparkmasterIP:6066"
    }
}'
Response:
{
    "action" : "CreateSubmissionResponse",
    "message" : "Driver successfully submitted as driver-20170302152313-0044",
    "serverSparkVersion" : "1.6.0",
    "submissionId" : "driver-20170302152313-0044",
    "success" : true
}
When asking for the submission status, there were some difficulties. To request the submission status, I used the submissionId displayed in the response above. So the following command was used:
curl http://masterIP:6066/v1/submissions/status/driver-20170302152313-0044
The response for the submission status contained the following error:
"message" : "Exception from the cluster:\njava.io.FileNotFoundException: /home/opc/TestApp.jar denied)\n\tjava.io.FileInputStream.open0(Native Method)\n\tjava.io.FileInputStream.open(FileInputStream.java:195)\n\tjava.io.FileInputStream.<init>(FileInputStream.java:138)\n\torg.spark-project.guava.io.Files$FileByteSource.openStream(Files.java:124)\n\torg.spark-project.guava.io.Files$FileByteSource.openStream(Files.java:114)\n\torg.spark-project.guava.io.ByteSource.copyTo(ByteSource.java:202)\n\torg.spark-project.guava.io.Files.copy(Files.java:436)\n\torg.apache.spark.util.Utils$.org$apache$spark$util$Utils$$copyRecursive(Utils.scala:540)\n\torg.apache.spark.util.Utils$.copyFile(Utils.scala:511)\n\torg.apache.spark.util.Utils$.doFetchFile(Utils.scala:596)\n\torg.apache.spark.util.Utils$.fetchFile(Utils.scala:395)\n\torg.apache.spark.deploy.worker.DriverRunner.org$apache$spark$deploy$worker$DriverRunner$$downloadUserJar(DriverRunner.scala:150)\n\torg.apache.spark.deploy.worker.DriverRunner$$anon$1.run(DriverRunner.scala:79)",
My question is how to use this API in such a way that the submission status can be obtained. If there is another API where the correct status can be obtained, I would like a short description of how that API works in a RESTful way.
Thanks
As noted in the comments of the blog http://arturmkrtchyan.com/apache-spark-hidden-rest-api , several other commenters are experiencing this problem as well. Below I will try to explain some of the possible reasons.
It looks like your file:/home/opc/TestApp.jar is not found or access to it is denied. This is likely because the jar is not present on all nodes while the Spark submission runs in cluster mode.
As noted in the Spark documentation for the application jar: "application-jar: Path to a bundled jar including your application and all dependencies. The URL must be globally visible inside of your cluster, for instance, an hdfs:// path or a file:// path that is present on all nodes."
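Following that note, a sketch of a corrected submission request: assuming the jar has first been copied to HDFS (hypothetical namenode URL), only the appResource changes compared to the original payload:

curl -X POST http://sparkmasterIP:6066/v1/submissions/create --header "Content-Type:application/json;charset=UTF-8" --data '{
    "action" : "CreateSubmissionRequest",
    "appArgs" : [ "/home/opc/TestApp.jar" ],
    "appResource" : "hdfs://namenode:8020/user/opc/TestApp.jar",
    "clientSparkVersion" : "1.6.0",
    "environmentVariables" : {
        "SPARK_ENV_LOADED" : "1"
    },
    "mainClass" : "com.Test",
    "sparkProperties" : {
        "spark.driver.supervise" : "false",
        "spark.app.name" : "TestJob",
        "spark.eventLog.enabled" : "true",
        "spark.submit.deployMode" : "cluster",
        "spark.master" : "spark://sparkmasterIP:6066"
    }
}'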
Alternatively, you can check the driver status using spark-submit. More information about spark-submit can be found in the Spark documentation and in the book by Jacek Laskowski:
spark-submit --status [submission ID] --master [spark://...]

Is CMS Replication required for ApplicationPool also?

Is CMS Replication required for ApplicationPool also?
When I run the command Get-CsManagementStoreReplicationStatus, I get UpToDate : True for my domain, but it comes back False for my ApplicationPool.
UpToDate : True
ReplicaFqdn : ****.*****
LastStatusReport : 07-08-2014 11:42:26
LastUpdateCreation : 07-08-2014 11:42:26
ProductVersion : 5.0.8308.0

UpToDate : False
ReplicaFqdn : MyApplicationPool.****.*****
LastStatusReport :
LastUpdateCreation : 08-08-2014 15:16:03
ProductVersion :

UpToDate : False
ReplicaFqdn : ****.*****
LastStatusReport :
LastUpdateCreation : 08-08-2014 15:10:59
Am I on the right track? Have I created my ApplicationPool wrongly?
Yes, UCMA applications running on an app server generally require access to the CMS, so replication should be enabled.
On the app server, you'd need to:
Ensure the "Lync Server Replica Replicator Agent" service is running
Run Enable-CsReplica in the management shell
Run Enable-CsTopology
Then run Invoke-CsManagementStoreReplication to force a replication
I've noticed that it often takes a while for the CMS to be replicated to the app server, so you might need to run Get-CsManagementStoreReplicationStatus a few times before you see UpToDate change to True.
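Putting those steps together in the Lync Server Management Shell on the app server (the pool FQDN below is a placeholder):

# Enable the replica and re-publish the topology
Enable-CsReplica
Enable-CsTopology
# Force a replication cycle, then poll until UpToDate flips to True
Invoke-CsManagementStoreReplication
Get-CsManagementStoreReplicationStatus -ReplicaFqdn "MyApplicationPool.example.com"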