AEM - Users not synched with User Synchronization using Sling Distribution

I do not see new user info (or updates to user profiles) being synched between Publish and Author.
When I create a new user on Publish, I can see that there is a “SimpleDistributionAgent” entry on Author, but I cannot find the user on Author (I searched the entire CRX).
I did all the OSGi configs as detailed here:
https://docs.adobe.com/docs/en/aem/6-2/administer/security/security/sync.html
I do not see any error in the logs…
Publish error.log
09.03.2017 14:27:41.711 *INFO* [127.0.0.1 [1489091261702] POST /libs/sling/distribution/services/exporters/socialpubsync-reverse HTTP/1.1] org.apache.sling.distribution.servlet.DistributionPackageExporterServlet Processed distribution export request in 8 ms: : fetched 1
09.03.2017 14:27:41.841 *INFO* [127.0.0.1 [1489091261793] POST /libs/sling/distribution/services/exporters/socialpubsync-reverse HTTP/1.1]
org.apache.sling.distribution.agent.impl.SimpleDistributionAgent [agent][socialpubsync-reverse] exported package distrpackage_1489091245609_7459cd18-91d9-404c-bb08-a296dd5d4aa4 with info DistributionPackageInfo{ request.type=ADD,
request.paths=[/home/users/C/C3Pz6GaEbUDD5-rdYr7Z/profile]} from queue default by exporter socialpubsync-reverse
Author error.log
09.03.2017 14:27:41.740 *INFO* [sling-default-19-scheduledEventTriggerorg.apache.sling.distribution.agent.impl.SimpleDistributionAgent$AgentBasedRequestHandler#7971f913]
org.apache.jackrabbit.vault.packaging.impl.JcrPackageDefinitionImpl unwrapping package sling/distribution:socialpubsync-vlt_1489091245573_674d4c01-853e-4c53-8828-31f63dda85d2:0.0.1
09.03.2017 14:27:41.801 *INFO* [sling-threadpool-70fe0a04-9496-4992-803d-ea75f39514ae-(apache-sling-job-thread-pool)-3-org_apache_sling_distribution_queue_socialpubsync_endpoint0(org/apache/sling/distribution/queue/socialpubsync/endpoint0)]
org.apache.sling.distribution.agent.impl.SimpleDistributionAgent [agent][socialpubsync] [endpoint0] PACKAGE-DELIVERED DSTRQ45:
ADD paths=[/home/users/C/C3Pz6GaEbUDD5-rdYr7Z/profile], importTime=6ms, execTime=879ms, size=5058B
No errors in Author and Publish Sync diagnostics
What am I missing?

User synchronization will not create users on Author; the sync is only between Publish instances.
As of AEM 6.1, when user synchronization is enabled, user data is automatically synchronized across the publish instances in the farm and is not created on Author.
https://docs.adobe.com/docs/en/aem/6-2/administer/security/security/sync.html
With the above setup, I started two publish instances (4503, 4504), and when I create or update any user (or profile), the data is synched between both Publish instances.
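To sanity-check the farm sync, you can create a test user on one publisher and then look for it on the other. Below is a minimal, hypothetical Java sketch (JDK only) that assumes default admin:admin credentials, the 4503/4504 ports from above, and the create-user POST endpoint from Adobe's curl-based user administration examples; verify the endpoint, the wait time, and QueryBuilder availability against your own setup.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hypothetical smoke test: create a user on publisher 4503, then (after the scheduled
// sync has had time to run) query publisher 4504 to see whether the user arrived.
public class UserSyncCheck {

    private static final String AUTH = "Basic " + Base64.getEncoder()
            .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));

    public static void main(String[] args) throws Exception {
        // Create a test user on the first publisher (same endpoint used by the
        // documented curl examples for user administration).
        post("http://localhost:4503/libs/granite/security/post/authorizables",
                "createUser=&authorizableId=synccheckuser&rep:password=Test1234");

        // Give the distribution/sync a chance to run before checking the peer.
        Thread.sleep(30_000);

        // Look the user up on the second publisher via QueryBuilder (assumes the
        // QueryBuilder servlet is reachable for the admin user on publish).
        get("http://localhost:4504/bin/querybuilder.json?type=rep:User&nodename=synccheckuser");
    }

    private static void post(String url, String body) throws Exception {
        HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
        c.setRequestMethod("POST");
        c.setRequestProperty("Authorization", AUTH);
        c.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        c.setDoOutput(true);
        try (OutputStream os = c.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("POST " + url + " -> " + c.getResponseCode());
    }

    private static void get(String url) throws Exception {
        HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
        c.setRequestProperty("Authorization", AUTH);
        System.out.println("GET " + url + " -> " + c.getResponseCode());
    }
}

If the user shows up on 4504 but never on the Author instance, that matches the expected behavior described above.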

Related

jBPM 7.XX: Error creating task when authenticating via LDAP

I have integrated jBPM authentication with LDAP, but when I start a process instance, the user task cannot be created.
Here is the server log; can anyone help?
2021-05-14 17:18:39,683 ERROR [org.jbpm.services.task.wih.LocalHTWorkItemHandler] (default task-10) Fri May 14 17:18:39 ICT 2021: Error when creating task on task server for work item id 5. Error reported by task server: There are no known Business Administrators, task cannot be created according to WS-HT specification: org.jbpm.services.task.exception.CannotAddTaskException: There are no known Business Administrators, task cannot be created according to WS-HT specification
at org.jbpm.services.task.commands.UserGroupCallbackTaskCommand.doCallbackOperationForPeopleAssignments(UserGroupCallbackTaskCommand.java:298)
at org.jbpm.services.task.commands.AddTaskCommand.execute(AddTaskCommand.java:109)
at org.jbpm.services.task.commands.AddTaskCommand.execute(AddTaskCommand.java:53)
at org.jbpm.services.task.commands.TaskCommandExecutorImpl$SelfExecutionCommandService.execute(TaskCommandExecutorImpl.java:80)
at org.jbpm.services.task.commands.TaskCommandExecutorImpl$SelfExecutionCommandService.execute(TaskCommandExecutorImpl.java:65)
You need to create an "admin" group, for example:
cn=admin,ou=Roles,dc=jbpm,dc=org
You may take a look at the "LDAP structure" chapter and the source code:
https://blog.kie.org/2021/02/migrating-jbpm-images-secured-by-ldap-to-elytron.html
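For illustration, an LDIF sketch of such a group could look like the following; the groupOfNames objectClass and the member DN (uid=wbadmin,ou=People,dc=jbpm,dc=org) are assumptions here, so adapt them to the structure shown in the blog post:

dn: cn=admin,ou=Roles,dc=jbpm,dc=org
objectClass: top
objectClass: groupOfNames
cn: admin
# hypothetical member; use the DN of the user that should act as Business Administrator
member: uid=wbadmin,ou=People,dc=jbpm,dc=org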

WSO2 API endpoint creation failed: 404 resource not found

I am trying to create a REST API with the WSO2 API Manager to gather data from a Postgres database (for learning purposes). I am struggling to do so, and I would like to know whether:
I did not understand the WSO2 components' roles properly (the technology and subject are new to me),
or there is an error in the way I configured the manager.
System setup
I used this official Docker image, added the Postgres JDBC jar to /repository/components/lib/ and added the following to /repository/conf/datasources/master-datasources.xml:
<datasource>
  <name>s0m3dAtabas3</name>
  <description>The db used for testing purposes</description>
  <definition type="RDBMS">
    <configuration>
      <url>jdbc:postgresql://sandor_postgres:5432/s0m3dAtabas3</url>
      <driverClassName>org.postgresql.Driver</driverClassName>
      <username>s0m3us3rfr0mdAtAMaj0r</username>
      <password>N0t5uchAs1mple1</password>
      <maxActive>80</maxActive>
      <minIdle>5</minIdle>
      <maxWait>60000</maxWait>
      <defaultAutoCommit>false</defaultAutoCommit>
      <testOnBorrow>true</testOnBorrow>
      <validationInterval>30000</validationInterval>
    </configuration>
  </definition>
</datasource>
I made sure that the Postgres container named sandor_postgres is accessible from the WSO2 container with these credentials. In this database, I have a table called something. The image comes with the following UIs:
admin
publisher
store
Graphical API creation
I first followed the WorldBank tutorial, which seemed crystal clear (though I am not quite sure where the data came from). I then tried to adapt it.
Step 1: Design
I used the database name as the context (s0m3dAtabas3), version 1.0.0. Since the table is called something, the URL pattern I end up with is /s0m3dAtabas3/1.0.0/something
Step 2: Implement
This is where things start to get confusing. No matter the resource path I use in the Endpoint (endpoint type REST), I get a 404, and the logs are not very helpful:
http://192.168.8.111:8280 -> 404
http://192.168.8.111:8280/something -> 404
http://192.168.8.111:9443/tried_several -> Invalid - Error connecting to backend
http://192.168.8.111:8243/tried_several -> Invalid - Error connecting to backend
INFO - InboundDBSyncRequestEvent Running DB sync task.
INFO - LogMediator STATUS = Message dispatched to the main sequence. Invalid URL., RESOURCE = /s0m3dAtabas3/1.0.0
INFO - CarbonAuthenticationUtil 'admin@carbon.super [-1234]' logged in at [2019-10-29 11:42:31,030+0000]
INFO - CarbonAuthenticationUtil 'admin@carbon.super [-1234]' logged in at [2019-10-29 11:42:31,197+0000]
INFO - LogMediator STATUS = Message dispatched to the main sequence. Invalid URL., RESOURCE = /s0m3dAtabas3/bullshit
INFO - CarbonAuthenticationUtil 'admin@carbon.super [-1234]' logged in at [2019-10-29 11:48:30,649+0000]
INFO - CarbonAuthenticationUtil 'admin@carbon.super [-1234]' logged in at [2019-10-29 11:48:30,790+0000]
INFO - LogMediator STATUS = Message dispatched to the main sequence. Invalid URL., RESOURCE = /
INFO - InboundDBSyncRequestEvent Running DB sync task.
INFO - LogMediator STATUS = Message dispatched to the main sequence. Invalid URL., RESOURCE = /
Did I miss some important configuration step, or is WSO2 API Manager not the standalone component I thought it was, requiring another component to achieve what I am looking for?
It seems there is a conceptual misunderstanding.
Here is the basic idea of a typical API Management solution.
You have a web service (REST, SOAP, etc...) which you need to expose as a managed API. Now, you can front your service with API Manager and expose it as a managed API with security, rate limiting, managed life cycle etc.
In your case, it seems you don't have such a service, but only have a database table. So, before using API Manager to front your service, you first need to expose your table as a service. For that purpose, I'd suggest you use the data service component of WSO2 EI 7.0.0. See [1] for how to do that. Once you have your service ready, you can use API Manager to expose it as a managed API.
[1] https://ei.docs.wso2.com/en/latest/micro-integrator/use-cases/tutorials/sending-a-simple-message-to-a-datasource/
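As a rough illustration of that approach, a data service definition (.dbs) over the something table could look like the sketch below; the service, config, and query names, the id and name columns, and the response element names are all made up for this example, so check the exact syntax against the data services documentation linked in [1]:

<data name="SomethingDataService" transports="http https">
  <config id="PostgresConfig">
    <property name="driverClassName">org.postgresql.Driver</property>
    <property name="url">jdbc:postgresql://sandor_postgres:5432/s0m3dAtabas3</property>
    <property name="username">s0m3us3rfr0mdAtAMaj0r</property>
    <property name="password">N0t5uchAs1mple1</property>
  </config>
  <query id="selectAllSomething" useConfig="PostgresConfig">
    <sql>SELECT id, name FROM something</sql>
    <result element="somethings" rowName="something">
      <element column="id" name="id" xsdType="string"/>
      <element column="name" name="name" xsdType="string"/>
    </result>
  </query>
  <resource method="GET" path="something">
    <call-query href="selectAllSomething"/>
  </resource>
</data>

Once a service like this is deployed in EI, its HTTP endpoint (rather than the database itself) is what you point the API Manager endpoint at.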

Rundeck and Hipchat Plugin

I am configuring Rundeck at work and I want to receive all notifications from jobs via HipChat. I have found this plugin: https://github.com/hbakkum/rundeck-hipchat-plugin
I copied the .jar file into the Rundeck libext directory and now I see the HipChat option in the job notifications. Even though I entered the room ID and got a token to allow Rundeck to send notifications to this room, nothing happens.
I saw this topic: https://github.com/rundeck/rundeck/issues/764
I am also getting these logs:
2018-01-31 15:15:12,122 [quartzScheduler_Worker-3] INFO grails.app.services.rundeck.services.ExecutionUtilService - Execution successful: 13 in project proyecto_prueba
2018-01-31 15:15:12,501 [quartzScheduler_Worker-3] INFO grails.app.services.rundeck.services.ExecutionService - updated scheduled Execution
2018-01-31 15:15:31,088 [quartzScheduler_Worker-4] ERROR grails.app.services.rundeck.services.PluginService - Notification: configuration was not valid for plugin 'HipChatNotification': apiAuthToken: required
Is the HipChat plugin broken because its last update was in 2016, or am I configuring something wrong?
I found the issue. The HipChat API token has to have the following scopes: Send Notification and View Room. With these scopes, I have to specify the following lines in the Rundeck config files:
configure framework:
framework.plugin.Notification.HipChatNotification.apiVersion=v2
framework.plugin.Notification.HipChatNotification.apiAuthToken=value
configure project:
project.plugin.Notification.HipChatNotification.apiVersion=v2
project.plugin.Notification.HipChatNotification.apiAuthToken=value
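For reference, the framework.* lines above normally go into Rundeck's framework.properties and the project.* lines into the project's project.properties (or the project configuration in the GUI); the exact file locations depend on how Rundeck was installed.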

ATG: Error while baseline indexing - Unable to process any CSF calls as the Credential Store server is not enabled

I am getting the following error while doing a baseline index of my Endeca application in ATG:
15:26:47,891 ERROR [nucleusNamespace.atg.dynamo.security.opss.csf.CredentialStoreManager] (Thread-201) Unable to process any CSF calls as the Credential Store server is not enabled. Please check log for more details
15:26:47,913 INFO [nucleusNamespace.atg.commerce.search.StoreLocationOutputConfig] (Thread-201) Starting bulk load
15:26:47,915 INFO [nucleusNamespace.atg.commerce.endeca.index.CategoryToDimensionOutputConfig] (index-/atg/commerce/endeca/index/ProductCatalogSimpleIndexingAdmin) Failed to cancel incremental load of /atg/commerce/endeca/index/CategoryToDimensionOutputConfig, probably because no bulk load was running.
15:26:47,916 INFO [nucleusNamespace.atg.endeca.index.ConfigImportDocumentSubmitter] (Thread-203) Opening configuration repository connection for application logistore
15:26:47,917 ERROR [nucleusNamespace.atg.dynamo.security.opss.csf.CredentialStoreManager] (Thread-203) Unable to process any CSF calls as the Credential Store server is not enabled. Please check log for more details
15:26:47,916 INFO [nucleusNamespace.atg.commerce.search.ProductCatalogOutputConfig] (index-/atg/commerce/endeca/index/ProductCatalogSimpleIndexingAdmin) Failed to cancel incremental load of /atg/commerce/search/ProductCatalogOutputConfig, probably because no bulk load was running.
15:26:47,917 INFO [nucleusNamespace.atg.commerce.search.StoreLocationOutputConfig] (index-/atg/commerce/endeca/index/ProductCatalogSimpleIndexingAdmin) Failed to cancel incremental load of /atg/commerce/search/StoreLocationOutputConfig, probably because no bulk load was running.
15:26:47,919 INFO [nucleusNamespace.atg.endeca.index.ConfigImportDocumentSubmitter] (Thread-199) Opening configuration repository connection for application logistore
15:26:47,919 ERROR [nucleusNamespace.atg.dynamo.security.opss.csf.CredentialStoreManager] (Thread-199) Unable to process any CSF calls as the Credential Store server is not enabled. Please check log for more details
15:26:47,919 INFO [nucleusNamespace.atg.commerce.endeca.index.ProductCatalogSimpleIndexingAdmin] (Thread-203) Indexing process cancelled, Endeca says: Could not retrieve workbench credential properties from credential store.
15:26:47,919 INFO [nucleusNamespace.atg.endeca.index.ConfigImportDocumentSubmitter] (Thread-207) Opening configuration repository connection for application logistore
15:26:47,920 ERROR [nucleusNamespace.atg.dynamo.security.opss.csf.CredentialStoreManager] (Thread-207) Unable to process any CSF calls as the Credential Store server is not enabled. Please check log for more details
15:26:47,921 INFO [nucleusNamespace.atg.commerce.endeca.index.ProductCatalogSimpleIndexingAdmin] (Thread-207) Indexing process cancelled, Endeca says: Could not retrieve workbench credential properties from credential store.
After doing extensive research, I found that C:\ATG\ATG11.2\home\servers\atg_production_lockserver\localconfig\atg\dynamo\server\OPSSInitializer.properties has the path to jps-config.xml, i.e.
JPSConfigurationLocation=C:/ATG/ATG11.2/home/../home/security/jps-config.xml
This jps-config.xml has some CSF-related configuration.
How can I get rid of this error so the baseline indexing succeeds?
I am stuck on this part.
This happens if you change the default Workbench password. A simple solution would be to change the Endeca Experience Manager password back to admin and try again.
Otherwise, the password needs to be changed in multiple places.
Go to the OPSSInitializer component in dyn admin and check whether the path to jps-config.xml specified there is correct. If not, correct the path.
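For example, assuming the localconfig file quoted in the question, the property to verify in dyn admin (or in the file itself) is JPSConfigurationLocation; it should resolve to an existing jps-config.xml, e.g.:

# localconfig\atg\dynamo\server\OPSSInitializer.properties (file quoted in the question)
JPSConfigurationLocation=C:/ATG/ATG11.2/home/security/jps-config.xml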

[ejabberd w/ smack]: how to successfully create a leaf node inside a pubsub collection node

A registered user created a collection node on my ejabberd server using the Smack library and the following config:
PubSubManager psMgr = new PubSubManager(conn, "pubsub.mydomain");
ConfigureForm CForm = new ConfigureForm(DataForm.Type.submit);
CForm.setAccessModel(AccessModel.open); // anyone can access
CForm.setDeliverPayloads(true); // allow payloads with notifications
CForm.setNotifyDelete(true); // notify subscribers when the node is deleted
CForm.setPersistentItems(true); // save published items in storage on the server
CForm.setPresenceBasedDelivery(false); // notify subscribers even when offline
CForm.setPublishModel(PublishModel.open); // anyone can publish to this node
CForm.setNodeType(NodeType.collection);
CForm.setChildrenAssociationPolicy(ChildrenAssociationPolicy.all);
CForm.setChildrenMax(65536);
psMgr.createNode("/collection_node", CForm);
This collection node is created fine. Note that the children association policy is 'all'.
Now, if a different user, registered on the same server, tries to create a leaf node inside this collection node, the server returns a 'forbidden - auth' error:
ConfigureForm form = new ConfigureForm(DataForm.Type.submit);
form.setNodeType(NodeType.leaf);
form.setCollection("/collection_node");
psMgr.createNode("/collection_node/leaf_node", form);
I have these plugins enabled in my ejabberd server for the pubsub module ["collections", "dag", "flat", "hometree", "pep"].
Can anyone please suggest why the leaf node creation should fail even though the collection node grants 'all' for associating child nodes with itself?
Smack version is: 4.1.2
ejabberd version: (for some weird reason it shows) 0.0. However, the server was installed from the source code available at https://github.com/processone/ejabberd/archive/master.zip in Nov 2015, with Erlang (OTP 17.1) installed at the same time, so it should be pretty much the latest unless I screwed something up during installation.