AEM with TarMK Cold Standby not working

I am trying to configure AEM with TarMK Cold Standby. I have followed the URLs below to configure it:
https://docs.adobe.com/docs/en/aem/6-0/deploy/recommended-deploys/tarmk-cold-standby.html
http://cq-ops.tumblr.com/post/102621710969/how-to-configure-an-aem-6-cold-standby-failover
But I am getting a java.lang.IllegalStateException:
java.lang.IllegalStateException: Attempt to read external blob with blobId
[c184d2a3f1dbc709004a45ae6c5df7624c2ae653#32768] without specifying BlobStore
at org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.getReference(SegmentBlob.java:118)
at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeBlob(SegmentWriter.java:706)
at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeProperty(SegmentWriter.java:808)
at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeProperty(SegmentWriter.java:796)
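This exception typically means the primary repository stores binaries in an external data store while the standby has no blob store configured, so Oak cannot resolve the external blobId during the sync. A minimal sketch of the kind of OSGi configuration involved, assuming a FileDataStore and default paths (the PIDs shown are the Oak ones used by AEM 6.0/6.1; adjust the paths to your installation, and make sure the standby can reach an equivalent data store, shared with or copied from the primary):

# crx-quickstart/install/org.apache.jackrabbit.oak.plugins.blob.datastore.FileDataStore.config
path="./crx-quickstart/repository/datastore"
minRecordLength=I"4096"

# crx-quickstart/install/org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStoreService.config
# tells the segment store to use the external (custom) blob store instead of inlining binaries
customBlobStore=B"true"

The cold standby sync only transfers segment data, not externally stored binaries, so both instances typically need an equivalent data store configuration.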

Related

"SchemaRegistryException: Failed to get Kafka cluster ID" for LOCAL setup

I downloaded the .tz archive (I am on a Mac) for Confluent version 7.0.0 from the official Confluent site and was following the LOCAL (1 node) setup. Kafka and ZooKeeper start fine, but the Schema Registry keeps failing (note: I am behind a corporate VPN).
The exception message in the SchemaRegistry logs is:
[2021-11-04 00:34:22,492] INFO Logging initialized #1403ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
[2021-11-04 00:34:22,543] INFO Initial capacity 128, increased by 64, maximum capacity 2147483647. (io.confluent.rest.ApplicationServer)
[2021-11-04 00:34:22,614] INFO Adding listener: http://0.0.0.0:8081 (io.confluent.rest.ApplicationServer)
[2021-11-04 00:35:23,007] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryException: Failed to get Kafka cluster ID
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.kafkaClusterId(KafkaSchemaRegistry.java:1488)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.<init>(KafkaSchemaRegistry.java:166)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:71)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.configureBaseApplication(SchemaRegistryRestApplication.java:90)
at io.confluent.rest.Application.configureHandler(Application.java:271)
at io.confluent.rest.ApplicationServer.doStart(ApplicationServer.java:245)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:44)
Caused by: java.util.concurrent.TimeoutException
at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:180)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.kafkaClusterId(KafkaSchemaRegistry.java:1486)
... 7 more
My schema-registry.properties file has the bootstrap URL set to
kafkastore.bootstrap.servers=PLAINTEXT://localhost:9092
I saw some posts saying it may be the Schema Registry failing to connect to the Kafka cluster URL, potentially because of the localhost address. I am fairly new to Kafka and basically just need this local setup to run a git repo that uses some topics/Kafka, so my questions:
How can I fix this? (I am behind a corporate VPN, but I figured that shouldn't affect this.)
Do I even need the Schema Registry?
I ended up just going with the local Docker setup instead. The only change I had to make to the docker-compose YAML was the schema-registry port (I changed it to 8082 or 8084, I don't remember exactly, just an unused port not already taken by another Confluent service listed in docker-compose.yaml), and my local setup is working fine now.
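For reference, a minimal sketch of the kind of port override described above, assuming the standard Confluent cp-all-in-one service names (the image tag and the free host port are placeholders):

  schema-registry:
    image: confluentinc/cp-schema-registry:7.0.0
    depends_on:
      - broker
    ports:
      - "8082:8081"   # host port moved off 8081; the container still listens on 8081
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: broker:29092
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081

Clients on the host would then talk to the Schema Registry at http://localhost:8082 while the other Confluent services keep their default ports.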

How to register an app from a private repo in Spring Cloud Data Flow 2.6.1

I'm using SCDF 2.6.1 on OpenShift 3, and I'm hitting an error while registering an app; the error log is below:
java.lang.NullPointerException: null
at org.springframework.cloud.dataflow.configuration.metadata.container.DefaultContainerImageMetadataResolver.getRegistryRequest(DefaultContainerImageMetadataResolver.java:162)
at org.springframework.cloud.dataflow.configuration.metadata.container.DefaultContainerImageMetadataResolver.getImageLabels(DefaultContainerImageMetadataResolver.java:110)
at org.springframework.cloud.dataflow.configuration.metadata.BootApplicationConfigurationMetadataResolver.resolvePortNamesFromContainerImage(BootApplicationConfigurationMetadataResolver.java:215)
at org.springframework.cloud.dataflow.configuration.metadata.BootApplicationConfigurationMetadataResolver.listPortNames(BootApplicationConfigurationMetadataResolver.java:163)
at org.springframework.cloud.dataflow.server.controller.AppRegistryController.getInfo(AppRegistryController.java:193)
at org.springframework.cloud.dataflow.server.controller.AppRegistryController.info(AppRegistryController.java:162)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
I checked the line of code at DefaultContainerImageMetadataResolver.java:162:
// Convert the image name into a well-formed ContainerImage
ContainerImage containerImage = this.containerImageParser.parse(imageName);
// Find a registry configuration that matches the image's registry host
RegistryConfiguration registryConf = this.registryConfigurationMap.get(containerImage.getRegistryHost());
// Retrieve a registry authorizer that supports the configured authorization type.
RegistryAuthorizer registryAuthorizer = this.registryAuthorizerMap.get(registryConf.getAuthorizationType());
I'm pretty sure the error occurs because registryConf is null as a result of
RegistryConfiguration registryConf = this.registryConfigurationMap.get(containerImage.getRegistryHost());
How do I put my private repo URI into registryConfigurationMap?
I have tried putting an imagePullSecret registered with the private repo into the deployment.yml, but I don't think it works, because at startup I still see the following in the log:
2020-09-03 04:55:24.111 INFO 1 --- [ main] urationMetadataResolverAutoConfiguration :
Final Registry Configurations: {registry-1.docker.io=RegistryConfiguration{registryHost='registry-1.docker.io', user='null', secret='****'', authorizationType=dockeroauth2, manifestMediaType='application/vnd.docker.distribution.manifest.v2+json', disableSslVerification='false',
extra={registryAuthUri=https://auth.docker.io/token?service=registry.docker.io&scope=repository:{repository}:pull&offline_token=1&client_id=shell }}}
The only place where the SCDF server downloads a container image layer is when it looks up the app metadata.
Currently, this is configured to use the Docker registry host (as this is where all the out-of-the-box applications are hosted).
If you want to override it, you can set the relevant property values at server startup and proceed.
Note that this configuration is only needed to download the app metadata layer of the image, not to pull the entire container image on the SCDF server side.
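A sketch of the kind of server-level properties meant here, registering a private registry next to the default Docker Hub entry; the registry host, credentials, and authorization type are placeholders, so check the container registry section of the SCDF reference for the exact keys supported by your version:

spring.cloud.dataflow.container.registry-configurations.myprivate.registry-host=myregistry.example.com:5000
spring.cloud.dataflow.container.registry-configurations.myprivate.authorization-type=basicauth
spring.cloud.dataflow.container.registry-configurations.myprivate.user=my-user
spring.cloud.dataflow.container.registry-configurations.myprivate.secret=my-secret
spring.cloud.dataflow.container.registry-configurations.myprivate.disable-ssl-verification=true

Once a configuration whose registry-host matches the image's registry host is present, it should appear in the "Final Registry Configurations" startup log line, and registryConfigurationMap.get(containerImage.getRegistryHost()) should no longer return null.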

Starting Druid to consume data from Kafka

I took the latest version of Druid, 0.16.0-incubating.
I have two questions.
1) As mentioned in the quick start, micro-quickstart doesn't work: it complains that there are no jvm.config and main.config files under /conf/druid/single-server/micro-quickstart/coordinator-overlord.
2) Since micro-quickstart failed, I started trying single-server-small.
I was trying to import data from Kafka with single-server-small but was unable to do so: it says the extension is not loaded, even though I can see it being loaded in the logs.
But I think my main problem is that whenever I land on the 'Load data' section of the Druid web console at localhost:8888, it keeps giving me the error below:
"Failed to get overlord modules: Unable to determine destination for [/proxy/overlord/status]; is your coordinator/overlord running?"
I can see that my coordinator-overlord process is up.
Any suggestions?
Thanks
Delete your Kafka logs and Druid logs (the locations set in common.runtime.properties) and then try again.
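A sketch of what that cleanup can look like with the stock quickstart layout; the paths are assumptions (the Druid quickstart keeps its state under var/ in the install directory, and Kafka's default log.dirs is /tmp/kafka-logs), so verify them in common.runtime.properties and Kafka's server.properties before deleting anything:

# stop Druid and Kafka first
rm -rf /path/to/apache-druid-0.16.0-incubating/var   # segments, task logs and local metadata store
rm -rf /tmp/kafka-logs                               # default Kafka log.dirs

After clearing the stale state, restart ZooKeeper, Kafka, and Druid and retry the Kafka ingestion from the Load data screen.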

ATG: Error while baseline indexing - Unable to process any CSF calls as the Credential Store server is not enabled

I am getting the following error while doing a baseline index of my Endeca application in ATG:
15:26:47,891 ERROR [nucleusNamespace.atg.dynamo.security.opss.csf.CredentialStoreManager] (Thread-201) Unable to process any CSF calls as the Credential Store server is not enabled. Please check log for more details
15:26:47,913 INFO [nucleusNamespace.atg.commerce.search.StoreLocationOutputConfig] (Thread-201) Starting bulk load
15:26:47,915 INFO [nucleusNamespace.atg.commerce.endeca.index.CategoryToDimensionOutputConfig] (index-/atg/commerce/endeca/index/ProductCatalogSimpleIndexingAdmin) Failed to cancel incremental load of /atg/commerce/endeca/index/CategoryToDimensionOutputConfig, probably because no bulk load was running.
15:26:47,916 INFO [nucleusNamespace.atg.endeca.index.ConfigImportDocumentSubmitter] (Thread-203) Opening configuration repository connection for application logistore
15:26:47,917 ERROR [nucleusNamespace.atg.dynamo.security.opss.csf.CredentialStoreManager] (Thread-203) Unable to process any CSF calls as the Credential Store server is not enabled. Please check log for more details
15:26:47,916 INFO [nucleusNamespace.atg.commerce.search.ProductCatalogOutputConfig] (index-/atg/commerce/endeca/index/ProductCatalogSimpleIndexingAdmin) Failed to cancel incremental load of /atg/commerce/search/ProductCatalogOutputConfig, probably because no bulk load was running.
15:26:47,917 INFO [nucleusNamespace.atg.commerce.search.StoreLocationOutputConfig] (index-/atg/commerce/endeca/index/ProductCatalogSimpleIndexingAdmin) Failed to cancel incremental load of /atg/commerce/search/StoreLocationOutputConfig, probably because no bulk load was running.
15:26:47,919 INFO [nucleusNamespace.atg.endeca.index.ConfigImportDocumentSubmitter] (Thread-199) Opening configuration repository connection for application logistore
15:26:47,919 ERROR [nucleusNamespace.atg.dynamo.security.opss.csf.CredentialStoreManager] (Thread-199) Unable to process any CSF calls as the Credential Store server is not enabled. Please check log for more details
15:26:47,919 INFO [nucleusNamespace.atg.commerce.endeca.index.ProductCatalogSimpleIndexingAdmin] (Thread-203) Indexing process cancelled, Endeca says: Could not retrieve workbench credential properties from credential store.
15:26:47,919 INFO [nucleusNamespace.atg.endeca.index.ConfigImportDocumentSubmitter] (Thread-207) Opening configuration repository connection for application logistore
15:26:47,920 ERROR [nucleusNamespace.atg.dynamo.security.opss.csf.CredentialStoreManager] (Thread-207) Unable to process any CSF calls as the Credential Store server is not enabled. Please check log for more details
15:26:47,921 INFO [nucleusNamespace.atg.commerce.endeca.index.ProductCatalogSimpleIndexingAdmin] (Thread-207) Indexing process cancelled, Endeca says: Could not retrieve workbench credential properties from credential store.
After doing extensive research I found that C:\ATG\ATG11.2\home\servers\atg_production_lockserver\localconfig\atg\dynamo\server\OPSSInitializer.properties holds the path to jps-config.xml, i.e.
JPSConfigurationLocation=C:/ATG/ATG11.2/home/../home/security/jps-config.xml
This jps-config.xml has some CSF-related configuration.
How can I get rid of this error so that the baseline index succeeds?
I am stuck on this part.
This happens if you change the default workbench password. A simple solution would be to change the Endeca Experience Manager password back to admin and try again.
Otherwise, the password needs to be changed in multiple places.
Thanks,
Ajay Agrawal
Go to the OPSSInitializer component in dyn/admin and check whether the path for jps-config.xml specified there is correct. If not, correct the path.
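For reference, the JPSConfigurationLocation value from the question normalizes to the path below; a quick check is whether jps-config.xml actually exists at that location and is readable by the ATG server (the install root is simply the one from the question):

JPSConfigurationLocation=C:/ATG/ATG11.2/home/security/jps-config.xml

If the file is missing, or points at a credential store that was never set up, the CredentialStoreManager will typically keep reporting that the Credential Store server is not enabled.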

MSDTC (Distributed Transaction Coordinator) Stops working. Error code -1073737669

I cannot start the Distributed Transaction Coordinator service.
It stopped working a few days ago.
When I try to start the service:
Registry properties:
RPC (for a test, the values here were changed to the opposite and back, without any result):
Windows logs \ application logs:
Event 53283
A MS DTC component has encountered an internal error. The process is being terminated. Error Specifics: DtcSystemShutdown (d:\w7rtm\com\complus\dtc\dtc\msdtc\src\msdtc.cpp#2539): Shutting down with an error
Event 4111
The MS DTC service is stopping.
Event 4102
DTC Security Configuration values (OFF = 0 and ON = 1): Network Administration of Transactions = 1,
Network Clients = 1,
Inbound Distributed Transactions using Native MSDTC Protocol = 1,
Outbound Distributed Transactions using Native MSDTC Protocol = 1,
Transaction Internet Protocol (TIP) = 0,
XA Transactions = 1,
SNA LU 6.2 Transactions = 1
Could not initialize the MS DTC Transaction Manager.
Event 4356
Failed to initialize the MS DTC Communication Manager. Error Specifics: hr = 0x80070057, d:\w7rtm\com\complus\dtc\dtc\cm\src\ccm.cpp:2117, CmdLine: C:\Windows\System32\msdtc.exe, Pid: 5332
Event 4358
The MS DTC Connection Manager is unable to register with RPC to use one of LRPC, TCP/IP, or UDP/IP. Please ensure that RPC is configured properly. If "ServerTcpPort" registry key is configured (DWORD value under the HKEY_LOCAL_MACHINE\Software\Microsoft\MSDTC key for the local DTC instance or under the cluster hive for a clustered DTC instance), please verify if the configured port is valid and the port is not already in use by a different component. Error Specifics: hr = 0x80070057, d:\w7rtm\com\complus\dtc\dtc\cm\src\iomgrsrv.cpp:2523, CmdLine: C:\Windows\System32\msdtc.exe, Pid: 5332
Event 4156
String message: RPC raised an exception with a return code RPC_S_INVALID_ARG.
I found that the msdtc -resetlog command can be used, but this does not resolve my problem.
The firewall is disabled.
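For reference, the -resetlog step mentioned above is usually run from an elevated command prompt roughly as follows (a sketch; the service must be stopped before the log is reset):

net stop msdtc
msdtc -resetlog
net start msdtc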
Try deleting the HKLM\Software\Microsoft\Rpc\Internet key from the registry.
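A sketch of doing that from an elevated command prompt, exporting the key first as a backup (the backup file name is just an example):

reg export "HKLM\SOFTWARE\Microsoft\Rpc\Internet" rpc-internet-backup.reg
reg delete "HKLM\SOFTWARE\Microsoft\Rpc\Internet" /f

Then restart the MSDTC service.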
To get around this issue, I had to copy the log file (which I had accidentally deleted) back to the location specified under the Local DTC log information settings.