xDB not storing any Interactions - mongodb

Update: So the issue was that my application Global.asax.cs did not derive from Sitecore.Web.Application.
So I have just installed Sitecore 8 with MongoDB 2.6.11.
For test purposes, I placed the code below in a page load event to trigger a goal I had created earlier in Sitecore.
The goal was created successfully via deploy and publish, and I have confirmed that the item ID of the goal is correct.
if (Sitecore.Analytics.Tracker.IsActive && Sitecore.Analytics.Tracker.Current.CurrentPage != null)
{
    Sitecore.Data.Items.Item GoaltoTrigger = Sitecore.Context.Database.GetItem("{EDA8EA2C-7AF5-4D0F-AF76-A9C4E6BD7169}");
    if (GoaltoTrigger != null)
    {
        Sitecore.Analytics.Data.Items.PageEventItem registerthegoal = new Sitecore.Analytics.Data.Items.PageEventItem(GoaltoTrigger);
        Sitecore.Analytics.Model.PageEventData eventData = Sitecore.Analytics.Tracker.Current.CurrentPage.Register(registerthegoal);
        eventData.Data = GoaltoTrigger["Description"];
        Sitecore.Analytics.Tracker.Current.Interaction.AcceptModifications();
    }
}
Session.Abandon();
Sadly this did not work, and I cannot see the goal in xDB under Interactions.
Other things I have checked: my layout definitely includes the tag
<sc:VisitorIdentification runat="server" />
and my Global.asax derives from Sitecore.Web.Application:
public class Global : Sitecore.Web.Application
Yet no luck. The interactions are nowhere to be seen in Mongo (I used the mongo shell and Robomongo to look at the collection). Am I missing something else?
Sitecore Error
ManagedPoolThread #3 16:43:00 INFO Job ended: Sitecore.ListManagement.Analytics.UnlockContactListsAgent (units processed: )
12980 16:43:05 INFO Cache created: '[no name]' (max size: 976KB, running total: 2918MB)
12980 16:43:05 INFO Cache created: '[no name]' (max size: 976KB, running total: 2919MB)
12980 16:43:05 INFO Cache created: '[no name]' (max size: 976KB, running total: 2920MB)
12980 16:43:05 INFO Cache created: '[no name]' (max size: 976KB, running total: 2921MB)
12980 16:43:05 INFO Cache created: '[no name]' (max size: 976KB, running total: 2922MB)
ManagedPoolThread #5 16:43:06 ERROR Failed to perform MaxMind lookup
ManagedPoolThread #5 16:43:06 ERROR Failed to perform GeoIp lookup for 127.0.0.1
Exception: Sitecore.Analytics.Lookups.CannotParseResponseException
Message: Unexpected format. Cannot parse the MaxMind response for IP address: 127.0.0.1
Source: Sitecore.Analytics
at Sitecore.Analytics.Lookups.MaxMindProvider.GetInformationByIp(String ip)
at Sitecore.Analytics.Lookups.GeoIpManager.GetDataFromLookupProvider(GeoIpHandle geoIpHandle)
12980 16:43:08 INFO Cache created: 'WebUtil.QueryStringCache' (max size: 19KB, running total: 2922MB)
2700 16:43:08 INFO HttpModule is being initialized
12360 16:43:08 INFO HttpModule is being initialized
7068 16:43:08 INFO HttpModule is being initialized
9940 16:43:10 INFO [Experience Analytics]: Reduce agent found zero segments to process
ManagedPoolThread #1 16:43:10 INFO Job started: Sitecore.ListManagement.Analytics.UnlockContactListsAgent
ManagedPoolThread #1 16:43:10 INFO Job ended: Sitecore.ListManagement.Analytics.UnlockContactListsAgent (units processed: )

Triggering goals
First of all, this is the correct way to trigger a goal:
if (Sitecore.Analytics.Tracker.IsActive)
{
    if (Sitecore.Analytics.Tracker.Current.CurrentPage != null)
    {
        var goalId = new Sitecore.Data.ID("{EDA8EA2C-7AF5-4D0F-AF76-A9C4E6BD7169}");
        Sitecore.Analytics.Data.Items.PageEventItem goalToTrigger =
            Sitecore.Analytics.Tracker.DefinitionItems.PageEvents[goalId];
        if (goalToTrigger != null)
        {
            Sitecore.Analytics.Model.PageEventData eventData =
                Sitecore.Analytics.Tracker.Current.CurrentPage.Register(goalToTrigger);
        }
        else
        {
            Sitecore.Diagnostics.Log.Error("Goal with ID " + goalId + " does not exist", this);
        }
    }
    else
    {
        Sitecore.Diagnostics.Log.Error("Tracker.Current.CurrentPage is null", this);
    }
}
else
{
    Sitecore.Diagnostics.Log.Warn("The tracker is not active. Unable to register the goal.", this);
}
You should not attempt to change the event data after you've registered it.
Also, you should not call Interaction.AcceptModifications(), as this method is something xDB uses internally at some point.
CurrentPage.Register() is the only thing that you need to do.
Ending the session
I don't recommend using Session.Abandon(). It will probably result in saving your interaction to the collection database, but this way you are disrupting the normal flow of Sitecore's session. One of the problems that this may lead to is that the interaction's contact will remain locked for 21 minutes (or whatever your session timeout is set to + 1 minute).
Instead, for testing purposes, I recommend setting the session timeout to 1 minute and simply waiting a minute after your last page request. This setting is located in the Web.config as an attribute of <sessionState>.
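For example, in Web.config (only the timeout value, which is in minutes, matters for this test; the mode attribute shown here is just the common ASP.NET default and may differ in your setup):

<system.web>
  <!-- timeout is in minutes; 1 minute lets the interaction flush to xDB quickly after the last request -->
  <sessionState mode="InProc" timeout="1" />
</system.web>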
Troubleshooting interaction saving issues
Make sure that the analytics connection string is set properly.
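For a default local setup, the xDB entry in App_Config\ConnectionStrings.config usually looks roughly like this (the host, port, and database name are assumptions and depend on your MongoDB installation):

<connectionStrings>
  <!-- xDB collection database; adjust host, port and database name to your MongoDB instance -->
  <add name="analytics" connectionString="mongodb://localhost:27017/analytics" />
</connectionStrings>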
Make sure that you have a license for xDB. You can see the list of available licenses in Sitecore Control Panel –> Administration –> Installed licenses.
a) In Sitecore 8.0 or lower, the license name is Sitecore.OMS.
b) In Sitecore 8.1 it's Sitecore.xDB.base.
Make sure that xDB and its tracking subsystem are enabled.
a) In Sitecore 8.0 or lower, Analytics.Enabled should be set to true in the Sitecore.Analytics.config.
b) In Sitecore 8.1, both Xdb.Enabled and Xdb.Tracking.Enabled should be set to true in the Sitecore.Xdb.config.
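For reference, the settings in question look like this (a sketch; apply via a patch file or edit the config directly):

<!-- Sitecore 8.0 or lower (Sitecore.Analytics.config) -->
<setting name="Analytics.Enabled" value="true" />

<!-- Sitecore 8.1 (Sitecore.Xdb.config) -->
<setting name="Xdb.Enabled" value="true" />
<setting name="Xdb.Tracking.Enabled" value="true" />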
Tracking should also be enabled on site definitions.
a) In Sitecore 8.0 or lower, go to the <sites> section in the Web.config and check that enableAnalytics is not set to false on <site name="website"> or whatever site you are using.
b) In Sitecore 8.1, you should ensure that enableTracking is set to true for your site in the Sitecore.config.
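In other words, the site definition should end up looking roughly like this (remaining attributes omitted; note that the attribute name differs between versions):

<!-- Sitecore 8.0 or lower: <sites> section in Web.config -->
<site name="website" enableAnalytics="true" ... />

<!-- Sitecore 8.1: <sites> section in Sitecore.config -->
<site name="website" enableTracking="true" ... />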
Try making several page requests instead of just one before letting the session expire.
Try disabling robot detection by setting both Analytics.Robots.IgnoreRobots and Analytics.AutoDetectBots to false in the Sitecore.Analytics.Tracking.config. If interactions are saved after this, I will update my answer with further instructions.
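For example, the two settings (defined in Sitecore.Analytics.Tracking.config) would look like this once disabled:

<setting name="Analytics.Robots.IgnoreRobots" value="false" />
<setting name="Analytics.AutoDetectBots" value="false" />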
If nothing helps, go through the steps listed in the article Troubleshooting xDB data issues.

Related

PWA not working correctly offline. Uncaught (in promise) TypeError: Failed to fetch

I am trying to convert a simple webpage I have into a PWA in case the site it uses goes down.
I think I have done the majority of the work. The page is installable on my phone and passes all the Chrome Lighthouse tests, but I get the following warning:
Web app manifest meets the installability requirements
Warnings: Page does not work offline. The page will not be regarded as installable after Chrome 93, stable release August 2021.
I also get the following warning and error in console,
The FetchEvent for "https://dannyj1984.github.io/index.html" resulted in a network error response: the promise was rejected.
Promise.then (async)
(anonymous) # serviceWorker.js:30
serviceWorker.js:1 Uncaught (in promise) TypeError: Failed to fetch
There is then a warning saying the site cannot be installed as it does not work offline. I have read the Chrome developer article which says that, from the Chrome release in August 2021, apps that don't work offline won't be installable. But I am stuck on which part of my fetch is causing the issue. The code in my service worker is:
const TGAbxApp = "TG-ABX-App-v1"
const assets = [
    //paths to files to add
]

self.addEventListener("install", installEvent => {
    installEvent.waitUntil(
        caches.open(TGAbxApp).then(cache => {
            cache.addAll(assets)
        })
    )
})

self.addEventListener('fetch', function(event) {
    event.respondWith(
        caches.match(event.request)
            .then(function(response) {
                // Cache hit - return response
                if (response) {
                    return response;
                }
                return fetch(event.request);
            })
    );
});
I took the fetch part of the service worker code above from Google. As I understand it, it first checks whether the requested resource is in the cache stored on install; if not, it requests it from the network.
https://developer.chrome.com/blog/improved-pwa-offline-detection/
From Chrome 89 (March 2021), Chrome gives a warning if this check does not pass:
The installed service worker fetch event returns an HTTP 200 status code (indicating a successful fetch) in simulated offline mode.
So, in your case, the service worker should return a cached 'index.html' when fetch(event.request) fails.
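A minimal sketch of that fallback could look like the following (it assumes 'index.html' is listed in your assets array so it gets cached during install; adjust the file name to whatever your app shell is called):

self.addEventListener('fetch', function(event) {
    event.respondWith(
        caches.match(event.request).then(function(response) {
            // Serve from the cache when we have a match
            if (response) {
                return response;
            }
            // Otherwise try the network, and fall back to the cached shell page when the fetch fails (e.g. offline)
            return fetch(event.request).catch(function() {
                return caches.match('index.html');
            });
        })
    );
});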
I've had the same problem. I re-enabled the cache through the developer console (Network tab), and that fixed it.

AEM - Users not synched with User Synchronization using Sling Distribution

I do not see new user info (or updates to user profiles) being synced between Publish and Author.
When I create a new user on Publish, I can see that there is a "SimpleDistributionAgent" on Author, but I cannot find the user on Author (I searched the entire CRX).
I did all the OSGi configs as detailed here:
https://docs.adobe.com/docs/en/aem/6-2/administer/security/security/sync.html
I do not see any errors in the logs.
Publish error.log
09.03.2017 14:27:41.711 *INFO* [127.0.0.1 [1489091261702] POST /libs/sling/distribution/services/exporters/socialpubsync-reverse HTTP/1.1] org.apache.sling.distribution.servlet.DistributionPackageExporterServlet Processed distribution export request in 8 ms: : fetched 1
09.03.2017 14:27:41.841 *INFO* [127.0.0.1 [1489091261793] POST /libs/sling/distribution/services/exporters/socialpubsync-reverse HTTP/1.1]
org.apache.sling.distribution.agent.impl.SimpleDistributionAgent [agent][socialpubsync-reverse] exported package distrpackage_1489091245609_7459cd18-91d9-404c-bb08-a296dd5d4aa4 with info DistributionPackageInfo{ request.type=ADD,
request.paths=[/home/users/C/C3Pz6GaEbUDD5-rdYr7Z/profile]} from queue default by exporter socialpubsync-reverse
Author error.log
09.03.2017 14:27:41.740 *INFO* [sling-default-19-scheduledEventTriggerorg.apache.sling.distribution.agent.impl.SimpleDistributionAgent$AgentBasedRequestHandler#7971f913]
org.apache.jackrabbit.vault.packaging.impl.JcrPackageDefinitionImpl unwrapping package sling/distribution:socialpubsync-vlt_1489091245573_674d4c01-853e-4c53-8828-31f63dda85d2:0.0.1
09.03.2017 14:27:41.801 *INFO* [sling-threadpool-70fe0a04-9496-4992-803d-ea75f39514ae-(apache-sling-job-thread-pool)-3-org_apache_sling_distribution_queue_socialpubsync_endpoint0(org/apache/sling/distribution/queue/socialpubsync/endpoint0)]
org.apache.sling.distribution.agent.impl.SimpleDistributionAgent [agent][socialpubsync] [endpoint0] PACKAGE-DELIVERED DSTRQ45:
ADD paths=[/home/users/C/C3Pz6GaEbUDD5-rdYr7Z/profile], importTime=6ms, execTime=879ms, size=5058B
No errors in Author and Publish Sync diagnostics
What am I missing?
User synchronization will not create users on Author. The sync is only between Publishers.
As of AEM 6.1, when user synchronization is enabled, user data is automatically synchronized across the publish instances in the farm; users are not created on Author.
https://docs.adobe.com/docs/en/aem/6-2/administer/security/security/sync.html
With the above setup, I started up two publish instances (4503, 4504), and when I create or update any user (or profile), the data is synced between both Publish instances.

IBM BLUEMIX BLOCKCHAIN SDK-DEMO failing

I have been working with the HFC SDK for Node.js and it used to work, but since last night I have been having some problems.
When running helloblockchain.js, it only works a few times; most of the time I get this error when it tries to enroll a new user:
E0113 11:56:05.983919636 5288 handshake.c:128] Security handshake failed: {"created":"#1484304965.983872199","description":"Handshake read failed","file":"../src/core/lib/security/transport/handshake.c","file_line":237,"referenced_errors":[{"created":"#1484304965.983866102","description":"FD shutdown","file":"../src/core/lib/iomgr/ev_epoll_linux.c","file_line":948}]}
Error: Failed to register and enroll JohnDoe: Error
Other times, the enroll works and the failure appears deploying the chaincode:
Enrolled and registered JohnDoe successfully
Deploying chaincode ...
E0113 12:14:27.341527043 5455 handshake.c:128] Security handshake failed: {"created":"#1484306067.341430168","description":"Handshake read failed","file":"../src/core/lib/security/transport/handshake.c","file_line":237,"referenced_errors":[{"created":"#1484306067.341421859","description":"FD shutdown","file":"../src/core/lib/iomgr/ev_epoll_linux.c","file_line":948}]}
Failed to deploy chaincode: request={"fcn":"init","args":["a","100","b","200"],"chaincodePath":"chaincode","certificatePath":"/certs/peer/cert.pem"}, error={"error":{"code":14,"metadata":{"_internal_repr":{}}},"msg":"Error"}
Or:
Enrolled and registered JohnDoe successfully
Deploying chaincode ...
E0113 12:15:27.448867739 5483 handshake.c:128] Security handshake failed: {"created":"#1484306127.448692244","description":"Handshake read failed","file":"../src/core/lib/security/transport/handshake.c","file_line":237,"referenced_errors":[{"created":"#1484306127.448668047","description":"FD shutdown","file":"../src/core/lib/iomgr/ev_epoll_linux.c","file_line":948}]}
events.js:160
throw er; // Unhandled 'error' event
^
Error
at ClientDuplexStream._emitStatusIfDone (/usr/lib/node_modules/hfc/node_modules/grpc/src/node/src/client.js:189:19)
at ClientDuplexStream._readsDone (/usr/lib/node_modules/hfc/node_modules/grpc/src/node/src/client.js:158:8)
at readCallback (/usr/lib/node_modules/hfc/node_modules/grpc/src/node/src/client.js:217:12)
E0113 12:15:27.563487641 5483 handshake.c:128] Security handshake failed: {"created":"#1484306127.563437122","description":"Handshake read failed","file":"../src/core/lib/security/transport/handshake.c","file_line":237,"referenced_errors":[{"created":"#1484306127.563429661","description":"FD shutdown","file":"../src/core/lib/iomgr/ev_epoll_linux.c","file_line":948}]}
This code worked yesterday, so I don't know what could be happening.
Does anybody know how I can fix it?
Thanks,
Javier.
These types of intermittent issues are usually related to GRPC. An initial suggestion is to ensure that you are using at least GRPC version 1.0.0.
If you are using a Mac, then the maximum number of open file descriptors should be checked (using ulimit -n). Sometimes this is initially set to a low value such as 256, so increasing the value could help.
There are a couple of GRPC issues with similar symptoms.
https://github.com/grpc/grpc/issues/8732
https://github.com/grpc/grpc/issues/8839
https://github.com/grpc/grpc/issues/8382
There is a grpc.initial_reconnect_backoff_ms property that is mentioned in some of these issues. Increasing the value past the 1000 ms level might help reduce the frequency of issues. Below are instructions for how the helloblockchain.js file can be modified to set this property to a higher value.
Open the helloblockchain.js file in the Hyperledger Fabric Client example and find the enrollAndRegisterUsers function.
Add "grpc.initial_reconnect_backoff_ms": 5000 to the setMemberServicesUrl call.
chain.setMemberServicesUrl(ca_url, {
    pem: cert, "grpc.initial_reconnect_backoff_ms": 5000
});
Add "grpc.initial_reconnect_backoff_ms": 5000 to the addPeer call.
chain.addPeer("grpcs://" + peers[i].discovery_host + ":" + peers[i].discovery_port, {
    pem: cert, "grpc.initial_reconnect_backoff_ms": 5000
});
Note that setting the grpc.initial_reconnect_backoff_ms property may reduce the frequency of issues, but it will not necessarily eliminate all issues.
The connection to the eventhub that is made in the helloblockchain.js file can also be a factor. There is an earlier version of the Hyperledger Fabric Client that does not utilize the eventhub. This earlier version could be tried to determine if this makes a difference. After running git clone https://github.com/IBM-Blockchain/SDK-Demo.git, run git checkout b7d5195 to use this prior level. Before running node helloblockchain.js from a Node.js command window, the git status command can be used to check the code level that is being used.

ATG: Error while baseline indexing - Unable to process any CSF calls as the Credential Store server is not enabled

I am getting the following error while doing a baseline index of my Endeca application in ATG:
15:26:47,891 ERROR [nucleusNamespace.atg.dynamo.security.opss.csf.CredentialStoreManager] (Thread-201) Unable to process any CSF calls as the Credential Store server is not enabled. Please check log for more details
15:26:47,913 INFO [nucleusNamespace.atg.commerce.search.StoreLocationOutputConfig] (Thread-201) Starting bulk load
15:26:47,915 INFO [nucleusNamespace.atg.commerce.endeca.index.CategoryToDimensionOutputConfig] (index-/atg/commerce/endeca/index/ProductCatalogSimpleIndexingAdmin) Failed to cancel incremental load of /atg/commerce/endeca/index/CategoryToDimensionOutputConfig, probably because no bulk load was running.
15:26:47,916 INFO [nucleusNamespace.atg.endeca.index.ConfigImportDocumentSubmitter] (Thread-203) Opening configuration repository connection for application logistore
15:26:47,917 ERROR [nucleusNamespace.atg.dynamo.security.opss.csf.CredentialStoreManager] (Thread-203) Unable to process any CSF calls as the Credential Store server is not enabled. Please check log for more details
15:26:47,916 INFO [nucleusNamespace.atg.commerce.search.ProductCatalogOutputConfig] (index-/atg/commerce/endeca/index/ProductCatalogSimpleIndexingAdmin) Failed to cancel incremental load of /atg/commerce/search/ProductCatalogOutputConfig, probably because no bulk load was running.
15:26:47,917 INFO [nucleusNamespace.atg.commerce.search.StoreLocationOutputConfig] (index-/atg/commerce/endeca/index/ProductCatalogSimpleIndexingAdmin) Failed to cancel incremental load of /atg/commerce/search/StoreLocationOutputConfig, probably because no bulk load was running.
15:26:47,919 INFO [nucleusNamespace.atg.endeca.index.ConfigImportDocumentSubmitter] (Thread-199) Opening configuration repository connection for application logistore
15:26:47,919 ERROR [nucleusNamespace.atg.dynamo.security.opss.csf.CredentialStoreManager] (Thread-199) Unable to process any CSF calls as the Credential Store server is not enabled. Please check log for more details
15:26:47,919 INFO [nucleusNamespace.atg.commerce.endeca.index.ProductCatalogSimpleIndexingAdmin] (Thread-203) Indexing process cancelled, Endeca says: Could not retrieve workbench credential properties from credential store.
15:26:47,919 INFO [nucleusNamespace.atg.endeca.index.ConfigImportDocumentSubmitter] (Thread-207) Opening configuration repository connection for application logistore
15:26:47,920 ERROR [nucleusNamespace.atg.dynamo.security.opss.csf.CredentialStoreManager] (Thread-207) Unable to process any CSF calls as the Credential Store server is not enabled. Please check log for more details
15:26:47,921 INFO [nucleusNamespace.atg.commerce.endeca.index.ProductCatalogSimpleIndexingAdmin] (Thread-207) Indexing process cancelled, Endeca says: Could not retrieve workbench credential properties from credential store.
After doing extensive research I found that C:\ATG\ATG11.2\home\servers\atg_production_lockserver\localconfig\atg\dynamo\server\OPSSInitializer.properties has the path to jps-config.xml, i.e.
JPSConfigurationLocation=C:/ATG/ATG11.2/home/../home/security/jps-config.xml
This jps-config.xml has some CSF-related configuration.
How can I get rid of this error so that the baseline indexing succeeds?
I am stuck on this part.
This happens if you change the default Workbench password. A simple solution would be to change the Endeca Experience Manager password back to admin and try again.
Otherwise, the password needs to be changed in multiple places.
Thanks,
Ajay Agrawal
Go to the OPSSInitializer component in dyn/admin and check whether the path to jps-config.xml specified there is correct. If not, correct the path.

MSDTC (Distributed Transaction Coordinator) Stops working. Error code -1073737669

I cannot start the Distributed Transaction Coordinator service.
It stopped working a few days ago.
When I try to start the service:
Registry properties:
RPC (for a test, the values here were changed to the opposite and back, without any results):
Windows logs\application logs:
53283
A MS DTC component has encountered an internal error. The process is being terminated. Error Specifics: DtcSystemShutdown (d:\w7rtm\com\complus\dtc\dtc\msdtc\src\msdtc.cpp#2539): Shutting down with an error
4111
The MS DTC service is stopping.
4102
DTC Security Configuration values (OFF = 0 and ON = 1): Network Administration of Transactions = 1,
Network Clients = 1,
Inbound Distributed Transactions using Native MSDTC Protocol = 1,
Outbound Distributed Transactions using Native MSDTC Protocol = 1,
Transaction Internet Protocol (TIP) = 0,
XA Transactions = 1,
SNA LU 6.2 Transactions = 1
Could not initialize the MS DTC Transaction Manager.
4356
Failed to initialize the MS DTC Communication Manager. Error Specifics: hr = 0x80070057, d:\w7rtm\com\complus\dtc\dtc\cm\src\ccm.cpp:2117, CmdLine: C:\Windows\System32\msdtc.exe, Pid: 5332
4358
The MS DTC Connection Manager is unable to register with RPC to use one of LRPC, TCP/IP, or UDP/IP. Please ensure that RPC is configured properly. If "ServerTcpPort" registry key is configured(DWORD value under the HKEY_LOCAL_MACHINE\Software\Microsoft\MSDTC for local DTC instance or under cluster hive for clustered DTC instance), please verify if the configured port is valid and the port is not already in use by a different component. Error Specifics:hr = 0x80070057, d:\w7rtm\com\complus\dtc\dtc\cm\src\iomgrsrv.cpp:2523, CmdLine: C:\Windows\System32\msdtc.exe, Pid: 5332
4156
String message: RPC raised an exception with a return code RPC_S_INVALIDA_ARG..
I found that the -resetlog command can be used, but it does not resolve my problem.
Firewall is disabled.
Try deleting the HKLM\Software\Microsoft\Rpc\Internet key from the registry.
To get around this issue, I had to copy the log file (which I had accidentally deleted) back to the location specified under the Local DTC Log information settings.