SSO Bad Data Error - single-sign-on

I'm running BizTalk 2013 R2 CU5 on Windows Server 2012 R2.
I noticed a file wasn't being collected from a receive location. The relevant host instance was running, so I checked the event log and found this:
SSO AUDIT Function: GetConfigInfo
({E182FB76-16B4-47D7-8178-4C66C9E3BA9D}) Tracking ID:
c4d0d0d1-0763-4ec5-99ea-fb2ac3bcc744 Client Computer: BizTalkBuild01
(BTSNTSvc64.exe:7940) Client User: BIZTALKBUILD01\BizTalkSvc
Application Name: {E182FB76-16B4-47D7-8178-4C66C9E3BA9D} Error Code:
0xC0002A1F, Cannot perform encryption or decryption because the secret
is not available from the master secret server. See the event log for
related errors.
I then restored the master secret using:
ssoConfig -restoresecret SSOxxxx.bak
After restoring, the file is still not being collected, but the error messages in the event log have changed to this:
SSO AUDIT Function: GetConfigInfo
({2DC11892-82FF-4617-A491-5324CAEF8E90}) Tracking ID:
5e91d09d-1128-491b-851b-e8c8e69d06eb Client Computer: BizTalkBuild01
(BTSNTSvc64.exe:26408) Client User: BIZTALKBUILD01\BizTalkSvc
Application Name: {2DC11892-82FF-4617-A491-5324CAEF8E90} Error Code:
0x80090005, Bad Data.
Does anyone know of a solution to this, please? This is the second time I've faced this problem on different servers in the last three months.

The MSI for CU6 has now been fixed.

For BizTalk 2013 R2 this may be a known issue, with a hotfix available!
There is a hotfix for this issue; however, the hotfix may introduce another issue (a memory leak). A solution can be found here: https://blogs.msdn.microsoft.com/amantaras/2015/11/10/event-id-10536-entsso-bad-data-issue/
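If you need to re-run the restore, a rough outline of the sequence on the master secret server is below (assuming the default ENTSSO service name; the backup path is a placeholder and SSOxxxx.bak stands in for your own backup file):

REM Identify the master secret server first (run from any machine in the group)
ssomanage -showserver

REM On the master secret server: stop SSO, restore the secret, restart SSO
net stop entsso
ssoconfig -restoresecret C:\backup\SSOxxxx.bak
net start entsso

After ENTSSO is back up, restart the BizTalk host instances so they pick up the restored secret.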

InternalServiceFault when trying to connect SPGo to SP Online in VS Code

I'm trying to connect the SPGo plugin in Visual Studio Code to a SharePoint Online site. There are lots of guides for this, for instance this one: https://medium.com/niftit-sharepoint-blog/saying-goodbye-to-sharepoint-designer-ac939a0b79ba
In short, I'm doing it like this:
Open VS Code
Open a local, empty folder
SPGo: Configure workspace (following the guide, ending up with spgo.json looking like the one I pasted)
SPGo: Populate local workspace (it asks me for credentials and I enter them O365-style: email and password)
The status bar says "Populating workspace"
After about 10 seconds I get the pasted error in the output window (spgo)
I'm using the newest versions:
Visual Studio Code 1.37.1
SPGo 1.4.3
I have tried various sites in my tenant and I know they are up. I am Site Collection Administrator for the sites. I know the credentials are correct, of course. Changing remoteFolders and publishingScope doesn't affect anything. I assume authenticationType should be "Digest".
SPGo.json:
{
  "sourceDirectory": "src",
  "sharePointSiteUrl": "https://domain.sharepoint.com/sites/SiteName",
  "publishingScope": "Major",
  "authenticationType": "Digest",
  "remoteFolders": [
    "/SitePages/"
  ]
}
I don't get any files in the local folder, instead I get an error in the output:
================================ ERROR ================================
<s:Fault>
<s:Code>
<s:Value>s:Receiver</s:Value>
<s:Subcode>
<s:Value xmlns:a="http://schemas.microsoft.com/net/2005/12/windowscommunicationfoundation/dispatcher">a:InternalServiceFault</s:Value>
</s:Subcode>
</s:Code>
<s:Reason>
<s:Text xml:lang="en-US">The server was unable to process the request due to an internal error. For more information about the error, either turn on IncludeExceptionDetailInFaults (either from ServiceBehaviorAttribute or from the <serviceDebug> configuration behavior) on the server in order to send the exception information back to the client, or turn on tracing as per the Microsoft .NET Framework SDK documentation and inspect the server trace logs.</s:Text>
</s:Reason>
</s:Fault>
Error Detail:
----------------------
{}
===============================================================================
Sorry I missed this post for so long. First, thanks for the detailed write-up. This is the first time I've seen this specific issue with SPGo, so I do not know for sure what the root cause is.
A couple of questions:
Are you using ADFS Authentication with your Office 365/SharePoint Online instance?
Are you able to use Addin-Only Authentication on this SP Site?
SPGo should be able to automatically work with ADFS in SharePoint Online but, as a fall-back, you could use Addin-Only Authentication. In this scenario you would create a ClientId and ClientSecret pair for the SharePoint Site Collection you are accessing and authenticate using those credentials. The ClientId would act as your UserName, and the ClientSecret would be your password.
Under the covers, I am using the node-sp-auth package for user authentication. Sergei (s-KaiNet on Github) has a great write-up on how to enable Addin-Only Authentication in SharePoint Online on his site, which you can find here.
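For illustration, a hypothetical spgo.json for Addin-Only authentication might look like the block below. The "AddinOnly" value for authenticationType is my assumption, so check the SPGo documentation for the exact string; the site URL and folders are just copied from your example:

{
  "sourceDirectory": "src",
  "sharePointSiteUrl": "https://domain.sharepoint.com/sites/SiteName",
  "publishingScope": "Major",
  "authenticationType": "AddinOnly",
  "remoteFolders": [
    "/SitePages/"
  ]
}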
Thanks for using SPGo!

Issue connecting composer to Blockchain on Bluemix - identity or token does not match

I have Fabric Composer 0.7.2 installed on my mac, and I was able to follow this thread to get it connected to my Blockchain (v0.6.1 of Fabric) on Bluemix:
fabric-composer-integration-with-bluemix-blockchain-service
Now I am trying to build an Ubuntu (16.04) Docker container and run composer-rest-server there. When I try to connect to my blockchain service from my Docker container (using the same id, WebAppAdmin, that I used on my mac) I get an error:
Discovering types from business network definition ...
Connection fails: Error: Identity or token does not match.
It will be retried for the next request.
{ Error: Identity or token does not match.
at /home/composer/.nvm/versions/node/v6.10.3/lib/node_modules/composer-rest-server/node_modules/grpc/src/node/src/client.js:417:17 code: 2, metadata: Metadata { _internal_repr: {} } }
I tried copying the cert from my mac to my docker container:
/home/composer/.composer-credentials/member.WebAppAdmin
but when I did that I got a different error that says "signature does not verify". I did some additional testing, and I discovered that if I used an id that I had not previously used with composer (i.e. user_type1_0) then I could connect, and I could see a new cert in my .composer-credentials directory.
I tried deleting that container and building a new one (I had dorked something else up), but I could not use that same user id again.
Does anybody know how security and these certs are supposed to work? It would seem as though something to do with certificate generation/validation is tied to the client (i.e. hardware address), such that if I try to re-use an id on a different machine, the certs or keys or something don't match. I have a way to make things work, but it doesn't seem like it's the right way if I can't use the same id from different machines.
Thanks!
Hi, I tried to recreate this by having blockchain running on a Unix machine, then copying my connection profile and certificate to my mac and editing the connection profile to update the IP address and key store. I then did a composer network ping and it worked fine.
I am using composer v0.7.4, so you could try that?
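For reference, the ping I ran was along these lines (the connection profile name, business network name, and enrollment id/secret below are placeholders; flag names may differ between composer-cli versions, so check composer network ping --help):

composer network ping -p bluemixProfile -n my-network -i WebAppAdmin -s mySecret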
I have also faced this issue, and concluded that:
There is inconsistent behavior while deploying a network using Composer in cloud environments, including Bluemix. The problem is not with Composer, but with Fabric 0.6.
I am assuming that this issue is also indirectly related to the following known bugs in Fabric 0.6, which will not be fixed in Fabric 0.6.
ERROR:
"
throw er; // Unhandled 'error' event
^
Error
at ClientDuplexStream._emitStatusIfDone (/home/ubuntu/.nvm/versions/node/v6.9.5/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:189:19)
at ClientDuplexStream._readsDone (/home/ubuntu/.nvm/versions/node/v6.9.5/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:158:8)
at readCallback (/home/ubuntu/.nvm/versions/node/v6.9.5/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:217:12)
"
So far, we have understood that the following three JIRA issues are the root cause: essentially, the cloud networking layer ends up killing the idle event hub connection after a period of inactivity, and the Fabric SDK cannot handle this.
https://jira.hyperledger.org/browse/FAB-4002
https://jira.hyperledger.org/browse/FAB-3310
https://jira.hyperledger.org/browse/FAB-2787
Conclusion:
There is no alternative way of fixing this issue on Bluemix or any other cloud environment with Fabric 0.6.
You may not experience this issue with Fabric 1.0, but it is still possible, as none of the above-mentioned defects have been fixed yet.

Could not initialize class com.ibm.ws.ffdc.FFDCFilter. DSRA0010E: SQL State = 28P01, Error Code = 0

Can I get assistance with the error codes coming from Eclipse when I try to deploy an enterprise application on WebSphere? I followed Craig St. Jean's guide. I also face another problem with configuration, i.e. WebSphere data sources using PostgreSQL. I am using a Windows machine, 64-bit architecture. The error codes are the topic of this question. I hope this question can be seen as relevant, since not many solutions exist for the first issue concerning com.ibm.ws.ffdc.FFDCFilter, and if one doesn't overcome the first, how can one press on and attempt to solve the second? Thanks.
WebSphere logs
The test connection operation failed for data source AppDb on server server1 at node Lenovo-PCNode01 with the following exception: java.sql.SQLException: FATAL: password authentication failed for user "listmanagerremote" DSRA0010E: SQL State = 28P01, Error Code = 0. View JVM logs for further details.
I have fixed the issues with deployment in the Eclipse Neon IDE. I think it was a result of either the installation of the IBM WebSphere Application Server Traditional v8.0x Developer Tools for Neon, or the IBM JRE.
Eclipse console final message
00000063 CompositionUn A WSVR0191I: Composition unit WebSphere:cuname=ListManager in BLA WebSphere:blaname=ListManager started.
PostgreSQL documents the 28P01 SQL State as an invalid password:
"28P01 INVALID PASSWORD invalid_password"
https://www.postgresql.org/docs/9.0/static/errcodes-appendix.html
Check your data source configuration to ensure that you have specified the correct password, or if using an authentication alias for your data source, confirm that the authentication data configuration contains the correct password, and that you have configured the data source and/or resource reference to use that authentication data.
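As a quick sanity check outside WebSphere, you can try the same credentials directly with the psql client. The host, port, and database name below are assumptions for illustration ("AppDb" is the data source name from the log, so substitute your actual database name); psql will prompt for the password:

psql -h localhost -p 5432 -U listmanagerremote -d AppDb

If this also fails with SQL State 28P01, the problem is the credentials themselves rather than the WebSphere configuration.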

Google cloud datalab deployment unsuccessful - sort of

This is a different scenario from other questions on this topic. My deployment almost succeeded, and I can see the following lines at the end of my log:
[datalab].../#015Updating module [datalab]...done.
Jul 25 16:22:36 datalab-deploy-main-20160725-16-19-55 startupscript: Deployed module [datalab] to [https://main-dot-datalab-dot-.appspot.com]
Jul 25 16:22:36 datalab-deploy-main-20160725-16-19-55 startupscript: Step deploy datalab module succeeded.
Jul 25 16:22:36 datalab-deploy-main-20160725-16-19-55 startupscript: Deleting VM instance...
The landing page keeps showing a wait bar indicating the deployment is still in progress. I have tried deploying several times in last couple of days.
About the additions described on the landing page:
An App Engine "datalab" module is added. - When I click on the pop-out URL "https://datalab-dot-.appspot.com/" it throws an error page with "404 page not found".
A "datalab" Compute Engine network is added. - Under "Compute Engine > Operations" I can see a create-instance operation for the datalab deployment with my id, and a delete-instance operation with the *******-compute@developer.gserviceaccount.com id. Not sure what that means.
A Datalab branch is added to the git repo. - Yes, and with all the components.
I think the deployment is partially successful. When I visit the landing page again, the only option I see is to deploy Datalab again, not to start it. Can someone spot the problem? Appreciate the help.
I read the other posts on this topic and tried to verify my deployment using "https://console.developers.google.com/apis/api/source/overview?project=". I get the following message:
The API doesn't exist or you don't have permission to access it
You can try looking at the App Engine dashboard here, to verify that there is a "datalab" service deployed.
If that is missing, then you need to redeploy again (or switch to the new locally-run version).
If that is present, then you should also be able to see a "datalab" network here, and a VM instance named something like "gae-datalab-main-..." here. If either of those is missing, then try going back to the App Engine console, deleting the "datalab" service, and redeploying.
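If you prefer the command line, the same checks can be done with a reasonably recent gcloud CLI (assuming your project is set as the active configuration):

# List App Engine services; look for one named "datalab"
gcloud app services list

# List Compute Engine networks; look for one named "datalab"
gcloud compute networks list

# List VM instances; look for one named like "gae-datalab-main-..."
gcloud compute instances list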

Getting AccessTokenRefreshError: invalid_grant in Google API for service account

I am following this example
https://code.google.com/p/google-api-python-client/source/browse/samples/service_account/tasks.py
import httplib2
import pprint
from apiclient.discovery import build
from oauth2client.client import SignedJwtAssertionCredentials

# Load the service account's PKCS12 private key
# (the file name here is a placeholder for the key downloaded from the console).
with open('privatekey.p12', 'rb') as f:
    key = f.read()

credentials = SignedJwtAssertionCredentials(
    '141491975384@developer.gserviceaccount.com',
    key,
    scope='https://www.googleapis.com/auth/tasks')
http = httplib2.Http()
http = credentials.authorize(http)
service = build("tasks", "v1", http=http)
# List all the tasklists for the account.
lists = service.tasklists().list().execute(http=http)
pprint.pprint(lists)
The issue is, it works sometimes and I get the lists as JSON, but after running the program a few more times I get this error:
File "/usr/local/lib/python2.7/site-packages/oauth2client/client.py", line 710, in _do_refresh_request
raise AccessTokenRefreshError(error_msg)
oauth2client.client.AccessTokenRefreshError: invalid_grant
I'm interfacing with Google Drive but found the same error. In Issue 160 there is a report of setting the appropriate time on your local computer. Ever since I upgraded to Mac Mavericks I found that I need to keep updating my system time. I was getting your reported error; I set my system time back to the current time and eliminated the error.
Are you running this code in a VM or sandboxed environment? If so, it could just be that your VM's clock isn't synchronised to your host machine. See a response to a similar question here.
I suffered the same (frustrating) problem and found that simply restarting my VM (ensuring the time was synchronised to the host machine, or at least set to the local timezone) fixed the problem.
The OAuth service is highly dependent on time, so make sure your client uses NTP or another time-syncing mechanism.
Test with: curl -I https://docs.google.com/ ; date -u
You should see the same Date: (or within a few seconds) for this to work.
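If the clock is indeed off, a minimal way to force a resync on a Linux VM (assuming systemd is in use; these are standard timedatectl commands) is:

# Enable automatic NTP synchronisation via systemd-timesyncd
sudo timedatectl set-ntp true

# Verify whether the clock reports itself as synchronised
timedatectl status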