Failed to find a valid digest in the 'integrity' attribute for resource?

I have just created a hosted Blazor WebAssembly PWA project, which generates Client, Server, and Shared projects, all fine. I start the solution and everything runs fine.
But after I start making small changes to the projects, it stops working with a message like this:
"Failed to find a valid digest in the 'integrity' attribute for resource '' with computed SHA-256 integrity '47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU='. The resource has been blocked."
I searched the net and Stack Overflow and found others having almost the same problem. Some can do a clean and rebuild to solve this, but that's not working for me.
So what is this, and why is it happening? It makes the project totally useless.
Is it the PWA feature? Should I create a new solution without PWA enabled?

It started happening to me recently, only on a published release build, not on local debug.
Clean + rebuild didn't work for me. I had to delete the bin and obj folders from both Client and Server, then republish. (Note: I tried the Client only and it did not work, but I did not try the Server only.)
cf. Failed to find a valid digest in the 'integrity' attribute for resource in Blazor app
It now occurs each time I upgrade or downgrade a package.
I've done several tests and can confirm:
The DLLs on the server are the right ones (SHA-256 hashes validated).
The integrity strings in blazor.publish.boot.json are the right ones.
I was even able to get rid of the problem by reverting to the package version prior to the bug (which changes back the related entry in blazor.publish.boot.json), which for me confirms that a reference is not being updated somewhere.
The only significant changes I've made recently are switching to VS2022 and .NET 6. The bug appeared after my first successful publish to Azure through VS2022: the first package upgrade after that triggered the bug.
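For anyone digging into this: the hash the browser checks comes from the boot manifest, so a stale entry there is exactly the mismatch the error reports. Incidentally, 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= is the SHA-256 of zero-length content, which suggests the browser received an empty response for the resource. An entry in blazor.publish.boot.json looks roughly like this (assembly name and hash are illustrative; the exact layout varies by .NET version):

{
  "resources": {
    "assembly": {
      "MyApp.Client.dll": "sha256-<base64 hash the published file must match>"
    }
  }
}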

Dynamics 365 - Plugin - Newly Created Images Are Null when checked in code?

I am working on a D365 Unified Interface sandbox environment on a development project.
This environment was set up recently as a clone of the production D365 instance.
Today I have been adding some plugins and finding a strange issue. I can get the plugin code firing on record create/update no problem (I have pre-operation create/update and post-operation create/update stages defined, and the correct code gets hit for each).
But the C# plugin code does not recognise any of the pre or post images that I have added.
In code, when we check IPluginExecutionContext.PostEntityImages, it does not contain anything.
All of the pre-existing images that were already there when the environment was cloned are firing correctly. We have a process whereby we name all of our pre and post images exactly the same for every entity, and I know the ones I have created are named exactly as expected.
In this example I have created a post-operation update plugin on the OOB opportunity entity with a PreImage defined against it, but the code just will not recognise it; a sketch of the check is below.
Anyone experienced this before?
TIA
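For context, the check in question looks roughly like this (a minimal sketch; the class name is illustrative, and "PreImage" stands for whatever name was used at step registration):

using System;
using Microsoft.Xrm.Sdk;

public class OpportunityPostUpdatePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));

        const string imageName = "PreImage"; // must match the name registered on the step

        if (context.PreEntityImages.Contains(imageName))
        {
            Entity preImage = context.PreEntityImages[imageName];
            // ... read the pre-update attribute values from preImage ...
        }
        else
        {
            // On the newly registered steps this branch is hit: the collection is empty.
            throw new InvalidPluginExecutionException($"Pre-image '{imageName}' not found.");
        }
    }
}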
Occasionally the sandbox service seems to fail to pick up updates to a plugin assembly. In those cases, updating the assembly with a different assembly version (build or revision number) can help.
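Concretely, that bump is just a change to the assembly metadata before re-registering, e.g. in Properties/AssemblyInfo.cs (version numbers are illustrative; keeping major.minor unchanged should leave existing step registrations intact, though that is worth verifying):

using System.Reflection;

// Bump only the build/revision part so the sandbox treats the upload as new.
[assembly: AssemblyVersion("1.0.1.0")]
[assembly: AssemblyFileVersion("1.0.1.0")]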
If not, I would advise simply removing the complete assembly and recreating it.
If you do not have an automated deployment process in place, follow these steps:
Create a separate solution.
Add the assembly along with its step registrations and images to the solution.
Export the solution.
Remove the assembly using the plugin registration tool.
Import the solution again.

Deploying an updated SSIS package doesn't work

The problem
So I am running into an interesting issue. I have been tasked with changing a query for a simple SSIS package in Visual Studio 2015, which is something I have done multiple times in the last 6 months.
After changing the package and deploying it (to an installation of SQL Server 2016, without errors!) I noticed that the execution of the package (scheduled with SSMS) generates the same result as the pre-update package, meaning the requested changes hadn't taken effect. Of course, as a test, I executed the package directly from VS2015 and got the result I wanted.
Ever since, I have been running tests and trying to find a solution. The problem seems to lie with the receiving side of the deployment process.
What I have tried
Deleted the package from the existing project in SSMS and redeployed. Deployment again seemed to succeed, but the package didn't show up, so I had to restore an old version of the project.
Deployed the package from multiple different computers with access to VS2015 and the source code. No change...
Deployed the package to a new (empty) SSMS project: the package does not appear in the project. This leads me to believe that the old package is kept when I publish the new version to the existing project in SSMS.
Regenerated/rebuilt the package in VS2015; frankly, this was never necessary before and probably doesn't do anything for an SSIS package, but it may help you get an idea of my skill level.
In the past we have had issues with the encryption level blocking the deployment of packages. I have verified these settings and found no issues.
I have verified whether any updates have recently been installed on the database server, which does not seem to be the case.
I have (of course) tried to google the issue, which is tricky due to the lack of errors. I have found the following links that describe the same or a similar issue, but their solutions haven't helped:
https://dba.stackexchange.com/questions/259672/ssis-package-not-being-deployed
Deployed SSIS Package not reflecting changes made to package
What is still left to try
Rebuild the project from scratch to see if that version is deployable.
Unfortunately I don't have a lot of experience with this subject and no colleagues or contacts to ask for help.
Thanks in advance.
My workaround
After quite a bit of time attempting to solve the issue, I have resorted to working around the problem by manually importing the .ispac file into the database. While this is not the prettiest of solutions, at least it's a workable one. If anyone has any other ideas I'll gladly hear them, but for now the issue isn't nearly as pressing as it was.
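For what it's worth, that manual import can also be scripted against the SSIS catalog using the documented catalog.deploy_project procedure; a sketch (file path, folder, and project names are placeholders):

-- Read the .ispac into a varbinary and hand it to the catalog.
DECLARE @ProjectBinary varbinary(max) =
    (SELECT * FROM OPENROWSET(BULK N'C:\Deploy\MyProject.ispac', SINGLE_BLOB) AS ispac);
DECLARE @OperationId bigint;
EXEC SSISDB.catalog.deploy_project
    @folder_name = N'MyFolder',
    @project_name = N'MyProject',
    @project_stream = @ProjectBinary,
    @operation_id = @OperationId OUTPUT;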
From your post: "Deleted the package from the existing project in SSMS and redeployed. Deployment again seemed to succeed but the package didn't show up."
Are you 100% sure you are deploying it to the same project, on the same server, in the same database? Are you refreshing after you deploy?

Unable to integrate CQ5.6.1 with Site Catalyst

I'm having difficulty integrating AEM 5.6.1 with SiteCatalyst. It allows me to connect successfully in the configuration, but does not work on the framework setup.
I've followed the standard procedure to connect AEM to SC and it accepts my login in the configuration, but fails on the framework setup with the browser message 'We were not able to login to SiteCatalyst. Please check your credentials and try again.' Behind the scenes, in the server log:
12.12.2014 14:10:06.967 *WARN* [0:0:0:0:0:0:0:1 [1418393406764] POST /libs/cq/analytics/sitecatalyst/service.json HTTP/1.1] com.day.cq.analytics.sitecatalyst.impl.SitecatalystHttpClientImpl Data center 'https://api3.omniture.com/admin/1.3/rest/' responded with errors {"error":{"code":500,"message":"Internal Server Error"}}
12.12.2014 14:10:06.967 *ERROR* [0:0:0:0:0:0:0:1 [1418393406764] POST /libs/cq/analytics/sitecatalyst/service.json HTTP/1.1] com.day.cq.analytics.sitecatalyst.impl.servlets.SitecatalystServlet Call to SiteCatalyst method 'Company.GetReportSuites' failed com.day.cq.analytics.sitecatalyst.SitecatalystException: not authenticated
I've tried accessing via the API Explorer and it works.
I've tried the troubleshooting guide without success.
I can log in to SiteCatalyst, I'm an admin, and I am in the web services access group.
I've tried using a clean install of CQ5.6.1 with Geometrixx - it doesn't work either.
I've tried this from a server and from a localhost/dev machine with the same results. No proxy. I've even tried using the shared secret as the password, but then it doesn't connect at all and fails on the configuration screen.
What might cause this to fail?
If it doesn't work with a fresh install and Geometrixx, then it's probably an Adobe bug. That's typically the first thing support will ask you about.
I would also verify using Geometrixx Outdoors, or a more recent demo site, on your fresh install, just to ensure it's not an outdated ClientLib issue.
I know this isn't a direct answer to your question, but honestly, I would approach the integration differently. I've worked with the AEM-SC framework and it's buggy at best. It's very finicky, it doesn't REALLY work the way the documentation claims, and it requires that you're very specific about what Clientlibs are on the page.
Moving forward, I think using Adobe Dynamic Tag Manager is the better approach, for many reasons. My understanding is that it's Adobe's recommendation as well. I'd consider moving to that. In AEM 5.6.1, you'll have to customize your integration with DTM, but it's not very hard.
Solution: add a property on the configuration node for SiteCatalyst (e.g. /etc/cloudservices/sitecatalyst/my-sc-configuration):
server=https://api.omniture.com/admin/1.2/rest/
It also seems to work with newer API versions such as https://api3.omniture.com/admin/1.3/rest/.
It would appear that for 5.6.1 the configuration screens ignore the OSGi configuration. With this extra property, the framework page loads without error and allows selection of the RSID.
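If you'd rather script that than click through CRXDE, the property can be set with the standard Sling POST servlet (host, port, and credentials below are the usual local defaults; adjust for your instance):

curl -u admin:admin \
  -F "server=https://api.omniture.com/admin/1.2/rest/" \
  http://localhost:4502/etc/cloudservices/sitecatalyst/my-sc-configuration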

Domino 8.5.3 - Create an organization extension library / codestore

This is a project I've been working on off and on for months and I feel like I'm pretty close, but I just can't seem to get past the final hurdle.
The goal is to develop an organization extension library that contains both internal and 3rd party code that we frequently rely on.
History
As a test project, I started with Apache POI because it is already in wide use in our environment. I have a plug-in and feature built just from the POI .jars that allows me to build our current POI applications, as long as I add the plug-in (from my workspace) to my build path. The apps work on the servers because we have already distributed the POI .jars by manually copying them.
The next step is taking that plug-in and getting it into an update site so that all of the servers and developers can synchronize on one version. I found and followed these two excellent blog articles (that I wish had existed when I started this project):
http://www.dalsgaard-data.eu/blog/wrap-an-existing-jar-file-into-a-plug-in/
http://www.dalsgaard-data.eu/blog/deploy-an-eclipse-update-site-to-ibm-domino-and-ibm-domino-designer/
The caveat is that the articles are written for Domino 9 and we are running 8.5.3 here, but that only matters in the last (installation) step.
Current
This brings us to the problem. All of the above seems to have worked great up to a point. I can install my feature into my Designer client from the Eclipse update site and it works great. However, the install fails when I import it into our updatesite.nsf database. This means that while the developers can all install from the update site if I put it on a network drive, it doesn't deploy updates to our servers.
The problem is that when I try to install from the .nsf update site, the Eclipse updater just hangs. I've let it go for well over an hour, and eventually Notes becomes completely unresponsive.
So the question is, is there anything I might have done wrong, either in the development of the plug-in or server configuration that might be causing this issue?
Additional Info
I'm looking at the OSGi console and it is largely unhelpful. I am getting the following error as I try to install: SEVERE Could not access digest on the site: no protocol: 0/5B004DDD5E38F3FF85257CAF004C72C7/$file/digest.zip ::class.method=unknown ::thread=Worker-7 ::loggername=org.eclipse.update.core
I could generate dumps if that would be useful.
Security is also locked down fairly tight here. It could be a security issue - is there a way to troubleshoot that? Once I get to the hang I'm just stuck guessing.
This has been edited for clarity and to update information
I know that this post is over 5 years old, but for those who find this and are trying to resolve the error
SEVERE Could not access digest on the site: no protocol:
it is due to the update site project not having the URL of the Domino updatesite.nsf added to the Archives tab of the site.xml. An illustrative entry is shown below.
I found the updatesite.nsf also needs to be anonymously accessible, as no credentials are prompted for or passed through to the Domino server hosting the updatesite.nsf database (at least from DDE); YMMV from Eclipse. So if anonymous connections are blocked on the Domino server, you will be out of luck.
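For illustration, the Archives tab writes <archive> entries into site.xml along these lines (the feature/plug-in IDs and server URL are placeholders):

<site>
   <feature url="features/com.example.poi.feature_1.0.0.jar"
            id="com.example.poi.feature" version="1.0.0"/>
   <archive path="plugins/com.example.poi_1.0.0.jar"
            url="http://myserver/updatesite.nsf/plugins/com.example.poi_1.0.0.jar"/>
</site>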
To develop a plug-in you really want to have 3 projects:
the plug-in
the feature
the update site
Of course a feature can contain more than one plug-in (and probably should), and an update site can contain more than one feature (and probably should). Once you have an update site project, it features a handy "Build All" button that makes sure the plug-in, feature, and update site get compiled in one go. And that button is what you really want.
You can point your Domino Designer (or local Domino server) to the feature directory using a setting: add a plain-text .link file to framework/rcp/eclipse/links that contains the path to your install site; Designer then picks up the features and plug-ins from there. After a build you need to restart Designer/the server to activate the updated feature.
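A .link file is just one line pointing at the update site directory, e.g. (the path is a placeholder):

path=C:/build/com.example.updatesite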
For the Domino server, the approach using an updatesite.nsf and the respective notes.ini setting makes the most sense (to me). An HTTP restart is required. Lazy people script the whole thing.
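If I recall the Dalsgaard article linked above correctly, the notes.ini line for the server looks like this (double-check it against the article before relying on it):

OSGI_HTTP_DYNAMIC_BUNDLES=updatesite.nsf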
I still don't have a great answer for this, but I believe the issue is related to the environment here. I don't have the authority to change the environment, even if I could conclusively demonstrate it is the cause of this problem, so it is a moot point. All I can say is that at least one administrator's computer had no issue installing from the update site.
For me, the solution for distributing the update site is to put it on a network drive and have everyone install it from there. The server has no problem using it from the updatesite.nsf.

Google App Engine: Deployed Source doesn't have Local updates

I'm working with Google App Engine in Eclipse, with JSP pages, on Windows 7.
I already have an app deployed and working, but for some reason I am unable to make changes to it.
If I make changes and debug locally, my localhost page shows the changes that I implement.
While I am not getting any errors during deployment, the same changes that work in my local debug are not showing up, so I can't update my app.
I thought updating the version number might help, but I had no luck with this.
Any ideas? Thanks.
Are you deploying the same version (as specified in appengine-web.xml) as the default version that is running for your app? If not, you'll have to access your new deployment at http://newversion.appname.appspot.com, or change your default version in App Engine to your newly deployed version.
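For reference, the version lives in war/WEB-INF/appengine-web.xml (the application ID and version value here are placeholders):

<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
    <application>appname</application>
    <!-- Requests go to http://2.appname.appspot.com until this version is made the default. -->
    <version>2</version>
</appengine-web-app>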
I have had the same problems too, especially when the changes concerned static pages. Some little things to check:
If you have set an expiration date in your app.yaml, your browser cache could be holding the file.
If it's specific to the online contents, it could be an intermediary cache (such as a Squid server) serving the outdated contents, in which case you'd have to flush the cache to get the new version.
You could start by checking the log in the GAE console to see whether the request is received by the server; that would help you debug.
Another trick: if you're being served an outdated version of http://yourapp.appspot.com/index, try passing a dummy argument to force the browser to fetch the new version, for instance: http://yourapp.appspot.com/index?p=1