I am working on a D365 Unified Interface sandbox environment for a development project.
This environment was set up recently as a clone of the production D365 instance.
Today I have been adding some plugins and have hit a strange issue. I can get the plugin code firing on record create/update with no problem (I have pre-operation and post-operation create/update stages defined, and the correct code gets hit for each).
But the C# plugin code does not recognise any of the pre or post images that I have added.
In code, when we check IPluginExecutionContext.PostEntityImages, it does not contain anything.
All of the pre-existing images that were already registered when the environment was cloned come through correctly. We have a process whereby we name all of our pre and post images exactly the same for every entity, and I know the ones I have created are named exactly as expected.
In this example I have created a Post Operation stage Update plugin on the OOB opportunity entity with a PreImage defined against it, but the code just will not recognise it.
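To illustrate, the check in question looks something like the following simplified sketch (the class name and the image name "PreImage" are just placeholders for our naming convention, not the actual code):

    using System;
    using Microsoft.Xrm.Sdk;

    public class OpportunityPostUpdate : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

            // "PreImage" is the name the image is registered under in the Plugin Registration Tool.
            if (context.PreEntityImages.Contains("PreImage"))
            {
                Entity preImage = context.PreEntityImages["PreImage"];
                // ... compare preImage attributes with the Target entity ...
            }
            else
            {
                // This is the branch we keep hitting: the image collections come back empty.
                throw new InvalidPluginExecutionException("Expected image 'PreImage' was not found on the step.");
            }
        }
    }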
Anyone experienced this before?
TIA
Occasionally the sandbox service seems to fail to pick up updates to a plugin assembly. In those cases, updating the assembly with a different assembly version (build or revision number) can help.
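For example, in the plugin project's AssemblyInfo.cs (the numbers below are illustrative; only the build or revision component should change, since a different major or minor version is normally treated as a brand new assembly):

    using System.Reflection;

    // Bump only the build/revision components, e.g. from 1.0.0.0:
    [assembly: AssemblyVersion("1.0.1.0")]
    [assembly: AssemblyFileVersion("1.0.1.0")]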
If not, I would advise simply removing the complete assembly and recreating it.
If you do not have an automated deployment process in place, follow these steps:
Create a separate solution.
Add the assembly along with its step registrations and images to the solution.
Export the solution.
Remove the assembly using the plugin registration tool.
Import the solution again.
I have just created a hosted Blazor WebAssembly PWA project, which generates Client, Server and Shared projects, all as expected. I start the solution and everything runs fine.
But after I start to add small changes to the projects, it stops working with a message like this:
"Failed to find a valid digest in the 'integrity' attribute for resource '' with computed SHA-256 integrity '47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU='. The resource has been blocked."
I searched the net and Stack Overflow and found others having almost the same problem. Some can do a clean and rebuild to solve this, but that's not working for me.
So what is this, and why is it happening? It feels totally useless.
Is it the PWA feature? Should I create a new solution without PWA enabled?
It started happening to me recently, and only on a published release build, not on local debug.
Clean + rebuild didn't work for me. I had to delete the bin and obj folders from both Client and Server (note: I tried Client only and it did not work, but I did not try Server only), then republish.
cf. Failed to find a valid digest in the 'integrity' attribute for resource in Blazor app
It now occurs each time I upgrade or downgrade a package.
I've done several tests and can confirm:
The DLLs on the server are the right ones (SHA-256 hashes validated).
The entries in blazor.publish.boot.json are the right ones.
I was even able to get rid of the problem by reverting to the package version prior to the bug (which changes back the related entry in blazor.publish.boot.json), which for me confirms a reference is not being updated somewhere.
The only significant changes I've made recently are switching to VS2022 and .NET 6. The bug appeared after my first successful publish to Azure through VS2022: the first package upgrade after that triggered the bug.
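For context, the browser-side integrity check compares the hash of each downloaded file against an entry in blazor.boot.json (blazor.publish.boot.json in the publish output). The structure is roughly as below; this is simplified from memory, and the file name and hash are placeholders:

    {
      "resources": {
        "assembly": {
          "MyApp.Client.dll": "sha256-<expected hash of the published file>"
        }
      }
    }

If the file actually served does not hash to that value (for example because a stale copy is still sitting in bin/obj or some cache), the browser blocks it with the error above.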
The problem
So I am running into an interesting issue. I have been tasked with changing a query in a simple SSIS package in Visual Studio 2015, something I have done multiple times over the last 6 months.
After changing the package and deploying it (to an installation of SQL Server 2016, without errors!) I noticed that executing the package (scheduled via SSMS) produces the same result as the pre-update package, meaning the requested changes hadn't taken effect. As a test, I of course executed the package directly from VS2015 and got the result I wanted.
Ever since, I have been running tests and trying to find a solution. The problem seems to lie with the receiving side of the deployment process.
What I have tried
Deleted the package from the existing project in SSMS and redeployed. The deployment again seemed to succeed, but the package didn't show up, so I had to restore an old version of the project.
Deployed the package from multiple different computers with access to VS2015 and the source code. No change...
Deployed the package to a new (empty) SSMS project: the package does not appear in the project. This leads me to believe the old package is kept when I publish the new version to the existing project in SSMS.
Regenerated/rebuilt the package in VS2015; frankly, this was never necessary before and probably doesn't do anything for an SSIS package, but it may give you an idea of my skill level.
In the past we have had issues with the encryption level blocking the deployment of packages. I have verified these settings and found no issues.
I have verified whether any updates have recently been installed on the database server, which does not seem to be the case.
I have (of course) tried to Google the issue, which is tricky due to the lack of errors. I have found the following links that describe the same or a similar issue, but their solutions haven't helped:
https://dba.stackexchange.com/questions/259672/ssis-package-not-being-deployed
Deployed SSIS Package not reflecting changes made to package
What is still left to try
Rebuild project from scratch to see if that version is deployable.
Unfortunately I don't have a lot of experience with this subject and no colleagues or contacts to ask for help.
Thanks in advance.
My workaround
After quite a bit of time attempting to solve the issue, I have resorted to working around the problem by manually importing the .ispac file into the database. While this is not the prettiest of solutions, at least it's a workable one. If anyone has any other ideas I'll gladly hear them, but for now the issue isn't nearly as pressing as it was.
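For what it's worth, the same manual import can also be scripted against the SSIS catalog; a rough sketch (the folder, project and path names are placeholders, and it assumes the .ispac is readable from the SQL Server machine):

    DECLARE @ispac VARBINARY(MAX) =
        (SELECT BulkColumn FROM OPENROWSET(BULK N'C:\Deploy\MyProject.ispac', SINGLE_BLOB) AS src);

    -- catalog.deploy_project pushes the project binary into an existing SSISDB folder.
    EXEC SSISDB.catalog.deploy_project
        @folder_name    = N'MyFolder',
        @project_name   = N'MyProject',
        @project_stream = @ispac;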
From your post: "Deleted the package from the existing project in SSMS and redeployed. The deployment again seemed to succeed, but the package didn't show up."
Are you 100% sure you are deploying it to the same project, on the same server and database? Are you refreshing after you deploy?
When updating an assembly in the Plugin Registration Tool, in step 2 (select the plugins and workflow activities to register), any plugins that are not selected are deleted from the registration along with their steps and images. Is there a way to recover a plugin that was deleted? Is there an XML or other file that helps recover the steps and images?
If you have an earlier solution backup, or can take the latest solution (including the SDK Message Processing Steps) from another environment and import it, you can get the lost plugin step/image registration data back.
Also, as an ops guide for troubleshooting and a human-readable version tracker in TFS source control, I follow this for each plugin. It has helped me a lot. Even if something is not deployed correctly in other environments, this helps to identify the gap.
It is also helpful in some situations (for the future) if there are no environments other than Dev yet.
I have a VSO Release Management definition in which I'm deploying a cloud service and then running some tests. The deployment executes without issues, but then the tests don't run; I receive the following message in the logs:
Warning: No test is available in My DLL Path. Make sure that installed test discoverers & executors, platform & framework version settings are appropriate and try again.
Now, the strange thing is that this release is triggered by a build which runs exactly the same set of tests, and they all run happily.
I've included a .runsettings file specifying the framework version (based on some SO posts I found from a year ago with a similar issue), but it has made no difference. I've been messing with this for nearly 2 days now with no progress. Any suggestions happily accepted!
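Such a runsettings file looks roughly like this (a minimal example; the framework value here is only illustrative and should match what the test assemblies target):

    <?xml version="1.0" encoding="utf-8"?>
    <RunSettings>
      <RunConfiguration>
        <!-- Framework version the test host should use, e.g. Framework40 or Framework45 -->
        <TargetFrameworkVersion>Framework45</TargetFrameworkVersion>
      </RunConfiguration>
    </RunSettings>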
Arrrrgh! So it turns out that if I delete the whole project, create it again and add my tests again, it just works. Gremlins, apparently!
Admins, if this needs to be deleted, go ahead.
I'm hoping someone can confirm whether or not the following scenario is an issue when deploying updates to WordPress sites and, if so, whether you have a solution on how best to manage it.
The basics:
I have a local development WordPress Multisite project for which I use Git and Capistrano to deploy to remote staging and production servers.
Everything EXCEPT the uploads and blogs.dir directories (in wp-content) is under version control. Yes, the WordPress core, themes, plugins, etc. are updated locally, committed, pushed and deployed. This means that I have to log in and activate plugins initially - they are simply installed via the Capistrano deploy.
The databases on development, staging and production are different, and I'm not concerned about trying to sync these up.
My Concern:
Many updates to plugins and the WordPress core also perform updates to the database when doing an auto-update via the admin. I am updating WordPress core and plugins locally on my development install. The code from these updates ends up being committed, pushed and deployed. However, when the code is deployed it simply adds/deletes/replaces changed files on the staging and production servers. Production and staging therefore miss any of the updates to the database, since these are usually part of the auto-update process - e.g. deactivate, update, activate (run any database updates).
My Questions:
Is my concern about the production and staging servers having the latest code but missing any database updates required for the latest code accurate?
If so, does anyone have thoughts on how I can modify the Capistrano deploy code to deactivate/reactivate plugins? What about changes in WordPress itself, e.g. 3.2 to 3.3?
If Capistrano isn't the tool for this - and I need to do it more "manually" by logging into the admin - is there a maintenance mode tool/plugin that will somewhat automate the deactivation/activation of the plugins so any updates upon activation are triggered?
Many Thanks,
Matt
It's important to note that you don't need to activate and deactivate plugins when you upgrade the WordPress core from version to version. Here is an explanation from Ryan Boren on why. Depending on the plugin, though, some of them may have an upgrade process built into their own upgrade - that is, the upgrade of the plugin, not of WordPress. Nonetheless, I'll go through your three questions and answer them as directly as I can.
1. Is my concern about the production and staging servers having the latest code but missing any database updates required for the latest code accurate?
Yes - when updating, if there is a change to the database schema, WordPress will not function properly unless the new schema exists. When you attempt to access the admin side of WordPress, if the db version is lower than what your WordPress version expects, it will redirect you to a database upgrade page.
WordPress sets a global called $wp_db_version in the /wp-includes/version.php file and maintains migration scripts that upgrade the database incrementally from each previous version to the next until the version number is up to date, seen here. Here is a simpler list in a FAQ showing how the revision numbers correlate to WordPress versions.
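As a rough sketch of the comparison involved (the 'db_version' option and the $wp_db_version global are the real names; the surrounding code is simplified):

    <?php
    global $wp_db_version;                           // set in wp-includes/version.php
    $installed = (int) get_option( 'db_version' );   // revision the database was last migrated to

    if ( $installed !== (int) $wp_db_version ) {
        // Schema out of date: admin requests get redirected to the database upgrade screen.
    }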
2. If so, does anyone have thoughts on how I can modify Capistrano deploy code to deactivate/reactivate of plugins?
As I said above, you don't typically need to activate/deactivate plugins after core upgrades, unless I suppose the plugin specifically requires that you do so. If schema changes in WordPress break a plugin, then the plugin developers will need to release a new version. When upgrading that plugin, it will be shut off and restarted, and it's those developers' responsibility to make sure everything that needs to take place does so.
However, you may need to deactivate/activate separately in deployed environments such as yours, since the actual upgrade process takes place on a different machine, and thus probably against a different database from the one it will ultimately be used on.
Perhaps the best thing to do would be to have your deployment script hit a URI of a plugin within WordPress - either a plugin you would write yourself which deactivates/activates plugins, or an existing one that already does it.
It's possible some existing plugins might handle parts of what you're looking for, but I take the key component of your question to be automation, and an avoidance of having to log into each environment and upgrade plugins one by one, so developing a plugin yourself that does exactly what you need might be the way to go. That is possible if you make use of the tools WordPress already provides:
activate_plugin()
activate_plugins()
deactivate_plugins()
validate_plugin()
Plugin_Upgrader class (maybe)
Look through the whole /wp-admin/includes/plugin.php file to see what you might find useful. Additionally, check out the code that actually handles plugins on the admin side in /wp-admin/plugins.php - just to see how it's done. You may want to stop the deactivation hooks from wiping out plugin configuration for plugins that clean up after themselves, so consider passing $silent as true when deactivating a plugin.
To make this really slick, you'll probably want to grab get_option('active_plugins') to see which plugins are already activated, and only run your script on those (make sure the plugin excludes itself from the process).
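A minimal sketch of what such a plugin could look like (the query flag and the hook used here are assumptions for illustration; deactivate_plugins(), activate_plugins(), get_option('active_plugins') and the $silent flag are the pieces mentioned above):

    <?php
    /*
    Plugin Name: Deploy Plugin Cycler (sketch)
    */

    // Hypothetical trigger: an admin (or the deploy script, authenticated as one)
    // requests /wp-admin/?cycle_plugins=1 after a deploy.
    add_action( 'admin_init', function () {
        if ( ! isset( $_GET['cycle_plugins'] ) || ! current_user_can( 'activate_plugins' ) ) {
            return;
        }

        // Currently active plugins, excluding this one.
        $active = array_diff(
            (array) get_option( 'active_plugins', array() ),
            array( plugin_basename( __FILE__ ) )
        );

        // $silent = true on deactivation, so deactivation hooks don't wipe plugin settings.
        deactivate_plugins( $active, true );

        // Re-activating runs each plugin's activation routines again.
        activate_plugins( $active );
    } );

The deploy script would then only need to request that URL with suitable admin credentials at the end of a deploy.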
3. What about changes in WordPress, eg, 3.2 to 3.3?
Changes from 3.2 to 3.3 should be thought of as no different from any other set of changes, so everything said here applies.
4. If Capistrano isn't the tool for this - and I need to do it more "manually" by logging into the admin - is there a maintenance mode tool/plugin that will somewhat automate the deactivation/activation of the plugins so any updates upon activation are triggered?
I don't think Capistrano will be doing any of the heavy lifting here - but it's certainly not in the way either. You should just need to be able to hit a URI within the plugin, and that should get things rolling within the application. The important thing is that all those functions obviously need to be available, so you can't just run it as an independent script.