2sxc App Export/Import - Web Log app modified to a Book Reading Log - import

Environment(s): DNN 9.10.2 / 2sic_2SexyContent_14.07.04 install / Oqtane 3.1.4
I've modified the 2sxc Blog app (v4) to be a book reading log.
I've created 608 entries and wish to export/import them to an Oqtane 3.1.4 site.
I attempted the export from 2sxc without noting the module version, and when importing the data I get the following error:
Could not import the app / package: Sequence contains more than one matching element.
This is similar to https://github.com/2sic/2sxc/issues/460.
I've even installed 2sxc on another vanilla DNN 9.10.2 site and attempted the import there (with the latest version of 2sxc, 14.7.4) and got the same error.
I've seen in the GitHub issue noted above that you could analyze the import process in your development environment. If so, I'd be happy to send you the import zip file created from the App Export feature in 2sxc App Settings.
I have looked pretty deeply for any guidance on configuring a development environment for 2sxc and haven't been able to find anything. Does anyone have good guidance on how to set up a development environment for the 2sxc modules? It would be great to include steps for setting this up in Oqtane, as I am moving all of my development energy to Oqtane module development going forward - Oqtane is the next logical step for DNN developers.
It also goes without saying that 2sxc is one of the best modules ever for DNN, and it offers a migration path away from DNN when you find that time coming. I've just had my epiphany moment with Oqtane 3.1.4, realizing it is at about the same point DNN 2.1.2 was - a mature, extensible framework from which to build your .NET Core apps.

It's great to hear others are experimenting with Oqtane - awesome!
Your first step should be to check the Insights log on the API call that does the import. It will probably show you exactly where it failed.
My guess is that it has to do with languages, which are enabled/disabled differently in DNN and Oqtane.

Related

Domino 8.5.3 - Create an organization extension library / codestore

This is a project I've been working on off and on for months and I feel like I'm pretty close, but I just can't seem to get past the final hurdle.
The goal is to develop an organization extension library that contains both internal and 3rd party code that we frequently rely on.
History
As a test project, I started with Apache Poi because that is already in wide use in our environment. I have a plug-in and feature built just from the Poi .jars that allows me to build our current Poi applications as long as I add the plug-in (from my workspace) to my build path. The apps work on the servers because we have already distributed the Poi .jars by manually copying them.
The next step is taking that plug-in and getting it into an update site so that all of the servers and developers can synchronize on one version. I found and followed these two excellent blog articles (that I wish had existed when I started this project):
http://www.dalsgaard-data.eu/blog/wrap-an-existing-jar-file-into-a-plug-in/
http://www.dalsgaard-data.eu/blog/deploy-an-eclipse-update-site-to-ibm-domino-and-ibm-domino-designer/
The caveat is that the articles are written for Domino 9 and we are running 8.5.3 here, but that only matters in the last (installation) step.
Current
This brings us to the problem. All of the above seems to have worked great up to a point. I can install my feature to my designer client from the eclipse update site and it works great. However, the install is failing when I import that into our updatesite.nsf database. This means that while the developers can all install from the updatesite if I put it on a network drive, that doesn't deploy updates to our servers.
The problem is that when I try to install from the .nsf update site, the Eclipse Updater just hangs. I've let it go for well over an hour and eventually Notes becomes completely unresponsive.
So the question is, is there anything I might have done wrong, either in the development of the plug-in or server configuration that might be causing this issue?
Additional Info
I'm looking at the OSGi console and that is largely unhelpful. I am getting the following error as I'm trying to install:

    SEVERE Could not access digest on the site: no protocol: 0/5B004DDD5E38F3FF85257CAF004C72C7/$file/digest.zip ::class.method=unknown ::thread=Worker-7 ::loggername=org.eclipse.update.core
I could generate dumps if that would be useful.
Security is also locked down fairly tight here. It could be a security issue - is there a way to troubleshoot that? Once I get to the hang I'm just stuck guessing.
This has been edited for clarity and to update information
I know that this post is over 5 years old, but for those who find this and are trying to resolve the error

    SEVERE Could not access digest on the site: no protocol:

it is due to the update site project not having the URL of the Domino updatesite.nsf added to the Archives tab of the site.xml.
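For illustration, here is roughly what such an entry looks like in site.xml (everything in this fragment - the server name, file names, IDs and versions - is made up; the Archives tab of the site manifest editor maintains the archive elements):

    <site>
       <!-- Hypothetical archive mapping: tells Eclipse where to actually fetch
            the jar, via the URL the updatesite.nsf serves it from. -->
       <archive path="plugins/com.example.poi_3.9.0.jar"
                url="http://yourdomino/updatesite.nsf/plugins/com.example.poi_3.9.0.jar"/>
       <feature url="features/com.example.poi.feature_1.0.0.jar"
                id="com.example.poi.feature" version="1.0.0"/>
    </site>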
I found the updatesite.nsf also needs to be anonymously accessible, as no credentials are prompted for or passed through to the Domino server hosting the updatesite.nsf database (at least from DDE); your mileage may vary from Eclipse. So if anonymous connections are blocked on the Domino server, you will be out of luck.
To develop a plug-in you really want to have 3 projects:
the plug-in
the feature
the update site
Of course a feature can contain more than one plug-in (and probably should), and an update site can contain more than one feature (and probably should). Once you have an update site project, it features a handy "Build All" button that makes sure the plug-in, feature and update site get compiled in one go. And that button is what you really want.
You can point your Domino Designer (or local Domino server) to the feature directory using a setting. Add a plain-text .link file to framework/rcp/eclipse/links that contains the path to your install site - it then picks up the features and plug-ins from there. After a build you need to restart Designer/the server to activate the updated feature.
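For example, a file named com.example.updatesite.link (the file name is arbitrary, the path is made up) dropped into framework/rcp/eclipse/links would contain a single line:

    path=C:/work/com.example.updatesite

where the directory pointed to contains your built update site (typically with an eclipse folder holding the features and plugins subfolders).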
For the Domino server, the approach using an updatesite.nsf and the respective notes.ini setting makes the most sense (to me). An HTTP restart is required. Lazy people script the whole thing.
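If memory serves, the notes.ini variable in question looks like this (the path is relative to the Domino data directory; verify the exact name against the documentation for your version):

    OSGI_HTTP_DYNAMIC_BUNDLES=updatesite.nsf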
I still don't have a great answer for this, but I believe the issue is related to the environment here. I don't have the authority to change the environment, even if I were able to conclusively demonstrate it is the cause of this problem, so it is a moot point. All I can say is that at least one administrator computer had no issue installing from the update site.
For me, the solution for distributing the update site is to put it on a network drive and have everyone install it from there. The server has no problem using it from the updatesite.nsf.

Orchard cms multiple deployment environments

I've recently started to develop some sites using orchard, which is great so far, however I'm a bit confused about how to set up my deployment environments properly.
Normally I would set up a local dev site, test, staging and live, using web.config transforms to alter connection strings and other app settings.
I've recently been using AppHarbor for hosting and I think they are brilliant.
There's a guide to setting up Orchard on AppHarbor here.
Although I have to agree with the comment here about all the posts I've read expecting me to want to use and love WebMatrix!
Although most development in Orchard will be done by creating modules, I think for at least one site they will want at least staging and live environments.
What's the best way to set up and migrate from one environment to the next?
I've looked at the multi-tenancy project, but that seems to address a different issue.
I'd be interested to know what others have done, as well as any recommendations for modular Orchard development and in-house source control - for those modules only.
I use the Import Export module for exporting and importing content in my DTAP environments. Make sure to implement/override Importing and Exporting in your drivers (see: Custom part properties missing in export Orchard 1.6 /plug ;) ).
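As a sketch of what those overrides look like in an Orchard 1.x driver (MyPart and its Subtitle property are made up for the example):

    using Orchard.ContentManagement;
    using Orchard.ContentManagement.Drivers;
    using Orchard.ContentManagement.Handlers;

    // Hypothetical part with a single Subtitle property, for illustration.
    public class MyPart : ContentPart {
        public string Subtitle { get; set; }
    }

    public class MyPartDriver : ContentPartDriver<MyPart> {
        protected override void Exporting(MyPart part, ExportContentContext context) {
            // Write the part's data as an attribute on this part's export element.
            context.Element(Prefix).SetAttributeValue("Subtitle", part.Subtitle);
        }

        protected override void Importing(MyPart part, ImportContentContext context) {
            // Attribute() returns null when the attribute is missing, so older
            // exports that lack the value still import without errors.
            var subtitle = context.Attribute(Prefix, "Subtitle");
            if (subtitle != null)
                part.Subtitle = subtitle;
        }
    }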
Widgets, however, should be done manually AFAIK. They don't export and import well with that module.
As for modules and themes: just copy the folder. Same goes for media.

How to upgrade Wordpress and plugins when deploying using Capistrano?

I'm hoping someone can confirm whether or not the following scenario is an issue with deploying updates to WordPress sites and, if so, do you have a solution on how to best manage this?
The basics:
I have a local development WordPress Multisite project for which I use Git and Capistrano to deploy to remote staging and production servers.
Everything EXCEPT the uploads and blogs.dir directories (in wp-content) is under version control. Yes, the WordPress core, themes, plugins, etc. are updated locally, committed, pushed and deployed. This means that I have to log in and activate plugins initially - they are simply installed via the Capistrano deploy.
The databases on development, staging and production are different and I'm not concerned about trying to sync these up.
My Concern:
Many updates to plugins and the WordPress core also perform updates to the database when doing an auto-update via the admin. I am updating the WordPress core and plugins locally on my development install. The code for these updates ends up being committed, pushed and deployed. However, when the code is deployed it is simply adding/deleting/replacing changed files on the staging and production servers. Production and staging are missing any of the updates to the database, since these are usually part of the auto-update process - e.g., deactivate, update, activate (run any updates to the database).
My Questions:
1. Is my concern about the production and staging servers having the latest code but missing any database updates required for the latest code accurate?
2. If so, does anyone have thoughts on how I can modify the Capistrano deploy code to handle the deactivation/reactivation of plugins? What about changes in WordPress, e.g., 3.2 to 3.3?
3. If Capistrano isn't the tool for this - and I need to do it more "manually" by logging into the admin - is there a maintenance-mode tool/plugin that will somewhat automate the deactivation/activation of the plugins so any updates upon activation are triggered?
Many Thanks,
Matt
It's important to note that you don't need to activate and deactivate plugins when you're upgrading the WordPress core from version to version. Here is an explanation from Ryan Boren on why. Depending on the plugin, though, some of them may have an upgrade process built into their upgrade - that is, the upgrade of the plugin, not of WordPress. Nonetheless, I'll go through your three questions and answer them as directly as I can.
1. Is my concern about the production and staging servers having the latest code but missing any database updates required for the latest code accurate?
Yes, when updating, if there is a change to the database schema, then WordPress will not function properly unless the new schema exists. When attempting to access the admin side of WordPress, if the db version is lower than your WordPress version expects, it will redirect you to a database upgrade page.
WordPress sets a global called $wp_db_version in the /wp-includes/version.php file and maintains migration scripts to upgrade the database incrementally from each previous version to the next until the version number is up to date, seen here. Here is a simpler list in a FAQ showing how the revision numbers correlate to WordPress versions.
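Conceptually, the check looks something like this (simplified from what the admin bootstrap actually does):

    <?php
    // $wp_db_version comes from /wp-includes/version.php; the schema revision
    // of the installed database is stored in the options table as 'db_version'.
    global $wp_db_version;

    if ( get_option( 'db_version' ) != $wp_db_version ) {
        // wp-admin redirects to /wp-admin/upgrade.php here, which walks the
        // incremental migrations until the stored revision is current.
    }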
2. If so, does anyone have thoughts on how I can modify the Capistrano deploy code to handle the deactivation/reactivation of plugins?
As I said above, you don't typically need to activate/deactivate plugins after core upgrades, unless I suppose the plugin specifically requires that you do so. If the schema changes in WordPress break a plugin, then the plugin developers will need to release a new version. When upgrading that plugin, it will be shut off and restarted, and it's those developers' responsibility to make sure everything that needs to take place does so.
However, you may need to deactivate/activate separately in deployed environments such as yours, since the actual upgrade process is taking place on a different machine, and thus probably against a different database from the one it will ultimately be used on.
Perhaps the best thing to do would be to have your deployment script hit a URI of a plugin within WordPress - a plugin you would write that deactivates/reactivates plugins, or an existing one that already does it.
It's possible some existing plugins might handle parts of what you're looking for, but I take the key component of your question to be automation, and an avoidance of having to log into each environment and upgrade plugins for each one, so developing one yourself that does exactly what you need might be the way to go. Developing a plugin is possible if you make use of the tools WordPress already provides:
activate_plugin()
activate_plugins()
deactivate_plugins()
validate_plugin()
Plugin_Upgrader class (maybe)
Look through the whole /wp-admin/includes/plugin.php file to see what you might find useful. Additionally, check out the code that actually handles plugins on the admin side in /wp-admin/plugins.php - just to see how it's done. You may want to stop the deactivation hooks from firing (some plugins clean up their configuration when deactivated), so consider passing $silent as true when deactivating the plugins.
To make this really slick, you'll probably want to grab get_option('active_plugins') to see which plugins were already activated, and only run your script on those (make sure the plugin excludes itself from the process)
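Pulling those pieces together, a minimal sketch of such a plugin might look like the following (the query variable, the secret and the plugin itself are all hypothetical - add real authentication before using anything like this):

    <?php
    /*
    Plugin Name: Deploy Plugin Cycler (hypothetical sketch)
    Description: Deactivates and reactivates all other active plugins so that
                 their activation/upgrade routines run after a deploy.
    */

    add_action( 'init', function () {
        // Trigger via e.g. http://example.com/?cycle_plugins=SOME_SECRET
        if ( ! isset( $_GET['cycle_plugins'] ) || $_GET['cycle_plugins'] !== 'SOME_SECRET' ) {
            return;
        }

        // plugin.php is only loaded automatically in wp-admin.
        require_once ABSPATH . 'wp-admin/includes/plugin.php';

        // All currently active plugins, minus this one.
        $plugins = array_diff(
            (array) get_option( 'active_plugins', array() ),
            array( plugin_basename( __FILE__ ) )
        );

        // $silent = true suppresses the deactivation hooks, so plugins that
        // clean up their settings on deactivation keep their configuration.
        deactivate_plugins( $plugins, true );

        // Let the activation hooks run normally so any upgrade routines fire.
        activate_plugins( $plugins );
    } );

Your Capistrano recipe could then hit that URI (with curl, for instance) as a post-deploy step.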
3. What about changes in WordPress, eg, 3.2 to 3.3?
Changes from 3.2 to 3.3 should be thought of as no different from any other set of changes, so everything said here applies.
4. If Capistrano isn't the tool for this - and I need to do it more "manually" by logging into the admin - is there a maintenance mode tool/plugin that will somewhat automate the deactivation/activation of the plugins so any updates upon activation are triggered?
I don't think Capistrano will be doing any of the heavy lifting here - but it's certainly not in the way either. You should just need to be able to hit a URI within the plugin, and that should get things rolling within the application. The important thing is that obviously all those WordPress functions need to be available, so you can't just run it as an independent script.

How do I add Platform Update 1 to my bootstrapper?

I have been playing around with the new StateMachine workflow that has been added to Windows Workflow as part of Platform Update 1 (see also). I now want to look at installing what I've created and therefore need to make sure my bootstrapper is up to date. In the future I will be moving to WiX, but right now, for the purposes of prototyping, I'm just using a regular Setup and Deployment project and its bootstrapper support.
The list of standard prerequisites does not include PU1 as an option. How, then, can I add support for it?
Update
I found this answer on Stack Overflow regarding custom prerequisites, which led me to this article on MSDN, which led me to creating my own prerequisite. However, I got a new error about mismatched framework requirements. I suspect I need to pick apart the multi-targeting support and the existing .NET Framework prerequisite package to see how to make a new prerequisite that will work correctly.
I've had a stab at creating my own bootstrapper packages for this. The results are available to download here. Note that these are entirely untested and provided as-is - use at your own risk. However, feedback is welcome. Hopefully Microsoft will provide an official solution.
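For reference, the skeleton of a bootstrapper package is a product.xml dropped under the SDK's Bootstrapper\Packages folder. The fragment below is a rough, untested sketch - the product code and installer file name are placeholders, and the detection logic (which is the hard part, per the linked questions) is deliberately left as a comment:

    <?xml version="1.0" encoding="utf-8"?>
    <Product xmlns="http://schemas.microsoft.com/developer/2004/01/bootstrapper"
             ProductCode="Custom.NetFx4.PlatformUpdate1">
      <PackageFiles>
        <PackageFile Name="NDP40-KB2478063-x86-x64.exe"/>
      </PackageFiles>
      <Commands Reboot="Defer">
        <Command PackageFile="NDP40-KB2478063-x86-x64.exe" Arguments="/q /norestart">
          <!-- An InstallConditions block with a registry check belongs here,
               so the package is bypassed when PU1 is already installed. -->
          <ExitCodes>
            <ExitCode Value="0" Result="Success"/>
            <ExitCode Value="3010" Result="SuccessReboot"/>
            <DefaultExitCode Result="Fail" FormatMessageFromSystem="true"/>
          </ExitCodes>
        </Command>
      </Commands>
    </Product>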
See How to detect if the .NET Framework Platform Update 1 is installed
Is the Microsoft .NET Framework 4 Platform Update 1 - Runtime Update (KB2478063) what you are looking for? See here for the download.

How do I create a build for a legacy system?

We are currently maintaining a Perl web application and we are slowly trying to bring it into the modern age.
We want to be able to build our application so that when a new developer comes along we can just give them a copy of the build and they can have a local copy of the application with minimal fuss.
Does anyone have any experience of creating a build for a legacy web application that could offer some pro tips?
Start by authoring an ordinary CPAN distribution: Perl modules go into lib, and so on. This is described in perlnewmod and the documents referenced in its "See also" section.
Use Module::Build as the build system. You extend it for the extra stuff you want to install, for example template files; this is described in the cookbook.
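A minimal Build.PL for such a distribution might look like this (the module name and prerequisites are made up):

    use strict;
    use warnings;
    use Module::Build;

    my $build = Module::Build->new(
        module_name => 'My::WebApp',
        license     => 'perl',
        requires    => {
            'perl'     => '5.8.0',
            'Template' => 0,    # example prerequisite
        },
    );
    $build->create_build_script;

Installing extra file types (templates, static assets) is done by subclassing Module::Build and registering a new element type; the cookbook mentioned above shows the pattern.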