Content versions missing when copying content across CQ5 environments - aem

I have a question about Adobe CQ5 content migration. We have two CQ5 environments, E1 and E2. We copied the content of E1's author instance to E2's author instance by creating a CRX content package. The content copy went well, but we are now realizing that we have lost the versions of our pages. For example, on E1 we have a page example.html with three versions (1.0, 1.2, 1.3), but on E2 we find no versions of example.html at all.
PS: We create versions manually on the author instance.

The Package Manager only takes the latest version of a page and does not copy its version history. The only way to have the versions on both instances is to use a cluster, where the whole repository is synchronized.
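To illustrate why the versions go missing: a content package is driven by its filter definition, and a typical filter only covers the site content, not the version storage. A rough sketch (the /content/mysite path is a made-up example):

```xml
<!-- META-INF/vault/filter.xml of a typical content package -->
<!-- /content/mysite is a hypothetical path; replace with your site root -->
<workspaceFilter version="1.0">
    <!-- Only the current page content below this root is serialized. -->
    <filter root="/content/mysite"/>
    <!-- Page version histories live under /jcr:system/jcr:versionStorage,
         which a filter like this never touches, and packaging that subtree
         is not a reliable way to move versions between instances. -->
</workspaceFilter>
```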

Related

Moodle plugin development and git repository

Inherited a moodle project that never had any kind of VCS, with some plugins installed from third parties, and a few modules developed in-house.
The problem is that I want to update Moodle, and can't just use a brand new copy, since in Moodle custom code lives within the "moodle" directory itself.
In other CMSes/frameworks, that code would be physically separated from the core code, and you could mostly update the core files by pulling from the appropriate repo and checking out the appropriate branch (with custom code living in a different repo, and third-party code either living in that repo or managed as a dependency).
Is there a way to organize custom moodle development (or downloads from third parties) so it's easy to separate "core" code from installed modules/themes?
We're using the .git/info/exclude file and list all plugins that are third-party or developed in-house there.
Moodle also has excellent documentation on handling plugins with git in general; check it out: https://docs.moodle.org/32/en/Git_for_Administrators
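For illustration, a minimal sketch of such an exclude file; the plugin paths below are examples, list whatever you actually have installed:

```
# .git/info/exclude -- per-clone ignore rules, not committed with the repository
# (example plugin paths; list the ones you actually have installed)
mod/attendance/
blocks/configurable_reports/
theme/mycompany/
local/mycustomisations/
```

With those entries in place, git status only reports changes to Moodle core files, which makes core upgrades much easier to review.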
I'm not aware of a similarly smart (and elegant) way of separating custom code from a default Moodle instance, even with Git.
In a custom Moodle instance you may have:
New plugins (self-developed or from third parties). You can see the list of additional plugins here:
your_moodle_systemadmin/plugins.php?contribonly=1 (or here: Home -> Site administration -> Plugins -> Plugins overview).
If you want an upgraded version of Moodle, you install it somewhere and then install the list of additional plugins on it. I would suggest checking whether the plugins have a newer version available and considering installing that.
Custom code (that is to say, someone made core changes to Moodle). Here I would compare the old code with the new one, or, even better (see the diff sketch after this list):
a) Compare the old customized system (MoodleOld Cust.) with a pristine original old system (MoodleOld Orig.)
b) Track all the core differences in your MoodleOld Cust. with some inline comments
c) Compare MoodleOld Cust. with your new system and pay attention only to the differences you marked in MoodleOld Cust.
d) Try to port the customizations to your new system, if wanted and/or necessary.
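A minimal sketch of steps a) to d) using plain diff and patch; the directory names are placeholders:

```sh
# a) compare the customized old system against a pristine copy of the same release
diff -ru moodle-old-orig/ moodle-old-cust/ > core-customizations.patch

# b)/c) review the patch -- every hunk is a core change that was made locally
less core-customizations.patch

# d) try the customizations against the new release first, then apply for real;
#    expect rejects where core code has moved or changed
patch -p1 --dry-run -d moodle-new/ < core-customizations.patch
```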

Upgrading Umbraco version 4.11.10 to 6.0.0

I want to upgrade my existing Umbraco 4.11.10 project to 7.5.4, and I decided to upgrade step by step in an incremental manner. When upgrading from 4.11.10 to 6.0.0 using the NuGet package and replacing the old DLL files with the newer ones, I am able to run both the back end and the front end, but in the back end I am not able to view the list of "content" and the list of "document types" from my existing project. It also looks like the back end still displays the version 4 UI, yet when I click on "about us" it reports version 6.0.0, as shown in the attached screenshots. Please help me find a solution. Thanks :)
Upgrading from 4.11 to the newest version of 7 should work, HOWEVER, if you're using custom data types, the chances are that some stuff is going to break, as v7 doesn't use UserControls for DataTypes anymore. Some data types can be straight swapped, some not so much. You should be able to see all the DocTypes and content etc though.
When you've swapped the DataTypes out, you may also need to reformat the data to match the new DataTypes so that you don't lose anything. This is something you'd probably have to do either with the API or directly in the database.
I've written some blog posts on upgrading Umbraco, have a read and see if they help:
http://www.attackmonkey.co.uk/blog/2015/10/upgrading-to-73-part-1-preparation
http://www.attackmonkey.co.uk/blog/2015/10/upgrading-to-73-part-2-the-upgrade
http://www.attackmonkey.co.uk/blog/2015/12/upgrading-to-73-part-3-switching-to-razor
http://skrift.io/articles/archive/umbraco-upgrade-strategies/

OpenText RedDot CMS Version Control

Does anyone know how you version / source control changes in RedDot CMS (OpenText)? Also, is there any best-practice advice for release management of changes from one RedDot environment to another RedDot instance? Any help or advice would be greatly appreciated.
There is best practice, but as you have probably realised, there aren't too many practitioners of RedDot these days. In case you should come back to this thread (or for someone else's benefit): versioning is built into the Template Manager, but has to be enabled. There's no source control integration, last time I checked, but we developed a prototype system that allows for the creation of templates in Visual Studio. The project to complete that has since died due to lack of commercial support, but some of the ideas may be useful for you if you want them.
I'll split the answer into two parts: versioning and migration between stages.
Versioning can only be done with the template history, or via an external service that grabs the templates on a regular basis or when triggered manually. At least for the Management Server there is no built-in service for "real" versioning or releases of more than just single templates/content classes, let alone including pages.
There are three ways of moving changes from dev to test or prod that I have seen often:
Two templates: Using two templates on one server, one called "Development" and the other one "Production". All new development is done on the "Development" template and moved to the other template as soon as it is finished. If elements differ between those templates, they need to be duplicated. This is typical of small installations without staging areas. Nowadays you will find only very few of those.
Partial tree export: Development is done on a dev server and the changes are exported as a partial tree. There is a special area in the project tree where pages are created for the templates that are to be moved over. These are exported, including the templates, and imported on the target server to override the existing ones.
Tool support: There are external tools for moving templates and content classes to other servers. There is e.g. SitePort (http://siteport.net, which can also move whole templates between RedDot servers, afaik) and the Sync Tool (http://www.erminas.de/en/products#synctool, which can compare and move single element attributes and/or single lines of templates; please note: this is not meant as advertisement, as the tool is made by us, but I do not know any other like it). Some companies also have custom development tools for this.

How to use Plone for Document Management?

I wish to create a document repository for my company. The reason is that my company has many documents and no version tracking in place. This means everyone is using different versions all the time.
Plone is something new to me; I got to know it from a good friend of mine, and sadly he is not around anymore to answer my questions. I believed in him and I wish to materialize his idea of using Plone as a document repository for my company.
I have installed Plone, managed to view the default Plone page, added all the company's usernames, and changed the logo to my company's logo. Now the biggest question is: how do I set up the document repository? What I have in mind is to create a "page" where users can add files, download files, search for files, and read their descriptions.
Any leads on how to go about this?
Reusable,
Same problem here. We started to use Plone as our main DMS 4 weeks ago (inserting existing docs at present).
For working copies, we use iterate (insert plone.app.iterate under eggs in your buildout.cfg).
For versioning, Products.CMFEditions. I believe this worked out of the box.
For creating new workflow, look into plone.app.workflowmanager and read the docs.
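For reference, a minimal buildout.cfg sketch along those lines; the section names and the admin user are just the common defaults, so adjust them to your own buildout:

```ini
# buildout.cfg (excerpt) -- minimal sketch, not a drop-in configuration
[buildout]
parts = instance

[instance]
recipe = plone.recipe.zope2instance
user = admin:admin
# plone.app.iterate adds working-copy support, plone.app.workflowmanager adds
# through-the-web workflow editing; Products.CMFEditions ships with Plone itself
eggs =
    Plone
    plone.app.iterate
    plone.app.workflowmanager
```

Re-run bin/buildout after editing, then activate the add-ons under Site Setup → Add-ons.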
In a previous question we asked, we were still looking at Dexterity, which has a lot going for it, but eventually we decided on adapting an existing content type based on Archetypes.
As for inserting files, as long as the description is OK they will be found through the built-in search functionality, but you might consider using Iterate, mentioned above, to make sure that nobody is editing the same file at the same time.
As you're new, as I am, the docs seem hard at first but are actually quite good.
And this book is still giving me the foundation we need to keep adding functionality.
Good luck
I think you should get pretty far with a vanilla Plone installation, without developing your own extensions or other customization add-on products. Therefore, I'd recommend you start with the Plone 4 User Manual to find out everything you can do out of the box.
As #Speediro mentioned, versioning support comes built in for the main content types (you don't actually see CMFEditions mentioned anywhere), but it's not activated for file uploads. As briefly mentioned in the manual, content items can be configured to have versioning enabled/disabled through the Site Setup → Plone Configuration panel under "Types".
Working Copy Support (plone.app.iterate) should also be there already waiting for activation on Site Setup's add-ons-panel.
Yet, before the Plone Collective (=community) Developer Docs or Professional Plone 4 Development, I'd recommend Practical Plone 3. It has a bit outdated graphics (because it was made for Plone 3), but it's a great next step after the user manual. E.g. how to define content rules to send e-mail notifications for content updates (still through the browser, without coding), or how to create custom forms using Products.PloneFormGen.
When you really need to write your own code, it'd be time for Professional Plone 4 and the Collective Docs.
If you can't have a developer to manage your stuff, I would recommend staying on official Plone, with no custom code, and using only widely used add-ons.
I mean:
stay on the default theme (sunburst)
use the default plone content types
only customize the logo
activate plone.app.iterate in the add-ons control panel
do not play with workflows, because you need to know what you are doing. By default a file has the visibility of its folder; that means if you can see the folder, you will be able to see all the files inside. You can just activate the default workflow for files under the ZMI (or script it; see the sketch at the end of this answer)
use the collective.quickupload add-on
Your database will quickly grow to a huge size, because Plone does indexing and indexing means lots of space, so you will have to handle this as a system administrator.
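If you do decide to assign the default workflow to files and would rather script it than click through the ZMI, a rough sketch using the stock CMF API looks like this (run it from a bin/instance debug prompt; the site id "Plone" is an assumption):

```python
# Run inside "bin/instance debug"; the "app" object is provided by the prompt
# and "Plone" is an assumed site id -- adjust to your own.
from Products.CMFCore.utils import getToolByName
import transaction

portal = app.Plone
wf_tool = getToolByName(portal, 'portal_workflow')

# Give File objects the stock simple publication workflow
wf_tool.setChainForPortalTypes(('File',), 'simple_publication_workflow')

# Recalculate security settings on existing objects
wf_tool.updateRoleMappings()

transaction.commit()
```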

Salesforce - How to Deploy between Environments (Sandboxes, Live etc)

We're looking into setting up a proper deployment process.
From what I've read, there seem to be four methods of doing this:
Copy & Paste -- We don't want to do this
Using the "Package" mechanism built into the Salesforce Web Interface
Eclipse Force IDE "Deploy to Server" option
Ant Script (haven't tried this one yet)
Does anyone have advice on the limitations of the various methods?
Can you include everything in a Web Interface package?
We're looking to deploy the following items:
Apex Classes
Apex Triggers
WorkFlows
Email Templates
MailMerge Templates -- Can't seem to find these in Eclipse
Custom Fields
Page Layout
RecordTypes (can't seem to find these in Website or Eclipse)
PickList items?
SControls
I recommend the Force.com Migration Tool.
For reference:
Force.com Migration Tool Documentation
Migration Tool Guide
The Migration Tool allows you to use Ant targets to move your metadata between salesforce.com organizations.
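For illustration, a minimal sketch of what that looks like; the credentials, server URLs and file layout below are placeholders, and the sf tasks come from the ant-salesforce.jar that ships with the Migration Tool:

```xml
<!-- build.xml (sketch) -- assumes ant-salesforce.jar from the Migration Tool is on Ant's classpath -->
<project name="deploy-demo" default="deployToProd" xmlns:sf="antlib:com.salesforce">

    <!-- Pull metadata from the sandbox into ./src, driven by src/package.xml -->
    <target name="retrieveFromSandbox">
        <sf:retrieve username="user@example.com.sandbox"
                     password="password+securitytoken"
                     serverurl="https://test.salesforce.com"
                     retrieveTarget="src"
                     unpackaged="src/package.xml"/>
    </target>

    <!-- Push the same metadata to production -->
    <target name="deployToProd">
        <sf:deploy username="user@example.com"
                   password="password+securitytoken"
                   serverurl="https://login.salesforce.com"
                   deployRoot="src"/>
    </target>
</project>
```

The src/package.xml manifest referenced above lists the metadata types to move, e.g. ApexClass, ApexTrigger, Workflow, EmailTemplate, Layout, Scontrol, and CustomObject (which carries custom fields and record types).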
I can speak to this from recent painful experience.
Packaging: this is a very old method that predates the metadata API on which both Ant and Eclipse rely. In our experience, packaging's only benefit is in defining your project. If you're using Eclipse (which we do, and I recommend), you can define your project as being based on a particular package. As long as you remember to add new components to your package, your project hangs together.
One thing that baffled us for a while, by the way, is the many uses of the word "package". We've noted the following:
Installed packages: these come in managed and unmanaged flavors and are really, in the words of a recent post on the SFDC boards, for ISVs to deploy their stuff into various unknown orgs "out there". Both managed and unmanaged packages have limitations that make them unsuitable and unneeded for deployment from development to production within an org, or in any case where you're doing custom development and don't intend to distribute code to a large anonymous base.
Non-installed packages: this is what you see when you click "Packages" in the web UI. These, that we sometimes call "development packages", seem to be just a convenient way to keep a project definition together.
Anyway, the conclusion I'm coming toward is that our team (custom development, not an ISV) does not need packages in any form.
The other forms of deployment, both Eclipse and Ant, rely on the Metadata API. In theory they are capable of exactly the same things. In reality they appear to be complementary. The Force.com migration tool, built into the Force.com IDE for Eclipse, makes deployment as easy as it can be (which is not very) and gives you a nice look at what it intends to deploy. On the other hand, we've seen Ant do some things the IDE could not. So it's probably worthwhile to learn both.
The process we're leaning toward is to keep all our projects in SVN, and use the SVN structure as the project definition (Eclipse will work with this and respect it). And we use Eclipse and sometimes Ant for migration. No apparent need for packages anywhere.
By the way, one more thing to be aware of: not all components are migratable. Some things must be reconfigured by hand in the target environment. One example would be time-based workflows. Queues and Groups also need to be hand-created, I think. Likewise, the metadata API can't directly process field deletions, so if you deleted a field in your source, you need to delete it by hand in the target. There are other cases as well.
Hope that's useful --
-- Steve Lane
As of Spring '09, mail merge templates are not supported in metadata but record types are. You will find record types as an XML element in the file for the object they belong to (see the sketch below). Everything else on your list is supported, with one small exception: picklist values for standard fields cannot be edited in Spring '09. Stay tuned for news on the Summer '09 feature announcements.
Update: Standard picklists on standard objects are now metadata exposed (as of API v16):
http://www.salesforce.com/us/developer/docs/api_meta/Content/meta_picklist.htm
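For illustration, a record type inside an object file looks roughly like this; the object, field, and record type names are made up:

```xml
<!-- objects/Invoice__c.object (sketch) -- names are hypothetical -->
<CustomObject xmlns="http://soap.sforce.com/2006/04/metadata">
    <fields>
        <fullName>Status__c</fullName>
        <label>Status</label>
        <type>Picklist</type>
        <picklist>
            <picklistValues>
                <fullName>Open</fullName>
                <default>true</default>
            </picklistValues>
            <picklistValues>
                <fullName>Closed</fullName>
                <default>false</default>
            </picklistValues>
        </picklist>
    </fields>
    <recordTypes>
        <fullName>Domestic</fullName>
        <label>Domestic</label>
        <active>true</active>
    </recordTypes>
</CustomObject>
```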
Otherwise, Steve Lane's response is pretty accurate. The advantage of using unmanaged packages (what Steve calls non-installed packages) is that when you add metadata to a package, the metadata it depends on will automatically be added. So it's easier to grab a full set of metadata containing all its dependencies. If you are repeatedly moving metadata from one org (sandbox) to another (production), Steve's approach is probably the best way to go and certainly the most common today. I frequently use unmanaged "developer" packages to move something I've developed in one org to another unrelated org. For my purpose, I like to have the package defined in the org as opposed to an Eclipse project / SVN. But that probably doesn't make sense if you are doing team development across many dev/sandbox orgs and are using SVN already.
Jesper
Another option is to use Change Sets if you want to move metadata from a sandbox to production.
There are currently some limitations on how change sets can be used:
Sending a change set between two organizations requires a deployment connection. Currently, change sets can only be sent between organizations that are affiliated with a production organization, for example, a production organization and a sandbox, or two sandboxes created from the same organization.
From the docs:
A package must be managed for it to be published publicly on AppExchange, and for it to support upgrades. An organization can create a single managed package that can be downloaded and installed by many different organizations. They differ from unmanaged packages in that some components are locked, allowing the managed package to be upgraded later. Unmanaged packages do not include locked components and cannot be upgraded. In addition, managed packages obfuscate certain components (like Apex) on subscribing organizations, so as to protect the intellectual property of the developer.
The advantage of a managed package is that it allows you to easily version and distribute things across multiple SFDC organizations.