Magento 2 migration process with site improvements

We are currently migrating from Magento 1.9.0.1 to Magento 2.1.x. As part of the project, we are also taking the opportunity to improve our catalog, design, UX, and a few other components. We are at the point where we have successfully used the data migration tool to migrate the data to our dev environment, and we have modified the catalog to improve the attribute sets, design, and UX.
I have tried using the data migration tool's "delta" option, but unfortunately it breaks because the attributes now differ, which makes sense given our modified catalog.
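For reference, the delta run I attempted looked roughly like this (the config path is an assumption for a 2.1.x-era install of the tool; adjust for your edition and version):

    bin/magento migrate:delta vendor/magento/data-migration-tool/etc/ce-to-ce/1.9.0.1/config.xml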
I need to choose a direction to put my effort toward, and I suspect there are other folks out there also migrating to Magento 2 and improving their site during the process. My goal here is to gather ideas and solutions for our own project, but also to share those solutions to help others and future migrations.
Production (1.9.0.1)
Still live and collecting orders/customers.
Development (2.1.4)
Used the data migration tool to import the first set of data from 1.9.0.1.
Changed the attribute sets, products, and categories to improve our Magento installation.
Added new modules for shipping and payment.
OPTION A: Try to get the delta command working in the data migration tool
The idea is to modify the config and mappings (maybe the code) in the data migration tool to get the "delta" migration to work with the new modified catalog.
OPTION B: Migrate customers and orders from 1.9.0.1 production to 2.1.4
The idea is to modify the config and mappings (maybe the code) in the data migration tool to pull ONLY the customer and order entities (with their associated attributes).
OPTION C: Start from scratch, run the data migration tool, import the catalog
The idea here is to use a base Magento 2 install, run the data migration tool with production data, then export the catalog from our dev site and import into the new production site.
If you have performed an M1->M2 migration and have thoughts about which option you used (or would use), it would be helpful to talk this through. Any help would be greatly appreciated.
Delta migration reference:
http://devdocs.magento.com/guides/v2.0/migration/migration-migrate-delta.html
Best,
Gary

After doing a lot of research and testing, we decided to go with OPTION B, "Migrate customers and orders from 1.9.0.1 production to 2.1.4". One thing we realized was that we didn't need any of the customers or orders in our development environment, since they were only the records from the initial migration plus test orders.
Please note, the following information is NOT all-inclusive, as it would take several pages to share that level of detail. These are the broad steps we followed over several days of test runs and several dozen hours of tweaking and testing.
Here is the process we followed:
Step 1: Created a new environment, which we called "stage", to hold the development code and data plus the migrated production data.
Step 2: Copied the development environment into stage.
Step 3: Wrote a script to truncate the stage tables involved with customers and orders.
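For illustration, the truncation script was along these lines (a minimal sketch; the real table list is much longer, and the database name is a placeholder):

    mysql -u magento -p stage_db -e "
      SET FOREIGN_KEY_CHECKS = 0;
      TRUNCATE TABLE customer_entity;
      TRUNCATE TABLE customer_address_entity;
      TRUNCATE TABLE sales_order;
      TRUNCATE TABLE sales_order_grid;
      TRUNCATE TABLE quote;
      -- ...plus the remaining customer_*, sales_*, and quote_* tables
      SET FOREIGN_KEY_CHECKS = 1;"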
Step 4: In the stage environment, copied the 1.9.0.1 folder in the migration tool to 1.9.0.1-phase2.
Step 5: Modified the config.xml and commented out every step OTHER THAN the "Customer Attributes Step", "Map Step", "Log Step", and "OrderGrids Step".
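In outline, the data-mode steps section of config.xml ended up something like this (step bodies elided; exact step titles vary slightly by tool version):

    <steps mode="data">
        <step title="Customer Attributes Step">...</step>
        <step title="Map Step">...</step>
        <step title="Log Step">...</step>
        <step title="OrderGrids Step">...</step>
        <!-- EAV Step, Url Rewrite Step, Ratings Step, etc. commented out -->
    </steps>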
Step 6: Modified the map.xml to ignore any tables outside of the customer and order data that we didn't need to migrate, such as "design_*", "eav_*", "catalog_*", and "cms_*", plus about 16 more ignore statements.
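The ignore entries use the tool's document_rules syntax, roughly like this (abridged; check the map.xsd bundled with the tool for the exact schema):

    <source>
        <document_rules>
            <ignore><document>design_*</document></ignore>
            <ignore><document>eav_*</document></ignore>
            <ignore><document>catalog_*</document></ignore>
            <ignore><document>cms_*</document></ignore>
            <!-- ...about 16 more ignore statements -->
        </document_rules>
    </source>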
Step 7: Put production into maintenance mode, took a fresh copy of the production database, and loaded it onto our non-production server (in a database separate from dev and stage, called m1).
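The copy itself was plain mysqldump (hostnames, paths, and credentials here are placeholders):

    # Magento 1 maintenance mode is just a flag file in the web root
    ssh prod 'touch /var/www/html/maintenance.flag'
    mysqldump -h prod-db -u root -p magento19 > magento19.sql
    mysql -h nonprod-db -u root -p -e 'CREATE DATABASE m1'
    mysql -h nonprod-db -u root -p m1 < magento19.sql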
Step 8: Ran the migrate:data command with the "-r" flag, pointed at the phase2 config directory mentioned in step 4.
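In other words, something like this (the exact path depends on where the tool lives in your install):

    bin/magento migrate:data -r vendor/magento/data-migration-tool/etc/ce-to-ce/1.9.0.1-phase2/config.xml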
Step 9: We had to adjust the AUTO_INCREMENT values of the sequence tables to match production.
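Magento 2 keeps per-store sequence tables (sequence_order_1, sequence_invoice_1, and so on). The values below are illustrative; set each to one above the highest increment ID in production:

    mysql -u magento -p stage_db -e "
      ALTER TABLE sequence_order_1 AUTO_INCREMENT = 100001235;
      ALTER TABLE sequence_invoice_1 AUTO_INCREMENT = 100000918;
      ALTER TABLE sequence_shipment_1 AUTO_INCREMENT = 100000761;
      ALTER TABLE sequence_creditmemo_1 AUTO_INCREMENT = 100000042;"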
Step 10: Tested stage to confirm it contained exactly what we wanted: the catalog and configuration from dev, and the customers and orders from production.
Step 11: Migrated stage to production.
If anybody else is migrating from Magento 1 to Magento 2 and modifying the site's catalog as part of the project, I hope this gives you a direction to follow. If anybody has a better method than the one we followed, please share it to help future migration teams.
Best of luck!
Gary

Related

Salesforce DX: Single Project with multiple package directories vs. Multiple projects

We are currently working on the architecture of our Salesforce DX project. We have an extensive codebase of existing customizations and are planning to turn them into multiple Unlocked Packages to make everything more modular. Of course, not everything is subject to being packaged; some features would stay unpackaged.
And the question is: should this be a single project (with multiple package directories inside and a single Git repo), or is a project per feature preferable (multiple Git repos)? How would you manage dependencies between packages and the unpackaged pieces?
Could you please advise?
The Salesforce CLI can run from anywhere, but certain commands are required to execute in the context of a Salesforce project directory. This includes all the commands that perform packaging and deploy code to an org, which is what drives any CI/CD process you might use.
As I imagine trying to arrange a CI process spread across multiple projects, each with its own project folder, it seems like it would add unnecessary complexity.
Fundamentally, unlocked packages were designed to share a single Salesforce project. So unless you've found a reason otherwise, going that direction is the right move.
The Salesforce developer evangelist team maintains a sample app built using the multiple-package model, called "Easy Spaces". I'd suggest looking at the sfdx-project.json file there to understand how to arrange and identify package dependencies.
You may be able to infer some ways to organise the code in your project accordingly.
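For a rough idea, a package directory with a dependency in sfdx-project.json looks something like this (abridged and from memory of the Easy Spaces layout; see the repo for the real file):

    {
      "packageDirectories": [
        { "path": "es-base-objects", "package": "ESObjects", "versionNumber": "1.0.0.NEXT", "default": false },
        {
          "path": "es-base-code",
          "package": "ESBaseCodeLWC",
          "versionNumber": "1.0.0.NEXT",
          "default": false,
          "dependencies": [{ "package": "ESObjects", "versionNumber": "1.0.0.LATEST" }]
        }
      ],
      "sourceApiVersion": "45.0"
    }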
How to make all those decisions is far too much for a single answer. But this YouTube video features a customer development team leader sharing how they rearchitected their codebase to use unlocked packages.
If you have any problems as you get started, there are also a number of questions already asked on the Salesforce Stack Exchange site.

How to refresh NetSuite sandbox code (only) from production?

I realize that we can refresh our sandbox from production, but we don't want to refresh the entire sandbox; instead we want to refresh only NetSuite SuiteScript, NetSuite Forms, and UI Objects.
NetSuite's proprietary infrastructure/code and the challenge it brings
I resisted asking this question for several weeks, thinking it was too basic, but it doesn't appear that way. After working with NetSuite for a while, it has become clear that the line between source code and data is blurry, and in my opinion this is exactly what makes refreshing code challenging.
I've also learned that storing NetSuite code in version control software is next to impossible (for all of the code), which leads me to believe that my desire to refresh code only might be impossible as well. I have to wonder how IT shops bound by SOX compliance are able to satisfy auditors when it comes to controlling changes to business logic.
The real question and reason for refreshing the sandbox code
My motivation for refreshing sandbox code is that we are encountering unexpected behavior in our sandbox accounts with certain forms (invoice & estimate), where custom tax fields (AvaTax) mysteriously moved from the items tab to a tab that contains transaction body fields! The form does not appear to have been updated by anyone in over a year, and no packages that might have broken the form were installed in the sandbox.
If I cannot refresh the source code, is there a way to determine how a NetSuite form became corrupted, given that the form is stored in a proprietary way with no apparent source code available? I understand most NetSuite code is JavaScript that runs on both the server and the client, and there are parts that are unavailable to anyone outside of NetSuite.
Any solutions or suggestions are welcome and appreciated.
It is not at all impossible to store NetSuite code in Source Control. We use git to track all of our NetSuite source, and we follow a process similar to gitflow. Our master branch is always kept in sync with Production. Anytime we push code to Production, that gets merged from its feature/fix branch up to master and tagged as a release. If we want to roll back, we just revert master back a commit and upload the whole project to the File Cabinet. Then, if we want to refresh a Sandbox to match Production, we simply checkout master and upload all of that to the Sandbox.
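In command terms, the refresh amounts to something like this (branch and tag names are illustrative):

    git checkout master                      # master mirrors Production
    git merge --no-ff fix/order-totals      # promote a finished fix
    git tag -a v2.14.0 -m "Release to Production"
    # ...upload the project to the Production File Cabinet...
    # later, to refresh a Sandbox to match Production:
    git checkout master
    # ...upload everything to the Sandbox File Cabinet...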
Sandboxes themselves are much more difficult to keep in sync with a single branch in source because we are constantly developing there on separate feature branches.
If you do not already have such a system in place, all you really need to do is download the zip of your SuiteScripts folder from the Production File Cabinet and upload that to your Sandbox.
This isn't source control, but you can use SuiteBundler to copy items between accounts. SuiteBundler allows you to choose from a lot of things like forms, scripts and custom records. Later you can uninstall the bundle or dissolve it into the account.
It's not easy to explain in a few words, but: you can use a deployment account to make this work properly. You work continuously in dev accounts and use multiple bundles/bundle versions to track branches/versions of your customizations. You promote a bundle from dev to the deployment account only when a version is stable, and production environments always install/update bundle versions from the deployment account (never from dev). Since bundles are versioned and unlimited, you can use git + dev + deployment accounts to manage version control. To get a versionable representation of a form, just append &xml=t to the URL of any form, though this view is read-only.

No commit history migrated to Visual Studio Online when using OpsHub Migration Utility

Background: We use a feature branching strategy to isolate change. Here's a quick diagram of what I mean. Nodes are branches, edges are parent/child relationships in TFS:
Most of the time we create feature branches on a per issue basis, so there are a lot of feature branches.
Also, for my purposes, I only care about migrating source control and changeset history. We don't use TFS work items, test cases, or test results.
Attempt 1: When I first ran the migration tool, it ran for about a day before filling up my hard drive and failing.
Attempt 2: Thinking that the 150-ish feature branches were the culprit behind the slowness and storage needs, I wanted a way to migrate only the "dev" branch of my "ecomm" team project collection in the diagram above. I did not see any way in the OpsHub tool to migrate a single branch.
I accomplished this by creating a new team project collection, "ecomm-migration", then branching $/ecomm/dev to $/ecomm-migration/dev. I then migrated the ecomm-migration team project collection (which contains only a single branch).
Everything seemed to work: I could see all my source files on Visual Studio Online. However, when I browsed the history of the ecomm-migration project that was migrated to Visual Studio Online, the history was lost: everything appeared to be committed as a single changeset, and annotating files reflected this as well.
Why didn't the changeset history get migrated?
Am I doing it wrong?
Does my approach of creating a separate team project collection, to reduce the size of the collection being migrated, interfere with the tool's ability to migrate changeset history?
Are there better tools/options for my scenario?
One thing I had previously considered was pruning dead feature branches with tf destroy, but it would be nice to avoid such drastic, irreversible, history destroying measures if possible.
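One mitigating factor: tf destroy does support a /preview switch, so I could at least see what would be removed before committing to it (the path and collection URL below are hypothetical):

    tf destroy "$/ecomm/feature-1234" /collection:http://tfs:8080/tfs/DefaultCollection /preview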
I am using version 1.1.0.005 of the utility.
Can you please also share the version of the utility that you are using? You can find it via the About button in the utility's left menu pane.
Addressing your concerns point by point:
The disk space requirement of the utility is equivalent to the size of your project. A good reference point is to get the latest version of the project onto your hard drive through Visual Studio and check its size; that size + 20% is a good approximation of the space the utility will need on your C: drive during the migration process.
You did nothing wrong by branching your dev branch into a new project, and the utility did nothing wrong either. When you migrated to VSO, you likely selected only the newly created project. The trunk of your data is not present in that new project on VSO, so the utility converts Branch -> Add, which is the only sane way to migrate when the source is not selected/available.
To get your desired result (the actual history), you will have to select for migration all the projects across which branching and merging were done, which of course is exactly what you did in Attempt 1.
Hence, we suggest you migrate your original project if you want to retain full history. We can work together to overcome the memory/space crunch you are stuck on.

TFS process guidance template lock-in?

My team is looking to migrate many of our tools (SCM, bug-tracking, builds, testing) to TFS. We're considering moving each system in stages. For example, move source control first, bug/feature tracking next, etc...
Since we have to choose a process template to use source control (or anything in TFS), how locked in are we to that decision? I'm looking to avoid having to create another project later (or is that not as bad as I think it would be?).
I know I can in theory customize everything the process template configures after the fact (right?), but how feasible is this in practice?
Here is how I see things happening:
We migrate our source code. We choose Microsoft's CMMI template.
We create a new work item (or check-in note) that is a simple link to our legacy bug tracking system.
We work for a while.
We wait for the powers that be (we're a decent-sized software company) to work out a new TFS development workflow. This may be a simple collection of new work items or an entirely new template that configures all sorts of things.
We try to migrate our TFS project to this new system without losing our history.
Will we be sorry we didn't just wait until all these decisions were finalized before using TFS?
So, you are right to think about your process template as there is a certain amount of "lock-in" however it isn't too severe. It's like you are stuck to your process template with honey rather than super glue.
Personally, I would start with the MSF Agile template. It is much lighter weight and contains fewer work item types, so you are more likely to want to add things to it (very easy in TFS, very well supported) rather than take them away (more complicated and not entirely satisfactory).
However, if the powers that be decide to go down an uber process-definition exercise and magically come up with a new process template in 12 months' time that they want you to use, all is not lost. If you find that you want to create a brand new team project, then as long as it is on the same server (or project collection in TFS 2010), you have two options. You can branch your code over to the new team project, although that means history is somewhat obscured in current versions of the TFS clients. Or you can create a new team project with an empty folder for source control and then move the child folders over from the old team project to the new one; this preserves history perfectly, as TFS maintains history for moves on the same TFS instance. Your work items from before the move will be stuck in the old process template, though, and you'll need to decide whether to copy them over or just let them get closed out naturally.
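The folder move itself is an ordinary version-control operation run from a workspace mapped to both projects, e.g. (server paths hypothetical):

    tf rename "$/OldTeamProject/Main" "$/NewTeamProject/Main"
    tf checkin /comment:"Move Main into the new team project"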
Obviously, by actually using TFS for 12 months on real projects, you will be in a much better position to know what you want your shiny new process template to look like when the powers that be come knocking. I've often found that this is an exercise that simply never happens, and most people are happy tinkering around the edges of MSF Agile or picking something more prescriptive like Scrum for Team System.
Hope that helps,
Martin.

Daily Build and SQL Server Changes

I am about to try to automate a daily build, which will involve database changes, code generation, and of course a build, commit, and later on a deployment. At the moment, each developer on the team includes their structure and data changes for the DB in two files respectively, e.g. 6.029_Brady_Data.sql. Each structure and data file includes all changes for a version, but all changes are repeatable (i.e. with EXISTS checks, etc.), so they can be run every day if needed.
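For readers unfamiliar with that pattern, each statement in those files guards itself so reruns are harmless; a minimal T-SQL sketch (object names hypothetical):

    IF NOT EXISTS (SELECT 1 FROM sys.columns
                   WHERE object_id = OBJECT_ID('dbo.Orders')
                     AND name = 'TrackingNumber')
    BEGIN
        ALTER TABLE dbo.Orders ADD TrackingNumber nvarchar(50) NULL;
    END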
What can I do to bring more order to this process? It currently amounts to concatenating all the structure change files, running them repeatedly until all dependencies are resolved, then repeating the same with the data change files.
Create a database project using Visual Studio Database Edition, put it into source control, and let the developers check in their code. I have done this; it works well with daily builds and offers a lot of support for structuring your database code. See this blog post for features:
http://www.vitalygorn.com/blog/post/2008/01/Handling-Database-easily-with-Visual-Studio-2008.aspx