Software Migrations Mistakes and the Lessons Learned from Them

Use the same configuration management principles applied to your software code for your documentation, test plans and requirements.

Introduction

Software migrations are regular points of failure for IT. This is a major problem, given the rapid evolution of software applications and the constant need to migrate from one version to the next or from one software application to another.

Having worked in software support and process improvement, I’ve learned a number of lessons about software migration that need to be shared with the wider community. Let me share them with you now.

Software Requirements

Small changes to software requirements have major impacts down the line. Don't make little changes assuming they won't affect something else.

Set strict deadlines after which further changes are forbidden, no matter how small or how essential they may be deemed to be. Then stick to the deadline. Last minute changes lead to unexpected test results, hurriedly altered test plans and uncertain requirements verification.

Centrally manage software requirements, user complaints and bug reports during software migrations. Don't let developers be surprised by new requirements, or by problems that arose in test but weren't reported to coders until the last weeks before a release.

Software Documentation

Maintain strict Configuration Management control of test documentation, software requirements and test plans as well as software code when managing software migrations. A test series run against an outdated test plan may be wrong, and the choice to give the old test plan a pass could result in defect escapes. After all, there was a reason you changed the test plan in the first place, and formatting is rarely all of it.

Maintain your test plans and user documentation as software requirements change.

Don’t forget to update documentation as functions are removed due to scheduling and funding constraints.

File Loading and File Loaders

Loaders should load metadata and files. While you may have a thoroughly planned data map for the software migration, verify that file loaders don’t filter out critical metadata or translate metadata to the wrong fields.

And the fact that the metadata loaded doesn't mean the file contents did, too. Look for zero-byte file sizes after using file loaders; these can occur even when all metadata fields are perfectly filled in.
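A post-load audit can catch both problems at once. Here is a minimal Python sketch, assuming the migrated files land on a file system and each file's loaded metadata is available as a dictionary; the required field names are hypothetical placeholders for your own data map.

```python
import os

# Hypothetical required metadata fields; substitute the fields from
# your own data map.
REQUIRED_FIELDS = {"title", "author", "revision"}

def audit_loaded_file(path, metadata):
    """Return a list of problems found for one migrated file."""
    problems = []
    missing = REQUIRED_FIELDS - metadata.keys()
    if missing:
        problems.append(f"{path}: missing metadata {sorted(missing)}")
    # Metadata can be perfect while the file content failed to load;
    # a zero-byte file is the classic symptom.
    if os.path.getsize(path) == 0:
        problems.append(f"{path}: zero-byte file (content did not load)")
    return problems
```

Run it over every migrated file and review the combined report before declaring the load a success.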

Validate the loading of various types of files. You can't assume that everything you upload will be a document file or Pro-E drawing. Users will upload scanned signature sheets as PDFs, financial files in Excel, PowerPoint presentations for review by others, flow charts and a dozen other file types.

Test plans should not only verify the straightforward workflow of creation - acceptance - vaulting. Test the rejection workflows, the exceptions and the failures, too.

Validation and Software Migrations

Validate every type of object in every lifecycle state your new system will have after the software migration. This includes newly loaded items, objects at the start of the workflow, items in the middle of the workflow and the end of the workflow.
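One way to make that coverage explicit is to generate the validation checklist as the cross product of object types and lifecycle states. A sketch, using hypothetical type and state names that you would replace with those from your own system:

```python
from itertools import product

# Hypothetical object types and lifecycle states; substitute the ones
# defined in your new system.
OBJECT_TYPES = ["document", "part", "change_notice"]
LIFECYCLE_STATES = ["loaded", "in_work", "in_review", "released"]

def validation_checklist(types, states):
    """Every (type, state) pair the migration tests must cover."""
    return [(t, s) for t, s in product(types, states)]

checklist = validation_checklist(OBJECT_TYPES, LIFECYCLE_STATES)
# 3 types x 4 states = 12 combinations to verify after migration.
```

Checking items off this generated list is harder to fudge than "we tested the documents", and makes any skipped combination visible.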

Validate data before you start checking object workflows, such as document review and data submittal workflows.

Don’t forget to verify that objects uploaded in the middle of a workflow can progress through the workflow to completion. In some applications, an object may be imported in a lifecycle state but not receive an in-progress workflow. For these objects, putting them at the start of the lifecycle and manually moving them to the correct state may be the best option. Or have the lifecycle completed before the objects are migrated. But don’t leave them hanging.
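That disposition decision can be encoded as a simple reconciliation rule run after the load. A sketch under the assumptions above, with hypothetical state names; it implements the restart-and-advance option for mid-lifecycle objects that arrived without a workflow.

```python
# Hypothetical mid-lifecycle states that require an active workflow.
MID_LIFECYCLE_STATES = ("in_work", "in_review")

def reconcile(obj_state, has_active_workflow):
    """Decide what to do with one migrated object."""
    if has_active_workflow:
        return "leave as-is"  # its workflow will carry it forward
    if obj_state in MID_LIFECYCLE_STATES:
        # Mid-lifecycle but no workflow: restart at the beginning and
        # manually advance, so the object is never left hanging.
        return "restart and advance to " + obj_state
    return "leave as-is"      # start/end states need no workflow
```

Running such a rule over the whole load produces a worklist of stranded objects instead of leaving them to be discovered by users.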

Test rejections and rework loops, not just the happy path of items approved and vaulted.

Software is now upgraded more often than IT hardware. Yet we tend to test software less thoroughly, and with less concern for actual user needs, than hardware.

Testing before the Software Migration

Schedule enough time per test fire (dedicated testing round) to complete all planned tests, and a little extra time to duplicate problems found during testing.

Put enough time between software test fires to actually fix problems found in the prior test fire. You can decide to test these fixes before the next test fire or verify that the software works properly in the next test fire. However, problems found in a test fire should always be fixed before go-live.

Always plan at least two test fires, preferably three or more.

Where possible, have two environments running so that one is always available. You may also get twice as much testing done.

Don't neglect the testing of any and all interfaces to the system.

If you have automated testing, have a human check on the computer. Informational notices and warnings may not be properly logged by an automated test script, though they are painfully obvious to a human observer.
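One low-cost complement is to scan the raw test logs for the soft-severity messages a pass/fail script tends to ignore, and queue them for human review. A sketch, assuming plain-text logs with conventional severity labels (the labels are assumptions; match them to your own log format):

```python
import re

# Severities a pass/fail script typically ignores but a human
# observer would notice; adjust to your log format.
SOFT_SEVERITIES = re.compile(r"\b(WARNING|NOTICE)\b")

def flag_for_review(log_text):
    """Return log lines a human should eyeball after an automated run."""
    return [line for line in log_text.splitlines()
            if SOFT_SEVERITIES.search(line)]
```

A short flagged list is far more likely to be read than the full log, so the warnings actually get looked at.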

Data Cleansing

Clean data on the legacy system before migration. Never assume loaders properly cleanse data, such as by filling in missing metadata fields or correcting object lifecycle states. They are far more likely to add errors if logic has to be built in to fill in fields.
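A pre-migration audit of the legacy data helps size the cleanup effort before anyone writes loader logic. A minimal sketch, assuming each legacy record can be exported as a dictionary; the `state` field and valid-state names are hypothetical.

```python
from collections import Counter

# Hypothetical set of lifecycle states considered valid for migration.
VALID_STATES = {"in_work", "in_review", "released"}

def cleansing_report(records):
    """Count cleanup work needed on the legacy system.

    Each record is a dict; a value of None or "" counts as missing.
    Returns (missing-field counts, number of records in a bad state).
    """
    missing = Counter()
    bad_state = 0
    for rec in records:
        for field, value in rec.items():
            if value in (None, ""):
                missing[field] += 1
        if rec.get("state") not in VALID_STATES:
            bad_state += 1
    return missing, bad_state
```

The per-field counts let you prioritize: a field missing on ten records gets fixed by hand, one missing on ten thousand needs a business decision, not loader logic.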


Ensure that objects in the middle of the workflow during migration do not have to restart the workflow from the beginning during the system release. If everything in work starts the workflow over, you will dramatically increase the workload when users are already overwhelmed.

Training for Using the New Software

Train leaders like team leads first to improve stakeholder buy-in.

Auto-assign people to training courses instead of assuming they will sign up themselves. If it is optional, some will opt out of attending.

Spread out training so that everyone has an opportunity to attend at least once, and let people attend more than once if they feel the need.

Don't train in spurts, with training three months prior to a rollout and another blitz a week or two before go-live. Knowledge retention is best when users practice constantly for weeks prior to the new system's implementation.

Focus training sessions on specific activities like document approval, creation of parts and searching. Then have people attend the training sessions they need in addition to mandatory training. This ensures that no important information is glossed over in a long training session, and that people don’t sit through 30 minutes of non-relevant content in a catch-all session.

Breaking up training into separate sections like searching, creating and approving documents makes scheduling more complex but improves information retention over a 7-hour data dump. Furthermore, people can repeat individual sections they missed, such as how to move a document through an approval workflow or how to revise a model, rather than sit through a second all-day lecture. People are then more likely to learn what is most crucial to their jobs.

When you create functional group training, remember that the user community is not only managers and view only users. Create functional training for engineers, finance, quality and configuration management users.

Train the trainers well before they are put in front of an audience. Training runs more smoothly when trainers are familiar with the material. Never put trainers in front of reluctant stakeholders and let them see a trainer fumble through a PowerPoint presentation he received the day before.

Training systems should behave like production for maximum educational impact. Where possible, use data from the production system in the training database.

Communicate common solutions to all users during training. If there are location specific or unique business division practices, the software migration might be a good opportunity to standardize business practices.

Know that major changes like migrating to a new software system will result in help desk questions about material that was covered in training. Train your help desk staff, preferably by having them go through training along with users.

Ensure that training includes mundane actions such as how to log in and how to enter one's time in addition to job specific tasks like recording a non-conformance or approving a change notice.

Support after a Software Migration

It is better to over-staff support than under-staff it. Consider it an investment in improved productivity of users after the software migration because it shortens their learning curve.

Opt for one page cheat sheets over ten page "how to" workbooks.

Give training materials to help desk personnel before the software migration so that they can refer to them when they receive training questions afterward.

Track all user reported bugs, errors, problems and information requests after the software migration. Repeated questions about how to do something represent a training gap, whereas bugs or problems may be due to software problems.

Never throw a manual at a user and expect this to count as resolution. If they were unable to find the solution in existing documentation or learn it in training and were frustrated enough to call the help desk, take the time to fully resolve the issue.
