Enterprise-Level Business Systems Implementation Methods & Tools
With design models created, you are ready to start the selection and development processes. Traditionally, most IT professionals considered this phase to be where the real work starts. Over the past few decades, though, people have come to understand that all the planning processes that come before it are key to making this phase succeed.
The major decision at the beginning of this development phase (also called implementation, rollout, or construction in some SDLC-based methodologies) is the make-or-buy decision. That is, will you take these models and look for COTS packages that meet your designs, or will you develop the system in-house? If you go with a COTS system, you do not have to do any coding; you purchase and install the system and move right to testing, though you may need to customize the package. Examples of enterprise COTS packages are Microsoft Office, SAP, and MAS. If you choose to develop the system internally, you have to select the appropriate development language and write the code. This code is tested along the way in a variety of test modes, including module (unit) testing, until the system is ready to move to a completed test version.
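The module (unit) testing mentioned above can be sketched in a few lines. This is a minimal illustration, not any particular project's code: `calculate_order_total` and its expected values are hypothetical business logic invented for the example.

```python
import unittest

def calculate_order_total(unit_price, quantity, tax_rate=0.08):
    """Hypothetical business function: total one order line with tax."""
    if unit_price < 0 or quantity < 0:
        raise ValueError("price and quantity must be non-negative")
    subtotal = unit_price * quantity
    return round(subtotal * (1 + tax_rate), 2)

class OrderTotalTest(unittest.TestCase):
    """Module (unit) test: exercises one function in isolation."""

    def test_basic_total(self):
        # 10.00 * 3 = 30.00, plus 8% tax = 32.40
        self.assertEqual(calculate_order_total(10.00, 3), 32.40)

    def test_rejects_negative_quantity(self):
        with self.assertRaises(ValueError):
            calculate_order_total(10.00, -1)
```

In practice each module gets a test file like this, run repeatedly (for example with `python -m unittest`) as the code is written, long before the completed test version exists.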
Whether purchased or developed in-house, once the code is complete and in testing, users and other testers put it through a detailed testing process to be sure it functions as the requirements say it should. While the system is being tested, trainers may be developing training on the system, and others may be converting old data to the new format. After all these activities are completed, the system is ready to be implemented, also known as being moved to production. This is when the system goes live and everyone begins to use it.
After a system moves into production, it still needs minor adjustments, enhancements, and tweaks. This marks the maintenance phase.
A system cannot be used by the intended users until it is developed, tested, and implemented. Although these are key aspects of delivering any quality system, they alone do not deliver the quality. They must be based on solid designs and complete requirements. Much of this quality is the result of following standards.
Change control is the process by which a company manages a large change. This could be the deployment of an application to a large number of users, a cutover to a new system, or just about any change that will affect business processes, including downtime. A large company typically has a Change Control Committee (CCC). This committee reviews all the plans before a project gets started, sometimes reviews again at specific phases, and gives the go-ahead when it is time to implement the project. Among the questions the committee will require the project manager to answer:
- Is this going to require downtime of servers/switches/databases?
- What arrangements will be made to notify users of the change?
- What are the actual components involved?
- What is the backout plan?
- What training is scheduled?
- Is the hardware compatible with existing systems?
- Is the software known to be effective? How do you know?
- What is the schedule for development and implementation?
Notice that costs are not involved; budgeting is handled between departments and is not part of the CCC's purview.
Measurement of success (metrics)
Planning how to measure success actually starts in the design phase. Whether the budget or timeline is met measures the success of the developer and project manager; that is not what we are addressing here. What we are looking for are quantifiable measures we can use to determine whether the new system is an improvement on the old one. Does it save time and cost? Does it reduce the possibility of errors? Does it free people up to handle other parts of the business or to do their jobs more efficiently?
While this may seem rather arbitrary, it can be quantified. In the design phase you determine where you can expect to improve. In the implementation phase you measure the differences before (with the existing system) and after (with the new system). This confirms the ROI (return on investment) for the client, and shows the developer how much of the expected improvement was actually achieved.
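The before-and-after comparison can be sketched as simple arithmetic, assuming a baseline measurement was captured for each metric while the old system was still in use. All of the figures below are hypothetical, chosen only for illustration.

```python
def percent_improvement(before, after, lower_is_better=True):
    """Percent change from the old system's measurement to the new one's."""
    if before == 0:
        raise ValueError("baseline measurement cannot be zero")
    change = (before - after) if lower_is_better else (after - before)
    return round(change / before * 100, 1)

# Hypothetical measurements taken before and after implementation:
# order processing dropped from 45 to 30 minutes -> 33.3% improvement
order_processing = percent_improvement(before=45, after=30)
# data-entry errors dropped from 12 to 3 per week -> 75.0% improvement
entry_errors = percent_improvement(before=12, after=3)
```

Whether "improvement" means a lower number (errors, minutes) or a higher one (orders filled per day) depends on the metric, which is why the direction is made explicit.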
The things to look at:
- customer satisfaction - if the new system impacts the customer, and most do, this can be measured by counting the number of complaints or returns before and after. Are orders being met faster? Are there fewer complaints? Is inventory ready for demand where it wasn't before? Are delivery times improved?
- internal efficiency - are redundancies removed, such as accounting/payroll entering the same information that HR does? Are internal processes faster by being automated? What percentage of transactions improved over a specified period of time? Is data retrieval faster? Are manual processes replaced, and how much time has this saved? Not counting training of users, has this eased the work of the users? How? Can inventory be refreshed faster? Can you eliminate wasted inventory? Are errors reduced?
- What is the ROI? For instance, in 5 years, has the amount invested been 'paid back' in saved labor costs, diminished waste and increased efficiency?
As you can see, you can actually measure whether a new project is worth its cost. The actual measurements will depend on the project itself, of course.
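The five-year payback question above works out to simple arithmetic. The investment and savings figures here are hypothetical, assumed only to show the calculation:

```python
def payback_years(investment, annual_savings):
    """Years until cumulative savings repay the initial investment."""
    if annual_savings <= 0:
        raise ValueError("annual savings must be positive for a payback")
    return investment / annual_savings

def roi_after(years, investment, annual_savings):
    """ROI over a horizon: net gain divided by the amount invested."""
    return (annual_savings * years - investment) / investment

# Hypothetical project: a $200,000 system saving $60,000 per year
# in labor costs, diminished waste, and increased efficiency.
payback = payback_years(200_000, 60_000)   # about 3.3 years to break even
roi = roi_after(5, 200_000, 60_000)        # 0.5 -> repaid plus 50% at 5 years
```

A real calculation would also fold in ongoing maintenance costs and discount future savings, but the structure is the same: cumulative benefit measured against the amount invested.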
Copyright 2009: Bonnie-Jean Rohner. Any use or copy of this material in whole or part is only allowed with written permission of the author.