Business Systems Analysis
There cannot be enough said about documentation – at every step of the SDLC. At Pfizer they had intranet Treps (Team Repositories), accessible only with permission from the Project Manager – and not everyone with access has publishing rights. Here the drafts are published, to be replaced by the final forms. The original project plan is published, and the developer picks it up for guidance on his programming. The developer publishes system guides. The technical writers pick up the proposal to figure out what to put in the system test scripts, and these are published. The technical writers then pick up the system tests and the system documentation to figure out what to put in the user manual. Still other technical writers and testers pick up the system tests and user manual to write User Acceptance Testing scripts. FAQs are published there for incorporation into the user manual and online help. Of course, the code itself has to be well documented (if any of you have ever coded in C or C++, you know that a week later you’ll never figure out what the program did). Each published document, in its final form, is approved with three signatures (project manager, technical manager and business manager).
Training is a more complicated decision than it seems. If there are a lot of users, some companies start training in groups of 20 or so quite early in the game, with a prototype if necessary. This is not the greatest idea, because users get antsy to apply what they’ve learned and are afraid they will forget it if they don’t start right away (and they’re right). It doesn’t help that most training manuals are skeletal (on the assumption that the user will take notes) – trainees are too busy doing the hands-on practice to take many notes. So part of the implementation planning should be to produce a training manual that is in fact a quick-reference outline of the most commonly used features.
The most important users working on the development of the system are of course the original SMEs (Subject Matter Experts). With a large group of potential users, these people on the development team should ‘train the trainers’ – the technical support personnel and professional trainers. Most large companies have computer classrooms all laid out and waiting. We did this at Shawmut Bank – I worked directly with the curriculum developer to produce a clear user’s manual and training session plan for the network. If the company has ‘shadow IT’ people, they are great candidates to prepare – they will work one-on-one with their fellow users.
Whether the system is fully new or an upgrade, tutorials are great when built into the help menu; users can spend lunch hours or any other time privately learning the material. If the system is complex, modular tutorials are a great help as a backup after formal training.
Some companies have a ‘tracking sheet’ which identifies all variables and the order they are going to be tested, and a formal script format. The tracking sheet coincides with the ‘test plan’. One writer writes the scripts and a separate one runs them (to watch for omissions, etc.) before the system testers get hold of them. Not only do we tell them what to do, but we tell them what the expected result should be at every step; the testers record the actual results. A test manager doles out the forms and the test run numbers.
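What such a script and its tracking record look like varies by shop; here is a minimal sketch, assuming a simple CSV tracking file – the field names and layout are illustrative, not any company’s standard:

```python
# Sketch of a scripted test-step format: each step tells the tester what to do
# and what to expect; the actual result is recorded alongside for the test manager.
import csv
from dataclasses import dataclass

@dataclass
class TestStep:
    step_no: int
    action: str        # what the tester should do
    expected: str      # the expected result, spelled out in advance
    actual: str = ""   # filled in by the tester during the run
    passed: bool = False

def record_run(steps, run_no, path):
    """Write a completed test run to the tracking file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["run", "step", "action", "expected", "actual", "pass"])
        for s in steps:
            writer.writerow([run_no, s.step_no, s.action, s.expected, s.actual, s.passed])

steps = [
    TestStep(1, "Save invoice with no customer ID", "Rejected with required-field message"),
    TestStep(2, "Save valid invoice", "Invoice saved; confirmation number displayed"),
]
record_run(steps, run_no=7, path="test_run_07.csv")
```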
Most compilers test for syntax problems, and professional programmers are rarely so inexperienced as to need a walkthrough. But they do ‘act like a computer’ – it’s the first debug technique taught. But instead of writing down the variables, virtually any debugger allows you to step in and out of procedures and to name variables to watch as they change during execution of the code. Usually they are used on a needs basis – if there are runtime errors that the programmer cannot pinpoint, the debugger is used.
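For illustration, this is roughly how it works with Python’s standard pdb debugger, whose display command re-prints an expression whenever its value changes as you step – the function here is just a toy:

```python
# Toy example: stepping through a suspect loop with Python's built-in debugger.
# pdb's `display` command re-shows an expression each time its value changes,
# replacing the old habit of writing variables down by hand.
def running_total(values):
    total = 0
    breakpoint()            # drops into the pdb prompt here (Python 3.7+)
    for v in values:
        total += v          # watch `total` change on each pass
    return total

print(running_total([3, 5, 7]))

# Typical commands at the (Pdb) prompt:
#   display total   # re-print `total` whenever it changes
#   n               # execute the next line
#   s / r           # step into a procedure / return out of it
```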
Automated testing is becoming popular, but I don’t think it’s very good, and most large companies feel the same. By the time you write up an automation, the system test is done. It’s best for ‘installation testing’ – that’s when the system as a virgin box works fine – now how about the 4 different platforms we handle, and how does it work with the standard desktop applications and OSs we use? So a company will set up ‘test beds’ – a series of computers with all the different combinations of OS and applications used; an automated script then runs each through its usage of the new system, to see if there are any conflicts.
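A sketch of how such a matrix might be scripted, assuming the third-party pytest package; the platform list and the install_and_smoke_test helper are hypothetical stand-ins for whatever actually drives the test beds:

```python
# Sketch: sweep every OS/application combination on the test beds and
# assert that the new system installs and runs without conflicts.
from dataclasses import dataclass
import pytest

@dataclass
class Result:
    ok: bool
    detail: str = ""

def install_and_smoke_test(platform: str, coexisting_app: str) -> Result:
    # Stub: a real version would drive the installer on the test-bed machine
    # and exercise the new system alongside the coexisting application.
    return Result(ok=True)

PLATFORMS = ["WinNT", "Win2000", "MacOS", "Unix"]
DESKTOP_APPS = ["office_suite", "mail_client", "browser"]

@pytest.mark.parametrize("platform", PLATFORMS)
@pytest.mark.parametrize("app", DESKTOP_APPS)
def test_no_conflicts(platform, app):
    result = install_and_smoke_test(platform, coexisting_app=app)
    assert result.ok, f"Conflict on {platform} with {app}: {result.detail}"
```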
Also on the test beds are test data. This test data should be as large as is possible, with all the idiosyncrasies of the live data. It would be virtually impossible to reproduce. So instead, the company takes a snapshot of the live data and loads it into a ‘safe’ space, so that testers can add/edit/delete data without damaging the real stuff. This test database is used by system and user testers. The only time live data is used is during a pilot.
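As a small illustration of the snapshot idea, here is how it might be done with Python’s built-in sqlite3 module; enterprise DBMSs have their own snapshot/restore tooling, so treat the paths and approach as a sketch only:

```python
# Sketch: copy the live database into a disposable test copy so testers can
# add/edit/delete without touching production data.
import sqlite3

def make_test_copy(live_path: str, test_path: str) -> None:
    live = sqlite3.connect(live_path)
    test = sqlite3.connect(test_path)
    live.backup(test)   # full, consistent snapshot of the live data
    live.close()
    test.close()

make_test_copy("live_orders.db", "testbed_orders.db")
```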
Testing should include not only the usual uses of the system, but every anomaly you can think of. I didn’t get the nickname “Crash” for nothing. Consider all the possibilities – page 3 of 7 falls on the floor while transporting the papers to the scanner; someone enters the wrong index data; someone neglects to enter a required field; the user gets PO’d and starts banging on all the keys. Developers always assume people are going to do the right thing [NOT…]. I once e-mailed a 57-page error message to a developer. So when planning system testing, every possible scenario should be covered. Many developers will set ‘traps’ and make user-friendly error messages for those traps, which is fine. The system testers should aim for the internal error messages, so the developers know where and how to build traps. We’re having a tussle with a developer now because there are certain functions which, if not done right, just stop the system – no message at all – and the developer wants to leave it that way. Not on MY watch.
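A minimal sketch of such a trap, with invented field names and a stubbed store() call: validate up front, give the user a friendly message, and log the raw internals so the developers can see where more traps are needed:

```python
# Sketch: trap predictable mistakes with a usable message, and log the raw
# details of anything unexpected instead of dumping it on the user.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("intake")

def store(record: dict) -> None:
    # Stub for the real persistence call (database insert, document index, etc.)
    pass

def save_record(index_data: dict) -> str:
    required = ("patient_id", "doc_date", "page_count")
    missing = [f for f in required if not index_data.get(f)]
    if missing:  # the friendly trap for a neglected required field
        return f"Please fill in: {', '.join(missing)}"
    try:
        store(index_data)
    except Exception:
        log.exception("store failed for %r", index_data)  # the 57 pages go here
        return "The record could not be saved. Support has been notified."
    return "Saved."

print(save_record({"patient_id": "P-100"}))  # -> Please fill in: doc_date, page_count
```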
Alpha testing is always done in-house and can be quite extensive. IBM has a group of college engineering interns who do nothing but alpha-test its products – they play games, mess around with the GUI, write letters and crunch numbers, looking for glitches. This generates the code-named versions (like the Kirk, Spock and McCoy versions of OS/2). A lot of the things texts list under alpha testing – recovery, security, stress and performance – are usually considered system testing.
I’m sure you are all aware of beta testing. If the product is not shrink wrap, beta testing is set up as a “pilot” – a separate group of people get the whole package and use it on live data. This is only done for a huge deployment (over 1,000 users). If it’s successful after 2-6 weeks, deployment proceeds to another ‘wave’ of users.
Systems construction includes UAT (user acceptance testing), the feedback from it, and alterations based on that feedback. During systems testing, the purpose is to find bugs – whether links perform as designed, whether the application has conflicts with other standard applications, whether the application depends on outside files that must be present. This last one has become a particular problem recently, as many customizable packages call Internet Explorer files. UAT is performed by selected end users acting as SMEs. Here they are looking to see if the application meets their needs, if the terminology is what they use (mustn’t use computerese when the user is a photography shop owner), whether drop-down lists contain all the items they need, etc. Some UAT testers will simply apply the application to their work; others need specific scripts of what to do. Often they will have suggestions for changes that they would like incorporated. At this point, the decision has to be made whether these changes should be made before deployment (which means another round of development, engineering and testing), or whether they can be cataloged and saved for the next version. This decision requires input from users, managers and IT.
Now comes the delicate part – actual installation (usually called deployment). Don’t forget we made a decision much earlier about whether to do this overnight, in tandem with the old system, or with the legacy system waiting in the wings in case of disaster. Many companies require a backout plan in case there are serious problems; certainly a change management committee would require one. Keep in mind that many users never log out – they go home and leave their machines running, sometimes even over the weekend. The trouble is that a transparent deployment is done overnight or built into the login script, and if the legacy application or other applications are left open, this can corrupt the system installation.

To handle this, most large corporations require an e-mail to all potential users of the application at least 24 hours before the deployment; some also require a warning a week ahead. At B-MS there is a team that does nothing else – the Process Manager sends them a copy of the announcement and tells them who the potential groups are (for instance, everyone in Genomics and Biochem). The mailing team keeps an up-to-date mailing list by department and sends it out. Unfortunately that doesn’t always work, and one night I created a mailing list of 200 people by hand, working with the Novell engineers to find all the people involved. The announcement tells the user what is going to be installed, what special steps might need to be taken (like a reboot), and what impact the installation will have. Pfizer sets up the installation so you can choose to delay certain installations and decline others, while some are mandatory. For the mandatory ones they survey (the installation sends a token back to the installer) and remind those who haven’t installed yet.
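The survey-and-remind step can be as simple as diffing the target list against the tokens received; a toy sketch with invented user names:

```python
# Sketch: find mandatory-install stragglers by comparing the announcement's
# target list with the tokens the installer has reported back.
def pending_installs(target_users: set[str], tokens_received: set[str]) -> set[str]:
    return target_users - tokens_received

targets = {"jdoe", "asmith", "bkhan"}   # e.g. everyone in Genomics and Biochem
reported = {"jdoe"}                     # tokens sent back by the installer so far
for user in sorted(pending_installs(targets, reported)):
    print(f"Reminder: {user} has not installed yet")
```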
Phased installation is great for die-hard legacy users – keep the GUI familiar and add functions incrementally.
One of the reasons so much depends on the data dictionary is so that no data is lost or corrupted during installation of a replacement system. A perfect example of this is the DCF database the state of Connecticut created. They’d forgotten a few fields, and so came out with a new version in 6 months. But the developer apparently did not separate the GUI from the tables, and three fields were lost entirely: the new version did not pick up the fields in their original order, so they had the wrong data types and were dropped. Now every time we go to discharge a patient, we have to re-enter those 3 fields.
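The general lesson: a conversion should map legacy fields to new fields by name and check their types, never rely on column position. A hypothetical sketch, with invented field names:

```python
# Sketch: convert a legacy record by field NAME, failing loudly instead of
# silently dropping fields the way position-based conversion can.
FIELD_MAP = {  # legacy name -> (new name, expected type)
    "dischg_dt": ("discharge_date", str),
    "case_no":   ("case_number", str),
    "zip":       ("zip_code", str),
}

def convert_row(legacy_row: dict) -> dict:
    new_row = {}
    for old, (new, typ) in FIELD_MAP.items():
        if old not in legacy_row:
            raise KeyError(f"legacy field missing: {old}")
        value = legacy_row[old]
        if not isinstance(value, typ):
            raise TypeError(f"{old}: expected {typ.__name__}, got {type(value).__name__}")
        new_row[new] = value
    return new_row

print(convert_row({"dischg_dt": "2001-05-01", "case_no": "A-1234", "zip": "02879"}))
```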
Tutorials keep stressing system and code documentation because someone else will probably do the upgrades. This is another of those ethical questions – many contractors and outsourcers like to withhold documentation so the company is forced to recall them for upgrades. Ugly practice. And many, many in-house programmers used to avoid documenting their code so that they couldn’t be fired – this is partly why cumbersome, verbose COBOL was developed: to force self-documenting code. Even when it’s documented, picking up someone else’s code is a real bear. But a contracting company or outsourcer that provides full documentation will become trusted – and will get repeat business. And if you’re in-house, you will be working on many projects; when two years pass and they ask you to work on an upgrade, you will be very glad you documented it, believe me.
Help files can be difficult to develop, but they can make or break a system’s success. Since online help is cheaper than paper manuals, it has become a replacement for them. Microsoft has very extensive help files – but there is one big problem for those of us looking for advanced features: the help files are all written as context-sensitive. So if you search by index and find what you want, they refer to buttons and menus you can’t find, because you’re not in ‘the context’. For this reason I find the MS Bibles invaluable.
Anyone who reads a readme file has found out that it usually lists new features, special steps for unique peripherals, and last-minute glitches caught after release. This is acceptable (well, maybe not…) for shrink wrap, but should never be standard practice for developers.
In most companies, deployment includes:
- Change management notifications – informing the Change Management team of any impacts on user time, system downtime, or other effects on the environment.
- Call tracking – all calls on the new system are tracked for a week or two, to see which are unique to the guy who downloads every possible game to his system, which happen only on the Macintoshes, or which happen only on the non-clinical machines. The project is not closed until it is determined that all systems work smoothly.
- Closure notification – Change Management is notified when the project is closed.
- Problem notification – if there is a problem with one type of environment (perhaps the financial department), the users must be notified, and they must be told of any workarounds you find.
- Uninstall notification – if the system must be uninstalled on any machines, Change Management must be notified, as well as the users.
What many texts do not handle in the implementation section is evaluation. This is tremendously important. Evaluation should be a lessons-learned affair, not a condemnation of any sort. If the system is in-house, determinations can be made of changes to include in subsequent versions. Team responsibilities can be viewed and honed. If the implementer is outside of the company, they too can figure out what works and what does not for that particular client. Evaluation should generate evolution.
Design is perhaps the hardest part of the SDLC. Even though you are forming ideas in your mind while amassing the information and developing the analysis, now you need to formulate the picture, anticipate which questions or problems will occur, and try to solve them ahead of time. Whatever technical knowledge you have comes into play, and a wise SA (systems analyst) will work hand in hand with any experts at his/her disposal.
The systems analyst needs to know what a computer is capable of, what coding can do, and what the existing or proposed hardware can handle. The better the SA knows coding, the more detailed the design, but this is sometimes a drawback. Better to figure the architecture and let the programmers determine the most efficient way to animate it.
Sources for a system are often a combination of known and new products, depending on the specific system needed. The Drug Discovery division of Bristol-Myers Squibb (B-MS) has a single outside company managing their licenses and tracking all the software, as well as developing all in-house applications. Pfizer uses a pre-developed package (currently Peregrine) to track all calls for technical support; while designed for this purpose, the client (Pfizer) customizes it extensively to track job types in its own nomenclature and to generate some automatic reports. I developed three tools for the Mashantucket Pequot Tribal Nation entirely because the type of information they are looking to manage is so different from that of a normal US corporation.
It is imperative that any data conversion from legacy systems be addressed at the outset — it will directly affect the data dictionary of the new system. Post-development conversions are expensive and awkward, indicating poor planning. Another situation that should be addressed right in the beginning is how the legacy and new systems will interact during deployment. For various reasons, one of three options will be employed:
- overnight replacement — the preference of developers, since their system is of course the Ideal Solution. The changeover happens in one fell swoop, as it must for POS systems; other than the logistics of switching, there is little impact on the design.
- concurrent systems — both the legacy and new systems run simultaneously. This doubles the work of the users, but is absolutely essential for accounting and validated systems. After 3-6 months, depending on the patience of the users, the resulting data is compared between the two systems. If there is no degradation or corruption of data, the new system runs alone and the legacy system is dismantled. The designers have to be sure there is no conflict between the two systems and that they can run independently while sharing the same environment.
- legacy system “on tap” — a favorite of the Regulatory division at B-MS. The new system is accessed just like (and where) the old system was, but the legacy system is still available in case the new system has functionality problems. This is important where the users cannot afford downtime: on Wall Street, say, or at pharmaceutical firms with a 24-hour window to report to the FDA. There is not a great impact on design, but same-name file calls could mess up the new system, so the legacy files need to be isolated.
The implementation environment is usually already in place. The only instance where I’ve seen an agreement to switch environments is in an acquisition where one company is archaic and the acquirer will bring it up to current level. Usually hardware rules software, such as adding cross-platform modules to accommodate a Mac lab.
Types of environment (and most places are a mix) — mainframes, networks (there can be both NT and Novell in a single organization), minicomputers (Sun stations, scanning stations, ‘towers of power’), server-based applications, client/server applications, and stand-alone applications.
It is possible that at this point the client will pull out for a variety of reasons. Political opposition may gain power, they may start realizing it will have to be far more extensive than originally presumed (companies with little experience with software development think anything is possible at little cost), or the budget suddenly has to be redirected to another need — I’ve seen this far more often than I like. For in-house IT this is not usually a problem; they switch to another project; it rarely means a cut in personnel because they are staff — additional personnel would not be hired until the proposal goes ahead. However, for outside resources, this is a painful moment – they have already expended time in research and presentation. Outside resources just thank the heavens they didn’t put out for the all-electronic JAD!
In most cases, the project continues.
When planning a baseline (minimal) system, design a stepped approach (with costs) right up to the top-of-the-line. Many clients would prefer to take a trimmed-down system if they know they can keep growing. And success on the first level makes them hungry for upgrades. I’d rather produce a baseline piece of perfection than a system with all the bells and whistles…and more bugs than Pest Control can handle!
The idea of having a firm develop and run your application on their own computers, where you supply input and take output, is not really that extreme. Examples are billing companies for medical offices, paycheck-generating companies and “data warehousing” in any form. Many companies are large enough to need an outsourcer to generate business analysis and ad hoc reports and to manage their data while the company pursues its own business function.
Considering outsourcing? There are complete hardware and software systems pre-built and supported fully by their manufacturers (such as systems to run photo studios, beauty parlors, restaurants). Absolutely essential: get interviews with previous clients of the outsourcer! Even big companies get this wrong: at B-MS they decided to employ an outsourced app called Asset Insight to survey, categorize and track hardware and software globally. It was the choice and recommendation of one individual (who may have known the source). As the scope of the usage broadened to more than 100 machines, problems with the application multiplied. Eventually they discovered the application had been developed by three people in a private home – they had no experience with a global enterprise and no way to pretest it.
Prepackaged, off-the-shelf systems are often called “shrink wrap”. Especially for small businesses, this is often the best choice to offer.
At the same time that Ashton-Tate was screwing around with dBase IV (causing die-hard dBase users to wait for dBase V before upgrading), a lot of PC databases came out — FoxPro, DB (now DB2, from IBM), MS Access and Alpha3 (then 4, then 5). Within two years, Ashton-Tate was out of business – a good example of how a single version of a single company’s line can spell disaster. The SA must be able to predict which shrink-wrap applications will survive.
Turnkey systems were an exciting idea around 1990 — applications were just starting to get complex, requiring installation instead of running off diskettes. And there was little formal computer education, so a no-brainer system was very appealing. Larger corporations eventually discovered the ease of installation did not make for a very useful application. Smaller companies and educational organizations, which cannot afford a full time professional staff, still often go for turnkey systems.
PeopleSoft systems may be customizable, but they are not user-friendly. You need in-house PeopleSoft-trained people full time just to maintain the systems.
Hybrid systems could get messy — no access to source code, for instance. But I have seen some gifted software engineers who could get quite a lot accomplished.
Many companies think it’s cheaper to maintain in-house software, especially since this way they are not ‘held hostage’ by an outside source. The common side effect is an overload of the in-house staff.
At B-MS I handled a few tussles with vendors of software because they design the product for stand-alone licensing, where you receive an ‘unlock key’ when you purchase it. So what do you do when you want to install via a network of 4500 users? You can’t have a different number for each person; you want a quick and transparent installation. Often the vendor (for purchases of 700 licenses or more) will have their own software engineers redesign the install module to fit our purposes.
When working with enterprise systems, a company may purchase one set of user manuals for every 50-100 users, because of the cost. So software companies started beefing up their online help, albeit poorly. MS online help is very thorough — *if* you call it up as context-sensitive. Otherwise it refers to menus and buttons that are not on the screen you are at. The alternative to good online help is good training, whether the software is in-house or purchased. But then, what do you provide when new users come on board?
Trade-publication application reviews are not always seeded by the manufacturer and can be a good source for evaluation. Consider the source; some publications give excellent comparisons of similar applications, which is always better than a single-application review.
Simultaneous hardware upgrades or changes are usually cost-prohibitive, and most companies will not agree to it (unless they are woefully antiquated). But it’s a good idea to design ahead of the state of the current hardware, so the application will still be part of the repertoire when they do upgrade.
When OOD (object oriented design) first came out it was called object-oriented programming solutions — OOPS. And those of us comfortable with modular design thought the acronym appropriate. OOD is more of a conceptual thing than a practical application. But it’s very handy for architects.
Obviously, this white paper does not tell you how to design a system; its purpose is to help you know all the factors that will affect the system analyst’s architectural design.
Information-gathering at the inception of a project
I consider interviews, backed up by the collection of business documents, to be the most important fact-finding method on a development project. I prefer interviewing fewer people, allowing an hour per interview, and interviewing only one person in each role. My reasoning is that interviewing more than one person in a role makes the information redundant (and therefore a waste of time). Often, only the manager or the person “buying” the service (that is, the one whose budget gets most impacted) will offer to do a Needs Assessment interview – this is totally unacceptable, because the manager does not fully understand the needs of the end user. Ask the manager for permission to interview the most experienced and/or ‘largest’ user in each role.
For instance, if the project is a payroll package for a retail chain you would want to speak to
- The manager/owner/requestor – find out what s/he is looking for, the budget, and the reason for the inception of the project. Who handles the W-2s and how? Do employees get a shift differential?
- Timekeeper – how are they amassing the employee weekly time information; if this information comes in electronically, can the new system import/convert the data? If the employee’s recording of his/her time and the timekeeper’s entry of the employees’ time is all manual, consider computerizing one or both sets of information as a high-end solution.
- Person cutting checks – how are they getting and storing the timekeeping, W-4, and tax information? What specific problems are they encountering, such as sorting recipients by branch, separating out checks to be mailed and check stubs to be mailed (for direct deposit accounts) and sorting those all by zip code? How is all this information being sent to the bank? Format?
- An employee – how does s/he track his/her time? Any complaints about this method?
If at all possible you want your system to be able to interface with any existing electronic systems, both for feasibility and the possibility of replacing the existing systems in the future.
The problems with interviewees are very real. If the person does a manual, clerical job, s/he is going to fear electronic replacement. This person is going to be very self-protective and might even misinform the interviewer to make the latter look incompetent. To get cooperation, keep assuring the person that you will not replace him/her; instead you will ease his/her workload – people in this type of situation invariably complain of an overload of work and/or a fear that s/he is too accustomed to the “old” method and won’t be able to learn a new one.
If interviewees give you the ‘should’ scenario instead of the reality scenario, is this really bad? You will be building in just the corrections that are needed. Ask questions where you suspect there’s a specific lack of information. Encourage “complaining” – it is there that you will learn the weaknesses of the old system (hardware, software and wetware) and what you can offer as a cure.
An interviewee not being able to describe his/her work is very common, especially in a company which grew from a small mom-and-pop organization into a good-sized corporation. These folks learn their tasks, but not the terminology or standard methods. As an example, a person can have 10 years’ experience building and managing projects, but not know what an SDLC is, because they never went to school for Project Management – they just did it. You end up hearing descriptions of a specific task rather than the actual information flow. For example, at Bristol-Myers I was trying to map the process by which the site network was managed. I got a glowing description of a recent problem with a single workstation and how it was eventually discovered to be the NIC (which was a different department’s purview). What I needed to know was: in which situations would a technician contact the site Network people, and what was the flow of information from there?
There is a specific problem in systems analysis – the people who get into this line of work often have a good technical background but are weak in “people skills” and judgment. These are two very important skills. If you are in this position, take advantage of any management courses available to you – it will only make you look better to your employer. If you find you don’t really like the people-part of the job (and many technical people don’t) consider being part of the development team instead of the client-interface team.
I have worked for two different very large corporations in the same field – one has a team that does nothing but document the processes in use in all departments (and publish it on the intranet); the other doesn’t even have a P&P (Processes and Procedures Manual) for the IT people. Needless to say, the former is more successful.
All business forms are of great value – not only what they fill out for input, but also every report/invoice/whatever that goes out. Whenever I’m doing a Needs Assessment I request sample copies of everything they mention and try to anticipate others they forgot. You can always weed out those you won’t need. And refinement of the data, to avoid duplication, is easier done this way. Sometimes they are inputting or outputting the same data, just using different names for it, so they don’t realize it’s the same. What you don’t want is to discover after the delivery that the data for a regular report is NOT there; you can build the report later if needed…but adding fields and inputting missing data is tough and expensive. Whenever an input field does not appear to be used for output, question whether it’s a necessary field (it may be something they intend to evaluate in the future.)
I am not a big advocate of questionnaires. Even if they are electronic, maybe 25% respond. Non-response is very important – there is usually a very good reason for it. Online forms are better; people usually think they’re safer, and that no one will be reading the responses personally, so they are more likely to fill them out – and truthfully (after all, no one can tell who it is by the handwriting). Use the company’s intranet. Questions on the level of satisfaction are a bear – ever try to decide whether you are ‘somewhat satisfied’ as opposed to ‘satisfied’? Avoid subjective queries.
On requesting documents: an organization chart is usually not very important – unless you want to know who to please (often a determining factor). Above all you want the forms of information – what forms are used to gather the information now (job applications, patient intake forms, logs kept by the people running programs, etc.) and what information needs to be extracted (standard reports to stockholders, paychecks, metrics reports to management, invoices). The output will determine the format of the input. So I guess you could say I advocate a top-down approach.
One reason I have a lot of not-for-profit organizations as clients is because they need to report to at least one agency on their activities, as well as funding agencies (such as United Way and the Dept. of Children and Families). There is no pre-packaged software for these organizations which can track their activities (there are shrink-wrap packages for non-profit accounting). They have to report activities and demographics quarterly to each funding agency or lose the funding. These projects must be designed from this vantage point of output, and often ‘registration’ electronic forms need to be designed to nudge them into getting the data needed.
Direct observation is one of my favorite methods of information-gathering. Ask questions; if you do it right, they get into ‘brag mode’. An example of the value of observation: I have a client for whom we have done data warehousing since 1996; we know all the groups they need to answer to for funding and licensing, we know the demographics that are important to the management, and we know what the esoteric codes mean. The state of Connecticut decided to create its own databases for the reports it was evaluating each month, since up to that time the information had been manually input and was constantly in error. But the man hired to build these databases had no idea what the information meant. The result: an awkward database with so many errors that it has been revamped 3 times in the last 2 years. Plus, they seem to think their reporting agencies are stupid, so they locked access to the database. As a result our client had to pay for re-input of 600 records, and pays extra each month because we then have to export the state’s information, fix it and add our additional tracking information. Why fix it? The state has the zip code locked to CT-only zips – my client is near the Rhode Island border and often gets clients who have RI zip codes. Additional information? The state doesn’t think in terms of separate programs, so we have to re-align the data. And the state selected its own case-numbering system, so the client has to keep a double system for all current cases. I could go on ad nauseam… If the developer had observed what the reporting agencies actually do, a lot of these problems wouldn’t have cropped up at all.
Characteristics for a good systems analyst during requirements determination:
- Impertinence – asking questions. It looks easier than it is. You could easily miss which questions to ask, which might require a call-back. This ability to know what to ask is a developed skill, and certainly each project will make you more aware of what to look for.
- Impartiality – the politics get in the way all the time. Always consider the source; what is the person’s attitude toward the project? What will this person gain or lose with the new system? You may actually be told by the ‘buyer’ that a particular person’s opinion is not [is most] important. Find out who makes the final decision – this is the person to try to satisfy in the end.
- Relax constraints – yeah, right. The biggest hurdle is “I have done it this way since Ben Franklin…” The client may insist on a mimic of the present system; any change to this would have to be gradual, probably a follow-up upgrade. Try to keep as close to the present way of doing things as is efficient, such as having electronic forms look very similar to the paper forms they are using – but with time-savers on them such as default values.
- Attention to detail – if you have a computing background, you already know that it’s the details that will kill you.
- Reframing – this is no problem if you are an outside source. But if you are an in-house organization this is very difficult for the analyst as well as the client.
Knowing the business objectives is necessary to sell your solution. For example, one year Bristol-Myers had a Business Strategy Objective (BSO), which was defined in detail. All work had to be justified according to the BSO or it wouldn’t be done. One key phrase is “this is the ROI” (Return on Investment); since it’s a business buzzword, even ears with no interest in the technical stuff perk up. And it’s always a selling point – prove to the client that they will get a better profit, to the tune of a multiple of the cost of the new system in the first 5 years, and you’ve made your sale. For not-for-profit organizations, the people are busy saving the world; they don’t carefully track their own activities. Showing them how tracking particular information can increase the funding works like a charm.
Every time I build an application for them, I discover a lot of things they could track easily by computer that they weren’t bothering to report at all. As a result of the more efficient reporting, their funding increased significantly. Be sure to observe more than the sponsor offers.
Watch for side activities/processes that can be incorporated into the system (for the high-end solution, or a follow-up proposal). And watch for redundant actions which can be eliminated – this is quantifiable ROI. Be sure to re-write notes after doing an interview, receiving a questionnaire, or reviewing documents; you’d be surprised what you’ll forget in a matter of hours. Remember that your time and that of the people giving you information are both valuable. Do not give notes back to the person you question – you should have notes that are not for the clients’ eyes, since they might misinterpret them. Instead, you could graph or outline the information and have them review it for accuracy.
JAD stands for Joint Application Design. This is often where you’ll find professional facilitators. Within your organization a JAD should be run by someone on a management level – or someone being groomed/trained for a management level. It might be the Project Manager for that project or a Systems Analyst. Because professional facilitators don’t know the subject matter, they often lead the discussion in the wrong direction. The only time you’d see a JAD is within a company that has application development as its business function; it’s expensive and time-consuming for the client. It would be nice to run it electronically – and run it at the client site – but now we’re really getting into the big bucks. In most cases a JAD is tracked on a white board or large paper pads, and then has to be rewritten and published. One big problem is squabbles among the clients. Better, then, to interview them separately, make your suggestions, and let them squabble it out after you leave.
Analysis of gathered information
Prototyping is a great idea and should be a standard part of the development process, if you can convince the client to do it. It allays fears of what to expect, makes it easier for clients to articulate their needs and practices, and is cheaper in time, money and work than coming up with a revision. The only drawback is that if the contract is turned down, the developer has to “eat” the cost.
A good approach to convincing the sponsor of the need for change is charting the existing system: all manual, hard-copy and electronic processes together in a ‘current system’ data flow diagram. Don’t try to flabbergast the client – win them over with professionalism.
There are hard copies which must be maintained in certain industries such as pharmaceuticals and legal companies. These can be scanned into a database, moving closer to a paperless work environment (and it’s certainly a lot easier to manage). There is a company in Connecticut that does nothing but scan and index all the paperwork an attorney needs for a case, so s/he can review, evaluate, cross-reference and call up any piece of evidence in an instant.
Data flow charts really aren’t that difficult for software companies, because they usually have teams that always do the same type of applications, such as warehouse management for whatever kind of warehouse comes along – they modularize and rearrange the modules.
For presentation to the customer, ‘before’ and ‘after’ data flow charts would show the simplification of the processes, which is usually enough to convince them to continue with the project. Those of you with Visio are probably aware of the many ways in which to represent processes. Is any one icon or system the best? Absolutely not. You can use circles and squares, as long as it’s clear and well-documented – you want to be able to send the client home with a printed copy to evaluate the changes on his own time, rather than making him feel pressured to respond immediately.
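Such charts don’t have to come from Visio, either; here is a sketch of a tiny ‘before’ diagram built in code with the third-party graphviz package (which also needs the Graphviz binaries installed). The processes and labels are invented:

```python
# Sketch: render a simple 'current system' data flow chart programmatically,
# so the client can take home a printed copy.
from graphviz import Digraph

dfd = Digraph("before", comment="Current system: manual invoice flow")
dfd.node("clerk", "Clerk keys invoice", shape="box")
dfd.node("ledger", "Paper ledger", shape="cylinder")
dfd.node("report", "Monthly report (retyped)", shape="box")
dfd.edge("clerk", "ledger", label="handwritten entry")
dfd.edge("ledger", "report", label="manual tally")
dfd.render("before_dfd", format="pdf", cleanup=True)  # writes before_dfd.pdf
```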
Knowledge of programming is a great asset to developing these charts, so the systems analyst will often work hand-in-hand with the programmers in drafting them.
During analysis, a feasibility study should be done. Feasibility is hard to nail down. It looks somewhat overwhelming at first. Many companies have a dollar figure per-employee for additional manpower. And experts in each phase (networking, users, hardware) can usually give you information on what it will cost, or if it’s not possible to do. In large companies, there is usually a good idea ahead of time of what the costs will be, and they will be looking more for the time span involved. In small companies there is usually a serious “sticker shock”, and many projects die at conception.
Throughout this process, the aim is to achieve an agreement, which will probably mean compromise on both sides.
When a company is considering installing a new system (or startup system), the most common approach is to create a team for the project. If the company is small it will usually hire an integration company to do the development. A mid-sized company may have an in-house project manager, but not the IT staff to handle the project; they will often contract out the work to an integration company answering to the project manager. In a large company, there is usually sufficient staff with the right capabilities to complete the project in-house, and a systems analyst functions as project manager.
An integrated system is one which combines more than one system, such as a network, or the phone lines at a Help Desk. Sometimes one would integrate a Mac network into a Novell PC network, or have communications between more than one network. Of course, software needs to be integrated into an existing or new system. Some companies that call themselves Integrators build entire systems. It’s not really a hard and fast term.
Choosing team members may not be a privilege given to the systems analyst; teams are sometimes made up of volunteers or more often assigned by management. It is up to the systems analyst to determine the talents of the team members and lead them accordingly. People in IT need to be “team players”, willing to work with each other, and the project manager needs above all to be able to manage all the personnel on the team to achieve the success of the project.
The actual participants in analysis and design vary, from a single programmer (or developer, as they prefer to be called) and his/her client, to entire teams, depending on the size and politics of the organization. The analysis may be done by a single person or a group. Often the project manager represents the business end of the project, and the systems analyst manages the technical group. A new buzzword in the industry for a systems analyst is “application architect”; it loosely applies to someone who designs the path an application will follow, and is usually the purview of a systems analyst.
A successful team is built on trust in one another, listening to each other, and rigorous communications. As a consultant, I often step into a team of people who have worked together in the past. I have to have the faith in myself to speak up. And I find that the team invariably welcomes a fresh point of view.
The team members, then, can consist of the following: project manager, systems analyst, developer, hardware technician(s), database administrator (DBA), SME (subject matter expert), and sponsor (the person who instigated the project and will OK all steps and payments). Not all of these positions will be in all projects, of course, and some roles may be filled by more than one person.
I was recently working in a large worldwide corporation. They had a well-earned reputation for high retention – once you were hired there, it was rare that you got fired. However, in order to maintain this retention, they sometimes had to move people into roles created just to find a place for them. One example is called “shadow IT”: a person who was no good at his/her job as a research chemist but did demonstrate an ability for hacking on computers. These shadow ITs were not a formal part of the IT organization, and as such were not held to the ethics or processes of the IT group. So they would do as they pleased, pass opinions on IT work as ‘representatives’ of their scientist colleagues, and generally cause more havoc than good.
As a systems analyst, managing expectations may be the most difficult part of the job. For example, you may have to tell clients they cannot have parts of the wish list due to costs, time constraints or equipment limitations.
Ethics are a strong part of the systems analyst’s function. Most companies outsource a great deal of their IT work, in every level from technical support to network management and project management. An ethical consultant (also called a contractor) should document all work so it can be picked up by a third party and s/he must respect the confidentiality of the work in which s/he is involved. I have been privy to information about therapy patients, drugs being developed, financial information on a small business such as a real estate office. It is very important that I and my staff let this information pass right through and never be repeated. A consultant who is hired via a contracting company is bound two ways – loyalty to the home company as well as the client – and must keep a fair balance between the two. If this is done correctly, the home company will support the consultant to the end. If the consultant functions unethically, s/he will end up on the street, and virtually blackballed.
The team should follow a specified methodology, which is determined by the project manager or systems analyst (these terms are often interchangeable). The most common method is the waterfall method to direct the SDLC (system development life cycle). Nonetheless, nothing is etched in stone, and the systems analyst (SA) must have the flexibility to handle problems as they show up.
The biggest problem is calming the client down. The next biggest is convincing the IT people involved that it’s human to miss something or not be able to predict something. Clients tend to believe the IT group’s contention that everything will work the first time around. While it’s nice to exude confidence, the truth of the matter is that no matter how extensively you tested, once the product goes into production on a full scale of over 5,000 users, passing across worldwide networks with Unix boxes, NT servers and Win2000 desktops, there are bound to be some logjams. The client has to understand that this is going to happen, and be reassured by the prompt putting out of the fires. The IT workers are similarly frustrated when they’ve tested the product extensively in systems testing, UAT (User Acceptance Testing, done by a small group of subject matter experts) and possibly even pilot programs, and still problems crop up. Nonetheless, one server at the Mount Vernon site, or a user’s machine with unique settings (don’t ask…), can upset the best-laid plans. Assure the IT people that this is not unexpected – then recruit them to troubleshoot the problems that crop up.
The typical steps that a systems analyst will guide in the waterfall method are:
- initiation – the client sends out an RFP or requests system or software development; the developer asks general questions to get the ‘big picture’.
- analysis – through several different methods, the developer determines exactly what is needed, the budget, time constraints and feasibility; the developer then proposes the project to the client via a Requirements document which, when signed by both parties, is a contract to go ahead with the development. Due to the cost of the analysis phase, a more general contract may be signed before analysis begins.
- design – the systems analyst architects the system, usually by designing the system hardware, data flow diagrams, use-case diagrams and ERDs, and sometimes even a class diagram.
- development – programmers write code and build databases; technicians build hardware and systems. At this point both system testing (of hardware and software) and UAT (User Acceptance Testing) are done.
- implementation – setting up the entire system at the client’s site. This includes notifications and production testing.
- maintenance – upgrades, patches, and troubleshooting in the opening weeks of the implementation.
Because of the broad spectrum of responsibilities that an SA must shoulder, this person must have experience in all phases of this life cycle. And it is the SA who will probably answer for the success or failure of the project.
Once a project has been determined to be desirable, it is time to start the detailed information gathering processes. That information is then documented and approved, feeding the next phase in the SDLC process, design.
The purpose of analysis is to gather data on the existing system, determine the requirements for the new system, consider alternatives, and conduct a feasibility study of the solution. The primary output of the analysis phase is a detailed list of the system requirements. Data collection is the first part of analysis; the goal is to understand the data inputs and outputs for the system. There are various sources for data collection: internal sources such as users and managers, organization charts, procedure manuals, reports, and business process flow charts; and external sources such as customers, suppliers, government agencies, and competitors.
Collecting data through a report is fairly straightforward; however, extracting information from people is often a bit more challenging. Some of the ways to solicit information are direct interviews, observation, surveys and questionnaires. Interviews can be structured, where the interview questions are determined ahead of time, or unstructured, where it is more of a conversation that flows naturally. If the unstructured approach is used, the interviewer should be sure to cover all the important topics and not get lost in the conversation. With observation, a member of the analysis team sits with a system user to observe that user in the use of the system. This method is very helpful in that the observer is able to see how the user does his/her day-to-day job using the system; it also helps uncover gaps where one user uses the system differently than another. Questionnaires are a good approach when users are spread across geographical areas and direct interviews or observation are not feasible.
Prototypes are made for all kinds of things. When a new submarine is designed, a wooden prototype is built to scale, about 10 feet long; it is then put through water-dynamics tests to see if it would be seaworthy. In IT development, a prototype may be only a drawing, to see if users are friendly to the look of it and whether it has the buttons and so on that they need for an opening screen. Other times a prototype is more functional (just without error codes, bells, whistles and fancy GUI), to see if the design idea is what the customer is looking for. The trouble with a functional prototype is that it’s expensive to develop — and if you are a vendor this is done before the contract is signed, so if the customer cancels the deal, this is labor unpaid. But if you are in-house IT, it’s a great idea.
E-commerce is the trade and/or sale of goods or services using the World Wide Web. The customer only sees a Web page s/he can access from his/her browser. Behind that is the selling company’s network. A Web server housed at the company communicates with the Internet to send out the site page. Information clicked on or entered by the customer is sent back to this server and converted to regular data saved on a data server. From the data server, the seller collects and acts on the request. From the internal server, the seller can send e-mail and upload order status to the Web server.
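To make that flow concrete, here is a toy sketch of the data-server side using only Python’s standard library; the table and field names are invented, not any real company’s API:

```python
# Sketch: a minimal order-intake endpoint. The Web server posts the customer's
# order as JSON; it is converted to regular data in the data store, where the
# seller collects and acts on it.
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

db = sqlite3.connect("orders.db")
db.execute("CREATE TABLE IF NOT EXISTS orders (customer TEXT, item TEXT, qty INTEGER)")

class OrderHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        order = json.loads(self.rfile.read(length))
        with db:  # commit the order to the data server
            db.execute("INSERT INTO orders VALUES (?, ?, ?)",
                       (order["customer"], order["item"], order["qty"]))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"order received")

# HTTPServer(("", 8080), OrderHandler).serve_forever()  # uncomment to run
```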
Requirements: You need to determine the functions of the proposed application, and any hardware that may be necessary to make the new system work. Then you need to separate the functions of the software and determine whether EACH requirement is mandatory or optional to achieving the goals. For instance, the manufacturing process module might be (see the sketch after this list):
(1) determine the amount of raw materials needed to build each individual product [mandatory – needed to track inflow and outgo of raw-material inventory];
(2) track products produced [mandatory – need to know what inventory has been used up];
(3) decrement inventory needed for each produced product [mandatory – need to know inventory levels in real time];
(4) flag uncharacteristic changes in inventory usage [optional – an algorithm defining “uncharacteristic” can be difficult];
(5) determine low-inventory reorder levels [optional – can be input manually];
(6) automatically reorder specified amounts at the reorder level [optional – amounts may change regularly].
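A toy sketch of how the mandatory requirements (1)-(3) plus the optional manual reorder level (5) might hang together; the product, materials and quantities are invented:

```python
# Sketch: bill of materials drives the real-time inventory decrement,
# with manually set reorder levels triggering warnings.
BILL_OF_MATERIALS = {"widget": {"steel_kg": 2, "bolts": 8}}  # (1) materials per product
REORDER_LEVEL = {"steel_kg": 100, "bolts": 500}              # (5) input manually

inventory = {"steel_kg": 120, "bolts": 560}

def record_production(product: str, units: int) -> list[str]:
    """(2) track products produced and (3) decrement inventory in real time."""
    warnings = []
    for material, per_unit in BILL_OF_MATERIALS[product].items():
        inventory[material] -= per_unit * units
        if inventory[material] <= REORDER_LEVEL.get(material, 0):
            warnings.append(f"reorder {material}: {inventory[material]} left")
    return warnings

print(record_production("widget", 10))  # -> reorder warnings, if any
```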
Measures of success
Measurement of success is a method to measure the effectiveness of the project, not the efficiency of the project team. Therefore timetables and meeting the budget do not count here. It in part involves the ROI. You need to find those things that save labor time or improve efficiency and that can be quantified — number of transactions currently being processed, amount of inventory being recorded per hour, number of products being manufactured, number of sales per day. Determine which of these things you can ‘count’ now, and what the numbers are proposed to be as a result of the new system. How much labor time and/or money will this save in increased production/sales/efficiency?
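For illustration, the arithmetic can be this simple – every figure below is invented:

```python
# Sketch: quantify a before/after measure and express saved labor as a payback period.
def payback_years(system_cost: float, hourly_rate: float,
                  hours_saved_per_week: float) -> float:
    annual_saving = hourly_rate * hours_saved_per_week * 52
    return system_cost / annual_saving

before_tx_per_hour = 40   # transactions processed with the old system
after_tx_per_hour = 65    # measured after implementation
print(f"throughput gain: {after_tx_per_hour / before_tx_per_hour - 1:.0%}")  # 62%
print(f"payback: {payback_years(50_000, 30.0, 25):.1f} years")               # ~1.3 years
```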
A Project Feasibility Study is an exercise that involves documenting each of the potential solutions to a particular business problem or opportunity. Feasibility Studies can be undertaken by any type of business, project or team and they are a critical part of the Project Life Cycle.
When to use a Feasibility Study?
The purpose of a Feasibility Study is to identify the likelihood of one or more solutions meeting the stated business requirements. In other words, if you are unsure whether your solution will deliver the outcome you want, then a Project Feasibility Study will help gain that clarity. During the Feasibility Study, a variety of ‘assessment’ methods are undertaken. The outcome of the Feasibility Study is a confirmed solution for implementation.
Technical feasibility — can the project be achieved with the current systems in place?
Operational feasibility — can the project track/manage/perform the operations (processes) it is proposed to do?
Economic feasibility — can the project be performed within a reasonable budget?

A Feasibility Study typically involves the following steps:
- Research the business problem or opportunity
- Document the business requirements for a solution
- Identify all of the alternative solutions available
- Review each solution to determine its feasibility
- List any risks and issues with each solution
- Choose a preferred solution for implementation
- Document the results in a feasibility report
Statement of scope and goals
The statement of goals shows an understanding of the desired business process; these goals are simply listed. Scope, on the other hand, is more specific — without expected results or promotional prose. For instance, the scope for a goal of ‘tracking incoming inventory’ could be to the following effect: set up a bar-coding process for received materials, to enter received shipments electronically and directly into the database. From here, the requirements can be determined.
COPYRIGHT 2001: BONNIE-JEAN ROHNER. All rights reserved.
Whether you work for a small nonprofit organization or a multibillion-dollar international business, your work is affected by business systems. Your personal information is kept in some type of human resource management (HRM) system; you get paid via a payroll system; and you communicate with others via an e-mail system. You would not be able to do your work without these systems, but have you ever asked yourself where these systems came from? How were they selected? How effectively do they function in your organization? Do they need upgrading?
Business analysts, systems analysts, project managers, and many other specialty employees all contribute to locating, developing, and implementing these systems.
Virtually all aspects of today’s companies are influenced by, if not outright controlled by, information systems. Although some of these are small, stand-alone packages, most have been integrated into enterprise-wide systems. Understanding how these systems interact with one another and how they comply with the missions of the business is key to properly analyzing and designing modifications for them.
Ask yourself what your corporate standard for the SDLC (systems development life cycle) is, if any. How does it compare to other SDLCs? Does it function effectively for your organization? Why or why not? Additionally, think about what enterprise-wide systems you currently have, which additional ones might benefit you, and why or how that might be.
Enterprise business systems development is the process of answering these questions, then coming up with a solution to the business’ needs and implementing that solution. This is the transformation from ‘smoke stack’ systems, where each department keeps adding to its own system, to The Big Picture. The systems analyst takes a bird’s-eye view of the company and finds a solution wherein as many departments as possible share data and apply it to their own needs. The overall plan will probably include particular stand-alone applications for business-specific needs.
Whether you work in IT or in a management role in another department of your organization, it is likely you will be involved in updating and replacing information systems. Understanding the SDLC process and the major categories of systems within the organization is key to effectively functioning in either of those roles.
With design models created, you are ready to start the selection and development processes. Traditionally, this phase was considered by most IT professionals to be where the real work starts. Over the past few decades, though, people have come to understand that all the planning processes that come before this phase are key in making this phase work successfully.
The major decision at the beginning of this development phase—also called implementation, rollout or construction in some SDLC-based methodologies—is the make-or-buy decision. That is, will you take these models and look for COTS packages that meet your designs, or will you develop the system in-house? If you go with a COTS system, you do not have to do any coding; you purchase and install the system and move right to testing, though you may need to customize the package. Examples of enterprise COTS are Microsoft Office, SAP, and MAS. If you choose to develop the system internally, you have to select the appropriate development language and write the code. This code is tested along the way in a variety of test modes, including module or unit testing, until the system is ready to move to a completed test version.
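For the in-house route, module/unit testing can be as lightweight as the following sketch, using Python’s standard unittest; the payroll function is a stand-in for a real module:

```python
# Sketch: exercise one unit in isolation before the system moves on
# to a completed test version.
import unittest

def net_pay(gross: float, tax_rate: float) -> float:
    if not 0 <= tax_rate < 1:
        raise ValueError("tax_rate must be a fraction between 0 and 1")
    return round(gross * (1 - tax_rate), 2)

class NetPayTest(unittest.TestCase):
    def test_typical(self):
        self.assertEqual(net_pay(1000.00, 0.20), 800.00)

    def test_bad_rate(self):
        with self.assertRaises(ValueError):
            net_pay(1000.00, 1.5)

if __name__ == "__main__":
    unittest.main()
```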
Whether purchased or developed in-house, once the code is complete and in testing, users and other testers put it through a detailed testing process to be sure that it functions as the requirements said it should. While the system is being tested, trainers may be developing training on the system, and others may be converting old data to the new system’s format. After all these activities are completed, the system is ready to be implemented, also known as being moved to production. This is when the system goes live and everyone begins to use it.
Although these are key aspects of delivering any quality system, they alone do not deliver the quality. They must be based on solid designs and complete requirements. Much of this quality is the result of following standards. After a system moves into production, it still needs minor adjustments, enhancements, and tweaks. This marks the maintenance phase.
Change control is the management within the company when a large change is going to occur. This could be the deployment of an application to a large number of users, a turnover to a new system, just about any change that will affect the business processes, including down time. In a large company, there is a Change Control Committee. This committee will review all the plans before a project gets started, sometimes reviewing at specific phases, and giving the go-ahead when it’s time to implement the project. Among the questions the committee will require to be answered by the project manager:
- Is this going to require down time of servers/switches/databases?
- What arrangements will be made to notify users of the change?
- What are the actual components involved?
- What is the backout plan?
- What training is scheduled?
- Is the hardware compatible with existing systems?
- Is the software known to be effective? How do you know? What is the schedule for deployment?
- What is the schedule for development and implementation?
Notice that costs are not involved – budgeting is between departments and not part of the CCC’s purview.
How to measure success is actually determined in the design phase. Whether the budget or timelines are met is a measure of the success of the developer and project manager – that is not what we are addressing here. What we are looking for are quantifiable measures we can use to determine whether the new system is an improvement on the old one. Does it save time and cost? Does it remove possibilities for error? Does it free up people to handle other parts of the business or to do their jobs more efficiently? While this may seem rather arbitrary, it can be quantified. In the design phase you determine where you can expect to improve. In the implementation phase you measure the differences before (with the existing system) and after (with the new system). This confirms the ROI (return on investment) for the client, and shows the developer how much of the predicted improvement was actually achieved.
The things to look at:
- customer satisfaction – if the new system impacts the customer (and most do), this can be measured by counting the number of complaints or returns before and after. Are orders being met faster? Are there fewer complaints? Is inventory ready for the demand where it wasn’t before? Are delivery times improved?
- internal efficiency – are redundancies removed, such as accounting/payroll entering the same information that HR is? Are internal processes faster by being automated? What is the percentage of transactions improved over a specified amount of time? Is data retrieval faster? Are manual processes replaced and how much time has this saved? Not counting training of users, has this eased the work of the users? How? Can inventory be refreshed faster? Can you eliminate wasted inventory? Are errors reduced?
- What is the ROI? For instance, in 5 years, has the amount invested been ‘paid back’ in saved labor costs, diminished waste and increased efficiency?
As you see, you can actually measure whether a new project is worth its cost. The actual measurements will depend on the project itself, of course.
Using the above tools and methods, adding others that befit your project, you will be able to implement your project with confidence of success.
Copyright 2009: Bonnie-Jean Rohner