There cannot be enough said about documentation – at every step of the SDLC. At Pfizer they had intranet Treps (Team Repositories), accessible only by people with permission from the Project Manager – and not all of those have publishing rights. Here the drafts are published, to be replaced by the final forms. The original project plan is published, and the developer picks it up for guidance on his programming. The developer publishes system guides. The technical writers pick up the proposal to figure out what to put in the system test scripts, and these are published. The technical writers pick up the system tests and the system documentation to figure out what to put in the user manual. Still more technical writers and testers pick up the system tests and the user manual to write User Acceptance Testing scripts. FAQs are published there for incorporation into the user manual and online help. Of course, the code itself has to be well documented (if any of you have ever coded in C or C++ you know how, a week later, you’ll never figure out what the program did). Each published document is approved with three signatures (project manager, technical manager and business manager) when in its final form.
Training is a more complicated decision than it seems. If there are a lot of users, some companies start training in groups of 20 or so quite early in the game, with a prototype if necessary. This is not the greatest idea because users get antsy to apply what they’ve learned and are afraid they will forget it if they don’t start right away (and they’re right). It doesn’t help that most training manuals are skeletal (on the assumption that the user will take notes) – they are too busy doing the hands-on practice to take many notes. So part of the implementation planning should be to make a training manual which is in fact a fast-reference outline of the most commonly used features.
The important users working on the development of the system are of course the original SMEs (Subject Matter Experts). With a large group of potential users, these people on the development team should ‘train the trainers’ – the technical support personnel and professional trainers. Most large companies have computer classrooms all laid out and waiting. We did this at Shawmut Bank – I worked directly with the curriculum developer to get a clear user’s manual and training session plan for the network. If the company has ‘shadow IT’ people, they are great ones to prepare – they will work one-on-one with their fellow users.
Whether the system is fully new or an upgrade, tutorials are great when built into the help menu; users can spend lunch hours or any other time privately learning the material. If the system is complex, modular tutorials are a great help as a backup after formal training.
Some companies have a ‘tracking sheet’ which identifies all variables and the order they are going to be tested, and a formal script format. The tracking sheet coincides with the ‘test plan’. One writer writes the scripts and a separate one runs them (to watch for omissions, etc.) before the system testers get hold of them. Not only do we tell them what to do, but we tell them what the expected result should be at every step; the testers record the actual results. A test manager doles out the forms and the test run numbers.
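To make the structure concrete, here is a minimal sketch (in Python, with invented field names – not any particular company’s tracking-sheet format) of a script where every step carries an expected result and the runner records the actual one:

```python
# A minimal sketch of a scripted test with expected results at every step.
# The structure and field names are hypothetical, for illustration only.

def run_script(steps, actions):
    """Run each scripted step, recording expected vs. actual results."""
    results = []
    for step in steps:
        actual = actions[step["action"]](*step.get("args", ()))
        results.append({
            "step": step["id"],
            "expected": step["expected"],
            "actual": actual,
            "pass": actual == step["expected"],
        })
    return results

# Toy system under test: a discount calculator.
def apply_discount(price, pct):
    return round(price * (1 - pct / 100), 2)

steps = [
    {"id": 1, "action": "discount", "args": (100.0, 10), "expected": 90.0},
    {"id": 2, "action": "discount", "args": (59.99, 0), "expected": 59.99},
]
report = run_script(steps, {"discount": apply_discount})
```

A test manager could then dole out such scripts with test-run numbers, and a separate writer could dry-run them to catch omissions before the system testers get hold of them.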
Most compilers catch syntax problems, and professional programmers are rarely so inexperienced as to need a walkthrough. But they do still ‘act like a computer’ – it’s the first debugging technique taught. Instead of writing down the variables by hand, though, virtually any debugger allows you to step in and out of procedures and to name variables to watch as they change during execution of the code. Usually debuggers are used on an as-needed basis – if there are runtime errors that the programmer cannot pinpoint, the debugger comes out.
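The ‘watch a variable as it changes’ idea can be sketched without any particular debugger – the example below (an illustration of the concept, not how gdb or any real debugger works internally) uses Python’s `sys.settrace` hook to record every new value a local variable takes during a run:

```python
import sys

# A crude 'watch' facility in the spirit of a debugger's variable watch:
# record a named local variable every time its value changes during
# execution. Real debuggers (gdb, pdb's `display`) do this interactively.

def watch_variable(func, varname):
    changes = []
    def tracer(frame, event, arg):
        if event == "line" and varname in frame.f_locals:
            value = frame.f_locals[varname]
            if not changes or changes[-1] != value:
                changes.append(value)
        return tracer
    sys.settrace(tracer)
    try:
        func()
    finally:
        sys.settrace(None)
    return changes

def suspect_loop():
    total = 0
    for i in range(3):
        total += i
    return total

history = watch_variable(suspect_loop, "total")
# history now holds each distinct value 'total' passed through
```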
Automated testing is becoming popular, but I don’t think it’s very good, and most large companies feel the same. By the time you write up an automation, the system test is done. It’s best for ‘installation testing’ – the system works fine on a clean, virgin box; now how about the 4 different platforms we handle, and how does it work with the standard desktop applications and OSs we use? So a company will set up ‘test beds’ – a series of computers with all the different combinations of OS and applications used; an automated script then runs each through its usage of the new system, to see if there are any conflicts.
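A test-bed run boils down to iterating a matrix of environments. Below is a hedged sketch – the OS and application names are invented, and the ‘check’ is a stand-in for a real automated install-and-exercise script:

```python
from itertools import product

# Sketch of driving an installation-test matrix: every combination of OS
# and standard desktop application gets the same automated check. The
# environment names and the check itself are illustrative only.

OSES = ["WinNT", "Win2000", "MacOS", "Linux"]
APPS = ["WordProcessor", "Spreadsheet", "Mail"]

def install_check(os_name, app):
    # Placeholder for "install the new system, exercise it alongside this
    # application, and look for conflicts". Here we simply simulate one
    # known conflict on a single combination.
    return not (os_name == "MacOS" and app == "Mail")

failures = [(o, a) for o, a in product(OSES, APPS) if not install_check(o, a)]
```

The payoff is the failure list itself: it tells you exactly which OS/application combination needs a developer’s attention before deployment.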
Also on the test beds is test data. This test data should be as large as possible, with all the idiosyncrasies of the live data – which would be virtually impossible to reproduce from scratch. So instead, the company takes a snapshot of the live data and loads it into a ‘safe’ space, so that testers can add/edit/delete data without damaging the real thing. This test database is used by both system and user testers. The only time live data is used is during a pilot.
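The snapshot step can be illustrated in a few lines – here, hypothetical records are deep-copied into a sandbox and identifying fields are scrubbed, so testers can mangle the copy while the ‘live’ data stays untouched (all field names are invented):

```python
import copy

# Sketch: snapshot "live" records into a safe test space and scrub the
# identifying fields, so testers can add/edit/delete freely without
# touching the real data. Field names are hypothetical.

LIVE_DB = [
    {"id": 1, "name": "A. Jones", "ssn": "123-45-6789", "balance": 250.0},
    {"id": 2, "name": "B. Smith", "ssn": "987-65-4321", "balance": -10.0},
]

def snapshot_for_testing(live_rows):
    test_rows = copy.deepcopy(live_rows)      # never touch the real data
    for row in test_rows:
        row["name"] = f"Tester {row['id']}"   # keep the shape, drop identity
        row["ssn"] = "000-00-0000"
    return test_rows

test_db = snapshot_for_testing(LIVE_DB)
test_db[0]["balance"] = 999.0                 # edits stay in the sandbox
```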
Testing should include not only the usual uses of the system, but every anomaly you can think of. I didn’t get the nickname “Crash” for nothing. Consider all the possibilities – page 3 of 7 falls on the floor while transporting the papers to the scanner; someone enters the wrong index data; someone neglects to enter a required field; the user gets PO’d and starts banging on all the keys. Developers always assume people are going to do the right thing [NOT…]. I once e-mailed a 57-page error message to a developer. So when planning system testing, every possible scenario should be covered. Many developers will set ‘traps’ and make user-friendly error messages for those traps, which is fine. The system testers should aim for the internal error messages, so the developers know where and how to build traps. We’re having a tussle with a developer now because there are certain functions which, if not done right, just stop the system – no message at all – and the developer wants to leave it that way. Not on MY watch.
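A developer’s ‘trap’ with a user-friendly message is easy to sketch. The example below (with hypothetical field names) validates required fields up front and raises a message the user can act on, instead of letting the system die silently deeper in:

```python
# Sketch of the 'trap' idea: validate input up front and raise a friendly,
# specific message instead of letting the system fail deep inside with a
# raw internal error - or, worse, with no message at all.

REQUIRED_FIELDS = ["patient_id", "discharge_date"]

class UserInputError(Exception):
    """Carries a message fit for the user, not a stack trace."""

def validate_record(record):
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            raise UserInputError(
                f"'{field}' is required - please fill it in and resubmit."
            )
    return True

try:
    validate_record({"patient_id": "P-100", "discharge_date": ""})
except UserInputError as err:
    message = str(err)
```

System testers aiming for the internal error messages are, in effect, mapping out where traps like this still need to be built.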
Alpha testing is always done in-house and can be quite extensive. IBM has a bunch of college engineering interns who do nothing but alpha-test its products – they play games, mess around with the GUI, write letters and crunch numbers, looking for glitches. This generates the code-named versions (like the Kirk, Spock and McCoy versions of OS/2). A lot of the things texts list under alpha testing – recovery, security, stress and performance – are usually considered system testing.
I’m sure you are all aware of beta testing. If this is not a shrink-wrap product, beta testing would be set up as a “pilot” – a separate group of people get the whole package and use it on live data. This is only done if it’s a huge deployment (over 1000 users). If it’s successful after 2-6 weeks, another ‘wave’ of users is deployed.
Systems construction includes UAT (user acceptance testing) feedback and alterations based on the same. During the systems testing, the purpose is to find bugs – whether links perform as designed, whether the application has conflicts with other standard applications, whether the application depends on outside files that must be present. This last one has become a particular problem recently, as many customizable packages call Internet Explorer files. UAT is performed by selected end-users, as SMEs. Here they are looking to see if the application meets their needs, if the terminology is what they use (mustn’t use computerese when the user is a photography shop owner), whether drop-down lists contain all the items they need, etc. Some UAT testers will simply apply the application to their work; others need specific scripts of what to do. Often they will have suggestions for changes that they would like incorporated. At this point, the decision has to be made whether these changes should be made before deployment (which means another run of development, engineering and testing), or whether they can be cataloged and saved for the next version. This decision requires input from users, managers and IT.
Now comes the delicate part – actual installation (usually called deployment). Don’t forget we made a decision much earlier about whether to do this overnight, in tandem with the old system, or with the legacy system waiting in the wings in case of disaster. Many companies require a backout plan in case there are serious problems; certainly a change management committee would require one. Keep in mind that many users never log out – they go home and leave their machines running, sometimes even over the weekend. The trouble is that a transparent deployment is done overnight or built into the login script, and if the legacy application or other applications are left open, this can corrupt the installation. To handle this, most large corporations require an e-mail to all potential users of the application at least 24 hours before the deployment; some also require a warning a week ahead. At B-MS there is a team that does nothing else – the Process Manager sends them a copy of the announcement and tells them who the potential groups are (for instance, everyone in Genomics and Biochem). The mailing team has an up-to-date mailing list by department and sends it out. Unfortunately that doesn’t always work, and one night I created a mailing list of 200 people by hand, working with the Novell engineers to find all the people involved. The announcement tells the user what is going to be installed, what special steps might need to be taken (like a reboot), and what impact the installation will have. Pfizer sets up its installations so you can choose to delay certain ones, choose not to install some, and some are mandatory. For the mandatory ones they survey compliance (the installation sends a token back to the installer) and remind those who haven’t installed yet.
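The guard against open applications during an overnight push can be sketched like this – the process names are invented and the process list is mocked, where a real login script would query the operating system:

```python
# Sketch of an overnight-deployment guard: before a transparent install
# runs, check whether the legacy application is still open on a machine
# and skip (and log) that machine rather than risk corrupting the
# install. Process names are invented; a real script would query the OS.

LEGACY_PROCESS = "legacy_app.exe"

def safe_to_install(running_processes):
    return LEGACY_PROCESS not in running_processes

def deploy(machines):
    deployed, skipped = [], []
    for name, procs in machines.items():
        (deployed if safe_to_install(procs) else skipped).append(name)
    return deployed, skipped

machines = {
    "ws-001": ["explorer.exe"],
    "ws-002": ["explorer.exe", "legacy_app.exe"],  # user never logged out
}
done, held = deploy(machines)
```

The skipped list then becomes the follow-up work: chase those users, or catch their machines in the next overnight window.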
Phased installation is great for die-hard legacy users – keep the GUI familiar and add functions incrementally.
One of the reasons so much depends on the data dictionary is that no data should be lost or corrupted during installation of a replacement system. A perfect example of this is the DCF database the state of Connecticut created. They’d forgotten a few fields, and so came out with a new version in 6 months. But the developer apparently did not separate the GUI from the tables. So three fields were lost entirely: the new version did not pick up the fields in their original order, and since they then had the wrong data types, they were dropped. Now every time we go to discharge a patient, we have to re-enter those 3 fields.
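The lesson of the DCF database – map fields by name and type from the data dictionary, never by position – can be sketched as follows (the schema and field names are invented for illustration):

```python
# Sketch of why the data dictionary matters during a replacement: map
# legacy fields to the new schema BY NAME and check types, rather than
# relying on column position. Schema and field names are hypothetical.

NEW_SCHEMA = {"case_id": int, "zip": str, "intake_date": str}

def migrate_row(old_row):
    new_row, problems = {}, []
    for field, expected_type in NEW_SCHEMA.items():
        value = old_row.get(field)
        if isinstance(value, expected_type):
            new_row[field] = value
        else:
            problems.append(field)   # surface the loss instead of hiding it
    return new_row, problems

row, lost = migrate_row({"case_id": 42, "zip": 6355, "intake_date": "1999-04-01"})
# 'zip' arrived as an int, not a str - flagged for repair rather than
# silently dropped the way the DCF fields were
```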
Tutorials keep stressing system and code documentation because someone else will probably do the upgrades. This is another of those ethical questions – many contractors and outsources like to withhold documentation so the company is forced to recall them for upgrades. Ugly practice. And many, many in-house programmers used to avoid documenting their code so that they couldn’t be fired – this is part of why that cumbersome COBOL was developed: to force documentation. Even when it’s documented, picking up someone else’s code is a real bear. But a contracting company/outsource that provides full documentation will become trusted – and will get repeat business. And if you’re in-house, you will be working on many projects; when two years pass and they ask you to work on the upgrade, you will be very glad you documented it, believe me.
Help files can be difficult to develop, but they can make or break a system’s success. Since online help is cheaper than paper manuals, it’s become a replacement for them. Microsoft has a very extensive Help file – but they have one big problem, for those of us looking for advanced features – their help files are all written as context-sensitive. So if you search by index, and find what you want – they refer to buttons and menus you can’t find because you’re not in ‘the context’. For this reason I find MS Bibles invaluable.
Anyone who reads a read-me file has found out that it usually defines new features, special steps for unique peripherals, and last-minute glitches which were caught after release. This is acceptable (well, maybe not…) for shrink wrap, but should never be standard practice for developers.
In most companies, deployment includes:
- Change management notifications – informing the Change Management team of any impacts on user time, system downtime, or other effects on the environment
- Call tracking – all calls on the new system are tracked for a week or two, to see which are unique to the guy who downloads every possible game to his system, which happen only on the Macintoshes, or which happen only on the non-clinical machines. The project is not closed until it is determined that all systems work smoothly.
- Closure notification – Change Management is notified when the project is closed.
- Problem notification – if there is a problem with one type of environment (perhaps the financial department), the users must be notified, and they must be told of any workarounds you find.
- Uninstall notification – if the system must be uninstalled on any machines, Change Management must be notified, as well as the users.
What many texts do not handle in the implementation section is evaluation. This is tremendously important. Evaluation should be a lessons-learned affair, not a condemnation of any sort. If the system is in-house, determinations can be made of changes to include in subsequent versions. Team responsibilities can be viewed and honed. If the implementer is outside of the company, they too can figure out what works and what does not for that particular client. Evaluation should generate evolution.
Design is perhaps the hardest part of the SDLC. Even though you are forming ideas in your mind while amassing the information and developing the analysis, now you need to formulate the picture and predetermine which questions or problems will occur and try to solve them ahead of time. Whatever technical knowledge you have comes into play, and a wise SA (systems analyst) will work hand in hand with any experts at his/her disposal.
The systems analyst needs to know what a computer is capable of, what coding can do, and what the existing or proposed hardware can handle. The better the SA knows coding, the more detailed the design, but this is sometimes a drawback. Better to figure the architecture and let the programmers determine the most efficient way to animate it.
Sources for a system are often a combination of known and new products, depending on the specific system needed. The Drug Discovery division of Bristol-Myers Squibb (B-MS) has a single outside company managing their licenses and tracking all the software, as well as developing all in-house applications. Pfizer uses a pre-developed package (currently Peregrine) to track all calls for technical support; while designed for this purpose, the client (Pfizer) customizes it extensively to track job types in its own nomenclature and to generate some automatic reports. I developed three tools for the Mashantucket Pequot Tribal Nation entirely because the type of information they are looking to manage is so different from that of a normal US corporation.
It is imperative that any data conversion from legacy systems be addressed at the outset — it will directly affect the data dictionary of the new system. Post-development conversions are expensive and awkward, indicating poor planning. Another situation that should be addressed right in the beginning is how the legacy and new systems will interact during deployment. For various reasons, one of three options will be employed:
- overnight replacement — the preference of developers, since their system is of course the Ideal Solution; this would be the necessary changeover in one fell swoop, as for POS systems; other than the logistics of switching, there is little impact on the design.
- concurrent systems — both the legacy and new systems run simultaneously. This doubles the work of the users, but is absolutely essential for accounting and validated systems. After 3-6 months, depending on the patience of the users, the resulting data is compared between the two systems. If there is no degradation or corruption of data, the new system runs alone and the legacy system is dismantled. The designers have to be sure there is no conflict between the two systems and that they can run independently while being in the same environment.
- legacy system “on tap” — a favorite of the Regulatory division at B-MS. The new system is accessed just like (and where) the old system was, but the legacy system is still available in case the new system has functionality problems. This is important where the users cannot afford ‘downtime’ – think of Wall Street, or pharmaceutical firms with a 24-hour window to report to the FDA. Not a great impact on design, but same-name file calls could mess up the new system, so the legacy files need to be isolated.
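For the concurrent option, the end-of-parallel-run comparison is essentially a keyed diff of the two systems’ data. A minimal sketch (record layout invented for illustration):

```python
# Sketch of the comparison step after concurrent (parallel) running:
# diff the two systems' data keyed by record ID and report any
# divergence before the legacy system is dismantled.

def reconcile(legacy_rows, new_rows):
    legacy = {r["id"]: r for r in legacy_rows}
    new = {r["id"]: r for r in new_rows}
    missing = sorted(set(legacy) - set(new))          # lost in the new system
    extra = sorted(set(new) - set(legacy))            # only in the new system
    mismatched = sorted(
        rid for rid in set(legacy) & set(new) if legacy[rid] != new[rid]
    )
    return {"missing": missing, "extra": extra, "mismatched": mismatched}

legacy_rows = [{"id": 1, "amt": 10.0}, {"id": 2, "amt": 20.0}]
new_rows = [{"id": 1, "amt": 10.0}, {"id": 2, "amt": 21.5}, {"id": 3, "amt": 5.0}]
report = reconcile(legacy_rows, new_rows)
```

Only when all three buckets are empty (or every difference is explained) is it safe to dismantle the legacy system.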
The implementation environment is usually already in place. The only instance where I’ve seen an agreement to switch environments is an acquisition where one company is archaic and the acquirer will bring it up to current level. Usually hardware rules software – for example, adding cross-platform modules for a Mac lab.
Types of environment (and most places are a mix) — mainframes, networks (there can be both NT and Novell in a single organization), minicomputers (Sun stations, scanning stations, ‘towers of power’), server-based applications, client/server applications, stand-alone applications.
It is possible that at this point the client will pull out for a variety of reasons. Political opposition may gain power, they may start realizing it will have to be far more extensive than originally presumed (companies with little experience with software development think anything is possible at little cost), or the budget suddenly has to be redirected to another need — I’ve seen this far more often than I like. For in-house IT this is not usually a problem; they switch to another project; it rarely means a cut in personnel because they are staff — additional personnel would not be hired until the proposal goes ahead. However, for outside resources, this is a painful moment – they have already expended time in research and presentation. Outside resources just thank the heavens they didn’t put out for the all-electronic JAD!
In most cases, the project continues.
When planning a baseline (minimal) system, design a stepped approach (with costs) right up to the top-of-the-line. Many clients would prefer to take a trimmed-down system if they know they can keep growing. And success on the first level makes them hungry for upgrades. I’d rather produce a baseline piece of perfection than a system with all the bells and whistles…and more bugs than Pest Control can handle!
The idea of having a firm develop and run your application on their own computers, where you supply input and take output, is not really that extreme. Examples are billing companies for medical offices, paycheck generating companies and “data warehousing” in any form. Many companies are large enough to need an outsource to generate business analysis, ad hoc reports and manage its data while the company pursues its own business function.
Considering outsourcing? There are complete hardware and software systems pre-built and supported fully by their manufacturers (such as systems to run photo studios, beauty parlors, restaurants). Absolutely essential: get interviews with previous clients of the outsource! Company size makes a difference: at B-MS they decided to employ an outsourced app called Asset Insight to survey, categorize and track hardware and software globally. It was the choice and recommendation of one individual (who may have known the source). As the scope of the usage broadened to more than 100 machines, problems with the application increased exponentially. Eventually they discovered the application had been developed by three people in a private home – they had no experience with a global enterprise and no way to pretest at that scale.
Prepackaged, off-the-shelf systems are often called “shrink wrap”. Especially for small businesses, this is often the best choice to offer.
At the same time that Ashton-Tate was screwing around with dBase IV (causing die-hard dBase II users to wait for dBase V before upgrading), a lot of PC databases came out — FoxPro, DB (now DB2, from IBM), MS Access and Alpha3 (then 4, then 5). Within two years, Ashton-Tate was out of business. A good example of how a single version of a single company’s line can spell disaster. The SA must be able to predict which shrink-wrap applications will survive.
Turnkey systems were an exciting idea around 1990 — applications were just starting to get complex, requiring installation instead of running off diskettes. And there was little formal computer education, so a no-brainer system was very appealing. Larger corporations eventually discovered the ease of installation did not make for a very useful application. Smaller companies and educational organizations, which cannot afford a full time professional staff, still often go for turnkey systems.
PeopleSoft systems may be customizable, but they are not user-friendly. You need in-house PeopleSoft-trained people full time just to maintain the systems.
Hybrid systems could get messy — no access to source code, for instance. But I have seen some gifted software engineers who could get quite a lot accomplished.
Many companies think it’s cheaper to maintain in-house software, especially since this way they are not ‘held hostage’ by an outside source. The common side effect is an overload of the in-house staff.
At B-MS I handled a few tussles with vendors of software because they design the product for stand-alone licensing, where you receive an ‘unlock key’ when you purchase it. So what do you do when you want to install via a network of 4500 users? You can’t have a different number for each person; you want a quick and transparent installation. Often the vendor (for purchases of 700 licenses or more) will have their own software engineers redesign the install module to fit our purposes.
When working with enterprise systems, a company may purchase only one set of user manuals for every 50-100 users, because of the cost. So software companies started beefing up their online help, albeit poorly. MS online help is very thorough — *if* you call it up as context-sensitive. Otherwise it refers to menus and buttons that are not on the screen you are at. The alternative to good online help is good training, whether the software is in-house or purchased. But then, what do you provide when new users come on board?
Trade-publication application reviews are not always seeded by the manufacturer and can be a good source for evaluation. Consider the source; some publications give excellent comparisons of similar applications, which is always better than a single-application review.
Simultaneous hardware upgrades or changes are usually cost-prohibitive, and most companies will not agree to it (unless they are woefully antiquated). But it’s a good idea to design ahead of the state of the current hardware, so the application will still be part of the repertoire when they do upgrade.
When OOD (object oriented design) first came out it was called object-oriented programming solutions — OOPS. And those of us comfortable with modular design thought the acronym appropriate. OOD is more of a conceptual thing than a practical application. But it’s very handy for architects.
Obviously, this white paper does not tell you how to design a system; its purpose is to help you know all the factors that will affect the system analyst’s architectural design.
Information-gathering at the inception of a project
I consider interviews the most important fact-finding method on a development project, backed up by the collection of business documents. I prefer interviewing fewer people, for an hour each, and only one person in each role. My reasoning is that interviewing more than one person in a role makes the information redundant (and therefore a waste of time). Often, only the manager or the person “buying” the service (that is, the one whose budget gets most impacted) will offer to do a Needs Assessment interview – this is totally unacceptable, because the manager does not fully understand the needs of the end user. Ask the manager for permission to interview the most experienced and/or ‘largest’ user in each role.
For instance, if the project is a payroll package for a retail chain you would want to speak to
- The manager/owner/requestor – find out what s/he is looking for, the budget, and the reason for the inception of the project. Who handles the W-2s and how? Do employees get a shift differential?
- Timekeeper – how are they amassing the employee weekly time information; if this information comes in electronically, can the new system import/convert the data? If the employee’s recording of his/her time and the timekeeper’s entry of the employees’ time is all manual, consider computerizing one or both sets of information as a high-end solution.
- Person cutting checks – how are they getting and storing the timekeeping, W-4, and tax information? What specific problems are they encountering, such as sorting recipients by branch, separating out checks to be mailed and check stubs to be mailed (for direct deposit accounts) and sorting those all by zip code? How is all this information being sent to the bank? Format?
- An employee – how does s/he track his/her time? Any complaints about this method?
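Tying the timekeeper question to code: if the weekly time arrives electronically, the new system needs an import/convert path. Here is a toy sketch – the CSV layout, base rate and the 10% night differential are all invented for illustration:

```python
import csv
import io

# Sketch of importing an electronic timekeeping export into the records
# a payroll package might expect. The CSV layout, base rate and shift
# differential are invented for illustration.

SAMPLE_EXPORT = """employee_id,week,hours,shift
E001,2024-W01,40,day
E002,2024-W01,38.5,night
"""

NIGHT_DIFFERENTIAL = 1.10   # assumed 10% shift differential

def import_timesheets(raw_csv, base_rate=20.0):
    records = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        rate = base_rate * (NIGHT_DIFFERENTIAL if row["shift"] == "night" else 1.0)
        records.append({
            "employee_id": row["employee_id"],
            "week": row["week"],
            "gross": round(float(row["hours"]) * rate, 2),
        })
    return records

payroll = import_timesheets(SAMPLE_EXPORT)
```

If the timekeeper’s real export can be parsed this cleanly, you have your import path; if not, that conversion gap is exactly what the interview needs to surface.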
If at all possible you want your system to be able to interface with any existing electronic systems, both for feasibility and the possibility of replacing the existing systems in the future.
The problems with interviewees are very real. If the person does a manual, clerical job, s/he is going to fear electronic replacement. This person is going to be very self-protective and might even misinform the interviewer to make the latter look incompetent. To get cooperation, keep assuring the person that you will not replace him/her; instead you will ease his/her workload – people in this type of situation invariably complain of an overload of work and/or a fear that s/he is too accustomed to the “old” method and won’t be able to learn a new one.
If interviewees give you the ‘should’ scenario instead of the reality scenario, is this really bad? You will be building in just the corrections that are needed. Ask questions where you suspect there’s a specific lack of information. Encourage “complaining” – it is there that you will learn the weaknesses of the old system (hardware, software and wetware) and what you can offer as a cure.
An interviewee not being able to describe his/her work is very common, especially in a company which grew from a small mom-and-pop organization into a good-sized corporation. These folks learn their tasks, but not the terminology or standard methods. A person can have 10 years’ experience building and managing projects, yet not know what an SDLC is, because they never went to school for Project Management – they just did it. You end up hearing descriptions of a specific task rather than the actual information flow. For example, at Bristol-Myers I was trying to map the process by which the site network was managed. I got a glowing description of a recent problem with a single workstation and how it was eventually discovered to be the NIC (which was a different department’s purview). What I needed to know was in which situations a technician would contact the site Network people, and what the flow of information was from there.
There is a specific problem in systems analysis – the people who get into this line of work often have a good technical background but are weak in “people skills” and judgment. These are two very important skills. If you are in this position, take advantage of any management courses available to you – it will only make you look better to your employer. If you find you don’t really like the people-part of the job (and many technical people don’t) consider being part of the development team instead of the client-interface team.
I have worked for two different very large corporations in the same field – one has a team that does nothing but document the processes in use in all departments (and publish it on the intranet); the other doesn’t even have a P&P (Processes and Procedures Manual) for the IT people. Needless to say, the former is more successful.
All business forms are of great value – not only what they fill out for input, but also every report/invoice/whatever that goes out. Whenever I’m doing a Needs Assessment I request sample copies of everything they mention and try to anticipate others they forgot. You can always weed out those you won’t need. And refinement of the data, to avoid duplication, is easier done this way. Sometimes they are inputting or outputting the same data, just using different names for it, so they don’t realize it’s the same. What you don’t want is to discover after the delivery that the data for a regular report is NOT there; you can build the report later if needed…but adding fields and inputting missing data is tough and expensive. Whenever an input field does not appear to be used for output, question whether it’s a necessary field (it may be something they intend to evaluate in the future.)
I am not a big advocate of questionnaires. Even if they are electronic, maybe 25% respond. Non-response is very important – there is usually a very good reason for it. Online forms are great; people usually feel they are safer – no one will be reading the responses personally – so they are more likely to fill them out, and truthfully (after all, no one can tell who it is by the handwriting). Use the company’s intranet. Questions on the level of satisfaction are a bear – ever try to decide whether you are ‘somewhat satisfied’ as opposed to ‘satisfied’? Avoid subjective queries.
On requesting documents – an organization chart is usually not very important, unless you want to know who to please (often a determinant). Above all you want the forms of information – what forms are used to gather the information now (job applications, patient intake forms, logs kept by the people running programs, etc.) and what information needs to be extracted (standard reports to stockholders, paychecks, metrics reports to management, invoices). The output will determine the format of the input. So I guess you could say I advocate a top-down approach.
One reason I have a lot of not-for-profit organizations as clients is because they need to report to at least one agency on their activities, as well as funding agencies (such as United Way and the Dept. of Children and Families). There is no pre-packaged software for these organizations which can track their activities (there are shrink-wrap packages for non-profit accounting). They have to report activities and demographics quarterly to each funding agency or lose the funding. These projects must be designed from this vantage point of output, and often ‘registration’ electronic forms need to be designed to nudge them into getting the data needed.
Direct observation is one of my favorite methods of information-gathering. Ask questions; if you do it right, they get into ‘brag mode’. An example of the value of observation: I have a client for whom we have done data warehousing since 1996; we know all the groups they need to answer to for funding and licensing, we know the demographics that are important to the management, and we know what the esoteric codes mean. The state of Connecticut decided to create its own databases for the reports it was evaluating each month, since up to that time the information was manually input and was constantly in error. But the man hired to build these databases had no idea what the information meant. The result – an awkward database with so many errors that it has been revamped 3 times in the last 2 years. Plus, they seem to think their reporting agencies are stupid, so they locked access to the database. Result – our client had to pay for re-input of 600 records, and pays extra each month because we then have to export the state’s information, fix it and add our additional tracking information. Why fix it? The state has the zip code locked to CT-only zips – my client is near the Rhode Island border and often gets clients who have RI zip codes. Additional information? The state doesn’t think in terms of separate programs, so we have to re-align the data. And the state selected its own case-numbering system, so the client has to keep a double system for all current cases. I could go on ad nauseam… If the developer had observed what the reporting agencies actually do, a lot of these problems wouldn’t have cropped up at all.
Characteristics for a good systems analyst during requirements determination:
- Impertinence – asking questions. It looks easier than it is. You could easily miss which questions to ask, which might require a call-back. This ability to know what to ask is a developed skill, and certainly each project will make you more aware of what to look for.
- Impartiality – the politics get in the way all the time. Always consider the source; what is the person’s attitude toward the project? What will this person gain or lose with the new system? You may actually be told by the ‘buyer’ that a particular person’s opinion is not [is most] important. Find out who makes the final decision – this is the person to try to satisfy in the end.
- Relax constraints – yeah, right. The biggest hurdle is “I have done it this way since Ben Franklin…” The client may insist on a mimic of the present system; any change to this would have to be gradual, probably a follow-up upgrade. Try to keep as close to the present way of doing things as is efficient, such as having electronic forms look very similar to the paper forms they are using – but with time-savers on them such as default values.
- Attention to detail – if you have a computing background you already know that it’s the details that will kill you.
- Reframing – this is no problem if you are an outside source. But if you are an in-house organization this is very difficult for the analyst as well as the client.
Knowing the business objectives is necessary to sell your solution. For example, one year Bristol-Myers had a Business Strategy Objective (BSO), which was defined in detail. All work had to be justified according to the BSO or it wouldn’t be done. One key phrase is “this is the ROI” (Return on Investment); since it’s a business buzzword, it perks up ears that have no interest in the technical stuff. And it’s always a selling point – prove to the client that they will get a better profit, to the tune of a multiple of the cost of the new system in the first 5 years, and you’ve made your sale. For not-for-profit organizations, the people are busy saving the world; they don’t carefully track their own activities. Showing them how tracking particular information can increase the funding works like a charm.
Every time I build an application for them, I discover a lot of things they could track easily by computer that they weren’t bothering to report at all. As a result of the more efficient reporting, their funding increased significantly. Be sure to observe more than the sponsor offers.
Watch for side activities/processes that can be incorporated into the system (for the high-end solution, or a follow-up proposal). And watch for redundant actions which can be eliminated – this is quantifiable ROI. Be sure to re-write notes after doing an interview, receiving a questionnaire, or reviewing documents; you’d be surprised what you’ll forget in a matter of hours. Remember that your time and that of the people giving you information are both valuable. Do not give notes back to the person you question – you should have notes that are not for the clients’ eyes, since they might misinterpret them. Instead, you could graph or outline the information and have them review it for accuracy.
JAD stands for Joint Application Design (sometimes given as Joint Analysis and Design). This is often where you’ll find professional facilitators. Within your organization a JAD should be run by someone on a management level – or someone being groomed/trained to be on a management level. It might be the Project Manager for that project or a Systems Analyst. Because professional facilitators don’t know the subject matter, they often lead the discussion in the wrong direction. Usually the only place you’d see a JAD is within a company that has application development as its business function. It’s expensive and time-consuming for the client. It would be nice to have it run electronically – and run it at the client site – but now we’re really getting into the big bucks. In most cases a JAD is tracked on a white board or large paper pads, and then has to be rewritten and published. One big problem is squabbles among the clients. Better, then, to interview them separately, make your suggestions and let them squabble it out after you leave.
Analysis of gathered information
Prototyping is a great idea and should be a standard part of the development process, if you can convince the client to do it. It allays fears of what to expect, makes it easier for clients to articulate their needs and practices, and it’s cheaper in time, money and work than coming up with a revision. The only drawback is that if the contract is turned down, the developer needs to “eat” the cost.
A good approach to convincing the sponsor of the need for change is charting the existing system: all manual, hard copy and electronic processes together in a ‘current systems’ data flow diagram. Don’t try to flabbergast the client – win them over with professionalism.
There are hard copies which must be maintained in certain industries such as pharmaceuticals and legal companies. These can be scanned into a database, moving closer to a paperless work environment (and it’s certainly a lot easier to manage). There is a company in Connecticut that does nothing but scan and index all the paperwork an attorney needs for a case, so s/he can review, evaluate, cross-reference and call up any piece of evidence in an instant.
Data flow charts really aren’t that difficult for software companies, because they usually have teams that always do the same type of applications, such as warehouse management for whatever kind of warehouse comes along – they modularize and rearrange the modules.
For presentation to the customer, ‘before’ and ‘after’ data flow charts would show the simplification of the processes, which is usually enough to convince them to continue with the project. Those of you with Visio are probably aware of the many ways in which to represent processes. Is any one icon or system the best? Absolutely not. You can use circles and squares, as long as it’s clear and well-documented – you want to be able to send the client home with a printed copy to evaluate the changes on his own time, rather than making him feel pressured to respond immediately.
Knowledge of programming is a great asset to developing these charts, so the systems analyst will often work hand-in-hand with the programmers in drafting them.
During analysis, a feasibility study should be done. Feasibility is hard to nail down. It looks somewhat overwhelming at first. Many companies have a dollar figure per-employee for additional manpower. And experts in each phase (networking, users, hardware) can usually give you information on what it will cost, or if it’s not possible to do. In large companies, there is usually a good idea ahead of time of what the costs will be, and they will be looking more for the time span involved. In small companies there is usually a serious “sticker shock”, and many projects die at conception.
Throughout this process, the aim is to achieve an agreement, which will probably mean compromise on both sides.
Physical architecture of a system is defined by the way it functions. Originally computers were ‘mainframes’ – one huge collection of switches and tapes, with a single input point for teletype or punch cards, and a single output device, a printer. These computers were as large as a building. As time went on they became as small as a room. People stood in line for their chance to use the computer. You will still find this type of architecture in colleges for engineering students.
In the 1970s, the personal computer debuted. Since it could not house the large relays of a mainframe, this was seen as a personal super typewriter, called a word processor; graphics were extremely limited in a personal monitor (remember Hercules boards?). Sharing data was achieved by saving things on 5-1/4-inch diskettes. Even the programs that were run on the PCs ran off the diskettes – hard disks were tiny (10 MB) if existent at all. RAM was equally limited. The Motorola CPU chip used in Macintosh, Atari and Commodore PCs opened the possibilities of expanding the usefulness of a PC to things other than word processing.
In an effort to tap the user population that was eyeing the Macs, Intel set about developing the 808x series of chips, which opened up the DOS market to private users. Math coprocessor chips had to be added alongside the CPU, and graphics chips were eventually added; software often had to be explicitly configured to use these non-CPU chips. Still, both PCs and mainframes were in the position of a one-machine-one-user architecture.
While businesses needed the power of a mainframe, they could not waste the personnel time having people in line for use (or their jobs waiting in a queue for a mainframe operator). Thus began tiers. Several people needed to be able to access the mainframe at the same time. So “dumb” terminals were placed at most desktops, and e-mail became the fad. A dumb terminal has no processor – it is a monitor and keyboard wired to a mainframe. As such, the mainframe did all the work. One may choose not to call a terminal a tier, since it’s more like a lateral expansion. Semantics.
So in the early 1990s people often had two terminals at their desks – a dumb terminal hooked to the mainframe and e-mail, and a PC for their non-shared work. Time on the mainframe could really stretch out – people learned when the ‘peak’ periods of use were and tried to do their work at ‘off’ times. On a Friday at 3 PM, when everyone was trying to wrap up the week’s work, it could take up to an hour just to get a report printed. So anything which could be done on the PC was done there. Because of the familiarity with the mainframe command system, Intel PCs were used more often than Macs, unless the people using a PC were technofreaks and wanted to play with Macs.
As it became apparent that personal computers had a great deal of popularity, and technology soared ahead in RAM, ROM and processors, designers were looking for a way to meld the advantages of mainframes and PCs. That gave birth to the network. In a network, the server does a lot of the work in trafficking that the mainframe once did, but part of the work was shared by the nodes, or “smart terminals”, as the nodes put in queue requests and passed information along to each other. At least, even at $20,000 for a server, it was cheaper than a $6,000,000 mainframe with no trade-in value!
There were a lot of growing pains at this period – many different networking models and network operating systems (NOSs). Companies weren’t sure where the future was going, so they bought this and that and experimented. I remember being in one company where I could walk around and find Apples, Macintoshes, Intel (PC, AT, XT), “Trash 80s” (TRS-80), mainframe terminals, and at least two different network OSs. It was a hodgepodge. And many companies are still carrying this legacy.
Client/server architecture is usually a network – a server does some of the work and the client (node) the rest. This is not just true of the NOS, but of applications using this architecture – part of the application is on the server and is downloaded to the node’s RAM when executed, part has to be called directly from the server, and part is local at the node. Yet another part might be in a database on another server. A good example of this was WordPerfect, which kept running even if the network went down – until you wanted to print, which you could only do through the server. In corporations, one PC could be used allowing the user to switch from network mode to mainframe access.
Web architecture could be considered client/server, but the web server is unique in that it must be designed to accept a “universal” language rather than the language of the NOS. Then the web interface usually has to communicate with information on data servers. All of these possibilities required new languages which were more efficient than those of legacy environments such as AIX or VMS, which did not mesh well with the PC architecture.
The ISO’s OSI reference model (IBM’s SNA was a similar layered design) describes the various communication layers used by a NOS – those that ‘spoke’ to the business applications, those that handled the user interface, etc. There are 7 defined layers. Each of these layers actually performs a distinct function independent of the other layers. It is the synchronization of these layers, and the separation of responsibility between the node and the server or multiple servers (e-mail, web, data, applications, users) that makes them so powerful.
As prices, technology and applications began developing for the tiered architecture, “migration” and “maintenance” became keywords. For over a decade companies were still seeking the best possible answer and then migrating over to their architecture(s) of choice. Applications were wholly revamped and it was like having a totally new system at each migration. Finally this has slowed down, and maintaining the applications and growing technology is the major activity of the IT departments.
The old method was that one technofreak or another decided “this” was the way to go and a company would simply adopt his/her recommendation. This sometimes resulted in smokestack architectures, piling one obsolete setup on top of another. In those days, there was no such degree major as computers – computer managers were either mathematicians or electrical engineers.
This is no longer an acceptable approach. Now we must look at what is there and develop a sane plan to maintain what we need from the legacy systems, see far enough into the future to be sure any replacements have a future, and design a migration which is not only successful but reasonably priced.
The data is the common point of reference for old and new systems. Since all of these systems process data that come from predominantly the same places, the different environments must seamlessly communicate and often share the same data. Hence, one of the most important considerations in systems architecture is the architecture of the data. And the next step is to migrate this data to a newer architecture.
The Decision Support System (DSS) is a very specialized application. The decision involved can be any decision a person or manager should need to make. Support means that this system will do the tedious, time-consuming, number crunching, information gathering parts of knowledge-collecting, allowing the user a full and impartial view of the facts from which to make an informed decision. Your spouse is a Decision Support Person – s/he listens to your ideas to move to Albuquerque, then says, “But, dear…you are allergic to yucca plants!”
At one time if an executive wanted information to aid his decision, he had to go to a computer operator to have the operator phrase the questions to a mainframe by punch card. Now executives have their own computers and develop their own software; query languages are easier to learn, and applications software is a standard element of the desktop. There is a natural progression from EDP to MIS to DSS. An MIS was used to help make decisions simply because the regular reports and metrics showed trends, successes and failures, as long as the manager using them remembered the results of the past ones and the points of change. But it couldn’t ‘predict’.
What does a person have to do to make a decision?
If the decision is more important or more complicated, one may need to devote more time and effort to making that decision. So let’s take a real-life example – Mrs. A got a good job offer to work in Maryland for a “Beltway Bandit”; her company was closing the local offices. She can either move to Maryland or stay here and get a different job. Her husband has been thinking of opening a business, and has started inquiring about office space and equipment. Mrs. A needs to do more than list the pros and cons; she needs to weight these pros and cons according to how important each item in the list is.
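To make the weighting concrete, here’s a minimal sketch in Python. The factors, weights and scores are, of course, made up for illustration – in a real engagement they would come from the decision maker.

```python
# Weighted pros/cons: each factor gets a weight (its importance) and a
# score per option; the option with the highest weighted total "wins".
# All factors, weights, and scores are hypothetical.

factors = {                       # factor: weight (importance, 1-5)
    "salary": 5,
    "spouse's business plans": 4,
    "cost of moving": 2,
}

# How each option rates on each factor (-2 = very bad ... +2 = very good)
scores = {
    "move to Maryland": {
        "salary": 2, "spouse's business plans": -2, "cost of moving": -1},
    "stay and find a new job": {
        "salary": -1, "spouse's business plans": 2, "cost of moving": 2},
}

def weighted_total(option):
    return sum(factors[f] * scores[option][f] for f in factors)

best = max(scores, key=weighted_total)
for option in scores:
    print(option, weighted_total(option))
print("Recommendation:", best)
```

The point is not the arithmetic but that the weights force the decision maker to say out loud which items matter most.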
The SDLC (life cycle) of a DSS may follow these steps:
- Determine the decision that must be made.
- Determine the facts (knowledge) that need to be collected that might affect the decision.
- Collect the knowledge into a knowledge base.
- Determine the processes that need to be calculated from the facts.
- Run the processes on the knowledge.
- Determine the best way to present this derived information to the decision maker.
- Often, as a sort of post-mortem, people look at the collected knowledge and see other decisions that could use support by applying different processes.
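The steps above can be sketched as a tiny pipeline. Everything here – the decision, the facts and the derived numbers – is a hypothetical placeholder, just to show the shape of a DSS.

```python
# Minimal DSS pipeline mirroring the life-cycle steps above.

def collect_knowledge():
    # Steps 2-3: gather the relevant facts into a knowledge base
    return {"monthly_revenue": [12000, 12500, 11800, 13100],
            "monthly_costs":   [9000, 9400, 9100, 9600]}

def run_processes(kb):
    # Steps 4-5: run the calculations on the collected knowledge
    profits = [r - c for r, c in zip(kb["monthly_revenue"],
                                     kb["monthly_costs"])]
    return {"profits": profits,
            "average_profit": sum(profits) / len(profits)}

def present(results):
    # Step 6: present the derived information to the decision maker
    return (f"Average monthly profit: ${results['average_profit']:,.0f} "
            f"(range {min(results['profits'])}-{max(results['profits'])})")

kb = collect_knowledge()
results = run_processes(kb)
print(present(results))
```

Note that the system stops at presentation – the decision itself stays with the human.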
One of the challenges I give to students who are learning spreadsheets is to write a spreadsheet that replaces their check register. While it sounds simple enough, the only way to do it is to break down functions we normally do “without thinking” (heuristics). How do we make decisions when we enter something in a check register? We look to see if we wrote a check or made a deposit, in order to know whether to add or subtract the amount from the running balance.
Codewise, we’d have to create a field that says ‘this is a deposit’ or ‘this is a check’, then write an IF statement that checks that field and then decides whether to add or subtract. Very awkward. Wouldn’t it be easier to understand that the spreadsheet recognizes a blank cell as zero? So if it adds everything in the “deposit amount” field and subtracts everything in the “check amount” field at every line, wouldn’t this work?
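The blank-cell-as-zero trick translates directly into code. Here’s a small sketch (the rows are invented sample entries) where every row uses the same formula, no IF needed:

```python
# Check-register logic: treat a blank cell as zero, so every row can use
# the same formula: balance = previous balance + deposit - check.
# Rows are (deposit, check); None stands for a blank cell.

rows = [
    (500.00, None),    # deposit
    (None, 120.50),    # check
    (None, 35.25),     # check
    (250.00, None),    # deposit
]

def as_number(cell):
    return cell if cell is not None else 0.0   # blank counts as zero

balance = 0.0
balances = []
for deposit, check in rows:
    balance += as_number(deposit) - as_number(check)
    balances.append(round(balance, 2))
print(balances)
```

This is exactly what the spreadsheet does for free when it evaluates an empty cell as zero.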
To support a decision we must also simplify until we understand the tools we are using, the value the decision maker puts on certain results, and the extent to which our system can help.
Today’s CIOs must understand business as well as technology. Pfizer resolves this by having two Project Managers on a system development team – one from the technology group, and one from the business group.
Take a look at your organization – where are decisions being made? Are they to influence planning, organizing, directing, coordinating, forecasting or investigation? Often we don’t recognize “decisions” unless it’s obvious, but actually each of these functions involves the analysis of knowledge and a decision to be made. Whenever a person develops a spreadsheet or a table, s/he is trying to organize knowledge in order to come to a decision – every “what if”; every “let’s look at the big picture”; every time someone compares two things. One only analyzes things when one wants to make a decision about them.
Do we have to classify and define such ethereal concepts as knowledge or decision power centers? Yes; looking at the types of knowledge and decisions, etc., helps us to see the possible ways in which computers may assist decision making. The extent to which a computer can help a manager plan, organize, command, coordinate or control depends on reaching a more detailed and precise understanding of the functions and their interrelationships.
An anecdote about knowledge: When my father was working at Sperry (now Unisys) he had a little black box that had blinking lights running on batteries. He’d take it into staff meetings, and as he spewed out statistics of the state of the company, he’d pause, peer at the box, then recite the statistics. Everyone thought he had gotten his hands on a remarkably tiny mainframe, chock full of information. This was long before flash drives!
Most people don’t recognize “knowledge” per se. If you have ever had to graph a data flow diagram, as opposed to a process flow chart, you have experienced the dilemma of identifying a single type of knowledge. How can a single piece of knowledge shift an entire organization? Think back on your workplace – changes in personnel can completely realign the priorities (as when a new CEO takes over an ailing company). A shift in industry can greatly affect the knowledge needed and handled, as when networking made personal computers a standard business accoutrement, which caused IBM to do a reversal on their whole production scope. Changes in the clientele of a business can alter the knowledge base. With the economic crunch, many companies that had been diversifying had to reverse the process and start divesting – a decision had to be made as to where the company wants to go and what to let loose.
While we easily say “a computer could do this” when it comes to crunching numbers, or reporting on data, it is much harder to recognize how a computer can come to a decision. It sounds like science fiction.
In reality, computing systems rarely come to decisions. But they can be a tool for coming to a conclusion or decision yourself.
An example is a program a friend of mine just built in XML – it’s a GUI for the intranet which lists all the personnel of the CRi department, whether they are fulltime employees or contractors, who they answer to, and how long they have worked there. The purpose of the GUI – the department has been hiring contractors without restriction whenever a supervisor requested them. So this application is taking specific HR information and creating an ‘organization chart’ of sorts to help them get a view of the sub-departments, balance out the roles and establish a personnel budget.
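The heart of that application is just grouping personnel records by supervisor. A rough sketch of the idea (the field names and people are invented; the real system pulled them from HR):

```python
# Hypothetical sketch: group personnel records by supervisor to get a
# simple org-chart view and a contractor head count.
from collections import defaultdict

people = [
    {"name": "Ana", "type": "employee",   "reports_to": "Lee", "years": 4},
    {"name": "Raj", "type": "contractor", "reports_to": "Lee", "years": 1},
    {"name": "Mei", "type": "contractor", "reports_to": "Ana", "years": 2},
]

org = defaultdict(list)
for p in people:
    org[p["reports_to"]].append(p["name"])      # who reports to whom

contractor_count = sum(1 for p in people if p["type"] == "contractor")
print(dict(org))
print("contractors:", contractor_count)
```

Once the data is grouped this way, balancing roles and budgeting personnel becomes a matter of reading the chart.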
Let’s look at a decision – what to have for breakfast? Some of you may have the privilege of just sitting down to a table festooned with goodies prepared by another. OK, now for reality – what are the influences on our decision? Whether you have to watch your cholesterol. Whether you have to watch the calories. How much preparation time do you have. Whether others must also be fed. And, of course, what’s in the refrigerator.
A manager must play interpersonal (getting one team member rolling so the other team members can get to work), informational (notifying users that there will be server down time) and decisional roles (where shall we build the new warehouse?). Interpersonal work is rarely a computerized function, whereas informational work is almost always supported by computerization. Actually, a lot of decisions use computer-based tools to help, without even realizing it – writing a white paper on the merits and disadvantages to a decision, developing a spreadsheet with the variables of a problem and changing the values to see the results, or extracting data from factory machines and downloading it into a database. I recently took a bunch of sales records that came in an ASCII (unformatted text) file for a pharmaceutical company, converted the data into tables, and did a variety of business analysis reports on the profitability of the company and the possible areas of change (including the outcome of those changes).
An example of a team decision maker – the President and his cabinet. The cabinet members are supposed to be experts in their fields. A good manager also works this way, trusting his/her finance people, HR people, etc., to input the specific knowledge of their specialties, so that the manager can come to the “informed decision.”
There are applications out there which actually evaluate a decision maker’s value system, to determine his/her risk profile. The manager answers a series of questions such as “Which would you prefer to do – buy a daily lottery ticket for $1 which pays out $50, or a weekly $5 lottery ticket which pays off $500?” The answer is given a numerical value, and the manager’s willingness to take risks is calculated. Then the DSS designed for that manager knows which solutions to drop and which to include, depending on the risk factor.
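In its simplest form the scoring is just an average over the answers, with a cutoff applied to candidate solutions. A hypothetical sketch (the questions, scale and cutoff rule are all invented):

```python
# Hypothetical risk-profile scorer: each answer maps to a numeric value
# (1 = risk-averse ... 5 = risk-seeking); the average becomes the
# manager's risk tolerance, which filters the solutions the DSS keeps.

answers = {
    "lottery_preference": 3,
    "job_security_vs_upside": 4,
    "new_vendor_comfort": 2,
}

risk_score = sum(answers.values()) / len(answers)

def include_solution(solution_risk, tolerance=risk_score):
    # Drop any solution riskier than the manager's tolerance
    return solution_risk <= tolerance

print("risk score:", risk_score)
print(include_solution(2.5))
print(include_solution(4.0))
```

A real product would weight the questions and calibrate the scale, but the filtering principle is the same.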
The design of a DSS depends on the decision context and decision type, as well as the nature of the decision maker. This is where you need to concentrate. You must also watch for sources of knowledge, such as databases you can tap into for information, rather than relying on input each time, which is time-intensive.
When designing a DSS, try to incorporate as much flexibility as possible, rather than working toward a single decision. Make it as user-friendly as possible. Be sure it uses reliable materials – remember the GIGO saying – Garbage In, Garbage Out. Determine the What, the How and the Why of any knowledge you want to incorporate.
Ad hoc reports encouraged the development of DSSs. A lot of times a “DSS” is developed on the spot for support of a decision, but it is simply not recognized as such. For example, I datawarehouse for a particular clinic, with monthly, quarterly and annual analyses generated for the agencies which support the clinic. Every once in a while the agencies ask for additional information, to view the effectiveness of this or that – for this purpose I have a database called “ad hoc reports”, which links to the general knowledge base used for the other reports; it is there that I develop specific queries to look at the data from a different angle to generate these ad hoc reports. It’s a great deal faster than the clinic dragging out all the old reports, or all the files for their 500 active patients, to figure out the answers. These reports are used by the funding agency to develop guidelines for its clinics, as well as determining which clinics get which funds.
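An ad hoc report is usually nothing more than a fresh query against the existing knowledge base. A sketch using an in-memory SQLite database (the table, columns and data are invented stand-ins for the clinic’s real warehouse):

```python
# Hypothetical ad hoc query against a patient-visit knowledge base.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE visits (patient_id INTEGER, program TEXT, quarter TEXT)")
con.executemany("INSERT INTO visits VALUES (?, ?, ?)", [
    (1, "counseling", "2024Q1"),
    (2, "counseling", "2024Q1"),
    (2, "housing",    "2024Q1"),
    (3, "housing",    "2024Q2"),
])

# One-off question from a funding agency: distinct patients per program
rows = con.execute(
    "SELECT program, COUNT(DISTINCT patient_id) FROM visits "
    "GROUP BY program ORDER BY program"
).fetchall()
print(rows)
```

Because the query runs against data already warehoused, answering the agency takes minutes instead of dragging out 500 patient files.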
The cost to develop an ideal DSS often outstrips its utility for a one-shot deal. It would only be feasible if it was flexible enough to handle a series of decisions. For that to occur, it might be better to develop an Expert System and query from there.
I’ve developed DSSs with a collection of variables – The user selects/fills in whatever variables the user wants to use, then selects the type of report s/he wants to generate. To allow this amount of flexibility, the DSS gets pretty darn large – a single module of code may be over 4 MB in size.
There are managers who will ask their direct reports for decision support development, regardless of the ‘cost’ in time involved. But in reality, there is an ROI (return on investment) – time saved in research, accuracy of information, access to information bases, speed of solution proposals, the opportunity to look at things from a different angle, and it can be used to stimulate the decision maker’s thoughts about the problem. Recognize that it is currently not possible (and may never be) for a computer to use imagination; that’s why it is preferable to create a support system than a decision maker – let the manager earn his/her Big Bucks by adding creativity to the results from a DSS.
How does a DSS “communicate”? It is that which separates an interactive model from a static one – questions to fill in and/or answer. Take an online survey on your shopping practices. It not only asks you a series of questions to answer, usually multiple choice, but it then looks over your answers; if you fail to answer, or select more than one when only one is acceptable, the survey stops, and notifies you of the error in your input; once you answer it correctly, it continues. If you answer a particular way on a particular question, more questions or a text box will pop up for expansion of your answer. At the end, it looks at your answers and, following a set of predetermined rules, comes to the conclusion you should get a larger car and presents its findings to you. Input – clarification – evaluation – presentation.
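That input–clarification–evaluation–presentation loop can be sketched in a few lines. The questions and the “larger car” rule below are hypothetical, standing in for whatever predetermined rules the survey designer chose:

```python
# Sketch of the input -> clarification -> evaluation -> presentation flow.

def validate(answer):
    # Clarification: reject missing or multiple answers when one is required
    if answer is None:
        return "Please answer this question."
    if isinstance(answer, list) and len(answer) > 1:
        return "Please select only one option."
    return None   # answer accepted

def evaluate(answers):
    # Evaluation: predetermined rules applied to the accepted answers
    if answers["household_size"] >= 4 and answers["weekly_trips"] >= 3:
        return "You should consider a larger car."
    return "Your current car fits your shopping habits."

answers = {"household_size": 5, "weekly_trips": 4}
for value in answers.values():
    assert validate(value) is None    # clarification step passed
print(evaluate(answers))              # presentation of the conclusion
```

The interactivity – stopping on a bad answer, popping follow-up questions – is just this validation loop run at each input.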
Text-oriented DSSs appear at first to be simply libraries, but that is no longer true. Two examples of these are two companies I’ve worked with.
The first is a company that emerged originally to assist General Dynamics in preparing for a lawsuit. Designed by GD employees, the object was to scan the documents involved and create a database which indexed each document (content, keywords, whose signature is there, names, dates, etc.). The attorneys could then find the documents they needed quickly. Without this system, I’ve seen attorneys actually roll an entire hand truck of boxes of documents into trial. Save a tree…shoot a lawyer. This was so successful that the developers broke off on their own and set up such databases for other attorneys.
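The core of such a system is an index record per scanned document – keywords, names, dates – that can be queried on any combination of fields. A minimal sketch (documents and fields invented for illustration):

```python
# Minimal sketch of a litigation document index: each scanned document
# gets a record of keywords, names and date so it can be found instantly.

documents = [
    {"id": 1, "keywords": {"contract", "delivery"},
     "names": {"Smith"}, "date": "1988-03-01"},
    {"id": 2, "keywords": {"memo", "delivery"},
     "names": {"Jones"}, "date": "1988-04-12"},
    {"id": 3, "keywords": {"contract"},
     "names": {"Smith", "Jones"}, "date": "1989-01-20"},
]

def find(keyword=None, name=None):
    """Return ids of documents matching all of the given criteria."""
    hits = []
    for doc in documents:
        if keyword and keyword not in doc["keywords"]:
            continue
        if name and name not in doc["names"]:
            continue
        hits.append(doc["id"])
    return hits

print(find(keyword="contract"))
print(find(keyword="delivery", name="Jones"))
```

Scale this to thousands of scanned pages and you’ve replaced the hand truck of boxes.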
The second example is Pfizer’s electronic submissions for new drugs to the FDA. While they still need to truck the 2-3 semis of documentation to Washington, it is all scanned and indexed, so an FDA investigator can ask for any single or related set of documents and they are generated from the servers sent with the documentation. This has changed acceptance of a drug from the original 2-1/2 years to a matter of 3-4 months – generating literally tens of millions of dollars of revenue.
While the term Decision Support System is not that well known, they are great fun to develop and it’s amazing how handy they can be.
When a company is considering installing a new system (or startup system), the most common approach is to create a team for the project. If the company is small it will usually hire an integration company to do the development. A mid-sized company may have an in-house project manager, but not the IT staff to handle the project; they will often contract out the work to an integration company answering to the project manager. In a large company, there is usually sufficient staff with the right capabilities to complete the project in-house, and a systems analyst functions as project manager.
An integrated system is one which combines more than one system, such as a network, or the phone lines at a Help Desk. Sometimes one would integrate a Mac network into a Novell PC network, or have communications between more than one network. Of course, software needs to be integrated into an existing or new system. Some companies that call themselves Integrators build entire systems. It’s not really a hard and fast term.
Choosing team members may not be a privilege given to the systems analyst; teams are sometimes made up of volunteers or more often assigned by management. It is up to the systems analyst to determine the talents of the team members and lead them accordingly. People in IT need to be “team players”, willing to work with each other, and the project manager needs above all to be able to manage all the personnel on the team to achieve the success of the project.
The actual participants in analysis and design vary, from a single programmer (or developer, as they prefer to be called) and his/her client, to entire teams, depending on the size and politics of the organization. The analysis may be done by a single person or a group. Often the project manager is the representative of the business end of the project, and the systems analyst manages the technical group. A new buzz word in the industry for a systems analyst is “application architect”; it loosely applies to someone who designs the path an application will follow, and is usually the purview of a systems analyst.
A successful team is built on trust in one another, listening to each other, and rigorous communications. As a consultant, I often step into a team of people who have worked together in the past. I have to have the faith in myself to speak up. And I find that the team invariably welcomes a fresh point of view.
The team members, then, can consist of the following: project manager, systems analyst, developer, hardware technician(s), database administrator (DBA), SME (subject matter expert), and sponsor (the person who instigated the project and will OK all steps and payments). Not all of these positions will be in all projects, of course, and some roles may be filled by more than one person.
I was recently working in a large world-wide corporation. They had a well-earned reputation of high retention – once you were hired there, it was rare that you got fired. However, in order to maintain this retention, they sometimes had to move people into roles that were created just to find a place for them. One example is called “shadow IT”. What this meant is that the person was no good at his/her job as a research chemist, but did demonstrate an ability for hacking on computers. These shadow ITs were not a formal part of the IT organization, and as such not held to the ethics or processes in the IT group. So they would do as they pleased, pass opinions on IT work as ‘representatives’ of their scientist colleagues, and generally cause more havoc than good.
As a systems analyst, managing expectations may be the most difficult part of the job. For example, you may have to tell clients they cannot have parts of the wish list due to costs, time constraints or equipment limitations.
Ethics are a strong part of the systems analyst’s function. Most companies outsource a great deal of their IT work, at every level from technical support to network management and project management. An ethical consultant (also called a contractor) should document all work so it can be picked up by a third party, and s/he must respect the confidentiality of the work in which s/he is involved. I have been privy to information about therapy patients, drugs under development, and the financial details of small businesses such as a real estate office. It is very important that I and my staff let this information pass right through us and never repeat it. A consultant hired through a contracting company is bound two ways, with loyalty to the home company as well as to the client, and must keep a fair balance between the two. If this is done correctly, the home company will support the consultant to the end. If the consultant behaves unethically, s/he will end up on the street, virtually blackballed.
The team should follow a specified methodology, which is determined by the project manager or systems analyst (these terms are often interchangeable). The most common method is the waterfall method to direct the SDLC (system development life cycle). Nonetheless, nothing is etched in stone, and the systems analyst (SA) must have the flexibility to handle problems as they show up.
The biggest problem is calming the client down. The next biggest is convincing the IT people involved that it is human to miss something or to fail to predict something. Clients tend to believe the IT group’s contention that everything will work the first time around. While it is nice to exude confidence, the truth of the matter is that no matter how extensively you tested, once the product goes into production on a full scale of over 5,000 users, spread across worldwide networks with Unix boxes, NT servers and Win2000 desktops, there are bound to be some logjams. The client has to understand that this is going to happen, and be reassured by prompt firefighting. The IT workers are similarly frustrated when they have tested the product extensively in system testing, in UAT (User Acceptance Testing, done by a small group of subject matter experts) and possibly even in pilot programs, and problems still crop up. Nonetheless, one server at the Mount Vernon site, or a user’s machine with unique settings (don’t ask…), can upset the best-laid plans. Assure the IT people that this is not unexpected; then recruit them to troubleshoot the problems that crop up.
The typical steps that a systems analyst will guide in the waterfall method are:
- Initiation – the client sends out an RFP or requests system or software development; the developer asks general questions to get the ‘big picture’.
- Analysis – through several different methods, the developer determines exactly what is needed, the budget, time constraints and feasibility; the developer then proposes the project to the client via a Requirements document which, when signed by both parties, is a contract to go ahead with the development. Due to the cost of the analysis phase, a contract may be more general and signed before analysis begins.
- Design – the systems analyst architects the system, usually by designing system hardware, data flow diagrams, use-case diagrams and ERDs, and sometimes even a class diagram.
- Development – programmers write code and build databases; technicians build hardware and systems. At this point, system testing (of both hardware and software) and UAT (User Acceptance Testing) are done.
- Implementation – setting up the entire system at the client’s site, including notifications and production testing.
- Maintenance – upgrades, troubleshooting in the opening weeks of the implementation, and patches.
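The gated, one-phase-at-a-time character of the waterfall method can be sketched in a few lines of Python. This is purely illustrative (the function and variable names are my own, not part of any methodology standard): each phase must be signed off, in order, before the next may begin.

```python
# Illustrative sketch of waterfall gating: phases complete strictly in
# order; the sign-off of one phase is the gate that opens the next.

WATERFALL_PHASES = [
    "initiation",
    "analysis",
    "design",
    "development",
    "implementation",
    "maintenance",
]

def next_phase(completed):
    """Return the next phase to start, given the phases already signed off.

    Raises an error if sign-offs arrived out of order -- in a true
    waterfall there is no skipping ahead.
    """
    if completed != WATERFALL_PHASES[:len(completed)]:
        raise ValueError("waterfall phases must be completed in order")
    remaining = WATERFALL_PHASES[len(completed):]
    return remaining[0] if remaining else None

print(next_phase(["initiation", "analysis"]))  # design
```

In practice, of course, the SA must stay flexible; the point of the sketch is only that each phase's approved deliverables feed the next.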
Because of the broad spectrum of responsibilities that an SA must shoulder, this person must have experience in all phases of this life cycle. And it is the SA who will probably answer for the success or failure of the project.
Once a project has been determined to be desirable, it is time to start the detailed information gathering processes. That information is then documented and approved, feeding the next phase in the SDLC process, design.
The purpose of analysis is to gather data on the existing system, determine the requirements for the new system, consider alternatives, and conduct a feasibility study of the solution. The primary output of the analysis phase is a detailed list of the system requirements. Data collection is the first part of analysis; the goal is to understand the data inputs and outputs for the system. There are various sources for data collection: internal sources such as users and managers, organization charts, procedure manuals, reports, and business process flow charts; and external sources such as customers, suppliers, government agencies, and competitors.
Collecting data through a report is fairly straightforward; however, extracting information from people is often a bit more challenging. Some of the ways to solicit information are direct interviews, observation, and surveys and questionnaires. Interviews can be structured, where the questions are determined ahead of time, or unstructured, where the interview is more of a conversation that flows naturally. If the unstructured approach is used, the interviewer should be sure to cover all the important topics and not get lost in the conversation. With observation, the analysis team or one of its members sits directly with a system user to watch that user work with the system. This method is very helpful because the observer can see how the user does his/her day-to-day job using the system; it also helps uncover gaps, where one user may use the system differently from another. Questionnaires are a good approach when users are spread across geographical areas and direct interviews or observation are not feasible.
Prototypes are made for all kinds of things. When a new submarine is designed, a wooden prototype is built to scale, about 10 feet long; it is then put through water-dynamics tests to see if the design would be seaworthy. In IT development, a prototype may be only a drawing, to see whether the users like the look of an opening screen and whether it has the buttons and so on that they need. Other times a prototype is more functional (just without error codes, bells, whistles and a fancy GUI), to see if the design idea is what the customer is looking for. The trouble with a functional prototype is that it is expensive to develop; if you are a vendor, this work is done before the contract is signed, so if the customer cancels the deal, it is unpaid labor. But if you are in-house IT, it is a great idea.
E-commerce is the trade and/or sale of goods or services over the World Wide Web. The customer sees only a Web page s/he can access from his/her browser; behind that is the selling company’s network. A Web server housed at the company communicates with the Internet to send out the site’s pages. Information the customer clicks on or enters is sent back to this server and converted to regular data saved on a data server. From the data server, the seller collects and acts on the request. From the internal server, the seller can send e-mail and upload order status to the Web server.
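The round trip just described can be sketched as a toy simulation. Everything here is hypothetical (the record fields, the `DATA_SERVER` list standing in for a real database, the names themselves); the point is only the flow: browser form → Web server → data server → seller acts → status back out through the Web server.

```python
# Toy sketch of the e-commerce round trip described above. These are
# invented stand-ins, not a real API: DATA_SERVER plays the internal
# data server, OUTBOX plays the status feed uploaded to the Web server.

DATA_SERVER = []   # records converted from Web form submissions
OUTBOX = []        # order-status updates pushed back to the Web server

def web_server_receive(form_fields):
    """Convert a submitted Web form into a plain record on the data server."""
    record = {"customer": form_fields["name"],
              "item": form_fields["item"],
              "status": "received"}
    DATA_SERVER.append(record)
    return record

def seller_fulfill(record):
    """The seller acts on the request, then uploads the new status."""
    record["status"] = "shipped"
    OUTBOX.append(record["customer"] + ": order " + record["status"])

order = web_server_receive({"name": "A. Customer", "item": "widget"})
seller_fulfill(order)
print(OUTBOX[0])  # A. Customer: order shipped
```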
Requirements: You need to determine the functions of the proposed application, and any hardware that may be necessary to make the new system work. You also need to separate the functions of the software, then determine whether EACH requirement is mandatory or optional to achieve the goals. For instance, the manufacturing process module might be:
(1) determine the amount of raw materials needed to build each individual product [mandatory – needed to track the inflow and outflow of raw-material inventory]
(2) track products produced [mandatory – need to know what inventory has been used up]
(3) decrement the inventory consumed by each produced product [mandatory – need to know inventory levels in real time]
(4) detect uncharacteristic changes in inventory usage [optional – the algorithm is difficult, since “uncharacteristic” is hard to define]
(5) determine low-inventory reorder levels [optional – can be input manually]
(6) automatically reorder specified amounts at the reorder level [optional – amounts may change regularly]
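The three mandatory requirements above can be sketched in a few lines. This is a minimal illustration under assumed names (the bill of materials, the starting stock levels, and the function name are all invented for the example): a bill of materials gives the raw materials per product (requirement 1), and recording a production run both logs what was built (2) and decrements inventory in real time (3).

```python
# Minimal sketch of requirements (1)-(3): raw materials per product,
# a running production log, and real-time inventory decrement.

BILL_OF_MATERIALS = {"chair": {"wood": 4, "screws": 12}}  # per unit built

inventory = {"wood": 100, "screws": 500}  # current stock on hand
produced = []                             # log of (product, quantity) runs

def record_production(product, qty):
    """Log a production run and decrement the raw materials it consumed."""
    needs = BILL_OF_MATERIALS[product]
    for material, per_unit in needs.items():
        required = per_unit * qty
        if inventory[material] < required:
            raise ValueError("not enough " + material)
        inventory[material] -= required
    produced.append((product, qty))

record_production("chair", 5)
print(inventory)  # {'wood': 80, 'screws': 440}
```

Requirements (4)–(6) would layer on top of this: reorder levels could be checked after each decrement, and usage anomalies detected from the `produced` log.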
Measures of success
Measurement of success is a method of measuring the effectiveness of the project, not the efficiency of the project team; therefore timetables and meeting the budget do not count here. It in part involves the ROI. You need to find things that save labor time or improve efficiency and that can be quantified: the number of transactions currently being processed, the amount of inventory recorded per hour, the number of products manufactured, the number of sales per day. Determine which of these things you can ‘count’ now, and what the numbers are proposed to be with the new system. How much labor time and/or money will this save in increased production/sales/efficiency?
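Here is a worked example of that kind of quantification. All the numbers are invented for illustration: a current and a proposed transactions-per-hour rate, converted into labor hours and dollars saved per year.

```python
# Worked example (invented figures) of quantifying a measure of success:
# throughput before vs. after, converted to labor hours and dollars saved.

txns_per_hour_now = 40        # counted on the current system
txns_per_hour_new = 60        # proposed throughput of the new system
txns_per_year = 150_000       # annual volume, assumed unchanged
hourly_labor_cost = 25.0      # loaded labor rate, dollars

hours_now = txns_per_year / txns_per_hour_now    # 3750.0 hours
hours_new = txns_per_year / txns_per_hour_new    # 2500.0 hours
hours_saved = hours_now - hours_new              # 1250.0 hours
dollars_saved = hours_saved * hourly_labor_cost  # 31250.0 dollars

print("%.0f labor hours, $%.0f per year" % (hours_saved, dollars_saved))
```

The same arithmetic applies to any countable measure: pick something you can count now, count it again after go-live, and price the difference.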
A Project Feasibility Study is an exercise that involves documenting each of the potential solutions to a particular business problem or opportunity. Feasibility Studies can be undertaken by any type of business, project or team and they are a critical part of the Project Life Cycle.
When to use a Feasibility Study?
The purpose of a Feasibility Study is to identify the likelihood of one or more solutions meeting the stated business requirements. In other words, if you are unsure whether your solution will deliver the outcome you want, a Project Feasibility Study will help you gain that clarity. During the Feasibility Study, a variety of ‘assessment’ methods are undertaken. The outcome of the Feasibility Study is a confirmed solution for implementation.
- Technical feasibility – can the project be achieved with the current systems in place?
- Operational feasibility – can the project track/manage/perform the operations (processes) it is proposed to do?
- Economic feasibility – can the project be performed within a reasonable budget?
- Research the business problem or opportunity
- Document the business requirements for a solution
- Identify all of the alternative solutions available
- Review each solution to determine its feasibility
- List any risks and issues with each solution
- Choose a preferred solution for implementation
- Document the results in a feasibility report
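The review-and-choose steps above are often reduced to a simple weighted scorecard. The sketch below is purely illustrative (the criteria weights, the candidate solutions, and their scores are all invented): each alternative is scored against the three feasibility criteria, and the highest weighted total becomes the preferred solution to document in the feasibility report.

```python
# Illustrative feasibility scorecard: weights and scores are invented.
# Each solution is scored 0-10 per criterion; the weighted sum decides.

WEIGHTS = {"technical": 0.40, "operational": 0.35, "economic": 0.25}

solutions = {
    "buy package":    {"technical": 8, "operational": 6, "economic": 7},
    "build in-house": {"technical": 7, "operational": 9, "economic": 4},
}

def weighted_score(scores):
    """Combine per-criterion scores using the agreed criterion weights."""
    return sum(WEIGHTS[c] * s for c, s in scores.items())

preferred = max(solutions, key=lambda name: weighted_score(solutions[name]))
print(preferred)  # buy package
```

A real study would also carry the risks and issues for each alternative alongside the scores, not just the totals.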
Statement of scope and goals
The statement of goals shows an understanding of the desired business process; these goals are listed. Scope, on the other hand, is more specific, without expected results or promotional prose. For instance, the scope for a goal of ‘tracking incoming inventory’ could read: set up a bar-coding process for received materials so that received shipments are entered electronically, directly into the database. From here, the requirements can be determined.
COPYRIGHT 2001: BONNIE-JEAN ROHNER. All rights reserved.