Design is perhaps the hardest part of the SDLC. You have been forming ideas while amassing information and developing the analysis, but now you need to pull the whole picture together, anticipate which questions or problems will occur, and try to solve them ahead of time. Whatever technical knowledge you have comes into play, and a wise SA (systems analyst) will work hand in hand with any experts at his or her disposal.
The systems analyst needs to know what a computer is capable of, what coding can do, and what the existing or proposed hardware can handle. The better the SA knows coding, the more detailed the design, but this can be a drawback: it is better to lay out the architecture and let the programmers determine the most efficient way to bring it to life.
Sources for a system are often a combination of known and new products, depending on the specific system needed. The Drug Discovery division of Bristol-Myers Squibb (B-MS) has a single outside company managing their licenses and tracking all the software, as well as developing all in-house applications. Pfizer uses a pre-developed package (currently Peregrine) to track all calls for technical support; while designed for this purpose, the client (Pfizer) customizes it extensively to track job types in its own nomenclature and to generate some automatic reports. I developed three tools for the Mashantucket Pequot Tribal Nation entirely because the type of information they are looking to manage is so different from that of a normal US corporation.
It is imperative that any data conversion from legacy systems be addressed at the outset, since it will directly affect the data dictionary of the new system. Post-development conversions are expensive and awkward, and they indicate poor planning. Another situation that should be addressed right at the start is how the legacy and new systems will interact during deployment. For various reasons, one of three options will be employed:
- overnight replacement — the preference of developers, since their system is of course the Ideal Solution. For some systems, such as POS, this one-fell-swoop changeover is the only practical option. Other than the logistics of switching, there is little impact on the design.
- concurrent systems — both the legacy and new systems run simultaneously. This doubles the work of the users, but is absolutely essential for accounting and validated systems. After 3-6 months, depending on the patience of the users, the resulting data from the two systems is compared. If there is no degradation or corruption of data, the new system runs alone and the legacy system is dismantled. The designers have to be sure there is no conflict between the two systems and that they can run independently while sharing the same environment.
- legacy system “on tap” — a favorite of the Regulatory division at B-MS. The new system is accessed just like (and where) the old system was, but the legacy system remains available in case the new system has functionality problems. This is important where users cannot afford downtime: on Wall Street, for instance, or at pharmaceutical firms facing a 24-hour window to report to the FDA. The impact on design is small, but same-name file calls could corrupt the new system, so the legacy files need to be isolated.
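The comparison step in the concurrent-systems option can be sketched in code. The following is a minimal, hypothetical reconciliation check, assuming each system can export its records as keyed rows; the field names (`order_id`, `total`) and sample data are invented for illustration.

```python
def reconcile(legacy_rows, new_rows, key="order_id"):
    """Return (missing_in_new, missing_in_legacy, mismatched) record keys."""
    legacy = {row[key]: row for row in legacy_rows}
    new = {row[key]: row for row in new_rows}

    missing_in_new = sorted(legacy.keys() - new.keys())
    missing_in_legacy = sorted(new.keys() - legacy.keys())
    mismatched = sorted(
        k for k in legacy.keys() & new.keys() if legacy[k] != new[k]
    )
    return missing_in_new, missing_in_legacy, mismatched

# Illustrative exports from a parallel run of the two systems.
legacy_export = [
    {"order_id": 1, "total": "19.99"},
    {"order_id": 2, "total": "5.00"},
]
new_export = [
    {"order_id": 1, "total": "19.99"},
    {"order_id": 2, "total": "5.01"},  # drifted value; should be flagged
    {"order_id": 3, "total": "7.25"},  # record only the new system has
]

gone, extra, diff = reconcile(legacy_export, new_export)
print(gone, extra, diff)  # [] [3] [2]
```

In practice the comparison would run against full database exports, but the principle is the same: any nonempty result means degradation or corruption, and the legacy system stays up.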
The implementation environment is usually already in place. The only instance where I’ve seen an agreement to switch environments is in an acquisition where one company is archaic and the acquirer brings it up to current level. Usually hardware rules software: having to include cross-platform modules to serve a Mac lab, for example.
Types of environment (and most places are a mix): mainframes; networks (there can be both NT and Novell in a single organization); minicomputers (Sun stations, scanning stations, ‘towers of power’); server-based applications; client/server applications; stand-alone applications.
It is possible that at this point the client will pull out, for a variety of reasons. Political opposition may gain power; the client may start realizing the project will be far more extensive than originally presumed (companies with little software-development experience think anything is possible at little cost); or the budget suddenly has to be redirected to another need. I’ve seen this far more often than I would like. For in-house IT this is not usually a problem: the staff simply switch to another project, and it rarely means a cut in personnel, since additional people would not be hired until the proposal goes ahead. For outside resources, however, this is a painful moment; they have already expended time on research and presentation. They can just thank the heavens they didn’t put out for the all-electronic JAD!
In most cases, the project continues.
When planning a baseline (minimal) system, design a stepped approach (with costs) right up to the top of the line. Many clients would prefer to take a trimmed-down system if they know they can keep growing, and success on the first level makes them hungry for upgrades. I’d rather produce a baseline piece of perfection than a system with all the bells and whistles…and more bugs than Pest Control can handle!
The idea of having a firm develop and run your application on their own computers, where you supply input and take output, is not really that extreme. Examples are billing companies for medical offices, paycheck-generating companies, and “data warehousing” in any form. Many companies are large enough to need an outside source to generate business analyses and ad hoc reports and to manage their data while they pursue their own business function.
Considering outsourcing? There are complete hardware and software systems pre-built and supported fully by their manufacturers (such as systems to run photo studios, beauty parlors, and restaurants). One step is absolutely essential: get interviews with previous clients of the outsource! Scale makes a difference. At B-MS they decided to employ an outsourced application called Asset Insight to survey, categorize, and track hardware and software globally. It was the choice and recommendation of one individual (who may have known the source). As the scope of usage broadened beyond 100 machines, problems with the application multiplied. Eventually they discovered the application had been developed by three people in a private home; they had no experience with a global enterprise and no way to pretest at that scale.
Prepackaged, off-the-shelf systems are often called “shrink wrap”. Especially for small businesses, this is often the best choice to offer.
At the same time that Ashton-Tate was screwing around with dBase IV (causing die-hard dBase users to wait for dBase V before upgrading), a lot of PC databases came out: FoxPro, DB (now DB2, from IBM), MS Access, and Alpha3 (then 4, then 5). Within two years, Ashton-Tate was out of business: a good example of how a single version of a single company’s line can spell disaster. The SA must be able to predict which shrink-wrap applications will survive.
Turnkey systems were an exciting idea around 1990 — applications were just starting to get complex, requiring installation instead of running off diskettes. And there was little formal computer education, so a no-brainer system was very appealing. Larger corporations eventually discovered the ease of installation did not make for a very useful application. Smaller companies and educational organizations, which cannot afford a full time professional staff, still often go for turnkey systems.
PeopleSoft systems may be customizable, but they are not user-friendly. You need in-house PeopleSoft-trained people full time just to maintain the systems.
Hybrid systems can get messy; no access to source code, for instance. But I have seen gifted software engineers get quite a lot accomplished anyway.
Many companies think it’s cheaper to maintain in-house software, especially since this way they are not ‘held hostage’ by an outside source. The common side effect is an overload of the in-house staff.
At B-MS I handled a few tussles with software vendors because they design their products for stand-alone licensing, where you receive an ‘unlock key’ when you purchase. So what do you do when you want to install across a network of 4,500 users? You can’t have a different number for each person; you want a quick and transparent installation. Often the vendor (for purchases of 700 licenses or more) will have their own software engineers redesign the install module to fit our purposes.
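What such a redesigned install module amounts to can be sketched as follows: one volume-license key on a network share, and a silent install that every seat runs the same way. This is purely illustrative; the installer name, command-line switches, and share paths are all invented, not any real vendor’s interface.

```python
def build_install_command(share, site_key_file="site.key"):
    """Compose a hypothetical silent-install command that uses a single
    shared volume-license key instead of a per-user unlock number."""
    return [
        f"{share}/setup.exe",                       # installer on the share
        "/quiet",                                   # no prompts for the user
        "/norestart",
        f"/licensefile:{share}/{site_key_file}",    # one key for every seat
    ]

# The same command works unchanged on all 4,500 machines.
cmd = build_install_command("//fileserver/apps/vendor")
print(cmd)
```

The design point is that the license check moves from “a unique number typed at each desk” to “a file readable by every machine,” which is what makes the rollout quick and transparent.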
When working with enterprise systems, a company may purchase only one set of user manuals for every 50-100 users because of the cost. So software companies started beefing up their online help, albeit poorly. MS online help is very thorough, *if* you call it up as context-sensitive; otherwise it refers to menus and buttons that are not on the screen you are looking at. The alternative to good online help is good training, whether the software is in-house or purchased. But then, what do you provide when new users come on board?
Trade-publication application reviews are not always seeded by the manufacturer and can be a good source for evaluation. Consider the source; some publications give excellent comparisons of similar applications, which is always better than a single-application review.
Simultaneous hardware upgrades or changes are usually cost-prohibitive, and most companies will not agree to them (unless they are woefully antiquated). But it’s a good idea to design ahead of the current hardware, so the application will still be part of the repertoire when they do upgrade.
When OOD (object oriented design) first came out it was called object-oriented programming solutions — OOPS. And those of us comfortable with modular design thought the acronym appropriate. OOD is more of a conceptual thing than a practical application. But it’s very handy for architects.
Obviously, this white paper does not tell you how to design a system; its purpose is to help you know all the factors that will affect the systems analyst’s architectural design.