most recent posts

Vanishing Twin Syndrome

When you first discovered you were pregnant, your sonogram showed two or more fetuses. You got all excited, and immediately started gearing up for more than you planned. But a month later, the sonogram showed just one baby, growing hale and hearty. What happened? This is called vanishing twin syndrome. For one reason or another, the second fetus was not viable, and the healthy one prevailed. It was as if the twin fetus just vanished.

The first thing to remind yourself is that this is not really unusual. A generation ago women did not have sonograms, and the only evidence might be found in the placenta after birth, if there was any at all. Most women didn’t even know it had occurred. The prevalence of good prenatal care and early sonograms has made us more aware of the phenomenon. It was first recognized in 1945 by Stoeckel, and refers to the disappearance of one or more fetuses in a multiple pregnancy.

When this occurs in the first trimester, the most common time for the syndrome to be discovered, there is more than likely a developmental problem with the twin that vanishes. This is the time of organ development, and as with miscarriages, if the body detects chromosomal or developmental problems, it ‘ejects’ the ailing fetus. If there is another baby (the ‘twin’), then because of the continued presence of the nourishing placenta, the failed fetus is reabsorbed, and any evidence of it is hard to find. The mother may experience mild cramps, pelvic pain or spotting, but usually there are no physical symptoms at all. If symptoms do appear, another sonogram should be performed to ensure that everything is all right. The viable fetus grows into a healthy baby; it is nature’s way of making sure the healthy survive.

Experts studying vanishing twin syndrome estimate that one-eighth of pregnancies begin as twins. Obviously, the number of twins that make it to term is much smaller. Dr. Carolyn Givens (Pacific Fertility Center) estimates that fifteen to twenty percent of twin pregnancies miscarry one fetus. A 1986 sonogram study of 1,000 pregnancies found that 21.9 percent of the twin pregnancies ended in vanishing twin syndrome.

The cause of a first-trimester loss is not known with certainty. It is usually attributed to chromosomal or developmental damage, or possibly to improper cord implantation. The syndrome seems to occur with the same frequency in identical and fraternal twins, though it has been suggested that identical twins’ shared placenta may raise the odds. There is no preventive measure that can be taken.

Older mothers (over the age of 30) do tend to have this experience more often: their twins are usually fraternal, since older mothers tend to release more than one egg, and older mothers more often have chromosomal abnormalities, which cause a higher rate of miscarriage.

Sometimes the only sign that vanishing twin syndrome has occurred is early delivery: approximately seven percent of these women deliver the healthy baby before 28 weeks’ gestation, compared to one percent of mothers of singletons. About one third of these surviving twins will be underweight, with the attendant complications.

With the advent of in vitro fertilization comes an increased chance of multiple pregnancies. Since these pregnancies are closely monitored, vanishing twin syndrome has been well documented among them.

While first-trimester losses of twin fetuses are rather common and have few if any effects, the story changes if the twin vanishes in the second or third trimester. There is an increased risk of death for the surviving twin. The pregnancy is considered high-risk at this stage. It is more likely that the ‘vanishing’ twin will not be completely absorbed or ejected, leaving a ‘flattened’ fetal remnant or tumor material.

Late-term vanishing twins can also result in cutis aplasia or cerebral palsy in the surviving twin. The mother may experience preterm labor, obstruction of labor, infection, consumptive coagulopathy, or puerperal hemorrhage. There are instances of small tumors with remnants of the vanished twin which may occur in the survivor.

Therefore, if after your first trimester it is confirmed that you are carrying a multiple pregnancy, it is important to get regular sonograms and have the pregnancy monitored closely to avoid any problems.

No matter when the syndrome occurs, there may be a feeling of loss, not only for the parents, but also within the surviving baby. There can also be feelings of relief or guilt that the survivor is healthy. These feelings should be addressed, and if necessary worked through with a counselor before they affect relationships.


Business systems analysis – implementation

class diagram for a photography shop


There cannot be enough said about documentation – at every step of the SDLC. At Pfizer they had intranet Treps (Team Repositories), accessible only to people with permission from the Project Manager – and not all of them have publishing rights. Here the drafts are published, to be replaced by the final forms. The original project plan is published; the developer picks it up for guidance on his programming, and publishes system guides in turn. The technical writers pick up the proposal to figure out what to put in the system test scripts, and these are published. They then pick up the system tests and the system documentation to figure out what to put in the user manual. Still more technical writers and testers pick up the system tests and user manual to write User Acceptance Testing scripts. FAQs are published there for incorporation into the user manual and online help. Of course, the code itself has to be well documented (if any of you have ever coded in C or C++, you know how a week later you’ll never figure out what the program did). Each published document, in its final form, is approved with three signatures (project manager, technical manager and business manager).

Training is a more complicated decision than it seems. If there are a lot of users, some companies start training in groups of 20 or so quite early in the game, with a prototype if necessary. This is not the greatest idea because users get antsy to apply what they’ve learned and are afraid they will forget it if they don’t start right away (and they’re right). It doesn’t help that most training manuals are skeletal (on the assumption that the user will take notes) – they are too busy doing the hands-on practice to take many notes. So part of the implementation planning should be to make a training manual which is in fact a fast-reference outline of the most commonly used features.

The important users working on the development of the system are of course the original SMEs (Subject Matter Experts). With a large group of potential users, these people on the development team should ‘train the trainers’ – the technical support personnel and professional trainers. Most large companies have computer classrooms all laid out and waiting. We did this at Shawmut Bank – I worked directly with the curriculum developer to produce a clear user’s manual and a training session plan for the network. If the company has ‘shadow IT’ people, these are great people to prepare – they will work one-on-one with their fellow users.

Whether the system is fully new or an upgrade, tutorials are great when built into the help menu; users can spend lunch hours or any other time privately learning the material. If the system is complex, modular tutorials are a great help as a backup after formal training.

Some companies have a ‘tracking sheet’ which identifies all the variables and the order in which they will be tested, plus a formal script format. The tracking sheet coincides with the ‘test plan’. One writer writes the scripts and a separate one runs them (to watch for omissions, etc.) before the system testers get hold of them. Not only do we tell the testers what to do, but we tell them what the expected result should be at every step; the testers record the actual results. A test manager doles out the forms and the test run numbers.
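To make that script format concrete, here is a minimal Python sketch of how a tracking-sheet step might be represented – the class names and fields are hypothetical, not any company’s actual format. Each step pairs an instruction with its expected result, and the tester fills in the actual result at run time.

```python
# Sketch of a test-script step: instruction + expected result,
# with the actual result recorded by the tester during the run.
from dataclasses import dataclass, field

@dataclass
class TestStep:
    action: str          # what the tester should do
    expected: str        # what should happen at this step
    actual: str = ""     # filled in by the tester at run time

    def passed(self) -> bool:
        return self.actual.strip() == self.expected.strip()

@dataclass
class TestScript:
    run_number: str                       # doled out by the test manager
    steps: list = field(default_factory=list)

    def failures(self):
        return [s for s in self.steps if not s.passed()]

script = TestScript(run_number="TR-042")
script.steps.append(TestStep("Enter a valid order ID", "Order details display"))
script.steps[0].actual = "Order details display"   # tester records what happened
print(len(script.failures()))  # 0 when actual matches expected
```

The point of the structure is that omissions show up mechanically: any step with a blank or mismatched actual result lands in `failures()`, which is what the second writer and the test manager are looking for.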

Most compilers test for syntax problems, and professional programmers are rarely so inexperienced as to need a walkthrough. But they do ‘act like the computer’ – it’s the first debug technique taught. Instead of writing down the variables by hand, though, virtually any debugger lets you step in and out of procedures and name variables to watch as they change during execution of the code. Usually debuggers are used on an as-needed basis – if there are runtime errors that the programmer cannot pinpoint, the debugger comes out.

Automated testing is becoming popular, but I don’t think it’s very good, and most large companies feel the same. By the time you write up an automation, the system test is done. It’s best for ‘installation testing’ – the system works fine on a clean machine, but how about the 4 different platforms we handle, and how does it work with the standard desktop applications and OSs we use? So a company will set up ‘test beds’ – a series of computers with all the different combinations of OSs and applications used; an automated script then runs each through its usage of the new system, to see if there are any conflicts.

Also on the test beds is the test data. This data should be as large as possible, with all the idiosyncrasies of the live data – something virtually impossible to fabricate from scratch. So instead, the company takes a snapshot of the live data and loads it into a ‘safe’ space, so that testers can add/edit/delete data without damaging the real thing. This test database is used by both system and user testers. The only time live data is used is during a pilot.
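A minimal sketch of the snapshot idea, using Python’s sqlite3 as a stand-in for whatever DBMS the company actually runs – the table and data are hypothetical:

```python
# Snapshot 'live' data into a safe test database so testers can
# add/edit/delete freely without touching the real records.
import sqlite3

live = sqlite3.connect(":memory:")      # stands in for the production DB
live.execute("CREATE TABLE orders (id INTEGER, item TEXT)")
live.execute("INSERT INTO orders VALUES (1, 'film'), (2, 'lens')")
live.commit()

test_bed = sqlite3.connect(":memory:")  # the 'safe' space on the test bed
live.backup(test_bed)                   # full snapshot, idiosyncrasies and all

# Testers may now mutate freely; the live data is untouched.
test_bed.execute("DELETE FROM orders WHERE id = 1")
print(live.execute("SELECT COUNT(*) FROM orders").fetchone()[0])      # 2
print(test_bed.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 1
```

The key property is one-way copying: changes on the test bed never flow back, which is exactly why the snapshot approach is safe for system and user testers.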

Testing should include not only the usual uses of the system, but every anomaly you can think of. I didn’t get the nickname “Crash” for nothing. Consider all the possibilities – page 3 of 7 falls on the floor while transporting the papers to the scanner; someone enters the wrong index data; someone neglects to enter a required field; the user gets PO’d and starts banging on all the keys. Developers always assume people are going to do the right thing [NOT…]. I once e-mailed a 57-page error message to a developer. So when planning system testing, every possible scenario should be covered. Many developers will set ‘traps’ and make user-friendly error messages for those traps, which is fine. The system testers should aim for the internal error messages, so the developers know where and how to build traps. We’re having a tussle with a developer now because there are certain functions which, if not done right, just stop the system – no message at all – and the developer wants to leave it that way. Not on MY watch.
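As a sketch of the kind of ‘trap’ a developer might build, here is a hypothetical required-field check in Python that returns a user-friendly message instead of letting the system stop with no message at all – the field names are invented for illustration:

```python
# A developer 'trap': validate required index data and report a friendly
# message, rather than dying silently or spewing an internal error.
def save_record(index_data: dict) -> str:
    required = ("patient_id", "admit_date")
    missing = [f for f in required if not str(index_data.get(f, "")).strip()]
    if missing:
        # the trap fires: tell the user exactly what to fix
        return "Please fill in the required field(s): " + ", ".join(missing)
    return "Record saved."

print(save_record({"patient_id": "A12"}))   # trap fires: admit_date missing
print(save_record({"patient_id": "A12", "admit_date": "2024-01-01"}))
```

This is the behavior the system testers should be forcing: by aiming for the internal errors, they show the developers where traps like this one still need to be built.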

Alpha testing is always done in-house and can be quite extensive. IBM has a bunch of college engineering interns who do nothing but alpha-test its products – they play games, mess around with the GUI, write letters and crunch numbers, looking for glitches. This generates the code-named versions (like the Kirk, Spock and McCoy versions of OS/2). A lot of the things textbooks list as alpha testing – recovery, security, stress and performance – are usually considered system testing.

I’m sure you are all aware of beta testing. If this is not shrink wrap, beta testing would be set up as a ‘pilot’ – a separate group of people gets the whole package and uses it on live data. This is only done for a huge deployment (over 1,000 users). If the pilot is successful after 2-6 weeks, another ‘wave’ of users is deployed.

Systems construction includes UAT (user acceptance testing) feedback and alterations based on the same. During the systems testing, the purpose is to find bugs – whether links perform as designed, whether the application has conflicts with other standard applications, whether the application depends on outside files that must be present. This last one has become a particular problem recently, as many customizable packages call Internet Explorer files. UAT is performed by selected end-users, as SMEs. Here they are looking to see if the application meets their needs, if the terminology is what they use (mustn’t use computerese when the user is a photography shop owner), whether drop-down lists contain all the items they need, etc. Some UAT testers will simply apply the application to their work; others need specific scripts of what to do. Often they will have suggestions for changes that they would like incorporated. At this point, the decision has to be made whether these changes should be made before deployment (which means another run of development, engineering and testing), or whether they can be cataloged and saved for the next version. This decision requires input from users, managers and IT.

Now comes the delicate part – actual installation (usually called deployment). Don’t forget we made a decision much earlier about whether to do this overnight, in tandem with the old system, or with the legacy system waiting in the wings in case of disaster. Many companies require a backout plan in case there are serious problems; certainly a change management committee would require one.

Keep in mind that many users never log out – they go home and leave their machines running, sometimes even over the weekend. The trouble is that a transparent deployment is done overnight or built into the login script, and if the legacy application or other applications are left open, the installation can be corrupted. To handle this, most large corporations require an e-mail to all potential users of the application at least 24 hours before the deployment; some also require a warning a week ahead. At B-MS there is a team that does nothing else – the Process Manager sends them a copy of the announcement and tells them who the potential groups are (for instance, everyone in Genomics and Biochem), and the mailing team, which keeps an up-to-date mailing list by department, sends it out. Unfortunately that doesn’t always work, and one night I created a mailing list of 200 people by hand, working with the Novell engineers to find all the people involved.

The announcement tells the user what is going to be installed, what special steps might need to be taken (like a reboot), and what impact the installation will have. Pfizer sets up its installations so you can delay some, decline others, while some are mandatory. For the mandatory ones they survey compliance (the installation sends a token back to the installer) and remind those who haven’t installed yet.

Phased installation is great for die-hard legacy users – keep the GUI familiar and add functions incrementally.

One of the reasons so much depends on the data dictionary is to ensure no data is lost or corrupted during installation of a replacement system. A perfect example of this is the DCF database the state of Connecticut created. They’d forgotten a few fields, and so came out with a new version in 6 months. But the developer apparently did not separate the GUI from the tables: the new version did not pick up the fields in their original order, and since three fields then had the wrong data type, they were dropped entirely. Now every time we go to discharge a patient, we have to re-enter those 3 fields.
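A sketch of how a migration can avoid exactly that failure – matching fields by name and type against the data dictionary, never by position. The schema and field names here are hypothetical, not the actual DCF layout:

```python
# Migrate legacy rows into a reordered new schema by NAME, so a column
# shuffle cannot silently drop fields the way positional matching would.
old_rows = [{"name": "Lee", "discharge_code": "D2", "ward": "3B"}]
new_schema = {"ward": str, "name": str, "discharge_code": str}  # reordered

migrated = []
for row in old_rows:
    out = {}
    for col, col_type in new_schema.items():
        value = row.get(col)            # look up by name, never by position
        if value is None:
            raise ValueError(f"field {col!r} missing from legacy data")
        out[col] = col_type(value)      # fail loudly on a type mismatch
    migrated.append(out)

print(migrated[0]["discharge_code"])    # 'D2' -- nothing dropped
```

The design choice is to fail loudly: a missing field or bad type stops the conversion at migration time, instead of quietly dropping data that users then have to re-enter for years.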

Tutorials keep stressing system and code documentation because someone else will probably do the upgrades. This is another of those ethical questions – many contractors and outsourcers like to withhold documentation so the company is forced to recall them for upgrades. Ugly practice. And many, many in-house programmers used to avoid documenting their code so that they couldn’t be fired – this is why cumbersome COBOL was developed, to force documentation. Even if it’s documented, picking up someone else’s code is a real bear. But a contracting company or outsourcer that provides full documentation becomes trusted – and gets repeat business. And if you’re in-house, you will be working on many projects; when two years pass and they ask you to work on an upgrade, you will be very glad you documented it, believe me.

Help files can be difficult to develop, but they can make or break a system’s success. Since online help is cheaper than paper manuals, it has become a replacement for them. Microsoft has a very extensive Help file, but there is one big problem for those of us looking for advanced features: their help files are all written as context-sensitive. So if you search by index and find what you want, they refer to buttons and menus you can’t find because you’re not in ‘the context’. For this reason I find the MS Bibles invaluable.

Anyone who reads a readme file has found that it usually defines new features, special steps for unique peripherals, and last-minute glitches which were caught after release. This is acceptable (well, maybe not…) for shrink wrap, but should never be a standard practice for developers.

In most companies, deployment includes:

  • Change management notifications – informing the Change Management team of any impacts on user time, system down time, or other effects on the environment.
  • Call tracking – all calls on the new system are tracked for a week or two, to see which are unique to the guy who downloads every possible game to his system, which happen only on the Macintoshes, or which happen only on the non-clinical machines. The project is not closed until it is determined that all systems work smoothly.
  • Closure notification – Change Management is notified when the project is closed.
  • Problem notification – if there is a problem with one type of environment (perhaps the financial department), the users must be notified and told of any workarounds you find.
  • Uninstall notification – if the system must be uninstalled on any machines, Change Management must be notified, as well as the users.

What many texts do not handle in the implementation section is evaluation. This is tremendously important. Evaluation should be a lessons-learned affair, not a condemnation of any sort. If the system is in-house, determinations can be made of changes to include in subsequent versions. Team responsibilities can be viewed and honed. If the implementer is outside of the company, they too can figure out what works and what does not for that particular client. Evaluation should generate evolution.


Business systems analysis – design phase

This is the time to fly trial balloons.


Design is perhaps the hardest part of the SDLC. Even though you are forming ideas in your mind while amassing the information and developing the analysis, now you need to formulate the picture and predetermine which questions or problems will occur and try to solve them ahead of time. Whatever technical knowledge you have comes into play, and a wise SA (systems analyst) will work hand in hand with any experts at his/her disposal.

The systems analyst needs to know what a computer is capable of, what coding can do, and what the existing or proposed hardware can handle. The better the SA knows coding, the more detailed the design, but this is sometimes a drawback. Better to figure the architecture and let the programmers determine the most efficient way to animate it.

Sources for a system are often a combination of known and new products, depending on the specific system needed. The Drug Discovery division of Bristol-Myers Squibb (B-MS) has a single outside company managing their licenses and tracking all the software, as well as developing all in-house applications. Pfizer uses a pre-developed package (currently Peregrine) to track all calls for technical support; while designed for this purpose, the client (Pfizer) customizes it extensively to track job types in its own nomenclature and to generate some automatic reports. I developed three tools for the Mashantucket Pequot Tribal Nation entirely because the type of information they are looking to manage is so different from that of a normal US corporation.

Well, what have we here?


It is imperative that any data conversion from legacy systems be addressed at the outset — it will directly affect the data dictionary of the new system.  Post-development conversions are expensive and awkward, indicating poor planning. Another situation that should be addressed right in the beginning is how the legacy and new systems will interact during deployment. For various reasons, one of three options will be employed:

  1. overnight replacement — the preference of developers, since their system is of course the Ideal Solution; the changeover happens in one fell swoop, as is necessary for POS systems. Other than the logistics of switching, there is little impact on the design.
  2. concurrent systems — both the legacy and new systems run simultaneously. This doubles the work of the users, but is absolutely essential for accounting and validated systems. After 3-6 months, depending on the patience of the users, the resulting data is compared between the two systems. If there is no degradation or corruption of data, the new system goes live alone and the legacy system is dismantled. The designers have to be sure there is no conflict between the two systems and that they can run independently while being in the same environment.
  3. legacy system “on tap” — a favorite of the Regulatory division at B-MS. The new system is accessed just like (and where) the old system was, but the legacy system is still available in case the new system has functionality problems. This is important where the users cannot afford ‘downtime’ — for Wall Street, or for pharmaceutical firms with a 24-hour window to report to the FDA. Not a great impact on design, but same-name file calls could mess up the new system, so the legacy files need to be isolated.

The implementation environment is usually in place. The only instance where I’ve seen an agreement to switch environments is in an acquisition, where one company is archaic and the acquirer will bring it up to current level. Usually hardware rules software — for example, including cross-platform modules for a Mac lab.

Types of environment (and most places are a mix): mainframes, networks (there can be both NT and Novell in a single organization), minicomputers (Sun stations, scanning stations, ‘towers of power’), server-based applications, client/server applications, and stand-alone applications.

It is possible that at this point the client will pull out for a variety of reasons. Political opposition may gain power, they may start realizing it will have to be far more extensive than originally presumed (companies with little experience with software development think anything is possible at little cost), or the budget suddenly has to be redirected to another need — I’ve seen this far more often than I like. For in-house IT this is not usually a problem; they switch to another project; it rarely means a cut in personnel because they are staff — additional personnel would not be hired until the proposal goes ahead. However, for outside resources, this is a painful moment – they have already expended time in research and presentation. Outside resources just thank the heavens they didn’t put out for the all-electronic JAD!

In most cases, the project continues.

When planning a baseline (minimal) system, design a stepped approach (with costs) right up to the top-of-the-line. Many clients would prefer to take a trimmed-down system if they know they can keep growing. And success on the first level makes them hungry for upgrades. I’d rather produce a baseline piece of perfection than a system with all the bells and whistles…and more bugs than Pest Control can handle!


The idea of having a firm develop and run your application on their own computers, where you supply input and take output, is not really that extreme. Examples are billing companies for medical offices, paycheck generating companies and “data warehousing” in any form. Many companies are large enough to need an outsourcer to generate business analysis and ad hoc reports and manage their data while the company pursues its own business function.

Considering outsourcing? There are complete hardware and software systems pre-built and fully supported by their manufacturers (such as systems to run photo studios, beauty parlors, or restaurants). Absolutely essential: get interviews with previous clients of the outsourcer! Scale makes a difference: at B-MS they decided to employ an outsourced app called Asset Insight to survey, categorize and track hardware and software globally. It was the choice and recommendation of one individual (who may have known the source). As the scope of the usage broadened to more than 100 machines, problems with the application multiplied. Eventually they discovered the application had been developed by three people in a private home – they had no experience with a global enterprise and no way to pretest at that scale.

Prepackaged, off-the-shelf systems are often called “shrink wrap”. Especially for small businesses, this is often the best choice to offer.

At the same time that Ashton-Tate was screwing around with dBase IV (causing die-hard dBase II users to wait for dBase V before upgrading), a lot of PC databases came out — FoxPro, DB (now IBM’s DB2), MS Access and Alpha3 (then 4, then 5). Within two years, Ashton-Tate was out of business — a good example of how a single version of a single company’s line can spell disaster. The SA must be able to predict which shrink-wrap applications will survive.

The key to success?


Turnkey systems were an exciting idea around 1990 — applications were just starting to get complex, requiring installation instead of running off diskettes. And there was little formal computer education, so a no-brainer system was very appealing. Larger corporations eventually discovered the ease of installation did not make for a very useful application. Smaller companies and educational organizations, which cannot afford a full time professional staff, still often go for turnkey systems.

PeopleSoft systems may be customizable, but they are not user-friendly. You need in-house PeopleSoft-trained people full time just to maintain the systems.

Hybrid systems could get messy — no access to source code, for instance. But I have seen some gifted software engineers who could get quite a lot accomplished.

Many companies think it’s cheaper to maintain in-house software, especially since this way they are not ‘held hostage’ by an outside source. The common side effect is an overload of the in-house staff.

At B-MS I handled a few tussles with software vendors because they design their products for stand-alone licensing, where you receive an ‘unlock key’ when you purchase. So what do you do when you want to install across a network of 4,500 users? You can’t have a different number for each person; you want a quick and transparent installation. Often the vendor (for purchases of 700 licenses or more) will have its own software engineers redesign the install module to fit the purpose.

When working with enterprise systems, a company may purchase one set of user manuals for every 50-100 users, because of the cost. So software companies started beefing up their online help, albeit poorly. MS online help is very thorough — *if* you call it up as context-sensitive; otherwise it refers to menus and buttons that are not on the screen you are at. The alternative to good online help is good training, whether the software is in-house or purchased. But then, what do you provide when new users come on board?

Trade-publication application reviews are not always seeded by the manufacturer and can be a good source for evaluation. Consider the source; some publications give excellent comparisons of similar applications, which is always better than a single-application review.

Simultaneous hardware upgrades or changes are usually cost-prohibitive, and most companies will not agree to it (unless they are woefully antiquated). But it’s a good idea to design ahead of the state of the current hardware, so the application will still be part of the repertoire when they do upgrade.

When OOD (object-oriented design) first came out it was called object-oriented programming solutions — OOPS. And those of us comfortable with modular design thought the acronym appropriate. OOD is more of a conceptual thing than a practical application. But it’s very handy for architects.

Obviously, this white paper does not tell you how to design a system; its purpose is to help you know all the factors that will affect the system analyst’s architectural design.

A happy client will return for more.


Business Systems Analysis – Analysis Phase

Information-gathering at the inception of a project

I consider interviews the most important fact-finding method on a development project, backed up by the collection of business documents. I prefer interviewing fewer people for an hour each, and only one person in each role; interviewing more than one person in a role makes the information redundant (and therefore a waste of time). Often only the manager or the person “buying” the service (that is, the one whose budget is most impacted) will offer to do a Needs Assessment interview – this is totally unacceptable, because the manager does not fully understand the needs of the end user. Ask the manager for permission to interview the most experienced and/or ‘heaviest’ user in each role.


For instance, if the project is a payroll package for a retail chain, you would want to speak to:

  1. The manager/owner/requestor – find out what s/he is looking for, the budget, and the reason for the inception of the project. Who handles the W-2s and how? Do employees get a shift differential?
  2. Timekeeper – how are they amassing the employee weekly time information; if this information comes in electronically, can the new system import/convert the data? If the employee’s recording of his/her time and the timekeeper’s entry of the employees’ time is all manual, consider computerizing one or both sets of information as a high-end solution.
  3. Person cutting checks – how are they getting and storing the timekeeping, W-4, and tax information? What specific problems are they encountering, such as sorting recipients by branch, separating out checks to be mailed and check stubs to be mailed (for direct deposit accounts) and sorting those all by zip code? How is all this information being sent to the bank? Format?
  4. An employee – how does s/he track his/her time? Any complaints about this method?
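If the timekeeping information does come in electronically, the import-and-convert step asked about in item 2 might look something like this sketch – the CSV layout and field names are hypothetical, standing in for whatever format the timekeeper actually receives:

```python
# Convert an electronic timekeeping feed into normalized records the
# new payroll system can use, instead of forcing the data to be re-keyed.
import csv
import io

raw = "emp_id,week,hours\n101,2024-W01,40\n102,2024-W01,35.5\n"

records = []
for row in csv.DictReader(io.StringIO(raw)):
    records.append({
        "emp_id": row["emp_id"],
        "week": row["week"],
        "hours": float(row["hours"]),   # normalize for payroll arithmetic
    })

total = sum(r["hours"] for r in records)
print(total)  # 75.5
```

Even a tiny conversion layer like this answers the interview question concretely: it proves the feed can be imported, and it surfaces format surprises (odd delimiters, missing fields) before the system is built around them.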


If at all possible you want your system to be able to interface with any existing electronic systems, both for feasibility and the possibility of replacing the existing systems in the future.

The problems with interviewees are very real. If the person does a manual, clerical job, s/he is going to fear electronic replacement. This person will be very self-protective and might even misinform the interviewer to make the latter look incompetent. To get cooperation, keep assuring the person that you will not replace him/her; instead you will ease his/her workload – people in this type of situation invariably complain of an overload of work and/or a fear that they are too accustomed to the “old” method and won’t be able to learn a new one.

If interviewees give you the ‘should’ scenario instead of the reality scenario, is this really bad? You will be building in just the corrections that are needed. Ask questions where you suspect there’s a specific lack of information. Encourage “complaining” – it is there that you will learn the weaknesses of the old system (hardware, software and wetware) and what you can offer as a cure.

An interviewee not being able to describe his/her work is very common, especially in a company which grew from a small mom-and-pop organization to a good-sized corporation. These folks learn their tasks, but not the terminology or standard methods. As an example, a person can have 10 years’ experience building and managing projects, but not know what an SDLC is because they never went to school for Project Management – they just did it. You end up hearing descriptions of a specific task, rather than the actual information flow. For example, at Bristol-Myers I was trying to map the process by which the site network was managed. I got a glowing description of a recent problem with a single workstation and how it was eventually discovered to be the NIC (which was a different department’s purview). What I needed to know was in which situations a technician would contact the site Network people, and what the flow of information was from there.

There is a specific problem in systems analysis – the people who get into this line of work often have a good technical background but are weak in “people skills” and judgment. These are two very important skills. If you are in this position, take advantage of any management courses available to you – it will only make you look better to your employer. If you find you don’t really like the people-part of the job (and many technical people don’t) consider being part of the development team instead of the client-interface team.

I have worked for two different very large corporations in the same field – one has a team that does nothing but document the processes in use in all departments (and publish it on the intranet); the other doesn’t even have a P&P (Processes and Procedures Manual) for the IT people. Needless to say, the former is more successful.

All business forms are of great value – not only what they fill out for input, but also every report/invoice/whatever that goes out. Whenever I’m doing a Needs Assessment I request sample copies of everything they mention and try to anticipate others they forgot. You can always weed out those you won’t need. And refinement of the data, to avoid duplication, is easier done this way. Sometimes they are inputting or outputting the same data, just using different names for it, so they don’t realize it’s the same. What you don’t want is to discover after the delivery that the data for a regular report is NOT there; you can build the report later if needed…but adding fields and inputting missing data is tough and expensive. Whenever an input field does not appear to be used for output, question whether it’s a necessary field (it may be something they intend to evaluate in the future.)
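The same-data-different-names problem above is worth attacking mechanically when the form count gets large. A hedged sketch of the idea – the synonym map here is entirely hypothetical, built up as you review each form:

```python
# Hypothetical synonym map: the same datum often appears on different
# forms under different names; normalize names before designing the
# schema so the duplication becomes visible.
FIELD_SYNONYMS = {
    "dob": "date_of_birth",
    "birth_date": "date_of_birth",
    "client_no": "case_number",
    "case_id": "case_number",
}

def canonical_fields(form_fields):
    # Map each field name to its canonical form and drop duplicates.
    return sorted({FIELD_SYNONYMS.get(f.lower(), f.lower())
                   for f in form_fields})
```

Running every collected form through a pass like this makes the refinement step – spotting inputs with no outputs and vice versa – much less error-prone.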

I am not a big advocate of questionnaires. Even if they are electronic, maybe 25% respond. Non-response is very important – there is usually a very good reason for it. Online forms are great; people usually think it’s safer and that no one will be reading the responses personally, so they are more likely to fill them out – and truthfully (after all, one can’t tell who it is by the handwriting). Use their intranet. Questions on the level of satisfaction are a bear – ever try to decide if you are ‘somewhat satisfied’, as opposed to ‘satisfied’? Avoid the subjective queries.

On requesting documents – an organization chart is usually not very important, unless you want to know who to please (often a determinant). Above all you want its forms of information – what forms are used to gather the information now (job applications, patient intake forms, logs kept by the people running programs, etc.) and what information needs to be extracted (standard reports to stockholders, paychecks, metrics reports to management, invoices). The output will determine the format of the input. So I guess you could say I advocate a top-down approach.

One reason I have a lot of not-for-profit organizations as clients is because they need to report to at least one agency on their activities, as well as funding agencies (such as United Way and the Dept. of Children and Families). There is no pre-packaged software for these organizations which can track their activities (there are shrink-wrap packages for non-profit accounting). They have to report activities and demographics quarterly to each funding agency or lose the funding. These projects must be designed from this vantage point of output, and often ‘registration’ electronic forms need to be designed to nudge them into getting the data needed.

Direct observation is one of my favorite methods of information-gathering. Ask questions; if you do it right, they get into ‘brag mode’. An example of the value of observation: I have a client for whom we have done data warehousing since 1996; we know all the groups they need to answer to for funding and licensing, we know the demographics that are important to the management, and we know what the esoteric codes mean. The state of Connecticut decided to create its own databases for the reports it was evaluating each month, since up to that time the information was manually input and was constantly in error. But the man hired to build these databases had no idea what the information meant. The result – an awkward database with so many errors in it that it’s been revamped 3 times in the last 2 years. Plus, they seem to think their reporting agencies are stupid, so they locked access to the database. Result – our client had to pay for re-input of 600 records, and pays extra each month because we then have to export the state’s information, fix it and add our additional tracking information. Why fix it? The state has the zip code locked into CT-only zips – my client is near the Rhode Island border and often gets clients who have RI zip codes. Additional information? The state doesn’t think in terms of separate programs, so we have to re-align the data. And the state selected its own case-numbering system, so the client has to keep a double system for all current cases. I could go on ad nauseam… If the developer had observed what the reporting agencies actually do, a lot of these problems wouldn’t have cropped up at all.
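The zip-code lock-in is a tiny design decision with years of consequences. A hedged sketch of the alternative – validate against a configurable set of prefixes rather than hard-coding one state (the prefixes are real USPS ranges for CT and RI, but the config-driven approach is mine, not the state's):

```python
# Instead of hard-coding CT-only zips, validate against a configurable
# set of allowed prefixes; a border client who sees Rhode Island zips
# needs only a config change, not a database revamp.
ALLOWED_ZIP_PREFIXES = {"06", "028", "029"}  # CT plus RI (USPS ranges)

def zip_is_valid(zip_code):
    z = zip_code.strip()
    return (len(z) == 5 and z.isdigit()
            and any(z.startswith(p) for p in ALLOWED_ZIP_PREFIXES))
```

Observation of who actually submits data is what tells you which prefixes belong in that set in the first place.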

Characteristics for a good systems analyst during requirements determination:

  1. Impertinence – asking questions. It looks easier than it is. You could easily miss which questions to ask, which might require a call-back. This ability to know what to ask is a developed skill, and certainly each project will make you more aware of what to look for.
  2. Impartiality – the politics get in the way all the time. Always consider the source; what is the person’s attitude toward the project? What will this person gain or lose with the new system? You may actually be told by the ‘buyer’ that a particular person’s opinion is not [is most] important. Find out who makes the final decision – this is the person to try to satisfy in the end.
  3. Relax constraints – yeah, right. The biggest hurdle is “I have done it this way since Ben Franklin…” The client may insist on a mimic of the present system; any change to this would have to be gradual, probably a follow-up upgrade. Try to keep as close to the present way of doing things as is efficient, such as having electronic forms look very similar to the paper forms they are using – but with time-savers on them such as default values.
  4. Attention to detail – if you have a computing background you already know that it’s the details that will kill you.
  5. Reframing – this is no problem if you are an outside source. But if you are an in-house organization this is very difficult for the analyst as well as the client.

Knowing the business objectives is necessary to sell your solution. For example, one year Bristol-Myers had a Business Strategy Objective (BSO), which was defined in detail. All work had to be justified according to the BSO or it wouldn’t be done. One key phrase is “this is the ROI” (Return on Investment); since it’s a business buzzword, ears perk up that have no interest in the technical stuff. And it’s always a selling point – prove to the client that they will get a better profit, to the tune of a multiple of the cost of the new system in the first 5 years, and you’ve made your sale. For not-for-profit organizations, the people are busy saving the world; they don’t carefully track their own activities. Showing them how tracking particular information can increase the funding works like a charm.
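That "multiple of the cost in the first 5 years" claim is easy to put in front of a client as plain arithmetic. A minimal sketch with made-up figures (the cost and savings numbers are illustrations only):

```python
# Hypothetical figures: the system's cost and the annual savings it
# produces (eliminated redundant work, fewer errors, faster reporting).
def five_year_roi(system_cost, annual_savings, years=5):
    total_savings = annual_savings * years
    # ROI as a fraction of the investment: 2.0 means a 200% return.
    return (total_savings - system_cost) / system_cost

# A $40,000 system saving $24,000/year returns 200% over five years.
```

Even a rough version of this calculation, with the client's own numbers plugged in, perks up the ears that have no interest in the technical stuff.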

Every time I build an application for them, I discover a lot of things they could track easily by computer that they weren’t bothering to report at all. As a result of the more efficient reporting, their funding increased significantly. Be sure to observe more than the sponsor offers.

Watch for side activities/processes that can be incorporated into the system (for the high-end solution, or a follow-up proposal). And watch for redundant actions which can be eliminated – this is quantifiable ROI. Be sure to rewrite notes after doing an interview, receiving a questionnaire, or reviewing documents; you’d be surprised what you’ll forget in a matter of hours. Remember that your time and that of the people giving you information are both valuable. Do not give notes back to the person you question – you should have notes that are not for the clients’ eyes, since they might misinterpret them. Instead, you could graph or outline the information and have them review it for accuracy.

JAD stands for Joint Application Design. This is often where you’ll find professional facilitators. Within your organization a JAD should be run by someone on a management level – or someone being groomed/trained to be on a management level. It might be the Project Manager for that project or a Systems Analyst. Because professional facilitators don’t know the subject matter they often lead the discussion in the wrong direction. The only time you’d see a JAD is within a company that has application development as its business function. It’s expensive and time-consuming for the client. It would be nice to have it run electronically – and run it at the client site – but now we’re really getting into the big bucks. In most cases a JAD is tracked on a white board or large paper pads, and then has to be rewritten and published. One big problem is squabbles among the clients. Better, then, to interview them separately, make your suggestions and let them squabble it out after you leave.


Analysis of gathered information

Prototyping is a great idea and should be a standard part of the development process, if you can convince the client to do it. It allays fears of what to expect, makes it easier for clients to articulate their needs and practices, and it’s cheaper in time, money and work than coming up with a revision. The only drawback is that if the contract is turned down, the developer needs to “eat” the cost.

A good approach to convincing the sponsor of the need for change is charting the existing system: all manual, hard-copy and electronic processes together in a ‘current system’ data flow diagram. Don’t try to flabbergast the client – win them over with professionalism.

There are hard copies which must be maintained in certain industries such as pharmaceutical and legal companies. These can be scanned into a database, moving closer to a paperless work environment (and it’s certainly a lot easier to manage). There is a company in Connecticut that does nothing but scan and index all the paperwork an attorney needs for a case, so s/he can review, evaluate, cross-reference and call up any piece of evidence in an instant.

Data flow charts really aren’t that difficult for software companies, because they usually have teams that always do the same type of applications, such as warehouse management for whatever kind of warehouse comes along – they modularize and rearrange the modules.

For presentation to the customer, ‘before’ and ‘after’ data flow charts would show the simplification of the processes, which is usually enough to convince them to continue with the project. Those of you with Visio are probably aware of the many ways in which to represent processes. Is any one icon or system the best? Absolutely not. You can use circles and squares, as long as it’s clear and well-documented – you want to be able to send the client home with a printed copy to evaluate the changes on his own time, rather than making him feel pressured to respond immediately.

Knowledge of programming is a great asset to developing these charts, so the systems analyst will often work hand-in-hand with the programmers in drafting them.

During analysis, a feasibility study should be done. Feasibility is hard to nail down. It looks somewhat overwhelming at first. Many companies have a dollar figure per-employee for additional manpower. And experts in each phase (networking, users, hardware) can usually give you information on what it will cost, or if it’s not possible to do. In large companies, there is usually a good idea ahead of time of what the costs will be, and they will be looking more for the time span involved. In small companies there is usually a serious “sticker shock”, and many projects die at conception.

Throughout this process, the aim is to achieve an agreement, which will probably mean compromise on both sides.


The transport system in plants

Only a Hobbit would be able to demonstrate that a plant’s transport system is a tree’s ability to walk to the Tower. In the real world, a plant’s transport system moves water and nutrients throughout the plant.

The main source of water and mineral salts is the soil in which the plant is rooted. Like a ring of bendable straws, the plant transports ions and water from its root hairs to the tips of branches and the ends of leaves. In ‘reverse’, the plant brings photosynthesis products from the leaves to the rest of the plant.

Unlike an animal’s arteries, these are not true vessels, but rather masses of cells called xylem and phloem. And unlike an animal’s pumping heart, the liquid is moved by osmosis – water’s tendency to move so as to equalize differences in concentration. Xylem and phloem are found in roots, stems and leaves, and both grow from the cambium cells present in the stem.

Xylem cells transport upward from the roots only. This is demonstrated by the novelty of uniquely colored carnations, made by soaking white carnations in colored ink. The cells line up like wide, thick-walled tubes. They bring ions and water to the tips of the plant, so the plant can grow branches and create buds, flowers, cones and fruit. These cells live about one year, then die and need to be replaced.

Phloem cells create “sieve tubes” which can transport in both directions – water from the ground, and sugars (mostly sucrose) and nutrients such as amino acids which are created by photosynthesis. Their end walls have small holes with strands of cytoplasm running vertically through them. Companion cells alongside the phloem cells control the movement and direction. When the yearly ring of xylem cells dies in a tree, the phloem continues to carry sap. Because phloem transports in both directions, in some trees, such as the maple, it carries a lot of sugar – which can be tapped to make maple syrup.

The root hair cells of the root absorb water and mineral salts from the surrounding soil, and these are transported throughout the plant, all the way to the growing tips of the stems and branches. The osmotic pressure within the xylem cells gives the plant some structure, so that the stems of flowers don’t wilt as long as there is water being passed along. In trees, these cells ‘line up’ as a ring; as the tree grows outward, the old xylem cells die, and new ones are created toward the bark. This is why when people cut a ring around a tree below its bark, the tree will die – the transport system is severed; nutrients cannot reach the roots or branches.

When the water and mineral salts reach the leaves, the water evaporates off the leaves, causing a constant pull upwards. This is called transpiration. The leaf can control the rate of evaporation by opening and closing its stomata (pores). If there is no water, the leaf shrivels and dies. Light stimulates the opening of the stomata. Other environmental factors also affect the rate of transpiration, such as temperature, humidity and even wind.

Deciduous plants are those which seem to hibernate in the winter. As the weather gets colder, transportation of water and nutrients slows down. Winds cause fast transpiration and sunlight is in low supply. Sugar gets stored; depending on the sugars a tree has been creating, this produces the colorful fall foliage. When the leaves can no longer photosynthesize, they fall off, and the place where each leaf was is sealed off, as are the tips of the branches and stems. This keeps the water pressure steady but unmoving. As the spring sun warms the plant, these caps are dropped and growth resumes. Leaves grow and darken as they photosynthesize, flowers pop up and form nuts or fruit, and the cycle resumes.

By understanding how and why a plant circulates water, mineral salts, sugars and amino acids, people can understand how to care for the trees and plants around them, and spot the signs that there is a problem with the transport system.

A brief history of system architecture

Physical architecture of a system is defined by the way it functions. Originally computers were ‘mainframes’ – one huge collection of switches and tapes, with a single input point for teletype or punch cards, and a single output device, a printer. These computers were as large as a building. As time went on they became as small as a room. People stood in line for their chance to use the computer. You will still find this type of architecture in colleges for engineering students.

In the 1970s, the personal computer debuted. Since it could not house the large relays of a mainframe, it was seen as a personal super-typewriter, called a word processor; graphics were extremely limited on a personal monitor (remember Hercules boards?). Sharing data was achieved by saving things on 5-1/4-inch diskettes. Even the programs that ran on the PCs ran off the diskettes – hard disks were tiny (10 MB), if they existed at all. RAM was equally limited. The Motorola CPU chip used in Macintosh, Atari and Commodore machines opened the possibilities of expanding the usefulness of a PC to things other than word processing.

In an effort to tap the user population that was eyeing the Macs, Intel set about developing the 808x series of chips, which opened up the DOS market to private users. Math chips had to be added to the CPU, and graphics chips were eventually added. The user had to manually switch to non-CPU chips to use them. Still, both PCs and mainframes were in the position of a one-machine-one-user architecture.

While businesses needed the power of a mainframe, they could not waste the personnel time having people stand in line for use (or their jobs waiting in a queue for a mainframe operator). Thus began tiers. Several people needed to be able to access the mainframe at the same time. So “dumb” terminals were placed at most desktops, and e-mail became the fad. A dumb terminal has no processor – it is a monitor and keyboard wired to a mainframe. As such, the mainframe did all the work. One may choose not to call a terminal a tier, since it’s more like a lateral expansion. Semantics.

So in the early 1990s people often had two terminals at their desks – a dumb terminal hooked to the mainframe and e-mail, and a PC for their non-shared work. Time on the mainframe could really stretch out – people learned when the ‘peak’ periods of use were and tried to do their work at ‘off’ times. On a Friday at 3 PM, when everyone was trying to wrap up the week’s work, it could take up to an hour just to get a report printed. So anything which could be done on the PC was done there. Because of the familiarity with the mainframe command system, Intel PCs were used more often than Macs, unless the people using a PC were technofreaks and wanted to play with Macs.

As it became apparent that personal computers had a great deal of popularity, and technology soared ahead in RAM, ROM and processors, designers were looking for a way to meld the advantages of mainframes and PCs. That gave birth to the network. In a network, the server does a lot of the work in trafficking that the mainframe once did, but part of the work was shared by the nodes, or “smart terminals”, as the nodes put in queue requests and passed information along to each other. At least, even at $20,000 for a server, it was cheaper than a $6,000,000 mainframe with no trade-in value!

There were a lot of growing pains in this period – many different networking models and network operating systems (NOSs). Companies weren’t sure where the future was going, so they bought this and that and experimented. I remember being in one company where I could walk around and find Apples, Macintoshes, Intel machines (PC, AT, XT), “Trash-80s” (TRS-80), mainframe terminals, and at least two different network OSs. It was a hodgepodge. And many companies are still carrying this legacy.

Client/server architecture is usually a network – a server does some of the work and the client (node) the rest. This is not just true of the NOS, but of applications using this architecture – part of the application lives on the server and is downloaded to the node’s RAM when executed, part has to be called directly from the server, and part is local at the node. Yet another part might be in a database on another server. A good example of this was WordPerfect, which kept running even if the network went down – until you wanted to print, which you could only do through the server. In corporations, one PC could be used to allow the user to switch from network mode to mainframe access.
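The work-split described above – some processing on the server, some on the node – can be sketched with plain sockets. This is only an illustration; the port number, the "protocol" (one request, one reply), and the division of labor are all invented:

```python
import socket
import threading

# Minimal client/server split: the server does the shared work
# (here, just upper-casing the text), the client does its own
# local formatting of the result.
def serve_once(port):
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(data.upper())  # the server's share of the work

def ask_server(port, text):
    with socket.socket() as cli:
        cli.connect(("127.0.0.1", port))
        cli.sendall(text.encode())
        reply = cli.recv(1024).decode()
    return f"server says: {reply}"  # the node's local share of the work
```

Even in this toy, the WordPerfect situation is visible: the client-side function keeps working on its own data, but anything routed through the server stops the moment the server is gone.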

Web architecture could be considered client/server, but the web server is unique in that it must be designed to accept a “universal” language rather than the language of the NOS. Then the web interface usually has to communicate with information on data servers. All of these possibilities required new languages, more efficient than those of older environments such as AIX or VMS, which did not work well with the PC architecture.

IBM came up with a model of the various communication layers used by a NOS – those that ‘spoke’ to the business applications, those that handled the user interface, etc. If I remember right, there are 7 defined layers. Each of these layers actually performs a distinct function independent of the other layers. It is the synchronization of these layers, and the separation of responsibility between the node and the server or multiple servers (e-mail, web, data, applications, users) that make them so powerful.

As prices, technology and applications began developing for the tiered architecture, “migration” and “maintenance” became keywords. For over a decade companies were still seeking the best possible answer and then migrating over to their architecture(s) of choice. Applications were wholly revamped and it was like having a totally new system at each migration. Finally this has slowed down, and maintaining the applications and growing technology is the major activity of the IT departments.

The old method was that one technofreak or another decided “this” was the way to go and a company would simply adopt his/her recommendation. This sometimes resulted in smokestack architectures, piling one obsolete setup on top of another. In those days, there was no such degree major as computers – computer managers were either mathematicians or electrical engineers.

This is no longer an acceptable approach. Now we must look at what is there and develop a sane plan to maintain what we need from the legacy systems, see far enough into the future to be sure any replacements have a future, and design a migration which is not only successful but reasonably priced.

The data is the common point of reference for old and new systems. Since all of these systems process data that come from predominantly the same places, the different environments must seamlessly communicate and often share the same data. Hence, one of the most important considerations in systems architecture is the architecture of the data. And the next step is to migrate this data to a newer architecture.

Developing business software training

business training courses need to be both designed and reinforced systematically

I have trained people in business software in quite a few different environments – in classrooms, over the Internet, and one-on-one. In all these circumstances there is a need for consistency, thoroughness and, above all, applicability. To achieve this, there needs to be a systematic approach.

When the subject of the training is an in-house application or process, the trainer needs to use and abuse the system to encounter all the possible errors and entries the users might input. Only then can the trainer recognize the possibilities and prepare the users for them. The next step would be to design the series of steps the users would follow to learn all the phases of the process. This should include improper input, so that the users can get accustomed to the error messages and how to correct the situation. Next, write a teacher’s manual and student manuals. The student manuals should contain screen prints of what they would expect to see, as well as the steps to be done; and they should have plenty of white space for note taking. The best venue for this type of training is a computer lab or classroom, using a test system, so users can try different things without harming live data.

This type of training should be offered to employees yearly. Those who have had the training may want to brush up, and new employees should be required to attend.

When the application or process is changed, fresh training should be designed and offered, with fresh user manuals that stress the new aspects, but cover the material thoroughly as well, as is done in regression testing. This allows the trainer to show the new aspects, but it also creates a manual for new hires. The trainer’s manual would incorporate the changes.

If the application to be taught to business employees is over-the-counter, the approach is a little different. The developer of the training should meet with the hosting company and find out how the application is being used by the employees to be trained. For instance, if the application is Microsoft Excel, are the employees using it as a database, accounting ledgers, number crunching or analysis? And the host company should inform the developer of what version is being used.

Over-the-counter applications afford the training developer an opportunity to pre-design courses into modules. This way the developer can mix and match according to the needs of the business. For example, if the application is Excel, there would be an introductory module which explains the nature and terminology of the application. A separate module might be on the properties of cells in regard to protection and formula reference. A third module would be on the many functions in Excel and when and how to use them. A fourth might be on macros and the programming language used to write them.

By discussing the needs with the host company, the training developer can customize the training using the pre-tested modules, and applying the practices specifically to the company’s users. This not only makes bosses happy, but employees develop enthusiasm when they see how it applies to their own work.

Trainer and student manuals are still needed in the same format as mentioned above.

The developer should check in with the host once or twice a year to see if other employees need the same training or if there is another aspect that the employees could learn. In any training such as this, it is a good idea to have all students fill out an open-ended questionnaire to gauge how well the training went, and to get input about good ideas for change. This questionnaire should be unsigned; it is for the edification of the trainer and developer, not for evaluation of the students.

If the training is successful and well-retained, the developer will be asked to return. If, on the return, previous students are in the classroom, they appreciate consistency of presentation. And if this is a return visit, students are especially attentive when they notice their suggestions were followed.

The developer needs to save training modules for a wide variety of versions for the same application, because different businesses use different versions. Once the basic format for training has been determined, it is simply applied to the new version or even a new application. This helps prevent errors.

Many companies, on the wrap-up of a project, have a “lessons learned” meeting where they evaluate the problems encountered. These problems can be runaway expenses, poor cooperation from a sector of the project, incorrect choice of products, poor communication, whatever. Then steps are taken to avoid the same problems the next time. This should also be done with business training. Immediately after a training session ends, read the evaluations, take note of any notations in the trainer manual, then make the changes in the manuals and test them, while things are fresh in your mind, and you won’t forget to cover or correct anything.

When you have a systematic approach to developing business training, no matter what the subject matter, things go more smoothly. New assignments do not seem so daunting, and attention can be focused on the learning environment.

Writing 101
