Friday, November 13, 2009
















Fault Tolerance



Fault tolerance is the intrinsic ability of a software system to continuously deliver service to its users in the presence of faults. This approach to software reliability addresses how to keep a system functioning after the faults in the delivered system manifest themselves. The implementation of software fault tolerance is dramatically different from that of hardware. In a hardware fault-tolerant system, a second or third complete set of hardware runs in parallel, shadowing the execution of the main processor. All of the mass-storage and mass-memory devices are mirrored so that if one fails, another immediately picks up the application. This addresses the faults shown in the bathtub curve: hardware wearing out.



Trying to address software fault tolerance in the same fashion, by running the same software in parallel on a different processor, only results in the second copy of the exact same software failing a millisecond after the first. Simply running a separate copy of the application does nothing for software fault tolerance.



From the middle phases of the software development life cycle through product delivery and maintenance, reliability efforts focus on fault tolerance. Table 20-6 shows the subprocesses of the major design, implementation, installation, delivery, and maintenance life cycle phases that support fault tolerance.



Fault tolerance begins in the implementation phase of product development and extends through installation, operations and support, and maintenance to final product retirement. As long as the software is running in production, the fault tolerance approach to reliability is useful.



Design, implementation, and installation have already been discussed under the fault removal approach to software reliability. In fault tolerance, operations, support, and maintenance are added to complete this approach to software reliability. Fault tolerance is a follow-on to fault removal: all of the processes used in fault removal are used in this approach, and the difference is the focus on the product life cycle after installation. The projection of post-release staff needs can only be done with reference to historic information from previous products. The organization needs an available database of faults that were discovered in other products after installation. This database of faults, and the effort taken to manage and repair them, is used to estimate how much post-release effort will be required on the just-released product. Using the historic metrics on faults discovered by development phase and the relative size of the new product compared to others, a quick estimate can be made of the faults remaining and the effort required to fix the new product.
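As a rough illustration of how such a quick estimate might be computed, the sketch below multiplies a historic post-release fault density by the size of the new product and then applies an average repair effort per fault. The density, size, and effort figures are hypothetical placeholders, not values from any particular organization's fault database.

    #include <iostream>

    int main() {
        // Hypothetical figures drawn from a post-release fault database:
        const double historicFaultsPerKloc = 0.9;   // assumed faults found per KLOC after installation
        const double avgHoursPerFault      = 12.0;  // assumed average effort to manage and fix one fault
        const double newProductKloc        = 250.0; // assumed size of the just-released product

        // Quick estimate: remaining faults scale with relative size, and
        // post-release effort scales with the estimated fault count.
        const double estimatedFaults = historicFaultsPerKloc * newProductKloc;
        const double estimatedHours  = estimatedFaults * avgHoursPerFault;

        std::cout << "Estimated post-release faults: " << estimatedFaults << "\n"
                  << "Estimated repair effort (staff-hours): " << estimatedHours << "\n";
        return 0;
    }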



Table 20-6. Fault Tolerance Life Cycle Activities

Life Cycle Phase / Activity                                           Tolerance

Design and Implementation
    Allocate reliability among components                                 X
    Engineer to meet reliability objectives                               X
    Focus resources based on functional profile                           X
    Manage fault introduction and propagation                             X
    Measure reliability of acquired software                              X

Installation
    Determine operational profile                                         X
    Conduct reliability growth testing                                    X
    Track testing progress                                                X
    Project additional testing needed                                     X
    Certify reliability objectives are met                                X

Operations, Support, and Maintenance
    Project post-release staff needs                                      X
    Monitor field reliability versus objectives                           X
    Track customer satisfaction with reliability                          X
    Time new feature introduction by monitoring reliability               X
    Guide product and process improvement with reliability measures       X





In order to provide a set of data for future products, the project or product manager must monitor field reliability versus reliability objectives. This is tied into tracking customer satisfaction with reliability. The end user is the best source of reliability information on the software product; this is where fault tolerance predictions meet reality. End users can neither be predicted nor directed, so the estimates and assumptions made early in fault forecasting can only be validated through the fault tolerance approach to reliability.



The project/product manager must time new feature introduction by monitoring reliability. It is not a good idea to release new features to customers while known faults still reside in the software product. Combining the release of new feature sets with fault-fixes is an appropriate practice for software product organizations. Guiding product and process improvement with reliability measures feeds the information gathered from customer experience back into product fault removal and continuous development process improvement. Reliability measures are expensive to institute. Their results must be captured and fed back into the learning organization.
























    3.3. Installing on Windows


    Installing Subversion on Windows is easy to do. If you follow the links for Win32 on the SVN download page, subversion.tigris.org/project_packages.html, you will find a Windows installer program, which should be named something like svn-1.1.0-setup.exe. You can download the setup program and run it to install Subversion. It will step you through everything you need to do to install SVN and get the basic application set up (see Figure 3.1).



    Figure 3.1. The SVN setup program makes installation on Windows easy.







    If you plan to set up your Windows machine to serve a Subversion repository through Apache, I suggest that you run the Subversion installer after installing Apache. If you do so, the Subversion installer will give you the option of allowing it to automatically configure Apache to load the appropriate modules for Subversion and WebDAV. You can also re-run the Subversion installer at a later date, if you install Apache and want to configure it for Subversion.


    On a Windows 2000 or XP machine, you should be able to install Subversion just by running the installer program. On the other hand, if you are installing on Windows 95, 98, or Millennium Edition, you may need to modify your Autoexec.bat file to properly configure the system environment for Subversion. For example, if you installed in C:\Program Files\Subversion (the default location), you should set up your Autoexec.bat file as follows.


    1. Make sure that the %PATH% environment variable points to the directory that contains the Subversion binaries, like this:


      SET PATH=C:\WINDOWS;C:\;C:\PROGRA~1\SUBVER~1\BIN

    2. Set the APR_ICONV_PATH environment variable.


      SET APR_ICONV_PATH="C:\Program Files\Subversion\iconv"

    3. Reboot your computer.
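
    After rebooting, you can confirm that the Subversion binaries are reachable from the command line by opening a command prompt and running, for example:

      svn --version

    If the command prints the client version information, the path is set up correctly.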


    Subversion can also be compiled from source on Windows with Visual Studio, using the Windows-specific source, available as a zip compressed download (from the same place as the gzip and bzip2 source downloads). The specific instructions for compiling under Windows, though, are beyond the scope of this book. If you are interested in compiling the Windows version, the INSTALL file included with the Subversion source contains detailed step-by-step information about the process.




















      Are We Testing Enough?



      Defining "Good Enough"


      In Chapter 3, I presented Kano Analysis as a technique for thinking holistically in terms of satisfiers and dissatisfiers for a project, and in Chapter 4, I discussed planning an iteration. The iteration's objectives should determine the test objectives.


      Although it may be hard to determine what constitutes "good enough" testing for a project as a whole, it's not that hard for the iteration. The whole team should know what "good enough" means for the current iteration, and that should be captured in the scenarios, QoS that are planned to be implemented, and risks to be addressed in the iteration. Testing should verify the accomplishment of that target through the course of the iteration.[9]


      A value-up practice for planning "good enough" is to keep the bug backlog as close to zero as possible. Every time you defer resolving or closing a bug, you impose additional future liability on the project for three reasons: The bug itself will have to be handled multiple times, someone (usually a developer) will have a longer lag before returning to the code for analysis and fixing, and you'll create a "Broken Windows" effect. The Broken Windows theory holds that in neighborhoods where small details, such as broken windows, go unaddressed, other acts of crime are more likely to be ignored. Cem Kaner, a software testing professor and former public prosecutor, describes this well:[10]



      The challenge with graffiti and broken windows is that they identify a community standard. If the community can't even keep itself moderately clean, then: (1) Problems like these are not worth reporting, and so citizens will stop reporting them. (We also see the converse of this, as a well-established phenomenon. In communities that start actually prosecuting domestic violence or rape, the reported incidence of these crimes rises substantially; presumably, the visible enforcement causes a higher probability of a report of a crime, rather than more crime). In software, many bugs are kept off the lists as not worth reporting. (2) People will be less likely to clean these bugs up on their own because their small effort won't make much of a difference. (3) Some people will feel it is acceptable (socially tolerated in this community) to commit more graffiti or to break more windows. (4) Many people will feel that if these are tolerated, there probably isn't much bandwidth available to enforce laws against more serious street crimes.



      Similarly, in projects with large bug backlogs, overall attention to quality issues may decline. This is one of many reasons to keep the bug backlog as close to zero as possible.




      Set Iteration Test Objectives by Assigning Work Items to the Iteration


      In VSTS, all work items, including scenarios, QoS, bugs, and risks, can be assigned to an iteration. This assignment creates a test target list for that iteration, or in other words, a visible bar defining good enough testing for that iteration. You can, of course, add more to that list or reschedule items to future iterations, but there is always a visible, agreed definition of the iteration test goals, and changes to it are tracked in an auditable manner.







      Exploratory Testing


      Most testing I've discussed so far is either automated or highly scripted manual testing. These are good for finding the things that you know to look for but weak for finding bugs or issues where you don't know to look. Exploratory testing, also called ad hoc testing, is an important mindset to bring to all of the testing that you do. In exploratory testing, the tester assumes the persona of the user and exercises the software as that persona would. Kaner, Bach, and Pettichord describe exploratory testing this way:



      By exploration, we mean purposeful wandering; navigating through a space with a general mission, but without a prescripted route. Exploration involves continuous learning and experimenting. There's a lot of backtracking, repetition, and other processes that look like waste to the untrained eye.[11]



      Exploratory testing can be a very important source of discovery, not just of bugs but also of unforeseen (or not yet described) scenarios and QoS requirements. Capture these in the backlog of the work item database so that you can use them in planning the current and future iterations. As a manager, plan for a certain level of exploratory testing in every iteration. Define charters for these testing sessions according to the goals of the iteration. Tune the charters and the resource level according to the value you get from these sessions. In short, plan capacity for exploratory testing.




      Testing as Discovery


      Embrace testing that discovers new scenarios, QoS requirements, and risks in addition of course to finding bugs. Capture the new scenarios, QoS, and risks as work items in the product backlog. This is vital information. It makes the quantitative coverage measurement a little harder in that you're increasing the denominator, but that's a small price to pay for helping the project deliver more customer value.


      A particular type of scenario test is the "soap opera." Hans Buwalda describes the technique in his article "Soap Opera Testing" as follows:



      Soap operas are dramatic daytime television shows that were originally sponsored by soap vendors. They depict life in a way that viewers can relate to, but the situations portrayed are typically condensed and exaggerated. In one episode, more things happen to the characters than most of us will experience in a lifetime. Opinions may differ about whether soap operas are fun to watch, but it must be great fun to write them. Soap opera testing is similar to a soap opera in that tests are based on real life, exaggerated, and condensed.[12]



      Soap operas are harsh, complex tests; they test many features using intricate and perhaps unforeseen sequences. The essence of soap operas is that they present cases that are relevant to the domain and important to the stakeholders but that cannot be tested in isolation. They are a good test of robustness in iterations where the software is mature enough to handle long test sequences.




      False Confidence


      When you have automated or highly scripted testing, and you do not balance it with exploration, you run the risk of what Boris Beizer coined as the "Pesticide Paradox":[13]



      Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual.



      In other words, you can make your software immune to the tests that you already have. This pattern is especially a concern when the only testing being done is regression testing, and the test pool is very stable. There are three ways to mitigate the Pesticide Paradox:


      1. Make sure that the tests the software faces continually include fresh ones, including good negative tests.

      2. Look at gaps in test coverage against scenarios, QoS, risks, and code. Prioritize the gaps and think about tests that can close them.

      3. Use progressively harsher tests, notably soap operas and exploratory testing, to confirm, from a knowledgeable domain expert's perspective, that the software doesn't have undiscovered vulnerabilities.


      Exploratory testing, soap operas, and risk identification all help guard against a false sense of confidence.





















      Link Training, the First Step in Communication


      Link training is one of the first protocols that two agents perform to establish link configuration parameters such as link width, lane polarity, and maximum supported data rate. The following screenshots show the actual data captured for an x4 link training session between the Catalyst PX-4 Exerciser and a PCI Express system board. Note that the link training exercise demonstrated here is based on the 1.0 specification. In the images that follow, several of the repeated packets (all of the idle state) are removed for clarity, but all of the sequences are shown. During link training, the two agents start the communication by sending a low-level ordered set, TS1; once each recognizes the expected packet from the other side, they proceed with link width negotiation, confirmed by both agents with TS2. This is immediately followed by a sequence of three DLLPs (Data Link Layer Packets) to exchange flow control credit information.


      Figure A-6. Start with TS1

      After detecting the receiver at opposite sides of the link, each device starts by sending at least 1024 TS1 packets to allow adequate time for starting communication, locking the clocks, and reversing lane polarity if necessary.


      Figure A-7. SKIP

      At all times, SKIP ordered sets are sent approximately every 5 microseconds to compensate for clock synchronization mismatch between the two agents.


      Figure A-8. Completion of 1024 TS1

      After 1024 TS1 PADs, TS1 packets with data are sent to announce the group of lanes that are eligible to become the active link lanes. As indicated in packet 1052, the link fields switch from being PAD to include a Link number.


      Figure A-9. Lane Number Declaration

      In this sequence, lane numbers are being declared after the communication has been established. As indicated in packet 1070, the lane fields switch from being PAD to include Lane numbers.


      Figure A-10. Start of TS2

      TS2 packets start after Link setup. TS2 packets are far fewer than TS1. These are sent as a confirmation of successful physical layer link negotiation.


      Figure A-11. Initialization of Flow Control 1

      Initialization of Flow Control packets start after link training is completed by the physical layer. INIT FC1 sequences contain 3 packets to advertise available buffering for posted, non-posted and completion transaction types. These packets are repeated until both agents acknowledge it by starting the INIT FC2 transmissions.


      Figure A-12. Initialization of Flow Control 2

      There are three INIT FC2 packets, which are repeated until both sides acknowledge by moving on to FC updates. The flow control status of the device is periodically updated using "Update FC" packets.


      Figure A-13. Flow Control Updates

      Flow control updates and SKIPs are continuously repeated. From this point on, TLP packets can be inserted at any time to perform data transfers.


      Figure A-14. Alternate Display in Listing Format


      All bus activity may be viewed in the symbol listing, which presents the bus data in raw form. In this listing, the SKP (1C) symbol after the COM (BC) identifies the SKIP type ordered set.
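
      To tie the captures together, the following sketch simply enumerates the phases observed in Figures A-6 through A-13; it is an illustration of this particular trace, not the link training state machine defined by the PCI Express specification.

          // Phases of the captured x4 link training and flow control
          // initialization, in the order they appear above (illustrative only).
          enum TrainingPhase {
              SEND_TS1_PAD,       // >= 1024 TS1 ordered sets with PAD link/lane fields
              SEND_TS1_LINK_NUM,  // TS1 now carries a link number (link width negotiation)
              SEND_TS1_LANE_NUM,  // TS1 now carries lane numbers for the active lanes
              SEND_TS2,           // TS2 confirms successful physical layer negotiation
              INIT_FC1,           // three DLLPs advertise posted, non-posted, and completion credits
              INIT_FC2,           // repeated until both sides acknowledge with FC updates
              UPDATE_FC           // periodic flow control updates; TLPs may now be transferred
          };
          // SKIP ordered sets are interleaved throughout, roughly every 5 microseconds,
          // to compensate for clock mismatch between the two agents.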






















        3.3 Documentation reviews


        One of the two goals of the SQS is to facilitate the building of quality into the software products as they are produced. Documentation reviews provide a great opportunity to realize this goal. By maintaining a high standard for conducting and completing these reviews and establishing the respective baselines, the software quality practitioner can make significant contributions to the attainment of a quality software product.


        Software quality practitioners have much to do with the documentation reviews. They must make sure that the proper reviews are scheduled throughout the development life cycle. This includes determining the appropriate levels of formality, as well as the actual reviews to be conducted. The software quality practitioner also monitors the reviews to see that they are conducted and that defects in the documents are corrected before the next steps in publication or development are taken. In some cases, software quality itself is the reviewing agency, especially where there is not a requirement for in-depth technical analysis. In all cases, the software quality practitioner will report the results of the reviews to management.


        There are a number of types of documentation reviews, both formal and informal, that are applicable to each of the software documents. The appendixes present suggested outlines for most of the development documents. These outlines can be used as they are or adapted to the specific needs of the organization or project.


        The most basic of the reviews is the peer walk-through. As discussed in Section 3.1, this is a review of the document by a group of the author's peers who look for defects and weaknesses in the document as it is being prepared. Finding defects as they are introduced avoids more expensive corrective action later in the SDLC, and the document is more correct when it is released.


        Another basic document review is the format review or audit. This can be either formal or informal. When it is a part of a larger set of document reviews, the format audit is usually an informal examination of the overall format of the document to be sure that it adheres to the minimum standards for layout and content. In its informal style, little attention is paid to the actual technical content of the document. The major concern is that all required paragraphs are present and addressed. In some cases, this audit is held before or in conjunction with the document's technical peer walkthrough.


        A more formalized approach to the format audit is taken when there is no content review scheduled. In this case, the audit takes the form of a review and will also take the technical content into consideration. A formal format audit usually takes place after the peer walk-throughs and may be a part of the final review scheduled for shortly before delivery of the document. In that way, it serves as a quality-oriented audit and may lead to formal approval for publication.


        When the format audit is informal in nature, a companion content review should evaluate the actual technical content of the document. There are a number of ways in which the content review can be conducted. First is a review by the author's supervisor, which generally is used when formal customer-oriented reviews, such as the PDR and CDR, are scheduled. This type of content review serves to give confidence to the producer that the document is a quality product prior to review by the customer.


        A second type of content review is one conducted by a person or group outside the producer's group but still familiar enough with the subject matter to be able to critically evaluate the technical content. Also, there are the customer-conducted reviews of the document. Often, these are performed by the customer or an outside agency (such as an independent verification and validation contractor) in preparation for an upcoming formal, phase-end review.


        Still another type of review is the algorithm analysis. This examines the specific approaches, called out in the document, that will be used in the actual solutions of the problems being addressed by the software system. Algorithm analyses are usually restricted to very large or critical systems because of their cost in time and resources. Such things as missile guidance, electronic funds transfer, and security algorithms are candidates for this type of review. Payroll and inventory systems rarely warrant such in-depth study.




        3.3.1 Requirements reviews


        Requirements reviews are intended to show that the problem to be solved is completely spelled out. Informal reviews are held during the preparation of the document. A formal review is appropriate prior to delivery of the document.


        The requirements specification (see Appendix D) is the keystone of the entire software system. Without firm, clear requirements, there will be no way to determine if the software successfully performs its intended functions. For this reason, the informal requirements review looks not only at the problem to be solved, but also at the way in which the problem is stated. A requirement that says "compute the sine of x in real time" certainly states the problem to be solved: the computation of the sine of x. However, it leaves a great deal to the designer to determine, for instance, the range of x, the accuracy to which the value of sine x is to be computed, the dimension of x (radians or degrees), and the definition of real time.


        Requirements statements must meet a series of criteria if they are to be considered adequate to be used as the basis of the design of the system. Included in these criteria are the following:




        • Necessity;




        • Feasibility;




        • Traceability;




        • Absence of ambiguity;




        • Correctness;




        • Completeness;




        • Clarity;




        • Measurability;




        • Testability.




        A requirement is sometimes included simply because it seems like a good idea; it may add nothing useful to the overall system. The requirements review will assess the necessity of each requirement. In conjunction with the necessity of the requirement is the feasibility of that requirement. A requirement may be thought to be necessary, but if it is not achievable, some other approach will have to be taken or some other method found to address the requirement. The necessity of a requirement is most often demonstrated by its traceability back to the business problem or business need that initiated it.


        Every requirement must be unambiguous. That is, every requirement should be written in such a way that the designer or tester need not try to interpret or guess what the writer meant. Terms like usually, sometimes, and under normal circumstances leave the door open to interpretation of what to do under unusual or abnormal circumstances. Failing to describe behavior in all possible cases leads to guessing on the part of readers, and Murphy's Law suggests that the guess will be wrong a good portion of the time.


        Completeness, correctness, and clarity are all criteria that address the way a given requirement is stated. A good requirement statement will present the requirement completely; that is, it will present all aspects of the requirement. The sine of x example was shown to be lacking several necessary parts of the requirement. The statement also must be correct. If, in fact, the requirement should call for the cosine of x, a perfectly stated requirement for the sine of x is not useful. And, finally, the requirement must be stated clearly. A statement that correctly and completely states the requirement but cannot be understood by the designer is as useless as no statement at all. The language of the requirements should be simple, straightforward, and use no jargon. That also means that somewhere in the requirements document the terms and acronyms used are clearly defined.


        Measurability and testability also go together. Every requirement will ultimately have to be demonstrated before the software can be considered complete. Requirements that have no definite measure or attribute that can be shown as present or absent cannot be specifically tested. The sine of x example uses the term real time. This is hardly a measurable or testable quality. A more acceptable statement would be "every 30 milliseconds, starting at the receipt of the start pulse from the radar." In this way, the time interval for real time is defined, as is the starting point for that interval. When the test procedures are written, this interval can be measured, and the compliance or noncompliance of the software with this requirement can be shown exactly.


        The formal SRR is held at the end of the requirements phase. It is a demonstration that the requirements document is complete and meets the criteria previously stated. It also creates the first baseline for the software system. This is the approved basis for the commencement of the design efforts. All design components will be tracked back to this baseline for assurance that all requirements are addressed and that nothing not in the requirements appears in the design.



        The purpose of the requirements review, then, is to examine the statements of the requirements and determine if they adhere to the criteria for requirements. For the software quality practitioner, it may not be possible to determine the technical accuracy or correctness of the requirements, and this task will be delegated to those who have the specific technical expertise needed for this assessment. Software quality or its agent (perhaps an outside contractor or another group within the organization) will review the documents for the balance of the criteria.


        Each nonconformance to a criterion will be recorded along with a suggested correction. These will be returned to the authors of the documents, and the correction of the nonconformances tracked. The software quality practitioner also reports the results of the review and the status of the corrective actions to management.






        3.3.2 Design reviews


        Design reviews verify that the evolving design is both correct and traceable back to the approved requirements. Appendix E suggests an outline for the design documentation.




        3.3.2.1 Informal reviews


        Informal design reviews closely follow the style and execution of informal requirements reviews. Like the requirements, all aspects of the design must adhere to the criteria for good requirements statements. The design reviews go further, though, since there is more detail to be considered, as the requirements are broken down into smaller and smaller pieces in preparation for coding.


        The topic of walk-throughs and inspections has already been addressed. These are in-process reviews that occur during the preparation of the design. They look at design components as they are completed.


        Design documents describe how the requirements are apportioned to each subsystem and module of the software. As the apportionment proceeds, there is a tracing of the elements of the design back to the requirements. The reviews that are held determine if the design documentation describes each module according to the same criteria used for requirements.






        3.3.2.2 Formal reviews


        There are at least two formal design reviews, the PDR and the CDR. In addition, for larger or more complex systems, the organization standards may call for reviews with concentrations on interfaces or database concerns. Finally, there may be multiple occurrences of these reviews if the system is very large, critical, or complex.


        The number and degree of each review are governed by the standards and needs of the specific organization.


        The first formal design review is the PDR, which takes place at the end of the initial design phase and presents the functional or architectural breakdown of the requirements into executable modules. The PDR presents the design philosophy and approach to the solution of the problem as stated in the requirements. It is very important that the customer or user take an active role in this review. Defects in the requirements, misunderstandings of the problem to be solved, and needed redirections of effort can be resolved in the course of a properly conducted PDR.


        Defects found in the PDR are assigned for solution to the appropriate people or groups, and upon closure of the action items, the second baseline of the software is established. Changes made to the preliminary design are also reflected as appropriate in the requirements document, so that the requirements are kept up to date as the basis for acceptance of the software later on. The new baseline is used as the foundation for the detailed design efforts that follow.


        At the end of the detailed design, the CDR is held. This, too, is a time for significant customer or user involvement. The result of the CDR is the code-to design that is the blueprint for the coding of the software. Much attention is given in the CDR to the adherence of the detailed design to the baseline established at PDR. The customer or user, too, must approve the final design as being acceptable for the solution of the problem presented in the requirements. As before, the criteria for requirements statements must be met in the statements of the detailed design.


        So that there is assurance that nothing has been left out, each element of the detailed design is mapped back to the approved preliminary design and the requirements. The requirements are traced forward to the detailed design, as well, to show that no additions have been made along the way that do not address the requirements as stated. As before, all defects found during CDR are assigned for solution and closure. Once the detailed design is approved, it becomes the baseline for the coding effort. A requirements traceability matrix is an important tool to monitor the flow of requirements into the preliminary and detailed design and on into the code. The matrix can also help show that the testing activities address all the requirements.






        3.3.2.3 Additional reviews


        Another review that is sometimes held is the interface design review. The purpose of this review is to assess the interface specification that will have been prepared if there are significant interface concerns on a particular project. The format and conduct of this review are similar to the PDR and CDR, but there is no formal baseline established as a result of the review. The interface design review will contribute to the design baseline.


        The database design review also may be conducted on large or complex projects. Its intent is to ascertain that all data considerations have been made as the database for the software system has been prepared. This review will establish a baseline for the database, but it is an informal baseline, subordinate to the baseline from the CDR.








        3.3.3 Test documentation reviews


        Test documentation is reviewed to ensure that the test program will find defects and will test the software against its requirements.


        The objective of the test program as a whole is to find defects in the software products as they are developed and to demonstrate that the software complies with its requirements. Test documentation is begun during the requirements phase with the preparation of the initial test plans. Test documentation reviews also begin at this time, as the test plans are examined for their comprehensiveness in addressing the requirements. See Appendix G for a suggested test plan outline.


        The initial test plans are prepared with the final acceptance test in mind, as well as the intermediate tests that will examine the software during development. It is important, therefore, that each requirement be addressed in the overall test plan. By the same token, each portion of the test plan must specifically address some portion of the requirements. It is understood that the requirements, as they exist in the requirements phase, will certainly undergo some evolution as the software development process progresses. This does not negate the necessity for the test plans to track the requirements as the basis for the testing program. At each step further through the SDLC, the growing set of test documentation must be traceable back to the requirements. The test program documentation must also reflect the evolutionary changes in the requirements as they occur. Figure 3.4 shows how requirements may or may not be properly addressed by the tests. Some requirements may get lost, some tests may just appear. Proper review of the test plans will help identify these mismatches.






        Figure 3.4: Matching tests to requirements.

        As the SDLC progresses, more of the test documentation is prepared. During each phase of the SDLC, additional parts of the test program are developed. Test cases (see Appendix H) with their accompanying test data are prepared, followed by test scenarios and specific test procedures to be executed. For each test, pass/fail criteria are determined, based on the expected results from each test case or scenario. Test reports (see Appendix I) are prepared to record the results of the testing effort.


        In each instance, the test documentation is reviewed to ascertain that the test plans, cases, scenarios, data, procedures, reports, and so on are complete, necessary, correct, measurable, consistent, traceable, and unambiguous. In all, the most important criterion for the test documentation is that it specifies a test program that will find defects and demonstrate that the software requirements have been satisfied.


        Test documentation reviews take the same forms as the reviews of the software documentation itself. Walk-throughs of the test plans are conducted during their preparation, and they are formally reviewed as part of the SRR. Test cases, scenarios, and test data specifications are also subject to walk-throughs and sometimes inspections. At the PDR and CDR, these documents are formally reviewed.


        During the development of test procedures, there is a heavy emphasis on walk-throughs, inspections, and even dry runs to show that the procedures are comprehensive and actually executable. By the end of the coding phase, the acceptance test should be ready to be performed, with all documentation in readiness.


        The acceptance test is not the only test with which the test documentation is concerned, of course. All through the coding and testing phases, there have been unit, module, integration, and subsystem tests going on. Each of these tests has also been planned and documented, and that documentation has been reviewed. These tests have been a part of the overall test planning and development process and the plans, cases, scenarios, data, and so on have been reviewed right along with the acceptance test documentation. Again, the objective of all of these tests is to find the defects that prevent the software from complying with its requirements.






        3.3.4 User documentation reviews


        User documentation must not only present information about the system, it must be meaningful to the reader.


        The reviews of the user documentation are meant to determine that the documentation meets the criteria already discussed. Just as important, however, is the requirement that the documentation be meaningful to the user. The initial reviews will concentrate on completeness, correctness, and readability. The primary concern will be the needs of the user to understand how to make the system perform its function. Attention must be paid to starting the system, inputting data and interpreting or using output, and the meaning of error messages that tell the user something has been done incorrectly or is malfunctioning and what the user can do about it.


        The layout of the user document (see Appendix K for an example) and the comprehensiveness of the table of contents and the index can enhance or impede the user in the use of the document. Clarity of terminology and avoiding system-peculiar jargon are important to understanding the document's content. Reviews of the document during its preparation help to uncover and eliminate errors and defects of this type before they are firmly imbedded in the text.


        A critical step in the review of the user documentation is the actual trial use of the documentation, by one or more typical users, before the document is released. In this way, omissions, confusing terminology, inadequate index entries, unclear error messages, and so on can be found. Most of these defects are the result of the authors' close association with the system rather than outright mistakes. By having representatives of the actual using community try out the documentation, such defects are more easily identified and recommended corrections obtained.


        Changes to user work flow and tasks may also be impacted by the new software system. To the extent that they are minor changes to input, control, or output actions using the system, they may be summarized in the user documentation. Major changes to behavior or responsibilities may require training or retraining.



        Software products are often beta-tested. User documents should also be tested. Hands-on trial use of the user documentation can point out the differences from old to new processes and highlight those that require more complete coverage than will be available in the documentation itself.






        3.3.5 Other documentation reviews


        Other documents are often produced during the SDLC and must be reviewed as they are prepared.


        In addition to the normally required documentation, other documents are produced during the software system development. These include the software development plan, the software quality system plan, the CM plan, and various others that may be contractually invoked or called for by the organization's standards. Many of these other documents are of an administrative nature and are prepared prior to the start of software development.


        The software development plan (see Appendix A), which has many other names, lays out the plan for the overall software development effort. It will discuss schedules, resources, perhaps work breakdown and task assignment rules, and other details of the development process as they are to be followed for the particular system development.


        The software quality system plan and the CM plan (see Appendixes B and C, respectively) address the specifics of implementing these two disciplines for the project at hand. They, too, should include schedule and resource requirements, as well as the actual procedures and practices to be applied to the project. There may be additional documents called out by the contract or the organization's standards as well.


        If present, the safety and risk management plans (see Appendixes L and M) must undergo the same rigor of review as all the others. The software maintenance plan (see Appendix N), if prepared, is also to be reviewed.


        Since these are the project management documents, it is important that they be reviewed at each of the formal reviews during the SLC, with modifications made as necessary to the documents or overall development process to keep the project within its schedule and resource limitations.


        Reviews of all these documents concentrate on the basic criteria and completeness of the discussions of the specific areas covered. Attention must be paid to complying with the format and content standards imposed for each document.


        Finally, the software quality practitioner must ascertain that all documents required by standards or the contract are prepared on the required schedule and are kept up to date as the SLC progresses. Too often, documentation that was appropriate at the time of delivery is not maintained as the software is maintained in operation. This leads to increased difficulty and cost of later modification. It is very important to include resources for continued maintenance of the software documentation, especially the maintenance documentation discussed in Chapter 8. To ignore the maintenance of the documentation will result in time being spent reinventing or reengineering the documentation each time maintenance of the software is required.


























































        Goals Aren't Estimates


        Tutorials on estimation typically are presented
        as though the necessary functionality is already defined and the
        purpose of estimation is to determine the corresponding effort, cost,
        time, and staff needed. Sometimes estimation works the other way,
        though. If the schedule is constrained such that the product absolutely
        must be delivered by a particular date, the relevant estimation
        parameter is how much functionality of given quality will fit into that
        time box. It's important to recognize that a management-imposed or
        marketing-imposed delivery schedule is not an
        estimate; it is a goal. A team of a certain size and skill level can
        produce only so much high-quality functionality in a fixed time.


        I once met a project manager who was delivering a
        new release of his product every three months. His team came up with a
        list of committed features they knew they could implement in the next
        quarter, as well as a prioritized list of "stretch" features. They
        would complete the committed features and then implement as many of the
        stretch features as they could before the scheduled delivery date
        arrived. This is an example of coping effectively with uncertain
        estimates through intelligent scope management in a time-boxed
        development cycle.













































        Chapter 33. Restructuring the System for Testing


        Chapter 30 introduced the motivation for changes to the application to accommodate testability. Chapter 32 showed one way to begin testing without such changes, by testing through the user interface. Such tests can then support the process of changing the application itself, helping to ensure that bugs are not introduced in the process.


        In this chapter, we introduce the ways and means of test infecting an application. Surprisingly, the changes we need to make to enable direct testing also have huge benefits in other respects. Michael Feathers [Fea04] provides considerable help in working with legacy, or non-test-infected, code.


        Sometimes, quite drastic changes are needed, and this can be off-putting because it looks like wasted work. The way forward is to make small changes that gradually test infect the code in areas that are changing anyway or are of low quality, so that the quality spreads in an organized, nonrisky manner.


        To do this, a bootstrapping approach is needed: once a slow outer layer of tests is in place, it can support the evolution of the changes that test infect the application.

















          An Example Filter: Session Logging


          Another useful application of filtering is the logging of incoming requests for later playback. You may occasionally want to take a snapshot of what users are doing on your system so you can simulate actual user activity for load testing and capacity planning.


          The recording process is very simple. For each different session, you generate a unique filename and store that filename in the session. (If a session doesn't contain a filename, you know that you are starting a new recording.)


          For each incoming request, write out the request parameters to the session recording, along with the name of the JSP or servlet. When the session ends, whether it times out or is explicitly invalidated, the recording is closed out.


          Listing 7.5 shows the session recording filter.


          Listing 7.5 Source Code for SessionRecorder.java



          package examples.filter;

          import javax.servlet.Filter;
          import javax.servlet.FilterConfig;
          import javax.servlet.FilterChain;
          import javax.servlet.ServletRequest;
          import javax.servlet.ServletResponse;
          import javax.servlet.ServletException;
          import javax.servlet.http.HttpServletRequest;
          import javax.servlet.http.HttpServletResponse;
          import javax.servlet.http.HttpSession;
          import javax.servlet.http.HttpSessionBindingListener;
          import javax.servlet.http.HttpSessionBindingEvent;
          import java.util.HashMap;
          import java.util.ArrayList;
          import java.util.Iterator;
          import java.util.Enumeration;
          import java.io.PrintWriter;
          import java.io.FileWriter;
          import java.io.BufferedWriter;

          public class SessionRecorder implements Filter
          {
              public static final String RECORDING_FILE = "RECORDING_FILE";

              public static HashMap times = new HashMap();
              public static int recordingId;
              public static String recordingPrefix;
              public FilterConfig config;

              public SessionRecorder()
              {
              }

              public void init(FilterConfig filterConfig)
              {
                  // Grab the destination of the recording files from
                  // the filter configuration. The prefix should contain
                  // a directory name and the beginning of a filename.
                  // The rest of the filename will be a sequence number
                  // and ".xml".
                  recordingPrefix = filterConfig.getInitParameter("prefix");
                  config = filterConfig;
              }

              public void destroy()
              {
              }

              public void doFilter(ServletRequest request,
                  ServletResponse response, FilterChain chain)
                  throws java.io.IOException, ServletException
              {
                  // Ignore non-http requests.
                  if (!(request instanceof HttpServletRequest))
                  {
                      chain.doFilter(request, response);
                      return;
                  }

                  HttpServletRequest httpRequest = (HttpServletRequest) request;

                  // Make sure a session exists so the recording can be attached to it.
                  httpRequest.getSession();

                  // Execute the JSP/servlet or chained filter.
                  chain.doFilter(request, response);

                  // Write the request out to the recording file.
                  recordRequest((HttpServletRequest) request,
                      (HttpServletResponse) response);
              }

              public synchronized static int getNextRecordingId()
              {
                  return recordingId++;
              }

              public String generateRecordingFilename()
              {
                  return recordingPrefix+getNextRecordingId()+".xml";
              }

              public PrintWriter openRecordingFile(String filename)
              {
                  try
                  {
                      PrintWriter out = new PrintWriter(new BufferedWriter(
                          new FileWriter(filename, true)));
                      return out;
                  }
                  catch (Exception exc)
                  {
                      config.getServletContext()
                          .log("Error opening recording file: ", exc);
                      return null;
                  }
              }

              public void recordRequest(HttpServletRequest request,
                  HttpServletResponse response)
              {
                  HttpSession session = request.getSession();

                  // Get the recording file name.
                  RecordingFile recordingFile =
                      (RecordingFile) session.getAttribute(RECORDING_FILE);

                  // If there is no recording file, create a new one.
                  if (recordingFile == null)
                  {
                      recordingFile = new RecordingFile(generateRecordingFilename());
                      session.setAttribute(RECORDING_FILE, recordingFile);
                      initializeRecordingFile(recordingFile.filename);
                  }

                  // Write the request parameters and URI to the file.
                  try
                  {
                      PrintWriter out = openRecordingFile(recordingFile.filename);
                      if (out == null) return;

                      out.println("<request>");
                      out.print("<uri>");
                      out.print(request.getRequestURI());
                      out.println("</uri>");
                      Enumeration e = request.getParameterNames();
                      while (e.hasMoreElements())
                      {
                          String paramName = (String) e.nextElement();
                          String[] values = request.getParameterValues(paramName);
                          for (int i=0; i < values.length; i++)
                          {
                              out.print("<param><name>");
                              out.print(paramName);
                              out.print("</name><value>");
                              out.print(values[i]);
                              out.println("</value></param>");
                          }
                      }
                      out.println("</request>");
                      out.close();
                  }
                  catch (Exception exc)
                  {
                      config.getServletContext()
                          .log("Error appending to recording file: ", exc);
                  }
              }

              public void initializeRecordingFile(String filename)
              {
                  try
                  {
                      PrintWriter out = openRecordingFile(filename);
                      if (out == null) return;
                      out.println("<?xml version=\"1.0\"?>");
                      out.println("<session>");
                      out.close();
                  }
                  catch (Exception exc)
                  {
                      config.getServletContext()
                          .log("Error initializing recording file: ", exc);
                  }
              }

              public void finishRecordingFile(String filename)
              {
                  try
                  {
                      PrintWriter out = openRecordingFile(filename);
                      if (out == null) return;
                      out.println("</session>");
                      out.close();
                  }
                  catch (Exception exc)
                  {
                      config.getServletContext()
                          .log("Error finishing recording file: ", exc);
                  }
              }

              class RecordingFile implements HttpSessionBindingListener
              {
                  public String filename;

                  public RecordingFile(String aFilename)
                  {
                      filename = aFilename;
                  }

                  public void valueBound(HttpSessionBindingEvent event)
                  {
                  }

                  public void valueUnbound(HttpSessionBindingEvent event)
                  {
                      // When the session terminates, this method is invoked.
                      // Close out the recording file by writing the closing tag.
                      finishRecordingFile(filename);
                  }
              }
          }

          The deployment descriptor for this application is shown in Listing 7.6.


          Listing 7.6 Source Code for web.xml



          <?xml version="1.0" encoding="ISO-8859-1"?>
          <web-app xmlns="http://java.sun.com/xml/ns/j2ee"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee
                       http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"
                   version="2.4">
              <display-name>Show Times</display-name>
              <description>An application to demonstrate the use of a filter
              </description>

              <filter>
                  <filter-name>Session Recorder</filter-name>
                  <filter-class>examples.filter.SessionRecorder</filter-class>
                  <init-param>
                      <param-name>prefix</param-name>
                      <param-value>/tmp/sessions</param-value>
                  </init-param>
              </filter>

              <filter-mapping>
                  <filter-name>Session Recorder</filter-name>
                  <url-pattern>/*</url-pattern>
              </filter-mapping>

          </web-app>

          Remember to change the <init-param> of the session recorder filter to a directory suitable to your configuration.


          Listing 7.7 shows the recording of a request to a simple JSP.


          Listing 7.7 Example Recorded Session



          <?xml version="1.0"?>
          <session>
          <request>
          <uri>/sessionrecorder/postit.jsp</uri>
          <param><name>age</name><value>10</value></param>
          <param><name>age</name><value>7</value></param>
          <param><name>name</name><value>Samantha</value></param>
          <param><name>name</name><value>Kaitlynn</value></param>
          </request>
          </session>

          Information like this can be very useful in debugging applications or for security purposes. In addition, with a complementary application that enables you to "play-back" the session, you can perform automated regression or load testing.













            TCHAR.H



            This header file contains devices that are intended to make it easy for you to write string analysis and manipulation code, which will work with ANSI, DBCS, or Unicode strings, depending on your project setting. These devices include the following:




            • The TCHAR typedef. Its alias is either char or wchar_t, as previously described. Some types in the Platform SDK are derived from TCHAR, such as LPTSTR and LPCTSTR.


            • The _T (or _TEXT) macros. These produce either narrow or wide character literals or string literals in your source code, depending on your project setting.


            • Run-time library wrappers and other helper functions and macros that select the correct behavior against the TCHAR type. Examples include string pointer arithmetic helpers, such as _tcsinc and _tcsdec, and string function wrappers, such as _tcslen, _tcsclen, _tcscpy, and _tcsncpy.





            The following code segment returns the sum of all the character values in the given string. It works correctly for ANSI, DBCS, and Unicode strings and accepts narrow or wide string input depending on your project settings.





            double CrossSum(LPCTSTR sArg)
            {
                double fResult = 0;

                for (LPCTSTR sCurChar = sArg; *sCurChar;
                     sCurChar = _tcsinc(sCurChar))
                    fResult += _tcsnextc(sCurChar);

                return fResult;
            }
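
            For illustration, the function might be called as follows; this is a minimal sketch, and the _tmain entry point, the literal, and the variable names are just examples rather than code from the original project.

            #include <windows.h>
            #include <tchar.h>
            #include <stdio.h>

            double CrossSum(LPCTSTR sArg);   // defined above

            int _tmain(int argc, _TCHAR* argv[])
            {
                // _T picks a narrow or wide literal to match the project's character
                // setting, so the same line works for ANSI, DBCS, or Unicode builds.
                LPCTSTR sSample = _T("Hello");
                _tprintf(_T("Cross sum: %.0f\n"), CrossSum(sSample));
                return 0;
            }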



            The TChar.h header file contains interesting tricks and ideas that could be helpful in your own projects. Browse through it some time.



27.5 Testing



To test this program, we came up with a small C++ program that contains every different type of possible token. This test file is shown in Example 27-2.




Example 27-2. stat/test.cpp

/********************************************************
 * This is a multi-line comment                         *
 * T_COMMENT, T_NEWLINE                                 *
 ********************************************************/
const int LINE_MAX = 500;       // T_ID, T_OPERATOR, T_NUMBER

// T_L_PAREN, T_R_PAREN
static void do_file(const char *const name)
{
    // T_L_CURLY
    char *title = "Test";       // T_STRING

    // T_R_CURLY
}
// T_EOF









              12.3 Productivity Metrics


As stated in the preface, productivity metrics are outside the scope of this book. Software productivity is a complex subject that deserves a much more complete treatment than a brief discussion in a book that focuses on quality and quality metrics. For non-OO projects, much research has been done in assessing and measuring productivity, and there are a number of well-known books in the literature; for example, see Jones's work (1986, 1991, 1993, 2000). For OO projects, relatively little research on productivity has been conducted and published. Because this chapter is on OO metrics in general, we include a brief discussion of productivity metrics.


Metrics such as lines of code per hour, function points per person-month (PM), number of classes per person-year (PY) or person-month, number of methods per PM, average person-days per class, or even hours per class and average number of classes per developer have been proposed or reported in the literature for OO productivity (Card and Scalzo, 1999; Chidamber et al., 1997; IBM OOTC, 1993; Lorenz and Kidd, 1994). Despite the differences in units of measurement, these metrics all measure the same concept of productivity: the number of units of output per unit of effort. In OO development, the unit of output is the class or method, and the common units of effort are the PY and the PM. Among the many variants of productivity metrics, number of classes per PY and number of classes per PM are perhaps the most frequently used.


Let us look at some actual data. For the five IBM projects discussed earlier, data on project size in terms of number of classes were available (Table 12.2). We also tracked the total PYs for each project, from design, through development, to the completion of testing. We did not have effort data for Project E because it was a joint project with an external alliance. The numbers of classes per PY calculated for these projects are shown in Table 12.5. They ranged from 2.8 to 6 classes per PM, with an average of 4.4 classes per PM and a standard deviation of 1.1. The dispersion of the distribution was small in view of the fact that these were separate projects with different development teams, albeit all developed in the same organization. The high number of classes per PM for Project B may be related to the small number of methods per class (3 methods per class) for that project, as discussed earlier. It is also significant to note that the differences between the C++ projects and the Smalltalk projects were small.
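
To make the arithmetic explicit, consider a worked example using the Project A figures from Table 12.5: 5,741 classes delivered with 100 PY of effort gives 5,741 / 100 = 57.4 classes per PY, or 57.4 / 12 = approximately 4.8 classes per PM.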


Lorenz and Kidd (1994) show data on the average number of person-days per class for four Smalltalk projects and two C++ projects, in histogram format. From the histograms, we estimate that the person-days per class for the four Smalltalk projects were about 7, 6, 2, and 8, and for the two C++ projects about 23 and 35. The Smalltalk figures are close to those of the IBM projects, amounting to about 4 classes per PM; the C++ projects amounted to about one PM or more per class.

Table 12.5. Productivity in Terms of Number of Classes per PY for Five OO Projects

Project                          Number of Classes   PY    Classes per PY   Classes per PM   Methods per PM
Project A (C++)                  5,741               100   57.4             4.8              8
Project B (C++)                  2,513               35    71.8             6                18
Project C (C++)                  3,000               90    33.3             2.8              20
Project D (IBM Smalltalk)        100                 2     50               4.2              71
Project E (OTI Smalltalk)        566                 na    na               na               na
Project F (Digitalk Smalltalk)   492                 10    49.2             4.1              86





Lorenz and Kidd (1994) list the pertinent factors behind these differences, including user interface versus model classes, abstract versus concrete classes, key versus support classes, framework versus framework-client classes, and immature versus mature classes. For example, they observe that key classes, the classes that embody the "essence" of the business domain, normally take more time to develop and require more interaction with domain experts. Framework classes are powerful but are not easy to develop and require more effort. Mature classes typically have more methods but require less development time. Therefore, without a good understanding of the projects and a consistent operational definition, it is difficult to make valid comparisons across projects or organizations.


              It should be noted that all the IBM projects discussed here were systems software, either part of an operating system, or related to an operating system or a development environment. The architecture and subsystem design were firmly in place. Therefore, the classes of these projects may belong more to the mature class category. Data on classes shown in the tables include all classes, regardless of whether they were abstract or concrete, and key or support classes.


In a recent assessment of OO productivity, we looked at data from two OO projects developed at two IBM sites that were building middleware related to business frameworks and Web servers. Their productivity numbers, shown in Table 12.6, were much lower than those discussed earlier. These numbers certainly reflect the difficulty of designing and implementing framework-related classes, versus the more mature classes related to operating systems. The effort data in the table include the end-to-end effort from architecture through design, development, and test. If we confined the measurement to development and test and excluded the effort related to design and architecture, the metrics values would increase to the following:



              Web server: 2.6 classes per PM, 4.5 methods per PM


              Framework: 1.9 classes per PM, 14.8 methods per PM






























Table 12.6. Productivity Metrics for Two OO Projects

Project      Classes (C++)   Methods (C++)   Total PMs   Classes per PM   Methods per PM
Web Server   598             1,029           318         1.9              3.2
Framework    3,215           24,670          2,608       1.2              9.5



The IBM OOTC's rule of thumb for effort estimation (at the early design stage of a project) is one to three PM per business class, or key class (West, 1999). In Lorenz and Kidd's (1994) definition, a key class is a class that is central to the business domain being automated; it is one that would cause great difficulties in developing and maintaining a system if it did not exist. Because the ratio of key classes to support classes (or to total classes in the entire project) is not known, it is difficult to correlate this 1 to 3 PM per key class guideline with the numbers discussed above.
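
As a simple illustration of the rule of thumb, using hypothetical numbers: a system expected to contain 100 business classes would be estimated, at the early design stage, at roughly 100 to 300 PM of total effort.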


In summary, we have attempted to evaluate some empirical OO productivity data in terms of number of classes per PM. Based on the preceding discussion, we offer the following tentative values as a stake in the ground for OO project effort estimation:



              • For project estimate at the early design phase, 1 to 3 PM per business class (or one-third of a class to one class per PM)


              • For framework-related projects, about 1.5 classes per PM


              • For mature class projects for systems software, about 4 classes per PM



Further studies and accumulation of empirical findings are definitely needed to establish the robustness of such OO productivity metrics. A drawback of OO metrics is that there are no conversion rules to lines of code metrics or to function point metrics. As such, comparisons cannot be made between OO projects described by OO metrics and projects outside the OO paradigm. According to Jones (2002), function point metrics work well with OO projects. Among the clients of Software Productivity Research, Inc. (SPR), those who are interested in comparing OO productivity and quality levels with those of procedural projects all use function point metrics (Jones, 2002). The function point could eventually be the link between OO and non-OO metrics. Because there are variations in the function and responsibility of classes and methods, some studies have started to use the number of function points as a weighting factor when counting classes and methods.


Finally, as a side note, regardless of whether it is classes per PM for OO projects or LOC per PY and function points per PM for procedural languages, these productivity metrics are two-dimensional: output and effort. The productivity concept in software, especially at the project level, is three-dimensional: output (size or function of deliverables), effort, and time. This is because the tradeoff between time and effort is not linear, so the dimension of time must be addressed. If quality is included as yet another variable, the productivity concept becomes four-dimensional. Assuming quality is held constant, or that quality criteria can be established as part of the requirements for the deliverable, we can avoid the confusion of mixing productivity and quality, and productivity remains a three-dimensional concept. As shown in Figure 12.2, if any two of the three dimensions are held constant, a change in the third is a statement of productivity. For example, if effort (resources) and development time are fixed, then the more output (function) a project produces, the more productive the project team is. Likewise, if resources and output (required functions) are fixed, then the faster the team delivers, the more productive it is.



              Figure 12.2. Dimensions of the Productivity Concept





It appears, then, that two-dimensional metrics are not really adequate for measuring software productivity. Based on a large body of empirical data, Putnam and Myers (1992) derived the software productivity index (PI), which takes all three dimensions of productivity into account. For the output dimension, the PI equation still uses LOC, so the index is subject to all the shortcomings associated with LOC, which are well documented in the literature (Jones, 1986, 1991, 1993, 2000). The index is nonetheless more robust than the two-dimensional metrics because (1) it includes time in its calculation, (2) there is a coefficient in the formula to calibrate for the effects of project size, and (3) after the calculation is done, a categorization process translates the raw productivity parameter (which is a huge number) into the final productivity index (PI), so the impact of variations in LOC data is reduced. Putnam and associates also provide values of PI by type of software, based on a large body of empirical data on industry projects. Therefore, the calculated PI value of a project can be compared with the industry average for that type of software.


For procedural programming, function point productivity metrics are regarded as better than LOC-based productivity metrics, but the time dimension still needs to be addressed; the same is true of the OO productivity metrics. Applying Putnam's PI approach to function point and OO metrics would likely produce better and more adequate productivity metrics. This, however, requires more research with a large body of empirical data in order to establish equations equivalent to the LOC-based PI equation.







                11.2 LOCAL LOOP


                The local loop, which is a dedicated connection between the terminal equipment (the telephone) and the switch, is the costliest element in the telephone network. Generally, from the switch, a cable is laid up to a distribution box (also called a distribution point) from which individual cable pairs are taken to the individual telephone instruments.


To reduce the cable-laying work, particularly to provide telephone connections in dense areas such as high-rise residential complexes, digital loop carrier (DLC) systems are being introduced. A DLC system is shown in Figure 11.4. The telephone cables are distributed from the DLC system, and the DLC is connected to the digital switch by a single high-bandwidth cable.






                Figure 11.4: Digital loop carrier (DLC) in PSTN.


To reduce installation and maintenance effort (such as finding out where a cable fault is), wireless local loops are now being introduced, and wireless local loops (WLLs) using CDMA technology are becoming widespread. The advantages of WLL are (a) low installation costs, because no digging is required; (b) low maintenance costs, because equipment is present only at the two ends (the switch or the distribution point, and the terminal equipment); (c) fast installation; and (d) the possibility of limited mobility.


Most of the present copper local loops can support only very limited data rates; when accessing the Internet over the telephone network, the speed is generally limited to about 56 kbps. Nowadays, user demand is increasing for voice and video services that cannot be supported by the present local loops. Hence fiber, which can bring very high bandwidth to subscribers and support services such as video conferencing and graphics, is the best choice, and in the future optical fiber is likely to be the medium of choice for the local loop. Experiments are under way to develop plastic optical fibers that can tolerate slight bends and support high data rates; plastic optical fiber would be an ideal choice for fiber to the home.










Because the data rate supported by twisted-pair copper cable is limited, optical fiber will be the preferred local loop medium in the future for providing high-bandwidth services. For remote and rural areas, wireless local loop is the preferred choice because of its fast installation and low maintenance costs.

