Identifying Important Database Monitoring Categories


Depending on a company's organization, various people may control different aspects of its databases. Database administrators, database and system operators, application managers, system administrators, and network managers each may have roles in either monitoring or configuring the company databases. In smaller companies, multiple roles are filled by a single person. Database operators keep the database up and available, solve performance problems, and back up the data. Database administrators manage security issues, handle load balancing and capacity planning issues, and deploy new database applications. Each may also have different needs for database monitoring tools.


These different monitoring activities can be grouped into the following categories:



  • Configuration management


  • Fault management


  • Performance and resource management


  • Security


  • Backup



These groups are described in more detail next. The appropriate level of monitoring in each of these areas depends on your specific database responsibilities.


Configuring the Database


Before you can effectively monitor a database, you should verify the proper database configuration. Both system and database configuration files should be checked for correctness. An invalid database configuration can prevent database access. Each database relies on several configuration files. For example, Oracle depends on the following configuration files:



  • tnsnames.ora


  • init.ora


  • listener.ora


  • sqlnet.ora



These files can be corrupted if configured manually or not distributed consistently to all systems in the computing environment using the database. The database administrator should ensure that the initial configuration and subsequent configurations are correct. Appropriate event notification should be put in place so that the operator can be aware of any configuration changes. The events can serve as an audit history when later trying to track down the cause of a problem.


Network General's Database Module for Oracle7 is one example of a tool that can monitor configuration information. This module is used as an add-on to a network analyzer to study database traffic. SQL packets are decoded, enabling the analyzer to scan the configuration information contained in connection packets to see whether a configuration problem exists. Another approach is to periodically check the modification dates of the system and database configuration files to see whether changes have been made. An older copy of the configuration should be kept for comparison purposes.
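As a concrete sketch of this modification-date check, the following C++17 fragment compares each Oracle configuration file against a saved reference copy; the /etc/oracle and /var/oracle/baseline paths are illustrative assumptions, not standard locations.

#include <filesystem>
#include <iostream>

namespace fs = std::filesystem;

int main() {
    const fs::path configDir   = "/etc/oracle";           // hypothetical live configuration
    const fs::path baselineDir = "/var/oracle/baseline";  // hypothetical saved copies
    const char* files[] = {"tnsnames.ora", "init.ora", "listener.ora", "sqlnet.ora"};

    for (const char* name : files) {
        fs::path live  = configDir / name;
        fs::path saved = baselineDir / name;
        if (!fs::exists(live)) {
            std::cout << "ALERT: " << live << " is missing\n";
        } else if (!fs::exists(saved)) {
            std::cout << "NOTE: no baseline copy of " << name << "; save one now\n";
        } else if (fs::last_write_time(live) > fs::last_write_time(saved)) {
            // The live file was modified after the baseline copy was taken:
            // raise an event and diff the two files to audit the change.
            std::cout << "ALERT: " << name << " changed since the baseline copy\n";
        }
    }
}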


Note that the correct database configuration depends on how the database will be used. Online transaction processing applications have different workload characterizations than decision support applications, for example.



Watching for Database Faults


The database operator must continually watch for failures in the database environment. This includes system and networking components, in addition to the database. System failures can prevent access to the database, and the loss of system components could severely degrade performance. Monitoring the database server system is not sufficient, because the network connecting the clients to the server can also fail. Techniques for monitoring the system, disks, and networks are discussed in other chapters.


The failure of various database software or hardware components can, in turn, cause the database itself to fail. The operator may simply want to log on to the database application periodically to ensure that it is still running. For example, critical Oracle components to check include the Oracle Names servers, Oracle servers, SQL*NET Listeners, and the Oracle MultiProtocol Interchange (MPI), which is used to translate between different network transport stacks, such as TCP/IP and IPX/SPX.
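For example, a crude liveness probe for the SQL*NET Listener is simply to attempt a TCP connection to it. The sketch below assumes a POSIX system and the conventional listener port 1521; both the host and the port are assumptions made for illustration.

#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

// Return true if a TCP connection to host:port succeeds.
bool reachable(const char* host, unsigned short port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return false;
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(port);
    inet_pton(AF_INET, host, &addr.sin_addr);
    bool ok = connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0;
    close(fd);
    return ok;
}

int main() {
    if (reachable("127.0.0.1", 1521))        // assumed listener host and port
        std::printf("SQL*NET Listener is accepting connections\n");
    else
        std::printf("ALERT: SQL*NET Listener is not reachable\n");
}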



Managing Database Resources and Performance


Monitoring database resources and performance can ensure that the database is being used effectively. The most common database problems are related to resource management. Some common errors are:



  • Out of disk space


  • Log file full


  • Performance overload



System and database resources should be studied periodically to detect trends. The operator should check the free space for tables, tablespaces, and indexes. Continual modifications and updates to a database can lead to database fragmentation, which means that the available storage for the database is scattered. Each available storage area is too small to be used and thus is wasted. Eventually, increased database fragmentation can lead to performance problems or even database failure. Some database monitoring tools help detect database fragmentation so that corrective action can be taken, such as restructuring the database objects.
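As a sketch of the simplest of these checks, the following C++17 fragment alerts when free space on a data-file volume falls below a threshold; the /u01/oradata path and the 10% threshold are illustrative assumptions.

#include <filesystem>
#include <iostream>

int main() {
    namespace fs = std::filesystem;
    const fs::path dataDir = "/u01/oradata";   // hypothetical data-file volume
    const double minFree   = 0.10;             // alert when less than 10% is free

    std::error_code ec;
    fs::space_info s = fs::space(dataDir, ec);
    if (ec) {
        std::cerr << "cannot stat " << dataDir << ": " << ec.message() << '\n';
        return 1;
    }
    double freeFraction = double(s.available) / double(s.capacity);
    if (freeFraction < minFree)
        std::cout << "ALERT: only " << freeFraction * 100
                  << "% free on " << dataDir << '\n';
    return 0;
}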


Performance problems can be difficult to diagnose. Many database performance problems are caused by a poorly designed database, or programs that are not coded efficiently. For these cases, a simple solution is unlikely. Therefore, the database operator needs monitoring tools that can predict performance problems and provide corrective actions when appropriate.



Keeping the Database Server Secure


You should also be able to detect security intrusions or access violations related to the database server. Monitoring tools can track unsuccessful logins, privileged statement executions, and other events that are important for maintaining database security.
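As an illustration, the following sketch counts failed-login records, assuming a hypothetical plain-text audit log in which each failed logon appears as a line containing the Oracle error code ORA-01017 (invalid username/password); the log path and alert threshold are invented for the example.

#include <fstream>
#include <iostream>
#include <string>

int main() {
    // Hypothetical audit log; the format is an assumption for this sketch.
    std::ifstream log("/var/log/oracle_audit.log");
    std::string line;
    int failures = 0;
    while (std::getline(log, line))
        if (line.find("ORA-01017") != std::string::npos)
            ++failures;
    if (failures > 5)    // illustrative alert threshold
        std::cout << "ALERT: " << failures << " failed logins recorded\n";
}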


You may also want to use the network monitoring tools discussed in Chapter 6 to help keep your systems secure. Network firewalls can stop intruders before they are able to access a computer system.



Ensuring Successful Database Backups


You also need to back up the systems and ensure that the backups run successfully. Depending on how the database is being used, different types of backup tools may be needed. A system operator responsible for the backup and restore procedures for hundreds of systems may not have the expertise to handle the procedures associated with a database vendor's backup tool. Automated tools for managing and monitoring database backup can be helpful in lowering the skill level needed for the job.


As mentioned earlier, the database administrator is responsible for trend analysis and capacity planning. Ideally, the administrator focuses on these strategic and planning activities, but often ends up helping operators troubleshoot database problems. Automated tools for database operations help free the administrator to focus on these longer-range planning tasks. The administrator is likely to use some of the same tools as the operator for performance and resource management. For example, PerfView can be used to forecast future performance bottlenecks and can also help analyze existing resource problems.


The remainder of this chapter describes a variety of tools and products that can be used to provide monitoring capabilities for databases. The emphasis is on the fault management and the performance and resource management categories; not many tools are available in the other important areas. Generic system and application tools, as well as database-specific tools, are covered in this chapter.


Oracle is, by far, the dominant vendor for UNIX database servers and thus gets the most attention in this chapter. Informix is another important UNIX database vendor and is used for comparison purposes.














    TestFixture



    Description



    The class TestFixture (see Figure C-23) defines the interface of a test fixture. TestCase is descended from TestFixture, so every test object is implicitly a test fixture. However, a test object is truly being used as a fixture only if it has multiple test methods that share objects. Philosophically, a fixture is a test environment, and the test methods interact with the environment to test different behaviors.





    The TestFixture methods setUp() and tearDown() are used to initialize and clean up the fixture's shared objects. When there are multiple test methods in the fixture, setUp() and tearDown() are called for each one. This ensures test isolation by making sure the fixture is in the same state for each test.
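    For illustration, a minimal fixture written against CppUnit's usual helper macros might look like the sketch below; the StackTest class, its test methods, and the shared vector are hypothetical, not part of the TestFixture reference.

    #include <cppunit/TestFixture.h>
    #include <cppunit/extensions/HelperMacros.h>
    #include <vector>

    class StackTest : public CppUnit::TestFixture {
        CPPUNIT_TEST_SUITE(StackTest);
        CPPUNIT_TEST(testPush);
        CPPUNIT_TEST(testStartsEmpty);
        CPPUNIT_TEST_SUITE_END();

        std::vector<int>* stack;   // object shared by the test methods

    public:
        void setUp()    { stack = new std::vector<int>; }  // runs before each test method
        void tearDown() { delete stack; }                  // runs after each test method

        void testPush() {
            stack->push_back(1);
            CPPUNIT_ASSERT_EQUAL(std::vector<int>::size_type(1), stack->size());
        }
        void testStartsEmpty() {
            CPPUNIT_ASSERT(stack->empty());  // fresh fixture: testPush() left no trace
        }
    };
    CPPUNIT_TEST_SUITE_REGISTRATION(StackTest);

    Because setUp() and tearDown() run around each test method, testStartsEmpty() sees a fresh vector even if testPush() ran first.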





    TestFixture belongs to the namespace CppUnit. It is declared and implemented in TestFixture.h.





    Figure C-23. The class TestFixture









    Declaration





    class TestFixture









    Constructors/Destructors






    virtual ~TestFixture()





    A destructor.











    Public Methods






    virtual void setUp() {}





    Initializes the fixture's shared objects. The default implementation does nothing.






    virtual void tearDown() {}





    Cleans up the fixture's shared objects. The default implementation does nothing.











    Protected/Private Methods



    None.





    Attributes



    None.




















      International Organization for Standardization (ISO)/IEC 12207



      The SEI is not the only quality or standards organization concerned with software processes and life cycles. As early as 1989, it was recognized that the standard imposed upon defense and other government contractors, Department of Defense Standard (DOD STD) 2167A, was not well suited to use with projects employing object-oriented design (OOD) or Rapid Application Development (RAD) methods. A new Military Standard (MIL STD) 498 was intended to correct issues with these methods, and it does indeed solve some of the problems. A third approach, International Organization for Standardization (ISO)/IEC 12207, which developed independently from MIL STD 498, is another step forward. It describes the major component processes of a complete software life cycle, their interfaces with one another, and the high-level relations that govern their interactions.



      As shown in Figure 4-7, ISO/IEC 12207 lists 12 engineering activities, following process implementation, that are similar to the phases in a typical SLCM:



      1. System requirement analysis;

      2. System architectural design;

      3. Software requirements analysis;

      4. Software architectural design;

      5. Software detailed design;

      6. Software coding and testing;

      7. Software integration;

      8. Software qualification testing;

      9. System integration;

      10. System qualification testing;

      11. Software installation;

      12. Software acceptance test.



      Figure 4-7. Engineering View of ISO/IEC 12207



      The ISO/IEC 12207 approach has been described as an implementation of a Plan-Do-Check-Act (PDCA) cycle, discussed in Chapter 3, "Process Overview."



      The intent is that the engineer should check the output of an engineering task before it becomes input to the next task. "The activities of the ISO/IEC 12207 development process have independence from one another … they are not ordered in a waterfall sequence and, … there are no requirements in the international standard that dictate which of them must be executed first and which next." In fact, ISO/IEC 12207 says explicitly, in paragraph 5.3.1.1, that "these activities and tasks may overlap or interact and may be performed iteratively or recursively." Paragraph 1.5 states that "this international standard does not prescribe a specific life cycle model or software development method." Paragraph 5.3.1.1 states that, unless the contract stipulates one, "the developer shall define or select a software life cycle model appropriate to the scope, magnitude, and complexity of the project. The activities and tasks of the development process shall be selected and mapped onto the life cycle model." The intent and effect of the language in the international standard are to provide flexibility in ordering activities and in choosing development models, avoiding the waterfall bias of other standards.[8]



      Individuals and organizations respected in the software, project management, and quality arenas agree on the necessity of process, and of life cycle processes in particular. PMI, Boehm, IT-Systems of the Federal Republic of Germany, the SEI, and the ISO have all recommended having a software development life cycle, one that is carefully selected and tailored for project suitability.






















        Qualities of Service


        Scenarios are extremely valuable, but they are not the only type of requirement. Scenarios need to be understood in the context of qualities of service (QoS). (Once upon a time, QoS were called "nonfunctional requirements," but because that term is nondescriptive, I won't use it here. Sometimes they're called 'ilities, which is a useful shorthand.)


        Most dissatisfiers can be eliminated by appropriately defining qualities of service. QoS might define global attributes of your system, or they might define specific constraints on particular scenarios. For example, the security requirement that "unauthorized users should not be able to access the system or any of its data" is a global security QoS. The performance requirement that "for 95% of orders placed, confirmation must appear within three seconds at 1,000-user load" is a specific performance QoS about a scenario of placing an order.


        Not all qualities of service apply to all systems, but you should know which ones apply to yours. Often QoS imply large architectural requirements or risk, so they should be negotiated with stakeholders early in a project.


        There is no definitive list of all the qualities of service that you need to consider. There have been several standards,[16] but they tend to become obsolete as technology evolves. For example, security and privacy issues are not covered in the major standards, even though they are the most important ones in many modern systems.


        The following four sections list some of the most common QoS to consider on a project.



        Security and Privacy


        Unfortunately, the spread of the Internet has made security and privacy every computer user's concern. These two QoS are important for both application development and operations, and customers are now sophisticated enough to demand to know what measures you are taking to protect them. Increasingly, they are becoming the subject of government regulation.


        • Security: The ability of the software to prevent access and disruption by unauthorized users, viruses, worms, spyware, and other agents.

        • Privacy: The ability of the software to prevent unauthorized access or viewing of Personally Identifiable Information.




        Performance


        Performance is most often noticed when it is poor. In designing, developing, and testing for performance, it is important to differentiate the QoS that influence the end experience of overall performance.


        • Responsiveness: The absence of delay when the software responds to an action, call, or event.

        • Concurrency: The capability of the software to perform well when operated concurrently with other software.

        • Efficiency: The capability of the software to provide appropriate performance relative to the resources used under stated conditions.[17]

        • Fault Tolerance: The capability of the software to maintain a specified level of performance in cases of software faults or of infringement of its specified interface.[18]

        • Scalability: The ability of the software to handle simultaneous operational loads.




        User Experience


        While "easy to use" has become a cliché, a significant body of knowledge has grown around design for user experience.


        • Accessibility: The extent to which individuals with disabilities have access to and use of information and data that is comparable to the access to and use by individuals without disabilities.[19]

        • Attractiveness: The capability of the software to be attractive to the user.[20]

        • Compatibility: The conformance of the software to conventions and expectations.

        • Discoverability: The ability of the user to find and learn features of the software.

        • Ease of Use: The cognitive efficiency with which a target user can perform desired tasks with the software.

        • Localizability: The extent to which the software can be adapted to conform to the linguistic, cultural, and conventional needs and expectations of a specific group of users.





        Manageability


        Most modern solutions are multi-tier, distributed, service-oriented, or client-server applications. The cost of operating these applications often exceeds the cost of developing them by a large factor, yet few development teams know how to design for operations. Appropriate QoS to consider are as follows:


        • Availability: The degree to which a system or component is operational and accessible when required for use. Often expressed as a probability.[21] This is frequently cited as "nines," as in "three nines," meaning 99.9% availability.

        • Reliability: The capability . . . to maintain a specified level of performance when used under specified conditions.[22] Frequently stated as Mean Time Between Failures (MTBF).

        • Installability and Uninstallability: The capability . . . to be installed in a specific environment[23] and uninstalled without altering the environment's initial state.

        • Maintainability: The ease with which a software system or component can be modified to correct faults, improve performance or other attributes, or adapt to a changed environment.[24]

        • Monitorability: The extent to which health and operational data can be automatically collected from the software in operation.

        • Operability: The extent to which the software can be controlled automatically in operation.

        • Portability: The capability of the software to be transferred from one environment to another.[25]

        • Recoverability: The capability of the software to reestablish a specified level of performance and recover the data directly affected in the case of a failure.[26]

        • Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.[27]

        • Supportability: The extent to which operational problems can be corrected in the software.

        • Conformance to Standards: The extent to which the software adheres to applicable rules.

        • Interoperability: The capability of the software to interact with one or more specified systems.[28]


        What makes a good QoS requirement? As with scenarios, QoS requirements need to be explicitly understandable to their stakeholder audiences, defined early, and when planned for an iteration, they need to be testable. You may start with a general statement about performance, for example, but in an iteration set specific targets on specific transactions at specific load. If you cannot state how to test satisfaction of the requirement when it becomes time to assess it, then you can't measure the completion.





























        Chapter 7 Error Handling and Built-In Exceptions



        Lab 7.1 Self-Review Answers

        A1: B. A compiler is able to detect only syntax errors. It cannot detect runtime errors because they do not occur until the program executes. Furthermore, a runtime error generally occurs only on some occasions and not on others.

        A2: B. An exception-handling section is an optional section of a PL/SQL block. You will recall that only the executable section of a PL/SQL block is required.

        A3: B.

        A4: B.

        A5: B, C. Both options are correct. However, you should remember that the value of number 1 is not important. It is number 2 that causes an exception to be raised when its value is equal to zero.





        Lab 7.2 Self-Review Answers

        A1: A. You will recall that a built-in exception is raised when a program breaks an Oracle rule. In other words, you do not need to specify how to raise a built-in exception; rather, you specify what actions must be taken when a particular built-in exception is raised. A built-in exception is raised by Oracle implicitly.

        A2: B.

        A3: B. When a group function is used in a SELECT INTO statement, there is always at least one row returned. As a result, the exception NO_DATA_FOUND is not raised.

        A4: B. Once an exception has been raised in a PL/SQL block, the execution of the block terminates.

        A5: B. An exception-handling section may contain multiple exception handlers, for example, NO_DATA_FOUND and OTHERS.

















          Recipe 4.5. Sorting an Array





          Problem


          You want to sort an array of objects, possibly according to some custom notion of what "sorting" means.




          Solution


          Homogeneous arrays of common data types, like strings or numbers, can be sorted "naturally" by just calling Array#sort:



          [5.01, -5, 0, 5].sort # => [-5, 0, 5, 5.01]
          ["Utahraptor", "Ankylosaur", "Maiasaur"].sort
          # => ["Ankylosaur", "Maiasaur", "Utahraptor"]



          To sort objects based on one of their data members, or by the results of a method call, use Array#sort_by. This code sorts an array of arrays by size, regardless of their contents:



          arrays = [[1,2,3], [100], [10,20]]
          arrays.sort_by { |x| x.size } # => [[100], [10, 20], [1, 2, 3]]



          To do a more general sort, create a code block that compares the relevant aspect of any two given objects. Pass this block into the sort method of the array you want to sort.


          This code sorts an array of numbers in ascending numeric order, except that the number 42 will always be at the end of the list:



          [1, 100, 42, 23, 26, 10000].sort do |x, y|
            x == 42 ? 1 : (y == 42 ? -1 : x <=> y)
          end
          # => [1, 23, 26, 100, 10000, 42]





          Discussion


          If there is one "canonical" way to sort a particular class of object, then you can have that class implement the <=> comparison operator. This is how Ruby automatically knows how to sort numbers in ascending order and strings in ascending ASCII order: Numeric and String both implement the comparison operator.


          The sort_by method sorts an array using a Schwartzian transform (see Recipe 4.6 for an in-depth discussion). This is the most useful customized sort, because it's fast and easy to define. In this example, we use sort_by to sort on any one of an object's fields.



          class Animal
            attr_reader :name, :eyes, :appendages

            def initialize(name, eyes, appendages)
              @name, @eyes, @appendages = name, eyes, appendages
            end

            def inspect
              @name
            end
          end

          animals = [Animal.new("octopus", 2, 8),
                     Animal.new("spider", 6, 8),
                     Animal.new("bee", 5, 6),
                     Animal.new("elephant", 2, 4),
                     Animal.new("crab", 2, 10)]

          animals.sort_by { |x| x.eyes }
          # => [octopus, elephant, crab, bee, spider]

          animals.sort_by { |x| x.appendages }
          # => [elephant, bee, octopus, spider, crab]



          If you pass a block into sort, Ruby calls the block to make comparisons instead of using the comparison operator. This is the most general possible sort, and it's useful for cases where sort_by won't work.


          The comparison operator takes one argument, an object against which to compare self; a sort code block receives the two objects to compare. A call to <=> (or a sort code block) should return -1 if the first object is "less than" the second (and should therefore show up before it in a sorted list). It should return 1 if the first object is "greater than" the second (and should show up after it in a sorted list), and 0 if the objects are "equal" (and it doesn't matter which one shows up first). You can usually avoid remembering this by delegating the return value to some other object's <=> implementation.




          See Also


          • Recipe 4.6, "Ignoring Case When Sorting Strings," covers the workings of the Schwartzian Transform

          • Recipe 4.7, "Making Sure a Sorted Array Stays Sorted"

          • Recipe 4.10, "Shuffling an Array"

          • If you need to find the minimum or maximum item in a list according to some criteria, don't sort it just to save writing some code; see Recipe 4.11, "Getting the N Smallest Items of an Array," for other options













          The Scheduler


          The heart of the RTOS is the scheduler. The scheduler is the piece of code that figures out which task should run next. Usually at the end of every system call, the system invokes the scheduler to determine if the currently running task should be suspended in favor of some other task.


          The OS views each task as being in one of several run states. Each RTOS defines its own set of run states, but three are almost universal: running, ready-to-run, and blocked. At any instant in time, only one task can be in the running state: namely, the currently executing active task. Tasks that could be running if the processor weren't already being used by some higher priority task are assigned to the ready-to-run state. Tasks that are waiting for some system call to occur before they can run are assigned to the blocked state. When a blocked task becomes unblocked, it is moved to the ready-to-run state.


          When the currently running task finishes or loses its claim to the processor (perhaps because some higher priority task became ready), the OS does a context switch. To perform a context switch, the RTOS saves the current state of the CPU (or context) in a structure dedicated to the active task, usually called a task control block (TCB). The RTOS then retrieves the context belonging to the incoming task (from the new task's TCB) and loads that into the CPU as the new context. If the scheduler is priority based, the new task must have a higher priority than the previously running task, because the scheduler always selects the highest priority task that is ready to run. Frequently, there are several tasks in the system with higher priority than the current task, but these tasks are not running because they are not in the ready-to-run state.
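          As a sketch of this selection rule, the following C++ fragment picks the highest priority ready-to-run task from a set of task control blocks; the TCB layout is hypothetical, and the saving and restoring of CPU context is omitted.

          #include <vector>

          enum class RunState { Running, ReadyToRun, Blocked };

          struct TaskControlBlock {
              int      priority;   // higher value means higher priority (an assumption)
              RunState state;
              // ... the saved CPU context would live here in a real TCB ...
          };

          // Return the highest priority ready-to-run task, or nullptr if none is ready.
          TaskControlBlock* schedule(std::vector<TaskControlBlock>& tasks) {
              TaskControlBlock* best = nullptr;
              for (auto& t : tasks)
                  if (t.state == RunState::ReadyToRun &&
                      (best == nullptr || t.priority > best->priority))
                      best = &t;
              return best;
          }

          int main() {
              std::vector<TaskControlBlock> tasks = {
                  {3, RunState::Blocked},     // highest priority, but blocked: not eligible
                  {2, RunState::ReadyToRun},  // this one is selected
                  {1, RunState::Running},
              };
              return schedule(tasks) ? 0 : 1;
          }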


          Depending on the particular RTOS, the scheduler might be invoked at times other than when the running task makes a system call. If the RTOS supports preemption, a context switch could also occur as the side effect of an interrupt. For example, an interrupt might occur because a peripheral device has input available. The interrupt handler captures this input, saves it for the listening task (via a system call), and then invokes the scheduler as part of its return protocol. If the listening task was blocked waiting for the input, it now becomes ready-to-run. If the listening task (or some other now-ready task) has a higher priority than the interrupted task, the scheduler performs a task switch, suspending the interrupted task and launching the higher priority task.


          In cases where a system call causes the current task to be put on hold, the context switch is called synchronous because it occurs through in-line code. In cases where an interrupt occurs and the interrupted task is put on hold, the context switch is called asynchronous because it happens at some arbitrary point during the execution of the interrupted task.













































          A Project Assessment Questionnaire

          There are no right or wrong answers to these questions; their purpose is to obtain an accurate view of the past and current development processes, methodologies, and practices used for your project.


          A. Project and Development Team Information


            1. Name of project: __________________________________________________


              1. Please provide a brief description of the project and product.


              2. Names and roles of respondent(s) to this questionnaire:



            2. Size of project (lines of code, function points, or other units):


              VA Java code?


              DB-related code?


              Other (C, C++, etc.)



            3. Delivery dates for key functions (or target delivery dates) including original dates and any reset dates: ______________________________________________ _______________________________________________________________


            4. Current stage of the project if not already shipped (e.g., functional test almost complete, in final test phase, product/release in beta, etc.):



              1. Does/did the project involve cross-site or cross-lab development?


              2. If yes, what site(s) and lab(s)?


              3. Is there a cross-lab development process available?


              4. At what organizational level are cross-site/cross-team development implemented (e.g., 1st line level, 2nd line/functional level, etc.)?




              1. Did the design point of the project serve to satisfy multiple users or constituencies?


              2. Was the project implemented on an open/common platform (e.g., Intel, PowerPC, Linux, Windows, FreeBSD)?

                Please specify:



            5. On a scale of 1 to 10 (10 being the most complex), how would you rate the complexity of the project based on your experience and knowledge of similar types of software projects?


            6. Development cycle time (equate ship date with final delivery in an iterative model):


              1. From design start to ship: _____ months


              2. From design start to bring-up: _____ months


              3. From bring-up to code integration complete (all coding done): _____ months


              4. From code integration complete to internal customer use (all development tests complete) of the product: _____ months


              5. From development test complete to GA: _____ months



            7. Development team information (please provide estimates if exact numbers are not available):


              1. Total size of team of the entire project: _____


              2. Number of VA Java programmers spending 100% of time on project: _____


              3. Number of VA Java programmers spending less than 100% of time on project: _____


              4. Number of database programmers spending 100% of time on project _____


              5. Number of database programmers spending less than 100% of time on project _____


              6. Number of other programmers _____ (specify skills)


              7. Distribution of team members by education background (percent):


                Computer science _____ %


                Computer engineering _____ %


                Others (please specify) _____ %


                _________________ _____ %


                Total 100.0% (N = total number of members)



              8. Approximate annual turnover rate of team members: _____



            8. How would you describe the skills and experience levels of this team (e.g., years with tools experience, very experienced team, large percent of new hires, etc.)?


              1. Are there sufficient skilled technical leaders/developers in the organization to lead and support the whole team?


              2. If possible, please give percent distribution estimates with regard to years of industry software development experience:


                < 2 years _____ %


                2-5 years _____ %


                > 5 years _____ %


                TOTAL _____ 100.0%






          B. Requirements and Specifications


            1. To what extent did the development team review the requirements before they were incorporated into the project? (Please mark the appropriate cell for each row in the table; columns: Always / Usually / Sometimes / Seldom / Never.)

              Functional requirements
              Performance requirements
              Reliability/availability/serviceability (RAS) requirements
              Usability requirements
              Web Publishing/ID requirements

              1. Per your experience and assessment, how important is this practice (requirements review) to the success of tools projects? (Please mark the appropriate cell for each row in the table; columns: Very Important / Important / Somewhat Important / Not Sure.)

                Functional requirements
                Performance requirements
                Reliability/availability/serviceability (RAS) requirements
                Usability requirements
                Web Publishing/ID requirements


            2. Specifications were developed based on the requirements and used as the basis for project planning, design and development, testing, and related activities.


              1. Always


              2. Usually


              3. Sometimes


              4. Seldom


              5. Never


                1. Per your experience and assessment, how important is this practice (specifications and requirements to guide overall project implementation) to the success of this project?


                  1. Very important


                  2. Important


                  3. Somewhat important


                  4. Not sure



                2. If your assessment of the above is "very important" or "important" and your project's actual practice didn't match the level of importance, what were the reasons for the disparity (e.g., obstacles, constraints, process, culture, experiences, etc.)?




            3. How did your project deal with (a) late requirements and (b) requirements changes? Please elaborate.



          Project Strengths and Weaknesses with Regard to Section B


          (B1) Is there any practice(s) by your project with regard to requirements and specifications that you consider a strength and that should be considered for implementation by other projects? If so, please describe and explain.


          (B2) If you were to do this project all over again, what would you do differently with regard to requirements and specifications, and why?


          C. Design, Code, and Reviews/Inspections


            1. To what extent did the design work of the project take the following into account? (Please mark the appropriate cells; columns: Largest Extent Possible / Important Consideration / Sometimes / Seldom / Don't Know.)

              (a) Design for extensibility
              (b) Design for performance
              (c) Design for reliability/availability/serviceability (RAS)
              (d) Design for usability
              (e) Design for debugability
              (f) Design for maintainability
              (g) Design for testability
              (h) Design with modularity (component structure) to allow for component ownership and future enhancements


            2. Was there an overall high-level design document in place for the project as overall guidelines for implementation and for common understanding across teams and individuals?


              a. Yes


              b. No



              1. Per your experience and assessment, how important is this practice (overall design document) to the success of this project?


                1. Very important


                2. Important


                3. Somewhat important


                4. Not sure




            3. To what extent were design reviews of the project conducted? (Please mark the appropriate cell in each row in the table; columns: All Design Work Done Rigorously / All Major Pieces of Design Items / Selected Items Based on Criteria (e.g., Error Recovery) / Design Reviews Were Occasionally Done / Not Done.)

              Original design
              Design changes/rework


              1. Per your experience and assessment, how important is this practice (design review/verification) to the success of this project?


                1. Very important


                2. Important


                3. Somewhat important


                4. Not sure



              2. If your assessment in question 16(a) is "very important" or "important" and your project's actual practice didn't match the level of importance, what were the reasons for the disparity (e.g., obstacles, constraints, process, culture, experiences, etc.)?



            4. What is the most common form of design reviews for this project?


              1. Formal review meeting with moderators, reviewers/inspectors, and defect tracking; issue resolution and rework completion as part of the completion criteria


              2. Formal review but issue resolution is up to the owner


              3. Informal review by experts of related areas


              4. Codeveloper (codesigner) informal review


              5. Other.....(Please specify.)



            5. In your development process, are there entry/exit criteria for major development phases?


              1. If yes to question 18, is the review process related to the entry/exit criteria of process phases (e.g., is the successful completion of design reviews part of exit criteria of the design phase)?


              2. If yes to question 18a, how effectively are the criteria followed/enforced?


                1. Very effectively


                2. Effectively


                3. Somewhat effectively


                4. Not effectively



              3. If yes to question 18, if entrance/exit criteria were not met, what did you do?


              4. Per your experience and assessment, how important is this practice (successful design review as part of exit criteria for the design phase) to the success of this project?


                1. Very important


                2. Important


                3. Somewhat important


                4. Not sure



              5. If your assessment in question 18d is "very important" or "important" and your project's actual practice didn't match the level of importance, what were the reasons for the disparity (e.g., obstacles, constraints, process, culture, experiences, etc.)?



            6. Were any coding standards used?

              If yes, please briefly describe.


            7. To what extent did the code implementation of the project take the following factors into account? (Please mark the appropriate cell for each row in the table; columns: Largest Extent Possible / Important Consideration / Sometimes / Seldom / Don't Know.)

              Code for extendibility
              Code for performance
              Code for debugability
              Code for reliability/availability/serviceability (RAS)
              Code for usability
              Code for maintainability


            8. To what extent were code reviews/inspections conducted? (Please mark the appropriate cell for each row in the table; columns: Rigorously 100% of the Code / Major Pieces of Code / Selected Items Based on Criteria (e.g., Error Recovery Code) / Occasionally Done / Not Done.)

              Original code implementation
              After significant rework/changes
              Final (or near final) code implementation


              1. Per your experience and assessment, how important is this practice (code reviews and inspections) to the success of this project?


                1. Very important


                2. Important


                3. Somewhat important


                4. Not sure



              2. If your assessment in question 21(a) is "very important" or "important" and your project's actual practice didn't match the level of importance, what were the reasons for the disparity (e.g., obstacles, constraints, process, culture, experiences, etc.)?




          Project Strengths and Weaknesses with Regard to Section C


          (C1) Is there any practice(s) by your project with regard to design, code, and reviews/inspections that you consider a strength and that should be considered for implementation by other projects? If so, please describe and explain.


          (C2) If you were to do this project all over again, what would you do differently with regard to design, code, and reviews/inspections, and why?


          D. Code Integration and Driver Build


            1. Was code integration dependency (e.g., with client software, with database, with information development, with other software, with other organizations or even with other sites) a concern for this project?


              a. Yes


              b. No



              1. If yes to question 22, please briefly describe how such dependencies were managed from a code integration/driver build point of view for this project and what (tools, process, etc.) was used.


              2. Per your experience and assessment, how important is this practice (code integration dependency management) to the success of this project?


                1. Very important


                2. Important


                3. Somewhat important


                4. Not sure



              3. If your assessment in question 22b is "very important" or "important" and your project's actual practice didn't match the level of importance, what were the reasons for the disparity (e.g., obstacles, constraints, process, culture, experiences, etc.)?



            2. With regard to the integration and build process, how do you control part integration?


              1. In a cross-site development environment, how is the part integration handled from an organizational point of view? Is there an owning organization responsible for part integration?


              2. If yes to question 23(a), how is the development group involved in the integration/bring-up task?



            3. Please briefly describe your process, if any, in enhancing code integration quality and driver stability.


              1. Per your experience and assessment, how important is this practice (code integration control, action/process on integration quality and driver stability) to the success of this project?


                1. Very important


                2. Important


                3. Somewhat important


                4. Not sure



              2. If your assessment in question 24(a) is "very important" or "important" and your project's actual practice didn't match the level of importance, what were the reasons for the disparity (e.g., obstacles, constraints, process, culture, experiences, etc.)?



            5. What is your driver build cycle (e.g., daily, weekly, biweekly, monthly, flexible (build when ready), etc.)? Please provide your observations on your build cycle as it relates to your project progress (schedule and quality). If it varied throughout the project, please describe how this was handled through the different phases (i.e., early function delivery and bring-up vs. fix-only mode, etc.).



          Project Strengths and Weaknesses with Regard to Section D


          (D1) Is there any practice(s) by your project with regard to code integration and driver build that you consider a strength and that should be considered for implementation by other projects? If so, please describe and explain.


          (D2) If you were to do this project all over again, what would you do differently with regard to code integration and driver build, and why?


          E. Test


            1. Was there a test plan in place for this project at the functional (development test) and overall project level (including independent test team)? Who initiated the test plan? (Please fill in the cells in the table; columns: Test Plan in Place (Yes/No) / Who Initiated / Who Executed.)

              Development Test
              Overall Project


            2. What types of test/test phases (unit, simulation test, functional, regression, independent test group, etc.) were conducted for this project? Please specify and give a brief explanation of each.


              1. Please elaborate on your error recovery or "bad path" testing.


              2. Please elaborate on your regression testing.



            3. Was test coverage/code coverage measurement implemented?

              If yes, for which test(s), and who does it?


            4. Are entry/exit criteria used for the major test phases/types?

              If yes,



              (a) please provide a brief description.


              (b) How are the criteria used or enforced?



              1. Per your experience and assessment, how important is this practice (entry/exit criteria for major tests) to the success of this project?


                1. Very important


                2. Important


                3. Somewhat important


                4. Not sure



              2. If your assessment in question 29a is "very important" or "important" and your project's actual practice didn't match the level of importance, what were the reasons for the disparity (e.g., obstacles, constraints, process, culture, experiences, etc.)?



            5. Is there a change control process in place for integrating fixes?


              a. Yes (please briefly describe)


              b. No



              1. If yes, how effectively in your assessment is the process being implemented?


                1. Very effectively


                2. Effectively


                3. Somewhat effectively


                4. Not effectively



              2. Per your experience and assessment, how important is this practice (change control for defect fixes) to the success of this project?


                1. Very important


                2. Important


                3. Somewhat important


                4. Not sure



              3. If your assessment in question 30b is "very important" or "important" and your project's actual practice/effectiveness didn't match the level of importance, what were the reasons for the disparity (e.g., obstacles, constraints, process, culture, experiences, etc.)?




          Project Strengths and Weaknesses with Regard to Section E


          (E1) Is there any practice(s) by your project with regard to testing that you consider a strength and that should be considered for implementation by other projects? If so, please describe and explain.


          (E2) If you were to do this project all over again, what would you do differently with regard to testing, and why?


          F. Project Management


            1. Was there a dedicated project manager for this project?


              1. How would you describe the role of project management for this project?


                1. Project management was basically done by line management.


                2. There was a project coordinator, coordinating activities and reporting status across development teams and line managers.


                3. There was a project manager but major project decisions were progress-driven by line management.


                4. The project manager, together with line management, was responsible for the success of the project. The project manager drove progress (e.g., dependency, schedule, quality) of the project and improvements across teams and line management areas.


                5. Other.....(Please specify/describe.)



              2. Per your experience and assessment, how important is this practice (effective role of project management) to the success of this project?


                1. Very important


                2. Important


                3. Somewhat important


                4. Not sure



              3. If your assessment in question 31(b) is "very important" or "important" and your project's actual practice didn't match the level of importance, what were the reasons for the disparity (e.g., obstacles, constraints, process, culture, experiences, etc.)?



            2. How were sizing estimates of the project (specifically the amount of design and development work) derived?


            3. How was the development schedule developed for this project? Please provide a brief statement (e.g., top-down [GA date mandated], bottom-up, bottom-up and top-down converged with proper experiences and history, based on sizing estimates, etc.).


              1. Per your experience and assessment, how important is this practice (effective sizing and schedule development process based on skills and experiences) to the success of this project?


                1. Very important


                2. Important


                3. Somewhat important


                4. Not sure



              2. If your assessment in question 33(a) is "very important" or "important" and your project's actual practice didn't match the level of importance, what were the reasons for the disparity (e.g., obstacles, constraints, process, culture, experiences, etc.)?



            4. Was a staged delivery/code drop plan developed early based on priorities and dependencies and executed?


              a. Yes, please briefly describe.


              b. No, please briefly describe.



              1. Per your experience and assessment, how important is this practice (good staging and code drop plan) to the success of this project?


                1. Very important


                2. Important


                3. Somewhat important


                4. Not sure



              2. If your assessment in question 34a is "very important" or "important" and your project's actual practice didn't match the level of importance, what were the reasons for the disparity (e.g., obstacles, constraints, process, culture, experiences, etc.)?



            5. Does this project have to satisfy multiple constituents or diverse users?

              If yes,


              1. how was work prioritized?


              2. How was workload distribution determined?


              3. How was conflict resolved?



            6. If this is a cross-site development project (Question 5), please briefly describe how cross-site dependency was managed.


            7. Under what level of management were major dependencies for deliverables managed?


              1. Under the same development manager


              2. Under the same functional manager


              3. Across functional areas but under the same development directors


              4. Coordination across development directors


              5. Under the same project executive


              6. Other... Please describe.



            8. What were the major obstacles, if any, to effective team communications for your project?


            9. Were major checkpoint reviews conducted at various stages of the project throughout the development cycle?


              a. Yes


              b. No



              1. If yes to question 39, please describe the major checkpoint review deliverables.


              2. If yes to question 39, how effective in your view were those checkpoint reviews? Please briefly explain.


                1. Very effective


                2. Effective


                3. Somewhat effective


                4. Not effective



              3. Per your experience and assessment, how important is this practice (effective checkpoint process) to the success of this project?


                1. Very important


                2. Important


                3. Somewhat important


                4. Not sure



              (d) If your assessment in question 39(c) is "very important" or "important" and your project's actual practice didn't match the level of importance, what were the reasons for the disparity (e.g., obstacles, constraints, process, culture, experiences, etc.)?




          Project Strengths and Weaknesses with Regard to Section F


          (F1) Are there any practices in your project with regard to project management that you consider strengths and that should be considered for implementation by other projects? If so, please describe and explain.


          (F2) If you were to do this project all over again, what would you do differently with regard to project management, and why?


          G. Metrics, Measurements, and Analysis


            40. Were any in-process metrics used to manage the progress (schedule and quality) of the project (e.g., function delivery tracking, problem backlog tracking, test plan execution, etc.)?


              a. Yes


              b. No



              (a) If yes, please specify/describe where applicable.


                1. Metric(s) used at the front end of the development cycle (i.e., up to code integration)


                2. Metric(s) used for driver stability


                3. Metric(s) used during testing with targets/baselines for comparisons


                4. Others (simulation measurement, test coverage/code coverage measurement, etc.): please specify.



              (b) Per your experience and assessment, how important is this practice (good metrics for schedule and quality management) to the success of this project?


                1. Very important


                2. Important


                3. Somewhat important


                4. Not sure



              (c) If your assessment in question 40(b) is "very important" or "important" and your project's actual practice didn't match the level of importance, what were the reasons for the disparity (e.g., obstacles, constraints, process, culture, experiences, etc.)?
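
              The in-process metrics named in question 40 are simple to compute once opened and closed problem counts are collected per reporting period. As a purely illustrative sketch (the weekly counts below are made up, not taken from any surveyed project), problem backlog tracking in Python might look like this:

                  # Illustrative sketch of problem backlog tracking (hypothetical data).
                  # Backlog for a week = previous backlog + defects opened - defects closed.
                  weekly_opened = [12, 18, 25, 30, 22, 15, 9]   # defects opened per week
                  weekly_closed = [5, 10, 20, 28, 27, 20, 14]   # defects closed per week

                  backlog = 0
                  for week, (opened, closed) in enumerate(zip(weekly_opened, weekly_closed), start=1):
                      backlog += opened - closed
                      print(f"Week {week}: opened={opened:3d} closed={closed:3d} backlog={backlog:3d}")

              A steadily shrinking backlog toward the end of testing is the pattern such a metric is meant to confirm; a growing backlog near a code drop is the early warning it is meant to give.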



            41. Was there any defect cause analysis (e.g., problem components, Pareto analysis) that resulted in improvement/corrective actions during the development of the project?

              If yes, please describe briefly.
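
              To make "Pareto analysis" concrete, here is a minimal sketch that ranks components by defect volume and flags the few components that account for the bulk of the defects. The component names and counts are hypothetical, chosen only to illustrate the technique:

                  # Illustrative Pareto analysis of defects by component (hypothetical data).
                  defects_by_component = {
                      "parser": 120, "ui": 45, "db-layer": 210,
                      "network": 30, "install": 15,
                  }

                  total = sum(defects_by_component.values())
                  cumulative = 0
                  # Rank components from most to least defective; the top few typically
                  # account for most defects (the 80/20 pattern Pareto analysis looks for).
                  for name, count in sorted(defects_by_component.items(), key=lambda kv: -kv[1]):
                      cumulative += count
                      share = 100 * cumulative / total
                      flag = "  <-- candidate for corrective action" if share <= 80 else ""
                      print(f"{name:10s} {count:4d} defects, cumulative {share:5.1f}%{flag}")

              Corrective actions (inspections, redesign, focused testing) are then directed at the flagged components first.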



          Project Strengths and Weaknesses with Regard to Section G


          (G1) Are there any practices in your project with regard to metrics, measurements, and analysis that you consider strengths and that should be considered for implementation by other projects? If so, please describe and explain.


          (G2) If you were to do this project all over again, what would you do differently with regard to metrics, measurements, and analysis, and why?


          H. Development Environment/Library


            42. Please name and describe briefly your development environment/platform(s) and source code library system(s).


            43. To what extent was the entire team familiar with the operational, build, and support environment?


            44. Was your current development environment, or any part of it, a hindrance in any way? What changes might enhance the development process for quality, efficiency, or ease of use? Please provide specifics.



          Project Strengths and Weaknesses with Regard to Section H


          (H1) Are there any practices in your project with regard to the development environment/library system that you consider strengths and that should be considered for implementation by other projects? If so, please describe and explain.


          (H2) If there is any development environment/library system that, per your assessment, is the best for tools development, please describe and explain.


          (H3) If you were to do this project all over again, what would you do differently with regard to development environment/library system, and why?


          I. Tools/Methodologies


            45. In what language(s) was the code for the project written?


            46. Was the project developed with


              1. object-oriented methodology?


              2. procedural methods?



            47. Are multiple environments required in order to fully test the project? If so, please describe.


            48. Are any kinds of simulation test environments available? If so, please describe.


              (a) If yes to question 48, how important is this to the success of tools projects?


                1. Very important


                2. Important


                3. Somewhat important


                4. Not sure




            49. Please describe briefly any tools that were used for each of the following areas:


              1. Design


              2. Debug


              3. Test: code coverage


              4. Test: automation/stress


              5. Other. Please explain.



            50. What was the learning curve for the development team to become proficient in using the above tools and the development environment/library discussed earlier? Please note any specific education that was needed.



          Project Strengths and Weaknesses with Regard to Section I


          (I1) Are there any practices in your project with regard to tools and methodologies that you consider strengths and that should be considered for implementation by other projects? If so, please describe and explain.


          (I2) If there are any tools and methodologies that, per your assessment, are best suited for projects similar to this one, please describe and explain.


          (I3) If you were to do this project all over again, what would you do differently with regard to tools and methodologies?


          J. Project Outcome Assessment


            51. Please provide a candid assessment of the schedule achievement (vs. original schedule) of the project. Please provide any pertinent information as appropriate (e.g., adherence to original schedule, meeting/not meeting GA date, meeting/not meeting interim checkpoints, any schedule reset, any function cutback/increase, unrealistic schedule to begin with, etc.).


            52. Please provide a candid assessment of the quality outcome of the project. Please provide any pertinent information as appropriate (e.g., in-process indicators, test defect volumes/rates, field quality indicators, customer feedback, customer satisfaction measurements, customer critical situations, etc.). Please attach any existing analyses and presentations as files or documents.


            53. How would you rate the overall success of the project (schedule, quality, costs, meeting commitments, etc.)?


              1. Very successful


              2. Successful


              3. Somewhat successful


              4. Not satisfactory




          Comments

            Please provide any comments, observations, or insights with regard to your project specifically or to tools projects in general.







