Chapter 1. What Is Software Quality?


Quality must be defined and measured if improvement is to be achieved. Yet a major problem in quality engineering and management is that the term quality is ambiguous and therefore commonly misunderstood. The confusion may be attributed to several causes. First, quality is not a single idea, but rather a multidimensional concept. The dimensions of quality include the entity of interest, the viewpoint on that entity, and the quality attributes of that entity. Second, for any concept there are levels of abstraction; when people talk about quality, one party could be referring to it in its broadest sense, whereas another might be referring to its specific meaning. Third, the term quality is a part of our daily language, and its popular and professional uses may be very different.


In this chapter we discuss the popular views of quality, its formal definitions by quality experts and their implications, the meaning and specific uses of quality in software, and the approach and key elements of total quality management.





















    15.9 Polymorphism and Dynamic Binding


    Limited use of polymorphism and dynamic binding is easily addressed by unfolding polymorphic calls, considering each method that can be dynamically bound to each polymorphic call. Complete unfolding is impractical when many references may each be bound to instances of several subclasses.


    Consider, for example, the code fragment in Figure 15.15. Object Account may be an instance of any of the classes USAccount, UKAccount, EUAccount, JPAccount, or OtherAccount. Method validateCredit can be dynamically bound to the validateCredit methods of any of the classes EduCredit, BizCredit, or IndividualCredit, each implementing different credit policies. Parameter creditCard may be dynamically bound to VISACard, AmExpCard, or ChipmunkCard, each with different characteristics. Even in this simple example, replacing the calls with all possible instances results in 45 different cases (5 possible types of account × 3 possible types of credit × 3 possible credit cards).












    abstract class Credit {
        ...
        abstract boolean validateCredit(Account a, int amt, CreditCard c);
        ...
    }










    Figure 15.15: A method call in which the method itself and two of its parameters can be dynamically bound to different classes.

    The explosion in possible combinations is essentially the same combinatorial explosion encountered if we try to cover all combinations of attributes in functional testing, and the same solutions are applicable. The combinatorial testing approach presented in Chapter 11 can be used to choose a set of combinations that covers each pair of possible bindings (e.g., Business account in Japan, Education customer using Chipmunk Card), rather than all possible combinations (Japanese business customer using Chipmunk card). Table 15.4 shows 15 cases that cover all pairwise combinations of calls for the example of Figure 15.15.


























































    Table 15.4: A set of test case specifications that cover all pairwise combinations of the possible polymorphic bindings of Account, Credit, and creditCard.

    Account        Credit             creditCard
    USAccount      EduCredit          VISACard
    USAccount      BizCredit          AmExpCard
    USAccount      IndividualCredit   ChipmunkCard
    UKAccount      EduCredit          AmExpCard
    UKAccount      BizCredit          VISACard
    UKAccount      IndividualCredit   ChipmunkCard
    EUAccount      EduCredit          ChipmunkCard
    EUAccount      BizCredit          AmExpCard
    EUAccount      IndividualCredit   VISACard
    JPAccount      EduCredit          VISACard
    JPAccount      BizCredit          ChipmunkCard
    JPAccount      IndividualCredit   AmExpCard
    OtherAccount   EduCredit          ChipmunkCard
    OtherAccount   BizCredit          VISACard
    OtherAccount   IndividualCredit   AmExpCard
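    To make the derivation of such a table concrete, the sketch below (not from the book) applies a simple greedy heuristic: start from all 45 binding combinations of Figure 15.15 and repeatedly keep the combination that covers the most not-yet-covered pairs. The class names are plain strings here; a real harness would instantiate the corresponding Chipmunk classes. The heuristic typically lands at or near the 15 rows of Table 15.4.

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class PairwiseBindings {
        static final String[] ACCOUNTS = {"USAccount", "UKAccount", "EUAccount", "JPAccount", "OtherAccount"};
        static final String[] CREDITS  = {"EduCredit", "BizCredit", "IndividualCredit"};
        static final String[] CARDS    = {"VISACard", "AmExpCard", "ChipmunkCard"};

        public static void main(String[] args) {
            // All 5 x 3 x 3 = 45 possible binding combinations.
            List<String[]> all = new ArrayList<>();
            for (String a : ACCOUNTS)
                for (String cr : CREDITS)
                    for (String cd : CARDS)
                        all.add(new String[] {a, cr, cd});

            // The 39 pairs that must each appear in at least one selected combination.
            Set<String> uncovered = new HashSet<>();
            for (String a : ACCOUNTS) { for (String cr : CREDITS) uncovered.add(a + "/" + cr);
                                        for (String cd : CARDS)   uncovered.add(a + "/" + cd); }
            for (String cr : CREDITS)   for (String cd : CARDS)   uncovered.add(cr + "/" + cd);

            // Greedily pick the combination that covers the most still-uncovered pairs.
            List<String[]> suite = new ArrayList<>();
            while (!uncovered.isEmpty()) {
                String[] best = null;
                int bestGain = -1;
                for (String[] row : all) {
                    int gain = 0;
                    for (String p : pairs(row)) if (uncovered.contains(p)) gain++;
                    if (gain > bestGain) { bestGain = gain; best = row; }
                }
                for (String p : pairs(best)) uncovered.remove(p);
                suite.add(best);
            }
            suite.forEach(r -> System.out.println(String.join("  ", r)));
            System.out.println(suite.size() + " test case specifications");
        }

        static String[] pairs(String[] r) {
            return new String[] {r[0] + "/" + r[1], r[0] + "/" + r[2], r[1] + "/" + r[2]};
        }
    }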



    The combinations in Table 15.4 were of dynamic bindings in a single call. Bindings in a sequence of calls can also interact. Consider, for example, method getYTDPurchased of class Account shown in Figure 15.4 on page 278, which computes the total yearly purchase associated with one account to determine the applicable discount. Chipmunk offers tiered discounts to customers whose total yearly purchase reaches a threshold, considering all subsidiary accounts.


    The total yearly purchase for an account is computed by method getYTDPurchased, which sums purchases by all customers using the account and all subsidiaries. Amounts are always recorded in the local currency of the account, but getYTDPurchased sums the purchases of subsidiaries even when they use different currencies (e.g., when some are bound to subclass USAccount and others to EUAccount). The intra- and interclass testing techniques presented in the previous section may fail to reveal this type of fault. The problem can be addressed by selecting test cases that cover combinations of polymorphic calls and bindings. To identify sequential combinations of bindings, we must first identify individual polymorphic calls and binding sets, and then select possible sequences.


    Let us consider for simplicity only the method getYTDPurchased. This method is called once for each customer and once for each subsidiary of the account and in both cases can be dynamically bound to methods belonging to any of the subclasses of Account (UKAccount, EUAccount, and so on). At each of these calls, variable totalPurchased is used and changed, and at the end of the method it is used twice more (to set an instance variable and to return a value from the method).


    Data flow analysis may be used to identify potential interactions between possible bindings at a point where a variable is modified and points where the same value is used. Any of the standard data flow testing criteria could be extended to consider each possible method binding at the point of definition and the point of use. For instance, a single definition-use pair becomes n × m pairs if the point of definition can be bound in n ways and the point of use can be bound in m ways. If this is impractical, a weaker but still useful alternative is to vary both bindings independently, which results in m or n pairs (whichever is greater) rather than their product. Note that this weaker criterion would be very likely to reveal the fault in getYTDPurchased, provided the choices of binding at each point are really independent rather than going through the same set of choices in lockstep. In many cases, binding sets are not mutually independent, so the selection of combinations is limited.
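    As a concrete illustration of the difference in cost, the sketch below (not from the book) enumerates the cases for a single definition-use pair on totalPurchased, taking the five Account subclasses as the possible bindings at both the definition and the use. The pairing scheme in the second loop is just one way to keep the two choices out of lockstep, so that every case mixes accounts of two different classes (and hence two different currencies).

    import java.util.Arrays;
    import java.util.List;

    public class DefUseBindings {
        public static void main(String[] args) {
            List<String> bindings = Arrays.asList(
                "USAccount", "UKAccount", "EUAccount", "JPAccount", "OtherAccount");

            // Stronger criterion: every (definition binding, use binding) pair, 5 x 5 = 25 cases.
            System.out.println("all pairs: " + bindings.size() * bindings.size());

            // Weaker criterion: vary the two bindings independently, max(5, 5) = 5 cases.
            // Offsetting the use binding by one avoids running through the same choices in lockstep.
            for (int i = 0; i < bindings.size(); i++) {
                String def = bindings.get(i);
                String use = bindings.get((i + 1) % bindings.size());
                System.out.println("case " + i + ": definition bound to " + def + ", use bound to " + use);
            }
        }
    }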


































    Spring Documentation



    One of the aspects of Spring that makes it such a useful framework for real developers who are building real applications is its wealth of well-written, accurate documentation. One of the key goals for the 1.1 release was to ensure that all the documentation was finished off and polished by the development team. This means that every feature of Spring is not only fully documented in the JavaDoc, but is also covered in the Spring reference manual included in every distribution.


    If you haven't yet familiarized yourself with the Spring JavaDoc and the reference manual, do so now. This book does not aim to be a replacement for either of these resources; rather, it aims to be a complementary reference, demonstrating how to build a Spring-based application from the ground up.






























    Sharing Functions between Frames


    One common frame layout uses a permanent navigation frame and a content frame that might display a variety of different pages. Once again, it makes sense to put the call to the external JavaScript file into the page that's always present (the frameset page) instead of duplicating it for every possible content page. In Figure 5.10, we use this capability to have many pages share an identical function that returns a random banner image. Script 5.12 loads the pages into the frameset.




    Figure 5.10. The information in the right, or content, frame is created by code called from the frameset.





    Script 5.12. This script allows you to share functions between multiple frames.





    var bannerArray = new Array("images/redBanner.gif", "images/greenBanner.gif", "images/blueBanner.gif");

    window.onload = initFrames;

    function initFrames() {
        var leftWin = document.getElementById("left").contentWindow.document;

        for (var i=0; i<leftWin.links.length; i++) {
            leftWin.links[i].target = "content";
            leftWin.links[i].onclick = resetBanner;
        }

        setBanner();
    }

    function setBanner() {
        var contentWin = document.getElementById("content").contentWindow.document;
        var randomNum = Math.floor(Math.random() * bannerArray.length);

        contentWin.getElementById("adBanner").src = bannerArray[randomNum];
    }

    function resetBanner() {
        setTimeout("setBanner()", 1000);
    }




    To use a function on another page:












    1.

    var bannerArray = new Array("images/redBanner.gif", "images/greenBanner.gif", "images/blueBanner.gif");




    Start by creating a new array that contains all the possible banner image names, and assign the array to the bannerArray variable.


    2.

    window.onload = initFrames;




    When the frameset loads, call initFrames().


    3.

    var leftWin = document.getElementById("left").contentWindow.document;




    Now we start the code inside the initFrames() function. We begin by creating the leftWin variable and setting it the same way we've previously stored framed pages: given the frame name (left), get that element (document.getElementById("left")); given that element, get the contentWindow property (document.getElementById("left").contentWindow); and given the contentWindow property, get its document property.


    4.


    for (var i=0; i<leftWin.links.length; i++) {
        leftWin.links[i].target = "content";
        leftWin.links[i].onclick = resetBanner;




    Because this function is being called from the frameset's context, setting the left navigation bar's links is slightly different than in previous examples. This time, we reset both the target property and the onclick handler for each link. The target is set to "content" and the onclick handler is set to the resetBanner function.


    5.

    setBanner();




    As the last initialization step, the setBanner() function is called.


    6.

    var contentWin = document.getElementById("content").contentWindow.document;




    The setBanner() function loads up the content window and calculates a random number. Then, the ad banner in the content window is set to a random ad from the array. We begin by creating the contentWin variable and setting it the same way we've previously stored framed pages: given the frame name (content), get that element (document.getElementById("content")); given that element, get the contentWindow property (document.getElementById("content").contentWindow); and given the contentWindow property, get its document property.


    7.
    var randomNum = Math.floor(Math.random() * bannerArray.length);


    This line uses the Math.random() function multiplied by the number of elements in the bannerArray array to calculate a random number from 0 up to (but not including) the number of elements in the array. Then it places the result into the randomNum variable.


    8.

    contentWin.getElementById("adBanner").src = bannerArray[randomNum];




    Here, we set the src for adBanner to the current item in the array. That's the new image name, which will then be displayed on the page.


    9.

    function resetBanner() {
    setTimeout("setBanner()",1000);
    }




    The resetBanner() function is a little tricky, although it only has a single line of code. What it's doing is waiting for the content frame to load with its new page (one second should be sufficient), after which it can then call setBanner() to reset the banner.


    If we instead called setBanner() immediately, the new content page might not have loaded yet. In that case, we would have a problem, as we would then either get an error (because adBanner wasn't found), or we would reset the old adBanner, the one from the page that's being unloaded.



    Tip



    • Note that resetBanner does not return false; this means that the browser will both do what's here and load the page from the href. This script depends on that, which is why both the onclick handler and target were set.

























    Chapter 6. Disk Arrays





    I/O certainly has been lagging in the last decade.

    --Seymour Cray, 1976





    Most of the improvements in disk technology have been made with the

    aim of increasing the capacity/price ratio. Although these changes

    have made mass storage much more affordable, they have brought about

    two fairly serious problems:





    • When a single disk can store tens of gigabytes of data, the

      reliability of an individual disk becomes a serious concern, as the

      failure of an individual disk results in the loss of large amounts of

      data.

    • Disk performance has lagged drastically behind capacity/price

      improvements.





    To solve these problems, a great deal

    of effort has been put into designing methods of organizing sets of

    disks to enhance both reliability and performance. This has come to

    be known as RAID, which either stands for "Redundant

    Array of Inexpensive Disks" or

    "Redundant Array of Independent

    Disks," depending on who you listen to. There are

    seven levels of RAID; each takes a different approach to solving

    these problems. The types of RAID are summarized in Table 6-1.





    Table 6-1. A summary of RAID levels

    RAID level   Organization                    Strengths                            Weaknesses
    RAID 0       Striping                        Very fast, simple                    Low reliability
    RAID 1       Mirroring                       Fast, simple                         Expensive
    RAID 2       Reliability via Hamming codes   Reliable                             Inflexible
    RAID 3       Parity                          Fast sequential access, reliable     Slow random access, implementation difficult
    RAID 4       Parity                          Average performance, reliable        Bottlenecks on dedicated parity disk
    RAID 5       Parity                          Good performance, reliable, cheap    Slow writes, heavy cache requirements
    RAID 10      Striped mirrors                 Very fast, simple                    Expensive
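    The capacity side of these tradeoffs is easy to quantify. The sketch below (not from this chapter) computes the usable capacity of an array of n identical disks for a few of the levels in Table 6-1; RAID 3 and RAID 4 dedicate one disk's worth of parity and so match RAID 5, while RAID 2's Hamming-code overhead depends on the word size and is omitted.

    public class RaidCapacity {
        // Usable capacity, given n identical disks of c gigabytes each.
        static double usable(String level, int n, double c) {
            switch (level) {
                case "RAID 0":  return n * c;        // striping, no redundancy
                case "RAID 1":  return n * c / 2;    // mirrored pairs
                case "RAID 5":  return (n - 1) * c;  // one disk's worth of distributed parity
                case "RAID 10": return n * c / 2;    // striped mirrors
                default: throw new IllegalArgumentException(level);
            }
        }

        public static void main(String[] args) {
            int n = 8;          // a hypothetical array of 8 disks
            double c = 500.0;   // 500 GB each
            for (String level : new String[] {"RAID 0", "RAID 1", "RAID 5", "RAID 10"})
                System.out.printf("%-8s usable: %.0f GB%n", level, usable(level, n, c));
        }
    }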







    There is a fundamental tradeoff in configuring disk arrays -- in fact, it is the classic example of one of the principles of performance tuning (see Section 1.2.2). The tradeoff is stated in the following note.











    Fast, cheap, safe.



    Of these three attributes, you may pick only two.







    In this chapter, I discuss some of the fundamentals of assembling multiple disks into a single logical unit: the basic terminology used, the differences between software and hardware disk arrays, recipes for designing disk arrays, and many other aspects of modern array implementation.





























      9.1. Expected Errors with Calculations


      Sometimes we want to test that invalid data is rejected, as expected. For example, consider Figure 9.1, in which a negative amount is used in the first test row.


      Figure 9.1. Negative Amount

      CalculateDiscount
      amount      discount()
      -100.00     0.00
      1200.00     60.00



      When this Fit table is run, we get the report (partly) shown in Figure 9.2, in which the program rejects the negative amount. The report also provides programmer-specific information about the error in the yellow-colored cell (see Plate 5), which we can ignore.



      Figure 9.2. Negative Amount Is Rejected





      Assuming that our business rule stipulates that it doesn't make sense to calculate the discount on a negative amount, we'd expect to get an error in that first test row. We can express that expectation by using the special value error in the calculated column instead, as shown in the Fit test in Figure 9.3.


      Figure 9.3. Use of error in ColumnFixture

      CalculateDiscount
      amount      discount()
      -100.00     error
      1200.00     60.00



      The error cell is colored green if an error occurred (an exception), as shown in Figure 9.4. Otherwise, it is colored red.



      Figure 9.4. Negative Amount in error, as Expected







      It makes sense to include the error case here because there is only one. (Fit will complain about any values that are not numbers.) However, many values in a table may not be valid, such as the dates entered by the user through the user interface. In that case, it makes sense to split the table into two: one for defining valid date values and one for defining the calculations on the valid dates. This topic is covered further in Chapter 18.
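      For reference, a minimal sketch of the fixture behind these tables might look like the class below. It assumes the business rule is a 5% discount on amounts over 1,000.00 (the rule itself is not spelled out in this section); the exception thrown for a negative amount is what the error cell expects.

      import fit.ColumnFixture;

      public class CalculateDiscount extends ColumnFixture {
          public double amount;            // bound to the "amount" column

          public double discount() {       // bound to the "discount()" column
              if (amount < 0) {
                  throw new RuntimeException("amount must not be negative");
              }
              return amount > 1000.00 ? amount * 0.05 : 0.00;
          }
      }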















        8.6 Handling Signals: Errors and Async-signal Safety


        Be aware of three difficulties that can occur when signals interact with function calls. The first concerns whether POSIX functions that are interrupted by signals should be restarted. Another problem occurs when signal handlers call nonreentrant functions. A third problem involves the handling of errors that use errno.


        What happens when a process catches a signal while it is executing a library function? The answer depends on the type of call. Terminal I/O can block the process for an undetermined length of time. There is no limit on how long it takes to get a key value from a keyboard or to read from a pipe. Function calls that perform such operations are sometimes characterized as "slow". Other operations, such as disk I/O, can block for short periods of time. Still others, such as getpid, do not block at all. Neither of these last types is considered to be "slow".


        The slow POSIX calls are the ones that are interrupted by signals. They return when a signal is caught and the signal handler returns. The interrupted function returns -1 with errno set to EINTR. Look in the ERRORS section of the man page to see if a given function can be interrupted by a signal. If a function sets errno and one of the possible values is EINTR, the function can be interrupted. The program must handle this error explicitly and restart the system call if desired. It is not always possible to logically determine which functions fit into this category, so be sure to check the man page.


        It was originally thought that the operating system needs to interrupt slow calls to allow the user the option of canceling a blocked call. This traditional treatment of handling blocked functions has been found to add unneeded complexity to many programs. The POSIX committee decided that new functions (such as those in the POSIX threads extension) would never set errno to EINTR. However, the behavior of traditional functions such as read and write was not changed. Appendix B gives a restart library of wrappers that restart common interruptible functions such as read and write.


        Recall that a function is async-signal safe if it can be safely called from within a signal handler. Many POSIX library functions are not async-signal safe because they use static data structures, call malloc or free, or use global data structures in a nonreentrant way. Consequently, a single process might not correctly execute concurrent calls to these functions.


        Normally this is not a problem, but signals add concurrency to a program. Since signals occur asynchronously, a process may catch a signal while it is executing a library function. (For example, suppose the program interrupts a strtok call and executes another strtok in the signal handler. What happens when the first call resumes?) You must therefore be careful when calling library functions from inside signal handlers. Table 8.2 lists the functions that POSIX guarantees are safe to call from a signal handler. Notice that functions such as fprintf from the C standard I/O library are not on the list.


        Signal handlers can be entered asynchronously, that is, at any time. Care must be taken so that they do not interfere with error handling in the rest of the program. Suppose a function reports an error by returning -1 and setting errno. What happens if a signal is caught before the error message is printed? If the signal handler calls a function that changes errno, an incorrect error might be reported. As a general rule, signal handlers should save and restore errno if they call functions that might change errno.



        Example 8.28

        The following function can be used as a signal handler. The myhandler saves the value of errno on entry and restores it on return.



        void myhandler(int signo) {
            int esaved;

            esaved = errno;
            write(STDOUT_FILENO, "Got a signal\n", 13);
            errno = esaved;
        }


        Table 8.2. Functions that POSIX guarantees to be async-signal safe.

        _Exit            getpid              sigaddset
        _exit            getppid             sigdelset
        accept           getsockname         sigemptyset
        access           getsockopt          sigfillset
        aio_error        getuid              sigismember
        aio_return       kill                signal
        aio_suspend      link                sigpause
        alarm            listen              sigpending
        bind             lseek               sigprocmask
        cfgetispeed      lstat               sigqueue
        cfgetospeed      mkdir               sigset
        cfsetispeed      mkfifo              sigsuspend
        cfsetospeed      open                sleep
        chdir            pathconf            socket
        chmod            pause               socketpair
        chown            pipe                stat
        clock_gettime    poll                symlink
        close            posix_trace_event   sysconf
        connect          pselect             tcdrain
        creat            raise               tcflow
        dup              read                tcflush
        dup2             readlink            tcgetattr
        execle           recv                tcgetpgrp
        execve           recvfrom            tcsendbreak
        fchmod           recvmsg             tcsetattr
        fchown           rename              tcsetpgrp
        fcntl            rmdir               time
        fdatasync        select              timer_getoverrun
        fork             sem_post            timer_gettime
        fpathconf        send                timer_settime
        fstat            sendmsg             times
        fsync            sendto              umask
        ftruncate        setgid              uname
        getegid          setpgid             unlink
        geteuid          setsid              utime
        getgid           setsockopt          wait
        getgroups        setuid              waitpid
        getpeername      shutdown            write
        getpgrp          sigaction


        Signal handling is complicated, but here are a few useful rules.


        • When in doubt, explicitly restart library calls within a program or use the restart library of Appendix B.

        • Check each library function used in a signal handler to make sure that it is on the list of async-signal safe functions.

        • Carefully analyze the potential interactions between a signal handler that changes an external variable and other program code that accesses the variable. Block signals to prevent unwanted interactions.

        • Save and restore errno when appropriate.



















          20.2 Quality and Process


          A software plan involves many intertwined concerns, from schedule to cost to usability and dependability. Despite the intertwining, it is useful to distinguish individual concerns and objectives to lessen the likelihood that they will be neglected, to allocate responsibilities, and to make the overall planning process more manageable.


          For example, a mature software project plan will include architectural design reviews, and the quality plan will allocate effort for reviewing testability aspects of the structure and build order. Clearly, design for testability is an aspect of software design and cannot be carried out by a separate testing team in isolation. It involves both test designers and other software designers in explicitly evaluating testability as one consideration in selecting among design alternatives. The objective of incorporating design for testability in the quality process is primarily to ensure that it is not overlooked and secondarily to plan activities that address it as effectively as possible.


          An appropriate quality process follows a form similar to the overall software process in which it is embedded. In a strict (and unrealistic) waterfall software process, one would follow the "V model" (Figure 2.1 on page 16) in a sequential manner, beginning unit testing only as implementation commenced following completion of the detailed design phase, and finishing unit testing before integration testing commenced. In the XP "test first" method, unit testing is conflated with subsystem and system testing. A cycle of test design and test execution is wrapped around each small-grain incremental development step. The role that inspection and peer reviews would play in other processes is filled in XP largely by pair programming. A typical spiral process model lies somewhere between, with distinct planning, design, and implementation steps in several increments coupled with a similar unfolding of analysis and test activities. Some processes specifically designed around quality activities are briefly outlined in the sidebars on pages 378, 380, and 381.


          A general principle, across all software processes, is that the cost of detecting and repairing a fault increases as a function of time between committing an error and detecting the resultant faults. Thus, whatever the intermediate work products in a software plan, an efficient quality plan will include a matched set of intermediate validation and verification activities that detect most faults within a short period of their introduction. Any step in a software process that is not paired with a validation or verification step is an opportunity for defects to fester, and any milestone in a project plan that does not include a quality check is an opportunity for a misleading assessment of progress.


          The particular verification or validation step at each stage depends on the nature of the intermediate work product and on the anticipated defects. For example, anticipated defects in a requirements statement might include incompleteness, ambiguity, inconsistency, and overambition relative to project goals and resources. A review step might address some of these, and automated analyses might help with completeness and consistency checking.


          The evolving collection of work products can be viewed as a set of descriptions of different parts and aspects of the software system, at different levels of detail. Portions of the implementation have the useful property of being executable in a conventional sense, and are the traditional subject of testing, but every level of specification and design can be both the subject of verification activities and a source of information for verifying other artifacts. A typical intermediate artifact - say, a subsystem interface definition or a database schema - will be subject to the following steps:



          Internal consistency check Check the artifact for compliance with structuring rules that define "well-formed" artifacts of that type. An important point of leverage is defining the syntactic and semantic rules thoroughly and precisely enough that many common errors result in detectable violations. This is analogous to syntax and strong-typing rules in programming languages, which are not enough to guarantee program correctness but effectively guard against many simple errors.



          External consistency check Check the artifact for consistency with related artifacts. Often this means checking for conformance to a "prior" or "higher-level" specification, but consistency checking does not depend on sequential, top-down development - all that is required is that the related information from two or more artifacts be defined precisely enough to support detection of discrepancies. Consistency usually proceeds from broad, syntactic checks to more detailed and expensive semantic checks, and a variety of automated and manual verification techniques may be applied.



          Generation of correctness conjectures Correctness conjectures, which can be test outcomes or other objective criteria, lay the groundwork for external consistency checks of other work products, particularly those that are yet to be developed or revised. Generating correctness conjectures for other work products will frequently motivate refinement of the current product. For example, an interface definition may be elaborated and made more precise so that implementations can be effectively tested.





















          Chapter 9. Oracle and Data Warehousing




          Although a database is general-purpose software, it provides a solution for a variety of technical requirements, including:

          • Recording and storing data: Reliably storing data and protecting each user's data from the effects of other users' changes

          • Reading data for online viewing and reports: Providing a consistent view of the data

          • Analyzing data to detect business trends: Summarizing data and relating many different summaries to each other





          The last two solutions are often deployed as a data warehouse, part of an infrastructure that provides business intelligence for corporate performance management.



          Data warehousing and business intelligence implementations are a
          popular and powerful trend in information technology. There is a very
          simple motivation behind this trend: businesses gain the ability to
          use their data in making strategic and tactical decisions. Business
          intelligence can reveal hidden value embedded in an
          organization's data stores.



          Recognizing the trend, Oracle began adding data warehousing features to Oracle7 in the early 1990s. Additional features for warehousing and business intelligence appeared in subsequent releases, particularly to enable better performance, functionality, scalability, and management. Oracle also developed tools for building and using a business intelligence infrastructure, including data movement and business analysis tools.



          A business intelligence infrastructure can enable business analysts
          to answer the following:



          • How does a scenario relate to past business results?

          • What knowledge can be gained by looking at the data differently?

          • What could happen in the future?

          • How can business be changed to positively influence the future?


          This chapter introduces the basic concepts, technologies, and tools used in data warehousing and business intelligence. To help you understand how Oracle addresses infrastructure and analysis issues, we'll first spend a little time describing basic terms and technologies.

















            9.1 Overview



            In this section we introduce the JSTL SQL actions and the configuration settings used in conjunction with those actions.



            SQL Actions



            JSTL provides six SQL actions that let you do the following:





            • Connect to a database



            • Query a database and access query results



            • Perform database queries with prepared statements



            • Update a database



            • Execute database transactions





            The JSTL SQL actions are listed in Table 9.1.



            The <sql:setDataSource> action lets you specify a data source for your database. You can specify that data source as an instance of javax.sql.DataSource, or as a string representing either a Java Naming and Directory Interface (JNDI) relative path or JDBC parameters. You can store that data source in the SQL_DATA_SOURCE configuration setting or in a scoped variable.



            The <sql:query> action queries a database and stores the result in a scoped variable. The <sql:query> action lets you use prepared statements with SQL query parameters and also lets you limit the number of rows in a query.



            The <sql:update> action updates a database (by inserting, updating, or deleting rows) or modifies one (by creating, altering, or dropping tables) and lets you store the number of rows affected by the update in a scoped variable. Like <sql:query>, <sql:update> can execute prepared statements.





































































            Table 9.1. JSTL Database Actions

            Action                 Description
            <sql:setDataSource>    Stores a data source in a scoped variable or the SQL_DATA_SOURCE configuration setting.
            <sql:query>            Queries a database and stores the query result in a scoped variable.
            <sql:update>           Updates a database with a Data Manipulation Language (DML) command or a Data Definition Language (DDL) command.
            <sql:param>            Sets SQL parameters for enclosing <sql:query> and <sql:update> actions.
            <sql:dateParam>        Sets SQL date parameters for enclosing <sql:query> and <sql:update> actions.
            <sql:transaction>      Establishes a transaction for enclosed <sql:query> and <sql:update> actions.





            The <sql:transaction> action defines a database transaction with nested <sql:query> and <sql:update> actions. If one of the nested queries or updates fails, <sql:transaction> rolls back the entire transaction and throws an exception; otherwise, <sql:transaction> commits the transaction.



            The <sql:param> and <sql:dateParam> actions supply SQL parameters for prepared statements to <sql:query> or <sql:update> actions.





            Configuration Settings



            JSTL uses two configuration settings in conjunction with the SQL actions: SQL_DATA_SOURCE and SQL_MAX_ROWS.[2] You can use the former to specify a data source, and the latter to limit the number of rows returned by database queries.



            [2] See "Configuration Settings" on page 230 for more information about configuration settings.



            The SQL_DATA_SOURCE configuration setting is listed in Table 9.2.



            The SQL_DATA_SOURCE configuration setting is used by the <sql:query>, <sql:update>, and <sql:transaction> actions. You can set that configuration setting in the deployment descriptor, in a business component with the Config class, or with the <sql:setDataSource> action. See "How JSTL Locates Data Sources" on page 363 for more information about how the SQL_DATA_SOURCE configuration setting is used, and see "Creating Data Sources" on page 365 for more information about how you can set that configuration setting.
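            As a sketch of the "business component" route (not an example from the book), a ServletContextListener could store an application-scoped data source before any JSP runs; the JNDI path "jdbc/ExampleDB" is just a placeholder.

            import javax.servlet.ServletContextEvent;
            import javax.servlet.ServletContextListener;
            import javax.servlet.jsp.jstl.core.Config;

            public class DataSourceInitializer implements ServletContextListener {
                public void contextInitialized(ServletContextEvent event) {
                    // Store a JNDI relative path as the SQL_DATA_SOURCE setting; a
                    // javax.sql.DataSource instance could be stored here instead.
                    Config.set(event.getServletContext(), Config.SQL_DATA_SOURCE, "jdbc/ExampleDB");
                }

                public void contextDestroyed(ServletContextEvent event) {}
            }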





















































            Table 9.2. The SQL_DATA_SOURCE Configuration Setting

            Config Constant    SQL_DATA_SOURCE
            Name               javax.servlet.jsp.jstl.sql.dataSource
            Type               java.lang.String or javax.sql.DataSource
            Set by             <sql:setDataSource>, Deployment Descriptor, Config class
            Used by            <sql:query>, <sql:update>, and <sql:transaction>





            The SQL_MAX_ROWS configuration setting is listed in Table 9.3.





















































            Table 9.3. The SQL_MAX_ROWS Configuration Setting

            Config Constant    SQL_MAX_ROWS
            Name               javax.servlet.jsp.jstl.sql.maxRows
            Type               java.lang.String or java.lang.Integer
            Set by             Deployment Descriptor, Config class
            Used by            <sql:query>





            The SQL_MAX_ROWS configuration setting is not set by any JSTL actions, but you can set that configuration setting in the deployment descriptor or in a business component with the Config class. See "Using <sql:query>" on page 378 for more information about the SQL_MAX_ROWS configuration setting.
















               

               












              Recipe 20.9. Capturing the Output and Error Streams from a Unix Shell Command



              Problem


              You want to run an external program as in Recipe 20.8, but you also want to capture the standard error stream. Using popen only gives you access to the standard output.




              Solution


              Use the open3 library in the Ruby standard library. Its popen3 method takes a code block, to which it passes three IO streams: one each for standard input, output, and error.


              Suppose you perform the Unix ls command to list a nonexistent directory. ls will rightly object to this and write an error message to its standard error stream. If you invoked ls with IO.popen or the %x{} construction, that error message is passed right along to the standard error stream of your Ruby process. You can't capture it or suppress it:



              %x{ls no_such_directory}
              # ls: no_such_directory: No such file or directory



              But if you use popen3, you can grab that error message and do whatever you want with it:



              require 'open3'

              Open3.popen3('ls -l no_such_directory') { |stdin, stdout, stderr| stderr.read }
              # => "ls: no_such_directory: No such file or directory\n"





              Discussion


              The same caveats in the previous recipe apply to the IO streams returned by popen3. If you're running a command that accepts data on standard input, and you read from stdout before closing stdin, your process will hang.


              Unlike IO.popen, the popen3 method is only implemented on Unix systems. However, the win32-open3 package (part of the Win32Utils project) provides a popen3 implementation.




              See Also


              • Recipe 20.8, "Driving an External Process with popen"

              • Like many other Windows libraries for Ruby, win32-open3 is available from http://rubyforge.org/projects/win32utils


















              Chapter 12 Relation to the 34 Competencies



              This chapter relies on the competencies shown here. For resource assignments, the key product development technique needed is an awareness of the processes of software development using different life cycles. Life cycles must be evaluated and tailored to the individual needs of each project. A general understanding of software development activities (software engineering) and how to define the software product are competencies needed for task and activity identification.



              The project management skills needed here are a continuation of those needed for building the WBS, such as documenting plans in an arranged structure and finding tasks and activities that can be used to create a schedule. Activity ID requires the people skills of leadership, interface, and communication, as well as the ability to present ideas effectively throughout the identification process.



              People management skills are needed the most here. Interaction and communications skills are vital to selecting the right people for the right activities. You may have to recruit people to your project, through interviews and a personnel-hiring process. Performance appraisal skills also are necessary to track progress and support career planning. Selecting and building a team come into play as you assemble resources for the activities. These are the skills and competencies to be touched on in this chapter.



              Product Development Techniques



              6. Managing subcontractors: Planning, managing, and monitoring performance

              11. Understanding development activities: Learning the software development cycle



              Project Management Skills



              18. Scheduling: Creating a schedule and key milestones

              21. Tracking process: Monitoring compliance of the project team



              People Management Skills



              23. Appraising performance: Evaluating teams to enhance performance

              26. Interaction and communication: Dealing with developers, upper management, and other teams

              29. Negotiating successfully: Resolving conflicts and negotiating successfully

              30. Planning careers: Structuring and giving career guidance

              32. Recruiting: Recruiting and interviewing team members successfully

              33. Selecting a team: Choosing highly competent teams

              34. Teambuilding: Forming, guiding, and maintaining an effective team





















                Collecting System Performance Data


                Users call their IT department when they have delays in accessing data or applications. Good tools are needed to help an operator pinpoint the source of the problem. This section covers some of the interesting performance and resource-utilization metrics, and the tools available to collect data about these metrics.


                A wide range of conditions may result in resource and performance problems. Running out of available memory may be caused by a failure of a memory component or by a memory leak in an application. A sudden rise in CPU utilization could be an indication of processor failure or the introduction on the system of a CPU-intensive application. Analysis is needed to determine whether resource problems can be fixed with a configuration change, hardware repair, or other techniques.


                Many important system resources have configured limits. The following system resource metrics are important to monitor:



                • Number of named pipes


                • Number of messages and message queues


                • Number of system semaphores


                • Amount of shared memory


                • Number of open files


                • Number of processes



                Earlier, this chapter discussed some of the tools that can be used to check system resource usage. The sar and sysdef commands can compare current usage to configured limits. An EMS monitor is available to detect thresholds being exceeded for the following resources:



                • Callout table


                • Process table


                • File descriptor table


                • File lock table


                • Shared memory


                • System semaphores


                • Message queues and message segments



                The performance tools discussed in this section can also detect resource usage problems.


                Some system performance monitoring is available from the SAM Performance Monitors, with which an administrator can obtain information on system, disk, and virtual memory activity, for example. Text-based information is displayed in a Motif window when one of the desired metrics is selected.


                Having historical information is important, to understand how the system performance has varied over time. Knowing how your system behaves under normal conditions helps when trying to troubleshoot system performance problems. Note that the performance tools themselves impact the performance of the system, so you need to find a tool with low overhead.


                This section describes some common tools for measuring and monitoring system performance. Here are some of the key metrics discussed in this section:



                • Buffer cache queue length:
                  Refers to the number of processes blocked that are waiting for updates to the buffer cache. If this value is high, it could be an indication of a memory bottleneck.



                • Context switches:
                  How often processes are being swapped out of the run queue.



                • CPU utilization:
                  Expressed as a percentage of time spent in various execution states. Low utilization indicates that the CPU spent the majority of its time in the idle state.



                • CPU run queue length:
                  The average number of processes in the run state waiting to be scheduled.



                • Memory utilization:
                  Usually expressed as a ratio of the amount of memory in use versus the total memory available.



                • Paging:
                  Refers to the transfer of data between virtual memory (disks) and physical memory.



                • Swapping:
                  Refers to the transfer of data between physical memory and a special virtual memory area reserved for swapping.




                Performance tools, such as BMC PATROL and MeasureWare, don't always provide the same set of metrics on all platforms. For simplicity, this section focuses on the Sun Solaris and HP-UX platforms only. Also, these products are continually being enhanced, so the actual metrics available for use in your environment may not precisely match the information presented in this section.


                MeasureWare


                HP MeasureWare Agent is a Hewlett-Packard product that collects and logs resource and performance metrics. MeasureWare agents run and collect data on the individual server systems being monitored. Agents exist for many platforms and operating systems, including HP-UX, Solaris, and AIX.


                The MeasureWare agents collect data, summarize it, timestamp it, log it, and send alarms when appropriate. The agents collect and report on a wide variety of system resources, performance metrics, and user-defined data. The information can then be exported to spreadsheets or to performance analysis programs, such as PerfView. The data can be used by these programs to generate alarms to warn of potential performance problems. By using historical data, trends can be discovered. This can help address resource issues before they affect system performance.


                MeasureWare agents collect data at three different levels: global system metrics, application, and process metrics. Global and application data is summarized at five-minute intervals, whereas process data is summarized at one-minute intervals. Important applications can be defined by an administrator by listing the processes that make up an application in a configuration file.

























                Table 4-4. Categories of MeasureWare Agent Information

                Category       Metric Type
                System         CPU, disk, networking, memory, process queue depths, user/process information, and summary information
                Application    CPU, disk, memory, process count, average process wait states, and summary information
                Process        CPU, disk, memory, average process wait states, overall process lifetime, and summary information
                Transaction    Transaction count, average response time, distribution of response time metrics, and aborted transactions


                The basic categories of MeasureWare data are listed in Table 4-4. Also included are optional modules for database and networking support. MeasureWare agents also collect data provided through the DSI interface.


                The following lists the global system metrics that are available from MeasureWare on HP-UX and Sun Solaris. Additional metrics provided by MeasureWare are covered in other chapters.



                • CPU use during interval


                • Number and rate of physical disk inputs/outputs


                • Maximum percent full of all disk file sets


                • System CPU use during interval


                • User CPU use during interval


                • CPU use at nice priorities


                • CPU idle time during interval


                • Rate of system procedure calls during interval


                • Main memory use


                • Swap space use on disk


                • Number and rate of memory page faults during interval


                • Number of process swaps during interval


                • Percentage of virtual memory currently in active use


                • Number of processes in run queue during interval


                • Number of processes waiting for a disk during interval


                • Number of processes waiting for memory during interval


                • Number of processes currently in sleep state during interval


                • Number of processes waiting for other reasons during interval


                • Number of user sessions during interval


                • Number of processes alive during interval


                • Number of processes active during interval


                • Number of processes started during interval


                • Number of processes completed during interval


                • Average runtime of completing process during interval


                • Operating system version


                • Number of processors in the system


                • Number of disk devices and their device IDs


                • Main memory size


                • Swapping space allocated


                • Disk I/O information (see Chapter 5)


                • Networking statistics (see Chapter 6)



                Note that, in addition to performance metrics, MeasureWare provides useful configuration information, such as number of processors and the number of disk devices.


                The following additional global system metrics are available on HP-UX:



                • CPU use at real-time priorities


                • CPU use for context switching during interval


                • CPU use for interrupt handling during interval


                • Number of processes waiting for interprocess communications during interval


                • Number of processes waiting on network transfers during interval


                • Number and rate of terminal transactions during interval


                • Average terminal transaction "think" time


                • Average terminal transaction first response time


                • Average terminal response to prompt time


                • Distribution of transaction first response times


                • Distribution of transaction response to prompt times



                You can have alarms sent based on conditions that involve a combination of metrics. For example, a CPU bottleneck alarm can be based on the CPU use and CPU run queue length.


                MeasureWare agents provide these alarms to PerfView for analysis, and to the IT/O management console. SNMP traps can also be sent at the time a threshold condition is met. Automated actions can be taken, or the operator can choose to take a suggested action.


                MeasureWare's extract command can be used to export data to other tools, such as spreadsheet programs. Additionally, Application Resource Measurement (ARM) APIs (described in detail in Chapter 7) can be used to instrument applications so that response times can be measured. The application response time information can be passed along to MeasureWare agents for analysis.


                Although MeasureWare provides extensive performance and resource information, it provides limited configuration information and no data about system faults. For further information, visit the HP Resource and Performance Management Web site at http://www.openview.hp.com/solutions/application/.



                GlancePlus


                GlancePlus is a real-time, graphical performance monitoring tool from Hewlett-Packard. It is used to monitor the performance and system resource utilization of a single system. Both Motif-based and character-based interfaces are available. The product can be used on HP-UX, Sun Solaris, and many other operating systems.


                GlancePlus collects information similar to the information collected by MeasureWare, and samples data more frequently than MeasureWare. GlancePlus can be used to graphically view the following:



                • Current CPU, memory, swap, and disk activity and utilization (see Figure 4-9)

                  Figure 4-9. The GlancePlus main screen showing system utilization.




                • Application and process information


                • Transaction information, if the MeasureWare Agent is installed and active


                • Alarm information, color-coded to reflect severity


                • CPU utilization, with per-processor information available for multiprocessor systems


                • Memory utilization, split among cache, user, and system memory


                • Disk utilization, with the I/O paths of the top disk users indicated


                • I/O activity, by filesystem or logical volume



                GlancePlus is also capable of setting and receiving performance-related alarms. Customizable rules determine when a system performance problem should be sent as an alarm. The rules are managed by the GlancePlus Adviser. The Adviser menu gives you the option to Edit Adviser Syntax. When you select this option, all the alarm conditions are shown, and you can then modify them.



                Listing 4-13 Defining alarms in GlancePlus.


                alarm CPU_Bottleneck > 50 for 2 minutes
                  start
                    if CPU_Bottleneck > 90 then
                      red alert "CPU Bottleneck probability= ", CPU_Bottleneck, "%"
                    else
                      yellow alert "CPU Bottleneck probability= ", CPU_Bottleneck, "%"
                  repeat every 10 minutes
                    if CPU_Bottleneck > 90 then
                      red alert "CPU Bottleneck probability= ", CPU_Bottleneck, "%"
                    else
                      yellow alert "CPU Bottleneck probability= ", CPU_Bottleneck, "%"
                  end
                    reset alert "End of CPU Bottleneck Alert"

                Alarms result in onscreen notification, with the color representing the criticality of the alarm. An alarm can also trigger a command or script to be executed automatically. Instead of sending an alarm, GlancePlus can print messages or notify you by executing a UNIX command, such as mailx, using its EXEC feature.


                To configure events, you need to edit a configuration file. The GlancePlus Adviser syntax file (/var/opt/perf/adviser.syntax) contains symptom and alarm configuration. Additional syntax files can also be used. A condition for an alarm to be sent can be based on rules involving different symptoms. Listing 4-13 shows an example of how you can set up an alarm for CPU bottlenecks that is based on CPU utilization and the size of the run queue.
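                To make the rule structure concrete, here is a small Python model of the same logic. This is not GlancePlus code: the rule weights, thresholds, and the mailx notification are illustrative assumptions patterned on Listing 4-13 and the EXEC feature described above, and the metric values would come from your own data collection.

                # Sketch of adviser-style symptom/alarm logic in Python (not GlancePlus
                # itself).  Rule weights and thresholds are illustrative assumptions.
                import subprocess

                def cpu_bottleneck_probability(cpu_util, run_queue):
                    prob = 0.0
                    if cpu_util > 75:
                        prob += 25
                    if cpu_util > 85:
                        prob += 25
                    if cpu_util > 90:
                        prob += 25
                    if run_queue > 3:
                        prob += 25
                    return prob

                def raise_alert(prob, notify=None):
                    if prob > 90:
                        message = "RED alert: CPU bottleneck probability = %.0f%%" % prob
                    elif prob > 50:
                        message = "YELLOW alert: CPU bottleneck probability = %.0f%%" % prob
                    else:
                        return
                    print(message)
                    if notify:  # mimic the EXEC/mailx notification described above
                        subprocess.run(["mailx", "-s", message, notify],
                                       input=message.encode())

                raise_alert(cpu_bottleneck_probability(cpu_util=92.0, run_queue=4))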


                You can also execute scripts in command mode. To execute a script, type:



                glance -adviser_only -syntax <script file name>

                In the example in Listing 4-13, a yellow alert is sent to the GlancePlus Alarm screen if a CPU bottleneck is suspected. As a bottleneck becomes more likely, the alarm changes to red. You can define the threshold at which the alarm should be sent. The symptoms are re-evaluated at every time interval.


                Here is a sampling of some of the useful system metrics that can be monitored with GlancePlus:



                • CPU utilization


                • CPU run queue length


                • Number of processors


                • Filesystem buffer cache queue length


                • Disk utilization and queue length


                • Physical memory capacity


                • Amount of physical memory available


                • Memory page fault rate


                • Total swap space


                • Amount of swap space available


                • Filesystem I/O rates


                • Amount of buffer cache available


                • Available shared memory


                • Available file table entries


                • Available process table entries


                • Most active processes


                • Wait states


                • System table resources


                • Open file information



                More than 600 metrics are accessible from GlancePlus. Some of these metrics are discussed in other chapters. The complete list of metrics can be found by using the online help facility. This information can also be found in the directory /opt/perf/paperdocs/gp/C.


                GlancePlus allows filters to be used to reduce the amount of information shown. For example, you can set up a filter in the Process view to show only the more active system processes.
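                The same filtering idea can be approximated outside the GUI. The sketch below is not a GlancePlus interface; it uses the third-party psutil package (an assumption, not part of the product) to list only the processes above a CPU-utilization threshold.

                # Sketch: show only the more active processes, similar in spirit to a
                # GlancePlus Process view filter.  Uses the third-party psutil package.
                import time
                import psutil

                def active_processes(threshold_pct=5.0, sample_secs=1.0):
                    procs = list(psutil.process_iter(["pid", "name"]))
                    for p in procs:
                        try:
                            p.cpu_percent(None)       # prime the per-process counters
                        except psutil.Error:
                            pass
                    time.sleep(sample_secs)            # measure over a short interval
                    busy = []
                    for p in procs:
                        try:
                            pct = p.cpu_percent(None)
                        except psutil.Error:
                            continue
                        if pct >= threshold_pct:
                            busy.append((pct, p.info["pid"], p.info["name"]))
                    return sorted(busy, reverse=True)

                for pct, pid, name in active_processes():
                    print("%5.1f%%  %6d  %s" % (pct, pid, name))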


                GlancePlus can also show short-term historical information. When selected, the alarm buttons, visible on the main GlancePlus screen, show a history of alarms that have occurred.


                GlancePlus also shows Process Resource Manager behavior, if PRM is installed, and allows the PRM process group entitlements to be changed.


                For further information, visit the HP Resource and Performance Management Web site at http://www.openview.hp.com/solutions/application/.



                PerfView


                PerfView is a graphical performance analysis tool from Hewlett-Packard. It is used to graphically display performance and system resource utilization for one system or multiple systems simultaneously, so that comparisons can be made. A variety of performance graphs can be displayed. The graphs are based on data collected over a period of time, unlike the real-time graphs of GlancePlus. This tool runs on HP-UX or NT systems and works with data collected by MeasureWare agents.


                PerfView has the following three main components:



                • PerfView Monitor:
                  Provides the ability to receive alarms. A textual description of an alarm can be displayed. Alarms can be filtered by severity, type, or source system. Also, after an alarm is received, the alarm can be selected to display a graph of related metrics. An operator can monitor trends leading to failures and then take proactive actions to avoid problems. Graphs can be used for comparison between systems and to show a history of resource consumption. An internal database is maintained that keeps a history of alarm notification messages.



                • PerfView Analyzer:
                  Provides resource and performance analyses for disks and other resources. System metrics can be shown at three different levels: process, application (configured by the user as a set of processes), and global system information. It relies on data received from MeasureWare agents on managed nodes. Data can be analyzed from up to eight systems concurrently. All MeasureWare data sources are supported. PerfView Analyzer is required by both PerfView Monitor and PerfView Planner.



                • PerfView Planner:
                  Provides forecasting capability. Graphs can be extrapolated into the future. A variety of graphs (such as linear, exponential, s-curve, and smoothed) can be shown for forecasted data; a minimal linear-extrapolation sketch follows this list.
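                The following Python sketch shows only the simplest of these forecast types, a linear least-squares extrapolation. The weekly utilization numbers are invented for illustration, and the fitting method is an assumption meant to convey the general idea, not PerfView Planner's actual algorithm.

                # Sketch: linear least-squares extrapolation of a utilization history.
                # The sample data is invented; PerfView Planner's algorithms may differ.
                def linear_forecast(samples, periods_ahead):
                    n = len(samples)
                    xs = range(n)
                    mean_x = sum(xs) / float(n)
                    mean_y = sum(samples) / float(n)
                    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
                             / sum((x - mean_x) ** 2 for x in xs))
                    intercept = mean_y - slope * mean_x
                    return [intercept + slope * (n + k) for k in range(periods_ahead)]

                weekly_cpu_util = [42, 45, 44, 48, 51, 53, 55]   # hypothetical history
                print(linear_forecast(weekly_cpu_util, periods_ahead=4))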




                PerfView can be used to monitor critical system resources. Figure 4-10 shows the PerfView Analyzer graphing memory utilization and paging rates. Other predefined graphs exist for history, CPU, memory, and queue information. For example, the history graph shows CPU, active processes, disk utilization, memory pageout rates, and swapout rates.


                Figure 4-10. PerfView graph showing memory utilization and paging rates.



                The PerfView Analyzer graph shown in Figure 4-11 compares the performance of two systems simultaneously. Up to eight systems can be compared in one graph. Comparing system utilization can be useful when determining where to deploy new applications, or when adding new users.


                Figure 4-11. PerfView graph comparing two systems.



                PerfView's ability to show history and trend information can be helpful in diagnosing system problems. Graphing performance information can help you to understand whether a persistent problem exists or if an anomaly is simply a momentary spike of activity.


                To diagnose a problem further, PerfView Monitor allows users to change the time interval to help find the specific time at which a problem occurred. The graph is redrawn to show the new time period.


                PerfView is integrated with several other monitoring tools. You can launch GlancePlus from within PerfView by accessing the Tools menu. PerfView can be launched from the IT/O Applications Bank as well. When troubleshooting an event in the IT/O Message Browser window, you can launch PerfView to see a related performance graph.


                PerfView Monitor is not used with IT/O. Instead, the IT/O Message Browser is used. When an alarm is received in IT/O, the operator can click the alarm and a related PerfView graph can be shown.


                PerfView can show information collected from multiple systems in a single performance graph. The PerfView and ClusterView products have also been integrated to enable the operator to select a cluster symbol on an HP OpenView submap and launch the PerfView application. This quickly shows a performance comparison between all systems in the cluster.


                For further information, visit the HP Resource and Performance Management Web site at http://www.openview.hp.com/solutions/application/.



                BMC PATROL for UNIX


                BMC Software provides monitoring capabilities through its PATROL software suite. PATROL is a system, application, and event management suite for system and database administrators. PATROL provides the basic framework for defining thresholds, sending and translating events, and so forth. Optional products, called Knowledge Modules (KMs), are capable of monitoring specific components. For example, BMC PATROL includes KMs for UNIX, SAP R/3, Oracle, Informix, and other applications. In fact, more than 40 KMs are available from BMC for use with PATROL.


                With the PATROL KM for UNIX, managed components include the CPU, memory, users, kernel, processes, printers, security, and filesystems. These components are discovered automatically and represented on the PATROL console with status icons. System utilization can be shown as graphs, to capture trends, and data can either be displayed in real time or saved in log files.


                Like other graphical monitoring tools, PATROL provides an Event Manager window, which can show received events. Figure 4-12 highlights disk and NFS events received at the console.


                Figure 4-12. PATROL Event Manager showing disk and NFS events.



                For memory and swap resources, PATROL can show total real memory available, total virtual memory available, a list of swap devices, the number of processes swapped, and swap space utilization.


                For the CPU, PATROL can show bottlenecks and utilization information, along with a variety of statistics, such as CPU idle time, run queue length, and swap queue length. Information about the operating system itself is also maintained, such as the name, version, and creation date.


                PATROL can display the total number of processes, the number of zombie processes, and heavy CPU users. Through the PATROL console, you can perform administrative tasks, such as reprioritizing processes.


                PATROL also can display the total number of users and sessions, and can check security by monitoring the number of failed user and privileged logins. You can check the printer queue to see how many jobs are in the queue and to determine the state of the printer.


                PATROL can monitor the filesystem and can automatically determine the effectiveness of the buffer cache. Regular reports can be generated to check disk usage per user, to create a list of the largest files, or to list files that have not been accessed in a long time. Corrective actions, such as removing core files, can also be configured.
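                As a rough Python analogue of such a report (a sketch only: PATROL's own reports are configured through its console, and the /home starting point and 90-day threshold below are assumptions), the following walks a directory tree, lists the largest files and files not accessed recently, and collects candidate core files for cleanup.

                # Sketch of a housekeeping report: largest files, files not accessed in
                # a long time, and "core" files that could be removed.  Paths and
                # thresholds are illustrative assumptions, not PATROL defaults.
                import os
                import time

                def scan(root, stale_days=90, top_n=10):
                    now = time.time()
                    sizes, stale, cores = [], [], []
                    for dirpath, _dirs, names in os.walk(root):
                        for name in names:
                            path = os.path.join(dirpath, name)
                            try:
                                st = os.stat(path)
                            except OSError:
                                continue
                            sizes.append((st.st_size, path))
                            if now - st.st_atime > stale_days * 86400:
                                stale.append(path)
                            if name == "core":
                                cores.append(path)
                    sizes.sort(reverse=True)
                    return sizes[:top_n], stale, cores

                largest, stale, cores = scan("/home")
                for size, path in largest:
                    print("%12d  %s" % (size, path))
                print("%d stale files, %d core files found" % (len(stale), len(cores)))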


                In addition to the system metrics monitored by PATROL, the KM for UNIX includes a set of tools that provide additional system monitoring: tools to monitor CPU usage, paging activity, I/O caching, swap activity, and system log files; tools to check filesystem and kernel file resources; and tools to monitor printer queues.


                The following list shows some of the parameters available for monitoring from the PATROL KM for UNIX:



                • CPUCpuUtil


                • CPUIdleTime


                • CPUInt


                • CPULoad


                • CPUProcsWaiting


                • CPUProcSwch


                • CPURunQSize


                • CPUSysTime


                • CPUUserTime


                • KERSysCall


                • MEMActiveVirPage


                • MEMFreeMem


                • MEMPageAnticipated


                • MEMPageFreed


                • MEMPageIn


                • MEMPageOut


                • MEMPageScanned


                • PRNQlength


                • PROCAvgUsrProc


                • PROCCpuHogs


                • PROCNoZombies


                • PROCNumProcs


                • PROCProcWait


                • PROCUserProcs


                • SWPSwapFreeSpace


                • SWPSwapIn


                • SWPSwapOut


                • SWPSwapSize


                • SWPSwapUsedPercent


                • USRNoSession


                • USRNoUser



                The BMC PATROL KM for UNIX is supported on Bull, DG AViiON, DEC Alpha, DEC Ultra, Hewlett-Packard, NCR, Olivetti, OSF/1, Pyramid, RS/6000, SCO, Sequent, SGI, Sun Solaris, SunOS, Unisys, and UNIXWare systems.



                Candle


                The Candle Corporation provides software for mainframes and distributed systems. The Availability Command Center is a suite of integrated performance monitors and availability management solutions. The Candle Command Center for Distributed Systems is used to manage the performance and availability of computer systems and applications. Command Center solutions are available for UNIX, NT, IBM AIX, and MVS platforms. The Command Center for Distributed Systems can monitor many systems from a single console.


                Candle's management agents provide detailed performance and availability metrics. The OMEGAMON Monitoring Agent for UNIX provides system information standardized across multiple UNIX platforms (IBM AIX, HP-UX, Sun Solaris, and SunOS). Available metrics include OS and CPU performance, process status, and disk performance. Disk performance is expressed as kilobytes per second, percent busy, and transfers per second. Disk performance tools, among others, can be launched from the Command Center console.


                The Command Center provides predefined threshold conditions for sending alerts. You can change these conditions, and any changes are automatically redistributed to the appropriate systems. Different alarm severity levels can be used.


                The Command Center's event correlation engine and Visual Policy Editor can be used to create rules that automatically recognize the symptoms of problems and develop automated responses.


                Candle has performed additional testing of the Command Center with MC/ServiceGuard to ensure that its Command Center for Distributed Systems product runs in that environment. More information about Candle Corporation's products can be found on the Web at http://www.candle.com.








