Wednesday, December 16, 2009

Tuning and Customizing a Linux System




















Tuning and Customizing a Linux System


Daniel L. Morrill


Apress





All rights reserved. No part of this work may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage or retrieval system, without the prior written permission of the copyright owner and the publisher.



1-893115-27-5



Trademarked names may appear in this book. Rather than use a trademark symbol with every occurrence of a trademarked name, we use the names only in an editorial fashion and to the benefit of the trademark owner, with no intention of infringement of the trademark.


Technical Reviewer: Douglas Kilpatrick


Editorial Directors: Dan Appleman, Peter Blackburn, Gary Cornell, Jason Gilmore, Simon Hayes, Karen Watterson, John Zukowski



Managing Editor: Grace Wong

Development Editor and Indexer: Valerie Perry

Copy Editors: Kim Goodfriend, Ami Knox, Nicole LeClerc

Production Editor: Kari Brooks

Compositor: Diana Van Winkle, Van Winkle Design Group

Artist: Cara Brunk, Blue Mud Productions

Cover Designer: Kurt Krames

Manufacturing Manager: Tom Debolski

Marketing Manager: Stephanie Rodriguez


Distributed to the book trade in the United States by Springer-Verlag New York, Inc., 175 Fifth Avenue, New York, NY, 10010 and outside the United States by Springer-Verlag GmbH & Co. KG, Tiergartenstr. 17, 69112 Heidelberg, Germany.


In the United States, phone 1-800-SPRINGER, email orders@springer-ny.com, or visit http://www.springer-ny.com.


Outside the United States, fax +49 6221 345229, email orders@springer.de, or visit http://www.springer.de.


For information on translations, please contact Apress directly at 2560 9th Street, Suite 219, Berkeley, CA 94710. Phone 510-549-5930, fax: 510-549-5939, email info@apress.com, or visit http://www.apress.com.


The information in this book is distributed on an "as is" basis, without warranty. Although every precaution has been taken in the preparation of this work, neither the author nor Apress shall have any liability to any person or entity with respect to any loss or damage caused or alleged to be caused directly or indirectly by the information contained in this work.


The source code for this book is available to readers at http://www.apress.com in the Downloads section. You will need to answer questions pertaining to this book in order to successfully download the code.




To Aimee Morgan, who has always wanted to see her name in a bookstore.
This will have to do for now.




About the Author



Dan Morrill holds a master's degree in computer science and currently works as a researcher at GE's Global Research Center in upstate New York. He is skilled in a wide variety of software development areas, from real-time and embedded systems development, to desktop applications, to web infrastructure and applications. He has been using Unix systems since 1994 and has used Linux systems exclusively since 1998. Before being Enlightened, he used other operating systems, including IBM's OS/2, Microsoft's Windows, and various Unix platforms.


Dan lives in a modest house filled with far too many toys for his own good and delights in tinkering with them all. Sometimes they even survive.



About the Technical Reviewer



Douglas Kilpatrick has been working with Unix systems since 1992 and Linux systems since 1993. While he does have a beard, he officially denies rumors that he wears suspenders. Douglas has been working in the field of computer security for the last 4 years and has taken to random fits of maniacal laughter when the subject is raised.



Acknowledgments


First, I must thank the staff at Apress, especially the editors, whose patience and professionalism are like unto angels. If true quality takes time, this had better be the best book ever written, because they certainly waited long enough.


I am also indebted to my parents, Kevin and Carolyn Morrill, who set the stage for that fateful day when the busy glow of my own computer monitor first lit upon my face. Thanks for the support, then and since.


Sometimes it seems as if much of my success was inspired by others whose talent so vastly surpassed mine. Without these people and the professional milestones to which they drove me, this book would not exist, and so I must thank: Vincent Kane, for shaming me into learning programming; Chris "Hocy" Ho, for shaming me into learning Slackware; Douglas Kilpatrick, for shaming me into learning C; Richard Arthur, for shaming me into learning software engineering; David Czarnecki, for shaming me into starting a book; and Aimee Morgan, for shaming me into completing it.


And finally, to my colleagues at GE—both current and former—I can say only this: YLB. YPM.































25.253. Select: a graphical selection list


DOM Level 2 HTML: Node → Element → HTMLElement → Select



25.253.1. Properties




readonly Form form


The <form> element that contains this <select> element.





readonly long length


The number of <option> elements contained by this <select> element. Same as options.length.





readonly HTMLCollection options


An array (HTMLCollection) of Option objects that represent the <option> elements contained in this <select> element, in the order in which they appear. See Select.options[] for further details.





long selectedIndex


The position of the selected option in the options array. If no options are selected, this property is -1. If multiple options are selected, this property holds the index of the first selected option.


Setting the value of this property selects the specified option and deselects all other options, even if the Select object


has the multiple attribute specified. When you're doing listbox selection (when size > 1), you can deselect all options by setting selectedIndex to -1. Note that changing the selection in this way does not trigger the onchange( ) event handler.





readonly String type


If multiple is true, this property is "select-multiple"; otherwise, it is "select-one". This property exists for compatibility with the type property of the Input object.




In addition to the properties above, Select objects also mirror HTML attributes with the following properties:


Property            Attribute    Description
boolean disabled    disabled     Whether the element is disabled
boolean multiple    multiple     Whether more than one option may be selected
String name         name         Element name for form submission
long size           size         The number of options to display at once
long tabIndex       tabindex     Position of the element in the tabbing order






25.253.2. Methods






add( )


Inserts a new Option object into the options array, either by appending it at the end of the array or by inserting it before another specified option.





blur( )


Takes keyboard focus away from this element.





focus( )


Transfers keyboard focus to this element.





remove( )


Removes the <option> element at the specified position.






25.253.3. Event Handlers




onchange


Invoked when the user selects or deselects an item.





25.253.4. HTML Syntax




A Select element is created with a standard HTML <select> tag. Options to appear within the Select element are created with the <option> tag:



<form>
...
<select
name="name" // A name that identifies this element; specifies name property
[ size="integer" ] // Number of visible options in Select element
[ multiple ] // Multiple options may be selected, if present
[ onchange="handler" ] // Invoked when the selection changes
>
<option value="value1" [selected]>option_label1
<option value="value2" [selected]>option_label2
// Other options here...
</select>
...
</form>





25.253.5. Description


The Select element represents an HTML <select> tag, which displays a graphical list of choices to the user. If the multiple attribute is present in the HTML definition of the element, the user may select any number of options from the list. If that attribute is not present, the user may select only one option, and options have a radio-button behavior: selecting one deselects whichever was previously selected.


The options in a Select element may be displayed in two distinct ways. If the size attribute has a value greater than 1, or if the multiple attribute is present, they are displayed in a list box that is size lines high in the browser window. If size is smaller than the number of options, the listbox includes a scrollbar so all the options are accessible. On the other hand, if size is specified as 1 and multiple is not specified, the currently selected option is displayed on a single line, and the list of other options is made available through a drop-down menu. The first presentation style displays the options clearly but requires more space in the browser window. The second style requires minimal space but does not display alternative options as explicitly.


The options[] property of the Select element is the most interesting. This is the array of Option objects that describe the choices presented by the Select element. The length property specifies the length of this array (as does options.length). See Option for details.


For a Select element without the multiple attribute specified, you can determine which option is selected with the selectedIndex property. When multiple selections are allowed, however, this property tells you the index of only the first selected option. To determine the full set of selected options, you must iterate through the options[] array and check the selected property of each Option object.
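That iteration can be sketched as a small helper function. This is an illustrative sketch, not code from the text: getSelectedValues and the menu object below are made up, and the function assumes only an object exposing an options array whose members have selected and value properties, as Select objects do.

```javascript
// Collect the values of all selected options.
// Works on any object exposing an options array whose members
// have `selected` and `value` properties, as Select objects do.
function getSelectedValues(select) {
    var values = [];
    for (var i = 0; i < select.options.length; i++) {
        if (select.options[i].selected) {
            values.push(select.options[i].value);
        }
    }
    return values;
}

// A stand-in for a <select multiple> element with two options selected
var menu = {
    options: [
        { value: "cabbages",   selected: true  },
        { value: "tomatoes",   selected: false },
        { value: "aubergines", selected: true  }
    ]
};

console.log(getSelectedValues(menu)); // ['cabbages', 'aubergines']
```

In a browser, the same loop would run against a real Select object taken from document.forms; only the source of the options array changes.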


The options displayed by the Select element may be dynamically modified. Add a new option with the add( ) method and the Option( ) constructor; remove an option with the remove( ) method. Changes are also possible by direct manipulation of the options array.




25.253.6. See Also


Form, Option, Select.options[]; Chapter 18
























Comments


Java recognizes two types of comments. C-style comments use the same syntax found in C and C++. They are delimited by /* ... */ and //. The first set of delimiters is used to delimit a multiline comment. The Java compiler will ignore all text that occurs between /* and */. The second set of delimiters is used for a single-line comment. Java will ignore all the code on the rest of the line following a double slash (//). C-style comments are called implementation comments and are mainly used to describe the implementation of your code.


Documentation comments are unique to Java. They are delimited by /** ... */. These are used mainly to describe the specification or design of the code rather than its implementation. When a file containing documentation comments is processed by the javadoc tool that comes with the Java Development Kit (JDK), the documentation comments will be incorporated into an HTML document. This is how online documentation has been created for the Java library classes.



Implementation Commenting Guidelines


Implementation (C-style) comments should be used to provide an overview of the code and to provide information that is not easily discernible from the code itself. They should not be used as a substitute for poorly written or poorly designed code.


In general, comments should be used to improve the readability of the code. Of course, readability depends on the intended audience. Code that is easily readable by an expert programmer may be completely indecipherable to a novice. Our commenting guidelines are aimed at someone who is just learning to program in Java.







Block Comments


A block comment or comment block is a multiline comment used to describe files, methods, data structures, and algorithms:


/*
* Multiline comment block
*/




Single-Line Comments


A single-line comment can be delimited either by // or by /* ... */. The // is also used to comment out a line of code that you want to skip during a particular run. The following example illustrates these uses:


/* Single line comment */
System.out.println("Hello"); // End of line comment
// System.out.println("Goodbye");


Note that the third line is commented out and would be ignored by the Java compiler.


In this text, we generally use slashes for single-line and end-of-line comments. And we frequently use end-of-line comments to serve as a running commentary on the code itself. These types of comments serve a pedagogical purpose: to teach you how the code works. In a "production environment" it would be unusual to find this kind of running commentary.




Java Documentation Comments


Java's online documentation has been generated by the javadoc tool that comes with the Java Development Kit (JDK). To conserve space, we use documentation comments only sparingly in the programs listed in this textbook. However, javadoc comments are used more extensively to document the online source code that accompanies the textbook.


Documentation comments are placed before classes, interfaces, constructors, methods, and fields. They generally take the following form:


/**
 * The Example class blah blah
 * @author J. Programmer
 */
public class Example { ...


Note how the class definition is aligned with the beginning of the comment. Javadoc comments use special tags, such as @author and @param, to identify certain elements of the documentation. For details on javadoc, see:


http://java.sun.com/reference/docs/index.html
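As a further illustration, here is a method-level documentation comment that uses the @param and @return tags. The TemperatureConverter class is a made-up example, not taken from the text:

```java
public class TemperatureConverter {
    /**
     * Converts a temperature from Celsius to Fahrenheit.
     *
     * @param celsius the temperature in degrees Celsius
     * @return the equivalent temperature in degrees Fahrenheit
     */
    public static double toFahrenheit(double celsius) {
        return celsius * 9.0 / 5.0 + 32.0;
    }

    public static void main(String[] args) {
        System.out.println(toFahrenheit(100.0)); // prints 212.0
    }
}
```

Running the javadoc tool on this file produces an HTML page in which the @param and @return entries appear as the method's parameter and return-value documentation.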




























COCOMO II



COCOMO II is a revised and extended version of the model, built upon the original COCOMO. It more easily allows the estimation of object-oriented software, software created via spiral or evolutionary models, and applications developed from commercial-off-the-shelf software. It was created, in part, to develop software cost database and tool support capabilities for continuous model improvement and to provide a quantitative analytic framework and a set of tools and techniques for evaluating the effects of software technology improvements on software life cycle costs and schedules.



During the earliest conceptual stages of a project, the model uses object point estimates to compute effort. During the early design stages, when little is known about project size or project staff, unadjusted function points are used as an input to the model. After an architecture has been selected, design and development begin with SLOC input to the model.



COCOMO II provides likely ranges of estimates that represent one standard deviation around the most likely estimate. Accommodating factors that received little attention in the first version, COCOMO now adjusts for software reuse and re-engineering where automated tools are used for translation of existing software. COCOMO II also accounts for requirements volatility in its estimates.



Whereas the exponent on size in the effort equations in the original COCOMO varies with the development mode, COCOMO II uses scaling factors to generalize and replace the effects of the development mode.
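The shape of that change can be sketched with the COCOMO II effort equation, PM = A × Size^E × ∏ EM_i, where E = B + 0.01 × Σ SF_j. In the sketch below, A = 2.94 and B = 0.91 are the published COCOMO II.2000 calibration constants, but the scale-factor and effort-multiplier ratings are illustrative placeholders, not calibrated inputs:

```python
import math

def cocomo2_effort(ksloc, scale_factors, effort_multipliers, A=2.94, B=0.91):
    """Person-months per the COCOMO II effort equation:
        PM = A * Size^E * product(EM_i),  E = B + 0.01 * sum(SF_j)
    A and B default to the COCOMO II.2000 calibration values; the
    factor lists stand in for the model's rated drivers."""
    E = B + 0.01 * sum(scale_factors)
    return A * ksloc ** E * math.prod(effort_multipliers)

# Illustrative (uncalibrated) ratings: five scale factors, nominal multipliers
sf = [3.7, 3.0, 4.2, 3.3, 4.7]   # sums to 18.9, so E is about 1.10
em = [1.0] * 17                   # all effort multipliers at nominal

small = cocomo2_effort(10, sf, em)   # effort for 10 KSLOC
large = cocomo2_effort(20, sf, em)   # effort for 20 KSLOC

# Because E > 1, doubling size more than doubles effort: a diseconomy
# of scale driven by the scale factors rather than a development mode.
print(large > 2 * small)  # True
```

Raising or lowering the scale-factor ratings moves E above or below 1, which is how COCOMO II replaces the original model's discrete development modes with a continuous adjustment.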



The COCOMO II application composition model uses object points to perform estimates. The model assumes the use of integrated CASE tools for rapid prototyping. Objects include screens, reports, and modules in third-generation programming languages. The number of raw objects is estimated, the complexity of each object is estimated, and the weighted total (object-point count) is computed. The percentage of reuse and anticipated productivity is also estimated. With this information, an effort estimate can be computed.



COCOMO II explicitly handles the availability of additional information in later stages of a project, the nonlinear costs of reusing software components, and the effects of several factors on the diseconomies of scale. (Some of these are the turnover rate of the staff, the geographic dispersion of the team, and the "maturity" of the development process as defined by the SEI.) The model also revises some coefficient values and eliminates discontinuities present in the old model (related to "development modes" and maintenance vs. adaptation).



COCOMO II is really three different models:





  • The application composition model

    Suitable for projects built with modern GUI-builder tools. Based on new object points.



  • The early design model

    Used to get rough estimates of a project's cost and duration before the entire architecture has been determined. It uses a small set of new cost drivers and new estimating equations, and it is based on unadjusted function points or KSLOC.



  • The post-architecture model

    The most detailed COCOMO II model, to be used after the development of the project's overall architecture. It has new cost drivers, new line counting rules, and new equations.



In collaboration with Rational, Inc., the COCOMO II team has aligned the model's phases and milestones with those of the Rational Unified Process and has provided phase and activity distribution estimators for COCOMO II.
























    19.5 Configuring a Keyboard





    Windows 9X/2000/XP allows you to customize some aspects of keyboard
    behavior. To do so, run the Keyboard applet (Start → Settings →
    Control Panel → Keyboard) to display the Keyboard Properties dialog,
    which includes the following pages:



    Speed (Windows 9X/2000/XP)


    Includes settings for how long a key must be held down before it
    begins repeating and for how quickly it repeats. Also allows setting
    cursor blink rate, which controls how fast the virtual cursor blinks
    in Windows applications. Change any of these settings by dragging the
    associated slider. Changes take effect immediately when you click
    Apply or OK.



    Language (Windows 9X) or Input Locales (Windows 2000/XP)


    These pages are nearly identical except for their names. They are
    used to install additional keyboard languages and layouts. Windows 9X
    allows specifying a key sequence (either Left Alt-Shift or
    Ctrl-Shift) to rotate through available languages from the keyboard.
    Windows 2000/XP provides the same choices, and adds an optional
    second key sequence to jump directly to the default language using
    the same key sequences listed for Windows 9X, with the addition of
    one character, 0 through 9, tilde, or grave accent. Windows 2000/XP
    also allows specifying the method used to turn off Caps Lock, either
    by pressing the Caps Lock key or by pressing the Shift key.



    Hardware (Windows 2000/XP)


    This page displays the type of keyboard installed. It provides a
    Troubleshoot button, which invokes the Keyboard Troubleshooter
    Wizard, and a Properties button, which simply displays Device Manager
    properties for the keyboard.





    Installing a programmable keyboard and driver may install a separate
    management application, or may simply add pages and options to the
    standard Keyboard Properties dialog. For example, Figure 19-3 shows the additional page of the extended
    Keyboard Properties dialog that results from installing the Microsoft
    IntelliType Pro driver under Windows 2000. If you install a
    programmable keyboard, make sure to locate and explore the options
    its driver provides. The default driver installation for some
    programmable keyboards leaves some very useful options disabled or
    set to less-than-optimum values.






    Figure 19-3. The Windows 2000 Keyboard Properties dialog as modified by installing the Microsoft IntelliType Pro driver





    Finally, do not overlook the Accessibility Options dialog, shown in
    Figure 19-4 (Start → Settings → Control Panel → Accessibility
    Options). This dialog is available in both Windows 9X and Windows
    2000/XP. Although intended primarily to aid people with various
    disabilities, some options available here may be useful to anyone. In
    particular, anyone who has accidentally toggled Caps Lock on will
    appreciate the audible warning provided by ToggleKeys.






    Figure 19-4. The Windows XP Accessibility Options dialog





    Linux also provides comprehensive keyboard configuration options via
    the configuration utilities included with the Gnome and KDE desktop
    environments. Figure 19-5, for example, shows the Red Hat 8.X Gnome
    Keyboard Accessibility Configuration dialog, which can be accessed by
    running gnome-keyboard-properties from the command line or by
    clicking Preferences → Keyboard Accessibility from the Start menu.






    Figure 19-5. The Linux AccessX Keyboard Accessibility Configuration dialog

























      3.6 Feedback


      Feedback is another classic engineering principle that applies to analysis and testing. Feedback applies both to the process itself (process improvement) and to individual techniques (e.g., using test histories to prioritize regression testing).


      Systematic inspection and walkthrough derive part of their success from feedback. Participants in inspection are guided by checklists, and checklists are revised and refined based on experience. New checklist items may be derived from root cause analysis, analyzing previously observed failures to identify the initial errors that led to them.




      Summary


      Principles constitute the core of a discipline. They form the basis of methods, techniques, methodologies and tools. They permit understanding, comparing, evaluating and extending different approaches, and they constitute the lasting basis of knowledge of a discipline.


      The six principles described in this chapter are




      • Sensitivity: better to fail every time than sometimes,




      • Redundancy: making intentions explicit,




      • Restriction: making the problem easier,




      • Partition: divide and conquer,




      • Visibility: making information accessible, and




      • Feedback: applying lessons from experience in process and techniques.




      Principles are identified heuristically by searching for a common denominator of techniques that apply to various problems and exploit different methods, sometimes borrowing ideas from other disciplines, sometimes observing recurrent phenomena. Potential principles are validated by finding existing and new techniques that exploit the underlying ideas. Generality and usefulness of principles become evident only with time. The initial list of principles proposed in this chapter is certainly incomplete. Readers are invited to validate the proposed principles and identify additional principles.





      Further Reading


      Analysis and testing is a relatively new discipline. To our knowledge, the principles underlying analysis and testing have not been discussed in the literature previously. Some of the principles advocated in this chapter are shared with other software engineering disciplines and are discussed in many books. A good introduction to software engineering principles is the third chapter of Ghezzi, Jazayeri, and Mandrioli's book on software engineering [GJM02].





      Exercises

























      3.1  




      Indicate which principles guided the following choices:




      1. Use an externally readable format also for internal files, when possible.




      2. Collect and analyze data about faults revealed and removed from the code.




      3. Separate test and debugging activities; that is, separate the design and execution of test cases to reveal failures (test) from the localization and removal of the corresponding faults (debugging).




      4. Distinguish test case design from execution.




      5. Produce complete fault reports.




      6. Use information from test case design to improve requirements and design specifications.




      7. Provide interfaces for fully inspecting the internal state of a class.




       

      3.2  



      A simple mechanism for augmenting fault tolerance consists of replicating computation and comparing the obtained results. Can we consider redundancy for fault tolerance an application of the redundancy principle?


       

      3.3  



      A system safety specification describes prohibited behaviors (what the system must never do). Explain how specified safety properties can be viewed as an implementation of the redundancy principle.


       

      3.4  



      Process visibility can be increased by extracting information about the progress of the process. Indicate some information that can be easily produced to increase process visibility.






























      PHP and the DOM


      PHP 4.0 comes with a primitive, though effective, implementation of the DOM, based on the libxml library. Created by Daniel Veillard, libxml (http://www.xmlsoft.org/) is a modular, standards-compliant C library that provides XML parsing capabilities to the GNOME project (http://www.gnome.org/).


      If you're using a stock PHP binary, it's quite likely that you'll need to recompile PHP to add support for this library to your PHP build. (Detailed instructions for accomplishing this are available in Appendix A, "Recompiling PHP to Add XML Support.")



      Under Construction


      If you're planning on using PHP's DOM extension in your development activities, be warned that this extension is still under development and is, therefore, subject to change without notice. Consequently, DOM code that works with one version of PHP may need to be rewritten or retested with subsequent versions.


      Note also that the examples in this chapter have been tested with the DOM extension that ships with PHP 4.1.1, and are not likely to work with earlier versions because PHP's DOM implementation underwent some fairly radical changes between the release of PHP 4.0.6 and PHP 4.1.1. If you're using an earlier PHP build, you might want to upgrade to PHP 4.1.1 in order to try out the examples in this chapter.




      A Simple Example


      When PHP parses an XML document, it creates a hierarchical tree structure (mirroring the structure of the document) that is composed of objects. Each of these objects has standard properties and methods, and you can use these properties and methods to traverse the object tree and access specific elements, attributes, or character data.


      The best way to understand how this works is with a simple example. Take a look at Listing 3.2, which demonstrates the basic concepts of this technique by traversing a DOM tree to locate a particular type of element, and print its value.



      Listing 3.2 Traversing a DOM Tree


      <?php

      // XML data
      $xml_string = "<?xml version='1.0'?>
      <sentence>What a wonderful profusion of colors and smells in the market
      <vegetable color='green'>cabbages</vegetable>,
      <vegetable color='red'>tomatoes</vegetable>,
      <fruit color='green'>apples</fruit>,
      <vegetable color='purple'>aubergines</vegetable>,
      <fruit color='yellow'>bananas</fruit>
      </sentence>";

      // create a DOM object from the XML data
      if(!$doc = xmldoc($xml_string))
      {
      die("Error parsing XML");
      }

      // start at the root
      $root = $doc->root();

      // move down one level to the root's children
      $children = $root->children();
      // iterate through the list of children
      foreach ($children as $child)
      {
      // if <vegetable> element
      if ($child->tagname == "vegetable")
      {
      // go down one more level
      // get the text node
      $text = $child->children();
      // print the content of the text node
      echo "Found: " . $text[0]->content . "<br>";
      }
      }

      ?>

      Let's go through Listing 3.2 step-by-step:



      1. The first order of business is to feed the parser the XML data, so that it can generate the DOM tree. This is accomplished via the xmldoc() function, which accepts a string of XML as argument, and creates a DOM object representing the XML data. (You can use xmldocfile() to parse an XML file instead of a string. Check out Listing 3.5 for an example.) The following line of code creates a DOM object, and assigns it to the PHP variable $doc:


        if(!$doc = xmldoc($xml_string))
        {
        die("Error parsing XML");
        }

      2. This newly created DOM object has certain properties and methods. One of the most important ones is the root() method, which returns an object representing the document's root element.

        The following line of code returns an object representing the document element, and assigns it to the PHP variable $root:



        $root = $doc->root();

      3. This returned node is itself an object, again with properties and methods of its own. These methods and properties provide information about the node, and its relationship to other nodes in the tree: its name and type, its parent, and its children. However, the elements I'm looking for aren't at this level; they're one level deeper. And so I used the root node's children() method to obtain a list of the nodes below it in the document hierarchy:


        $children = $root->children();

      4. This list of child nodes is returned as an array containing both text and element nodes. All I need to do now is iterate through this node list, looking for vegetable elements. As and when I find these, I dive one level deeper into the tree to access the corresponding character data and print it (this is a snap, given that each text node has a content property).


        foreach ($children as $child)
        {
        // if <vegetable> element
        if ($child->tagname == "vegetable")
        {
        // go down one more level
        // get the text node
        $text = $child->children();
        // print the content of the text node
        echo "Found: " . $text[0]->content . "<br>";
        }
        }

        When this script runs, it produces the following output:



        Found: cabbages
        Found: tomatoes
        Found: aubergines


      As Listing 3.2 demonstrates, DOM tree traversal takes place primarily by exploiting the parent-child relationships that exist between the nodes of the tree. After traversal to a particular depth has been accomplished, node properties can be used to extract all required information from the tree.





















        Local and Anonymous Inner Classes


        In this next example, ConverterFrame, a local class is used to create an ActionEvent handler for the application's two buttons (Fig. F.2).


        Figure F.2. The use of a local class as an ActionListener adapter.






        import javax.swing.*;
        import java.awt.*;
        import java.awt.event.*;

        public class ConverterFrame extends JFrame {
            private Converter converter = new Converter(); // Reference to app
            private JTextField inField = new JTextField(8);
            private JTextField outField = new JTextField(8);
            private JButton metersToInch;
            private JButton kgsToLbs;

            public ConverterFrame() {
                metersToInch = createJButton("Meters To Inches");
                kgsToLbs = createJButton("Kilos To Pounds");
                getContentPane().setLayout( new FlowLayout() );
                getContentPane().add(inField);
                getContentPane().add(outField);
                getContentPane().add(metersToInch);
                getContentPane().add(kgsToLbs);
            } // ConverterFrame()

            private JButton createJButton(String s) { // A method to create a JButton
                JButton jbutton = new JButton(s);
                class ButtonListener implements ActionListener { // Local class
                    public void actionPerformed(ActionEvent e) {
                        double inValue = Double.valueOf(inField.getText()).doubleValue();
                        JButton button = (JButton) e.getSource();
                        if (button.getText().equals("Meters To Inches"))
                            outField.setText("" + converter.new Distance().metersToInches(inValue));
                        else
                            outField.setText("" + converter.new Weight().kgsToPounds(inValue));
                    } // actionPerformed()
                } // ButtonListener class
                ActionListener listener = new ButtonListener(); // Create a listener
                jbutton.addActionListener(listener); // Register buttons with listener
                return jbutton;
            } // createJButton()

            public static void main(String args[]) {
                ConverterFrame frame = new ConverterFrame();
                frame.setSize(200, 200);
                frame.setVisible(true);
            } // main()
        } // ConverterFrame class




        As we have seen, Java's event-handling model uses predefined interfaces, such as the ActionListener interface, to handle events. When a separate class is defined to implement an interface, it is sometimes called an adapter class. Rather than defining adapter classes as top-level classes, it is often more convenient to define them as local or anonymous classes.


        The key feature of the ConverterFrame program is the createJButton() method. This method is used instead of the JButton() constructor to create buttons and to create action listeners for the buttons. It takes a single String parameter for the button's label. It begins by instantiating a new JButton, a reference to which is passed back as the method's return value. After the button instance is created, a local inner class named ButtonListener is defined.


        The local class merely implements the ActionListener interface by defining the actionPerformed method. Note how actionPerformed() uses the containing class's converter variable to acquire access to the metersToInches() and kgsToPounds() methods, which are inner class methods of the Converter class (Fig. F.1). A local class can use instance variables, such as converter, that are defined in its containing class.


        After defining the local inner class, the createJButton() method creates an instance of the class (listener) and registers it as the button's action listener. When a separate object is created to serve as a listener in this way, it is called an adapter. It implements a listener interface and thereby serves as an adapter between the event and the object that generated the event. Any action events that occur on any buttons created with this method will be handled by this adapter. In other words, for any button created by the createJButton() method, a listener object is created and assigned as the button's event listener. The use of local classes makes the code for doing this much more compact and efficient.



        Local classes have some important restrictions. Although an instance of a local class can use fields and methods defined within the class or inherited from its superclasses, it cannot use local variables and parameters defined within its scope unless these are declared final. The reason for this restriction is that final variables receive special handling by the Java compiler. Because the compiler knows that the variable's value won't change, it can replace uses of the variable with their values at compile time.






        Anonymous Inner Classes


        An anonymous inner class is just a local class without a name. Instead of using two separate statements to define and instantiate the local class, Java provides syntax that lets you do it in one expression. The following code illustrates how this is done:


        private JButton createJButton(String s) { // A method to create a JButton
            JButton jbutton = new JButton(s);

            jbutton.addActionListener(new ActionListener() { // Anonymous class
                public void actionPerformed(ActionEvent e) {
                    double inValue = Double.valueOf(inField.getText()).doubleValue();
                    JButton button = (JButton) e.getSource();
                    if (button.getText().equals("Meters To Inches"))
                        outField.setText("" + converter.new Distance().metersToInches(inValue));
                    else
                        outField.setText("" + converter.new Weight().kgsToPounds(inValue));
                } // actionPerformed()
            }); // ActionListener class
            return jbutton;
        } // createJButton()


        Note that the body of the class definition is put right after the new operator. The result is that we still create an instance of the adapter object, but we define it on the fly. If the name following new is a class name, Java will define the anonymous class as a subclass of the named class. If the name following new is an interface, the anonymous class will implement the interface. In this example, the anonymous class is an implementation of the ActionListener interface.


        Local and anonymous classes provide an elegant and convenient way to implement adapter classes that are intended to be used once and have relatively short and simple implementations. The choice of local versus anonymous should largely depend on whether you need more than one instance of the class. If so, or if it is important that the class have a name for some other reason (readability), then you should use a local class. Otherwise, use an anonymous class. As in all design decisions of this nature, you should use whichever approach or style makes your code more readable and more understandable.
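The adapter idea itself is language-neutral. The following Python sketch mirrors the structure of createJButton(): a class defined locally inside a factory function is instantiated and registered as each button's listener. Button, the labels, and the conversion factors here are simplified, invented stand-ins, not a real GUI toolkit:

```python
# Toy sketch of the adapter pattern: a locally defined listener class is
# created inside the factory function and registered with each button.
# Button and the conversion logic are simplified stand-ins.

class Button:
    def __init__(self, label):
        self.label = label
        self.listener = None

    def add_listener(self, listener):
        self.listener = listener

    def click(self, value):
        # Deliver the "event" to the registered adapter object.
        return self.listener.action_performed(self, value)

def create_button(label):
    class ButtonListener:                      # local class, as in createJButton()
        def action_performed(self, button, value):
            if button.label == "Meters To Inches":
                return value * 39.3701         # meters -> inches
            return value * 2.20462             # kilos -> pounds

    button = Button(label)
    button.add_listener(ButtonListener())      # create and register the adapter
    return button

meters = create_button("Meters To Inches")
kilos = create_button("Kilos To Pounds")
```

Every button produced by the factory gets its own adapter instance, just as every JButton returned by createJButton() does.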













        2.6 Discussion Questions




























        Documenting Software Architectures: Views and Beyond
        By Paul Clements, Felix Bachmann, Len Bass, David Garlan, James Ivers, Reed Little, Robert Nord, Judith Stafford
        Table of Contents

        Chapter 2. Styles of the Module Viewtype













        1:

        Can you think of a system that cannot be described using a layered view? If a system is not layered, what would this say about its allowed-to-use relation?


        2:

        How does a UML class diagram relate to the styles given in this chapter? Does that diagram show decomposition, uses, generalization, or another combination? (Hint: We'll discuss this in some detail in Section 11.2.)


        3:

        We consciously chose the term generalization to avoid the multiple meanings that the term inheritance has acquired. Find two or three of these meanings, compare them, and discuss how each is a kind of generalization. (Hint: You may wish to consult books by Booch and Rumbaugh, respectively.)


        4:

        Suppose that a portion of a system is generated with, for example, a user interface builder tool. Using one or more views in the module viewtype, how would you show the tool, the input to the tool (the user interface specification), and the output from the tool?
















          2.2 Modeling












          Throughout
          this book, I will be using industry-standard diagrams to illustrate
          designs. A critical part of relational data architecture is
          understanding a special kind of diagram called an entity relationship
          diagram, or ERD. An ERD graphically
          captures the entities in your problem domain and illustrates the
          relationships among them. Figure 2-2 is the ERD of
          the music library database.




          Figure 2-2. The ERD for the music library



          There are in fact several forms of ERDs. In the style I use in this
          book, each entity is indicated by a box with the name of the entity
          at the top. A line separates the name of the entity from its
          attributes inside the box.
          Primary key
          attributes have "PK" after them,
          and foreign key attributes have
          "FK" after them.



          The lines between entities indicate a relationship. At each end of
          the relationship are symbols that indicate
          what type of relationship it is and whether it is optional or
          mandatory. Table 2-4 describes these symbols.



          Table 2-4. Symbols for an ERD

          • The many side of a mandatory one-to-many or many-to-many relationship

          • The one side of a mandatory one-to-one or one-to-many relationship

          • The many side of an optional one-to-many or many-to-many relationship

          • The one side of an optional one-to-one or one-to-many relationship

          Our ERD therefore says the following things:



          • One compact disc contains one or more songs.

          • One song appears on exactly one compact disc.

          • One compact disc features one or more artists.

          • One artist is featured on one or more compact discs.

          • An artist can optionally be part of one or more artists (bands).


          This ERD is a logical representation of the music library. The entities in a logical model are not tables. First of all, you probably noticed there is no composite entity handling the relationship between an artist and a compact disc; I have drawn the relation directly as a many-to-many relationship. Furthermore, all of the entity names and attributes are in plain English. Finally, no foreign keys are shown.




          BEST PRACTICE: Develop an ERD to model your problem before you create the database.




          The
          physical data model transforms the logical data model into the tables
          that will be created in the working database. A data architect works
          with the logical data model while DBAs (database administrators) and
          developers work with the physical data model. You translate the
          logical data model into a physical one by adding join tables, turning
          domains into database-specific data types, and using table and column
          names appropriate to your DBMS. Figure 2-3 shows
          the physical data model for the music library as it would be created
          in MySQL.




          Figure 2-3. The physical data model for the music library
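To make the logical-to-physical translation concrete, here is a rough sketch using Python's built-in sqlite3 module (SQLite stands in for MySQL, and all table and column names are hypothetical, not taken from Figure 2-3). The many-to-many relationship between artists and compact discs becomes a join table in the physical model:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# The logical many-to-many relationship becomes a physical join table.
cur.executescript("""
CREATE TABLE artist (
    artist_id INTEGER PRIMARY KEY,
    name      TEXT NOT NULL
);
CREATE TABLE compact_disc (
    disc_id   INTEGER PRIMARY KEY,
    title     TEXT NOT NULL
);
CREATE TABLE disc_artist (
    disc_id   INTEGER NOT NULL REFERENCES compact_disc(disc_id),
    artist_id INTEGER NOT NULL REFERENCES artist(artist_id),
    PRIMARY KEY (disc_id, artist_id)
);
""")

cur.execute("INSERT INTO artist VALUES (1, 'Example Artist')")
cur.execute("INSERT INTO compact_disc VALUES (1, 'Example Disc')")
cur.execute("INSERT INTO disc_artist VALUES (1, 1)")

# Resolving the relationship requires joining through the join table.
cur.execute("""
SELECT cd.title, a.name
FROM compact_disc cd
JOIN disc_artist da ON da.disc_id = cd.disc_id
JOIN artist a ON a.artist_id = da.artist_id
""")
rows = cur.fetchall()
```

The join table and the foreign keys it carries are exactly the elements that the logical ERD deliberately omits.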










            Epilogue: A Cascade of New Insights





            That breakthrough got us out of the woods, but it was not the end of the story. The deeper model opened unexpected opportunities to make the application richer and the design clearer.


            Just weeks after the release of the Share Pie version of the software, we noticed another awkward aspect of the model that was complicating the design. An important ENTITY was missing, its absence leaving extra responsibilities to be taken up by other objects. Specifically, there were significant rules governing loan drawdowns, fee payments, and so on, and all this logic was crammed into various methods on the Facility and Loan. These design problems, which had been barely noticeable before the Share Pie breakthrough, became obvious with our clearer field of vision. Now we noticed terms popping up in our discussions that were nowhere to be found in the model, terms such as "transaction" (meaning a financial transaction), that we started to realize were being implied by all those complicated methods.


            Following a process similar to the one described earlier (although, thankfully, under much less time pressure) led to yet another round of insights and a still deeper model. This new model made those implicit concepts explicit, as kinds of Transactions, and at the same time simplified the Positions (an abstraction including the Facility and Loan). It became easy to define the diverse transactions we had, along with their rules, negotiating procedures, and approval processes, and all in relatively self-explanatory code.


            Figure 8.9. Another model breakthrough that followed several weeks later. Constraints on Transactions could be expressed with easy precision.


            As is often the case after a real breakthrough to a deep model, the clarity and simplicity of the new design, combined with the enhanced communication based on the new UBIQUITOUS LANGUAGE, had led to yet another modeling breakthrough.


            Our pace of development was accelerating at a stage where most projects are beginning to bog down in the mass and complexity of what has already been built.








              Tell Us What You Think!




































              JSTL: JSP Standard Tag Library Kick Start
              By Jeff Heaton
              Table of Contents











              As the reader of this book, you are our most important critic and commentator. We value your opinion and want to know what we're doing right, what we could do better, what areas you'd like to see us publish in, and any other words of wisdom you're willing to pass our way.


              As an executive editor for Sams Publishing, I
              welcome your comments. You can email or write me directly to
              let me know what you did or didn't like about this book as
              well as what we can do to make our books better.


              Please note that I cannot help you with technical problems related to the topic of this book. We do have a User Services group, however, where I will forward specific technical questions related to the book.


              When you write, please be sure to include
              this book's title and author as well as your name, email
              address, and phone number. I will carefully review your
              comments and share them with the author and editors who worked
              on the book.












              Email:


              feedback@samspublishing.com

              Mail:


              Michael Stephens
              Executive Editor
              Sams Publishing
              201 West 103rd Street
              Indianapolis, IN 46290 USA



              For more information about this book or
              another Sams title, visit our Web site at http://www.samspublishing.com/. Type the
              ISBN (excluding hyphens) or the title of a book in the Search
              field to find the page you're looking for.












                The EVP Interface













                In order to make programming with cryptographic functions easier, OpenSSL employs a high-level API called EVP. EVP enables the application programmer to ignore algorithm-specific details and write high-level code that works even if the underlying algorithm changes. For example, an application programmer writes a program to encrypt data by using CAST with a 256-bit key. Due to export restrictions, however, he or she must employ DES with a 64-bit key. EVP enables this process to happen seamlessly with little, if any, retooling. EVP achieves this task by operating as a dispatch layer for function invocations. When a cryptographic operation begins, the application programmer normally passes two structures to the function call:





                • An EVP context. The context is an operation-specific data structure that externalizes and maintains state between function calls. For instance, a cipher context contains the initialization vector for a given algorithm.





                • An algorithm specifier. This structure encapsulates the algorithm that the EVP function will use. This structure provides basic information (such as block size and key length) and a set of function pointers to the actual cryptographic functions to be invoked.




                As you can see, each individual EVP call is effectively stateless. State is externalized into the context, which has two key advantages:




                • Thread safety, which OpenSSL does not intrinsically support, is easier to build in because EVP does not contend over many shared resources.




                • The application programmer might change the algorithm (from CAST to DES in the earlier example) by changing the cipher specifier passed to the cryptographic function.




                You generally employ EVP by using a three-step process:





                1. Initialization: Functions named EVP_.*_Init[_ex] indicate to OpenSSL that a cryptographic operation is about to start. They enable the application programmer to specify a context, algorithm, and other initialization parameters.




                2. Updating: Functions named EVP_.*_Update provide data to an algorithm, often in an iterative process.




                3. Finalization: Functions named EVP_.*_Final[_ex] finish a particular operation and release any transient resources associated with the context.




                This pattern enables the application programmer to read input data in chunks, performing operations over large data sets without having to have all the data in memory at any one time.
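The init/update/final shape is common to many cryptographic APIs, not only OpenSSL's EVP layer. As a rough analogue, Python's standard hashlib module exposes the same three phases for message digests, which is exactly what makes chunked processing of large inputs possible:

```python
import hashlib

data = b"some input far too large to hold in memory at once " * 1000

# One-shot digest, for comparison with the chunked version.
expected = hashlib.sha256(data).hexdigest()

# The same three-phase pattern described for EVP:
h = hashlib.sha256()                 # "init": allocate the context
for i in range(0, len(data), 4096):
    h.update(data[i:i + 4096])       # "update": feed one chunk at a time
digest = h.hexdigest()               # "final": produce the result
```

Because state lives in the context object h rather than in the module, many digests can be computed concurrently, mirroring the thread-safety advantage described above.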




                Engines


                An OpenSSL engine is an implementation of a particular set of algorithms that, depending on the architecture and available hardware, can be either completely software based or consist of driver code for dedicated cryptographic hardware. The engine interface was written to enable OpenSSL to take full advantage of special-purpose cryptographic hardware. Using the EVP interface, an application programmer can either specify an engine on a case-by-case basis as an argument to the initialization function or enable OpenSSL to use a default engine for the appropriate operation.

















                7.16 Date-Based Summaries


















                7.16.1 Problem



                You
                want to produce a summary based on date or time values.





                7.16.2 Solution



                Use GROUP BY to categorize
                temporal values into bins of the appropriate duration. Often this
                will involve using expressions to extract the significant parts of
                dates or times.





                7.16.3 Discussion



                To put records in time order, you use an ORDER
                BY clause to sort a column that has a temporal
                type. If instead you want to summarize records based on groupings
                into time intervals, you need to determine how to categorize each
                record into the proper interval and use GROUP
                BY to group them accordingly.



                Sometimes you can use temporal values directly if they group
                naturally into the desired categories. This is quite likely if a
                table represents date or time parts using separate columns. For
                example, the baseball1.com master ballplayer
                table represents birth dates using separate year, month, and day
                columns. To see how many ballplayers were born on each day of the
                year, perform a calendar date summary that uses the month and day
                values but ignores the year:



                mysql> SELECT birthmonth, birthday, COUNT(*)
                -> FROM master
                -> WHERE birthmonth IS NOT NULL AND birthday IS NOT NULL
                -> GROUP BY birthmonth, birthday;
                +------------+----------+----------+
                | birthmonth | birthday | COUNT(*) |
                +------------+----------+----------+
                | 1 | 1 | 47 |
                | 1 | 2 | 40 |
                | 1 | 3 | 50 |
                | 1 | 4 | 38 |
                ...
                | 12 | 28 | 33 |
                | 12 | 29 | 32 |
                | 12 | 30 | 32 |
                | 12 | 31 | 27 |
                +------------+----------+----------+


                A less fine-grained summary can be obtained by using only the month
                values:



                mysql> SELECT birthmonth, COUNT(*)
                -> FROM master
                -> WHERE birthmonth IS NOT NULL
                -> GROUP BY birthmonth;
                +------------+----------+
                | birthmonth | COUNT(*) |
                +------------+----------+
                | 1 | 1311 |
                | 2 | 1144 |
                | 3 | 1243 |
                | 4 | 1179 |
                | 5 | 1118 |
                | 6 | 1105 |
                | 7 | 1244 |
                | 8 | 1438 |
                | 9 | 1314 |
                | 10 | 1438 |
                | 11 | 1314 |
                | 12 | 1269 |
                +------------+----------+


                Sometimes temporal values can be used directly, even when not
                represented as separate columns. To determine how many drivers were
                on the road and how many miles were driven each day, group the
                records in the driver_log table by date:



                mysql> SELECT trav_date,
                -> COUNT(*) AS 'number of drivers', SUM(miles) As 'miles logged'
                -> FROM driver_log GROUP BY trav_date;
                +------------+-------------------+--------------+
                | trav_date | number of drivers | miles logged |
                +------------+-------------------+--------------+
                | 2001-11-26 | 1 | 115 |
                | 2001-11-27 | 1 | 96 |
                | 2001-11-29 | 3 | 822 |
                | 2001-11-30 | 2 | 355 |
                | 2001-12-01 | 1 | 197 |
                | 2001-12-02 | 2 | 581 |
                +------------+-------------------+--------------+


                However, this summary will grow lengthier as you add more records to
                the table. At some point, the number of distinct dates likely will
                become so large that the summary fails to be useful, and
                you'd probably decide to change the category size
                from daily to weekly or monthly.



                When a temporal column
                contains so many distinct values that it fails to categorize well,
                it's typical for a summary to group records using
                expressions that map the relevant parts of the date or time values
                onto a smaller set of categories. For example, to produce a
                time-of-day summary for records in the mail table,
                do this:[1]


                [1] Note that the result includes an entry only
                for hours of the day actually represented in the data. To generate a
                summary with an entry for every hour, use a join to fill in the
                "missing" values. See Recipe 12.10.



                mysql> SELECT HOUR(t) AS hour,
                -> COUNT(*) AS 'number of messages',
                -> SUM(size) AS 'number of bytes sent'
                -> FROM mail
                -> GROUP BY hour;
                +------+--------------------+----------------------+
                | hour | number of messages | number of bytes sent |
                +------+--------------------+----------------------+
                | 7 | 1 | 3824 |
                | 8 | 1 | 978 |
                | 9 | 2 | 2904 |
                | 10 | 2 | 1056806 |
                | 11 | 1 | 5781 |
                | 12 | 2 | 195798 |
                | 13 | 1 | 271 |
                | 14 | 1 | 98151 |
                | 15 | 1 | 1048 |
                | 17 | 2 | 2398338 |
                | 22 | 1 | 23992 |
                | 23 | 1 | 10294 |
                +------+--------------------+----------------------+


                To produce a day-of-week summary instead, use the DAYOFWEEK() function:



                mysql> SELECT DAYOFWEEK(t) AS weekday,
                -> COUNT(*) AS 'number of messages',
                -> SUM(size) AS 'number of bytes sent'
                -> FROM mail
                -> GROUP BY weekday;
                +---------+--------------------+----------------------+
                | weekday | number of messages | number of bytes sent |
                +---------+--------------------+----------------------+
                | 1 | 1 | 271 |
                | 2 | 4 | 2500705 |
                | 3 | 4 | 1007190 |
                | 4 | 2 | 10907 |
                | 5 | 1 | 873 |
                | 6 | 1 | 58274 |
                | 7 | 3 | 219965 |
                +---------+--------------------+----------------------+


                To make the output more meaningful, you might want to use DAYNAME() to display weekday names instead. However, because day names sort lexically (for example, "Tuesday" sorts after "Friday"), use DAYNAME() only for display purposes. Continue to group on the numeric day values so that output rows sort that way:



                mysql> SELECT DAYNAME(t) AS weekday,
                -> COUNT(*) AS 'number of messages',
                -> SUM(size) AS 'number of bytes sent'
                -> FROM mail
                -> GROUP BY DAYOFWEEK(t);
                +-----------+--------------------+----------------------+
                | weekday | number of messages | number of bytes sent |
                +-----------+--------------------+----------------------+
                | Sunday | 1 | 271 |
                | Monday | 4 | 2500705 |
                | Tuesday | 4 | 1007190 |
                | Wednesday | 2 | 10907 |
                | Thursday | 1 | 873 |
                | Friday | 1 | 58274 |
                | Saturday | 3 | 219965 |
                +-----------+--------------------+----------------------+


                A similar technique can be used for summarizing month-of-year
                categories that are sorted by numeric value but displayed by month
                name.



                Uses for temporal categorizations are plentiful:




                • DATETIME or
                  TIMESTAMP columns have the potential to contain
                  many unique values. To produce daily summaries, strip off the time of
                  day part to collapse all values occurring within a given day to the
                  same value. Any of the following GROUP
                  BY clauses will do this, though the last one is
                  likely to be slowest:

                  GROUP BY FROM_DAYS(TO_DAYS(col_name))
                  GROUP BY YEAR(col_name), MONTH(col_name), DAYOFMONTH(col_name)
                  GROUP BY DATE_FORMAT(col_name,'%Y-%m-%e')

                • To produce monthly or quarterly sales reports, group by
                  MONTH(col_name)
                  or
                  QUARTER(col_name)
                  to place dates into the correct part of the year.


                • To summarize web server activity, put your server's
                  logs into MySQL and run queries that collapse the records into
                  different time categories. Chapter 18 discusses how
                  to do this for Apache.
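The categorize-then-group technique carries over to any SQL engine with date functions. A small sketch using Python's built-in sqlite3 module (the sample rows are invented, and SQLite's strftime() plays the role of MySQL's HOUR()):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE mail (t TEXT, size INTEGER)")
cur.executemany("INSERT INTO mail VALUES (?, ?)", [
    ("2001-05-14 07:12:00", 3824),
    ("2001-05-14 07:45:00", 978),
    ("2001-05-14 09:02:00", 2904),
])

# Map each timestamp onto an hour-of-day category, then group.
cur.execute("""
SELECT CAST(strftime('%H', t) AS INTEGER) AS hour,
       COUNT(*), SUM(size)
FROM mail
GROUP BY hour
ORDER BY hour
""")
rows = cur.fetchall()
```

As in the MySQL examples, the expression that extracts the significant part of the temporal value is what defines the category size; swap '%H' for '%w' or '%m' to get day-of-week or monthly bins.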













                  Chapter 22. Dynamic Plug-ins












                  Applications are often dynamic. New functions are added, old functions are removed, but the system keeps running. We saw this in Chapter 21, "Installing and Updating Plug-ins," with the dynamic installation of MUC capabilities into Hyperbola. The Eclipse Runtime and OSGi enable this kind of behavior and the RCP base plug-ins tolerate it, but the ability does not come free; you must follow certain practices to make the most of these scenarios.


                  This chapter discusses the unique challenges presented to plug-in writers as they attempt to handle the comings and goings of plug-ins in the environment: dynamic tolerance. We first look at Hyperbola as an example of dynamic tolerance. With that as a base, we identify some common dynamic plug-in scenarios and outline coding practices and designs for handling them. Throughout the discussion, the dynamic MUC example from Chapter 21 is used as an example of exploiting Eclipse's dynamic capabilities.












                  10.1 Introduction














                  Throughout this book we have treated architecture as something largely under your control and shown how to make architectural decisions (and, as we will see in Part Three, how to analyze those decisions) to achieve the goals and requirements in place for a system under development. But there is another side to the picture. Suppose we have a system that already exists, but we do not know its architecture. Perhaps the architecture was never recorded by the original developers. Perhaps it was recorded but the documentation has been lost. Or perhaps it was recorded but the documentation is no longer synchronized with the system after a series of changes. How do we maintain such a system? How do we manage its evolution to maintain the quality attributes that its architecture (whatever it may be) has provided for us?


                  This chapter is about a way to answer these questions using architecture reconstruction, in which the "as-built" architecture of an implemented system is obtained from an existing system. This is done through a detailed analysis of the system using tool support. The tools extract information about the system and aid in building and aggregating successive levels of abstraction. If the tools are successful, the end result is an architectural representation that aids in reasoning about the system. In some cases, it may not be possible to generate a useful representation. This is sometimes the case with legacy systems that have no coherent architectural design to recover (although that in itself is useful to know).



                  Architecture reconstruction is an interpretive, interactive, and iterative process involving many activities; it is not automatic. It requires the skills and attention of both the reverse engineering expert and the architect (or someone who has substantial knowledge of the architecture), largely because architectural constructs are not represented explicitly in the source code. There is no programming language construct for "layer" or "connector" or other architectural elements that we can easily pick out of a source code file. Architectural patterns, if used, are seldom labeled. Instead, architectural constructs are realized by many diverse mechanisms in an implementation, usually a collection of functions, classes, files, objects, and so forth. When a system is initially developed, its high-level design/architectural elements are mapped to implementation elements. Therefore, when we reconstruct those elements, we need to apply the inverses of the mappings. Coming up with those requires architectural insight. Familiarity with compiler construction techniques and utilities such as grep, sed, awk, perl, python, and lex/yacc is also important.
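Data extraction of the kind described here can start very modestly. The following Python sketch (the file contents are invented) uses a regular expression to pull #include dependencies out of C sources, one small step of the extraction phase that grep-, perl-, or lex/yacc-based tooling performs at scale:

```python
import re

# Invented source text standing in for files on disk.
sources = {
    "scanner.c": '#include "control.h"\n#include <stdio.h>\nint main(void) { return 0; }\n',
    "control.c": '#include "control.h"\n',
}

# Match only local (quoted) includes; system headers are usually noise
# when recovering dependencies among a system's own modules.
include_re = re.compile(r'#include\s+"([^"]+)"')

# Map each file to the local headers it depends on.
deps = {name: include_re.findall(text) for name, text in sources.items()}
```

Raw relations like these are then aggregated upward; mapping files and headers onto the architectural elements they implement is the part that, as noted above, requires human architectural insight rather than tooling.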


                  The results of architectural reconstruction can be used in several ways. If no documentation exists or if it is out of date, the recovered architectural representation can be used as a basis for redocumenting the architecture, as discussed in Chapter 9. This approach can also be used to recover the as-built architecture, to check conformance against an "as-designed" architecture. This assures us that our maintainers (or our developers, for that matter) have followed the architectural edicts set forth for them and are not eroding the architecture, breaking down abstractions, bridging layers, compromising information hiding, and so forth. The reconstruction can also be used as the basis for analyzing the architecture (see Chapters 11 and 12) or as a starting point for re-engineering the system to a new desired architecture. Finally, the representation can be used to identify elements for re-use or to establish an architecture-based software product line (see Chapter 14).


                  Architecture reconstruction has been used in a variety of projects ranging from MRI scanners to public telephone switches and from helicopter guidance systems to classified NASA systems. It has been used



                  • to redocument architectures for physics simulation systems.


                  • to understand architectural dependencies in embedded control software for mining machinery.


                  • to evaluate the conformance of a satellite ground system's implementation to its reference architecture.


                  • to understand different systems in the automotive industry.



                  THE WORKBENCH APPROACH


                  Architecture reconstruction requires tool support, but no single tool or tool set is always adequate to carry it out. For one thing, tools tend to be language-specific, and we may encounter any number of languages in the artifacts we examine. A mature MRI scanner, for example, can contain software written in 15 languages. For another thing, data extraction tools are imperfect; they often return incomplete results or false positives, so we use a selection of tools to augment and cross-check one another. Finally, the goals of reconstruction vary, as discussed above: what you wish to do with the recovered documentation determines what information you need to extract, which in turn suggests different tools.


                  Taken together, these have led to a particular design philosophy for a tool set to support architecture reconstruction known as the workbench. A workbench should be open (easy to integrate new tools as required) and provide a lightweight integration framework whereby tools added to the tool set do not affect the existing tools or data unnecessarily.
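                  One minimal reading of that philosophy, sketched here entirely with invented names, is a registry that accepts extraction tools behind a single narrow interface, so that adding or removing a tool never disturbs the others:

```python
# Hypothetical sketch of the "open workbench" idea: extractors are registered
# behind one small interface; adding a tool does not affect the existing ones.
from typing import Callable, Iterable

Relation = tuple[str, str, str]  # (relation, entity1, entity2)

class Workbench:
    def __init__(self) -> None:
        self._extractors: dict[str, Callable[[str], Iterable[Relation]]] = {}

    def register(self, name: str, extractor: Callable[[str], Iterable[Relation]]) -> None:
        """Plug in a new extraction tool under a unique name."""
        self._extractors[name] = extractor

    def extract_all(self, root: str) -> list[Relation]:
        # Pool the (possibly overlapping) output of every registered tool;
        # later activities reconcile and fuse the combined relations.
        results: list[Relation] = []
        for extractor in self._extractors.values():
            results.extend(extractor(root))
        return results
```

                  The narrow interface (everything is a stream of relational tuples) is what keeps the integration lightweight: tools never need to know about one another.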


                  An example of a workbench, which we will use to illustrate several of the points in this chapter, is Dali, developed at the SEI. For Further Reading at the end of the chapter describes others.



                  RECONSTRUCTION ACTIVITIES


                  Software architecture reconstruction comprises the following activities, carried out iteratively:



                  1. Information extraction.
                    The purpose of this activity is to extract information from various sources.


                  2. Database construction.
                    Database construction involves converting the extracted information into a standard form, such as the Rigi Standard Form (a tuple-based format whose entries have the form relationship <entity1> <entity2>), and loading it into a database, for example an SQL database.


                  3. View fusion.
                    View fusion combines information in the database to produce a coherent view of the architecture.


                  4. Reconstruction.
                    In the reconstruction activity the main work takes place: building abstractions and various representations of the extracted data to generate an architectural representation of the system.
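                  As a small illustration of activities 2 and 3 (the relation names and entities below are invented for the example), Rigi-Standard-Form tuples can be loaded into an SQL database and then queried to combine extracted facts:

```python
# Hypothetical sketch: parse RSF-style lines ("relationship <entity1> <entity2>")
# into an in-memory SQLite table, then query the combined relations.
import sqlite3

RSF = """\
calls      main      parse_args
calls      main      run
includes   main.c    util.h
"""

def load_rsf(text: str) -> sqlite3.Connection:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE rel (relation TEXT, entity1 TEXT, entity2 TEXT)")
    for line in text.splitlines():
        fields = line.split()
        if len(fields) == 3:  # skip blank or malformed lines
            conn.execute("INSERT INTO rel VALUES (?, ?, ?)", fields)
    return conn

conn = load_rsf(RSF)
# A fusion-style query: everything the entity "main" calls.
callees = [row[0] for row in
           conn.execute("SELECT entity2 FROM rel WHERE relation='calls' AND entity1='main'")]
```

                  Once the tuples are in relational form, fusing views from different extractors reduces to joins and unions over one schema.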



                  As you might expect, the activities are highly iterative. Figure 10.1 depicts the architecture reconstruction activities and how information flows among them.


                  Figure 10.1. Architecture reconstruction activities. (The arrows show how information flows among the activities.)



                  The reconstruction process involves several people: the person doing the reconstruction (the reconstructor) and one or more individuals who are familiar with the system being reconstructed (its architects and software engineers).


                  The reconstructor extracts the information from the system and, either manually or with tool support, abstracts the architecture from it. To do so, the reconstructor forms a set of hypotheses about the system. These hypotheses are candidate inverse mappings from the source artifacts back to the design (ideally the opposite of the mappings applied during development). They are tested by applying them to the extracted information and validating the result. To generate and validate these hypotheses effectively, people familiar with the system must be involved, including the system architect or engineers who have worked on it (who initially developed it or who currently maintain it).
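                  A hypothesis of this kind can often be written down directly as an executable inverse mapping. In the following sketch, the naming conventions and layer names are assumed purely for illustration; a real hypothesis would come from the architect's knowledge of the system:

```python
# Hypothetical sketch: a hypothesized inverse mapping, expressed as ordered
# regex rules, collapses file-level entities into the architectural layers
# they are presumed to implement. Conventions and layer names are invented.
import re

HYPOTHESES = [
    (re.compile(r"^ui_"), "PresentationLayer"),
    (re.compile(r"^db_"), "PersistenceLayer"),
    (re.compile(r".*"),   "ApplicationLayer"),   # fallback rule
]

def assign_element(filename: str) -> str:
    """Apply the first matching hypothesis. Validating the resulting grouping
    against the extracted relations (and the architect) comes afterward."""
    for pattern, element in HYPOTHESES:
        if pattern.match(filename):
            return element
    return "Unassigned"
```

                  Running such a rule set over the extracted entities and then checking the resulting layer-to-layer relations is one concrete way the hypothesize-apply-validate loop described above plays out.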


                  In the following sections, the various activities of architecture reconstruction are outlined in more detail along with some guidelines for each. Most of these guidelines are not specific to the use of a particular workbench and would be applicable even if the architecture reconstruction were carried out manually.










