void keyword
Absence of a type or function arguments


















simple-type-specifier := void




The void keyword can be used as a type specifier to indicate the absence of a type, or in a function's parameter list to indicate that the function takes no parameters.



When used as a type specifier, it is most often used as a function return type to indicate that the function does not return a value. It is also used as a generic pointer (e.g., void*), although this usage is needed less often in C++ than in C.
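
For instance, a minimal sketch of the generic-pointer usage (an illustrative example, not part of the original entry):

#include <iostream>

int main()
{
    int n = 42;
    void* p = &n;                   // any object pointer converts to void*
    int* q = static_cast<int*>(p);  // C++ requires an explicit cast back
    std::cout << *q << '\n';
    return 0;
}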



C++ does not require that void be used to indicate that there are no function parameters, but it is often used in this way for compatibility with C.



Example





#include <iostream>

void func(void)
{
    std::cout << "hello, world\n";
}






See Also



declaration, type, Chapter 2, Chapter 5
















     

     












    24.2. Preparing the Project


    Now for some work in the Terminal application. Before it can be taken over by Xcode, the Graphviz project has to be unpacked from its archive and configured to the tools and libraries available on your system. First, extract the archive; point the command interface at the directory that contains the archive, and invoke tar. The example here assumes that we're working with version 2.2 of Graphviz:


    $ cd Projects
    $ tar xzvf graphviz-2.2.tar.gz
    graphviz-2.2/
    graphviz-2.2/agraph/
    graphviz-2.2/agraph/README
    graphviz-2.2/agraph/aghdr.h
    graphviz-2.2/agraph/agraph.h
    graphviz-2.2/agraph/malloc.h
    graphviz-2.2/agraph/vmstub.h
    .
    .
    .


    The options to tar (xzvf) told it to extract the contents after unzipping them from the named file, and to be verbose, printing the name of each file. The verbosity isn't strictly necessary, but the extraction takes time, and it's nice to have something to show what's going on.


    Next, the project has to be configured. If you point your command line interface at the newly created graphviz-2.2 directory and list the contents of the current directory, you'll find files named INSTALL and README and a script named configure. I can't promise that these files will be at the root of every open-source project you download, but you'll usually find them.


    $ cd graphviz-2.2
    $ ls
    AUTHORS config graphviz.spec.in
    COPYING config.h.in iffe
    ChangeLog config.h.old lefty
    Config.mk configure lneato
    INSTALL configure.ac m4
    .
    .
    .
    README dotty tclhandle
    .
    .
    .
    $ cat README
    Graphviz - Graph Drawing Programs from AT&T Research and
    Lucent Bell Labs

    See doc/build.html for prerequisites and detailed build notes.
    $ open doc/build.html
    $


    The usual scenario is that if a project has a configure script, you must execute ./configure from the root of the project in order to adapt the project's makefiles and headers to the architecture, tool set, and library suite of your machine. The INSTALL file will usually contain annotations of any special flags or features this particular configure script accepts.


    However, in this case, INSTALL appears to be unchanged from its generic content. Examining the README file shows that the build instructions are in graphviz-2.2/doc/build.html.


    The build.html file confirms that Graphviz shares the same build-and-install recipe as almost every other open-source project (do not type this in):


    ./configure
    make
    make install


    It also says that ./configure --help will show us all the available options, which are numerous. For the purposes of this tutorial, we'll take none of them.
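
    The list is long enough that it's worth piping through a pager; this is plain shell usage, nothing Graphviz-specific:

    $ ./configure --help | less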


    So now we invoke ./configure. A glance at its voluminous output suggests why it is necessary:


    $ ./configure
    checking build system type... powerpc-apple-darwin8.0.0
    checking host system type... powerpc-apple-darwin8.0.0
    checking target system type... powerpc-apple-darwin8.0.0
    checking for a BSD-compatible install... /usr/bin/install -c
    checking whether build environment is sane... yes
    checking for gawk... no
    checking for mawk... no
    checking for nawk... no
    checking for awk... awk
    checking whether make sets $(MAKE)... yes
    checking for gcc... gcc
    checking for C compiler default output file name... a.out
    checking whether the C compiler works... yes
    .
    .
    .
    $



    During this process, the configuration script fails to find the header for the FreeType typographic library. As Graphviz relies on FreeType, not only for labeling but also for scaling objects, this will cripple our version of the package. In real life, we'd take the trouble to close the gap (it involves installing X Window support and adding --with_freetype=/usr/X11R6 to the parameters of configure), but we're doing this only for the exercise.
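
    If you did want to close that gap, the extra flag would ride along on the configure line, something like this (assuming X Window support is already installed under /usr/X11R6):

    $ ./configure --with_freetype=/usr/X11R6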




    By the end of this process, all the makefiles in graphviz-2.2 and its subdirectories will have been created and the header files adjusted to the peculiarities of your development environment. If all you cared about was obtaining Graphviz, the easiest thing would be simply to type make at the next command prompt and, assuming all went well, sudo make install at the command prompt after that.


    But in our scenario, you want to bring Xcode into the picture. Maybe you want to modify Graphviz, or you think you'll have to edit its source for it to compile properly. Maybe you just want to study it. You need an interactive, integrated development environment. It's at this point in the life cycle, after configuration but before the first make, that Xcode can enter.




















    The Student Directory



    package sis.studentinfo;

    import junit.framework.*;
    import java.io.*;

    public class StudentDirectoryTest extends TestCase {
        private StudentDirectory dir;

        protected void setUp() throws IOException {
            dir = new StudentDirectory();
        }

        protected void tearDown() throws IOException {
            dir.close();
            dir.remove();
        }

        public void testRandomAccess() throws IOException {
            final int numberOfStudents = 10;
            for (int i = 0; i < numberOfStudents; i++)
                addStudent(dir, i);
            dir.close();

            dir = new StudentDirectory();
            for (int i = 0; i < numberOfStudents; i++)
                verifyStudentLookup(dir, i);
        }

        void addStudent(StudentDirectory directory, int i)
                throws IOException {
            String id = "" + i;
            Student student = new Student(id);
            student.setId(id);
            student.addCredits(i);
            directory.add(student);
        }

        void verifyStudentLookup(StudentDirectory directory, int i)
                throws IOException {
            String id = "" + i;
            Student student = directory.findById(id);
            assertEquals(id, student.getLastName());
            assertEquals(id, student.getId());
            assertEquals(i, student.getCredits());
        }
    }


    The most significant new addition to StudentDirectoryTest appears in testRandomAccess. After adding the students to the directory, the test closes it. It then creates a new directory instance to be used for the student lookups. By doing this, the test demonstrates at least some notion of persistence.


    An additional performance test might be worthwhile to demonstrate that a lookup into the directory takes the same amount of time regardless of where it appears in the file. Additions of students to the directory should also execute in constant time.
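
    Such a test isn't shown here, but a rough sketch in the same JUnit style might look like the following. The student count and the millisecond tolerance are arbitrary assumptions, and timing assertions like this can be flaky on a loaded machine:

    public void testConstantTimeLookup() throws IOException {
        final int numberOfStudents = 1000;
        for (int i = 0; i < numberOfStudents; i++)
            addStudent(dir, i);

        // Time a lookup near the start of the file...
        long start = System.currentTimeMillis();
        verifyStudentLookup(dir, 0);
        long first = System.currentTimeMillis() - start;

        // ...and one near the end; random access should make them comparable.
        start = System.currentTimeMillis();
        verifyStudentLookup(dir, numberOfStudents - 1);
        long last = System.currentTimeMillis() - start;

        assertTrue(Math.abs(last - first) < 50);
    }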



    package sis.studentinfo;

    import java.io.*;
    import sis.db.*;

    public class StudentDirectory {
        private static final String DIR_BASENAME = "studentDir";
        private DataFile db;

        public StudentDirectory() throws IOException {
            db = DataFile.open(DIR_BASENAME);
        }

        public void add(Student student) throws IOException {
            db.add(student.getId(), student);
        }

        public Student findById(String id) throws IOException {
            return (Student)db.findBy(id);
        }

        public void close() throws IOException {
            db.close();
        }

        public void remove() {
            db.deleteFiles();
        }
    }


    In contrast, most of the StudentDirectory class has changed. The StudentDirectory class now encapsulates a DataFile instance to supply directory functionality. It provides a few additional specifics, including the key field to use (the student id) and the base filename for the data and key files. Beyond that, the class merely delegates messages to the DataFile object.
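
    Pulled together, typical client code is now short; here's a sketch that uses only the methods shown above:

    StudentDirectory directory = new StudentDirectory(); // opens the data and key files
    Student student = new Student("123");
    student.setId("123");
    directory.add(student);                              // persisted, keyed by student id
    Student found = directory.findById("123");
    directory.close();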


















      4.7. Variable Scope Revisited






      When we first discussed the notion of variable scope,
      I based the definition solely on the lexical structure of JavaScript code: global variables have global scope, and variables declared in functions have local scope. If one function definition is nested within another, variables declared within that nested function have a nested local scope. Now that you know that global variables are properties of a global object
      and that local variables are properties of a special call object, we can return to the notion of variable scope and reconceptualize it. This new description of scope offers a useful way to think about variables in many contexts; it provides a powerful new understanding of how JavaScript works.


      Every JavaScript execution context has a scope chain associated with it. This scope chain is a list or chain of objects. When JavaScript code needs to look up the value of a variable x (a process called variable name resolution), it starts by looking at the first object in the chain. If that object has a property named x, the value of that property is used. If the first object does not have a property named x, JavaScript continues the search with the next object in the chain. If the second object does not have a property named x, the search moves on to the next object, and so on.


      In top-level JavaScript code (i.e., code not contained within any function definitions), the scope chain consists of a single object, the global object. All variables are looked up in this object. If a variable does not exist, the variable value is undefined. In a (nonnested) function, however, the scope chain consists of two objects. The first is the function's call object, and the second is the global object. When the function refers to a variable, the call object (the local scope) is checked first, and the global object (the global scope) is checked second. A nested function would have three or more objects in its scope chain. Figure 4-1 illustrates the process of looking up a variable name in the scope chain of a function.
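
      Here is that lookup order in miniature (the variable and function names are invented for illustration):

      var x = "global";            // a property of the global object

      function outer() {
          var x = "local";         // a property of outer()'s call object
          function inner() {
              // Scope chain: inner's call object -> outer's call object -> global object.
              // The search stops at the first object that has a property named x.
              return x;
          }
          return inner();
      }

      outer();                     // evaluates to "local"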



      Figure 4-1. The scope chain and variable resolution





















      Getting Started


      To get started, we're going to create a simple image (just an image showing a background color) and then send it as a JPEG image to the browser. To start, you create an image object with imagecreate, which you use like this:



      imagecreate(int x_size, int y_size)


      The x_size and y_size values are in pixels. Here's how we create our first image:



      $image_height = 100;
      $image_width = 300;

      $image = imagecreate($image_width, $image_height);
      .
      .
      .


      To set colors for the image, you use imagecolorallocate, passing it the image you're working with, as well as the red, green, and blue components as values from 0 to 255:



      imagecolorallocate(resource image, int red, int green, int blue)


      The first time you call imagecolorallocate, this function sets the background color. Subsequent calls set various drawing colors, as we'll see. Here's how we set the background color to light gray (red = 200, green = 200, blue = 200):



      $image = imagecreate($image_width, $image_height);

      $back_color = imagecolorallocate($image, 200, 200, 200);
      .
      .
      .


      To send a JPEG image back to the browser, you have to tell the browser what's coming by setting the image's content type with the header function, and then you send the image itself with the imagejpeg function, like this (do this before any other output is sent to the browser):



      $image_height = 100;
      $image_width = 300;

      $image = imagecreate($image_width, $image_height);

      $back_color = imagecolorallocate($image, 200, 200, 200);

      header('Content-Type: image/jpeg');
      imagejpeg($image);
      .
      .
      .


      Here are some of the image-creating functions for various image formats:



      • imagegif.
        Output a GIF image to browser or file


      • imagejpeg.
        Output a JPEG image to browser or file


      • imagewbmp.
        Output a WBMP image to browser or file


      • imagepng.
        Output a PNG image to browser or file
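
      These output functions are interchangeable in the code we've been building; for example, serving the image as PNG instead of JPEG is just a matter of changing the content type and the output call (a sketch):

      header('Content-Type: image/png');
      imagepng($image);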


      After sending the image, you can destroy the image object with the imagedestroy function; all this is shown in phpbox.php, Example 1.


      Example 1. Displaying an image, phpbox.php



      <?php
      $image_height = 100;
      $image_width = 300;

      $image = imagecreate($image_width, $image_height);

      $back_color = imagecolorallocate($image, 200, 200, 200);

      header('Content-Type: image/jpeg');
      imagejpeg($image);

      imagedestroy($image);
      ?>



      The results appear in Figure 1, where you can see our image, which is simply all background color. Cool, a JPEG image created on the server!


      Figure 1. Displaying an image.




















        Chapter 9. Getting PHP to Talk to MySQL


        Now that you're comfortable using the MySQL client tools to manipulate data in the database, you can begin using PHP to display and modify data from the database. PHP has standard functions for working with the database.


        First, we're going to discuss PHP's built-in database functions. We'll also show you how to use the PEAR database functions, which provide the ability to use the same functions to access any supported database. This type of flexibility comes from a process called abstraction. With abstraction, the information you need to log into a database is placed into a standard format, which allows you to interact with MySQL, as well as other databases, in the same way. Similarly, MySQL-specific functions are replaced with generic ones that know how to talk to many databases.
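
        As a preview of what that standard format looks like, here is a hedged sketch of connecting with PEAR DB; the credentials and database name are placeholders:

        <?php
        require_once 'DB.php';

        // The DSN packs the login information into one standard format:
        // driver://user:password@host/database
        $connection = DB::connect('mysql://testuser:secret@localhost/test');
        if (DB::isError($connection)) {
            die($connection->getMessage());
        }
        ?>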


        In this chapter, you'll learn how to connect to a MySQL server from PHP, how to use PHP to access and retrieve stored data, and how to correctly display information to the user.





















        15.2 Transforming XML into HTML



        You may also have heard about the
        Extensible Stylesheet Language (XSL). XSL defines one set of XML
        elements to transform an XML document into some other type of
        document, and another set of elements to produce a formatted version
        of an XML document suitable for display. Browsers and other programs
        that need to render an XML document with different styles for
        different elements, such as a bold large font for a header and a
        regular font for paragraph text, use the formatting part of XSL. The
        transformation part of XSL is referred to
        as XSLT. XSLT can turn a source XML document, such as a document
        representing an order, into different forms using different
        stylesheets. This is useful in business-to-business (B2B)
        applications, where different partners often require the same
        information in slightly different formats. You can read more about
        XSL and XSLT at http://www.w3.org/TR/xsl/.



        In a web application, XSLT can transform
        structured XML data into HTML. Example 15-2 shows an
        example of a JSP page in which the same phone book information used
        in Example 15-1 is transformed into an HTML table.




        Example 15-2. Transforming XML to HTML (phone_html.jsp)

        <%@ page contentType="text/html" %>
        <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
        <%@ taglib prefix="x" uri="http://java.sun.com/jsp/jstl/xml" %>
        <html>
        <head>
        <title>Phone List</title>
        </head>
        <body bgcolor="white">

        <c:import url="htmltable.xsl" var="stylesheet" />
        <x:transform xslt="${stylesheet}">
        <?xml version="1.0" encoding="ISO-8859-1"?>
        <employees>
        <employee id="123">
        <first-name>Hans</first-name>
        <last-name>Bergsten</last-name>
        <telephone>310-555-1212</telephone>
        </employee>
        <employee id="456">
        <first-name>Bob</first-name>
        <last-name>Eckstein</last-name>
        <telephone>800-555-5678</telephone>
        </employee>
        <employee id="789">
        <first-name>Paula</first-name>
        <last-name>Ferguson</last-name>
        <telephone>213-555-1234</telephone>
        </employee>
        </employees>
        </x:transform>


        </body>
        </html>



        At the top of the page, the taglib directive for
        the JSTL XML library is included, along with the directive for the
        JSTL core library used in previous chapters.



        To transform the XML data, you first need to get hold of the XSLT
        stylesheet. The JSTL
        <c:import>
        action, described in Table 15-1, loads the
        stylesheet from the file specified by the url
        attribute and saves it in the variable named by the
        var attribute.



        Table 15-1. Attributes for JSTL <c:import>

        url (String; dynamic value accepted)
            Mandatory. A page- or context-relative path, or an absolute URL.

        context (String; dynamic value accepted)
            Optional. The context path for another application.

        charEncoding (String; dynamic value accepted)
            Optional. The character encoding for the imported content. Default is
            the encoding specified by the protocol used for the import, or
            ISO-8859-1 if no encoding is found.

        var (String; no dynamic value)
            Optional. The name of the variable to hold the result as a String.

        scope (String; no dynamic value)
            Optional. The scope for the variable, one of page, request, session,
            or application. page is the default.

        varReader (String; no dynamic value)
            Optional. The name of the variable to expose the result as a Reader
            to the body.




        The <c:import> action is very versatile. You
        can use it to import data from resources in the same application,
        another application on the same server (identified by the
        context attribute), and even from an external
        server by specifying an absolute URL for any protocol supported by
        the web container, such as HTTP, HTTPS, or FTP. Parameters can be
        defined either in the URL as a query string or using nested
        <c:param> actions. The imported data can be
        saved as a String in any scope, or exposed as a
        java.io.Reader to actions within the
        element's body. Using a Reader is
        slightly more efficient, because the
        <c:import> action doesn't
        have to read the input in this case; it just wraps a
        Reader around the input stream that a nested
        action then reads directly. I'll show you an example
        of this later. When you import a resource (such as a JSP page) that
        belongs to the same application, the target resource has access to
        all request parameters and variables in the request scope, the same
        way as when you use the <jsp:forward> action
        (Chapter 10).



        The transformation is performed by a JSTL action named
        <x:transform>,
        described in Table 15-2.



        Table 15-2. Attributes for JSTL <x:transform>

        doc (String, java.io.Reader, javax.xml.transform.Source,
             org.w3c.dom.Document, or the types exposed by <x:parse> and
             <x:set>; dynamic value accepted)
            Mandatory, unless specified as the body. The XML document to
            transform.

        xslt (String, java.io.Reader, javax.xml.transform.Source; dynamic
             value accepted)
            Mandatory. The XSLT stylesheet.

        docSystemId (String; dynamic value accepted)
            Optional. The system identifier for the XML document.

        xsltSystemId (String; dynamic value accepted)
            Optional. The system identifier for the XSLT stylesheet.

        result (javax.xml.transform.Result; dynamic value accepted)
            Optional. A Result object used to capture or process the
            transformation result.

        var (String; no dynamic value)
            Optional. The name of the variable to hold the result as an
            org.w3c.dom.Document.

        scope (String; no dynamic value)
            Optional. The scope for the variable; one of page, request,
            session, or application. page is the default.




        The XML document to transform can be specified as the body, as in
        Example 15-2, or as a variable through the
        doc attribute. The example XML document contains
        elements representing information about employees. The
        xslt attribute is set to the XSL stylesheet
        imported by the <c:import> action. It
        contains XSLT elements for transforming the XML document into an HTML
        table. In Example 15-2, both the
        var and the result attributes
        are omitted, so the <x:transform> action
        adds its result to the response. This is the most common use, but the
        var and result attributes can
        be used if the transformation result needs to be captured and
        processed further.



        Descriptions of all the XSLT elements would fill a
        book all by themselves, but Example 15-3 shows the
        stylesheet used here to give you an idea of how XSLT looks.




        Example 15-3. XSL stylesheet that generates an HTML table (htmltable.xsl)

        <?xml version="1.0"?> 
        <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

        <xsl:template match="employees">

        <table border="1" width="100%">
        <tr>
        <th>ID</th>
        <th>Employee Name</th>
        <th>Phone Number</th>
        </tr>
        <xsl:for-each select="employee">
        <tr>
        <td>
        <xsl:value-of select="@id"/>
        </td>
        <td>
        <xsl:value-of select="last-name"/>,
        <xsl:value-of select="first-name"/>

        </td>
        <td>
        <xsl:value-of select="telephone"/>
        </td>
        </tr>
        </xsl:for-each>
        </table>
        </xsl:template>

        </xsl:stylesheet>



        The XSLT elements are similar to JSP action elements in that they
        perform some action rather than identify information types. The XSLT
        elements select and process pieces of the source XML document. Here,
        the <xsl:template>
        element selects the top <employees> element
        in the source XML document, the
        <xsl:for-each>
        element loops over all nested <employee>
        elements, and the <xsl:value-of>
        elements extract the attribute values and nested elements for each
        <employee> element. The non-XSLT elements
        are used as template data, the same way as in JSP. You get the idea.



        An XSLT stylesheet can use parameters to represent dynamic data,
        provided to the XSLT processor when a document is transformed:



        <?xml version="1.0"?> 
        <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

        <xsl:param name="empName" />

        <xsl:template match="employees/employee[name = $empName]">
        ...


        The parameter in this example limits the
        <employee> elements to be processed to those
        that have a <name> element with the value
        specified by the parameter.



        To pass the parameter value to the XSLT stylesheet, you must use a
        nested <x:param> action in the
        <x:transform> body:



        <x:transform xslt="${stylesheet}">
        <x:param name="empName" value="${param:empName}" />
        <?xml version="1.0" encoding="ISO-8859-1"?>
        <employees>
        <employee id="123">
        <first-name>Hans</first-name>
        <last-name>Bergsten</last-name>
        <telephone>310-555-1212</telephone>
        </employee>
        ...
        </x:transform>


        Here I pass on a request parameter value to the stylesheet, but you
        can, of course, use any EL expression as the value.



        XML documents, including XSLT stylesheets, can contain references to
        external entities, for instance in the XSL
        <xsl:include> and
        <xsl:import> elements. If these references
        are written as relative paths in the document, a base URI must be
        used to establish what they are relative to. You can pass base URIs
        for the XSLT stylesheet and the XML source to the
        <x:transform> action through the
        xsltSystemId and the
        docSystemId attributes. The value can be any valid
        URI, such as an absolute file or HTTP URL or a context- or
        page-relative path.
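
        For instance, if htmltable.xsl pulled in shared templates with a
        relative <xsl:include>, you could anchor those references with a
        context-relative system identifier like this (the path is
        illustrative):

        <x:transform xslt="${stylesheet}" xsltSystemId="/WEB-INF/xsl/htmltable.xsl">
          ...
        </x:transform>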




















          35.7. Input Routing


          Ingress IP packets for which no route can be found in the cache by ip_route_input are checked against the routing tables by ip_route_input_slow, which is defined in net/ipv4/route.c and whose logic is shown in Figures 35-5(a) and 35-5(b). In this section, we describe the internals of this routine in detail.



          Figure 35-5a. ip_route_input_slow function




          Figure 35-5b. ip_route_input_slow function



          The function starts with a few sanity checks on the source and destination addresses; for instance, the source IP address must not be a multicast address. I already listed most of those checks in the section "Verbose Monitoring" in Chapter 31. More sanity checks are done later in the function.


          The routing table lookup is done with fib_lookup, the routine introduced in the section "fib_lookup Function." If fib_lookup cannot find a matching route, the packet is dropped; additionally, if the receiving interface is configured with forwarding enabled, an ICMP_UNREACHABLE message is sent back to the source. Note that the ICMP message is sent not by ip_route_input_slow but by its caller, who takes care of it upon seeing a return value of RTN_UNREACHABLE.


          In case of success, ip_route_input_slow distinguishes the following three cases:


          • Packet addressed to a broadcast address

          • Packet addressed to a local address

          • Packet addressed to a remote address


          In the first two cases, the packet is to be delivered locally, and in the third, it needs to be forwarded. The details of how local delivery and forwarding are handled can be found in the sections "Local Delivery" and "Forwarding." Here are some of the tasks they both need to take care of:



          Sanity checks, especially on the source address


          Source addresses are checked against illegal values and are run through fib_validate_source to detect spoofing attempts.


          Creation and initialization of a new cache entry (the local variable rth)


          See the following section, "Creation of a Cache Entry."



          35.7.1. Creation of a Cache Entry





          I said already in the section "Cache Lookup" in Chapter 33 that ip_route_input (and therefore ip_route_input_slow, in case of a cache miss) can be called just to consult the routing table, not necessarily to route an ingress packet. Because of that, ip_route_input_slow does not always create a new cache entry. When invoked from IP or an L4 protocol (such as IP over IP), the function always creates a cache entry. Currently, the only other possibility is invocation by ARP. Routes generated by ARP are cached only when they would be valid for proxy ARP. See the section "Processing ARPOP_REQUEST Packets" in Chapter 28.


          The new entry is allocated with dst_alloc. Of particular importance are the following initializations for the new cache entry:



          rth->u.dst.input



          rth->u.dst.output


          These two virtual functions are invoked respectively by dst_input and dst_output to complete the processing of ingress and egress packets, as shown in Figure 18-1 in Chapter 18. We already saw in the section "Setting Functions for Reception and Transmission" how these two routines can be initialized depending on whether a packet is to be forwarded, delivered locally, or dropped.


          rth->fl


          This flowi structure is used as a search key by cache lookups. It is important to note that rth->fl's fields are initialized to the input parameters received by ip_route_input_slow: this ensures that the next time a lookup is done with the same parameters, ip_route_input will be able to satisfy it with a cache lookup.


          rth->rt_spec_dst


          This is the preferred source address. See the following section, "Preferred Source Address Selection."




          35.7.2. Preferred Source Address Selection






          The route added to the routing cache is unidirectional, meaning that it will not be used to route traffic in the reverse direction toward the source IP address of the packet being routed. However, in some cases, the reception of a packet can trigger an action that requires the local host to choose a source IP address that it can use when transmitting a packet back to the sender.[*] This address, the preferred source IP address,[†] must be saved with the routing cache entry that routed the ingress packet. Here are two cases where that address, which is saved in a field called rt_spec_dst, comes in handy:

          [*] The preferred source IP address to use for traffic generated locally (i.e., packets whose transmission is not triggered or influenced by the reception of another packet) may be different. See the section "Selecting the Source IP Address."

          [†] RFC 1122 calls it the "specific destination."



          ICMP


          When a host receives an ICMP ECHO REQUEST message (popularly known as "pings" from the name of the command that usually generates them), the host returns an ICMP ECHO REPLY unless it is explicitly configured not to. The rt_spec_dst of the route used for the ingress ICMP ECHO REQUEST is used as the source address for the routing lookup made to route the ICMP ECHO REPLY. See icmp_reply in net/ipv4/icmp.c, and see Chapter 25. The ip_send_reply routine in net/ipv4/ip_output.c does something similar.


          IP options


          A couple of IP options require the intermediate hosts between the source and the destination to write the IP addresses of their receiving interfaces into the IP header. The address that Linux writes is rt_spec_dst. See the description of ip_options_compile in Chapter 19.



          The preferred source is selected through the fib_validate_source function mentioned in the section "Helper Routines" and called by ip_route_input_slow.


          ip_route_input_slow initializes the preferred source IP address rt_spec_dst based on the destination address of the packet being routed:



          Packet addressed to a local address


          In this case, the local address to which the packet was addressed becomes the preferred source address. (The ICMP example previously cited falls into this case.)


          Broadcast packet


          A broadcast address cannot be used as a source address for egress packets, so in this case, ip_route_input_slow does more investigation with the help of two other routines: inet_select_addr and fib_validate_source (see the section "Helper Routines").


          When the source IP address is not set in the received packet (that is, when it is all zeroes), inet_select_addr selects the first address with scope RT_SCOPE_LINK on the device the packet was received from. This is because packets are sent with a null source address when addressed to the limited broadcast address, which is an address with scope RT_SCOPE_LINK. An example is a DHCP discovery message.


          When the source address is not all zeroes, fib_validate_source takes care of it.


          Forwarded packet


          In this case, the choice is left to fib_validate_source. (The IP options example previously cited falls into this case.)


          The preferred source IP to use for packets matching a given route can be explicitly configured by the user with a command like this:



          ip route add 10.0.1.0/24 via 10.0.0.1 src 10.0.3.100



          In this example, when transmitting packets to the hosts of the 10.0.1.0/24 subnet, the kernel will use 10.0.3.100 as the source IP address. Of course, only locally configured addresses are accepted: this means that for the previous command to be accepted, 10.0.3.100 must have been configured on one of the local interfaces, but not necessarily on the same device used to reach the 10.0.1.0/24 subnet. (Remember that in Linux, addresses belong to the host, not to the devices; see the section "Responding from Multiple Interfaces" in Chapter 28.) An administrator normally provides a source address when she does not want to use the one that would be picked by default from the egress device.


          Figure 35-6 summarizes how rt_spec_dst is selected.




          35.7.3. Local Delivery




          The following types of packets are delivered locally by initializing dst->input appropriately, as we saw in the section "Initialization of Function Pointers for Ingress Traffic":


          • Packets addressed to locally configured addresses, including multicast addresses

          • Packets addressed to broadcast addresses



          Figure 35-6. Selection of rt_spec_dst



          ip_route_input_slow recognizes two kinds of broadcasts:



          Limited broadcasts


          This is an address consisting of all ones: 255.255.255.255.[*] It can be recognized easily without a call to fib_lookup. Limited broadcasts are delivered to any host on the link, regardless of the subnet the host is configured on. No table lookup is required.

          [*] There is an obsolete form of limited broadcast that consists of all zeros: 0.0.0.0.


          Subnet broadcasts


          These broadcasts are directed at hosts configured on a specific subnet. If hosts are configured on different subnets reachable via the same device (see Figure 30-4(c) in Chapter 30), only the right ones will receive a subnet broadcast. Unlike a limited broadcast, subnet broadcasts cannot be recognized without involving the routing table with fib_lookup. For example, the address 10.0.1.127 might be a subnet broadcast in 10.0.1.0/25, but not in 10.0.1.0/24.


          ip_route_input_slow accepts broadcasts only if they are generated by the IP protocol. You might think that this is a superfluous check, given that ip_route_input_slow is called to route IP packets. However, as I said in the section "Cache Lookup" in Chapter 33, the input buffer to ip_route_input (and therefore to ip_route_input_slow in case of a cache miss) does not necessarily represent a packet to be routed.


          If everything goes fine, a new cache entry, rtable, is created, initialized, and inserted into the routing cache.


          Note that there is no need to handle Multipath for packets that are delivered locally.




          35.7.4. Forwarding


          If the packet is to be forwarded but the configuration of the ingress device has disabled forwarding, the packet cannot be transmitted and must be dropped. The forwarding status of the device is checked with IN_DEV_FORWARD. Figure 35-7 shows the internals of ip_mkroute_input; in particular, it shows what that function looks like when there is no support for multipath caching (i.e., when ip_mkroute_input ends up being an alias to ip_mkroute_input_def). In the section "Multipath Caching," you will see how the other case differs.


          If the matching route returned by fib_lookup includes more than one next hop, fib_select_multipath is used to choose among them. When multipath caching is supported, the selection is taken care of differently. The section "Effects of Multipath on Next Hop Selection" describes the algorithm used for the selection.


          The source address is validated with fib_validate_source. Then, based on the factors we saw in the section "Transmitting ICMP_REDIRECT Messages" in Chapter 31, the kernel may decide to send an ICMP_REDIRECT to the source. In that case, the ICMP message is sent not by ip_route_input_slow directly, but by ip_forward, which takes care of it upon seeing the RTCF_DOREDIRECT flag.


          As we saw in the section "Creation of a Cache Entry," the result of a routing lookup is not always cached.




          35.7.5. Routing Failure







          When a packet cannot be routed, either because of host configuration or because no route matches, the new route is added to the cache with dst->input initialized to ip_error. This means that all the ingress packets matching this route will be processed by ip_error. That function, when invoked by dst_input, will generate the proper ICMP_UNREACHABLE message depending on why the packet cannot be routed, and will drop the packet. Adding the erroneous route to the cache is useful because it can speed up the error processing of further packets sent to the same incorrect address.


          ICMP messages are rate limited by ip_error. We already saw in the section "Egress ICMP REDIRECT Rate Limiting" in Chapter 33 that ICMP_REDIRECT messages are also rate limited by the DST. The rate limiting discussed here is independent of the other, but is enforced using the same fields of the dst_entry. This is possible because given any route, these two forms of rate limiting are mutually exclusive: one applies to ICMP_REDIRECT messages and the other one applies to ICMP_UNREACHABLE messages.


          Here is how rate limiting is implemented by ip_error with a simple token bucket algorithm.


          The timestamp dst.rate_last is updated every time ip_error is invoked to generate an ICMP message. dst.rate_tokens specifies how many ICMP messages (also known as the number of tokens, or the budget) can be sent before the rate limiting kicks in and new ICMP_UNREACHABLE transmission requests will be ignored. The budget is decremented each time an ICMP_UNREACHABLE message is sent, and is replenished by ip_error itself. The budget cannot exceed the maximum number ip_rt_error_burst, which represents, as its name suggests, the maximum number of ICMP messages a host can send in 1 second (i.e., the burst). Its value is expressed in Hz so that it is easy to add tokens based on the difference between the local time jiffies and dst.rate_last.
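
          In rough pseudo-C, the bookkeeping just described amounts to the following. This is a sketch of the logic, not the literal statements in net/ipv4/route.c, and it assumes the per-message price ip_rt_error_cost defined in that same file:

          unsigned long now = jiffies;

          /* Earn back one token per jiffy elapsed, capped at the burst size. */
          rt->u.dst.rate_tokens += now - rt->u.dst.rate_last;
          if (rt->u.dst.rate_tokens > ip_rt_error_burst)
                  rt->u.dst.rate_tokens = ip_rt_error_burst;
          rt->u.dst.rate_last = now;

          /* Send only if the budget allows it; each ICMP message has a cost. */
          if (rt->u.dst.rate_tokens >= ip_rt_error_cost) {
                  rt->u.dst.rate_tokens -= ip_rt_error_cost;
                  /* ... build and send the ICMP_UNREACHABLE message ... */
          }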



          Figure 35-7. ip_mkroute_input function



          When ip_error is invoked and at least one token is available, the function is allowed to transmit an ICMP_UNREACHABLE message. The ICMP subtype is derived from dst.error, which was initialized by ip_route_input_slow when fib_lookup failed to find a route.





















          5.3. Augmenting Associations in Collections


          All right, we've got a handle on what we need to do if we want our albums' tracks to be kept in the right order. What
          about the additional information we'd like to keep, such as the disc on
          which the track is found? When we map a collection of associations, we've
          seen that Hibernate creates a join table in which to store the
          relationships between objects. And we've just seen how to add an index
          column to the ALBUM_TRACKS table to
          maintain an ordering for the collection. Ideally, we'd like to have the
          ability to augment that table with more information of our own choosing,
          in order to record the other details we'd like to know about album
          tracks.


          As it turns out, we can do just that, and in a very straightforward
          way.



          5.3.1. How do I do that?


          Up to this point we've seen two ways of getting tables into our
          database schema. The first was by explicitly mapping properties of a
          Java object onto columns of a table. The second was defining a
          collection (of values or associations), and specifying the table and
          columns used to manage that collection. As it turns out, there's nothing
          that prevents us from using a single table in both ways. Some of its
          columns can be used directly to map to our own objects' properties,
          while the others can manage the mapping of a collection. This lets us
          achieve our goals of recording the tracks that make up an album in an
          ordered way, augmented by additional details to support multidisc
          albums.


          NOTE


          This flexibility takes a little getting used to, but it makes
          sense, especially if you think about mapping objects to an existing
          database schema.



          We'll want a new data object, AlbumTrack,
          to contain information about how a track is used on an album. Since
          we've already seen several examples of how to map full-blown entities
          with independent existence, and there really isn't a need for our
          AlbumTrack object to exist outside
          the context of an Album entity, this is a good
          opportunity to look at mapping a
          component. Recall that in Hibernate jargon an entity is an object
          that stands on its own in the persistence mechanism: it can be created,
          queried, and deleted independently of any other objects, and therefore
          has its own persistent identity (as reflected by its mandatory id property). A component, in contrast, is an
          object that can be saved to and retrieved from the database, but only as
          a subordinate part of some other entity. In this case, we'll define a
          list of AlbumTrack objects as a component part of
          our Album entity. Example 5-4 shows a
          mapping for the Album class that achieves
          this.


          Example 5-4. Album.hbm.xml, the mapping definition for an album



          <?xml version="1.0"?>
          <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
          "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">

          <hibernate-mapping>
          <class name="com.oreilly.hh.data.Album" table="ALBUM">
          <meta attribute="class-description">
          Represents an album in the music database, an organized list of tracks.
          @author Jim Elliott (with help from Hibernate)
          </meta>

          <id column="ALBUM_ID" name="id" type="int">
          <meta attribute="scope-set">protected</meta>
          <generator class="native" />
          </id>

          <property name="title" type="string">
          <meta attribute="use-in-tostring">true</meta>
          <column index="ALBUM_TITLE" name="TITLE" not-null="true" />
          </property>

          <property name="numDiscs" type="integer" />

          <set name="artists" table="ALBUM_ARTISTS">
          <key column="ALBUM_ID" />
          <many-to-many class="com.oreilly.hh.data.Artist" column="ARTIST_ID" />
          </set>

          <set name="comments" table="ALBUM_COMMENTS">
          <key column="ALBUM_ID" />
          <element column="COMMENT" type="string" />
          </set>

          <list name="tracks" table="ALBUM_TRACKS">
          <meta attribute="use-in-tostring">true</meta>
          <key column="ALBUM_ID" />
          <index column="LIST_POS" />
          <composite-element class="com.oreilly.hh.data.AlbumTrack">
          <many-to-one class="com.oreilly.hh.data.Track" name="track">
          <meta attribute="use-in-tostring">true</meta>
          <column name="TRACK_ID" />
          </many-to-one>
          <property name="disc" type="integer" />
          <property name="positionOnDisc" type="integer" />
          </composite-element>
          </list>

          <property name="added" type="date">
          <meta attribute="field-description">
          When the album was created
          </meta>
          </property>

          </class>
          </hibernate-mapping>





          Once we've created the file Album.hbm.xml, we need to add it to the list
          of mapping resources in hibernate.cfg.xml. Open up the hibernate.cfg.xml file in src, and add the new
          Album mapping line shown in Example 5-5.


          Example 5-5. Adding Album.hbm.xml to the Hibernate configuration



          <?xml version='1.0' encoding='utf-8'?>
          <!DOCTYPE hibernate-configuration PUBLIC
          "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
          "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">

          <hibernate-configuration>

          <session-factory>

          ...
          <mapping resource="com/oreilly/hh/data/Track.hbm.xml"/>
          <mapping resource="com/oreilly/hh/data/Artist.hbm.xml"/>
          <mapping resource="com/oreilly/hh/data/Album.hbm.xml"/>


          </session-factory>

          </hibernate-configuration>




          A lot of this is similar to mappings we've seen before, but the
          tracks list is worth some careful
          examination. The discussion gets involved, so let's step back a minute
          and recall exactly what we're trying to accomplish.


          We want our album to keep an ordered list of the tracks that make
          it up, along with additional information about each track that tells
          which disc it's on (in case the album has multiple discs) and the
          track's position within the disc. This conceptual relationship is shown
          in the middle of Figure 5-1. The association between albums and tracks
          is mediated by an "Album Tracks" object that adds disc and position
          information, as well as keeping them in the right order. The model of
          the tracks themselves is familiar (we're leaving out artist and comment
          information in this diagram, in an effort to keep it simple). This model
          is what we've captured in the album mapping document, Example 5-4. Let's examine
          the details of how it was done. Later we'll look at how Hibernate turns
          this specification into Java code (the bottom part of Figure 5-1) and a
          database schema (the top part).



          Figure 5-1. Models of the tables, concepts, and objects involved in
          representing album tracks








          All right, armed with this reminder and elaboration of the
          conceptual framework, we're ready to look at the details of Example 5-4.
          This mapping achieves the goals with which we started, and
          illustrates how arbitrary information can be attached to a collection of
          associations.


          The source for the component class itself can be found in Example 5-6, and it might
          help clarify this discussion. Compare this source code with its
          graphical representation at the bottom of Figure 5-1.


          You may have noticed that we chose an explicit column name
          of TRACK_ID to use for
          the <many-to-one> link to the TRACK table. We've actually been doing this in
          a number of places, but previously it didn't require an entire separate
          line. It's worth talking about the reasoning behind this choice. Without
          this instruction, Hibernate will just use the property name (track) for the column name. You can use any
          names you want for your columns, but Java
          Database Best Practices
          encourages naming foreign key columns
          the same as the primary keys in the original tables to which they refer.
          This helps data modeling tools recognize and display the "natural joins"
          the foreign keys represent, which makes it easier for people to
          understand and work with the data. This consideration is also why I
          included the table names as part of the primary keys' column
          names.




          5.3.2. What just happened?


          I was all set to explain that by choosing to use a composite
          element to encapsulate our augmented track list, we'd have to write the
          Java source for AlbumTrack ourselves. I was sure
          this went far beyond the capabilities of the code generation tool. Much
          to my delight, when I tried ant codegen to see what sort of errors would
          result, the command reported success, and both Album.java and
          AlbumTrack.java appeared in the source directory!


          NOTE


          Sometimes it's nice to be proved
          wrong.



          It was at this point that I went back and added the <use-in-tostring> meta element for the track's many-to-one mapping inside the
          component. I wasn't sure this would work either, because the only
          examples of its use I'd found in the reference manual were attached to
          actual <property> tags. But work it
          did, exactly as I had hoped.


          The Hibernate best practices encourage using fine-grained classes
          and mapping them as components. Given how easily the code generation
          tool allows you to create them from your mapping documents, there is
          absolutely no excuse for ignoring this advice. Example 5-6 shows the
          source generated for our nested composite mapping.



          Example 5-6. Code generated for AlbumTrack.java

          package com.oreilly.hh.data;
          // Generated Jun 21, 2007 11:11:48 AM by Hibernate Tools 3.2.0.b9

          /**
           * Represents an album in the music database, an organized list of tracks.
           * @author Jim Elliott (with help from Hibernate)
           */
          public class AlbumTrack implements java.io.Serializable {

              private Track track;
              private Integer disc;
              private Integer positionOnDisc;

              public AlbumTrack() {
              }

              public AlbumTrack(Track track, Integer disc, Integer positionOnDisc) {
                  this.track = track;
                  this.disc = disc;
                  this.positionOnDisc = positionOnDisc;
              }

              public Track getTrack() {
                  return this.track;
              }

              public void setTrack(Track track) {
                  this.track = track;
              }

              public Integer getDisc() {
                  return this.disc;
              }

              public void setDisc(Integer disc) {
                  this.disc = disc;
              }

              public Integer getPositionOnDisc() {
                  return this.positionOnDisc;
              }

              public void setPositionOnDisc(Integer positionOnDisc) {
                  this.positionOnDisc = positionOnDisc;
              }

              /**
               * toString
               * @return String
               */
              public String toString() {
                  StringBuffer buffer = new StringBuffer();

                  buffer.append(getClass().getName()).append("@").append(
                      Integer.toHexString(hashCode())).append(" [");
                  buffer.append("track").append("='").append(getTrack()).append("' ");
                  buffer.append("]");

                  return buffer.toString();
              }
          }





          This looks similar to the generated code for entities we've seen
          in previous chapters, but it lacks an id property, which makes
          sense. Component classes don't need identifier fields, and they need not
          implement any special interfaces. The class JavaDoc is shared with the
          Album class, in which this component is used. The
          source of the Album class itself is a typical
          generated entity, so there's no need to reproduce it here.


          At this point we can build the schema for these new mappings, via
          ant schema. Example 5-7 shows
          highlights of the resulting schema creation process. This is the
          concrete HSQLDB representation of the schema modeled at the top of Figure 5-1.


          Example 5-7. Additions to the schema caused by our new album mapping



          ...
          [hibernatetool] create table ALBUM (ALBUM_ID integer generated by default
          as identity (start with 1), TITLE varchar(255) not null,
          numDiscs integer, added date, primary key (ALBUM_ID));
          ...
          [hibernatetool] create table ALBUM_ARTISTS (ALBUM_ID integer not null,
          ARTIST_ID integer not null,
          primary key (ALBUM_ID, ARTIST_ID));
          ...
          [hibernatetool] create table ALBUM_COMMENTS (ALBUM_ID integer not null,
          COMMENT varchar(255));
          ...
          [hibernatetool] create table ALBUM_TRACKS (ALBUM_ID integer not null,
          TRACK_ID integer, disc integer, positionOnDisc integer,
          LIST_POS integer not null,
          primary key (ALBUM_ID, LIST_POS));
          ...
          [hibernatetool] create index ALBUM_TITLE on ALBUM (TITLE);
          ...
          [hibernatetool] alter table ALBUM_ARTISTS add constraint FK7BA403FC620962DF
          foreign key (ARTIST_ID) references ARTIST;
          [hibernatetool] alter table ALBUM_ARTISTS add constraint FK7BA403FC3C553835
          foreign key (ALBUM_ID) references ALBUM;
          [hibernatetool] alter table ALBUM_COMMENTS add constraint FK1E2C21E43C553835
          foreign key (ALBUM_ID) references ALBUM;
          [hibernatetool] alter table ALBUM_TRACKS add constraint FKD1CBBC782DCBFAB5
          foreign key (TRACK_ID) references TRACK;
          [hibernatetool] alter table ALBUM_TRACKS add constraint FKD1CBBC783C553835
          foreign key (ALBUM_ID) references ALBUM;
          ...




          You may find that making radical changes to the schema
          causes problems for Hibernate or the HSQLDB driver. When I
          switched to this new approach for mapping album tracks, I ran into
          trouble because the first set of mappings established database
          constraints that Hibernate didn't know to drop before trying to build
          the revised schema. This prevented it from dropping and recreating
          some tables. If this ever happens to you, you can delete the database
          file (music.script in the
          data directory) and start from
          scratch, which should work fine. Recent versions of Hibernate also
          seem more robust in scenarios like this.




          Figure 5-2 shows our enriched schema in HSQLDB's
          graphical management interface.



          Figure 5-2. The schema with album-related tables








          You might wonder why we use the separate
          Track class at all, rather than simply embedding
          all that information directly in our enhanced
          AlbumTrack collection. The simple answer is that
          not all tracks are part of an album—some might be singles, downloads, or
          otherwise independent. Given that we need a separate table to keep track
          of these anyway, it would be a poor design choice to duplicate its
          contents in the AlbumTracks table rather than
          associating with it. There is also a more subtle advantage to this
          approach, which is actually used in my own music database: this
          structure allows us to share a single track file between multiple
          albums. If the exact same recording appears on an album, a "best of"
          collection, and one or more period collections or sound tracks, linking
          all these albums to the same track file saves disk space.


          Another point worth noting about the ALBUM_TRACKS schema is that there is no obvious
          ID column. If you look back at the
          schema definition Hibernate emitted for
          ALBUM_TRACKS
          in Example 5-7, you'll see the phrase primary key
          (ALBUM_ID, LIST_POS). Hibernate has noticed that, given the
          relationships we've requested in Album.hbm.xml, a row in the ALBUM_TRACKS table can be uniquely identified
          by the combination of the ID of the Album with
          which it's associated and its index within the list it's modeling, so it
          has set these up as a composite key
          for the table. This is a nice little optimization we didn't even have to
          think about. Also notice that one of those columns is a property of the
          AlbumTrack class while the other is not. We'll
          look at a slightly different way to model this relationship in Chapter 7.
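For reference, the kind of mapping that produces this composite key looks roughly like the sketch below. This is not quoted from the book's Album.hbm.xml; it is a plausible reconstruction from the generated DDL, and the element and column names are assumptions:

<!-- Hypothetical excerpt from Album.hbm.xml: an indexed list of
     composite elements, keyed by ALBUM_ID plus the list position. -->
<list name="tracks" table="ALBUM_TRACKS">
  <key column="ALBUM_ID"/>
  <list-index column="LIST_POS"/>
  <composite-element class="com.oreilly.hh.data.AlbumTrack">
    <many-to-one name="track" class="com.oreilly.hh.data.Track"
                 column="TRACK_ID"/>
    <property name="disc"/>
    <property name="positionOnDisc"/>
  </composite-element>
</list>

In a mapping of this shape, the <list-index> element is what supplies the LIST_POS column, which Hibernate combines with the foreign key to form the composite primary key.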


          Let's look at some sample code showing how to use these new data
          objects. Example 5-8 shows
          a class that creates an album record and its list of tracks, then prints
          it out to test the debugging support that we've configured through the
          toString() method.


          Example 5-8. Source of AlbumTest.java



package com.oreilly.hh;

import org.hibernate.*;
import org.hibernate.cfg.Configuration;

import com.oreilly.hh.data.*;

import java.sql.Time;
import java.util.*;

/**
 * Create sample album data, letting Hibernate persist it for us.
 */
public class AlbumTest {

    /**
     * Quick and dirty helper method to handle repetitive portion of creating
     * album tracks. A real implementation would have much more flexibility.
     */
    private static void addAlbumTrack(Album album, String title, String file,
                                      Time length, Artist artist, int disc,
                                      int positionOnDisc, Session session) {
        Track track = new Track(title, file, length, new HashSet<Artist>(),
                                new Date(), (short)0, new HashSet<String>());
        track.getArtists().add(artist);
        session.save(track);
        album.getTracks().add(new AlbumTrack(track, disc, positionOnDisc));
    }

    public static void main(String args[]) throws Exception {
        // Create a configuration based on the properties file we've put
        // in the standard place.
        Configuration config = new Configuration();
        config.configure();

        // Get the session factory we can use for persistence
        SessionFactory sessionFactory = config.buildSessionFactory();

        // Ask for a session using the JDBC information we've configured
        Session session = sessionFactory.openSession();
        Transaction tx = null;
        try {
            // Create some data and persist it
            tx = session.beginTransaction();

            Artist artist = CreateTest.getArtist("Martin L. Gore", true,
                                                 session);
            Album album = new Album("Counterfeit e.p.", 1,
                                    new HashSet<Artist>(), new HashSet<String>(),
                                    new ArrayList<AlbumTrack>(5), new Date());
            album.getArtists().add(artist);
            session.save(album);

            addAlbumTrack(album, "Compulsion", "vol1/album83/track01.mp3",
                          Time.valueOf("00:05:29"), artist, 1, 1, session);
            addAlbumTrack(album, "In a Manner of Speaking",
                          "vol1/album83/track02.mp3", Time.valueOf("00:04:21"),
                          artist, 1, 2, session);
            addAlbumTrack(album, "Smile in the Crowd",
                          "vol1/album83/track03.mp3", Time.valueOf("00:05:06"),
                          artist, 1, 3, session);
            addAlbumTrack(album, "Gone", "vol1/album83/track04.mp3",
                          Time.valueOf("00:03:32"), artist, 1, 4, session);
            addAlbumTrack(album, "Never Turn Your Back on Mother Earth",
                          "vol1/album83/track05.mp3", Time.valueOf("00:03:07"),
                          artist, 1, 5, session);
            addAlbumTrack(album, "Motherless Child", "vol1/album83/track06.mp3",
                          Time.valueOf("00:03:32"), artist, 1, 6, session);

            System.out.println(album);

            // We're done; make our changes permanent
            tx.commit();

            // This commented out section is for experimenting with deletions.
            //tx = session.beginTransaction();
            //album.getTracks().remove(1);
            //session.update(album);
            //tx.commit();

            //tx = session.beginTransaction();
            //session.delete(album);
            //tx.commit();

        } catch (Exception e) {
            if (tx != null) {
                // Something went wrong; discard all partial changes
                tx.rollback();
            }
            throw new Exception("Transaction failed", e);
        } finally {
            // No matter what, close the session
            session.close();
        }

        // Clean up after ourselves
        sessionFactory.close();
    }
}






          In this simple example we're creating an album with just one disc.
          This quick-and-dirty method can't cope with many variations, but it does
          allow the example to be compressed nicely.


          We also need a new target at the end of build.xml to invoke the class. Add the lines
          of Example 5-9 at the
          end of the file (but inside the <project> element, of course).


          Example 5-9. New target to run our album test class



<target name="atest" description="Creates and persists some album data"
        depends="compile">
  <java classname="com.oreilly.hh.AlbumTest" fork="true">
    <classpath refid="project.class.path"/>
  </java>
</target>




          With this in place, assuming you've generated the schema, run
          ant ctest followed by ant atest. (Running ctest first is optional, but having some
          extra data in there to begin with makes the album data somewhat more
          interesting. Recall that you can run these targets in one command as
          ant ctest atest, and if you want to
          start by erasing the contents of the database first, you can invoke
          ant schema ctest atest.) The
          debugging output produced by this command is shown in Example 5-10. Although
          it's admittedly cryptic, you should be able to see that the album and tracks
          have been created and that the order of the tracks has been
          maintained.


          Example 5-10. Output from running the album test



          atest:
          [java] com.oreilly.hh.data.Album@5bcf3a [title='Counterfeit e.p.' tracks='[
          com.oreilly.hh.data.AlbumTrack@6a346a [track='com.oreilly.hh.data.Track@973271 [
          title='Compulsion' volume='Volume[left=100, right=100]' sourceMedia='CD' ]' ], c
          om.oreilly.hh.data.AlbumTrack@8e0e1 [track='com.oreilly.hh.data.Track@e3f8b9 [ti
          tle='In a Manner of Speaking' volume='Volume[left=100, right=100]' sourceMedia='
          CD' ]' ], com.oreilly.hh.data.AlbumTrack@de59f0 [track='com.oreilly.hh.data.Trac
          k@e2d159 [title='Smile in the Crowd' volume='Volume[left=100, right=100]' source
          Media='CD' ]' ], com.oreilly.hh.data.AlbumTrack@1e5a36 [track='com.oreilly.hh.da
          ta.Track@b4bb65 [title='Gone' volume='Volume[left=100, right=100]' sourceMedia='
          CD' ]' ], com.oreilly.hh.data.AlbumTrack@7b1683 [track='com.oreilly.hh.data.Trac
          k@3171e [title='Never Turn Your Back on Mother Earth' volume='Volume[left=100, r
          ight=100]' sourceMedia='CD' ]' ], com.oreilly.hh.data.AlbumTrack@e2e4d7 [track='
          com.oreilly.hh.data.Track@1dfc6e [title='Motherless Child' volume='Volume[left=1
          00, right=100]' sourceMedia='CD' ]' ]]' ]



          If we run our old query test, we can see both the old and new
          data, as in Example 5-11.


          Example 5-11. All tracks less than seven minutes long, whether from albums or
          otherwise



          % ant qtest
          Buildfile: build.xml
          ...
          qtest:
          [java] Track: "Russian Trance" (PPK) 00:03:30
          [java] Track: "Video Killed the Radio Star" (The Buggles) 00:03:49
          [java] Track: "Gravity's Angel" (Laurie Anderson) 00:06:06
          [java] Track: "Adagio for Strings (Ferry Corsten Remix)" (Ferry Corsten, Sa
          muel Barber, William Orbit) 00:06:35
          [java] Track: "Test Tone 1" 00:00:10
          [java] Comment: Pink noise to test equalization
          [java] Track: "Compulsion" (Martin L. Gore) 00:05:29
          [java] Track: "In a Manner of Speaking" (Martin L. Gore) 00:04:21
          [java] Track: "Smile in the Crowd" (Martin L. Gore) 00:05:06
          [java] Track: "Gone" (Martin L. Gore) 00:03:32
          [java] Track: "Never Turn Your Back on Mother Earth" (Martin L. Gore) 00:03
          :07
          [java] Track: "Motherless Child" (Martin L. Gore) 00:03:32

          BUILD SUCCESSFUL
          Total time: 2 seconds



          Finally, Figure 5-3 shows a query in the HSQLDB interface that
          examines the contents of the ALBUM_TRACKS table.



          Figure 5-3. Our augmented collection of associations in action










          5.3.3. What about…


          …deleting, rearranging, and otherwise manipulating these
          interrelated pieces of information? This is actually supported
          fairly automatically, as the next section
          illustrates.










          13.4 Using SSL from within J2EE Programs



          Using the SSL support built into a J2EE product is a simple and effective approach, but it can be limiting, because the communication between the client and the server can use only the capabilities offered by the J2EE container. A direct SSL socket connection between client and server allows for more sophisticated and responsive applications. For example, a Java servlet can communicate through an SSL connection and interoperate with other services and processes that are not necessarily written in the Java language. This can be done by using a Java package that provides SSL functionality.



          13.4.1 JSSE



          The Java Secure Socket Extension is a set of Java packages and APIs that enable the use of SSL from within Java programs.[7] JSSE provides a framework and an implementation for the SSL and TLS protocols and includes functionality for data encryption, server authentication, message integrity, and optional client authentication. Using JSSE, developers can enforce secure passage of data between a client and a server running any application protocol over TCP/IP, such as HTTP, TELNET, or FTP.

          [7] JSSE was originally shipped as a standard extension to the J2SE V1.2 and V1.3 platforms. JSSE has been integrated into the J2SE platform starting with J2SE V1.4. JSSE is not an integral part of J2EE, but J2EE applications can use it as explained in Section 4.10.1 on page 145.



          From a programmatic point of view, the main advantage of using JSSE is that a programmer does not have to bother with the details of the record and handshake protocols and with the encryption and decryption of the information exchanged. The underlying JSSE implementation will take care of those details at runtime. This way, the risk of creating subtle but dangerous security vulnerabilities is minimized. Moreover, JSSE simplifies application development by providing developers with a building block that can be integrated directly into existing applications. As SSL and TLS are standard protocols, Java programs using JSSE can communicate via SSL with other processes and services that may not have been written in the Java language.



          The JSSE design resembles that of the Java Cryptography Architecture and Java Cryptography Extension (see Section 11.1 on page 377). Like the JCA and the JCE, JSSE provides both an API framework and an implementation of that API and is designed so that any vendor's implementation can be installed and used without having to recompile the application. JSSE uses the same provider architecture defined in the JCA and the JCE (see Section 11.1.3 on page 382). This enables applications to be vendor neutral, implementation independent, and, whenever possible, algorithm independent.



          The JSSE API consists of the package javax.net and its subpackage javax.net.ssl. This API supplements the core network and cryptographic services defined in the java.net and java.security packages by providing extended networking socket classes, trust managers (see Section 13.4.2), key managers, SSL contexts, and a socket-factory framework for encapsulating socket-creation behavior. This API supports SSL V2.0 and V3.0, TLS V1.0, and HTTPS. These security protocols encapsulate a normal bidirectional stream socket, and the JSSE API adds transparent support for authentication, encryption, and data-integrity protection.
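As an illustration of how little of this machinery is exposed to the application, here is a minimal JSSE client sketch. The host name and the request it sends are placeholders, not examples from this book; everything else uses only the standard javax.net and javax.net.ssl APIs:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class SSLClientSketch {
    public static void main(String[] args) throws Exception {
        // The default factory is configured from the installed provider
        // and the javax.net.ssl.* system properties.
        SSLSocketFactory factory =
            (SSLSocketFactory) SSLSocketFactory.getDefault();

        // "www.example.com" is a placeholder host.
        SSLSocket socket =
            (SSLSocket) factory.createSocket("www.example.com", 443);
        try {
            // Force the handshake now; otherwise, it happens on first I/O.
            socket.startHandshake();

            // From here on, the streams behave like those of a plain
            // socket; JSSE encrypts and decrypts transparently.
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream()));
            out.println("GET / HTTP/1.0");
            out.println();
            System.out.println(in.readLine());
        } finally {
            socket.close();
        }
    }
}

Note that nothing in this code deals with certificates, cipher suites, or record framing; those details are handled by the provider beneath the socket abstraction.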



          In Section 4.10.1 on page 145, we studied how servlets can become HTTPS clients by invoking an HTTPS URL programmatically. In Section 13.5 on page 462, we show how a servlet can use the JSSE API.



          13.4.2 Trust Managers



          A trust manager is a software component responsible for deciding whether the credentials presented by a peer during an authentication process should be considered trusted. In JSSE, trust managers are represented as implementations of the javax.net.ssl.TrustManager interface. TrustManager instances are created by either using a javax.net.ssl.TrustManagerFactory object or instantiating a class implementing the TrustManager interface.
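The factory route typically looks like the following sketch, in which trust managers built from a truststore are installed into an SSL context. The truststore file name and password are made up for illustration:

import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactory;

public class TrustManagerSketch {
    public static void main(String[] args) throws Exception {
        // Load the truststore; "mytruststore" and "changeit" are
        // illustrative values, not defaults mandated by JSSE.
        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        FileInputStream fis = new FileInputStream("mytruststore");
        try {
            trustStore.load(fis, "changeit".toCharArray());
        } finally {
            fis.close();
        }

        // Ask the factory for trust managers backed by that truststore.
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(
            TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);
        TrustManager[] trustManagers = tmf.getTrustManagers();

        // Sockets created from this context will make their trust
        // decisions against the entries in the truststore.
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, trustManagers, null);
    }
}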



          13.4.3 Truststores



          A truststore is a regular keystore that is used when authenticating a peer to decide whether the peer should be trusted. In JSSE, truststores are represented as java.security.KeyStore objects (see Section 11.2.7 on page 395). As the entries in a truststore are used to make trust decisions during an authentication process, an entry should be added to a truststore only by a system administrator and only if the entity represented by that entry is considered trusted.



          The J2SE reference implementation comes with a default truststore file called cacerts, located in the lib/security subdirectory of the Java home directory. This file contains the digital certificates of common CAs, such as VeriSign and Thawte. The default password to access and modify this truststore is changeit. Users of a Java system can specify a different truststore by setting the system property javax.net.ssl.trustStore to the truststore file of their preference. This can be done programmatically, through a call to java.lang.System.setProperty(), or statically, using the -Djavax.net.ssl.trustStore option of the java command.
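For instance, selecting a different truststore is a one-line affair; the path below is a placeholder:

// Programmatic form: point JSSE at a custom truststore before any
// SSL sockets are created. The path is illustrative.
System.setProperty("javax.net.ssl.trustStore", "/path/to/mytruststore");

// Equivalent static form on the command line:
//   java -Djavax.net.ssl.trustStore=/path/to/mytruststore MyApp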



          Often, it is useful to keep regular keystore files separated from truststore files. A regular keystore file contains private information, such as an entity's own certificates and corresponding private keys, whereas a truststore contains public information, such as that entity's trusted certificate entries, including CA certificates. Using two different files instead of a single keystore file provides for a cleaner separation between an entity's own certificate and key entries and other entities' certificates. This physical separation, which reflects the logical distinction of private- and public-key material, gives system administrators more flexibility in managing the security of the system. For example, a truststore file could be made write protected so that only system administrators are allowed to add entries to it. Conversely, a user's keystore can be write accessible to its owner.












