Friday, January 8, 2010

13.2. How to create a table using XHTML


Before we pull out Tony's site and start making changes, let's get the table working like we want it in a separate XHTML file. We've started the table and already entered the headings and the first two rows of the table into an XHTML file called "table.html" in the "chapter13/journal/" folder. Check it out:



<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
<head>
<meta http-equiv="content-type" content="text/html; charset=ISO-8859-1" />
<style type="text/css">
  /* Just a small bit of CSS so we can see the structure of the table
     in the browser. Don't worry about this for now. */
  td, th { border: 1px solid black; }
</style>
<title>Testing Tony's Table</title>
</head>
<body>

<!-- We use a <table> tag to start the table. -->
<table>
  <!-- Each <tr> element forms a table row. Here's the first row,
       which holds the headings. -->
  <tr>
    <!-- Each <th> element is a table heading for a column. Notice that the
         table headings are listed one after another. While these look like
         they might make up a column in the HTML, we are actually defining
         the entire table headings row. Look back at Tony's list to see how
         his headings map to these. -->
    <th>City</th>
    <th>Date</th>
    <th>Temperature</th>
    <th>Altitude</th>
    <th>Population</th>
    <th>Diner Rating</th>
  </tr>
  <!-- Here's the start of the second row, which is for the city Walla Walla.
       Each <td> element holds one cell of the table, and each cell makes a
       separate column. All these <td>s make up one row. -->
  <tr>
    <td>Walla Walla, WA</td>
    <td>June 15th</td>
    <td>75</td>
    <td>1,204 ft</td>
    <td>29,686</td>
    <td>4/5</td>
  </tr>
  <!-- And here's the third row. Again, the <td> elements each hold one
       piece of table data. -->
  <tr>
    <td>Magic City, ID</td>
    <td>June 25th</td>
    <td>74</td>
    <td>5,312 ft</td>
    <td>50</td>
    <td>3/5</td>
  </tr>
</table>
</body>
</html>













Chapter 1. XHTML: Giving Structure to Content

Stylin' with CSS is all about designing and building Web pages that look stylish and professional, that are intuitive for visitors to use, that work across a wide variety of user agents (browsers, handhelds, cell phones, and so on) and whose content can be easily updated and distributed for other purposes.


Like any artist, your ability to achieve your creative vision is governed by your technical abilities. In this book, I'm not going to wade into all the excruciating details that underpin the workings of the Web, but I will say that without a certain degree of technical knowledge, you will never achieve your Web design goals. So this book is intended to give you the core technical information you need to realize your creative vision, and hopefully to give you the confidence to "go in" further as you build your skills. But most of all, we will focus on design; by this I mean design in its broadest sense, not just aesthetics (visual appearance), but also ergonomics (logical organization) and functionality (useful capabilities).


Everything in this book is based on Web standards, the rules of browser behavior defined in the recommendations of the W3C (the World Wide Web Consortium), which all the major browser developers have agreed to follow. As you will see, browsers based on the Gecko engine (the current versions of Mozilla/Firefox and Netscape) and those based on the Konqueror engine (including the excellent browser Safari for Mac) do a much better job delivering standards-based performance than the once-ubiquitous Microsoft Internet Explorer, which fails to implement many CSS specifications.


Every so often I'll mention CSS2 or CSS3. These terms simply refer to a specific version of the CSS standard. Just as with any technology, CSS continues to be refined.










    Recipe 16.11. Getting Information About a Domain Name



    16.11.1. Problem


    You want to look up contact information or other details about a domain name.




    16.11.2. Solution


    Use PEAR's Net_Whois class:


    require 'Net/Whois.php';
    $server = 'whois.networksolutions.com';
    $query = 'example.org';
    $data = Net_Whois::query($server, $query);





    16.11.3. Discussion


    The Net_Whois::query( ) method returns a large text string whose contents reinforce how hard it can be to parse different Whois results:


    Registrant:
    Internet Assigned Numbers Authority (EXAMPLE2-DOM)
    4676 Admiralty Way, Suite 330
    Marina del Rey, CA 90292
    US

    Domain Name: EXAMPLE.ORG

    Administrative Contact, Technical Contact, Billing Contact:
    Internet Assigned Numbers Authority (IANA) iana@IANA.ORG
    4676 Admiralty Way, Suite 330
    Marina del Rey, CA 90292
    US
    310-823-9358
    Fax- 310-823-8649

    Record last updated on 07-Jan-2002.
    Record expires on 01-Sep-2009.
    Record created on 31-Aug-1995.
    Database last updated on 6-Apr-2002 02:56:00 EST.

    Domain servers in listed order:

    A.IANA-SERVERS.NET 192.0.34.43
    B.IANA-SERVERS.NET 193.0.0.236



    For instance, if you want to parse out the names and IP addresses of the domain name servers, use this:


    preg_match_all('/^\s*([\S]+)\s+([\d.]+)\s*$/m', $data, $dns, PREG_SET_ORDER);

    foreach ($dns as $server) {
        print "$server[1] : $server[2]\n";
    }



    You must set $server to the correct Whois server for a domain to get information about that domain. If you don't know the server to use, query whois.internic.net:


    require 'Net/Whois.php';

    print Net_Whois::query('whois.internic.net','example.org');

    [whois.internic.net]

    Whois Server Version 1.3

    Domain names in the .com, .net, and .org domains can now be registered
    with many different competing registrars. Go to http://www.internic.net
    for detailed information.

    Domain Name: EXAMPLE.ORG
    Registrar: NETWORK SOLUTIONS, INC.
    Whois Server: whois.networksolutions.com
    Referral URL: http://www.networksolutions.com
    Name Server: A.IANA-SERVERS.NET
    Name Server: B.IANA-SERVERS.NET
    Updated Date: 19-aug-2002


    >>> Last update of whois database: Wed, 21 Aug 2002 04:56:56 EDT <<<

    The Registry database contains ONLY .COM, .NET, .ORG, .EDU domains and
    Registrars.



    The Whois Server: line says that the correct server to ask for information about example.org is whois.networksolutions.com.




    16.11.4. See Also


    PEAR's Net_Whois class at http://pear.php.net/package-info.php?package=Net_Whois.













    7.2. Accessing Hashes


    Here's a hash that associates zip codes with the names of towns in Wyoming that start with the letter T (had to limit it somehow):

    zip = { 82442 => "Ten Sleep", 83025 => "Teton Village", 83127 => "Thayne",
            82443 => "Thermopolis", 82084 => "Tie Siding", 82336 => "Tipton",
            82240 => "Torrington", 83110 => "Turnerville", 83112 => "Turnerville" }


    There are tons of ways to access keys and/or values from a hash. You can pick what works for you—what works for the task at hand.


    You can test to see if the hash zip has a given key with any of the following methods, which are all synonyms of each other: key?, has_key?, member?, or include?:

    zip.has_key? 82442 # => true


    Or you can do the flip side, and see if it has a given value with value? or has_value?:

    zip.has_value? "Ten Sleep" # => true


    Let's start pulling stuff out of zip. Here is a simple way of grabbing a value: the [] method. It retrieves a single hash value based on a key:

    zip[82442] # => "Ten Sleep"


    Then we have the methods keys and values. Return an array containing all the keys in a hash with keys:


    zip.keys # => [83110, 83127, 82336, 83112, 82084, 83025, 82442, 82443, 82240]




    Get an array with all the values from a hash with values:

    zip.values # => ["Turnerville", "Thayne", "Tipton", "Turnerville", "Tie Siding", "Teton Village", "Ten Sleep", "Thermopolis", "Torrington"]


    Retrieve the values out of a hash based on one or more keys with values_at, also placing the value or values in an array:


    zip.values_at 82084 # => ["Tie Siding"]
    zip.values_at 82442, 82443, 82240 # => ["Ten Sleep", "Thermopolis", "Torrington"]




    Now go the other way: return the key for a given value (one value only) with the index method:

    zip.index "Thayne" # => 83127


    The select method uses a block to return a new, multidimensional array of key-value pairs:


    zip.select { |key,val| key > 83000 }
    # => [[83110, "Turnerville"], [83127, "Thayne"], [83112, "Turnerville"], [83025, "Teton Village"]]
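    Pulling several of these accessors together, here is a small, self-contained recap you can run on its own. The zip hash is repeated so the snippet stands alone, and the final keys.min line is an extra illustration not taken from the examples above:

```ruby
# The zip hash from above, repeated so this snippet runs by itself.
zip = { 82442 => "Ten Sleep", 83025 => "Teton Village", 83127 => "Thayne",
        82443 => "Thermopolis", 82084 => "Tie Siding", 82336 => "Tipton",
        82240 => "Torrington", 83110 => "Turnerville", 83112 => "Turnerville" }

zip.has_key?(82336)             # => true
zip[82336]                      # => "Tipton"
zip.values_at(82442, 82240)     # => ["Ten Sleep", "Torrington"]
zip.keys.min                    # => 82084 (the lowest zip code in the hash)
```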




     

     


    Directing Implementation


    GP 3.2 Collect Improvement Information


    Collect work products, measures, measurement results, and improvement information derived from planning and performing the decision analysis and resolution process to support the future use and improvement of the organization's processes and process assets.



    GG 4 Institutionalize a Quantitatively Managed Process


    The process is institutionalized as a quantitatively managed process.


    GP 4.1 Establish Quantitative Objectives for the Process

    Establish and maintain quantitative objectives for the decision analysis and resolution process that address quality and process performance based on customer needs and business objectives.



    GP 4.2 Stabilize Subprocess Performance

    Stabilize the performance of one or more subprocesses to determine the ability of the decision analysis and resolution process to achieve the established quantitative quality and process-performance objectives.




    GG 5 Institutionalize an Optimizing Process


    The process is institutionalized as an optimizing process.


    GP 5.1 Ensure Continuous Process Improvement

    Ensure continuous improvement of the decision analysis and resolution process in fulfilling the relevant business objectives of the organization.



    GP 5.2 Correct Root Causes of Problems

    Identify and correct the root causes of defects and other problems in the decision analysis and resolution process.









      Chapter 4. C Data Structures



      The ignoring of data is, in fact, the easiest and most popular mode of obtaining unity in one's thought.

      William James


      Programs work by applying algorithms on data. The internal organization of data plays an important role in how algorithms operate.


      You will find elements with the same type organized as a collection using a number of different mechanisms, each having different affordances regarding data storage and access. A vector, implemented as an array, offers random access to all elements, but it can be inefficient to change in size at runtime. A vector can also be used to organize groups of elements in the form of a table or stacked into two dimensions to create a matrix. Operations on a vector are sometimes restricted to occur at one end, in which case we talk about a stack, or in a first-in first-out order, treating the vector as a queue. When the order of the elements is not important, maps are used to create lookup tables and sets employed to form element collections.


      The ability to link data elements together using pointers gives rise to a number of other structures you will often encounter. A linked list easily grows dynamically but offers only serial access (including stack and queue operations), whereas a suitably organized tree can be used to access data elements based on a key and can also easily grow dynamically while allowing traversal in an orderly fashion. We finish our discussion of C data structures with an examination of some applications and code patterns relating to graphs, the most flexible representation of linked data structures you will encounter.


      Languages such as Java, C++, and C# offer abstractions for implementing these data structures as part of the language's library. In C these data structures are typically coded explicitly within the code body that uses them; however, their properties and the operations performed on them are common. The objective of this chapter is to learn how to read explicit data structure operations in terms of the underlying abstract data type.






        3.2. The case Statement


        Ruby's case statement provides a way to express conditional logic in a succinct way. It is similar to an if with elsifs, but you use case in place of if and when in place of elsif.


        Here is an example similar to what you saw earlier using lang with the possible symbols :en, :es, :fr, and :de:

        lang = :fr

        dog = case lang
        when :en: "dog"
        when :es: "perro"
        when :fr: "chien"
        when :de: "Hund"
        else "dog"
        end
        # "chien" is assigned to dog


        case/when is more convenient and terse than if/elsif/else because the logic of == is assumed: you don't have to keep retyping == or the variable name.


        Ruby's case is similar to the switch statement, a familiar C construct, but case is more powerful. One of the annoying things to me about switch statements in C, C++, and Java is that you can't switch on strings in a straightforward way (though you can in C#).


        If the lang variable held a string instead of symbols, your code would look like this:

        lang = "de"

        dog = case lang
        when "en": "dog"
        when "es": "perro"
        when "fr": "chien"
        when "de": "Hund"
        else "dog"
        end
        # "Hund" is assigned to dog


        The next example uses several ranges to test values. A range is a sequence of values with a beginning and an end.

        scale = 8
        case scale
        when 0: puts "lowest"
        when 1..3: puts "medium-low"
        when 4..5: puts "medium"
        when 6..7: puts "medium-high"
        when 8..9: puts "high"
        when 10: puts "highest"
        else puts "off scale"
        end
        # => high


        The range 1..3 means a range of numbers from 1 to 3, inclusive. Because scale equals 8, scale matches the range 8..9 and case returns the string high. However, when you use three dots as in the range 1...5, the ending value 5 is excluded. The sets of dots, .. and ..., are called range operators; two dots includes all the numbers in the range, and three dots excludes the last value in the range. Underneath the hood, case uses the === operator from Range to test whether a value is a member of or included in a range.
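        To see the === test outside of a case expression, you can call it on a range directly. The sketch below uses the then keyword rather than the colon shorthand shown above, since the colon form is Ruby 1.8 syntax while then works across versions:

```ruby
# Range#=== tests membership, which is what case does for each when clause.
raise "unexpected" unless (1..3) === 3     # two dots: the end value is included
raise "unexpected" if (1...5) === 5        # three dots: the end value is excluded

scale = 8
label = case scale
        when 0 then "lowest"
        when 1..3 then "medium-low"
        when 4..5 then "medium"
        when 6..7 then "medium-high"
        when 8..9 then "high"
        when 10 then "highest"
        else "off scale"
        end
# label is now "high", just as in the puts version above
```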


         

         


        WormChase as an Applet


        Figure 3-1 shows the WormChase game as an applet and as an application. It has the same GUI interface as the windowed version: a large canvas with two text fields at the bottom used to report the number of boxes added to the scene, and the time.


        Class diagrams showing the public methods are given in Figure 3-10. A comparison with the diagrams for the windowed version in Figure 3-2 shows that the classes stay mainly the same. The only substantial change is to replace JFrame by JApplet at the top level.



        Figure 3-10. Class diagrams for the WormChase applet



        The code for this version of WormChase is in the directory Worm/WormApplet/.



        The Worm class is unchanged from the windowed version. The Obstacles class now calls setBoxNumber( ) in WormChaseApplet rather than WormChase.


        WormPanel reports its termination statistics in a different way, but the animation loop and statistics gathering are unchanged. WormChaseApplet handles pausing, resumption, and termination by tying them to events in the applet life cycle. By comparison, WormChase utilizes Window events.


        The applet's web page passes the requested frame rate to it as a parameter:



        <applet code="WormChaseApplet.class" width="500" height="415">
        <param name="fps" value="80">
        </applet>




        The WormChaseApplet Class


        Figure 3-11 shows the class diagram for WormChaseApplet with all its variables and methods.



        Figure 3-11. WormChaseApplet in detail



        The applet's init( ) method reads the FPS value from the web page, sets up the GUI, and starts the game:



        public void init( )
        {
          String str = getParameter("fps");
          int fps = (str != null) ? Integer.parseInt(str) : DEFAULT_FPS;

          long period = (long) 1000.0/fps;
          System.out.println("fps: " + fps + "; period: " + period + " ms");

          makeGUI(period);
          wp.startGame( );
        }



        makeGUI( ) is the same as the one in the JFrame version. The call to startGame( ) replaces the use of addNotify( ) in the JPanel.


        The applet life-cycle methods start( ), stop( ), and destroy( ) contain calls to WormPanel to resume, pause, and terminate the game:



        public void start( )
        { wp.resumeGame( ); }

        public void stop( )
        { wp.pauseGame( ); }

        public void destroy( )
        { wp.stopGame( ); }



        A browser calls destroy( ) prior to deleting the web page (and its applet) or perhaps as the browser itself is closed. The browser will wait for the destroy( ) call to return before exiting.




        The WormPanel Class


        The only major change to WormPanel is how printStats( ) is called. The stopGame( ) method is modified to call finishOff( ), which calls printStats( ):



        public void stopGame( )
        {
          running = false;
          finishOff( );   // new bit, different from the application
        }

        private void finishOff( )
        {
          if (!finishedOff) {
            finishedOff = true;
            printStats( );
          }
        } // end of finishOff( )



        finishOff( ) checks a global finishedOff Boolean to decide whether to report the statistics. finishedOff starts with the value false.


        finishOff( ) is called at the end of run( ) as the animation loop finishes. The first call to finishOff( ) will pass the if test, set the finishedOff flag to true, and print the data. The flag will then prevent a second call from repeating the output.


        A race condition could occur, with two simultaneous calls to finishOff( ) getting past the if test at the same time, but it's not serious or likely, so I ignore it.



        In the windowed application, stopGame( ) only sets running to false before returning, with no call to finishOff( ). The threaded animation loop may then execute for a short time before checking the flag, stopping, and calling printStats( ).


        This approach is fine in an application where the animation thread will be allowed to finish before the application terminates. Unfortunately, as soon as an applet's destroy( ) method returns, then the applet or the browser can exit. In this case, the animation thread may not have time to reach its printStats( ) call.


        To ensure the statistics are printed, finishOff( ) is called in the applet's stopGame( ) method. The other call to finishOff( ) at the end of run( ) is a catch-all in case I modify the game so it can terminate the animation loop without passing through stopGame( ).




        Timing Results


        The timing results are given in Table 3-3.


        Table 3-3. Average FPS/UPS for the applet version of WormChase

        Requested FPS    20       50       80       100
        Windows 98       20/20    50/50    82/83    97/100
        Windows 2000     20/20    46/50    63/83    61/100
        Windows XP       20/20    50/50    83/83    100/100



        The poor showing for the frame rate on the Windows 2000 machine is expected, but the applet performs well on more modern hardware.










          Study Results



          Sample


          A total of 293 articles were used in the analysis. Among the articles reviewed, 41 originated in MIS Quarterly, 38 in Information Systems Research, 119 in Information & Management, 78 in Journal of Management Information Systems, and 17 in Management Science. Whereas most (68%) were field studies, coded works also included laboratory experiments (22%), case studies (5%), and field experiments (5%). In only 23% of the sampled articles did students (undergraduate or graduate) fill out the instrument; in the remainder, workers (sometimes in conjunction with students) were the ones to whom the instrument was administered. As for data-collection techniques, the majority (85%) of our sampled studies collected data through surveys (questionnaires). Interviews were the second most used technique, appearing in 17% of the sampled articles. The use of more than one technique to capture data occurred in 31% of the studies.





          Validation of Coding


          Inter-rater reliability was assessed to verify that our coding was reliable (Miles & Huberman, 1994). A second, independent coder thus coded a subset of the sampled articles. For the 11 coded attributes, the following percentages of agreement were obtained: type of research—77%; research method—85%; pretest—82%; pilot test—95%; content validity—90%; construct validity—85%; reliability—85%; manipulation check—95%; nature of the instrument—82%; instrument validation section—92%; and use of second-generation statistical technique—95%.



          Cohen's (1960) kappa coefficient was also calculated. This coefficient is known to be a more stringent measure than simple percentages of agreement. For all criteria, the average kappa was 0.76, which is above the recommended 0.70 inter-rater minimum reliability (Bowers & Courtright, 1984; Landis & Koch, 1977; Miles & Huberman, 1994). As usual, disagreements between coders were reconciled before further analysis was performed.
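          To make the kappa statistic concrete, here is a short sketch of how Cohen's kappa corrects raw percentage agreement for chance agreement. The 2x2 agreement table below is invented for illustration; it is not data from this study:

```ruby
# Cohen's kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
# agreement between two coders and p_e is the agreement expected by
# chance, computed from each coder's marginal proportions.
def cohens_kappa(table)
  n = table.flatten.sum.to_f
  p_o = (0...table.size).sum { |i| table[i][i] } / n   # diagonal = agreements
  row = table.map { |r| r.sum / n }                    # coder 1 marginals
  col = table.transpose.map { |c| c.sum / n }          # coder 2 marginals
  p_e = row.zip(col).sum { |r, c| r * c }              # chance agreement
  (p_o - p_e) / (1 - p_e)
end

# Hypothetical table: both coders judged 100 articles as "validated" or not.
table = [[40, 5],
         [10, 45]]
cohens_kappa(table)   # => 0.7 (raw agreement 0.85, chance agreement 0.5)
```

          Note how 0.85 raw agreement shrinks to a kappa of 0.7 once the 0.5 chance agreement is removed, which is why kappa is the more stringent measure.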





          Overview of Findings



          Table 1 clearly shows that, over the past 13 years, instrument validation has improved in all the categories we have assessed. In addition, in two specific categories (pretest/pilot and reliability), the proportion of published studies validating their instruments is now greater than the proportion not doing so. Compared with Boudreau et al. (2001), the most important improvement is for construct validity, which was then assessed in 37% of the studies compared to 45% of the studies today. Improvements of up to an additional 5% are noticeable in all other categories. Overall, although improvement in instrument validation is modest when comparing Boudreau et al.'s results to the current study's results, it is still comforting to observe a consistent increase in the use of all validation techniques.





































          Table 1: Survey of Instrument Validation Use in MIS Literature.

          Inst. Categories / Year    Straub (1989)   Boudreau et al. (2001)   Current Study
          Pretest                    13%             26%                      31%
          Pilot                       6%             31%                      31%
          Pretest or Pilot [i]       19%             47%                      51%
          Previous Instr. Utilized   17%             42%                      43%
          Content Validity            4%             23%                      26%
          Construct Validity         14%             37%                      45%
          Reliability                17%             63%                      68%

          [i] Because some articles used both a pretest and a pilot, the category Pretest or Pilot does not add up to Pretest plus Pilot.




          Reliability is the validation criterion that was the most frequently assessed, when compared to all other validation criteria taken singly, in both previous studies as well as this one. As was the case in Boudreau et al. (2001), a majority of studies assessing reliability of their instruments have done so through the standard coefficient of internal consistency, i.e., Cronbach's α (84%). The second most popular technique to assess reliability was inter-coder tests, which was reported by 15% of the studies that appraised the reliability of their instrument. Moreover, the use of more than one reliability method is still rare, as it was done by only 10% of the studies assessing reliability.


          A closer look at the studies that assessed construct validity reveals that diverse approaches were used for that purpose. More specifically, convergent, discriminant, and nomological validity were determined, respectively, in 50%, 58%, and 6% of these studies. As to predictive and concurrent validity, they were reported in 7% and 1.5% of these studies. Construct validity, in itself (and not in one of its five components), was reported in 80% of the studies that assessed this kind of validity.




          Table 1 shows that the utilization of previously existing instruments has more than doubled over the last 13 years. Also, as detailed in Table 2, it appears that studies using existing instruments were sometimes more inclined to validate their instrument than studies developing their own instrument from scratch. Indeed, construct validity and reliability were more frequently assessed in studies using a previously utilized instrument than those that did not (50% vs. 42%; 74% vs. 63%). However, with regard to the use of pretest or pilot studies and content validity, these validities were assessed more often within studies creating a new instrument than within studies using an existing instrument (55% vs. 46%; 28% vs. 24%). This table reveals another interesting fact: Over the past two years, research articles that created their own instrument improved their validation practices to a greater extent than research articles that used a previously utilized instrument.




























          Table 2: Studies with Previously Utilized Instruments vs. Those with New Instruments.

          Inst. Categories       Previous Instrument (n=127)            New Instrument (n=166)
                                 Boudreau et al. (2001)  Current Study  Boudreau et al. (2001)  Current Study
          Pretest or Pilot       43%                     46%            50%                     55%
          Content Validity       20%                     24%            25%                     28%
          Construct Validity     44%                     50%            32%                     42%
          Reliability            74%                     74%            54%                     63%



          It is interesting to observe how confirmatory studies (133 articles, or 45% of total) compare to exploratory studies (160 articles, or 55% of total). The present survey indicates that, for all criteria except for the use of pretest or pilot studies, exploratory studies showed less interest in validating their instruments than confirmatory studies (see Table 3). Indeed, the extent to which content validity, construct validity, and reliability were assessed was more frequent among confirmatory studies than among exploratory studies. This represents the same trend that was observed in Boudreau et al. (2001).




























          Table 3: Type of Research (Confirmatory vs. Exploratory Studies).

          Inst. Categories       Confirmatory Studies (45%)             Exploratory Studies (55%)
                                 Boudreau et al. (2001)  Current Study  Boudreau et al. (2001)  Current Study
          Pretest or Pilot       47%                     49%            47%                     53%
          Content Validity       35%                     35%            17%                     19%
          Construct Validity     53%                     61%            29%                     33%
          Reliability            69%                     75%            60%                     62%



          The extent to which a research method has bearing on instrument validation constitutes an interesting observation. In Straub's (1989) original study, it was argued that experimental and case researchers were less likely to validate their instruments than field study researchers. Boudreau et al.'s (2001) study showed a similar trend when comparing field studies to experimental studies, but not to case studies. The additional data used in the present study demonstrates that Straub's initial inference still holds true today on all of the previously introduced validity criteria (see Table 4). Indeed, field study researchers from our sample were more inclined to validate their instrument than experimental and case researchers. The most notable difference was for the use of construct validity, where a gap of 31% existed between experimental and field study research.




























          Table 4: Field Studies vs. Lab/Field Experiments vs. Case Studies.

          Inst. Categories          Field Studies (n=200)   Lab/Field Experiments (n=80)   Case Studies (n=13)
          Pretest or Pilot          59%                     36%                            31%
          Previous Inst. Utilized   47%                     38%                            23%
          Content Validity          32%                     15%                            15%
          Construct Validity        55%                     24%                            38%
          Reliability               69%                     65%                            62%



          The inclusion of an Instrument Validation section, as originally suggested in Straub (1989), was tallied as frequently in the current study as it was in Boudreau et al.'s (2001) study. Indeed, only 24% of the surveyed articles included such a section. For this minority of articles, there was a greater extent of reporting a pilot or pretest study (80% vs. 42%), content validity (52% vs. 18%), construct validity (82% vs. 34%), and reliability (88% vs. 61%). These percentages are hardly surprising since if one feels compelled to include a specific section on instrument validation, it is because efforts have been made in this area. However, it is disappointing not to observe an increase in the percentage of studies that included a special section reporting their endeavor in instrument validation.


          Noticeable improvement has occurred in the use of manipulation checks in the past few years. As indicated in Table 5, among the field and laboratory experiments in our sample, 30% performed one or several manipulation checks of the treatments, compared to 22% in Boudreau et al.'s (2001) study. Moreover, percentages have particularly increased in two journals, MIS Quarterly (an increase of 21%) and Information Systems Research (an increase of 12%). The absence of manipulation checks in the experimental studies of Management Science may be due to the tendency for articles in this journal to use directly observable measurements, such as time, rather than latent constructs.































          Table 5: Use of Manipulation Validity.

          Journal                                     Boudreau et al. (2001)   Current Study
          Information & Management                    24%                      25%
          Information Systems Research                38%                      50%
          MIS Quarterly                               29%                      50%
          Journal of Management Information Systems   17%                      19%
          Management Science                           0%                       0%
          All Five Journals                           22%                      30%



          A greater percentage of studies from our sample used second-generation statistical techniques (e.g., structural equation modeling) rather than first-generation statistical techniques (regression, ANOVA, LOGIT, etc.). From 15% in Boudreau et al. (2001), this percentage increased to 19% in the present study (see Table 6). However, the extent of instrument validation did not change much when comparing first- to second-generation techniques in the two studies. As was the case in Boudreau et al., studies making use of SEM techniques scored higher in all categories, particularly for construct validity and reliability. Among the studies using second-generation statistical techniques, the most commonly used tools were PLS (42%), LISREL (21%), and EQS (18%).































          Table 6: First-Generation vs. Second-Generation Statistical Techniques.

                                    Boudreau et al. (2001)                  Current Study
          Inst. Categories          First Gen. (85%)   Second Gen. (15%)    First Gen. (81%)   Second Gen. (19%)
          Pretest or Pilot          44%                64%                  48%                63%
          Previous Inst. Utilized   42%                46%                  43%                46%
          Content Validity          19%                43%                  23%                39%
          Construct Validity        29%                82%                  36%                86%
          Reliability               57%                96%                  61%                93%



          A possible reason for this difference is that SEM analyzes both the structural model (the assumed causation) and the measurement model (the loadings of observed items). As a result, validity assessment is an integral part of SEM. The validity statistics appear explicitly in the output, and the degree of statistical validity directly affects the overall model fit indexes. In first-generation statistical techniques, on the other hand, validity and reliability are performed in separate analyses that are not related to the actual hypothesis testing and, thus, do not determine the overall fit indexes.





          Summary of Key Points


          It should be considered good news that, in the short period of two years since the last study assessing instrument validation practices, IS researchers have improved the validation of their instruments. Granted, such an improvement is certainly not as significant as what had been observed when using Straub's (1989) study as the baseline, but this is understandable given that the time period was then much longer. Although better, current validation practices are far from perfect, and it is still necessary to state that IS researchers need to achieve greater rigor in the validation of their instruments and their research. More particularly, the following nine key findings should engage further reflection and action:




          1. Over the past two years, instrument validation practices have steadily improved.




          2. In two specific categories (pretest/pilot and reliability), the proportion of published studies validating their instruments is now greater than the proportion of published studies not validating their instruments.




          3. The assessment of construct validity has improved the most over the past two years.




          4. Published studies are increasingly using preexisting instruments; while doing so, reliability and construct validity are being more frequently assessed.




          5. Confirmatory studies are more likely to assess reliability, content validity, and construct validity than exploratory studies.




          6. Laboratory experiments, field experiments, and case studies lag behind field studies with respect to all validation criteria.




          7. Although the inclusion of an Instrument Validation subsection warrants greater reporting of validation practices, it appears infrequently in empirical studies.




          8. There has been a noticeable improvement in the use of manipulation checks in the past few years; however, in some publication outlets, manipulation checks are performed by only a minority of IS experimenters.




          9. Published studies making use of second-generation statistical techniques (SEM) are much more likely to validate their instruments than published studies making use of first-generation statistical techniques.














          Improving Performance with Persistent JobStores












          Performance is one topic that receives the most attention when there's the least amount of time to do anything about it. As experienced developers, we know that it should be a consideration from the outset of the project.


          When using Quartz with persistent JobStores, the biggest area of concern has to be the interaction with the relational database. Database I/O (just like file I/O) is usually not very fast. You can improve performance by doing things such as tuning the SQL, adding indexes, and manipulating tables and columns. Because performance was already taken into account in the writing of Quartz, though, you shouldn't dive right into making such changes until you have an actual performance problem, have tried to solve it through configuration settings, and have exhausted every option short of modifying the source code. The good news is that Quartz is open source, so you have complete insight into what it's doing and how it's doing it. If you don't like how it queries the database, it's your prerogative to fix it. Before you take this route, however, be sure to check with the users and developers on the Quartz forum to see whether others have had the same problem, and explore the recommended suggestions.


          One very easy (and very effective) way to improve performance is to make sure the tables have indexes created on all the appropriate columns. Some of the database-creation scripts that ship with Quartz already have the commands for creating indexes. If yours does not, you can simply refer to those defined at the bottom of tables_oracle.sql and adapt the syntax as necessary for your RDBMS.
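          As a rough sketch of what that translation might look like, here are two of the indexes from the Oracle script expressed as generic SQL. The index and column names below assume the default "QRTZ_" table prefix from the standard Quartz DDL scripts; verify them against your own tables_<database>.sql before running anything:

          -- Sketch only: names assume Quartz's default DDL scripts;
          -- confirm against the script for your database.
          CREATE INDEX idx_qrtz_t_next_fire_time ON qrtz_triggers (next_fire_time);
          CREATE INDEX idx_qrtz_t_state          ON qrtz_triggers (trigger_state);

          Columns such as next_fire_time and trigger_state are queried constantly by the Scheduler's trigger-acquisition loop, which is why they are common indexing candidates.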


          Whatever you do, if you make changes that improve performance, be sure to offer them back to the community and the Quartz project.












          Directing Implementation














          GP 2.6 Manage Configurations


          Place designated work products of the process and product quality assurance process under appropriate levels of configuration management.


          Elaboration


          Examples of work products placed under configuration management include the following:



          • Noncompliance reports


          • Evaluation logs and reports






          GP 2.7 Identify and Involve Relevant Stakeholders


          Identify and involve the relevant stakeholders of the process and product quality assurance process as planned.


          Elaboration


          Examples of activities for stakeholder involvement include the following:



          • Establishing criteria for the objective evaluations of processes and work products


          • Evaluating processes and work products


          • Resolving noncompliance issues


          • Tracking noncompliance issues to closure






          GP 2.8 Monitor and Control the Process


          Monitor and control the process and product quality assurance process against the plan for performing the process and take appropriate corrective action.


          Elaboration


          Examples of measures used in monitoring and controlling include the following:



          • Variance of objective process evaluations planned and performed


          • Variance of objective work product evaluations planned and performed














            Job Storage in Quartz












            Quartz supports several different types of storage mechanisms for Scheduler information. Two types of Job storage are available in Quartz:


            • Memory (nonpersistent) storage

            • Persistent storage


            By default, we've been using the memory storage mechanism in the examples from the past several chapters. Both types are designed to serve the same purpose: to store job information. How they each go about it, however, and what functionality they provide the Scheduler is very different.



            The JobStore Interface


            Quartz provides an interface for all types of job storage. The interface, located in the org.quartz.spi package, is called JobStore. All job storage mechanisms, regardless of where or how they store their information, must implement this interface.


            The JobStore interface has too many methods to list here, but the API for the JobStore interface can be generalized into the following categories:


            • Job-related API

            • Trigger-related API

            • Calendar-related API

            • Scheduler-related API


            Quartz users almost never access or see the concrete classes that implement the JobStore interface; they are used internally by the Quartz Scheduler to retrieve job and trigger information at runtime. It is a worthwhile exercise, however, to familiarize yourself with each type so that you better understand the facilities they provide on your behalf and can choose the right one for your Quartz application.
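            To make the idea concrete, here is a deliberately simplified sketch, not the real org.quartz.spi.JobStore API, of how a scheduler can talk to storage through one interface while memory-backed or database-backed implementations plug in behind it. All names here (SimpleJobStore, InMemoryJobStore) are invented for illustration:

            import java.util.HashMap;
            import java.util.Map;

            // Illustrative sketch only -- NOT Quartz's actual JobStore interface.
            interface SimpleJobStore {
                void storeJob(String name, String jobData);
                String retrieveJob(String name);
                boolean removeJob(String name);
            }

            // A memory-backed implementation, analogous in spirit to RAMJobStore:
            // fast, but all state is lost when the JVM exits.
            class InMemoryJobStore implements SimpleJobStore {
                private final Map<String, String> jobs = new HashMap<>();

                public void storeJob(String name, String jobData) { jobs.put(name, jobData); }
                public String retrieveJob(String name)            { return jobs.get(name); }
                public boolean removeJob(String name)             { return jobs.remove(name) != null; }
            }

            public class JobStoreSketch {
                public static void main(String[] args) {
                    SimpleJobStore store = new InMemoryJobStore();
                    store.storeJob("nightlyReport", "cron: 0 0 2 * * ?");
                    System.out.println(store.retrieveJob("nightlyReport"));
                }
            }

            A persistent implementation would satisfy the same interface but write to database tables instead of a map, which is exactly why the Scheduler itself never needs to know which storage type you configured.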























            A.1. #1 More Selectors


            While you've already learned the most common selectors, here are a few more you might want to know about...



            A.1.1. Pseudo-elements




            You know all about pseudo-classes, and pseudo-elements are similar. Pseudo-elements can be used to select parts of an element that you can't conveniently wrap in a <div> or a <span>, or select in other ways. For example, the first-letter pseudo-element can be used to select the first letter of the text in a block element, allowing you to create effects like initial caps and drop caps. There's one other pseudo-element called first-line, which you can use to select the first line of a paragraph. Here's how you'd use both to select the first letter and first line of a <p> element:



            p:first-letter {       /* pseudo-elements use the same syntax as pseudo-classes */
              font-size: 3em;      /* here we're making the first letter of the paragraph large... */
            }
            p:first-line {
              font-style: italic;  /* ...and the first line italic */
            }






            A.1.2. Attribute selectors


            Attribute selectors are not well supported in today's browsers; however, they could be more widely supported in the future. Attribute selectors are exactly what they sound like: selectors that allow you to select elements based on attribute values. You use them like this:



            img[width] { border: black thin solid; }
            /* selects all images that have a width attribute in their XHTML */

            img[height="300"] { border: red thin solid; }
            /* selects all images that have a height attribute with a value of 300 */

            img[alt~="flowers"] { border: #ccc thin solid; }
            /* selects all images whose alt attribute includes the word "flowers" */






            A.1.3. Selecting by Siblings


            You can also select elements based on their preceding sibling. For example, say you want to select only paragraphs that have an <h1> element immediately preceding them; you'd use this selector:



            h1+p {                 /* write the preceding element, a "+" sign, and then the sibling element */
              font-style: italic;  /* selects all paragraphs that come immediately after an <h1> element */
            }





            A.1.4. Combining Selectors


            You've already seen examples of how selectors can be combined in this book. For instance, you can take a class selector and use it as part of a descendant selector, like this:



            .blueberry p { color: purple; }
            /* selects all paragraphs that are descendants of an element in the blueberry class */




            There's a pattern here that you can use to construct quite complex selectors. Let's step through how this pattern works:


            1. Start by defining the context for the element you want to select, like this:


              div#greentea > blockquote
              Here we're using a child selector: a <div> with an id of "greentea" must be the parent of the <blockquote>.



            2. Then supply the element you want to select:


              div#greentea > blockquote p
              Next we add the <p> element as the element we want to select in the context of the <blockquote>. The <p> element must be a descendant of <blockquote>, which must be a child of a <div> with an id of "greentea".



            3. Then specify any pseudo-classes or pseudo-elements:


              div#greentea > blockquote p:first-line { font-style: italic; }
              Then we add a pseudo-element, first-line, to select only the first line of the paragraph.



            That's quite a complex selector! Feel free to construct your own selectors using this same method.













            11.5 Stored Functions in DML Statements























            Stored functions may also be called from INSERT, UPDATE, and DELETE statements. Whereas most of the restrictions outlined earlier apply equally to stored functions called from DML statements, there is one major difference: since the parent DML statement is changing the state of the database, stored functions invoked from DML statements do not need to abide by the WNDS restriction. However, such stored functions may not read or modify the same table as the parent DML statement.





            Like queries, DML statements may call stored functions where expressions are allowed, including:





            • The VALUES clause of an INSERT statement

            • The SET clause of an UPDATE statement

            • The WHERE clause of an INSERT, UPDATE, or DELETE statement



            Any subqueries called from a DML statement may also call stored functions, under the same set of restrictions as the parent DML statement.





            Often, sets of complementary stored functions are called from both queries and DML statements. For example, you saw earlier how the pkg_util.translate_date function could be called from a query to translate from the Oracle date format stored in the database to the format needed by a Java client. Similarly, the overloaded pkg_util.translate_date function may be used within an UPDATE statement to perform the reverse translation, as in:





            UPDATE cust_order
            SET expected_ship_dt = pkg_util.translate_date(:1)
            WHERE order_nbr = :2;







            where :1 and :2 are placeholders for the UTC datetime and order number passed in by the Java client.





            Stored functions may also be used in the WHERE clause in place of correlated subqueries, both to simplify the DML statement and to facilitate code reuse. For example, suppose you have been asked to push the expected ship date by five days for any order containing part number F34-17802. You could issue an UPDATE statement against the cust_order table using a correlated subquery, as in:





            UPDATE cust_order co
            SET co.expected_ship_dt = NVL(co.expected_ship_dt, SYSDATE) + 5
            WHERE co.cancelled_dt IS NULL AND co.ship_dt IS NULL
            AND EXISTS (SELECT 1 FROM line_item li
                        WHERE li.order_nbr = co.order_nbr
                        AND li.part_nbr = 'F34-17802');







            After having written many subqueries against the line_item table, however, you feel it's time to write a multipurpose function and add it to the pkg_util package:





            FUNCTION get_part_count(ordno IN NUMBER,
              partno IN VARCHAR2 DEFAULT NULL, max_cnt IN NUMBER DEFAULT 9999)
              RETURN NUMBER IS
              tot_cnt NUMBER(5) := 0;
              li_part_nbr VARCHAR2(20);
              CURSOR cur_li(c_ordno IN NUMBER) IS
                SELECT part_nbr
                FROM line_item
                WHERE order_nbr = c_ordno;
            BEGIN
              OPEN cur_li(ordno);
              WHILE tot_cnt < max_cnt LOOP
                FETCH cur_li INTO li_part_nbr;
                EXIT WHEN cur_li%NOTFOUND;

                IF partno IS NULL OR
                   (partno IS NOT NULL AND partno = li_part_nbr) THEN
                  tot_cnt := tot_cnt + 1;
                END IF;
              END LOOP;
              CLOSE cur_li;

              RETURN tot_cnt;
            END get_part_count;







            The function may be used for multiple purposes, including:





            • To count the number of line items in a given order
            • To count the number of line items in a given order containing a given part
            • To determine whether the given order has at least X occurrences of a given part



            The UPDATE statement may now use the function to locate open orders that have at least one occurrence of part F34-17802:





            UPDATE cust_order co
            SET co.expected_ship_dt = NVL(co.expected_ship_dt, SYSDATE) + 5
            WHERE co.cancelled_dt IS NULL AND co.ship_dt IS NULL
            AND 1 = pkg_util.get_part_count(co.order_nbr, 'F34-17802', 1);






















              D.7 OpenCL










              The source code contained in the OpenCL directory is copyrighted by the OpenCL Project and, unless otherwise noted, is distributed under the following license.



              Copyright (C) 1999-2001 The OpenCL Project. All rights reserved.

              Redistribution and use in source and binary forms, for any use, with or without
              modification, is permitted provided that the following conditions are met:

              1. Redistributions of source code must retain the above copyright notice, this
              list of conditions, and the following disclaimer.

              2. Redistributions in binary form must reproduce the above copyright notice,
              this list of conditions, and the following disclaimer in the documentation
              and/or other materials provided with the distribution.

              3. Products derived from this software may not be called "OpenCL" nor may
              "OpenCL" appear in their names without prior written permission of The OpenCL
              Project.

              THIS SOFTWARE IS PROVIDED BY THE AUTHOR(S) "AS IS" AND ANY EXPRESS OR IMPLIED
              WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
              MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, ARE DISCLAIMED.

              IN NO EVENT SHALL THE AUTHOR(S) OR CONTRIBUTOR(S) BE LIABLE FOR ANY DIRECT,
              INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
              BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
              DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
              LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
              OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
              ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

















                4.7. Summary


                In this chapter, we set aside the Sarbanes-Oxley discussion for a moment to investigate the entire open source phenomenon and the fundamental differences between it and nonfree software. After an examination of closed source development, we examined the open source methodology in some detail and articulated the tangible and intangible reasons why open source software can save you money and give you freedom of choice. Because of the rapid development cycle and the "release early, release often" mentality, the bar of quality and security is raised owing to massively parallel peer review of the source code. When a bug is identified by an end user, that user is often a hacker who also supplies the fix, or at least a directed report on how to reliably reproduce the bug for the developer to test. Linus Torvalds' "Given enough eyeballs, all bugs are shallow" theorem essentially states that someone, somewhere, will know the fix to a bug no matter how difficult it may seem to the original developer.


                We continued by examining the use of open source in your environment. We learned that there are many opportunities for the use of OSS software, even if you are currently on a primarily Windows-based infrastructure, as most prominent open source software runs on diverse architectures, including Windows. This investigation was further refined by discussing the two main ways in which open source software fits into the SOX compliance equation, both as elements of your IT infrastructure and as support systems such as document control, system monitoring, workflow, and approval management. We also discussed some of the items you should consider if your environment is in a state of transition. One must be diligent to document and subject any changes to the same approval and testing mechanisms as those identified for internal controls procedures.


                When evaluating your IT infrastructure, your internal controls universally fall into the categories of prevention and detection; however, all controls share some common characteristics and constraints. All controls must be testable; a well-defined test must be in place to validate the control. All control tests must be repeatable; the same test should yield the same result every time. All control tests should be sustainable; that is, they should enable you to maintain your controls and certification processes over time.


                In conclusion, we introduced the sample companies that will be used as case studies for the remainder of the book to illustrate concrete examples of open source software deployment as the subject of certification, and the use of open source software in the compliance process. The sample companies demonstrate two completely different architectures; BuiltRight Construction is a small public company with an infrastructure based entirely on Microsoft Windows and other closed source technologies, and NuStuff Electronics is a medium-sized global organization with a mixed platform environment.




















                8.6. Property-Oriented Criteria Factories


                We've already seen that there is often more than one way to
                express what you want using the criteria API,
                depending on your stylistic preferences, or the way you think about
                things. The Property class offers another bunch of
                alternatives of which you should be aware. We will not explore this class
                in depth because it is just another way of setting up criteria, but it is
                important to explain how it works, so you won't be confused if you run
                across examples, or feel baffled while scanning Hibernate's dense JavaDoc.
                (And, frankly, after one or two examples, you'll undoubtedly get the idea
                well enough that you may decide to adopt this approach yourself.)


                Property is another factory for
                criteria, much like Restrictions, which we've
                been using in this chapter (and Order and
                Projection, for that matter). You can create
                essentially all the query refinements available with those other factories
                by using Property instead. Rather than starting
                with the kind of constraint in which you're interested, and then naming
                the property to which you want to apply it, you instead start with the
                property and pick a constraint.


                NOTE


                Enough abstraction! Show some examples!



                As before, you start by creating Criteria on
                the object you want to query. But instead of saying, for example:


                criteria.add(Restrictions.le("playTime", length));



                you can say:


                criteria.add(Property.forName("playTime").le(length));



                It's really very similar—just a slightly different emphasis—as when you have more than one way to phrase the same concept in English. There are a bunch of methods in Property that give you criteria, orderings, and projections. You can't construct a Property instance using new()—you need to either start with the forName() static factory, or use an existing Property instance and call getProperty() on it to traverse to one of its component properties.
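                If you're curious how two such factory styles can end up building the same thing underneath, here is a toy sketch in plain Java. These classes are invented for illustration and are not Hibernate's real Restrictions or Property implementations; the point is only the pattern: one factory starts from the constraint, the other starts from the property name, and both produce the same criterion object.

                // Toy sketch only (hypothetical classes, NOT Hibernate's API).
                final class Criterion {
                    final String property, op;
                    final Object value;
                    Criterion(String property, String op, Object value) {
                        this.property = property; this.op = op; this.value = value;
                    }
                    public String toString() { return property + " " + op + " " + value; }
                }

                // Restrictions-style: start with the constraint, then name the property.
                final class Restrictions {
                    static Criterion le(String property, Object value) {
                        return new Criterion(property, "<=", value);
                    }
                }

                // Property-style: start with the property, then pick the constraint.
                final class Property {
                    private final String name;
                    private Property(String name) { this.name = name; }
                    static Property forName(String name) { return new Property(name); }
                    Criterion le(Object value) { return new Criterion(name, "<=", value); }
                }

                public class FactoryStyles {
                    public static void main(String[] args) {
                        // Both phrasings express "playTime <= 300".
                        System.out.println(Restrictions.le("playTime", 300));
                        System.out.println(Property.forName("playTime").le(300));
                    }
                }

                The fluent Property style simply defers the choice of operator until after the property has been captured, which is why the two spellings are interchangeable.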


                Here are a few more examples to show how this approach fits in.
                Where we had used statements like:


                criteria.addOrder(Order.asc("name").ignoreCase());



                we could instead have used:


                criteria.addOrder(Property.forName("name").asc().ignoreCase());



                And with projections, the approach we'd followed, such as:


                criteria.setProjection(Projections.max("playTime"));



                could equally well have been expressed as:


                criteria.setProjection(Property.forName("playTime").max());



                NOTE


                This is almost an embarrassment of riches!



                So, take your pick. Sometimes the kind of problem you're solving, or
                the thrust of the rest of the code, will evoke one style or the other. Or
                perhaps you'll just like one better and stick with it. But at least you're
                now aware you might run into both, and should be able to understand any
                criteria expression. These factory methods are all summarized in Appendix B.


                Overwhelmed with options yet? No? Well, when criteria queries don't
                quite do the job, or you want an even more extreme alternative to all the
                choices you've seen in this chapter—especially if you're comfortable with
                the concision of SQL—you can turn to the full power of
                HQL. We investigate that in the next chapter.



                8.6.1. What about…


                …sneaking a little SQL in with your other
                criteria to take advantage of database-specific features or your mad
                DBA skillz? Using subqueries, or detached criteria
                that can be set up before you have a Hibernate session? Plugging in your
                own Java code to filter the results as the query is performed? All of
                these things and more are possible, but beyond the scope of this book.
                If you're ready for them, the Advanced query
                options
                chapter in Java
                Persistence with Hibernate
                is a good overview,
                and the Hibernate JavaDoc and source code are the definitive
                references.