










6.7. Summary


In this chapter, we discussed automation and why it should be a key component of any small to medium-sized company's Sarbanes-Oxley compliance activities. We also developed guidelines for assessing your in-house expertise against the skill sets needed to use open source tools, and we provided actions and alternatives for acquiring those skills if they do not currently exist within your organization.


Additionally, we looked at the various control objectives of the Acquisition and Implementation domain and identified those that relate specifically to Sarbanes-Oxley compliance, using our fictitious company as an example. To summarize the chapter, there are three fundamental things to take away:


  • Let your unique organizational structure drive the applicable domain items.

  • Automation will be critical, but resist the urge to over-implement.

  • Use of good project management methodologies will better position your compliance effort for success.


The remainder of the chapter shows examples of automation, project planning, and tracking for the sample companies. BuiltRight Construction has decided to redeploy a Web application to improve availability and security. The project consists of migrating an IIS Web server using ASP scripting and an Access database to an Apache server using PHP and a MySQL database. NuStuff Electronics has opted to augment its security infrastructure with an IDS. Snort has been selected as the network-based detection system, since it is the leading open source solution and there is copious documentation, along with several books, covering its deployment. To leverage the hard work of the open source community, the NST Live Security CD will be deployed, since it contains both the Snort IDS and a testing framework. These project examples are then put through the COBIT framework for approval, planning, and implementation, using the example workflow, project management, and documentation modules on the Live CD. The chapter closes with a discussion of additional example change management workflows with special considerations, ranging from the need for extra activities at one end of the spectrum to a simplified generic change management workflow at the other, which ultimately demonstrates the flexibility of the system.

























Background: Beyond the Objective

The choice of a research method is strongly coupled to the type of information that is available to the researcher: the method determines what type of information will be sought for subsequent analysis, and the type of information that is available in turn determines the types of analysis that may be conducted. However, the entire process must start with the research objective and how it is framed in terms of required information.


The positivist stance, prevalent in the natural sciences, is centred on the notion that all knowledge, in the form of facts, is derived from either observation or experience of real, objective, and measurable natural phenomena, thereby supporting the notion of quantitative analysis. Facts can thus be viewed as universal truths devoid of personal values and social interactions and independent of time and context. This enables researchers to focus on regularity, repeatability, and the verification and validation of causal relationships. The currency of such objective knowledge is the manipulation and metrification of objects and their relationships, expressed in the form of numbers to enable quantitative operations. This stance is difficult to sustain in failure research, where the actions, perceptions, and rationales of actors are not amenable to quantitative methods. (Note, however, that the actual findings and the factors leading to accidents can subsequently be modelled using quantitative notations.)


At the other extreme, the interpretivist (constructivist, or relativist) stance views knowledge as encompassing beliefs, principles, personal values, preferences, social context, and historical background, which are inevitably dynamic as they change with time (and context). Qualitative research methods originate in the social sciences, where researchers are concerned with social and cultural phenomena. Social interaction in human activity systems ensures intersubjectivity, as actors are forced to negotiate and agree on certain aspects. The humanistic perspective is outside the conventional positivist norm. The resulting emphasis is on the relevant interpretation of knowledge as held by participants in a social activity. Data sources utilised by researchers include observation, fieldwork, interviews, questionnaires, documents, texts, and the impressions and reactions of the researchers. Such a qualitative perspective relies on words (Miles & Huberman, 1994), conveying feelings and perceptions, rather than numbers. Qualitative methods recognise the fact that subjects can express themselves and their feelings and, thereby, clarify the social and cultural contexts within which they operate. Meaning therefore needs to be "interpreted" in a process of "sense-making." Actions thus need to be understood in terms of intentions, which in turn are understood in their original context (Schutz, 1973). Indeed, Kaplan and Maxwell (1994) argue that the goal of understanding a phenomenon from the point of view of the main participants and their particular social, cultural, and institutional context is largely lost when the textual data are quantified.


Making sense of IS failures retrospectively is difficult. In general, there is very little objective quantitative failure information that can be relied upon. This makes the utilisation of quantitative methods less likely until all relevant information is understood. Indeed, a specific feature of failure is the unique interaction between the system, the participants, their perspectives, complexity, and technology (Perrow, 1984). Lyytinen and Hirschheim (1987) pointed out that failure is a multifaceted phenomenon of immense complexity with multiple causes and perspectives. Research into failures often ignores the complex and important role of social arrangement embedded in the actual context. This is often due to the quantitative nature of such research. More recently, Checkland and Holwell (1998) argued that the IS field requires sense-making to enable a richer concept of information systems. Understanding the interactions that lead to failures likewise requires a humanistic stance that is outside the conventional positivist norm to capture the real diversity, contention, and complexity embedded in real life. Forensic analysis thus relies on utilising qualitative approaches to obtain a richer understanding of failure phenomena in terms of action and interaction (as explored in subsequent sections).


(Note that triangulation, the mixing of quantitative and qualitative methods, offers the opportunity to combine research methods in a complementary manner in one study. A good example of such a mix in failure research would entail reliance on qualitative methods to capture the essence, context, and webs of interactions in the buildup to failure and complement the presentation by using more formal approaches to model the impact of such interactions.)




















Sampled Audio Synthesis


Sampled audio is encoded as a series of samples in a byte array, which is sent through a SourceDataLine to the mixer. In previous examples, the contents of the byte array came from an audio file, though you saw that audio effects can manipulate and even add to the array. In sampled audio synthesis, the application generates the byte array data without requiring any audio input. Potentially, any sound can be generated at runtime.


Audio is a mix of sine waves, each one representing a tone or a note. A pure note is a single sine wave with a fixed amplitude and frequency (or pitch). Frequency can be defined as the number of sine waves that pass a given point in a second. The higher the frequency, the higher the note's pitch; the higher the amplitude, the louder the note.


Before I go further, it helps to introduce the usual naming scheme for notes; it's easier to talk about note names than note frequencies.



Note Names


Note names are derived from the piano keyboard, which has a mix of black and white keys, shown in Figure 10-1.


Keys are grouped into octaves, each octave consisting of 12 consecutive white and black keys. The white keys are labeled with the letters A to G and an octave number.



Figure 10-1. Part of the piano keyboard



For example, the note named C4 is the white key closest to the center of the keyboard, often referred to as middle C. The 4 means that the key is in the fourth octave, counting from the left of the keyboard.


A black key is labeled with the letter of the preceding white key and a sharp (#). For instance, the black key following C4 is known as C#4.


A note to musicians: for simplicity's sake, I'll be ignoring flats in this discussion.



Figure 10-2 shows the keyboard fragment of Figure 10-1 again but labeled with note names. I've assumed that the first white key is C4.



Figure 10-2. Piano keyboard with note names



Figure 10-2 utilizes the C Major scale, where the letters appear in the order C, D, E, F, G, A, and B.


There's a harmonic minor scale that starts at A, but I won't be using it in these examples.



After B4, the fifth octave begins, starting with C5 and repeating the same sequence as in the fourth octave. Before C4 is the third octave, which ends with B3.


Having introduced the names of these notes, it's possible to start talking about their associated frequencies or pitches. Table 10-1 gives the approximate frequencies for the C4 Major scale (the notes from C4 to B4).


Table 10-1. Frequencies for the C4 major scale

Note name     Frequency (in Hz)
C4            261.63
C#4           277.18
D4            293.66
D#4           311.13
E4            329.63
F4            349.23
F#4           369.99
G4            392.00
G#4           415.30
A4            440.00
A#4           466.16
B4            493.88



When I move to the next octave, the frequencies double for all the notes; for instance, C5 will be 523.26 Hz. The preceding octave contains frequencies that are halved, so C3 will be 130.82 Hz.


A table showing all piano note names and their frequencies can be found at http://www.phys.unsw.edu.au/~jw/notes.html. It includes the corresponding MIDI numbers, which I consider later in this chapter.




Playing a Note


A note can be played by generating its associated frequency and providing an amplitude for loudness. But how can this approach be implemented in terms of a byte array suitable for a SourceDataLine?


A pure note is a single sine wave, with a specified amplitude and frequency, and this sine wave can be represented by a series of samples stored in a byte array. The idea is shown in Figure 10-3.


This is a simple form of analog-to-digital conversion. So, how is a frequency converted into a given number of samples; that is, how many samples should the sine wave contain?



Figure 10-3. From single note to samples



A SourceDataLine is set up to accept a specified audio format, which includes a sample rate. For example, a sample rate of 21,000 causes 21,000 samples to reach the mixer every second. A note frequency of, say, 300 Hz means that 300 complete cycles of that note's sine wave will reach the mixer per second.


The number of samples required to represent a single note (one cycle of its sine wave) can therefore be calculated as follows:



samples/note = (samples/second) / (notes/sec)
samples/note = sample rate / frequency



For the previous example, a single note would need 21,000/300 = 70 samples. In other words, the sine wave must consist of 70 samples. This approach is implemented in sendNote( ) in the NotesSynth.java application, which is explained next.




Synthesizing Notes


NotesSynth generates simple sounds at runtime without playing a clip. The current version outputs an increasing pitch sequence, repeated nine times, each time increasing a bit faster and with decreasing volume.


NotesSynth.java is stored in SoundExamps/SynthSound/.



Here is the main( ) method:



public static void main(String[] args)
{
  createOutput( );
  play( );
  System.exit(0);   // necessary for J2SE 1.4.2 or earlier
}



createOutput( ) opens a SourceDataLine that accepts stereo, signed PCM audio, utilizing 16 bits per sample in little-endian format. Consequently, each frame uses 4 bytes (2 bytes per sample for each of the two channels):



// globals
private static int SAMPLE_RATE = 22050;   // no. of samples/sec

private static AudioFormat format = null;
private static SourceDataLine line = null;


private static void createOutput( )
{
  format = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
                  SAMPLE_RATE, 16, 2, 4, SAMPLE_RATE, false);
  /* SAMPLE_RATE   // samples/sec
     16            // sample size in bits; values range from -2^15 to 2^15-1
     2             // no. of channels, stereo here
     4             // frame size in bytes (2 bytes/sample * 2 channels)
     SAMPLE_RATE   // same as frames/sec
     false         // little endian */

  System.out.println("Audio format: " + format);

  try {
    DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
    if (!AudioSystem.isLineSupported(info)) {
      System.out.println("Line does not support: " + format);
      System.exit(0);
    }
    line = (SourceDataLine) AudioSystem.getLine(info);
    line.open(format);
  }
  catch (Exception e) {
    System.out.println( e.getMessage( ));
    System.exit(0);
  }
} // end of createOutput( )



play( ) creates a buffer large enough for the samples, plays the pitch sequence using sendNote( ), and then closes the line:



private static void play( )
{
  // calculate a size for the byte buffer holding a note
  int maxSize = (int) Math.round(
                  (SAMPLE_RATE * format.getFrameSize( )) / MIN_FREQ);
  // the frame size is 4 bytes
  byte[] samples = new byte[maxSize];

  line.start( );

  /* Generate an increasing pitch sequence, repeated 9 times, each
     time increasing a bit faster, and the volume decreasing */
  double volume;
  for (int step = 1; step < 10; step++)
    for (int freq = MIN_FREQ; freq < MAX_FREQ; freq += step) {
      volume = 1.0 - (step/10.0);
      sendNote(freq, volume, samples);
    }

  // wait until all data is played, then close the line
  line.drain( );
  line.stop( );
  line.close( );
} // end of play( )



maxSize must be big enough to store the largest number of samples for a generated note, which occurs when the note frequency is the smallest. Therefore, the MIN_FREQ value (250 Hz) is divided into SAMPLE_RATE.



Creating samples

sendNote( ) translates a frequency and amplitude into a series of samples representing that note's sine wave. The samples are stored in a byte array and sent along the SourceDataLine to the mixer:



// globals
private static double MAX_AMPLITUDE = 32760;   // max loudness
       // actual max is 2^15-1 (32767), since I'm using PCM signed 16 bit

// frequency (pitch) range for the notes
private static int MIN_FREQ = 250;
private static int MAX_FREQ = 2000;

// Middle C (C4) has a frequency of 261.63 Hz; see Table 10-1


private static void sendNote(int freq, double volLevel, byte[] samples)
{
  if ((volLevel < 0.0) || (volLevel > 1.0)) {
    System.out.println("Volume level should be between 0 and 1, using 0.9");
    volLevel = 0.9;
  }
  double amplitude = volLevel * MAX_AMPLITUDE;

  int numSamplesInWave = (int) Math.round( ((double) SAMPLE_RATE)/freq );
  int idx = 0;
  for (int i = 0; i < numSamplesInWave; i++) {
    double sine = Math.sin(((double) i/numSamplesInWave) * 2.0 * Math.PI);
    int sample = (int) (sine * amplitude);
    // left sample of stereo
    samples[idx + 0] = (byte) (sample & 0xFF);          // low byte
    samples[idx + 1] = (byte) ((sample >> 8) & 0xFF);   // high byte
    // right sample of stereo (identical to left)
    samples[idx + 2] = (byte) (sample & 0xFF);
    samples[idx + 3] = (byte) ((sample >> 8) & 0xFF);
    idx += 4;
  }

  // send out the samples (the single note)
  int offset = 0;
  while (offset < idx)
    offset += line.write(samples, offset, idx-offset);
}



numSamplesInWave is obtained by using the calculation described above, which is to divide the note frequency into the sample rate.


A sine wave value is obtained with Math.sin( ) and split into two bytes since 16-bit samples are being used. The little-endian format determines that the low-order byte is stored first, followed by the high-order one. Stereo means that I must supply two bytes for the left speaker, and two for the right; in my case, the data are the same for both.




Extending NotesSynth

A nice addition to NotesSynth would be to allow the user to specify notes with note names (e.g., C4, F#6), and translate them into frequencies before calling sendNote( ). Additionally, play( ) is hardwired to output the same tones every time it's executed. It would be easy to have it read a notes files, perhaps written using note names, to play different tunes.
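One rough way of doing the translation is sketched below; noteToFreq( ) is not part of NotesSynth as listed above, and it assumes the equal-tempered scale with A4 fixed at 440 Hz:


// Hypothetical helper, not in NotesSynth: converts a note name such as
// "C4" or "F#6" into an approximate frequency (equal-tempered, A4 = 440 Hz).
// It could be called before sendNote( ), e.g.
//    sendNote((int) Math.round(noteToFreq("C4")), 0.8, samples);
private static double noteToFreq(String name)
{
  String letters = "C D EF G A B";    // semitone offsets within an octave
  int semitone = letters.indexOf( Character.toUpperCase(name.charAt(0)) );
                                      // C=0, D=2, E=4, F=5, G=7, A=9, B=11
  int idx = 1;
  if (name.charAt(idx) == '#') {      // sharps only; flats are ignored,
    semitone++;                       // as in the rest of this chapter
    idx++;
  }
  int octave = Integer.parseInt( name.substring(idx) );
  int midiNote = (octave + 1) * 12 + semitone;   // MIDI numbering: C4 = 60, A4 = 69
  return 440.0 * Math.pow(2.0, (midiNote - 69) / 12.0);
}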


Another important missing element is timing. Each note is played immediately after the previous note. It would be better to permit periods of silence as well.
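One possible approach to rests (again, a sketch rather than part of NotesSynth) is a companion method that writes zero-valued frames for a given duration:


// Hypothetical helper, not in NotesSynth: writes secs seconds of silence.
// A newly allocated byte array in Java is already filled with zeros, which
// is exactly what silence looks like in signed PCM.
private static void sendSilence(double secs)
{
  int numFrames = (int) Math.round(SAMPLE_RATE * secs);
  byte[] silence = new byte[numFrames * 4];   // 4 bytes per frame (stereo, 16 bit)

  int offset = 0;
  while (offset < silence.length)
    offset += line.write(silence, offset, silence.length - offset);
}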


Consider these challenges rather than deficiencies; it's easy to implement this functionality in NotesSynth.






























    Puzzle 15: Hello Whirled



    The following program is a minor variation on an old chestnut. What does it print?





    /**

    * Generated by the IBM IDL-to-Java compiler, version 1.0

    * from F:\TestRoot\apps\a1\units\include\PolicyHome.idl

    * Wednesday, June 17, 1998 6:44:40 o'clock AM GMT+00:00

    */

    public class Test {

    public static void main(String[] args) {

    System.out.print("Hell");

    System.out.println("o world");

    }

    }






    Solution 15: Hello Whirled



    This puzzle looks fairly straightforward. The program contains two statements. The first prints Hell and the second prints o world on the same line, effectively concatenating the two strings. Therefore, you might expect the program to print Hello world. You would be sadly mistaken. In fact, it doesn't compile.



    The problem is in the third line of the comment, which contains the characters \units. These characters begin with a backslash (\) followed by the letter u, which denotes the start of a Unicode escape. Unfortunately, these characters are not followed by four hexadecimal digits, so the Unicode escape is ill-formed, and the compiler is required to reject the program. Unicode escapes must be well formed, even if they appear in comments.



    It is legal to place a well-formed Unicode escape in a comment, but there is rarely a reason to do so. Programmers sometimes use Unicode escapes in Javadoc comments to generate special characters in the documentation:





    // Questionable use of Unicode escape in Javadoc comment



    /**

    * This method calls itself recursively, causing a

    * <tt>StackOverflowError</tt> to be thrown.

    * The algorithm is due to Peter von der Ah\u00E9.

    */




    This technique represents an unnecessary use of Unicode escapes. Use HTML entity escapes instead of Unicode escapes in Javadoc comments:





    /**

    * This method calls itself recursively, causing a

    * <tt>StackOverflowError</tt> to be thrown.

    * The algorithm is due to Peter von der Ahé.

    */




    Either of the preceding comments should cause the name to appear in the documentation as "Peter von der Ahé," but the latter comment is also understandable in the source file.



    In case you were wondering, the comment in this puzzle was derived from an actual bug report. The program was machine generated, which made it difficult to track the problem down to its source, an IDL-to-Java compiler. To avoid placing other programmers in this position, tools must not put Windows filenames into comments in generated Java source files without first processing them to eliminate backslashes.



    In summary, ensure that the characters \u do not occur outside the context of a valid Unicode escape, even in comments. Be particularly wary of this problem in machine-generated code.
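    As an illustration of that advice (the sanitizer below is hypothetical; it is not part of the puzzle or the original bug report), a code generator could rewrite Windows path separators before embedding a path in a generated comment, so that the character pair backslash-u can never appear:


    // Hypothetical path sanitizer for a code generator. Note that the
    // replacement also keeps this very comment legal: it mentions paths such
    // as F:\TestRoot\apps, which contain no backslash-u pair.
    public class PathSanitizer {
        static String commentSafe(String path) {
            return path.replace('\\', '/');
        }

        public static void main(String[] args) {
            System.out.println(
                commentSafe("F:\\TestRoot\\apps\\a1\\units\\include\\PolicyHome.idl"));
            // prints F:/TestRoot/apps/a1/units/include/PolicyHome.idl
        }
    }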









































      Java Object Storage: SQL Types of Language Java or SQLJ Object Types


      The integration of Java in the Oracle database originally served one main function: accessing SQL object types from Java applications using JDBC or SQLJ, which was made possible by mapping Java classes to the object types. Oracle9i introduced Java object persistence with SQLJ object types, which enable the reverse mapping (i.e., SQL object types defined in terms of Java classes).


      SQLJ object types are SQL object types of the Java language. You can use these types wherever you can use a SQL object type—for example, as the type of an object table, an attribute of a type, or a column of an object-relational table. You can query and manipulate schema objects of SQLJ object types using SQL.


      You create SQLJ object types using the extended CREATE TYPE statement with the USING clause and the EXTERNAL clause.


      To implement SQLJ object types, you need the following:




      • A mapping from the SQLJ object type's attributes and member methods to the corresponding Java equivalents, and a mapping from the SQLJ object type as a whole to the Java class as a whole. You do this with the EXTERNAL clause.




      • An interface to use the SQLJ object type functionality. You do this with the USING clause. The USING clause can specify one of the interfaces java.sql.SQLData or oracle.sql.ORAData, along with oracle.sql.ORADataFactory, and the corresponding Java class must implement one of these interfaces.




      Here are the steps involved in creating and using SQLJ object types:




      1. Create the custom Java class that the SQLJ object type maps to.




      2. Load the class into the Oracle9i database.




      3. Create a SQLJ object type specifying the mappings from the object type to the Java class.




      4. Use the SQLJ object type in PL/SQL and/or SQL just like any object type.




      The following sections describe how to perform each of these steps.




      Create the Custom Java Class That the SQLJ Object Type Maps To



      This class should implement the java.sql.SQLData or oracle.sql.ORAData interface. I use a definition similar to that of the address object type described earlier. Here's the code for the custom Java class:




      import java.sql.*;
      import oracle.sql.*;
      public class Address implements SQLData {
      public String line1;
      public String line2;
      public String city;
      public String state_code;
      public String zip;
      String sql_type = "ADDRESS_SQLJ";

      public Address() {
      }

      public Address (String iline1, String iline2, String icity,
      String istate, String izip) {
      this.line1 = iline1;
      this.line2 = iline2;
      this.city = icity;
      this.state_code = istate;
      this.zip = izip;
      }

      public String getSQLTypeName() throws SQLException
      {
      return sql_type;
      }

      public void readSQL(SQLInput stream, String typeName)
      throws SQLException
      {
      sql_type = typeName;

      line1 = stream.readString();
      line2 = stream.readString();
      city = stream.readString();
      state_code = stream.readString();
      zip = stream.readString();
      }

      public void writeSQL(SQLOutput stream)
      throws SQLException
      {
      stream.writeString(line1);
      stream.writeString(line2);
      stream.writeString(city);
      stream.writeString(state_code);
      stream.writeString(zip);
      }

      public static Address setAddress (String iline1, String iline2,
      String icity, String istate, String izip) {
      return new Address(iline1, iline2, icity, istate, izip);
      }

      public String getAddress() {
      return this.line1 + " " + this.line2 + " " + this.city + ", " +
      this.state_code + " " + this.zip;
      }
      }


      Once you've defined the Java source, you have to compile it into a .class file using the command:



      javac Address.java




      Load the Class into the Oracle9i Database


      You do this using the loadjava utility provided by Oracle. Refer to the "Oracle9i Java Stored Procedures Developer's Guide" and the "Oracle9i Java Developer's Guide" in the Oracle documentation for a description of the loadjava utility.


      Here's how the Address.class Java class file is loaded into the plsql9i schema (connecting as plsql9i/plsql9i):



      loadjava -u plsql9i/plsql9i -r -oci8 Address.class

      The -r option resolves external references within the schema.



      To verify that the Address.class file is loaded into the database, use the following query:



      SQL> column object_name format a30;
      SQL> select object_name, object_type
      2 from user_objects
      3 where object_type like '%JAVA%';

      OBJECT_NAME                      OBJECT_TYPE
      -------------------------------- -----------
      Address                          JAVA CLASS




      Create a SQLJ Object Type Specifying the Mappings from the Object Type to the Java Class


      Once you've defined the custom Java class and compiled and loaded it into the database, the next step is to define the object type based on the Java class. You do this with the CREATE TYPE statement using the EXTERNAL and USING clauses. Here's an example that defines an object type based on the Address class:




      CREATE TYPE address_sqlj AS OBJECT
      EXTERNAL NAME 'Address' LANGUAGE JAVA
      USING SQLData(
      line1_sqlj varchar2(20) EXTERNAL NAME 'line1',
      line2_sqlj varchar2(20) EXTERNAL NAME 'line2',
      city_sqlj varchar2(20) EXTERNAL NAME 'city',
      state_code_sqlj varchar2(2) EXTERNAL NAME 'state_code',
      zip_sqlj varchar2(13) EXTERNAL NAME 'zip',
      STATIC FUNCTION set_address (p_line1 VARCHAR2, p_line2 VARCHAR2,
      p_city VARCHAR2, p_state_code VARCHAR2, p_zip VARCHAR2)
      RETURN address_sqlj EXTERNAL NAME 'setAddress (java.lang.String, java.lang.String,
      java.lang.String, java.lang.String, java.lang.String) return Address',
      MEMBER FUNCTION get_address RETURN VARCHAR2
      EXTERNAL NAME 'Address.getAddress() return java.lang.String'
      )
      NOT FINAL;
      /


      Here's how you can verify the output of this statement:




      SQL> desc address_sqlj

      Here's the output:








      The USING clause specifies the interface that the SQLJ object type implements. It is either java.sql.SQLData or oracle.sql.ORAData (the latter along with oracle.sql.ORADataFactory), and the corresponding Java class must implement the chosen interface.


      The EXTERNAL clause specifies the mapping from each SQLJ object type attribute and member method to the corresponding Java equivalent, and from the SQLJ object type as a whole to the Java class as a whole.





      Use the SQLJ Object Type in PL/SQL and/or SQL Just Like Any Object Type


      Once you've created the SQLJ object type, you can use it just like any other object type. For example, you can use it to build an object table or you can use it as the data type of a column in an object-relational table. Here's an example to illustrate this.



      First, I create an object table based on the SQLJ object type address_sqlj. Here's the code:



      CREATE TABLE address_master_sqlj OF address_sqlj;

      Then I use an INSERT statement with the set_address static function invoked on the object type to insert a row into this table. Here's the code:




      insert into address_master_sqlj
      values(address_sqlj.set_address('1 Oracle parkway',null,'Redwood Shores',
      'CA ', '41246 '));


      Next, I query the object table to select columns that are mapped to the attributes of the SQLJ object type. Here's the code:



      SELECT a.line1_sqlj, a.line2_sqlj FROM address_master_sqlj a;

      Here's the output of this query:








      Also, I can invoke member methods of the SQLJ object type as follows:



      SELECT a.get_address() FROM address_master_sqlj a;

      Here's the output of this query:












      Tip


      You can declare member methods of a SQLJ object type as STATIC. STATIC member functions can map to static Java methods of the corresponding mapping Java class, and they're used in INSERT statements to invoke either a user-defined constructor of the Java class or a static Java method that returns the class type.
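      On the client side, the same mapping lets JDBC hand back Address instances directly. The following sketch is illustrative only: the connection URL and credentials are placeholders, and it assumes the java.sql.SQLData route used above:


      import java.sql.*;
      import java.util.Map;

      public class AddressClient {
        public static void main(String[] args) throws SQLException {
          // Placeholder connection details; adjust for your environment.
          Connection conn = DriverManager.getConnection(
              "jdbc:oracle:thin:@localhost:1521:orcl", "plsql9i", "plsql9i");

          // Ask JDBC to materialize ADDRESS_SQLJ values as Address objects.
          Map map = conn.getTypeMap();
          map.put("ADDRESS_SQLJ", Address.class);
          conn.setTypeMap(map);

          Statement stmt = conn.createStatement();
          ResultSet rs = stmt.executeQuery(
              "SELECT VALUE(a) FROM address_master_sqlj a");
          while (rs.next()) {
            Address addr = (Address) rs.getObject(1);   // readSQL() runs here
            System.out.println(addr.getAddress());
          }
          rs.close();
          stmt.close();
          conn.close();
        }
      }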





















      Three-Column Layouts with Float and Clear


      The next demo uses five visual elements: a header, three columns, and a footer. This demo is about as simple as it can be, but the structure it demonstrates can be the foundation of complex and beautifully designed pages. The page you create is going to be only 450 pixels wide, with three columns, each 150 pixels across.


      The objective is to float the three columns side by side under the header and then have the footer run across the bottom. The footer will be the width of the page, and will sit directly under whichever of the three columns is longest.


      The first step, as always, is to write the markup:



      <body>
      <div id="header">This is the header</div>
      <div id="contentarea">
      <div id="column1">These divs will float side by side to create columns. This is div 1 with
      a little text to give it some height...</div>
      <div id="column2">This is div 2 with a little less text so it's a different height than
      the other two...</div>
      <div id="column3">The objective is that no matter which of the three divs is longest, the
      footer sits below it...</div>
      </div><!--end contentarea-->
      <div id="footer">This is the footer</div>
      </body>


      Then you start with some simple styles to color the backgrounds of the elements so you can see their positions (Figure 5.13). Here are the styles:



      div#header {width:450px; background-color:#CAF;}
      div#contentarea {width:450px; background-color:#CCC;}
      div#column1 {width:150px; background-color:#FCC;}
      div#column2 {width:150px; background-color:#CFC;}
      div#column3 {width:150px; background-color:#AAF;}
      div#footer {width:450px; background-color:#FAC;}


      Figure 5.13. This shows the basic markup with each div a different color.




      About Footers


      A footer is like a header, but it runs across the bottom of the page rather than the top, and it often contains a second set of major navigation links as well as some minor links such as a link to a privacy policy or a link to the site's terms of use and copyright information. Also, if the viewer has scrolled to the bottom of a long page, the footer links can provide main-choice options so the viewer doesn't have to scroll back to the top again.


      You can add a footer to the designs you've seen earlier in this chapter in the same way you added the header. However, if the absolutely positioned columns happen to be longer than the content area (which, being in the document flow, pushes the footer downward), the columns will extend over the footer. What you need, and will create, is a page structure where the bottom of the longest column, whichever one it happens to be, sets the position for the top of the footer.



      I added one extra structural element to the markup: a div that contains the three column divs, called contentarea. The purpose of this element is to enclose the three columns so that as any of the columns gets longer, the bottom of this container gets pushed down, and the footer that follows gets pushed down too. Without it, the footer moves up as close as possible to the header.


      Now, while you can see that this "wrapper" div surrounds the three columns in the markup, this is not what is happening in the browser. The CSS recommendations do not require a div to enclose floated elements, but by using clear, you can make it do just that and thereby get your footer to position correctly below the longest column.


      Let's start by floating the three columns, which pushes each one up and to the left as far as possible. Here, I've set it up so that there is room for the columns to still be side by side (Figure 5.14). Here's the CSS:



      div#header {width:450px; background-color:#CAF;}
      div#contentarea {width:450px; background-color:#CCC; border:solid;}
      div#column1 {width:150px; background-color:#FCC; float:left;}
      div#column2 {width:150px; background-color:#CFC; float:left;}
      div#column3 {width:150px; background-color:#AAF; float:left;}
      div#footer {width:450px; background-color:#FAC;}


      Figure 5.14. The footer tries to move up. The containing div does not "contain" anything yet.



      You can see that the footer, in its effort to move up against the container div, is jammed up under the second, shortest column.


      I also turned on the border of the div#contentarea so you can see it. The top and bottom edge of the div touch, forming a solid black rectangle. You can see three sides of this rectangle because those sides add to the width of the div. The bottom edge of the box is obscured by the three columns. The div has no vertical height because it contains only floated elements and, therefore, behaves as if it is empty. But that's not what you want; you have to devise some way to make that div's box open up and surround the columns.


      Contrary to the CSS specification, in Internet Explorer for Windows, the div does surround the float, so it already shows the result displayed in Figure 5.15, but this does not happen in other more standards-compliant browsers. Be sure to clear the wrapper as described, or your layout will only work in Internet Explorer. This is just another reason why it is important to develop in a standards-compliant browser and then adjust for Internet Explorer afterward.

      Figure 5.15. The containing element is forced to enclose the non-floating div, thus forming a boundary below the columns that the footer cannot move above.




      The way to do this (until I show you the really good, but more complex, way to do it in "The Alsett Clearing Method" later in the chapter) is to add a non-floated element into the markup after the column divs and then force the containing div to clear it. This opens the div up to the height of the tallest column. To do this, you add an extra div into the markup as shown here:



      <div id="contentarea">
      <div id="column1">These divs will float side by side to create columns. This is div 1 with
      a little text to give it some height...</div>
      <div id="column2">This is div 2 with a little more text so it's a different height than
      the other two...</div>
      <div id="column3">The objective is that no matter which of the three divs is longest, the
      footer sits below it...</div>
      <div class="clearfloats"><!-- --></div>
      </div><!--end contentarea-->
      <div id="footer">This is the footer</div>


      Divs don't like to be empty. If you don't have any content for a div, simply put a comment inside it. Some people put a period or a similar small element in these "clearing" elements and then add a couple of extra rules to set the line-height and height of the div to zero so that content is not visible. The important thing is not to have this extra element take up space in your page. Sometimes you have to experiment to achieve that.



      Notice that this new element is positioned after the floated columns but before the container div closes, and that it has the class "clearfloats". Now you need to add a style for this class.


      The clearing div gets a class (rather than an ID) because you might want to use it multiple times on the page; IDs can be used only once per page. Note also that although there are only left-floated elements here, I used clear:both instead of clear:left in case I also want to clear right-floated elements elsewhere on this fictional page.




      div#header {width:450px; background-color:#CAF;}
      div#contentarea {width:450px; background-color:#CCC; border:solid;}
      div#column1 {width:150px; background-color:#FCC; float:left;}
      div#column2 {width:150px; background-color:#CFC; float:left;}
      div#column3 {width:150px; background-color:#AAF; float:left;}
      div#footer {width:450px; background-color:#FAC;}
      div.clearfloats {clear:both;}


      Now the contentarea div is forced to enclose the columns so it can clear the non-floated element, and the footer, which follows the container in the flow, is positioned correctly beneath the columns (Figure 5.15).


      Just to show you that the footer is always below the longest column, I'll add some text to the center column to make it the longest one. I'll also turn off the border of the container div now that you know it's doing its job (Figure 5.16 on the next page).


      Figure 5.16. The footer always appears below the longest column.



      Because I added this extra markup purely to achieve a presentational effect, it's not ideal in terms of keeping the markup structural, but it's a simple way to get the result I want, and it's relatively simple to understand. Now that you get the idea of how this works, let's look at a more complex, all-CSS method for enclosing floats that requires no additional elements in the markup.


      The Alsett Clearing Method


      Named after its creator, Tony Alsett (www.csscreator.com), the Alsett Clearing Method uses the CSS :after pseudo-element to insert a hidden bit of non-floated content (instead of a div) at the appropriate place in the markup. The clear is then applied to this non-floated content.


      This is a superior method of clearing floats to the "clearing div" method in the previous example since it requires no additional markup.



      :after enables you to define a string of characters to be inserted after the content of an element. You set up the required CSS as a class and then add the class to the containing element. Here's the complete page markup for this demo; in this markup, I'm using the Alsett method to force the div to enclose the floated elements. You can see the results in Figure 5.17.


      Figure 5.17. There's not much visual difference between this page and Figure 5.16, but here the clearing is achieved with the Alsett method.



      Here's how it works:




      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
      "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
      <html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
      <head>
      <title>Clearing floats demo</title>
      <meta http-equiv="Content-type" content="text/html; charset=iso-8859-1" />
      <meta http-equiv="Content-Language" content="en-us" />
      <style type="text/css">
      body {margin:0px; padding:0px; font: 1.0em verdana, arial, sans-serif;}
      div#header {width:450px; background-color:#CAF;}
      div#contentarea {width:450px; background-color:#CCC;}
      div#column1 {width:150px; background-color:#FCC; float:left;}
      div#column2 {width:150px; background-color:#CFC; float:left;}
      div#column3 {width:150px; background-color:#AAF; float:left;}
      div#footer {width:450px; background-color:#FAC;}
      .clearfix:after {
      content: "."; <-- a
      display: block; <-- b
      height: 0; <-- c
      clear: both; <-- d
      visibility: hidden; <-- e
      }
      .clearfix {display: inline-block;} <-- f
      * html .clearfix {height: 1%;} <-- 6
      .clearfix {display: block;} <-- 6
      </style>

      </head>
      <body>
      <div id="header">This is the header</div>
      <div id="contentarea" class="clearfix">
      <div id="column1">These divs will float side by side to create columns. This is div 1 with
      a little text to give it some height...</div>
      <div id="column2">This is div 2 with a little more text so it's a different height than
      the other two...adding a little more text here makes now this column the longest.</div>
      <div id="column3">This version uses the Alsett clearing method, which uses the CSS :after
      pseudo-element to add content and force the container to enclose the floats, and only
      requires a class to be added to the markup, instead of a "clearing" element.</div>
      </div><!--end contentarea-->
      <div id="footer">This is the footer</div>
      </body>
      </html>


      (a) The period is the last thing before the div closes

      (b) Inline elements don't respond to the clear property

      (c) Ensure the period is not visible

      (d) Make the container clear the period

      (e) Further ensures the period is not visible

      (f) A fix for IE Mac

      (6) The Holly hack for a bug in IE6 for Windows


      Using comments in this code, I provided a superficial explanation of this method; if you want to learn all about the whys and wherefores of how this method actually works, go to Big John and Holly "Hack" Bergevin's site, Position Is Everything (www.positioniseverything.net). Here you'll also find lots of great information on floats in general.


      What's good about this method is that you don't need to know how it works to use it. You can just paste the styles into your style sheet and add the class to any element that you want to enclose floats; using it is simple.


      A couple of observations, though. First, in Internet Explorer for Windows, divs will enclose floats without any clearing, even though this is not correct behavior. As a result, when working with floated elements you shouldn't assume that what you see in Internet Explorer works everywhere. If you want a container to enclose floats, use one of the two methods demonstrated above to ensure cross-browser clearing. Second, Internet Explorer has some buggy behaviors having to do with floats, such as the guillotine bug, which can cut off the bottom of elements that contain both links and floats (this Internet Explorer bug and others are well documented at Position Is Everything). The version of the Alsett method used here addresses these Internet Explorer float bugs, which is another reason it's superior to the "clearing div" method I showed you first. You can strip the Alsett clearing code down to the highlighted lines in the markup above, but note that two of those lines are also part of an Internet Explorer hack to fix the guillotine bug, so you must keep them in your CSS.

















        Chapter 10. Custom Tools


        Although many tools are available to aid in the tasks of network administration, there will often be some task for which you need a tool that does not yet exist. In this situation, you can either find a way to live without the tool, pay someone to make it for you, or create the tool yourself.


        Most of the tools described in this book are relatively complicated and take a large effort to create. They require a mastery of the language the tool is written in, the ability to design and implement a large project, and experience with network programming. However, you can write a great number of simple tools using languages that were designed for just such a purpose. This chapter presents a brief introduction to two such scripting languages: the Bourne shell and Perl. Both of these languages are in very wide use. The Bourne shell in particular is present on every standard Unix system, and the Perl language is now installed on most modern systems.


        This chapter assumes that you are already familiar with programming in some language; almost any commonly used language provides sufficient background for you to quickly pick up the basics of Perl and the Bourne shell.


















          Scene Graph Creation


          The scene graph is created by the constructor for WrapCheckers3D:



          public WrapCheckers3D( )
          {
          // initialization code

          GraphicsConfiguration config =
          SimpleUniverse.getPreferredConfiguration( );
          Canvas3D canvas3D = new Canvas3D(config);
          add("Center", canvas3D);
          canvas3D.setFocusable(true); // give focus to the canvas
          canvas3D.requestFocus( );

          su = new SimpleUniverse(canvas3D);


          createSceneGraph( );

          initUserPosition( ); // set user's viewpoint
          orbitControls(canvas3D); // controls for moving the viewpoint

          su.addBranchGraph( sceneBG );
          }



          The Canvas3D object is initialized with a configuration obtained from getPreferredConfiguration( ); this method queries the hardware for rendering information. Some older Java 3D programs don't bother initializing a GraphicsConfiguration object, using null as the argument to the Canvas3D constructor instead. This is bad programming practice.


          canvas3D is given focus so keyboard events will be sent to behaviors in the scene graph. Behaviors are often triggered by key presses and releases, but they may be triggered by timers, frame changes, and events generated by Java 3D internally. There aren't any behaviors in Checkers3D, so it's not necessary to set the focus. I've left these lines in since they're needed in almost every other program we'll consider.


          The su SimpleUniverse object creates a standard view branch graph and the VirtualUniverse and Locale nodes of the scene graph. createSceneGraph( ) sets up the lighting, the sky background, the floor, and floating sphere; initUserPosition( ) and orbitControls( ) handle viewer issues. The resulting BranchGroup is added to the scene graph at the end of the method:



          private void createSceneGraph( )
          {
          sceneBG = new BranchGroup( );
          bounds = new BoundingSphere(new Point3d(0,0,0), BOUNDSIZE);


          lightScene( ); // add the lights
          addBackground( ); // add the sky
          sceneBG.addChild( new CheckerFloor( ).getBG( ) ); // add floor

          floatingSphere( ); // add the floating sphere

          sceneBG.compile( ); // fix the scene
          } // end of createSceneGraph( )



          Various methods add subgraphs to sceneBG to build the content branch graph. sceneBG is compiled once the graph has been finalized to allow Java 3D to optimize it. The optimizations may involve reordering the graph and regrouping and combining nodes. For example, a chain of TransformGroup nodes containing different translations may be combined into a single node. Another possibility is to group all the shapes with the same appearance properties together, so they can be rendered more quickly.


          bounds is a global BoundingSphere used to specify the influence of environment nodes for lighting, background, and the OrbitBehavior object. The bounding sphere is placed at the center of the scene and affects everything within a BOUNDSIZE units radius. Bounding boxes and polytopes are available in Java 3D.
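          As an aside, a bounding box would be set up in much the same way. The snippet below is only a sketch and is not part of Checkers3D:


          // Sketch only (not in Checkers3D): an axis-aligned BoundingBox covering
          // roughly the same region as the BOUNDSIZE-radius bounding sphere. It
          // could be passed to setInfluencingBounds( ) in exactly the same way.
          BoundingBox boxBounds = new BoundingBox(
                  new Point3d(-BOUNDSIZE, -BOUNDSIZE, -BOUNDSIZE),
                  new Point3d( BOUNDSIZE,  BOUNDSIZE,  BOUNDSIZE) );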


          The scene graph by the end of WrapCheckers3D( ) is shown in Figure 15-3.


          The "Floor Branch" node is my invention to hide some details until later. Missing from Figure 15-3 is the view branch part of the scene graph.



          Lighting the Scene


          One ambient and two directional lights are added to the scene by lightScene( ). An ambient light reaches every corner of the world, illuminating everything equally.



          Color3f white = new Color3f(1.0f, 1.0f, 1.0f);
          // Set up the ambient light
          AmbientLight ambientLightNode = new AmbientLight(white);
          ambientLightNode.setInfluencingBounds(bounds);
          sceneBG.addChild(ambientLightNode);



          The color of the light is set, and the ambient source is created, given influencing bounds, and added to the scene. The Color3f( ) constructor takes Red/Green/Blue values between 0.0f and 1.0f (1.0f being "full-on").


          A directional light mimics a light from a distant source, hitting the surfaces of objects from a specified direction. The main difference from an ambient light is the requirement for a direction vector.



          Vector3f light1Direction = new Vector3f(-1.0f, -1.0f, -1.0f);
          // left, down, backwards
          DirectionalLight light1 = new DirectionalLight(white, light1Direction);
          light1.setInfluencingBounds(bounds);
          sceneBG.addChild(light1);




          Figure 15-3. Partial scene graph for Checkers3D



          The direction is the vector between (0, 0, 0) and (-1, -1, -1); the light can be imagined to be multiple parallel lines with that direction, originating at infinity.


          Point and spot lights are the other forms of Java 3D lighting. Point lights position the light in space, emitting in all directions. Spot lights are focused point lights, aimed in a particular direction.
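          Neither is used in Checkers3D, but as a rough sketch they would be created much like the directional lights; the position, spread angle, and concentration values below are invented for illustration:


          // Sketch only (not in Checkers3D): a point light with no attenuation,
          // and a spot light at the same position aimed straight down.
          PointLight pointLight = new PointLight( white,
                  new Point3f(0f, 5.0f, 0f), new Point3f(1.0f, 0f, 0f) );
          pointLight.setInfluencingBounds(bounds);
          sceneBG.addChild(pointLight);

          SpotLight spotLight = new SpotLight( white,
                  new Point3f(0f, 5.0f, 0f), new Point3f(1.0f, 0f, 0f),
                  new Vector3f(0f, -1.0f, 0f),
                  (float) Math.PI/8,     // spread angle, in radians
                  10.0f );               // concentration
          spotLight.setInfluencingBounds(bounds);
          sceneBG.addChild(spotLight);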




          The Scene's Background


          A background for a scene can be specified as a constant color (as shown here), a static image, or a texture-mapped geometry such as a sphere:



          Background back = new Background( );
          back.setApplicationBounds( bounds );
          back.setColor(0.17f, 0.65f, 0.92f); // sky color
          sceneBG.addChild( back );


















            4.11. Networking

            4.11.1. Communicate on a Socket

            4.11.1.1. Problem

            You would like to communicate with a server using a protocol
            that is not directly supported by Adobe AIR (e.g., communicate
            with an FTP server).

            4.11.1.2. Solution

            Use the Socket class in
            the AIR API to send binary or text data to the server and
            register for events that will alert you to incoming data from the
            server.

            4.11.1.3. Discussion

            When communicating using protocols other than those directly
            supported by Adobe AIR, you may need to use the Socket API. The Socket
            API is an asynchronous API that lets you send data to a persistent
            socket endpoint and receive data from it in real time. You do not need
            to create a new Socket instance for
            each set of data sent to the same endpoint. The connection can be kept
            alive for the entire conversation between your client and the service
            to which you're connecting. This is the typical flow when using the
            Socket API:

            1. Create a connection to the endpoint.

            2. Listen for notification of connection success or
              failure.

            3. Queue data that will be sent to the endpoint.

            4. Send the data to the endpoint.

            5. Listen for data incoming from the endpoint.

            6. Repeat steps 3 through 5.

            7. Close the connection.

            The first step is to create a connection to the socket endpoint
            that consists of a host and a port number. For example, to connect to
            an endpoint, the host might be foo.com and the port number might be
            5555. Create the instance of the Socket class and connect to the endpoint
            using that information. At this time, we will also set up our
            listeners to listen for the different events that the Socket can
            dispatch:

            var socket = new air.Socket();
            socket.addEventListener( air.Event.CONNECT, onSocketOpen );
            socket.addEventListener( air.ProgressEvent.SOCKET_DATA,
            onSocketData );
            socket.connect( 'foo.com', 5555 );


            We will also need to create the functions to handle the events
            for which we subscribed. The first event is the air.Event.CONNECT event. This event will tell us when the socket has been
            initiated and when communication with the service behind the endpoint
            is possible. In this example, we are sending the bytes of a UTF-8
            encoded string to the service:

            function onSocketOpen( event )
            {
            // This queues up the binary representation of the
            // string 'Bob' in UTF-8 format to be sent to the
            // endpoint.
            socket.writeUTFBytes( "Bob" );

            // Send the actual bytes to the server and clear
            // the stream. We then wait for data to be sent
            // back to us.
            socket.flush();
            }


            The air.ProgressEvent.SOCKET_DATA event
            is dispatched whenever data is received. The service we
            are connecting to uses a simple protocol: we send a UTF-8
            encoded string and it returns a UTF-8 encoded string. This
            makes accessing the data sent back to us very simple. To access this
            data, we measure the total number of bytes of data available on the
            Socket and read that many bytes as a UTF-8 encoded string using the
            readUTFBytes() method of the
            Socket class.

            function onSocketData( event )
            {
            var data =
            socket.readUTFBytes( socket.bytesAvailable );
            air.trace( data ); // Hello Bob
            }


            In our example, the protocol of communication was just a single
            string. In some cases, depending on the service with which you're
            communicating, you may need to send and receive other data types. The
            Socket class provides methods for
            reading and writing many data types, such as ints,
            Booleans, floats, and more. For example, if we were
            talking with a fictional service that required us to send a Boolean followed by an int, our onSocketOpen function in the preceding
            example could look like this:

            function onSocketOpen( event )
            {
            // First send the boolean
            socket.writeBoolean( true );
            // Now send an int
            socket.writeInt( 10 );

            // Now we send the bytes to the service and
            // clear the buffer.
            socket.flush();
            }


            This example provides a baseline of functionality that can be
            expanded upon to speak to many different protocols. The main current
            limitation is that there is no SSL socket implementation in AIR, so
            for secure communication you are limited to HTTPS. Here is the
            complete example:

            <html>
            <head>

            <title>Communicating on a Socket</title>
            <script type="text/javascript" src="AIRAliases.js">
            </script>

            <script>
            var socket = null;

            function init()
            {
            socket = new air.Socket();

            // Create our listeners which tell us when the Socket
            // is open and when we receive data from our service.
            socket.addEventListener( air.Event.CONNECT,
            onSocketOpen );
            socket.addEventListener( air.ProgressEvent.SOCKET_DATA,
            onSocketData );

            // Connect to our service, which is located at
            // host foo.com using port 5555.
            socket.connect( 'foo.com', 5555 );
            }

            function onSocketOpen( event )
            {
            // This queues up the binary representation of the
            // string 'Bob' in UTF-8 format to be sent to the
            // endpoint.
            socket.writeUTFBytes( "Bob" );

            // Send the actual bytes to the server and clear
            // the stream. We then wait for data to be sent
            // back to us.
            socket.flush();
            }

            function onSocketData( event )
            {
            var data = socket.readUTFBytes( socket.bytesAvailable );
            air.trace( data ); // Hello Bob
            }
            </script>

            </head>
            <body onload="init()">
            </body>
            </html>




            4.11.2. Upload a File in the Background

            4.11.2.1. Problem

            The application user has created numerous files offline, and you now want to send those
            to the server without blocking the user from doing any additional
            work.

            4.11.2.2. Solution

            The File class in Adobe AIR
            provides an upload() method
            that is designed specifically for this purpose, without
            having to create and manage HTML forms.

            4.11.2.3. Discussion

            The File.upload() method can
            upload files via HTTP/S to a server for additional processing. The
            upload takes place just like a traditional multipart file upload from
            an HTML form, but without the need to manipulate forms on the client.
            The upload process also takes place asynchronously in the background,
            allowing the application to continue processing without
            interruption.

            NOTE

            The implementation of the receiving server is beyond the scope
            of this example. Numerous technologies, and tutorials for these
            technologies, elegantly handle file upload. You're encouraged to
            investigate your options.

            The primary events that are useful are ProgressEvent.PROGRESS and Event.COMPLETE.
            Respectively, these events notify the application of upload progress
            and of the completion of an individual upload:

            var file =
            air.File.documentsDirectory.resolvePath( 'myImage.jpg' );

            file.addEventListener( air.ProgressEvent.PROGRESS,
            doProgress );
            file.addEventListener( air.Event.COMPLETE,
            doComplete );


            ProgressEvent contains
            various properties that can help in reflecting upload progress in the
            user interface. The most notable of these properties are ProgressEvent.bytesLoaded and ProgressEvent.bytesTotal, which show how
            much of the file has been uploaded and the total size of the file.
            Event.COMPLETE is broadcast once
            the upload is complete.

            To start the upload, you first need a valid File object that points to a resource on
            disk.

            Once a valid file reference is established, developers will want
            to call the File.upload() method.
            The File.upload() method can take
            three arguments, the first of which is a URLRequest object that contains information
            about where the file should be sent. The URLRequest object can also contain
            additional data to be passed to the receiving server. This additional
            data manifests itself as HTML form fields might during a traditional
            multipart file upload:

            var request = new air.URLRequest( 
            'http://www.mydomain.com/upload' );
            file.upload( request, 'image', false );


            The second argument provided to the File.upload() method call is the name of the form field that contains the
            file data.

            The third argument is a Boolean value that tells the upload process
            whether it should try a test before sending the actual file. The test
            upload will POST approximately 10
            KB of data to the endpoint to see if the endpoint responds. If the
            service monitoring capabilities of Adobe AIR are not being used, this
            is a good way to check for potential failure of the process.

            NOTE

            More than one great web application has been caught by this
            subtlety. If the server is expecting the file data outright, a test
            upload will almost assuredly cause an error. If you intend to use
            the test facility, be sure that your server code is prepared to
            handle the scenario.

            function doProgress( event )
            {
            var pct = Math.ceil( ( event.bytesLoaded / event.bytesTotal ) * 100 );
            document.getElementById( 'progress' ).innerText =
            pct + "%";
            }


            The Event.COMPLETE event is
            relatively straightforward in that it signals the completion of the
            upload process. This is a good place to perform any filesystem
            maintenance that the application might otherwise need to accomplish.
            An example would be removing the just-uploaded file from the local
            disk to free up space. Another task that might be accomplished in the
            Event.COMPLETE handler is to start
            the upload of subsequent files:

            <html>
            <head>

            <title>Uploading a File in the Background</title>

            <style type="text/css">
            body {
            font-family: Verdana, Helvetica, Arial, sans-serif;
            font-size: 11px;
            color: #0B333C;
            }
            </style>

            <script type="text/javascript" src="AIRAliases.js"></script>

            <script type="text/javascript">
            var UPLOAD_URL = 'http://www.ketnerlake.com/work/watcher/upload.cfm';

            var file = null;

            function doComplete( e )
            {
            document.getElementById( 'progress' ).style.visibility =
            'hidden';
            document.getElementById( 'progress' ).innerText =
            'Uploading... 0%';

            document.getElementById( 'upload' ).disabled = null;
            }

            function doLoad()
            {
            file = air.File.desktopDirectory;
            file.addEventListener( air.Event.SELECT, doSelect );
            file.addEventListener( air.ProgressEvent.PROGRESS, doProgress );
            file.addEventListener( air.Event.COMPLETE, doComplete );

            document.getElementById( 'upload' ).addEventListener( 'click', doUpload );
            }

            function doProgress( e )
            {
            var loaded = e.bytesLoaded;
            var total = e.bytesTotal;
            var pct = Math.ceil( ( loaded / total ) * 100 );

            document.getElementById( 'progress' ).innerText =
            'Uploading... ' +
            pct.toString() + '%';
            }

            function doSelect( e )
            {
            var request = new air.URLRequest( UPLOAD_URL );

            request.contentType = 'multipart/form-data';
            request.method = air.URLRequestMethod.POST;

            document.getElementById( 'upload' ).disabled = 'disabled';
            document.getElementById( 'progress' ).style.visibility =
            'visible';

            file.upload( request, 'image', false );
            }

            function doUpload()
            {
            file.browseForOpen( 'Select File' );
            }
            </script>

            </head>
            <body onLoad="doLoad();">

            <input id="upload" type="button" value="Upload" />
            <div id="progress" style="visibility: hidden">Uploading...
            0%</div>

            </body>
            </html>




















            A Graphical View of This Book


            This book has four parts: 2D programming, 3D programming with Java 3D, network programming, and two appendixes on installation. The following figures give more details about each one in a visual way. Each oval is a chapter, and the arrows show the main dependencies between the chapters. Chapters on a common theme are grouped inside dotted, rounded gray squares.



            2D Programming


            Figure P-1 shows the 2D-programming chapters.



            Figure P-1. 2D-programming chapters



            Chapter 1 is a defense of Java for gaming, which Java zealots can happily skip. The animation framework used in the 2D examples is explained in Chapter 2, followed by two chapters applying it to a simple Worms example, first as a windowed application, then as an applet, then using full screen mode, and almost full screen mode. Chapters 3 and 4 contain timing code for comparing the frame rate speeds of these approaches.


            Chapters 5 and 6 are about imaging, mostly concentrating on Java 2D. Chapter 6 has three main topics: classes for loading images, visual effects, and animation.


            Chapters 7 through 10 are about Java Sound: Chapter 8 develops classes for loading and playing WAV and MIDI audio, and Chapters 9 and 10 are on sound effects and music synthesis.


            A reader who isn't much interested in visual and audio special effects can probably skip the latter half of Chapter 6, and all of Chapters 9 and 10. However, the classes for loading images and audio developed in the first half of Chapter 6 and in Chapter 8 are utilized later.


            Chapter 11 develops a 2D Sprite class, and applies it in a BugRunner game. Chapter 12 is about side scrollers (as immortalized by Super Mario Bros.), and Chapter 13 is about isometric tile games (Civilization is an example of that genre).




            3D Programming


            The 3D-programming chapters are shown in Figure P-2.



            Figure P-2. 3D-programming chapters



            Java 3D is introduced in Chapter 14, followed by the Checkers3D example in Chapter 15; its checkerboard floor, lighting, and background appear in several later chapters.


            There are five main subtopics covered in the 3D material: models, animation, particle systems, shooting techniques, and landscape and scenery.


            Chapter 16 develops two applications, LoaderInfo3D and Loader3D, which show how to load and manipulate externally created 3D models. The PropManager class used in Loader3D is employed in other chapters when an external model is required as part of the scene. Chapter 17 develops a LatheShape class, which allows complex shapes to be generated using surface revolution.


            A 3D sprite class is described in Chapter 18, leading to a Tour3D application that allows the user to slide and rotate a robot around a scene. Chapters 19 and 20 examine two approaches for animating the parts of a figure: Chapter 19 uses keyframe sequences, and Chapter 20 develops an articulated figure whose limbs can be moved and rotated.


            Particle systems are a widely used technique in 3D games (e.g., for waterfalls, gushing blood, and explosions to name a few). Chapter 21 explains three different particle systems in Java 3D. Flocking (Chapter 22) gives the individual elements (the particles) more complex behavioral rules and is often used to animate large groups such as crowds, soldiers, and flocks of birds.


            Lots of games are about shooting things. Chapter 23 shows how to fire a laser beam from a gun situated on a checkerboard floor. Chapter 24 places the gun in your hand (i.e., an FPS).


            The 3D chapters end with landscape and scenery creation. Chapter 25 describes how to generate a 3D maze from a text file specification. Chapter 26 generates landscapes using fractals, and Chapter 27 uses a popular terrain generation package, Terragen, to create a landscape, which is then imported into the Java 3D application. Chapter 27 discusses two techniques for filling the landscape with scenery (e.g., bushes, trees, and castles).


            Chapter 28 concentrates on how to make trees grow realistically over a period of time.


            The dotted arrow from Chapters 24 to 28 indicates a less pronounced dependency; I only reuse the code for moving the user's viewpoint.




            Network Programming


            Figure P-3 shows the network-programming chapters.



            Figure P-3. Network programming chapters



            Chapter 29 supplies information on networking fundamentals (e.g., the client/server and peer-to-peer models), and explains basic network programming with sockets, URLs, and servlets. Chapter 30 looks at three chat variants: one using a client/server model, one employing multicasting, and one chatting with servlets.


            Chapter 31 describes a networked version of the FourByFour application, a turn-based game demo in the Java 3D distribution. It requires a knowledge of Java 3D. Chapter 32 revisits the Tour3D application of Chapter 18 (the robot moving about a checkerboard) and adds networking to allow multiple users to share the world. I discuss some of the advanced issues concerning networked virtual environments (NVEs), of which NetTour3D is an example.




            The Appendixes


            The appendixes are shown in Figure P-4.



            Figure P-4. The appendixes



            Appendix A describes install4j, a cross-platform tool for creating native installers for Java applications. Appendix B is about Java Web Start (JWS), a web-enabled installer for Java applications.


            Both appendixes use the same two examples. BugRunner (from Chapter 11, which discusses 2D sprites) uses the standard parts of J2SE and the J3DTimer class from Java 3D. Checkers3D, from Chapter 15, is my first Java 3D example.
































              Setting Up Streams Replication


              Streams replication requires a significant amount of planning and configuration. You need to determine what you will be replicating, and to where. With Oracle Database 10g, there is also the determination of using local or downstream capture. After the planning, it's time to perform the configuration itself at both the source database and the destination database.




              Planning for Streams Replication


              There are a series of decisions that must be made prior to the replication configuration. These steps determine the nature of the replication setup that you will implement.



              Determining Your Replication Set


              You first need to determine which objects you will be replicating from your source to your destination database. Obviously, if you are creating a full replica database, every DML statement in the redo stream of the source will be converted to an LCR for application at the destination (except for DML against system and sysaux objects, of course).


              However, if you want to take advantage of Stream's flexibility, it might make more sense to carefully choose those items that are of absolute importance in case of a disaster, and exclude those database objects that can be sacrificed. In situations where you have an extremely large production database, you may be forced to cut development environments, simply to save room and bandwidth at the destination database.


              Because of the way in which Streams records LCRs in the queue, and then interprets them for application, you will get better performance if each table in your replication set has a primary key. The primary key is the default means by which the apply process can resolve the LCR into a change at the destination table. In the absence of a primary key, Streams will use a unique key that has a NOT NULL value in one of its columns. Short of that, it will need to record the value of each field in the row change. As you might imagine, this becomes more and more expensive. So, think about your data structure, and whether there are good keys for Streams replication.
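

              As a quick sanity check, a dictionary query along the following lines (a minimal sketch, assuming the ws_app schema used throughout this book) lists the tables in your replication set that have no primary key and will therefore need a substitute key or heavier supplemental logging:


              SELECT t.table_name
                FROM dba_tables t
               WHERE t.owner = 'WS_APP'
                 AND NOT EXISTS (SELECT 1
                                   FROM dba_constraints c
                                  WHERE c.owner = t.owner
                                    AND c.table_name = t.table_name
                                    AND c.constraint_type = 'P');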





              Determining Your Replication Sites


              Once you have determined what you will replicate, you have to determine where you will be replicating to. The site of the replica database will reflect your decisions about how to balance the distance of pushing the LCRs over the Internet with the need to create a physical distance between the primary and disaster recovery database. In addition to location, you have to determine what type of system will house the replica; if you have a RAC cluster that you are protecting, obviously another RAC cluster would be ideal. But you do not have to match the replica to the primary database with exact physical structure or components. It might make sense from a cost perspective to house the replica in a smaller system, where you can limp along at a reasonable level until you can get the primary systems back up and operational.


              Determining the replica database system is a sort of 'chicken and egg' situation, when combined with the decision to be made about what to replicate. Obviously, if you have a like system for the replica and source, a full database replica would be more feasible than, say, if your source is a multinode RAC with a monstrous SAN, and the replica is a single-node system with mostly local storage.





              Local or Downstream Capture?


              You need to determine if you will be performing the capture of LCRs from the source database redo logs local at the source, or downstream, at the destination database. Downstream configuration provides a better disaster recovery solution, as it would require that you push the archivelogs from the source to the destination prior to performing a capture. This means you also have an extra set of the archivelogs just in case you need them for the source. When you use downstream capture, the capture process only looks at archivelogs. When you use a local capture process, the local capture process has access to the online redo logs, and will use them and archivelogs whenever necessary.


              Downstream capture also means that you limit almost the entire configuration and administration task set to the replica database. At the source, you merely configure the push of the archivelogs to the destination. That's it. The rest of the work is done at the destination site (capture and apply). This also means that a propagation job will not be required, as you can establish the same queue table for the capture and apply processes.
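

              For example, the push of archived redo from the source to the downstream capture site is nothing more than a log transport destination. A minimal sketch run at the source (the destination number and service name here are assumptions for illustration) might look like this:


              ---At the source database: ship archivelogs to the downstream site.
              ALTER SYSTEM SET log_archive_dest_2 =
                 'SERVICE=str10 ARCH OPTIONAL NOREGISTER' SCOPE=both;
              ALTER SYSTEM SET log_archive_dest_state_2 = enable SCOPE=both;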





              Determining the Role of the Destination Database



              You need to establish how the destination (replica) database will be used. It is feasible to imagine that the replica database will not be configured for any users, but will merely sit idle until a disaster occurs. Of course, if you've been provided the opportunity to have the resources sit idle, then perhaps a logical (or better yet, a physical) standby database configuration would be more appropriate for you. Streams gets its strength from the fact that you can use the replica database even as it performs its disaster recovery role.


              If you do have the replica database open for business, you need to know how, exactly, it will be open to users. Will you allow DML against the replica objects from the production? Or will it only be used for reporting purposes (queries, but not DML)? This is a critical distinction. If you want to set up the replica to serve as a load balancer for the source, you must reconfigure in your head the entire architecture of the Streams environment. Now, you must see that you have two sources and two destinations. You will need to configure Streams to go back and forth to both locations. You will also need to configure some form of conflict resolution, in case users at both databases simultaneously update the same row. (We discuss conflict resolution later in this chapter in the 'Conflict Resolution' section.)


              If the replica objects from the source will only be used for reporting, you do not have to make these kinds of considerations, and your life will be much easier. However, keep in mind that a logical standby database is a much easier way to configure a reporting copy of the production database. Both logical standby and Streams replicas can be open and used for reporting even as changes are applied. The difference, of course, is that you can better control the set of replicated objects with Streams. But a logical standby will always be a simpler solution.








              Configuring Streams Replication


              Once you have planned how your replication environment will work, it is time to get to the business of configuration. In this section, we will discuss the ins and outs of local capture and remote propagation of LCRs, which is the most common form that Streams replication takes. Later in this chapter, we concentrate more on downstream capture for Streams replication.


              We also assume that you want to configure Streams such that you can make changes to replicated objects at both databases-in other words, both databases will be a source and a destination database for the same set of tables. Quite frankly, this is the most compelling reason to use Streams as an availability solution. But doing so adds considerable complexity. Such is life.



              init.ora Parameters


              The first order of business is to configure the initialization parameters for both the source and destination database. There are seven primary values to be concerned with, as they directly affect Streams functionality:





              • COMPATIBLE  Must be set to 10.1.0 to take advantage of all the newest Streams functionality.





              • GLOBAL_NAMES  Must be set to True. This is required to identify all databases in the Streams configuration uniquely.





              • JOB_QUEUE_PROCESSES  You will need to set this to at least 2. Better to have 4 or 6.





              • OPEN_LINKS  The default is 4, which is fine-just don't set this any lower.





              • SHARED_POOL_SIZE  Streams uses the shared pool for staging the captured events as well as for communication if you capture in parallel. If no STREAMS_POOL_SIZE is set, the shared pool is used. Streams needs at least 10MB of memory, but can only use 10 percent of the shared pool. So, if you choose not to use a Streams pool, you will need to set the shared pool to at least 100MB.





              • STREAMS_POOL_SIZE  You can specify a pool that is used exclusively by Streams for captured events, parallel capture, and apply communication. By setting this parameter, you will keep Streams from muddying the already murky shared pool, and stop it from beating up on other shared pool occupants. Of course, by setting this, you also dedicate resources to Streams that cannot be used by other processes if Streams goes idle for any reason. You should set this parameter based on the number of capture processes and apply processes-that is, the level of parallelism: 10MB for each capture process, 1MB for each apply process. We always suggest generosity over scrooging-start at 15MB and go up from there.





              • UNDO_RETENTION  The capture process can be a source of significant ORA-1555 (snapshot too old) errors, so make sure you set the undo retention to a high enough value. Oracle suggests starting at 3600 (1 hour). You will have to monitor your undo stats to make sure you have enough space in the undo tablespace to keep the hour of undo around (see Chapter 9 for more on undo retention).
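


              Taken together, the parameter changes above amount to a handful of ALTER SYSTEM commands. The following is a sketch for a modest configuration (one capture and one apply process, so roughly 10MB plus 1MB of Streams pool); the HA Workshop later in this chapter shows these commands again in context:


              ALTER SYSTEM SET global_names = true SCOPE=both;
              ALTER SYSTEM SET job_queue_processes = 4 SCOPE=both;
              ALTER SYSTEM SET undo_retention = 3600 SCOPE=both;
              ---One capture (10MB) plus one apply (1MB), rounded up generously.
              ALTER SYSTEM SET streams_pool_size = 20M SCOPE=spfile;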







              Setting Up Replication Administrator Users


              After you make the necessary modifications to the init file, you need to create users that will be responsible for replication administration. Can you use an existing user to own the Streams replication admin space? Absolutely. Do we recommend it? No. Make a new, clearly defined user at both the source and the destination. This user will be granted specific privileges that will control the capture, propagation, and apply movement. This user won't own the objects being replicated, but will just perform the queuing of the LCRs to the appropriate places.
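

              The key grant for this user is DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE; a minimal sketch follows (the full user creation, with its own default tablespace, appears in the HA Workshop later in this chapter):


              BEGIN
              DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
              grantee          => 'stream_admin',
              grant_privileges => true);
              END;
              /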











              Putting the Databases in Archivelog Mode


              Of course (of course!) your production database is running in archivelog mode, but the replica database may not be in archivelog mode. It is only required if you will be allowing users to update production tables at the replica, and you will need to capture rows and move them back to production.





              Configuring the Network


              You will need to ensure that there is connectivity between the production and replica databases. You will also need to create all necessary TNS aliases in order to facilitate Oracle Net connectivity between the two databases. Then, you build your database links to connect the Streams administrator at each site to the Streams administrator at the other site.
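

              For example, the link from the Streams administrator at the primary to the replica might look like the following sketch (the TNS alias STR10 is assumed to resolve to the replica; the workshop later in this chapter creates the links in both directions):


              CREATE DATABASE LINK STR10
              CONNECT TO stream_admin IDENTIFIED BY stream_admin
              USING 'STR10';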





              Enabling Supplemental Logging


              You need supplemental logging if you do not have a primary key or a unique NOT NULL constraint on the replicated table. Supplemental logging adds the values of every column in the row to the LCR, so that the record can be appropriately applied at the destination.


              ALTER TABLE ws_app.woodscrew_inventory
              ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

              You can also add supplemental logging for the entire database (note that you may have already done so when you configured your database for LogMiner in Chapter 2):


              ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;




              Creating Streams Queues


              The next step is to create the queue table that will serve as the queue for captured and propagated LCRs. Typically, you can create a single queue that is used both for locally captured LCRs and LCRs that are propagated from other source databases. You need to create a queue at each database; it is highly recommended that you give the queue tables at each database a different name (such as ws_app01_queue and ws_app02_queue).


              BEGIN
              DBMS_STREAMS_ADM.SET_UP_QUEUE(
              queue_table => 'stream_admin.ws_app01_queue_table',
              queue_name => 'stream_admin.ws_app01_queue');
              END;
              /




              Creating a Capture Process


              At both databases, you need to create a capture process and associate it with a rule set. The DBMS_STREAMS_ADM.ADD_<object>_RULES procedures will do all of this with a single block. You can add rules at the TABLE, SUBSET, SCHEMA, or GLOBAL level with the associated ADD_TABLE_RULES, ADD_SCHEMA_RULES, and so forth. The only one that is not completely clear here is ADD_SUBSET_RULES-this is for creating a capture process for a subset of rows within a table. In our examples, we are setting up Streams for the ws_app schema in our database.


              BEGIN
              DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
              schema_name => 'ws_app',
              streams_type => 'capture',
              streams_name => 'ws_app01_capture',
              queue_name => 'ws_app01_queue',
              include_dml => true,
              include_ddl => true,
              include_tagged_lcr => false,
              source_database => NULL,
              inclusion_rule => true);
              END;
              /

              This example creates a capture process named ws_app01_capture and associates it with the ws_app01_queue we created previously. It creates a positive rule set with two rules: capture all DML for the ws_app schema, and capture all DDL for the ws_app schema. You will need to create a capture process at both the production and replica databases. When you create a capture process in this fashion, Streams automatically sets an instantiation SCN for all objects in the schema you have specified.


              The instantiation SCN is the starting point for Streams replication to begin. It specifies where in the redo stream to begin looking for change events for the replication set and then convert them to LCRs.





              Instantiating the Replica Database


              At some point, you will need to move the data from the primary database to the replica. It is not expected that you will have Streams replication set up before any data exists in any of the databases. Rather, it is assumed that one of the databases will hold a significant amount of data that will have to be moved to the replica at the beginning of the replication process. The act of moving the data to the replica, and informing Streams where in the redo history to start, is referred to as instantiation.


              First you need to move the data from the source to the destination. Transportable tablespaces, export, RMAN duplication-do whatever you have to do to get the objects moved from the source to the destination. Our examples will use original export/import (as opposed to the new Data Pump exp/imp); the benefit of using export/import is that it will capture the instantiation SCN set when you create your capture process, and move it over with the copy of the data. This is the starting point in the local queue at the destination database for the apply process to begin taking LCRs for the replication set and applying them to the destination database objects.
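

              A minimal sketch of that flow (file names and credentials are placeholders; the HA Workshop later in this chapter shows the full commands and their output):


              ---At the source (ORCL): export the schema with a consistent SCN recorded.
              exp system/password file=wsapp.dmp log=wsappexp.log object_consistent=y
              owner=ws_app

              ---At the destination (STR10): import and record the instantiation SCN.
              imp system/password file=wsapp.dmp log=wsappimp.log fromuser=ws_app
              touser=ws_app streams_instantiation=y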





              Creating a Propagation


              Once we have our capture processes configured, and we have instantiated our schema, we next establish the propagations that will be used to push LCRs from the ws_app queue at each database to the queue at the other database. We must specify the source queue and the destination queue, with the destination queue suffixed with a database link name, shown here:



              BEGIN
              DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
              schema_name => 'ws_app',
              streams_name => 'ws_app01_propagation',
              source_queue_name => 'stream_admin.ws_app01_queue',
              destination_queue_name => 'stream_admin.ws_app02_queue@STR10',
              include_dml => true,
              include_ddl => true,
              include_tagged_lcr => false,
              source_database => 'ORCL',
              inclusion_rule => true);
              END;
              /


              This code will create a propagation at our source that moves LCRs from the local ws_app01_queue to the remote ws_app02_queue at the destination. It creates a positive rule set with two rules: push all LCRs that contain DML for the ws_app schema, and push all LCRs that contain DDL for the ws_app schema. Remember that with a multisource replication environment, where we will propagate in both directions, we need to set up a propagation job at both databases, with reverse values for source and destination queues.





              Creating an Apply Process


              The procedure to create an apply process comes in the same flavors as the capture process; in fact, you'll notice that you use the same procedure to create an apply process that you do to create a capture process. You simply change the STREAMS_TYPE from capture to apply, like this:


              BEGIN
              DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
              schema_name => 'ws_app',
              streams_type => 'apply',
              streams_name => 'ws_app02_apply',
              queue_name => 'ws_app02_queue',
              include_dml => true,
              include_ddl => true,
              include_tagged_lcr => false,
              source_database => 'STR10',
              inclusion_rule => true);
              END;
              /

              This should look very familiar by now. You have created an apply process named ws_app02_apply, which looks to the ws_app02_queue for LCRs to be applied. It creates a positive rule set with two rules: DML for the ws_app schema, and DDL for the ws_app schema.





              Creating Substitute Key Columns, If Necessary


              If you are replicating a table that does not have a primary key, you will need to inform Streams which columns will act as a primary key for the sake of replication. For instance, if the DBA down at Horatio's Woodscrew Company wants to replicate the woodscrew_inventory table, he would need to set a substitute key in Streams-woodscrew_inventory has no primary key.


              BEGIN
              DBMS_APPLY_ADM.SET_KEY_COLUMNS(
              object_name => 'ws_app.woodscrew_inventory',
              column_list => 'scr_id,manufactr_id,warehouse_id,region');
              END;
              /

              For any columns that are referenced as the substitution key for replication, you will need to enable supplemental logging at the source database as well.
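

              A minimal sketch, reusing the supplemental log group syntax shown earlier (the group name here is an assumption):


              ALTER TABLE ws_app.woodscrew_inventory
              ADD SUPPLEMENTAL LOG GROUP log_group_ws_inv
              ('scr_id','manufactr_id','warehouse_id','region');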





              Configuring for Conflict Resolution


              If you have a multisource replication environment, where rows in the same objects can be updated at both sites, you will need to consider the need for conflict resolution. If necessary, you will need to configure your conflict handlers at this time. Conflict resolution, and its configuration, is detailed in the next section, 'Conflict Resolution.'






              Throwing the Switch


              After you have done all of these configuration steps, you are ready to 'throw the switch' and enable the Streams environment to begin capturing, propagating, and applying the changes in your database.


              First, enable the capture process at both databases (we show the code for only one; run it at both).


              BEGIN
              DBMS_CAPTURE_ADM.START_CAPTURE(
              capture_name => 'ws_app01_capture');
              END;
              /

              You do not need to enable the propagation you have created-propagation agents are enabled by default. So then it is time to enable the apply processes at both databases.


              BEGIN
              DBMS_APPLY_ADM.START_APPLY(
              apply_name => 'ws_app02_apply');
              END;
              /

              After you have enabled the capture and apply processes, the Streams environment has been configured and will now be capturing new changes to the ws_app schema objects, moving them to the other database and applying them.








              Conflict Resolution


              A Streams replication environment is not complete until you have determined what kind of conflict resolution you need, and created conflict handlers to, you know, handle them. A conflict occurs in a distributed database when two users at different databases are attempting to modify the same record with different values. For instance, at each database a user updates the woodscrew_order table to change the order count value for the same order by the same customer. Which value should the Streams environment accept as the correct value? The determination of which record to keep and which to reject is known as conflict resolution.


              There are four distinct types of conflicts that must be accounted for, depending on your application:




              • Update conflicts




              • Uniqueness conflicts




              • Delete conflicts




              • Foreign key conflicts





              Update Conflicts


              Our previous example is an update conflict: two users at different databases are updating the same record at roughly the same time. For a moment, each database has a different value for the ORD_CNT for a particular customer order for screws. However, as the apply process moves the LCR for that row from the other database, Streams will find that the row has been updated already, and will signal the conflict.


              Update conflicts are the most unavoidable types of conflicts in a distributed model where the same tables are being modified at each location. Oracle has built-in conflict handlers for update conflicts that you can implement using the DBMS_APPLY_ADM package. These are set up as apply handlers that get called when a conflict is detected, as in the following example for the woodscrew_orders table:


              DECLARE
              cols DBMS_UTILITY.NAME_ARRAY;
              BEGIN
              cols(1) := 'scr_id';
              cols(2) := 'ord_cnt';
              cols(3) := 'warehouse_id';
              cols(4) := 'region';
              DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
              object_name => 'ws_app.woodscrew_orders',
              method_name => 'OVERWRITE',
              resolution_column => 'scr_id',
              column_list => cols);
              END;
              /

              You have to create a conditional supplemental log group for all columns that you list as being checked as part of this conflict handler-in this example, the SCR_ID, ORD_CNT, WAREHOUSE_ID, and REGION columns.



              ALTER TABLE ws_app.woodscrew_orders ADD SUPPLEMENTAL LOG GROUP log_group_ws_ord ('scr_id','ord_cnt','warehouse_id', 'region');





              Uniqueness Conflicts


              A uniqueness conflict can often be referred to as an insert conflict, as an insert is the most common trigger: two rows are inserted into the table at different databases with the same primary key value. The primary key constraint does not trigger an error until the LCR is moved from the other database. A uniqueness conflict can occur, as well, if the primary key is updated or changed for an existing record.


              Uniqueness conflicts should be avoided rather than resolved. They can be avoided by sticking to two unbending application rules: every generated unique value has the GLOBAL_NAME of the originating database appended to it, and the application never modifies a primary key value once it has been created. By sticking to those rules, you will guarantee no uniqueness conflicts in your replication environment.
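

              A minimal sketch of the first rule, using a hypothetical widget table and sequence (these objects are not part of the ws_app schema; they are here only to illustrate the key-generation pattern):


              CREATE SEQUENCE ws_app.widget_seq;

              ---The generated key carries the originating database's GLOBAL_NAME,
              ---so the same sequence value at two sites can never collide.
              INSERT INTO ws_app.widget ( widget_key, widget_name )
              SELECT TO_CHAR( ws_app.widget_seq.NEXTVAL ) || '@' || g.global_name,
                     'left-handed widget'
                FROM global_name g;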





              Delete Conflicts


              A delete conflict is triggered when a row is deleted that was also deleted or updated at the other database. As with uniqueness conflicts, it is best to avoid delete conflicts instead of coding to resolve them. At the application level, rows should not be deleted outright; instead, mark them for deletion and remove them later in a pooled batch job. You can also restrict deletions to the primary database, such that no deletes can occur at a secondary (replica) site.





              Foreign Key Conflicts



              Foreign key conflicts occur when an LCR is being applied at a database that violates a foreign key constraint. This is primarily a problem in Streams environments with more than two source databases, where it would be possible that two different source databases are sending LCRs to a third database. If the first source sends a record that references a foreign key value that was generated at the second source, but the second source hasn't sent its foreign key generation yet, the third database will trigger a foreign key conflict.




              HA Workshop: Configuring Streams Replication







              Workshop Notes


              This workshop will configure Streams replication for the ws_app schema that we have been using for examples throughout this book; that is, we will be replicating the woodscrew, woodscrew_inventory, and woodscrew_orders tables in the ws_app schema. We will be configuring schema-level replication, so if you are following along at home, we suggest you drop the ws_app schema you have and rebuild it from scratch with just the three tables and their indices. Then, reinsert the base rows as described in Chapter 1. Because of Streams restrictions, we will not be replicating partitioned tables or IOTs. For more on Streams restrictions, see the Oracle Database 10g Streams documentation.


              The workshop will configure a bidirectional, multisource Streams environment where data can be entered at the primary and replica database. The primary will be known as ORCL; the replica is STR10. The ws_app schema already exists in ORCL, and we will have to instantiate the existing rows at STR10.


              The first phase of this workshop requires us to prepare both databases for Streams replication. The second phase sets up propagation of row changes from our primary database to our new replica (from ORCL to STR10). In the third phase, we will set up Streams to replicate back from STR10 to ORCL (making this a bidirectional multisource Streams environment).



              Step 1.  Put all databases in archivelog mode.


              SQL> archive log list

              Database log mode        No Archive Mode
              Automatic archival            Disabled
              Archive destination           USE_DB_RECOVERY_FILE_DEST
              Oldest online log sequence     303
              Current log sequence         305

              SQL> show parameter recover;
              NAME                          TYPE        VALUE
              -------------------------------------------------------------
              db_recovery_file_dest         string        /u01/product/oracle/
                                                         flash_recovery_area
              db_recovery_file_dest_size    big integer   2G

              SQL> shutdown immediate;
              SQL> startup mount;
              SQL> alter database archivelog;
              SQL> alter database open;


              Step 2.  Change initialization parameters for Streams. The Streams pool size and NLS_DATE_FORMAT require a restart of the instance.


              SQL> alter system set global_names=true scope=both;
              SQL> alter system set undo_retention=3600 scope=both;
              SQL> alter system set job_queue_processes=4 scope=both;
              SQL> alter system set streams_pool_size= 20m scope=spfile;
              SQL> alter system set NLS_DATE_FORMAT=
                  'YYYY-MM-DD HH24:MI:SS' scope=spfile;
              SQL> shutdown immediate;
              SQL> startup


              Step 3.  Create Streams administrators at the primary and replica databases, and grant required roles and privileges. Create default tablespaces so that they are not using SYSTEM.


              ---at the primary:
              SQL> create tablespace strepadm datafile
              '/u01/product/oracle/oradata/orcl/strepadm01.dbf' size 100m;

              ---at the replica:
              SQL> create tablespace strepadm datafile
              '/u02/oracle/oradata/str10/strepadm01.dbf' size 100m;

              ---at both sites:
              SQL> create user stream_admin
                 identified by stream_admin
                 default tablespace strepadm
                 temporary tablespace temp;
              SQL> grant connect, resource, dba, aq_administrator_role to stream_admin;
              SQL> BEGIN
                      DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE (
                      grantee  => 'stream_admin',
                      grant_privileges => true);
                      END;
                      /


              Step 4.  Configure the tnsnames.ora at each site so that a connection can be made to the other database.


              ---In $ORACLE_HOME/network/admin for the ORCL instance:
              STR10 =
               (DESCRIPTION =
                 (ADDRESS_LIST =
                   (ADDRESS = (PROTOCOL = TCP)(HOST = lnx01)(PORT = 1521))
                 )
                 (CONNECT_DATA =
                   (SERVER = DEDICATED)
                   (SERVICE_NAME = str10)
                 )
               )
              ---In $ORACLE_HOME/network/admin for the STR10 instance:
              ORCL =
               (DESCRIPTION =
                 (ADDRESS_LIST =
                   (ADDRESS = (PROTOCOL = TCP)(HOST = lnx01)(PORT = 1521))
                 )
                 (CONNECT_DATA =
                   (SERVER = DEDICATED)
                   (SERVICE_NAME = orcl)
                 )
               )


              Step 5.  With the tnsnames.ora squared away, create a database link for the stream_admin user at both ORCL and STR10. With the init parameter global_names set to true, the db_link name must be the same as the global_name of the database you are connecting to. Use a SELECT from the table global_name at each site to determine the global name.


              SQL> select * from global_name;
              SQL> connect stream_admin/stream_admin@ORCL
              SQL> create database link STR10
                  connect to stream_admin identified by stream_admin
                  using 'STR10';
              SQL> select sysdate from dual@STR10;
              SQL> connect stream_admin/stream_admin@STR10
              SQL> create database link ORCL
                  connect to stream_admin identified by stream_admin
                  using 'ORCL';
              SQL> select sysdate from dual@ORCL;


              Step 6.  If you have not already done so, build the ws_app schema in ORCL. (See Chapter 2 for the ws_app schema build scripts.) We are providing the DDL for the three tables here to remind you of the structures, in case you are reading and not doing right now.


              SQL> create table woodscrew (
              scr_id           number not null,
              manufactr_id     varchar2(20) not null,
              scr_type         varchar2(20),
              thread_cnt       number,
              length           number,
              head_config      varchar2(20),
              constraint pk_woodscrew primary key (scr_id, manufactr_id)
              using index tablespace ws_app_idx);
              SQL> create index woodscrew_identity on woodscrew
                 (scr_type, thread_cnt, length,head_config)
                  tablespace ws_app_idx;
              SQL> create table woodscrew_inventory (
              scr_id           number not null,
              manufactr_id     varchar2(20) not null,
              warehouse_id     number not null,
              region           varchar2(20),
              count            number,
              lot_price  number);
              SQL> create table woodscrew_orders (
              ord_id           number not null,
              ord_date         date,
              cust_id          number not null,
              scr_id           number not null,
              ord_cnt          number,
              warehouse_id     number not null,
              region           varchar2(20),
              constraint pk_wdscr_orders primary key (ord_id, ord_date)
              using index tablespace ws_app_idx);


              Step 7.  Add supplemental logging to the ws_app tables. This is required both for the conflict resolution for the tables and for the woodscrew_inventory table that does not have a primary key. We will later identify a substitution key that will operate as the primary key for replication.


              SQL> Alter table ws_app.woodscrew add supplemental log data
                 (ALL) columns;
              SQL> alter table ws_app.woodscrew_inventory add supplemental log data
                 (ALL) columns;
              SQL> alter table ws_app.woodscrew_orders add supplemental log data
                 (ALL) columns;


              Step 8.  Create Streams queues at the primary and replica database.


              ---at ORCL (primary):
              SQL> connect stream_admin/stream_admin@ORCL
              SQL> BEGIN
               DBMS_STREAMS_ADM.SET_UP_QUEUE(
               queue_table  => 'stream_admin.ws_app01_queue_table',
               queue_name   => 'stream_admin.ws_app01_queue');
               END;
               /
              ---At STR10 (replica):
              SQL> connect stream_admin/stream_admin@STR10
              SQL> BEGIN
               DBMS_STREAMS_ADM.SET_UP_QUEUE(
               queue_table  => 'stream_admin.ws_app02_queue_table',
               queue_name   => 'stream_admin.ws_app02_queue');
               END;
               /


              Step 9.  Create the capture process at the primary database (ORCL).


              SQL> BEGIN
               DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
               schema_name     =>'ws_app',
               streams_type    =>'capture',
               streams_name    =>'ws_app01_capture',
               queue_name      =>'ws_app01_queue',
               include_dml     =>true,
               include_ddl     =>true,
               include_tagged_lcr  =>false,
               source_database => NULL,
               inclusion_rule  => true);
               END;
               /


              Step 10.  Instantiate the ws_app schema at STR10. This step requires the movement of existing rows in the ws_app schema tables at ORCL to ws_app schema tables at STR10. We will be using traditional import and export to make the movement take place. The benefit of using export/import is that the export utility will take the instantiation SCN generated when you built the capture process above, and document it with each object in the dump file. Then, when you import, the instantiation SCN will be recorded with the new objects that are built. This saves us some steps, mercifully.


              Note, as well, that when we instantiate STR10, we have to prebuild the ws_app_data and ws_app_idx tablespaces, then build the ws_app user.


              ---AT ORCL:
              exp system/123db file=wsapp.dmp log=wsappexp.log object_consistent=y
              owner=ws_app

              ---AT STR10:
              ---Create ws_app tablespaces and user:
              create tablespace ws_app_data datafile
              '/u02/oracle/oradata/str10/ws_app_data01.dbf' size 100m;
              create tablespace ws_app_idx datafile
              '/u02/oracle/oradata/str10/ws_app_idx01.dbf' size 100m;
              create user ws_app identified by ws_app
              default tablespace ws_app_data
              temporary tablespace temp;
              grant connect, resource to ws_app;

              imp system/123db file=wsapp.dmp log=wsappimp.log fromuser=ws_app
              touser=ws_app streams_instantiation=y

              Import: Release 10.1.0.1.0 - Beta on Tue Jan 27 14:10:08 2004
              Copyright (c) 1982, 2003, Oracle.  All rights reserved.
              Connected to: Oracle10i Enterprise Edition Release 10.1.0.1.0 - Beta
              With the Partitioning, OLAP and Data Mining options

              Export file created by EXPORT:V10.01.00 via conventional path
              import done in US7ASCII character set and AL16UTF16 NCHAR character set
              import server uses WE8ISO8859P1 character set (possible charset conversion)
              . importing WS_APP's objects into WS_APP
              . . importing table                    "WOODSCREW"         12 rows imported
              . . importing table          "WOODSCREW_INVENTORY"          4 rows imported
              . . importing table             "WOODSCREW_ORDERS"         16 rows imported
              Import terminated successfully without warnings.


              Step 11.  Create a propagation job at the primary database (ORCL).


              SQL> BEGIN
               DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
               schema_name          =>'ws_app',
               streams_name         =>'ws_app01_propagation',
               source_queue_name    =>'stream_admin.ws_app01_queue',
               destination_queue_name=>'stream_admin.ws_app02_queue@STR10',
               include_dml          =>true,
               include_ddl          =>true,
               include_tagged_lcr   =>false,
               source_database      =>'ORCL',
               inclusion_rule       =>true);
               END;
               /


              Step 12.  Create an apply process at the replica database (STR10).


              SQL> BEGIN
              DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
              schema_name          =>'ws_app',
              streams_type         =>'apply',
              streams_name         =>'ws_app02_apply',
              queue_name           =>'ws_app02_queue',
              include_dml          =>true,
              include_ddl          =>true,
              include_tagged_lcr   =>false,
              source_database      =>'ORCL',
              inclusion_rule       =>true);
              END;
              /


              Step 13.  Create substitution key columns for the table ws_app.woodscrew_inventory at STR10. This is required for any table that does not have a primary key. The column combination must provide a unique value for Streams.


              SQL> BEGIN
              DBMS_APPLY_ADM.SET_KEY_COLUMNS(
              object_name     =>'ws_app.woodscrew_inventory',
              column_list     =>'scr_id,manufactr_id,warehouse_id,region');
              END;
              /


              Step 14.  Configure conflict resolution at the replica site (STR10). The conflict handlers will be created for each table. Because this is a replica, we assume that in the event of a conflict, the primary database is always the correct value. Thus, we will set this up so that the incoming record will always overwrite the existing value.


              DECLARE
              cols DBMS_UTILITY.NAME_ARRAY;
              BEGIN
              cols(1) := 'scr_type';
              cols(2) := 'thread_cnt';
              cols(3) := 'length';
              cols(4) := 'head_config';
              DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
              object_name     =>'ws_app.woodscrew',
              method_name     =>'OVERWRITE',
              resolution_column=>'scr_type',
              column_list     =>cols);
              END;
              /

              DECLARE
              cols DBMS_UTILITY.NAME_ARRAY;
              BEGIN
              cols(1) := 'scr_id';
              cols(2) := 'ord_cnt';
              cols(3) := 'warehouse_id';
              cols(4) := 'region';
              DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
              object_name     =>'ws_app.woodscrew_orders',
              method_name     =>'OVERWRITE',
              resolution_column=>'scr_id',
              column_list     =>cols);
              END;
              /

              DECLARE
              cols DBMS_UTILITY.NAME_ARRAY;
              BEGIN
              cols(1) := 'count';
              cols(2) := 'lot_price';
              DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
              object_name     =>'ws_app.woodscrew_inventory',
              method_name     =>'OVERWRITE',
              resolution_column=>'count',
              column_list     =>cols);
              END;
              /


              Step 15.  Enable the capture process at the primary database (ORCL).


              BEGIN
              DBMS_CAPTURE_ADM.START_CAPTURE(
              capture_name => 'ws_app01_capture');
              END;
              /


              Step 16.  Enable the apply process at the replica database (STR10).


              BEGIN
              DBMS_APPLY_ADM.START_APPLY(
              apply_name => 'ws_app02_apply');
              END;
              /


              Step 17.  Test propagation of rows from primary (ORCL) to replica (STR10).


              AT ORCL:

              insert into woodscrew values (
              1006, 'Balaji Parts, Inc.', 'Machine', 20, 1.5, 'Slot');

              AT STR10:

              connect ws_app/ws_app
              select * from woodscrew where head_config = 'Slot';


              Step 18.  While it may seem logical, you do not need to add supplemental logging at the replica. This is because the supplemental logging attribute was brought over when we exported from ORCL and imported into STR10 with STREAMS_INSTANTIATION=Y. If you try to create supplemental logging, you will get an error:


              ORA-32588: supplemental logging attribute all column exists


              Step 19.  Create a capture process at the replica database (STR10).


              BEGIN
              DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
              schema_name     =>'ws_app',
              streams_type    =>'capture',
              streams_name    =>'ws_app02_capture',
              queue_name      =>'ws_app02_queue',
              include_dml     =>true,
              include_ddl     =>true,
              include_tagged_lcr  =>false,
              source_database => NULL,
              inclusion_rule  => true);
              END;
              /


              Step 20.  Create a propagation job from the replica (STR10) to the primary (ORCL).


              BEGIN
              DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
              schema_name          =>'ws_app',
              streams_name         =>'ws_app02_propagation',
              source_queue_name    =>'stream_admin.ws_app02_queue',
              destination_queue_name=>'stream_admin.ws_app01_queue@ORCL',
              include_dml          =>true,
              include_ddl          =>true,
              include_tagged_lcr   =>false,
              source_database      =>'STR10',
              inclusion_rule       =>true);
              END;
              /


              Step 21.  Create an apply process at the primary database (ORCL).


              BEGIN
              DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
              schema_name          =>'ws_app',
              streams_type         =>'apply',
              streams_name         =>'ws_app01_apply',
              queue_name           =>'ws_app01_queue',
              include_dml          =>true,
              include_ddl          =>true,
              include_tagged_lcr   =>false,
              source_database      =>'STR10',
              inclusion_rule       =>true);
              END;
              /


              Step 22.  Create substitution key columns for woodscrew_inventory at the primary database (ORCL).


              BEGIN
              DBMS_APPLY_ADM.SET_KEY_COLUMNS(
              object_name     =>'ws_app.woodscrew_inventory',
              column_list     =>'scr_id,manufactr_id,warehouse_id,region');
              END;
              /


              Step 23.  Create conflict resolution handlers at ORCL. Because this is the primary, we set a 'DISCARD' resolution type for rows that are generated at STR10 and conflict with rows generated at ORCL. This completes our conflict resolution method, which resembles a site priority system. All rows generated at ORCL overwrite rows generated at STR10; all rows generated at STR10 that conflict with rows at ORCL will be discarded.


              DECLARE
              cols DBMS_UTILITY.NAME_ARRAY;
              BEGIN
              cols(1) := 'scr_type';
              cols(2) := 'thread_cnt';
              cols(3) := 'length';
              cols(4) := 'head_config';
              DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
              object_name     =>'ws_app.woodscrew',
              method_name     =>'DISCARD',
              resolution_column=>'scr_type',
              column_list     =>cols);
              END;
              /

              DECLARE
              cols DBMS_UTILITY.NAME_ARRAY;
              BEGIN
              cols(1) := 'scr_id';
              cols(2) := 'ord_cnt';
              cols(3) := 'warehouse_id';
              cols(4) := 'region';
              DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
              object_name     =>'ws_app.woodscrew_orders',
              method_name     =>'DISCARD',
              resolution_column=>'scr_id',
              column_list     =>cols);
              END;
              /

              DECLARE
              cols DBMS_UTILITY.NAME_ARRAY;
              BEGIN
              cols(1) := 'count';
              cols(2) := 'lot_price';
              DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
              object_name     =>'ws_app.woodscrew_inventory',
              method_name     =>'DISCARD',
              resolution_column=>'count',
              column_list     =>cols);
              END;
              /


               Step 24.  Set the instantiation SCN at ORCL for the apply process. Because we are not moving an instantiation over from STR10 with an export dump file, we will have to manually set the instantiation SCN using DBMS_APPLY_ADM. Here, we use the current SCN. You can do this while connected to either the source or the destination; for this phase we are connected to the source, STR10, so we push the instantiation SCN to ORCL over the stream_admin user's database link.


              DECLARE
               iscn NUMBER;
              BEGIN
              iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
              DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN@ORCL(
                 source_schema_name       =>'ws_app',
                 source_database_name     =>'STR10',
                 instantiation_scn        =>iscn,
                 recursive                =>true);
              END;
              /


              Step 25.  Enable capture at the replica database (STR10).


              BEGIN
              DBMS_CAPTURE_ADM.START_CAPTURE(
              capture_name => 'ws_app02_capture');
              END;
              /


              Step 26.  Enable the apply process at the primary database (ORCL).


              BEGIN
              DBMS_APPLY_ADM.START_APPLY(
              apply_name => 'ws_app01_apply');
              END;
              /


              Step 27.  Test propagation of rows from the replica database (STR10) to the primary database (ORCL).


              AT STR10:

              SQL> connect ws_app/ws_app
               SQL> insert into woodscrew values (
                1007, 'Balaji Parts, Inc.', 'Machine', 30, 1.25, 'Slot');
               SQL> commit;

              AT ORCL:

              SQL> connect ws_app/ws_app
              SQL> select * from woodscrew where head_config = 'Slot';


              Step 28.  Find a cold refreshment and congratulate yourself! You have configured a bidirectional multisource Streams replication environment.
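
               Before you wander off, a quick sanity check never hurts. The queries below are a minimal sketch using the standard Streams data dictionary views; run them as stream_admin at either site to confirm that the capture, propagation, and apply processes are enabled and that no apply errors have queued up.

               SQL> select capture_name, status from dba_capture;
               SQL> select propagation_name, destination_dblink from dba_propagation;
               SQL> select apply_name, status from dba_apply;
               SQL> select apply_name, local_transaction_id, error_message from dba_apply_error;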


              Downstream Capture of LCRs


              The previous examples have all used a local capture process during the Streams environment configuration. As we mentioned previously, you can also configure Streams for downstream capture. New to Oracle Database 10g, downstream capture is a way of pushing the Streams environment completely downhill to the destination database.


              Downstream capture requires that you set up a log transport service from the source to the destination database, as you would do for a Data Guard configuration (see Chapter 7 for more on log transport services). When the archivelogs arrive at the destination database, a capture process at the destination database uses LogMiner to review the archivelogs and extract records into LCRs that are then placed in a queue at the destination.


              This allows us to forgo the usage of a propagation job, as the LCRs are queued into a location that is directly accessible by the destination database's apply process. Downstream capture also means that we have moved the entire environment, other than log transport, away from the source database. This can be extremely advantageous in many environments where the source database may need to be cleared of the administration of the Streams processes and database objects. Downstream capture also provides an extra degree of protection against site failure, as we are automatically creating another set of archived redo logs.


               The downside is that you are pushing entire archivelogs across the network, instead of just the subset of data that a Streams propagation would push. You also sacrifice a degree of currency, as downstream capture can only mine completed archivelogs for records. A local capture process at the source can also scan the online redo log, so changes to the replication set are captured and propagated with less delay.


               Downstream capture also has a few restrictions that local capture avoids. These restrictions come from moving a physical file to the destination, which requires a degree of operating system and hardware compatibility between the source and destination sites. When you propagate LCRs with a propagation job, you are sending logical records that have been freed of OS restrictions; with downstream capture, you are copying physical files from one location to another. As a result, both sites must run the same operating system (although not necessarily the same OS version) and the same hardware architecture: no 32-bit to 64-bit transfers are allowed.




              HA Workshop: Configuring Streams for Downstream Capture


              Workshop Notes


               This workshop will outline the steps for downstream capture of changes to the ws_app schema. We will be glossing over aspects that are covered in other parts of this book (such as log transport configuration). For this workshop, we will concentrate on configuring the destination database, STR10, to capture changes that originate at the source database, ORCL.



               Step 1.  Ensure that there is network connectivity between the source and destination databases via tnsnames.ora entries at both sites.
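
               As an illustration only (the host names and ports here are hypothetical), entries along these lines must resolve at each site so that the database links and the log transport service can reach their targets.

               ORCL =
                 (DESCRIPTION =
                   (ADDRESS = (PROTOCOL = TCP)(HOST = orcl-host)(PORT = 1521))
                   (CONNECT_DATA = (SERVICE_NAME = ORCL)))

               STR10 =
                 (DESCRIPTION =
                   (ADDRESS = (PROTOCOL = TCP)(HOST = str10-host)(PORT = 1521))
                   (CONNECT_DATA = (SERVICE_NAME = STR10)))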



              Step 2.  Configure log transport at the source (ORCL) to the destination (STR10). See Chapter 7 for log transport configuration.
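
               As a rough sketch only (Chapter 7 covers log transport properly), the source typically gets an archive destination that points at the STR10 service. The directory in the TEMPLATE clause is hypothetical, and the attribute combination shown is one workable choice for archived-log downstream capture rather than the only one.

               alter system set log_archive_dest_2='SERVICE=STR10 ARCH OPTIONAL NOREGISTER TEMPLATE=/u02/arch_from_orcl/orcl_%t_%s_%r.arc' scope=both;
               alter system set log_archive_dest_state_2=enable scope=both;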



              Step 3.  At both the source and destination, set the parameter REMOTE_ARCHIVE_ENABLE to True.


              alter system set remote_archive_enable=true scope=both;


              Step 4.  Build Streams administrators at both databases. You can avoid a Streams administrator at the source, but you still have to grant certain privileges to an existing user. Instead, just build the Streams admin and call it even.


              SQL> create user stream_admin
                 identified by stream_admin
                 default tablespace strepadm
                 temporary tablespace temp;
              SQL> grant connect, resource, dba, aq_administrator_role to stream_admin;
              SQL> BEGIN
                    DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE (
                      grantee  => 'stream_admin',
                      grant_privileges => true);
                    END;
                    /


              Step 5.  Build a database link from the Streams admin at the destination to the Streams admin at the source.


              SQL> create database link ORCL
                  connect to stream_admin identified by stream_admin
                  using 'ORCL';
              SQL> select sysdate from dual@ORCL;


              Step 6.  Create a queue at the destination (STR10), if one does not exist.


              SQL> BEGIN
               DBMS_STREAMS_ADM.SET_UP_QUEUE(
               queue_table  => 'stream_admin.ws_app02_queue_table',
               queue_name   => 'stream_admin.ws_app02_queue');
               END;
               /


              Step 7.  Create the capture process at the destination (STR10).


              BEGIN
              DBMS_CAPTURE_ADM.CREATE_CAPTURE(
              queue_name => 'ws_app02_queue',
              capture_name => 'ws_app_dh01_capture',
              rule_set_name => NULL,
              start_scn => NULL,
              source_database => 'ORCL',
              use_database_link => true,
              first_scn => NULL,
              logfile_assignment => 'implicit');
              END;
              /


              Step 8.  Add the positive rule set for the capture process at STR10.


              BEGIN
              DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
              schema_name => 'ws_app',
              streams_type => 'capture',
              streams_name => 'ws_app_dh01_capture',
              queue_name => 'ws_app02_queue',
              include_dml => true,
              include_ddl => false,
              include_tagged_lcr => false,
              source_database => 'ORCL',
              inclusion_rule => true);
              END;
              /


              Step 9.  Instantiate the ws_app schema at the destination database (STR10). This will follow the instantiation rules and procedures listed in the previous HA Workshop, 'Configuring Streams Replication.'



              Step 10.  Create an apply process at the destination database (STR10). This will reference the queue that you set up in Step 6.
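
               Following the pattern from the earlier workshop, this step might look like the block below. The apply process name is illustrative; the queue name and source database match Steps 6 and 7.

               BEGIN
               DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
               schema_name         => 'ws_app',
               streams_type        => 'apply',
               streams_name        => 'ws_app_dh01_apply',
               queue_name          => 'ws_app02_queue',
               include_dml         => true,
               include_ddl         => false,
               include_tagged_lcr  => false,
               source_database     => 'ORCL',
               inclusion_rule      => true);
               END;
               /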



              Step 11.  Follow through with all further configuration required from the previous HA Workshop, such as enabling capture and apply processes.
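
               For completeness, the start-up calls mirror those from the previous workshop. The capture name comes from Step 7; the apply name assumes the illustrative process sketched in Step 10.

               BEGIN
               DBMS_CAPTURE_ADM.START_CAPTURE(
               capture_name => 'ws_app_dh01_capture');
               END;
               /

               BEGIN
               DBMS_APPLY_ADM.START_APPLY(
               apply_name => 'ws_app_dh01_apply');
               END;
               /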