Friday, November 6, 2009










3.1. Installing on Linux


As with many open source projects, the primary platform of installation for Subversion is Linux. Although Subversion is far from a second-class citizen on other operating systems, such as Microsoft Windows or Apple's Mac OS X, Linux is where it feels most at home. Of course, Linux is not a single entity. There are, in fact, a wide variety of different distributions, each with its own slightly different filesystem layouts and package management systems for installation of software. As I said in the chapter introduction, describing all of these different distributions is well beyond the scope of this book. Instead, in this section I will show you how to compile and install Subversion from source, which should work for most Linux distributions. If you would rather install binaries with your distribution's package management system, you will find that there are binary packages available for most of the major distributions on the downloads page for Subversion.



3.1.1. Subversion's Prerequisites


In order to make compiling a Subversion client easy, Subversion includes most of the dependencies it needs for the client in the Subversion source distribution. On the other hand, if you want to compile a server, there are a number of different prerequisites that you need to install first. Which prerequisites you use depends on exactly which features of Subversion you need. If you read on, I describe each of the packages that you may need to install, as well as the functionality they provide and where you might find them.



Apache Portable Runtime Libraries

The Apache Portable Runtime (APR) libraries are a set of libraries that provide a cross-platform abstraction layer for developing software that will work the same on a variety of different operating systems. Subversion is built on top of APR, and makes heavy use of the library, which helps ensure that SVN runs on the variety of platforms it supports. APR is therefore a core library for SVN, and is required when building the system.


The APR libraries are available for download from a variety of mirrors, which can be reached from the Apache project's Web site, at apr.apache.org. Downloading the APR libraries is not usually necessary though, because the APR source necessary for compiling Subversion is included in the Subversion source distribution. Most compiled binary distributions of Subversion also include the necessary APR libraries, so installing them separately will likely be unnecessary.




Apache Web Server

Subversion uses the Apache Web server as one of the two servers it supports for allowing remote access to the repository. If you want support for using the Subversion extensions to WebDAV for accessing the repository, you will need Apache. If you would rather use the custom-protocol-based svnserve, or only want to use Subversion on the local machine, you do not need to compile Subversion with support for Apache.


Most Linux distributions include Apache as a part of their core packages, and your system likely already has it installed. This may not be what you need though. Subversion requires version 2.0 or later of Apache, which is not yet in predominant use, and is often not installed. If you cannot upgrade your whole system to use Apache 2, it is possible to install Apache 2 alongside an existing version of Apache 1. I will show you how to set up such a system in Section 3.4, "Configuring to Use Apache."


If you want to download the sources for Apache to compile them yourself, they can be obtained from mirrors linked from the Apache Web site, just like APR. You can download Apache from its site at httpd.apache.org. The Apache Web site also provides compiled binary versions of Apache for most platforms if you need to install Apache but don't want to compile it yourself. Unlike APR, Apache is not included with the Subversion source and must be installed separately.




Berkeley DB

The Berkeley DB (BDB) database is an embedded database, developed as an open source project by Sleepycat Software. It is a database system designed to be integrated into other programs, and is used by Subversion for its database repository backend. Berkeley DB is only required if you are going to use the Subversion database repository. If you are instead using the new filesystem-based repository introduced in version 1.1 of Subversion, you can compile without support for Berkeley DB.


Most binary distributions of Subversion will include the necessary BDB support, so you shouldn't have to acquire BDB separately. If you are compiling Subversion from scratch, or are using an installation package that doesn't include BDB, you will need to install BDB yourself in order to have support for BDB-based repositories (most Linux distributions will already have it installed, so you might want to check before attempting to install it yourself). You can download BDB from Sleepycat's Web site, at www.sleepycat.com.




Neon

Subversion uses the Neon library in its client for communications with a WebDAV server. In most cases, you will not have to download this library separately, because it is included in the Subversion source distribution. If you are installing binary packages, however, you may need to install Neon as one of the prerequisites. In most cases, the necessary binary packages are available from the same place as the Subversion package that you are installing.





3.1.2. Downloading the Source


The Subversion source code can be downloaded from the Subversion project's Web site, at subversion.tigris.org/project_packages.html, which is where you will also find a variety of already compiled binary packages for various operating systems and distributions. Source versions are available in a variety of archive formats (tar gzip, tar bzip2, and zip).


If you're compiling on a UNIX-based system, you'll probably want to download either the bzip2 archive for the latest version of SVN (.tar.bz2 extension) or the gzip archive (.tar.gz). After you have downloaded the source archive, it's time to unpack it so that you can start compiling. The easiest way to decompress and unpack in one step is to run either the gzip or bzip2 command (depending on which archive you downloaded) with the -dc options, which tell the command to decompress the file and send the result to standard output. A pipe can then be used to send the decompressed archive directly to tar, which you'll want to run with the xvf options (extract the archive from a file, verbosely) and a hyphen (-) to tell it to take the archive from standard input. So, for example, if you had downloaded the bzip2-compressed version 1.1.0 of Subversion, the command to unpack it would be



$ bzip2 -dc subversion-1.1.0.tar.bz2 | tar xvf -


The tar command will unpack everything into a directory named (in the case of the preceding version) subversion-1.1.0.




3.1.3. Compiling and Installing


For the most part, compiling Subversion is straightforward. It auto-detects the presence of the various prerequisites that are required for compilation of the server, and decides whether it can build the server. If none of the prerequisites are installed, it will just compile the Subversion client (which doesn't have any prerequisites beyond what is included with the source). To perform a basic compilation, just cd into the Subversion directory that you unpacked in the previous section and run the following commands.



$ ./configure
$ make
$ su
Password:
# make install


In most cases, this compiles everything that you need, and installs everything in /usr/local/. Notice that the last command (make install) requires you to switch to the root user (with su), in order to have the proper permissions to perform a systemwide install of Subversion.



Configuration Options

Sometimes the default compile and install is not actually what you want. For instance, you may not want to compile the server with all of the possible features, even if the prerequisites are installed; or you may want to install to somewhere other than /usr/local/. In these instances, the Subversion configure script provides several options that you can set when it is run. I explain the ones you are most likely to run into in the following. If you would like to see the complete list, you can run ./configure --help. To configure the Subversion compilation with one of these options, you should run the configure script with the desired option passed as a command-line parameter.



--prefix=PREFIX


The prefix option specifies where Subversion should be installed. By default, this is in /usr/local/. If you would rather install Subversion somewhere else, pass configure the --prefix option, with PREFIX replaced by the path where you would like Subversion to be installed. For example, if you are installing SVN on a system where you don't have root privileges, you can run ./configure --prefix=$HOME/subversion to have Subversion installed in a directory named subversion in your home directory.



--with-apache=PATH
--without-apache


These options tell the build scripts whether they should compile the Subversion server with support for Apache and WebDAV. The default behavior is for Apache to be included, but if for some reason you don't want Apache support to be compiled, passing --without-apache to configure will disable it. Additionally, if Apache is installed in a nonstandard place on your system, you may have to tell configure where to find it. You can do that by passing the --with-apache option, with PATH replaced by the path to where Apache is installed.



--with-berkeley-db=PATH
--without-berkeley-db


These options tell the build scripts whether they should compile the Subversion server with support for the Berkeley DB. The default behavior is for BDB to be included, but if you plan on using the filesystem-based repository storage, --without-berkeley-db will disable BDB (of course, you can still use the filesystem repository even if BDB support is compiled). Also, if Berkeley DB is installed in a nonstandard place on your system, you may have to tell configure where to find it. You can do that by passing the --with-berkeley-db option, with PATH replaced by the path to where BDB is installed.



--disable-mod-activation


By default, Subversion modifies your Apache httpd.conf file to enable the Subversion WebDAV module, mod_dav_svn, when you run make install. If you don't want it to make this modification, you can pass --disable-mod-activation to the configure script.





















    Recipe 8.18. Implementing Class and Singleton Methods







    Problem


    You want to associate a new method with a class (as opposed to the instances of that class), or with a particular object (as opposed to other instances of the same class).




    Solution


    To define a class method, prefix the method name with the class name in the method definition. You can do this inside or outside of the class definition.


    The Regexp.is_valid? method, defined below, checks whether a string can be compiled into a regular expression. It doesn't make sense to call it on an already instantiated Regexp, but it's clearly related functionality, so it belongs in the Regexp class (assuming you don't mind adding a method to a core Ruby class).



    class Regexp
      def Regexp.is_valid?(str)
        begin
          compile(str)
          valid = true
        rescue RegexpError
          valid = false
        end
      end
    end
    Regexp.is_valid? "The horror!" # => true
    Regexp.is_valid? "The)horror!" # => false



    Here's a Fixnum.random method that generates a random number in a specified range:



    def Fixnum.random(min, max)
      raise ArgumentError, "min > max" if min > max
      return min + rand(max - min + 1)
    end
    Fixnum.random(10, 20) # => 13
    Fixnum.random(-5, 0) # => -5
    Fixnum.random(10, 10) # => 10
    Fixnum.random(20, 10)
    # ArgumentError: min > max



    To define a method on one particular other object, prefix the method name with the variable name when you define the method:



    company_name = 'Homegrown Software'
    def company_name.legalese
      return "#{self} is a registered trademark of ConglomCo International."
    end

    company_name.legalese
    # => "Homegrown Software is a registered trademark of ConglomCo International."
    'Some Other Company'.legalese
    # NoMethodError: undefined method 'legalese' for "Some Other Company":String
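    Since Ruby 1.9, the same effect can be had with Object#define_singleton_method, which takes the method body as a block and is convenient when the method has to be built at runtime. A minimal sketch, reusing the company_name example above:

```ruby
company_name = 'Homegrown Software'

# define_singleton_method attaches a method to this one object only;
# other String instances are unaffected.
company_name.define_singleton_method(:legalese) do
  "#{self} is a registered trademark of ConglomCo International."
end

company_name.legalese
# => "Homegrown Software is a registered trademark of ConglomCo International."
```

    As with def company_name.legalese, calling legalese on any other string still raises NoMethodError.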





    Discussion


    In Ruby, a singleton method is a method defined on one specific object, and not available to other instances of the same class. This is somewhat analogous to the Singleton pattern, in which all access to a certain class goes through a single instance, but the name is more confusing than helpful.



    Class methods are actually a special case of singleton methods. The object on which you define a new method is the Class object itself.
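    To make that concrete, here is a small sketch (the Point class is hypothetical, used only for illustration). Defining def Point.origin_label and opening the singleton class with the class << Point notation both put the method in the same place: the singleton class of the Point object.

```ruby
# A hypothetical class used only for illustration.
class Point; end

# Both forms below define a singleton method on the Point class object.
def Point.origin_label
  "the origin"
end

class << Point
  def axis_count
    2
  end
end

Point.origin_label            # => "the origin"
Point.axis_count              # => 2
Point.singleton_methods.sort  # => [:axis_count, :origin_label] (symbols in Ruby 1.9 and later)
```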


    Some common types of class methods are listed here, along with illustrative examples taken from Ruby's standard library:


    • Methods that instantiate objects, and methods for retrieving an object that implements the Singleton pattern. Examples: Regexp.compile, Date.parse, Dir.open, and Marshal.load (which can instantiate objects of many different types). Ruby's standard constructor, the new method, is another example.

    • Utility or helper methods that use logic associated with a class, but don't require an instance of that class to operate. Examples: Regexp.escape, Dir.entries, File.basename.

    • Accessors for class-level or Singleton data structures. Examples: Thread.current, Struct.members, Dir.pwd.

    • Methods that implicitly operate on an object that implements the Singleton pattern. Examples: Dir.chdir, GC.disable and GC.enable, and all the methods of Process.


    When you define a singleton method on an object other than a class, it's usually to redefine an existing method for a particular object, rather than to define a brand new method. This behavior is common in frameworks, such as GUIs, where each individual object has customized behavior. Singleton method definition is a cheap substitute for subclassing when you only need to customize the behavior of a single object:



    class Button
      # A stub method to be overridden by subclasses or individual Button objects
      def pushed
      end
    end

    button_a = Button.new
    def button_a.pushed
      puts "You pushed me! I'm offended!"
    end

    button_b = Button.new
    def button_b.pushed
      puts "You pushed me; that's okay."
    end

    Button.new.pushed
    # (no output: the stub version of pushed does nothing)

    button_a.pushed
    # You pushed me! I'm offended!

    button_b.pushed
    # You pushed me; that's okay.



    When you define a method on a particular object, Ruby acts behind the scenes to transform the object into an anonymous subclass of its former class. This new class is the one that actually defines the new method or overrides the methods of its superclass.
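    In Ruby 1.9 and later, Object#singleton_class exposes this hidden class directly, which makes the transformation visible. A small sketch (the greeting string is hypothetical):

```ruby
greeting = "Hello"

# Define a singleton method on this one String instance.
def greeting.shout
  upcase + "!"
end

greeting.shout  # => "HELLO!"
greeting.class  # => String (the reported class does not change)

# The hidden singleton class is where the new method actually lives,
# and its superclass is the object's original class.
greeting.singleton_class.instance_methods(false)  # => [:shout]
greeting.singleton_class.superclass               # => String
```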































    Operations Staff



    Keeping your development personnel productive and happy with a checkin system requires constant monitoring of the process. The key to the success of the SNAP system is constant monitoring, or babysitting. Nothing upsets a development team more than a system that does not work, with no one available, or equipped with the resources, to fix it.



    Don't try to run the SNAP system unattended. You need properly trained people to debug and own any problems with the system. This is usually someone from the build team, but it does not have to be. If you can make your build and testing processes highly reliable, you might find that you can go extended periods without close supervision. Either way, you need resources assigned to attending and maintaining the system.



    It is worth noting here that without knowing the details of your lab process, you will not be able to properly manage a SNAP system. In other words, if your IT department owns all the hardware or you do not have a properly equipped lab, it will be difficult to manage a SNAP system because you will not have direct access to the machines or be able to modify the hardware configuration if needed.




















       

      Oracle® PL/SQL® Interactive Workbook, Second Edition
      By
      Benjamin Rosenzweig, Elena Silvestrova
      Table of Contents

      Appendix E. 
      Oracle 9i SQL New Features



      JOINs


      The 1999 ANSI standard introduced complete JOIN syntax in the FROM clause. The prior method was to list the tables needed in the query in the FROM clause and then to define the joins between these tables in the WHERE clause. However, the other conditions of the SQL statement are also listed in the WHERE clause, and it was decided to enhance this syntax because listing joins and conditions in the same WHERE clause can be confusing.


      The 1999 ANSI join syntax includes cross joins, equijoins, full outer joins, and natural joins.



      CROSS JOINs


      The CROSS JOIN syntax indicates that you are creating a Cartesian product from two tables. The result set of a Cartesian product is usually meaningless, but it can be used to generate a lot of rows if you need to do some testing. The advantage of the new syntax is that it flags a Cartesian product by having the CROSS JOIN in the FROM clause.





      FOR EXAMPLE


      Prior to Oracle 9i, you would create a Cartesian product with the following syntax:



      SELECT *
      FROM instructor, course

      The new syntax is as follows:



      SELECT *
      FROM instructor CROSS JOIN
      course

      The result set from this contains 300 rows. This is because the COURSE table has 30 rows and the INSTRUCTOR table has 10 rows, and the CROSS JOIN produces all possible combinations, resulting in the 300 rows.



      EQUI JOINs


      The EQUI JOIN syntax indicates the columns that comprise the join between two tables. Prior to Oracle 9i, you would indicate a join condition in the WHERE clause by stating which two columns are part of the foreign key constraint.





      FOR EXAMPLE


      Prior to Oracle 9i, you would join the STUDENT table to the ZIPCODE table as follows:



      SELECT s.first_name, s.last_name, z.zip, z.city, z.state
      FROM student s, zipcode z
      WHERE s.zip = z.zip

      The new syntax is as follows:



      SELECT s.first_name, s.last_name, zip, z.city, z.state
      FROM student s JOIN
      zipcode z USING (zip)

      The reason for this syntax is that the join condition between the two tables is immediately obvious when looking at the tables listed in the FROM clause. This example is very short, but generally your SQL statements are very long, and it can be time consuming to find the join conditions in the WHERE clause.


      Notice that the ZIP column did not have an alias. In the new JOIN syntax, the column that is referenced in the JOIN does not have a qualifier. In the old syntax, if you did not use an alias for column ZIP, as in this example,



      SELECT s.first_name, s.last_name, zip, z.city, z.state
      FROM student s, zipcode z
      WHERE s.zip = z.zip

      Oracle would generate the following error:



      ORA-00918: column ambiguously defined

      In the new JOIN syntax, if you use a qualifier, as in this example,



      SELECT s.first_name, s.last_name, z.zip, z.city, z.state
      FROM student s JOIN
      zipcode z USING (zip)

      Oracle generates the following error:



      ORA-25154: column part of USING clause cannot have qualifier

      The new JOIN syntax also allows you to define the join condition using both sides of the join. This is done with the ON syntax. When using the ON syntax for a JOIN you must use the qualifier. This is also useful when the two sides of the join do not have the same name.


      The ON syntax can also be used for three-way joins (or more).





      FOR EXAMPLE



      SELECT s.section_no, c.course_no, c.description,
      i.first_name, i.last_name
      FROM course c
      JOIN section s
      ON (s.course_no = c.course_no)
      JOIN instructor i
      ON (i.instructor_id = s.instructor_id)

      The syntax for a multiple-table join becomes more complex. Notice that one table is mentioned at a time. The first JOIN lists columns from the first two tables in the ON section. Once the third table has been indicated, the second JOIN lists columns from the second and third tables in the ON clause.



      NATURAL JOINs


      The NATURAL JOIN is another part of the ANSI 1999 syntax that can be used when joining two tables based on columns that have the same name and datatype. The NATURAL JOIN can only be used when all the columns that have the same name in both tables comprise the join condition between these tables. You cannot use this syntax when the two columns have the same name but a different datatype. Another benefit of this join is that if you use the SELECT * syntax, the columns that appear in both tables will only appear once in the result set.





      FOR EXAMPLE



      SELECT *
      FROM instructor NATURAL JOIN zipcode

      The join that will be used here is not only on the ZIP column of both tables, but the CREATED_BY, CREATED_DATE, MODIFIED_BY, and MODIFIED_DATE columns as well.


      The student schema does not support the NATURAL JOIN condition since we have created audit columns that have the same name in each table but are not used in the foreign key constraints among the tables.



      OUTER JOINs


      INNER JOIN or EQUI JOIN is the result of joining two tables that contain rows where a match occurred on the join condition. It is possible to lose information through an INNER JOIN because only those rows that match on the join condition will appear in the final result set.


      The result set of an OUTER JOIN will contain the same rows as the INNER JOIN plus rows corresponding to the rows from the source tables where there was no match. The OUTER JOIN has been supported by a number of versions of the Oracle SQL language. It had not been a part of the ANSI standard until the 1999 version.


      Oracle's OUTER JOIN syntax has consisted of placing a (+) next to the columns of a table where you expect to find values that do not exist in the other table.





      FOR EXAMPLE



      SELECT i.first_name, i.last_name, z.state
      FROM instructor i, zipcode z
      WHERE i.zip (+) = z.zip
      GROUP BY i.first_name, i.last_name, z.state

      In this example, the result set will include all states that are in the ZIPCODE table. If there is no instructor for a state that exists in the ZIPCODE table, the values of FIRST_NAME and LAST_NAME will be blank (NULL). This syntax gets more confusing because it must be maintained if there are more conditions in a WHERE clause. This method can only be used on one side of the outer join at a time.


      The new OUTER JOIN syntax adopted in Oracle 9i allows an outer join on either side, or on both sides at the same time (for example, if there were some instructors who had zipcodes that were not in the ZIPCODE table, and you wanted to see all the instructors and all the states in both of these tables). That task can be accomplished only with the new OUTER JOIN syntax, which uses the JOIN syntax discussed earlier with the addition of the keywords LEFT, RIGHT, or FULL before OUTER JOIN. The same OUTER JOIN can now be rewritten as



      SELECT i.first_name, z.state
      FROM instructor i RIGHT OUTER JOIN
      zipcode z
      ON i.zip = z.zip
      GROUP BY i.first_name, z.state

      The RIGHT indicates that all rows from the table on the right side of the JOIN are kept, even when they have no match in the table on the left side. RIGHT can be replaced by FULL if there are also some instructors who have zipcodes that are not in the ZIPCODE table.















        1.4. All Software Engineers Are Created Equal



        A software project requires much more than just writing code. There are all sorts of work products that are produced along the way: documents, schedules, plans, source code, bug reports, and builds are all created by many different team members. No single work product
        is more important than any other; if any one of them has a serious error, that error will have an impact on the end product. That means each team member responsible for any of these work products plays an important role in the project, and all of those people can make or break the final build that is delivered to the users.


        There are many project managers who, when faced with a disagreement between a programmer and a tester, will always trust the programmer. This same project manager might always trust a requirements analyst or a business analyst over a programmer, if and when they disagree. Many people have some sort of a hierarchy in their heads in which certain engineering team members are more valuable or more skilled than others. This is a dangerous idea, and it is one that has no place on an effective project team.


        One key to building better software is treating each idea objectively, no matter who suggests it or whether or not it's immediately intuitive to you. That's another reason the practices, techniques, and tools in this book cover all areas of the software project. Every one of these practices is based on an objective evaluation of all of the important activities in software development. Every discipline is equally important, and everyone on the team contributes equally to the project. A software requirements specification (SRS), for example, is no more important than the code: the code could not be created without the SRS, and the SRS would have no purpose if it weren't the basis of the software. It is in the best interest of everyone on the team to make sure that both of them have as few defects as possible, and that the authors of both work products have equal say in project decisions.


        The project manager must respect all team members, and should not second-guess their expertise. This is an important principle because it is the basis of real commitments. It's easy for a senior manager to simply issue an edict that everyone must build software without defects and do a good job; however, this rarely works well in practice. The best way to make sure that the software is built well is to make sure that everyone on the team feels respected and valued, and to gain a true commitment from each person to make the software the best it can be.




















        5.2. Integrating Ecosystems: Apple's iPod


        Apple's iPod is not precisely a web application. At its heart, it combines iPod hardware for playing music (and pictures and video), iTunes software for managing that content (shown in Figure 5-4), and an iTunes store that runs over the Web (shown in Figure 5-5). The iPod exemplifies the integration of new technology with existing systems, and its continuing growth into new areas (such as the web-capable iPhone) demonstrates how different technological ecosystems can coexist. Physical hardware can both benefit from network effects and create surrounding businesses based on those effects.



        Figure 5-4. iTunes software with a user's music library










        Figure 5-5. Apple's music store running inside of iTunes









        However, the iPod combines much more than just components made and controlled by Apple. The first four ecosystems we'll discuss are illustrative examples of platform innovation, and they demonstrate how Apple has captured value from its ecosystems and expanded and widely distributed this value to its partners. You'll see the overall increased returns from collaborative innovation.


        In this example, a lead company—here, Apple—conceives, designs, and orchestrates the external innovation and creativity of many other outside participants, users, suppliers, creators, affiliates, partners, and complementors to support an innovative product, service, or system. The final (and fifth) ecosystem—the iTunes and major record label partnerships—is used as a contrasting example of recombinant innovation and will be discussed later.



        5.2.1. Platform Innovation Ecosystem #1: Production


        To create the iPod, Apple first assembled a production ecosystem—a group of companies all over the world that contributed to circuit design, chipsets, the hard drive, screens, the plastic shell, and other technologies, as well as assembly of the final device.


        Although many people still think of the manufacturing process as the key place to capture added value, the iPod demonstrates that Apple—the creator, brand name, and orchestrator—has actually figured out how to capture the lion's share of the value: 30%. The rest of the value is spread across a myriad of different contributions within the network of component providers and assemblers, none of them larger than 15%. Researchers, sponsored by the Sloan Foundation, developed an analytical framework for quantitatively calculating who captures the value from a successful global outsourced innovation like Apple's iPod.


        Their study (http://pcic.merage.uci.edu/papers/2007/AppleiPod.pdf) traced the 451 parts that go into the iPod. Attributing cost and value-capture to different companies and their home countries is relatively complex, as the iPod and its components—like many other products—are made in several countries by dozens of companies, with each stage of production contributing a different amount to the final value.


        It turns out that $163 of the iPod's $299 retail value is captured by American companies and workers, with $75 to distribution and retail, $80 to Apple, and $8 to various domestic component makers. Japan contributes about $26 of value added, Korea less than $1, and the final assembly in China adds somewhat less than $4 a unit. (The study's purpose was to demonstrate that trade statistics can be misleading, as the U.S.-China trade deficit increases by $150—the factory cost—for every 30 GB video iPod unit, although the actual value added by assembly in China is a few dollars at most.)


        Suppliers throughout the manufacturing chain benefit from sales of the product and may thrive as a result, but the main value of the iPod goes to its creator, Apple. As Hal Varian commented in the New York Times:[32]

        [32] Hal R. Varian, "An iPod Has Global Value. Ask the (Many) Countries That Make It," New York Times, June 28, 2007, http://people.ischool.berkeley.edu/~hal/people/hal/NYTimes/2007-06-28.html.



        The real value of the iPod doesn't lie in its parts or even in putting those parts together. The bulk of the iPod's value is in the conception and design of the iPod. That is why Apple gets $80 for each of these video iPods it sells, which is by far the largest piece of value added in the entire supply chain.


        Those clever folks at Apple figured out how to combine 451 mostly generic parts into a valuable product. They may not make the iPod, but they created it. In the end, that's what really matters.





        5.2.2. Platform Innovation Ecosystem #2: Creative and Media


        The iPod is also part of an ecosystem and contributes to the indirect network effects of Apple's other key product: Macintosh computers. Even though iPods, and the iTunes software that supports them, were originally compatible with Macs only, the cachet of the iPod has continued to help sell Macs even after the iPod developed Windows support.


        Beyond the iPod and iTunes, Apple's creative and media ecosystem includes software such as iMovie, iDVD, Aperture, Final Cut, GarageBand, and QuickTime (a key technology for the video iPods). These are all Apple products, but many other companies also provide software and hardware for this space, notably Adobe and Quark.




        5.2.3. Platform Innovation Ecosystem #3: Accessories


        As any visit to the local electronics (or even office supply) store will show, the iPod has inspired a blizzard of accessories. Bose, Monster Cable, Griffin Technologies, Belkin, and a wide variety of technology and audio companies provide iPod-specific devices, from chargers to speakers. Similarly, iPod fashion has brought designers, such as Kate Spade, into the market for iPod cases, along with an army of lesser-known contributors. Also, automobile companies and car audio system makers are adding iPod connectors, simplifying the task of connecting iPods to car stereo systems.


        iPod accessories are a $1 billion business. In 2005, Apple sold 32 million iPods, or one every second. And for every $3 spent on an iPod, at least $1 is spent on an accessory, estimates NPD Group analyst Steve Baker. That means customers make three or four additional purchases per iPod.


        Accessory makers are happy to sell their products, of course, but this ecosystem also supports retailers, which get a higher profit margin on the accessories than on the iPod (50% rather than 25%). It also reinforces the value of the iPod itself because the 2,000 different add-ons made exclusively for the iPod motivate customers to personalize their iPods. This sends a strong signal that the iPod is "way cooler" than other players offered by Creative and Toshiba, for which there are fewer accessories. The number of accessories is doubling each year, and that's not including the docking stations that are available in a growing number of cars.


        Most industry participants were surprised by the strength and growth of the accessory market. Although earlier products such as Disney's Mickey Mouse or Mattel's Barbie supported their own huge market for accessories, those were made by the company that created the original product or by its licensees. Apple has taken a very different path, encouraging a free-for-all; it accepts that its own share of the accessories market is small, knowing that the iPod market is growing.




        5.2.4. Platform Innovation Ecosystem #4: User-Provided Metadata


        Even before the iTunes music store opened, users who ripped their CDs to digital files in iTunes got more than just the contents of the CD. Most CDs contain only the music data, not information like song titles. Entering titles for a large library of music can be an enormous task, and users shouldn't have to do it for every album.


        This problem had been solved earlier by Gracenote, which is best known for its CDDB (CD database) technology. Every album has slightly different track lengths, and the set of lengths on a particular album is almost always unique. So, by combining that identification information with song titles entered by users only when they encountered a previously unknown CD, Gracenote made it much easier for users to digitize their library of CDs.


        Rather than reinventing the wheel, Apple simply connected iTunes to CDDB, incorporating the same key feature that made CDDB work in the first place: user input. Whenever a user put in a CD that hadn't previously been cataloged, iTunes would ask that user if he wanted to share the catalog information he entered.


        Apple has no exclusive rights to CDDB, but it benefits from its existence nonetheless. Users get a much simpler process for moving their library to digital format, and contributing to CDDB requires only a decision, not any extra work.




        5.2.5. Recombinant Innovation


        The music industry is perhaps the most difficult ecosystem the iPod has to deal with—and the most frequently discussed. The music industry has dealt with the Web and the Internet broadly as a threat rather than an opportunity, as it saw its profits disappearing when the transaction costs of sharing music dropped precipitously. So, how did Steve Jobs get the record labels—which had been suing Napster and Kazaa—to sign up for the iTunes store to offer online, downloaded music?


        Apple presented its proposal to the big four music companies—Universal, Sony BMG, EMI, and Warner—as a manageable risk. Apple's control of the iPod gave it the tools it needed to create enough digital rights management (DRM)—limiting music to five computers—to convince the companies that this was a brighter opportunity for them than the completely open MP3 files that users created when they imported music from CDs. Jobs was actually able to leverage Apple's small market share into a promising position:[33]

        [33] Steven Levy, "Q&A: Jobs on iPod's Cultural Impact," Newsweek, Oct. 16, 2006, http://www.msnbc.msn.com/id/15262121/site/newsweek/print/1/displaymode/1098/.



        Now, remember, it was initially just on the Mac, so one of the arguments that we used was, "If we're completely wrong and you completely screw up the entire music market for Mac owners, the sandbox is small enough that you really won't damage the overall music industry very much." That was one instance where Macintosh's [small] market share helped us.



        Apple also played a key role in coordinating the pricing of songs among the big four music companies—99 cents a tune. This was a major step in weaning the music companies away from their high-priced retail distribution of prepackaged bundled digital goods—CDs—toward a digital distribution channel with a per-user/per-song revenue structure and potentially strong social-network effects. Downloads were also carefully priced to appeal to users who preferred having legal downloads and who trusted Apple and Steve Jobs to keep the price at that level despite the protests of the music labels.


        After 18 months of negotiations, Apple was able to get started on its own platform, and later carry the DRM strategy to Windows. (One pleasant side effect for Apple of the DRM deal with the music companies is that it is difficult for Apple to license that DRM technology, preserving Apple's monopoly there.)


        Apple provided the music companies with a revenue stream built on an approach that would let them compete with pirated music, although the companies aren't entirely excited about adapting to song-by-song models and the new distribution channels. However, EMI's decision in April 2007 to start selling premium versions of its music without DRM—still through the iTunes store but also through other sellers—suggests that there is more to come in this developing relationship. Ecosystems evolve, and business ecosystems often evolve beyond their creators' vision.















        9.4 Dependencies among Procedures


        This section covers the following topics:


        • Editing and compiling a procedure that invalidates other procedures.

        • LAST_DDL_TIME and STATUS from USER_OBJECTS.

        • Compiling a schema with DBMS_UTILITY.COMPILE_SCHEMA.

        • Recompiling procedures, functions, and packages individually.

        This section uses the script CHECK_PLSQL_OBJECTS from the previous section.
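        For reference, a minimal sketch of what such a script might contain is shown below. This is an approximation, not necessarily identical to the version built in the previous section:

        -- Filename CHECK_PLSQL_OBJECTS.SQL (minimal sketch)
        COLUMN object_name FORMAT A20
        SELECT object_name || '(' || SUBSTR(object_type, 1, 1) || ')' AS object_name,
               status,
               TO_CHAR(created, 'dd-mon hh24:mi') AS created,
               TO_CHAR(last_ddl_time, 'dd-mon hh24:mi') AS last_ddl_time
        FROM user_objects
        WHERE object_type IN ('PROCEDURE', 'FUNCTION', 'PACKAGE', 'PACKAGE BODY');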


        When we first compile the HELLO procedure, the CREATED time and LAST_DDL_TIME are identical.





        OBJECT_NAME STATUS CREATED LAST_DDL_TIME
        -------------- ------- ----------------- -----------------
        HELLO(P) VALID 14-jul-2003 16:18 14-jul-2003 16:18

        If we attempt to recompile the procedure and the compile fails, the procedure is still in the data dictionary but with an INVALID status. The LAST_DDL_TIME reflects the last compile time.


        Executing a procedure that is INVALID (and cannot be automatically revalidated) will fail with an Oracle error:





        PLS-00905: object SCOTT.HELLO is invalid

        If Procedure A calls Procedure B and B becomes invalid, then A automatically becomes invalid. In Figure 9-2, SAY_HELLO calls HELLO. What happens if HELLO becomes invalid?


        Figure 9-2. Simple Procedure Dependency.


        We begin with the code to HELLO.SQL and SAY_HELLO.SQL.





        -- Filename HELLO.SQL
        CREATE OR REPLACE PROCEDURE hello IS
        BEGIN
            dbms_output.put_line('Hello');
        END;
        /
        show errors

        -- Filename SAY_HELLO.SQL
        CREATE OR REPLACE PROCEDURE say_hello IS
        BEGIN
            hello;
        END;
        /
        show errors

        Compile procedures HELLO and SAY_HELLO in order. The SHOW ERRORS command reports any compilation errors. The script CHECK_PLSQL_OBJECTS shows the STATUS as VALID for each procedure in USER_OBJECTS.





        SQL> @CHECK_PLSQL_OBJECTS

        OBJECT_NAME STATUS CREATED LAST_DDL_TIM
        -------------------- ------- ------------ ------------
        HELLO(P) VALID 25-jul 12:52 25-jul 01:02
        SAY_HELLO(P) VALID 25-jul 01:01 25-jul 01:02

        Edit HELLO.SQL and change PUT_LINE to PUTLINE. The procedure will now compile with an error. Recompile HELLO with @HELLO.SQL. The status of SAY_HELLO is now also INVALID, yet we did not change that procedure. SAY_HELLO depends on a valid HELLO procedure. When the recompilation of HELLO failed, Oracle found the objects that depend on HELLO and invalidated them; every object that a procedure depends on must be valid for that procedure to be valid. CHECK_PLSQL_OBJECTS now shows both objects as invalid:





        SQL> @CHECK_PLSQL_OBJECTS

        OBJECT_NAME STATUS CREATED LAST_DDL_TIM
        -------------------- ------- ------------ ------------
        HELLO(P) INVALID 25-jul 12:52 25-jul 01:05
        SAY_HELLO(P) INVALID 25-jul 01:01 25-jul 01:02

        Correct the PL/SQL code in HELLO.SQL and recompile. After a successful recompilation, HELLO should be valid. What about SAY_HELLO? Is it still invalid?





        SQL> @hello

        Procedure created.

        SQL> @CHECK_PLSQL_OBJECTS

        OBJECT_NAME STATUS CREATED LAST_DDL_TIM
        -------------------- ------- ------------ ------------
        HELLO(P) VALID 25-jul 12:52 25-jul 01:17
        SAY_HELLO(P) INVALID 25-jul 01:01 25-jul 01:02

        Procedure SAY_HELLO is still invalid; however, when we execute the procedure, Oracle sees that it is invalid and attempts to revalidate it. The revalidation succeeds because everything SAY_HELLO depends on (i.e., the HELLO procedure) is valid. Oracle compiles SAY_HELLO, sets the status to valid, and then executes the procedure. Following execution of SAY_HELLO, both procedures are valid.





        SQL> execute say_hello
        Hello

        PL/SQL procedure successfully completed.

        SQL> @CHECK_PLSQL_OBJECTS

        OBJECT_NAME STATUS CREATED LAST_DDL_TIM
        -------------------- ------- ------------ ------------
        HELLO(P) VALID 25-jul 12:52 25-jul 01:17
        SAY_HELLO(P) VALID 25-jul 01:01 25-jul 01:17

        This run-time validation carries overhead. If HELLO is a widely used procedure and becomes invalid, there will be some performance degradation. During the normal operation of an application, Oracle may encounter many packages that have become invalid and recompile them at run time. This can have a noticeable impact on end users.


        The following discussion covers the scenario when invalid code does not recompile.


        We invalidated HELLO, recompiled it, and it became valid again. The change was a simple statement change that we corrected. A major code change to HELLO could cause recompilation failures in other procedures. Such an event would occur if the interface to HELLO changed. Changing the parameter specification, parameter types, or parameter modes can permanently invalidate other code.


        If we change a procedure and recompile, Oracle's recompilation of other procedures may fail. Why wait until run time to discover broken code? When PL/SQL changes occur, you can recompile the entire suite of PL/SQL code in the schema. The Oracle DBMS_UTILITY package provides this functionality with the COMPILE_SCHEMA procedure.


        To recompile all PL/SQL in a schema (this example uses the schema name, SCOTT):





        SQL> execute dbms_utility.compile_schema('SCOTT')

        PL/SQL procedure successfully completed.

        The response "PL/SQL procedure successfully completed" means only that the call to DBMS_UTILITY was successful; there may still be invalid objects. Run CHECK_PLSQL_OBJECTS to look for invalid stored procedures. LAST_DDL_TIME shows the recompilation time of each procedure.
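        To list only the objects that remain invalid, you can also query USER_OBJECTS directly. The following query is a simple sketch, not one of this chapter's scripts:

        SELECT object_name, object_type
        FROM user_objects
        WHERE status = 'INVALID'
        ORDER BY object_name;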


        If a procedure is invalid, SHOW ERRORS displays why it failed to compile. The general forms are:


        • SHOW ERRORS <type> <schema>.<name>

        • SHOW ERRORS PROCEDURE procedure_name;

        • SHOW ERRORS FUNCTION function_name;

        • SHOW ERRORS PACKAGE package_name; (package spec errors)

        • SHOW ERRORS PACKAGE BODY package_name; (package body errors)


        To show compile errors for SAY_HELLO:





        SHOW ERRORS PROCEDURE SAY_HELLO;

        The following scenario includes three procedures. P1 calls P2, which calls P3. The procedure code is:





        CREATE OR REPLACE procedure P3 IS
        BEGIN
            dbms_output.put_line('executing p3');
        END;
        /
        CREATE OR REPLACE procedure P2 IS
        BEGIN
            P3;
        END;
        /
        CREATE OR REPLACE procedure P1 IS
        BEGIN
            P2;
        END;
        /

        Compile these procedures in the following order: P3, then P2, then P1. Execution of P1 produces the following:





        SQL> execute p1
        executing p3

        Change P3 by adding a parameter to the interface and compile the procedure.





        CREATE OR REPLACE procedure P3(N INTEGER) IS
        BEGIN
            dbms_output.put_line('executing p3');
        END;
        /

        Not knowing all the dependencies on P3, we can compile the schema.





        SQL> execute dbms_utility.compile_schema('SCOTT');

        PL/SQL procedure successfully completed.

        Check for invalid objects.





        SQL> @check_plsql_objects

        OBJECT_NAME STATUS CREATED last_ddl
        -------------------- ------- ------------ ------------
        P1(P) INVALID 25-jul 15:26 25-jul 15:35
        P2(P) INVALID 25-jul 15:26 25-jul 15:35
        P3(P) VALID 25-jul 15:26 25-jul 15:35

        We have invalid objects P1 and P2. Use SHOW ERRORS to see why these procedures failed to compile.





        SQL> show errors procedure p1
        Errors for PROCEDURE P1:

        LINE/COL ERROR
        -------- ----------------------------------------------------
        3/5 PLS-00905: object SCOTT.P2 is invalid
        3/5 PL/SQL: Statement ignored

        SQL> show errors procedure p2
        Errors for PROCEDURE P2:

        LINE/COL ERROR
        -------- -----------------------------------------------------
        3/5 PLS-00306: wrong number or types of arguments
        in call to 'P3'
        3/5 PL/SQL: Statement ignored

        Many invalid objects can pose a challenging problem. In the preceding example (P1, P2 and P3), there are two invalid objects. We changed P3 and saw from SHOW ERRORS that P2 is passing the wrong number of arguments to P3.


        DBMS_UTILITY.COMPILE_SCHEMA compiles all the PL/SQL in a schema. You can also recompile individual components with an ALTER statement.





        ALTER PROCEDURE procedure_name COMPILE;

        ALTER FUNCTION function_name COMPILE;

        To compile the package specification and body:





        ALTER PACKAGE package_name COMPILE;

        To compile just the package specification:





        ALTER PACKAGE package_name COMPILE SPECIFICATION;

        To compile just the package body:





        ALTER PACKAGE package_name COMPILE BODY;
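        For example, in the earlier HELLO/SAY_HELLO scenario, rather than letting Oracle revalidate SAY_HELLO at run time, you could recompile it explicitly once HELLO was fixed:

        ALTER PROCEDURE say_hello COMPILE;

        If the recompilation fails, SHOW ERRORS PROCEDURE say_hello reports the reason.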

        You can always determine object dependencies by querying the data dictionary view USER_DEPENDENCIES, covered in the next section. The preceding scenario includes three procedures: P1, P2, and P3. This is not a complex architecture. When there are many program units with many dependencies, the task becomes tedious. It requires repeated queries of the USER_DEPENDENCIES view. In the next section, we look at applying a general solution to querying USER_DEPENDENCIES.


        The script used to query the USER_OBJECTS view, CHECK_PLSQL_OBJECTS, filters for stored procedures with a WHERE clause simply to keep the examples focused. A change to a stored procedure can invalidate other object types as well: a view can use a PL/SQL function; a trigger can use a procedure, function, or package; and objects in other schemas may use our PL/SQL objects. A general dependency-tracing strategy requires querying the ALL_DEPENDENCIES view for all object types.
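        As a starting point, the following sketch lists every object visible to you, in any schema, that directly references the SCOTT.HELLO procedure (the owner and name here are simply the running example):

        SELECT owner, name, type
        FROM all_dependencies
        WHERE referenced_owner = 'SCOTT'
        AND referenced_name = 'HELLO';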















          1.5. Toward Mac OS X


          After Rhapsody's DR2 release, Apple would still alter its operating system strategy, but it was finally on its way toward its goal of a new system. During the 1998 Worldwide Developers Conference, Adobe's Photoshop ran on what would become Mac OS X. However, the first shipping release of Mac OS X would take another three years. Figure 1-11 shows an approximation of the progression from Rhapsody toward Mac OS X.




          Figure 1-11. An approximation of the Mac OS X timeline






          1.5.1. Mac OS X Server 1.x


          As people were expecting a DR3 release of Rhapsody, Apple announced Mac OS X Server 1.0 in March 1999. Essentially an improved version of Rhapsody, it was bundled with WebObjects, the QuickTime streaming server, a collection of developer tools, the Apache web server, and facilities for booting or administering over the network.


          Apple also announced an initiative called Darwin: a fork of Rhapsody's developer release. Darwin would become the open source core of Apple's systems.


          Over the next three years, as updates would be released for the server product, development of the desktop version would continue, with the server sharing many of the desktop improvements.




          1.5.2. Mac OS X Developer Previews


          There were four Developer Preview releases of Mac OS X, named DP1 through DP4. Substantial improvements were made during these DP releases.



          1.5.2.1. DP1

          An implementation of the Carbon API was added. Carbon represented an overhaul of the "classic" Mac OS APIs, which were pruned, extended, or modified to run in the more modern Mac OS X environment. Carbon was also meant to help Mac OS developers transition to Mac OS X. A Classic application would require an installation of Mac OS 9 to run under Mac OS X, whereas Carbon applications could be compiled to run as native applications under both Mac OS 9 and Mac OS X.




          1.5.2.2. DP2

          The Yellow Box evolved into Cocoa, a name originally alluding to the fact that, besides Objective-C, the API would be available in Java. A version of the Java Development Kit (JDK) was included, along with a just-in-time (JIT) compiler. The Blue Box environment was provided via Classic.app (a newer version of MacOS.app) that ran as a process called truBlueEnvironment. The Unix environment was based on 4.4BSD. DP2 thus contained a multitude of APIs: BSD, Carbon, Classic, Cocoa, and Java. There was widespread dissatisfaction with the existing user interface. The Aqua user interface had not been introduced yet, although there were rumors that Apple was keeping the "real" user interface a secret.[23]

          [23] Apple had referred to the Mac OS X user interface as "Advanced Mac OS Look and Feel."



          Carbon is sometimes perceived as "the old" API. Although Carbon indeed contains modernized versions of many old APIs, it also provides functionality that may not be available through other APIs. Parts of Carbon are complementary to "new" APIs such as Cocoa. Nevertheless, Apple has been adding more functionality to Cocoa so that dependencies on Carbon can be eventually eliminated. For example, before Mac OS X 10.4, much of the QuickTime functionality was available only through Carbon. In Mac OS X 10.4, Apple introduced the QTKit Cocoa framework, which reduces or eliminates Carbon dependencies for QuickTime.






          1.5.2.3. DP3

          The Aqua user interface was first demonstrated during the San Francisco Macworld Expo in January 2000. Mac OS X DP3 included Aqua along with its distinctive elements: "water-like" effects, pinstripes, pulsating default buttons, "traffic-light" window buttons, drop shadows, transparency, animations, sheets, and so on. The DP3 Finder was Aqua-based as well. The Dock was introduced with support for photorealistic icons that were dynamically scalable up to 128x128 pixels.





          1.5.2.4. DP4

          The Finder was renamed the Desktop in DP4. The System Preferences application (Preferences.app, the precursor to System Preferences.app) made its first appearance in Mac OS X, allowing the user to view and set a multitude of system preferences such as Classic, ColorSync, Date & Time, Energy Saver, Internet, Keyboard, Login Items, Monitors, Mouse, Network, Password, and others. Prior to DP4, the Finder and the Dock were implemented within the same application. The Dock was an independent application (Dock.app) in DP4. It was divided into two sections: the left side for applications and the right side for the trash can, files, folders, and minimized windows. Other notable components of DP4 included an integrated development environment and OpenGL.



          The Dock's visual indication of a running application underwent several changes. In DP3, an application's Dock icon had a bottom edge a few pixels high that was color-coded to indicate whether the application was running. This was replaced by an ellipsis in DP4 and was followed by a triangle in subsequent Mac OS X versions. DP4 also introduced the smoke cloud animation that ensues after an item is dragged off the Dock.







          1.5.3. Mac OS X Public Beta


          Apple released a beta version of Mac OS X (Figure 1-12) at the Apple Expo in Paris on September 13, 2000. Essentially a publicly available preview release for evaluation and development purposes, the Mac OS X Public Beta was sold for $29.95 at the Apple Store. It was available in English, French, and German. The software's packaging contained a message from Apple to the beta testers: "You are holding the future of the Macintosh in your hands." Apple also created a Mac OS X tab on its web site that contained information on Mac OS X, including updates on third-party applications, tips and tricks, and technical support.




          Figure 1-12. Mac OS X Public Beta





          Although the beta release was missing important features and ostensibly lacked stability and performance, it demonstrated several important Apple technologies at work, particularly to those who had not been following the DP releases. The beta's key features were the following:


          • The Darwin core with its xnu kernel that offered "true" memory protection, preemptive multitasking, and symmetric multiprocessing

          • The PDF-based Quartz 2D drawing engine

          • OpenGL support

          • The Aqua interface and the Dock

          • Apple's new mail client, with support for IMAP and POP

          • A new version of the QuickTime player

          • The Music Player application for playing MP3s and audio CDs

          • A new version of the Sherlock Internet-searching tool

          • A beta version of Microsoft Internet Explorer


          With Darwin, Apple would continually leverage a substantial amount of existing open source software by using it for, and often integrating it with, Mac OS X. Apple and Internet Systems Consortium, Inc. (ISC), jointly founded the OpenDarwin project in April 2002 for fostering cooperative open source development of Darwin. GNU-Darwin is an open source Darwin-based operating system.





          The New Kernel


          Darwin's kernel is called xnu. It is unofficially an acronym for "X is Not Unix." It is also a coincidental tribute to the fact that it is indeed the NuKernel for Mac OS X. xnu is largely based on Mach and FreeBSD, but it includes code and concepts from various sources such as the formerly Apple-supported MkLinux project, the work done on Mach at the University of Utah, NetBSD, and OpenBSD.






          1.5.4. Mac OS X 10.x


          The first version of Mac OS X was released on March 24, 2001, as Mac OS X 10.0 Cheetah. Soon afterwards, the versioning scheme of the server product was revised to synchronize it with that of the desktop system. Since then, the trend has been that a new version of the desktop is released first, soon followed by the equivalent server revision.


          Table 1-1 lists several major Mac OS X releases. Note that the codenames are all taken from felid taxonomy.


          Table 1-1. Mac OS X Versions

          Version   Codename   Release Date
          -------   --------   ------------------
          10.0      Cheetah    March 24, 2001
          10.1      Puma       September 29, 2001
          10.2      Jaguar     August 23, 2002
          10.3      Panther    October 24, 2003
          10.4      Tiger      April 29, 2005
          10.5      Leopard    2006/2007?



          Let us look at some notable aspects of each major Mac OS X release.



          1.5.4.1. Mac OS X 10.0

          Apple dubbed Cheetah as "the world's most advanced operating system," which would become a frequently used tagline for Mac OS X.[24] Finally, Apple had shipped an operating system with features that it had long sought. However, it was clear that Apple had a long way to go in terms of performance and stability. Key features of 10.0 included the following:

          [24] Mac OS X page on Apple's web site, www.apple.com/macosx/ (accessed April 26, 2006).


          • The Aqua user interface, with the Dock and the Finder as the primary user-facing tools

          • The PDF-based Quartz 2D graphics engine

          • OpenGL for 3D graphics

          • QuickTime for streaming audio and video (shipping for the first time as an integrated feature)

          • Java 2 Standard Edition (J2SE)

          • Integrated Kerberos

          • Mac OS X versions of the three most popular Apple applications available as free downloads: iMovie 2, iTunes, and a preview version of AppleWorks

          • Free IMAP service for Mac.com email accounts



          When Mac OS X 10.0 was released, there were approximately 350 applications available for it.






          1.5.4.2. Mac OS X 10.1

          Puma was a free update released six months after 10.0's release. It offered significant performance enhancements, as indicated by Apple's following claims:


          • Up to 3x improvement in application launch speed

          • Up to 5x improvement in menu performance

          • Up to 3x improvement in window resizing

          • Up to 2x improvement in file copying


          There were substantial performance boosts in other areas such as system startup, user login, Classic startup, OpenGL, and Java. Other key features of this release included the following:


          • The ability to move the Dock from its usual place at the bottom to the left or right

          • System status icons on the menu bar to provide easier access to commonly used functions such as volume control, display settings, date and time, Internet connection settings, wireless network monitoring, and battery charging

          • iTunes and iMovie as part of system installation, and the introduction of iDVD

          • A new DVD player with a simplified interface

          • Improved iDisk functionality based on WebDAV

          • A built-in image-capturing application to automatically download and enhance pictures from digital cameras

          • The ability to burn over 4GB of data to a DVD, with support for burning recordable DVD discs directly in the Finder

          • An integrated SMB/CIFS client


          The Carbon API implementation in 10.1 was complete enough to allow important third-party applications to be released. Carbonized versions of Microsoft Office, Adobe Photoshop, and Macromedia Freehand were released soon after 10.1 went public.




          1.5.4.3. Mac OS X 10.2

          Jaguar was released at 10:20 P.M. to emphasize its version number. Its important feature additions included the following:


          • Quartz Extreme, an integrated hardware acceleration layer for rendering on-screen objects by compositing them using primarily the graphics processing unit (GPU) on supported graphics cards

          • iChat, an instant-messaging client compatible with AOL Instant Messaging (AIM)

          • An enhanced mail application (Mail.app) with built-in adaptive spam filtering

          • A new Address Book application with support for vCards, Bluetooth, and iSync synchronization with .Mac servers, PDAs, certain cell phones, and other Mac OS X computers (the Address Book's information was accessible to other applications)

          • QuickTime 6, with support for MPEG-4

          • An improved Finder with quick file searching from the toolbar and support for spring-loaded folders

          • Inkwell, a handwriting recognition technology integrated with the text system, allowing text input using a graphics tablet

          • Rendezvous,[25] which was Apple's implementation of ZeroConf, a zero-configuration networking technology allowing enabled devices to find one another on the network

            [25] Rendezvous was later renamed Bonjour.

          • Better compatibility with Windows networks

          • Version 3 of the Sherlock Internet services tool


          Hereafter, Apple introduced new applications and incorporated technologies in Mac OS X at a bewildering pace. Other notable additions to Mac OS X after the release of Jaguar included the iPhoto digital photo management application, the Safari web browser, and an optimized implementation of the X Window System.




          1.5.4.4. Mac OS X 10.3

          Panther added several productivity and security features to Mac OS X, besides providing general performance and usability improvements. Notable 10.3 features included the following:


          • An enhanced Finder, with a sidebar and support for labels

          • Audio and video conferencing through the iChat AV application

          • Exposé, a user-interface feature that can "live shrink" each on-screen window such that no windows overlap, allowing the user to find a window visually, after which each window is restored to its original size and location

          • FileVault encryption of a user's home directory

          • Secure deletion of files in a user's trash can via a multipass overwriting algorithm

          • Fast user switching

          • Built-in faxing

          • Improved Windows compatibility courtesy of better support for SMB shares and Microsoft Exchange

          • Support for HFSX, a case-sensitive version of the HFS Plus file system




          1.5.4.5. Mac OS X 10.4

          Besides providing typical evolutionary improvements, Tiger introduced several new technologies such as Spotlight and Dashboard. Spotlight is a search technology consisting of an extensible set of metadata importer plug-ins and a query API for searching files based on their metadata, even immediately after new files are created. Dashboard is an environment for creating and running lightweight desktop utilities called widgets, which normally remain hidden and can be summoned by a key-press. Other important Tiger features include the following:


          • Improved 64-bit support, with the ability to compile 64-bit binaries, and 64-bit support in the libSystem shared library

          • Automator, a tool for automating common procedures by visually creating workflows

          • Core Image, a media technology employing GPU-based acceleration for image processing

          • Core Video, a media technology acting as a bridge between QuickTime and the GPU for hardware-accelerated video processing

          • Quartz 2D Extreme, a new set of Quartz layer optimizations that use the GPU for the entire drawing path (from the application to the framebuffer)

          • Quartz Composer, a tool for visually creating compositions using both graphical technologies (such as Quartz 2D, Core Image, OpenGL, and QuickTime) and nongraphical technologies (such as MIDI System Services and Rich Site Summary [RSS])

          • Support for a resolution-independent user interface

          • Improved iChat AV, with support for multiple simultaneous audio and video conferences

          • PDF Kit, a Cocoa framework for managing and displaying PDF files from within applications

          • Improved Universal Access, with support for an integrated spoken interface

          • An embeddable SQL database engine (SQLite) allowing applications to use SQL databases without running a separate RDBMS[26] process

            [26] Relational database management system.

          • Core Data, a Cocoa technology that integrates with Cocoa bindings and allows visual description of an application's data entities, whose instances can persist on a storage medium

          • Fast Logout and Autosave for improved user experience

          • Support for access control lists (ACLs)

          • New formalized and stable interfaces, particularly for kernel programming

          • Improvements to the Web Kit (including support for creating and editing content at the DOM level of an HTML document), the Safari web browser (including RSS support), QuickTime (including support for the H.264 codec and a new QuickTime Kit Cocoa framework), the audio subsystem (including support for OpenAL, the Open Audio Library), the Mac OS X installer application, Sync Services, the Search Kit, Xcode, and so on



          The first shipping x86-based Macintosh computers used Mac OS X 10.4.4 as the operating system.




          As we have seen in this chapter, Mac OS X is the result of a long evolution of many disparate technologies. The next version of Mac OS X is expected to continue this remarkable pace of development, especially with the transition from the PowerPC to the x86 platform.


          In Chapter 2, we will take a diverse tour of Mac OS X and its features, including brief overviews of the various layers. The remaining chapters discuss specific aspects and subsystems of Mac OS X in detail.














          Native Datatypes













          In order to work across a wide variety of platforms, libdnet specifies a series of native intermediate datatypes to represent different networking primitives (addressing, interfaces, and firewalling). These datatypes enable libdnet to maintain an operating system agnostic stance while still providing robust functionality. The datatypes are high-level enough that the application programmer can work with them, but they also contain enough information for libdnet to internally translate them to their operating system-specific counterpart.




          struct addr {



          struct addr is a partially opaque structure used to represent a network address.




          u_short addr_type;



          addr_type is the type of address contained in the structure.




          u_short addr_bits;



          addr_bits is the size, in bits, of the address contained in the structure.



          Other members of this structure are internal to libdnet, and the application programmer does not need to know about them.




          };
          struct arp_entry {


          In the ARP cache functions, struct arp_entry describes an ARP table entry.




          struct addr arp_pa;



          arp_pa is the ARP protocol address.




          struct addr arp_ha;



          arp_ha is the ARP hardware address.




          };
          struct route_entry {


          In the route table functions, struct route_entry describes a routing table entry.




          struct addr route_dst;



          route_dst is the destination address.




          struct addr route_gw;



          route_gw is the default gateway to get to that destination address.




          };
          struct intf_entry {



          struct intf_entry describes a network interface.




          u_int intf_len;



          intf_len is the length of the entry.




          char intf_name[60];



          intf_name is the canonical name of the interface.




          u_short intf_type;



          intf_type is a bitmask for the type of interface.




          u_short intf_flags;



          intf_flags are the flags set on the interface.




          u_int intf_mtu;



          intf_mtu is the maximum transmission unit (MTU) of the interface.




          struct addr intf_addr;



          intf_addr is the interface's network address.




          struct addr intf_dst_addr;



          intf_dst_addr is the interface's point-to-point destination address (for things like PPP).




          struct addr intf_link_addr;



          intf_link_addr is the interface's link-layer address.




          u_int intf_alias_num;



          intf_alias_num is the number of aliases for the interface.




          struct addr intf_alias_addrs __flexarr;



          intf_alias_addrs is the array of aliases for the interface, declared as a flexible array member at the end of the structure.




          };
          struct fw_rule {



          struct fw_rule describes a firewall rule.




          char fw_device[14];



          fw_device is the canonical name of the interface to which the rule applies (for example, "fxp0", "eth0", or "any").




          uint8_t fw_op:4,



          fw_op is the type of operation (FW_OP_ALLOW or FW_OP_BLOCK).




          fw_dir:4;



          fw_dir is the direction in which the rule should be applied (FW_DIR_IN or FW_DIR_OUT).




          uint8_t fw_proto;



          fw_proto is the protocol to which the rule applies (IP_PROTO_IP, IP_PROTO_TCP, IP_PROTO_ICMP, and so on).




          struct addr fw_src;



          fw_src is the source IP address to which the rule applies.




          struct addr fw_dst;



          fw_dst is the destination IP address to which the rule applies.




          uint16_t fw_sport[2];



          fw_sport is the source port range of the rule or the ICMP type and mask.




          uint16_t fw_dport[2];



          fw_dport is the destination port range of the rule or the ICMP code and mask.




          };
          arp_t



          arp_t refers to an ARP handle used in the ARP family of functions.




          route_t



          route_t refers to a route handle used in the route table family of functions.




          intf_t



          intf_t refers to an interface handle used in the interface family of functions.




          fw_t



          fw_t refers to a firewall handle used in the firewall family of functions.




          ip_t



          ip_t refers to an IP handle used in the IP packet family of functions.




          eth_t



          eth_t refers to an Ethernet handle used in the Ethernet frame family of functions.




          blob_t



          blob_t refers to a blob handle used in the blob buffer management family of functions.




          rand_t



          rand_t refers to a random number handle used in the random number generation family of functions.