3.1. Installing on Linux

As with many open source projects, the primary platform of installation for Subversion is Linux. Although Subversion is far from a second-class citizen on other operating systems, such as Microsoft Windows or Apple's Mac OS X, Linux is where it feels most at home. Of course, Linux is not a single entity. There are, in fact, a wide variety of different distributions, each with its own slightly different filesystem layout and package management system for installation of software. As I said in the chapter introduction, describing all of these different distributions is well beyond the scope of this book. Instead, in this section I show you how to compile and install Subversion from source, which should work for most Linux distributions. If you would rather install binaries with your distribution's package management system, you will find binary packages for most of the major distributions on the Subversion downloads page.

3.1.1. Subversion's Prerequisites

To make compiling a Subversion client easy, Subversion includes most of the dependencies the client needs in the Subversion source distribution. On the other hand, if you want to compile a server, there are a number of prerequisites that you need to install first. Which prerequisites you need depends on exactly which features of Subversion you want. In the following sections, I describe each of the packages that you may need to install, the functionality they provide, and where you might find them.

Apache Portable Runtime Libraries

The Apache Portable Runtime (APR) libraries are a set of libraries that provide a cross-platform abstraction layer for developing software that works the same on a variety of different operating systems. Subversion is built on top of APR and makes heavy use of the library, which helps ensure that SVN runs on the variety of platforms it supports. APR is therefore a core library for SVN and is required when building the system. The APR libraries are available for download from a variety of mirrors, which can be reached from the Apache project's Web site, at apr.apache.org. Downloading the APR libraries is not usually necessary, though, because the APR source necessary for compiling Subversion is included in the Subversion source distribution. Most compiled binary distributions of Subversion also include the necessary APR libraries, so installing them separately will likely be unnecessary.

Apache Web Server

Subversion uses the Apache Web server as one of the two servers it supports for allowing remote access to the repository. If you want support for using the Subversion extensions to WebDAV for accessing the repository, you will need Apache. If you would rather use the Subversion custom-protocol-based svnserve, or only want to use Subversion on the local machine, you do not need to compile Subversion with support for Apache. Most Linux distributions include Apache as a part of their core packages, and your system likely already has it installed. This may not be what you need, though. Subversion requires version 2.0 or later of Apache, which is not yet in predominant use and is often not installed. If you cannot upgrade your whole system to use Apache 2, it is possible to install Apache 2 alongside an existing version of Apache 1. I will show you how to set up such a system in Section 3.4, "Configuring to Use Apache."
If you want to download the sources for Apache to compile them yourself, they can be obtained from mirrors linked from the Apache Web site, just like APR. You can download Apache from its site at httpd.apache.org. The Apache Web site also provides compiled binary versions of Apache for most platforms if you need to install Apache but don't want to compile it yourself. Unlike APR, Apache is not included with the Subversion source and must be installed separately.

Berkeley DB

The Berkeley DB (BDB) database is an embedded database, developed as an open source project by Sleepycat Software. It is a database system designed to be integrated into other programs, and it is used by Subversion for its database repository backend. Berkeley DB is required only if you are going to use the Subversion database repository. If you are instead using the new filesystem-based repository introduced in version 1.1 of Subversion, you can compile without support for Berkeley DB. Most binary distributions of Subversion include the necessary BDB support, so you shouldn't have to acquire BDB separately. If you are compiling Subversion from scratch, or are using an installation package that doesn't include BDB, you will need to install BDB yourself in order to have support for BDB-based repositories (most Linux distributions already have it installed, so you might want to check before attempting to install it yourself). You can download BDB from Sleepycat's Web site, at www.sleepycat.com.

Neon

Subversion uses the Neon library in its client for communications with a WebDAV server. In most cases, you will not have to download this library separately, because it is included in the Subversion source distribution. If you are installing binary packages, however, you may need to install Neon as one of the prerequisites; the packages you need are in most cases available from the same place as the Subversion package you are installing.

3.1.2. Downloading the Source

The Subversion source code can be downloaded from the Subversion project's Web site, at subversion.tigris.org/project_packages.html, which is also where you will find a variety of precompiled binary packages for various operating systems and distributions. Source versions are available in a variety of archive formats (tar gzip, tar bzip2, and zip). If you're compiling on a UNIX-based system, you'll probably want to download either the bzip2 archive for the latest version of SVN (.tar.bz2 extension) or the gzip archive (.tar.gz). After you have downloaded the Subversion source archive, it's time to unpack it so that you can start compiling. The easiest way to decompress and unpack in one step is to run either the gzip or bzip2 command (depending on which archive you downloaded) with the -dc options, which tell the command to decompress the file and send the result to standard output. A pipe can then be used to send the decompressed archive directly to tar, which you'll want to run with the xvf options (extract the archive from a file, verbosely) and a hyphen (-) to tell it to take the archive from standard input. So, for example, if you had downloaded the bzip2-compressed version 1.1.0 of Subversion, the command to unpack it would be
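    $ bzip2 -dc subversion-1.1.0.tar.bz2 | tar xvf -

(If you downloaded the gzip archive instead, substitute gzip for bzip2 and use the .tar.gz filename.)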
The tar command unpacks everything into a directory named (in the case of the preceding version) subversion-1.1.0.

3.1.3. Compiling and Installing

For the most part, compiling Subversion is straightforward. The build auto-detects the presence of the various prerequisites required for compilation of the server and decides whether it can build the server. If none of the prerequisites are installed, it just compiles the Subversion client (which doesn't have any prerequisites beyond what is included with the source). To perform a basic compilation, just cd into the Subversion directory that you unpacked in the previous section and run the following commands.
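    $ ./configure
    $ make
    $ su
    # make install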
In most cases, this compiles everything that you need and installs it under /usr/local/. Notice that the last command (make install) requires you to switch to the root user, in order to have the proper permissions to perform a systemwide install of Subversion.

Configuration Options

Sometimes the default compile and install is not actually what you want. For instance, you may not want to compile the server with all of the possible features, even if the prerequisites are installed; or you may want to install somewhere other than /usr/local/. In these cases, the Subversion configure script provides several options that you can set when it is run. I explain the ones you are most likely to need in the following sections. If you would like to see the complete list, you can run ./configure --help. To configure the Subversion compilation with one of these options, run the configure script with the desired option passed as a command-line parameter.
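For example, to install to a custom location and skip Berkeley DB support (both options are described below; /opt/subversion is just an illustrative path):

    $ ./configure --prefix=/opt/subversion --without-berkeley-db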
--prefix=PREFIX

The prefix option specifies where Subversion should be installed. By default, this is /usr/local/. If you would rather install Subversion somewhere else, pass configure the --prefix option, with PREFIX replaced by the path where you would like Subversion to be installed. For example, if you are installing SVN on a system where you don't have root privileges, you can run ./configure --prefix=$HOME/subversion to have Subversion installed in a directory named subversion in your home directory.
--with-apache=PATH, --without-apache

These options tell the build scripts whether they should compile the Subversion server with support for Apache and WebDAV. The default behavior is for Apache support to be included, but if for some reason you don't want it compiled in, passing --without-apache to configure will disable it. Additionally, if Apache is installed in a nonstandard place on your system, you may have to tell configure where to find it. You can do that by passing the --with-apache=PATH option, with PATH replaced by the path to where Apache is installed.
--with-berkeley-db=PATH, --without-berkeley-db

These options tell the build scripts whether they should compile the Subversion server with support for Berkeley DB. The default behavior is for BDB support to be included, but if you plan on using the filesystem-based repository storage, --without-berkeley-db will disable it (of course, you can still use the filesystem repository even if BDB support is compiled). Also, if Berkeley DB is installed in a nonstandard place on your system, you may have to tell configure where to find it. You can do that by passing the --with-berkeley-db=PATH option, with PATH replaced by the path to where BDB is installed.
--disable-mod-activation

By default, Subversion modifies your Apache httpd.conf file to enable the Subversion WebDAV module, mod_dav_svn, when you run make install. If you don't want it to make this modification, you can pass --disable-mod-activation to the configure script.
Recipe 8.18. Implementing Class and Singleton Methods

Problem

You want to associate a new method with a class (as opposed to the instances of that class), or with a particular object (as opposed to other instances of the same class).

Solution

To define a class method, prefix the method name with the class name in the method definition. You can do this inside or outside of the class definition. The Regexp.is_valid? method, defined below, checks whether a string can be compiled into a regular expression. It doesn't make sense to call it on an already instantiated Regexp, but it's clearly related functionality, so it belongs in the Regexp class (assuming you don't mind adding a method to a core Ruby class).
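A minimal sketch of such an implementation (the exact body here is illustrative):

    class Regexp
      # Returns true if str can be compiled into a regular expression.
      def Regexp.is_valid?(str)
        begin
          compile(str)
          true
        rescue RegexpError
          false
        end
      end
    end

    Regexp.is_valid?("[a-z]+")    # => true
    Regexp.is_valid?("[a-z")      # => false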
Here's a Fixnum.random method that generates a random number in a specified range:
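One way to write it (the body is a sketch; it uses Kernel#rand):

    # Returns a random integer n such that min <= n <= max.
    def Fixnum.random(min, max)
      raise ArgumentError, "min > max" if min > max
      min + rand(max - min + 1)
    end

    Fixnum.random(1, 6)    # => 4 (varies)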
To define a method on one particular other object, prefix the method name with the variable name when you define the method:
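For example (the object and method names are arbitrary):

    greeting = "hello"
    # Define a singleton method on this one String object:
    def greeting.shout
      upcase + "!"
    end
    greeting.shout    # => "HELLO!"
    "other".shout     # => raises NoMethodError; other strings lack it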
Discussion

In Ruby, a class is itself an object (an instance of Class), so a class method is just a method attached to one particular object. Some common types of methods, such as class methods and per-object singleton methods, are therefore two faces of the same mechanism.
When you define a class method such as Regexp.is_valid?, Ruby implements it as a singleton method on the class object itself: the method lives on the Regexp object, not on instances of Regexp.
When you define a method on a particular object, Ruby acts behind the scenes to transform the object into an anonymous subclass of its former class. This new class is the one that actually defines the new method or overrides the methods of its superclass.
Operations Staff

Keeping your development personnel productive and happy with a checkin system requires constant monitoring of the process. The key to the success of the SNAP system is constant monitoring, or babysitting. Nothing upsets a development team more than a system that does not work, with no one available, or no resources, to fix it. Don't try to run the SNAP system unattended. You need properly trained people to debug and own any problems with the system. This is usually someone from the build team, but it does not have to be. If you can make your build and testing processes highly reliable, you might find that you can go extended periods without close supervision. Either way, you need resources assigned to attending and maintaining the system.

It is also worth noting that you cannot properly manage a SNAP system without control over the details of your lab. In other words, if your IT department owns all the hardware, or you do not have a properly equipped lab, it will be difficult to manage a SNAP system because you will not have direct access to the machines or be able to modify the hardware configuration if needed.
1.4. All Software Engineers Are Created Equal

A software project requires much more than just writing code. There are all sorts of work products that are produced along the way: documents, schedules, plans, source code, bug reports, and builds are all created by many different team members. No single work product is more important than any other.

There are many project managers who, when faced with a disagreement between a programmer and a tester, will always trust the programmer. This same project manager might always trust a requirements analyst or a business analyst over a programmer, if and when they disagree. Many people have some sort of hierarchy in their heads in which certain engineering team members are more valuable or more skilled than others. This is a dangerous idea, and it has no place on an effective project team.

One key to building better software is treating each idea objectively, no matter who suggests it or whether or not it's immediately intuitive to you. That's another reason the practices, techniques, and tools in this book cover all areas of the software project. Every one of these practices is based on an objective evaluation of all of the important activities in software development. Every discipline is equally important, and everyone on the team contributes equally to the project. A software requirements specification (SRS), for example, is no more important than the code: the code could not be created without the SRS, and the SRS would have no purpose if it weren't the basis of the software. It is in the best interest of everyone on the team to make sure that both of them have as few defects as possible, and that the authors of both work products have equal say in project decisions.

The project manager must respect all team members, and should not second-guess their expertise. This is an important principle because it is the basis of real commitments. It's easy for a senior manager to simply issue an edict that everyone must build software without defects and do a good job; however, this rarely works well in practice. The best way to make sure that the software is built well is to make sure that everyone on the team feels respected and valued, and to gain a true commitment from each person to make the software the best it can be.
5.2. Integrating Ecosystems: Apple's iPod
Apple's iPod is not precisely a web application. At its heart, it combines iPod hardware for playing music (and pictures and video), iTunes software for managing that content (shown in Figure 5-4), and an iTunes store that runs over the Web (shown in Figure 5-5). The iPod exemplifies the integration of new technology with existing systems, and its continuing growth into new areas (such as the web-capable iPhone) demonstrates how different technological ecosystems can coexist. Physical hardware can both benefit from network effects and create surrounding businesses based on those effects.
Figure 5-4. iTunes software with a user's music library
Figure 5-5. Apple's music store running inside of iTunes
However, the iPod combines much more than just components made and controlled by Apple. The first four ecosystems we'll discuss are illustrative examples of platform innovation, and they demonstrate how Apple has captured value from its ecosystems and expanded and widely distributed this value to its partners. You'll also see the overall increase in returns from collaborative innovation.
In this example, a lead company—here, Apple—conceives, designs, and orchestrates the external innovation and creativity of many other outside participants, users, suppliers, creators, affiliates, partners, and complementors to support an innovative product, service, or system. The final (and fifth) ecosystem—the iTunes and major record label partnerships—is used as a contrasting example of recombinant innovation and will be discussed later.
5.2.1. Platform Innovation Ecosystem #1: Production
To create the iPod, Apple first assembled a production ecosystem—a group of companies all over the world that contributed to circuit design, chipsets, the hard drive, screens, the plastic shell, and other technologies, as well as assembly of the final device.
Although many people still think of the manufacturing process as the key place to capture added value, the iPod demonstrates that Apple—the creator, brand name, and orchestrator—has actually figured out how to capture the lion's share of the value: 30%. The rest of the value is spread across a myriad of different contributions within the network of component providers and assemblers, none of them larger than 15%. Researchers, sponsored by the Sloan Foundation, developed an analytical framework for quantitatively calculating who captures the value from a successful global outsourced innovation like Apple's iPod.
Their study (http://pcic.merage.uci.edu/papers/2007/AppleiPod.pdf) traced the 451 parts that go into the iPod. Attributing cost and value-capture to different companies and their home countries is relatively complex, as the iPod and its components—like many other products—are made in several countries by dozens of companies, with each stage of production contributing a different amount to the final value.
It turns out that $163 of the iPod's $299 retail value is captured by American companies and workers, with $75 to distribution and retail, $80 to Apple, and $8 to various domestic component makers. Japan contributes about $26 of value added, Korea less than $1, and the final assembly in China adds somewhat less than $4 a unit. (The study's purpose was to demonstrate that trade statistics can be misleading, as the U.S.-China trade deficit increases by $150—the factory cost—for every 30 GB video iPod unit, although the actual value added by assembly in China is a few dollars at most.)
Suppliers throughout the manufacturing chain benefit from sales of the product and may thrive as a result, but the main value of the iPod goes to its creator, Apple. As Hal Varian commented in the New York Times:[32]
[32] Hal R. Varian, "An iPod Has Global Value. Ask the (Many) Countries That Make It," New York Times, June 28, 2007, http://people.ischool.berkeley.edu/~hal/people/hal/NYTimes/2007-06-28.html.
The real value of the iPod doesn't lie in its parts or even in putting those parts together. The bulk of the iPod's value is in the conception and design of the iPod. That is why Apple gets $80 for each of these video iPods it sells, which is by far the largest piece of value added in the entire supply chain.
Those clever folks at Apple figured out how to combine 451 mostly generic parts into a valuable product. They may not make the iPod, but they created it. In the end, that's what really matters.
5.2.2. Platform Innovation Ecosystem #2: Creative and Media
The iPod is also part of an ecosystem and contributes to the indirect network effects of Apple's other key product: Macintosh computers. Even though iPods, and the iTunes software that supports them, were originally compatible only with Macs, the cachet of the iPod has continued to help sell Macs even after the iPod gained Windows support.
Beyond the iPod and iTunes, Apple's creative and media ecosystem includes software such as iMovie, iDVD, Aperture, Final Cut, GarageBand, and QuickTime (a key technology for the video iPods). These are all Apple products, but many other companies also provide software and hardware for this space, notably Adobe and Quark.
5.2.3. Platform Innovation Ecosystem #3: Accessories
As any visit to the local electronics (or even office supply) store will show, the iPod has inspired a blizzard of accessories. Bose, Monster Cable, Griffin Technologies, Belkin, and a wide variety of technology and audio companies provide iPod-specific devices, from chargers to speakers. Similarly, iPod fashion has brought designers, such as Kate Spade, into the market for iPod cases, along with an army of lesser-known contributors. Also, automobile companies and car audio system makers are adding iPod connectors, simplifying the task of connecting iPods to car stereo systems.
iPod accessories are a $1 billion business. In 2005, Apple sold 32 million iPods, or one every second. And for every $3 spent on an iPod, at least $1 is spent on an accessory, estimates NPD Group analyst Steve Baker. That means customers make three or four additional purchases per iPod.
Accessory makers are happy to sell their products, of course, but this ecosystem also supports retailers, which get a higher profit margin on the accessories than on the iPod (50% rather than 25%). It also reinforces the value of the iPod itself because the 2,000 different add-ons made exclusively for the iPod motivate customers to personalize their iPods. This sends a strong signal that the iPod is "way cooler" than other players offered by Creative and Toshiba, for which there are fewer accessories. The number of accessories is doubling each year, and that's not including the docking stations that are available in a growing number of cars.
Most industry participants were surprised by the strength and growth of the accessory market. Although earlier products such as Disney's Mickey Mouse or Mattel's Barbie supported their own huge markets for accessories, those were made by the company that created the original product or by its licensees. Apple has taken a very different path, encouraging a free-for-all; it accepts that its own share of the accessories market is small, knowing that the iPod market is growing.
5.2.4. Platform Innovation Ecosystem #4: User-Provided Metadata
Even before the iTunes music store opened, users who ripped their CDs to digital files in iTunes got more than just the contents of the CD. Most CDs contain only the music data, not information like song titles. Entering titles for a large library of music can be an enormous task, and users shouldn't have to do it for every album.
This problem had been solved earlier by Gracenote, which is best known for its CDDB (CD database) technology. Every album has slightly different track lengths, and the set of lengths on a particular album is almost always unique. So, by combining that identification information with song titles entered by users only when they encountered a previously unknown CD, Gracenote made it much easier for users to digitize their library of CDs.
Rather than reinventing the wheel, Apple simply connected iTunes to CDDB, incorporating the same key feature that made CDDB work in the first place: user input. Whenever a user put in a CD that hadn't previously been cataloged, iTunes would ask that user if he wanted to share the catalog.
Apple has no exclusive rights to CDDB, but it benefits from its existence nonetheless. Users get a much simpler process for moving their library to digital format, and contributing to CDDB requires only a decision, not any extra work.
5.2.5. Recombinant Innovation
The music industry is perhaps the most difficult ecosystem the iPod has to deal with—and the most frequently discussed. The music industry has dealt with the Web and the Internet broadly as a threat rather than an opportunity, as it saw its profits disappearing when the transaction costs of sharing music dropped precipitously. So, how did Steve Jobs get the record labels—which had been suing Napster and Kazaa—to sign up for the iTunes store to offer online, downloaded music?
Apple presented its proposal to the big four music companies—Universal, Sony BMG, EMI, and Warner—as a manageable risk. Apple's control of the iPod gave it the tools it needed to create enough digital rights management (DRM)—limiting music to five computers—to convince the companies that this was a brighter opportunity for them than the completely open MP3 files that users created when they imported music from CDs. Jobs was actually able to leverage Apple's small market share into a promising position:[33]
[33] Steven Levy, "Q&A: Jobs on iPod's Cultural Impact," Newsweek, Oct. 16, 2006, http://www.msnbc.msn.com/id/15262121/site/newsweek/print/1/displaymode/1098/.
Now, remember, it was initially just on the Mac, so one of the arguments that we used was, "If we're completely wrong and you completely screw up the entire music market for Mac owners, the sandbox is small enough that you really won't damage the overall music industry very much." That was one instance where Macintosh's [small] market share helped us.
Apple also played a key role in coordinating the pricing of songs among the big four music companies—99 cents a tune. This was a major step in weaning the music companies away from their high-priced retail distribution of prepackaged bundled digital goods (CDs) toward a digital distribution channel with a per-user/per-song revenue structure and potentially strong social-network effects. Downloads were also carefully priced to incentivize users who preferred having legal downloads and who trusted Apple and Steve Jobs to keep the price at that level despite the protests of the music labels.
After 18 months of negotiations, Apple was able to get started on its own platform, and later carry the DRM strategy to Windows. (One pleasant side effect for Apple of the DRM deal with the music companies is that it is difficult for Apple to license that DRM technology, preserving Apple's monopoly there.)
Apple provided the music companies with a revenue stream built on an approach that would let them compete with pirated music, although the companies aren't entirely excited about adapting to song-by-song models and the new distribution channels. However, EMI's decision in April 2007 to start selling premium versions of its music without DRM—still through the iTunes store but also through other sellers—suggests that there is more to come in this developing relationship. Ecosystems evolve, and business ecosystems often evolve beyond their creators' vision.
9.4 Dependencies among Procedures

This section covers how Oracle tracks procedure compilation times and status in the data dictionary, how a change to one procedure invalidates its dependents, and how to recompile and trace invalid objects.
When we first compile the HELLO procedure, the CREATED time and LAST_DDL_TIME are identical.
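You can watch these timestamps with a query against USER_OBJECTS; the CHECK_PLSQL_OBJECTS script used throughout this section might look something like this sketch:

    -- A sketch of CHECK_PLSQL_OBJECTS.SQL: list stored PL/SQL objects
    -- with their status and timestamps.
    SELECT object_name, object_type, status,
           TO_CHAR(created, 'DD-MON-YY HH24:MI:SS') created,
           TO_CHAR(last_ddl_time, 'DD-MON-YY HH24:MI:SS') last_ddl_time
    FROM   user_objects
    WHERE  object_type IN ('PROCEDURE', 'FUNCTION', 'PACKAGE', 'PACKAGE BODY');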
If we attempt to recompile the procedure and the compile fails, the procedure is still in the data dictionary but with an INVALID status. The LAST_DDL_TIME reflects the last compile time. Executing a procedure that is INVALID will fail with an Oracle error:
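    SQL> EXECUTE hello
    BEGIN hello; END;

    *
    ERROR at line 1:
    ORA-06575: Package or function HELLO is in an invalid state

(ORA-06575 is the error Oracle raises when an invalid object is executed; the exact output format varies by SQL*Plus version.)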
If Procedure A calls Procedure B and B becomes invalid, then A automatically becomes invalid. For the procedures in Figure 9-2, SAY_HELLO calls HELLO. What happens if HELLO becomes invalid?

Figure 9-2. Simple Procedure Dependency

We begin with the code to HELLO.SQL and SAY_HELLO.SQL.
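A minimal version of the two scripts (the exact bodies are illustrative sketches):

    -- HELLO.SQL
    CREATE OR REPLACE PROCEDURE hello IS
    BEGIN
      DBMS_OUTPUT.PUT_LINE('Hello World');
    END;
    /
    SHOW ERRORS

    -- SAY_HELLO.SQL
    CREATE OR REPLACE PROCEDURE say_hello IS
    BEGIN
      hello;
    END;
    /
    SHOW ERRORS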
Compile procedures HELLO and SAY_HELLO in order. The SHOW ERRORS command reports any compilation errors. The script CHECK_PLSQL_OBJECTS shows the STATUS as VALID for each procedure in USER_OBJECTS.
Edit HELLO.SQL and change PUT_LINE to PUTLINE. The procedure will now compile with an error. Recompile HELLO with @HELLO.SQL. The status of SAY_HELLO is also invalid, yet we did not change that procedure. SAY_HELLO depends on a valid HELLO procedure. A compile error in HELLO causes Oracle to search for objects that depend on HELLO and invalidate them, because a procedure can be valid only if everything it depends on is valid. Running CHECK_PLSQL_OBJECTS at this point shows both objects as INVALID.
Correct the PL/SQL code in HELLO.SQL and recompile. HELLO should be valid after a successful recompilation. What about SAY_HELLO? Is it still invalid?
Procedure SAY_HELLO is still invalid; however, when we execute the procedure, Oracle sees that it is invalid and attempts to validate it. This succeeds because everything SAY_HELLO depends on (i.e., the HELLO procedure) is valid. Oracle compiles SAY_HELLO, sets the status to valid, and then executes the procedure. Following execution, both procedures are valid.
There is understandable overhead in Oracle validating objects at run time. If HELLO is a widely used procedure and becomes invalid, there will be some performance degradation. During the normal operations of an application, Oracle may encounter many packages that have become invalid and recompile them at run time. This can cause a noticeable impact to end users.

The following discussion covers the scenario in which invalid code does not recompile. We invalidated HELLO, recompiled it, and it became valid again. The change was a simple statement change that we corrected. A major code change to HELLO could cause recompilation failures in other procedures. Such an event would occur if the interface to HELLO changed. Changing the parameter specification, parameter types, or parameter modes can permanently invalidate other code. If we change a procedure and recompile, Oracle's recompilation of other procedures may fail. Why wait until run time to find out there is broken code? When PL/SQL changes occur, you can recompile the entire suite of PL/SQL code in the schema. The Oracle DBMS_UTILITY package provides this functionality with the COMPILE_SCHEMA procedure. To recompile all PL/SQL in a schema (this example uses the schema name SCOTT):
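    SQL> EXECUTE DBMS_UTILITY.COMPILE_SCHEMA('SCOTT')

    PL/SQL procedure successfully completed.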
The response "PL/SQL procedure successfully completed" means only that the call to DBMS_UTILITY was successful; there may still be invalid objects. Run CHECK_PLSQL_OBJECTS to look for invalid stored procedures; LAST_DDL_TIME shows the recompilation time of each procedure. If a procedure is invalid, you can run SHOW ERRORS on it to see why it failed to compile.
To show compile errors for SAY_HELLO:
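    SQL> SHOW ERRORS PROCEDURE SAY_HELLO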
The following scenario includes three procedures. P1 calls P2, which calls P3. The procedure code is:
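A minimal sketch of the three procedures (the exact bodies are assumptions):

    CREATE OR REPLACE PROCEDURE p3 IS
    BEGIN
      DBMS_OUTPUT.PUT_LINE('P3 here');
    END;
    /
    CREATE OR REPLACE PROCEDURE p2 IS
    BEGIN
      p3;
    END;
    /
    CREATE OR REPLACE PROCEDURE p1 IS
    BEGIN
      p2;
    END;
    /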
Compile these procedures in the following order: P3, then P2, then P1. Execution of P1 produces the following:
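With the sketch above, that looks like:

    SQL> SET SERVEROUTPUT ON
    SQL> EXECUTE p1
    P3 here

    PL/SQL procedure successfully completed.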
Change P3 by adding a parameter to the interface and compile the procedure.
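For example (adding a single VARCHAR2 parameter is just one possible interface change):

    CREATE OR REPLACE PROCEDURE p3 (msg VARCHAR2) IS
    BEGIN
      DBMS_OUTPUT.PUT_LINE(msg);
    END;
    /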
Not knowing all the dependencies on P3, we can compile the schema.
Check for invalid objects.
We have invalid objects P1 and P2. Use SHOW ERRORS to see why these procedures failed to compile.
Many invalid objects can pose a challenging problem. In the preceding example (P1, P2 and P3), there are two invalid objects. We changed P3 and saw from SHOW ERRORS that P2 is passing the wrong number of arguments to P3. The DBMS_UTILITY.COMPILE_SCHEMA compiles all the PL/SQL in a schema. You can validate individual components with the ALTER statement.
You can compile a package's specification and body together, just the specification, or just the body, as shown in the sketch below.
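Using a hypothetical package named MY_PKG:

    ALTER PACKAGE my_pkg COMPILE;                -- specification and body
    ALTER PACKAGE my_pkg COMPILE SPECIFICATION;  -- specification only
    ALTER PACKAGE my_pkg COMPILE BODY;           -- body only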
You can always determine object dependencies by querying the data dictionary view USER_DEPENDENCIES, covered in the next section. The preceding scenario includes three procedures: P1, P2, and P3. This is not a complex architecture. When there are many program units with many dependencies, the task becomes tedious. It requires repeated queries of the USER_DEPENDENCIES view. In the next section, we look at applying a general solution to querying USER_DEPENDENCIES. The CHECK_PLSQL_OBJECTS script used to query the USER_OBJECTS view filters for stored procedures with a WHERE clause, just to keep the example small. A change to a stored procedure can invalidate other object types. A view can use a PL/SQL function. A trigger can use a procedure, function, or package. Objects from other schemas may use our PL/SQL objects. A general dependency tracing strategy requires that you query the ALL_DEPENDENCIES view for all object types.
1.5. Toward Mac OS X

After Rhapsody's DR2 release, Apple would still alter its operating system strategy, but it was finally on its way toward achieving its goal of having a new system. During the 1998 Worldwide Developers Conference, Adobe's Photoshop ran on what would become Mac OS X. However, the first shipping release of Mac OS X would take another three years. Figure 1-1 shows an approximation of the progression from Rhapsody toward Mac OS X.

Figure 1-1. An approximation of the Mac OS X timeline

1.5.1. Mac OS X Server 1.x

As people were expecting a DR3 release of Rhapsody, Apple announced Mac OS X Server 1.0 in March 1999. Essentially an improved version of Rhapsody, it was bundled with WebObjects, the QuickTime streaming server, a collection of developer tools, the Apache web server, and facilities for booting or administering over the network. Apple also announced an initiative called Darwin: a fork of Rhapsody's developer release. Darwin would become the open source core of Apple's systems. Over the next three years, as updates were released for the server product, development of the desktop version continued, with the server sharing many of the desktop improvements.

1.5.2. Mac OS X Developer Previews

There were four Developer Preview releases of Mac OS X, named DP1 through DP4. Substantial improvements were made during these DP releases.

1.5.2.1. DP1

An implementation of the Carbon API was added. Carbon represented an overhaul of the "classic" Mac OS APIs, which were pruned, extended, or modified to run in the more modern Mac OS X environment. Carbon was also meant to help Mac OS developers transition to Mac OS X. A Classic application would require an installation of Mac OS 9 to run under Mac OS X, whereas Carbon applications could be compiled to run as native applications under both Mac OS 9 and Mac OS X.

1.5.2.2. DP2

The Yellow Box evolved into Cocoa, the name originally alluding to the fact that, besides Objective-C, the API would be available in Java. A version of the Java Development Kit (JDK) was included, along with a just-in-time (JIT) compiler. The Blue Box environment was provided via Classic.app (a newer version of MacOS.app) that ran as a process called truBlueEnvironment. The Unix environment was based on 4.4BSD. DP2 thus contained a multitude of APIs: BSD, Carbon, Classic, Cocoa, and Java. There was widespread dissatisfaction with the existing user interface. The Aqua user interface had not been introduced yet, although there were rumors that Apple was keeping the "real" user interface a secret.[23]
Carbon is sometimes perceived as "the old" API. Although Carbon indeed contains modernized versions of many old APIs, it also provides functionality that may not be available through other APIs. Parts of Carbon are complementary to "new" APIs such as Cocoa. Nevertheless, Apple has been adding more functionality to Cocoa so that dependencies on Carbon can eventually be eliminated. For example, before Mac OS X 10.4, much of the QuickTime functionality was available only through Carbon. In Mac OS X 10.4, Apple introduced the QTKit Cocoa framework, which reduces or eliminates Carbon dependencies for QuickTime.

1.5.2.3. DP3

The Aqua user interface was first demonstrated during the San Francisco Macworld Expo in January 2000. Mac OS X DP3 included Aqua along with its distinctive elements: "water-like" elements, pinstripes, pulsating default buttons, "traffic-light" window buttons, drop shadows, transparency, animations, sheets, and so on. The DP3 Finder was Aqua-based as well. The Dock was introduced with support for photorealistic icons that were dynamically scalable up to 128x128 pixels.

1.5.2.4. DP4

The Finder was renamed the Desktop in DP4. The System Preferences application (Preferences.app, the precursor to System Preferences.app) made its first appearance in Mac OS X, allowing the user to view and set a multitude of system preferences such as Classic, ColorSync, Date & Time, Energy Saver, Internet, Keyboard, Login Items, Monitors, Mouse, Network, Password, and others. Prior to DP4, the Finder and the Dock were implemented within the same application. The Dock was an independent application (Dock.app) in DP4. It was divided into two sections: the left side for applications and the right side for the trash can, files, folders, and minimized windows. Other notable components of DP4 included an integrated development environment and OpenGL.

The Dock's visual indication of a running application underwent several changes. In DP3, an application's Dock icon had a bottom edge a few pixels high that was color-coded to indicate whether the application was running. This was replaced by an ellipsis in DP4, which was followed by a triangle in subsequent Mac OS X versions. DP4 also introduced the smoke cloud animation that ensues after an item is dragged off the Dock.

1.5.3. Mac OS X Public Beta

Apple released a beta version of Mac OS X (Figure 1-2) at the Apple Expo in Paris on September 13, 2000. Essentially a publicly available preview release for evaluation and development purposes, the Mac OS X Public Beta was sold for $29.95 at the Apple Store. It was available in English, French, and German. The software's packaging contained a message from Apple to the beta testers: "You are holding the future of the Macintosh in your hands." Apple also created a Mac OS X tab on its web site that contained information on Mac OS X, including updates on third-party applications, tips and tricks, and technical support.

Figure 1-2. Mac OS X Public Beta

Although the beta release was missing important features and ostensibly lacked in stability and performance, it demonstrated several important Apple technologies at work, particularly to those who had not been following the DP releases. The beta's key features were the following:
With Darwin, Apple would continually leverage a substantial amount of existing open source software by using it for, and often integrating it with, Mac OS X. Apple and Internet Systems Consortium, Inc. (ISC), jointly founded the OpenDarwin project in April 2002 to foster cooperative open source development of Darwin. GNU-Darwin is an open source Darwin-based operating system.
1.5.4. Mac OS X 10.x

The first version of Mac OS X was released on March 24, 2001, as Mac OS X 10.0 Cheetah. Soon afterwards, the versioning scheme of the server product was revised to synchronize it with that of the desktop system. Since then, the trend has been that a new version of the desktop is released first, soon followed by the equivalent server revision. Table 1-1 lists several major Mac OS X releases. Note that the codenames are all taken from felid taxonomy.
Let us look at some notable aspects of each major Mac OS X release.

1.5.4.1. Mac OS X 10.0

Apple dubbed Cheetah "the world's most advanced operating system," which would become a frequently used tagline for Mac OS X.[24] Finally, Apple had shipped an operating system with features that it had long sought. However, it was clear that Apple had a long way to go in terms of performance and stability. Key features of 10.0 included the following:
When Mac OS X 10.0 was released, there were approximately 350 applications available for it. 1.5.4.2. Mac OS X 10.1Puma was a free update released six months after 10.0's release. It offered significant performance enhancements, as indicated by Apple's following claims:
There were substantial performance boosts in other areas such as system startup, user login, Classic startup, OpenGL, and Java. Other key features of this release included the following:
The Carbon API implementation in 10.1 was complete enough to allow important third-party applications to be released. Carbonized versions of Microsoft Office, Adobe Photoshop, and Macromedia Freehand were released soon after 10.1 went public.

1.5.4.3. Mac OS X 10.2

Jaguar was released at 10:20 P.M. to emphasize its version number. Its important feature additions included the following:
Hereafter, Apple introduced new applications and incorporated technologies in Mac OS X at a bewildering pace. Other notable additions to Mac OS X after the release of Jaguar included the iPhoto digital photo management application, the Safari web browser, and an optimized implementation of the X Window System.

1.5.4.4. Mac OS X 10.3

Panther added several productivity and security features to Mac OS X, besides providing general performance and usability improvements. Notable 10.3 features included the following:
1.5.4.5. Mac OS X 10.4

Besides providing typical evolutionary improvements, Tiger introduced several new technologies such as Spotlight and Dashboard. Spotlight is a search technology consisting of an extensible set of metadata importer plug-ins and a query API for searching files based on their metadata, even immediately after new files are created. Dashboard is an environment for creating and running lightweight desktop utilities called widgets, which normally remain hidden and can be summoned by a key-press. Other important Tiger features include the following:
The first shipping x86-based Macintosh computers used Mac OS X 10.4.4 as the operating system. As we have seen in this chapter, Mac OS X is a long evolution of many disparate technologies. The next version of Mac OS X is expected to continue the remarkable pace of development, especially with the transition from the PowerPC to the x86 platform. In Chapter 2, we will take a diverse tour of Mac OS X and its features, including brief overviews of the various layers. The remaining chapters discuss specific aspects and subsystems of Mac OS X in detail.
Native Datatypes
In order to work across a wide variety of platforms, libdnet specifies a series of native intermediate datatypes to represent different networking primitives (addressing, interfaces, and firewalling). These datatypes enable libdnet to maintain an operating system agnostic stance while still providing robust functionality. The datatypes are high-level enough that the application programmer can work with them, but they also contain enough information for libdnet to internally translate them to their operating system-specific counterpart.
struct addr {
struct addr is a partially opaque structure used to represent a network address.
u_short addr_type;
addr_type is the type of address contained in the structure.
u_short addr_bits;
addr_bits is the size of the address in bits contained in the structure.
Other members of this structure are internal to libdnet, and the application programmer does not need to know about them.
};
struct arp_entry {
In the ARP cache functions, struct arp_entry describes an ARP table entry.
struct addr arp_pa;
arp_pa is the ARP protocol address.
struct addr arp_ha;
arp_ha is the ARP hardware address.
};
struct route_entry {
In the route table functions, struct route_entry describes a routing table entry.
struct addr route_dst;
route_dst is the destination address.
struct addr route_gw;
route_gw is the default gateway to get to that destination address.
};
struct intf_entry {
struct intf_entry describes a network interface.
u_int intf_len;
intf_len is the length of the entry.
char intf_name[60];
intf_name is the canonical name of the interface.
u_short intf_type;
intf_type is a bitmask for the type of interface.
u_short intf_flags;
intf_flags are the flags set on the interface.
u_int intf_mtu;
intf_mtu is the maximum transmission unit (MTU) of the interface.
struct addr intf_addr;
intf_addr is the interface's network address.
struct addr intf_dst_addr;
intf_dst_addr is the interface's point-to-point destination address (for things like PPP).
struct addr intf_link_addr;
intf_link_addr is the interface's link-layer address.
u_int intf_alias_num;
intf_alias_num is the number of aliases for the interface.
struct addr intf_alias_addrs __flexarr;
intf_alias_addrs is the variable-length array of aliases for the interface.
};
struct fw_rule {
fw_rule describes a firewall rule.
char fw_device[14];
fw_device is the canonical name of the interface to which the rule applies (for example, "fxp0", "eth0", or "any").
uint8_t fw_op:4,
fw_op is the type of operation (FW_OP_ALLOW or FW_OP_BLOCK).
fw_dir:4;
fw_dir is the direction in which the rule should be applied (FW_DIR_IN or FW_DIR_OUT).
uint8_t fw_proto;
fw_proto is the protocol to which the rule applies (IP_PROTO_IP, IP_PROTO_TCP, IP_PROTO_ICMP, and so on).
struct addr fw_src;
fw_src is the source IP address to which the rule applies.
struct addr fw_dst;
fw_dst is the destination IP address to which the rule applies.
uint16_t fw_sport[2];
fw_sport is the source port range of the rule or the ICMP type and mask.
uint16_t fw_dport[2];
fw_dport is the destination port range of the rule or the ICMP code and mask.
};
arp_t
arp_t refers to an ARP handle used in the ARP family of functions.
route_t
route_t refers to a route handle used in the route table family of functions.
intf_t
intf_t refers to an interface handle used in the interface family of functions.
fw_t
fw_t refers to a firewall handle used in the firewall family of functions.
ip_t
ip_t refers to an IP handle used in the IP packet family of functions.
eth_t
eth_t refers to an Ethernet handle used in the Ethernet frame family of functions.
blob_t
blob_t refers to a blob handle used in the blob buffer management family of functions.
rand_t
rand_t refers to a random number handle used in the random number generation family of functions.
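To give a sense of how these datatypes fit together in application code, here is a minimal sketch that parses an address into a struct addr and queries an interface by name. It assumes a system with libdnet installed; the interface name "eth0" and the compile command are illustrative, and error handling is kept minimal.

    /* A minimal sketch of working with libdnet's native datatypes.
       Compile with something like: cc dnet_example.c -o dnet_example -ldnet */
    #include <stdio.h>
    #include <string.h>
    #include <dnet.h>

    int main(void)
    {
        struct addr a;
        u_char buf[1024];                 /* room for alias addresses, if any */
        struct intf_entry *entry = (struct intf_entry *)buf;
        intf_t *i;

        /* Parse a string into a struct addr; libdnet fills in
           addr_type and addr_bits. */
        if (addr_aton("10.0.0.1", &a) < 0) {
            perror("addr_aton");
            return 1;
        }
        printf("parsed address: %s\n", addr_ntoa(&a));

        /* Query a network interface by name ("eth0" is just an example). */
        if ((i = intf_open()) == NULL) {
            perror("intf_open");
            return 1;
        }
        entry->intf_len = sizeof(buf);
        strncpy(entry->intf_name, "eth0", sizeof(entry->intf_name) - 1);
        entry->intf_name[sizeof(entry->intf_name) - 1] = '\0';
        if (intf_get(i, entry) == 0)
            printf("%s: mtu %u, addr %s\n", entry->intf_name,
                   entry->intf_mtu, addr_ntoa(&entry->intf_addr));
        intf_close(i);
        return 0;
    }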