Chapter 15. Clocking





The Previous Chapter


The high-speed signaling performed by HT devices is based on point-to-point differential signaling and source-synchronous clocking. The previous chapter discussed the details of link power requirements and of driver and receiver characteristics, along with the system-related signals RESET#, PWROK, LDTSTOP#, and LDTREQ#.


This Chapter


This chapter focuses on the source-synchronous clocking environment within HT, in which the source-synchronous transmit clock loads data into a receive FIFO and a receive clock in the receiver's time domain unloads it. Additionally, the specification defines three clocking modes that require different levels of support for passing packets between these two clock domains.


The Next Chapter


The next chapter describes the configuration of devices that use the HyperTransport technology type 1 configuration header for bridges. Such devices include HyperTransport-to-HyperTransport bridges and bridges to other PCI-compatible protocols (e.g., HyperTransport-to-PCI or HyperTransport-to-PCI-X). That chapter reviews the basic architecture of a HyperTransport-to-HyperTransport bridge and describes the configuration header fields, emphasizing differences in the usage of bit fields by HyperTransport bridge interfaces vs. PCI bridge interfaces. The format of PCI-compatible bridge headers is formally defined in the PCI-to-PCI Bridge Architecture Specification, Revision 1.1.







    12.1. Introduction


    The embedded Linux examples demonstrate certain basic APIs for various operations. Additional APIs exist that offer other functionality. You should research the additional APIs on your own to determine whether there are other, better ways to perform the operations necessary for your particular embedded system.


    One aspect of Linux you need to be familiar with is its thread model. The Linux API conforms to the key POSIX standard in this space, POSIX 1003.1c, commonly called the pthreads standard. POSIX leaves many of the implementation details up to the operating system implementer. A good source of information on pthreads is the book Pthreads Programming, by Bradford Nichols, Dick Buttlar, and Jacqueline Farrell (O'Reilly).


    The version of embedded Linux used on the Arcom board is a standard kernel tree (version 2.6) with additional ARM and XScale support from the ARM Linux Project at http://www.arm.linux.org.uk.


    A plethora of books about Linux and embedded Linux are available. Some good resources include Understanding the Linux Kernel, by Daniel P. Bovet and Marco Cesati (O'Reilly), Linux Device Drivers, by Alessandro Rubini and Jonathan Corbet (O'Reilly), and Building Embedded Linux Systems, by Karim Yaghmour (O'Reilly).


    The instructions for configuring the embedded Linux build environment and building the example Linux applications are detailed in Appendix E. Additional information about using embedded Linux on the Arcom board can be found in the Arcom Embedded Linux Technical Manual and the VIPER-Lite Technical Manual.



    In order to keep the examples in this chapter shorter and easier to read, we don't check the return values from function calls. In general, it is a good idea to validate all return codes. This provides feedback about potential problems and allows you, as the developer, to make decisions in the software based on failed calls. Basically, it makes your code more robust and, hopefully, less buggy.
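    As a sketch of what that validation looks like in practice (this helper is ours, not one of the chapter's example programs), here is a small routine that checks every return value instead of ignoring them:

```c
#include <stdio.h>

/* Read the first byte of a file, checking every return value.
 * Returns the byte (0-255) on success, or -1 on any failure. */
int read_first_byte(const char *path)
{
    FILE *fp = fopen(path, "rb");
    if (fp == NULL) {            /* open failed: report it, don't crash */
        perror(path);
        return -1;
    }

    int c = fgetc(fp);
    if (c == EOF) {              /* empty file or read error */
        fclose(fp);
        return -1;
    }

    if (fclose(fp) != 0)         /* even close can fail, e.g. on NFS */
        return -1;
    return c;
}
```

    Each failed call produces feedback and a well-defined error result, so the caller can decide how to recover rather than continuing with garbage data.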














    Example: Reading File Permissions


    Program 15-4 is the function ReadFilePermissions, which is used by Programs 15-1 and 15-2. This program methodically uses the preceding functions to extract the information. Its correct operation depends on the fact that the ACL was created by Program 15-3. The function is in the same source module as Program 15-3, so the definitions are not repeated.


    Program 15-4. ReadFilePermissions: Reading Security Attributes



    DWORD ReadFilePermissions (LPCTSTR lpFileName, LPTSTR UsrNm, LPTSTR GrpNm)
    /* Return the UNIX-style permissions for a file. */
    {
        PSECURITY_DESCRIPTOR pSD = NULL;
        DWORD LenNeeded, PBits, iAce;
        BOOL DaclF, AclDefF, OwnerDefF, GroupDefF;
        BYTE DAcl [ACL_SIZE];
        PACL pAcl = (PACL) &DAcl;
        ACL_SIZE_INFORMATION ASizeInfo;
        PACCESS_ALLOWED_ACE pAce;
        BYTE AType;
        HANDLE ProcHeap = GetProcessHeap ();
        PSID pOwnerSid, pGroupSid;
        TCHAR RefDomain [2] [DOM_SIZE];
        DWORD RefDomCnt [] = {DOM_SIZE, DOM_SIZE};
        DWORD AcctSize [] = {ACCT_NAME_SIZE, ACCT_NAME_SIZE};
        SID_NAME_USE sNamUse [] = {SidTypeUser, SidTypeGroup};

        /* Get the required size for the security descriptor. */
        GetFileSecurity (lpFileName,
            OWNER_SECURITY_INFORMATION | GROUP_SECURITY_INFORMATION |
            DACL_SECURITY_INFORMATION, pSD, 0, &LenNeeded);
        pSD = HeapAlloc (ProcHeap, HEAP_GENERATE_EXCEPTIONS, LenNeeded);
        GetFileSecurity (lpFileName, OWNER_SECURITY_INFORMATION |
            GROUP_SECURITY_INFORMATION | DACL_SECURITY_INFORMATION,
            pSD, LenNeeded, &LenNeeded);
        GetSecurityDescriptorDacl (pSD, &DaclF, &pAcl, &AclDefF);
        GetAclInformation (pAcl, &ASizeInfo,
            sizeof (ACL_SIZE_INFORMATION), AclSizeInformation);

        /* Compute the permissions from the ACL. */
        PBits = 0;
        for (iAce = 0; iAce < ASizeInfo.AceCount; iAce++) {
            GetAce (pAcl, iAce, (LPVOID *) &pAce);
            AType = pAce->Header.AceType;
            if (AType == ACCESS_ALLOWED_ACE_TYPE)
                PBits |= (0x1 << (8 - iAce));
        }

        /* Find the name of the owner and owning group. */
        GetSecurityDescriptorOwner (pSD, &pOwnerSid, &OwnerDefF);
        GetSecurityDescriptorGroup (pSD, &pGroupSid, &GroupDefF);
        LookupAccountSid (NULL, pOwnerSid, UsrNm, &AcctSize [0],
            RefDomain [0], &RefDomCnt [0], &sNamUse [0]);
        LookupAccountSid (NULL, pGroupSid, GrpNm, &AcctSize [1],
            RefDomain [1], &RefDomCnt [1], &sNamUse [1]);
        return PBits;
    }










      Chapter 12. Distributed Databases and Distributed Data




      Data in large and mid-sized companies frequently resides on multiple
      servers. The data might be distributed across various-sized servers
      running a mix of operating systems for a number of reasons, including
      scalability, performance, access, and management. As a result, the
      data needed to answer business questions may not reside on a single
      local server. The user may need to access data on several servers
      simultaneously, or the data required for an answer may need to be
      moved to a local server. Inserts, updates, or deletions of data on
      these distributed servers may also be necessary.



      There are two basic ways to deal with data in distributed databases:
      as part of a single distributed entity in which the distributed
      architecture is transparent, or by using a variety of replication
      techniques to create copies of the data in more than one location.
      This chapter will examine both of these options and the technologies
      associated with each solution. Several of the technologies described
      here can be used in combination with Oracle Application Server
      components to integrate data from several sources (for example, to
      facilitate document exchange). This combination of Oracle technology
      solutions is sometimes referred to as the Oracle Enterprise
      Integration framework.



      Grid computing introduces a new third solution to widely distributed
      data: a single database deployment model leveraging
      Oracle's Real Application Clusters and Application
      Server services. As might be expected, capabilities introduced
      earlier for distributed data play a part in this solution,
      particularly Oracle Streams, as described later in this chapter.








        Praise for Exploiting Software





        "Exploiting Software highlights the most critical part of the software
        quality problem. As it turns out, software quality problems are a
        major contributing factor to computer security problems.
        Increasingly, companies large and small depend on software to run
        their businesses every day. The current approach to software
        quality and security taken by software companies, system
        integrators, and internal development organizations is like driving
        a car on a rainy day with worn-out tires and no air bags. In both
        cases, the odds are that something bad is going to happen, and
        there is no protection for the occupant/owner.



        This book will help the reader understand how to
        make software quality part of the design—a key change from
        where we are today!"



        Tony Scott, Chief Technology Officer, IS&S, General Motors Corporation







        "It's about time someone wrote a book to teach
        the good guys what the bad guys already know. As the computer
        security industry matures, books like Exploiting Software have a critical role
        to play."



        Bruce Schneier, Chief Technology Officer, Counterpane; author of Beyond Fear and Secrets and Lies







        "Exploiting Software cuts to the heart of the computer security problem,
        showing why broken software presents a clear and present danger.
        Getting past the 'worm of the day' phenomenon requires that someone
        other than the bad guys understands how software is attacked.



        This book is a wake-up call for computer
        security."



        Elinor Mills Abreu, Reuters correspondent







        "Police investigators study how criminals think
        and act. Military strategists learn about the enemy's tactics, as
        well as their weapons and personnel capabilities. Similarly,
        information security professionals need to study their criminals
        and enemies, so we can tell the difference between popguns and
        weapons of mass destruction. This book is a significant advance in
        helping the 'white hats' understand how the 'black hats'
        operate.



        Through extensive examples and 'attack
        patterns,' this book helps the reader understand how attackers
        analyze software and use the results of the analysis to attack
        systems. Hoglund and McGraw explain not only how hackers attack
        servers, but also how malicious server operators can attack clients
        (and how each can protect themselves from the other). An excellent
        book for practicing security engineers, and an ideal book for an
        undergraduate class in software security."



        Jeremy Epstein, Director, Product Security & Performance, webMethods, Inc.







        "A provocative and revealing book from two
        leading security experts and world class software exploiters, Exploiting Software enters the mind of
        the cleverest and wickedest crackers and shows you how they think.
        It illustrates general principles for breaking software, and
        provides you a whirlwind tour of techniques for finding and
        exploiting software vulnerabilities, along with detailed examples
        from real software exploits.




        Exploiting Software is essential reading for anyone responsible for
        placing software in a hostile environment—that is, everyone
        who writes or installs programs that run on the Internet."



        Dave Evans, Ph.D., Associate Professor of Computer Science, University of Virginia







        "The root causes of most of today's Internet hacker exploits and
        malicious software outbreaks are buggy software and faulty security
        software deployment. In Exploiting Software, Greg Hoglund and
        Gary McGraw help us in an interesting and provocative way to better
        defend ourselves against malicious hacker attacks on those software
        loopholes.



        The information in this book is an essential
        reference that needs to be understood, digested, and aggressively
        addressed by IT and information security professionals
        everywhere."



        Ken Cutler, CISSP, CISA, Vice President, Curriculum Development & Professional Services, MIS Training Institute







        "This book describes the threats to software in
        concrete, understandable, and frightening detail. It also discusses
        how to find these problems before the bad folks do. A valuable
        addition to every programmer's and security person's library!"



        Matt Bishop, Ph.D., Professor of Computer Science, University of California at Davis; author of Computer Security: Art and Science







        "Whether we slept through software engineering
        classes or paid attention, those of us who build things remain
        responsible for achieving meaningful and measurable vulnerability
        reductions. If you can't afford to stop all software manufacturing
        to teach your engineers how to build secure software from the
        ground up, you should at least increase awareness in your
        organization by demanding that they read Exploiting Software. This book clearly
        demonstrates what happens to broken software in the wild."



        Ron Moritz,
        CISSP Senior Vice President, Chief Security Strategist Computer
        Associates







        "Exploiting Software is the most up-to-date technical treatment of
        software security I have seen. If you worry about software and
        application vulnerability, Exploiting Software is a must-read. This book gets at all the timely
        and important issues surrounding software security in a technical,
        but still highly readable and engaging, way.



        Hoglund and McGraw have done an excellent job of
        picking out the major ideas in software exploit and nicely
        organizing them to make sense of the software security jungle."



        George Cybenko, Ph.D., Dorothy and Walter Gramm Professor of Engineering, Dartmouth; Founding Editor-in-Chief, IEEE Security and Privacy







        "This is a seductive book. It starts with a
        simple story, telling about hacks and cracks. It draws you in with
        anecdotes, but builds from there. In a few chapters you find
        yourself deep in the intimate details of software security. It is
        the rare technical book that is a readable and enjoyable primer but
        has the substance to remain on your shelf as a reference. Wonderful
        stuff."



        Craig Miller, Ph.D., Chief Technology Officer for North America, Dimension Data







        "It's hard to protect yourself if you don't know
        what you're up against. This book has the details you need to know
        about how attackers find software holes and exploit
        them—details that will help you secure your own systems."



        Ed Felten, Ph.D., Professor of Computer Science, Princeton University

















































        PBX Lease Returns



        At the time of the implementation, 22 of the 55 Cisco Expansion Port Network (EPN) PBXs were leased, which meant that the IP Telephony implementation schedule was largely dictated by the PBX lease return dates. To keep the massive effort of returning the large quantity of leased equipment organized and on schedule, the team leader responsible for the retrofit cleanup effort entered all the Cisco PBX leases into a spreadsheet and developed a Microsoft Project plan to keep the returns on track. The initiative involved returning each leased PBX, along with its ancillary parts, throughout the San Jose campus.



        In 2003, the two main leased PBXs were removed and fully disconnected. Chapter 7, "Moving Forward: Continuing to Be Cisco's First and Best Customer," outlines that process in more detail.



        "Getting the equipment out of the buildings was the easy part," says Reid Bourdet, SPM IT project manager and team lead who was responsible for returning all the Cisco leased legacy PBX equipment. Like a lot of large enterprises, Cisco had taken each lease agreement and allocated that lease to various buildings. "Because of the sheer size of the deployment, we had to pull all of the equipment back together to rebuild the original lease, ensuring that it was all there, and matched the original equipment list before we returned it," Bourdet recalls.



        The other challenge was to ensure that when the PBX deinstallations were conducted, steps were taken to prevent creating alarms from the equipment that remained. Cisco developed a procedure that entailed removing the software, removing all the trunks (the lines coming into the PBX), and then removing the cabinet from the CPU. "You have to tell the CPU that the cabinets are no longer there. If you don't, the CPU is always looking for them and will alarm the system," Bourdet cautions.



        Each PBX hardware deinstallation took an average of one working day, whereas the software removal required an additional three to four hours. Staffing involved four technicians who were familiar with the PBX network and knowledgeable on trunking technology. An additional telecom administrator removed the phone sets from the software configuration. "It's really important that the individuals doing the actual deinstalls are qualified and familiar with this type of network," Bourdet said. "We were fortunate enough to have the necessary resources on staff. If we didn't, we would have outsourced that part of the initiative to PBX-certified individuals."



        Although the equipment disconnection and retrieval went smoothly for the team, they did experience a problem reconciling their equipment list with the vendor's. "We followed the lessor's instructions to the letter, but we still ended up with disagreements about the quantities of equipment that we returned," Bourdet says. The leasing company did not inventory the equipment when it arrived at the facility, instead turning it over to a secondary market vendor. "The secondary vendor then either miscounted it, or things got lost between here and there, because our records and their records did not match. If I had it to do over, I would have gone an additional step that included a box level inventory of the equipment, rather than a consolidated list," Bourdet adds.





        Results of PBX Lease Return Initiative



        STATUS:



        • 99.9 percent of all leased equipment up for renewal was returned.

        • More than U.S. $3.5 million (market value) of equipment was returned, including 22 PBX EPNs and 10,000 phones.

        • U.S. $128,888 per month was saved in leased equipment cost.

        • All final leftover equipment was inventoried and identified for lab testing use or reselling opportunities.



        RESULTS:



        • The San Jose campus is completely retrofitted and is now 100 percent IP Telephony-ready.






        Best Practices: PBX Lease Returns



        • Enter all PBX lease renewal dates and associated equipment onto a spreadsheet for tracking purposes, and build a project plan that schedules the deinstallations and returns.

        • Develop a process that prevents alarms when removing cabinets from the PBX.

        • Ensure that only PBX-certified technicians are involved in the deinstallation.

        • Carefully match the equipment list on the original lease agreement to the inventory being returned, create a box-level inventory list, and get a signed receiving list from the vendor.














          35.1 NEED FOR CTI



          CTI technology is finding applications in every business sector, for the following reasons:




          • Throughout the world, the use of telephones is very high. In most developed countries, the telephone density (the number of telephones per 100 people) is anywhere between 50 and 90. In developing countries it is lower, between 2 and 20, but still higher than the number of computers.




          • Accessing information using a telephone is much easier than using a PC, mostly because voice is the most convenient mode of communication, and operating a telephone is much simpler than operating a PC.




          • With the advent of automation, much information is available on computers, whether in banks, libraries, transport agencies, service organizations, or other businesses, and some of this information needs to be provided to users and customers.




          • Throughout the world, manpower costs are rising. If the monotonous work of distributing routine information can be automated, organizations can save money and use their human resources for more productive work.




          • CTI provides the platform for unifying the various messaging systems. It benefits the user because it provides a single device for accessing different messaging systems such as e-mail, voice mail, and databases.




          For these reasons, CTI is emerging as one of the highest growth areas in IT. However, CTI is not a new technology; it is a combination of many existing technologies to provide innovative applications to users.










          The purpose of computer telephony integration (CTI) is to access information stored in computers through telephones. CTI is gaining practical importance due to the high number of telephones and availability of large amounts of information in computers.


















          Note 

          CTI finds applications in every business sector—banking, education, transportation, service organizations, entertainment, and others.




















          Chapter 6: Digital Encoding



          In a digital communication system, the first step is to convert the information into a bit stream of ones and zeros. Then the bit stream has to be represented as an electrical signal. In this chapter, we will study the various representations of the bit stream as an electrical signal.




          6.1 REQUIREMENTS FOR DIGITAL ENCODING


          Once the information is converted into a bit stream of ones and zeros, the next step is to convert the bit stream into its electrical representation. The electrical signal representation has to be chosen carefully for the following reasons:




          • The electrical representation decides the bandwidth requirement.




          • The electrical representation helps in clocking—identifying the beginning and end of each bit.




          • Error detection can be built into the signal representation.




          • Noise immunity can be increased by a good electrical representation.




          • The complexity of the decoder can be decreased.
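          To make these considerations concrete, here is a small sketch (ours, not part of this chapter's formal treatment) of two common line codes. The Manchester convention shown is the IEEE 802.3 one, in which a 1 is encoded as a low-to-high mid-bit transition:

```c
/* Sketch of two digital encoding schemes for a bit stream.
 * NRZ-L: 1 -> +1 level, 0 -> -1 level (one level per bit).
 * Manchester (IEEE 802.3 convention): each bit becomes two half-bit
 * levels with a guaranteed mid-bit transition; 1 -> low-to-high
 * (-1, +1), 0 -> high-to-low (+1, -1). The mid-bit transition gives
 * Manchester its self-clocking property, at the cost of roughly
 * twice the bandwidth of NRZ. */

void encode_nrz(const int *bits, int n, int *out)
{
    for (int i = 0; i < n; i++)
        out[i] = bits[i] ? +1 : -1;
}

void encode_manchester(const int *bits, int n, int *out)
{
    for (int i = 0; i < n; i++) {
        out[2 * i]     = bits[i] ? -1 : +1;  /* first half-bit  */
        out[2 * i + 1] = bits[i] ? +1 : -1;  /* second half-bit */
    }
}
```

          Comparing the two outputs for the same bit stream shows the trade-off directly: NRZ uses half as many signal elements (lower bandwidth) but can produce long runs without transitions, whereas Manchester transitions every bit time, simplifying clock recovery at the receiver.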












          The bit stream is encoded into an equivalent electrical signal using digital encoding schemes. The encoding scheme should be chosen keeping in view the bandwidth requirement, clocking, error detection capability, noise immunity, and complexity of the decoder.














          A variety of encoding schemes have been proposed that address all these issues. In all communication systems, the standards specify which encoding technique has to be used. In this chapter, we discuss the most widely used encoding schemes. We will refer to these encoding schemes throughout the book, and hence a good understanding of these is most important.



















          Example: Initialization And Use Of The Counters


          The following three diagrams and associated descriptions explain the initialization of HyperTransport buffer counts, followed by the actions taken by the transmitter and receiver as two packets are sent across the link. The diagrams have been simplified to show a single flow control buffer and the corresponding receiver and transmitter counters used to track available entries. In this example, assume the following:



          • The flow control buffer illustrated is the Posted Request Command (CMD) buffer.


          • The designer of the receiver interface has decided to construct this flow control buffer with a depth of five entries. Because this is a buffer for receiving requests, each entry in the buffer will hold up to 8 bytes (this covers the case of either four- or eight-byte request packets).


          • Following initialization, the transmitter wishes to send two Posted Request packets to the receiver.



          Basic Steps In Counter Initialization And Use









          1. At reset, the transmitter counters in each device are reset to 0. This prevents the initiation of any packet transfers until buffer depth has been established.



          2. At reset, the receiver interfaces load each of the RCV counters with a value that indicates how many entries its corresponding flow control buffer supports (shown as N in the diagram). This is necessary because the receiver is allowed to implement buffers of any depth.



          3. Each device then transmits its initial receiver buffer depth information to the other device using NOP packets. Each NOP packet can indicate a range of 0-3 entries. If the receiver buffer being reported is deeper than 3 entries, the device will send additional NOPs which carry the remainder of the count.



          4. As each device receives the initial NOP information, it updates its transmitter flow control counters, adding the value indicated in the NOP fields to the appropriate counter total.



          5. When a device has a non-zero value in the counter, it can send packets of the appropriate type across the link. Each time it sends packet(s), the device subtracts the number of packets sent from the current transmitter counter value. If the counter decrements to 0, the transmitter must wait for NOP updates before proceeding with any more packet transmission.
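          The five steps above can be sketched in code. This simulation is illustrative only; the type and function names are our own, not taken from the HyperTransport specification:

```c
/* Sketch of HyperTransport-style credit-based flow control for a
 * single buffer type. */

typedef struct {
    int tx_credits;   /* transmitter's count of free receiver entries */
    int rcv_new;      /* receiver's not-yet-reported freed entries    */
    int buffer_used;  /* entries currently occupied at the receiver   */
} FlowCtl;

/* Steps 1-2: at reset, TX counter is 0; RCV counter holds the depth. */
void fc_reset(FlowCtl *fc, int rcv_depth)
{
    fc->tx_credits = 0;
    fc->rcv_new = rcv_depth;
    fc->buffer_used = 0;
}

/* Steps 3-4: receiver reports credits via NOPs, at most 3 per NOP
 * (the field is 2 bits wide). Returns the number of NOPs it took. */
int fc_send_nops(FlowCtl *fc)
{
    int nops = 0;
    while (fc->rcv_new > 0) {
        int field = fc->rcv_new > 3 ? 3 : fc->rcv_new;
        fc->tx_credits += field;   /* transmitter adds the NOP value */
        fc->rcv_new -= field;
        nops++;
    }
    return nops;
}

/* Step 5: send only while credits remain; one credit per packet.
 * Returns 1 on success, 0 if the transmitter must wait for NOPs. */
int fc_try_send(FlowCtl *fc, int npackets)
{
    if (fc->tx_credits < npackets)
        return 0;
    fc->tx_credits -= npackets;
    fc->buffer_used += npackets;
    return 1;
}

/* Receiver frees entries after processing packets; these will be
 * reported back to the transmitter in later NOPs. */
void fc_process(FlowCtl *fc, int npackets)
{
    fc->buffer_used -= npackets;
    fc->rcv_new += npackets;
}
```

          Starting from a depth of five, the receiver needs two NOPs (3 + 2 credits) to report its buffers. After sending two packets the transmitter holds three credits, and it regains the rest only after the receiver frees entries and reports them in subsequent NOPs, mirroring the sequence described in the steps above.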



          Initializing The Flow Control Counter

          In Figure 5-4 on page 111, assume that the designer of Device 2 has implemented a Posted Request (CMD) buffer with a depth of 5 entries. After reset, it must convey this initial buffer availability to the transmitter in the other device before any Posted Request packets may be sent.


          Figure 5-4. Flow Control Counter Initialization



          The basic steps in flow control counter initialization are:









          1. The transmitter in Device 1 initializes its Posted Request (CMD) counter to 0 at reset (all transmit counters reset = 0). It then waits for the receiver on the other side to update this counter with the starting buffer depth available (this will be the maximum depth the receiver supports).



          2. Device 2 loads its receiver Posted Request counter with 5 (its maximum).



          3. Device 2 then sends two NOP packets which carry this buffer availability information: the first NOP has a 11b (3) in the Post CMD field (Byte 1, bits 0,1 above), and the second NOP has a 10b (2) in this field. Total = 5.



          4. Upon receipt of these two NOPs, Device 1 updates its transmit counter, first by three and then by two. It now has 5 "credits" available for sending Posted Request packets, representing five separate Posted Requests which may be initiated.



          5. Having sent the NOPs, Device 2's RCV counter is now at 0, and it will remain that way until additional packets are received, processed, and moved out of the buffer, thereby freeing new entries.



          Note that this process will be repeated for each of the six required flow control buffers; it will also be done for the six isochronous flow control buffers if they are supported. In the NOP packet format (see above), six transmit registers can be updated at once using the six fields provided. The Isoc bit (Byte 2, bit 5) would be set if the NOP update was to be applied to the isochronous flow control buffer set.



          Device 1 Sends Two Posted Request Packets

          Figure 5-5 on page 113 shows the Device 1 transmitter sending two Posted Request packets. Also illustrated is the state of the flow control registers after this has been done, but before the receiver has processed the incoming packets.


          Figure 5-5. Device 1 Sends Two Packets



          What the diagram shows:



          1. After the flow control counters are initialized at reset, the transmitter sends packets for which credits are available in the transmit counter, then subtracts the number of packets sent from the current total. In this case, two credits were subtracted from the starting count of 5. Maintaining its own counter ensures that the transmitter will never send packets which can't be accepted, even if there is a considerable lag in NOP updates from the receiver.


          2. The receiver will not update its RCV counter or indicate new entries in its NOP packets until it moves a packet out of the flow control buffer.




          New Entries Available: Update Flow Control Information

          The last diagram, Figure 5-6 on page 114, shows the updating of flow control information after the receiver has processed the two packets it received previously. These could have been consumed internally, or been forwarded to another link if Device 2 is a bridge or tunnel.


          Figure 5-6. Device 2 Updates Flow Control Information



          What has happened:



          1. When Device 2 moves packets out of its buffer, it increments its receive counter to reflect the new availability (2) and sends this information to Device 1 with a NOP update packet carrying the value 2 in the PostCMD field. After sending the update NOP, Device 2 clears its receive counter.


          2. After sending packets and subtracting credits from its transmit counter, Device 1 will parse incoming NOPs for updates from the receiver enabling it to bump credits in one or more of its counters. In this case, Device 1 bumps its transmit counter by 2 upon receiving the NOP update. It again has 5 credits to use for sending Posted Request command packets.



          This dynamic updating of flow control information happens continuously during idle times on the bus. If the receiver has no change in buffer entry availability to report, the NOPs it sends will have all six fields in the NOP packet cleared.









            5.6 Security Considerations



            Although not specifically a security consideration, the EJB programming model constrains an enterprise bean to a limited set of resources. Specifically, in a J2SE environment, an enterprise bean is prohibited from using many Java resources as a result of EJB's isolation architecture. This architecture scales well in a multiprocess and multithreaded WAS environment. Each enterprise bean needs to be designed to run by itself and not interfere with other enterprise beans. For example, enterprise beans must not include native method calls, as native code could adversely affect other enterprise beans, as well as be nonportable. This also means that enterprise beans must not write to static fields or attempt to share state with other enterprise beans except via remote method calls or via resource managers, such as JDBC and JMS. Note that because the EJB container is middleware and acts as a broker, it can make native calls and use restricted Java resources.



            Given the isolation architecture of EJB components, each enterprise bean needs to interact with other enterprise beans via remote method calls. As the interaction will be location independent and may cross machine boundaries, it is important to define and enforce a secure association mechanism between EJB components. The channel of communication should provide data confidentiality and integrity. Using SSL communication between the EJB containers may address these requirements. It is also important for the security context associated with the client enterprise bean to be passed on to the target enterprise bean so that the identity of the client gets propagated through the invocation. A mechanism that achieves secure association between the client and the target EJB components needs to address the quality-of-service requirements.
















              Chapter 7. Modeling Ordered Interactions: Sequence Diagrams












              Use cases allow your model to describe what your system must be able to do; classes allow your model to describe the different types of parts that make up your system's structure. There's one large piece missing from this jigsaw: with use cases and classes alone, you can't yet model how your system is actually going to do its job. This is where interaction diagrams, and specifically sequence diagrams, come into play.


              Sequence diagrams are an important member of the group known as interaction diagrams. Interaction diagrams model important runtime interactions between the parts that make up your system and form part of the logical view of your model, shown in Figure 7-1.



              Figure 7-1. The Logical View of your model contains the abstract descriptions of your system's parts, including the interactions between those parts



              Sequence diagrams are not alone in this group; they work alongside communication diagrams (see Chapter 8) and timing diagrams (see Chapter 9) to help you accurately model how the parts that make up your system interact.


              Sequence diagrams are the most popular of the three interaction diagram types. This could be because they show the right sorts of information or simply because they tend to make sense to people new to UML.



              Sequence diagrams are all about capturing the order of interactions between parts of your system. Using a sequence diagram, you can describe which interactions will be triggered when a particular use case is executed and in what order those interactions will occur. Sequence diagrams show plenty of other information about an interaction, but their forte is the simple and effective way in which they communicate the order of events within an interaction.












              References


















              Boehm, Barry (1988). "A Spiral Model of Software Development and Enhancement." IEEE Computer, 21(5): 61–72.



              Cantor, Murray R. (1998). Object-Oriented Project Management with UML, 1st ed. New York, NY: John Wiley & Sons, Inc.



              Department of the Air Force, Software Technology Support Center (1996). "Guidelines for Successful Acquisition and Management of Software Intensive Systems." Version 2.0, June.



              Deutsch, Michael S., and Ronald R. Willis (1988). Software Quality Engineering: A Total Technical and Management Approach, 1st ed. Englewood Cliffs, NJ: Prentice Hall PTR/Sun Microsystems Press.



              Graham, Ian (1995). Migrating to Object Technology, 1st ed. Reading, MA: Addison-Wesley.



              IEEE 1074-1997 (1998). "IEEE Standard for Developing Software Life Cycle Processes." New York, NY: The Institute of Electrical and Electronics Engineers.



              Martin, James (1991). Rapid Application Development, 1st ed. New York, NY: Macmillan.



              McConnell, Steve (1996). Rapid Development: Taming Wild Software Schedules, 1st ed. Redmond, WA: Microsoft Press.



              Pressman, Roger S. (1993). "Understanding Software Engineering Practices: Required at SEI Level 2 Process Maturity." Software Engineering Training Series, Software Engineering Process Group, July 30.



              Pressman, Roger S. (2001). Software Engineering: A Practitioner's Approach, 5th ed. Boston, MA: McGraw-Hill.



              Royce, W.W. (1970). "Managing the Development of Large Software Systems: Concepts and Techniques." Proceedings WESCON, August. Los Alamitos, CA.



              DeSantis, Richard, John Blyskal, Assad Moini, and Mark Tappan (1997). SPC-97057 CMC Version 01.00.04. Herndon, VA: Software Productivity Consortium. www.software.org/pub/darpa/erd/erdpv010004.html.

















                Chapter 5. Hashes












                Hashes and arrays are the two basic "aggregate" data types supported by most modern programming languages. The basic interface of a hash is similar to that of an array. The difference is that while an array stores items according to a numeric index, the index of a hash can be any object at all.


                Arrays and strings have been built into programming languages for decades, but built-in hashes are a relatively recent development. Now that they're around, it's hard to live without them: they're at least as useful as arrays.


                You can create a Hash by calling Hash.new or by using one of the special syntaxes Hash[] or {}. With the Hash[] syntax, you pass in the initial elements as comma-separated object references. With the {} syntax, you pass in the initial contents as comma-separated key-value pairs.



                empty = Hash.new # => {}
                empty = {} # => {}
                numbers = { 'two' => 2, 'eight' => 8 } # => {"two"=>2, "eight"=>8}
                numbers = Hash['two', 2, 'eight', 8] # => {"two"=>2, "eight"=>8}
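As an aside not covered above, Hash.new also accepts a default value or a default block, which determines what a lookup returns when the key is missing. This is standard Ruby behavior, shown here as a supplementary example:

```ruby
# A default value is returned (but not stored) for missing keys.
counts = Hash.new(0)
counts["apple"] += 1
counts["apple"] += 1
counts["apple"]   # => 2
counts["pear"]    # => 0 (the default; "pear" is still not a key)
counts.keys       # => ["apple"]

# The block form computes defaults lazily; here it also stores them.
squares = Hash.new { |hash, key| hash[key] = key * key }
squares[4]        # => 16
```

The counter idiom with Hash.new(0) is a common alternative to checking for a key's existence before incrementing.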



                Once the hash is created, you can do hash lookups and element assignments using the same syntax you would use to view and modify array elements:



                numbers["two"] # => 2
                numbers["ten"] = 10 # => 10
                numbers # => {"two"=>2, "eight"=>8, "ten"=>10}



                You can get an array containing the keys or values of a hash with Hash#keys or Hash#values. You can get the entire hash as an array with Hash#to_a:



                numbers.keys # => ["two", "eight", "ten"]
                numbers.values # => [2, 8, 10]
                numbers.to_a # => [["two", 2], ["eight", 8], ["ten", 10]]



                Like an array, a hash contains references to objects, not copies of them. Modifications to the original objects will affect all references to them:



                motto = "Don't tread on me"
                flag = { :motto => motto,
                :picture => "rattlesnake.png"}
                motto.upcase!
                flag[:motto] # => "DON'T TREAD ON ME"



                The defining feature of an array is its ordering. Each element of an array is assigned a Fixnum object as its key. The keys start from zero and there can never be gaps. In contrast, a hash has no natural ordering, since its keys can be any objects at all. This feature makes hashes useful for storing lightly structured data or key-value pairs.
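Since keys can be any objects at all, a single hash can freely mix key types. A small illustrative snippet (the data here is invented):

```ruby
# Keys of different classes can coexist in one hash.
mixed = {
  "a string" => 1,
  :a_symbol  => 2,
  3          => "an integer key",
  [1, 2]     => "even an array"
}
mixed[[1, 2]]   # => "even an array"
mixed[3]        # => "an integer key"
```

Each key is located by its hash code, so lookups work the same way regardless of the key's class.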


                Consider some simple data for a person in an address book. For a side-by-side comparison I'll represent identical data as an array, then as a hash:



                a = ["Maury", "Momento", "123 Elm St.", "West Covina", "CA"]
                h = { :first_name => "Maury",
                :last_name => "Momento",
                :address => "123 Elm St.",
                :city => "West Covina",
                :state => "CA" }



                The array version is more concise, and if you know the numeric index, you can retrieve any element from it in constant time. The problem is knowing the index, and knowing what it means. Other than inspecting the records, there's no way to know whether the element at index 1 is a last name or a first name. Worse, if the array format changes to add an apartment number between the street address and city, all code that uses a[3] or a[4] will need to have its index changed.


                The hash version doesn't have these problems. The last name will always be at :last_name, and it's easy (for a human, anyway) to know what :last_name means. Most of the time, hash lookups take no longer than array lookups.


                The main advantage of a hash is that it's often easier to find what you're looking for. Checking whether an array contains a certain value might require scanning the entire array. To see whether a hash contains a value for a certain key, you only need to look up that key. The set library (as seen in the previous chapter) exploits this behavior to implement a class that looks like an array, but has the performance characteristics of a hash.


                The downside of using a hash is that since it has no natural ordering, it can't be sorted except by turning it into an array first. There's also no guarantee of order when you iterate over a hash. Here's a contrasting case, in which an array is obviously the right choice:



                a = [1, 4, 9, 16]
                h = { :one_squared => 1, :two_squared => 4, :three_squared => 9,
                :four_squared => 16 }



                In this case, there's a numeric order to the entries, and giving them additional labels distracts more than it helps.
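As mentioned above, sorting a hash means turning it into an array first; Hash#sort_by does exactly that, returning an array of key-value pairs rather than a hash. A small illustrative example (the data is invented):

```ruby
populations = { "Tokyo" => 37, "Delhi" => 32, "Shanghai" => 29 }

# Sort by key: yields an array of [key, value] pairs, not a hash.
by_name = populations.sort_by { |city, pop| city }
# => [["Delhi", 32], ["Shanghai", 29], ["Tokyo", 37]]

# Sort by value, descending.
by_size = populations.sort_by { |city, pop| -pop }
# => [["Tokyo", 37], ["Delhi", 32], ["Shanghai", 29]]
```

If you need the result as a hash again, you must rebuild one from the sorted pairs; the sort itself always produces an array.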


                A hash in Ruby is actually implemented as an array. When you look up a key in a hash (either to see what's associated with that key, or to associate a value with the key), Ruby calculates the hash code of the key by calling its hash method. The result is used as a numeric index in the array. Recipe 5.5 will help you with the most common problem related to hash codes.


                The performance of a hash depends a lot on the fact that it's very rare for two objects to have the same hash code. If all objects in a hash had the same hash code, a hash would be much slower than an array. Code like this would be a very bad idea:



                class BadIdea
                def hash
                100
                end
                end



                Except for strings and other built-in objects, most objects have a hash code equivalent to their internal object ID. As seen above, you can override Object#hash to change this, but the only time you should need to do this is if your class also overrides Object#==. If two objects are considered equal, they should also have the same hash code; otherwise, they will behave strangely when you put them into hashes. Code like the fragment below is a very good idea:



                class StringHolder
                attr_reader :string
                def initialize(s)
                @string = s
                end

                def ==(other)
                @string == other.string
                end

                def hash
                @string.hash
                end
                end
                a = StringHolder.new("The same string.")
                b = StringHolder.new("The same string.")
                a == b # => true
                a.hash # => -1007666862
                b.hash # => -1007666862













                22.3 Using Ranges (of Any Kind)


















                As discussed throughout this book, ranges are the most accurate way to reflect the inherent inaccuracy in estimates at various points in the Cone of Uncertainty. You can combine ranges with the other techniques described in this chapter (that is, ranges of coarse time periods, using ranges for a risk-quantified estimate instead of plus-or-minus qualifiers, and so on).


                When you present an estimate as a range, consider the following questions:





                • What level of probability should your range include? Should it include ±1 standard deviation (68% of possible outcomes), or does the range need to be wider?





                • How do your company's budgeting and reporting processes deal with ranges? Be aware that companies' budgeting and reporting processes often won't accept ranges. Ranges are often simplified for reasons that have little to do with software estimation, such as "The company budgeting spreadsheet won't allow me to enter a range." Be sensitive to the restrictions your manager is working under.





                • Can you live with the midpoint of the range? Occasionally, a manager will simplify a range by publishing the low end of the range. More often, managers will average the high and low ends and use that if they are not allowed to use a range.





                • Should you present the full range or only the part of the range from the nominal estimate to the top end of the range? Projects rarely become smaller over time, and estimates tend to err on the low side. Do you really need to present the low end to high end of your estimate, or should you present only the part of the range from the nominal estimate to the high end?





                • Can you combine the use of ranges with other techniques? You might want to consider presenting your estimate as a range and then listing assumptions or quantified risks.
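One illustrative way to derive the expected case and a ±1 standard deviation range numerically is the PERT approximation. Note that this formula is an assumption of this sketch, brought in for illustration; it is not prescribed by the text above:

```ruby
# PERT-style approximation (an assumed technique, not from the text):
# expected case and standard deviation derived from best-case,
# most-likely, and worst-case estimates.
def pert(best, likely, worst)
  expected = (best + 4.0 * likely + worst) / 6.0
  std_dev  = (worst - best) / 6.0
  {
    expected: expected,
    std_dev:  std_dev,
    # Roughly 68% of outcomes fall within +/- 1 standard deviation.
    range_68: [expected - std_dev, expected + std_dev]
  }
end

e = pert(10, 15, 26)  # estimates in staff-weeks (invented numbers)
e[:expected]          # => 16.0
e[:range_68]          # roughly [13.3, 18.7] staff-weeks
```

A range computed this way makes the probability level explicit, which addresses the first question in the list above.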








                Tip #108 

                Use an estimate presentation style that reinforces the message you want to communicate about your estimate's accuracy.





                Usefulness of Estimates Presented as Ranges


                Project stakeholders might think that presenting an estimate as a wide range makes the estimate useless. What's really happening is that presentation of the estimate as a wide range accurately conveys the fact that the estimate is useless! It isn't the presentation that makes the estimate useless; it's the uncertainty in the estimate itself. You can't remove the uncertainty from an estimate by presenting it without its uncertainty. You can only ignore the uncertainty, and that's to everyone's detriment.



                The two largest professional societies for software developers—the IEEE Computer Society and the Association for Computing Machinery—have jointly decided that software developers have a professional responsibility to include uncertainty in their estimates. Item 3.09 in the IEEE-CS/ACM Software Engineering Code of Ethics reads as follows:




                Software engineers shall ensure that their products and related modifications meet the highest professional standards possible. In particular, software engineers shall, as appropriate:



                3.09 Ensure realistic quantitative estimates of cost, scheduling, personnel, quality and outcomes on any project on which they work or propose to work and provide an uncertainty assessment of these estimates. [emphasis added]



                Including uncertainty in your estimates isn't just a nicety, in other words. It's part of a software professional's ethical responsibility.





                Ranges and Commitments


                Sometimes, when stakeholders push back on an estimation range, they're really pushing back on including a range in the commitment. In that case, you can present a wide estimation range and recommend that too much variability still exists in the estimate to support a meaningful commitment.


                After you've reduced uncertainty enough to support a commitment, ranges are generally not an appropriate way to express the commitment. An estimation range illustrates what the nature of the commitment is—more or less risky—but the commitment itself should normally be expressed as a single-point number.






                Tip #109 

                Don't try to express a commitment as a range. A commitment needs to be specific.
















                Retrofit Cleanup



















                When the implementation team began the conversion to IP Telephony, an IP call center solution was not yet in place. The decision was made to remove as many lines off the PBX as possible, conduct a partial retrofit, and convert everyone except for call center agents, their backups, and any business-critical analog lines. The team conducted a final cleanup at the end of the conversion to ensure that it would have ample time to carefully review all analog lines housed in the same buildings as call center agents.



                It would have been detrimental to the implementation team if we had accidentally removed a business-critical phone line used by our call center team. Because we were careful in our decision to remove only those lines that were traced and identified, we are proud of the fact that during the course of the year, we did not bring down any call center agents or their designated analog lines during the retrofit.



                After the retrofit was over and the dust had settled, the cleanup phase began. "We had to examine each situation separately and make the decision, for example, whether to pull out an analog line or replace it with CallManager," Bourdet says. "If a line was designated 'critical use,' we replaced it with an outside line." Other situations included engineering lab applications with call-in numbers for product demos and high-speed modem lines.



                The cleanup ensures that all lines removed and disconnected from the PBX are not business critical and provides ample time to carefully review all the "unidentified" analog lines and trace them in an attempt to find owners. By doing this, the implementation team was able to carefully remove nearly 17,000 ports from the PBX, 3000 of which were analog lines. Most enterprise companies like Cisco have thousands of lines that, through the years, have either been forgotten about or are simply unused. By being mindful of this extra step, the Cisco cleanup effort eliminated thousands of unused lines and resulted in annual cost savings of up to U.S. $42,000.



                TIP





                Keep your original cutsheets so that you have a working list of the various lines that you need to review again during the cleanup phase. Maintaining this log saves you time during the cleanup process.




                TIP





                Seldom is there the opportunity or time to trace every unverified or questionable line on your network. Use the cleanup phase to ensure that every line going onto your network is viable, and disconnect those lines that are no longer in use. Note all untraced lines on your cutsheets for future reference.




                Because of the accelerated pace of the project, a guest phone might be functioning incorrectly, or a legacy phone or missed wall bracket might get lost in the shuffle. The cleanup effort provides an opportunity to walk the floor once more to ensure a clean 100 percent campus conversion.



                "One of the things that made it much easier for us was that the implementation team gave us a printout of what was remaining in the PBX," Bourdet says. "We started working with that and then went through a period of discovery with the users to determine the best course of action." Paying close attention to the applications that were left running through the PBX and working with users to determine what their needs were allowed the cleanup team to tailor a solution that fit the needs of the user and of the organization.



                Using the same individuals for the cleanup effort as for the PBX lease returns, the team conducted the cleanup building by building, concentrating on Cisco-owned equipment only after the lease return efforts were completed. As of April 2001, San Jose still had more than 22,000 ports remaining on the PBX. One year later, those ports were reduced to less than 2000. Today, both G3R PBXs have been removed, and not a single PBX port remains active. Chapter 7 provides a complete review of the PBX removal process and the final cleanup effort.
















                  Application-Originated Drivers














































                  The monitor has its own set of drivers for serial ports and Ethernet. Chapter 9 discussed MicroMonitor's Ethernet features. As you have seen, some of the functionality of the serial port connected to the console is made available to the application (through mon_printf() and mon_putchar(), for instance). Recall that all of MicroMonitor's drivers are polled, so high-performance applications probably need to override the monitor and establish new, probably interrupt-driven, drivers. Establishing new drivers is perfectly acceptable because the monitor imposes no kernel-level/user-level restriction. Hence, interfacing to the device directly is fine. The driver interface depends on RTOS and hardware details beyond the scope of this discussion. The point is that the application is not forced to use the facilities provided by MicroMonitor (refer to Figure 3.1, page 67, for a better view of this).






































                  The Elements of a Linux Firewall























                  This section details the following primary goals of a firewall, and the next section describes how to customize a Red Hat Linux system to meet these goals and act as a firewall:




                  • Minimizing exposure




                  • Locking out the bad guys




                  • Masquerading




                  • Servicing the network






                  Minimizing Exposure


                  The first priority of a firewall must be to protect itself. After all, if your firewall is compromised, your entire network is exposed. Thus, a good firewall always has minimal exposure to attack.


                  Being "exposed" to an attack means the machine is running software that has a vulnerability that can be exploited to compromise the system. Typically, an all-too-common bug known as a buffer overflow allows an attacker to overload the program with carefully crafted input that causes the program to give the attacker a command-line shell. If the program was running as the root user, then this shell will be running with root privileges, meaning that the attacker has full permissions on the system! (Other types of attacks exist, such as denial-of-service attacks, for which there is little defense without the cooperation of your ISP, and even simple bugs that allow attackers to gain access they're not supposed to have.) If a program with a buffer overflow vulnerability is a network program, such as an FTP or web server, then the bug can be exploited by a user on a remote system, making it a remote exploit. Remote exploits are the holy grail of system crackers, since they allow attackers to compromise a system without ever even seeing it.


                  All programs have bugs, and so any server program can have a remote buffer overflow exploit. Even if no exploits are known for a given program, that does not mean that none exist; it simply means that none are known yet! In other words, any time you have a program running on a system, you are exposing the system to any remote exploits that may exist in that program. It's a risk vs. reward tradeoff: you tolerate the risk of running a program because its usefulness outweighs the risk.


                  Thus, any server system should run as few services as possible, since unused services are adding risk but no value. This is especially true of a firewall system, since the potential impact of having the system compromised is so great; it makes the risk less acceptable. Thus, a firewall should have as few services running as possible. The firewall as presented in this book runs no services whatsoever that are accessible to the outside network, and runs only a few that are required for the inside network. If you probe this configuration from the outside with a port scanning program (such as the nmap program mentioned later in this chapter), then you won't find anything, and in fact the system may appear to not even be running!


                  Taking this thought to its logical conclusion, you should uninstall any software you don't use. This way it doesn't take up disk space, and you're guaranteed it isn't running! Additionally, the smaller the installation footprint the better, because tools such as Tripwire (mentioned later in this chapter) are easier to manage with smaller installations. For this case study, you should instruct the system to install only the packages you actually need and omit the ones you do not; there are more details on this later in the section on "Slimming Down the Package List."






                  Locking Out the Bad Guys


                  If the firewall's first job is to protect itself, then its second job (and ultimate goal!) is to protect the network. This essentially means preventing external users from accessing the internal network at the Internet Protocol (IP) packet level. That is, the firewall simply rejects unauthorized packets bound for the internal network.


                  This is typically accomplished by configuring the firewall with a set of rules that are consulted for each incoming packet. Each rule consists of a pattern used to identify a packet, and an action to take when a packet matches that pattern. Usually, the pattern can contain things like the source or destination IP addresses, the Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) port of the packet, the state of the TCP/IP connection, and so on; almost any aspect of the TCP/IP headers can be used in the pattern. The actions typically include accepting the packet, rejecting the packet (by sending an appropriate rejection packet back to the sender), or ignoring the packet entirely. A firewall's configuration is simply a set of rules that implement the policy defined by the network administrators (in this case, by you)!


                  Clearly, these rulesets are where the real work gets done in a firewall. If there is a mistake in the ruleset, then the network could be exposed. Additionally, sometimes patterns and actions can be pretty subtle, making them tricky to work with. However, there is a general rule of thumb most security experts use: First deny everything, then allow only what you explicitly want.


                  Really, this is just common sense: If your house has 200 doors but you only use 2, then you really want to just lock all the doors, and only unlock the ones you need when you need to. In the case of your network, the ports, IP addresses, and so on are the doors, so you should create a default firewall rule that rejects all access to the network, and then only allow specific access to the specific ports and IP addresses you need. The configuration script in Listing 16-1 presented later in the section "Creating the Startup Scripts" has more details.
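To make the "deny everything, then allow" rule of thumb concrete, here is a small Ruby sketch that generates an illustrative iptables ruleset. The interface name (eth0) and the allowed ports are assumptions chosen for this example; this is not the book's actual Listing 16-1:

```ruby
# Generates an illustrative default-deny iptables ruleset.
# "eth0" and the allowed ports are assumed values for this sketch.
ALLOWED_TCP_PORTS = [22, 80]

rules = []
# First deny everything: default policies drop all inbound/forwarded traffic.
rules << "iptables -P INPUT DROP"
rules << "iptables -P FORWARD DROP"
rules << "iptables -P OUTPUT ACCEPT"
# Allow replies to connections the firewall itself initiated.
rules << "iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT"
# Then unlock only the specific doors we actually use.
ALLOWED_TCP_PORTS.each do |port|
  rules << "iptables -A INPUT -i eth0 -p tcp --dport #{port} -j ACCEPT"
end

puts rules
```

The ordering matters: the DROP policies establish the locked-doors baseline, and each ACCEPT rule is an explicit, deliberate exception to it.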


                  This will all become clearer later as you read about the script that configures the actual firewall ruleset, discussed in the section "Presenting an iptables Script."






                  Masquerading


                  Unfortunately, we're not talking about a Victorian-era costume party; no "paper faces on parade" here. Masquerading is a term sometimes used in the Linux community to refer to a special case of Network Address Translation (NAT). NAT is used to make firewalls, well, usable.


                  The problem with firewalls is that usually they work in both directions: They shut down internal users' access to the outside network just as effectively as they shut down external access to the internal network. This means that legitimate internal users have a hard time accessing Internet resources, such as web sites, File Transfer Protocol (FTP) sites, remote servers via the SSH protocol, and so on. NAT and the special case of masquerading are a way to restore most of the users' ability to access the network, while still retaining the security features provided by a firewall. You should view firewalls and NAT as separate concepts, even though they are implemented in Linux by the same functionality (known as the netfilter or iptables code).


                  Masquerading runs on the firewall, and simply takes all frames from internal machines and "mangles" them (which is actually the technical term) so that they appear to have originated from the firewall itself. That is, the firewall masquerades as the originator of the packet. The packet is then relayed to the destination, and any replies to it are unmangled before being returned to the original computer. From the perspective of the destination machine on the Internet, masqueraded packets appear to have originated from the firewall machine, whereas in reality they originated from one of a number of machines behind the firewall.


                  NAT is more sophisticated than simple masquerading. Essentially, NAT is full-featured packet mangling. Masquerading mangles a specific part of the packet for a specific purpose: it mangles the source IP address in order to avoid revealing the existence of the original PC. NAT, however, supports a wider variety of patterns that it can operate on and actions that it can take.


                  With NAT, you can do things like forward connections to port 80 (the HTTP port) to another internal web server, even though the firewall itself runs no such server. (This is sort of the opposite of Masquerading; this particular form of NAT is known as Destination NAT or DNAT.) Doing so allows the web server to remain safely behind the firewall, exposing only port 80. From outside the firewall, the firewall server itself appears to be the web server, even though that's not the case. Figure 16-1 compares the functioning of masquerading and the reverse or destination NAT just described.






                  Figure 16-1: DNAT vs. masquerading


                  In Figure 16-1, two boxes depict network packet transformations that are performed by the firewall. The lefthand box depicts Destination NAT (DNAT), in which incoming requests to a server on the protected network are actually received by the firewall, and forwarded transparently to the actual server. In this box, the solid lines represent actual packets as transmitted from hosts, whereas dotted lines represent packets after they have been transformed by the firewall. What this figure really boils down to is that each host—both inside and outside the firewall—"believes" that it is talking only to the firewall, rather than talking to each other through a middleman. The firewall handles all appropriate transformations to complete that illusion, with the hosts being none the wiser. All software on both hosts runs normally.


                  The righthand box depicts masquerading, also roughly known as Source NAT. This figure is the analog of the lefthand box, and depicts a client on the protected network attempting to access a server out on the Internet. In this case, the client actually believes that it really is communicating with the real Internet server, and has no idea that the firewall is even present. The firewall, meanwhile, transforms the client's original packets (shown as solid lines) to appear to have come from the firewall itself, so that the remote Internet server believes that it is talking to the firewall. (The transformed packets are depicted as dashed lines.) The server's return packets are then transformed again, this time to tweak the destination address so that they eventually make it back to the client that originally requested them.


                  Just remember this summary: in the DNAT case, both hosts think that they're talking directly to the firewall, and have no idea that the firewall is merely acting as a middleman. With masquerading, though, the internal protected client believes it is talking directly to the server, whereas the server believes it is talking to the firewall. In both cases, the firewall handles all the transformations that need to occur. Keep this notion in mind as you read the rest of this chapter; it helps to come back to it if the various firewall concepts get blurred in your mind.


                  The Linux kernel supports firewalls and NAT. Each of the major versions of the kernel has its own mechanism, though. In the 2.0 series, the kernel really only supported IP masquerading and limited firewalling. In the 2.2 series, an enhanced system known as ipchains was developed, and in 2.4 the ipchains functionality was further expanded and called iptables (also known as netfilter).


                  Red Hat Linux 7.3 uses kernels from the 2.4 series, and so this book describes the use of iptables rather than the earlier versions. However, Debian GNU/Linux and Slackware Linux use the older 2.2 series kernels by default; if you're using those systems, you'll have to read up on ipchains in order to adapt the content in this chapter. Fortunately, the differences are not huge, so by now this should not be too difficult for you. (If you do need information on ipchains, a good starting point would be Linux IPCHAINS-HOWTO at http://www.tldp.org/HOWTO/IPCHAINS-HOWTO.html.)
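To make the two transformations described above concrete, here is a minimal iptables sketch of DNAT and masquerading rules. The interface name eth0 and the internal server address 192.168.1.10 are assumptions for an example network, not values from any particular configuration; the full rule sets are developed later in this chapter.

```shell
# DNAT: incoming connections to the firewall's port 80 are
# rewritten so they are delivered to an internal web server
# (192.168.1.10 is an assumed example address)
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j DNAT --to-destination 192.168.1.10:80

# Masquerading (Source NAT): outbound packets from the protected
# network are rewritten to appear to come from the firewall itself
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```

Both rules live in the kernel's nat table: PREROUTING rules run before routing decisions (which is why DNAT must happen there), while POSTROUTING rules run just before packets leave the machine.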






                  Servicing the Network



                  The final task of a firewall is to provide some basic services to the other machines on the network. Generally, this involves running a Dynamic Host Configuration Protocol (DHCP) server so that other machines can obtain IP addresses, and a Domain Name Service (DNS) server so that the other machines have access to domain name lookups. Later in this chapter, you'll find discussions about how to set up these services on your firewall.


                  Some people might argue that services such as DNS and DHCP are not appropriate to install on the firewall computer itself, as doing so would both bog down the firewall with needless computation and make the firewall vulnerable to exploits that exist in the DNS and DHCP server software. Indeed, for many networks it might be more appropriate to have dedicated DHCP and DNS servers. However, this chapter is describing a firewall configuration for a network of modest size, and so it's probably not a big deal to run these services directly on the firewall.


                  Now that you understand the basics of a firewall and what one has to accomplish, the next section discusses the details of how to build a firewall from a Red Hat Linux 7.3 system.
























                  Configuring a Solaris DHCP Server
















                  The Solaris client (dhcpagent) and server (in.dhcpd) solution features backward compatibility with other methods already in use, particularly the Reverse Address Resolution Protocol (RARP) and static configurations. In addition, the address of any workstation's network interfaces can be changed after the system has been booted. The dhcpagent client for Solaris features caching and automated lease renewal and is fully integrated with IP configuration (ifconfig). The in.dhcpd server for Solaris can provide both primary and secondary DHCP services and is fully integrated with the NIS+ Network Information Service. The Solaris DHCP server has the ability to handle hundreds of concurrent requests and also has the ability to boot diskless clients. Multiple DHCP support is provided through the Network File System (NFS). Although we won't cover these advanced features in this chapter, it's worthwhile considering them when making a decision to use RARP or DHCP (or some other competing dynamic IP allocation method).


                  The main program used to configure DHCP under Solaris is /usr/sbin/dhcpconfig, which is a shell script that performs the entire configuration for you. Alternatively, you can use the dhtadm or pntadm applications to manage the DHCP configuration table (/var/dhcp/dhcptab). The dhcpconfig program is menu-based, making it easy to use. The first menu displayed when you start the program looks like this:


                  ***              DHCP Configuration              ***
                  Would you like to:
                  1) Configure DHCP Service
                  2) Configure BOOTP Relay Agent
                  3) Unconfigure DHCP or Relay Service
                  4) Exit
                  Choice:

                  The first menu option allows the DHCP service to be configured for initial use. If your system has never used DHCP before, you must start with this option. You will be asked to confirm DHCP startup options, such as the timeout periods made on lease offers (that is, between sending DHCPOFFER and receiving a DHCPREQUEST) and whether or not to support legacy BOOTP clients. You will also be asked about bootstrapping configuration, including the following settings:




                  • Time zone




                  • DNS server




                  • NIS server




                  • NIS+ server




                  • Default router




                  • Subnet mask




                  • Broadcast address




                  These settings can all be offered to the client as part of the DHCPOFFER message. The second menu option allows the DHCP server to act simply as a relay agent. After entering a list of BOOTP or DHCP servers to which requests can be forwarded, the relay agent should be operational. Finally, you may choose to unconfigure either the full DHCP service or the relay service, which will revert all configuration files.
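Behind the scenes, dhcpconfig stores settings such as these as macros in the dhcptab. A roughly equivalent manual step using dhtadm might look like the following sketch; the macro name and all addresses are assumptions for the example 192.65.34.0 network used later in this chapter.

```shell
# Add a dhcptab macro for the example network, bundling the subnet
# mask, broadcast address, default router, and DNS server that the
# server can hand out in its DHCPOFFER messages
dhtadm -A -m 192.65.34.0 \
    -d ':Subnet=255.255.255.0:Broadcst=192.65.34.255:Router=192.65.34.1:DNSserv=204.56.54.22:'
```

The symbol names (Subnet, Broadcst, Router, DNSserv) are standard dhcptab symbols; dhtadm -M modifies an existing macro and dhtadm -D deletes one.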


                  If you selected option 1, you will first be asked if you want to stop any current DHCP services:



                  Would you like to stop the DHCP service? (recommended) ([Y]/N)


                  Obviously, if you are supporting live clients, you should not shut down the service. This is why DHCP configuration needs to take place outside normal business hours, so that normal service is not disrupted. If you have ensured that no clients are depending on the in.dhcpd service, you can answer yes to this question and proceed. Next, you will be asked to identify the datastore for the DHCP database:


                  ### DHCP Service Configuration ###
                  ### Configure DHCP Database Type and Location ###
                  Enter datastore (files or nisplus) [nisplus]:

                  The default value is the NIS+ Network Information Service, covered in Chapter 27. However, if you are not using NIS+ to manage network information, you may choose the files option. If you choose the files option, you will need to identify the path to the DHCP datastore directory:


                  Enter absolute path to datastore directory [/var/dhcp]:

                  The default path is the /var/dhcp directory. However, if your /var partition is small or running low on space, and you have a large network to manage, you may wish to locate the datastore directory somewhere else. You will then be asked if you wish to enter any nondefault DHCP options:


                  Would you like to specify nondefault daemon options (Y/[N]):

                  Most users will choose the standard options. However, if you wish to enable additional facilities like BOOTP support, you will need to answer yes to this question. You will then be asked whether you want to have transaction logging enabled:


                  Do you want to enable transaction logging? (Y/[N]):Y

                  Transaction logs are very useful for debugging, but grow rapidly in size over time, especially on a busy network. The size of the file will also depend on the syslog level that you enable:



                  Which syslog local facility [0-7] do you wish to log to? [0]:


                  Next, you will be asked to enter expiry times for leases that have been offered to clients:



                  How long (in seconds) should the DHCP server keep outstanding OFFERs? [10]:


                  The default is 10 seconds, which is satisfactory for a fast network. However, if you are operating on a slow network or expect to be servicing slow clients (like 486 PCs and below), you may wish to increase the timeout. In addition, you can also specify that the dhcptab file be reread at a specified interval, which is useful only if you have made manual changes using dhtadm:



                  How often (in minutes) should the DHCP server rescan the dhcptab? [Never]:


                  If you wish to support BOOTP clients, you should indicate this at the next prompt:


                  Do you want to enable BOOTP compatibility mode? (Y/[N]):

                  After configuring these nondefault options, you will be asked to configure the standard DHCP options. The first option is the default lease time, which is specified in days:


                  Enter default DHCP lease policy (in days) [3]:

                  This value is largely subjective, although it can be estimated from the address congestion of your network. If you are only using an average 50 percent of the addresses on your network, then you can probably set this value to 7 days without concern. If you are at the 75 percent level, you may wish to use the default value of 3 days. If you are approaching saturation, you should select daily lease renewal.






                  Tip 

                  If the number of hosts exceeds the number of available IP addresses, you may need to enter a fractional value to ensure the most equitable distribution of addresses.
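The rule of thumb above can be sketched as a small shell helper. This is purely illustrative; the function name and thresholds are assumptions drawn from the guidance in the text, not part of any Solaris tool.

```shell
# Hypothetical helper: pick a lease policy (in days) from the
# percentage of network addresses currently in use, following the
# 50/75 percent rule of thumb described above
lease_days() {
    util=$1   # percent of addresses in use
    if [ "$util" -le 50 ]; then
        echo 7    # plenty of free addresses: a long lease is safe
    elif [ "$util" -le 75 ]; then
        echo 3    # moderate congestion: the default policy
    else
        echo 1    # approaching saturation: daily renewal
    fi
}

lease_days 40   # prints 7
lease_days 75   # prints 3
lease_days 90   # prints 1
```

Below 100 percent free-address headroom you trade renewal traffic against address churn; fractional values (per the tip above) push that trade further when hosts outnumber addresses.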



                  Most sites will wish to allow clients to renegotiate their existing leases:



                  Do you want to allow clients to renegotiate their leases? ([Y]/N):


                  However, just like a normal landlord, you may sometimes be compelled to reject requests for lease renewal, especially if your network is saturated. You must now enable DHCP support for at least one network for DHCP to operate:


                  Enable DHCP/BOOTP support of networks you select? ([Y]/N):

                  For an example local network of 192.65.34.0, you will be asked the following questions:



                  Configure BOOTP/DHCP on local LAN network: 192.65.34.0? ([Y]/N):


                  You should (of course!) answer yes if this is the network that you wish to configure DHCP for. Next, you will need to determine whether you wish DHCP to insert hostnames into the hosts file for you, based on the DHCP data:



                  Do you want hostnames generated and inserted in the files hosts table? (Y/[N]):


                  Most sites will use DNS or similar for name resolution, rather than the hosts file, so this option is not recommended. One situation where you may wish to generate hostnames is a terminal server or web server pool, where the hostnames are arbitrary and frequently change in number. In this case, you simply need to enter a sensible basename for the hostnames to be generated from:



                  What rootname do you want to use for generated names? [yourserver-]:


                  For a web server bank, you could use a descriptive name like 'www-.' Next, you will be asked to define the IP address range that you want the DHCP server to manage, beginning with the starting address:


                  Enter starting IP address [192.65.34.0]:

                  Next, you must specify the number of clients. In our Class C network, this will be 254:


                  Enter the number of clients you want to add (x < 65535):

                  Once you have defined the network that you wish to support, you're ready to start using DHCP. An alternative method for invoking dhcpconfig is from the command line, passing key parameters as arguments. For example, to set up a DHCP server for the domain paulwatters.com, with the DNS server 204.56.54.22 and a lease time of 14,400 seconds (4 hours), the following command would be used:



                  # dhcpconfig -D -r SUNWbinfiles -p /var/dhcp -l 14400 \
                  -d paulwatters.com -a 204.56.54.22 -h dns -y paulwatters.com


                  To unconfigure a DHCP server, the following command should be executed:


                  # dhcpconfig -U -f -x -h

                  This command removes host entries from the name service, the dhcptab, and the network tables.
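For finer-grained management than dhcpconfig offers, the pntadm utility manipulates the per-network address tables directly. The following sketch uses the example 192.65.34.0 network from above; the specific address added is an assumption for illustration.

```shell
# Create the per-network table for the example network, add a
# single managed address to it, then display the table contents
pntadm -C 192.65.34.0                  # create the network table
pntadm -A 192.65.34.10 192.65.34.0     # add one managed address
pntadm -P 192.65.34.0                  # print the table
```

Together, dhtadm (macros and options in the dhcptab) and pntadm (per-network address tables) cover the same ground as the interactive dhcpconfig session, but in a form suitable for scripting.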


                  An alternative to the dhcpconfig command is the dhcpmgr GUI interface, which performs the following operations:




                  • Configure DHCP




                  • Configure BOOTP




                  • Administer DHCP




                  • Administer BOOTP




                  • Administer DHCP addresses and macros




                  • Administer DHCP options




                  • Migrate DHCP data stores




                  • Move data from one DHCP server to another





                  Figure 38-2 shows the GUI interface for dhcpmgr.






                  Figure 38-2: The dhcpmgr GUI interface.





                  EXAM TIP  

                  You should be able to identify the main functions of dhcpmgr.











