Monday, October 26, 2009

Selective Privilege Enablement


One of the greatest advantages to using roles is that roles provide the ability to selectively enable and disable privileges for the user. Unlike directly granted system and object privileges, which are enabled (and thus “on”) all the time, roles can be granted to a user but not enabled. The privileges for the user are not “on” until the role is enabled.


I’ve seen people try to mimic selective privileges by having their application dynamically grant and revoke object privileges at execution time. For example, they might create a procedure that the application calls to enable the privileges, and a companion procedure to revoke them when the application is done.


In the following example, privileges to control access to DATA_OWNER’s objects have been granted to the SEC_MGR schema.



system@KNOX10g> -- delegate privs to the security administrator
system@KNOX10g> GRANT ALL ON data_owner.t TO sec_mgr
2 WITH GRANT OPTION;

Grant succeeded.

system@KNOX10g> GRANT EXECUTE ON data_owner.foo TO sec_mgr
2 WITH GRANT OPTION;

Grant succeeded.


If you were to implement dynamic privilege enablement, you might create a program similar to the following:



sec_mgr@KNOX10g> CREATE OR REPLACE PROCEDURE set_privs
2 AS
3 BEGIN
4 EXECUTE IMMEDIATE 'grant select on DATA_OWNER.T to '
5 || USER;
6 END;
7 /

Procedure created.

sec_mgr@KNOX10g> CREATE OR REPLACE PROCEDURE unset_privs
2 AS
3 BEGIN
4 EXECUTE IMMEDIATE 'revoke select on DATA_OWNER.T from '
5 || USER;
6 END;
7 /

Procedure created.

sec_mgr@KNOX10g> GRANT EXECUTE ON set_privs TO scott;

Grant succeeded.

sec_mgr@KNOX10g> GRANT EXECUTE ON unset_privs TO scott;

Grant succeeded.


For the user to selectively enable their privileges, the application simply calls the SET_PRIVS procedure while logged in as the appropriate user.


scott@KNOX10g> select * from data_owner.t;
select * from data_owner.t
*
ERROR at line 1:
ORA-00942: table or view does not exist


scott@KNOX10g> exec sec_mgr.set_privs;

PL/SQL procedure successfully completed.

scott@KNOX10g> select * from data_owner.t;

D
-
X

This is a bad design. The major flaw is that the privileges aren’t restricted to the user’s use of that application: the grant applies to the user across all sessions. A user can log in to the database directly and gain access to the object without doing anything special, simply by querying after SET_PRIVS has been executed and before UNSET_PRIVS. Do not use this design.


An alternate design for supporting selective privileges is based on granting roles to a user but not enabling them by default. For example, access to a particular application’s data might require possession of the APP_USER_ROLE role. An application user is granted the role, but the role is not enabled by default. When the user accesses the database via the application, the application knows it has to enable the role and does so transparently for the user:


 sec_mgr@KNOX10g> CREATE ROLE app_user_role;

Role created.

sec_mgr@KNOX10g> -- Grant privileges to role
sec_mgr@KNOX10g> GRANT ALL ON data_owner.t TO app_user_role;

Grant succeeded.

sec_mgr@KNOX10g> -- Grant role to user(s)
sec_mgr@KNOX10g> GRANT app_user_role TO scott;

Grant succeeded.

sec_mgr@KNOX10g> -- Disable this role by default.
sec_mgr@KNOX10g> -- Privileges are not available
sec_mgr@KNOX10g> -- until role is enabled
sec_mgr@KNOX10g> ALTER USER scott DEFAULT ROLE ALL
2 EXCEPT app_user_role;

User altered.

If the user logs in via SQL*Plus and tries to query the application’s tables, they will fail because the privileges to do so aren’t available until the role is enabled:


scott@KNOX10g> SELECT * FROM data_owner.t;
SELECT * FROM data_owner.t
*
ERROR at line 1:
ORA-00942: table or view does not exist


scott@KNOX10g> SET ROLE app_user_role;

Role set.

scott@KNOX10g> SELECT * FROM data_owner.t;

D
-
X

As you may conclude, this solution doesn’t appear to be much better than the procedure-based method. The only difference is that the SET ROLE implementation enables the privileges only for the current database session, whereas the SET_PRIVS procedure enables them for all of the user’s database sessions.


In the preceding examples, the only protection is knowing that a particular procedure has to be executed or a particular role enabled, and that knowledge provides no security. Basing security on the simple knowledge of things that can be easily guessed or derived is called “security through obscurity,” and it’s not considered a best practice when used alone.






Caution 


Forcing applications and users to explicitly enable roles or privileges does not provide adequate security, and believing it does only fosters a false sense of security.





Selective Privilege Use Cases


The real power of selective privileges implemented via roles is exploited when using roles that require something other than just knowing the role’s name. You will look at two ways to secure roles soon. Before you do that, you’ll look at several important use cases that frame the complexities and requirements for selective privileges through roles.




Privileges Only When Accessed via an Application


One frequent requirement is to allow users database access only when they connect via an application. This is very popular with web applications. You might wonder why this is so hard. The answer: standards and interoperability.


Normally, standards and interoperability are good things. In the security world, standards and interoperability for establishing database connections can be a challenge, because they may facilitate unwanted access into the database. The Oracle database supports numerous applications and protocols—ODBC, JDBC, Oracle Net clients, Web Services, HTTP, FTP, and so on. These protocols and capabilities are important to ensuring interoperability with commercial applications as well as facilitating emerging ways to access data.


From a security standpoint, each one represents an additional window into the database that needs to be secured. The best mechanism for securing them is to shut them off. That may not be a practical choice if other applications need access to the protocols.


What you typically see are users accessing the database via a known application and protocol. The security requirement then is to ensure that this is the only way the users can get to the database. The applications may very well be providing extra layers of security to prevent the users from poking around in the database. Ensuring the users aren’t in the database running ad hoc queries is a good thing. It doesn’t take much for a user to intentionally or inadvertently launch a denial of service attack via some poorly formed SQL.


The net, as seen in Figure 7-2, is that you want to restrict access to your application tables so that users can reach them only through the application.






Figure 7-2: A frequent privilege requirement: users may access the database only via an application.




Privileges Based on User and Application


Refining the problem a bit, there is a more complex case in which the security privileges are based not only on the user, but also on the application. As seen in Figure 7-3, the user may have several applications accessing the same database. The difference from the previous case is that the user is assumed to have access to the same database through multiple applications. This is a popular model for two application types. The first is a web-based application. The second is an ad hoc reporting tool.






Figure 7-3: Privileges for a user accessing the database via multiple applications should vary depending on which application the user is using.

The figure depicts the security concern: the user can point the ad hoc query tool at the web application data or the financial application data. Because the application data may not be protected by database security alone (if at all), the user may have full access, or more access than you’d like. This application data wasn’t intended to be accessed in an ad hoc manner. To maintain least privilege, privileges should be based on the user and the specific application, not the user and all the applications.





Privileges Based on User, Application, Location, and Time



Another variation on privileges can be seen when the same user, using the same application, accesses a database in different ways, from different places, at different times. The point is that arbitrarily complex security policies may be required.


As seen in Figure 7-4, the user may be accessing the application from within her office. Access to the office is physically controlled by the security guard at the entrance of the building and a lock on the office door. Because of these physical security controls, access from within the office provides some assurance that it’s really the user, so she is allowed all necessary privileges. When she travels to a field office, the location may still be controlled, but there is less assurance that it’s really her, so she gets only read and insert privileges. Finally, access via a wireless device is trusted the least; wireless devices are lost and stolen frequently, so the user can only read data, and no data manipulation is allowed. All of these privileges could also be constrained to certain hours of the day: if the user tries to gain access on Sunday morning at 3 A.M., she is denied.






Figure 7-4: User privileges may vary based on access method, location, and time of day.


The common thread among all these use cases is that the privileges aren’t based on the user’s identity alone. Privileges should be based on everything you know—the application, how the user authenticated, where they are, the time of day, and of course, the user’s identity. The trick to making this work is selective privilege support: roles, and the ability to enable and disable them, are critical. Just as important is the ability to secure the selective enablement of these roles and privileges.


There’s also a philosophical reason why securing the roles is important. If the roles are simply enabled whenever the application makes the SET ROLE call, then the application is in fact controlling the database privileges. That is, the database has no say in the matter. It’s just sheepishly following along with whatever the application tells it. This is not defense in depth and therefore is not optimal. Let’s see how to secure database roles. It involves two variations on roles: password-protected roles and secure application roles.
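To make the first of these concrete, here is a minimal sketch of the application side, assuming a password-protected role created with CREATE ROLE app_user_role IDENTIFIED BY app_role_pw (the role name matches the earlier examples; the role password, JDBC URL, and account are hypothetical). The application enables the role over JDBC; a user connecting ad hoc cannot, because only the application knows the role password:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class AppRoleDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical JDBC URL and account; adjust for your environment.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:KNOX10g", "scott", "tiger");
             Statement stmt = conn.createStatement()) {

            // Enable the password-protected role. Only the application
            // knows the role password, so a user connecting via SQL*Plus
            // cannot enable the role's privileges.
            stmt.execute("SET ROLE app_user_role IDENTIFIED BY app_role_pw");

            // The role's privileges are now active, for this session only.
            try (ResultSet rs = stmt.executeQuery("SELECT * FROM data_owner.t")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}

Because SET ROLE is scoped to the session, nothing is left enabled for the user’s other connections, unlike the SET_PRIVS grant shown earlier.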





















Chapter 5: Creational Patterns

Patterns in Java, Volume 1: A Catalog of Reusable Design Patterns Illustrated with UML, Second Edition, by Mark Grand. John Wiley &amp; Sons © 2002.




Overview



Creational patterns provide guidance on how to create objects when their creation requires making decisions. These decisions will typically involve dynamically deciding which class to instantiate or which objects an object will delegate responsibility to. The value of creational patterns is to tell us how to structure and encapsulate these decisions.


Often, there is more than one creational pattern that you can apply to a situation. Sometimes you can combine multiple patterns advantageously. In other cases, you must choose between competing patterns. For these reasons, it is important to be acquainted with all of the patterns described in this chapter.


If you have time to learn only one pattern in this chapter, the most commonly used one is Factory Method. The Factory Method pattern is a way for an object to initiate the creation of another object without having to know the class of the object created.
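As a minimal Java sketch of the idea (the class names here are illustrative, not from this book's catalog):

interface Document {
    void open();
}

class TextDocument implements Document {
    public void open() { System.out.println("opening a text document"); }
}

abstract class Editor {
    // The factory method: each subclass decides which Document to create.
    protected abstract Document createDocument();

    public Document newDocument() {
        Document doc = createDocument();  // no concrete class named here
        doc.open();
        return doc;
    }
}

class TextEditor extends Editor {
    protected Document createDocument() { return new TextDocument(); }
}

Code written against Editor and Document never mentions TextDocument, so new kinds of documents can be added by adding subclasses.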


The Abstract Factory pattern is a way for objects to initiate the creation of a variety of different kinds of objects without knowing the classes of the objects created, but ensuring that the classes are a correct combination.


The Builder pattern is a way to determine the class of an object created by its contents or context.


The Prototype pattern allows an object to create customized objects without knowing their exact class or the details of how to create them.


The Singleton pattern is a way for multiple objects to share a common object without having to know whether it already exists.
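A minimal Java sketch of the usual idiom:

class Configuration {
    private static final Configuration INSTANCE = new Configuration();

    private Configuration() { }  // prevents outside instantiation

    // Every caller shares the same object; none needs to know
    // whether it already existed.
    public static Configuration getInstance() { return INSTANCE; }
}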


The Object Pool pattern is a way to reuse objects rather than create new objects.
















Factories



When creation of an object, or an entire AGGREGATE, becomes complicated or reveals too much of the internal structure, FACTORIES provide encapsulation.



Much of the power of objects rests in the intricate configuration of their internals and their associations. An object should be distilled until nothing remains that does not relate to its meaning or support its role in interactions. This mid-life cycle responsibility is plenty. Problems arise from overloading a complex object with responsibility for its own creation.


A car engine is an intricate piece of machinery, with dozens of parts collaborating to perform the engine's responsibility: to turn a shaft. One could imagine trying to design an engine block that could grab on to a set of pistons and insert them into its cylinders, spark plugs that would find their sockets and screw themselves in. But it seems unlikely that such a complicated machine would be as reliable or as efficient as our typical engines are. Instead, we accept that something else will assemble the pieces. Perhaps it will be a human mechanic or perhaps it will be an industrial robot. Both the robot and the human are actually more complex than the engine they assemble. The job of assembling parts is completely unrelated to the job of spinning a shaft. The assemblers function only during the creation of the car; you don't need a robot or a mechanic with you when you're driving. Because cars are never assembled and driven at the same time, there is no value in combining both of these functions into the same mechanism. Likewise, assembling a complex compound object is a job that is best separated from whatever job that object will have to do when it is finished.


But shifting responsibility to the other interested party, the client object in the application, leads to even worse problems. The client knows what job needs to be done and relies on the domain objects to carry out the necessary computations. If the client is expected to assemble the domain objects it needs, it must know something about the internal structure of the object. In order to enforce all the invariants that apply to the relationship of parts in the domain object, the client must know some of the object's rules. Even calling constructors couples the client to the concrete classes of the objects it is building. No change to the implementation of the domain objects can be made without changing the client, making refactoring harder.


A client taking on object creation becomes unnecessarily complicated and blurs its responsibility. It breaches the encapsulation of the domain objects and the AGGREGATES being created. Even worse, if the client is part of the application layer, then responsibilities have leaked out of the domain layer altogether. This tight coupling of the application to the specifics of the implementation strips away most of the benefits of abstraction in the domain layer and makes continuing changes ever more expensive.


Creation of an object can be a major operation in itself, but complex assembly operations do not fit the responsibility of the created objects. Combining such responsibilities can produce ungainly designs that are hard to understand. Making the client direct construction muddies the design of the client, breaches encapsulation of the assembled object or AGGREGATE, and overly couples the client to the implementation of the created object.


Complex object creation is a responsibility of the domain layer, yet that task does not belong to the objects that express the model. There are some cases in which an object creation and assembly corresponds to a milestone significant in the domain, such as "open a bank account." But object creation and assembly usually have no meaning in the domain; they are a necessity of the implementation. To solve this problem, we have to add constructs to the domain design that are not ENTITIES, VALUE OBJECTS, or SERVICES. This is a departure from the previous chapter, and it is important to make the point clear: We are adding elements to the design that do not correspond to anything in the model, but they are nonetheless part of the domain layer's responsibility.


Every object-oriented language provides a mechanism for creating objects (constructors in Java and C++, instance creation class methods in Smalltalk, for example), but there is a need for more abstract construction mechanisms that are decoupled from the other objects. A program element whose responsibility is the creation of other objects is called a FACTORY.


Figure 6.12. Basic interactions with a FACTORY


Just as the interface of an object should encapsulate its implementation, thus allowing a client to use the object's behavior without knowing how it works, a FACTORY encapsulates the knowledge needed to create a complex object or AGGREGATE. It provides an interface that reflects the goals of the client and an abstract view of the created object.


Therefore:


Shift the responsibility for creating instances of complex objects and AGGREGATES to a separate object, which may itself have no responsibility in the domain model but is still part of the domain design. Provide an interface that encapsulates all complex assembly and that does not require the client to reference the concrete classes of the objects being instantiated. Create entire AGGREGATES as a piece, enforcing their invariants.



There are many ways to design FACTORIES. Several special-purpose creation patterns (FACTORY METHOD, ABSTRACT FACTORY, and BUILDER) were thoroughly treated in Gamma et al. 1995. That book mostly explored patterns for the most difficult object construction problems. The point here is not to delve deeply into designing FACTORIES, but rather to show the place of FACTORIES as important components of a domain design. Proper use of FACTORIES can help keep a MODEL-DRIVEN DESIGN on track.


The two basic requirements for any good FACTORY are


  1. Each creation method is atomic and enforces all invariants of the created object or AGGREGATE. A FACTORY should only be able to produce an object in a consistent state. For an ENTITY, this means the creation of the entire AGGREGATE, with all invariants satisfied, but probably with optional elements still to be added. For an immutable VALUE OBJECT, this means that all attributes are initialized to their correct final state. If the interface makes it possible to request an object that can't be created correctly, then an exception should be raised or some other mechanism should be invoked that will ensure that no improper return value is possible.

  2. The FACTORY should be abstracted to the type desired, rather than the concrete class(es) created. The sophisticated FACTORY patterns in Gamma et al. 1995 help with this.


Choosing FACTORIES and Their Sites


Generally speaking, you create a factory to build something whose details you want to hide, and you place the FACTORY where you want the control to be. These decisions usually revolve around AGGREGATES.


For example, if you needed to add elements inside a preexisting AGGREGATE, you might create a FACTORY METHOD on the root of the AGGREGATE. This hides the implementation of the interior of the AGGREGATE from any external client, while giving the root responsibility for ensuring the integrity of the AGGREGATE as elements are added, as shown in Figure 6.13 on the next page.


Figure 6.13. A FACTORY METHOD encapsulates expansion of an AGGREGATE.
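In Java, such a FACTORY METHOD on an AGGREGATE root might be sketched as follows. The purchase-order names and the spending-limit invariant are hypothetical illustrations, not taken from the text:

import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

class LineItem {
    private final String product;
    private final BigDecimal price;

    LineItem(String product, BigDecimal price) {
        this.product = product;
        this.price = price;
    }

    BigDecimal price() { return price; }
}

class PurchaseOrder {
    private final List<LineItem> items = new ArrayList<>();
    private final BigDecimal approvedLimit;

    PurchaseOrder(BigDecimal approvedLimit) {
        this.approvedLimit = approvedLimit;
    }

    // FACTORY METHOD on the AGGREGATE root: clients never construct
    // LineItem themselves, and the root enforces the aggregate invariant.
    public LineItem addItem(String product, BigDecimal price) {
        if (total().add(price).compareTo(approvedLimit) > 0) {
            throw new IllegalStateException("order would exceed approved limit");
        }
        LineItem item = new LineItem(product, price);
        items.add(item);
        return item;
    }

    private BigDecimal total() {
        BigDecimal sum = BigDecimal.ZERO;
        for (LineItem item : items) {
            sum = sum.add(item.price());
        }
        return sum;
    }
}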


Another example would be to place a FACTORY METHOD on an object that is closely involved in spawning another object, although it doesn't own the product once it is created. When the data and possibly the rules of one object are very dominant in the creation of an object, this saves pulling information out of the spawner to be used elsewhere to create the object. It also communicates the special relationship between the spawner and the product.


In Figure 6.14, the Trade Order is not part of the same AGGREGATE as the Brokerage Account because, for a start, it will go on to interact with the trade execution application, where the Brokerage Account would only be in the way. Even so, it seems natural to give the Brokerage Account control over the creation of Trade Orders. The Brokerage Account contains information that will be embedded in the Trade Order (starting with its own identity), and it contains rules that govern what trades are allowed. We might also benefit from hiding the implementation of Trade Order. For example, it might be refactored into a hierarchy, with separate subclasses for Buy Order and Sell Order. The FACTORY keeps the client from being coupled to the concrete classes.


Figure 6.14. A FACTORY METHOD spawns an ENTITY that is not part of the same AGGREGATE.
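A sketch of Figure 6.14's idea in Java. Only the Brokerage Account and Trade Order names come from the text; the fields and the buying-power rule are assumptions for illustration:

import java.math.BigDecimal;

class TradeOrder {
    final String accountId;  // the account embeds its own identity
    final String symbol;
    final int shares;

    TradeOrder(String accountId, String symbol, int shares) {
        this.accountId = accountId;
        this.symbol = symbol;
        this.shares = shares;
    }
}

class BrokerageAccount {
    private final String accountId;
    private final BigDecimal buyingPower;

    BrokerageAccount(String accountId, BigDecimal buyingPower) {
        this.accountId = accountId;
        this.buyingPower = buyingPower;
    }

    // FACTORY METHOD that spawns an ENTITY outside this AGGREGATE.
    // The account applies its rules about which trades are allowed and
    // could return a Buy Order or Sell Order subclass without coupling
    // the client to those concrete classes.
    public TradeOrder placeBuyOrder(String symbol, int shares, BigDecimal price) {
        BigDecimal cost = price.multiply(BigDecimal.valueOf(shares));
        if (cost.compareTo(buyingPower) > 0) {
            throw new IllegalStateException("insufficient buying power");
        }
        return new TradeOrder(accountId, symbol, shares);
    }
}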


A FACTORY is very tightly coupled to its product, so a FACTORY should be attached only to an object that has a close natural relationship with the product. When there is something we want to hide (either the concrete implementation or the sheer complexity of construction) yet there doesn't seem to be a natural host, we must create a dedicated FACTORY object or SERVICE. A standalone FACTORY usually produces an entire AGGREGATE, handing out a reference to the root, and ensuring that the product AGGREGATE's invariants are enforced. If an object interior to an AGGREGATE needs a FACTORY, and the AGGREGATE root is not a reasonable home for it, then go ahead and make a standalone FACTORY. But respect the rules limiting access within an AGGREGATE, and make sure there are only transient references to the product from outside the AGGREGATE.


Figure 6.15. A standalone FACTORY builds an AGGREGATE.


When a Constructor Is All You Need


I've seen far too much code in which all instances are created by directly calling class constructors, or whatever the primitive level of instance creation is for the programming language. The introduction of FACTORIES has great advantages, and is generally underused. Yet there are times when the directness of a constructor makes it the best choice. FACTORIES can actually obscure simple objects that don't use polymorphism.


The trade-offs favor a bare, public constructor in the following circumstances.


  • The class is the type. It is not part of any interesting hierarchy, and it isn't used polymorphically by implementing an interface.

  • The client cares about the implementation, perhaps as a way of choosing a STRATEGY.

  • All of the attributes of the object are available to the client, so that no object creation gets nested inside the constructor exposed to the client.

  • The construction is not complicated.

  • A public constructor must follow the same rules as a FACTORY: It must be an atomic operation that satisfies all invariants of the created object.


Avoid calling constructors within constructors of other classes. Constructors should be dead simple. Complex assemblies, especially of AGGREGATES, call for FACTORIES. The threshold for choosing to use a little FACTORY METHOD isn't high.


The Java class library offers interesting examples. All collections implement interfaces that decouple the client from the concrete implementation. Yet they are all created by direct calls to constructors. A FACTORY could have encapsulated the collection hierarchy. The FACTORY's methods could have allowed a client to ask for the features it needed, with the FACTORY selecting the appropriate class to instantiate. Code that created collections would be more expressive, and new collection classes could be installed without breaking every Java program.
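No such factory exists in the JDK, but as a hypothetical sketch it might have looked like this, with the client asking for features rather than naming classes:

import java.util.ArrayList;
import java.util.HashSet;
import java.util.LinkedList;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

final class CollectionFactory {
    private CollectionFactory() { }

    // The client states the feature it needs; the factory picks the class.
    static <E> List<E> newList(boolean randomAccess) {
        return randomAccess ? new ArrayList<E>() : new LinkedList<E>();
    }

    static <E extends Comparable<E>> Set<E> newSet(boolean sorted) {
        return sorted ? new TreeSet<E>() : new HashSet<E>();
    }
}

A caller writing List<String> names = CollectionFactory.newList(true); is decoupled from ArrayList, and a different implementation could be swapped in without breaking callers.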


But there is a case in favor of the concrete constructors. First, the choice of implementation can be performance sensitive for many applications, so an application might want control. (Even so, a really smart FACTORY could accommodate such factors.) Anyway, there aren't very many collection classes, so it isn't that complicated to choose.


The abstract collection types preserve some value in spite of the lack of a FACTORY because of their usage patterns. Collections are very often created in one place and used in another. This means that the client that ultimately uses the collection (adding, removing, and retrieving its contents) can still talk to the interface and be decoupled from the implementation. The selection of a collection class typically falls to the object that owns the collection, or to the owning object's FACTORY.


Designing the Interface


When designing the method signature of a FACTORY, whether standalone or FACTORY METHOD, keep in mind these two points.



  • Each operation must be atomic.
    You have to pass in everything needed to create a complete product in a single interaction with the FACTORY. You also have to decide what will happen if creation fails, in the event that some invariant isn't satisfied. You could throw an exception or just return a null. To be consistent, consider adopting a coding standard for failures in FACTORIES.


  • The FACTORY will be coupled to its arguments.
    If you are not careful in your selection of input parameters, you can create a rat's nest of dependencies. The degree of coupling will depend on what you do with the argument. If it is simply plugged into the product, you've created a modest dependency. If you are picking parts out of the argument to use in the construction, the coupling gets tighter.


The safest parameters are those from a lower design layer. Even within a layer, there tend to be natural strata with more basic objects that are used by higher level objects. (Such layering will be discussed in different ways in Chapter 10, "Supple Design," and again in Chapter 16, "Large-Scale Structure.")


Another good choice of parameter is an object that is closely related to the product in the model, so that no new dependency is being added. In the earlier example of a Purchase Order Item, the FACTORY METHOD takes a Catalog Part as an argument, which is an essential association for the Item. This adds a direct dependency between the Purchase Order class and the Part. But these three objects form a close conceptual group. The Purchase Order's AGGREGATE already referenced the Part, anyway. So giving control to the AGGREGATE root and encapsulating the AGGREGATE'S internal structure is a good trade-off.


Use the abstract type of the arguments, not their concrete classes. The FACTORY is coupled to the concrete class of the products; it does not need to be coupled to concrete parameters also.


Where Does Invariant Logic Go?


A FACTORY is responsible for ensuring that all invariants are met for the object or AGGREGATE it creates; yet you should always think twice before removing the rules applying to an object outside that object. The FACTORY can delegate invariant checking to the product, and this is often best.


But FACTORIES have a special relationship with their products. They already know their product's internal structure, and their entire reason for being involves the implementation of their product. Under some circumstances, there are advantages to placing invariant logic in the FACTORY and reducing clutter in the product. This is especially appealing with AGGREGATE rules (which span many objects). It is especially unappealing with FACTORY METHODS attached to other domain objects.


Although in principle invariants apply at the end of every operation, often the transformations allowed to the object can never bring them into play. There might be a rule that applies to the assignment of the identity attributes of an ENTITY. But after creation that identity is immutable. VALUE OBJECTS are completely immutable. An object doesn't need to carry around logic that will never be applied in its active lifetime. In such cases, the FACTORY is a logical place to put invariants, keeping the product simpler.


ENTITY FACTORIES Versus VALUE OBJECT FACTORIES


ENTITY FACTORIES differ from VALUE OBJECT FACTORIES in two ways. VALUE OBJECTS are immutable; the product comes out complete in its final form. So the FACTORY operations have to allow for a full description of the product. ENTITY FACTORIES tend to take just the essential attributes required to make a valid AGGREGATE. Details can be added later if they are not required by an invariant.


Then there are the issues involved in assigning identity to an ENTITY (irrelevant to a VALUE OBJECT). As pointed out in Chapter 5, an identifier can either be assigned automatically by the program or supplied from the outside, typically by the user. If a customer's identity is to be tracked by the telephone number, then that telephone number must obviously be passed in as an argument to the FACTORY. When the program is assigning an identifier, the FACTORY is a good place to control it. Although the actual generation of a unique tracking ID is typically done by a database "sequence" or other infrastructure mechanism, the FACTORY knows what to ask for and where to put it.
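A sketch of the program-assigned case, using hypothetical names: the FACTORY asks an infrastructure-supplied ID source (perhaps backed by a database sequence) and puts the result where it belongs:

interface TrackingIdSource {
    long nextId();  // e.g., implemented over a database sequence
}

class Shipment {
    private final long trackingId;  // the ENTITY's identity
    private final String destination;

    Shipment(long trackingId, String destination) {
        this.trackingId = trackingId;
        this.destination = destination;
    }
}

class ShipmentFactory {
    private final TrackingIdSource ids;

    ShipmentFactory(TrackingIdSource ids) {
        this.ids = ids;
    }

    // Creation: the FACTORY knows what identity to ask for
    // and where to put it.
    public Shipment create(String destination) {
        return new Shipment(ids.nextId(), destination);
    }
}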


Reconstituting Stored Objects


Up to this point, the FACTORY has played its part in the very beginning of an object's life cycle. At some point, most objects get stored in databases or transmitted through a network, and few current database technologies retain the object character of their contents. Most transmission methods flatten an object into an even more limited presentation. Therefore, retrieval requires a potentially complex process of reassembling the parts into a live object.


A FACTORY used for reconstitution is very similar to one used for creation, with two major differences.



  1. An ENTITY FACTORY used for reconstitution does not assign a new tracking ID.
    To do so would lose the continuity with the object's previous incarnation. So identifying attributes must be part of the input parameters in a FACTORY reconstituting a stored object.


  2. A FACTORY reconstituting an object will handle violation of an invariant differently.
    During creation of a new object, a FACTORY should simply balk when an invariant isn't met, but a more flexible response may be necessary in reconstitution. If an object already exists somewhere in the system (such as in the database), this fact cannot be ignored. Yet we also can't ignore the rule violation. There has to be some strategy for repairing such inconsistencies, which can make reconstitution more challenging than the creation of new objects.


Figures 6.16 and 6.17 (on the next page) show two kinds of reconstitution. Object-mapping technologies may provide some or all of these services in the case of database reconstitution, which is convenient. Whenever there is exposed complexity in reconstituting an object from another medium, the FACTORY is a good option.


Figure 6.16. Reconstituting an ENTITY retrieved from a relational database


Figure 6.17. Reconstituting an ENTITY transmitted as XML
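Continuing the hypothetical Shipment sketch from earlier, a reconstitution FACTORY takes the stored identity as input rather than generating a new one:

class StoredShipmentFactory {
    // Reconstitution: the tracking ID comes from the stored record,
    // preserving continuity with the object's previous incarnation.
    // An invariant violation here calls for repair or flagging, not
    // a simple refusal to create the object.
    public Shipment reconstitute(long storedTrackingId, String destination) {
        return new Shipment(storedTrackingId, destination);
    }
}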


To sum up, the access points for creation of instances must be identified, and their scope must be defined explicitly. They may simply be constructors, but often there is a need for a more abstract or elaborate instance creation mechanism. This need introduces new constructs into the design: FACTORIES. FACTORIES usually do not express any part of the model, yet they are a part of the domain design that helps keep the model-expressing objects sharp.


A FACTORY encapsulates the life cycle transitions of creation and reconstitution. Another transition that exposes technical complexity that can swamp the domain design is the transition to and from storage. This transition is the responsibility of another domain design construct, the REPOSITORY.








    The Graphical Environment


    The underlying graphical system is based on the X Window System, also known simply as X (but not the X-Files), maintained by the X.Org Foundation. The graphical desktop environment — the look and feel — is based on the GNOME (pronounced guh-NOME) system. Both X and GNOME are open source software, of course, and I describe them in more detail in Chapters 11 and 12. For now, all you need to know about X and GNOME is that they give you an attractive and functional place to work.


    GNOME is attractive, highly customizable, and provides links to all of the software described in this book. I describe and explain the applications in later chapters.


    There's plenty more to discover about Ubuntu, and the rest of this book is dedicated to showing how.


    • If you want to use live Ubuntu now, Chapter 5 starts with the basics.

    • If you want to permanently install Ubuntu on a hard drive, Chapters 3 and 4 show you how to prepare a PC and install Ubuntu.


    It's your choice. Read on!









    Chapter 10. IP Fragmentation and Reassembly

    TCP/IP Illustrated, Volume 2: The Implementation, by Gary R. Wright and W. Richard Stevens




      Section 10.1. Introduction
      Section 10.2. Code Introduction
      Section 10.3. Fragmentation
      Section 10.4. ip_optcopy Function
      Section 10.5. Reassembly
      Section 10.6. ip_reass Function
      Section 10.7. ip_slowtimo Function
      Section 10.8. Summary




    Web Service Tests

    Finally, we get to the point of hooking the whole thing up and using the functionality that we built previously through the Web service. The first test we will write is to verify the id field, just as we did previously with the stub and database versions. We will need a Recording inserted into the database just as we did with the database tests. Here is the first test:


    [TestFixture]
    public class CatalogGatewayFixture : RecordingFixture
    {
        private CatalogGateway gateway = new CatalogGateway();
        private ServiceGateway.RecordingDto dto;

        [SetUp]
        public new void SetUp()
        {
            base.SetUp();
            dto = gateway.FindByRecordingId(Recording.Id);
        }

        [Test]
        public void Id()
        {
            Assert.AreEqual(Recording.Id, dto.id);
        }
    }

    As you can see, the tests do follow a similar pattern. We retrieve a RecordingDto, in this case from a class named CatalogGateway, and verify that what was inserted into the database matches what is in the RecordingDto. Let’s work on getting this to compile and run.




    Web Service Producer and Consumer Infrastructure


    Now, we need to address the question of how the test will access the Web service. We will use the “Add Web Reference” capability of Visual Studio .NET to generate the required code to handle the mechanics of consuming the Web service. But before we can create a Web reference, we need to define the Web service itself using WSDL. We will not author the WSDL by hand; we will once again rely on a tool to help us with the task.


    We will use the WebMethod framework that comes as part of the ASP.NET Web services infrastructure. This framework allows generation of the required Web service WSDL by introspection of a class written in one of the .NET-supported languages. The framework defines a set of custom attributes that can be used to control the WSDL-generation process. Visual Studio .NET further simplifies the process of authoring a Web service by providing a Web service creation wizard that generates a template ASP.NET application to host the Web service. All we have to do is to define the code behind the page class that will implement the Web service’s functionality.



    Web Service Producer


    We will name our Web service class CatalogServiceInterface. The wizard will generate the CatalogServiceInterface.aspx.cs template file, and we will add the FindByRecordingId WebMethod to this class:



    [WebService(Namespace="http://nunit.org/services", Name="CatalogGateway")]
    public class CatalogServiceInterface : System.Web.Services.WebService
    {
        private DatabaseCatalogService service =
            new DatabaseCatalogService();

        public CatalogServiceInterface()
        {
            //CODEGEN: This call is required by the ASP.NET Web Services Designer
            InitializeComponent();
        }

        [WebMethod]
        public RecordingDto FindByRecordingId(long id)
        {
            return service.FindByRecordingId(id);
        }
    }


    The CatalogServiceInterface class uses the DatabaseCatalogService that we built previously for retrieval and transformation.




    Web Service Consumer


    Now that we have defined the WebMethod interface, we can generate the required code to consume the Web service. We will add a Web reference to our project by selecting the URL of the Web service to locate the WSDL of the Web service. The Web Reference Wizard takes care of the rest and generates the CatalogGateway proxy class that our client code can use to consume the Web service. The generated proxy supports both a synchronous and an asynchronous service consumption model, but in this example, we use only the synchronous communication mode.


    Also, you may notice that the Web Reference Wizard generated a different RecordingDto class that is used as the type of the return value of the FindByRecordingId WebMethod on the client side. This is expected because the Web service consumer and producer can be built on two different platforms, and the consumer may have a different type system. The Add Web Reference Wizard used only the information available in our WSDL file to generate the required client-side type mappings, and because the RecordingDto is not a primitive type directly supported by the Simple Object Access Protocol (SOAP) serialization, we ended up with two RecordingDto types—one on the server and one on the client. Web services do not guarantee full type fidelity—this is the price we have to pay for interoperability.




    Running the Test


    Now we are ready to run the Id test in the CatalogGatewayFixture. We get a failure because the Web service does not have the database access connection properly configured. We will need to add the connection settings to the Web.config file. We run it again and we still have a failure; this time, the Web service was denied access to the database because we have not defined the proper database login name for the Web service.




    Web Services Security


    Security is one of the most important aspects of building distributed systems. Security for Web services is a very large subject, and its treatment is beyond the scope of this book. There are several specifications in progress that address the issues of Web services security on the SOAP messaging level.


    We will briefly touch on some of the security-integration aspects to the extent necessary to proceed with our work. We will consider the Web services security only as it is supported by the existing HTTP transport infrastructure. Because ASP.NET Web services are hosted as typical ASP.NET applications, the existing ASP.NET security infrastructure for authentication and authorization can be reused for Web services. We will need to decide how to manage the security context propagation from the ASP.NET tier to the ADO.NET tier of our application. We have a couple of options here:





    • Direct security context propagation by using trusted connections to the database. Using this approach requires creating database login accounts for the ASP.NET user principals. If we support anonymous access to the ASP.NET application hosting our Web service, we will need to create an ASPNET account in the database server and grant this account the proper access rights to our catalog. If the ASP.NET application requires proper user authentication for access to the Web service, we may need to create several database accounts, one for each authenticated user principal. One of the main advantages of this approach is the fact that we can reuse the existing database server security infrastructure to protect access to the data; however, this approach will also lead to the creation of individual connection pools for each user principal, and it does not scale well for a large number of users.





    • Mapped security contexts. In this approach, we will have a single database login account for the entire Web service application. We can do either of the following:




      • Explicitly pass the user credentials to the database server for authentication (the username and password for this database login account will be stored in the Web.config file).




      • Map the caller principal on the ASP.NET service thread to the desired database login name and use the trusted connection to the database.






    The mapped security context approach does not require creating a large number of connections (one for each distinct user principal).


    We will be using the direct security context propagation approach and allow anonymous access to the Web service application to avoid creation of multiple database connections. Following this decision, we will need to add the ASP.NET user to the list of database login accounts and grant read/write rights for the catalog schema to this account. After making this change, we are ready to run our test again. Now it passes, and we are ready to move on and write the rest of the tests.


    The tests for most of the fields were the same, but the TrackCount and ReviewCount tests were slightly different. In DatabaseCatalogService and in StubCatalogService, we could get the number of tracks or reviews by calling Length on the tracks and reviews fields, respectively. When we tried this in the Web service tests, we got a null reference exception. It turns out that if there are no tracks or reviews, the fields are not initialized as empty arrays; they are left as null values. So we had to change the tests in the CatalogGatewayFixture to the following:


    [Test]
    public void TrackCount()
    {
        Assert.AreEqual(0, Recording.GetTracks().Length);
        Assert.IsNull(dto.tracks);
    }

    [Test]
    public void ReviewCount()
    {
        Assert.AreEqual(0, Recording.GetReviews().Length);
        Assert.IsNull(dto.reviews);
    }

    After we made these changes, we recompiled the code and reran all three sets of tests. They all pass, so we are done with the tests for the functionality. One last time we revisited the commonality of the CatalogGatewayFixture, DatabaseCatalogServiceFixture, and StubCatalogServiceFixture, and again we could not come up with anything that reduced the code duplication and increased the communication of what we were trying to do. So, this is one of those cases where there is code duplication but removing it reduces the communication.













    Recipe 21.9. Changing Text Color





    Problem


    You want to display multicolored text on the console.




    Solution


    The simplest solution is to use HighLine. It lets you enclose color commands in an ERb template that gets interpreted within HighLine and printed to standard output. Try this colorful bit of code to test the capabilities of your terminal:



    require 'rubygems'
    require 'highline/import'

    say(%{Here's some <%= color('dark red text', RED) %>.})
    say(%{Here's some <%= color('bright red text on a blue background', RED+BOLD+ON_BLUE) %>.})
    say(%{Here's some <%= color('blinking bright cyan text', CYAN+BOLD+BLINK) %>.})
    say(%{Here's some <%= GREEN+UNDERLINE %>underlined dark green text<%=CLEAR%>.})



    Some of these features (particularly the blinking and underlining) aren't supported on all terminals.




    Discussion


    The HighLine#color method encloses a display string in special command strings, which start with an escape character and a left square bracket:



    HighLine.new.color('Hello', HighLine::GREEN)
    # => "\e[32mHello\e[0m"



    These are ANSI escape sequences. Instead of displaying the string "\e[32m", an ANSI-compatible terminal treats it as a command: in this case, a command to start printing characters in green-on-black. The string "\e[0m" tells the terminal to go back to white-on-black.


    Most modern Unix terminals support ANSI escape sequences, including the Mac OS X terminal. You should be able to get green text in your irb session just by calling puts "\e[32mHello\e[0m" (try it!), but HighLine makes it easy to get color without having to remember the ANSI sequences.


    Windows terminals don't support ANSI by default, but you can get it to work by loading ANSI.SYS (see below for a relevant Microsoft support article).


    An alternative to HighLine is the Ncurses library.[4] It supports color terminals that use a means other than ANSI, but these days, most color terminals get their color support through ANSI. Since Ncurses is much more complex than HighLine, and not available as a gem, you should only use Ncurses for color if you're already using it for its other features.

    [4] Standard Curses doesn't support color because it was written in the 1980s, when monochrome ruled the world.


    Here's a rough equivalent of the HighLine program given above. This program uses the Ncurses::program wrapper described in Recipe 21.5. The wrapper sets up Ncurses and initializes some default color pairs:



    Ncurses.program do |s|
      # Define the red-on-blue color pair used in the second string.
      # All the default color pairs use a black background.
      Ncurses.init_pair(8, Ncurses::COLOR_RED, Ncurses::COLOR_BLUE)

      Ncurses::attrset(Ncurses::COLOR_PAIR(1))
      s.mvaddstr(0, 0, "Here's some dark red text.")

      Ncurses::attrset(Ncurses::COLOR_PAIR(8) | Ncurses::A_BOLD)
      s.mvaddstr(1, 0, "Here's some bright red text on a blue background.")

      Ncurses::attrset(Ncurses::COLOR_PAIR(6) | Ncurses::A_BOLD | Ncurses::A_BLINK)
      s.mvaddstr(2, 0, "Here's some blinking bright cyan text.")

      Ncurses::attrset(Ncurses::COLOR_PAIR(2) | Ncurses::A_UNDERLINE)
      s.mvaddstr(3, 0, "Here's some underlined dark green text.")

      s.getch
    end



    An Ncurses program can draw from a palette of color pairs: combinations of foreground and background colors. Ncurses::program sets up a default palette of the seven basic ncurses colors (red, green, yellow, blue, magenta, cyan, and white), each on a black background. You can change this around if you like, or define additional color pairs (like the red-on-blue defined in the example). The following Ncurses program prints out a color chart of all foreground-background pairs. It makes the text of the chart bold, so that the text doesn't become invisible when the background is the same color.



    Ncurses.program do |s|
      pair = 0
      Ncurses::COLORS.each_with_index do |background, i|
        Ncurses::COLORS.each_with_index do |foreground, j|
          Ncurses::init_pair(pair, foreground, background) unless pair == 0
          Ncurses::attrset(Ncurses::COLOR_PAIR(pair) | Ncurses::A_BOLD)
          s.mvaddstr(i, j*4, "#{foreground},#{background}")
          pair += 1
        end
      end
      s.getch
    end



    You can modify a color pair by combining it with an Ncurses constant. The most useful constants are Ncurses::A_BOLD, Ncurses::A_BLINK, and Ncurses::A_UNDERLINE. This works the same way (and, on an ANSI system, uses the same ANSI codes) as HighLine's BOLD, BLINK, and UNDERLINE constants. The only difference is that you modify an Ncurses color with the OR operator (|), and you modify a HighLine color with the addition operator.




    See Also


    • Recipe 1.3, "Substituting Variables into an Existing String," has more on ERb

    • http://en.wikipedia.org/wiki/ANSI_escape_code has technical details on ANSI color codes

    • The examples/ansi_colors.rb file in the HighLine gem

    • You can get a set of Ncurses bindings for Ruby at http://ncurses-ruby.berlios.de/; it's also available as the Debian package libncurses-ruby

    • If you want something more lightweight than the highline gem, try the term-ansicolor gem instead: it defines methods for generating the escape sequences for ANSI colors, and nothing else

    • "How to Enable ANSI.SYS in a Command Window" (http://support.microsoft.com/?id=101875)













    Chapter 20. Multitasking and Multithreading






    You can't concentrate on more than What's six times nine? one thing at once. You won't get very far reading this book if someone is interrupting you every five seconds asking you to do arithmetic problems. But any computer with a modern operating system can do many things at once. More precisely, it can simulate that ability by switching very quickly back and forth between tasks.


    In a multitasking operating system, each program, or process, gets its own space in memory and a share of the CPU's time. Every time you start the Ruby interpreter, it runs in a new process. On Unix-based systems, your script can spawn subprocesses: this feature is very useful for running external command-line programs and using the results in your own scripts (see Recipes 20.8 and 20.9, for instance).


    The main problem with processes is that they're expensive. It's hard to read while people are asking you to do arithmetic, not because either activity is particularly difficult, but because it takes time to switch from one to the other. An operating system spends a lot of its time as overhead, switching between processes, trying to make sure each one gets a fair share of the CPU's time.


    The other problem with processes is that it's difficult to get them to communicate with each other. For simple cases, you can use techniques like those described in Recipe 20.8. You can implement more complex cases with Inter-Process Communication and named pipes, but we say, don't bother. If you want your Ruby program to do two things at once, you're better off writing your code with threads.


    A thread is a sort of lightweight process that runs inside a real process. One Ruby process can host any number of threads, all running more or less simultaneously. It's faster to switch between threads than to switch between processes, and since all of a process's threads run in the same memory space, they can communicate simply by sharing variables.


    Recipe 20.3 covers the basics of multithreaded programming. We use threads throughout this book, except when only a subprocess will work (see, for instance, Recipe 20.1). Some recipes in other chapters, like Recipes 3.12 and 14.4, show threads used in context.


    Ruby implements its own threads, rather than using the operating system's implementation. This means that multithreaded code will work exactly the same way across platforms. Code that spawns subprocesses generally works only on Unix.


    If threads are faster and more portable, why would anyone write code that uses subprocesses? The main reason is that it's easy for one thread to stall all the others by tying up an entire process with an uninterruptible action. One such action is a system call. If you want to run a system call or an external program in the background, you should probably fork off a subprocess to do it. See Recipe 16.18 for a vivid example of this: a program that needs to spawn a subprocess instead of a subthread, because the subprocess is going to play a music file.












    6.16 Sorting Dotted-Quad IP Values in Numeric Order




    6.16.1 Problem



    You want to sort strings that represent IP numbers in numeric order.





    6.16.2 Solution



    Break apart the strings and sort the pieces numerically. Or just use INET_ATON().





    6.16.3 Discussion



    If a table contains IP numbers represented as strings in dotted-quad notation (for example, 111.122.133.144), they'll sort lexically rather than numerically. To produce a numeric ordering instead, you can sort them as four-part values with each part sorted numerically. To accomplish this, use a technique similar to that for sorting hostnames, but with the following differences:




    • Dotted quads always have four segments, so there's no need to prepend dots to the value before extracting substrings.

    • Dotted quads sort left to right, so the order in which substrings are used in the ORDER BY clause is opposite to that used for hostname sorting.

    • The segments of dotted-quad values are numbers, so add zero to each substring to tell MySQL to use a numeric sort rather than a lexical one.



    Suppose you have a hostip table with a string-valued ip column containing IP numbers:



    mysql> SELECT ip FROM hostip ORDER BY ip;
    +-----------------+
    | ip              |
    +-----------------+
    | 127.0.0.1       |
    | 192.168.0.10    |
    | 192.168.0.2     |
    | 192.168.1.10    |
    | 192.168.1.2     |
    | 21.0.0.1        |
    | 255.255.255.255 |
    +-----------------+


    The preceding query produces output sorted in lexical order. To sort the ip values numerically, you can extract each segment and add zero to convert it to a number, using an ORDER BY clause like this:



    mysql> SELECT ip FROM hostip
        -> ORDER BY
        -> SUBSTRING_INDEX(ip,'.',1)+0,
        -> SUBSTRING_INDEX(SUBSTRING_INDEX(ip,'.',-3),'.',1)+0,
        -> SUBSTRING_INDEX(SUBSTRING_INDEX(ip,'.',-2),'.',1)+0,
        -> SUBSTRING_INDEX(ip,'.',-1)+0;
    +-----------------+
    | ip              |
    +-----------------+
    | 21.0.0.1        |
    | 127.0.0.1       |
    | 192.168.0.2     |
    | 192.168.0.10    |
    | 192.168.1.2     |
    | 192.168.1.10    |
    | 255.255.255.255 |
    +-----------------+


    A simpler solution is possible if you have MySQL 3.23.15 or higher. Then you can sort the IP values using the INET_ATON() function, which converts a network address directly to its underlying numeric form:



    mysql> SELECT ip FROM hostip ORDER BY INET_ATON(ip);
    +-----------------+
    | ip              |
    +-----------------+
    | 21.0.0.1        |
    | 127.0.0.1       |
    | 192.168.0.2     |
    | 192.168.0.10    |
    | 192.168.1.2     |
    | 192.168.1.10    |
    | 255.255.255.255 |
    +-----------------+


    If you're tempted to sort by simply adding zero to the ip value and using ORDER BY on the result, consider the values that kind of string-to-number conversion actually will produce:



    mysql> SELECT ip, ip+0 FROM hostip;
    +-----------------+---------+
    | ip              | ip+0    |
    +-----------------+---------+
    | 127.0.0.1       |     127 |
    | 192.168.0.2     | 192.168 |
    | 192.168.0.10    | 192.168 |
    | 192.168.1.2     | 192.168 |
    | 192.168.1.10    | 192.168 |
    | 255.255.255.255 | 255.255 |
    | 21.0.0.1        |      21 |
    +-----------------+---------+


    The conversion retains only as much of each value as can be interpreted as a valid number. The remainder would be unavailable for sorting purposes, even though it's necessary to produce a correct ordering.













      sort



      You're familiar with the basic operation of sort:





      $ sort names

      Charlie

      Emanuel

      Fred

      Lucy

      Ralph

      Tony

      Tony

      $



      By default, sort takes each line of the specified input file and sorts it into ascending order. Special characters are sorted according to the internal encoding of the characters. For example, on a machine that encodes characters in ASCII, the space character is represented internally as the number 32, and the double quote as the number 34. This means that the former would be sorted before the latter. Note that the sorting order is implementation dependent, so although you are generally assured that sort will perform as expected on alphabetic input, the ordering of numbers, punctuation, and special characters is not always guaranteed. We will assume we're working with the ASCII character set in all our examples here.



      sort has many options that provide more flexibility in performing your sort. We'll just describe a few of the options here.



      The -u Option



      The -u option tells sort to eliminate duplicate lines from the output.





      $ sort -u names

      Charlie

      Emanuel

      Fred

      Lucy

      Ralph

      Tony

      $



      Here you see that the duplicate line that contained Tony was eliminated from the output.



      The -r Option



      Use the -r option to reverse the order of the sort:





      $ sort -r names Reverse sort

      Tony

      Tony

      Ralph

      Lucy

      Fred

      Emanuel

      Charlie

      $



      The -o Option



      By default, sort writes the sorted data to standard output. To have it go into a file, you can use output redirection:





      $ sort names > sorted_names

      $



      Alternatively, you can use the -o option to specify the output file. Simply list the name of the output file right after the -o:





      $ sort names -o sorted_names

      $



      This sorts names and writes the results to sorted_names.



      Frequently, you want to sort the lines in a file and have the sorted data replace the original. Typing





      $ sort names > names

      $



      won't work; it ends up wiping out the names file. However, with the -o option, it is okay to specify the same name for the output file as the input file:





      $ sort names -o names

      $ cat names

      Charlie

      Emanuel

      Fred

      Lucy

      Ralph

      Tony

      Tony

      $



      The -n Option



      Suppose that you have a file containing pairs of (x, y) data points as shown:





      $ cat data

      5 27

      2 12

      3 33

      23 2

      -5 11

      15 6

      14 -9

      $



      Suppose that you want to feed this data into a plotting program called plotdata, but that the program requires that the incoming data pairs be sorted in increasing value of x (the first value on each line).



      The -n option to sort specifies that the first field on the line is to be considered a number, and the data is to be sorted arithmetically. Compare the output of sort used first without the -n option and then with it:





      $ sort data

      -5 11

      14 -9

      15 6

      2 12

      23 2

      3 33

      5 27

      $ sort -n data Sort arithmetically

      -5 11

      2 12

      3 33

      5 27

      14 -9

      15 6

      23 2

      $



      Skipping Fields



      If you had to sort your data file by the y value (that is, the second number in each line), you could tell sort to skip past the first number on the line by using the option





      +1n



      instead of -n. The +1 says to skip the first field. Similarly, +5n would mean to skip the first five fields on each line and then sort the data numerically. Fields are delimited by space or tab characters by default. If a different delimiter is to be used, the -t option must be used.





      $ sort +1n data Skip the first field in the sort

      14 -9

      23 2

      15 6

      -5 11

      2 12

      5 27

      3 33

      $
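

      The +pos syntax shown here is the historical one, and it counts
      fields to skip starting from zero. POSIX versions of sort express the
      same thing with the -k option, which numbers fields starting at 1. If
      your sort supports -k (most modern ones do), the equivalent command
      would be:

      $ sort -k2n data Sort numerically starting at field 2

      which produces the same output as the +1n example above.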



      The -t Option



      As mentioned, if you skip over fields, sort assumes that the fields being skipped are delimited by space or tab characters. The -t option says otherwise. In this case, the character that follows the -t is taken as the delimiter character.



      Look at our sample password file again:





      $ cat /etc/passwd

      root:*:0:0:The Super User:/:/usr/bin/ksh

      steve:*:203:100::/users/steve:/usr/bin/ksh

      bin:*:3:3:The owner of system files:/:

      cron:*:1:1:Cron Daemon for periodic tasks:/:

      george:*:75:75::/users/george:/usr/lib/rsh

      pat:*:300:300::/users/pat:/usr/bin/ksh

      uucp:*:5:5::/usr/spool/uucppublic:/usr/lib/uucp/uucico

      asg:*:6:6:The Owner of Assignable Devices:/:

      sysinfo:*:10:10:Access to System Information:/:/usr/bin/sh

      mail:*:301:301::/usr/mail:

      $



      If you wanted to sort this file by username (the first field on each line), you could just issue the command





      sort /etc/passwd



      To sort the file instead by the third colon-delimited field (which contains what is known as your user id), you would want an arithmetic sort, skipping the first two fields (+2n), specifying the colon character as the field delimiter (-t:):





      $ sort +2n -t: /etc/passwd Sort by user id

      root:*:0:0:The Super User:/:/usr/bin/ksh

      cron:*:1:1:Cron Daemon for periodic tasks:/:

      bin:*:3:3:The owner of system files:/:

      uucp:*:5:5::/usr/spool/uucppublic:/usr/lib/uucp/uucico

      asg:*:6:6:The Owner of Assignable Devices:/:

      sysinfo:*:10:10:Access to System Information:/:/usr/bin/sh

      george:*:75:75::/users/george:/usr/lib/rsh

      steve:*:203:100::/users/steve:/usr/bin/ksh

      pat:*:300:300::/users/pat:/usr/bin/ksh

      mail:*:301:301::/usr/mail:

      $
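

      Again, with a POSIX sort the same command could be written with -k
      (assuming your sort supports it):

      $ sort -t: -k3n /etc/passwd Sort by user id, POSIX style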



      Compare the third colon-delimited field of each line and you can verify that the file was sorted correctly by user id.



      Other Options



      Other options to sort enable you to skip characters within a field, specify the field to end the sort on, merge sorted input files, and sort in "dictionary order" (only letters, numbers, and spaces are used for the comparison). For more details on these options, look under sort in your Unix User's Manual.
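

      For instance, -m merges input files that are already sorted, and -d
      requests dictionary order. A brief sketch (list1 and list2 are
      hypothetical presorted files):

      $ sort -m list1 list2 > merged_list Merge two presorted files
      $ sort -d names Sort in dictionary order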












         

         
































        BRUTE FORCE



        The first approach we will explore is to give
        our tests direct access to the visual components in the GUI. We can
        do this in one of several ways:











        1. make them public instance variables so that the
          test can access them directly

        2. provide an accessor for each component so that
          the test can get hold of them

        3. make the test class an inner class of the class
          that creates the GUI

        4. use reflection to gain access to the
          components











        All of these approaches have the drawback that
        the components cannot be local to a method; they must be instance
        variables. Also, all but the last require extra production code
        that is used only for testing.



        Since I learned object-oriented programming in a
        Smalltalk environment, I find the idea of making instance variables
        public unthinkable. This rules out option one. Also, I like to keep
        my test code separate from production code, so I avoid option three
        as well. The last option is very overhead-intensive to do manually.
        We'll explore some frameworks later in this chapter that make use of
        reflection, but hide the details.



        That leaves the second approach: adding
        accessors for the components.



        Keep in mind that the code was developed
        incrementally: enough new test to fail, followed by enough new code
        to pass. During development, both the code and the tests were
        refactored as needed. Notice how the tests are in two
        TestCase classes. Notice the different setUp()
        methods.


















        As we discuss more fully in the JUnit chapter,
        TestCase is a mechanism to group tests that rely on the same
        fixture, not tests that happen to use the same class. When we need
        a different fixture, we should create a new TestCase class. As you
        add tests, add them to the TestCase that maintains the fixture that
        they require.
















        Given that preamble, here's the test code. First
        we have a TestCase that verifies that the required
        components are present. Notice how we've used EasyMock. For this
        test we aren't concerned with the interaction of the GUI with the
        underlying object (i.e., the mock), so we've used
        niceControlFor(), which provides default implementations
        for all of the methods in MovieListEditor.




        // Imports assumed for this excerpt (JUnit 3.x, Swing, EasyMock 1.x).
        import junit.framework.TestCase;
        import javax.swing.*;
        import org.easymock.*;

        public class TestWidgets extends TestCase {
            private MockControl control;
            private MovieListEditor mockEditor;
            private MovieListWindow window;

            protected void setUp() {
                // A "nice" control supplies default implementations for
                // every MovieListEditor method, so these tests can ignore
                // the GUI's interaction with the editor.
                control = EasyMock.niceControlFor(MovieListEditor.class);
                mockEditor = (MovieListEditor) control.getMock();
                control.activate();
                window = new MovieListWindow(mockEditor);
                window.init();
                window.show();
            }

            public void testList() {
                JList movieList = window.getMovieList();
                assertNotNull("Movie list should be non null", movieList);
                assertTrue("Movie list should be showing", movieList.isShowing());
            }

            public void testField() {
                JTextField movieField = window.getMovieField();
                assertNotNull("Movie field should be non null", movieField);
                assertTrue("Movie field should be showing", movieField.isShowing());
            }

            public void testAddButton() {
                JButton addButton = window.getAddButton();
                assertNotNull("Add button should be non null", addButton);
                assertTrue("Add button should be showing", addButton.isShowing());
                assertEquals("Add button should be labeled \"Add\"",
                             "Add",
                             addButton.getText());
            }

            public void testDeleteButton() {
                JButton deleteButton = window.getDeleteButton();
                assertNotNull("Delete button should be non null", deleteButton);
                assertTrue("Delete button should be showing", deleteButton.isShowing());
                assertEquals("Delete button should be labeled \"Delete\"",
                             "Delete",
                             deleteButton.getText());
            }
        }






        The other TestCase tests for correct
        operation of the GUI and uses a more involved mock. Here we build
        the mock in each test method, setting different method call
        expectations and return values in each. Note, however, the common
        mock creation code in setUp().




        // (Same imports as above, plus java.util.Vector.)
        public class TestOperation extends TestCase {
            private static final String LOST_IN_SPACE = "Lost In Space";
            private Vector movieNames;
            private MovieListWindow window;
            private MockControl control = null;
            private MovieListEditor mockEditor = null;

            protected void setUp() {
                movieNames = new Vector() {
                    { add("Star Wars"); add("Star Trek"); add("Stargate"); }
                };
                window = null;

                // Assign the instance fields here; declaring new local
                // variables instead would shadow the fields and leave
                // them null when the tests run.
                control = EasyMock.controlFor(MovieListEditor.class);
                mockEditor = (MovieListEditor) control.getMock();
            }

            public void testMovieList() {
                mockEditor.getMovies();
                control.setReturnValue(movieNames, 1);

                control.activate();

                MovieListWindow window = new MovieListWindow(mockEditor);
                window.init();
                window.show();

                JList movieList = window.getMovieList();
                ListModel movieListModel = movieList.getModel();
                assertEquals("Movie list is the wrong size",
                             movieNames.size(),
                             movieListModel.getSize());

                for (int i = 0; i < movieNames.size(); i++) {
                    assertEquals("Movie list contains bad name",
                                 movieNames.get(i),
                                 movieListModel.getElementAt(i));
                }

                control.verify();
            }

            public void testAdd() {
                Vector movieNamesWithAddition = new Vector(movieNames);
                movieNamesWithAddition.add(LOST_IN_SPACE);

                mockEditor.getMovies();
                control.setReturnValue(movieNames, 1);

                mockEditor.add(LOST_IN_SPACE);
                control.setVoidCallable(1);

                mockEditor.getMovies();
                control.setReturnValue(movieNamesWithAddition, 1);

                control.activate();

                MovieListWindow window = new MovieListWindow(mockEditor);
                window.init();
                window.show();

                JTextField movieField = window.getMovieField();
                movieField.setText(LOST_IN_SPACE);

                JButton addButton = window.getAddButton();
                addButton.doClick();

                JList movieList = window.getMovieList();
                ListModel movieListModel = movieList.getModel();
                assertEquals("Movie list is the wrong size after add",
                             movieNamesWithAddition.size(),
                             movieListModel.getSize());

                assertEquals("Movie list doesn't contain new name",
                             LOST_IN_SPACE,
                             movieListModel.getElementAt(movieNames.size()));

                control.verify();
            }

            public void testDelete() {
                Vector movieNamesWithDeletion = new Vector(movieNames);
                movieNamesWithDeletion.remove(1);

                mockEditor.getMovies();
                control.setReturnValue(movieNames, 1);

                mockEditor.delete(1);
                control.setVoidCallable(1);

                mockEditor.getMovies();
                control.setReturnValue(movieNamesWithDeletion, 1);

                control.activate();

                MovieListWindow window = new MovieListWindow(mockEditor);
                window.init();
                window.show();

                JList movieList = window.getMovieList();
                movieList.setSelectedIndex(1);

                JButton deleteButton = window.getDeleteButton();
                deleteButton.doClick();

                ListModel movieListModel = movieList.getModel();
                assertEquals("Movie list is the wrong size after delete",
                             movieNamesWithDeletion.size(),
                             movieListModel.getSize());

                control.verify();
            }
        }
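

        If you want to run these two TestCases from the command line, a
        minimal JUnit 3.x text runner is enough. This is a sketch, not part
        of the original code; AllGuiTests is a hypothetical name:

        // Minimal sketch of a command-line runner for the two TestCases
        // above. Assumes JUnit 3.x on the classpath; AllGuiTests is a
        // hypothetical name, not from the original chapter.
        public class AllGuiTests {
            public static void main(String[] args) {
                junit.textui.TestRunner.run(TestWidgets.class);
                junit.textui.TestRunner.run(TestOperation.class);
            }
        }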






        The GUI class is below, with a screen shot of
        the resulting window shown in Figure 8.2.




        // Imports assumed for this excerpt.
        import java.awt.FlowLayout;
        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import java.util.Vector;
        import javax.swing.*;

        public class MovieListWindow extends JFrame {
            private JList movieList;
            private JButton addButton;
            private MovieListEditor myEditor;
            private JTextField movieField;
            private JButton deleteButton;

            public MovieListWindow(MovieListEditor anEditor) {
                super();
                myEditor = anEditor;
            }

            // Accessors added so that the tests can reach the components.
            public JList getMovieList() {
                return movieList;
            }

            public JTextField getMovieField() {
                return movieField;
            }

            public JButton getAddButton() {
                return addButton;
            }

            public JButton getDeleteButton() {
                return deleteButton;
            }

            public void init() {
                setLayout();
                initMovieList();
                initMovieField();
                initAddButton();
                initDeleteButton();
                pack();
            }

            private void setLayout() {
                getContentPane().setLayout(new FlowLayout());
            }

            private void initMovieList() {
                movieList = new JList(getMovies());
                JScrollPane scroller = new JScrollPane(movieList);
                getContentPane().add(scroller);
            }

            private void initMovieField() {
                movieField = new JTextField(16);
                getContentPane().add(movieField);
            }

            private void initAddButton() {
                addButton = new JButton("Add");
                addButton.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent e) {
                        myEditor.add(movieField.getText());
                        movieList.setListData(getMovies());
                    }
                });
                getContentPane().add(addButton);
            }

            private void initDeleteButton() {
                deleteButton = new JButton("Delete");
                deleteButton.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent e) {
                        myEditor.delete(movieList.getSelectedIndex());
                        movieList.setListData(getMovies());
                    }
                });
                getContentPane().add(deleteButton);
            }

            // Guard against the editor returning a null movie list.
            private Vector getMovies() {
                Vector movies = myEditor.getMovies();
                return (movies == null) ? new Vector() : movies;
            }
        }










        Figure 8.2. The resulting GUI.



























































