The void keyword can be used as a type specifier. When used as a type specifier, it is most often used as a function return type, to indicate that the function does not return a value. C++ does not require that void be used to indicate that a function takes no parameters; an empty parameter list means the same thing, but the void form is also accepted for compatibility with C.

Example:

void func(void);

See Also: declaration, type, Chapter 2, Chapter 5
24.2. Preparing the Project

Now for some work in the Terminal application. Before it can be taken over by Xcode, the Graphviz project has to be unpacked from its archive and configured for the tools and libraries available on your system. First, extract the archive; point the command interface at the directory that contains the archive, and invoke tar. The example here assumes that we're working with version 2.2 of Graphviz:

$ cd Projects
$ tar xzvf graphviz-2.2.tar.gz

The options to tar (xzvf) told it to extract the contents after unzipping them from the named file and to be verbose, or print the name of each file. The verbosity isn't strictly necessary, but the extraction takes time, and it's nice to have something to show what's going on.

Next, the project has to be configured. If you point your command-line interface at the newly created graphviz-2.2 directory and list the contents of the current directory, you'll find files named INSTALL and README and a script named configure. I can't promise that these files will be at the root of every open-source project you download, but you'll usually find them.

$ cd graphviz-2.2

The usual scenario is that if a project has a configure script, you must execute ./configure from the root of the project in order to adapt the project's makefiles and headers to the architecture, tool set, and library suite of your machine. The INSTALL file will usually contain annotations of any special flags or features this particular configure script accepts. However, in this case, INSTALL appears to be unchanged from its generic content. Examining the README file shows that the build instructions are in graphviz-2.2/doc/build.html. The build.html file confirms that Graphviz shares the same build-and-install recipe as almost every other open-source project (do not type this in):

./configure
make
make install

It also says that ./configure --help will show us all the available options, which are numerous. For the purposes of this tutorial, we'll take none of them. So now we invoke ./configure. A glance at its voluminous output suggests why it is necessary:

$ ./configure

During this process, the configuration script fails to find the header for the FreeType typographic library. As Graphviz relies on FreeType, not only for labeling but also for scaling objects, this will cripple our version of the package. In real life, we'd take the trouble to make up the gap (it involves installing X Window support and adding --with_freetype=/usr/X11R6 to the parameters of configure), but we're doing this only for the exercise.

By the end of this process, all the makefiles in graphviz-2.2 and its subdirectories will have been created and the header files adjusted to the peculiarities of your development environment. If all you cared about was obtaining Graphviz, the easiest thing would be simply to type make at the next command prompt and, assuming all went well, sudo make install at the command prompt after that. But in our scenario, you want to bring Xcode into the picture. Maybe you want to modify Graphviz, or you think you'll have to edit its source for it to compile properly. Maybe you just want to study it. You need an interactive, integrated development environment. It's at this point in the life cycle (after configuration but before the first make) that Xcode can enter.
The Student Directory
The most significant new addition to StudentDirectoryTest appears in testRandomAccess. After it adds the students to the directory, the test closes the directory. It then creates a new directory instance to use for the student lookups. By doing this, the test demonstrates at least some notion of persistence. An additional performance test might be worthwhile to demonstrate that a lookup into the directory takes the same amount of time regardless of where the student appears in the file. Additions of students to the directory should also execute in constant time.
In contrast, most of the StudentDirectory class has changed. The StudentDirectory class now encapsulates a DataFile instance to supply directory functionality. It provides a few additional specifics, including the key field to use (the student ID) and the base filename for the data and key files. Beyond that, the class merely delegates messages to the DataFile object.
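In sketch form, that delegation might look like this (the DataFile method names here are assumptions for illustration, not necessarily the book's actual API):

import java.io.IOException;

// StudentDirectory supplies the key field and base filename, then
// delegates all storage and lookup work to a DataFile instance.
public class StudentDirectory {
    private static final String BASE_FILENAME = "studentDirectory"; // assumed base for the data/key files
    private final DataFile dataFile;

    public StudentDirectory() throws IOException {
        dataFile = DataFile.open(BASE_FILENAME); // assumed factory method
    }

    public void add(Student student) throws IOException {
        dataFile.write(student);                 // keyed on the student ID
    }

    public Student findById(String id) throws IOException {
        return (Student) dataFile.findBy(id);    // delegated lookup
    }

    public void close() throws IOException {
        dataFile.close();
    }
}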
4.7. Variable Scope Revisited

When we first discussed the notion of variable scope, we defined it purely in terms of the lexical structure of JavaScript code: variables have either global or local scope. But scope can also be described in terms of the underlying mechanism. Every JavaScript execution context has a scope chain associated with it: a list of objects in which variable names are looked up, in order. In top-level JavaScript code (i.e., code not contained within any function definitions), the scope chain consists of a single object, the global object. All variables are looked up in this object. If a variable does not exist, the variable value is undefined. In a (nonnested) function, however, the scope chain consists of two objects. The first is the function's call object, and the second is the global object. When the function refers to a variable, the call object (the local scope) is checked first, and the global object (the global scope) is checked second. A nested function would have three or more objects in its scope chain. Figure 4-1 illustrates the process of looking up a variable name in the scope chain of a function.

Figure 4-1. The scope chain and variable resolution
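For instance (a minimal illustration in the spirit of this discussion, not the book's own listing):

var scope = "global";       // defines a property of the global object
function checkScope() {
    var scope = "local";    // defines a property of the function's call object
    return scope;           // the call object is searched first
}
checkScope();               // => "local": the local variable shadows the global one
scope;                      // => "global": top-level code sees only the global object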
Getting Started

To get started, we're going to create a simple image (just an image showing a background color) and then send it as a JPEG image to the browser. To start, you create an image object with imagecreate, which you use like this:
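$image = imagecreate($x_size, $y_size);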
The x_size and y_size values are in pixels. Here's how we create our first image:
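$image = imagecreate(300, 200);  // 300 x 200 pixels; illustrative size, pick any you like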
To set colors for the image, you use imagecolorallocate, passing it the image you're working with, as well as the red, green, and blue components as values from 0 to 255:
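$color = imagecolorallocate($image, $red, $green, $blue);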
The first time you call imagecolorallocate, this function sets the background color. Subsequent calls set various drawing colors, as we'll see. Here's how we set the background color to light gray (red = 200, green = 200, blue = 200):
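$background_color = imagecolorallocate($image, 200, 200, 200);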
To send a JPEG image back to the browser, you have to tell the browser that you're doing so with the header function to set the image's type, and then you send the image with the imagejpeg function like this (do this before any other output is sent to the browser):
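header("Content-type: image/jpeg");
imagejpeg($image);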
Here are some of the image-creating functions for various image formats:
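imagecreate — creates a new palette-based image
imagecreatefromgif — creates an image from a GIF file
imagecreatefromjpeg — creates an image from a JPEG file
imagecreatefrompng — creates an image from a PNG file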
After sending the image, you can destroy the image object with the imagedestroy function; all this is shown in phpbox.php, Example 1.

Example 1. Displaying an image, phpbox.php
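A minimal version of phpbox.php, assembled from the calls described above (the dimensions are illustrative):

<?php
    // Create the image; the first allocated color becomes the background.
    $image = imagecreate(300, 200);
    $background_color = imagecolorallocate($image, 200, 200, 200);

    // Announce and send the JPEG, then free the image resource.
    header("Content-type: image/jpeg");
    imagejpeg($image);
    imagedestroy($image);
?>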
The results appear in Figure 1, where you can see our image, which is simply all background color. Cool, a JPEG image created on the server!

Figure 1. Displaying an image.
Chapter 9. Getting PHP to Talk to MySQL

Now that you're comfortable using the MySQL client tools to manipulate data in the database, you can begin using PHP to display and modify data from the database. PHP has standard functions for working with the database. First, we're going to discuss PHP's built-in database functions. We'll also show you how to use the PEAR database functions, which provide the ability to use the same functions to access any supported database. This type of flexibility comes from a process called abstraction. In this chapter, you'll learn how to connect to a MySQL server from PHP, how to use PHP to access and retrieve stored data, and how to correctly display information to the user.
15.2 Transforming XML into HTML

You may also have heard about the Extensible Stylesheet Language for Transformations (XSLT), a language for transforming XML documents into other formats. In a web application, XSLT can transform XML data into HTML before it's sent to the browser, as in Example 15-2.

Example 15-2. Transforming XML to HTML (phone_html.jsp)

<%@ page contentType="text/html" %>

At the top of the page, the taglib directive for the JSTL XML library makes the XML actions available in the page. To transform the XML data, you first need to get hold of the XSLT stylesheet.
The <c:import> action is very versatile. You can use it to read resources from the same application, as here, as well as from other applications or external servers. The transformation itself is performed by a JSTL action named <x:transform>.
The XML document to transform can be specified as the body, as in Example 15-2, or through an attribute. Descriptions of all the XSLT elements would fill a book of their own, but the stylesheet in Example 15-3 shows the general structure.

Example 15-3. XSL stylesheet that generates an HTML table (htmltable.xsl)

<?xml version="1.0"?>

The XSLT elements are similar to JSP action elements in that they are instructions interpreted by a processor (here, the XSLT processor), while other content around them is copied through to the result. An XSLT stylesheet can use parameters to represent dynamic data, declared with <xsl:param> elements:

<?xml version="1.0"?>

The parameter in this example limits the set of entries included in the result. To pass the parameter value to the XSLT stylesheet, you must use a nested <x:param> action:

<x:transform xslt="${stylesheet}">

Here I pass on a request parameter value to the stylesheet, but you can set the parameter value with any expression. XML documents, including XSLT stylesheets, can contain references to external entities, such as DTDs and other documents.
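Putting the pieces together, a page using this technique has roughly the following shape (a sketch: the JSTL 1.0 taglib URIs, the file names, and the parameter name are assumptions based on the examples above; JSTL 1.1 renames the xml attribute to doc):

<%@ page contentType="text/html" %>
<%@ taglib prefix="c" uri="http://java.sun.com/jstl/core" %>
<%@ taglib prefix="x" uri="http://java.sun.com/jstl/xml" %>

<%-- Read the XML document and the XSLT stylesheet --%>
<c:import url="phone.xml" var="xmlDoc" />
<c:import url="htmltable.xsl" var="stylesheet" />

<%-- Transform the document, passing a request parameter to the stylesheet --%>
<x:transform xml="${xmlDoc}" xslt="${stylesheet}">
  <x:param name="filter" value="${param.filter}" />
</x:transform>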
35.7. Input Routing

Ingress IP packets for which no route can be found in the cache by ip_route_input are checked against the routing tables by ip_route_input_slow, which is defined in net/ipv4/route.c and whose logic is shown in Figures 35-5(a) and 35-5(b). In this section, we describe the internals of this routine in detail.

Figure 35-5a. ip_route_input_slow function

Figure 35-5b. ip_route_input_slow function

The function starts with a few sanity checks on the source and destination addresses; for instance, the source IP address must not be a multicast address. I already listed most of those checks in the section "Verbose Monitoring" in Chapter 31. More sanity checks are done later in the function. The routing table lookup is done with fib_lookup, the routine introduced in the section "fib_lookup Function." If fib_lookup cannot find a matching route, the packet is dropped; additionally, if the receiving interface is configured with forwarding enabled, an ICMP_UNREACHABLE message is sent back to the source. Note that the ICMP message is sent not by ip_route_input_slow but by its caller, which takes care of it upon seeing a return value of RTN_UNREACHABLE. In case of success, ip_route_input_slow distinguishes the following three cases:
In the first two cases, the packet is to be delivered locally, and in the third, it needs to be forwarded. The details of how local delivery and forwarding are handled can be found in the sections "Local Delivery" and "Forwarding." Here are some of the tasks they both need to take care of:
35.7.1. Creation of a Cache Entry

I said already in the section "Cache Lookup" in Chapter 33 that ip_route_input (and therefore ip_route_input_slow, in case of a cache miss) can be called just to consult the routing table, not necessarily to route an ingress packet. Because of that, ip_route_input_slow does not always create a new cache entry. When invoked from IP or an L4 protocol (such as IP over IP), the function always creates a cache entry. Currently, the only other possibility is invocation by ARP. Routes generated by ARP are cached only when they would be valid for proxy ARP. See the section "Processing ARPOP_REQUEST Packets" in Chapter 28. The new entry is allocated with dst_alloc. Of particular importance are the following initializations for the new cache entry:
35.7.2. Preferred Source Address Selection

The route added to the routing cache is unidirectional, meaning that it will not be used to route traffic in the reverse direction, toward the source IP address of the packet being routed. However, in some cases, the reception of a packet can trigger an action that requires the local host to choose a source IP address that it can use when transmitting a packet back to the sender. This address, the preferred source IP address, must be saved with the routing cache entry that routed the ingress packet. Here are two cases where that address, which is saved in a field called rt_spec_dst, comes in handy:
The preferred source is selected through the fib_validate_source function mentioned in the section "Helper Routines" and called by ip_route_input_slow. ip_route_input_slow initializes the preferred source IP address rt_spec_dst based on the destination address of the packet being routed.

Suppose, for example, that an administrator installs a route with a command of the following form (the original example command is not preserved here, but it takes this shape):

ip route add 10.0.1.0/24 via 10.0.2.1 src 10.0.3.100

In this example, when transmitting packets to the hosts of the 10.0.1.0/24 subnet, the kernel will use 10.0.3.100 as the source IP address. Of course, only locally configured addresses are accepted: this means that for the previous command to be accepted, 10.0.3.100 must have been configured on one of the local interfaces, but not necessarily on the same device used to reach the 10.0.1.0/24 subnet. (Remember that in Linux, addresses belong to the host, not to the devices; see the section "Responding from Multiple Interfaces" in Chapter 28.) An administrator normally provides a source address when she does not want to use the one that would be picked by default from the egress device. Figure 35-6 summarizes how rt_spec_dst is selected.

35.7.3. Local Delivery

The following types of packets are delivered locally by initializing dst->input appropriately, as we saw in the section "Initialization of Function Pointers for Ingress Traffic":
Figure 35-6. Selection of rt_spec_dst

ip_route_input_slow recognizes two kinds of broadcasts: limited broadcasts (the special address 255.255.255.255), which are recognized without a routing table lookup, and subnet-directed broadcasts, which are identified by the routing table lookup.
ip_route_input_slow accepts broadcasts only if they are generated by the IP protocol. You might think that this is a superfluous check, given that ip_route_input_slow is called to route IP packets. However, as I said in the section "Cache Lookup" in Chapter 33, the input buffer to ip_route_input (and therefore to ip_route_input_slow in case of a cache miss) does not necessarily represent a packet to be routed. If everything goes fine, a new cache entry, rtable, is created, initialized, and inserted into the routing cache. Note that there is no need to handle Multipath for packets that are delivered locally.

35.7.4. Forwarding

If the packet is to be forwarded but the configuration of the ingress device has disabled forwarding, the packet cannot be forwarded and is dropped. If the matching route returned by fib_lookup includes more than one next hop, fib_select_multipath is used to choose among them. When multipath caching is supported, the selection is taken care of differently. The section "Effects of Multipath on Next Hop Selection" describes the algorithm used for the selection. The source address is validated with fib_validate_source. Then, based on the factors we saw in the section "Transmitting ICMP_REDIRECT Messages" in Chapter 31, the kernel may decide to send an ICMP_REDIRECT to the source. In that case, the ICMP message is sent not by ip_route_input_slow directly, but by ip_forward, which takes care of it upon seeing the RTCF_DOREDIRECT flag. As we saw in the section "Creation of a Cache Entry," the result of a routing lookup is not always cached.

35.7.5. Routing Failure

When a packet cannot be routed, either because of host configuration or because no route matches, the new route is added to the cache with dst->input initialized to ip_error. This means that all the ingress packets matching this route will be processed by ip_error. That function, when invoked by dst_input, will generate the proper ICMP_UNREACHABLE message, depending on why the packet cannot be routed, and will drop the packet. Adding the erroneous route to the cache is useful because it can speed up the error processing of further packets sent to the same incorrect address.

ICMP messages are rate limited by ip_error. We already saw in the section "Egress ICMP REDIRECT Rate Limiting" in Chapter 33 that ICMP_REDIRECT messages are also rate limited by the DST. The rate limiting discussed here is independent of the other, but is enforced using the same fields of the dst_entry. This is possible because, given any route, these two forms of rate limiting are mutually exclusive: one applies to ICMP_REDIRECT messages and the other applies to ICMP_UNREACHABLE messages.

Here is how rate limiting is implemented by ip_error with a simple token bucket algorithm. The timestamp dst.rate_last is updated every time ip_error is invoked to generate an ICMP message. dst.rate_tokens specifies how many ICMP messages (also known as the number of tokens, or the budget) can be sent before the rate limiting kicks in and new ICMP_UNREACHABLE transmission requests are ignored. The budget is decremented each time an ICMP_UNREACHABLE message is sent, and is incremented by ip_error itself. The budget cannot exceed the maximum number ip_rt_error_burst, which represents, as its name suggests, the maximum number of ICMP messages a host can send in 1 second (i.e., the burst). Its value is expressed in Hz so that it is easy to add tokens based on the difference between the local time jiffies and dst.rate_last.

Figure 35-7. ip_mkroute_input function

When ip_error is invoked and at least one token is available, the function is allowed to transmit an ICMP_UNREACHABLE message. The ICMP subtype is derived from dst.error, which was initialized by ip_route_input_slow when fib_lookup failed to find a route.
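In outline, the token-bucket logic looks like the following sketch (an illustration of the algorithm just described, written in Java for clarity; it is not the kernel source, and the one-token-per-time-unit refill is a simplification):

// Token bucket as used for ICMP_UNREACHABLE rate limiting: the budget
// grows with elapsed time, is capped at the burst size, and each
// message sent consumes one token.
class TokenBucket {
    private final long burst;  // maximum budget (cf. ip_rt_error_burst)
    private long tokens;       // current budget (cf. dst.rate_tokens)
    private long last;         // last update timestamp (cf. dst.rate_last)

    TokenBucket(long burst, long now) {
        this.burst = burst;
        this.tokens = burst;
        this.last = now;
    }

    // Returns true if a message may be sent at time 'now'.
    boolean allow(long now) {
        tokens = Math.min(burst, tokens + (now - last)); // refill for elapsed time
        last = now;                                      // cf. updating dst.rate_last
        if (tokens >= 1) {
            tokens -= 1;                                 // one token per ICMP message
            return true;
        }
        return false;                                    // budget exhausted: request ignored
    }
}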
All right, we've got a handle on what we need to do if we want our albums' tracks to be kept in the right order. What
about the additional information we'd like to keep, such as the disc on
which the track is found? When we map a collection of associations, we've
seen that Hibernate creates a join table in which to store the
relationships between objects. And we've just seen how to add an index
column to the ALBUM_TRACKS table to
maintain an ordering for the collection. Ideally, we'd like to have the
ability to augment that table with more information of our own choosing,
in order to record the other details we'd like to know about album
tracks.
As it turns out, we can do just that, and in a very straightforward
way.
Up to this point we've seen two ways of getting tables into our
database schema. The first was by explicitly mapping properties of a
Java object onto columns of a table. The second was defining a
collection (of values or associations), and specifying the table and
columns used to manage that collection. As it turns out, there's nothing
that prevents us from using a single table in both ways. Some of its
columns can be used directly to map to our own objects' properties,
while the others can manage the mapping of a collection. This lets us
achieve our goals of recording the tracks that make up an album in an
ordered way, augmented by additional details to support multidisc
albums.
NOTE
This flexibility takes a little getting used to, but it makes
sense, especially if you think about mapping objects to an existing
database schema.
We'll want a new data object, AlbumTrack,
to contain information about how a track is used on an album. Since
we've already seen several examples of how to map full-blown entities
with independent existence, and there really isn't a need for our
AlbumTrack object to exist outside
the context of an Album entity, this is a good
opportunity to look at mapping a
component. Recall that in Hibernate jargon an entity is an object
that stands on its own in the persistence mechanism: it can be created,
queried, and deleted independently of any other objects, and therefore
has its own persistent identity (as reflected by its mandatory id property). A component, in contrast, is an
object that can be saved to and retrieved from the database, but only as
a subordinate part of some other entity. In this case, we'll define a
list of AlbumTrack objects as a component part of
our Album entity. Example 5-4 shows a
mapping for the Album class that achieves
this.
Code View: <?xml version="1.0"?> |
Once we've created the file Album.hbm.xml, we need to add it to the list
of mapping resources in hibernate.cfg.xml. Open up the hibernate.cfg.xml file in src, and add the line
highlighted in bold in Example 5-5.
<?xml version='1.0' encoding='utf-8'?> |
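The line to add looks like the following (the exact resource path is an assumption; use the path where your mapping files actually live):

<mapping resource="com/oreilly/hh/data/Album.hbm.xml"/>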
A lot of this is similar to mappings we've seen before, but the
tracks list is worth some careful
examination. The discussion gets involved, so let's step back a minute
and recall exactly what we're trying to accomplish.
We want our album to keep an ordered list of the tracks that make
it up, along with additional information about each track that tells
which disc it's on (in case the album has multiple discs) and the
track's position within the disc. This conceptual relationship is shown
in the middle of Figure 5-1. The association between albums and tracks
is mediated by an "Album Tracks" object that adds disc and position
information, as well as keeping them in the right order. The model of
the tracks themselves is familiar (we're leaving out artist and comment
information in this diagram, in an effort to keep it simple). This model
is what we've captured in the album mapping document, Example 5-4. Let's examine
the details of how it was done. Later we'll look at how Hibernate turns
this specification into Java code (the bottom part of Figure 5-1) and a
database schema (the top part).
All right, armed with this reminder and elaboration of the
conceptual framework, we're ready to look at the details of Example 5-4:
This mapping achieves the goals with which we started and
illustrates how arbitrary information can be attached to a collection of
associations.
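Since the full mapping document appears in Example 5-4, here is only the general shape the tracks portion of such a mapping takes (a sketch consistent with the schema discussed below; the property names disc and positionOnDisc are assumptions):

<list name="tracks" table="ALBUM_TRACKS">
  <key column="ALBUM_ID"/>
  <index column="LIST_POS"/>
  <composite-element class="com.oreilly.hh.data.AlbumTrack">
    <many-to-one name="track" class="com.oreilly.hh.data.Track"
                 column="TRACK_ID"/>
    <property name="disc" type="integer"/>
    <property name="positionOnDisc" type="integer"/>
  </composite-element>
</list>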
The source for the component class itself can be found in Example 5-6, and it might
help clarify this discussion. Compare this source code with its
graphical representation at the bottom of Figure 5-1.
You may have noticed that we chose an explicit column name
of TRACK_ID to use for
the <many-to-one> link to the TRACK table. We've actually been doing this in
a number of places, but previously it didn't require an entire separate
line. It's worth talking about the reasoning behind this choice. Without
this instruction, Hibernate will just use the property name (track) for the column name. You can use any
names you want for your columns, but Java
Database Best Practices encourages naming foreign key columns
the same as the primary keys in the original tables to which they refer.
This helps data modeling tools recognize and display the "natural joins"
the foreign keys represent, which makes it easier for people to
understand and work with the data. This consideration is also why I
included the table names as part of the primary keys' column
names.
I was all set to explain that by choosing to use a composite
element to encapsulate our augmented track list, we'd have to write the
Java source for AlbumTrack ourselves. I was sure
this went far beyond the capabilities of the code generation tool. Much
to my delight, when I tried ant
codegen to see what sort of errors would result, the command
reported success, and both Album.java and AlbumTrack.java appeared in the source
directory!
NOTE
Sometimes it's nice to be proved
wrong.
It was at this point that I went back and added the <use-in-tostring> meta element for the track's many-to-one mapping inside the
component. I wasn't sure this would work either, because the only
examples of its use I'd found in the reference manual were attached to
actual <property> tags. But work it
did, exactly as I had hoped.
The Hibernate best practices encourage using fine-grained classes
and mapping them as components. Given how easily the code generation
tool allows you to create them from your mapping documents, there is
absolutely no excuse for ignoring this advice. Example 5-6 shows the
source generated for our nested composite mapping.
Code View: package com.oreilly.hh.data; |
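In outline, the generated class looks like this (an abridged sketch; the disc and position property names follow the mapping sketch shown earlier and are assumptions, and the generated boilerplate is trimmed):

package com.oreilly.hh.data;

/**
 * Represents an album in the music database, an organized list of tracks.
 * (This JavaDoc is shared with the Album class, in which the component is used.)
 */
public class AlbumTrack {

    private Track track;        // the underlying Track entity
    private int disc;           // which disc of the album holds the track
    private int positionOnDisc; // the track's position within that disc

    public AlbumTrack() {
    }

    public Track getTrack() { return track; }
    public void setTrack(Track track) { this.track = track; }

    public int getDisc() { return disc; }
    public void setDisc(int disc) { this.disc = disc; }

    public int getPositionOnDisc() { return positionOnDisc; }
    public void setPositionOnDisc(int positionOnDisc) { this.positionOnDisc = positionOnDisc; }

    // toString() generated thanks to the <use-in-tostring> meta element
    public String toString() {
        return "AlbumTrack(" + track + ", disc " + disc
               + ", position " + positionOnDisc + ")";
    }
}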
This looks similar to the generated code for entities we've seen
in previous chapters, but it lacks an id property, which makes
sense. Component classes don't need identifier fields, and they need not
implement any special interfaces. The class JavaDoc is shared with the
Album class, in which this component is used. The
source of the Album class itself is a typical
generated entity, so there's no need to reproduce it here.
At this point we can build the schema for these new mappings, via
ant schema. Example 5-7 shows
highlights of the resulting schema creation process. This is the
concrete HSQLDB representation of the schema modeled at the top of Figure 5-1.
... |
Figure 5-2 shows our enriched schema in HSQLDB's
graphical management interface.
You might wonder why we use the separate
Track class at all, rather than simply embedding
all that information directly in our enhanced
AlbumTrack collection. The simple answer is that
not all tracks are part of an album—some might be singles, downloads, or
otherwise independent. Given that we need a separate table to keep track
of these anyway, it would be a poor design choice to duplicate its
contents in the AlbumTracks table rather than
associating with it. There is also a more subtle advantage to this
approach, which is actually used in my own music database: this
structure allows us to share a single track file between multiple
albums. If the exact same recording appears on an album, a "best of"
collection, and one or more period collections or sound tracks, linking
all these albums to the same track file saves disk space.
Another point worth noting about the ALBUM_TRACK schema is that there is no obvious
ID column. If you look back at the
schema definition Hibernate emitted for
ALBUM_TRACK in Example 5-7, you'll see the phrase primary key
(ALBUM_ID, LIST_POS). Hibernate has noticed that, given the
relationships we've requested in Album.hbm.xml, a row in the ALBUM_TRACK table can be uniquely identified
by a combination of the ID of the Album with
which it's associated and the index within the list it's modeling, so it
has set these up as a composite key
for the table. This is a nice little optimization we didn't even have to
think about. Also notice that one of those columns is a property of the
AlbumTrack class while the other is not. We'll
look at a slightly different way to model this relationship in Chapter 7.
Let's look at some sample code showing how to use these new data
objects. Example 5-8 shows
a class that creates an album record and its list of tracks, then prints
it out to test the debugging support that we've configured through the
toString( ) method.
Code View: package com.oreilly.hh; |
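In outline, the test class does something like the following (a sketch: the session setup and setter-based construction are assumptions patterned on earlier chapters, not the book's exact listing):

// Persist an album with an ordered, one-disc track list.
Session session = sessionFactory.openSession(); // factory opened as in earlier examples
Transaction tx = session.beginTransaction();

Album album = new Album();
album.setTitle("White Ladder");                 // illustrative title
album.setTracks(new ArrayList());

AlbumTrack first = new AlbumTrack();
first.setTrack(track1);                         // track1: a Track already in the session
first.setDisc(1);
first.setPositionOnDisc(1);
album.getTracks().add(first);                   // position in the list becomes LIST_POS

session.save(album);
tx.commit();
session.close();
System.out.println(album);                      // exercises the generated toString()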
In this simple example we're creating an album with just one disc.
This quick-and-dirty method can't cope with many variations, but it does
allow the example to be compressed nicely.
We also need a new target at the end of build.xml to invoke the class. Add the lines
of Example 5-9 at the
end of the file (but inside the <project> element, of course).
<target name="atest" description="Creates and persists some album data" |
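Filled in, the target looks something like this (the class name and classpath reference are assumptions patterned on the build file's earlier targets):

<target name="atest" description="Creates and persists some album data"
        depends="compile">
  <java classname="com.oreilly.hh.AlbumTest" fork="true">
    <classpath refid="project.class.path"/>
  </java>
</target>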
With this in place, assuming you've generated the schema, run
ant ctest followed by ant atest. (Running ctest first is optional, but having some
extra data in there to begin with makes the album data somewhat more
interesting. Recall that you can run these targets in one command as
ant ctest atest, and if you want to
start by erasing the contents of the database first, you can invoke
ant schema ctest atest.) The
debugging output produced by this command is shown in Example 5-10. Although
admittedly cryptic, you should be able to see that the album and tracks
have been created, and the order of the tracks has been
maintained.
atest: |
If we run our old query test, we can see both the old and new
data, as in Example 5-11.
% ant qtest |
Finally, Figure 5-3 shows a query in the HSQLDB interface that
examines the contents of the ALBUM_TRACKS table.
…deleting, rearranging, and otherwise manipulating these
interrelated pieces of information? This is actually supported
fairly automatically, as the next section
illustrates.
13.4 Using SSL from within J2EE Programs

Using the SSL support built into a J2EE product is a simple and effective approach, but in some cases it may be limiting, because the communication between the client and the server can use only the capabilities offered by the J2EE container. A direct SSL socket connection between client and server allows more sophisticated and responsive applications. For example, a Java servlet can communicate through an SSL connection and interoperate with other services and processes that are not necessarily written in the Java language. This can be done by using a Java package that provides SSL function.

13.4.1 JSSE

The Java Secure Socket Extension (JSSE) is a set of Java packages and APIs that enable the use of SSL from within Java programs. JSSE provides a framework and an implementation for the SSL and TLS protocols and includes functionality for data encryption, server authentication, message integrity, and optional client authentication. Using JSSE, developers can enforce secure passage of data between a client and a server running any application protocol over TCP/IP, such as HTTP, TELNET, or FTP.
From a programmatic point of view, the main advantage of using JSSE is that a programmer does not have to bother with the details of the record and handshake protocols and with the encryption and decryption of the information exchanged. The underlying JSSE implementation takes care of those details at runtime. This way, the risk of creating subtle but dangerous security vulnerabilities is minimized. Moreover, JSSE simplifies application development by providing developers with a building block that can be integrated directly into existing applications. As SSL and TLS are standard protocols, Java programs using JSSE can communicate via SSL with other processes and services that may not have been written in the Java language.

The JSSE design resembles that of the Java Cryptography Architecture and Java Cryptography Extension (see Section 11.1 on page 377). Like the JCA and the JCE, JSSE provides both an API framework and an implementation of that API and is designed so that any vendor's implementation can be installed and used without having to recompile the application. JSSE uses the same provider architecture defined in the JCA and the JCE (see Section 11.1.3 on page 382). This enables applications to be vendor neutral, implementation independent, and, whenever possible, algorithm independent.

The JSSE API consists of the package javax.net and its subpackage javax.net.ssl. This API supplements the core network and cryptographic services defined in the java.net and java.security packages by providing extended networking socket classes, trust managers (see Section 13.4.2), key managers, SSL contexts, and a socket-factory framework for encapsulating socket-creation behavior. This API supports SSL V2.0 and V3.0, TLS V1.0, and HTTPS. These security protocols encapsulate a normal bidirectional stream socket, and the JSSE API adds transparent support for authentication, encryption, and data-integrity protection. In Section 4.10.1 on page 145, we studied how servlets can become HTTPS clients by invoking an HTTPS URL programmatically. In Section 13.5 on page 462, we show how a servlet can use the JSSE API.

13.4.2 Trust Managers

A trust manager is a software component responsible for deciding whether the credentials presented by a peer during an authentication process should be considered trusted. In JSSE, trust managers are represented as implementations of the javax.net.ssl.TrustManager interface. TrustManager instances are created either by using a javax.net.ssl.TrustManagerFactory object or by instantiating a class implementing the TrustManager interface.

13.4.3 Truststores

A truststore is a regular keystore that is used when authenticating a peer to decide whether the peer should be trusted. In JSSE, truststores are represented as java.security.KeyStore objects (see Section 11.2.7 on page 395). As the entries in a truststore are used to make trust decisions during an authentication process, an entry should be added to a truststore only by a system administrator and only if the entity represented by that entry is considered trusted. The J2SE reference implementation comes with a default truststore file called cacerts, located in the lib/security subdirectory of the Java home directory. This file contains the digital certificates of common CAs, such as VeriSign and Thawte. The default password to access and modify this truststore is changeit.
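To make this concrete, here is a minimal JSSE client (a sketch: the host name and port are placeholders, and it relies on the default socket factory, provider, and truststore):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class SSLClientSketch {
    public static void main(String[] args) throws Exception {
        // The default factory uses the default provider and truststore (e.g., cacerts).
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();

        // Placeholder host and port for an SSL-enabled server.
        SSLSocket socket = (SSLSocket) factory.createSocket("www.example.com", 443);
        socket.startHandshake(); // server authentication happens during the handshake

        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
        BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));

        out.println("GET / HTTP/1.0");
        out.println();
        System.out.println(in.readLine()); // first line of the server's response

        socket.close();
    }
}

All the record-protocol and handshake details, including encryption and integrity protection, are handled by the JSSE implementation underneath the socket API.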
Users of a Java system can specify a different truststore by setting the system property javax.net.ssl.trustStore to the truststore file of their preference. This can be done programmatically, through a call to java.lang.System.setProperty(), or statically, using the -Djavax.net.ssl.trustStore option of the java command.

Often, it is useful to keep regular keystore files separated from truststore files. A regular keystore file contains private information, such as an entity's own certificates and corresponding private keys, whereas a truststore contains public information, such as that entity's trusted certificate entries, including CA certificates. Using two different files instead of a single keystore file provides for a cleaner separation between an entity's own certificate and key entries and other entities' certificates. This physical separation, which reflects the logical distinction between private- and public-key material, gives system administrators more flexibility in managing the security of the system. For example, a truststore file could be made write-protected so that only system administrators are allowed to add entries to it. Conversely, a user's keystore can be write-accessible to its owner.
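To point a JVM at a different truststore, for example (the path and application class name are illustrative):

// Programmatically, before the first SSL connection is created:
System.setProperty("javax.net.ssl.trustStore", "/home/admin/mytruststore");

Or statically, when launching the application:

java -Djavax.net.ssl.trustStore=/home/admin/mytruststore MyApp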