Monday, November 9, 2009

Font Enumeration

Font enumeration is the process of obtaining from GDI a list of all fonts available on a device. A program can then select one of these fonts or display them in a dialog box for selection by the user. I'll first briefly describe the enumeration functions and then show how to use the ChooseFont function, which fortunately makes font enumeration much less necessary for an application.

The Enumeration Functions


In the old days of Windows, font enumeration required use of the EnumFonts function:


EnumFonts (hdc, szTypeFace, EnumProc, pData) ;


A program could enumerate all fonts (by setting the second argument to NULL) or just those of a particular typeface. The third argument is an enumeration callback function; the fourth argument is optional data passed to that function. GDI calls the callback function once for each font in the system, passing to it both LOGFONT and TEXTMETRIC structures that define the font, plus some flags indicating the type of font.

The EnumFontFamilies function was designed to better enumerate TrueType fonts under Windows 3.1:


EnumFontFamilies (hdc, szFaceName, EnumProc, pData) ;


Generally, EnumFontFamilies is called first with a NULL second argument. The EnumProc callback function is called once for each font family (such as Times New Roman). Then the application calls EnumFontFamilies again with that typeface name and a different callback function. GDI calls the second callback function for each font in the family (such as Times New Roman Italic). The callback function is passed an ENUMLOGFONT structure (which is a LOGFONT structure plus a "full name" field and a "style" field containing, for example, the text name "Italic" or "Bold") and either a TEXTMETRIC structure (for non-TrueType fonts) or a NEWTEXTMETRIC structure (for TrueType fonts). The NEWTEXTMETRIC structure adds four fields to the information in the TEXTMETRIC structure.

The EnumFontFamiliesEx function is recommended for applications running under the 32-bit versions of Windows:


EnumFontFamiliesEx (hdc, &logfont, EnumProc, pData, dwFlags) ;


The second argument is a pointer to a LOGFONT structure for which the lfCharSet and lfFaceName fields indicate what fonts are to be enumerated. The callback function gets information about each font in the form of ENUMLOGFONTEX and NEWTEXTMETRICEX structures.
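
As a hedged illustration (this is not one of the chapter's program listings, and the function and variable names are invented for the example), the following minimal sketch uses EnumFontFamiliesEx to count every font entry that GDI reports for all character sets. For TrueType fonts, the callback's LOGFONT and TEXTMETRIC pointers actually reference ENUMLOGFONTEX and NEWTEXTMETRICEX data:

#include <windows.h>

int CALLBACK EnumCallback (const LOGFONT * plf, const TEXTMETRIC * ptm,
                           DWORD dwFontType, LPARAM lParam)
{
     // For TrueType fonts, plf points to an ENUMLOGFONTEX and ptm to a
     // NEWTEXTMETRICEX; here we only count the enumerated entries.
     (* (int *) lParam)++ ;
     return 1 ;                         // nonzero continues the enumeration
}

int CountFonts (HDC hdc)
{
     LOGFONT lf = { 0 } ;
     int     cFonts = 0 ;

     lf.lfCharSet = DEFAULT_CHARSET ;   // enumerate all character sets
     lf.lfFaceName[0] = '\0' ;          // and all typefaces

     EnumFontFamiliesEx (hdc, &lf, (FONTENUMPROC) EnumCallback,
                         (LPARAM) &cFonts, 0) ;
     return cFonts ;
}

Returning a nonzero value from the callback continues the enumeration; returning 0 stops it.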

The ChooseFont Dialog


We had a little introduction to the ChooseFont common dialog box back in Chapter 11. Now that we've encountered font enumeration, the inner workings of the ChooseFont function should be obvious. The ChooseFont function takes a pointer to a CHOOSEFONT structure as its only argument and displays a dialog box listing all the fonts. On return from ChooseFont, the LOGFONT structure referenced by the CHOOSEFONT structure lets you create a logical font.

The CHOSFONT program, shown in Figure 17-7, demonstrates using the ChooseFont function and displays the fields of the LOGFONT structure that the function defines. The program also displays the same string of text as PICKFONT.

Figure 17-7. The CHOSFONT program.



CHOSFONT.C


/*-----------------------------------------
CHOSFONT.C -- ChooseFont Demo
(c) Charles Petzold, 1998
-----------------------------------------*/

#include <windows.h>
#include "resource.h"

LRESULT CALLBACK WndProc (HWND, UINT, WPARAM, LPARAM) ;

int WINAPI WinMain (HINSTANCE hInstance, HINSTANCE hPrevInstance,
PSTR szCmdLine, int iCmdShow)
{
static TCHAR szAppName[] = TEXT ("ChosFont") ;
HWND hwnd ;
MSG msg ;
WNDCLASS wndclass ;

wndclass.style = CS_HREDRAW | CS_VREDRAW ;
wndclass.lpfnWndProc = WndProc ;
wndclass.cbClsExtra = 0 ;
wndclass.cbWndExtra = 0 ;
wndclass.hInstance = hInstance ;
wndclass.hIcon = LoadIcon (NULL, IDI_APPLICATION) ;
wndclass.hCursor = LoadCursor (NULL, IDC_ARROW) ;
wndclass.hbrBackground = (HBRUSH) GetStockObject (WHITE_BRUSH) ;
wndclass.lpszMenuName = szAppName ;
wndclass.lpszClassName = szAppName ;

if (!RegisterClass (&wndclass))
{
MessageBox (NULL, TEXT ("This program requires Windows NT!"),
szAppName, MB_ICONERROR) ;
return 0 ;
}

hwnd = CreateWindow (szAppName, TEXT ("ChooseFont"),
WS_OVERLAPPEDWINDOW,
CW_USEDEFAULT, CW_USEDEFAULT,
CW_USEDEFAULT, CW_USEDEFAULT,
NULL, NULL, hInstance, NULL) ;

ShowWindow (hwnd, iCmdShow) ;
UpdateWindow (hwnd) ;
while (GetMessage (&msg, NULL, 0, 0))
{
TranslateMessage (&msg) ;
DispatchMessage (&msg) ;
}
return msg.wParam ;
}

LRESULT CALLBACK WndProc (HWND hwnd, UINT message, WPARAM wParam, LPARAM lParam)
{
static CHOOSEFONT cf ;
static int cyChar ;
static LOGFONT lf ;
static TCHAR szText[] = TEXT ("\x41\x42\x43\x44\x45 ")
TEXT ("\x61\x62\x63\x64\x65 ")

TEXT ("\xC0\xC1\xC2\xC3\xC4\xC5 ")
TEXT ("\xE0\xE1\xE2\xE3\xE4\xE5 ")
#ifdef UNICODE
TEXT ("\x0390\x0391\x0392\x0393\x0394\x0395 ")
TEXT ("\x03B0\x03B1\x03B2\x03B3\x03B4\x03B5 ")

TEXT ("\x0410\x0411\x0412\x0413\x0414\x0415 ")
TEXT ("\x0430\x0431\x0432\x0433\x0434\x0435 ")

TEXT ("\x5000\x5001\x5002\x5003\x5004")
#endif
;
HDC hdc ;
int y ;
PAINTSTRUCT ps ;
TCHAR szBuffer [64] ;
TEXTMETRIC tm ;

switch (message)
{
case WM_CREATE:

// Get text height

cyChar = HIWORD (GetDialogBaseUnits ()) ;

// Initialize the LOGFONT structure

GetObject (GetStockObject (SYSTEM_FONT), sizeof (lf), &lf) ;

// Initialize the CHOOSEFONT structure
cf.lStructSize = sizeof (CHOOSEFONT) ;
cf.hwndOwner = hwnd ;
cf.hDC = NULL ;
cf.lpLogFont = &lf ;
cf.iPointSize = 0 ;
cf.Flags = CF_INITTOLOGFONTSTRUCT |
CF_SCREENFONTS | CF_EFFECTS ;
cf.rgbColors = 0 ;
cf.lCustData = 0 ;
cf.lpfnHook = NULL ;
cf.lpTemplateName = NULL ;
cf.hInstance = NULL ;
cf.lpszStyle = NULL ;
cf.nFontType = 0 ;
cf.nSizeMin = 0 ;
cf.nSizeMax = 0 ;
return 0 ;

case WM_COMMAND:
switch (LOWORD (wParam))
{
case IDM_FONT:
if (ChooseFont (&cf))
InvalidateRect (hwnd, NULL, TRUE) ;
return 0 ;
}
return 0 ;

case WM_PAINT:
hdc = BeginPaint (hwnd, &ps) ;

// Display sample text using selected font

SelectObject (hdc, CreateFontIndirect (&lf)) ;
GetTextMetrics (hdc, &tm) ;
SetTextColor (hdc, cf.rgbColors) ;
TextOut (hdc, 0, y = tm.tmExternalLeading, szText, lstrlen (szText)) ;

// Display LOGFONT structure fields using system font

DeleteObject (SelectObject (hdc, GetStockObject (SYSTEM_FONT))) ;
SetTextColor (hdc, 0) ;

TextOut (hdc, 0, y += tm.tmHeight, szBuffer,
wsprintf (szBuffer, TEXT ("lfHeight = %i"), lf.lfHeight)) ;
TextOut (hdc, 0, y += cyChar, szBuffer,
wsprintf (szBuffer, TEXT ("lfWidth = %i"), lf.lfWidth)) ;

TextOut (hdc, 0, y += cyChar, szBuffer,
wsprintf (szBuffer, TEXT ("lfEscapement = %i"),
lf.lfEscapement)) ;

TextOut (hdc, 0, y += cyChar, szBuffer,
wsprintf (szBuffer, TEXT ("lfOrientation = %i"),
lf.lfOrientation)) ;

TextOut (hdc, 0, y += cyChar, szBuffer,
wsprintf (szBuffer, TEXT ("lfWeight = %i"), lf.lfWeight)) ;

TextOut (hdc, 0, y += cyChar, szBuffer,
wsprintf (szBuffer, TEXT ("lfItalic = %i"), lf.lfItalic)) ;

TextOut (hdc, 0, y += cyChar, szBuffer,
wsprintf (szBuffer, TEXT ("lfUnderline = %i"), lf.lfUnderline)) ;

TextOut (hdc, 0, y += cyChar, szBuffer,
wsprintf (szBuffer, TEXT ("lfStrikeOut = %i"), lf.lfStrikeOut)) ;

TextOut (hdc, 0, y += cyChar, szBuffer,
wsprintf (szBuffer, TEXT ("lfCharSet = %i"), lf.lfCharSet)) ;

TextOut (hdc, 0, y += cyChar, szBuffer,
wsprintf (szBuffer, TEXT ("lfOutPrecision = %i"),
lf.lfOutPrecision)) ;

TextOut (hdc, 0, y += cyChar, szBuffer,
wsprintf (szBuffer, TEXT ("lfClipPrecision = %i"),
lf.lfClipPrecision)) ;

TextOut (hdc, 0, y += cyChar, szBuffer,
wsprintf (szBuffer, TEXT ("lfQuality = %i"), lf.lfQuality)) ;

TextOut (hdc, 0, y += cyChar, szBuffer,
wsprintf (szBuffer, TEXT ("lfPitchAndFamily = 0x%02X"),
lf.lfPitchAndFamily)) ;

TextOut (hdc, 0, y += cyChar, szBuffer,
wsprintf (szBuffer, TEXT ("lfFaceName = %s"), lf.lfFaceName)) ;

EndPaint (hwnd, &ps) ;
return 0 ;
case WM_DESTROY:
PostQuitMessage (0) ;
return 0 ;
}
return DefWindowProc (hwnd, message, wParam, lParam) ;
}




CHOSFONT.RC


//Microsoft Developer Studio generated resource script.

#include "resource.h"
#include "afxres.h"

/////////////////////////////////////////////////////////////////////////////
// Menu

CHOSFONT MENU DISCARDABLE
BEGIN
MENUITEM "&Font!", IDM_FONT
END




RESOURCE.H


// Microsoft Developer Studio generated include file.
// Used by ChosFont.rc

#define IDM_FONT 40001



As usual with the common dialog boxes, a Flags field in the CHOOSEFONT structure lets you pick lots of options. The CF_INITTOLOGFONTSTRUCT flag that CHOSFONT specifies causes Windows to initialize the dialog box selection based on the LOGFONT structure referenced by the CHOOSEFONT structure. You can use flags to specify TrueType fonts only (CF_TTONLY), fixed-pitch fonts only (CF_FIXEDPITCHONLY), or no symbol fonts (CF_SCRIPTSONLY). You can display screen fonts (CF_SCREENFONTS), printer fonts (CF_PRINTERFONTS), or both (CF_BOTH). In the latter two cases, the hDC field of the CHOOSEFONT structure must reference a printer device context. The CHOSFONT program uses the CF_SCREENFONTS flag.
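
For instance, a program that also wanted printer fonts listed might fill in the structure roughly as follows. This is only a sketch: hdcPrinter stands for a printer device context obtained elsewhere (for example from the PrintDlg common dialog), and the function name is invented for the example.

#include <windows.h>

BOOL SelectScreenOrPrinterFont (HWND hwnd, HDC hdcPrinter, LOGFONT * plf)
{
     CHOOSEFONT cf = { 0 } ;

     cf.lStructSize = sizeof (CHOOSEFONT) ;
     cf.hwndOwner   = hwnd ;
     cf.hDC         = hdcPrinter ;      // required for CF_PRINTERFONTS / CF_BOTH
     cf.lpLogFont   = plf ;
     cf.Flags       = CF_INITTOLOGFONTSTRUCT | CF_BOTH ;

     return ChooseFont (&cf) ;          // on success, plf describes the selection
}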

The CF_EFFECTS flag (the third flag that the CHOSFONT program uses) forces the dialog box to include check boxes for underlining and strikeout and also allows the selection of a text color. It's not hard to implement text color in your code; CHOSFONT does so when it displays the sample text.

Notice the Script field in the Font dialog displayed by ChooseFont. This lets the user select a character set available for the particular font; the appropriate character set ID is returned in the LOGFONT structure.

The ChooseFont function uses the logical inch to calculate the lfHeight field from the point size. For example, suppose you have Small Fonts installed from the Display Properties dialog. That means that GetDeviceCaps with a video display device context and the argument LOGPIXELSY returns 96. If you use ChooseFont to choose a 72-point Times Roman Font, you really want a 1-inch tall font. When ChooseFont returns, the lfHeight field of the LOGFONT structure will equal -96 (note the minus sign), meaning that the point size of the font is equivalent to 96 pixels, or one logical inch.

Good. That's probably what we want. But keep the following in mind:



  • If you set one of the metric mapping modes under Windows NT, logical coordinates will be inconsistent with the physical size of the font. For example, if you draw a ruler next to the text based on a metric mapping mode, it will not match the font. You should use the Logical Twips mapping mode described above to draw graphics that are consistent with the font size.

  • If you're going to be using any non-MM_TEXT mapping mode, make sure the mapping mode is not set when you select the font into the device context and display the text. Otherwise, GDI will interpret the lfHeight field of the LOGFONT structure as being expressed in logical coordinates.

  • The lfHeight field of the LOGFONT structure set by ChooseFont is always in pixels, and it is only appropriate for the video display. When you create a font for a printer device context, you must adjust the lfHeight value. The ChooseFont function uses the hDC field of the CHOOSEFONT structure only for obtaining printer fonts to be listed in the dialog box. This device context handle does not affect the value of lfHeight.


Fortunately, the CHOOSEFONT structure includes an iPointSize field that provides the size of the selected font in units of 1/10 of a point. Regardless of the device context and mapping mode, you can always convert this field to a logical size and use that for the lfHeight field. The appropriate code can be found in the EZFONT.C file. You can probably simplify it based on your needs.
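
As a rough sketch of that calculation (the function name here is invented, and it assumes the device context is in MM_TEXT; for other mapping modes you would convert the result to logical units, for example with DPtoLP), iPointSize can be turned into an lfHeight value for any device like this:

#include <windows.h>

LONG HeightFromChooseFont (HDC hdc, int iPointSize)     // iPointSize in 1/10 point
{
     // 72 points per inch, so 720 tenths of a point per inch;
     // LOGPIXELSY is the device's pixels per logical inch
     return -MulDiv (iPointSize, GetDeviceCaps (hdc, LOGPIXELSY), 720) ;
}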

Another program that uses ChooseFont is UNICHARS, shown in Figure 17-8. This program lets you view all the characters of a font and is particularly useful for studying the Lucida Sans Unicode font, which it uses by default for display, or the Bitstream CyberBit font. UNICHARS always uses the TextOutW function for displaying the font characters, so you can run it under Windows NT or Windows 98.

Figure 17-8. The UNICHARS program.


UNICHARS.C


/*-----------------------------------------------
UNICHARS.C -- Displays 16-bit character codes
(c) Charles Petzold, 1998
-----------------------------------------------*/

#include <windows.h>
#include "resource.h"

LRESULT CALLBACK WndProc (HWND, UINT, WPARAM, LPARAM) ;

int WINAPI WinMain (HINSTANCE hInstance, HINSTANCE hPrevInstance,
PSTR szCmdLine, int iCmdShow)
{
static TCHAR szAppName[] = TEXT ("UniChars") ;
HWND hwnd ;
MSG msg ;
WNDCLASS wndclass ;

wndclass.style = CS_HREDRAW | CS_VREDRAW ;
wndclass.lpfnWndProc = WndProc ;
wndclass.cbClsExtra = 0 ;
wndclass.cbWndExtra = 0 ;
wndclass.hInstance = hInstance ;
wndclass.hIcon = LoadIcon (NULL, IDI_APPLICATION) ;
wndclass.hCursor = LoadCursor (NULL, IDC_ARROW) ;
wndclass.hbrBackground = (HBRUSH) GetStockObject (WHITE_BRUSH) ;
wndclass.lpszMenuName = szAppName ;
wndclass.lpszClassName = szAppName ;

if (!RegisterClass (&wndclass))
{
MessageBox (NULL, TEXT ("This program requires Windows NT!"),
szAppName, MB_ICONERROR) ;
return 0 ;
}

hwnd = CreateWindow (szAppName, TEXT ("Unicode Characters"),
WS_OVERLAPPEDWINDOW | WS_VSCROLL,
CW_USEDEFAULT, CW_USEDEFAULT,
CW_USEDEFAULT, CW_USEDEFAULT,
NULL, NULL, hInstance, NULL) ;

ShowWindow (hwnd, iCmdShow) ;
UpdateWindow (hwnd) ;
while (GetMessage (&msg, NULL, 0, 0))
{
TranslateMessage (&msg) ;
DispatchMessage (&msg) ;
}
return msg.wParam ;
}

LRESULT CALLBACK WndProc (HWND hwnd, UINT message, WPARAM wParam, LPARAM lParam)
{
static CHOOSEFONT cf ;
static int iPage ;
static LOGFONT lf ;
HDC hdc ;
int cxChar, cyChar, x, y, i, cxLabels ;
PAINTSTRUCT ps ;
SIZE size ;
TCHAR szBuffer [8] ;
TEXTMETRIC tm ;
WCHAR ch ;

switch (message)
{
case WM_CREATE:
hdc = GetDC (hwnd) ;
lf.lfHeight = - GetDeviceCaps (hdc, LOGPIXELSY) / 6 ; // 12 points
lstrcpy (lf.lfFaceName, TEXT ("Lucida Sans Unicode")) ;
ReleaseDC (hwnd, hdc) ;

cf.lStructSize = sizeof (CHOOSEFONT) ;
cf.hwndOwner = hwnd ;
cf.lpLogFont = &lf ;
cf.Flags = CF_INITTOLOGFONTSTRUCT | CF_SCREENFONTS ;

SetScrollRange (hwnd, SB_VERT, 0, 255, FALSE) ;
SetScrollPos (hwnd, SB_VERT, iPage, TRUE ) ;
return 0 ;

case WM_COMMAND:
switch (LOWORD (wParam))
{
case IDM_FONT:
if (ChooseFont (&cf))
InvalidateRect (hwnd, NULL, TRUE) ;
return 0 ;
}
return 0 ;
case WM_VSCROLL:
switch (LOWORD (wParam))
{
case SB_LINEUP: iPage -= 1 ; break ;
case SB_LINEDOWN: iPage += 1 ; break ;
case SB_PAGEUP: iPage -= 16 ; break ;
case SB_PAGEDOWN: iPage += 16 ; break ;
case SB_THUMBPOSITION: iPage = HIWORD (wParam) ; break ;

default:
return 0 ;
}

iPage = max (0, min (iPage, 255)) ;

SetScrollPos (hwnd, SB_VERT, iPage, TRUE) ;
InvalidateRect (hwnd, NULL, TRUE) ;
return 0 ;

case WM_PAINT:
hdc = BeginPaint (hwnd, &ps) ;

SelectObject (hdc, CreateFontIndirect (&lf)) ;

GetTextMetrics (hdc, &tm) ;
cxChar = tm.tmMaxCharWidth ;
cyChar = tm.tmHeight + tm.tmExternalLeading ;

cxLabels = 0 ;

for (i = 0 ; i < 16 ; i++)
{
wsprintf (szBuffer, TEXT (" 000%1X: "), i) ;
GetTextExtentPoint (hdc, szBuffer, 7, &size) ;

cxLabels = max (cxLabels, size.cx) ;
}

for (y = 0 ; y < 16 ; y++)
{
wsprintf (szBuffer, TEXT (" %03X_: "), 16 * iPage + y) ;
TextOut (hdc, 0, y * cyChar, szBuffer, 7) ;

for (x = 0 ; x < 16 ; x++)
{
ch = (WCHAR) (256 * iPage + 16 * y + x) ;
TextOutW (hdc, x * cxChar + cxLabels,
y * cyChar, &ch, 1) ;
}
}

DeleteObject (SelectObject (hdc, GetStockObject (SYSTEM_FONT))) ;
EndPaint (hwnd, &ps) ;
return 0 ;

case WM_DESTROY:
PostQuitMessage (0) ;
return 0 ;
}
return DefWindowProc (hwnd, message, wParam, lParam) ;
}




UNICHARS.RC


//Microsoft Developer Studio generated resource script.

#include "resource.h"
#include "afxres.h"

/////////////////////////////////////////////////////////////////////////////
// Menu

UNICHARS MENU DISCARDABLE
BEGIN
MENUITEM "&Font!", IDM_FONT
END




RESOURCE.H


// Microsoft Developer Studio generated include file.
// Used by Unichars.rc

#define IDM_FONT 40001



Random Number Generation

Libdnet also offers the application programmer a rich set of functions to manipulate pseudo-random numbers. This functionality is useful in many network applications, including packet generation and security testing.




rand_t *rand_open (void);



rand_open() obtains a random number handle for fast, cryptographically strong pseudo-random number generation. The initial seed for the generator is derived from the system random data source device (if one exists; /dev/arandom or /dev/urandom under Unix variants) or from the current time and random stack contents. Upon success, the function returns a valid rand_t handle pointer; upon failure of the underlying malloc(), the function returns NULL.




int rand_get (rand_t *r, void *buf, size_t len);



rand_get() writes len random bytes from r into buf. The function does not fail and returns 0.




int rand_set (rand_t *r, const void *seed, size_t len);



rand_set() reinitializes r with the seed seed of len bytes. This function is useful when you want a random sequence that is nevertheless repeatable (for example, for network protocol stress testing). The function does not fail and returns 0.




int rand_add (rand_t *r, const void *buf, size_t len);



rand_add() writes len bytes of entropy data from buf into r. The function does not fail and returns 0.




uint8_t rand_uint8(rand_t *r);



rand_uint8() returns an unsigned 8-bit pseudo-random value.




uint16_t rand_uint16(rand_t *r);



rand_uint16() returns an unsigned 16-bit pseudo-random value.




uint32_t rand_uint32(rand_t *r);



rand_uint32() returns an unsigned 32-bit pseudo-random value.





int rand_shuffle(rand_t *r, void *base, size_t nmemb, size_t size);




rand_shuffle() pseudo-randomly shuffles an array of nmemb elements, each size bytes long, starting at base and using r. Note that this function performs an implicit malloc(). Upon success, the function returns 0; upon failure, the function returns -1.




rand_t *rand_close (rand_t *r);



rand_close() frees the memory associated with r. The function returns NULL.
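
The fragment below is a minimal usage sketch of this API, not part of libdnet itself; it assumes the dnet.h header and linking with -ldnet, and the seed string is arbitrary:

#include <dnet.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
        rand_t   *r;
        uint16_t  ports[8];
        int       i;

        if ((r = rand_open()) == NULL) {
                perror("rand_open");
                return (EXIT_FAILURE);
        }

        printf("id = %u\n", rand_uint32(r));   /* e.g. an IP ID field */
        rand_get(r, ports, sizeof(ports));     /* fill a buffer with random bytes */

        /* reseed for a repeatable sequence, e.g. when replaying a test */
        rand_set(r, "example-seed", 12);
        for (i = 0; i < 4; i++)
                printf("%u\n", rand_uint32(r));

        rand_close(r);
        return (0);
}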















6.7 Discussion Questions











1:

A user invokes a Web browser to download a file. Before doing so, the browser retrieves a plug-in to handle that type of file. Is this an example of a dynamic architecture? How would you document it?


2:

Suppose that communication across layers in a layered system is carried out by signaling events. Is event signaling a concern that is part of the layered style? If not, how would you document this system?


3:

Consider a shared-data system with a central database accessed by several components in a client-server fashion. What are your options for documenting the two-style nature of this system? Which option(s) would you choose, and why?


4:

A bridging element is one that can appear in view packets in two separate views. Both views will have room for documenting the element's interface and its behavior. Assuming that we do not wish to document information in two places, how would you decide where to record that information? Suppose that the bridging element is a connector with one role for one style and one role for another. Where would you record the information then?


5:

Sketch a top-level context diagram for a hypothetical system as it might appear in the following views, assuming in each case that the view is appropriate for that system: (a) uses, (b) layered, (c) pipe-and-filter, (d) client-server, (e) deployment.













    Recipe 27.6 Creating a JavaBean to Connect with Amazon




    Problem



    You want to create a JavaBean
    as a type of Amazon search utility class.





    Solution



    Set up your Amazon API as described in Recipe 27.5, then code a JavaBean that uses the
    com.amazon.soap.axis package from this API.





    Discussion



    The JavaBean in Example 27-5, named AmazonBean, imports the
    com.amazon.soap.axis package. This package is stored in
    amazonapi.jar, which is generated by Recipe 27.5. Store the JAR in the
    web application's WEB-INF/lib directory and the AmazonBean class in
    WEB-INF/classes (or also in a JAR in WEB-INF/lib).



    Example 27-5 connects with Amazon in its getSearchResults( ) method.
    The AmazonBean formats the search results for display in
    structureResult( ). The code comments describe what's going on in detail.




    Example 27-5. A JavaBean class that searches Amazon

    package com.jspservletcookbook;

    import java.net.URL;

    import com.amazon.soap.axis.*;

    public class AmazonBean {

    //The developer's token
    private final static String AMAZON_KEY = "DCJEAVDSXVPUD";


    //NOTE: AWS Version 3 uses "http://xml.amazon.com/xml3"
    private final static String END_POINT =
    "http://soap.amazon.com/onca/soap";

    private final static String AMAZON_TAG = "webservices-20";

    private URL endpointUrl;

    private String lineSep = "\n";
    private String totalResults;
    private String keyword;
    private String page;
    private String type;
    private String mode;


    public AmazonBean( ){}//no-arguments constructor required for a bean

    //an easy way to test the bean outside of a servlet
    public static void main(String[] args) throws Exception{

    AmazonBean bean = new AmazonBean( );
    bean.setKeyword("Lance%20Armstrong");
    bean.setType("heavy");
    bean.setMode("books");
    bean.setPage("1");

    System.out.println( bean.getSearchResults( ) );
    }

    //Structure the search result as a String
    public String structureResult(ProductInfo info){

    //Amazon searches return ProductInfo objects, which
    //contains array of Details object. A Details object
    //represents an individual search result
    Details[] details = info.getDetails( );

    String results = "";

    //each found book includes an array of authors in its Details
    String[] authors = null;

    String usedP = null;//UsedPrice object

    String rank = null;//SalesRank object

    //for each returned search item...
    for (int i = 0; i < details.length; i++){

    if(mode != null && mode.equals("books")){
    authors = details[i].getAuthors( ); }

    //Include the product name
    results +=
    "<strong>"+(i+1)+". Product name:</strong> " +
    details[i].getProductName( ) + lineSep;

    //If they are books include each author's name
    if(mode != null && mode.equals("books")){

    for (int j = 0; j < authors.length; j++){
    results += "Author name "+(j+1)+": " + authors[j] +
    lineSep;

    }//for
    }//if

    usedP = details[i].getUsedPrice( );//get the used price

    rank = details[i].getSalesRank( );//get the sales rank

    results += "Sales rank: " + (rank == null ? "N/A" : rank) +
    lineSep +"List price: " + details[i].getListPrice( ) + lineSep +
    "Our price: " + details[i].getOurPrice( ) + lineSep +
    "Used price: " + (usedP == null ? "N/A" : usedP) + lineSep +
    lineSep;

    }

    return results;

    }//structureResult

    //Connect with Amazon Web Services then call structureResult( )
    public String getSearchResults( ) throws Exception{

    endpointUrl = new URL(END_POINT);
    AmazonSearchService webService = new AmazonSearchServiceLocator( );
    //Connect to the AWS endpoint
    AmazonSearchPort port = webService.getAmazonSearchPort(endpointUrl);
    KeywordRequest request = getKeywordRequest( );
    //Return results of the search
    ProductInfo prodInfo = port.keywordSearchRequest(request);
    //Set totalResults with any provided results total
    setTotalResults( prodInfo.getTotalResults( ) );
    //Make sure the book-search results are structured and displayed
    return structureResult(prodInfo);

    }//getSearchResults

    //Setter and getter methods...


    public void setLineSep(String lineSep){
    this.lineSep=lineSep;
    }

    public String getLineSep( ){
    return lineSep;
    }

    //A KeywordRequest object initialized with search terms, the mode, the
    //number of pages to be returned, the type ('lite' or 'heavy'), and the
    //developer's token.
    public KeywordRequest getKeywordRequest( ){
    KeywordRequest request = new KeywordRequest( );
    request.setKeyword(keyword);//the search terms
    request.setMode(mode);//the mode, as in 'books'
    request.setPage(page);//the number of pages to return
    request.setType(type);//the type, 'lite' or 'heavy'
    request.setDevtag(AMAZON_KEY);//developer's token
    request.setTag(AMAZON_TAG);//the tag, 'webservices-20'
    return request;

    }


    public void setKeyword(String keyword){
    this.keyword = keyword;
    }

    public String getKeyword( ){
    return keyword;
    }

    public void setMode(String mode){
    this.mode = mode;
    }

    public String getMode( ){
    return mode;
    }

    public void setPage(String page){
    this.page = page;
    }

    public String getPage( ){
    return page;
    }

    public void setType(String type){
    this.type = type;
    }

    public String getType( ){
    return type;
    }

    public void setTotalResults(String results){
    totalResults = results;
    }

    public String getTotalResults( ){
    return totalResults;
    }
    }//AmazonBean



    The bean has a main( ) method that allows you to
    test the bean from the command line. Here is code from that method
    that creates a bean instance, searches for a book using the search
    terms "Lance Armstrong," and
    displays some results:



    AmazonBean bean = new AmazonBean( );
    bean.setKeyword("Lance%20Armstrong");
    bean.setType("heavy");
    bean.setMode("books");
    bean.setPage("1");
    System.out.println( bean.getSearchResults( ) );


    To run the bean from a command line, make sure to include all of the
    necessary Axis-related libraries on your classpath (see Recipe 27.5).
    The following command line runs the bean to test it. Note that this
    command line includes the amazonapi.jar file
    generated by Recipe 27.5:



    java -cp .;jaxrpc.jar;axis.jar;amazonapi.jar;commons-logging.jar;commons-discovery.
    jar;saaj.jar com.jspservletcookbook.AmazonBean





    If you set the type option to heavy (as opposed to lite), then the
    search returns the book's sales rank at Amazon. The
    lite SOAP responses do not include a value for sales rank.







    See Also



    The AWS SDK at http://www.amazon.com/gp/aws/download_sdk.html/002-2688331-0628046;
    Recipe 27.7 on using a servlet and a JavaBean to connect with AWS.








      Initiation


      Refactoring toward deeper insight can begin in many ways. It may be a response to a problem in the code: some complexity or awkwardness. Rather than apply a standard transformation of the code, the developers sense that the root of the problem is in the domain model. Perhaps a concept is missing. Maybe some relationship is wrong.


      In a departure from the conventional view of refactoring, this same realization could come when the code looks tidy, if the language of the model seems disconnected from the domain experts, or if new requirements are not fitting in naturally. Refactoring might result from learning, as a developer who has gained deeper understanding sees an opportunity for a more lucid or useful model.


      Seeing the trouble spot is often the hardest and most uncertain part. After that, developers can systematically seek out the elements of a new model. They can brainstorm with colleagues and domain experts. They can draw on systematized knowledge written as analysis patterns or design patterns.





        4.3. Defining a Chain










        The elements of the notification chain's list are of type notifier_block, whose definition is the following:



        struct notifier_block
        {
                int (*notifier_call)(struct notifier_block *self, unsigned long, void *);
                struct notifier_block *next;
                int priority;
        };



        notifier_call is the function to execute, next is used to link together the elements of the list, and priority represents the priority of the function. Functions with higher priority are executed first. But in practice, almost all registrations leave the priority out of the notifier_block definition, which means it gets the default value of 0 and execution order ends up depending only on the registration order (i.e., it is a semirandom order). The return values of notifier_call are listed in the upcoming section, "Notifying Events on a Chain."


        Common names for notifier_block instances are xxx_chain, xxx_notifier_chain, and xxx_notifier_list.
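
        As a minimal sketch (the module and identifier names here are invented, and the example assumes a module that subscribes to network device events), a subsystem typically declares a callback, wraps it in a notifier_block left at the default priority of 0, and registers it on the chain:

        #include <linux/module.h>
        #include <linux/init.h>
        #include <linux/notifier.h>
        #include <linux/netdevice.h>

        static int my_event_handler(struct notifier_block *self,
                                    unsigned long event, void *ptr)
        {
                /* inspect the event code here; NOTIFY_DONE lets the remaining
                 * callbacks on the chain run as well */
                return NOTIFY_DONE;
        }

        /* priority is left out, so it defaults to 0 and execution order
         * follows registration order, as described above */
        static struct notifier_block my_dev_notifier = {
                .notifier_call = my_event_handler,
        };

        static int __init my_notifier_init(void)
        {
                return register_netdevice_notifier(&my_dev_notifier);
        }

        static void __exit my_notifier_exit(void)
        {
                unregister_netdevice_notifier(&my_dev_notifier);
        }

        module_init(my_notifier_init);
        module_exit(my_notifier_exit);
        MODULE_LICENSE("GPL");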












        Protocol Analyzers


        Today there is a wide variety of troubleshooting tools for analyzing network problems. Cable testers, designed simply to register common electrical faults, may include sophisticated time domain reflectometry and digital signal processing to precisely localize fault conditions and rapidly examine performance over a broad bandwidth. Handheld troubleshooting tools may combine some cable-testing capabilities with NIC and hub testing, as well as with higher-layer protocol tests such as Ping and Traceroute. SNMP consoles, once found only on management platforms such as OpenView and SunNet Manager, are now commonly supplied with individual devices. They are also attached to Web servers, and used for network drawing and documentation tools, as well as built into handheld diagnostic devices.


        As a result, many of the tasks that once required a protocol analyzer can be resolved with other tools. Nevertheless, most of the hardest and subtlest problems still require protocol analysis for complete resolution.




        Analyze This


        In its simplest form, a protocol analyzer captures all the traffic on a medium, parses it according to the rules of any network protocols that are present, and displays the results. Ethernet, the most widespread shared-network medium, is the most common interface for protocol analyzers, but any other Physical-layer or Data-link-layer medium that can redirect signals to a probe will have protocol analysis tools. Of course, on shared media and high-speed media, the analysis software will have to work harder than it would on point-to-point lines or low-throughput networks (see figure).






        Figure 1: Protocol analyzers capture network traffic, parse each frame to identify the protocol that defines it, and pass the decoded traffic along for display or for further analysis.

        Most shared-medium network technologies (Ethernet, Fast Ethernet, Token Ring, and FDDI NICs) support a "promiscuous mode" operation, where all the traffic on a segment or ring can be processed. Ordinarily, a NIC processes only frames with its own MAC address as the destination, as well as broadcast frames, which are intended for everyone by definition. If the NIC is switched into promiscuous mode, it can also capture unicast and multicast traffic sent to other nodes.


        Most current protocol analyzers run on ordinary portable PCs, though some of the high-end models use proprietary OSs and hardware for detecting rare error conditions that ordinary NICs are blind to. (Some NICs that operate in promiscuous mode suppress bad frames, which is undesirable for protocol analysis if your network produces such things.)


        The analyzer software takes the frames received by the NIC and typically writes them into a big capture buffer in RAM. Ordinary commercial CPUs can keep up with a saturated 10BaseT network, but not with 100BaseT. One option for dealing with high-speed networks is to apply filters before capture, eliminating particular protocols or particular addresses that seem extraneous to the problem. Alternatively, many analyzers now have dedicated hardware that can capture full-duplex 100BaseT traffic: normally a package of RAM with as direct a connection as possible to the network, and an interface to send captured data to the analyzer software.
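
        To make the capture path concrete, here is a hedged sketch using libpcap (an assumption; commercial analyzers ship their own capture engines, and the interface name eth0 and the filter expression are invented for the example). It opens a NIC in promiscuous mode, applies a pre-capture BPF filter, and hands each frame, with its time stamp, to a callback:

        #include <pcap.h>
        #include <stdio.h>

        static void on_frame(u_char *user, const struct pcap_pkthdr *h,
                             const u_char *bytes)
        {
                /* h->ts is the per-packet time stamp; h->caplen bytes were captured */
                printf("%ld.%06ld  %u bytes\n",
                       (long) h->ts.tv_sec, (long) h->ts.tv_usec, h->caplen);
        }

        int main(void)
        {
                char errbuf[PCAP_ERRBUF_SIZE];
                struct bpf_program prog;

                /* third argument 1 = promiscuous mode */
                pcap_t *p = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
                if (p == NULL) {
                        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
                        return 1;
                }

                /* pre-capture filter: keep only TCP traffic to or from one host */
                if (pcap_compile(p, &prog, "tcp and host 192.0.2.1", 1,
                                 PCAP_NETMASK_UNKNOWN) == 0)
                        pcap_setfilter(p, &prog);

                pcap_loop(p, 100, on_frame, NULL);      /* capture 100 frames */
                pcap_close(p);
                return 0;
        }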


        From a captured frame it's easy to display the source and destination MAC addresses. A Type identifier field indicates what higher-level protocol defines the payload of the frame. Applying the rules for IP, IPX, AppleTalk, SNA, DECnet, and other layer-3 protocols lets the software display the network addresses for the source and destination nodes, and branch out and unpack the higher-layer protocols. Once the Application layer is reached, the decoding job is finished and the captured traffic is ready for display.






        Key Features



        One essential element the analyzer adds to the display is a time stamp for each packet, which may be absolute or relative to some other time. Packet order and the duration of delays between particular packets are often crucial diagnostic information. Some analyzers have the valuable feature of supporting multiple views for the different protocol stack layers, reducing the quantity of data and simplifying the display.


        Filtering the captured traffic at this point is also a valuable technique for focusing on the most relevant data. It's also quite useful for the analyzer software to convert hexadecimal data fields to ASCII text, at least when the payload is e-mail, text files, or HTTP data. Yet another very useful feature lets the user substitute names (either DNS, NetBIOS, or NetWare device names, or arbitrary mappings of addresses to meaningful names) for numeric addresses in the capture display.


        Traditionally, protocol analyzers stopped at this point and left it up to the often-flabbergasted user to figure out what was going on. Users needed to know in detail how the steps of a login procedure were carried out and what the meaning of various time-outs and error conditions were. They needed to understand how RIP and OSPF operated to define the transit paths through the network. They needed to understand how long SNA would be willing to wait for an acknowledgment before timing out and disconnecting.


        While the definition of protocols is usually well documented, the way they behave on a live network and the way they interact with one another are not generally well documented and are often known only to engineers and programmers who have spent large portions of their careers examining protocol analyzer trace files.


        The protocol analyzer makers responded to the shortage of experience in this area with software-based expert systems that could identify multi-packet patterns and suggest error conditions that might have caused the symptoms. Hewlett-Packard's Expert Advisor and Network Associates' Expert Sniffer were two of the earliest implementations of expert systems technology for protocol analysis, but most other vendors do much the same thing nowadays.


        Filtering is another key feature in protocol analysis, as I mentioned earlier. Filtering before capturing runs the risk of not capturing the key problem indicators, but is sometimes unavoidable if the traffic level is too high for the analyzer to capture in real time. It's better to ignore rationally chosen traffic streams than to ignore random frames whenever the software can't keep up. Filtering after capture is an essential part of simplifying the display, and has no disadvantages because the data remains in the capture buffer or trace file, even if you filter too aggressively and fail to display the key data on the first attempt.


        Aside from filtering by protocol and addresses, most analyzers permit you to define filters based on bit patterns anywhere within the frame. For example, you could easily search for packets containing specific text or with custom combinations of multiple layer parameters.


        Capture buffers can be written to disk as trace files. The Network Associates Sniffer format is the most commonly used format, employed by practically every vendor. Trace files that can't be diagnosed at one site or by a particular model of analyzer can be sent elsewhere for deeper investigation.






        Double Duty


        Most modern protocol analyzers also perform network-monitoring functions. As long as they're vacuuming up all the traffic on a segment, it doesn't take much work to count total frames, collisions, error frames, and broadcasts; to keep track of which conversations generate the most traffic; or to display the rates at which these events take place. These statistics can be presented as graphs or speedometer gauges, or captured to files in order to generate reference baselines for future comparison.


        If you're familiar with what an RMON system does, you'll probably recognize the similarity among these network-monitoring functions. RMON probes are generally distinct from RMON consoles with SNMP as the protocol that allows them to communicate, while protocol analyzers were not originally designed with this standardized distribution of functions. However, many protocol analyzers today can accept frames captured by an RMON probe and decode them as if they had captured them directly. This is a valuable capability, as the rise of switched networks threatens some of the utility of the protocol analyzer as a diagnostic tool.


        Because protocol analyzers depend on promiscuous-mode NICs (or on some form of signal duplication or "mirroring"), their reach is bounded by the edge of the Ethernet collision domain (or the presence of a bridge or router on other topologies).


        In the past, when dozens or even hundreds of users shared a single network segment, the protocol analyzer could capture all the problems and clues from a single location. Today, as users are divided into smaller subnets, or even into microsegments with a single switch port per user, the scope of the protocol analyzer has shrunk. On a switched Ethernet network with full-duplex ports, the protocol analyzer can't even capture traffic without a special capture port on the switch.


        In some cases, switches come with RMON probes built into each segment. Unfortunately, these implementations are usually stripped-down versions of RMON that don't include the packet capture function that a protocol analyzer would require. Sometimes switches support a backplane connection that could give the protocol analyzer access to all of the traffic on the switch, but more often port-mirroring functions only support the redirection of one port at a time to the protocol analyzer.


        One interesting extension of protocol analysis is the relatively new field of intrusion detection. The same sorts of "expert analysis" and pattern recognition that are employed to identify the signatures of misconfigured routers and broadcast storms can also be used to lock on to suspicious patterns of failed logins and inappropriate file browsing. The front end of the protocol analyzer is much the same as the front end of the intrusion detector; the primary difference lies in the patterns they are trained to detect.






        Glory Days


        In some respects, the heyday of the protocol analyzer has passed. Ten or even five years ago, network protocols were not implemented as sure-footedly as they are today. Many early networking mistakes have been solved and left behind, despite the fact that a network manager's day is still a full one. A modern switched network can practically be a plug-and-play environment.


        Nevertheless, when a problem comes along that the cable testers, the handheld troubleshooting tools, and the SNMP management platform can't pin down, the only alternative is to fire up the old protocol analyzer and start capturing and decoding packets.




        This tutorial, number 131, by Steve Steinke, was first published in the June 1999 issue of Network Magazine.


















        Three Ways Not to Lose Files

        Now you’re probably quaking in your boots (or sandals, depending on where you live). You figure that, if you so much as touch the keyboard, you will do horrible, irreparable damage and spend the next week spinning tapes. It’s not that bad. This section tells you some tricks to avoid deleting files by mistake in the first place.




        Are you sure you wanna clobber this one?


        When you delete files with rm, use the -i (for interactive) switch:



        rm -i s*

        This line tells rm to ask you before it deletes each file, prompting you with the filename and a question mark. You press the y key if you want to delete it, and anything else to tell UNIX not to delete. (Remember that the question UNIX asks is, "Should I delete this?" and not "Do you want to keep this?")


        The main problem with -i is that it can become tedious when you want to delete a large number of files. When you do that, you probably use wildcards. To be safe, check that the wildcards refer to the files you think they do. To make that check, use the ls command with the same wildcard. If you want to delete all the files that start with section, for example, and you think that you can get away with typing only sec and an asterisk, you had better check what sec* refers to. First give this command:



        ls sec*

        UNIX responds with an appropriate list:


        second.version  section04  section08  section12  section16
        section01       section05  section09  section13  section17
        section02       section06  section10  section14  section18
        section03       section07  section11  section15  section19

        Hey, look! There’s that file second.version. You don’t want to delete it, so it looks like you have to type section* to get the correct files in this case.





        Idiot-proofing save files


        The best way to make temporary backup copies of files is to make a directory named save and put all saved copies of files there, as shown in this example:



        mkdir save
        cp important.file save

        These commands tell UNIX to make a directory named save and then to make a copy of important.file to save/important.file. If you reverse the order of the names, nothing happens. Suppose that you type this line instead:



        cp save important.file

        UNIX makes this observation:


        cp : <save> directory

        UNIX is saying that you can copy a file to a directory but that you can’t copy a directory to a file. As a result, UNIX doesn’t copy anything. To copy a file back from the save directory, you have to use its full name: save/important.file.


        A variation of this process is a two-step delete. Suppose that you have a bunch of files you want to get rid of but some good files are mixed in the same directory. Make a directory named trash, and then use mv to move the files you plan to delete to the trash directory:



        mkdir trash
        mv thisfile thatfile these* trash
        mv otherfile somefile trash


        Then use the ls command to check the contents of trash. If something is in that directory you want, move it back to the current directory by using this command:



        mv trash/these.are.still.good .

        (The dot at the end means to put the file back in the current directory.) After you’re sure that nothing other than trash is in trash, you can use rm with the -r option:



        rm -r trash

        This line tells rm to get rid of trash and everything in it.





        Don’t write on that!


        Another thing you can do to avoid damage to important files is to make them read-only. When you make files read-only, you prevent cp and text editors from changing them. You can still delete them, although rm, mv, and ln ask you before doing so. The chmod command changes the mode of a file (as explained in Chapter 5). Here’s how to use chmod to make a file read-only:


        chmod -w crucial-file

        The -w means not writable. To make changes to the file later, do another chmod but use +w instead. (This stuff doesn’t involve inspired command syntax, but the old syntax was even worse and used octal digits.) After a file is made not writable, editors can’t change it. The vi program and some versions of emacs even display a note on-screen that the file is read-only. If you try to delete it, rm, mv, or ln asks you in a uniquely user-hostile way whether that’s really what you had in mind. Suppose that you type the following line and crucial-file is a read-only file:



        rm crucial-file

        UNIX responds with this line:


        crucial-file: 444 mode ?

        The number may not be 444: It may be 440 or 400 (depending on whether your system administrator has set things up so that people can normally see the contents of other people’s files). As with rm -i, you press the y key if you want to delete the file, or anything else to say that you don’t want to delete this valuable data.












        Chapter 3: Resource Lifecycle







        "Seek not, my soul, the life of the immortals; but enjoy to the full the resources that are within thy reach."



        Pindar









        Overview



        Once a resource has been acquired, its lifecycle must be managed effectively and efficiently. Managing a resource involves making it available to users, handling inter-resource dependencies, acquiring any dependent resources if necessary, and finally releasing resources that are no longer needed.



        The Caching (83) pattern describes how the lifecycle of frequently-accessed resources can be managed to reduce the cost of re-acquisition and release of these resources, while maintaining the identity of the resources. It is a very common pattern that is used in a large number of highly-scalable enterprise solutions. In contrast to the Caching pattern, the Pooling (97) pattern optimizes acquisition and release of resources, while not maintaining the identity of the resources. Pooling is therefore preferable for stateless resources, as they require little or no initialization. Similar to Caching, Pooling is also used widely and includes examples such as pooling of components in component platforms and pooling of threads in distributed applications. Both Caching and Pooling are only applicable to reusable resources. Both patterns typically apply to exclusive reusable resources that are used serially by users. However, in some cases it may make sense to use Caching or Pooling for concurrently-accessible reusable resources. In such cases, both Caching and Pooling are oblivious of whether the resources are concurrently accessible or not, since resource access only takes place once the resources have been fetched from the cache or the pool.
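
        As a deliberately simplified, single-threaded illustration of the Pooling idea (this sketch is not from the pattern description itself, and connection_t is just a stand-in for any stateless resource), a fixed-size pool hands back some pre-created item rather than a particular one, which is exactly the identity-free behavior that distinguishes Pooling from Caching:

        #include <stddef.h>

        #define POOL_SIZE 4

        typedef struct { int id; } connection_t;    /* stand-in for a resource */

        static connection_t  items[POOL_SIZE];
        static connection_t *free_list[POOL_SIZE];
        static size_t        free_count;

        void pool_init(void)
        {
                for (size_t i = 0; i < POOL_SIZE; i++) {
                        items[i].id = (int) i;
                        free_list[i] = &items[i];
                }
                free_count = POOL_SIZE;
        }

        connection_t *pool_acquire(void)            /* NULL when exhausted */
        {
                return free_count ? free_list[--free_count] : NULL;
        }

        void pool_release(connection_t *c)
        {
                if (c != NULL && free_count < POOL_SIZE)
                        free_list[free_count++] = c;
        }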



        Two or more entities, such as acquired resources, resource users, or resource providers, can interact with each other and produce changes to a software system. In such a situation, the entities are considered to be active and capable of participating in interactions that result in changes. Given such entities, it is important that any changes that are produced keep the system in a consistent state. The Coordinator (111) pattern ensures that in a task involving multiple entities, the system state remains consistent and thus overall stability is maintained.



        The Resource Lifecycle Manager (128) pattern manages all the resources of a software system, thereby freeing both the resources to be managed, as well as their resource users, from the task of proper resource management. The Resource Lifecycle Manager is responsible for managing the lifecycle of all types of resource, including both reusable and non-reusable resources.

















