Tuesday, December 15, 2009

Summary


















Getting the requirements right is the most important part of a software development project. Once a software development team begins to collect the project requirements, it is critical that the project have a consistent format for maintaining and presenting them. This chapter described the construction of the Software Requirements Specification (SRS), used for the ongoing maintenance and presentation of the project requirements. The SRS is critical to the entire software development life cycle. Not only is it the document from which the software design specification is derived, but it is also the base document for generating validation and acceptance tests. Validation is the determination of whether the project manager built the right product. Satisfying the requirements determines successful validation. The SRS is the mechanism for capturing those validation criteria: the system requirements.



During the SRS building process, the project manager must constantly be aware of the quality characteristics of the SRS: correctness, unambiguity, completeness, consistency, ranking for importance and/or stability, verifiability, modifiability, and traceability. The evaluation of these characteristics is a continuous process as the SRS is being built.















    4.4 Testing Expected Errors























It is important to test the error-handling behavior of production code in addition to its normal behavior. Such tests generate an error and assert that the error is handled as expected. In other words, an expected error produces a unit test success.





The canonical example of a unit test that checks expected error handling is one that tests whether an expected exception is thrown, as shown in Example 4-8.







    Example 4-8. Unit test for expected exception


LibraryTest.java

public void testRemoveNonexistentBook( ) {
    try {
        library.removeBook( "Nonexistent" );
        fail( "Expected exception not thrown" );
    } catch (Exception e) {}
}









The expected error behavior is that an exception is thrown when the removeBook( ) method is called for a nonexistent Book. If the exception is thrown, the unit test succeeds. If it is not thrown, fail( ) is called. The fail( ) method is another useful variation on the basic assert method. It is equivalent to assertTrue(false), but it reads better.





Since the removeBook( ) method now throws an exception, the testRemoveBook( ) unit test should be updated, as shown in Example 4-9.







    Example 4-9. Unit test that fails when an exception is thrown


LibraryTest.java

public void testRemoveBook( ) {
    try {
        library.removeBook( "Dune" );
    } catch (Exception e) {
        fail( e.getMessage( ) );
    }
    Book book = library.getBook( "Dune" );
    assertNull( "book is not removed", book );
}









This example uses fail( ) to cause the test to fail when an unexpected exception is thrown. The exception's message attribute is used as the assert message.





The same general pattern is followed to test expected error behavior that is not represented by an exception: the test fails if the error is not seen and succeeds if it is. Example 4-10 shows a unit test that attempts to get a nonexistent Book from the Library and asserts that the expected null Book is returned.







    Example 4-10. Unit test checking the expected error getting a nonexistent Book


LibraryTest.java

public void testGetNonexistentBook( ) {
    Book book = library.getBook( "Nonexistent" );
    assertNull( book );
}
























      Chapter 9: Carrier Modulation
























When a signal is to be transmitted over a transmission medium, the signal is superimposed on a carrier, a high-frequency sine wave. This is known as carrier modulation. In this chapter, we will study the various analog and digital carrier modulation techniques. We will also discuss the criteria on which the choice of a particular modulation technique is based.




      9.1 WHAT IS MODULATION?


Modulation can be defined as the superimposition of a signal containing information on a high-frequency carrier. If we have to transmit voice that contains frequency components up to 4 kHz, we superimpose the voice signal on a carrier of, say, 140 MHz. The input voice signal is called the modulating signal, the transformation that performs the superimposition is called modulation, the hardware that carries out this transformation is called the modulator, and the output of the modulator is called the modulated signal. After any other required operations on the modulated signal, such as filtering, the modulated carrier is sent through the transmission medium. At the receiving end, the modulated signal is passed through a demodulator, which performs the reverse operation of the modulator and gives out the modulating signal, which contains the original information. This process is depicted in Figure 9.1. The modulating signal is also called the baseband signal. In a communication system, both ends should be able to transmit and receive, and therefore the modulator and the demodulator should be present at both ends. The modulator and demodulator together are called the modem.






      Figure 9.1: Modulation and demodulation.









      Modulation is the superimposition of a signal containing the information on a high-frequency carrier. The signal carrying the information is called the modulating signal, and the output of the modulator is called the modulated signal.
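As an illustrative sketch (not from this chapter; the sampling rate, carrier frequency, and modulation index below are assumed values chosen only for the example), the following Python fragment superimposes a baseband tone on a carrier using simple amplitude modulation, the most direct form of the superimposition described above:

```python
import math

def am_modulate(baseband, fc, fs, m=0.5):
    """Amplitude-modulate a baseband sample sequence onto a carrier.

    baseband: modulating-signal samples normalized to [-1, 1]
    fc: carrier frequency in Hz; fs: sampling rate in Hz
    m: modulation index (how strongly the baseband sways the carrier)
    """
    return [(1.0 + m * x) * math.cos(2 * math.pi * fc * n / fs)
            for n, x in enumerate(baseband)]

# A 1 kHz tone as the modulating signal, sampled at 1 MHz, riding on a
# 100 kHz carrier (all values are assumptions for illustration).
fs, fc, f_tone = 1_000_000, 100_000, 1_000
tone = [math.sin(2 * math.pi * f_tone * n / fs) for n in range(2000)]
modulated = am_modulate(tone, fc, fs)

# The envelope of the modulated carrier tracks the baseband signal:
# the instantaneous amplitude stays within 1 +/- m.
assert max(abs(s) for s in modulated) <= 1.5 + 1e-9
```

A demodulator for this scheme would recover the baseband by tracking that envelope and removing the carrier, the reverse operation shown in Figure 9.1.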


































9.10 sctp_recvmsg Function





      Just like sctp_sendmsg, the sctp_recvmsg function provides a more user-friendly interface to the advanced SCTP features. Using this function allows a user to retrieve not only its peer's address, but also the msg_flags field that would normally accompany the recvmsg function call (e.g., MSG_NOTIFICATION, MSG_EOR, etc.). The function also allows the user to retrieve the sctp_sndrcvinfo structure that accompanies the message that was read into the message buffer. Note that if an application wishes to receive sctp_sndrcvinfo information, the sctp_data_io_event must be subscribed to with the SCTP_EVENTS socket option (ON by default). The sctp_recvmsg function takes the following form:


ssize_t sctp_recvmsg(int sockfd, void *msg, size_t msgsz,
                     struct sockaddr *from, socklen_t *fromlen,
                     struct sctp_sndrcvinfo *sinfo, int *msg_flags);

Returns: the number of bytes read, -1 on error


      On return from this call, msg is filled with up to msgsz bytes of data. The message sender's address is contained in from, with the address size filled in the fromlen argument. Any message flags will be contained in the msg_flags argument. If the notification sctp_data_io_event has been enabled (the default), the sctp_sndrcvinfo structure will be filled in with detailed information about the message as well. Note that if an implementation maps the sctp_recvmsg to a recvmsg function call, the flags field of the call will be set to 0.
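sctp_recvmsg is part of the C sockets API; as a rough analogy (a sketch only, since portable SCTP support is not in Python's standard library, a plain UDP socket stands in here), Python's socket.recvmsg similarly bundles the payload, any ancillary data, the per-read msg_flags, and the sender's address into a single call:

```python
import socket

# Two UDP sockets on the loopback interface; UDP stands in for SCTP,
# since the point is only the shape of the recvmsg-style interface.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"hello", rx.getsockname())

# Like sctp_recvmsg, one call returns the data, any ancillary data,
# the message flags for this read, and the peer's address.
data, ancdata, msg_flags, from_addr = rx.recvmsg(1024)
assert data == b"hello"
assert from_addr[0] == "127.0.0.1"

tx.close()
rx.close()
```

The four pieces that arrive separately here are exactly what sctp_recvmsg fills into msg, sinfo, msg_flags, and from in one shot.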









        12.4 Life cycle processes
























        For different engineering domains, there are different standards and models for different PLCs. Some of the standards address the life cycles of systems, and others address a particular domain (e.g., software). Life cycle standards are closely related to PDM and SCM, as PDM and SCM provide infrastructure and support for these processes.




        12.4.1 ISO/IEC FDIS 15288 Systems Engineering—System Life Cycle Processes


        This standard provides a common framework for covering the life cycle of a system. This life cycle span is from the conception of the system to its retirement.


        The standard defines a set of processes and terminology. These processes can apply at any level in the hierarchy of a system structure. The standard also provides processes that support the definition, control, and improvement of the life cycle processes used in an organization or project.


        The system life cycle processes are:





• Agreement processes. These processes specify the requirements for establishing agreements with organizational entities internal and external to the organization.





• Enterprise processes. These processes manage the capability of the organization to acquire and supply products or services through the initiation, support, and control of projects. They provide the resources and infrastructure necessary to support projects.





• Project processes. These processes establish and evolve project plans, assess actual achievement and progress against those plans, and control the execution of the project through to fulfillment.





• Technical processes. These processes define the requirements for a system, transform the requirements into an effective product, permit consistent reproduction of the product where necessary, use the product to provide the required services, sustain the provision of those services, and dispose of the product when it is retired from service.




This standard is related to ISO/IEC 12207 Information Technology—Software Life Cycle Processes—Amendment 1.






        12.4.2 ISO 12207:1995 Software Life Cycle Processes


        This standard covers the life cycle of software from conceptualization of ideas through retirement and consists of processes for acquiring and supplying software products and services. The standard may be adjusted for an individual organization, project, or application. It may also be used when software is a stand-alone entity or an embedded or integral part of the total system.


        The standard categorizes all life cycle processes into three main groups: primary life cycle, supporting life cycle, and organizational life cycle.


        The primary life cycle processes are defined in the standard as follows:





        • Acquisition process. The process begins with the initiation of the need to acquire a system. The process continues with the preparation and issue of a request for proposal, selection of a supplier, and management of the acquisition process through the acceptance of the system.





        • Supply process. The process may be initiated either by a decision to prepare a proposal to answer a request for a proposal or by signing and entering into a contract with a vendor to provide the system. The process continues with the determination of procedures and resources needed to manage and assure the project software product.





        • Development process. This process contains activities and tasks for the developer, such as requirement analysis, design, coding, integration, testing, and installation and acceptance.





        • Operation process. The process contains operator activities and tasks such as operation of the software product and operational support to users.





        • Maintenance process. This process contains activities and tasks for the maintainer. The process is started when the software product undergoes modifications due to a problem or the need for improvement or adaptation. The process includes migration and ends with the retirement of the software product.




To support the primary life cycle processes, there are eight supporting life cycle processes:





        1. Documentation process. This process records information produced by a life cycle process or activity.





2. CM process. The process applies administrative and technical procedures to identify and define software items in a system; control modifications and releases of the items; record and report the status of the items and modification requests; ensure the consistency and correctness of the items; and control storage, handling, and delivery of the items.





        3. Quality assurance process. This process provides assurance that the software products and processes in the PLC correspond to the specified requirements and the established plans.





        4. Verification process. The process provides activities to determine whether the software products of an activity fulfill the requirements or conditions.





        5. Validation process. This process provides activities to determine whether the requirements and the final as-built system or software product fulfill its specific intended use.





        6. Joint review process. The process evaluates the status and products of an activity of a project. Joint reviews are held at both project management and technical levels.





        7. Audit process. The process determines compliance with the requirements, plans, and contract.





        8. Problem resolution process. The process analyzes and resolves any problems encountered during the execution of development, operation, maintenance, or other processes.




The standard also identifies organizational life cycle processes. The activities and tasks in an organizational process are the responsibility of the organization using that process. The organizational processes are:





        • Management process. The manager is responsible for product management, project management, and task management for applicable processes such as supply, development, operation, maintenance, and supporting processes.





        • Infrastructure process. The purpose is to establish and maintain the infrastructure needed for any other process, including hardware, software, tools, techniques, standards, and facilities for development, operation, or maintenance.





        • Improvement process. The purpose is to establish, assess, measure, control, and improve a software life cycle process.





        • Training process. This process provides the required trained personnel.




        Normative references are found in Table 12.5.


























Table 12.5: References to Other Standards

ISO/AFNOR:1989         Dictionary of Computer Science
ISO/IEC 2382-1:1993    Information Technology—Vocabulary—Part 1: Fundamental Terms
ISO/IEC 2382-20:1990   Information Technology—Vocabulary—Part 20: System Development
ISO 8402:1994          Quality Management and Quality Assurance—Vocabulary
ISO 9001:1994          Quality Systems—Model for Quality Assurance in Design, Development, Production, Installation, and Servicing
ISO/IEC 9126:1991      Information Technology—Software Product Evaluation—Quality Characteristics and Guidelines for Their Use







        12.4.3 ISO 9000-3:1997 Quality Management and Quality Assurance Standards—Part 3


        ISO 9000 Part 3 has a long title, Guidelines for the Application of ISO 9001:1994 to the Development, Supply, Installation and Maintenance of Computer Software. It provides guidance in applying the requirements of ISO 9001:1994, in which computer software design, development, installation, and maintenance are treated together as an element of a commercial contract entered into by a supplier, as a product available for a market sector or as software embedded in a hardware product.


        The quality system requirements are:





        • Management responsibility. This is to define the quality policy of the organization and to identify and ensure the availability of the resources required to satisfy the quality requirements.





        • Quality system. This is to develop and describe a quality system and to implement procedures in accordance with the quality policy it describes.





        • Contract review. This is to develop and document procedures to coordinate the review of software development contracts.





        • Design control. This is to establish and maintain documented procedures to control and verify the design of the product to ensure the satisfaction of the specified requirements.





        • Document and data control. This is to develop procedures to control all documents and data relating to the requirements. CM procedures should be used to implement document and data control.





        • Purchasing. This is to develop and maintain procedures to ensure that purchased products conform to specified requirements. These procedures should control the selection of subcontractors, the use of purchasing data, and the verification of purchased products.





        • Control of customer-supplied product. This is to establish and maintain documented procedures for the control of verification, storage, and maintenance of customer-supplied products.





        • Product identification and traceability. This is to develop a procedure for identifying the product during its life cycle. A CM system may provide this capability.





        • Process control. This is to identify and plan the production, installation, and servicing processes that directly affect quality and ensure that these processes are executed under controlled conditions.





        • Inspection and testing. This is to develop procedures for inspection and testing activities to verify the satisfaction of the product requirements specified. If third-party products are to be included, procedures shall be developed for the verification of such products in accordance with the requirements of the contract.





        • Control of inspection, measuring, and test equipment. This is to develop procedures for the control, calibration, and maintenance of inspection, measuring, and testing equipment.





        • Control of nonconforming products. This is to develop procedures to prevent the inappropriate use of nonconforming products.





        • Corrective and preventive action. This is to develop procedures for implementing corrective and preventive action.





        • Handling, storage, packaging, preservation, and delivery. This is to develop and document procedures for the handling, storage, packaging, preservation, and delivery of the products of the organization. In addition, this is to develop product handling methods and procedures that prevent product damage or deterioration and to designate secure areas in which to store and protect the products.





        • Control of quality records. This is to identify and define the quality information to be collected. It is also to develop a quality record-keeping system and develop procedures for its maintenance and control.





        • Internal quality audits. This is to develop internal quality audit procedures that determine whether quality activities and results comply with documented quality plans, procedures, and programs, to evaluate the performance of the quality system, and to verify the effectiveness of corrective actions.





        • Training. This is to develop quality-training procedures.





        • Servicing. This is to establish and maintain procedures for performing servicing procedures and verifying and reporting that the servicing meets the specified requirements.





        • Statistical techniques. This is to select the statistical techniques to be used to establish, control, and verify process capabilities and product characteristics.




        From this comprehensive list, we can see that many quality requirements need systematic support for documentation, version management, CM, and requirements management (i.e., just the support provided by PDM and SCM tools).
























        Chapter 18. Programming with Exceptions










        Topics in this Chapter








        • A Simple Example of Exception Processing






        • Syntax of C++ Exceptions






        • Exceptions with Class Objects






        • Type Cast Operators






        • Summary






In this chapter, I will deal with a relatively new C++ topic: programming with exceptions. An exception is a language mechanism that allows the programmer to separate source code that describes the main case of processing from source code that describes exceptional situations. Exceptional situations are situations that should not occur during normal processing but from time to time do occur. Separating this exception processing from the main case makes the main case easier to read and to maintain. It also makes exceptional cases easier to read and to maintain.



        This definition is somewhat vague, is it not? It does leave room for interpretation. Indeed, what some people view as an exceptional or abnormal situation, other people perceive as a genuine part of system operations. For example, when you allocate memory on the heap, the algorithm should describe what happens if the request is satisfied. Since it is possible that the computer runs out of memory, the algorithm should also specify what happens when memory is not available. Is running out of memory an exception? Most people would say yes.



Similarly, when the program reads data interactively from the online user, the algorithm specifies the processing of valid data. What happens if the user makes a mistake and inputs data that is invalid? Is this an exception? Most people would say no; online mistakes are a way of life, and the algorithms for processing these mistakes should be viewed as part of basic system functionality, not something that happens only rarely.



Similarly, when you read data from a file in a loop, the algorithm specifies what happens when the next record is read and how different parts of the record should be processed. Since it is possible that there are no more records in the file, the algorithm should define what happens when there are no more records to read. Is reaching the end-of-file indeed an exception? Most people would say no; this is an event that marks the end of one stage of processing (reading file records in) and the beginning of the next stage of processing (computations on data in memory).



        Regardless of whether the programmer perceives the situation as mainline processing with some additional exceptional cases (the first example) or as a set of different cases of approximately equal importance (the second and the third examples), the issue of clogging the source code with diverse computational tasks is both real and important.



        To be able to make intelligent decisions on structuring your algorithms, you should have a good understanding of the tools available in the programming language. This is why I will try first and foremost to explain what exceptions (as a C++ programming technique) are, what syntax they impose on the programmer, how to use them correctly, and what incorrect uses you should try to avoid.



        Initially, C++ did not support exceptions and relied on C mechanisms for exception processing by using global variables accessible to the whole program (e.g., errno) or jumps and calls to special functions whose names are fixed but whose contents might be specified by the programmer (e.g., setjmp and longjmp).



        The C++ exception facility is a relatively new language feature. Similar to C++ templates, the exception mechanism is complex. The experience in using exceptions is rather limited, and the advantages of their use for system design are not demonstrated yet. In addition, the use of C++ exceptions increases the program execution time and the size of the executable program. This is why I do not think you should use exceptions at every opportunity that presents itself. Eventually, however, they should become a legitimate part of your programming toolbox.








        B.8. DBTS
















The Debian Bug Tracking System is unusual in that all input and manipulation of issues is done via email: each issue gets its own dedicated email address. The DBTS scales pretty well: http://bugs.debian.org/ has 277,741 issues, for example.





Since interaction is done via regular mail clients, an environment that is familiar and easily accessible to most people, the DBTS is good for handling high volumes of incoming reports that need quick classification and response. There are disadvantages too, of course. Developers must invest the time needed to learn the email command system, and users must write their bug reports without a web form to guide them in choosing what information to include. There are tools available to help users send better bug reports, such as the command-line reportbug program or the debbugs-el package for Emacs. But most people won't use these tools; they'll just write email manually, and they may or may not follow the bug reporting guidelines posted by your project.





The DBTS has a read-only web interface, for viewing and querying issues.





        http://www.chiark.greenend.org.uk/~ian/debbugs/




















          File Internals












          When you're studying complex file vulnerabilities, such as race conditions and linking attacks, having a basic grasp of UNIX file internals is useful. Naturally, UNIX implementations differ quite a bit under the hood, but this explanation takes a general approach that should encompass the major features of all implementations. This discussion doesn't line up 100% with a particular UNIX implementation, but it should cover the concepts that are useful for analyzing file system code.



          File Descriptors


          UNIX provides a consistent, file-based interface that processes can use to work with a fairly disparate set of system resources. These resources include files, hardware devices, special virtual devices, network sockets, and IPC mechanisms. The uniformity of this file-based interface and the means by which it's supported in the kernel provide a flexible and interoperable system. For example, the code used to talk with a peer over a named pipe could be used to interact with a network socket or interact with a program file, and retargeting would involve little to no modification.


          For every process, the UNIX kernel keeps a list of its open files, known as the file descriptor table. This table contains pointers to data structures (discussed in more detail in Chapter 10) in the kernel that encapsulate these system resources. A process generally opens a normal, disk-backed file by calling open() and passing a pathname to open. The kernel resolves the pathname into a specific file on the disk and then loads the necessary file data structures into memory, reading some information from disk. The file is added to the file descriptor table, and the position, or index, of the new entry in the file descriptor table is handed back to the process. This index is the file descriptor, which serves as a unique numeric token the process can use to refer to the file in future system calls.


          Figure 9-3 shows a file descriptor table for a simple daemon. File descriptors 0, 1, and 2, which correspond to standard input, standard output, and standard error, respectively, are backed by the device driver for the /dev/null file, which simply discards anything it receives. File descriptor 3 refers to a configuration file the program opened, named /etc/config. File descriptor 4 is a TCP network connection to the 1.2.3.4 machine's Web server.





          Figure 9-3. Simplified view of a file descriptor table








          File descriptors are typically closed when a process exits or calls close() on a file descriptor. A process can mark certain file descriptors as close-on-exec, which means they are automatically closed if the process executes another program. Descriptors that aren't marked close-on-exec persist when the new program runs, which has some security-related consequences addressed in Chapter 10. File descriptors are duplicated automatically when a process calls fork(), and a process can explicitly duplicate them with a dup2() or fcntl() system call.




          Inodes


          The details of how file attributes are stored are up to the file system code, but UNIX has a data structure it expects the file system to be able to fill out from its backing data store. For each file, UNIX expects an information node (inode) that the file system can present. In the more straightforward, classic UNIX file systems, inodes are actual data structures existing in physical blocks on the disk. In modern file systems, they aren't quite as straightforward, but the kernel still uses the concept of an inode to track all information for a file, regardless of how that information is ultimately stored.


          So what's in an inode? Inodes have an inode number, which is unique in the file system. Every file system mounted on a UNIX machine has a unique device number. Therefore, every file on a UNIX system can be uniquely identified by the combination of its device number and its inode number. Inodes contain a file type field that can indicate the file is an ordinary file, a character device, a block device, a UNIX domain socket, a named pipe, or a symbolic link. Inodes also contain the owner ID, group ID, and file permission bits for the file as well as the file size in bytes; access, modification, and inode timestamps; and the number of links to the file.


          The term "inode" can be confusing, because it refers to two different things: an inode data structure stored on a disk and an inode data structure the kernel keeps in memory. The inode data structure on the disk contains the aforementioned file attributes as well as pointers to data blocks for the file on the disk. The inode data structure in kernel memory contains all the disk inode information as well as additional attributes and data and pointers to associated kernel functions for working with the file. When the kernel opens a file, it creates an inode data structure and asks the underlying file system driver to fill it out. The file system code might read in an inode from the disk and fill out the kernel's inode data structure with the retrieved information, or it could do something completely different. The important thing is that for the kernel, each file is manipulated, tracked, and maintained through an inode.


          Inodes are organized and cached so that the kernel and file system can access them quickly. The kernel primarily deals with files using inodes rather than filenames. When a process makes a system call that has a pathname argument, the kernel resolves the pathname into an inode, and then performs the requested operation on the inode. This explanation is a bit oversimplified, but it's enough for the purposes of this discussion. Anyway, when a file is opened and stored in the file descriptor table, what's placed there is a pointer to a chain of data structures that eventually leads to the inode data structure associated with the file.


          Note



          Chapter 10 explains the data structures involved in associating the file descriptor table with an inode data structure. These constructs are important for understanding how files and file descriptors are shared among processes, but you can set them aside for now.






          Directories


          A directory's contents are simply the list of files the directory contains. Each item in the list is called a directory entry, and each entry contains two things: a name and an inode number. You might have noticed that the filename isn't stored in the file inode, so it's not kept on the file system as a file attribute. This is because filenames are only instructions that tell the kernel how to walk through directory entries to retrieve an inode number for a file.


          For example, specifying the filename /tmp/testing/test.txt tells the kernel to start with the root directory inode, open it, and read the directory entry with the name tmp. This information gives the kernel an inode number that corresponds to the tmp directory. The kernel opens that inode and reads the entry with the name testing. This information gives the kernel an inode number for the testing directory. The kernel then opens this inode and reads the directory entry with the name test.txt. The inode number the kernel gets is the inode of the file, which is all that the kernel needs for operating on the file.


          Figure 9-4 shows a simple directory hierarchy. Each box represents an inode. The directory inodes have a list of directory entries below them, and each ordinary file inode contains its file contents below its attributes. The figure shows the following simple directory hierarchy:


          fred.txt
          jim/
              bob.txt




          Figure 9-4. Directories at play





          The leftmost inode is a directory containing the fred.txt file and the jim directory. You don't know this directory's name because you have to see its parent directory to learn that. It has an inode number of 1000. The jim directory has an inode of 700, and you can see that it has only one file, bob.txt.


          If a process has a current directory of the directory in inode 1000, and you call open("jim/bob.txt", O_RDWR), the kernel translates the pathname by reading the directory entries. First, the directory at inode 1000 is opened, and the directory entry for jim is read. The kernel then opens the jim directory at inode 700 and reads the directory entry for bob.txt, which is 900. The kernel then opens bob.txt at inode 900, loads it into memory, and associates it with an entry in the file descriptor table.













          Web site













          A Web site, www.cs.lth.se/pdm-scm/book, includes a set of presentation slides and additional material for personal study and to support the use of this book in teaching. Instructors may freely use and modify the presentation material.



          Acknowledgments


          We developed this book while working in three different places. We wrote in parallel, reading and commenting on, or directly changing, the text that another of us had just written. Sometimes each of us worked on separate chapters; sometimes we worked together on the same chapter. It was literally a distributed development project. To make it possible to work in this way, we used the Concurrent Versions System (CVS) SCM tool. Over time, we realized more and more that CVS was an enormous help, and that we could not have finished our work without such support. We wish to thank all the enthusiastic developers of this simple and yet so powerful tool.


          Many people have contributed to the development of this book. First, we wish to thank Daniel Svensson and Allan Hedin, who actively contributed to several chapters. The inspiration for the book came from our work on the project PDM and SCM: Similarities and Differences, organized by the Association of Swedish Engineering Industries. The final report of that project was the basis for many parts of this book. We wish to thank the project participants for many fruitful discussions and for important input to the book. Many thanks to Göran Östlund, Johan Ranby, Jan-Ola Krüger, Magnus Larsson, Thomas Nilsson, Olle Eriksson, Daniel Svensson, and Allan Hedin. Special thanks to Göran Östlund, the representative of the Association of Swedish Engineering Industries, who kindly offered all the project material for the work on the book. Jacky Estublier and Geoffrey Clemm read the project report and an early version of the text for the book and greatly encouraged us to continue working on it. Many people from the ABB, Ericsson, Mentor Graphics, SAAB, and SUN companies have helped us greatly to make the case studies up to date, accurate, and interesting. We are much obliged to them. Special gratitude goes to Peter Lister, an active member of the International Council on Systems Engineering (INCOSE), who reviewed the entire book in detail and whose comments invariably led to its improvement. We wish to thank Victor Miller, who did a great job of reviewing all chapters and enhancing the writing style. We are particularly indebted to Tim Pitts and Tiina Ruonamaa from Artech House for their enormous and sustained support during the writing. Finally, we wish to express our gratitude and love to our families: to the children Tea and Luka, Johan and Emma, and Fredrik, Emma, and Kaspar, and to our spouses, Gordana, Maria, and Roger, without whose unfailing and generous support this book could not have been written.






















          18.2 Process Improvement Economics






          Although this report contains general industry data, each company needs to create an individualized plan and budget for its improvement strategy. Table 18.1 presents information based on the size of companies in terms of software personnel. The cost data in Table 18.1 is expressed in terms of "cost per capita," or the approximate cost for each employee in software departments. The cost elements include training, consulting fees, capital equipment, software licenses, and improvements in office conditions.

          Table 18.1. Process Improvement Expenses per Capita

          Stage  Meaning              Small Staff  Medium Staff  Large Staff  Giant Staff  Average
                                      (< 100)      (< 1,000)     (< 10,000)   (> 10,000)
          0      Assessment               $100          $125         $150         $250       $156
          1      Management              1,500         2,500        3,500        5,000      3,125
          2      Process                 1,500         2,500        3,000        4,500      2,875
          3      Tools                   3,000         6,000        5,000       10,000      6,000
          4      Infrastructure          1,000         1,500        3,000        6,500      3,000
          5      Reuse                     500         2,500        4,500        6,000      3,375
          6      Industry leadership     1,500         2,000        3,000        4,500      2,750
                 Total expenses         $9,100       $17,125      $22,150      $36,750    $21,281





          The sizes in Table 18.1 refer to software populations and divide organizations into four rough size domains: fewer than 100 software personnel, fewer than 1,000 personnel, fewer than 10,000 personnel, and more than 10,000 personnel, which implies giant software organizations such as IBM, Accenture Consulting, and Electronic Data Systems (EDS), all of which have more than 50,000 software personnel corporatewide.


          As Table 18.1 shows, software process assessments are fairly inexpensive. But improving software processes and tool suites after a software process assessment can be very expensive indeed.


          It sometimes happens that the expense of process improvement is high enough that companies prefer to bring in an outsource vendor instead. This option is used most often by companies that are below average. If a company is far behind similar companies, then turning over software development and maintenance to an outside company that already uses state-of-the-art processes and tool suites may make good business sense.


          Another important topic is the time it will take to move through each stage of the process improvement sequence. Table 18.2 illustrates the approximate number of calendar months devoted to moving from stage to stage. Smaller companies can move much more rapidly than large corporations and government agencies. Large companies often have entrenched bureaucracies with many levels of approval. Thus, change in large companies is often slow and sometimes very slow.


          For large companies process improvement is of necessity a multiyear undertaking. Corporations and government agencies seldom move quickly, even if everyone is moving in the same direction. When there is polarization of opinion or political opposition, progress can be very slow or nonexistent.





























































































          Table 18.2. Process Improvement Stages in Calendar Months

          Stage  Meaning              Small Staff  Medium Staff  Large Staff  Giant Staff  Average
                                      (< 100)      (< 1,000)     (< 10,000)   (> 10,000)
          0      Assessment               2.00          2.00         3.00         4.00       2.75
          1      Management               3.00          6.00         9.00        12.00       7.50
          2      Process                  4.00          6.00         9.00        15.00       8.50
          3      Tools                    4.00          6.00         9.00        12.00       7.75
          4      Infrastructure           3.00          4.00         9.00        12.00       7.00
          5      Reuse                    4.00          6.00        12.00        16.00       9.50
          6      Industry leadership      6.00          8.00         9.00        12.00       8.75
                 Sum (worst case)        26.00         38.00        60.00        83.00      51.75
                 Overlap (best case)     16.90         26.60        43.20        61.42      33.64





          An important question is, what kind of value or return on investment will occur from software process improvements? Table 18.3 shows only the approximate improvements for schedules, costs, and quality (here defined as software defect levels). The results are expressed as percentage improvements compared to the initial baseline at the start of the improvement process.


          The best projects in the best companies can deploy software with only about 5% of the latent defects of similar projects in lagging companies. Productivity rates are higher by more than 300%, and schedules are only about one-fourth as long. These notable differences can be used to justify investments in software process improvement activities.


          As can be seen from this rough analysis, the maximum benefits do not occur until stage 5, when full software reusability programs are implemented. Since reusability has the best return and greatest results, our clients often ask why it is not the first stage.


          The reason that software reuse is delayed until stage 5 is that a successful reusability program depends on mastering software quality. Effective software quality control implies deploying a host of precursor technologies such as formal inspections, formal test plans, formal quality assurance groups, and formal development processes. Unless software quality is at state-of-the-art levels, any attempt to reuse materials can be hazardous. Reusing materials that contain serious errors will result in longer schedules and higher costs than having no reusable artifacts.




































































          Table 18.3. Improvements in Software Defect Levels, Productivity, and Schedules

          Stage  Meaning              Delivered    Development       Development
                                      Defects (%)  Productivity (%)  Schedule (%)
          0      Assessment               0.00          0.00             0.00
          1      Management             -10.00         10.00           -12.00
          2      Process                -50.00         30.00           -17.00
          3      Tools                  -10.00         25.00           -12.00
          4      Infrastructure          -5.00         10.00            -5.00
          5      Reuse                  -85.00         70.00           -50.00
          6      Industry leadership     -5.00         50.00            -5.00
                 Total                  -95.00        365.00           -75.00













            Section 7.10. From the Java Library: java.util.StringTokenizer












            java.sun.com/docs



            One of the most widespread string-processing tasks is that of breaking up a string into its components, or tokens. For example, when processing a sentence, you may need to break the sentence into its constituent words, which are considered the sentence tokens. When processing a name-password string, such as "boyd:14irXp", you may need to break it into a name and a password. Tokens are separated from each other by one or more characters known as delimiters. For a sentence, white space, including blank spaces, tabs, and line feeds, serves as the delimiters. For the password example, the colon character serves as the delimiter.



            Java's java.util.StringTokenizer class is specially designed for breaking strings into their tokens (Fig. 7.17). When instantiated with a String parameter, a StringTokenizer breaks the string into tokens, using white space as delimiters. For example, if we instantiated a StringTokenizer as in the code




            Figure 7.17. The java.util.StringTokenizer class.







            StringTokenizer sTokenizer
            = new StringTokenizer("This is an English sentence.");


            it would break the string into the following tokens, which would be stored internally in the StringTokenizer in the order shown:


            This
            is
            an
            English
            sentence.


            Note that the period is part of the last token ("sentence."). This is because punctuation marks are not considered delimiters by default.


            If you wanted to include punctuation symbols as delimiters, you could use the second StringTokenizer() constructor, which takes a second String parameter (Fig. 7.17). The second parameter specifies a string of characters that should be used as delimiters. For example, in the instantiation,



            StringTokenizer sTokenizer
            = new StringTokenizer("This is an English sentence.", "\b\t\n,;.!");


            punctuation symbols (periods, commas, and so on) are included among the delimiters. Note that escape sequences (\b\t\n) are used to specify backspaces, tabs, and new lines.


            The hasMoreTokens() and nextToken() methods can be used to process a delimited string one token at a time. The first method returns true as long as more tokens remain; the second gets the next token in the list. For example, here is a code segment that will break a standard URL string into its constituent parts:


            String url = "http://java.trincoll.edu/~jjj/index.html";
            StringTokenizer sTokenizer = new StringTokenizer(url,":/");
            while (sTokenizer.hasMoreTokens()) {
            System.out.println(sTokenizer.nextToken());
            }


            This code segment will produce the following output:


            http
            java.trincoll.edu
            ~jjj
            index.html


            The only delimiters used in this case were the ":" and "/" symbols. And note that nextToken() does not return the empty string between ":" and "/" as a token.












            Chapter 19. SDL Banned Function Calls
















            In this chapter:

            The Banned APIs
            Why the "n" Functions Are Banned
            Important Caveat
            Choosing StrSafe vs. Safe CRT
            Using StrSafe
            Using Safe CRT
            Other Replacements
            Tools Support
            ROI and Cost Impact
            Metrics and Goals




            When the C runtime library (CRT) was first created about 25 years ago, the threats to computers were different; machines were not as interconnected as they are today, and attacks were not as prevalent. With this in mind, a subset of the C runtime library must be deprecated for new code and, over time, removed from earlier code. It's just too easy to get code that uses these outdated functions wrong. Some of the classic replacement functions are prone to error, too.


            Following is a partial list of Microsoft security bulletins that could have been prevented if the banned application programming interfaces (APIs) that led to the security bug had been removed from the code:


            Microsoft Bulletin Number   Product and Code               Function
            MS02-039                    Microsoft SQL Server 2000      sprintf
            MS05-010                    License Server                 lstrcpy
            MS04-011                    Microsoft Windows (DCPromo)    wvsprintf
            MS04-011                    Windows (MSGina)               lstrcpy
            MS04-031                    Windows (NetDDE)               wcscat
            MS03-045                    Windows (USER)                 wcscpy



            You can get more info on these security bulletins at http://www.microsoft.com/technet/security/current.aspx. Note that many other software vendors and projects have had similar vulnerabilities.














