Friday, January 11, 2008

OOAD Methodologies

OOAD methodologies fall into two basic types. The ternary (or three-pronged) type is the natural evolution of existing structured methods and has three separate notations for data, dynamics, and process. The unary type asserts that because objects combine processes (methods) and data, only one notation is needed. The unary type is considered to be more object-like and easier to learn from scratch, but has the disadvantage of producing output from analysis that may be impossible to review with users.
Dynamic modeling is concerned with events and states, and generally uses state transition diagrams. Process modeling or functional modeling is concerned with processes that transform data values, and traditionally uses techniques such as data flow diagrams.

Booch

Grady Booch's approach to OOAD (Object-Oriented Analysis and Design with Applications, Benjamin/Cummings, 1994) is one of the most popular, and is supported by a variety of reasonably priced tools ranging from Visio to Rational Rose. Booch is the chief scientist at Rational Software, which produces Rational Rose. (Now that James Rumbaugh and Ivar Jacobson have joined the company, Rational Software is one of the major forces in the OOAD world.)
Booch's design method and notation consist of four major activities and six notations, as shown schematically in the table "The Steps in Booch's Methodology" at the end of this article.
While the Booch methodology covers requirements analysis and domain analysis, its major strength has been in design. However, with Rumbaugh and Jacobson entering the fold, the (relative) weaknesses in analysis are disappearing rapidly. Booch represents one of the better developed OOAD methodologies, and now that Rational Rose is moving away from its previous tight link with C++ to a more open approach that supports 4GLs such as PowerBuilder, the methodology's popularity should increase rapidly.
For systems with complex rules, state diagrams work well when the number of states is small, but not when it is large. Once a single state-transition diagram has more than 8 to 10 states, it becomes difficult to manage; beyond about 20 states, it becomes excessively unwieldy.
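To make the scale problem concrete, here is a minimal Java sketch of a small state-transition model; the OrderStateMachine class and its four states are invented for illustration. With only a handful of states, the legal transitions stay as readable in code as they do in a diagram, and each added state multiplies the transitions that must be checked.

```java
// Minimal sketch of a small state-transition model (illustrative names).
public class OrderStateMachine {
    enum State { DRAFT, SUBMITTED, APPROVED, SHIPPED }

    private State state = State.DRAFT;

    public State getState() { return state; }

    // Each event names its legal source state; anything else is rejected.
    public void submit()  { transition(State.DRAFT,     State.SUBMITTED); }
    public void approve() { transition(State.SUBMITTED, State.APPROVED); }
    public void ship()    { transition(State.APPROVED,  State.SHIPPED); }

    private void transition(State from, State to) {
        if (state != from)
            throw new IllegalStateException(state + " -> " + to + " not allowed");
        state = to;
    }

    public static void main(String[] args) {
        OrderStateMachine m = new OrderStateMachine();
        m.submit();
        m.approve();
        System.out.println(m.getState()); // APPROVED
    }
}
```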

Coad and Yourdon

Coad and Yourdon published the first practical and reasonably complete books on OOAD (Object-Oriented Analysis and Object-Oriented Design, Prentice-Hall, 1990 and 1991, respectively). Their methodology focuses on analysis of business problems, and uses a friendlier notation than that of Booch, Shlaer and Mellor, or the others that focus more on design.
In Coad and Yourdon, analysis proceeds in five stages, called SOSAS:
* Subjects: These are similar to the levels or layers in data-flow diagrams and should contain five to nine objects.
* Objects: Object classes must be specified in this stage, but Coad and Yourdon provide few guidelines for how to do this.
* Structures: There are two types: classification structures and composition structures. Classification structures correspond to the inheritance relationships between classes; composition structures define the other types of relationships between classes. Coad and Yourdon do not handle these structures as well as Rumbaugh, Jacobson, and several other methodologies do.
* Attributes: These are handled in a fashion very similar to that in relational analysis.
* Services: The identification of what other methodologies call methods or operations.
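The distinction between classification and composition structures can be sketched in a few lines of Java; the Account, SavingsAccount, and Customer classes below are invented for illustration.

```java
// Classification structure: inheritance ("a SavingsAccount is an Account").
class Account {
    protected double balance;
    public double getBalance() { return balance; }
    public void deposit(double amount) { balance += amount; }
}

class SavingsAccount extends Account {       // classification structure
    private double rate = 0.03;
    public double annualInterest() { return balance * rate; }
}

// Composition structure: one class built from, or related to, others
// ("a Customer has Accounts").
class Customer {
    private final java.util.List<Account> accounts = new java.util.ArrayList<>();
    public void open(Account a) { accounts.add(a); }
    public double totalBalance() {
        return accounts.stream().mapToDouble(Account::getBalance).sum();
    }
}

public class Structures {
    public static void main(String[] args) {
        Customer c = new Customer();
        SavingsAccount s = new SavingsAccount();
        s.deposit(100.0);
        c.open(s);
        System.out.println(c.totalBalance()); // 100.0
    }
}
```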
In design, these five activities are supplanted by and refined into four components:
* problem domain component: classes that deal with the problem domain; for example, Customer classes and Order classes
* human interaction component: user-interface classes such as window classes
* task management component: system-management classes such as error classes and security classes
* data management component: database access method classes and the like
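As a rough illustration of how these four components divide responsibilities, here is a hypothetical Java sketch of an order system; all class names are invented, and a real design would place each component in its own package or subsystem.

```java
// Problem domain component: business classes such as Order.
class Order {
    private final String id;
    private final double total;
    Order(String id, double total) { this.id = id; this.total = total; }
    String id() { return id; }
    double total() { return total; }
}

// Data management component: hides how Orders are stored.
class OrderStore {
    private final java.util.Map<String, Order> rows = new java.util.HashMap<>();
    void save(Order o) { rows.put(o.id(), o); }
    Order find(String id) { return rows.get(id); }
}

// Task management component: system concerns such as error handling.
class OrderNotFound extends RuntimeException {
    OrderNotFound(String id) { super("no such order: " + id); }
}

// Human interaction component: the user-facing surface (here, plain text).
class OrderScreen {
    private final OrderStore store;
    OrderScreen(OrderStore store) { this.store = store; }
    String show(String id) {
        Order o = store.find(id);
        if (o == null) throw new OrderNotFound(id);
        return "Order " + o.id() + ": " + o.total();
    }
}

public class FourComponents {
    public static void main(String[] args) {
        OrderStore store = new OrderStore();
        store.save(new Order("42", 19.5));
        System.out.println(new OrderScreen(store).show("42")); // Order 42: 19.5
    }
}
```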
Although Coad and Yourdon's methodology is perhaps one of the easiest OO methodologies to learn and get started with, the most common complaint is that it is too simple and not suitable for large projects. However, if you adhere to a premise that you should use those pieces of a methodology that work, and add other parts from other methodologies as required, Coad and Yourdon's methodology is not as limiting as its critics claim.

Fusion

In 1990, Derek Coleman of Hewlett-Packard led a team in the U.K. to develop a set of requirements for OOAD, and conducted a major survey of methods in use at HP and elsewhere. The chief requirement was a simple methodology with an effective notation.
The result was Fusion, which Coleman and others developed by borrowing and adapting ideas from other methodologies. They incorporated some major ideas from Booch, Jacobson, Rumbaugh, and others, and explicitly rejected many other ideas from these methodologies. Articles that Fusion practitioners have written have been some of the most pragmatic and useful about OOAD, but, unless you conduct a significant research effort, you generally hear much more about other methodologies than about Fusion.
Coleman did not use some of the major components of Rumbaugh and Shlaer and Mellor in Fusion, because the components were not found to be useful in practice. Some writers have called this encouraging and remarkable, and consider it indirect proof that excessive emphasis on state models comes from Rumbaugh and Shlaer and Mellor's telecommunications and realtime system backgrounds.
Fusion's pragmatic approach seems to hold considerable potential for client/server applications, but this methodology is not being marketed as aggressively as most of the other methodologies.

Jacobson: Objectory and OOSE

Although Jacobson's full OOAD methodology, Objectory, is proprietary (to use it you must buy consulting services and a CASE tool, OrySE, from Rational Software), it is probably the most serious attempt by an OOAD tool vendor to support the entire software development life cycle. Jacobson is considered to be one of the most experienced OO experts for applying OO to business problems such as client/server applications.
Jacobson's Object-Oriented Software Engineering (OOSE) is a simplified version of Objectory, which Jacobson himself has declared inadequate for production applications. According to Jacobson: "You will need the complete ... description which, excluding large examples, amounts to more than 1200 pages" (Object-Oriented Software Engineering, Addison-Wesley, 1992).
Object modeling and many other OO concepts in Objectory and OOSE are similar to OO concepts in other methodologies. The major distinguishing feature in Jacobson is the use case. A use-case definition consists of a diagram and a description of a single interaction between an actor and a system; the actor may be an end user or some other object in the system. For example, the use-case description of an order-entry application would contain a detailed description of how the actor (the user) interacts with the system during each step of the order entry, and would include descriptions of all the exception handling that might occur.
According to Jacobson, a use case is any description of a single way to use a system or application, or any class of top-level usage scenario, that captures how actors use their black-box applications. An actor is an interface to the system, that is, something with which the system communicates, and may be a person or another program. Jacobson adds that a use case is any behaviorally related sequence of transactions that a single actor performs in a dialog with a system, in order to provide some measurable value to the actor.
Generally, you employ use cases to document user requirements in terms of user dialogs with a system. Use cases appear first in the requirements model, and are then used to generate a domain object model with objects drawn from the entities of the business as mentioned in the use cases. This is then converted into an analysis model by classifying the domain objects into three types: interface objects, entity objects, and control objects.
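The three analysis object types can be sketched for an order-entry use case; the class names below are invented for illustration, with the interface object as the actor's boundary, the entity object holding business information, and the control object carrying the use-case logic, including one exception path.

```java
// Entity object: long-lived business information.
class OrderEntity {
    final java.util.List<String> lines = new java.util.ArrayList<>();
}

// Interface (boundary) object: the actor's view of the system.
class OrderForm {
    String prompt() { return "Enter product code:"; }
}

// Control object: the use-case logic tying boundary and entity together.
class OrderEntryControl {
    private final OrderEntity order = new OrderEntity();
    void addLine(String productCode) {
        if (productCode == null || productCode.isEmpty())
            throw new IllegalArgumentException("empty product code"); // exception path
        order.lines.add(productCode);
    }
    int lineCount() { return order.lines.size(); }
}

public class UseCaseObjects {
    public static void main(String[] args) {
        System.out.println(new OrderForm().prompt());
        OrderEntryControl control = new OrderEntryControl();
        control.addLine("SKU-1");
        control.addLine("SKU-2");
        System.out.println(control.lineCount()); // 2
    }
}
```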
The big danger with OOSE is the assumption that you can express all sequences and business rules in use cases. One of my best object analysts ran into major difficulties while trying to force a large, complex client/server application into the Jacobson methodology. Several major authors have subsequently declared that for many complex systems and almost all expert systems, it cannot be done.
However, the use-case descriptions and their corresponding interaction diagrams can provide a very useful view of many parts of a system, and Objectory and OOSE have a good, simple notation. For client/server applications with rules of typical, rather than extreme, complexity, Jacobson provides a sound approach. Because use cases are narrative descriptions in plain English, use-case analyses are much easier to review with end users and other individuals who are not OO experts than are object models with object interaction diagrams.

LBMS SEOO

Systems Engineering OO (SEOO) is a proprietary methodology and toolkit from the U.K.-based company LBMS, which has its U.S. headquarters in Houston. SEOO is tightly integrated with Windows 4GLs such as PowerBuilder, and is perceived to be a very pragmatic and useful tool, but this perception may be due in part to a stronger marketing effort than is often made for nonproprietary methodologies. LBMS focuses on selling CASE tools that support its methodology, and SEOO is the only methodology described in this article that is not documented in a published book.
Because SEOO is proprietary, there is not as much detailed information available about it as there is about other methodologies, and it is somewhere between difficult and impossible to try it out just to compare it with the others.
The four major components of the SEOO methodology are:
* work-breakdown structures and techniques
* an object modeling methodology
* GUI design techniques
* relational database linkages to provide ER modeling and 4GL-specific features
Of all the major OOAD approaches, only SEOO gives the feeling of having started with non-OO approaches and then adapted them to OO. A very positive aspect of this is the heavy focus on data management and data modeling. SEOO is intended to be object oriented while retaining the advantages of traditional data modeling, which makes the methodology well suited for client/server database applications.
SEOO is unique in treating data, triggers, and referential-integrity rules as a set of shared objects in a database. It treats a data model as a view of the shared objects, which also include constraints, rules, and dynamics (state transitions and so on). SEOO draws a clear line between shared objects and other objects, and regards the shared objects as important interfaces between subsystems. This technique allows a distinction, for example, between customer behavior shared by all applications and customer object behavior unique to a single application. It is a technique with which a purist would quibble, but which is eminently practical.
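The shared-object distinction can be illustrated with a small, hypothetical Java sketch: behavior and rules that every application shares live in one class, and a single application's additional behavior lives in a subclass. All names here are invented.

```java
// Shared object: customer behavior common to all applications,
// including a referential-integrity style rule.
class SharedCustomer {
    private final String id;
    private final double creditLimit;
    SharedCustomer(String id, double creditLimit) {
        if (id == null || id.isEmpty())
            throw new IllegalArgumentException("customer id required"); // shared rule
        this.id = id;
        this.creditLimit = creditLimit;
    }
    double creditLimit() { return creditLimit; }
}

// Application-specific behavior: only the marketing application cares.
class MarketingCustomer extends SharedCustomer {
    MarketingCustomer(String id, double creditLimit) { super(id, creditLimit); }
    boolean mailable() { return creditLimit() > 0; }
}

public class SharedObjects {
    public static void main(String[] args) {
        MarketingCustomer c = new MarketingCustomer("C-1", 500.0);
        System.out.println(c.mailable()); // true
    }
}
```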

Rumbaugh OMT

James Rumbaugh's methodology, as described in his book Object-Oriented Modeling and Design (Prentice-Hall, 1991), has always been one of my favorites. It offers one of the most complete descriptions yet written of an OO analysis methodology. Although it is somewhat lacking in OO design and construction, it contains a large number of ideas and approaches that are of significant use to analysts and designers.
Rumbaugh starts by assuming that a requirements specification exists. Analysis consists of building three separate models:
* the Object Model (OM): definition of classes, together with attributes and methods; the notation is similar to that of ER modeling with methods (operations) added
* the Dynamic Model (DM): state transition diagrams (STDs) for each class, as well as global event-flow diagrams
* the Functional Model (FM): diagrams very similar to data flow diagrams
Because Rumbaugh's notation (as well as that of Booch and Shlaer and Mellor) is supported by low-end drawing tools such as Visio, I have used it not only for OO analysis, but for a number of diagrams in recent proposals. For example, I recently drew a diagram of project deliverables (in a proposal for a large OO development project) using Visio and Rumbaugh notation to show the object hierarchy of deliverables which, at the first level of inheritance, included hardware, software, and printed deliverables.
Of the major methodologies, Rumbaugh's is one of those with which I feel most comfortable. It supports many traditional diagrams from structured methodologies, and contains a much richer object modeling notation than Coad and Yourdon's. Its weakness at this time is that it is less useful for client/server application design and construction, but analysts who have successfully completed other OO projects and know the class libraries of their development tool can overcome this.

Shlaer and Mellor OO Analysis

When Shlaer and Mellor OO analysis first came out in 1988, it represented one of the earliest examples of OO methodology and it has evolved very positively since then. (See Shlaer and Mellor's books, Object-Oriented Systems Analysis -- Modeling the World in Data and Object Lifecycles: Modeling the World in States, Prentice-Hall, 1988 and 1992, respectively.)
Originally an object-based extension of data modeling, the Shlaer and Mellor methodology starts with an information model describing objects, attributes, and relationships. (Note that this is more like a data model than an object model.) Next, a state model documents the states of objects and the transitions between them. Finally, a data-flow diagram shows the process model.
This methodology seems to be influenced strongly by relational design, but I have not seen it used for client/server development. This does not mean that it is not usable for such work, but the applications occasionally cited as examples of its use seem to be in the areas of real-time or process control. This may have to do with the fact that an earlier version, the Ward/Mellor approach, is widely used in the realtime world.

The Steps in Booch's Methodology

Steps                    Notations
Logical structure        Class Diagrams, Object Diagrams
Physical structure       Module Diagrams, Process Diagrams
Dynamics of Classes      State Transition Diagrams
Dynamics of Instances    Timing Diagrams

Various Database Models

Hierarchical Model:

The hierarchical data model organizes data in a tree structure with a hierarchy of parent and child data segments. This structure implies that a record can have repeating information, generally in the child data segments. Data is stored as a series of records, each with a set of field values attached to it. All instances of a specific record are collected together as a record type; record types are the equivalent of tables in the relational model, and individual records are the equivalent of rows. To create links between record types, the hierarchical model uses parent-child relationships, which are 1:N mappings between record types, represented as trees (a structure borrowed from mathematics, much as the relational model borrowed set theory).

For example, an organization might store information about an employee, such as name, employee number, department, and salary, and also information about the employee's children, such as name and date of birth. The employee and children data form a hierarchy in which the employee data is the parent segment and the children data are the child segments. If an employee has three children, there are three child segments associated with one employee segment. Because the parent-child relationship in a hierarchical database is one-to-many, a child segment can have only one parent segment. Hierarchical DBMSs were popular from the late 1960s, with the introduction of IBM's Information Management System (IMS) DBMS, through the 1970s.
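The employee example above can be sketched directly as parent and child segments; the Java class names are invented for illustration, and the tree structure (each child reachable from exactly one parent) mirrors the model's one-to-many restriction.

```java
// Sketch of hierarchical parent-child segments (illustrative names).
class ChildSegment {
    final String name;
    final String dateOfBirth;
    ChildSegment(String name, String dob) { this.name = name; this.dateOfBirth = dob; }
}

class EmployeeSegment {                       // parent segment
    final String name;
    final int employeeNumber;
    // 1:N parent-child relationship: each child sits under exactly one parent.
    final java.util.List<ChildSegment> children = new java.util.ArrayList<>();
    EmployeeSegment(String name, int number) { this.name = name; this.employeeNumber = number; }
}

public class HierarchicalModel {
    public static void main(String[] args) {
        EmployeeSegment emp = new EmployeeSegment("Ada", 1);
        emp.children.add(new ChildSegment("Sam", "2001-05-01"));
        emp.children.add(new ChildSegment("Kim", "2003-09-12"));
        emp.children.add(new ChildSegment("Lee", "2005-02-20"));
        // Three children -> three child segments under one employee segment.
        System.out.println(emp.children.size()); // 3
    }
}
```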


Network Model:

The popularity of the network data model coincided with that of the hierarchical data model. Some data were more naturally modeled with more than one parent per child, so the network model permitted the modeling of many-to-many relationships in data. In 1971, the Conference on Data Systems Languages (CODASYL) formally defined the network model. The basic data modeling construct in the network model is the set construct: a set consists of an owner record type, a set name, and a member record type. A member record type can play that role in more than one set, so the multiparent concept is supported. An owner record type can also be a member or owner in another set. The data model is a simple network, and link and intersection record types (called junction records by IDMS) may exist, as well as sets between them. Thus, the complete network of relationships is represented by several pairwise sets; in each set, one record type is the owner (at the tail of the relationship arrow) and one or more record types are members (at the head of the relationship arrow). Usually a set defines a 1:M relationship, although 1:1 is permitted. The CODASYL network model is based on mathematical set theory.
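The set construct can be sketched in a few lines; the Record and CodasylSet classes below are invented for illustration, and show how one member record can appear in sets owned by two different records — the multiparent case the hierarchical model forbids.

```java
import java.util.ArrayList;
import java.util.List;

class Record {
    final String name;
    Record(String name) { this.name = name; }
}

// Owner record + set name + member records: a CODASYL set (usually 1:M).
class CodasylSet {
    final Record owner;
    final String setName;
    final List<Record> members = new ArrayList<>();
    CodasylSet(Record owner, String setName) { this.owner = owner; this.setName = setName; }
}

public class NetworkModel {
    public static void main(String[] args) {
        Record dept = new Record("Dept-A");
        Record project = new Record("Project-X");
        Record employee = new Record("Emp-1");

        CodasylSet worksIn = new CodasylSet(dept, "WORKS-IN");
        CodasylSet assignedTo = new CodasylSet(project, "ASSIGNED-TO");
        worksIn.members.add(employee);      // the same employee record is a member
        assignedTo.members.add(employee);   // of two sets -> two "parents"
        System.out.println(worksIn.members.get(0) == assignedTo.members.get(0)); // true
    }
}
```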


Relational Model:

(RDBMS: relational database management system.) A database based on the relational model developed by E.F. Codd. A relational database allows the definition of data structures, storage and retrieval operations, and integrity constraints. In such a database, the data and the relations between them are organized in tables. A table is a collection of records, and each record in a table contains the same fields.
Properties of Relational Tables:
· Values Are Atomic
· Each Row is Unique
· Column Values Are of the Same Kind
· The Sequence of Columns is Insignificant
· The Sequence of Rows is Insignificant
· Each Column Has a Unique Name

Certain fields may be designated as keys, which means that searches for specific values of that field will use indexing to speed them up. Where fields in two different tables take values from the same set, a join operation can be performed to select related records in the two tables by matching values in those fields. Often, but not always, the fields will have the same name in both tables. For example, an "orders" table might contain (customer-ID, product-code) pairs and a "products" table might contain (product-code, price) pairs, so to calculate a given customer's bill you would sum the prices of all products ordered by that customer by joining on the product-code fields of the two tables. This can be extended to joining multiple tables on multiple fields. Because these relationships are only specified at retrieval time, relational databases are classed as dynamic database management systems. The relational database model is based on relational algebra.
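The customer-bill example above can be made runnable as a small sketch; the table contents are invented, and prices are kept in whole cents to avoid floating-point rounding.

```java
import java.util.List;
import java.util.Map;

public class JoinExample {
    // "orders" rows are {customer-ID, product-code}; "products" maps
    // product-code -> price in cents.
    public static long bill(String customerId,
                            List<String[]> orders,
                            Map<String, Long> prices) {
        long total = 0;
        for (String[] row : orders) {
            if (row[0].equals(customerId)) {
                total += prices.get(row[1]);  // the join: match on product-code
            }
        }
        return total;
    }

    public static void main(String[] args) {
        List<String[]> orders = List.of(
            new String[]{"cust-1", "P-100"},
            new String[]{"cust-1", "P-200"},
            new String[]{"cust-2", "P-100"});
        Map<String, Long> products = Map.of("P-100", 999L, "P-200", 500L);
        System.out.println(bill("cust-1", orders, products)); // 1499
    }
}
```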


Object/Relational Model:
Object/relational database management systems (ORDBMSs) add new object storage capabilities to the relational systems at the core of modern information systems. These new facilities integrate management of traditional fielded data, complex objects such as time-series and geospatial data, and diverse binary media such as audio, video, images, and applets. By encapsulating methods with data structures, an ORDBMS server can execute complex analytical and data manipulation operations to search and transform multimedia and other complex objects.
As an evolutionary technology, the object/relational (OR) approach has inherited the robust transaction- and performance-management features of its relational ancestor and the flexibility of its object-oriented cousin. Database designers can work with familiar tabular structures and data definition languages (DDLs) while assimilating new object-management possibilities. Query and procedural languages and call interfaces in ORDBMSs are familiar: SQL3, vendor procedural languages, and ODBC, JDBC, and proprietary call interfaces are all extensions of RDBMS languages and interfaces. And the leading vendors are, of course, quite well known: IBM, Informix, and Oracle.


Object-Oriented Model:
Object DBMSs add database functionality to object programming languages. They bring much more than persistent storage of programming language objects. Object DBMSs extend the semantics of the C++, Smalltalk, and Java object programming languages to provide full-featured database programming capability, while retaining native language compatibility. A major benefit of this approach is the unification of application and database development into a seamless data model and language environment. As a result, applications require less code, use more natural data modeling, and code bases are easier to maintain. Object developers can write complete database applications with a modest amount of additional effort.

According to Rao (1994), "The object-oriented database (OODB) paradigm is the combination of object-oriented programming language (OOPL) systems and persistent systems. The power of the OODB comes from the seamless treatment of both persistent data, as found in databases, and transient data, as found in executing programs."

In contrast to a relational DBMS, where a complex data structure must be flattened out to fit into tables or joined together from those tables to form the in-memory structure, object DBMSs have no performance overhead to store or retrieve a web or hierarchy of interrelated objects. This one-to-one mapping of programming language objects to database objects has two benefits over other storage approaches: it provides higher-performance management of objects, and it enables better management of the complex interrelationships between objects. This makes object DBMSs better suited to supporting applications such as financial portfolio risk analysis systems, telecommunications service applications, World Wide Web document structures, design and manufacturing systems, and hospital patient record systems, which have complex relationships between data.
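The one-to-one mapping point can be illustrated with a toy portfolio example; the classes below are invented, and the point is simply that relationships are direct object references, so traversal needs no join or flattening step.

```java
import java.util.ArrayList;
import java.util.List;

class Instrument {
    final String symbol;
    Instrument(String symbol) { this.symbol = symbol; }
}

class Position {
    final Instrument instrument;   // direct object reference, not a foreign key
    final long shares;
    Position(Instrument instrument, long shares) { this.instrument = instrument; this.shares = shares; }
}

class Portfolio {
    final List<Position> positions = new ArrayList<>();
}

public class ObjectGraph {
    public static void main(String[] args) {
        Instrument ibm = new Instrument("IBM");
        Portfolio p = new Portfolio();
        p.positions.add(new Position(ibm, 100));
        // Navigating the relationship is a pointer dereference, not a join.
        System.out.println(p.positions.get(0).instrument.symbol); // IBM
    }
}
```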

Thursday, January 10, 2008

DISTRIBUTED OBJECT COMPUTING, REUSABLE SOFTWARE COMPONENTS

Distributed object computing is about building applications in a modular way with components. Components are typically designed for distribution across networks for use on multivendor, multiplatform computing systems. Because components are meant for distribution, standard interfaces and communication methods are important.
An interesting paper written by Simon Phipps of IBM (see the link on the related entries page) provides insight into how the Web and component technology fit together:
* TCP/IP provides a network-independent transport layer.
* Web clients and servers remove platform and operating system dependencies.
* Component software (e.g., Java or ActiveX) eliminates the hassles associated with buying and installing software.
* XML makes data independent of software.
XML has become a critical part of component technologies because it allows for the exchange of structured information among any type of systems. XML creates a framework for documents in which data has meaning as defined by tags. Any system or application that understands XML tags can access data in XML documents. The distributed object/XML relationship is discussed further in the next section.
In an enterprise environment, components developed for in-house use may reside on multiple servers in multiple departments. Some objects are used throughout the organization and may be the core components of an application. Other components may add special functionality or exist temporarily on user systems. The component approach lets network administrators and developers quickly update components as needed, without changing the whole system.
If multiple companies are involved in business-to-business relationships over private networks (or the Internet), object technologies can be used to integrate business processes. All that is necessary is a standard object interface that lets each company access common data or objects.
Some of the advantages of component technology are listed next. While component technology may be used to build custom in-house applications, such as accounting systems, Web browsers and browser add-ins provide the most immediate example of component technology.
* If the application needs updating, only specific components need upgrading, not the entire application.
* New components can be added at any time to expand the functionality of the program.
* Users don't need to install every component that makes up an application, only the components they need.
* Individual components can be sold on the commercial market to provide functions that developers need to build applications or that users need to expand programs they already use.
* Software development time is reduced because existing components can be reused to build new applications.
* Using standard interfaces and programming languages like Java, components from different developers and vendors can be combined.
* Maintenance costs are reduced because individual components can be upgraded without having to change the entire application.
Component models provide the basis for inter-service communications and component integration. Web sites can offer sophisticated services for users by performing interactive tasks that involve calls by one server to multiple other servers. These multitiered environments allow tasks to be broken up into different services that run on different computers. Services such as application logic, information retrieval, transaction monitoring, data presentation, and management may run on different computers that communicate with one another to provide end users with a seamless application interface.
Distributed Component Architectures
A standard component model and inter-component communication architecture are critical in furthering the use of component technology on the open Web. The most common component models are CORBA, EJB (Enterprise Java Beans), and Microsoft COM, as discussed shortly.
Distributing applications over networks leads to some interesting problems. In a stand-alone system, components run as a unit in the memory space of the same computer. If a problem occurs, the components can easily communicate that problem with one another. But if components are running on different computers, they need a way to communicate the results of their work or problems that have occurred.
An ORB (object request broker) handles the plumbing that allows objects to communicate over a network. You can think of the ORB as a sort of software bus, or backbone, that provides a common interface through which many different kinds of objects can communicate in a peer- to-peer scheme. One such ORB is CORBA (Common Object Request Broker Architecture). CORBA is cross-platform and allows components written for different operating systems and environments to work together.
An object makes a request and sends it to the ORB. The ORB then locates the requested object or an object that can provide services, and establishes communication between the client and server. The receiving object then responds to the request and returns a response to the ORB, which formats and forwards the response to the requester.
In this model, objects simply specify a task to perform. The location of the object that can satisfy the request is not important. The end user sees applications as being seamless, even though services and data may be coming from many places on the network.
The ORB process is similar to a remote procedure call with the added benefit that the ORB itself is capable of locating other objects that can service requests. Actually, an ORB is an alternative to RPCs (remote procedure calls) and message-oriented middleware.
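The request/response flow just described can be sketched as a toy, in-process broker; ToyOrb and its registry are invented for illustration and stand in for a real ORB's network transport and naming service.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class ToyOrb {
    // Registry of named servants the broker can locate.
    private final Map<String, Function<String, String>> servants = new HashMap<>();

    public void register(String name, Function<String, String> servant) {
        servants.put(name, servant);
    }

    // The broker locates an object that can service the request,
    // forwards the request, and returns the response to the client.
    public String invoke(String name, String request) {
        Function<String, String> servant = servants.get(name);
        if (servant == null) throw new IllegalArgumentException("no servant: " + name);
        return servant.apply(request);
    }

    public static void main(String[] args) {
        ToyOrb orb = new ToyOrb();
        orb.register("greeter", req -> "hello, " + req);
        // The client never learns where "greeter" lives; the broker mediates.
        System.out.println(orb.invoke("greeter", "world")); // hello, world
    }
}
```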
To run sophisticated applications and transactions over networks, there is a need to register components and coordinate their activities so that critical transactions can take place. For example, if data is being written to multiple databases at different locations, a transaction monitor is needed to make sure that all those writes take place. Otherwise, they must all be rolled back. Microsoft Transaction Server is an example. It coordinates the interaction of components and ensures that transactions are implemented safely. Because it provides these features in an object-based environment, it is essentially a transaction-based object request broker. See "Transaction Processing" for more information.
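The all-or-nothing write described above can be sketched as follows; the Database and MiniMonitor classes are invented, and a real transaction monitor would use a protocol such as two-phase commit rather than this simplified prepare-then-commit loop.

```java
import java.util.ArrayList;
import java.util.List;

class Database {
    final List<String> committed = new ArrayList<>();
    private final boolean healthy;
    Database(boolean healthy) { this.healthy = healthy; }
    boolean prepare(String row) { return healthy; }   // can this write succeed?
    void commit(String row) { committed.add(row); }
}

public class MiniMonitor {
    // Write the row everywhere, or nowhere.
    public static boolean writeAll(List<Database> dbs, String row) {
        for (Database db : dbs)
            if (!db.prepare(row)) return false;       // any failure -> nothing commits
        for (Database db : dbs)
            db.commit(row);
        return true;
    }

    public static void main(String[] args) {
        List<Database> dbs = List.of(new Database(true), new Database(false));
        System.out.println(MiniMonitor.writeAll(dbs, "order-42")); // false
        System.out.println(dbs.get(0).committed.size());           // 0
    }
}
```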
The most important object models are described here:
CORBA (Common Object Request Broker Architecture) The basic messaging technology specification defined by the OMG (Object Management Group) in its OMA (Object Management Architecture). CORBA has been implemented by a number of companies and is becoming an important standard for implementing distributed applications on the Internet. See "CORBA (Common Object Request Broker Architecture)."
COM/DCOM (Component Object Model/Distributed COM) COM is Microsoft's basic object model. An early implementation was OLE (Object Linking and Embedding), which gave Windows applications their basic container and object-linking capabilities (within a single computer). DCOM is the network version of COM that allows objects running in different computers attached to a network to interact. The latest version of COM is COM+ for Windows 2000, which adds many new features and works by way of an XML-based message-passing approach. In 1999, Microsoft announced a DCOM replacement called SOAP (Simple Object Access Protocol) that uses XML as a universal data exchange mechanism. See "COM (Component Object Model)."
EJB (Enterprise JavaBeans) JavaBeans are software components based on the Java platform. EJB is contained within Sun's J2EE (Java 2 Platform, Enterprise Edition). JavaBeans can be combined to build larger applications, in the same way that OLE objects can be combined into compound documents. Enterprise JavaBeans are components for enterprise networks. Recently, the best features of CORBA and EJB have been merging into a model that can compete against the entrenched COM/DCOM model. See "Java."
DCOM is the most common of the technologies at this point, mainly because of existing support, developer knowledge, and the pervasiveness of Windows clients. CORBA has better multivendor, multiplatform support and is best for heterogeneous environments. CORBA was originally designed for tightly controlled enterprise network environments and well-managed inter-company connections. Both DCOM and CORBA are considered enterprise application development technologies, although both have been extended to work over the Internet. But they are considered too complex for most Web applications. EJB has gained the widespread support of Web developers. It is implemented in application servers from IBM, BEA Systems, and iPlanet (the Sun and Netscape Alliance). The CORBA 3.0 specification defines CORBA Beans, which combines features of CORBA and EJB with additional support for XML.
Microsoft's SOAP is important because it represents a move from the traditional remote procedure call method for exchanging information among objects to a message-passing scheme that uses XML. Microsoft now believes that the messaging model is best for the Web, as opposed to connection-oriented models such as RPC (remote procedure call) and Java's RMI (remote method invocation). By allowing objects to interact via XML, data interoperability is enhanced. SOAP carries XML messages across the Web via HTTP, which is pretty much the bottom line for interoperability.
The Object Management Group, which manages the CORBA standards, has been working with OASIS (Organization for the Advancement of Structured Information Standards) to define how XML can be used to support CORBA services. EJB uses XML to provide information about a Bean's interfaces, data types, and structure.

Distributed object computing uses reusable software components.


Reusable Software component:
Reusable software components are designed to apply the power and benefit of reusable, interchangeable parts from other industries to the field of software construction. Other industries have long profited from reusable components: reusable electronic components are found on circuit boards, and a typical part in your car can be replaced by a component made by one of many competing manufacturers. Lucrative industries are built around parts construction and supply in most competitive fields.

Example of Reusable Software component:

JAVA BEANS:

A Java Bean is a reusable software component...

JavaBeans technology is the component architecture for the Java 2 Platform, Standard Edition (J2SE). Components (JavaBeans) are reusable software programs that you can develop and assemble easily to create sophisticated applications...
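As a minimal illustration of the conventions that make an ordinary class a JavaBean — a public no-argument constructor, private state, and get/set accessor pairs that builder tools discover by reflection — here is a sketch. The class and property names are invented for the example.

```java
import java.io.Serializable;

// A minimal JavaBean sketch. Builder tools infer a "celsius" property
// from the getCelsius/setCelsius pair, and a read-only "fahrenheit"
// property from the lone getter.
public class TemperatureBean implements Serializable {
    private double celsius;

    public TemperatureBean() { }  // public no-arg constructor required

    public double getCelsius() { return celsius; }
    public void setCelsius(double c) { this.celsius = c; }

    // Derived, read-only property exposed through a getter alone.
    public double getFahrenheit() { return celsius * 9.0 / 5.0 + 32.0; }
}
```

Because it implements Serializable and follows the naming conventions, a visual builder can instantiate, configure, and persist this Bean without any code specific to it.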

UML SEQUENCE DIAGRAM

http://www.ibm.com/developerworks/rational/library/3101.html

Tuesday, December 25, 2007

RATIONAL ROSE AS A CASE TOOL

Rational Rose visual modeling facilitates and automates parts of the software development process through the use of the Unified Modeling Language. The UML is a meta-language, receiving strong industry support, that specifies a standardized set of graphical notations and their syntax useful in object-oriented analysis and design (OOAD).

The Rational Software Corporation offers a Software Engineering Educational Development (SEED) program that provides copies of various software packages and training materials, including Rational Rose, to qualifying educational institutions. Information about the SEED program may be obtained from seed@rational.com or from the company web site at http://www.rational.com. The SEED program can provide individual one-year licenses for students and faculty to use the Rational Rose software, as well as server licenses so that the software can be installed in a laboratory environment. In addition to Rational Rose, many other software tools are available under the SEED program, such as those for configuration management or real-time system development. The SEED program can also make available training materials for various courses related to OOAD, UML, and Rational Rose (Sweeney 2000).
Rational Rose has many advantages as a UML visual modeling tool, but its flexibility may be the most valuable. This flexibility is achieved by the software's ability to support many object-oriented languages, including Java, MFC C++, and Visual Basic, as well as Oracle 8i databases. Rational Rose promotes software component reuse and better utilizes scarce software development resources. Rational Rose supports both forward and reverse engineering. Support for forward engineering means that Rational Rose can generate code based on a visual model of the application, whereas reverse engineering support allows Rational Rose to convert existing code into a visual model. The combination of forward and reverse engineering is often referred to as round-trip engineering.
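To make forward engineering concrete: given a class diagram containing a single Account class with a balance attribute and deposit/withdraw operations, the generated Java skeleton would look roughly like the sketch below. This is hand-written here, not actual Rose output, and the names are illustrative.

```java
// Sketch of what forward-engineered code for a simple class diagram
// might look like: one class, one private attribute, public operations.
// Method bodies are filled in by the developer after generation.
public class Account {
    private double balance;

    public double getBalance() { return balance; }

    public void deposit(double amount) { balance += amount; }

    public void withdraw(double amount) {
        if (amount > balance) {
            throw new IllegalArgumentException("insufficient funds");
        }
        balance -= amount;
    }
}
```

Reverse engineering runs the other way: a tool parses a class like this one and recovers the class box, attribute, and operations for the diagram, which is what keeps model and code synchronized across iterations.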

Successful OOAD processes are often described
as having the characteristics of being both “iterative”
and “incremental” (Fowler 2000; Reed 2000; Liberty
1998). In the context of OOAD, iterative means
that the development proceeds piecemeal toward completion,
in such a way that as Fowler notes, “each iteration
builds production-quality software, tested and integrated,
that satisfies a subset of the requirements of the
project (Fowler 2000)”. Each iteration builds incrementally
on previous iterations during the construction phase
of the project. Rational Rose supports this iterative/
incremental development approach in part through
its round-trip engineering capabilities and by virtue of
its ability to be used in a team development environment.

CRC and Sequence Diagram

Hi girls..
Check out this link to get a better understanding of CRC cards and sequence diagrams:

http://www.cs.toronto.edu/~jm/340S/02/PDF2/SequenceD.pdf