Defense Information Infrastructure
Common Operating Environment (DII-COE)
Version 4.0


Distributed Computing

Software Requirements Specification (SRS)


28 January 1998

Changes since 8 July 1997 draft:

Strikeouts are in blue text, new text is in red. One piece of green text
indicates a new AF requirement for LDAP support, which needs to be discussed
in the DCWG forum prior to its official incorporation into the SRS.

1. Section 3.2.1.6.2: Split this future requirement into CORBA specific
requirements and DCE specific requirements, covered later in document.
2. Section 3.2.1.12.1: Changed notation to include reference to COE Baseline
Specification and added notation about checking with COE engineering office
to verify list of supported platforms.
3. Section 3.2.2.2.2: Added "or LDAP" to the requirement per AF/AFMSS. This
is a candidate requirement, for DCWG discussion prior to adding it to the SRS
document, and is in green text.
4. Section 3.2.2.9: Added this section based on new requirements received at
Jan 22 DCWG.
5. Section 3.2.2.10: Added this section based on new requirements received at
Jan 22 DCWG.
6. Section 3.2.3.9: Added this section based on new requirements received at
Jan 22 DCWG.
7. Section 3.2.3.10: Added this section based on new requirements received
at Jan 22 DCWG.

Changes since 20 June draft:

Last minute changes on 7/8 noted in blue, italicised text

1. Section 1.1, 1.2: Editorial changes for readability and completeness.
2. Section 1.2: Added verbiage to the effect that the component will include
services in addition to basic RPC-level functionality.
3. Section 2.1: Updated document references.
4. Section 2.2: Added description of CORBAservices combined document and
instructions for resolving discrepancies.
5. Section 3.2: Added second paragraph, noting that some requirements are
inherent and therefore not specified.
6. Section 3.2.1: Added mention of related infrastructure services.
7. Section 3.2.1.1: Security - replaced requirements with general reference
to requirements in the security SRS, with examples of specific items of
applicability.
8. Section 3.2.1.1.2: Fortezza/MISSI Integration - moved requirement from
3.2.2.2.4.
9. Section 3.2.1.1.3: Firewalls - moved requirement from 3.2.2.2.7. Added guards on 7/8 per
comment from Ed Haubach, and added note to explain some of the context around the
requirements for operation with firewalls and guards. Also added a requirement in Appendix B
for network services in this regard.
10. Section 3.2.1.2.2: Replication - reworded to distinguish replication of elements of the
distributed computing implementation vice replication of application level services and eliminate
load distribution wording.
11. Section 3.2.1.2.3: Flexible Deployment - changed title to "Flexible Domain Configuration" to
distinguish it from new requirements, described below.
12. Section 3.2.1.2.4: Dynamic Addressing - new requirement.
13. Section 3.2.1.2.5: Mobile Operation - this is a new requirement, but is TBD (yet to be
validated).
14. Section 3.2.1.3.1: Time - added words for secure, global. Removed words restricting to within
a domain.
15. Section 3.2.1.4: Added note regarding scalability maximums. Changed title from
"Performance" to "Performance and Architecture" per comment from Ed Haubach.
16. Section 3.2.1.4.1.1: Added requirements information previously incomplete.
17. Section 3.2.1.4.1.2: Added requirements information previously incomplete.
18. Section 3.2.1.4.1.3: Added requirements information previously incomplete.
19. Section 3.2.1.4.1.4: Server Load Balancing - Rephrased the requirement to clarify and added
note describing the inherent capabilities of DCE and Encina.
20. Section 3.2.1.4.2: Deleted.
21. Section 3.2.1.5.1.1 Replication - rephrased requirement to reflect differences in the way
infrastructure servers and application servers might be handled.
22. Section 3.2.1.5.1.3 Reliability - appended requirement to clarify.
23. Section 3.2.1.6 Language Support - removed Ada'83 requirement, assuming that Ada'83 code
will be updated to Ada'95 by the COE 4.0 timeframe. Changed first note from Ada'83 to Ada'95
and clarified. Removed second note (should have said Ada'83 anyway). On 7/8 added wording to
explicitly state that Ada tasks must work with threads in the distributed computing component and
in the operating system per comment from Ed Haubach.
24. Section 3.2.1.6.2 Added placeholder for future requirement for Java support based on
comment from Ed Haubach.
25. Section 3.2.1.7.5: Changed from future requirement to database management heterogeneity
requirement and included the items listed as requirements in following paragraphs.
26. Section 3.2.1.7.6: Added requirement based on items previously listed under 3.2.1.7.5.
27. Section 3.2.1.8.3: Performance - this requirement is too dependent upon external factors to
express a definite performance requirement in terms of queueing operations per second. Therefore,
it has been recast as a note.
28. Section 3.2.1.8.4: Priorities - rephrased to remove reference to JMCIS.
29. Section 3.2.1.8.6: On 7/8, changed minimum queue size from 2048 to 50K to reflect V4.0
(previously stated as a future requirement), per Ed Haubach.
30. Section 3.2.1.8.10: Changed future requirement to Queued Object Size.
31. Section 3.2.1.8.11: Added requirement.
32. Section 3.2.1.8.12: Queue Locking - added requirement.
33. Section 3.2.1.9.2: on 7/8 added capability to start/stop TP monitor, per Ed Haubach.
34. Section 3.2.1.9.5: Combined with 3.2.1.9.6.
35. Section 3.2.1.9.6: Combined with 3.2.1.9.5 (previously Transaction Processing Management)
and reworded requirement.
36. Section 3.2.1.16.1: Changed title from "documentation" to "developer documentation" and
appended the sentence to add "prohibited by the COE or which would be incompatible with other
COE capabilities ".
37. Section 3.2.2.1.1: Updated compliance requirement to be DCE V1.2.1 (assuming that this will
be available in segmented format by the time COE V4.0 is scheduled for integration and test).
38. Section 3.2.2.2.2: Naming - removed requirement for X.500 based global directory service and
turned it into a note identifying it as a possible future requirement.
39. Section 3.2.2.2.4: Fortezza/MISSI Integration - moved requirement to 3.2.1.1.2 and made
more general to apply to both DCE and CORBA.
40. Section 3.2.1.9.2: changed title from "configuration management" to "Transaction
Management" and moved configuration management requirements into DCE and CORBA specific
requirements paragraphs.
41. Section 3.2.1.9.3: Merged requirements into DCE specific requirements in section 3.2.2.5.1.
42. Section 3.2.1.9.5: Merged requirements into 3.2.1.9.2.
43. Section 3.2.1.9.6: Merged requirements into DCE specific requirements in section 3.2.2.5.1.
44. Section 3.2.2.2.2: Rephrased DNS wording to not imply that DNS was the Global Directory
Service.
45. Section 3.2.2.2.7: Firewalls - moved requirement to 3.2.1.1.3 and made more general to apply
to both DCE and CORBA.
46. Section 3.2.2.2.8: Added requirement to interface to NTP for synchronized time outside of the
cell.
47. Section 3.2.2.2.9: Added this requirement.
48. Section 3.2.2.3.1: On 7/8, extended requirement to specify at least DFS client and Exporter
functionality.
49. Section 3.2.2.3.3, "Cell management" moved from under "Applications" to be under
"management".
50. Section 3.2.2.4.1: Bindings - language requirements are already discussed in 3.2.1, and
GSS-API material was merged with 3.2.2.4.2.
51. Section 3.2.2.4.3: Added requirements for C++ interface to GSS-API.
52. Section 3.2.2.4.5: Removed requirement for Ada'83 support. On 7/8, changed list of
languages to be "for each of the supported programming languages".
53. Section 3.2.2.4.7: Added requirement for ease of use APIs. On 7/8, combined with section
3.2.2.4.6, Toolkit, and rephrased to include several more examples of idioms that an ease of use
toolkit could implement.
54. Section 3.2.2.5: new section added, entitled "management".
55. Section 3.2.2.5.1: Added requirement (includes requirements moved from other places). On
7/8 reworded item q) to itemize servers to be replicated, removed DFS replication from item m),
and added audit daemon start/stop to item t) and item u) for inter-cell authentication
registration. Also added "configure" to item f), item v) to backup/restore DCE server data, item
w) to browse/search the CDS namespace, and modified r) and j) to include modification of all
attributes.
56. Section 3.2.2.6.2: Rephrased requirement to not require the network utilities to be DCE'ized,
implying that the utility itself is modified, thus allowing for other methods of satisfying the
requirement that don't modify the utility itself. On 7/8, modified this to include SMTP, SNMP,
and HTTP, and changed the title of the requirement from "Network Utilities" to "Network
Protocols", per comments from Ed Haubach.
57. Section 3.2.2.6.3: On 7/8 added requirement for wrapper templates per comment from Ed
Haubach.
58. Section 3.2.2.7: Added requirement for DCE default configuration in segment delivery.
59. Section 3.2.2.8: Added section to describe DCE specific documentation requirements.
60. Section 3.2.3.1: Background - substantially reworded the material in this section, which was
previously extracted from email messages. On 7/8, added paragraph at the top of the section to
indicate that background material should be removed once CORBA requirements and transition
are better understood.
61. Section 3.2.3.3: Changed title from "CORBA services" to "CORBA Interfaces"
62. Section 3.2.3.3.1: Elaborated on the scope of the CORBA specification. On 7/8 the
itemization of languages was removed since languages are already covered earlier.
63. Section 3.2.3.3.3: Removed Query service from list since it was already used in the previous
requirement. On 7/8, moved time service to 3.2.3.3.2, since it will probably be needed in order to implement the security service.
64. Section 3.2.3.4.1: Added comment regarding Opendoc as the basis for the Compound
Document Presentation and Data Interchange CORBAfacility.
65. Section 3.2.3.5.1: Changed title from "Interface Browser" to "Interface Repository Browser".
66. Section 3.2.3.6.2: On 7/8 reworded to specify "each of the supported programming
languages", instead of itemizing them.
67. Section 3.2.3.8.3: Added requirement for CORBA/DCOM interoperability.
68. Section 3.14.2.1-4: Put these paragraphs under 3.14.2 and rephrased requirements to use
consistent terminology.
69. Section 6: Moved AOG recommendations to Appendix A.
70. Appendix A: Moved requirements for other COE areas to Appendix B.

SECTION 1 SCOPE
1.1 IDENTIFICATION
1.2 SYSTEM OVERVIEW
1.3 DOCUMENT OVERVIEW
SECTION 2 REFERENCED DOCUMENTS
2.1 GOVERNMENT DOCUMENTS
2.2 NON-GOVERNMENT DOCUMENTS
SECTION 3 REQUIREMENTS
3.1 REQUIRED STATES AND MODES
3.2 CSCI CAPABILITY REQUIREMENTS
3.2.1 Fundamental or Common Requirements
3.2.1.1 Security
3.2.1.2 Dynamic Reconfigurability
3.2.1.3 Synchronized Time
3.2.1.4 Performance and Architecture
3.2.1.5 Fault Tolerance
3.2.1.6 Language Support
3.2.1.7 Transaction Processing
3.2.1.8 Queueing
3.2.1.9 Management
3.2.1.10 Segmentation
3.2.1.11 Standards
3.2.1.12 Platforms
3.2.1.13 Legacy Compatibility
3.2.1.14 Product Quality
3.2.1.15 Training
3.2.1.16 Documentation
3.2.2 DCE Specific Requirements
3.2.2.1 DCE Version
3.2.2.2 DCE services
3.2.2.3 DCE Applications
3.2.2.4 DCE Software Development
3.2.2.5 Management
3.2.2.6 Compatibility and Migration Support
3.2.2.7 DCE Default Configuration
3.2.2.8 DCE Documentation
3.2.2.9 Java Language Support
3.2.2.10 Microsoft NT Support
3.2.3 CORBA Specific Requirements
3.2.3.1 Background
3.2.3.2 CORBA Version
3.2.3.3 CORBA Interfaces
3.2.3.4 CORBAfacilities
3.2.3.5 CORBA Applications
3.2.3.6 CORBA Software Development
3.2.3.7 Management
3.2.3.8 Compatibility and Migration Support
3.2.3.9 Java Language Support
3.2.3.10 Microsoft NT Support
3.3 CSCI EXTERNAL INTERFACE REQUIREMENTS
3.3.1 Interface identification and diagrams
3.3.2 Project-unique identifier of interface
3.4 CSCI INTERNAL INTERFACE REQUIREMENTS
3.5 CSCI INTERNAL DATA REQUIREMENTS
3.6 ADAPTATION REQUIREMENTS
3.7 SAFETY REQUIREMENTS
3.8 SECURITY AND PRIVACY REQUIREMENTS
3.9 CSCI ENVIRONMENT REQUIREMENTS
3.9.1 Platform Requirements
3.9.2 Network Requirements
3.10 COMPUTER RESOURCE REQUIREMENTS
3.10.1 Computer hardware requirements
3.10.2 Computer hardware resource utilization requirements
3.10.3 Computer software requirements
3.10.4 Computer communications requirements
3.11 SOFTWARE QUALITY FACTORS
3.12 DESIGN AND IMPLEMENTATION CONSTRAINTS
3.13 PERSONNEL-RELATED REQUIREMENTS
3.14 TRAINING-RELATED REQUIREMENTS
3.14.1 Product training
3.14.2 DII COE Training
3.14.2.1 Installation Training
3.14.2.2 System Management Training
3.14.2.3 Software Development Training
3.15 LOGISTICS-RELATED REQUIREMENTS
3.16 OTHER REQUIREMENTS
3.17 PACKAGING REQUIREMENTS
3.18 PRECEDENCE AND CRITICALITY OF REQUIREMENTS
SECTION 4 QUALIFICATION PROVISIONS
SECTION 5 REQUIREMENTS TRACEABILITY
SECTION 6 NOTES
APPENDIX A WORKING GROUP PRODUCT RECOMMENDATIONS
7.1 RECOMMENDATIONS TO THE DII COE ARCHITECTURE OVERSIGHT BOARD
7.1.1 Transarc DCE
7.1.2 Open Horizons Connection [with qualifications]
7.1.3 HAL DCE Cell Manager
7.1.4 Transarc Encina (including the Recoverable Queueing Service)
7.1.5 TBD CORBA
7.1.6 TBD CORBA/Ada Mappings
APPENDIX B REQUIREMENTS FOR OTHER COE COMPONENTS
8.1 OPERATING SYSTEM REQUIREMENTS
8.1.1 Time
8.2 MANAGEMENT SERVICES
8.2.1 Common Desktop Environment
8.2.1.1 Single Login Integration
8.2.2 System Management
8.2.2.1 DCE/System Management Integration
8.3 COMMON SUPPORT APPLICATIONS
8.3.1 Netscape/Mosaic
8.3.2 Java
8.4 SOFTWARE DEVELOPMENT SERVICES
8.4.1 Design
8.4.1.1 Object Oriented Analysis and Design
8.4.2 Testing
8.4.2.1 Automatic test generation tools
8.5 GENERAL SYSTEM ENGINEERING
8.5.1 Cost
8.5.2 Documentation
8.5.2.1 DII COE DCE implementation plan
8.5.2.2 DII COE DCE application programmer's guidance
8.5.2.3 DII COE CORBA implementation plan
8.5.2.4 DII COE CORBA application programmer's guidance
8.5.3 COE Services
8.5.3.1 DCE/CORBA Migration
8.6 NETWORK SERVICES

SECTION 1

SCOPE
1.1 IDENTIFICATION
This specification describes the requirements for the distributed computing services and their
interfacing with other functional elements of the Defense Information Infrastructure (DII) Common
Operating Environment (COE). The distributed computing component, and its relationship to the rest of the
COE, is described in the COE Baseline Description Document.
1.2 SYSTEM OVERVIEW
The focus of the COE's distributed computing component is on distributed computing capabilities
that permit procedures and objects to be invoked on remote hosts as though they were local to the calling
module. In addition to these basic capabilities, the distributed computing component will include a variety
of enabling services, such as security, time, persistence, and naming; many of these services are required for
the development of applications that are distributed. The two fundamental technologies that will be
implemented in the COE are the Distributed Computing Environment (DCE) and the Common Object
Request Broker Architecture (CORBA), including some related services. These technology choices are based on requirements from Department of Defense (DoD) services and related agencies.
This Software Requirements Specification (SRS) focuses on specifying requirements for the
implementation of these basic technologies in the COE, as well as for related capability requirements that
may not be addressed by those two basic technologies. Related capability requirements may include
requirements relating to the integration of the distributed computing component with other components or
capabilities in the COE.
1.3 DOCUMENT OVERVIEW
NOTE: The requirements specified herein assume a familiarity with the concepts of distributed
computing and with the two specific technologies, DCE and CORBA, that are being used to implement the
distributed computing component of the COE.
Section 2 specifies documents that are referenced elsewhere in this SRS.
Section 3 specifies requirements for the COE distributed computing component, with the bulk of
the requirements being specified in section 3.2. Section 3.2 is subdivided as follows:
Section 3.2.1 specifies fundamental or common requirements for distributing computing
which are not specific to either of the two technologies that are being focused upon.
Section 3.2.2 specifies requirements that are specific to DCE.
Section 3.2.3 specifies requirements that are specific to CORBA.
Section 4 describes qualification provisions.
Section 5 provides a requirements traceability table that identifies the source of requirements.
Section 6 contains miscellaneous notes.
Appendix A describes recommendations that the COE Distributed Computing Technical Working
Group has made to the DII COE Architecture Oversight Group (AOG).
Appendix B describes capabilities that are outside the scope of the distributed computing
component, and are recommended to be included in the requirements specifications for other COE
components.

SECTION 2

REFERENCED DOCUMENTS
Several of the documents listed below will evolve prior to the COE V4.0 timeframe. The most
recent versions of these documents are listed, but this list should be periodically updated to track the
evolution of those documents.
2.1 GOVERNMENT DOCUMENTS
1. GCCS Baseline Common Operating Environment, November 28, 1994
2. GCCS Integration Standard version 2.0, October 23, 1995
3. User Interface Specifications for the Global Command and Control System (GCCS) version
2.0, DISA, December 1995
4. DRAFT, Architectural Design Document for the Global Command and Control System
(GCCS) Common Operating Environment (COE), DISA, July 24, 1995
5. "GCCS Implementation of the Distributed Computing Environment Version 1.0", DISA,
September 1995.
6. DII COE Integration and Run-Time Environment Specification, Version 2.0, DISA.
7. Appendix X to the DII COE I&RTES, DISA, December 1995.
2.2 NON-GOVERNMENT DOCUMENTS
NOTE: The following list of references includes documents from the Object Management Group
(OMG) that are available either in a combined CORBAservices specification, or separately as OMG
documents. Both references are given and the content should be identical in either the individual or
combined formats. In the event of a discrepancy, the OMG should be consulted to determine which
document is in error.
NOTE: Most, if not all, of these services will be purchased as COTS for use within the COE, such
that specification discrepancies are unlikely to be an issue for COE developers.
1. OSF DCE Application Development Guide, Revision 1.0, Prentice Hall, 1993.
2. OSF DCE Application Development Reference, Revision 1.0, Prentice Hall, 1993.
3. Rosenberry, W., D. Kenney, and G. Fisher, Understanding DCE, O'Reilly & Associates,
1992.
4. Shirley, J., W. Hu, and D. Magid, Guide to Writing DCE Applications, Second Edition,
O'Reilly & Associates, 1992.
5. Thompson, J. and Otto, E. "Distributed Computing Environment (DCE) Lessons Learned,"
Logicon, July 21, 1995.
6. Common Facilities Architecture, Revision 3.0, OMG Document 94-11-9, The Object
Management Group, Framingham, MA, November 14, 1994.
7. Common Object Request Broker: Architecture and Specification, Revision 1.2, OMG
Document 93-12-43, The Object Management Group, Framingham, MA.
8. Common Object Services Specification, Volume I, Revision 1.0, First Edition, OMG
Document 94-1-1, The Object Management Group, Framingham, MA, March 1, 1994.
9. Object Management Architecture Guide, Revision 2.0, Second Edition, OMG TC Document
92-11-1, The Object Management Group, Framingham, MA, September 1, 1992.
10. Object Services Architecture, Revision 8.0, OMG Document 94-11-12, The Object
Management Group, Framingham, MA, December 9, 1994.
11. Custer, H., Windows NT, Microsoft Press, Redmond, Washington, 1993.
12. Ada Language Mapping, OMG Document 1995/95-05-16, The Object Management Group,
Framingham, MA.
13. C++ Language Mapping, OMG Document 1994/94-09-14, The Object Management Group,
Framingham, MA.
14. C++ Language Mapping 1.1, OMG Document tc/96-01-13, The Object Management Group,
Framingham, MA.
15. Distributed Document Component Facility, OMG Document 1995/95-12-30, The Object
Management Group, Framingham, MA.
16. Event Notification Service, OMG Document 1994/94-01-01, The Object Management Group,
Framingham, MA.
17. Concurrency Service, OMG Document 1994/94-05-08, The Object Management Group,
Framingham, MA.
18. Externalization Service, OMG Document 1994/94-09-15, The Object Management Group,
Framingham, MA.
19. Licensing Service, OMG Document 1995/1995-03-23, The Object Management Group,
Framingham, MA.
20. Lifecycle Service, OMG Document 1994/94-01-01, The Object Management Group,
Framingham, MA.
21. Object Query Service, OMG Document 1995/95-01-01, The Object Management Group,
Framingham, MA.
22. Naming Service, OMG Document 1994/94-01-01, The Object Management Group,
Framingham, MA.
23. Transaction Service, OMG Document 1994/94-08-04, The Object Management Group,
Framingham, MA.
24. Persistent Object Service, OMG Document 1994/94-10-07, The Object Management Group,
Framingham, MA.
25. Properties Service, OMG Document 1995/95-06-01, The Object Management Group,
Framingham, MA.
26. Relationship Service, OMG Document 1994/94-05-05, The Object Management Group,
Framingham, MA.
27. Security Service, OMG Document 1995/95-12-01, The Object Management Group,
Framingham, MA.
28. Time Service, OMG Document 1995/95-11-08, The Object Management Group, Framingham,
MA.

SECTION 3

REQUIREMENTS
3.1 REQUIRED STATES AND MODES
The required modes and states of the system are assumed to be documented once for the entire
COE, and therefore are not discussed herein, since this SRS focuses only on the distributed computing
component of the COE.
3.2 CSCI CAPABILITY REQUIREMENTS
As mentioned earlier, the distributed computing component of the COE supports the two
fundamental, industry standard technologies for distributed computing, DCE and CORBA. DCE supports
the remote procedure call (RPC) paradigm of software development, whereas CORBA supports a distributed
object management (DOM) paradigm. There are, however, requirements that are fundamental or at a higher
level than either DCE or CORBA, or which are common to both paradigms, which are specified in Section
3.2.1. DCE specific requirements are specified in Section 3.2.2, and CORBA specific requirements are
specified in Section 3.2.3, below.
Some fundamental requirements for items like Remote Procedure Call or Object Request Broker
functionality are not specified herein since such requirements are inherent in the DCE and CORBA
technologies that have been selected for implementation in the COE.
NOTE: in the following material, the word "implementation" is used to refer to the general set of
combined capabilities used to implement the distributed computing requirements. The implementation may
include both COTS and GOTS components. Additionally, the word "participant" is used to refer to entities
that make use of the distributed computing implementation (in DCE terminology, these are generally
referred to as principals). The word "domain" is used to generally refer to a DCE cell or a CORBA
namespace, which typically correspond to a system management or security scope.
3.2.1 Fundamental or Common Requirements
The requirements specified in this section of the SRS are fundamental or common to the
implementation of distributed computing in the COE, and apply to the implementation of both the DCE
and CORBA technologies (including related infrastructure services) in the COE.
3.2.1.1 Security
3.2.1.1.1 General Security Requirements. The implementation shall comply with, or support,
those security requirements specified in the Security SRS for the COE that are
applicable to the distributed computing component, including at least the following:
a) Mandatory access control [TBD]
b) Discretionary access control
c) Mutual Identification and Authentication
d) Authorization
e) Privacy
f) Integrity
g) Non-Repudiation
h) Auditing
3.2.1.1.2 Fortezza Integration. The implementation shall provide a FORTEZZA/MISSI
compliant alternative encryption mechanism that is usable by all of the applicable
distributed computing services.
3.2.1.1.3 Firewalls. The implementation shall provide support for use through firewalls and
guards.
NOTE: Operation of the implementation through firewalls and guards is probably not a
requirement that can be directly satisfied by the distributed computing component of the COE, but is more
likely to be satisfied through the configuration of the entire system/network, including routers, packet
filtering, intermediate hosts, etc. Even so, the implementation of distributed computing should not deny
service in such configurations.
3.2.1.2 Dynamic Reconfigurability
3.2.1.2.1 Location Independence. The implementation shall be able to determine the location
of resources by using a location-independent name to permit a client participant to
bind to a resource regardless of its physical location (e.g., to support relocation of
services to another host).
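NOTE: As an illustration only (not a mandated interface), the following C++ fragment sketches how a
client might obtain such a location-independent binding through the DCE name service interface; the
interface handle and CDS entry name are hypothetical, and error handling is abbreviated.

    #include <dce/rpc.h>   /* OSF DCE RPC runtime and name service interface (NSI) */

    /* Hypothetical IDL-generated client interface handle for an example server. */
    extern "C" rpc_if_handle_t mapsvr_v1_0_c_ifspec;

    rpc_binding_handle_t bind_by_name()
    {
        rpc_ns_handle_t      import_ctx;
        rpc_binding_handle_t binding = 0;
        unsigned32           status;

        /* Ask the directory service for servers exporting the interface under a
           location-independent entry name; the caller never names a host. */
        rpc_ns_binding_import_begin(rpc_c_ns_syntax_default,
            (unsigned_char_t *)"/.:/subsys/examples/map_server",
            mapsvr_v1_0_c_ifspec, 0 /* any object UUID */, &import_ctx, &status);
        if (status != rpc_s_ok)
            return 0;

        /* Take the first compatible binding offered by the name service. */
        rpc_ns_binding_import_next(import_ctx, &binding, &status);
        rpc_ns_binding_import_done(&import_ctx, &status);
        return binding;   /* use for RPCs, then release with rpc_binding_free() */
    }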
3.2.1.2.2 Replication. The implementation shall provide the ability to replicate its own servers
(i.e., those that support the distributed computing implementation itself) and
application-level services (e.g., a map server, correlation server, etc) to support fault
tolerant operation and optimal performance.
3.2.1.2.3 Flexible Domain Configuration. The implementation shall have the flexibility to
support a variety of deployment options, including the ability to subdivide a domain
(e.g., cell or namespace) and incorporate the subdivided resources into a foreign
domain and/or be hosted on a different (e.g., deployed) network.
3.2.1.2.4 Dynamic Addressing. The implementation shall support configurations where hosts
[including hosts that provide distributed computing servers?] in the domain are
frequently moved and must determine their network IP address at boot-up using the
technique known as dynamic IP addressing.
3.2.1.2.5 Mobile Operation. [TBD]. It has yet to be determined if a requirement exists for the
ability for the distributed computing implementation to support operation while
moving (e.g., like cell phone operation).
3.2.1.3 Synchronized Time
3.2.1.3.1 Time Service. The implementation shall provide for automatic, secure, global
synchronized time.
3.2.1.4 Performance and Architecture
3.2.1.4.1 Scalability
NOTE: in the following requirements, rough, order-of-magnitude estimates of maximums have
been provided. These are highly subjective. The range of scalability is a factor that should be considered as
part of the COE product evaluation/recommendation cycle.
3.2.1.4.1.1 Intra/Inter-net/Remote Network Scalability. The implementation shall support
scalable operation over intra-, internet, and remotely connected networks,
proportionate to network speed and available capacity. At the low end, for
remotely connected hosts, a minimum of 9.6Kb/s line speed with varying
available capacity should be assumed.
3.2.1.4.1.2 Domain Scalability. The implementation shall provide scalable performance
as the number of domains (e.g., cells, namespaces) increases. A maximum of
domains numbering in the low thousands should be assumed.
3.2.1.4.1.3 Usage Scalability. The implementation shall provide scalable performance as
the number of users/principals/servers/objects in a domain increases.
Maximum site sizes may have up to 50,000 users, numbers of servers in the
low hundreds, and numbers of objects in the several thousands.
NOTE: objects can exist at varying levels of granularity, and it is not difficult to imagine hundreds
of millions of objects. However, it is likely that the level of granularity utilized in the COE will not require
more than several thousand in the COE V4-V5 timeframe.
3.2.1.4.1.4 Server Load Balancing. The implementation shall provide the ability to
distribute client requests amongst replicated servers to facilitate optimal
performance by exploiting any parallelism that may exist in the configuration
of servers.
NOTE: DCE binding APIs provide support for the client to select from multiple available servers
on either a random basis or on the basis of some other criteria that the client applies. Products like Encina
extend this to provide for bindings that distribute client requests based on server load.
3.2.1.4.2 Deleted (this paragraph should be removed from the document prior to finalization).
3.2.1.4.3 Concurrency/Threading. The implementation shall provide concurrent access to
services and shall support multithreading of service implementations.
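NOTE: For illustration, the following C++ fragment shows the kind of thread-safe coding this requirement
implies for server manager routines, using POSIX pthreads (DCE threads are based on an earlier POSIX
draft, so minor interface differences are possible); the shared counter is a purely hypothetical example.

    #include <pthread.h>

    /* Shared state touched by concurrently executing RPC/ORB worker threads. */
    static long request_count = 0;
    static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;

    void handle_request()
    {
        pthread_mutex_lock(&count_lock);    /* serialize access to shared state */
        ++request_count;
        pthread_mutex_unlock(&count_lock);
        /* ...perform the actual, reentrant service work here... */
    }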
3.2.1.5 Fault Tolerance
3.2.1.5.1.1 Replication. The implementation shall, in the event of server failure, support:
a) automatic rebinding of clients to replicated servers, where those servers are
a part of the distributed computing component (e.g., DCE CDS, Security
Server, etc.), and b) graceful notification to the client of a server failure with the
opportunity for the client to rebind to a replicated server if appropriate for the
application.
3.2.1.5.1.2 Server Failure Management. The implementation shall provide the capability
of monitoring the health of the servers, and be capable of automatically
restarting failed servers.
3.2.1.5.1.3 Reliability. The implementation shall provide the ability to guarantee that
requests for services are implemented reliably, such that the requestor can
know deterministically whether the request was performed by the server.
3.2.1.6 Language Support
3.2.1.6.1 Ada, C, C++. The implementation shall provide support for application software
development in the following programming languages: Ada'95 (including Ada Task
compatibility with threading in the distributed computing component and in the
operating system), ANSI C, C++.
NOTE: We may need to specify compiler products for Ada'95 and C++ since these languages are
currently either incompletely implemented or not described by formal specifications, respectively.
NOTE: Java requirements are included in the CORBA Specific Requirements and DCE Specific
Requirements sections, later.
3.2.1.6.2 Java (Future Requirement). Support for application software development in the Java
language will likely become a requirement in the future. It will probably be required
that DCE or OMG IDL can be compiled to produce Java bytecode classes that can
access the DCE or CORBA interface implementations using the respective
distributed computing mechanisms.
3.2.1.7 Transaction Processing
3.2.1.7.1 Atomicity. The implementation shall provide support for atomicity, ensuring that a
computation consisting of one or more operations on one or more objects is
atomic (if a transaction is interrupted by a failure, any partially
completed results are undone).
3.2.1.7.2 Isolation. The implementation shall provide the ability for transactions to execute
concurrently, with the same result as if they were performed sequentially.
3.2.1.7.3 Durability. The implementation shall provide support for durability (if a transaction
completes successfully, the results of its operations are never lost, except in the event
of catastrophe).
3.2.1.7.4 Database Support. The implementation shall provide support for 3-tier applications
including support for multiple databases (same or different database vendors) and
multiple platforms including all of the COE platforms and database management
systems.
3.2.1.7.5 Database Management Heterogeneity. The implementation shall support transactions
that span across the breadth of the multiple, heterogeneous database management
systems that are supported by the COE (e.g. Oracle, Sybase...).
3.2.1.7.6 Process Spanning Transactions. The implementation shall provide the ability to have
transactions span across multiple processes (e.g. Process A starts a transaction,
Process B continues the transaction, Process A completes the transaction).
3.2.1.7.7 Database Independent API. The implementation shall provide a transaction
processing API that is independent of the database management system.
3.2.1.7.8 Transaction Rollback. The implementation shall provide the ability to abort
transactions and cause all involved databases to roll back to their initial state (before
the transaction began).
3.2.1.7.9 Nested Transactions. The implementation shall provide the ability to nest
transactions.
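NOTE: As a sketch only, the fragment below shows how an application might demarcate such a transaction
using the X/Open TX verbs exposed by TP monitors such as Encina; the do_debit/do_credit routines are
hypothetical application code, and tx_open() is assumed to have been called during initialization.

    #include <tx.h>   /* X/Open TX interface supplied by the TP monitor */

    extern int do_debit();    /* hypothetical work against resource manager 1 */
    extern int do_credit();   /* hypothetical work against resource manager 2 */

    int transfer()
    {
        if (tx_begin() != TX_OK)          /* start a global transaction */
            return -1;
        if (do_debit() != 0 || do_credit() != 0) {
            tx_rollback();                /* undo any partially completed results */
            return -1;
        }
        return (tx_commit() == TX_OK) ? 0 : -1;   /* make the results durable */
    }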
3.2.1.8 Queueing
3.2.1.8.1 Persistence. The implementation shall maintain an object in a queue until it has been
de-queued, and shall provide for reliable recovery of queue contents in the event of a
system restart or failure.
3.2.1.8.2 Queue Query. The implementation shall provide the capability to access (read)
queued objects while they are still in the queue (i.e. you should not have to de-queue
an object before you can read it).
NOTE: Current application level uses of queueing functionality require performance of at least 30
operations per second. Because performance is relative to a wide variety of factors unrelated to queueing,
performance should be part of product evaluation criteria, but is probably inappropriate to define as a
requirement.
3.2.1.8.3 Deleted. (This paragraph should be removed before this document is finalized).
3.2.1.8.4 Priorities. The queue service must support queuing and de-queuing of objects at a
minimum of 10 different priority levels.
3.2.1.8.5 Queue Polling. The implementation shall support: a) Synchronous Blocking, where
a queue is polled and the process is blocked until something arrives in the queue; b)
Synchronous Non-Blocking, where a queue is polled and control is immediately
returned to the process whether there are any objects in the queue or not; and c)
Asynchronous, where a queue is polled, control is immediately returned to the
process, but the process is notified (at some later time) when an object arrives on the
queue.
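NOTE: The following sketch illustrates the three polling modes using a purely hypothetical queueing API
(q_get, q_try_get, and q_register_callback are invented names, not part of any recommended product).

    struct Object;                                       /* opaque queued object */
    typedef void (*q_callback)(Object *obj);

    Object *q_get(int queue_id);                         /* a) blocks until an object arrives   */
    Object *q_try_get(int queue_id);                     /* b) returns 0 at once if queue empty */
    void    q_register_callback(int queue_id, q_callback cb);  /* c) notify asynchronously      */

    void polling_examples(int queue_id, q_callback on_arrival)
    {
        Object *obj = q_get(queue_id);       /* synchronous blocking     */

        obj = q_try_get(queue_id);           /* synchronous non-blocking */
        if (obj == 0) { /* nothing queued; continue with other work */ }

        q_register_callback(queue_id, on_arrival);  /* control returns immediately;
                                                       on_arrival runs on later arrival */
    }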
3.2.1.8.6 Queue Size. The implementation shall be capable of accommodating queue objects of
at least 50K bytes.
3.2.1.8.7 Number of Queues. The implementation shall be able to create and maintain a
minimum of 100 (simultaneous) queues.
3.2.1.8.8 Concurrent access. The implementation shall support: a) concurrent access to
queues, b) simultaneous queueing from multiple processes, and c) multiple processes
reading from the same queue.
3.2.1.8.9 Multiple Queues. The implementation shall support the ability for a process to have
multiple (incoming) queues.
3.2.1.8.10 Queued Object Size. The implementation shall be capable of accommodating queue
objects of at least 50K bytes.
3.2.1.8.11 Access Control. The implementation shall support the ability to define access control
lists on a per queue basis.
3.2.1.8.12 Queue Locking. The implementation shall support the ability to lock queue entries to
protect against concurrent write access.
3.2.1.8.13 LIFO. The implementation shall provide the ability to access the queue in last-in-
first-out (LIFO) order.
NOTE: The recommended solution from Transarc does not satisfy the above requirement for LIFO
access.
3.2.1.9 Management
3.2.1.9.1 License Management. The implementation shall provide tools for managing any
licensing mechanisms that are required by the implementation. These tools should
preferably be GUI based, and should be easy to use by systems administrators. Any
license management should be integrated with the other relevant management
functions.
3.2.1.9.2 Transaction Management. The implementation shall provide an administrative
application to monitor transactions, including performance, failed transactions, and
status of servers, and to start/stop the transaction processing monitor.
3.2.1.9.3 Reserved.
3.2.1.9.4 Queue Management. The implementation shall provide an administrative application
to monitor queues, providing the following capabilities:
a) Report the number of objects in each queue.
b) Report all processes connected to each queue (process name and machine it is
running on).
c) Report the status of each process (e.g. waiting, reading, writing)
d) Flush any or all queues.
e) Start and stop the queue service servers.
3.2.1.9.5 Reserved.
3.2.1.9.6 Reserved.
3.2.1.10 Segmentation
The implementation shall be segmented in accordance with the version of the DII COE Integration
and Run-Time Environment Specification (I&RTES) that is approved by DISA for COE 4.0.
3.2.1.11 Standards
3.2.1.11.1 Standards Compliance. The implementation shall adhere to formal, industry de
facto, and community standards such as the Joint Technical Architecture (JTA).
3.2.1.12 Platforms
3.2.1.12.1 COE Supported Platforms. Unless otherwise stated, the full implementation (all of
the capabilities described below for each paradigm) shall be supported on each of the
platforms identified as DII COE supported platforms. As of this writing (COE V3.0),
this list includes: Sun Solaris 2.4 and 2.5; Hewlett-Packard HP-UX 9.0.7 and 10.01;
Microsoft NT 3.51 (client side software only). For COE V4.0, this list may expand
to include other platforms such as Digital Unix and IBM AIX. This list is specified
in the COE Baseline Specification, but may undergo frequent updates. Check with
the DISA COE Engineering Office to verify the set of supported platforms.
3.2.1.13 Legacy Compatibility
3.2.1.13.1 Legacy Compatibility. The implementation shall be consistent with the requirements
of legacy and migration systems that will utilize the COE.
3.2.1.14 Product Quality.
3.2.1.14.1 Product Quality. The implementation shall be of a quality consistent with the best
commercial practices of the industry, including continuing product improvement, bug
fixes, telephone and email support, documentation, interoperability, and training.
3.2.1.15 Training
3.2.1.15.1 Training. The implementation shall provide product training materials suitable to
support the following types of training:
a) Product specific training
b) COE specific training
c) Installation training
d) System management training
e) Software development training.
3.2.1.16 Documentation
3.2.1.16.1 Developer Documentation. The implementation shall include documentation to
describe the proper or recommended usage of the implementation by software
developers, as well as explicitly identify usage which is prohibited by the
architectural tenets of the COE or which would be incompatible with other COE
capabilities.
3.2.2 DCE Specific Requirements
To address distributed computing for the remote procedure call based software development
paradigm, the DII has adopted the Distributed Computing Environment (DCE) technology, defined by the
Open Group (previously the Open Software Foundation). There are many reasons why DCE was selected, most of which are beyond the scope of this SRS. The DII COE specific requirements for the use of DCE
are specified in the next sections.
3.2.2.1 DCE Version
3.2.2.1.1 Version Compliance. The implementation shall be compliant with OSF DCE V1.2.1.
3.2.2.2 DCE services
3.2.2.2.1 Threads. The implementation shall use the native operating system threads services.
If the operating system implementation of threads is unsupported, the DCE
implementation shall provide an implementation of the POSIX pthreads services.
3.2.2.2.2 Naming. The implementation shall provide the DCE Cell Directory Service and
utilize the DNS or LDAP directory service for locating other cells.
NOTE: X.500 based Global Directory Services (GDS) are not widely available commercially,
due in part to the nearly ubiquitous deployment of DNS in the commercial and DoD
communities. However, the COE may be required to support X.500 based GDS in the future to align with
other major government initiatives such as the Defense Message System, which is utilizing X.500.
3.2.2.2.3 Security. The implementation shall provide the DCE Security Service.
3.2.2.2.4 Reserved.
3.2.2.2.5 Transitive trust. The implementation shall provide transitive trust between
hierarchical cells, such that principals may access services located in other cells
without the need for pair-wise registration of principals between cells.
NOTE: The currently recommended DCE product from Transarc does not satisfy the transitive
trust requirement. Transitive trust has not yet been implemented by the Open Group; this situation should
be remedied by the COE V4.0 timeframe.
3.2.2.2.6 Reserved.
3.2.2.2.7 Reserved.
3.2.2.2.8 Time. The implementation shall provide a DCE Distributed Time Service (DTS) that
is capable of interfacing with NTP for time synchronization outside of the cell.
3.2.2.2.9 Host. The implementation shall provide the DCE Host services.
3.2.2.3 DCE Applications
3.2.2.3.1 Distributed File System: The implementation shall provide the DCE Distributed File
System (DFS), including at least the DFS Client and DFS Exporter functionality.
3.2.2.3.2 NFS/DFS Gateway. The implementation shall provide a gateway that permits hosts
to use NFS to access files in the DFS.
3.2.2.4 DCE Software Development
3.2.2.4.1 Reserved.
3.2.2.4.2 Application programming interface: The implementation shall provide implementations
of the standard DCE API and the Generic Security Service API (GSS-
API).
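NOTE: For illustration only, the fragment below shows the general shape of the first GSS-API call in a
context establishment exchange using the standard C bindings; the target service name is hypothetical,
the header location may vary by implementation, and a real initiator loops, exchanging tokens with the
acceptor, while GSS_S_CONTINUE_NEEDED is returned.

    #include <gssapi.h>   /* GSS-API C bindings; header location is implementation specific */

    int start_context(gss_ctx_id_t *ctx)
    {
        OM_uint32 maj, min;
        gss_name_t target = GSS_C_NO_NAME;
        gss_buffer_desc name_buf, out_tok;

        name_buf.value  = (void *)"mapsvr@host.example.mil";    /* hypothetical service */
        name_buf.length = sizeof("mapsvr@host.example.mil") - 1;
        maj = gss_import_name(&min, &name_buf, GSS_C_NT_HOSTBASED_SERVICE, &target);
        if (GSS_ERROR(maj))
            return -1;

        *ctx = GSS_C_NO_CONTEXT;
        maj = gss_init_sec_context(&min,
                GSS_C_NO_CREDENTIAL,          /* default initiator credentials      */
                ctx, target,
                GSS_C_NO_OID,                 /* default mechanism                  */
                GSS_C_MUTUAL_FLAG,            /* request mutual authentication      */
                0,                            /* default context lifetime           */
                GSS_C_NO_CHANNEL_BINDINGS,
                GSS_C_NO_BUFFER,              /* no input token on the first call   */
                0,                            /* actual mechanism not needed        */
                &out_tok,                     /* token to send to the acceptor      */
                0, 0);                        /* returned flags/lifetime not needed */
        gss_release_name(&min, &target);
        return GSS_ERROR(maj) ? -1 : 0;       /* out_tok must later be released     */
    }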
3.2.2.4.3 C++ class interface: The implementation shall provide a C++ class library interface
to the DCE and GSS APIs described above for use with the C++ programming
language.
NOTE: This is just a C++ interface to the DCE and GSS APIs, not a more generalized way of
invoking C++ objects across a network (that is where CORBA is used).
3.2.2.4.4 DCE traffic monitor/debugger: The implementation shall provide a GUI-based tool
for monitoring DCE traffic between clients and servers that can be used to assist in
debugging the clients, servers, and DCE configuration.
3.2.2.4.5 Templates: The implementation shall provide example client and server software
templates that demonstrate typical usage of the DCE capabilities, for each of the
supported programming languages.
NOTE: This would seem to duplicate the capabilities of the ease of use APIs listed in section
3.2.2.4.7, but may be needed to support users who cannot afford to procure the ease of use libraries.
3.2.2.4.6 Reserved.
3.2.2.4.7 Ease of use APIs. The implementation shall provide an API that provides a more
abstract, higher level interface to the common DCE programming idioms, such as
client and server registration and initialization, obtaining a server binding, performing
name/directory service lookups, use of security services (including ACL access,
monitoring, and auditing), use of the GSS-API, use of RPCs, and access to
identification and authentication information.
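NOTE: Purely as a sketch of the shape such an ease of use layer might take (the class and member names
below are invented for illustration and are not a required interface):

    #include <string>

    /* Hypothetical ease of use wrapper over common DCE client idioms. */
    class DceClient {
    public:
        /* One call performs client initialization, name service lookup,
           binding selection, and security/login context setup. */
        explicit DceClient(const std::string &service_entry_name);

        /* Wraps the RPC, including authentication and rebind-on-failure logic. */
        int call(const std::string &operation, const void *request, void *reply);

        /* Exposes identification and authentication information. */
        std::string principal_name() const;
    };

    /* Usage sketch:
         DceClient map("/.:/subsys/examples/map_server");
         map.call("get_tile", &req, &rep);                                       */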
3.2.2.5 Management
3.2.2.5.1 Cell Management. The implementation shall provide GUI based tools to support both
local and remote (but within the cell) configuration or reconfiguration of DCE
services, including support for:
a) Add/delete a host from a DCE cell.
b) Relocate a host from one cell to another.
c) Attaching a host to another network.
d) Maintenance of dynamic IP addressing capabilities.
e) Start/Stop servers.
f) Add/Delete/Modify and configure servers in a cell.
g) Maintenance of cell and LAN profiles.
h) Maintenance of the endpoint map of hosts.
i) Monitor server status and be capable of starting or restarting servers upon reboot,
failure, or as part of normal operating procedures.
j) Maintain the security server, including the registry, groups, access control lists, and
all attributes associated with each.
k) Synchronize the security registry contents with the related operating system
information, including user account and group information.
l) Maintain the time server.
m) Add/Delete/Modify filesets in the distributed file system.
n) Synchronize master and replicated servers.
o) Rebuild master servers after failure.
p) Relocate servers to different hosts.
q) Create and maintain DCE server replicas, including the security server, time server,
DFS filesystem servers, and cell directory server
r) Create and maintain the hierarchical Cell Directory Service, including all of the
attributes associated with the CDS.
s) Remote management of the implementation.
t) Maintain the audit functionality, including the collection, synchronization, reduction,
and archival of audit logs and the start/stop of the DCE audit daemons on hosts.
u) Register cells for inter-cell authentication.
v) Backup and restore DCE server data, including at least the CDS directory and
security server data.
w) Browse and search the CDS namespace.
3.2.2.6 Compatibility and Migration Support
3.2.2.6.1 3-Tier Migration. The implementation shall provide tools to ease migration of legacy
and 2-tier applications to a 3-tier architecture.
3.2.2.6.2 Network Protocols. The implementation shall provide the ability to add DCE's
security functionality to the following network protocols, such that the protocols
exhibit the security benefits of DCE and remain compatible with operating system
supplied versions of these same protocols:
a) Remote Shell (rsh/rshd)
b) Remote Execution (rexec/rexecd)
c) Remote Login (rlogin,rlogind)
d) Remote Copy (rcp, rcpd)
e) Telnet
f) FTP
g) SMTP
h) SNMP
i) HTTP
3.2.2.6.3 Wrappers. The implementation shall provide example, or template, DCE wrappers
that show how to encapsulate an existing non-DCE executable application such that
it can be invoked via DCE, including wrapper backends that perform:
a) Command line based service invocation.
b) Other (TBD).
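NOTE: A minimal sketch of item a) follows; the manager routine and legacy_tool command are
hypothetical, and in a real wrapper this routine would be registered as the manager for an
IDL-defined operation so that it can be invoked via DCE RPC.

    #include <cstdlib>
    #include <string>

    /* Wrapper backend: turn the RPC arguments into a command line, run the
       existing (non-DCE) executable unchanged, and return its exit status. */
    long run_legacy_tool(const char *input_file, const char *output_file)
    {
        std::string cmd = std::string("legacy_tool ") + input_file + " " + output_file;
        int rc = std::system(cmd.c_str());   /* invoke the unmodified application */
        return (rc == 0) ? 0 : -1;           /* map exit status to an RPC result  */
    }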
3.2.2.7 DCE Default Configuration
The implementation shall provide a default configuration for the DCE Cell Directory Service
(including namespace configuration in accordance with the DII COE DCE Implementation Plan) and DCE
Security Server (including default principals and configuration to implement DII security policies).
3.2.2.8 DCE Documentation
3.2.2.8.1 Concept of Operation. The implementation shall provide a DII COE DCE Concept
of Operations (CONOPS) document describing the overall definition, role, operation
and maintenance of DCE cells in the DII. This is a higher level document than the
DII DCE Administration Guide.
3.2.2.8.2 Administration Guide. The implementation shall provide a DII COE DCE
Administration Guide that describes procedures and tools for the details of daily
administration of a DCE cell and how to interconnect DCE cells within and across
organizational boundaries.
3.2.2.9 Java Language Support
3.2.2.9.1 (Future Requirement) Access to DCE Services using Java. The implementation shall
provide the capability to access distributed DCE based services from Java applets and
applications.
3.2.2.10 Microsoft NT Support
3.2.2.10.1 Access to DCE Servers. The implementation shall provide the capability to access
distributed DCE based services from Windows NT clients.
3.2.3 CORBA Specific Requirements
To address distributed computing needs for the object-oriented software development paradigm,
the DII has adopted the Common Object Request Broker (CORBA) technology, defined by the Object
Management Group (OMG). There are many reasons why CORBA was selected, most of which are beyond
the scope of this SRS. The DII COE specific requirements for the use of CORBA are specified in the next
sections.
3.2.3.1 Background
This section is included for information purposes only at this time, until CORBA requirements and
transition are better understood. In the future, this section should be removed from this document. Programs that are planning, designing, or using CORBA include:
a) New Attack Submarine NSSN program. This program has specified CORBA for use in
interfacing its subsystems over the C3I System network. The Prime contractor is Lockheed
Martin Federal Systems Division. Lockheed Martin proposed using IONA. It will probably be
the only ORB product used in the system. No work on object class definitions has been done
yet. NSSN has many applications that will be based upon processors other than workstations
such as: HP 743I VME board running HPRT and PowerPCs running VxWorks.
b) Theater Battle Management Core Systems. The prime contractor for TBMCS (LORAL) has
chosen IONA ORBIX as the ORB for design/implementation. TBMCS will integrate CTAPS,
CIS, and WCCS under a single architecture.
c) DARPA Distributed Air Operations Center (DAOC) Advanced Technology Demonstration
(ATD). Logicon has selected IONA ORBIX as the commercial ORB.
d) DARPA/ISO - Joint Task Force ATD program - provides collaborative tools for the CJTF and
staff. Linked with theater CINCS and deployed forces. The architectural contractor,
Teknowledge Federal Systems, has been supporting a two-ORB policy, using IONA ORBIX
as the commercial ORB and Corbus, a GOTS ORB developed by BBN. The system is
currently moving to a second commercial ORB that has not been selected.
e) JFACC Program - just getting underway, will provide a collaborative capability for the JFACC
and staff that enables a continuous planning cycle for employment of air assets. The JFACC
program will use the JTF ATD architecture as a starting point (described above).
COMMENT: The following requirements were received from Navy, but I don't think that schedule
is within the scope of an SRS. Are the wrapper development efforts dependent upon CORBA being in the
COE kernel? Or is it that the Navy would like to wrap the referenced products using the CORBA products
recommended by the DCWG, and so the Navy wants a product decision by the specified timeframe?
The time frames in which systems require CORBA technology vary; the Navy desires that some of
its applications, including NIPS, TDBM, and ATWCS, be wrapped with CORBA wrappers by November
1996. The Air Force's TBMCS program is currently using CORBA in design/development and will deploy some operational CORBA based capabilities beginning in 4QCY97.
3.2.3.2 CORBA Version
The implementation shall be compliant with version 2.0 of the CORBA specifications, as specified
by the Object Management Group.
Note: There is currently no validation and compliance testing suite, so compliance cannot easily be verified at this time.
3.2.3.3 CORBA Interfaces
3.2.3.3.1 CORBA. The implementation shall provide implementations of the following
adopted CORBA interfaces:
a) ORB core
b) IIOP
c) Implementation Repository
d) Interface Repository
e) IDL compiler
f) Static Invocation Interface
g) Dynamic Invocation Interface
h) Dynamic Skeleton Interface.
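NOTE: For illustration, the fragment below contrasts static and dynamic invocation using the standard
CORBA C++ language mapping; the MapServer stub and "describe" operation are hypothetical, and the
ORB header name is product specific.

    #include <CORBA.h>   /* header name varies by ORB product */

    void invocation_examples(CORBA::Object_ptr obj)
    {
        /* Static invocation would call through an IDL-compiler-generated stub:
             MapServer_var map = MapServer::_narrow(obj);
             CORBA::String_var text = map->describe();                          */

        /* Dynamic invocation builds the request at run time, typically from
           Interface Repository knowledge, without compiled stubs. */
        CORBA::Request_var req = obj->_request("describe");
        req->set_return_type(CORBA::_tc_string);
        req->invoke();

        const char *text = 0;
        req->return_value() >>= text;   /* extract the string result from the Any */
    }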
3.2.3.3.2 CORBAservices. The implementation shall provide the following CORBAservices as
defined by the OMG:
a) Naming
b) Event Management
c) Transaction
d) Lifecycle
e) Security
f) Query
g) Time
Note: Some of the CORBAservices specified above have not yet been implemented by vendors,
although they have been adopted by the OMG. Those that are specified for COE V4.0 are expected to be
available within the needed time frame.
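NOTE: As an informal illustration of the Naming service listed above, the following C++ mapping
fragment resolves an object by name; the "MapServer" binding is hypothetical and the headers are
product specific.

    #include <CORBA.h>      /* ORB header; name varies by product    */
    #include <CosNaming.h>  /* Naming Service stubs; name varies too */

    CORBA::Object_ptr find_map_server(CORBA::ORB_ptr orb)
    {
        /* Obtain the root naming context from the ORB. */
        CORBA::Object_var obj = orb->resolve_initial_references("NameService");
        CosNaming::NamingContext_var root = CosNaming::NamingContext::_narrow(obj);

        /* Build the (hypothetical) name "MapServer" and resolve it. */
        CosNaming::Name name;
        name.length(1);
        name[0].id   = CORBA::string_dup("MapServer");
        name[0].kind = CORBA::string_dup("");
        return root->resolve(name);   /* caller narrows to the expected interface */
    }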
3.2.3.3.3 Future CORBA Services. In the future, the implementation shall provide the
following additional CORBAservices:
a) Concurrency
b) Relationship
c) Licensing
d) Persistence
e) Trader
f) Properties
g) Externalization.
3.2.3.4 CORBAfacilities
3.2.3.4.1 CORBAfacilities. The implementation shall provide the following CORBAfacilities
as specified by the OMG: a) Compound Document Presentation and Data
Interchange. This facility is based on the Opendoc specifications developed by IBM,
Apple, CIL, et al.
3.2.3.5 CORBA Applications
3.2.3.5.1 Interface Repository Browser: The implementation shall provide a GUI-based
capability for browsing the interfaces that are contained in the interface repositories
of local and remote systems, as permitted by security policy.
3.2.3.6 CORBA Software Development
3.2.3.6.1 Inter-ORB traffic monitor/debugger: The implementation shall provide a GUI-based
tool for monitoring CORBA traffic between clients and servers that can be used to
assist in debugging the clients, servers, and CORBA configuration.
3.2.3.6.2 Templates: The implementation shall provide example client and server software
templates that demonstrate typical usage of the common CORBA interfaces, for each
of the supported programming languages.
3.2.3.7 Management
3.2.3.7.1 Implementation repository management: The implementation shall provide a GUI-
based tool for managing the contents and configuration of local and remote
implementation repositories.
3.2.3.7.2 Interface repository management: The implementation shall provide a GUI-based tool
for managing the contents and configuration of local and remote interface
repositories.
3.2.3.7.3 Namespace management: The implementation shall provide a GUI-based tool for
managing the contents and configuration of the CORBA namespace.
3.2.3.7.4 Security management: The implementation shall provide a GUI-based tool for
managing the security configuration of the CORBA implementation.
3.2.3.8 Compatibility and Migration Support
3.2.3.8.1 DCE Compatibility. The implementation shall be compatible with the COE
implementation of the DCE.
NOTE: To the extent possible, the CORBA and DCE implementations should leverage each
other's strengths, and CORBA capabilities should re-use or be layered upon the DCE implementation such
that duplication is minimized and greater consistency is obtained.
3.2.3.8.2 Application Service Wrapping. The implementation shall provide the capability to
access DCE enabled application services.
Note: The above might take the form of a CORBA/DCE generic bridge, or might involve the
wrapping of DCE application services with CORBA wrappers. The practical ability to accomplish this will
probably have to be determined on a case-by-case basis, depending on the DCE services that the application
services use, such as DCE pipes and pointers.
3.2.3.8.3 Microsoft Distributed Common Object Model (DCOM). The implementation shall
provide the capability for OLE objects to request services from CORBA objects and vice-
versa, using CORBA adopted technology.
3.2.3.9 Java Language Support
3.2.3.9.1 Access to CORBA Services using Java. The implementation shall provide the
capability for Java Bytecode applets and applications to access distributed CORBA
based services.
3.2.3.9.2 Java Servers. The implementation shall provide the capability to implement CORBA
servers using the Java language, and to access such distributed CORBA services
from Java and non-Java clients.
3.2.3.10 Microsoft NT Support
3.2.3.10.1 Access to CORBA Servers from NT. The implementation shall provide the capability
to access distributed CORBA based services from Windows NT clients.
3.3 CSCI EXTERNAL INTERFACE REQUIREMENTS
3.3.1 Interface identification and diagrams
3.3.2 Project-unique identifier of interface
3.4 CSCI INTERNAL INTERFACE REQUIREMENTS
3.5 CSCI INTERNAL DATA REQUIREMENTS
3.6 ADAPTATION REQUIREMENTS
3.7 SAFETY REQUIREMENTS
3.8 SECURITY AND PRIVACY REQUIREMENTS
Specified earlier.
3.9 CSCI ENVIRONMENT REQUIREMENTS
3.9.1 Platform Requirements
The implementation shall support the platforms (hardware/operating system combinations)
specified for the DII COE.
3.9.2 Network Requirements
The implementation shall support the network specified for the DII COE.
3.10 COMPUTER RESOURCE REQUIREMENTS
3.10.1 Computer hardware requirements
3.10.2 Computer hardware resource utilization requirements
3.10.3 Computer software requirements
3.10.4 Computer communications requirements
3.11 SOFTWARE QUALITY FACTORS
3.12 DESIGN AND IMPLEMENTATION CONSTRAINTS
3.13 PERSONNEL-RELATED REQUIREMENTS
3.14 TRAINING-RELATED REQUIREMENTS
3.14.1 Product training
The implementation shall provide on-site training for each of the COTS and GOTS products
comprising the implementation, commensurate with the use of the products.
3.14.2 DII COE Training
The implementation shall provide centralized training that supplements the product training
described above, and that is tailored to provide instruction on how to use the implementation within the DII
COE context, including:
3.14.2.1 Installation Training
The implementation shall provide training that prepares the student to install all of the components of
the implementation.
3.14.2.2 System Management Training
The implementation shall provide training that prepares the student to manage the
implementation, including procedures that may be unique to the DII COE context, such as security.
3.14.2.3 Software Development Training
The implementation shall provide training that prepares the student for software development using
the allowed features of the implementation, including the use of programming idioms, templates, testing, or
other methods that may be unique to the DII COE context.
3.15 LOGISTICS-RELATED REQUIREMENTS
3.16 OTHER REQUIREMENTS
3.17 PACKAGING REQUIREMENTS
3.18 PRECEDENCE AND CRITICALITY OF REQUIREMENTS

SECTION 4

QUALIFICATION PROVISIONS
4.
TBD.

SECTION 5

REQUIREMENTS TRACEABILITY
5.
The members of the Distributed Computing Working Group (DCWG) are in joint agreement that
the requirements specified herein are jointly held. Only those requirements that are unique to a
service/agency are listed below.
Requirement No.   Brief Title   Source   Comments
3.2.1.8.13        LIFO          Navy     Recommended product does not satisfy reqt.; reqt. is unique to Navy.

SECTION 6

NOTES
6.
None.

APPENDIX A

WORKING GROUP PRODUCT RECOMMENDATIONS
7.
7.1 RECOMMENDATIONS TO THE DII COE ARCHITECTURE OVERSIGHT BOARD
7.1.1 Transarc DCE
The Distributed Computing Environment (DCE) version 1.1 product from Transarc is
recommended to satisfy all of the core DCE requirements, including the implementation of time, threads,
directory services, RPC, and security. Additionally, Transarc's implementation of the DCE Distributed File
System (DFS) and NFS/DFS gateway is recommended for sites that require such functionality. Transarc's
implementation of DCE, however, does not currently support Ada'95 language bindings or mobile hosts,
which are identified as COE requirements. Transarc's DCE is available on all of the COE platforms.
Another, government-owned implementation of DCE version 1.1 was developed by the Army. The
Army's implementation, contracted to Unixpros, uses the same source code baseline as the Transarc DCE
product, with modifications to support both the Ada'95 bindings and mobile host requirements that were
identified above as deficiencies in the ability of the Transarc implementation to satisfy COE requirements.
Conversely, though, there are limitations regarding the availability of the Army's implementation on the
range of COE platforms that are required.
The decision to recommend Transarc's DCE vice the Army/Unixpros solution hinged on three
factors: 1) COTS vs GOTS. The recommendation is consistent with DOD direction to use COTS, and
addresses requirements for best commercial practices with regard to training, documentation, technical
support, and availability. 2) Cost. The recommendation is believed to be in the best cost interest of the
majority of the COE users. There are, however, implementation scenarios in which deployment costs (not
including support, maintenance, training, etc.) could exceed the alternative Army/Unixpros solution. 3)
Availability and porting. The recommendation is available on the full range of COE platforms required, as
well as other platforms that are not yet required but which may be in the future.
7.1.2 Open Horizons Connection [with qualifications]
The Connection product from Open Horizons supports migration of legacy applications to DCE by
providing DCE'ized versions of runtime libraries for other COTS products, principally Oracle. The
Connection product, however, has some limitations in that it supports only a subset of the Oracle database
APIs that are in use in legacy systems. Hence, the product may also be limited in its use in conjunction with
4GL database forms packages, such as Sybase's Gain Momentum, and so its recommendation must be
qualified to consider the case-by-case applicability of the tool.
Note: Requirements for DCE'izing legacy database connections are somewhat weak, and are
generally subsumed under the category of migration requirements.
Note: The working group is investigating other alternatives that may provide similar functionality,
such as Intellisoft's DCE/Snare.
7.1.3 HAL DCE Cell Manager
The DCE Cell Manager product from HAL provides a GUI-based interface to most DCE cell
management functions. It is being upgraded to support DCE version 1.1, including hierarchical cells and
access control list management. The product, however, is not integrated with the products being used for
COE Management Services, and the ability of the product to support the CONOPS for COE system
administration and security has not been evaluated.
7.1.4 Transarc Encina (including the Recoverable Queueing Service)
Encina/Encina++ from Transarc provides a higher-level API to the DCE functionality, and provides
support for the reliability requirements that are inherent to transaction processing. Encina++, included with
Encina, provides an object-oriented API to the Encina functionality, with bindings to the C++ language.
Encina satisfies replication and load balancing requirements without additional application programming,
and satisfies the monitoring requirements. The Recoverable Queueing Service, an optional product in the
Encina family, satisfies all but the Army's LIFO requirements for queueing and is also recommended to
satisfy the queueing requirements. Both products rely upon DCE core services.
7.1.5 TBD CORBA
CORBA recommendations are TBD.
7.1.6 TBD CORBA/Ada Mappings
CORBA/Ada mapping recommendations are TBD.

APPENDIX B

REQUIREMENTS FOR OTHER COE COMPONENTS
8.
8.1 OPERATING SYSTEM REQUIREMENTS
8.1.1 Time
System time coordination with DCE/CORBA time services.
8.2 MANAGEMENT SERVICES
8.2.1 Common Desktop Environment
8.2.1.1 Single Login Integration
8.2.1.1.1 Single Login Integration. The implementation shall be integrated with the DII COE
console login facilities, defined by the Management Services implementation, such
that execution of the normal console login sequence identifies and authenticates the
user and provides the user with DCE-enabled capabilities. Currently, this requires
integration of DCE with the Triteal Enterprise Desktop product.
8.2.1.1.2 DFS/CDE integration. The implementation of DCE/DFS shall be integrated with the
Triteal Enterprise Desktop product.
8.2.1.1.3 Remote execution integration. The implementation shall be integrated with the
Triteal Enterprise Desktop product to allow desktop startup of remote processes
using the facilities provided by DCE.
8.2.2 System Management
8.2.2.1 DCE/System Management Integration
8.2.2.1.1 DCE/System Management Integration. The implementation of the cell management
capabilities shall be integrated with the UNIX user and security management
capabilities, specified in the DII COE Management Services SRS.
8.2.2.1.2 CORBA/System Management Integration. The implementation of the CORBA
management capabilities shall be integrated with the UNIX system management
capabilities, specified in the DII COE Management Services SRS.
8.3 COMMON SUPPORT APPLICATIONS
8.3.1 Netscape/Mosaic
DCE/WEB integration.
8.3.2 Java
DCE/IDL mappings.
8.4 SOFTWARE DEVELOPMENT SERVICES
8.4.1 Design
8.4.1.1 Object Oriented Analysis and Design
8.4.1.1.1 Object Oriented Analysis and Design Tools: The implementation shall provide GUI-
based tools for performing object oriented design and analysis as part of the software
development environment.
8.4.2 Testing
8.4.2.1 Automatic test generation tools.
8.4.2.1.1 Automatic Test Generation Tools. The implementation shall provide tools to support
the automatic generation of tests.
8.5 GENERAL SYSTEM ENGINEERING
8.5.1 Cost
- Licensing agreements for all GCCS Distributed Computing software on a per-site basis. Per-
site licensing (as opposed to per-machine licensing) allows sites to have greater flexibility in configuring
and maintaining their system. Maintaining licenses for each machine is more time consuming and places an
additional burden on support personnel.
- One negotiated price (by DISA) for all of the services. JMCIS sites would then be
responsible for purchasing licenses, based on the negotiated price.
- Each JMCIS site must have the capability to run multiple server machines. To take full
advantage of distributed computing features like 3-tier architecture, server replication, and load balancing,
JMCIS sites will require the ability to install GCCS Distributed Computing software on several servers and
clients. Licensing and pricing must permit this type of flexibility.
Tools must be cost effective and support required platforms.
8.5.2 Documentation
8.5.2.1 DII COE DCE implementation plan
The implementation shall include a DII COE DCE Implementation Plan document. This is
primarily a planning document, and shall describe the implementation in sufficient detail to be able to guide
planning efforts as well as assist in determining the work plan for the DCWG and DISA JIEO activities
related to procedure-based distributed computing.
8.5.2.2 DII COE DCE application programmer's guidance
The implementation shall include a DII COE DCE Application Programmer's Guidance document.
This document shall describe the proper or recommended usage of the implementation by software
developers, as well as explicitly identify usage which is prohibited.
8.5.2.3 DII COE CORBA implementation plan
The implementation shall include a DII COE CORBA Implementation Plan document. This is
primarily a planning document, and shall describe the implementation in sufficient detail to be able to guide
planning efforts as well as assist in determining the work plan for the DCWG and DISA JIEO activities
related to object-oriented distributed computing.
8.5.2.4 DII COE CORBA application programmer's guidance
The implementation shall include a DII COE CORBA Application Programmer's Guidance
document. This document shall describe the proper or recommended usage of the implementation by
software developers, as well as explicitly identify usage which is prohibited.
8.5.3 COE Services
8.5.3.1 DCE/CORBA Migration
DII COE services that are implemented as DCE-based application services shall also be accessible
via a CORBA interface.
8.6 NETWORK SERVICES
The following requirement is also listed in section 3.2.1.1.3. See explanatory note below.
Firewalls. The implementation shall provide support for the use of DCE services through firewalls
and guards.
NOTE: Operation of DCE through firewalls and guards is probably not a requirement that can be
directly satisfied by the distributed computing component of the COE, but is more likely to be satisfied
through the configuration of the entire system/network, including routers, packet filtering, intermediate
hosts, etc. Even so, the implementation of distributed computing should not deny service in such
configurations.
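For example, on the CORBA side, the standard Java IDL bootstrap properties can pin the Naming Service
host and port to fixed, well-known values so that firewall and guard rules can be written against them;
pinning the listen port of a CORBA or DCE server is vendor-specific and is not shown. The host name and
port number below are illustrative values only, not part of this specification.

    import java.util.Properties;
    import org.omg.CORBA.ORB;

    public class FirewallFriendlyBootstrap {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.setProperty("org.omg.CORBA.ORBInitialHost", "names.example.mil"); // illustrative host
            props.setProperty("org.omg.CORBA.ORBInitialPort", "2809");              // well-known bootstrap port
            ORB orb = ORB.init(args, props);
            // ... resolve_initial_references("NameService") and proceed as usual ...
            orb.shutdown(true);
        }
    }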
 