Issues for Data Distribution Service Finalization Task Force

To comment on any of these issues, send email to data-distribution-ftf@omg.org. (Please include the issue number in the Subject: header, as follows: [Issue ###].) To submit a new issue, send email to issues@omg.org.

List of issues

Issue 6685: DDS editorial issues
Issue 6686: Bad references
Issue 6687: Missing operations to allow the navigation described in the PIM
Issue 6705: ref-1001: section 3.1.1 (editorial)
Issue 6706: ref-1002: section 3.1.2.1 (editorial)
Issue 6707: ref-1003: Section 3.1.3.2 (editorial)
Issue 6708: ref-1004: Section 3.1.3.3 Metamodel (clarification)
Issue 6709: ref-1005: figure 3.2 (editorial)
Issue 6710: ref-1006: Page 3.11 (editorial)
Issue 6711: ref-1007: Section 3.1.6.3.4 CacheListener (editorial)
Issue 6712: ref-1008: Bad annex reference (editorial)
Issue 6713: ref-1009: Section 3.2.3.2 IDL Model description of the example
Issue 6714: ref-1010: Section 3.2.3.3 XML Model Tags of the example
Issue 6715: ref-1011: Section 3.2.3.3 Introduction to figure 3.9 (editorial)
Issue 6716: ref-1012: Section 3.2.3.3 Simplified XML of the example
Issue 6717: ref-1013: Section 3.1.6.3.9 Table for ObjectQuery (editorial)
Issue 6718: ref-1014: Page 3-10, figure 3-2 min_topic (editorial)
Issue 6719: ref-1015: Page 3-62 manual edition (editorial)
Issue 6720: ref-1016: Page 3-65 t2 (editorial)
Issue 6721: ref-1017: Section 3.1.4.4.2 topic (editorial)
Issue 6722: ref-1018: Name of the methods for ObjectListener (editorial)
Issue 6723: ref-1019: Name of the ObjectRoot::clone method (editorial)
Issue 6729: Additional_communication_paradigms
Issue 6730: Attributes_on_a_topic_description
Issue 6731: Extension_to_the_partition_qos
Issue 6735: Transactional_reliability Issue
Issue 6736: Writer_notification_of_delivery_failure Issue
Issue 6738: Navigation_of_connectivity_information Issue
Issue 6739: Additional_qos_DATA_PRIORITY Issue
Issue 6740: Additional_qos_LIFESPAN Issue
Issue 6743: Make_USER_DATA_an_array_and_mutable Issue
Issue 6744: CacheFactory::find_cache (addition)
Issue 6745: Attributes and operations directly set on valuetypes
Issue 6746: Names of the ObjectRoot attributes
Issue 6747: Depth of cloning (addition)
Issue 6748: CacheAccess operations (documentation)
Issue 6749: CacheAccess::delete_access (editorial)
Issue 6750: CacheAccess::deref (clarification)
Issue 6751: stringSeq and longSeq (editorial)
Issue 6752: ObjectHome::get_topic_name (editorial)
Issue 6753: ObjectHome::get_all_topic_names (addition)
Issue 6754: Operations on collections of objects (addition)
Issue 6755: Name of ObjectLink (consistency)
Issue 6756: Obtaining the DomainParticipantFactory
Issue 6757: Potential problems in PSM mappings
Issue 6758: Naming_of_attribute_getter_operations
Issue 6759: Ref-62 Return_type_of_set_query_operations
Issue 6760: Delete dependencies and semantics
Issue 6761: Ref-20 Semantics_of_factory_delete_methods
Issue 6762: Ref-87 Clarify_Topic_deletion_as_local_concept
Issue 6763: Ref-151 No_locally_duplicate_topics
Issue 6764: Ref-22 Automatic_deletion_of_contained_entities
Issue 6765: Ref-15 Behavior_on_deletion_from_wrong_factory
Issue 6766: Single waitset attached to condition
Issue 6767: Entity specialization of set/get qos/listener
Issue 6768: Ref-36 Entity_specialization_set_get_qos
Issue 6769: Inconsistencies between PIM and PSM/IDL
Issue 6770: Ref-39 Entity_specialization_set_get_qos
Issue 6771: Ref-28 IDL_entity_get_statuscondition
Issue 6772: Ref-34 Incorrect_guard_condition_enabled_statuses
Issue 6773: Ref-37 Entity_specialization_set_get_listener_in_idl
Issue 6774: Ref-42 DomainParticipantListener_on_requested
Issue 6775: Ref-46 ContentFilteredTopic_related_topic
Issue 6776: Ref-48 FooDataWriter_unregister_instance
Issue 6777: Ref-49 DataWriter_get_key
Issue 6778: Ref-57 FooDataReader_get_key
Issue 6779: Ref-56 Subscriber_notify_datareaders_parameters
Issue 6780: Ref-58 DataReader_read_take_w_condition
Issue 6781: Ref-59 FooDataReader_read_take_parameter_order
Issue 6782: Ref-70 Missing_deadline_statuskind_from_pim
Issue 6783: Ref-79 Missing_StatusKind_liveliness_idl_constants
Issue 6784: Ref-88 Inconsistent_naming_PIM_IDL_instance_samples
Issue 6785: Ref-205 On_requested_deadline_missed_paramtype
Issue 6786: Ref-126 Inconsistent_parameter_order_to_get_datareaders
Issue 6787: Ref-135 Missing_accessor_for_SampleRejectedStatus
Issue 6788: Ref-63 QoS_USER_DATA_on_Publisher_and_Subscriber
Issue 6789: Ref-229 IDL_rename_publisher_laxity_w_latency_budget
Issue 6790: Clarification of listener invocation and waitset signaling
Issue 6791: Ref-02 Data_Available_status_transition
Issue 6792: Duplicate use of domainId
Issue 6793: Use of Topic versus TopicDescription
Issue 6794: Ref-40 Name_and_return_type_of_lookup_topic
Issue 6795: Reason and use of enable
Issue 6796: Ref-31 Reason_and_use_of_enabled
Issue 6797: [DDS ISSUE# 14] Helper addition to the IDL
Issue 6798: Ref-118 Introduce_TIME_INVALID_constant
Issue 6799: Ref-102 Addition_of time_related_constants
Issue 6800: [DDS ISSUE# 15] Semantics of register and unregister instance
Issue 6801: [DDS ISSUE# 16] Clarification of expression syntax
Issue 6802: [DDS ISSUE# 17] Clarify consequence of changing partitions
Issue 6803: Behavior on creation failure
Issue 6804: [DDS ISSUE# 19] Initial value of entity status changes
Issue 6805: [DDS ISSUE# 20] Narrow the applicability of assert liveliness
Issue 6806: [DDS ISSUE# 21] Helper operations
Issue 6807: Ref-134 Additional_w_timestamp_operations
Issue 6808: [DDS ISSUE# 22] Details in the code generation
Issue 6809: [DDS ISSUE# 23] Make Listener inheritance explicit in figures 2-9 and 2-10
Issue 6810: [DDS ISSUE# 24] Clarification of status flag
Issue 6811: [DDS ISSUE# 25] Addition of read and take to ReadCondition
Issue 6812: [DDS ISSUE# 26] Definition of DCPSKey
Issue 6813: [DDS ISSUE# 27] Additional situations resulting in inconsistent QoS
Issue 6814: [DDS ISSUE# 28] Desirability to define "information model" in a file
Issue 6815: [DDS ISSUE# 29] Disposing a multi-topic
Issue 6816: [DDS ISSUE# 30] Setting of default qos on factories
Issue 6817: [DDS ISSUE# 31] Topic QoS refactor
Issue 6818: [DDS ISSUE# 32] Create dependencies on type
Issue 6819: [DDS ISSUE# 33] Initialization of resources needed
Issue 6820: [DDS ISSUE# 34] Initial data when DataWriter appears
Issue 6821: Inconsistency on what operations may return NOT_ENABLED
Issue 6822: [DDS ISSUE# 36] QoS clarifications
Issue 6823: Ref-210 Clarification_of_responsibility_of_RxO_qos
Issue 6824: Ref-212 Qos_Coupling_TimeBasedFilter_deadline
Issue 6825: Ref-104 Coupling_bwn_TIME_BASED_FILTER_and_RELIABILITY
Issue 6826: Ref-156 Clarify_TIME_BASED_FILTER
Issue 6827: Ref-106 Desc_of_Inconsistent_topic_status::total_count_change
Issue 6828: Ref-108 Ownership_interaction_with_deadline
Issue 6829: Ref-109 Destination_order_should_be_request_offered
Issue 6830: Ref-111 Default_values_for_qos
Issue 6831: Ref-144 Wrong_description_of_compatible_DURABILITY
Issue 6832: Ref-165 Make_USER_DATA_changeable
Issue 6833: Ref-144 User_data_on_topic
Issue 6834: Ref-142 Confusing_description_of_manual_by_participant
Issue 6835: Ref-162 Separate_transient_into_two_kinds
Issue 6836: [DDS ISSUE# 37] SAMPLE_LOST_STATUS on DataReader
Issue 6837: [DDS ISSUE# 38] Allow application to install a clock
Issue 6838: [DDS ISSUE# 39] Combine module names
Issue 6839: [DDS ISSUE# 40] Expression syntax is missing enumeration
Issue 6840: [DDS ISSUE# 41] Inconsistent use of instance in datawriter api
Issue 6841: [DDS ISSUE# 42] Clarify how counts in the status accumulate
Issue 6842: [DDS ISSUE# 43] Bad references
Issue 6843: Ref-139 Bad_reference_to filter_expression
Issue 6844: [DDS ISSUE# 44] Errors in figures
Issue 6845: [DDS ISSUE# 45] Is OMG IDL PSM more correct than CORBA PSM?
Issue 6846: [DDS ISSUE# 46] Use of RETCODE_NOT_IMPLEMENTED
Issue 6848: Rename DataType interface to TypeSupport
Issue 6849: [DDS ISSUE# 49] Behavior_of_register_type
Issue 6853: [DDS ISSUE# 52] Provide for zero copy access to data
Issue 6854: [DDS ISSUE# 53] Refactor lifecycle state
Issue 6855: Ref-85 Garbage_collection_of_disposed_instances
Issue 6856: Ref-112 Value_of_data_for_DISPOSED_state
Issue 6857: Ref-113 Meta_sample_accounting_towards_resource_limits
Issue 6858: [DDS ISSUE# 54] Refactor or extend API used to access samples
Issue 6859: Ref-231 Provide_a_way_to_limit_count_returned_samples
Issue 6861: [DDS ISSUE# 55] Rename DataType interface to TypeSupport
Issue 6862: [DDS ISSUE# 56] Missing fields in builtin topics
Issue 6863: Ref-224 Built_in_topics_not_in_PSM
Issue 6864: [DDS ISSUE# 57] Clarify creation of waitset and conditions
Issue 6867: ref-1032: User-provided oid
Issue 7022: ObjectHome index and name
Issue 7023: ObjectRoot::is_modified (clarification)
Issue 7024: New structure for DLRLOid
Issue 7025: Naming of the private members
Issue 7026: clean_modified (in ObjectRoot, Relations...)
Issue 7057: New definition for ObjectFilter
Issue 7058: Mapping DCPS-DLRL
Issue 7059: clone + deref
Issue 7060: Several instead one listener
Issue 7061: delete clone
Issue 7062: New definition for ObjectListener
Issue 7064: Ref-170 Missing_description_of_OWNERSHIP_STRENGH
Issue 7066: ref-171 Rename_Topic_USER_DATA_to_TOPIC_DATA
Issue 7067: New definition for Selections
Issue 7100: Missing operations on DomainParticipantFactory and need for helper values
Issue 7134: ref-1054: Bad which_added operations in IDL
Issue 7136: ref-1053 Missing is_composition
Issue 7169: Changing the IDL module

Issue 6685: DDS editorial issues (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-66 Misplacing_of_key_in_builtin_topic_table
Section 2.1.5. The "key" fields of the Topics DCPSPublication and DCPSSubscription are placed one row too high compared to the related Topic. In other words, DCPSPublication and DCPSSubscription should be placed one row higher.
The thickness of the lines between the rows might suggest some relation between fields, but is applied inconsistently.
Proposal: Correct table as described above.
Ref-103 Typo_on_section_2.1.2.5.2.8
3rd paragraph says "... prior to calling any of the sample-accessing operations, namely: … on the DataWriter"
Should say "on the DataReader" instead of "on the DataWriter"
Proposal: Replace "DataWriter" with "DataReader" in said paragraph
Ref-105 Typo_on_section_2.1.3.11
Section 2.1.3.11 says "Assuming the STRENGTH policy allows it…"
Should say "Assuming the OWNERSHIP policy allows it…"
Proposal: Replace as stated above
Ref-115 Typo_consistent_use_of_term_publication
Section 2.1.2.2.1.15 says "The publication to ignore"… The parallel sentence in section 2.1.2.2.1.16 says "The DataReader to ignore"…
These two sentences should be consistent
Proposal: Replace "publication" with DataWriter in 2.1.2.2.1.15
Ref-143 Typo_on_RELIABILITY_description
In Section 2.1.3, QoS table, the description of RELIABILITY says in the last line "and whether a samples can be discarded from it." It should say "samples" instead of "a samples".
Proposal: Replace as stated above
Ref-145 Bad_reference_to_DCPSEntity
Section 2.1.3, Figure 2-15 says DCPSEntity instead of Entity in one of the lines
Proposal: Replace as stated above
Ref-147 Typo_on_section_2.1.5
Section 2.1.5, third paragraph: the last "r" in "get_datareader" is not in italics, as it should be within the sentence "The built-in DataReader objects can be retrieved by using the operation get_datareader, with the Subscriber and the topic name as parameters."
Proposal: Make the "r" italic.
Ref-207 Grammar_errors_on_secs_2.1.2.4_and_2.1.2.5
2.1.2.4.2.6: 2nd paragraph, 1st line: "one" should be "once"; according to 2.1.2.1.1.7 this should be the case.
2.1.2.5.2.8: 3rd paragraph, 3rd line: "....on any DataWriter" should be "....on any DataReader"
2.1.2.5.3.8: Point 5, 3rd line: "....that is required that...." should be "....that it is required that....."
Proposal: Fix above 3 typos as stated above
Ref-221 Typo_on_section_2.1.4.4
2.1.4.4: First paragraph after the bullets: "... is done after ininitial is at ion phase..." should be "... is done after initialization phase..."
Proposal: Fix as stated above
Ref-228 Typo_on_2.1.2.2.1.13
2.1.2.2.1.13 2nd paragraph, 4th line: "filed" should be "field"
Proposal: Fix as stated above

Resolution: see below
Revised Text:

Resolution1: Correct table in Section 2.1.5 as described.
Revised Text1: Move the text "DCPSPublication (entry created when a DataWriter is created in association with its Publisher)" to the cell above where it currently is. Move the text "DCPSSubscription (entry created when a DataReader is created in association with its Subscriber)" to the cell above where it currently is. Straddle together the cell containing "DCPSParticipant (entry created when a DomainParticipant object is created)" with the one directly below. Straddle together the cell containing "DCPSTopic (entry created when a Topic object is created)" with the 2 cells that follow directly below. Straddle together the cell containing "DCPSPublication (entry created when a DataWriter is created in association with its Publisher)" with the 3 cells that follow directly below. Straddle together the cell containing "DCPSSubscription (entry created when a DataReader is created in association with its Subscriber)" with the 3 cells that follow directly below.
The resulting table in section 2.1.5 is:

Topic name                                   Field Name  Type                Meaning
DCPSParticipant (entry created when a        key         BuiltinTopicKey_t   DCPS key to distinguish entries
DomainParticipant object is created)         user_data   UserDataQosPolicy   Policy of the corresponding DomainParticipant
DCPSTopic (entry created when a Topic        key         BuiltinTopicKey_t   DCPS key to distinguish entries
object is created)                           name        string              Name of the Topic
                                             type_name   string              Name of the type attached to the Topic
DCPSPublication (entry created when a        key         BuiltinTopicKey_t   DCPS key to distinguish entries
DataWriter is created in association         topic_name  string              Name of the related Topic
with its Publisher)                          partition   PartitionQosPolicy  Policy of the Publisher to which the DataWriter belongs
                                             user_data   UserDataQosPolicy   Policy of the corresponding DataWriter
DCPSSubscription (entry created when a       key         BuiltinTopicKey_t   DCPS key to distinguish entries
DataReader is created in association         topic_name  string              Name of the related Topic
with its Subscriber)                         partition   PartitionQosPolicy  Policy of the Subscriber to which the DataReader belongs
                                             user_data   UserDataQosPolicy   Policy of the corresponding DataReader

Disposition1: Resolved

Summary2: Ref-103 Typo_on_section_2.1.2.5.2.8. 3rd paragraph says: "In the aforementioned case, the operation begin_access must be called prior to calling any of the sample-accessing operations, namely: get_datareaders on the Subscriber and read, take, read_w_condition, take_w_condition on any DataWriter." Should say "on the DataReader" instead of "on the DataWriter".
Resolution2: Replace "on the DataWriter" with "on the DataReader".
Revised Text2: In the aforementioned case, the operation begin_access must be called prior to calling any of the sample-accessing operations, namely: get_datareaders on the Subscriber and read, take, read_w_condition, take_w_condition on any DataReader.
Disposition2: Resolved

Summary3: Ref-105 Typo_on_section_2.1.3.11. Section 2.1.3.11 says "The setting BY_SOURCE_TIMESTAMP indicates that, assuming the STRENGTH policy allows it, a timestamp placed at the source should be used." Should say "...assuming the OWNERSHIP policy allows it…".
Resolution3: Replace as stated above.
Revised Text3: The setting BY_SOURCE_TIMESTAMP indicates that, assuming the OWNERSHIP policy allows it, a timestamp placed at the source should be used.
Disposition3: Resolved

Summary4: Ref-115 Typo_consistent_use_of_term_publication. Section 2.1.2.2.1.15 says "The publication to ignore"… The parallel sentence in section 2.1.2.2.1.16 says "The DataReader to ignore"… These two sentences should be consistent.
Resolution4: Replace "publication" with "DataWriter" in 2.1.2.2.1.15.
Revised Text4: The DataWriter to ignore is identified by the handle argument.
Disposition4: Resolved

Summary5: Ref-143 Typo_on_RELIABILITY_description. In Section 2.1.3, QoS table, the description of RELIABILITY says in the last line "and whether a samples can be discarded from it." Should say "samples" instead of "a samples".
Resolution5: Replace as stated above.
Revised Text5: Outside steady state the HISTORY and RESOURCE_LIMITS policies will determine how samples become part of the history and whether samples can be discarded from it.
Disposition5: Resolved

Summary6: Ref-145 Bad_reference_to_DCPSEntity. Section 2.1.3, Figure 2-15 says DCPSEntity instead of Entity in one of the lines.
Resolution6: Replace as stated above.
Revised Text6:
Disposition6: Resolved

Summary7: Ref-147 Typo_on_section_2.1.5. Section 2.1.5, third paragraph: the last "r" in "get_datareader" is not in italics, as it should be within the sentence "The built-in DataReader objects can be retrieved by using the operation get_datareader, with the Subscriber and the topic name as parameters."
Resolution7: Make the "r" italic.
Revised Text7: The built-in DataReader objects can be retrieved by using the operation get_datareader, with the Subscriber and the topic name as parameters.
Disposition7: Resolved

Summary8: Ref-207 Grammar_errors_on_secs_2.1.2.4_and_2.1.2.5.
2.1.2.4.2.6: 2nd paragraph, 1st line: "one" should be "once"; according to 2.1.2.1.1.7 this should be the case.
2.1.2.5.2.8: 3rd paragraph, 3rd line: "....on any DataWriter" should be "....on any DataReader"
2.1.2.5.3.8: Point 5, 3rd line: "....that is required that...." should be "....that it is required that....."
Resolution8: The first and last typos are true typos. The second (2.1.2.5.2.8: 3rd paragraph, 3rd line) is invalid, as the text is correct as it appears in the final PTC document. Fix as stated below:
· 2.1.2.4.2.6: 2nd paragraph, 1st line: Replace "one" with "once"
· 2.1.2.5.3.8: Point 5, 3rd line: Replace "....that is required that...." with "....that it is required that....."
Revised Text8:
2.1.2.4.2.6, 2nd paragraph: The operation unregister_instance should be called just once per instance, regardless of how many times register_instance was called for that instance.
2.1.2.5.3.8, Point 5: If PRESENTATION access_scope is GROUP and ordered_access is set to TRUE, then the returned collection contains at most one sample. The difference in this case is due to the fact that it is required that the application is able to read samples belonging to different DataReader objects in a specific order.
Disposition8: Resolved

Summary9: Ref-221 Typo_on_section_2.1.4.4. 2.1.4.4: First paragraph after the bullets: "... is done after ininitial is at ion phase..." should be "... is done in an initialization phase..."
Resolution9: Fix Section 2.1.4.4 as stated above.
Revised Text9: Usually the first step is done in an initialization phase, while the others are put in the application main loop.
Disposition9: Resolved

Summary10: Ref-228 Typo_on_2.1.2.2.1.13. 2.1.2.2.1.13, 2nd paragraph, 4th line: "filed" should be "field".
Resolution10: Fix as stated above.
Revised Text10: This application data is propagated as a field in the built-in topic and can be used by an application to implement its own access control policy.
Disposition10: Resolved
Actions taken:
December 8, 2003: received issue
September 23, 2004: closed issue

Issue 6686: Bad references (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
[DDS ISSUE# 2] Bad references
Ref-67 Bad_reference_to_SubscriberFactory
Sections 2.1.5, 2.1.6.2.1, and 2.1.6.2.2 mention a SubscriberFactory. There is no SubscriberFactory; they should mention DomainParticipant instead.
Proposal: Replace SubscriberFactory with DomainParticipant in said sections.
Ref-71 Bad_reference_to_CORBA_PIM
Section 2.2.2 says "The CORBA PIM is provided by means of the IDL that defines the interface an application can use to interact with the Service."
This should be: "The CORBA PSM ..."
Proposal: Replace as stated above
Ref-80 Bad_reference_to_appendixA
On page 2-21, 2-22, 2-28, 2-29, 2-54, 2-59 all the references to appendix A should be references to appendix B
Proposal: Replace as stated above

Resolution: see below
Revised Text:

Summary1: Ref-67 Bad_reference_to_SubscriberFactory. Sections 2.1.5, 2.1.6.2.1, and 2.1.6.2.2 mention a SubscriberFactory. There is no SubscriberFactory; they should mention DomainParticipant instead.
Resolution1: Replace SubscriberFactory with DomainParticipant in said sections.
Revised Text1:
· Section 2.1.5 Built-in Topics: The built-in data-readers all belong to a built-in Subscriber. This subscriber can be retrieved by using the method get_builtin_subscriber provided by the DomainParticipant.
· Section 2.1.6.2.1 SubscriptionView: The first part of Figure 22 shows the Subscriber's and the DataReader's creation by means of the DomainParticipant.
· Section 2.1.6.2.2 Notifications via Conditions and Wait-Sets: The first part of Figure 22 shows the Subscriber's and the DataReader's creation by means of the DomainParticipant.
Disposition1: Resolved

Summary2: Ref-71 Bad_reference_to_CORBA_PIM. Section 2.2.1 says "The CORBA PIM is provided by means of the IDL that defines the interface an application can use to interact with the Service." This should be: "The CORBA PSM ..."
Resolution2: Section 2.2.1 Overview and Design Rationale: replace "PIM" with "PSM".
Revised Text2: The CORBA PSM is provided by means of the IDL that defines the interface an application can use to interact with the Service.
Disposition2: Resolved

Summary3: Ref-80 Bad_reference_to_appendixA. On pages 2-21, 2-22, 2-28, 2-29, 2-54, 2-59 all the references to appendix A should be references to appendix B.
Resolution3: Replace all references to appendix A with references to appendix B.
Revised Text3:
Disposition3: Resolved
Actions taken:
December 8, 2003: received issue
September 23, 2004: closed issue

Issue 6687: Missing operations to allow the navigation described in the PIM (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-200 Figure_2_10_arrow_readcondition
The PIM indicates by means of the arrow pointing from ReadCondition to DataReader that navigation is possible. However, the navigation is not present in the PSM (no operation, no attribute). 
Proposal: Fix by adding a get_datareader() operation to the ReadCondition. This operation should take no arguments and return a DataReader.
Ref-201 Figure_2_5_arrow_statuscondition
The PIM indicates by means of the arrow pointing from StatusCondition to Entity that navigation is possible. However, the navigation is not present in the PSM (no operation, no attribute). 
Proposal: Fix by adding get_entity() to StatusCondition. This operation should take no arguments and return an Entity.
Ref-202 Figure_2_10_arrow_topicdescription
The PIM indicates by means of the arrow pointing from DataReader to TopicDescription that navigation is possible. However, the navigation is not present in the PSM (no operation, no attribute). 
Proposal: Fix by adding get_topicdescription to DataReader. This operation should take no arguments and return a TopicDescription.
Ref-203 Figure_2_9_arrow_topic
The PIM indicates by means of the arrow pointing from DataWriter to Topic that navigation is possible. However, the navigation is not present in the PSM (no operation, no attribute). 
Proposal: Fix by adding a get_topic() operation to DataWriter. This operation should take no arguments and return a Topic.
Ref-227 Missing_navigation_operations
The PIM indicates by means of the arrow pointing from DataReader to Subscriber, from DataWriter to Publisher, and from DomainEntity to Participant that navigation is possible. However, the navigation is not present in the PSM (no operation, no attribute)
Proposal: Fix by adding DataReader::get_subscriber() (no parameters, returns a Subscriber), DataWriter::get_publisher() (no parameters, returns a Publisher), and the operations Publisher::get_participant(), Subscriber::get_participant(), and TopicDescription::get_participant() (these 3 operations should take no arguments and return a Participant). It is not necessary to add get_participant to DataReader or DataWriter because it is possible to navigate to the subscriber/publisher and from there to the participant. (The proposed operations are consolidated in the IDL sketch below.)
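For reference, the operations proposed under Ref-200 through Ref-227 consolidate into the following additions to the CORBA PSM IDL. This is a convenience sketch assembled from the proposals above; the authoritative wording and placement are in the resolution below.

interface ReadCondition {
    // ... existing operations ...
    DataReader get_datareader();
};
interface StatusCondition {
    // ... existing operations ...
    Entity get_entity();
};
interface DataReader {
    // ... existing operations ...
    TopicDescription get_topicdescription();
    Subscriber get_subscriber();
};
interface DataWriter {
    // ... existing operations ...
    Topic get_topic();
    Publisher get_publisher();
};
interface Publisher {
    // ... existing operations ...
    DomainParticipant get_participant();
};
interface Subscriber {
    // ... existing operations ...
    DomainParticipant get_participant();
};
interface TopicDescription {
    // ... existing operations ...
    DomainParticipant get_participant();
};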

Resolution: see below
Revised Text:

Summary1: Ref-200 Figure_2_10_arrow_readcondition. The PIM indicates by means of the arrow pointing from ReadCondition to DataReader that navigation is possible. However, the navigation is not present in the PSM (no operation, no attribute).
Resolution1: Add a get_datareader() operation to ReadCondition. This operation should take no arguments and return a DataReader. This corresponds to the following changes:
· add the following line to the IDL for the ReadCondition interface in section 2.2.3 (CORBA PSM IDL): DataReader get_datareader();
· add the operation get_datareader to the table in section 2.1.2.5.8
· add the subsection 2.1.2.5.8.1 with the text below: "2.1.2.5.8.1 get_datareader: This operation returns the DataReader associated with the ReadCondition. Note that there is exactly one DataReader associated with each ReadCondition."
Revised Text1: Changes in PIM
· In section 2.1.2.5.8 ReadCondition, in the table, add the following operation: get_datareader DataReader
· add a new subsection with the following content: "2.1.2.5.8.1 get_datareader: This operation returns the DataReader associated with the ReadCondition. Note that there is exactly one DataReader associated with each ReadCondition."
Changes in IDL
· In section 2.2.3 (CORBA PSM IDL), interface ReadCondition, add the following operation: DataReader get_datareader();
Disposition1: Resolved

Summary2: Ref-201 Figure_2_5_arrow_statuscondition. The PIM indicates by means of the arrow pointing from StatusCondition to Entity that navigation is possible. However, the navigation is not present in the PSM (no operation, no attribute).
Resolution2: Add a get_entity() operation to StatusCondition. This operation should take no arguments and return an Entity. This corresponds to the following changes:
· add the following line to the IDL for the StatusCondition interface in section 2.2.3 (CORBA PSM IDL): Entity get_entity();
· add the operation get_entity to the table in section 2.1.2.1.9
· add the subsection 2.1.2.1.9.2 with the text below: "2.1.2.1.9.2 get_entity: This operation returns the Entity associated with the StatusCondition. Note that there is exactly one Entity associated with each StatusCondition."
Revised Text2: Changes in PIM
· In section 2.1.2.1.9, in the table, add the following operation: get_entity Entity
· add a new subsection with the following content: "2.1.2.1.9.2 get_entity: This operation returns the Entity associated with the StatusCondition. Note that there is exactly one Entity associated with each StatusCondition."
Changes in IDL
· In section 2.2.3 DCPS PSM: IDL, interface StatusCondition, add the following operation: Entity get_entity();
Disposition2: Resolved

Summary3: Ref-202 Figure_2_10_arrow_topicdescription. The PIM indicates by means of the arrow pointing from DataReader to TopicDescription that navigation is possible. However, the navigation is not present in the PSM (no operation, no attribute).
Resolution3: Add a get_topicdescription operation to DataReader. This operation should take no arguments and return a TopicDescription. This corresponds to the following changes:
· add the following line to the IDL for the DataReader interface in section 2.2.3 (CORBA PSM IDL): TopicDescription get_topicdescription();
· add the operation get_topicdescription to the DataReader table in section 2.1.2.5.3
· add the subsection 2.1.2.5.3.15 with the text below: "2.1.2.5.3.15 get_topicdescription: This operation returns the TopicDescription associated with the DataReader. This is the same TopicDescription that was used to create the DataReader."
Revised Text3: Changes in PIM
· In section 2.1.2.5.3 DataReader, in the table, add the following operation: get_topicdescription TopicDescription
· add a new subsection with the following content: "2.1.2.5.3.15 get_topicdescription: This operation returns the TopicDescription associated with the DataReader. This is the same TopicDescription that was used to create the DataReader."
Changes in IDL
· In section 2.2.3 DCPS PSM: IDL, interface DataReader, add the following operation: TopicDescription get_topicdescription();
Disposition3: Resolved

Summary4: Ref-203 Figure_2_9_arrow_topic. The PIM indicates by means of the arrow pointing from DataWriter to Topic that navigation is possible. However, the navigation is not present in the PSM (no operation, no attribute).
Resolution4: Add a get_topic() operation to DataWriter. This operation should take no arguments and return a Topic.
Revised Text4: Changes in PIM
· In section 2.1.2.4.2 DataWriter, in the table, add the following operation: get_topic Topic
· add a new subsection with the following content: "2.1.2.4.2.15 get_topic: This operation returns the Topic associated with the DataWriter. This is the same Topic that was used to create the DataWriter."
Changes in IDL
· In section 2.2.3 DCPS PSM: IDL, interface DataWriter, add the following operation: Topic get_topic();
Disposition4: Resolved

Summary5: Ref-227 Missing_navigation_operations. The PIM indicates by means of the arrow pointing from DataReader to Subscriber, from DataWriter to Publisher, and from DomainEntity to Participant that navigation is possible. However, the navigation is not present in the PSM (no operation, no attribute).
Resolution5:
Add a get_subscriber operation on DataReader; this operation takes no parameters and returns a Subscriber. This corresponds to the following changes:
· add the following line to the IDL for the DataReader interface in section 2.2.3 (CORBA PSM IDL): Subscriber get_subscriber();
· add the operation get_subscriber to the DataReader table in section 2.1.2.5.3
· add the subsection 2.1.2.5.3.16 with the text below: "2.1.2.5.3.16 get_subscriber: This operation returns the Subscriber to which the DataReader belongs."
Add a get_publisher operation on DataWriter; this operation takes no parameters and returns a Publisher. This corresponds to the following changes:
· add the following line to the IDL for the DataWriter interface in section 2.2.3 (CORBA PSM IDL): Publisher get_publisher();
· add the operation get_publisher to the DataWriter table in section 2.1.2.4.2
· add the subsection 2.1.2.4.2.16 with the text below: "2.1.2.4.2.16 get_publisher: This operation returns the Publisher to which the DataWriter belongs."
Add a get_participant operation on Publisher; this operation takes no parameters and returns a Participant. This corresponds to the following changes:
· add the following line to the IDL for the Publisher interface in section 2.2.3 (CORBA PSM IDL): DomainParticipant get_participant();
· add the operation get_participant to the Publisher table in section 2.1.2.4.1
· add the subsection 2.1.2.4.1.12 with the text below: "2.1.2.4.1.12 get_participant: This operation returns the DomainParticipant to which the Publisher belongs."
Add a get_participant operation on Subscriber; this operation takes no parameters and returns a Participant. This corresponds to the following changes:
· add the following line to the IDL for the Subscriber interface in section 2.2.3 (CORBA PSM IDL): DomainParticipant get_participant();
· add the operation get_participant to the Subscriber table in section 2.1.2.5.2
· add the subsection 2.1.2.5.2.13 with the text below: "2.1.2.5.2.13 get_participant: This operation returns the DomainParticipant to which the Subscriber belongs."
Add a get_participant operation on TopicDescription; this operation takes no parameters and returns a Participant. This corresponds to the following changes:
· add the following line to the IDL for the TopicDescription interface in section 2.2.3 (CORBA PSM IDL): DomainParticipant get_participant();
· add the operation get_participant to the TopicDescription table in section 2.1.2.3.1
· add the subsection 2.1.2.3.1.1 with the text below: "2.1.2.3.1.1 get_participant: This operation returns the DomainParticipant to which the TopicDescription belongs."
Note: It is not necessary to add get_participant to DataReader or DataWriter because it is possible to navigate to the subscriber/publisher and from there to the participant.
Revised Text5: Changes in PIM
· In section 2.1.2.5.3 DataReader, in the table, add the following operation: get_subscriber Subscriber
· add a new subsection with the following content: "2.1.2.5.3.16 get_subscriber: This operation returns the Subscriber to which the DataReader belongs."
· In section 2.1.2.4.2 DataWriter, in the table, add the following operation: get_publisher Publisher
· add a new subsection with the following content: "2.1.2.4.2.16 get_publisher: This operation returns the Publisher to which the DataWriter belongs."
· In section 2.1.2.4.1 Publisher, in the table, add the following operation: get_participant DomainParticipant
· add a new subsection with the following content: "2.1.2.4.1.12 get_participant: This operation returns the DomainParticipant to which the Publisher belongs."
· In section 2.1.2.5.2 Subscriber, in the table, add the following operation: get_participant DomainParticipant
· add a new subsection with the following content: "2.1.2.5.2.13 get_participant: This operation returns the DomainParticipant to which the Subscriber belongs."
· In section 2.1.2.3.1 TopicDescription, in the table, add the following operation: get_participant DomainParticipant
· add a new subsection with the following content: "2.1.2.3.1.1 get_participant: This operation returns the DomainParticipant to which the TopicDescription belongs."
Changes in IDL
· In section 2.2.3 DCPS PSM: IDL
· interface DataReader: add the following operation: Subscriber get_subscriber();
· interface DataWriter: add the following operation: Publisher get_publisher();
· interface Publisher: add the following operation: DomainParticipant get_participant();
· interface Subscriber: add the following operation: DomainParticipant get_participant();
· interface TopicDescription: add the following operation: DomainParticipant get_participant();
Disposition5: Resolved
Actions taken:
December 8, 2003: received issue
September 23, 2004: closed issue

Discussion:


Issue 6705: ref-1001: section 3.1.1 (editorial) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
In the final editing process, "native-language constructs" has become "native-language data-accessing constructs". However, the mentioned constructs are not related only to accessing the data (e.g., creation of an object).
Proposal [THALES]
Remove the extra "data-accessing"

Resolution: see below
Revised Text: · Section 3.1.1 Overview and Design Rationale: The purpose of this layer is to provide more direct access to the exchanged data, seamlessly integrated with the native-language constructs.
Actions taken:
December 17, 2003: received issue
September 23, 2004: closed issue

Issue 6706: ref-1002: section 3.1.2.1 (editorial) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Starting at "A DLRL object has at least one shared attribute..." (page 3-3), the whole section was garbled in the final editing process; as a result, the content is no longer understandable.
Proposal [THALES]
Restore the wording and footprint as it was in version V67 (the last Word one)

Resolution: see below
Revised Text: Resolution: Restore the wording and footprint as it was in version V67. Concrete changes are as follows: replace the text of section 3.1.2.1, starting from "A DLRL object has at least one shared attribute..." until the next section (3.1.3.2), with the text below.
Revised Text (refer to the convenience document for precise format):

"A DLRL object has at least one shared attribute. Shared attributes are typed and can be either mono-valued or multi-valued:
· Mono-valued:
  · of a simple type:
    · basic-type (long, short, char, string, etc.);
    · enumeration-type;
    · simple structure;
  · reference to a DLRL object.
For these mono-valued attributes, type enforcement is as follows:
  · strict type equality for simple types;
  · equality based on inclusion for reference to a DLRL object (i.e., a reference to a derived object can be placed in a reference to a base object).
· Multi-valued (collection-based):
  · two collection bases of homogeneously-typed items:
    · a list (ordered with index);
    · a map (access by key).
Type enforcement for collection elements is as follows:
  · strict type equality for simple types;
  · equality based on type inclusion for references to DLRL objects (i.e., a reference to a derived object can be placed in a collection typed for base objects).
DLRL will manage DLRL objects in a cache (i.e., two different references to the same object - an object with the same identity - will actually point to the same memory location). Object identity is given by an oid (object ID) part of any DLRL object."
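For a concrete instance of this attribute taxonomy, the Track/Radar example of section 3.2.3.2 (restored under Issue 6713 below) exercises each kind of shared attribute. The IDL is the specification's own example; only the mapping comments are editorial:

valuetype Track : DLRL::ObjectRoot {
    public double x;              // mono-valued, of a simple (basic) type
    public double y;              // mono-valued, of a simple (basic) type
    public stringStrMap comments; // multi-valued: a map (access by key)
    public long w;                // mono-valued, of a simple (basic) type
    public RadarRef a_radar;      // mono-valued reference to a DLRL object
};

valuetype Radar : DLRL::ObjectRoot {
    public TrackList tracks;      // multi-valued: a list (ordered with index)
};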
Actions taken:
December 17, 2003: received issue
September 23, 2004: closed issue

Issue 6707: ref-1003: Section 3.1.3.2 (editorial) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
The last paragraph is a note, but is not in the note style.
Proposal [THALES]
Correct the footprint

Resolution: see below
Revised Text: · Section 3.1.3.2.2 Associations Note - Embedded structures are restricted to the ones that can be mapped simply at the DCPS level. For more complex ones, component objects (i.e., objects linked by a composition relation) may be used.
Actions taken:
December 17, 2003: received issue
September 23, 2004: closed issue

Issue 6708: ref-1004: Section 3.1.3.3 Metamodel (clarification) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Clarification
Severity:
Summary:
Several readers wondered whether the described metamodel should be implemented, while it is only given for descriptive purposes.
Proposal [THALES]
Add the following clarification sentence (at the end of the first paragraph of the section page 3.4):
" This metamodel is given for explanation purpose. This specification does not require that it is implemented as such."

Resolution: see below
Revised Text: Resolution: Add a clarification sentence at the end of the first paragraph of the section (page 3-4).
Revised Text: Changes in PIM
· At the end of section 3.1.3.3 Metamodel, add the following paragraph: "This metamodel is given for explanation purpose. This specification does not require that it is implemented as such."
Actions taken:
December 17, 2003: received issue
September 23, 2004: closed issue

Issue 6709: ref-1005: figure 3.2 (editorial) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
In the class "Relation", the attribute "rel_needs_class" should be named "full_oid_required" (according to the text description)
In the class "Class" In the class "Relation", the attribute "id_needs_class" should be named "full_oid_required" (according to the text description)
Proposal [THALES]
Correct the figure

Resolution: Correct the figure
Revised Text:
Actions taken:
December 17, 2003: received issue
September 23, 2004: closed issue

Issue 6710: ref-1006: Page 3.11 (editorial) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
On that page, "Class", "Attribute" and "Relation" (line 2) and "Class" (first line before section 3.1.4.4.3) should be in bold+italics, as are the other words that are identifiers.
Proposal [THALES]
Correct the words

Resolution: see below
Revised Text: Resolution: Correct the footprint for those words.
Revised Text:
· Section 3.1.4.4 Metamodel with Mapping Information: The three constructs that need added information related to the structural mapping are Class, Attribute and Relation.
· Section 3.1.4.4.2 MonoAttribute: key_fields is the name of the fields that make the key in this topic (1 or 2 depending on the Class definition);
Actions taken:
December 17, 2003: received issue
September 23, 2004: closed issue

Issue 6711: ref-1007: Section 3.1.6.3.4 CacheListener (editorial) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
In the textual description, the method "on_begin_updates" is named "start_updates" 
In the textual description, the method "on_end_updates" is named "end_updates"
Proposal [THALES]
Correct the textual description to align with the table.

Resolution: see below
Revised Text: Resolution: Correct the textual description to align with the table.
Revised Text:
· In section 3.1.6.3.4 CacheListener, in the paragraph after the table starting with "It provides the following methods":
· First bullet, replace the first word with "on_begin_updates"
· Second bullet, replace the first word with "on_end_updates"
Disposition: Resolved
Actions taken:
December 17, 2003: received issue
September 23, 2004: closed issue

Issue 6712: ref-1008: Bad annex reference (editorial) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
throughout the DLRL section, Annex C is incorrectly named Annex A
Proposal [THALES]
Change all occurrences of "cf. annex A" to "cf. annex C"

Resolution: see above
Revised Text:
Actions taken:
December 17, 2003: received issue
September 23, 2004: closed issue

Discussion:
Resolution:
Change all occurrences of "cf. annex A" to "cf. annex C" throughout the DLRL section.


Issue 6713: ref-1009: Section 3.2.3.2 IDL Model description of the example (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
in the final editing process, the IDL for the example has been truncated (cut in half)
Proposal [THALES]
restore it as it was in version V67

Resolution: see below
Revised Text: Resolution: Update the valuetype Track in section 3.2.3.2 with the two missing fields, and add the valuetype Track3D and valuetype Radar.
Revised Text: Changes
· Section 3.2.3.2 IDL Model description
· Modify valuetype Track to be:

valuetype Track : DLRL::ObjectRoot {
    public double x;
    public double y;
    public stringStrMap comments;
    public long w;
    public RadarRef a_radar;
};

· Add (after valuetype Track):

valuetype Track3D : Track {
    public double z;
};

valuetype Radar : DLRL::ObjectRoot {
    public TrackList tracks;
};
Actions taken:
December 17, 2003: received issue
September 23, 2004: closed issue

Issue 6714: ref-1010: Section 3.2.3.3 XML Model Tags of the example (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
In the final editing process,
the XML for the example has been truncated (cut in half)
the chosen font makes it difficult to read (why not use Courier New, which is a fixed-width font, as for the code examples)
there should be a blank line between the first sentence that introduces the XML description and the description itself
Proposal [THALES]
restore the contents as it was in version V67
introduce a blank line to keep it separated from the introduction

Resolution: see below
Revised Text: Resolution: Restore the contents as it was in version V67 and introduce a blank line to keep it separated from the introduction.
Revised Text: Changes
· Insert an empty line after the first paragraph in section 3.2.3.3, before the XML starts.
· Append the following at the end of the XML (the XML that precedes Figure 3-9 in section 3.2.3.3), using for the XML the same paragraph format used for IDL:

<local name="w"/>
</classMapping>
<classMapping name="Track3D">
    <mainTopic name="TRACK-TOPIC" classField="CLASS" oidField="OID"/>
    <extensionTopic name="TRACK3D-TOPIC" classField="CLASS" oidField="OID"/>
    <monoAttribute name="z">
        <valueField>Z</valueField>
    </monoAttribute>
</classMapping>
<classMapping name="Radar">
    <mainTopic name="RADAR-TOPIC" oidField="OID"/>
    <multiRelation name="tracks">
        <multiPlaceTopic name="RADARTRACKS-TOPIC" oidField="RADAR-OID" indexField="INDEX"/>
        <valueKey classField="TRACK-CLASS" oidField="TRACK-OID"/>
    </multiRelation>
</classMapping>
<associationDef>
    <relation class="Track" attribute="a_radar"/>
    <relation class="Radar" attribute="tracks"/>
</associationDef>
</Dlrl>
Actions taken:
December 17, 2003: received issue
September 23, 2004: closed issue

Issue 6715: ref-1011: Section 3.2.3.3 Introduction to figure 3.9 (editorial) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
The style applied to the introduction to figure 3.9 is not correct
The figure itself is badly placed on the page
Proposal [THALES]
Correct the footprint

Resolution: Correct the footprint
Revised Text:
Actions taken:
December 17, 2003: received issue
September 23, 2004: closed issue

Issue 6716: ref-1012: Section 3.2.3.3 Simplified XML of the example (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
In the final editing process, 
the minimum XML model tags for the example have been truncated (cut in half)
the removal of the title makes the purpose of the description less clear
Proposal [THALES]
restore the contents as it was in version V67
change the last sentence of the introduction " In this case, the XML file would be as follows" with
"In case no deviation is wanted from the default mapping, the XML description can be restricted to the following minimum:"

Resolution: see below
Revised Text: Resolution: Change the last sentence of the introduction from "In this case, the XML file would be as follows" to "In case no deviation is wanted from the default mapping, the XML description can be restricted to the following minimum:". Append the missing part of the XML to the existing one.
Revised Text: Changes
· In section 3.2.3.3, just before the XML, change the last sentence of the introduction from "In this case, the XML file would be as follows" to "In case no deviation is wanted from the default mapping, the XML description can be restricted to the following minimum:"
· Append the following XML at the end of the existing one:

<local name="w"/>
</classMapping>
<associationDef>
    <relation class="Track" attribute="a_radar"/>
    <relation class="Radar" attribute="tracks"/>
</associationDef>
</Dlrl>
Actions taken:
December 17, 2003: received issue
September 23, 2004: closed issue

Issue 6717: ref-1013: Section 3.1.6.3.9 Table for ObjectQuery (editorial) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
attribute "parameter" is stated as of type "string[}" instead "string []"
Proposal [THALES]
correct the table

Resolution: see below
Revised Text: Resolution: Change the type of the attribute "parameters" from "string[}" to "string []".
Revised Text:
· In section 3.1.6.3.9 ObjectQuery, in the table, change the entry for the attribute "parameters" to the following: parameters string []
Actions taken:
December 17, 2003: received issue
September 23, 2004: closed issue

Issue 6718: ref-1014: Page 3-10, figure 3-2 min_topic (editorial) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
page 3-10: In figure 3-2 the class attribute min_topic should be main_topic
Proposal [THALES]
correct the figure

Resolution: correct the figure
Revised Text:
Actions taken:
December 17, 2003: received issue
September 23, 2004: closed issue

Issue 6719: ref-1015: Page 3-62 manual edition (editorial) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
page 3-62: Just after the XML listing: manual edition should be manual editing
Proposal [THALES]
correct the sentence

Resolution: see below
Revised Text: Resolution: Change the sentence to the following: "It should be noted that XML is not suitable for manual editing."
Revised Text:
· In section 3.2.3.3, after the XML, replace the first sentence "It should be noted that XML is not suitable for manual edition." with "It should be noted that XML is not suitable for manual editing."
Actions taken:
December 17, 2003: received issue
September 23, 2004: closed issue

Issue 6720: ref-1016: Page 3-65 t2 (editorial) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
page 3-65: 4th line from below: t3->z(3000.0); t3 should be t2
Proposal [THALES]
correct the example

Resolution: see below
Revised Text: Resolution: Apply those 3 changes.
Revised Text:
· Section 3.2.3.5 Code Example
· Replace:
t2->a-radar->put(r1); // modifies r1->tracks accordingly
t3->z(3000.0);
t2->a-radar->put(r1); // modifies r1->tracks accordingly
with:
t2->a_radar->put(r1); // modifies r1->tracks accordingly
t2->z(3000.0);
t2->a_radar->put(r1); // modifies r1->tracks accordingly
Actions taken:
December 17, 2003: received issue
September 23, 2004: closed issue

Issue 6721: ref-1017: Section 3.1.4.4.2 topic (editorial) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
1st bullet of section 3.1.4.4.2 (MonoAttribute): Class::topic should be Class::main_topic
Proposal [THALES]
correct the wording

Resolution: see below
Revised Text: Resolution: Correct the wording.
Revised Text:
· In section 3.1.4.4.2 MonoAttribute, first bullet, replace "Class::topic" with "Class::main_topic"
Disposition: Resolved
Actions taken:
December 17, 2003: received issue
September 23, 2004: closed issue

Issue 6722: ref-1018: Name of the methods for ObjectListener (editorial) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
In section 3.1.6.3.6 (ObjectListener) the method names used in the bulleted descriptions do not correspond to the names used in the Table
page 3-38: section 3.1.6.4.2 (Object Creation), first bullet: on_object_created is mentioned two times; according to figure 3-4 this should be on_new_object.
Proposal [THALES]
correct the table (the bullets are in accordance with the IDL)
correct the figure (on_object_created is the name of the operation in other places, including IDL)

Resolution: see below
Revised Text: Resolution: Apply everywhere the names "on_object_created", "on_object_modified" and "on_object_deleted".
Revised Text: Changes
· In section 3.1.6.3.6, in the table:
· replace "on_created_object" with "on_object_created"
· replace "on_modified_object" with "on_object_modified"
· replace "on_deleted_object" with "on_object_deleted"
· In Figure 3-4 ObjectListener, replace "on_new_object" with "on_object_created"
Actions taken:
December 17, 2003: received issue
September 23, 2004: closed issue

Issue 6723: ref-1019: Name of the ObjectRoot::clone method (editorial) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
in section 3.1.6.3.11 the clone method is named clone_object in the text explanation.
Proposal [THALES]
correct the text (everywhere else, the method is named clone)

Resolution: see below
Revised Text: Resolution: Correct the text from "clone_object" to "clone" (everywhere else, the method is named "clone").
Revised Text: Changes
· In section 3.1.6.3.11, in the text starting with "it offers methods", first bullet, replace the first sentence "create a copy of the object and attach it to a CacheAccess (clone_object)" with "create a copy of the object and attach it to a CacheAccess (clone)"
Disposition: Resolved
Actions taken:
December 17, 2003: received issue
September 23, 2004: closed issue

Issue 6729: Additional_communication_paradigms (data-distribution-ftf)

Nature: Enhancement
Severity: Significant
Summary:
Issue# 2010 Additional_communication_paradigms Issue [Boeing SOSCOE]
- In addition to the Data-Distribution model, our applications also need two basic communication models: Point2Point and GroupCommunications.
- The APIs and PIM defined in DDS would fit those communication models well. That is, the concept of typed DataReaders and DataWriters that are configured by means of QoS and that interact with the user-level application by means of listeners and conditions is applicable to Point2Point and GroupCommunications as well.
- It would therefore be useful to introduce extensions such that the application can use these additional communication models in a way that fits naturally with the data-distribution PIM.
- In Boeing SOSCOE's applications, Point2Point communications:
  • Represent 1-to-1 bi-directional communication channels, similar to UNIX "pipes", that can be configured by means of QoS.
  • Are "connection-oriented" in the sense that each endpoint must explicitly establish the "connection" and is made aware if the "connection" is broken.
  • Allow the application to read and write typed data to the other endpoint. Each "writer" on one side of the connection communicates with the corresponding "reader" at the other end. In general each writer must be matched by a reader at the opposite end; otherwise it is a configuration error.
  • Allow prioritization among the data written by different writers by means of QoS.
  • Allow both synchronous and asynchronous writes.
  • Support the concept of application-level acknowledgements or "transactional" messaging, by which the writing application can receive notification that the reading application has received the message and has positively acted on it.
  • Support the classic "client-connect versus server-listen/accept" pattern, such that the "server" side can establish multiple dedicated point-to-point connections to each "client" that requests a connection.
  • Are not generally expected to require filters; however, in order to keep the same API, filters should be allowed. The middleware should automatically give positive acknowledgement of reliable or transactional data that has been filtered out.
  • Make it an error for a DataWriter to write a Topic that does not have a corresponding DataReader; the write() call should return a special error code indicating there is no matching DataReader.
- In Boeing SOSCOE's applications, GroupCommunications:
  • Provide the capability for a group of peer applications to organize themselves into a "group", such that each member of the group is aware of the presence of all other members and can send messages directed to one specific peer, all the peers in the group, or a subset of the members of the group.
  • Provide some means to control group membership.
  • Identify each member of the group by some "ID", such that other members can refer to it and direct messages to it.
  • Provide a serialized view of membership and delivery of messages such that: the writer knows the membership when it sends each message, and anybody that does not belong to that membership will not get the message; all members of the group have the same view of the membership for each message delivered to them.
  • Do not need to provide total order or even agreed order.
  • Act as a "live" group, in that messages are only delivered to the members that are present when the message is sent. In other words, the group does not store messages on behalf of future members.
  • Make it OK to write with a DataWriter that does not have a corresponding DataReader on some of the other group members; the data is considered acknowledged for RELIABLE, but not for the purpose of TRANSACTIONAL (ref issue# 2060).
Proposal [Boeing SOSCOE]
- Consider introducing a more primitive concept for a group of endpoints, from which the following classes derive: Publisher, Subscriber, EndpointConnector, and GroupConnector.
- This base class could be called "Connector", though it does not really mean a "connection" in the "TCP" sense; rather, it means the "connectivity" into the middleware services.
- All these "Connectors" act as factories for DataReader and DataWriter entities (which represent the endpoints).
- The type of the DataReader and DataWriter created from each kind of "Connector" is the same (even though they will act differently). This is because the DataReader and DataWriter are typed facades used to write a specific data-type, and it is not desirable to have to create (by means of implied IDL) different types for each kind of connector.
- The EndpointConnectors and GroupConnectors are informed of connections/disconnections by means of Listeners with "onConnect" and "onDisconnect" operations.
- For Point2Point communication, the factory of DataReader and DataWriter entities would be the EndpointConnector:
  • To match the two EndpointConnector objects that should be "hooked up", the application could use either a Topic, a more general matching of "attribute-value" pairs, or a combination of the above.
  • To aid in the establishment of many point-to-point connectors using the client-connect, server-listen/accept pattern, the EndpointConnector could use an auxiliary "ServerConnector" and corresponding listeners that would inform it of the fact that clients are attempting a connection.
- For Group communications, the factory of DataReader and DataWriter entities would be the GroupConnector:
  • The same generic mechanism used by the EndpointConnector should be used to identify the group that the GroupConnector entities are joining; that is, either a Topic, a more general matching of "attribute-value" pairs, or a combination of the above.
Comments [RTI]
- To avoid confusion it might be a good idea to propose a name other than "connector" for the base class, and also to avoid the terms "client" and "server", which are often associated with the pattern of communications used by CORBA and RMI.
- Point2Point communications would never use the KEEP_LAST QoS; they would use KEEP_ALL. They would also always have DURABILITY TRANSIENT. In a sense, messages are sent to the other end "immediately" and held only as long as is necessary to ensure the QoS (e.g. RELIABLE or TRANSACTIONAL); afterwards they are removed from the middleware.
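For illustration only, the class relationships described in the proposal can be summarized in the following C++ sketch; the class and operation names (Connector, EndpointConnector, GroupConnector, onConnect/onDisconnect) come from the proposal itself, while the skeletal signatures are assumptions of this sketch, not specification text.

    class DataReader;   // existing DDS entities; their types would not change
    class DataWriter;

    // Proposed base class: "connectivity" into the middleware services,
    // not a TCP-style connection. All Connectors act as factories for the
    // (unchanged) DataReader and DataWriter entities.
    class Connector {
    public:
        virtual ~Connector() {}
        virtual DataWriter* create_datawriter() = 0;   // signature assumed
        virtual DataReader* create_datareader() = 0;   // signature assumed
    };

    class Publisher         : public Connector {};  // existing publish-subscribe
    class Subscriber        : public Connector {};
    class EndpointConnector : public Connector {};  // 1-to-1 "pipe" endpoints
    class GroupConnector    : public Connector {};  // peer-group communication

    // Connection/disconnection events, per the proposal:
    class ConnectorListener {
    public:
        virtual ~ConnectorListener() {}
        virtual void onConnect(Connector& c) = 0;
        virtual void onDisconnect(Connector& c) = 0;
    };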

Resolution:
Revised Text:
Actions taken:
December 18, 2003: received issue
September 23, 2004: closed no change

Discussion:
Point-to-point communications and Group communications are not included in the Data-Distribution specification and their inclusion would be beyond the scope of the FTF. Other OMG specifications address Group communications and one-way messaging as extensions to the CORBA remote-method-invocation model. However, it appears that what is requested here is a "messaging" API, so this issue may be better addressed by means of an RFP.


Issue 6730: Attributes_on_a_topic_description (data-distribution-ftf)

Nature: Enhancement
Severity: Significant
Summary:
Attributes_on_a_topic_description Issue [Boeing SOSCOE]
- Some use-cases need more control over the association of DataReaders and DataWriters, beyond the control offered by means of the Topic.
- For example, SOSCOE may need to add a layer on top of the DDS API to offer additional customized services to the applications that sit on top. These services may include the propagation of identity-certificates, security, dynamic selection of the "best" DataWriter to use for a given Topic, etc.
- These mechanisms would like to benefit from the propagation of Topics and QoS that DDS offers, and extend that mechanism with additional information that can then be used by the SOSCOE layer to provide these additional services.
- However, the matching that DDS offers is performed only on the topic name. It would be useful to have a more extensible mechanism to allow matching on other application-defined attributes.
- These attributes are an expansion of the topic_name that appears in the TopicDescription. In addition to the string (topic_name) we could have a set of attribute-name/value pairs that could then be used to match the writers with readers. In a sense the topic_name would be a singular, mandatory attribute used for matching, but there could be others.
- These attributes are not mutable.
- SOSCOE has created a provider property class that allows applications to have attributes with typed values; it becomes a triplet (name, type, value). Currently the "type" only supports simple types, but the intent is to extend it to well-known structured types.
- For example, the Topic "weather data" would describe a general topic, but there may be an attribute that describes the region (e.g. "North America"), and the subscriber can specify that they want weather-data but only over "North America".
Proposal [Boeing SOSCOE]
- One approach would be to add a set of "name-value" attributes to the TopicDescription. Matching would be done not only on the topic-name but also on the remaining attributes. In a sense, the topic name is just one of the attributes that must be matched between the Topic that is published and the one that is subscribed.

Resolution: see below
Revised Text:
Resolution: Resolution of this issue as stated would necessitate the use of value-types, or else extensions to the IDL language, to support the ("attribute-name", "attribute-value") pairs. However, the specification can be modified to allow the application to implement this in "user code". The FTF resolved to add two facilities to support this:
· A QoS similar to USER_DATA on the Topic could be used by SOSCOE to "stuff" the name-value pairs.
· Additional listener operations DataReaderListener::on_remote_publication_match and DataWriterListener::on_remote_subscription_match would be called when the infrastructure discovers a match between the local reader/writer entity and the remote writer/reader entities. These listeners would have access to the QoS of the local and remote entities and would use the information stored in the TOPIC_DATA to implement any matching policy desired by the SOSCOE layer. In particular, they could use the ignore_xxx operations to prevent the association between the local reader/writer entity and the remote writer/reader entities.
The resolution of issue 7066 introduced the TOPIC_DATA QoS on the topic, containing an octet sequence which can be used for this purpose. So to resolve this issue it is only necessary to introduce the aforementioned listeners.
Revised Text:
Changes in PIM
· Section 2.1.2.2.3 DomainParticipantListener Interface, DomainParticipantListener table: add the following rows:
    on_publication_match    void
        the_writer    DataWriter
        status        PublicationMatchStatus
    on_subscription_match   void
        the_reader    DataReader
        status        SubscriptionMatchStatus
· Section 2.1.2.4.4 DataWriterListener Interface, DataWriterListener table: add the following row:
    on_publication_match
        the_writer    DataWriter
        status        PublicationMatchStatus
· Section 2.1.2.5.7 DataReaderListener Interface, DataReaderListener table: add the following row:
    on_subscription_match
        the_reader    DataReader
        status        SubscriptionMatchStatus
· Add after the paragraph "Since a DataReader is a kind of Entity ...": "The operation on_subscription_match is intended to inform the application of the discovery of DataWriter entities that match the DataReader. Some implementations of the Service may not propagate this information; in that case the DDS specification does not require this listener operation to be called."
· Section 2.1.4.1 Communication Status, Status table: add rows for SUBSCRIPTION_MATCH and PUBLICATION_MATCH. The resulting table follows (Entity, Status Name: Meaning):
    Topic
        INCONSISTENT_TOPIC: Another topic exists with the same name but different characteristics.
    Subscriber
        DATA_ON_READERS: New information is available.
    DataReader
        SAMPLE_REJECTED: A (received) sample has been rejected.
        LIVELINESS_CHANGED: The liveliness of one or more DataWriter that were writing instances read through the DataReader has changed. Some DataWriter have become "active" or "inactive".
        REQUESTED_DEADLINE_MISSED: The deadline that the DataReader was expecting through its QosPolicy DEADLINE was not respected for a specific instance.
        REQUESTED_INCOMPATIBLE_QOS: A QosPolicy value was incompatible with what is offered.
        DATA_AVAILABLE: New information is available.
        SAMPLE_LOST: A sample has been lost (never received).
        SUBSCRIPTION_MATCH: The DataReader has found a DataWriter that matches the Topic and has compatible QoS.
    DataWriter
        LIVELINESS_LOST: The liveliness that the DataWriter has committed through its QosPolicy LIVELINESS was not respected; thus DataReader entities will consider the DataWriter as no longer "active".
        OFFERED_DEADLINE_MISSED: The deadline that the DataWriter has committed through its QosPolicy DEADLINE was not respected for a specific instance.
        OFFERED_INCOMPATIBLE_QOS: A QosPolicy value was incompatible with what was requested.
        PUBLICATION_MATCH: The DataWriter has found a DataReader that matches the Topic and has compatible QoS.
· Status contents table: add rows for PublicationMatchStatus and SubscriptionMatchStatus (Attribute: meaning):
    PublicationMatchStatus
        total_count: Total cumulative count of the times the concerned DataWriter discovered a "match" with a DataReader; that is, it found a DataReader for the same Topic with a requested QoS that is compatible with that offered by the DataWriter.
        total_count_change: The change in total_count since the last time the listener was called or the status was read.
        last_subscription_handle: Handle to the last DataReader that matched the DataWriter, causing the status to change.
    SubscriptionMatchStatus
        total_count: Total cumulative count of the times the concerned DataReader discovered a "match" with a DataWriter; that is, it found a DataWriter for the same Topic with an offered QoS that is compatible with that requested by the DataReader.
        total_count_change: The change in total_count since the last time the listener was called or the status was read.
        last_publication_handle: Handle to the last DataWriter that matched the DataReader, causing the status to change.
· Update Figure 2-17 with the additional operations on the listeners (the resulting figure is not reproduced in this archive).
· Section 2.1.2.4.2 DataWriter Class, DataWriter table: add operation:
    get_matched_subscription_data    ReturnCode_t
        inout: subscription_data    SubscriptionBuiltinTopicData
        subscription_handle         InstanceHandle_t
· Add Section 2.1.2.4.2.20 get_matched_subscription_data: "This operation retrieves information on a subscription that is currently "associated" with the DataWriter; that is, a subscription with a matching Topic and compatible QoS that the application has not indicated should be "ignored" by means of the DomainParticipant ignore_subscription operation. The subscription_handle must correspond to a subscription currently associated with the DataWriter; otherwise the operation will fail and return PRECONDITION_NOT_MET. The operation get_matched_subscriptions can be used to find the subscriptions that are currently matched with the DataWriter. The operation may also fail if the infrastructure does not hold the information necessary to fill in the subscription_data; in this case the operation will return UNSUPPORTED."
· Section 2.1.2.5.3 DataReader Class, DataReader table: add operation:
    get_matched_publication_data    ReturnCode_t
        inout: publication_data    PublicationBuiltinTopicData
        publication_handle         InstanceHandle_t
· Add Section 2.1.2.5.3.22 get_matched_publication_data: "This operation retrieves information on a publication that is currently "associated" with the DataReader; that is, a publication with a matching Topic and compatible QoS that the application has not indicated should be "ignored" by means of the DomainParticipant ignore_publication operation. The publication_handle must correspond to a publication currently associated with the DataReader; otherwise the operation will fail and return PRECONDITION_NOT_MET. The operation get_matched_publications can be used to find the publications that are currently matched with the DataReader. The operation may also fail if the infrastructure does not hold the information necessary to fill in the publication_data; in this case the operation will return UNSUPPORTED."
Changes in IDL
· After the definition of InstanceHandle_t add:
    typedef sequence<InstanceHandle_t> InstanceHandleSeq;
· At the end of the StatusKind definitions add:
    const StatusKind PUBLICATION_MATCH_STATUS  = 0x0001 << 13;
    const StatusKind SUBSCRIPTION_MATCH_STATUS = 0x0001 << 14;
· Add (after struct RequestedIncompatibleQosStatus):
    struct PublicationMatchStatus {
        long total_count;
        long total_count_change;
        InstanceHandle_t last_subscription_handle;
    };
    struct SubscriptionMatchStatus {
        long total_count;
        long total_count_change;
        InstanceHandle_t last_publication_handle;
    };
· Interface DataWriterListener: add operation:
    void on_publication_match(in DataWriter writer, in PublicationMatchStatus status);
· Interface DataReaderListener: add operation:
    void on_subscription_match(in DataReader reader, in SubscriptionMatchStatus status);
· Interface DataWriter: add operation:
    ReturnCode_t get_matched_subscription_data(
        inout SubscriptionBuiltinTopicData subscription_data,
        in InstanceHandle_t subscription_handle);
· Interface DataReader: add operation:
    ReturnCode_t get_matched_publication_data(
        inout PublicationBuiltinTopicData publication_data,
        in InstanceHandle_t publication_handle);
Disposition: Resolved
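As an illustration of the intended usage, the following C++ sketch shows an application-defined matching policy built on the new listener, assuming the OMG IDL-to-C++ mapping of the interfaces above; the class name, the tag_accepted helper, the OctetSeq typedef, and the presence of a topic_data field in SubscriptionBuiltinTopicData (per the TOPIC_DATA QoS of issue 7066) are assumptions of this sketch.

    // Hypothetical application listener that vetoes discovered subscriptions
    // whose TOPIC_DATA does not carry an expected application-defined tag.
    class MatchingWriterListener : public virtual DDS::DataWriterListener {
    public:
        explicit MatchingWriterListener(DDS::DomainParticipant_ptr participant)
            : participant_(participant) {}

        virtual void on_publication_match(DDS::DataWriter_ptr writer,
                                          const DDS::PublicationMatchStatus& status) {
            DDS::SubscriptionBuiltinTopicData data;
            // Retrieve the built-in topic data of the last matched subscription.
            if (writer->get_matched_subscription_data(
                    data, status.last_subscription_handle) != DDS::RETCODE_OK)
                return;  // infrastructure could not provide the data (UNSUPPORTED)
            // Application-defined inspection of the name-value pairs "stuffed"
            // into the octet sequence of the TOPIC_DATA QoS.
            if (!tag_accepted(data.topic_data.value))
                participant_->ignore_subscription(status.last_subscription_handle);
        }
        // ... remaining DataWriterListener operations omitted for brevity ...

    private:
        bool tag_accepted(const DDS::OctetSeq& value);  // hypothetical decoder
        DDS::DomainParticipant_ptr participant_;
    };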
Actions taken:
December 18, 2003: received issue
September 23, 2004: closed issue

Issue 6731: Extension_to_the_partition_qos (data-distribution-ftf)

Nature: Enhancement
Severity: Significant
Summary:
Issue# 2030 Extension_to_the_partition_qos Issue [Boeing SOSCOE]
- The DDS specification provides the means for an application to configure the "connectivity" of DataReaders and DataWriters by setting the PARTITION QoS on the corresponding Publisher and Subscriber.
- Partitions therefore provide a means for publishers and subscribers to restrict the associations that can be established between the readers and writers they contain.
- This facility is useful, but the fact that the matching is done by strict string comparison of the partition-name strings can be limiting.
- It would be desirable to have something more extensible, like "name-value" pairs, and some more flexible expression language (e.g. a filter expression) to indicate the matching, beyond pure string matching.
- SOSCOE has created a provider property class that allows applications to have attributes with typed values; it becomes a triplet (name, type, value). Currently the "type" only supports simple types, but the intent is to extend it to well-known structured types (ref Issue#2035).
- A potential use case is to use name-value pairs to identify the source of the information or a distribution restriction: things like AREA_NAME, SENSOR_GROUP, etc., each with its value.
- This is analogous to the attribute-value pairs on the topic, except that they apply to the Publisher/Subscriber or EndpointConnectors.
Proposal [Boeing]
- Add a set of name-value pairs to the Publisher and Subscriber (in fact to all Connectors), which can then be used to determine whether the endpoints contained in the connectors should communicate.
- The attributes in the Connector can be used to locate providers of interest, or somehow the "best" provider, where "best" can be specific to each peer Connector.
- The query language described in Appendix A would not be sufficient for SOSCOE's needs, due to the need for function expressions and, at some future time, the ability to support well-known structured types (ref Issue#2035).

Resolution: see below
Revised Text:
Resolution: Similar to issue 6730, resolution of this issue as stated would require the use of value-types or else extensions to IDL to express name-value pairs. However, it is possible to offer facilities that allow the implementation of this feature in application code. The FTF resolved to add a GROUP_DATA QoS to Publisher and Subscriber. The contents are an octet sequence (like the USER_DATA QoS) and are propagated with the built-in topics for DataWriter/DataReader. The application can examine the GROUP_DATA and implement the customized logic using the same facilities introduced to address issue 6730.
Revised Text:
Changes in PIM
· Section 2.1.3 Supported QoS, QoS table: add policy "GROUP_DATA":
    QosPolicy: GROUP_DATA
    Value: a sequence of octets
    Meaning: User data not known by the middleware, but distributed by means of built-in topics (cf. Section ). The default value is an empty (zero-sized) sequence.
    Concerns: Publisher, Subscriber
    RxO: No
    Changeable: Yes
· Add section 2.1.3.3 (changes the numbers of the subsections that follow):
    2.1.3.3 GROUP_DATA
    The purpose of this QoS is to allow the application to attach additional information to the created Publisher or Subscriber. The value of the GROUP_DATA is available to the application on the DataReader and DataWriter entities and is propagated by means of the built-in topics. This QoS can be used by an application in combination with the DataReaderListener and DataWriterListener to implement matching policies similar to those of the PARTITION QoS, except that the decision can be made based on an application-defined policy.
· Section 2.1.5 Built-in Topics:
    · Table with QoS of built-in Subscriber and DataReader objects: add row:
        GROUP_DATA    <unspecified>
    · Table with types for built-in topics:
        · For Topic name = "DCPSPublication" add row:
            group_data    GroupDataQosPolicy    Policy of the Publisher to which the DataWriter belongs.
        · For Topic name = "DCPSSubscription" add row:
            group_data    GroupDataQosPolicy    Policy of the Subscriber to which the DataReader belongs.
Changes in IDL
· Add (after const string TOPICDATA_QOS_POLICY_NAME = "TopicData";):
    const string GROUPDATA_QOS_POLICY_NAME = "GroupData";
· Add (after const QosPolicyId_t TOPICDATA_QOS_POLICY_ID = 18;):
    const QosPolicyId_t GROUPDATA_QOS_POLICY_ID = 19;
· Add (after struct TopicDataQosPolicy { ... };):
    struct GroupDataQosPolicy {
        sequence<octet> value;
    };
· struct PublisherQos: add member:
    GroupDataQosPolicy group_data;
· struct SubscriberQos: add member:
    GroupDataQosPolicy group_data;
· struct PublicationBuiltinTopicData: add member:
    GroupDataQosPolicy group_data;
· struct SubscriptionBuiltinTopicData: add member:
    GroupDataQosPolicy group_data;
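As an illustration, the following C++ sketch shows how an application layer could encode name-value pairs into the new GROUP_DATA octet sequence, assuming the IDL-to-C++ mapping; the flat "name=value;" encoding and the helper name are illustrative, not part of the resolution.

    #include <string>

    // Illustrative flat encoding: append one "name=value;" pair to the octet
    // sequence carried by the GROUP_DATA QoS of a Publisher.
    void append_group_data(DDS::PublisherQos& qos,
                           const std::string& name, const std::string& value) {
        const std::string pair = name + "=" + value + ";";
        const unsigned int old_len = qos.group_data.value.length();
        qos.group_data.value.length(old_len + pair.size());
        for (unsigned int i = 0; i < pair.size(); ++i)
            qos.group_data.value[old_len + i] = pair[i];
    }

    // Usage sketch (participant is an existing DDS::DomainParticipant):
    //   DDS::PublisherQos pqos;
    //   participant->get_default_publisher_qos(pqos);
    //   append_group_data(pqos, "AREA_NAME", "North America");
    //   DDS::Publisher_var pub = participant->create_publisher(pqos, NULL);

The matching DataReaderListener/DataWriterListener can then decode the same pairs from the built-in topic data, as outlined for issue 6730.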
Actions taken:
December 18, 2003: received issue
September 23, 2004: closed issue

Issue 6735: Transactional_reliability Issue (data-distribution-ftf)

Nature: Enhancement
Severity: Significant
Summary:
Issue# 2060 Transactional_reliability Issue [Boeing SOSCOE]
- For the case of Point2Point and Group communications, there is a need to have "application-level" reliability, where the sender can find out (1) whether a message was delivered to the receiving application(s) and (2) whether it was successfully processed by the receiving application(s).
- SOSCOE uses the name "TRANSACTIONAL" to refer to this kind of application-level message-processed confirmation.
- This would only make sense for HISTORY QoS KEEP_ALL. TRANSACTIONAL and KEEP_LAST would be considered inconsistent QoS settings.
Proposal [Boeing SOSCOE]
- Add the kind TRANSACTIONAL to the RELIABILITY QoS policy.
- Add DataReader::set_transaction_status (or some operation to allow the receiving application to accept or reject a transactional message).
- Add listener operations to the DataWriter to get notified of the transactional status. Potentially also add a method to the DataWriter to query the transactional status.
- Add a DataWriter::rewrite(WriteMessageID). This only applies to EndpointConnectors or GroupConnectors that have the transactional QoS set. This handles the case where the message was transmitted to the receiving end and put on the reader queue successfully, but the reading application did not set the transactional status indicating that the message was processed. The rewrite() is a convenience so that the writing application does not have to re-create the message; the infrastructure treats it as a message that needs to be sent again because the reader has presumably erased it from its queues.

Resolution: duplicate, close issue
Revised Text:
Actions taken:
December 18, 2003: received issue
September 23, 2004: closed issue

Discussion:
This issue depends on the Point-to-Point communications model proposed in issue 6729.


Issue 6736: Writer_notification_of_delivery_failure Issue (data-distribution-ftf)

Nature: Enhancement
Severity: Significant
Summary:
Issue# 2070 Writer_notification_of_delivery_failure Issue [Boeing SOSCOE]
- This requirement applies to Point2Point and Group communications, not to Pub/Sub.
- In the case where the QoS is set to RELIABLE or TRANSACTIONAL, there is a need for the application to be notified of delivery failures and also to find out the delivery status of individual messages.
- The application needs to get delivery confirmation with the granularity of a single message.
- Notification of delivery can be either synchronous (wait for delivery) or asynchronous (notification via listener).
Proposal [Boeing SOSCOE]
- Add DataWriterListener::on_sent_data_lost and DataWriterListener::on_sent_data_received listener methods to notify the sender asynchronously of the reliable message send status.
- Add UserListenerData for use in transactional and reliable processing. This data is specified on each write() and given back to the user through the DataWriterListener::on_sent_data_lost and DataWriterListener::on_sent_data_received methods, to help the user determine what should be done with the data.
- Provide a way to get the ID of each message written (WriteMessageID). This can be used by the user to learn more about the message that was lost or received, and is used by the system to track transactional and reliable messages that might have to be re-sent. This ID is passed to the on_sent_data_lost and on_sent_data_received listener methods and could become invalid at the end of those methods (the system must have some consistent way of determining when to clean up).
Comment [RTI]
- In order for the application to find out the status of an individual message, it needs some way to identify each message. Currently the DDS specification does not provide such a mechanism.
- There are several ways to extend DDS to provide message-identification functionality:
  • One way would be to change the return value of DataWriter::write() to return some kind of message ID or handle that can be used to refer to that message, or alternatively to return the message ID as an out parameter. However, this approach may complicate the application, which would now need to track these IDs.
  • Another would be to add an operation to DataWriter to get the message ID of the last message written. This would have the advantage that it does not require a change to the API, just an extension. However, it would be potentially less efficient, and also not safe if multiple application threads are using the same DataWriter to send information.
  • Another approach is that the message ID would only be available by means of the callback.
  • With regard to the UserListenerData, it appears it needs to be either a parameter to the write call, or else something that could be set using a separate call. The first would clearly be more efficient but increases the complexity of the write API.

Resolution: duplicate, close
Revised Text:
Actions taken:
December 18, 2003: received issue
September 23, 2004: closed issue

Discussion:
This issue depends on the additional communication models proposed in issue 6729.


Issue 6738: Navigation_of_connectivity_information Issue (data-distribution-ftf)

Nature: Enhancement
Severity: Significant
Summary:
Issue# 2085 Navigation_of_connectivity_information Issue [Boeing SOSCOE]
- There is no way to use the DDS API to find some of the information that is available internally to the service.
- For example, there is no way to determine and enumerate the remote readers that are "associated" with a local writer, or the remote writers associated with a local reader.
- The DDS specification does provide a way to "name/identify" the remote entities. This is the DCPSKey that appears as part of the fields of the built-in topics, which are the ones used to access discovery information on remote entities.
Proposal [Boeing SOSCOE]
- Add iterators to the DataReader and DataWriter entities that allow enumerating the remote entities (identified by DCPSKey) associated with them.
- Add helper operations that allow determining the QoS associated with remote entities (identified by DCPSKey).

Resolution: see below
Revised Text:
Resolution: The FTF resolved to add operations to the DataWriter and the DataReader to allow the application to "navigate the connectivity information."
Added DataWriter operations:
· get_matched_subscriptions
· get_publication_match_status
· get_matched_subscription_data
Added DataReader operations:
· get_matched_publications
· get_subscription_match_status
· get_matched_publication_data
Revised Text:
Changes in PIM
· Figure 2-9: add the new DataWriter operations (the resulting figure is not reproduced in this archive).
· Figure 2-10: add the new DataReader operations (figure not reproduced).
· Figure 2-13: add PublicationMatchStatus and SubscriptionMatchStatus (figure not reproduced).
· Figure 2-19: include the additional operations on the DataReader (figure not reproduced).
· Section 2.1.2.4.2 DataWriter Class, DataWriter table: add operations:
    get_publication_match_status    PublicationMatchStatus
    get_matched_subscription_data   ReturnCode_t
        inout: subscription_data    SubscriptionBuiltinTopicData
        subscription_handle         InstanceHandle_t
    get_matched_subscriptions      ReturnCode_t
        inout: subscription_handles InstanceHandle_t []
· Add Section 2.1.2.4.2.17 get_publication_match_status: "This operation allows access to the PUBLICATION_MATCH communication status. Communication statuses are described in Section 2.1.4.1."
· Add Section 2.1.2.4.2.21 get_matched_subscription_data: "This operation retrieves information on a subscription that is currently "associated" with the DataWriter; that is, a subscription with a matching Topic and compatible QoS that the application has not indicated should be "ignored" by means of the DomainParticipant ignore_subscription operation. The subscription_handle must correspond to a subscription currently associated with the DataWriter; otherwise the operation will fail and return PRECONDITION_NOT_MET. The operation get_matched_subscriptions can be used to find the subscriptions that are currently matched with the DataWriter. The operation may also fail if the infrastructure does not hold the information necessary to fill in the subscription_data; in this case the operation will return UNSUPPORTED."
· Add Section 2.1.2.4.2.22 get_matched_subscriptions: "This operation retrieves the list of subscriptions currently "associated" with the DataWriter; that is, subscriptions that have a matching Topic and compatible QoS that the application has not indicated should be "ignored" by means of the DomainParticipant ignore_subscription operation. The operation may fail if the infrastructure does not locally maintain the connectivity information."
· Section 2.1.2.5.3 DataReader Class, DataReader table: add operations:
    get_subscription_match_status   SubscriptionMatchStatus
    get_matched_publication_data    ReturnCode_t
        inout: publication_data     PublicationBuiltinTopicData
        publication_handle          InstanceHandle_t
    get_matched_publications       ReturnCode_t
        inout: publication_handles  InstanceHandle_t []
· Add Section 2.1.2.5.3.23 get_subscription_match_status: "This operation allows access to the SUBSCRIPTION_MATCH communication status. Communication statuses are described in Section 2.1.4.1."
· Add Section 2.1.2.5.3.31 get_matched_publication_data: "This operation retrieves information on a publication that is currently "associated" with the DataReader; that is, a publication with a matching Topic and compatible QoS that the application has not indicated should be "ignored" by means of the DomainParticipant ignore_publication operation. The publication_handle must correspond to a publication currently associated with the DataReader; otherwise the operation will fail and return PRECONDITION_NOT_MET. The operation get_matched_publications can be used to find the publications that are currently matched with the DataReader. The operation may fail if the infrastructure does not locally maintain the connectivity information."
· Add Section 2.1.2.5.3.32 get_matched_publications: "This operation retrieves the list of publications currently "associated" with the DataReader; that is, publications that have a matching Topic and compatible QoS that the application has not indicated should be "ignored" by means of the DomainParticipant ignore_publication operation. The operation may fail if the infrastructure does not locally maintain the connectivity information."
· Section 2.1.2.5.7 DataReaderListener interface, DataReaderListener table: add operation:
    on_subscription_match
        the_reader    DataReader
        status        SubscriptionMatchStatus
· Section 2.1.2.2.3 DomainParticipantListener interface, DomainParticipantListener table: add operations:
    on_publication_match    void
        the_writer    DataWriter
        status        PublicationMatchStatus
    on_subscription_match   void
        the_reader    DataReader
        status        SubscriptionMatchStatus
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
    · Interface DataWriter: add operations:
        ReturnCode_t get_matched_subscriptions(
            inout InstanceHandleSeq subscription_handles);
        ReturnCode_t get_matched_subscription_data(
            inout SubscriptionBuiltinTopicData subscription_data,
            in InstanceHandle_t subscription_handle);
        PublicationMatchStatus get_publication_match_status();
    · Interface DataReader: add operations:
        ReturnCode_t get_matched_publications(
            inout InstanceHandleSeq publication_handles);
        ReturnCode_t get_matched_publication_data(
            inout PublicationBuiltinTopicData publication_data,
            in InstanceHandle_t publication_handle);
        SubscriptionMatchStatus get_subscription_match_status();
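A short C++ usage sketch of the navigation operations just introduced, assuming the IDL-to-C++ mapping; the function name is hypothetical and error handling is abbreviated.

    // Enumerate the subscriptions currently matched with a DataWriter and
    // retrieve the built-in topic data describing each of them.
    void inspect_matched_subscriptions(DDS::DataWriter_ptr writer) {
        DDS::InstanceHandleSeq handles;
        if (writer->get_matched_subscriptions(handles) != DDS::RETCODE_OK)
            return;  // connectivity information not locally maintained
        for (unsigned int i = 0; i < handles.length(); ++i) {
            DDS::SubscriptionBuiltinTopicData data;
            if (writer->get_matched_subscription_data(data, handles[i])
                    == DDS::RETCODE_OK) {
                // Inspect data (topic name, requested QoS, ...) as needed.
            }
        }
    }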
Actions taken:
December 18, 2003: received issue
September 23, 2004: closed issue

Issue 6739: Additional_qos_DATA_PRIORITY Issue (data-distribution-ftf)

Nature: Enhancement
Severity: Significant
Summary:
Issue# 2090 Additional_qos_DATA_PRIORITY Issue [Boeing SOSCOE]
- Need for an additional DATA_PRIORITY QoS on the DataWriter. This policy defines a priority that is associated with the data contained in a message and would serve two purposes:
  • it would provide a prioritization mechanism for the transport, and
  • it would determine how the receiver queues things in the DataReader.
- For the SOSCOE application this policy would have to support at least 5 values, each with descending priority: (Flash_Override, Flash, Immediate, Priority, Routine).
- Ideally there would be a way to propagate this and map it to whatever underlying transport is being used to send the messages.
- In its effect on the queuing on the receiving side, it acts as a DESTINATION_ORDER QoS. In effect, the order would be BY_PRIORITY.
- Note that it would be possible to have DESTINATION_ORDER BY_PRIORITY and HISTORY KEEP_LAST. In this case, the reader may throw away higher-priority data if another message arrives. This is OK. If this is not the desired behavior, then the application should specify KEEP_ALL (which is expected to be the normal use in practice).
Proposal [Boeing SOSCOE]
- Add a DATA_PRIORITY QoS.

Resolution: see below
Revised Text:
Resolution: The FTF thinks that the requirement speaks to the need for two different QoS: one concerning "transport priorities" and the other a "priority-based ordering" similar to the source-timestamp and reception-timestamp orderings already offered. The FTF resolved to add a TRANSPORT_PRIORITY QoS. However, this QoS will have to be specified as a hint because:
· the actual implementation is transport-dependent, and
· the DDS specification does not model the transport.
In addition, the FTF resolved to add language indicating how this QoS is meant to be used by implementations that use priority-aware transports. With regard to the QoS that defines a "priority-based ordering", the FTF resolved to defer the resolution; the reason is that the impact on the implementation and the interaction of this QoS with the other QoS are not well understood at this point.
Revised Text:
Changes in PIM
· Section 2.1.3 Supported QoS, QoS table: add policy "TRANSPORT_PRIORITY":
    QosPolicy: TRANSPORT_PRIORITY
    Value: an integer "value"
    Meaning: This policy is a hint to the infrastructure as to how to set the priority of the underlying transport used to send the data. The default value of the transport_priority is zero.
    Concerns: Topic, DataWriter
    RxO: N/A
    Changeable: Yes
· Add section 2.1.3.14 (after RELIABILITY; changes the numbers of the subsections that follow):
    2.1.3.14 TRANSPORT_PRIORITY
    The purpose of this QoS is to allow the application to take advantage of transports capable of sending messages with different priorities. This policy is considered a hint. The policy depends on the ability of the underlying transports to set a priority on the messages they send. As this is specific to each transport, it is not possible to define the behavior generically. It is expected that during transport configuration the application would provide a mapping between the values of the TRANSPORT_PRIORITY set on the DataWriter and the values meaningful to each transport. This mapping would then be used by the infrastructure when propagating the data written by the DataWriter.
· Section 2.1.5 Built-in Topics:
    · Table with QoS of built-in Subscriber and DataReader objects: add row:
        TRANSPORT_PRIORITY    value = 0
    · Table with types for built-in topics, for Topic name = "DCPSTopic", add row:
        transport_priority    TransportPriorityQosPolicy    Policy of the corresponding Topic
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
    · Add (after const string GROUPDATA_QOS_POLICY_NAME = "GroupData";):
        const string TRANSPORTPRIORITY_QOS_POLICY_NAME = "TransportPriority";
    · Add (after const QosPolicyId_t GROUPDATA_QOS_POLICY_ID = 19;):
        const QosPolicyId_t TRANSPORTPRIORITY_QOS_POLICY_ID = 20;
    · Add (after struct GroupDataQosPolicy { ... };):
        struct TransportPriorityQosPolicy {
            long value;
        };
    · struct TopicQos: add member:
        TransportPriorityQosPolicy transport_priority;
    · struct DataWriterQos: add member:
        TransportPriorityQosPolicy transport_priority;
    · struct TopicBuiltinTopicData: add member:
        TransportPriorityQosPolicy transport_priority;
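A C++ sketch of the new policy in use, assuming the IDL-to-C++ mapping; the helper name and the priority value are illustrative.

    // Create a DataWriter whose samples should be sent at an elevated
    // transport priority. The value is only a hint: its mapping onto the
    // underlying transport is defined during transport configuration.
    DDS::DataWriter_ptr create_priority_writer(DDS::Publisher_ptr publisher,
                                               DDS::Topic_ptr topic) {
        DDS::DataWriterQos wqos;
        publisher->get_default_datawriter_qos(wqos);
        wqos.transport_priority.value = 72;  // transport-specific meaning
        return publisher->create_datawriter(topic, wqos, NULL);
    }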
Actions taken:
December 18, 2003: received issue
September 23, 2004: closed issue

Issue 6740: Additional_qos_LIFESPAN Issue (data-distribution-ftf)

Nature: Enhancement
Severity: Significant
Summary:
Issue# 2100 Additional_qos_LIFESPAN Issue [Boeing SOSCOE]
- A LIFESPAN QoS policy would prevent "stale" data from being delivered to receiving applications.
- The LIFESPAN defines a time period for which the data should live. If a message cannot be delivered within the specified time period, the message is dropped. This value is expressed in time units; it should not be confused with a network-level Time-To-Live (TTL), which could be set on network connections and is expressed in number of hops.
- LIFESPAN is specified as a "span", that is, a time interval measured from the time the data is written.
- LIFESPAN should be mutable.
- LIFESPAN needs to be specified only on the writer side.
- Note that this QoS assumes that the sending and receiving applications have their clocks sufficiently synchronized.
- The filtering could be done on the sending side, on the receiving side, or both. It would be an implementation decision that would only affect performance and would otherwise not be observable by the application.
- If data is dropped because the LIFESPAN value expires, RELIABLE and TRANSACTIONAL QoS would still require the writer to be notified of the failure to deliver.
Proposal [Boeing SOSCOE]
- Introduce the LIFESPAN QoS.

Resolution: see below
Revised Text:
Resolution: The FTF resolved to add a LIFESPAN QoS that specifies the maximum duration of validity of each sample written by the DataWriter. This QoS appears both on Topic and DataWriter and is accessible by means of the built-in topics.
Revised Text:
Changes in PIM
· Section 2.1.3 Supported QoS:
    · Figure 2-12: add the LifespanQosPolicy with the field "duration" (the resulting figure is not reproduced in this archive).
    · QoS table: add policy "LIFESPAN":
        QosPolicy: LIFESPAN
        Value: a duration "duration"
        Meaning: Specifies the maximum duration of validity of the data written by the DataWriter. The default value of the lifespan duration is infinite.
        Concerns: Topic, DataWriter
        RxO: N/A
        Changeable: Yes
    · Add section 2.1.3.2 (changes the numbers of the subsections that follow):
        2.1.3.2 LIFESPAN
        The purpose of this QoS is to avoid delivering "stale" data to the application. Each data sample written by the DataWriter has an associated 'expiration time' beyond which the data should not be delivered to any application. Once the sample expires, the data will be removed from the DataReader caches as well as from the transient and persistent information caches. The 'expiration time' of each sample is computed by adding the duration specified by the LIFESPAN QoS to the source timestamp. As described in Section 2.1.2.4.2.10 and Section 2.1.2.4.2.11, the source timestamp is either automatically computed by the Service each time the DataWriter write operation is called, or else supplied by the application by means of the write_w_timestamp operation. This QoS relies on the sender and receiving applications having their clocks sufficiently synchronized. If this is not the case and the Service can detect it, the DataReader is allowed to use the reception timestamp instead of the source timestamp in its computation of the 'expiration time'.
· Section 2.1.5 Built-in Topics:
    · Table with QoS of built-in Subscriber and DataReader objects: add row:
        LIFESPAN    duration = infinite
    · Table with types for built-in topics:
        · For Topic name = "DCPSTopic" add row:
            lifespan    LifespanQosPolicy    Policy of the corresponding Topic
        · For Topic name = "DCPSPublication" add row:
            lifespan    LifespanQosPolicy    Policy of the corresponding DataWriter
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
    · Add (after const string TRANSPORTPRIORITY_QOS_POLICY_NAME = "TransportPriority";):
        const string LIFESPAN_QOS_POLICY_NAME = "Lifespan";
    · Add (after const QosPolicyId_t TRANSPORTPRIORITY_QOS_POLICY_ID = 20;):
        const QosPolicyId_t LIFESPAN_QOS_POLICY_ID = 21;
    · Add (after struct TransportPriorityQosPolicy { ... };):
        struct LifespanQosPolicy {
            Duration_t duration;
        };
    · struct TopicQos: add member:
        LifespanQosPolicy lifespan;
    · struct DataWriterQos: add member:
        LifespanQosPolicy lifespan;
    · struct TopicBuiltinTopicData: add member:
        LifespanQosPolicy lifespan;
    · struct PublicationBuiltinTopicData: add member:
        LifespanQosPolicy lifespan;
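A C++ sketch of the new policy in use, assuming the IDL-to-C++ mapping; the helper name and the 30-second value are illustrative.

    // Create a DataWriter whose samples expire 30 seconds after their source
    // timestamp; expired samples are removed from the DataReader caches and
    // the transient/persistent caches without being delivered.
    DDS::DataWriter_ptr create_lifespan_writer(DDS::Publisher_ptr publisher,
                                               DDS::Topic_ptr topic) {
        DDS::DataWriterQos wqos;
        publisher->get_default_datawriter_qos(wqos);
        wqos.lifespan.duration.sec = 30;     // expiration = source timestamp + 30 s
        wqos.lifespan.duration.nanosec = 0;
        return publisher->create_datawriter(topic, wqos, NULL);
    }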
Actions taken:
December 18, 2003: received issue
September 23, 2004: closed issue

Issue 6743: Make_USER_DATA_an_array_and_mutable Issue (data-distribution-ftf)

Nature: Enhancement
Severity: Minor
Summary:
Issue# 2140 Make_USER_DATA_an_array_and_mutable Issue [Boeing SOSCOE]
- There are several potentially independent uses for USER_DATA. It would be useful to have it as a set of name-value pairs or another extensible construct. That way, layers such as SOSCOE could add their own USER_DATA and not conflict with the uses that the application above SOSCOE may make of the data.
- SOSCOE has created a provider property class that allows applications to have attributes with typed values; it becomes a triplet (name, type, value). Currently the "type" only supports simple types, but the intent is to extend it to well-known structured types (ref Issue#2035). The extension of the USER_DATA can be used to implement these properties.
Proposal [Boeing SOSCOE]
- Make USER_DATA a set of name-value pairs and make it mutable.
- Alternatively, keep the USER_DATA as it is for the simple cases and add something more flexible for the more complex cases.

Resolution: closed no change
Revised Text:
Actions taken:
December 18, 2003: received issue
September 23, 2004: closed issue

Discussion:
The resolution of issue 6832 made the USER_DATA mutable.
It appears that the application already has the means to use a structured type inside the USER_DATA, and hence can obtain the requested behavior.


Issue 6744: CacheFactory::find_cache (addition) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
There is no means to retrieve a previously created Cache.
Proposal [THALES]
Add on the CacheFactory::create_cache an extra parameter that is the name of the Cache
Add the CacheFactory::find_cache method
IDL
Cache create_cache (in CacheUsage usage, 
		in DCPS::DomainParticipant domain,
		in string name)
	raises (DCPSError, AlreadyExisting);
Cache find_cache (in string name)
	raises (NotFound);
concerns the PIM (figure, table and text) and the PSM

Resolution: see below
Revised Text:
Resolution: Add to CacheFactory::create_cache an extra parameter that is the name of the Cache, and add the CacheFactory::find_cache method. This change concerns the PIM (UML diagram and text) and the IDL.
Revised Text:
Changes in PIM
· In section 3.1.6.3.1 CacheFactory, in the table:
    · For the operation create_cache, change the last parameter entry to:
        description    CacheDescription
    · Add the following entry:
        find_cache_by_name    Cache
            name    CacheName
· In the following text:
    · Change the first sentence to "This class offers methods to:"
    · Change the first bullet to: "create a Cache (create_cache): this method takes [...] both modes) and a description of the Cache (at a minimum this CacheDescription gathers the concerned DomainParticipant as well as a name allocated to the Cache); depending [...] unique usage of the Cache; these two objects will be attached to the provided DomainParticipant;"
    · Add a second bullet: "retrieve a Cache based on the name given in the CacheDescription (find_cache_by_name)."
Changes in IDL
· Section 3.2.1.2.1 Generic DLRL Entities
    · Add:
        typedef string CacheName;
        valuetype CacheDescription {
            public CacheName name;
            public DCPS::DomainParticipant domain;
        };
    · In interface CacheFactory, replace:
        Cache create_cache (
            in CacheUsage cache,
            in DCPS::DomainParticipant domain)
            raises (DCPSError);
      with:
        Cache create_cache (
            in CacheUsage usage,
            in CacheDescription description)
            raises (DCPSError, AlreadyExisting);
    · Add:
        Cache find_cache_by_name (in CacheName name)
            raises (BadParameter);
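A C++ usage sketch of the revised factory operations, assuming the IDL-to-C++ mapping; make_cache_description stands in for the ORB-specific creation of the CacheDescription valuetype, and the CacheUsage literal READ_WRITE is assumed.

    // Hypothetical helper that builds the CacheDescription valuetype.
    DLRL::CacheDescription* make_cache_description(
        const char* name, DDS::DomainParticipant_ptr participant);

    void example(DLRL::CacheFactory_ptr factory,
                 DDS::DomainParticipant_ptr participant) {
        DLRL::CacheDescription_var desc =
            make_cache_description("tracks", participant);
        // Raises AlreadyExisting if a Cache named "tracks" was already created.
        DLRL::Cache_var cache = factory->create_cache(DLRL::READ_WRITE, desc);

        // Later, possibly in unrelated code, the same Cache can be retrieved
        // by name (raises BadParameter if no such Cache exists).
        DLRL::Cache_var found = factory->find_cache_by_name("tracks");
    }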
Actions taken:
December 19, 2003: received issue
September 23, 2004: closed issue

Issue 6745: Attributes and operations directly set on valuetypes (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
Currently the valuetypes ObjectRoot, RefRelation, ListRelation, IntMapRelation and StrMapRelation use a supported interface in order to gather their respective attributes and operations. This can be simplified by defining the attributes and operations directly on the valuetypes.
Proposal [THALES] 
define the corresponding attributes and operations directly in the valuetypes
concerns only the PSM

Resolution: see below
Revised Text:
Resolution: Define the corresponding attributes and operations directly in the valuetypes. This change concerns only the IDL.
Revised Text:
Changes in IDL
· Section 3.2.1.2.1 Generic DLRL Entities
    · Suppress interface ObjectRootOperations and write:
        valuetype ObjectRoot {
            // State
            //--------
            private DLRLOid oid;
            private ClassName m_class_name;
            [then as was formerly in ObjectRootOperations]
            // Attributes
            ...
        };
    · Suppress interface ReferenceOperations and write:
        valuetype RefRelation {
            private ObjectReference m_ref;
            [then as was formerly in ReferenceOperations]
            void reset();
            ...
        };
    · Change interface CollectionOperations to abstract valuetype CollectionBase, with the same content.
    · Change interface ListOperations to abstract valuetype ListBase, with the same content.
    · Change interface StrMapOperations to abstract valuetype StrMapBase, with the same content.
    · Change interface IntMapOperations to abstract valuetype IntMapBase, with the same content.
    · valuetype ListRelation : ListBase { [same content as formerly] };
    · valuetype StrMapRelation : StrMapBase { [same content as formerly] };
    · valuetype IntMapRelation : IntMapBase { [same content as formerly] };
Actions taken:
December 19, 2003: received issue
September 23, 2004: closed issue

Issue 6746: Names of the ObjectRoot attributes (data-distribution-ftf)

Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
The attribute ObjectHome is named 'owner' and would be better named 'home'
The attribute CacheAccess is named 'cache' and would be better named 'access'
Proposal [THALES]
change as indicated
concerns the PIM (figure, table and text) and the PSM

Resolution: see below
Revised Text:
Resolution: Change everywhere to 'object_home' and 'cache_access'. This change concerns the PIM (UML diagram and text) and the IDL.
Revised Text:
Changes in PIM
· In section 3.1.6.3.11 ObjectRoot:
    · In the table, in the attribute list, change the next-to-last entry to:
        object_home    ObjectHome
    · In the following text, in the list starting with "its public attributes...", next-to-last bullet, change "home" to "object_home".
Changes in IDL
· Section 3.2.1.2.1 Generic DLRL Entities, valuetype ObjectRoot: replace
    readonly attribute ObjectHome owner;
    readonly attribute CacheAccess cache;
  with
    readonly attribute ObjectHome object_home;
    readonly attribute CacheAccess cache_access;
Actions taken:
December 19, 2003: received issue
September 23, 2004: closed issue

Issue 6747: Depth of cloning (addition) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
The ObjectRoot::clone operation takes as parameter an ObjectScope (only the object; the object plus all its components; or the object plus all its components plus all its related objects). In the latter case, this may result in cloning a lot of objects.
Proposal [THALES]
Add an extra parameter to limit the depth of cloning for related objects, with a dedicated value for unlimited
Specify that if some related objects are not cloned (depending on the ObjectScope and RelatedObjectDepth), traversing the related relation should raise the NotFound exception (cf. ref-1026).
IDL
typedef short RelatedObjectDepth;
const RelatedObjectDepth UNLIMITED_RELATED_OBJECT_DEPTH = -1;
OidRef clone (in CacheAccess access,
		in ObjectScope scope,
		in RelatedObjectDepth depth)
	raises (ReadOnlyMode);
concerns the PIM (table and text) and the PSM

Resolution: see below
Revised Text:
Resolution: Add an extra parameter to limit the depth of cloning for related objects, with a dedicated value for unlimited, and add an exception to that operation, raised when an object cannot be cloned because it is already cloned for write purposes. This change concerns the PIM (table and text) and the IDL.
Revised Text:
Changes in PIM
· In section 3.1.6.3.11 ObjectRoot:
    · In the table, change the definition of the clone operation to the following:
        clone    ObjectReference
            access    CacheAccess
            scope     ObjectScope
            depth     integer
    · In the following text, in the list starting with "it offers methods to:", change the first bullet to the following ([...] refers to unchanged text): "create [...]; an object can be cloned to only one CacheAccess allowing write operations; the operation takes as parameters the CacheAccess, the scope of the request (i.e. [...]) and an integer (depth) that limits the depth of cloning in case the scope asks for all the related objects."
Changes in IDL
· Section 3.2.1.2.1 Generic DLRL Entities:
    · Add:
        exception AlreadyClonedInWriteMode {};
        typedef short RelatedObjectDepth;
        const RelatedObjectDepth UNLIMITED_RELATED_OBJECT_DEPTH = -1;
    · In valuetype ObjectRoot, replace:
        ObjectLink clone (
            in CacheAccess access,
            in ObjectScope scope)
            raises (ReadOnlyMode);
      with:
        ObjectReference clone (
            in CacheAccess access,
            in ObjectScope scope,
            in RelatedObjectDepth depth)
            raises (ReadOnlyMode, AlreadyClonedInWriteMode);
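A C++ sketch of the revised clone operation, assuming the IDL-to-C++ mapping; the ObjectScope literal name is illustrative (the scope enumeration is not reproduced in this archive), and only the depth parameter is new.

    // Clone an object into a write CacheAccess together with its related
    // objects, but at most two relation levels deep.
    void clone_two_levels(DLRL::ObjectRoot* object, DLRL::CacheAccess_ptr access) {
        DLRL::ObjectReference ref =
            object->clone(access, DLRL::RELATED_OBJECTS_SCOPE, 2);
        // For the previous unbounded behavior:
        //   object->clone(access, DLRL::RELATED_OBJECTS_SCOPE,
        //                 DLRL::UNLIMITED_RELATED_OBJECT_DEPTH);
    }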
Actions taken:
December 19, 2003: received issue
September 23, 2004: closed issue

Issue 6748: CacheAccess operations (documentation) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
The operations of the CacheAccess are not described sufficiently clearly, which leads to some confusion.
Proposal [THALES]
Enhance section 3.1.6.1.2.2 (Cache Management) by better introducing the underlying concepts; in particular, clarify the dynamics with respect to enable/disable updates.
Indicate that the specification is designed to allow lazy instantiation.
Describe the CacheAccess operations (section 3.1.6.3.2) in more depth; in particular, state how they behave with respect to enable/disable updates.
Enhance the description of typical uses of CacheAccess (section 3.1.6.5).

Resolution: see below
Revised Text:
Resolution: Discard the inheritance link between Cache and CacheAccess, and add to Cache and ObjectHome the needed attributes and methods. Refactor the ObjectState accordingly. Add a delete_cache operation. Review and enhance the explanations of the methods. This change concerns the PIM (UML diagram and text) and the IDL.
Revised Text:
Changes in PIM
· In section 3.1.6.2 DLRL Entities, in the list of entities, after the figure:
    · Second bullet (Cache), after the first sentence, insert the following: [Class ... available.] "Objects within a Cache can be read directly; however, to be modified, they need to be attached first to a CacheAccess."
    · Second bullet (Cache), join the two last sentences and replace "however" by "but", to make the following: "Several Cache objects may be created but in this case, they must be fully isolated:"
    · Third bullet (CacheAccess), replace the second sentence by the following: "It offers methods to refresh and write objects attached to it."
    · Third bullet (CacheAccess), remove the third sentence and the beginning of the next one; the removed part is as follows: [a Cache has by construction one CacheAccess (by inheritance); in addition, other]
    · Third bullet (CacheAccess), next sentence, add the following text in the middle (part inside quotes): [CacheAccess objects can be created] "in read mode, in order to provide a consistent access to a subset of the Cache without blocking the incoming updates, or in write mode" [in order to provide support for concurrent modifications/updates threads.]
    · Bullet #13 (ObjectReference), replace 'another' by 'an', to make the following: "Class to represent a raw reference (untyped) to an object."
    · Bullet #13 (ObjectReference), add the following footnote (at the end of the bullet): "13 The specification does not impose that all existing objects be instantiated by means of ObjectRoot; objects can be kept by means of ObjectReference, provided that they are instantiated when needed (lazy instantiation)."
· In section 3.1.6.1.2.1 Implicit versus Explicit Subscriptions:
    · Last paragraph, change the last word from "Null-Pointer" to "NotFound".
· In section 3.1.6.3.1 CacheFactory:
    · In the list of operations, add delete_cache, by adding the following entries:
        delete_cache    void
            a_cache    Cache
    · In the following text, add a third bullet with the following text: "delete a Cache (delete_cache); this operation releases all the resources allocated to the Cache."
· In section 3.1.6.3.2 CacheAccess:
    · In the table, attribute list, change the name of the first attribute to access_usage (instead of cache_usage); the corresponding entry is as follows:
        access_usage    CacheUsage
    · In the following text, second paragraph, change "cache_usage" to "access_usage", as follows: [The attribute] "access_usage" [indicates ...]
    · In the third paragraph, starting with "Once the...":
        · First bullet, add the following sentence: [the attached ... (refresh);] "this operation takes new values from the Cache for all attached objects, following the former clone directives; this can lead to discarding changes on the cloned objects if they haven't been saved by writing the CacheAccess;"
        · Second bullet, change the last word from "NullPointer" to "NotFound".
· In section 3.1.6.3.3 Cache:
    · First paragraph, change the verb "represent" to "gathers", to make the following sentence: [An instance of this class] "gathers" [a set of objects that are managed, published and/or subscribed consistently.]
    · In the table, change the header to: Cache
    · In the table, list of attributes:
        · Insert a first attribute cache_usage, by inserting the following entry:
            cache_usage    CacheUsage
        · Change the name of the attributes publisher and subscriber to the_publisher and the_subscriber (not strictly related to the issue, but needed for IDL correctness); the corresponding entries are as follows:
            the_publisher     DCPS::Publisher
            the_subscriber    DCPS::Subscriber
        · Add the attribute refs, by inserting the following entry:
            refs    ObjectReference []
    · In the table, list of operations:
        · Move the entry for the load operation just after disable_updates; the sequence of entries should be as follows:
            disable_updates    void
            load               void
            create_access      ...
        · At the end of the table, add the following entries:
            deref     ObjectRoot
                ref    ObjectReference
            lock      void
                to_in_milliseconds    integer
            unlock    void
    · In the following text, starting with "The public attributes give:":
        · First bullet, discard the last word (inherited); the bullet should end with: [... (cache_usage);]
        · Second bullet, add the following text: [... (pubsub_state)] ", as well as the related Publisher (the_publisher) and Subscriber (the_subscriber);"
        · Add one bullet with the following content: "the attached CacheAccess (sub_accesses);"
        · Add one bullet with the following content: "the attached ObjectHome (homes);"
        · Add one bullet with the following content: "the attached CacheListener (listeners);"
        · Add one bullet with the following content: "the attached ObjectReference (refs)."
    · In the following text, starting with "It offers methods to:":
        · Before bullet #6 (starting with "to create..."), insert a new bullet with the following content: "explicitly request the taking into account of the waiting incoming updates (load); in case updates_enabled is TRUE, the load operation does nothing because the updates are taken into account on the fly; in case updates_enabled is FALSE, the load operation 'takes' all the waiting incoming updates and applies them in the Cache; the load operation does not trigger any listener (while automatic taking into account of the updates does; cf. section 3.1.6.4 for more details on listener activation) and may therefore be useful in particular for global initialization of the Cache."
        · Change bullet #9 (previously starting with "to request all the known...") to the following text: "transform an ObjectReference into the corresponding ObjectRoot (deref); this operation can return the already instantiated ObjectRoot or create one if not already done; these ObjectRoot are not modifiable (modifications are only allowed on cloned objects attached to a CacheAccess in write mode);"
        · Add a new bullet with the following text: "lock the Cache with respect to all other modifications, either from the infrastructure or from other application threads; this operation allows the application to be sure that several operations can be performed on the same Cache state (e.g., cloning of several objects in a CacheAccess); this operation blocks until the Cache can be allocated to the calling thread, and the waiting time is limited by a time-out (to_in_milliseconds); in case the time-out expires before the lock can be granted, an exception (ExpiredTimeOut) is raised;"
        · Add a new bullet with the following text: "unlock the Cache."
        · Then add a new paragraph with the following content: "Objects attached to the cache are supposed to be garbage-collected when appropriate. There is therefore no specific operation for doing this."
· In section 3.1.6.3.5 ObjectHome:
    · In the table, list of attributes:
        · Move registration_index into third position and then insert descriptions for refs and auto_deref as follows:
            registration_index    integer
            auto_deref            boolean
            refs                  ObjectReference []
        · After the extent attribute, add full_extent, by inserting the following entry:
            full_extent    ObjectRoot []
        · In the next attribute description (selections), change the type HomeSelection to Selection (was an error); the result is as follows:
            selections    Selection []
    · In the table, list of operations:
        · Insert after the description for get_topic_name the description for get_all_topic_names (was missing), by inserting the following entry:
            get_all_topic_names    void
        · After the next operation (set_filter), add three new ones, by inserting the following entries:
            set_auto_deref    void
                value    boolean
            deref_all      void
            underef_all    void
        · Change the name of the last operation from "find_object" to "find_object_in_access"; the operation description is as follows:
            find_object_in_access    ObjectRoot
                oid       DLRLOid
                access    CacheAccess
        · Add a new operation, by adding the following entries:
            find_object    ObjectRoot
                oid    DLRLOid
    · In the following text, starting with "The public attributes give:":
        · Discard the third bullet (starting with "the manager...").
        · Add then a new bullet with the following text: "the list of ObjectReference that correspond to objects of that class (refs); a boolean that indicates if ObjectReference corresponding to that type should be implicitly instantiated (TRUE) or if this action should be explicitly done by the application when needed by calling a deref operation (auto_deref); as selections act on instantiated objects (cf. section 3.1.6.3.7 for details on selections), TRUE is a sensible setting when selections are attached to that home."
        · Add then a new bullet with the following text: "the manager for the list of all the instantiated objects of that class (extent); the manager for the list of all the instantiated objects of that class and all its derived classes (full_extent);"
        · Add then a new bullet with the following text: "the list of attached Selection (selections);"
        · Add then a new bullet with the following text: "the list of attached ObjectListener (listeners);"
        · Then add a new paragraph with the following text: "Those last four attributes will be generated properly typed in the derived specific home."
    · In the following text, starting with "It offers methods to:":
        · After the first bullet, insert a new one with the following content: "set the auto_deref boolean (set_auto_deref);"
        · Then insert a new bullet with the following content: "ask for the instantiation, in the Cache, of all the ObjectReference that are attached to that home (deref_all);"
        · Then insert a new bullet with the following content: "ask for the removal of non-used ObjectRoot that are attached to this home (underef_all);"
        · In the bullet starting with "retrieve a DLRL...", change the last word to "find_object_in_access"; the new text is as follows: "retrieve a DLRL object based on its oid in a given CacheAccess (find_object_in_access);"
        · Then insert a new bullet with the following content: "retrieve a DLRL object based on its oid in the main Cache (find_object);"
· In section 3.1.6.3.10 ObjectRoot:
    · First paragraph, add at the end the following sentence after [... management.]: "ObjectRoot are used to represent either objects that are in the Cache (also called primary objects) or clones that are attached to a CacheAccess (also called secondary objects). Secondary objects refer to a primary one, with which they share the ObjectReference."
    · In the table, list of attributes:
        · Discard the second entry (count integer).
        · At the end, add a new attribute ref, by inserting the following entry:
            ref    ObjectReference
    · In the following text:
        · Add a footnote to "attributes" in the sentence "Its public attributes give:", with the following content: "16 It is likely that other attributes are needed to manage the objects (e.g., a content version, a reference count...); however, these are implementation details not part of the specification."
        · Suppress the second bullet (starting with "the number of times...").
        · Add to the last bullet the following sentence: [the CacheAccess ... (cache_access);] "when the ObjectRoot is a primary object directly attached to the Cache, cache_access is set to NULL;"
        · Then add a new bullet with the following content: "the full ObjectReference that corresponds to it (ref)."
    · In the following text, starting with "It offers methods to:":
        · In bullet #4, starting with "get if the object...", change "updates" to "modifications"; the first sentence is then: "get if the object has been modified by incoming modifications (is_modified);"
        · In the same bullet #4, change the last "false" to "FALSE".
        · In the same bullet #4, add the following sentence: "'incoming modifications' should be understood differently for a primary object and for a clone object:"
        · Then add a sub-bullet with the following content: "for a primary object, they refer to incoming updates (i.e., coming from the infrastructure);"
        · Then add a second sub-bullet with the following content: "for a secondary object (cloned), they refer to the modifications applied to the object by the last CacheAccess::refresh operation;"
    · In the following text, starting with "In addition, application classes...":
        · At the end of the last bullet (starting with "is_<attribute>_modified..."), change the last word from "updates" to "modifications" and add after it: "(cf. method is_modified)."
    · In the following text, starting with "The object state is actually made of two parts:":
        · Replace the first bullet with the following text: "the primary_state, which refers to incoming modifications (i.e., incoming updates for a primary object, or modifications resulting from CacheAccess::refresh operations for a secondary object); even if the events that trigger the state change are different for both kinds of objects, the state values are the same;"
        · Change the name of the following figure from "Write state of an object" to "Primary State of an Object".
        · Replace the second bullet with the following text: "the secondary_state, which refers to modifications performed by the application; for a secondary object, the state describes the taking into account of the changes asked by the application (set_xxx or destroy, and then write of the CacheAccess); for a primary object, it tracks if the object has been cloned for modification purposes."
        · Change the name of the following figure from "Read state of an object" to "Secondary State of an Object".
        · Discard the following text, starting with "For objects managed in both...", as well as the two following bullets.
Secondary objects refer to a primary one with which they share the ObjectReference." · In the table, list of attributes · Discard the second entry (count integer) · at the end, add a new attribute ref, by inserting the following entry: ref ObjectReference · In the following text, · add a footnote to "attributes" in the following sentence: " Its public attributes give:", with the following content: "16 It is likely that other attributes are needed to manage the objects (i.e., a content version, a reference count...); however these are implementation details not part of the specification." · suppress the second bullet (starting with "the number of times…") · add to the last bullet, the following sentence: [the CacheAccess… (cache_access);] " when the ObjectRoot is a primary object directly attached to the Cache, cache_access is set to NULL;" · then add a new bullet with the following content: " the full ObjectReference that corresponds to it (ref)." · In the following text, starting with " It offers methods to:", · In the bullet #4, starting with "get if the object…", change "updates" to "modifications", the first sentence is then: " get if the object has been modified by incoming modifications (is_modified);" · In the same bullet #4, change the last "false" to "FALSE" · In the same bullet #4, add the following sentence: "'incoming modifications' should be understood differently for a primary object and for a clone object:" · then add a sub-bullet with the following content: " for a primary object, they refer to incoming updates (i.e., coming from the infrastructure);" · then add a second sub-bullet with the following content: " for a secondary object (cloned), they refer to the modifications applied to the object by the last CacheAccess::refresh operation;" · In the following text, starting with "In addition, application classes…" · At the end of the last bullet (starting with "is_<attribute>_modified …") , change the last word from "updates" to "modifications" and add after " (cf. method is_modified)." · In the following text, that start with "The object state is actually made of two parts:" · Replace the first bullet with the following text: " the primary_state which refers to incoming modifications (i.e., incoming updates for a primary object or modifications resulting from CacheAccess::refresh operations for a secondary object); even if the events that trigger the state change are different for both kinds of objects, the state values are the same;" · Change the name of the following figure from "Write state of an object" to "Primary State of an Object" · Replace the second bullet with the following text: "the secondary_state which refers to modifications performed by the application; for a secondary object, the state describes the taking into account of the changes asked by the application (set_xxx or destroy and then write of the CacheAccess); for a primary object, it tracks if the object has been cloned for modification purpose." · Change the name of the following figure from "Read state of an object" to "Secondary State of an Object" · Discard the following text, starting with "For objects managed in both…" as the two following bullets. 
Changes in IDL · Section 3.2.1.2.1 Generic DLRL Entities · Add typedef unsigned short ObjectSubState; // Primary object state const ObjectSubState OBJECT_NEW = 0x0001 << 0 const ObjectSubState OBJECT_MODIFIED = 0x0001 << 1; const ObjectSubState OBJECT_READ = 0x0001 << 2; const ObjectSubState OBJECT_DELETED = 0x0001 << 3; // Secondary object state const ObjectSubState OBJECT_CREATED = 0x0001 << 8; const ObjectSubState OBJECT_CHANGED = 0x0001 << 9; const ObjectSubState OBJECT_WRITTEN = 0x0001 << 10; const ObjectSubState OBJECT_DESTROYED = 0x0001 << 11; [New names and values] · valuetype ObjectRoot · Remove readonly attribute long count; readonly attribute ObjectState state; · Add readonly attribute ObjectSubState primary_state; readonly attribute ObjectSubState secondary_state; readonly attribute ObjectReference ref; · local interface ObjectHome · Add readonly attribute ObjectReferenceSeq refs; readonly attribute boolean auto_deref; · Add (in commented-out section) readonly attribute ObjectExtent full_extent; · Add void set_auto_deref ( in boolean value); void deref_all(); void underef_all (); · Replace: [change of name] ObjectRoot find_object ( in DLRLOid oid, in CacheAccess access) raises ( NotFound); With ObjectRoot find_object_in_access ( in DLRLOid oid, in CacheAccess access) raises ( NotFound); · Add: ObjectRoot find_object ( in DLRLOid oid); · local interface CacheAccess · Replace: [change of name] readonly attribute CacheUsage cache_usage; · With readonly attribute CacheUsage access_usage; · local interface Cache · Replace: local interface Cache : "CacheAccess { With local interface Cache { · Add readonly attribute CacheUsage cache_usage; · Replace readonly attribute DCPS::Publisher publisher; readonly attribute DCPS::Subscriber subscriber; With readonly attribute DCPS::Publisher the_publisher; readonly attribute DCPS::Subscriber the_subscriber; · Add readonly attribute ObjectReferenceSeq refs; ObjectRoot deref ( in ObjectReference ref); // --- Protection against concurrent access void lock ( in TimeOutDuration to_in_milliseconds) raises (ExpiredTimeOut); void unlock (); · local interface CacheFactory · Add void delete_cache ( in Cache a_cache);
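For illustration, a minimal usage sketch of the new Cache operations follows, assuming a hypothetical C++ mapping of the DLRL IDL above (the header name, pointer-based signatures and the 100 ms timeout value are illustrative assumptions, not part of the resolution):

    // Sketch only: assumes a hypothetical C++ mapping of the revised DLRL IDL.
    #include "dlrl.h"   // hypothetical generated header

    void read_object (DLRL::Cache* cache, const DLRL::ObjectReference& ref)
    {
        // lock blocks until the Cache is allocated to this thread, bounded
        // by to_in_milliseconds; ExpiredTimeOut is raised on expiry.
        cache->lock (100);
        // deref returns the already instantiated ObjectRoot or instantiates
        // one; the result is not modifiable (modifications require a clone
        // attached to a CacheAccess opened in write mode).
        DLRL::ObjectRoot* obj = cache->deref (ref);
        // ... read the object's state through obj here ...
        cache->unlock ();
    }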
Actions taken:
December 19, 2003: received issue
September 23, 2004: closed issue

Issue 6749: CacheAccess::delete_access (editorial) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
The operation is badly named delete_cache in the PSM (instead of delete_access)
The parameter is badly stated as CacheUsage (instead of CacheAccess) in the PIM (in the table for Cache operations)
Proposal [THALES]
Correct the PSM and the PIM table

Resolution: see below
Revised Text: Resolution: Make it everywhere delete_access with a CacheAccess parameter. This change concerns the PIM (text) and the IDL.
Revised Text: Changes in PIM
· in section 3.1.6.3.3 Cache
· in the table, change the entries concerning delete_access to the following: delete_access void access CacheAccess
· in section 3.1.6.5.1 Read Mode
· in item #6, delete_cache => delete_access
· in section 3.1.6.5.2 Write Mode
· in item #8, delete_cache => delete_access
Changes in IDL
· local interface Cache {
...
void delete_access (in CacheAccess access) raises (BadParameter);
Actions taken:
December 19, 2003: received issue
September 23, 2004: closed issue

Issue 6750: CacheAccess::deref (clarification) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Clarification
Severity:
Summary:
Issue [THALES]
What does deref do when no corresponding object exists in the related CacheAccess, or if the object has been deleted?
Proposal [THALES]
In that case, the deref operation should raise an exception (NotFound in the first case - cf. issue ref-1023, and Deleted in the second?)

Resolution: See issue 6748 for disposition -- duplicate
Revised Text:
Actions taken:
December 19, 2003: received issue
September 23, 2004: closed issue

Issue 6751: stringSeq and longSeq (editorial) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES] 
Rather name those types StringSeq and LongSeq as in DCPS
Proposal [THALES]
Actually 2 different naming rules conflict:
list of xxx -> xxxSeq
all user-created types starting with a capital letter
Give the precedence to the second!

Resolution: see below
Revised Text: Resolution: Change "stringSeq" to "StringSeq" and "longSeq" to "LongSeq". This change only concerns the IDL.
Revised Text: Changes in IDL
· Section 3.2.1.2.1 Generic DLRL Entities
· Replace
typedef sequence<string> stringSeq;
typedef sequence<long> longSeq;
With
typedef sequence<string> StringSeq;
typedef sequence<long> LongSeq;
· local interface ObjectQuery
· Replace
readonly attribute stringSeq parameters;
boolean set_query (in string expression, in stringSeq parameters);
boolean set_parameters (in stringSeq parameters);
With
readonly attribute StringSeq parameters;
boolean set_query (in string expression, in StringSeq parameters);
boolean set_parameters (in StringSeq parameters);
· local interface ObjectHome
· Replace
stringSeq get_all_topic_names ();
With
StringSeq get_all_topic_names ();
· abstract valuetype ListBase : CollectionBase
· Replace
boolean which_added (out longSeq indexes);
With
boolean which_added (out LongSeq indexes);
· abstract valuetype StrMapBase : CollectionBase
· Replace
boolean which_added (out stringSeq keys);
stringSeq get_all_keys ();
With
boolean which_added (out StringSeq keys);
StringSeq get_all_keys ();
· abstract valuetype IntMapBase : CollectionBase
· Replace
boolean which_added (out longSeq keys);
longSeq get_all_keys ();
With
boolean which_added (out LongSeq keys);
LongSeq get_all_keys ();
Actions taken:
December 19, 2003: received issue
September 23, 2004: closed issue

Issue 6752: ObjectHome::get_topic_name (editorial) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
This operation is not described in the text of section 3.1.6.3.5
Proposal [THALES]
add the description in the text (at the end of the operation list)
"retrieve the name of the DCPS Topic that contains the value for one attribute (get_topic_name)"

Resolution: see below
Revised Text: Resolution: Add the description of the operation.
Revised Text: Changes in PIM
· Section 3.1.6.3.5 ObjectHome
· In the text following the table starting with "It offers methods to:", add one bullet with the following text: "retrieve the name of the topic that contains the value for one attribute (get_topic_name);"
Actions taken:
December 19, 2003: received issue
September 23, 2004: closed issue

Issue 6753: ObjectHome::get_all_topic_names (addition) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
Getting all the names of all the topics that are used to store objects of a given class may be rather cumbersome
Proposal [THALES]
add an operation to get all in a single call
IDL
StringSeq get_all_topic_names()
Concerns the PIM (figure, table and text) and the PSM

Resolution: see below
Revised Text: Resolution: Add an operation to get all topic names in a single call. This change concerns the PIM (UML diagram and text) and the IDL.
Revised Text: Changes in PIM
· Section 3.1.6.3.5 ObjectHome
· in the table, add one operation by means of the following entry: get_all_topic_names string []
· in the text following the table starting with "It offers methods to:", add one bullet with the following text: "retrieve the names of all the topics that contain values for all attributes of the class (get_all_topic_names);"
Changes in IDL
· Section 3.2.1.2.1 Generic DLRL Entities
· local interface ObjectHome {
...
StringSeq get_all_topic_names ();
...
}
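As a usage illustration, a hedged C++ sketch follows (the header, the _var/CORBA types and the iteration style assume a classic IDL-to-C++ mapping and are not mandated by the resolution):

    // Sketch only: iterating the result of the new get_all_topic_names.
    #include <iostream>
    #include "dlrl.h"   // hypothetical generated header

    void print_topic_names (DLRL::ObjectHome* home)
    {
        // One call returns every topic that stores attribute values for
        // the class managed by this home.
        DLRL::StringSeq_var names = home->get_all_topic_names ();
        for (CORBA::ULong i = 0; i < names->length (); ++i) {
            const char* name = names[i];
            std::cout << name << std::endl;
        }
    }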
Actions taken:
December 19, 2003: received issue
September 23, 2004: closed issue

Issue 6754: Operations on collections of objects (addition) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
Rather than returning a sequence of objects as the list of instances and letting the application iterate over it, it seems better to let the infrastructure iterate by providing operations that would
filter the objects
apply a modifier on the objects
Proposal [THALES]
Introduce a new class FooExtent that would embed a sequence of Foo and provide those operations.
Make find_objects return a FooExtent to allow filtering on a filtered result (as was allowed with the initial ObjectFilter::filter)
Suppress the filter operation in ObjectFilter (that is now redundant)
Replace the list of instances in FooHome and FooSelection, by an instance of this class
IDL
local interface FooFilter : DLRL::ObjectFilter
{
	boolean check_object (in Foo an_object);
}
local interface FooModifier: DLRL::ObjectModifier
{
	void modify_object (in Foo an_object);
}
local interface FooExtent : DLRL::ObjectExtent
{
	readonly attribute FooSeq objects;
	FooExtent find_objects (in FooFilter filter);
	void modify_objects (in FooFilter filter,
		in FooModifier modifier);
}
local interface FooHome
{
	readonly attribute FooExtent extent;	// instead of FooSeq
}
local interface FooSelection
{
	readonly attribute FooExtent members;// instead of FooSeq
}
Concerns the PIM (figure, list of interfaces, interface sections - table and text) and the PSM

Resolution: see below
Revised Text: Resolution: Introduce a new class FooExtent that embeds a sequence of Foo and provides those operations. Make find_objects return a FooExtent to allow filtering on a filtered result (as was allowed with the initial ObjectFilter::filter). Suppress the filter operation in ObjectFilter (that is now redundant). Introduce FooModifier, which allows the application to indicate a modifier to be applied to a list of objects. Replace the list of instances in FooHome and FooSelection by an instance of this class. This change concerns the PIM (UML diagram and text) and the IDL.
Revised Text: Changes in PIM
· in section 3.1.6.2 DLRL Entities
· list of entities (after the figure)
· introduce a bullet after "SelectionListener" and before "ObjectRoot", with the following content: "ObjectModifier Class whose instances represent modifiers to be applied to a set of objects."
· introduce after it a second new bullet, with the following content: "ObjectExtent Class to manage a set of objects. ObjectExtent objects are used to represent all the instances managed by an ObjectHome as well as all the members of a Selection. They can also be used in conjunction with ObjectFilter and/or ObjectModifier to allow collective operations on sets of objects."
· in section 3.1.6.3.5 ObjectHome
· in the table, list of attributes
· change the type of the attribute extent from "ObjectRoot []" to "ObjectExtent"
· in the following text, in the list starting with "the public attributes give:"
· change the third bullet to the following: "the manager for the list of all the objects of that class (extent);"
· in section 3.1.6.3.7 Selection
· in the table, list of attributes
· change the type of the attribute members from "ObjectRoot []" to "ObjectExtent"
· change the name of that attribute to "membership"
· in the following text, in the list starting with "the public attributes give:"
· change the third bullet to the following: "the manager for the list of all the objects of that class (membership);"
· in section 3.1.6.3.8 ObjectFilter
· in the table, suppress the entry that describes the filter operation
· in the following text, change the first line to "It offers a method to"
· remove the last (2nd) bullet
· in the two following paragraphs, change "FooObjectFilter" to "FooFilter" [once in each paragraph]
· After section 3.1.6.3.10 SelectionListener, introduce a new section "3.1.6.3.11 ObjectModifier"
· First paragraph as follows: "An ObjectModifier is an object that allows the application developer to express an operation that will be applied on a set of objects, by means of an ObjectExtent."
· build a table with no attributes and one operation, as follows: ObjectModifier no attributes operations modify_object void an_object ObjectRoot
· add the following paragraph: "It offers a method to:"
· add one bullet, with the following content: "modify an object, which is passed as parameter (modify_object)."
· add the following two paragraphs: "The ObjectModifier class is a root from which are derived classes dedicated to application classes (for an application class named Foo, FooModifier will be derived)." "FooModifier is itself a base class that may be derived by the application in order to provide its own modify_object algorithm. The default provided behavior is that modify_object does nothing."
· After that new section, introduce a new section "3.1.6.3.12 ObjectExtent"
· First paragraph as follows: "This class is just a manager for a set of objects of a given class. It is useful for representing all the instances of a given class, or all the members of a Selection. Other instances may be created by the applications to build a new subset of objects or to apply on a subset the same modifying operation."
· build a table to represent attributes and operations as follows: ObjectExtent attributes objects ObjectRoot [] methods find_objects ObjectExtent a_filter ObjectFilter modify_objects void a_filter ObjectFilter a_modifier ObjectModifier
· add the following paragraph: "It has one public attribute:"
· add a bullet list (with one bullet), with the following text: "objects, which is the list of all the objects that belong to the ObjectExtent."
· add the following paragraph: "It offers methods to:"
· add a bullet list (with 2 bullets), as follows:
· "retrieve the subset of objects based on a provided ObjectFilter (find_objects); the result of this method is itself an ObjectExtent to allow the application of filtering on the result of another filtering (composition of filters);"
· "apply to a subset of the objects (based on a provided ObjectFilter) a provided ObjectModifier (modify_objects); in case the provided a_filter is NULL, the provided a_modifier is called on all the objects."
· starting at section 3.1.6.3.13 (formerly .11), all sections 3.1.6.3.x have their number increased by 2.
Changes in IDL
· Section 3.2.1.2.1 Generic DLRL Entities
· local interface ObjectFilter
· Remove operation:
ObjectRootSeq filter (in ObjectRootSeq objects);
· Add interfaces:
local interface ObjectModifier {
/*IMPLIED*
void modify_object (in ObjectRoot an_object);
*IMPLIED*/
};
local interface ObjectExtent {
/*IMPLIED*
readonly attribute ObjectRootSeq objects;
ObjectExtent find_objects (in ObjectFilter filter);
void modify_objects (in ObjectFilter filter, in ObjectModifier modifier);
*IMPLIED*/
};
· local interface ObjectHome
· Replace
readonly attribute ObjectRootSeq extent;
With
readonly attribute ObjectExtent extent;
· local interface Selection
· Replace
readonly attribute ObjectRootSeq members;
With
readonly attribute ObjectExtent membership;
Changes in implied IDL
· Section 3.2.1.2.2 Implied IDL
· local interface FooFilter
· Remove operation:
FooSeq filter (in FooSeq objects);
· Add interface:
local interface FooModifier : Dlrl::ObjectModifier {
void modify_object (in Foo an_object);
};
· Add interface:
local interface FooExtent : Dlrl::ObjectExtent {
readonly attribute FooSeq objects;
FooExtent find_objects (in FooFilter filter);
void modify_objects (in FooFilter filter, in FooModifier modifier);
};
· local interface FooHome
· Replace
readonly attribute FooSeq extent;
With
readonly attribute FooExtent extent;
· local interface FooSelection
· Replace
readonly attribute FooSeq members;
With
readonly attribute FooExtent membership;
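A hedged sketch of how an application might use the new collective operations, assuming a hypothetical C++ mapping of the implied IDL above (the header name, pointer passing and the ResetFoo class are illustrative, not part of the resolution):

    // Sketch only: collective operations with the new implied-IDL classes.
    #include "foo_dlrl.h"   // hypothetical header generated for class Foo

    // FooModifier's default modify_object does nothing, so the application
    // derives from it and overrides the operation.
    class ResetFoo : public FooModifier
    {
    public:
        virtual void modify_object (Foo* an_object)
        {
            // ... change the object's attributes here ...
        }
    };

    void reset_matching (FooHome* home, FooFilter* a_filter)
    {
        FooExtent* all = home->extent ();   // manager for all Foo instances
        ResetFoo a_modifier;
        // A NULL a_filter applies a_modifier to every object; a non-NULL
        // filter restricts it to the matching subset.
        all->modify_objects (a_filter, &a_modifier);
    }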
Actions taken:
December 19, 2003: received issue
September 23, 2004: closed issue

Issue 6755: Name of ObjectLink (consistency) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
This object is referred to (in attributes or operations) as refs (e.g., refs, deref); therefore the name of the class ObjectLink is confusing
Proposal [THALES]
Rename this construct as OidRef
Concerns the PIM (figure, list of entities, entities sections - tables and texts) and the PSM

Resolution: see below
Revised Text: Resolution: Change the name ObjectLink to ObjectReference. This change concerns the PIM (UML diagram) and the IDL.
Revised Text: Changes in PIM
· in section 3.1.6.2 DLRL Entities
· after the figure, in the list of entities
· 12th bullet, changed to: "ObjectReference Class to represent a raw reference (untyped) to an object."
· 13th bullet, changed to: "Reference Class to represent a typed reference to another object."
· in section 3.1.6.3.2 CacheAccess
· in the table, change the definition of the attribute refs: refs ObjectReference []
· change the parameter of the deref operation: ref ObjectReference
· in the following text, last bullet of the list: ObjectLink => ObjectReference
· in section 3.1.6.3.11 ObjectRoot
· in the table, change the result of the clone operation: clone ObjectReference
· in section 3.1.6.3.12 ObjectLink
· new title => ObjectReference
· in the following text, ObjectLink => ObjectReference
· title of the table: ObjectLink => ObjectReference
· in section 3.1.6.3.13 Reference
· in the title of the table: ObjectLink => ObjectReference
· in section 3.1.6.3.20 ListRelation
· in the table, change the definition of the attribute values: values ObjectReference []
· in section 3.1.6.3.21 StrMapRelation
· in the table, change the definition of the attribute values: values ObjectReference []
· in section 3.1.6.3.22 IntMapRelation
· in the table, change the definition of the attribute values: values ObjectReference []
· in section 3.2.1.1 Mapping Rules
· in the third paragraph, ObjectLink => ObjectReference
Changes in IDL
· Section 3.2.1.2.1 Generic DLRL Entities
· Replace
struct ObjectLink {
With
struct ObjectReference {
· Replace
typedef sequence<ObjectLink> ObjectLinkSeq;
With
typedef sequence<ObjectReference> ObjectReferenceSeq;
· change accordingly:
· result of ObjectRoot::clone operation to ObjectReference
· private member RefRelation::m_ref to ObjectReference
· private member ListRelation::m_refs to ObjectReferenceSeq
· StrMapRelation::Item::ref to ObjectReference
· IntMapRelation::Item::ref to ObjectReference
· readonly attribute Cache::refs to ObjectReferenceSeq
· result of CacheAccess::deref operation to ObjectReference
Actions taken:
December 19, 2003: received issue
September 23, 2004: closed issue

Issue 6756: Obtaining the DomainParticipantFactory (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
***Ref-04 Initialization_of_DomainParticipantFactory


The specification does not define how the application gets the
DomainParticipantFactory; if each implementation uses a different
mechanism, applications would not be portable.


***PROPOSAL***


Modify section 2.1.2.2.2, adding the static operation "get_instance" to
DomainParticipantFactory. This operation will return the factory, which
is a singleton. If the method is called multiple times, the same
factory is returned.

Resolution: see below
Revised Text: Resolution: Modify section 2.1.2.2.2, adding the static operation "get_instance" to DomainParticipantFactory. This operation will return the factory, which is a singleton. If the method is called multiple times, the same factory is returned. As it is not possible to specify "class" scoped operations on an IDL interface, this does not concern the PSM part.
Revised Text: Changes in PIM:
· In section 2.1.2.2.2 first paragraph,
· replace: "It is either a pre-existing object or it is created using some middleware-specific API" with "It is a pre-existing singleton object that can be accessed by means of the get_instance class operation on the DomainParticipantFactory"
· In section 2.1.2.2.2 DomainParticipantFactory table,
· add the get_instance operation, which takes no argument and returns a DomainParticipantFactory: get_instance DomainParticipantFactory
· Add section 2.1.2.2.2.3 with the following text: 2.1.2.2.2.3 get_instance This operation returns the DomainParticipantFactory singleton. The operation is idempotent, that is, it can be called multiple times without side-effects and it will return the same DomainParticipantFactory instance.
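A minimal sketch of the intended call pattern, in a hypothetical C++ form (since the IDL PSM cannot express the static operation, the exact syntax is implementation-defined; the header name is an assumption):

    // Sketch only: the get_instance class operation from the PIM.
    #include "dds_dcps.h"   // hypothetical DCPS header

    void obtain_factory ()
    {
        DDS::DomainParticipantFactory* f1 = DDS::DomainParticipantFactory::get_instance ();
        DDS::DomainParticipantFactory* f2 = DDS::DomainParticipantFactory::get_instance ();
        // get_instance is idempotent: f1 == f2, the same singleton.
    }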
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6757: Potential problems in PSM mappings (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
***Ref-05 IDL_case_sensitive


In the PSM, the IDL uses as formal parameter names the same string as
the name of the type, distinguished only by case. This appears to
treat IDL as a case-sensitive language, which it is not, and would result
in problems in languages such as Ada that are not case sensitive.


This problem appears in several places where "topic" appears as the
formal parameter name of an argument of type "Topic". For instance the
operations TopicListener::on_inconsistent_topic,
DomainParticipant::delete_topic, Subscriber::create_datareader
Similarly "listener" appears as the formal parameter name of an
argument of a type derived from Listener. For instance the operations
DomainParticipant::create_publisher,
DomainParticipant::create_subscriber, DomainParticipant::create_topic,
DomainParticipantFactory::create_participant
Publisher::create_datawriter, Subscriber::create_datareader


***PROPOSAL*** To avoid confusion use "a_topic" for the formal
parameter name of any parameter of type "Topic", or any specialization
of Topic.


Similarly use "a_listener" for the formal parameter name of any
parameter of type "Listener", or any specialization of Listener.



Resolution: see below
Revised Text: Resolution: Change the name of the parameters, by introducing "a_" in front of the current name. This change concerns the PIM (text) and the IDL.
Revised Text: Changes in PIM
· Section 2.1.2.2.1 DomainParticipant table
· Replace create_publisher formal parameter name from "listener" to "a_listener"
· Replace create_subscriber formal parameter name from "listener" to "a_listener"
· Replace create_topic formal parameter name from "listener" to "a_listener"
· Section 2.1.2.2.2 DomainParticipantFactory table
· Replace create_participant formal parameter name from "listener" to "a_listener"
· Section 2.1.2.2.3 DomainParticipantListener table
· Replace on_inconsistent_topic_status formal parameter name from "topic" to "the_topic"
· Replace on_data_on_readers formal parameter name from "subscriber" to "the_subscriber" [Applied to 16feb_version]
· Replace on_sample_lost formal parameter name from "subscriber" to "the_subscriber"
· Section 2.1.2.3.5 TopicListener table
· Replace on_inconsistent_topic_status formal parameter name from "topic" to "the_topic"
· Section 2.1.2.5.2 Subscriber table
· Replace create_datareader formal parameter name from "listener" to "a_listener"
· Section 2.1.2.5.6 SubscriberListener table
· Replace on_data_on_readers formal parameter name from "subscriber" to "the_subscriber"
· Replace on_sample_lost formal parameter name from "subscriber" to "the_subscriber"
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Interface DomainParticipantFactory, operation create_participant
· Change formal parameter name from "listener" to "a_listener"
· Interface TopicListener, operation on_inconsistent_topic
· Change formal parameter name from "topic" to "the_topic"
· Interface DomainParticipant, operation create_publisher
· Change formal parameter name from "listener" to "a_listener"
· Interface DomainParticipant, operation create_subscriber
· Change formal parameter name from "listener" to "a_listener"
· Interface DomainParticipant, operation create_topic
· Change formal parameter name from "listener" to "a_listener"
· Interface DomainParticipant, operation delete_topic
· Change formal parameter name from "topic" to "a_topic"
· Interface Publisher, operation create_datawriter
· Change formal parameter name from "listener" to "a_listener"
· Interface Subscriber, operation create_datareader
· Change formal parameter name from "topic" to "a_topic"
· Change formal parameter name from "listener" to "a_listener"
Disposition: Resolved
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6758: Naming_of_attribute_getter_operations (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Naming_of_attribute_getter_operations


DataWriter class table, and DataReader class table operations
get_liveliness_lost_status, get_offered_deadline_missed_status and
get_offered_incompatible_qos_status.


In the IDL file, these getter methods are specified as readonly
attributes with the same names, but without the preceding
"get_". This leads to getter methods with different names in different
languages (in C++ the "get_" part will disappear, in Java only the "_"
will disappear).


These IDL-to-language mappings would invalidate the PIM. To avoid
these kinds of issues we should avoid using attributes in either the
PIM or the PSM. The use of explicit operations will make the PIM more
easily mapped to different PSMs.


***PROPOSAL***


Replace the readonly attributes in the PIM or PSM with explicit
get_xxx operations. Replace any read-write attributes in
the PIM or PSM with explicit get_xxx and set_xxx operations.

Resolution: see below
Revised Text: Resolution: Replace the readonly attributes in the DCPS PIM or PSM with explicit get_xxx operations. Replace any read-write attributes in the DCPS PIM or PSM with explicit get_xxx and set_xxx operations. This change concerns the PIM (UML diagram and text) and the IDL.
Revised Text: Changes in PIM and IDL
· Section 2.1.2.1.9 GuardCondition table
· Remove attribute enabled_statuses
· Add operation get_enabled_statuses returning StatusKind []: get_enabled_statuses StatusKind []
· Section 2.1.2.1.9 StatusCondition table
· Remove attribute enabled_statuses
· Add operation get_enabled_statuses returning StatusKind []: get_enabled_statuses StatusKind []
· Insert section 2.1.2.1.9.2 with the following text: 2.1.2.1.9.2 get_enabled_statuses This operation retrieves the list of communication statuses that are taken into account to determine the trigger_value of the StatusCondition. This operation returns the statuses that were explicitly set on the last call to set_enabled_statuses or, if set_enabled_statuses was never called, the default list (see Section 2.1.2.1.9.1).
· Former section 2.1.2.1.9.2 get_entity becomes section 2.1.2.1.9.3
· Section 2.2.3 DCPS PSM : IDL
· Interface StatusCondition
· Remove:
readonly attribute StatusKindMask enabled_statuses;
· Add operation:
StatusKindMask get_enabled_statuses();
· Section 2.1.2.3.1 TopicDescription
· table
· Remove attribute type_name
· Remove attribute name
· Add operation get_type_name returning string
· Add operation get_name returning string
Resulting table is:
TopicDescription
no attributes
operations
get_participant DomainParticipant
get_type_name string
get_name string
· Add section 2.1.2.3.1.2 with the following content: 2.1.2.3.1.2 get_type_name This operation returns the type_name used to create the TopicDescription.
· Add section 2.1.2.3.1.3 with the following content: 2.1.2.3.1.3 get_name This operation returns the name used to create the TopicDescription.
· Section 2.1.2.3.2 Topic
· table
· Remove inherited attribute type_name
· Remove inherited attribute name
· Add inherited operation get_type_name returning string
· Add inherited operation get_name returning string
Resulting table is:
Topic
no attributes
operations
(inherited) get_qos QosPolicy []
(inherited) set_qos ReturnCode_t qos_list QosPolicy []
(inherited) get_listener Listener
(inherited) set_listener ReturnCode_t a_listener Listener mask StatusKind []
get_type_name (inherited) string
get_name (inherited) string
get_inconsistent_topic_status InconsistentTopicStatus
· Section 2.1.2.3.3 ContentFilteredTopic
· table
· Remove inherited attribute type_name
· Remove inherited attribute name
· Remove readonly attribute related_topic
· Remove readonly attribute filter_expression
· Remove attribute expression_parameters
· Add inherited operation get_type_name returning string
· Add inherited operation get_name returning string
· Add operation get_related_topic returning Topic
· Add operation get_filter_expression returning string
· Add operation get_expression_parameters returning string []
· Add operation set_expression_parameters which takes a string [] and returns a ReturnCode_t
Resulting table is:
ContentFilteredTopic
no attributes
operations
get_type_name (inherited) string
get_name (inherited) string
get_related_topic Topic
get_filter_expression string
get_expression_parameters string []
set_expression_parameters ReturnCode_t expression_parameters string []
· Add section 2.1.2.3.3.1, with the following content: 2.1.2.3.3.1 get_related_topic This operation returns the Topic associated with the ContentFilteredTopic. That is, the Topic specified when the ContentFilteredTopic was created.
· Add section 2.1.2.3.3.2, with the following content: 2.1.2.3.3.2 get_filter_expression This operation returns the filter_expression associated with the ContentFilteredTopic. That is, the expression specified when the ContentFilteredTopic was created.
· Add section 2.1.2.3.3.3, with the following content: 2.1.2.3.3.3 get_expression_parameters This operation returns the expression_parameters associated with the ContentFilteredTopic. That is, the parameters specified on the last successful call to set_expression_parameters, or if set_expression_parameters was never called, the parameters specified when the ContentFilteredTopic was created.
· Add section 2.1.2.3.3.4, with the following content: 2.1.2.3.3.4 set_expression_parameters This operation changes the expression_parameters associated with the ContentFilteredTopic.
· Section 2.1.2.3.4 MultiTopic
· table
· Remove inherited attribute type_name
· Remove inherited attribute name
· Remove readonly attribute subscription_expression
· Remove attribute expression_parameters
· Add inherited operation get_type_name returning string
· Add inherited operation get_name returning string
· Add operation get_subscription_expression returning string
· Add operation get_expression_parameters returning string []
· Add operation set_expression_parameters which takes a string [] and returns a ReturnCode_t
Resulting table is:
MultiTopic
no attributes
operations
get_type_name (inherited) string
get_name (inherited) string
get_subscription_expression string
get_expression_parameters string []
set_expression_parameters ReturnCode_t expression_parameters string []
· Add section 2.1.2.3.4.1, with the following content: 2.1.2.3.4.1 get_subscription_expression This operation returns the subscription_expression associated with the MultiTopic. That is, the expression specified when the MultiTopic was created.
· Add section 2.1.2.3.4.2, with the following content: 2.1.2.3.4.2 get_expression_parameters This operation returns the expression_parameters associated with the MultiTopic. That is, the parameters specified on the last successful call to set_expression_parameters, or if set_expression_parameters was never called, the parameters specified when the MultiTopic was created.
· Add section 2.1.2.3.4.3, with the following content: 2.1.2.3.4.3 set_expression_parameters This operation changes the expression_parameters associated with the MultiTopic.
· Section 2.1.2.5.8 ReadCondition
· table
· Remove attribute lifecycle_state_mask
· Remove attribute sample_state_mask
· Add operation get_lifecycle_state_mask returning a LifecycleStateKind []
· Add operation get_sample_state_mask returning a SampleStateKind []
Resulting table is:
ReadCondition
no attributes
operations
get_datareader DataReader
get_sample_state_mask SampleStateKind []
get_view_state_mask ViewStateKind []
get_instance_state_mask InstanceStateKind []
· Add section 2.1.2.5.8.2, with the following content: 2.1.2.5.8.2 get_lifecycle_state_mask This operation returns the set of lifecycle-states that are taken into account to determine the trigger_value of the ReadCondition. These are the lifecycle-states specified when the ReadCondition was created.
· Add section 2.1.2.5.8.3, with the following content: 2.1.2.5.8.3 get_sample_state_mask This operation returns the set of sample-states that are taken into account to determine the trigger_value of the ReadCondition. These are the sample-states specified when the ReadCondition was created.
· Section 2.1.2.5.8 QueryCondition [[Note: The description and table are inconsistent. The description mentions a "set_arguments" which does not appear in the table. The IDL has a read-write attribute called query_arguments so the proper name of the operation should be set_query_arguments.]]
· On the table
· Remove attribute query_expression
· Remove attribute query_arguments
· Add operation get_query_expression returning string
· Add operation get_query_arguments returning string []
· Add operation set_query_arguments which takes string [] and returns ReturnCode_t
Resulting table is:
QueryCondition
no attributes
operations
get_query_expression string
get_query_arguments string []
set_query_arguments ReturnCode_t query_arguments string []
· First paragraph after table: replace "set_arguments" with "set_query_arguments".
· Add section 2.1.2.5.9.1, with the following content: 2.1.2.5.9.1 get_query_expression This operation returns the query_expression associated with the QueryCondition. That is, the expression specified when the QueryCondition was created.
· Add section 2.1.2.5.9.2, with the following content: 2.1.2.5.9.2 get_query_arguments This operation returns the query_arguments associated with the QueryCondition. That is, the parameters specified on the last successful call to set_query_arguments, or if set_query_arguments was never called, the arguments specified when the QueryCondition was created.
· Add section 2.1.2.5.9.3, with the following content: 2.1.2.5.9.3 set_query_arguments This operation changes the query_arguments associated with the QueryCondition.
· Section 2.2.3 DCPS PSM : IDL
· Interface TopicDescription [[Note this is inconsistent with PSM. Issue 6793 dealt with fixing the inconsistency and also modifies the PSM. The changes below apply after applying the resolution of 6793]]
· Remove
readonly attribute string type_name;
· Remove
readonly attribute string name;
· Add operation:
string get_type_name();
· Add operation:
string get_name();
· Interface Topic [[Note this is inconsistent with PSM (the PSM has an operation, not an attribute). There are other issues that deal with fixing inconsistencies but none mention this one explicitly.]]
· Remove:
readonly attribute InconsistentTopicStatus inconsistent_topic_status;
· Add operation:
InconsistentTopicStatus get_inconsistent_topic_status();
· Interface ContentFilteredTopic
· Remove:
readonly attribute string filter_expression;
· Remove:
attribute StringSeq expression_parameters;
· Add operations:
string get_filter_expression();
StringSeq get_expression_parameters();
ReturnCode_t set_expression_parameters(StringSeq expression_parameters);
· Interface MultiTopic [[Note this is inconsistent with PSM (the PSM names the attribute "topic_expression" while the PIM names it "subscription_expression"). There are other issues that deal with fixing inconsistencies but none mention this one explicitly.]]
· Remove:
readonly attribute topic_expression;
· Remove:
attribute expression_parameters;
· Add operations:
string get_subscription_expression();
StringSeq get_expression_parameters();
ReturnCode_t set_expression_parameters(StringSeq expression_parameters);
· Interface ReadCondition
· Remove
readonly attribute LifecycleStateMask lifecycle_state_mask;
· Remove
readonly attribute SampleStateMask sample_state_mask;
· Add
LifecycleStateMask get_lifecycle_state_mask();
· Add
SampleStateMask get_sample_state_mask();
· Interface QueryCondition
· Remove
readonly attribute string query_expression;
attribute StringSeq query_arguments;
· Add operations:
string get_query_expression();
StringSeq get_query_arguments();
ReturnCode_t set_query_arguments(StringSeq query_arguments);
· Interface DomainParticipant
· Remove
readonly attribute DomainId_t domainId;
· Add operation:
DomainId_t get_domain_id();
· Interface DataWriter
· Remove
readonly attribute LivelinessLostStatus liveliness_lost_status;
readonly attribute OfferedDeadlineMissedStatus offered_deadline_missed_status;
readonly attribute OfferedIncompatibleQosStatus offered_incompatible_qos_status;
· Add operations:
LivelinessLostStatus get_liveliness_lost_status();
OfferedDeadlineMissedStatus get_offered_deadline_missed_status();
OfferedIncompatibleQosStatus get_offered_incompatible_qos_status();
· Interface Subscriber
· Remove
readonly attribute SampleLostStatus sample_lost_status;
· Add
SampleLostStatus get_sample_lost_status();
· Interface DataReader
· Remove
readonly attribute SampleRejectedStatus sample_rejected_status;
readonly attribute LivelinessChangedStatus liveliness_changed_status;
readonly attribute RequestedDeadlineMissedStatus requested_deadline_missed_status;
readonly attribute RequestedIncompatibleQosStatus requested_incompatible_qos_status;
· Add operations:
SampleRejectedStatus get_sample_rejected_status();
LivelinessChangedStatus get_liveliness_changed_status();
RequestedDeadlineMissedStatus get_requested_deadline_missed_status();
RequestedIncompatibleQosStatus get_requested_incompatible_qos_status();
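To illustrate the getter/setter style this resolution mandates, a hedged C++ sketch using only operations named above (the header and the _var memory-management details assume a classic IDL-to-C++ mapping and are not part of the resolution):

    // Sketch only: explicit getters/setters replacing the former attributes.
    #include "dds_dcps.h"   // hypothetical DCPS header

    void adjust_filter (DDS::ContentFilteredTopic* cft)
    {
        // The explicit getter keeps the same operation name in every
        // language mapping, unlike an IDL attribute.
        DDS::StringSeq_var params = cft->get_expression_parameters ();
        // ... modify params here ...
        // The explicit setter can report failure through ReturnCode_t,
        // which an IDL attribute could not.
        DDS::ReturnCode_t rc = cft->set_expression_parameters (params);
        if (rc != DDS::RETCODE_OK) {
            // handle the error
        }
    }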
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6759: Ref-62 Return_type_of_set_query_operations (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Section 2.1.2.5.9. The set_query_arguments method is of type void, and
can therefore not return an error status.


***PROPOSAL***


Subsumed by proposal to Ref-50. Explicit operation to set the
attribute will return ReturnCode_t

Resolution: See issue 6758 for disposition
Revised Text:
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6760: Delete dependencies and semantics (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Delete dependencies and semantics


***Ref-19 Delete_dependencies


Delete operations on Entities are only allowed if all other Entities
that have dependencies on them have been deleted first (for example,
a Topic may not be deleted before all other Entities depending on this
Topic have been deleted).


The specification does not explain what happens if an application
tries to delete an entity that has dependent or contained entities.


This applies to Sections: 2.1.2.2.1.2, 2.1.2.2.1.4, 2.1.2.2.1.6,
2.1.2.2.1.8, 2.1.2.2.1.10, and 2.1.2.2.2.2.


Given that Topic object can be obtained by means of the
DomainParticipant::create_topic operation as well as the
DomainParticipant::lookup_topic operation there is an ambiguity
regarding the deletion of Topic objects prior to deleting the
DomainParticipant. Should the application call delete just on the ones
it obtained by means of "create_topic" or also on the ones obtained by
means of "lookup_topic"?


***PROPOSAL***


State that if said "delete" operations are called while there are
dependent or contained entities the operation will fail and return
PRECONDITION_NOT_MET.


Fix by adding get_datareader() operation to the ReadCondition. This
operation should take no arguments and return a DataReader


Also state that the application should also delete the Topic obtained
by means of lookup_topic, as this is needed to remove the local
resources devoted to it.

Resolution: see below
Revised Text: Resolution: State that if said "delete" operations are called while there are dependent or contained entities the operation will fail and return PRECONDITION_NOT_MET. Also state that the application should also delete the Topic obtained by means of lookup_topic as this is needed to remove the local resources devoted to it. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· No changes to sections 2.1.2.2.1.2, 2.1.2.2.1.4, 2.1.2.2.1.6, 2.1.2.2.1.8, 2.1.2.2.1.10, and 2.1.2.2.2.2. They already describe the behavior if contained/dependent entities are not deleted.
· Section 2.1.2.5.2.6 delete_datareader; after the first paragraph add the paragraph: The deletion of a DataReader is not allowed if there are any existing ReadCondition or QueryCondition objects that are attached to the DataReader. If the delete_datareader operation is called on a DataReader with any of these existing objects attached to it, it will return PRECONDITION_NOT_MET.
· Section 2.1.2.2.1.11 lookup_topic; after the 2nd paragraph add the paragraph: A Topic that is locally obtained only by means of lookup_topic, that is, for which create_topic is not locally called, must also be deleted by means of delete_topic so that the local resources can be released.
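A hedged C++ sketch of the stated precondition (the header is hypothetical; delete_readcondition is the DataReader operation for removing the dependent condition):

    // Sketch only: deletion fails while dependent entities still exist.
    #include "dds_dcps.h"   // hypothetical DCPS header

    void delete_reader (DDS::Subscriber* subscriber, DDS::DataReader* reader,
                        DDS::ReadCondition* cond)
    {
        // Fails: a ReadCondition is still attached to the reader.
        DDS::ReturnCode_t rc = subscriber->delete_datareader (reader);
        // rc == DDS::RETCODE_PRECONDITION_NOT_MET

        reader->delete_readcondition (cond);          // delete dependents first
        rc = subscriber->delete_datareader (reader);  // now succeeds
    }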
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6761: Ref-20 Semantics_of_factory_delete_methods (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
The specification does not state what happens if the application tries
to use an entity after it has been deleted by means of the
corresponding "delete" operation on the factory. This is especially
relevant in languages such as Java (or C++ when using the "var" types)
that automatically maintain reference counts. Is the deletion delayed
until the last reference is released?


***PROPOSAL***


Modify section 2.1.1.1 and add the return code
ALREADY_DELETED. Similarly add the "const ReturnCode_t
RETCODE_ALREADY_DELETED = 9" to the IDL in section 2.2.3.


State that it is an error for the application to use an entity after
it has been deleted. Also state that, in general the result is
unspecified and may depend on the implementation or the PSM. Also
state that in the cases where the implementation can detect this, the
operation that uses the deleted entity should return
RETCODE_ALREADY_DELETED (e.g. calling DataWriter::write on a data writer that
has been deleted).


Resolution: see below
Revised Text: Resolution: Modify section 2.1.1.1 and add the return code ALREADY_DELETED. Similarly add the "const ReturnCode_t RETCODE_ALREADY_DELETED = 9" to the IDL in section 2.2.3. State that it is an error for the application to use an entity after it has been deleted. Also state that, in general, the result is unspecified and may depend on the implementation or the PSM. Also state that in the cases where the implementation can detect this, the operation that uses the deleted entity should return RETCODE_ALREADY_DELETED (e.g. calling DataWriter::write on a data writer that has been deleted). This change concerns the PIM (text) and the IDL.
Revised Text: Changes in PIM
· Section 2.1.1.1 Return codes table
· Add to the bottom the row: ALREADY_DELETED The object target of this operation has already been deleted.
· Section 2.1.1.1 at the end, before section 2.1.1.2:
· Replace paragraph: Any operation with return type ReturnCode_t may return OK or ERROR. Any operation that takes an input parameter may additionally return BAD_PARAMETER. OK, ERROR, and BAD_PARAMETER are the standard return codes. Operations that may return any of the additional error codes above will state so explicitly.
· with paragraphs: Any operation with return type ReturnCode_t may return OK or ERROR. Any operation that takes an input parameter may additionally return BAD_PARAMETER. Any operation on an object created from any of the factories may additionally return ALREADY_DELETED. The return codes OK, ERROR, ALREADY_DELETED, and BAD_PARAMETER are the standard return codes and the specification won't mention them explicitly for each operation. Operations that may return any of the additional (non-standard) error codes above will state so explicitly. It is an error for an application to use an Entity that has already been deleted by means of the corresponding delete operation on the factory. If an application does this, the result is unspecified and will depend on the implementation and the PSM. In the cases where the implementation can detect the use of a deleted entity, the operation should fail and return ALREADY_DELETED.
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Below
const ReturnCode_t RETCODE_INCONSISTENT_POLICY = 8;
· add
const ReturnCode_t RETCODE_ALREADY_DELETED = 9;
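A hedged C++ sketch of the new return code in action (the headers and the Foo/FooDataWriter names are illustrative stand-ins for a user-defined data type):

    // Sketch only: detected use of a deleted entity returns ALREADY_DELETED.
    #include "dds_dcps.h"   // hypothetical DCPS header
    #include "foo_dcps.h"   // hypothetical typed support for type Foo

    void late_write (DDS::Publisher* publisher, FooDataWriter* writer,
                     const Foo& sample)
    {
        publisher->delete_datawriter (writer);
        // Using the writer after deletion is an application error; when the
        // implementation can detect it, the operation fails explicitly:
        DDS::ReturnCode_t rc = writer->write (sample, DDS::HANDLE_NIL);
        // rc == DDS::RETCODE_ALREADY_DELETED (where detectable; otherwise
        // the behavior is unspecified)
    }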
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6762: Ref-87 Clarify_Topic_deletion_as_local_concept (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Section 2.1.2.2.1.11 indicates that topics can be made globally
available. So "creation" can have a global effect.


It is not clear therefore whether deletion of a Topic also has a
global effect.


***PROPOSAL***


Add to that paragraph in section 2.1.2.2.1.11: "In any case
delete_topic deletes only the local proxy."


Resolution: see below
Revised Text: Resolution: Add to that paragraph in section 2.1.2.2.1.11: "In any case delete_topic deletes only the local proxy."
Revised Text: Changes in PIM
· In Section 2.1.2.2.1.11
· just before the last paragraph, add the paragraph: Regardless of whether the middleware chooses to propagate topics, the delete_topic operation deletes only the local proxy.
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6763: Ref-151 No_locally_duplicate_topics (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
The specification does not state what happens if the application
attempts to create a Topic with the same name multiple times.


***PROPOSAL***


In section 2.1.2.2.1.5 state that calling create_topic multiple times
behaves as doing a lookup first with no timeout and if a Topic is
found then it compares the data-type & the qos specified as parameters
with those of the existing Topic.


State that if the comparison fails, then the operation returns
PRECONDITION_FAILED.


State that if the lookup and comparison succeed then the reference
count to the Topic should be incremented such that the application
must call delete_topic as many times as it called create_topic and
directly lookup_topic for the local proxy to be deleted.

Resolution: see below
Revised Text: Resolution: In section 2.1.2.2.1.5 state that calling create_topic multiple times behaves as doing a lookup first with no timeout and, if a Topic is found, then it compares the data-type & the QoS specified as parameters with those of the existing Topic. State that if the comparison fails, then the operation returns PRECONDITION_FAILED. State that if the lookup and comparison succeed then the reference count to the Topic should be incremented such that the application must call delete_topic as many times as it called create_topic and lookup_topic for the local proxy to be deleted.
Revised Text: Changes in PIM
· Section 2.1.2.2.1.5
· Replace
The application is not allowed to create two Topic objects with the same name attached to the same DomainParticipant. If the application attempts this, create_topic will fail and return an error.
With
The implementation of create_topic will automatically perform a lookup_topic for the specified topic_name with a timeout of zero. If a Topic is found, then the QoS and type_name of the found Topic are matched against the ones specified on the create_topic call and if there is an exact match, the existing Topic is returned. If there is no match the operation will fail. The consequence is that the application can never create more than one Topic with the same topic_name per DomainParticipant. Subsequent attempts will either return the existing Topic (i.e. behave like lookup_topic) or else fail.
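A hedged C++ sketch of the resulting create_topic semantics (the header, topic names and the QoS retrieval are illustrative assumptions):

    // Sketch only: create_topic acts as a lookup when the Topic exists.
    #include "dds_dcps.h"   // hypothetical DCPS header

    void topic_reference_counting (DDS::DomainParticipant* participant)
    {
        DDS::TopicQos qos;
        participant->get_default_topic_qos (qos);

        DDS::Topic* t1 = participant->create_topic ("Track", "TrackType", qos, NULL);
        // Same name with matching type and QoS: the existing Topic is
        // returned and its reference count is incremented. A mismatching
        // type or QoS would make the call fail instead.
        DDS::Topic* t2 = participant->create_topic ("Track", "TrackType", qos, NULL);

        // delete_topic must be called once per create_topic/lookup_topic
        // before the local proxy is released.
        participant->delete_topic (t2);
        participant->delete_topic (t1);
    }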
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6764: Ref-22 Automatic_deletion_of_contained_entities (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
The specification describes that an entity cannot be deleted if it has
contained entities. That is: a Publisher cannot be deleted if it has local
datawriters; a Subscriber cannot be deleted if it has local datareaders; a
DomainParticipant cannot be deleted if it has any Topics, Publishers, or Subscribers.


However, manually deleting all contained entities is cumbersome,
for example when the application is shutting down.


***PROPOSAL***


Add the operation "ReturnCode_t delete_contained_entities()" to
DomainParticipant, Publisher, and Subscriber.


This operation will delete all contained entities and leave the system
in a state that allows the application to delete the container. This
affects sections 2.1.2.2.1, 2.1.2.4.1, 2.1.2.5.2, and 2.2.3 (IDL).


Resolution: see below
Revised Text: Resolution: Add the operation "ReturnCode_t delete_contained_entities()" to DomainParticipant, Publisher, and Subscriber. This operation will delete all contained entities and leave the system in a state that allows the application to delete the container. This affects sections 2.1.2.2.1, 2.1.2.4.1, 2.1.2.5.2, and 2.2.3 (IDL).
Revised Text: Changes in PIM
· Section 2.1.2.2.1 DomainParticipant
· DomainParticipant table: Add operation: ReturnCode_t delete_contained_entities()
· Add section 2.1.2.2.1.17 with the following title:
2.1.2.2.1.17 delete_contained_entities
This operation deletes all the entities that were created by means of the "create" operations on the DomainParticipant. That is, it deletes all contained Publisher, Subscriber, Topic, ContentFilteredTopic, and MultiTopic objects. Prior to deleting each contained entity, this operation will recursively call the corresponding delete_contained_entities operation on each contained entity (if applicable). In this manner the operation delete_contained_entities on the DomainParticipant will end up deleting all the entities recursively contained in the DomainParticipant, that is, also the DataWriter and DataReader objects, as well as the QueryCondition and ReadCondition objects belonging to the contained DataReaders. Once delete_contained_entities returns successfully, the application may delete the DomainParticipant knowing that it has no contained entities.
· Section 2.1.2.4.1 Publisher
· Publisher table: Add operation: delete_contained_entities ReturnCode_t
· Add section 2.1.2.4.1.13
2.1.2.4.1.13 delete_contained_entities
This operation deletes all the entities that were created by means of the "create" operations on the Publisher. That is, it deletes all contained DataWriter objects. Once delete_contained_entities returns successfully, the application may delete the Publisher knowing that it has no contained DataWriter objects.
· Section 2.1.2.5.2 Subscriber
· Subscriber table: Add operation: delete_contained_entities ReturnCode_t
· Add section 2.1.2.5.2.14
2.1.2.5.2.14 delete_contained_entities
This operation deletes all the entities that were created by means of the "create" operations on the Subscriber. That is, it deletes all contained DataReader objects. This pattern is applied recursively. In this manner the operation delete_contained_entities on the Subscriber will end up deleting all the entities recursively contained in the Subscriber, that is, also the QueryCondition and ReadCondition objects belonging to the contained DataReaders. Once delete_contained_entities returns successfully, the application may delete the Subscriber knowing that it has no contained DataReader objects.
· Section 2.1.2.5.3 DataReader
· DataReader table: Add operation: delete_contained_entities ReturnCode_t
· Add section 2.1.2.5.3.19
2.1.2.5.3.19 delete_contained_entities
This operation deletes all the entities that were created by means of the "create" operations on the DataReader. That is, it deletes all contained ReadCondition and QueryCondition objects. Once delete_contained_entities returns successfully, the application may delete the DataReader knowing that it has no contained ReadCondition and QueryCondition objects.
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Interface DomainParticipant: Add operation: ReturnCode_t delete_contained_entities();
· Interface Publisher: Add operation: ReturnCode_t delete_contained_entities();
· Interface Subscriber: Add operation: ReturnCode_t delete_contained_entities();
· Interface DataReader: Add operation: ReturnCode_t delete_contained_entities();
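A short usage sketch of the new operation during shutdown, assuming the C++ mapping; 'factory' and 'participant' are placeholders for an already-obtained DomainParticipantFactory and DomainParticipant:

    // Sketch only: recursive teardown with the new operation.
    DDS::ReturnCode_t rc = participant->delete_contained_entities();
    if (rc == DDS::RETCODE_OK) {
        // All publishers, subscribers, topics, writers, readers, and
        // conditions are gone; the participant itself can now be deleted.
        factory->delete_participant(participant);
    }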
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6765: Ref-15 Behavior_on_deletion_from_wrong_factory (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
The specification does not mention what happens when an entity is
passed to a delete method of a factory object that is not its owner
(for example, a DataWriter is passed to the delete_datawriter method
of Publisher A while it was created by Publisher B).


This applies to Sections: 2.1.2.2.1.2, 2.1.2.2.1.4, 2.1.2.2.1.6,
2.1.2.2.1.8, 2.1.2.2.1.10, 2.1.2.4.1.6, 2.1.2.5.2.6, and 2.1.2.5.3.7


***PROPOSAL***


State that if said "delete" operations are called passing the wrong
factory, the operation shall fail and return PRECONDITION_NOT_MET.



Resolution: see below
Revised Text: Resolution: State that if said "delete" operations are called passing the wrong factory, the operation shall fail and return PRECONDITION_NOT_MET. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· On section 2.1.2.2.1.2 delete_publisher add paragraph: The delete_publisher operation must be called on the same DomainParticipant object used to create the Publisher. If delete_publisher is called on a different DomainParticipant the operation will have no effect and it will return PRECONDITION_NOT_MET.
· On section 2.1.2.2.1.4 delete_subscriber add paragraph: The delete_subscriber operation must be called on the same DomainParticipant object used to create the Subscriber. If delete_subscriber is called on a different DomainParticipant the operation will have no effect and it will return PRECONDITION_NOT_MET.
· On section 2.1.2.2.1.6 delete_topic add paragraph: The delete_topic operation must be called on the same DomainParticipant object used to create the Topic. If delete_topic is called on a different DomainParticipant the operation will have no effect and it will return PRECONDITION_NOT_MET.
· On section 2.1.2.2.1.8 delete_contentfilteredtopic add paragraph: The delete_contentfilteredtopic operation must be called on the same DomainParticipant object used to create the ContentFilteredTopic. If delete_contentfilteredtopic is called on a different DomainParticipant the operation will have no effect and it will return PRECONDITION_NOT_MET.
· On section 2.1.2.2.1.10 delete_multitopic add paragraph: The delete_multitopic operation must be called on the same DomainParticipant object used to create the MultiTopic. If delete_multitopic is called on a different DomainParticipant the operation will have no effect and it will return PRECONDITION_NOT_MET.
· On section 2.1.2.4.1.6 delete_datawriter add paragraph: The delete_datawriter operation must be called on the same Publisher object used to create the DataWriter. If delete_datawriter is called on a different Publisher the operation will have no effect and it will return PRECONDITION_NOT_MET.
· On section 2.1.2.5.2.6 delete_datareader add paragraph: The delete_datareader operation must be called on the same Subscriber object used to create the DataReader. If delete_datareader is called on a different Subscriber the operation will have no effect and it will return PRECONDITION_NOT_MET.
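For illustration, a sketch of the failure mode being specified, assuming the C++ mapping; 'publisher_a', 'publisher_b', 'topic', and 'writer_qos' are placeholders:

    // Sketch only: deleting through the wrong factory must fail.
    DDS::DataWriter* writer =
        publisher_a->create_datawriter(topic, writer_qos, NULL);
    DDS::ReturnCode_t rc = publisher_b->delete_datawriter(writer);
    // rc == DDS::RETCODE_PRECONDITION_NOT_MET; 'writer' is untouched.
    rc = publisher_a->delete_datawriter(writer);  // succeeds: correct factory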
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6766: Single waitset attached to condition (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-27 Single_waitset_attachement_to_condition


Section 2.1.2.1. Why can only one "WaitSet" be associated with a
"Condition"? This contradicts figure 2-7 and figure 2-9.


This seems too limiting. When this is combined with the fact that each
entity has a single status condition, we end up in a situation where
only one thread can wait on an entity changing status.


***PROPOSAL***


Correct section 2.1.2.1 and figure 2-5 to indicate that it is possible
to attach the same condition to multiple waitsets

Resolution: see below
Revised Text: Resolution: Correct section 2.1.2.1 and figure 2-5 to indicate that it is possible to attach the same condition to multiple waitsets. This change concerns the PIM (UML diagram).
Revised Text:
· On figure 2-2, replace the "1" on top of the arrow that goes from class Condition to class WaitSet with a "*".
· On figure 2-5, replace the "1" on top of the arrow that goes from class Condition to class WaitSet with a "*".
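A sketch of what the corrected multiplicity permits, assuming the C++ mapping; 'reader' is a placeholder DataReader:

    // Sketch only: one Condition attached to two WaitSet objects at once.
    DDS::StatusCondition* cond = reader->get_statuscondition();
    DDS::WaitSet waitset_a;
    DDS::WaitSet waitset_b;
    waitset_a.attach_condition(cond);
    waitset_b.attach_condition(cond);  // legal once the "1" becomes "*"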
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6767: Entity specialization of set/get qos/listener (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-35 Entity_specialization_set_get_qos


The specification shows this operation as being abstract on the
base-class (Entity) and says that it will be overridden by each
specialization (Section 2.1.2.1.1.2). This is consistent with the PSM.


However, the get_qos/set_qos operations are not mentioned in the
tables and text that describe the DomainParticipant (section
2.1.2.2.1).


Also for the specialized Entity classes, the operations are not
mentioned in the class table but do appear in the text that describes
the table. This applies to Topic (section 2.1.2.3.2), Publisher
(section 2.1.2.4.1), DataWriter (section 2.1.2.4.2), Subscriber
(section 2.1.2.5.2), and DataReader (section 2.1.2.5.3).


***PROPOSAL***


In order to avoid ambiguity, the get_qos/set_qos operation should
appear explicitly in all the aforementioned sections.



Resolution: see below
Revised Text: Resolution: In order to avoid ambiguity, the get_qos/set_qos operations should appear explicitly in all the aforementioned sections.
Revised Text: Changes in the text
· Modify the following tables:
· Section 2.1.2.2.1 DomainParticipant table
· Section 2.1.2.3.2 Topic table
· Section 2.1.2.4.1 Publisher table
· Section 2.1.2.4.2 DataWriter table
· Section 2.1.2.5.2 Subscriber table
· Section 2.1.2.5.3 DataReader table
· By adding the following operations:
(inherited) get_qos, returning QosPolicy []
(inherited) set_qos, returning ReturnCode_t, with parameter qos_list of type QosPolicy []
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6768: Ref-36 Entity_specialization_set_get_qos (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
The specification shows these operations as being abstract on the
base-class (Entity) and says that they will be overridden by each
specialization (Section 2.1.2.1.1.3). This is consistent with the PSM.


However, the get_listener/set_listener operations are not mentioned
in the tables and text that describe the DomainParticipant
(section 2.1.2.2.1).


Also for the specialized Entity classes, the operations are not
mentioned in the class table but do appear in the text that describes
the table. This applies to DomainParticipant (section 2.1.2.2.1),
Topic (section 2.1.2.3.2), Publisher (section 2.1.2.4.1), DataWriter
(section 2.1.2.4.2), Subscriber (section 2.1.2.5.2), and DataReader
(section 2.1.2.5.3).


***PROPOSAL***


In order to avoid ambiguity, the get_listener/set_listener operation
should appear explicitly in all the aforementioned sections.



Resolution: see below
Revised Text: Resolution: In order to avoid ambiguity, the get_listener/set_listener operations should appear explicitly in all the aforementioned sections. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Modify the following tables:
· Section 2.1.2.2.1 DomainParticipant table
· Section 2.1.2.3.2 Topic table
· Section 2.1.2.4.1 Publisher table
· Section 2.1.2.4.2 DataWriter table
· Section 2.1.2.5.2 Subscriber table
· Section 2.1.2.5.3 DataReader table
· By adding the following operations:
(inherited) get_listener, returning Listener
(inherited) set_listener, returning ReturnCode_t, with parameter a_listener of type Listener
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6769: Inconsistencies between PIM and PSM/IDL (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
***Ref-38 DomainParticipant_domainId


Section 2.1.2.2.1. DomainParticipant class table. The attribute
domainId is not mentioned although mentioned in the IDL PSM.


***PROPOSAL***


Correct PIM.

Resolution: see below
Revised Text: Resolution: Correct the PIM.
Revised Text: Changes in the text
· Section 2.1.2.2.1 DomainParticipant class table
· Add the following operation: get_domain_id, returning DomainId_t.
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6770: Ref-39 Entity_specialization_set_get_qos (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Section 2.1.2.2.1. DomainParticipant class table defines an operation
called delete_contentfilteredtopic. This operation is named
inconsistently in the IDL PSM (section 2.2.3), where it appears as
delete_contentfiltered.


***PROPOSAL***


Change the name of the operation in the IDL to
delete_contentfilteredtopic.

Resolution: see below
Revised Text: Resolution: Change the name of the operation in the IDL to delete_contentfilteredtopic.
Revised Text: Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Interface DomainParticipant
· Rename delete_contentfiltered to be delete_contentfilteredtopic.
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6771: Ref-28 IDL_entity_get_statuscondition (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Section 2.1.2.1.1. Method get_statuscondition. According to the IDL
file, this method is replaced by two other methods named
create_statuscondition and delete_statuscondition. Which of the two is
right?


The IDL is incorrect, given that there is a single StatusCondition
associated with an Entity: the Entity should have only a
get_statuscondition operation and no delete_statuscondition.


***PROPOSAL***


Correct the IDL. Replace the create_statuscondition and
delete_statuscondition in the IDL with get_statuscondition as
described in the PIM.

Resolution: see below
Revised Text: Resolution: Correct the IDL by replacing the create_statuscondition and delete_statuscondition with get_statuscondition as described in the PIM. This change concerns only the IDL.
Revised Text: Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Interface Entity
· Remove: StatusCondition create_statuscondition(in StatusKindMask mask); ReturnCode_t delete_statuscondition(in StatusCondition the_condition);
· Add: StatusCondition get_statuscondition();
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6772: Ref-34 Incorrect_guard_condition_enabled_statuses (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Section 2.1.2.1.8. The enabled_statuses attribute for a GuardCondition
is incorrect since a GuardCondition does not have any relationship
with status conditions.


***PROPOSAL***


Remove the attribute from section 2.1.2.1.8

Resolution: see below
Revised Text: Resolution: Remove the attribute from section 2.1.2.1.8.
Revised Text: Changes in PIM
· Section 2.1.2.1.8 GuardCondition table
· Remove operation get_enabled_statuses.
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6773: Ref-37 Entity_specialization_set_get_listener_in_idl (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
In section 2.2.3 (IDL) the entities DomainParticipant, Topic,
Publisher, DataWriter, Subscriber, and DataReader are missing the
get_listener/set_listener operations.


Those operations are all present in the PIM.


***PROPOSAL***


Add said operations to the IDL to match the PIM


Resolution: see below
Revised Text: Resolution: Add said operations to the IDL to match the PIM. This change only concerns the IDL.
Revised Text: Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Interface DomainParticipant, add the operations:
ReturnCode_t set_listener(in DomainParticipantListener a_listener, in StatusKindMask mask);
DomainParticipantListener get_listener();
· Interface Publisher, add the operations:
ReturnCode_t set_listener(in PublisherListener a_listener, in StatusKindMask mask);
PublisherListener get_listener();
· Interface DataWriter, add the operations:
ReturnCode_t set_listener(in DataWriterListener a_listener, in StatusKindMask mask);
DataWriterListener get_listener();
· Interface Subscriber, add the operations:
ReturnCode_t set_listener(in SubscriberListener a_listener, in StatusKindMask mask);
SubscriberListener get_listener();
· Interface DataReader, add the operations:
ReturnCode_t set_listener(in DataReaderListener a_listener, in StatusKindMask mask);
DataReaderListener get_listener();
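For illustration, how the added operations would be used from C++ (mapping assumed; MyReaderListener is a hypothetical application class and 'reader' a placeholder):

    // Sketch only: install a listener restricted to one status kind.
    MyReaderListener listener;  // hypothetical: implements DDS::DataReaderListener
    DDS::StatusKindMask mask = DDS::DATA_AVAILABLE_STATUS;
    reader->set_listener(&listener, mask);
    DDS::DataReaderListener* current = reader->get_listener();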
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6774: Ref-42 DomainParticipantListener_on_requested (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Section 2.1.2.2.3. DomainParticipantListener Interface table: The
operation on_requested_deadline_missed_qos is mentioned instead of
on_requested_deadline_missed.


***PROPOSAL***


Correct section 2.1.2.2.3

Resolution: see below
Revised Text: Resolution: Correct section 2.1.2.2.3. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.2.2.3 DomainParticipantListener table
· Replace operation name "on_requested_deadline_missed_qos" with "on_requested_deadline_missed". The corrected table row reads: on_requested_deadline_missed, returning void, with parameter the_reader of type DataReader.
Disposition: Resolved
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6775: Ref-46 ContentFilteredTopic_related_topic (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Section 2.1.2.3.3 in the ContentFilteredTopic class table, there is
the attribute related_topic. This attribute is missing in the IDL PSM.


***PROPOSAL***


Correct the IDL to match the PIM.


Resolution: see below
Revised Text: Resolution: Correct the IDL to match the PIM. This change only concerns IDL.
Revised Text: Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Interface ContentFilteredTopic
· Add the operation: Topic get_related_topic();
Disposition: Resolved
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6776: Ref-48 FooDataWriter_unregister_instance (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Section 2.1.2.4.2. FooDataWriter class table. The unregister_instance
method is specified to return InstanceHandle_t.


The return type should be ReturnCode_t, to conform to the IDL PSM.


***PROPOSAL***


Correct section 2.1.2.4.2 to match the IDL file.

Resolution: see below
Revised Text: Changes in PIM
· Section 2.1.2.4.2 FooDataWriter table
· On the row describing the operation unregister_instance, replace the return type "InstanceHandle_t" with "ReturnCode_t". The corrected table row reads: unregister_instance, returning ReturnCode_t, with parameters instance of type Foo and handle of type InstanceHandle_t.
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6777: Ref-49 DataWriter_get_key (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Section 2.1.2.4.2.  DataWriter class table. The get_key_value method
is named get_key in the IDL PSM.


***PROPOSAL***


Correct the IDL PSM. Method should be named get_key_value


Resolution: see below
Revised Text: Resolution: Correct the IDL. Method should be named get_key_value. This change only concerns the IDL.
Revised Text: Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Interface DataWriter
· Rename operation "get_key" to be "get_key_value" (in the comments):
// ReturnCode_t get_key_value(inout Data key_holder, in InstanceHandle_t handle);
· Interface FooDataWriter
· Rename operation "get_key" to be "get_key_value":
ReturnCode_t get_key_value(inout Foo key_holder, in DDS::InstanceHandle_t handle);
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6778: Ref-57 FooDataReader_get_key (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Section 2.1.2.5.3.  FooDataReader class table. The get_key_value
method is named get_key in the IDL PSM.


***PROPOSAL***


Correct the IDL PSM. Method should be named get_key_value

Resolution: see below
Revised Text: Resolution: Correct the IDL. Method should be named get_key_value. This change only concerns the IDL.
Revised Text: Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Interface DataReader
· Rename operation "get_key" to be "get_key_value" (in the comments):
// DDS::ReturnCode_t get_key_value(inout Data key_holder, in InstanceHandle_t handle);
· Interface FooDataReader
· Rename operation "get_key" to be "get_key_value":
DDS::ReturnCode_t get_key_value(inout Foo key_holder, in DDS::InstanceHandle_t handle);
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6779: Ref-56 Subscriber_notify_datareaders_parameters (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
In section 2.1.2.5.2, Subscriber class table, the operation
notify_datareaders returns void and takes no parameters. In the IDL
file this method has two parameters: LifecycleStateMask and
SampleStateMask.


***PROPOSAL***


Correct the IDL PSM. Remove the parameters in the IDL file.

Resolution: see below
Revised Text: Resolution: Correct the IDL PSM by removing the unnecessary parameters. This change only concerns the IDL.
Revised Text: Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Interface Subscriber, operation notify_datareaders:
· Change prototype from: void notify_datareaders(in LifecycleStateMask l_state, in SampleStateMask s_state);
· To: void notify_datareaders();
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6780: Ref-58 DataReader_read_take_w_condition (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Section 2.1.2.5.3 (DataReader and FooDataReader class table),
operations read_w_condition and take_w_condition. In the IDL file, the
sample_info parameter is of type SampleInfo instead of SampleInfoSeq.


***PROPOSAL*** Correct the IDL PSM. sample_info parameter should be of
type SampleInfoSeq.

Resolution: see below
Revised Text: Resolution: Correct the IDL PSM. The parameter should be of type SampleInfoSeq. This change only concerns IDL.
Revised Text: Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Interface DataReader, operation read_w_condition (comment)
· Change prototype from: // ReturnCode_t read_w_condition(out DataSeq received_data, out SampleInfo info_seq, in ReadCondition condition);
· To: // ReturnCode_t read_w_condition(out DataSeq received_data, out SampleInfoSeq info_seq, in ReadCondition condition);
· Interface DataReader, operation take_w_condition (comment)
· Change prototype from: // ReturnCode_t take_w_condition(out DataSeq received_data, out SampleInfo info_seq, in ReadCondition condition);
· To: // ReturnCode_t take_w_condition(out DataSeq received_data, out SampleInfoSeq info_seq, in ReadCondition condition);
Changes in implied IDL
· Interface FooDataReader, operation read_w_condition
· Change prototype from: DCPS::ReturnCode_t read_w_condition(out FooSeq received_data, out DCPS::SampleInfo info_seq, in DCPS::ReadCondition condition);
· To: DCPS::ReturnCode_t read_w_condition(out FooSeq received_data, out DCPS::SampleInfoSeq info_seq, in DCPS::ReadCondition condition);
· Interface FooDataReader, operation take_w_condition
· Change prototype from: DCPS::ReturnCode_t take_w_condition(out FooSeq received_data, out DCPS::SampleInfo info_seq, in DCPS::ReadCondition condition);
· To: DCPS::ReturnCode_t take_w_condition(out FooSeq received_data, out DCPS::SampleInfoSeq info_seq, in DCPS::ReadCondition condition);
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6781: Ref-59 FooDataReader_read_take_parameter_order (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Section 2.1.2.5.3 FooDataReader class table, operation read and
take. In the table, lifecycle_states and sample_states parameters are
reversed with respect to IDL PSM.


***PROPOSAL***


Correct section 2.1.2.5.3.

Resolution: see below
Revised Text: Resolution: Correct section 2.1.2.5.3, to align with the IDL. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.2.5.3 FooDataReader table, operation read:
· Swap the order of parameters lifecycle_states and sample_states
· From: 3rd parameter: lifecycle_states of type LifecycleStateKind[]; 4th parameter: sample_states of type SampleStateKind[]
· To: 3rd parameter: sample_states of type SampleStateKind[]; 4th parameter: lifecycle_states of type LifecycleStateKind[]
· Section 2.1.2.5.3 FooDataReader table, operation take:
· Swap the order of parameters lifecycle_states and sample_states
· From: 3rd parameter: lifecycle_states of type LifecycleStateKind[]; 4th parameter: sample_states of type SampleStateKind[]
· To: 3rd parameter: sample_states of type SampleStateKind[]; 4th parameter: lifecycle_states of type LifecycleStateKind[]
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6782: Ref-70 Missing_deadline_statuskind_from_pim (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Section 2.2.2. The PSM IDL StatusKind constant definitions
OFFERED_INSTANCE_DEADLINE_MISSED_STATUS and
REQUESTED_INSTANCE_DEADLINE_MISSED_STATUS are not mentioned in the PIM
and do not have a corresponding Status class either.


***PROPOSAL***


Correct PIM and add said statuses to section 2.1.4.1.

Resolution: see below
Revised Text: Resolution: Remove the OFFERED_INSTANCE_DEADLINE_MISSED_STATUS and REQUESTED_INSTANCE_DEADLINE_MISSED_STATUS from the PSM. They are redundant with the already existing OFFERED_DEADLINE_MISSED_STATUS and REQUESTED_DEADLINE_MISSED_STATUS. This change concerns only the PSM.
Revised Text: Changes in PSM
· Section 2.2.3 DCPS PSM : IDL
· Remove:
const StatusKind OFFERED_INSTANCE_DEADLINE_MISSED_STATUS = 0x0001 << 3;
const StatusKind REQUESTED_INSTANCE_DEADLINE_MISSED_STATUS = 0x0001 << 4;
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6783: Ref-79 Missing_StatusKind_liveliness_idl_constants (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
The communication statuses LIVELINESS_LOST and LIVELINESS_CHANGED do
not have corresponding PSM IDL StatusKind constant definitions.


***PROPOSAL***


Correct the IDL PSM. Add said definitions consistently with PIM.




Resolution: see below
Revised Text: Resolution: Add said definitions in the IDL consistently with the PIM. This change concerns only the IDL.
Revised Text: Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· After the line: const StatusKind DATA_AVAILABLE_STATUS = 0x0001 << 10;
· Add the following lines:
const StatusKind LIVELINESS_LOST_STATUS = 0x0001 << 11;
const StatusKind LIVELINESS_CHANGED_STATUS = 0x0001 << 12;
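Since the StatusKind constants are disjoint bit flags, the new kinds compose into masks as usual; a small C++ sketch, assuming the mapping and the StatusCondition enabled-statuses accessor from the PIM ('cond' is a placeholder):

    // Sketch only: enable a StatusCondition for the two new liveliness kinds.
    DDS::StatusKindMask mask =
        DDS::LIVELINESS_LOST_STATUS | DDS::LIVELINESS_CHANGED_STATUS;
    cond->set_enabled_statuses(mask);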
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6784: Ref-88 Inconsistent_naming_PIM_IDL_instance_samples (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Page 2-68: the field in the RESOURCE_LIMITS QoS is called
max_instance_samples. The IDL (page 2-104) says
max_samples_per_instance.


***PROPOSAL***


Change the table on page 2-68, replacing the two references with
max_samples_per_instance.

Resolution: see below
Revised Text: Resolution: Change the table on page 2-68, replacing the two references with max_samples_per_instance. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.3 QoS table
· On the description of RESOURCE_LIMITS, replace "max_instance_samples" with "max_samples_per_instance".
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6785: Ref-205 On_requested_deadline_missed_paramtype (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
2.1.2.2.3: In the "on_requested_deadline_missed" method, the second
parameter is of type "RequestedIncompatibleQosStatus" instead of type
"RequestedDeadlineMissedStatus"


***PROPOSAL***


Fix section 2.1.2.2.3. parameter type should be
RequestedDeadlineMissedStatus

Resolution: see below
Revised Text: Resolution: Fix section 2.1.2.2.3. The parameter type should be RequestedDeadlineMissedStatus. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.2.2.3 DomainParticipantListener table
· Operation on_requested_deadline_missed: replace the type of parameter "status" from "RequestedIncompatibleQosStatus" to "RequestedDeadlineMissedStatus". The corrected table row reads: on_requested_deadline_missed, returning void, with parameters the_reader of type DataReader and status of type RequestedDeadlineMissedStatus.
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6786: Ref-126 Inconsistent_parameter_order_to_get_datareaders (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
In section 2.1.2.5.2 the order of parameters in get_datareaders,
(LifecycleState, SampleState), is inconsistent with the order used in
the other methods, which is (SampleState, LifecycleState).


***PROPOSAL***


Change parameter order to (SampleState, LifecycleState) This affects
both the PIM (section 2.1.2.5.2) and the IDL PSM


Resolution: see below
Revised Text: Resolution: Change parameter order to (SampleState, LifecycleState). This change concerns the PIM (text) and the IDL.
Revised Text: Changes in PIM
· Section 2.1.2.5.2 Subscriber table:
· Operation get_datareaders, swap the order of parameters lifecycle_states and sample_states
· From: 2nd parameter: lifecycle_states of type LifecycleStateKind[]; 3rd parameter: sample_states of type SampleStateKind[]
· To: 2nd parameter: sample_states of type SampleStateKind[]; 3rd parameter: lifecycle_states of type LifecycleStateKind[]
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Interface Subscriber
· Replace operation: ReturnCode_t get_datareaders(out DataReaderSeq readers, in LifecycleStateMask l_state, in SampleStateMask s_state);
· With operation: ReturnCode_t get_datareaders(out DataReaderSeq readers, in SampleStateMask s_state, in LifecycleStateMask l_state);
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6787: Ref-135 Missing_accessor_for_SampleRejectedStatus (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Attribute/accessor for SampleRejectedStatus missing from 2.1.2.5.3


***PROPOSAL***


Add the get_xxx accessor to the DataReader class.

Resolution: see below
Revised Text: Resolution: Add the accessor to the DataReader class. This change concerns the PIM (UML diagram and text) and the IDL.
Revised Text: Changes in PIM
· Section 2.1.2.5.3 DataReader table:
· Add operation get_sample_rejected_status, with no arguments, returning SampleRejectedStatus.
· Insert section 2.1.2.5.3.15 right after the section named "get_requested_incompatible_qos_status":
2.1.2.5.3.15 get_sample_rejected_status
This operation allows access to the SAMPLE_REJECTED_STATUS communication status. Communication statuses are described in Section 2.1.4.1.
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Interface DataReader, add operation: SampleRejectedStatus get_sample_rejected_status();
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6788: Ref-63 QoS_USER_DATA_on_Publisher_and_Subscriber (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Section 2.1.3. According to the QosPolicy table, USER_DATA does not
concern Publisher and Subscriber entities. The IDL PSM however has
UserDataQosPolicy as a member of both the PublisherQos and
SubscriberQos structures.


***PROPOSAL***


Correct the PSM: remove USER_DATA from Publisher and
Subscriber.

Resolution: see below
Revised Text: Resolution: Correct the IDL by removing the extra members in those structures. This change only concerns the IDL.
Revised Text: Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· struct SubscriberQos
· Remove member: UserDataQosPolicy user_data;
· struct PublisherQos
· Remove member: UserDataQosPolicy user_data;
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6789: Ref-229 IDL_rename_publisher_laxity_w_latency_budget (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Section 2.2.3 IDL. TopicQos, DataWriterQos, and DataReaderQos have a
field called delay_laxity that should be latency_budget according to
the PIM.


***PROPOSAL***


Change the IDL to match the PIM by renaming delay_laxity to
latency_budget


Resolution: see below
Revised Text: Resolution: Change the IDL to match the PIM by renaming delay_laxity to latency_budget. This change only concerns the IDL.
Revised Text: Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· struct TopicQos: rename field "delay_laxity" to be "latency_budget"
· struct DataWriterQos: rename field "delay_laxity" to be "latency_budget"
· struct DataReaderQos: rename field "delay_laxity" to be "latency_budget"
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6790: Clarification of listener invocation and waitset signaling (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-01 Listener_waitset_triggering


The specification does not make clear whether listeners and waitsets
are awakened once per "event" that causes a change of status (implying
a queue of events) or just once when the status changes.


***PROPOSAL***


Clarify that there is no implied "queueing". The listeners and
waitsets are enabled based on the status change but not necessarily
called/signaled multiple times.  This clarification should appear in
section 2.1.4.


Resolution: see below
Revised Text: Resolution: Clarify that there is no implied "queueing". The listeners and waitsets are enabled based on the status change but not necessarily called/signaled multiple times. This clarification should appear in section 2.1.4. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.2.3
· Add the following paragraph at the very end of the section: There is no implied "event queuing" in the invocation of the listeners in the sense that, if several changes of status of the same kind occur in sequence, it is not necessary that the DCPS implementation delivers one listener callback per "unit" change. For example, it may occur that the DCPS implementation discovers that the liveliness of a DataReader has changed in that several matching DataWriter entities have appeared; in that case the DCPS implementation may choose to invoke the on_liveliness_changed operation on the DataReaderListener just once.
· Section 2.1.4.4
· Add the following paragraph right before figure 2-20 (that is, before the paragraph: "A key aspect of the Condition/WaitSet mechanism is the setting of the trigger_value of each Condition."): Similar to the invocation of listeners, there is no implied "event queuing" in the awakening of a WaitSet in the sense that, if several Conditions attached to the WaitSet have their trigger_value transition to TRUE in sequence, the DCPS implementation needs to unblock the WaitSet only once.
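The practical consequence for application code is that a callback must not assume one invocation per unit change. A hedged C++ sketch, assuming the mapping and the listener callback shape implied by the PIM tables (status parameter types are assumptions):

    // Sketch only: a listener written to tolerate coalesced notifications.
    class MyReaderListener : public DDS::DataReaderListener {
    public:
        virtual void on_liveliness_changed(
            DDS::DataReader* reader,
            const DDS::LivelinessChangedStatus& status) {
            // Several writers may have appeared or disappeared since the last
            // callback; act on the contents of 'status', not on a count of
            // callbacks received.
        }
        // Remaining callbacks omitted for brevity; a concrete subclass must
        // implement them all.
    };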
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6791: Ref-02 Data_Available_status_transition (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
The data available status becomes TRUE when data is available, but
under what condition does it become FALSE again?


According to the specification it becomes FALSE again when data is
taken; but does this apply only to the object that is accessed? Or do
the status values of other objects that become untrue as a result of
the take action also need to be changed?


Example: A reader with 3 queries, data arrives and the status of the
reader and all queries become TRUE. The data is taken via the reader,
do the status values of the queries become FALSE?


***PROPOSAL***


Add the following clarifications to section 2.1.4.4


DataReader::take() and DataReader::read() change the status of the
queries that are no longer true after the 'take'. Since 'take' removes
the data from the service, so that it is no longer there, it would
make little sense for the status to remain enabled.


In other words the same way that arrival of data may change the 'data
available status' of any query, so would the 'taking' or 'reading' of
data potentially affect all queries.


Note that this does not mean that waitsets that were attached to the
query will not be woken up. This is an implementation issue. Once the
query status changes to 'enabled' it may wake up the attached waitset;
transitioning to 'not-enabled' does not necessarily 'unwakeup' the
waitset, since this operation is in general not possible. The
consequence is that the application may be woken up and not see any
active conditions. This is unavoidable if multiple threads are
concurrently taking data.


Resolution: see below
Revised Text: Resolution: Add the following clarifications to section 2.1.4.4: DataReader::take() and DataReader::read() change the status of the queries that are no longer true after the 'take'. Since 'take' removes the data from the service, it would make little sense for the status to remain enabled. In other words, the same way that arrival of data may change the 'data available status' of any query, so would the 'taking' or 'reading' of data potentially affect all queries. Note that this does not mean that waitsets that were attached to the query will not be woken up; this is an implementation issue. Once the query status changes to 'enabled' it may wake up the attached waitset; transitioning to 'not-enabled' does not necessarily 'unwakeup' the waitset, since this operation is in general not possible. The consequence is that the application may be woken up and not see any active conditions. This is unavoidable if multiple threads are concurrently taking data. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.4.4.2, after the first paragraph ("Similar … evaluates to TRUE"),
· add the following paragraph: The fact that the trigger_value of a ReadCondition is dependent on the presence of samples on the associated DataReader implies that a single take operation can potentially change the trigger_value of several ReadCondition or QueryCondition conditions. For example, if all samples are taken, any ReadCondition and QueryCondition conditions associated with the DataReader that had their trigger_value==TRUE before will see the trigger_value change to FALSE. Note that this does not guarantee that WaitSet objects that were separately attached to those conditions will not be woken up. Once we have trigger_value==TRUE on a condition it may wake up the attached WaitSet; the condition transitioning to trigger_value==FALSE does not necessarily 'unwakeup' the WaitSet, as 'unwakening' may not be possible in general. The consequence is that an application blocked on a WaitSet may return from the wait with a list of conditions some of which are no longer "active". This is unavoidable if multiple threads are concurrently waiting on separate WaitSet objects and taking data associated with the same DataReader entity.
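For application code this means re-checking conditions after the wait returns. A sketch assuming the C++ mapping and CORBA-style sequence iteration; 'waitset' and the dispatch logic are placeholders:

    // Sketch only: a wait may return conditions that are no longer active.
    DDS::ConditionSeq active;
    DDS::Duration_t timeout = { 10, 0 };
    if (waitset.wait(active, timeout) == DDS::RETCODE_OK) {
        for (CORBA::ULong i = 0; i < active.length(); ++i) {
            if (active[i]->get_trigger_value()) {  // may already be FALSE again
                // dispatch the work associated with active[i]
            }
        }
    }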
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6792: Duplicate use of domainId (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-38 Duplicate_domainId_to_create_participant


The specification does not clarify the behavior of
DomainParticipantFactory create_participant when the domainId
specified is already used.


***PROPOSAL***


Return a HANDLE_NIL in that case.


Also add a lookup_participant method to DomainParticipantFactory with
a parameter of type DomainId_t, to allow finding a pre-existing
DomainParticipant.

Resolution: see below
Revised Text: Resolution: The operation create_participant will return a new participant that is also bound to the same domain. Also add a lookup_participant method to DomainParticipantFactory with a parameter of type DomainId_t, to allow finding a pre-existing DomainParticipant. This change concerns the PIM (UML diagram and text) and the IDL.
Revised Text: Changes in PIM
· Section 2.1.2.2.2 DomainParticipantFactory
· DomainParticipantFactory table: add operation: DomainParticipant lookup_participant(DomainId_t domainId);
· Add section 2.1.2.2.2.4, with the following content:
2.1.2.2.2.4 lookup_participant
This operation retrieves a previously created DomainParticipant belonging to the specified domainId. If no such DomainParticipant exists, the operation will return a 'nil' value (as specified by the platform). If multiple DomainParticipant entities belonging to that domainId exist, then the operation will return one of them; it is not specified which one.
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Interface DomainParticipantFactory
· Add operation: DomainParticipant lookup_participant(DomainId_t domainId);
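A usage sketch of the new lookup, assuming the C++ mapping; 'factory', 'domain_id', and 'participant_qos' are placeholders:

    // Sketch only: reuse an existing participant for the domain, else create.
    DDS::DomainParticipant* p = factory->lookup_participant(domain_id);
    if (p == NULL) {
        p = factory->create_participant(domain_id, participant_qos, NULL);
    }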
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6793: Use of Topic versus TopicDescription (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-43 TopicDescription_name_attribute


In section 2.1.2.3 TopicDescription has an attribute called "name".
However the IDL is inconsistent: TopicDescription does not have the
"name" attribute; instead Topic has the name, and the other derived
classes (ContentFilteredTopic and MultiTopic) do not have a name.


In the IDL, the name is not specified by TopicDescription but instead
only by Topic.


It makes sense to have the name on TopicDescription as in the PIM such
that all kinds of TopicDescription entities can be locally accessed by
means of "lookup_topic".


***PROPOSAL***


Fix the IDL file (PSM) to match the PIM.

Resolution: see below
Revised Text: Resolution: Correct the IDL to be compliant with the PIM. This change only concerns the IDL.
Revised Text: Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Interface TopicDescription
· Add operations: string get_type_name(); string get_name();
· Interface Topic
· Remove: readonly attribute string name;
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6794: Ref-40 Name_and_return_type_of_lookup_topic (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Currently, the lookup_topic method returns a type Topic. Why does it
not return a type TopicDescription? In that case, it could also be
used to search for ContentFilteredTopics and MultiTopics.


***PROPOSAL***


Change prototype lookup_topic to return TopicDescription.


Change name from lookup_topic to lookup_topicdescription


Specify that lookup_topicdescription will also find
ContentFilteredTopic and MultiTopic


This affects sections 2.1.2.2.1.11, and 2.2.3.

Resolution: see below
Revised Text: Resolution: Rename lookup_topic to be find_topic. Introduce lookup_topicdescription as a separate operation with the normal "lookup" semantics; that is, lookup_topicdescription finds only local objects, and there is never a need to delete the result. With find_topic you get a "duplicate topic object"; you have to delete it as many times as it was found.
Revised Text: Changes in PIM
· Section 2.1.2.2 Domain Module
· Figure 2-6: update the figure to reflect the operations find_topic and lookup_topicdescription.
· Figure 2-7: update the figure to reflect the operations find_topic and lookup_topicdescription.
· DomainParticipant table, lookup_topic operation:
· Replace operation name "lookup_topic" with "find_topic".
· Add operation lookup_topicdescription, returning TopicDescription.
· Section 2.1.2.2.1.5 create_topic (corrections apply after the changes in 6763)
· 1st paragraph, replace: The operation lookup_topic … with: The operation find_topic …
· 5th paragraph, replace: The implementation of create_topic will automatically perform a lookup_topic for the specified topic_name with a timeout of zero. with: The implementation of create_topic will automatically perform a lookup_topicdescription for the specified topic_name.
· 5th paragraph, replace: Subsequent attempts will either return the existing Topic (i.e. behave like lookup_topic) or else fail. with: Subsequent attempts will either return the existing Topic (i.e. behave like find_topic) or else fail.
· Add paragraph after 5th paragraph: If a Topic is obtained multiple times by means of a create_topic, it must also be deleted that same number of times using delete_topic.
· Section 2.1.2.2.1.11 lookup_topic
· Replace section name "lookup_topic" with "find_topic".
· 3rd paragraph, replace: A Topic that is locally obtained only by means of lookup_topic (that is, for which create_topic was not locally called) must also be deleted by means of delete_topic so that the local resources can be released. with: A Topic obtained by means of find_topic must also be deleted by means of delete_topic so that the local resources can be released. If a Topic is obtained multiple times by means of find_topic or create_topic, it must also be deleted that same number of times using delete_topic.
· Add paragraph after 3rd paragraph: If a Topic is obtained multiple times by means of a find_topic, it must also be deleted that same number of times using delete_topic.
· Add section 2.1.2.2.1.11:
2.1.2.2.1.11 lookup_topicdescription
The operation lookup_topicdescription gives access to an existing locally-created TopicDescription, based on its name. The operation takes as argument the name of the TopicDescription. If a TopicDescription of the same name already exists, it gives access to it, otherwise it returns a 'nil' value. The operation never blocks. The operation lookup_topicdescription may be used to locate any locally-created Topic, ContentFilteredTopic, and MultiTopic object. If the operation fails to locate a TopicDescription, a 'nil' value (as specified by the platform) is returned.
· Section 2.1.3.2 DURABILITY
· 4th paragraph, replace: … In other words, it is "as-if" the service first did lookup_topic to access the Topic, … with: … In other words, it is "as-if" the service first did find_topic to access the Topic, …
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· DomainParticipant interface, operation lookup_topic:
· Rename operation from "lookup_topic" to "find_topic".
· Add operation: TopicDescription lookup_topicdescription(in string name);
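The behavioral difference between the two operations, sketched in C++ (mapping assumed; 'participant' and the topic name are placeholders):

    // Sketch only: find_topic may block up to the timeout and hands out a
    // reference that must be deleted; lookup_topicdescription is local,
    // non-blocking, and never requires a matching delete.
    DDS::Duration_t timeout = { 5, 0 };
    DDS::Topic* topic = participant->find_topic("TrackData", timeout);
    if (topic != NULL) {
        // ... use the topic ...
        participant->delete_topic(topic);  // one delete per successful find_topic
    }
    DDS::TopicDescription* desc =
        participant->lookup_topicdescription("TrackData");  // no delete needed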
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6795: Reason and use of enable (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-30 Topic_enable_semantics


In section 2.1.2.1.1.7 it is not clear what the precise semantics of
"enable" on a Topic are. Can the same or another node see (by means
of "lookup_topic") a Topic that has not been enabled?


Similarly, can the TopicListener be invoked before the Topic is
enabled? That appears undesirable, since the QoS may still change.


***PROPOSAL***


Clarify that the Topic is not accessible by means of "lookup",
neither locally nor globally, until it has been enabled.


Also clarify that the TopicListener will only be called after the
Topic is enabled.

Resolution: see below
Revised Text: Resolution: Clarify that the Topic is not accessible by means of "lookup", neither locally nor globally, until it has been enabled. Also clarify that the TopicListener will only be called after the Topic is enabled. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.2.1.1.7
· At the end of the section add the paragraph: "The Listeners associated with an entity are not called until the entity is enabled. Conditions associated with an entity that is not enabled are "inactive", that is, have a trigger_value==FALSE (see Section 2.1.4.4)."
· Section 2.1.2.2.1.11
· First paragraph, replace: The operation lookup_topic gives access to an existing (or ready to exist) Topic, based on its name. with: The operation lookup_topic gives access to an existing (or ready to exist) enabled Topic, based on its name.
Disposition: Resolved
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6796: Ref-31 Reason_and_use_of_enabled (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
The "enable"operation was added in support of DLRL. However if the
application is not using DLRL it is often inconvenient to have to
explicitly "enable" each entity before using it.


***PROPOSAL***


Add a QoS to the DomainParticipant, Publisher and Subscriber called
ENTITY_CREATION_SETTINGS. This QoS contains the Boolean
"create_enabled". If this Boolean is set to "TRUE" then the entities
created by the factory will be automatically enabled (i.e. the factory
will call the enable() before returning the entity).


Specify that the default value of this setting is TRUE.


Specify that this QoS is mutable


This affects sections 2.1.3, and 2.2.3

Resolution: see below
Revised Text: Resolution: Add a QoS to the DomainParticipant, Publisher and Subscriber called ENTITY_FACTORY. This QoS contains the Boolean "create_enabled". If this Boolean is set to "TRUE" then the entities created by the factory will be automatically enabled (i.e. the factory will call enable() before returning the entity). Specify that the default value of this setting is TRUE. Specify that this QoS is mutable. Also allow factory operations even if the factory has not been enabled. Specify that the application cannot enable an entity if its factory is not enabled (returns PRECONDITION_NOT_MET). Specify that the action of enabling a factory will also enable all previously created entities that have the 'autoenable' qos set to TRUE. This affects sections 2.1.3 and 2.2.3.
Revised Text: Changes in PIM
· Section 2.1.2.1.1.7 enable
· Replace: This operation enables the Entity. All Entity objects are created disabled and must be enabled before the DCPS Service can use them.
· With: This operation enables the Entity. Entity objects can be created either enabled or disabled. This is controlled by the value of the ENTITY_FACTORY QoS policy (Section 2.1.3.14) on the corresponding factory for the Entity. The default setting of ENTITY_FACTORY is such that, by default, it is not necessary to explicitly call enable on newly created entities. The enable operation is idempotent. Calling enable on an already enabled Entity returns OK and has no effect.
· Replace: Prior to enabling an Entity, the only operations that can be invoked on it are the ones to set or get the QoS policies and the listener and to get the StatusCondition. Other operations will return the error NOT_ENABLED.
· With: If an Entity has not yet been enabled, the only operations that can be invoked on it are the ones to set or get the QoS policies and the listener, the ones that get the StatusCondition, and the 'factory' operations that create other entities. Other operations will return the error NOT_ENABLED. Entities created from a factory that is disabled are created disabled, regardless of the setting of the ENTITY_FACTORY QoS policy. Calling enable on an Entity whose factory is not enabled will fail and return PRECONDITION_NOT_MET. If the ENTITY_FACTORY QoS policy has autoenable_created_entities set to TRUE, the enable operation on the factory will automatically enable all entities created from the factory.
· Section 2.1.3 Supported QoS
· Figure 2-12: add the EntityFactoryQosPolicy with the following field: "autoenable_created_entities".
· QoS table, add QosPolicy (at the bottom):
QosPolicy: ENTITY_FACTORY. Value: a boolean, "autoenable_created_entities". Meaning: controls the behavior of the entity when acting as a factory for other entities; in other words, configures the side-effects of the create_* and delete_* operations. Concerns: DomainParticipant, Publisher, Subscriber. RxO: No. Changeable: Yes.
autoenable_created_entities: specifies whether the entity acting as a factory automatically enables the instances it creates. If autoenable_created_entities==TRUE the factory will automatically enable each created Entity; otherwise it will not. By default, TRUE.
· Insert section 2.1.3.14 (the previous section 2.1.3.14, Relationship between registration, LIVELINESS, and OWNERSHIP, becomes section 2.1.3.15):
2.1.3.14 ENTITY_FACTORY
This policy controls the behavior of the Entity as a factory for other entities. This policy concerns only DomainParticipant (as factory for Publisher, Subscriber, and Topic), Publisher (as factory for DataWriter), and Subscriber (as factory for DataReader). This policy is mutable. A change in the policy affects only the entities created after the change, not the previously created entities. The setting of autoenable_created_entities to TRUE indicates that the factory create_<entity> operation will automatically invoke the enable operation each time a new Entity is created. Therefore, the Entity returned by create_<entity> will already be enabled. A setting of FALSE indicates that the Entity will not be automatically enabled. The application will need to enable it explicitly by means of the enable operation (see Section 2.1.2.1.1.7). The default setting of autoenable_created_entities = TRUE means that, by default, it is not necessary to explicitly call enable on newly created entities.
· Section 2.1.6.1 Publication View
· Figure 2-21: remove the call to "enable" from the sequence charts.
· Section 2.1.6.2 Subscription View
· Figure 2-22: remove the call to "enable" from the sequence charts.
· Section 2.1.6.3 Notifications via Conditions and Wait-Sets
· Figure 2-23: remove the call to "enable" from the sequence charts.
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Add (in the "Qos" section):
const string ENTITYFACTORY_QOS_POLICY_NAME = "EntityFactory";
const QosPolicyId_t ENTITYFACTORY_QOS_POLICY_ID = 15;
struct EntityFactoryQosPolicy { boolean autoenable_created_entities; };
· struct DomainParticipantQos: add (at the end of the structure): EntityFactoryQosPolicy entity_factory;
· struct PublisherQos: add (at the end of the structure): EntityFactoryQosPolicy entity_factory;
· struct SubscriberQos: add (at the end of the structure): EntityFactoryQosPolicy entity_factory;
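A sketch of the disabled-creation pattern this policy enables, assuming the C++ mapping; the get_qos/set_qos signatures follow the assumed IDL mapping and 'subscriber_qos' is a placeholder:

    // Sketch only: create entities disabled, configure them, then enable.
    DDS::DomainParticipantQos pqos;
    participant->get_qos(pqos);                          // signature assumed
    pqos.entity_factory.autoenable_created_entities = false;
    participant->set_qos(pqos);

    DDS::Subscriber* sub =
        participant->create_subscriber(subscriber_qos, NULL);  // disabled
    // ... create and configure DataReaders while 'sub' is still disabled ...
    sub->enable();  // per the resolution, this also enables contained entities
                    // when the factory QoS has autoenable_created_entities TRUE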
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6797: [DDS ISSUE# 14] Helper addition to the IDL (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-33 Specification_of_infinite_duration


The PSM/IDL does not provide the means for an application to specify
an infinite duration (e.g. as a timeout)


***PROPOSAL***


Add the following constant to the IDL:


Duration_t DURATION_INFINITE = { 0x7fffffff, 0xffffffff }


Resolution: see below
Revised Text: Resolution: Add constants to the IDL: DURATION_INFINITY_SEC = 0x7fffffff and DURATION_INFINITY_NSEC = 0x7fffffff. Use 0x7fffffff for both the second and the nanosecond part; that way the arithmetic comparison x <= INFINITY will hold in all language bindings, including those like Java which don't support unsigned quantities. Specify that Timestamps and Durations are normalized: the nanosecond part cannot be equal to or greater than 1000000000. If the user provides a value that is not normalized, the operation should fail and return BAD_PARAMETER.
Revised Text: Changes in PIM
· Section 2.2.2
· At the end, before section 2.2.3, add: "The two types used to represent time, Duration_t and Time_t, have been mapped into structures that contain fields for the second and the nanosecond parts. These types are further constrained to always use a 'normalized' representation for the time, that is, the nanosec field must verify 0 <= nanosec < 1000000000."
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Add (before the section describing the "Return codes"):
const long DURATION_INFINITY_SEC = 0x7fffffff;
const unsigned long DURATION_INFINITY_NSEC = 0x7fffffff;
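To make the normalization rule concrete, a small self-contained sketch (plain C++, independent of any DDS library; the sec/nanosec field names follow the PSM structures):

    #include <cstdint>

    struct Duration_t { int32_t sec; uint32_t nanosec; };

    const int32_t  DURATION_INFINITY_SEC  = 0x7fffffff;
    const uint32_t DURATION_INFINITY_NSEC = 0x7fffffff;
    const uint32_t NSEC_PER_SEC           = 1000000000u;

    // Normalized means 0 <= nanosec < 10^9; a conforming implementation
    // would instead reject non-normalized input with BAD_PARAMETER.
    // (The infinity constants act as a sentinel, not an ordinary duration.)
    Duration_t normalize(Duration_t d)
    {
        d.sec     += static_cast<int32_t>(d.nanosec / NSEC_PER_SEC);
        d.nanosec  = d.nanosec % NSEC_PER_SEC;
        return d;
    }

    // Because both parts use 0x7fffffff, the comparison also works in
    // languages without unsigned types (e.g. Java).
    bool is_infinite(const Duration_t& d)
    {
        return d.sec == DURATION_INFINITY_SEC &&
               d.nanosec == DURATION_INFINITY_NSEC;
    }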
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6798: Ref-118 Introduce_TIME_INVALID_constant (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
The PSM/IDL does not provide the means for an application to specify
an invalid time (e.g. in the SampleInfo)


***PROPOSAL***


Add the following constant to the IDL:


Timestamp_t TIMESTAMP_INVALID = { -1, 0 }

Resolution: see below
Revised Text: Resolution: Add constants to the IDL: TIMESTAMP_INVALID_SEC, TIMESTAMP_INVALID_NSEC.
Revised Text: Changes in PIM
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Add (before the section describing the "Return codes"):
const long TIMESTAMP_INVALID_SEC = -1;
const unsigned long TIMESTAMP_INVALID_NSEC = 0xffffffff;
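A corresponding sketch for recognizing the invalid-time marker, e.g. in a SampleInfo whose sample carries no data (plain C++; Time_t fields as in the PSM):

    #include <cstdint>

    struct Time_t { int32_t sec; uint32_t nanosec; };

    const int32_t  TIMESTAMP_INVALID_SEC  = -1;
    const uint32_t TIMESTAMP_INVALID_NSEC = 0xffffffff;

    // The invalid marker is deliberately outside the normalized range
    // (negative sec, nanosec >= 10^9), so it can never collide with a
    // real, normalized timestamp.
    bool is_valid_time(const Time_t& t)
    {
        return !(t.sec == TIMESTAMP_INVALID_SEC &&
                 t.nanosec == TIMESTAMP_INVALID_NSEC);
    }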
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6799: Ref-102 Addition_of time_related_constants (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Sometimes it is necessary to represent a zero duration as well as an
invalid time.


A zero duration is needed for example to provide the value of a
minimum_separation when a TIME_BASED_FILTER is not desired.


An invalid time is needed when samples are returned that do not
represent actual data.


The representation of a DURATION_ZERO clearly is a Duration_t with values
{0, 0}. However it is desirable that the name of this constant is
standardized across implementations.


***PROPOSAL***


Add the following constant to the IDL:


Duration_t DURATION_ZERO = { 0, 0 }


Resolution: see below
Revised Text: Resolution: Add constants to the IDL: DURATION_ZERO_SEC, DURATION_ZERO_NSEC.
Revised Text: Changes in PIM
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Add (before the section describing the "Return codes"):
const long DURATION_ZERO_SEC = 0;
const unsigned long DURATION_ZERO_NSEC = 0;
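For instance, the zero duration is what an application would plug into the TIME_BASED_FILTER minimum_separation when no filtering is desired (the motivating example above); a sketch assuming a hypothetical C++ binding:

    // Disable time-based filtering on a DataReader QoS by requesting a
    // minimum_separation of zero (field names follow the PSM QoS policies).
    void disable_time_based_filter(DDS::DataReaderQos& qos)
    {
        qos.time_based_filter.minimum_separation.sec     = DDS::DURATION_ZERO_SEC;
        qos.time_based_filter.minimum_separation.nanosec = DDS::DURATION_ZERO_NSEC;
    }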
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6800: [DDS ISSUE# 15] Semantics of register and unregister instance (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-13 Semantics_of_register_unregister_instance


The semantics of the register_instance and unregister_instance methods
are not entirely clear. From section 2.1.2.5.1 it seems that both
methods have, apart from their local memory management purpose on the
Publisher side, also a purpose with respect to determining the
LIVELINESS of a DataWriter on the Subscriber side. This means that
these methods will probably result in network communication as well


***PROPOSAL***


Clarify the semantics by adding the following paragraphs as needed to
sections 2.1.2.5 and to 2.1.3:


The need for registering/unregistering instances stems from two use
cases: (1) ownership resolution on redundant systems, and (2) detection
of loss in topological connectivity; these also give rise to (3) the
consequent difference in the semantics of unregister and dispose.


(1) Ownership resolution on redundant systems


It is expected that users will use DDS to set up redundant systems
where multiple DataWriters are "capable" of writing the same
instance. The data-writers are either configured such that both are
writing the instance "constantly" or else they use some mechanism to
monitor each other and only write when they detect that the primary
"writer" is no longer writing.


Either of the above two cases uses the OWNERSHIP policy "Exclusive"
and arbitrates by means of the OWNERSHIP_STRENGTH. The
desired behavior is that when the "primary" writer stops writing, the
application should start to receive data from the secondary writer.


This approach requires some mechanism to detect that the "writer" is
no longer "writing" the data as it should. There are several reasons
why this may be happening and all must be detected (but not
necessarily distinguished):


The writing process is no longer running (e.g. the whole application
has crashed)


Connectivity to the writing application has been lost (e.g. network
got disconnected)


Application logic that was writing the data is faulty and has stopped
doing so.


Arbitrating from a writer to one of a higher strength is simple and
the decision can be taken autonomously by the reader. Switching
ownership from a higher strength writer to one of a lower strength
requires that the "reader" application can make a determination that
the "writer" application is "no longer writing the instance".


(1.1) Case where the data is periodically updated


This determination is reasonably simple when the data is being written
periodically at some rate. The writer simply states its offered
deadline (maximum interval between updates) and the reader monitors
that the writer indeed updates the instance at least once per
deadline_period. If the deadline is missed, the reader considers the
writer "not alive" and gives ownership to the highest-strength writer
that is alive.


(1.2) Case where data is not periodically updated


The case where the writer is not writing data periodically is also a
very important use-case. Since the data is not being updated at any
fixed period, the "deadline" mechanism cannot be used to determine
ownership. The liveliness neatly solves this situation. Ownership is
maintained while the writer is "alive" and for the writer to be alive
it must be fulfill its "LIVELINESS" contract. The different means to
renew liveliness (automatic, manual) combined by the implied renewal
each time data is written handle the three conditions above (a), (b),
or (c) ( note that to handle (c) Liveliness must be
MANUAL_BY_TOPIC). Now the writer can retain ownership by periodically
writing data or else calling assert_liveliness() if it has no data to
write. Alternatively if only protection against (a) or (b) is desired,
it is sufficient that some task on the writer process periodically
writes data or calls assert_liveliness() on the participant.


However, this scenario requires that the reader knows what instances
are being "written" by the writer. That is the only way that the
reader can maintain ownership of specific instances from the fact that
the writer is still "alive". Hence the need for the writer to
"register" and "unregister" instances. Note that while "registration"
can be done lazily the first time the writer writes the instance,
"unregistration" in general cannot. Unless we are willing to say that
once a writer writes an instance it will forever write the instance
until the writer is deleted. Similar reasoning will lead to the fact
that unregistration will also require a message to be sent to the
readers


(2) Detection of loss in topological connectivity


There are applications that are designed in such a way that their
correct operation requires some minimal topological connectivity, that
is, the writer needs to have a minimum number of readers or,
alternatively, the reader must have a minimum number of writers.


A common scenario is that the application does not start doing its
logic until it knows that some specific writers have the minimum
configured readers (e.g. the alarm monitor is up).


A more common scenario is that the application logic will wait until
some writers appear that can provide some needed source of information
(e.g. the raw sensor data that must be processed).


Furthermore once the application is running it is a requirement that
this minimal connectivity (from the source of the data) is monitored
and the application informed if it is ever lost. For the case where
data is being written periodically, the DEADLINE QoS and the
on_deadline_missed() listener provides the notification. The case
where data is not periodically updated requires the use of the
LIVELINESS in combination with register/unregister instance to detect
whether the "connectivity" has been lost, and the notification is
provided by means of the "NO_WRITERS" lifecycle state.


In terms of the required mechanisms the scenario is very similar to the
case of maintaining ownership. In both cases the reader needs to know
whether a writer is still "managing the current value of an instance"
even though it is not continually writing it and this knowledge
requires the writer to keep its liveliness plus some means to know
which instances the writer is currently "managing" (i.e. the
registered instances).


Note that the need to notify the reader that a particular instance has
no writers does not imply that the mechanism is the use of a
"NO_WRITES" lifecycle. We could just as well use a listener. The
listener mechanism has the problem that there is no way to know which
instances are the ones that have no writers. But this is also a
problem for the "on_deadline_missed" listener so we think it would be
a good idea to refactor these two mechanisms so that they are similar
in nature and both provide the means to determine which instances have
the problem.


(3) Difference between unregister and dispose


Dispose is different than unregister. The "dispose" indicates that the
data-instance no longer exists (e.g. a track that has disappeared, a
simulation entity that has been destroyed, a record entry that has
been deleted, etc.) whereas the "unregister" indicates that the writer
is no longer taking responsibility for updating the value of the
instance.


Deleting a data-writer is equivalent to unregistering all the
instances it was writing, but is not the same as "disposing" all the
instances.


For a topic with EXCLUSIVE ownership if the current owner of an
instance disposes it, the readers of the instance will see the lifecycle
as being "DELETED" and not see the values being written by the weaker
writer (even after the stronger one has disposed the instance). This
is because the owner of the instance is saying that the instance no
longer exists (e.g. the master of the database is saying that a record
has been deleted) and thus the readers should see it as such.


For a topic with EXCLUSIVE ownership if the current owner of an
instance unregisters it then it will relinquish ownership and thus the
readers will see the value updated by another writer (which will then
become the owner). This is because the owner said that it no longer
will be providing values for the instance and thus any other writer
can take ownership and provide those values.

Resolution: see below
Revised Text: Resolution: Clarify the semantics by adding the following paragraphs as needed to sections 2.1.2.5 and 2.1.3. The need for registering/unregistering instances stems from two use cases: (1) ownership resolution on redundant systems, and (2) detection of loss in topological connectivity; these also give rise to (3) the consequent difference in the semantics of unregister and dispose.
1. Ownership resolution on redundant systems
It is expected that users will use DDS to set up redundant systems where multiple DataWriters are "capable" of writing the same instance. The data-writers are either configured such that both are writing the instance "constantly" or else they use some mechanism to monitor each other and only write when they detect that the primary "writer" is no longer writing. Either of the above two cases uses the OWNERSHIP policy "Exclusive" and arbitrates by means of the OWNERSHIP_STRENGTH. The desired behavior is that when the "primary" writer stops writing, the application should start to receive data from the secondary writer.
This approach requires some mechanism to detect that the "writer" is no longer "writing" the data as it should. There are several reasons why this may be happening and all must be detected (but not necessarily distinguished):
· The writing process is no longer running (e.g. the whole application has crashed)
· Connectivity to the writing application has been lost (e.g. network got disconnected)
· Application logic that was writing the data is faulty and has stopped doing so.
Arbitrating from a writer to one of a higher strength is simple and the decision can be taken autonomously by the reader. Switching ownership from a higher strength writer to one of a lower strength requires that the "reader" application can make a determination that the "writer" application is "no longer writing the instance".
(1.1) Case where the data is periodically updated
This determination is reasonably simple when the data is being written periodically at some rate. The writer simply states its offered deadline (maximum interval between updates) and the reader monitors that the writer indeed updates the instance at least once per deadline_period. If the deadline is missed, the reader considers the writer "not alive" and gives ownership to the highest-strength writer that is alive.
(1.2) Case where data is not periodically updated
The case where the writer is not writing data periodically is also a very important use-case. Since the data is not being updated at any fixed period, the "deadline" mechanism cannot be used to determine ownership. The liveliness neatly solves this situation. Ownership is maintained while the writer is "alive", and for the writer to be alive it must fulfill its "LIVELINESS" contract. The different means to renew liveliness (automatic, manual), combined with the implied renewal each time data is written, handle the three conditions above (a), (b), and (c) (note that to handle (c) Liveliness must be MANUAL_BY_TOPIC). Now the writer can retain ownership by periodically writing data or else calling assert_liveliness() if it has no data to write. Alternatively, if only protection against (a) or (b) is desired, it is sufficient that some task on the writer process periodically writes data or calls assert_liveliness() on the participant.
However, this scenario requires that the reader knows what instances are being "written" by the writer. That is the only way that the reader can maintain ownership of specific instances from the fact that the writer is still "alive". Hence the need for the writer to "register" and "unregister" instances. Note that while "registration" can be done lazily the first time the writer writes the instance, "unregistration" in general cannot, unless we are willing to say that once a writer writes an instance it will forever write the instance until the writer is deleted. Similar reasoning leads to the fact that unregistration will also require a message to be sent to the readers.
2. Detection of loss in topological connectivity
There are applications that are designed in such a way that their correct operation requires some minimal topological connectivity, that is, the writer needs to have a minimum number of readers or, alternatively, the reader must have a minimum number of writers. A common scenario is that the application does not start doing its logic until it knows that some specific writers have the minimum configured readers (e.g. the alarm monitor is up). A more common scenario is that the application logic will wait until some writers appear that can provide some needed source of information (e.g. the raw sensor data that must be processed).
Furthermore, once the application is running it is a requirement that this minimal connectivity (from the source of the data) is monitored and the application informed if it is ever lost. For the case where data is being written periodically, the DEADLINE QoS and the on_deadline_missed() listener provide the notification. The case where data is not periodically updated requires the use of the LIVELINESS in combination with register/unregister instance to detect whether the "connectivity" has been lost, and the notification is provided by means of the "NO_WRITERS" lifecycle state.
In terms of the required mechanisms the scenario is very similar to the case of maintaining ownership. In both cases the reader needs to know whether a writer is still "managing the current value of an instance" even though it is not continually writing it, and this knowledge requires the writer to keep its liveliness plus some means to know which instances the writer is currently "managing" (i.e. the registered instances).
Note that the need to notify the reader that a particular instance has no writers does not imply that the mechanism is the use of a "NO_WRITERS" lifecycle. We could just as well use a listener. The listener mechanism has the problem that there is no way to know which instances are the ones that have no writers. But this is also a problem for the "on_deadline_missed" listener, so we think it would be a good idea to refactor these two mechanisms so that they are similar in nature and both provide the means to determine which instances have the problem.
3. Difference between unregister and dispose
Dispose is different than unregister. The "dispose" indicates that the data-instance no longer exists (e.g. a track that has disappeared, a simulation entity that has been destroyed, a record entry that has been deleted, etc.) whereas the "unregister" indicates that the writer is no longer taking responsibility for updating the value of the instance. Deleting a data-writer is equivalent to unregistering all the instances it was writing, but is not the same as "disposing" all the instances.
For a topic with EXCLUSIVE ownership, if the current owner of an instance disposes it, the readers of the instance will see the lifecycle as being "DELETED" and not see the values being written by the weaker writer (even after the stronger one has disposed the instance). This is because the owner of the instance is saying that the instance no longer exists (e.g. the master of the database is saying that a record has been deleted) and thus the readers should see it as such.
For a topic with EXCLUSIVE ownership, if the current owner of an instance unregisters it, then it will relinquish ownership and thus the readers will see the value updated by another writer (which will then become the owner). This is because the owner said that it no longer will be providing values for the instance and thus any other writer can take ownership and provide those values.
Revised Text: Changes in PIM
· Add a new section 2.1.3.14 as follows:
2.1.3.14 Relationship between registration, LIVELINESS, and OWNERSHIP
The need for registering/unregistering instances stems from two use cases:
· Ownership resolution on redundant systems
· Detection of loss in topological connectivity.
These two use cases also illustrate the semantic differences between the unregister and dispose operations on a DataWriter.
· Add sub-section (to the above) 2.1.3.14.1:
2.1.3.14.1 Ownership resolution on redundant systems
It is expected that users may use DDS to set up redundant systems where multiple DataWriter entities are "capable" of writing the same instance. In this situation the DataWriter entities are configured such that:
· Either both are writing the instance "constantly"
· Or else they use some mechanism to classify each other as "primary" and "secondary", such that the primary is the only one writing, and the secondary monitors the primary and only writes when it detects that the primary "writer" is no longer writing.
Both cases above use the OWNERSHIP policy kind EXCLUSIVE and arbitrate themselves by means of the OWNERSHIP_STRENGTH. Regardless of the scheme, the desired behavior from the DataReader point of view is that the reader normally receives data from the primary unless the "primary" writer stops writing, in which case the reader starts to receive data from the secondary DataWriter.
This approach requires some mechanism to detect that a DataWriter (the primary) is no longer "writing" the data as it should. There are several reasons why this may be happening and all must be detected, but not necessarily distinguished:
· [crash] The writing process is no longer running (e.g. the whole application has crashed)
· [connectivity loss] Connectivity to the writing application has been lost (e.g. network got disconnected)
· [application fault] The application logic that was writing the data is faulty and has stopped calling the "write" operation on the DataWriter.
Arbitrating from a DataWriter to one of a higher strength is simple and the decision can be taken autonomously by the DataReader. Switching ownership from a higher strength DataWriter to one of a lower strength DataWriter requires that the DataReader can make a determination that the stronger DataWriter is "no longer writing the instance".
· Add sub-section (to the above) 2.1.3.14.1.1:
2.1.3.14.1.1 Case where the data is periodically updated
This determination is reasonably simple when the data is being written periodically at some rate. The DataWriter simply states its offered DEADLINE (maximum interval between updates) and the DataReader automatically monitors that the DataWriter indeed updates the instance at least once per deadline_period. If the deadline is missed, the DataReader considers the DataWriter "not alive" and automatically gives ownership to the next highest-strength DataWriter that is alive.
· Add sub-section (to the above) 2.1.3.14.1.2:
2.1.3.14.1.2 Case where data is not periodically updated
The case where the DataWriter is not writing data periodically is also a very important use-case. Since the instance is not being updated at any fixed period, the "deadline" mechanism cannot be used to determine ownership. The liveliness solves this situation. Ownership is maintained while the DataWriter is "alive", and for the DataWriter to be alive it must fulfill its "LIVELINESS" QoS contract. The different means to renew liveliness (automatic, manual), combined with the implied renewal each time data is written, handle the three conditions above [crash], [connectivity loss], and [application fault]. Note that to handle [application fault] LIVELINESS must be MANUAL_BY_TOPIC. The DataWriter can retain ownership by periodically writing data or else calling assert_liveliness if it has no data to write. Alternatively, if only protection against [crash] or [connectivity loss] is desired, it is sufficient that some task on the writer process periodically writes data or calls assert_liveliness on the DomainParticipant.
However, this scenario requires that the DataReader knows what instances are being "written" by the DataWriter. That is the only way that the DataReader can deduce the ownership of specific instances from the fact that the DataWriter is still "alive". Hence the need for the writer to "register" and "unregister" instances. Note that while "registration" can be done lazily the first time the DataWriter writes the instance, "unregistration" in general cannot. Similar reasoning leads to the fact that unregistration will also require a message to be sent to the readers.
· Add sub-section 2.1.3.14.2:
2.1.3.14.2 Detection of loss in topological connectivity
There are applications that are designed in such a way that their correct operation requires some minimal topological connectivity, that is, the writer needs to have a minimum number of readers or, alternatively, the reader must have a minimum number of writers. A common scenario is that the application does not start doing its logic until it knows that some specific writers have the minimum configured readers (e.g. the alarm monitor is up). A more common scenario is that the application logic will wait until some writers appear that can provide some needed source of information (e.g. the raw sensor data that must be processed).
Furthermore, once the application is running it is a requirement that this minimal connectivity (from the source of the data) is monitored and the application informed if it is ever lost. For the case where data is being written periodically, the DEADLINE QoS and the on_deadline_missed() listener provide the notification. The case where data is not periodically updated requires the use of the LIVELINESS in combination with register/unregister instance to detect whether the "connectivity" has been lost, and the notification is provided by means of the "NO_WRITERS" lifecycle state.
In terms of the required mechanisms the scenario is very similar to the case of maintaining ownership. In both cases the reader needs to know whether a writer is still "managing the current value of an instance" even though it is not continually writing it, and this knowledge requires the writer to keep its liveliness plus some means to know which instances the writer is currently "managing" (i.e. the registered instances).
· Add sub-section 2.1.3.14.3:
2.1.3.14.3 Semantic difference between unregister and dispose
The DataWriter operation dispose is semantically different from unregister. The dispose operation indicates that the data-instance no longer exists (e.g. a track that has disappeared, a simulation entity that has been destroyed, a record entry that has been deleted, etc.) whereas the unregister operation indicates that the writer is no longer taking responsibility for updating the value of the instance. Deleting a DataWriter is equivalent to unregistering all the instances it was writing, but is not the same as "disposing" all the instances.
For a Topic with EXCLUSIVE OWNERSHIP, if the current owner of an instance disposes it, the readers accessing the instance will see the lifecycle as being "DELETED" and not see the values being written by the weaker writer (even after the stronger one has disposed the instance). This is because the DataWriter that owns the instance is saying that the instance no longer exists (e.g. the master of the database is saying that a record has been deleted) and thus the readers should see it as such.
For a Topic with EXCLUSIVE OWNERSHIP, if the current owner of an instance unregisters it, then it will relinquish ownership of the instance and thus the readers may see the value updated by another writer (which will then become the owner). This is because the owner said that it no longer will be providing values for the instance and thus another writer can take ownership and provide those values.
· In section 2.1.2.4.2.6 replace: "This operation can affect the ownership of the data instance (as described in Section 2.1.3.6)." with: "This operation can affect the ownership of the data instance (as described in Section 2.1.3.6 and Section 2.1.3.14.1)."
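The contrast between the two operations can be summarized in a short writer-side sketch, assuming a hypothetical C++ binding with a generated FooDataWriter for a keyed type Foo:

    // Relinquishing ownership vs. declaring the instance gone.
    void handoff(FooDataWriter_ptr writer, const Foo& sample)
    {
        DDS::InstanceHandle_t handle = writer->register_instance(sample);
        writer->write(sample, handle);

        // unregister: "I will no longer update this instance"; with
        // EXCLUSIVE ownership, a weaker writer may now become the owner.
        writer->unregister_instance(sample, handle);
    }

    void end_of_life(FooDataWriter_ptr writer, const Foo& sample,
                     DDS::InstanceHandle_t handle)
    {
        // dispose: "this instance no longer exists"; readers see the
        // disposed lifecycle and do not fall back to weaker writers.
        writer->dispose(sample, handle);
    }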
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6801: [DDS ISSUE# 16] Clarification of expression syntax (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-14 Relationship_of_IDL_and_expression_syntax


The fieldnames and topicnames used in SQL expressions need to be
language-independent. This means that they have to be derived from the
IDL-definition. The SQL description in Annex A/B does not define the
relationship between the FIELDNAME and the IDL definition.


Example, assume the following IDL definition:


module mymodule { struct mystruct { long record; }; };


The SQL statement could become "SELECT record AS value FROM
mymodule.mystruct ...".


The field "record" would be translated into "IDL_record" by an ADA
preprocessor. The SQL-statement should still use "record" to be
language independent. This is a generic situation that occurs for each
language when the name of the field matches a reserved keyword.


***PROPOSAL***


Improve specification "Annex A/B" stating that the syntax of the SQL
statement will match the names used in IDL not necessarily those of
the language mapping.

Resolution: see below
Revised Text: Resolution: Improve specification "Annex B/C" stating that the syntax of the SQL statement will match the names used in IDL, not necessarily those of the language mapping.
Revised Text: Changes in PIM
· Appendix B and C
· Replace: "FIELDNAME - A fieldname is a reference to a field in the data-structure. The dot '.' is used to navigate through nested structures. The number of dots that may be used in a FIELD-NAME is unlimited. The FIELDNAME can refer to fields at any depth in the data structure."
with: "FIELDNAME - A fieldname is a reference to a field in the data-structure. The dot '.' is used to navigate through nested structures. The number of dots that may be used in a FIELD-NAME is unlimited. The FIELDNAME can refer to fields at any depth in the data structure. The names of the field are those specified in the IDL definition of the corresponding structure, which may or may not match the field-names that appear on the language-specific (e.g. C/C++, Java) mapping of the structure."
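Applied to the example above, a sketch (hypothetical C++ binding) of a filter expression that keeps the IDL spelling of the field even where a language mapping would escape it:

    // IDL: module mymodule { struct mystruct { long record; }; };
    // An Ada mapping may rename the field to 'IDL_record', but the
    // expression below still uses the IDL name 'record'.
    void make_filtered_topic(DDS::DomainParticipant_ptr participant,
                             DDS::Topic_ptr mystruct_topic)
    {
        DDS::StringSeq no_parameters;
        DDS::ContentFilteredTopic_ptr cft =
            participant->create_contentfilteredtopic(
                "FilteredMyStruct",   // name of the derived topic
                mystruct_topic,       // related Topic for mymodule::mystruct
                "record > 10",        // FIELDNAME as written in IDL
                no_parameters);
    }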
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6802: [DDS ISSUE# 17] Clarify consequence of changing partitions (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-17 PartitionQos_behavior_on_change


The specification does not make it clear whether changing the
PARTITION QoS on a Publisher or Subscriber affects the existing
DataReader or DataWriter establishing some "associations" that did not
exist before, or breaking associations that existed.


Furthermore the specification does not describe whether the
INCOMPATIBLE_QOS listeners/ status will be triggered


Given that partitions are intended to be used to separate logically
separate 'communication worlds' so that you can isolate things, it would
seem that changing partitions should affect existing DataReader and
writers and not trigger the INCOMPATIBLE_QOS.


***PROPOSAL***


Modify section 2.1.3.9 to clarify that (1) changing the PARTITION QoS
on a Publisher or Subscriber does affect the existing DataReader or
DataWriter entities. It may establish new "associations" that did not
exist before, or break existing associations.


Also explain in section 2.1.3.9 that not matching the PARTITION QoS
does not trigger the INCOMPATIBLE_QOS listener nor change the
associated status.

Resolution: see below
Revised Text: Resolution: Modify section 2.1.3.9 to clarify that (1) changing the PARTITION QoS on a Publisher or Subscriber does affect the existing DataReader or DataWriter entities. It may establish new "associations" that did not exist before, or break existing associations. Also explain in section 2.1.3.9 that not matching the PARTITION QoS does not trigger the INCOMPATIBLE_QOS listener nor change the associated status. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.3.9
· After the paragraph "Failure to match partitions is not considered an "incompatible" QoS and does not trigger any listeners nor conditions", add: "This policy is changeable. A change of this policy can potentially modify the "association" of existing DataReader and DataWriter entities. It may establish new "associations" that did not exist before, or break existing associations."
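A sketch of such a change at run time, assuming a hypothetical C++ binding (sequence handling simplified):

    // Move a live Publisher to another partition; its existing DataWriters
    // are re-associated immediately, and readers left behind simply stop
    // matching -- no INCOMPATIBLE_QOS status is triggered.
    void switch_partition(DDS::Publisher_ptr pub)
    {
        DDS::PublisherQos qos;
        pub->get_qos(qos);
        qos.partition.name.length(1);
        qos.partition.name[0] = "SimulationB";
        pub->set_qos(qos);
    }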
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6803: Behavior on creation failure (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-21 Error_status_on_create_calls


The specification does not explain the return value of the factory
"create" calls if the operation fails, for example due to inconsistent
QoS.


***PROPOSAL***


Add text that explains that in case of failure the return value is NIL
(as defined on each PSM).


This affects 2.1.2.2.2.1, 2.1.2.2.1.1, 2.1.2.2.1.5, 2.1.2.2.1.7,
2.1.2.2.1.9, 2.1.2.4.1.5, 2.1.2.5.2.5


Resolution: see below
Revised Text: Resolution: Add text that explains that in case of failure the return value is NIL (as defined in the PSM). This change only concerns the PIM (text).
Revised Text: Changes in PIM
· At the end of sections 2.1.2.2.1.1, 2.1.2.2.1.3, 2.1.2.2.1.7, 2.1.2.2.1.9, 2.1.2.2.2.1, 2.1.2.4.1.5, 2.1.2.5.2.5, 2.1.2.5.3.5, 2.1.2.5.3.6, add the following paragraph: "In case of failure, the operation will return a 'nil' value (as specified by the platform)."
· Section 2.1.2.2.1.5 create_topic
· Replace: "The application is not allowed to create two Topic objects with the same name attached to the same DomainParticipant. If the application attempts this, create_topic will fail and return PRECONDITION_NOT_MET."
with: "The application is not allowed to create two Topic objects with the same name attached to the same DomainParticipant. If the application attempts this, create_topic will fail."
· Then, add the following paragraph: "In case of failure, the operation will return a 'nil' value (as specified by the platform)."
· Section 2.1.2.5.9 QueryCondition Class
· Replace: "This feature is optional (in the cases where it is not supported, the DataReader::create_querycondition should return an error)."
with: "This feature is optional. In the cases where it is not supported, the DataReader::create_querycondition will return a 'nil' value (as specified by the platform)."
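A sketch of the caller-side check, assuming a hypothetical C++ binding where the 'nil' value is a null reference:

    void create_checked(DDS::DomainParticipant_ptr participant,
                        const DDS::TopicQos& topic_qos)
    {
        DDS::Topic_ptr topic = participant->create_topic(
            "TrackData", "TrackType", topic_qos, NULL /* no listener */);
        if (topic == NULL) {
            // Creation failed: e.g. inconsistent QoS, or a Topic with the
            // same name already exists on this DomainParticipant.
        }
    }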
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6804: [DDS ISSUE# 19] Initial value of entity status changes (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-26 Initial_value_of_entity_status


The specification does not define the initial value of the status of
an entity after initializations/enabling


***PROPOSAL***


Specify in section 2.1.2.1.1.6 that when the entity is created the
return of get_status_changes is an empty mask indicating that no
statuses have changed.

Resolution: see below
Revised Text: Resolution: Specify in section 2.1.2.1.1.6 that when the entity is created, the return of get_status_changes is an empty mask indicating that no statuses have changed. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.2.1.1.6
· Add the following paragraph at the end of the section: "When the entity is first created or if the entity is not enabled, all communication statuses are in the "untriggered" state so the list returned by the get_status_changes operation will be empty."
Disposition: Resolved
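A sketch of what the added paragraph implies for application code (hypothetical C++ binding):

    // Immediately after creation, no communication status has triggered yet.
    void check_fresh_entity(DDS::DataWriter_ptr writer)
    {
        DDS::StatusKindMask changes = writer->get_status_changes();
        // 'changes' is the empty mask (no bits set) for a newly created
        // or not-yet-enabled entity.
    }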
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6805: [DDS ISSUE# 20] Narrow the applicability of assert liveliness (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-29 Entity_operation_assert_livelines


Class Entity has a function assert_liveliness, which is inherited by
all derived classes. However, this function is relevant in combination
with the LIVELINESS QoS policy only, and has only defined behavior on
DomainParticipant and DataWriter


***PROPOSAL*** Remove assert_liveliness from Entity and introduce it
in DataWriter and Participant

Resolution: see below
Revised Text: Resolution: Remove assert_liveliness from Entity and introduce it in DataWriter and DomainParticipant. This change concerns the PIM (UML diagram and text) and the IDL.
Revised Text: Changes in PIM
· Section 2.1.2.1.1 Entity Class
· In the Entity table, remove the row containing "assert_liveliness"
· Remove section 2.1.2.1.1.8 assert_liveliness
· Section 2.1.2.2.1 DomainParticipant Class
· In the DomainParticipant table, add a row with the following operation: assert_liveliness : void
· Add section 2.1.2.2.1.18:
2.1.2.2.1.18 assert_liveliness
This operation manually asserts the liveliness of the DomainParticipant. This is used in combination with the LIVELINESS QoS policy (cf. Section 2.1.3, "Supported QoS," on page 2-65) to indicate to the Service that the entity remains active. This operation needs to be used only if the DomainParticipant contains DataWriter entities with the LIVELINESS set to MANUAL_BY_PARTICIPANT and it only affects the liveliness of those DataWriter entities. Otherwise, it has no effect. Writing data via the write operation on a DataWriter asserts liveliness on the DataWriter itself and its DomainParticipant. Consequently the use of assert_liveliness is only needed if the application is not writing data regularly. Complete details are provided in Section 2.1.3.7.
· Section 2.1.2.4.2 DataWriter
· In the DataWriter table, add a row with the following operation: assert_liveliness : void
· Add section 2.1.2.4.2.17:
2.1.2.4.2.17 assert_liveliness
This operation manually asserts the liveliness of the DataWriter. This is used in combination with the LIVELINESS QoS policy (cf. Section 2.1.3, "Supported QoS," on page 2-65) to indicate to the Service that the entity remains active. This operation needs to be used only if the LIVELINESS setting is either MANUAL_BY_PARTICIPANT or MANUAL_BY_TOPIC. Otherwise, it has no effect. Writing data via the write operation on a DataWriter asserts liveliness on the DataWriter itself and its DomainParticipant. Consequently the use of assert_liveliness is only needed if the application is not writing data regularly.
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Interface Entity: remove operation "assert_liveliness"
· Interface DomainParticipant: add operation: void assert_liveliness();
· Interface DataWriter: add operation: void assert_liveliness();
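A sketch of the intended use on the DataWriter, assuming a hypothetical C++ binding:

    // For a writer with LIVELINESS kind MANUAL_BY_TOPIC that currently has
    // no data to send: assert liveliness periodically (at least once per
    // lease_duration) to keep ownership of its registered instances.
    void heartbeat(DDS::DataWriter_ptr writer)
    {
        writer->assert_liveliness();  // no effect for AUTOMATIC liveliness
    }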
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6806: [DDS ISSUE# 21] Helper operations (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-32 Quering_waitset_for_attached_conditions


The WaitSet class has methods for attaching and detaching conditions
to it, but no method for querying which conditions are attached.


***PROPOSAL***


In section 2.1.2.1.6 add a get_conditions method that returns a
ReturnCode_t and has a collection of conditions as an out
parameter. In section 2.2.3 (IDL) add the corresponding operation.


Resolution: see below
Revised Text: Resolution: Add on WaitSet a get_conditions method that returns a ReturnCode_t and has a collection of conditions as an out parameter. This change concerns the PIM (UML diagram and text) and the IDL.
Revised Text: Changes in PIM
· Section 2.1.2.1.6 WaitSet Class
· In the WaitSet table, add a row with the following operation: get_conditions : ReturnCode_t (out: attached_conditions : Condition [])
· Add section 2.1.2.1.6.4, with the following content:
2.1.2.1.6.4 get_conditions
This operation retrieves the list of attached conditions.
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Interface WaitSet
· Add operation: ReturnCode_t get_conditions(out ConditionSeq attached_conditions);
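A sketch of the new operation, assuming a hypothetical C++ binding:

    // Enumerate the conditions currently attached to a WaitSet.
    void list_attached(DDS::WaitSet& waitset)
    {
        DDS::ConditionSeq attached;
        DDS::ReturnCode_t rc = waitset.get_conditions(attached);
        if (rc == DDS::RETCODE_OK) {
            // 'attached' now holds every Condition attached to 'waitset'
        }
    }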
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6807: Ref-134 Additional_w_timestamp_operations (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
It seems that the samples that indicate the 'disposing' of an instance
need to also be sorted with respect to other samples and the sorting
should follow DESTINATION_ORDER policy


However if DESTINATION_ORDER is BY_SOURCE_TIMESTAMP and the
application is explicitly passing the timestamp to write_w_timestamp
there is no way to also specify the time when an instance is disposed


The same applies to unregister. It may be necessary to order it with
respect to other events


***PROPOSAL***


Add a dispose_w_timestamp operation to DataWriter.


Add an unregister_w_timestamp operation to DataWriter

Resolution: see below
Revised Text: Resolution: As dispose_w_timestamp already exists, there is just the need to add register_instance_w_timestamp and unregister_instance_w_timestamp operations to DataWriter. This change concerns the PIM (UML diagram and text) and the IDL.
Revised Text: Changes in PIM
· Section 2.1.2.4.2 DataWriter
· In the DataWriter table:
· Add operation: register_instance_w_timestamp : InstanceHandle_t (instance : Data, timestamp : Time_t)
· Add operation: unregister_instance_w_timestamp : ReturnCode_t (instance : Data, handle : InstanceHandle_t, timestamp : Time_t)
· In the FooDataWriter table:
· Add operation: register_instance_w_timestamp : InstanceHandle_t (instance : Foo, timestamp : Time_t)
· Add operation: unregister_instance_w_timestamp : ReturnCode_t (instance : Foo, handle : InstanceHandle_t, timestamp : Time_t)
· Add section 2.1.2.4.2.6, with the following content:
2.1.2.4.2.6 register_instance_w_timestamp
This operation performs the same function as register_instance and can be used instead of register_instance in the cases where the application desires to specify the value for the source_timestamp. The source_timestamp potentially affects the relative order in which readers observe events from multiple writers. For details see Section 2.1.3.11 (the QoS policy DESTINATION_ORDER).
· Add section 2.1.2.4.2.8, with the following content:
2.1.2.4.2.8 unregister_instance_w_timestamp
This operation performs the same function as unregister_instance and can be used instead of unregister_instance in the cases where the application desires to specify the value for the source_timestamp. The source_timestamp potentially affects the relative order in which readers observe events from multiple writers. For details see Section 2.1.3.11 (the QoS policy DESTINATION_ORDER).
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Interface DataWriter
· Add operation (commented): // InstanceHandle_t register_instance_w_timestamp(in Data instance_data, in Time_t source_timestamp);
· Add operation (commented): // ReturnCode_t unregister_instance_w_timestamp(in Data instance_data, in InstanceHandle_t handle, in Time_t source_timestamp);
Changes in implied IDL
· Interface FooDataWriter
· Add operation: DCPS::InstanceHandle_t register_instance_w_timestamp(in Foo instance_data, in DCPS::Time_t source_timestamp);
· Add operation: DCPS::ReturnCode_t unregister_instance_w_timestamp(in Foo instance_data, in DCPS::InstanceHandle_t handle, in DCPS::Time_t source_timestamp);
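A sketch of the added operations alongside write_w_timestamp, assuming a hypothetical C++ binding with a generated FooDataWriter (the timestamp values are illustrative):

    // Under DESTINATION_ORDER BY_SOURCE_TIMESTAMP, the application controls
    // how registration, writes, and unregistration are ordered at readers.
    void timed_lifecycle(FooDataWriter_ptr writer, const Foo& sample)
    {
        DDS::Time_t t0 = { 100, 0 };
        DDS::InstanceHandle_t handle =
            writer->register_instance_w_timestamp(sample, t0);

        DDS::Time_t t1 = { 101, 0 };
        writer->write_w_timestamp(sample, handle, t1);

        DDS::Time_t t2 = { 102, 0 };
        writer->unregister_instance_w_timestamp(sample, handle, t2);
    }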
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6808: [DDS ISSUE# 22] Details in the code generation (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-47 Details_on_type_specific_code_generation


Section 2.1.2.3.7 says "It is required that each implementation of the
Service provides an automatic means to generate this type-specific
class from a description of the type (using IDL for example in the
CORBA mapping)."


The details of this generation for the CORBA mapping should be
mentioned in the CORBA PSM (Section 2.2).


***PROPOSAL***


Add to section 2.1.2.3.7 the clarification that IDL should be used for
the language and perhaps also a figure similar to Figure 3-3 in
section 3.1.4.5 (in the DLRL) showing the process with potential
vendor-specific file for keys or QoS


Resolution: see below
Revised Text: Resolution: Add to section 2.1.2.3.7 the clarification that IDL should be used for the language, and also a figure similar to Figure 3-3 in section 3.1.4.5 (in the DLRL) showing the process with a potential vendor-specific file for keys or QoS. Note: Section 2.1.2.3.6 already mentions IDL as the type-definition language for the CORBA PSM.
Revised Text: Changes in PIM
· Modify figure 2-8 to also include a figure similar to Figure 3-3 in section 3.1.4.5 (in the DLRL part) with:
· inputs: data-type-description, and data-type-tags
· engine: DCPS Generator
· outputs: type-specific-plugin, type-specific-writer and type-specific-reader
[[NOTE: Figure 2-8 is modified by other issues as well; see resolution of 6858 for the final figure]]
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6809: [ISSUE# 23] Make Listener inheritance explicit in figures 2-9 and 2-10 (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-51 Forward_mention_PublisherListener_inheritance and Ref-61
Forward_reference_SubscriberListener_baseclass


In 2.1.2.4 the PublisherListener interface extends the
DataWriterListener; this is not mentioned until 2.1.4.3, after
PublisherListener has been described.


The same applies to 2.1.2.5 with SubscriberListener and
DataReaderListener.


***PROPOSAL***


Modify figures 2-9 and 2-10 to show these relationships, or at least
mention them in sections 2.1.2.4.3 and 2.1.2.5.6.


Resolution: see below
Revised Text: Resolution: Revise the UML diagrams (figures 2-9 and 2-10) to show that inheritance relationship. This change concerns the PIM (UML diagram).
Revised Text: Changes in PIM
· Modify figure 2-9, adding an arrow that indicates that PublisherListener extends DataWriterListener (see resolution of issue 6738 for final figure)
· Modify figure 2-10, adding an arrow that indicates that SubscriberListener extends DataReaderListener (see resolution of issue 6738 for final figure)
[[NOTE: Figures 2-9 and 2-10 are modified by other issues as well; see resolution of 6838 for the final figures]]
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6810: [DDS ISSUE# 24] Clarification of status flag (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-64 Association_of_status_and_statuschangedflag


Why does every status have an associated object "StatusChangedFlag"
instead of a boolean attribute "changed"?


Figure 2-14 (2.1.4.2) is at least misleading and should be clarified
regarding this point. It suggests a 1 to n relation with a
"StatusChangedFlag" class.


Even though the diagram was only meant as a conceptual explanation, not
as an implementation architecture, it would be desirable to
improve the specification in this respect.


***PROPOSAL***


Add text to 2.1.4.2 to explain that this figure is conceptual and
simply represents that the Entity knows which specific statuses have
changed, it does not imply a particular implementation in terms of
Boolean flags.

Resolution: see below
Revised Text: Resolution: Add text to 2.1.4.2 to explain that this figure is conceptual and simply represents that the Entity knows which specific statuses have changed; it does not imply a particular implementation in terms of Boolean flags. This change concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.4.2
· Add the following paragraph after figure 2-14: "Note that Figure 2-14 is only conceptual; it simply represents that the Entity knows which specific statuses have changed. It does not imply any particular implementation of the StatusChangedFlag in terms of boolean values."
Disposition: Resolved
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6811: [DDS ISSUE# 25] Addition of read and take to ReadCondition (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-60 Adding_read_take_to_ReadCondition


Why not have a read and take operation on ReadCondition instead of
read_w_condition and take_w_condition?


In the current situation the middleware must perform consistency
checking and handle error conditions.


Moving the read and take to the ReadCondition will require generation
of typed interfaces but this is merely a wrapper around the current
situation avoiding the error prone design.


For example, if a WaitSet has many query conditions, then the
application would need to find the reader that matches it. Adding a
get_reader in the querycondition would not give a strongly-typed
reader.


The simplest solution is to have a read in the querycondition.


***PROPOSAL***


create the ReadCondition and QueryReadCondition as implied IDL


We will move create_readcondition to the FooDataReader


Add a read to the Read and QueryConditions


Remove the read_w_condition from the DataReader and FooDataReader

Resolution: see below
Revised Text: Resolution: The FTF recognizes that this feature would avoid the need for the implementation to do some consistency checking. However, providing this facility as proposed in the summary would require many changes to the specification, involving:
· changing the API of DataReader, ReadCondition, QueryCondition;
· adding two more implied IDL classes, FooReadCondition and FooQueryCondition;
· moving a number of operations (create_condition, create_query_condition, delete_read_condition);
· making FooQueryCondition extend multiple interfaces (QueryCondition and FooReadCondition), or perhaps only QueryCondition but not FooReadCondition.
Given this, and the fact that the benefit is not very big, the FTF has resolved to not introduce the specialized "Foo" conditions. Instead the specification will state that the DataReader must check that the condition passed to the DataReader "*_w_condition" operations is indeed attached to the DataReader, and if not return PRECONDITION_NOT_MET.
Revised Text: Changes in PIM
· Section 2.1.2.5.3.10 read_w_condition
· After the 1st paragraph add the paragraph: "The specified ReadCondition must be attached to the DataReader; otherwise the operation will fail and return PRECONDITION_NOT_MET."
· Section 2.1.2.5.3.11 take_w_condition
· After the 1st paragraph add the paragraph: "The specified ReadCondition must be attached to the DataReader; otherwise the operation will fail and return PRECONDITION_NOT_MET."
· Section 2.1.2.3.6.1 register_type
· 2nd paragraph: replace PRECONDITION_ERROR with PRECONDITION_NOT_MET
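A sketch of the rule from the reader's side, assuming a hypothetical C++ binding with a generated FooDataReader:

    // The condition passed to take_w_condition must have been created by
    // (and so be attached to) this same DataReader.
    void take_unread(FooDataReader_ptr reader)
    {
        DDS::ReadCondition_ptr cond = reader->create_readcondition(
            DDS::NOT_READ_SAMPLE_STATE,
            DDS::ANY_VIEW_STATE,
            DDS::ANY_INSTANCE_STATE);

        FooSeq samples;
        DDS::SampleInfoSeq infos;
        DDS::ReturnCode_t rc = reader->take_w_condition(
            samples, infos, DDS::LENGTH_UNLIMITED, cond);
        // Passing a condition created by a different reader would instead
        // return PRECONDITION_NOT_MET.
    }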
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6812: [DDS ISSUE# 26] Definition of DCPSKey (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-68 Definition_of_DCPSKey


Section 2.1.5. This section introduces a type DCPSKey, this type is
not specified anywhere.


This key however is only used for the built-in types, so to avoid
confusion it would be better if it were named BuiltinTopicKey_t.


***PROPOSAL***


Rename DCPSKey to be BuiltinTopicKey_t


Define it in the IDL PSM the same way InstanceHandle_t is defined so
that each vendor can define it as needed.

Resolution: see below
Revised Text: Resolution: Rename DCPSKey to be BuiltinTopicKey_t. Define it in the IDL the same way InstanceHandle_t is defined so that each vendor can define it as needed. This change concerns the PIM (UML diagram and text) and the IDL.
Revised Text: Changes in PIM
· Section 2.1.5, on the tables describing built-in topics
· Replace: "DCPSKey" with "BuiltinTopicKey_t"
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Add:
#define BUILTIN_TOPIC_KEY_TYPE_NATIVE long
typedef BUILTIN_TOPIC_KEY_TYPE_NATIVE BuiltinTopicKey_t[3];
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6813: [DDS ISSUE# 27] Additional situations resulting in inconsistent QoS (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-73 Depth_must_be_less_or_equal_ max_samples_per_instance


Clarify that depth <= max_samples_per_instance; otherwise it's an
inconsistent QoS.


***PROPOSAL***


State this requirement in 2.1.3.12 and 2.1.3.13


Resolution: see below
Revised Text: Resolution: State this requirement in 2.1.3.12 and 2.1.3.13. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.3.12
· At the end, add a paragraph with the following content: "The setting of HISTORY depth must be compatible with the RESOURCE_LIMITS max_samples_per_instance. For these two QoS to be compatible, they must verify that depth <= max_samples_per_instance."
· Section 2.1.3.13
· At the end, add a paragraph with the following content: "The setting of RESOURCE_LIMITS max_samples_per_instance must be compatible with the HISTORY depth. For these two QoS to be compatible, they must verify that depth <= max_samples_per_instance."
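A sketch of a consistent setting, assuming a hypothetical C++ binding (field names follow the PSM QoS structures):

    // Keep the last 5 samples per instance; the resource limit must then
    // allow at least 5 samples per instance, or the QoS is inconsistent.
    void configure_history(DDS::DataReaderQos& qos)
    {
        qos.history.kind  = DDS::KEEP_LAST_HISTORY_QOS;
        qos.history.depth = 5;
        qos.resource_limits.max_samples_per_instance = 5;  // >= depth
    }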
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6814: [DDS ISSUE# 28] Desirability to define "information model" in a file (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-77 Offering_the_possibility_to_define_topics_in_a_file


It would be desirable to have an "information model" where the QoS of
the topics is specified in some file.


This information model would also contain the QoS for the Topic.


***PROPOSAL***


No action as this seems beyond what the FTF could solve.

Resolution: No action as this seems beyond what the FTF could solve
Revised Text:
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6815: [DDS ISSUE# 29] Disposing a multi-topic (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-83 Multitopic_refactor


The specification does not say when a MultiTopic is disposed


***PROPOSAL***


State in section 2.1.2.3.4 that a MultiTopic instance will be disposed
as soon as one of the underlying Topic instances that compose it is
disposed.


Resolution: see below
Revised Text: Resolution: State in section 2.1.2.3.4 that a data-instance belonging to a MultiTopic will be disposed as soon as one of the corresponding data-instances that belong to one of the Topic objects that compose the MultiTopic is disposed. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.2.3.4 MultiTopic Class
· After the bullet "DataReader entities associated with a MultiTopic are alerted…", add a bullet with the following content: "DataReader entities associated with a MultiTopic access instances that are "reconstructed" at the DataReader side from the instances written by multiple DataWriter entities. The lifecycle (cf. Section 2.1.2.5.1) of the MultiTopic instance tracks the combined lifecycles of each of the constituting instances, such that the MultiTopic instance will be "NEW" once all the constituting Topic instances are received; it will be "MODIFIED" each time any of the constituting instances is modified; it will be "DISPOSED" as soon as any one of the constituting Topic instances is disposed; and it will be considered as having "NO_WRITERS" as soon as one of the constituting instances is detected as having "NO_WRITERS"."
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6816: [DDS ISSUE# 30] Setting of default qos on factories (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-?? Setting_default_qos_on_factories


The specification provides a way to create Entities with QoS by
explicitly providing the QoS when the entity is created.


In some cases it would be desirable to provide a pattern such that
many entities can be easily created with the same QoS without the
application explicitly specifying it each time. That way the management
of the QoS can be "centralized" in a few modules of the application logic.


***PROPOSAL***


Add the following operations:


DomainParticipant::set_default_publisher_qos(),
DomainParticipant::get_default_publisher_qos(),
DomainParticipant::set_default_subscriber_qos(),
DomainParticipant::get_default_subscriber_qos(),
Publisher::set_default_datawriter_qos(),
Publisher::get_default_datawriter_qos(),
Subscriber::set_default_datareader_qos(),
Subscriber::get_default_datareader_qos().


This allows the application to set a default QoS for the entities that
the factory will create, and then explicitly use that QoS to create
the entities.


Furthermore, specify that if the set_default_xxx_qos operations are not
called, then the get_default_xxx_qos operations will return the defaults
specified in section 2.1.3.

Resolution: see below
Revised Text: Resolution: Add the following operations:
· On DomainParticipant: set_default_publisher_qos, get_default_publisher_qos, set_default_subscriber_qos, get_default_subscriber_qos, set_default_topic_qos, get_default_topic_qos
· On Publisher: set_default_datawriter_qos, get_default_datawriter_qos
· On Subscriber: set_default_datareader_qos, get_default_datareader_qos
This allows the application to set a default QoS for the entities that the factory will create, and then explicitly use that QoS to create the entities. Furthermore, specify that if the set_default_xxx_qos operations are not called then the get_default_xxx_qos operations will return the defaults specified in section 2.1.3. This change concerns the PIM (UML diagram and text) and the IDL.
Revised Text: Changes in PIM
· Section 2.1.2.2.1 DomainParticipant Class
· In the DomainParticipant table:
· Add operations set_default_publisher_qos, get_default_publisher_qos, set_default_subscriber_qos, get_default_subscriber_qos, set_default_topic_qos, get_default_topic_qos:
set_default_publisher_qos ReturnCode_t qos_list QosPolicy []
get_default_publisher_qos void out: qos_list QosPolicy []
set_default_subscriber_qos ReturnCode_t qos_list QosPolicy []
get_default_subscriber_qos void out: qos_list QosPolicy []
set_default_topic_qos ReturnCode_t qos_list QosPolicy []
get_default_topic_qos void out: qos_list QosPolicy []
· Add the following subsections:
2.1.2.2.1.19 set_default_publisher_qos
This operation sets a default value of the Publisher QoS policies which will be used for newly created Publisher entities in the case where the QoS policies are not explicitly specified in the create_publisher operation. This operation will check that the resulting policies are self-consistent; if they are not, the operation will have no effect and return INCONSISTENT_POLICY.
2.1.2.2.1.20 get_default_publisher_qos
This operation retrieves the default value of the Publisher QoS, that is, the QoS policies which will be used for newly created Publisher entities in the case where the QoS policies are not explicitly specified in the create_publisher operation. The values retrieved by get_default_publisher_qos will match the set of values specified on the last successful call to set_default_publisher_qos, or else, if the call was never made, the default values listed in the QoS table in Section 2.1.3.
2.1.2.2.1.21 set_default_subscriber_qos
This operation sets a default value of the Subscriber QoS policies which will be used for newly created Subscriber entities in the case where the QoS policies are not explicitly specified in the create_subscriber operation. This operation will check that the resulting policies are self-consistent; if they are not, the operation will have no effect and return INCONSISTENT_POLICY.
2.1.2.2.1.22 get_default_subscriber_qos
This operation retrieves the default value of the Subscriber QoS, that is, the QoS policies which will be used for newly created Subscriber entities in the case where the QoS policies are not explicitly specified in the create_subscriber operation. The values retrieved by get_default_subscriber_qos will match the set of values specified on the last successful call to set_default_subscriber_qos, or else, if the call was never made, the default values listed in the QoS table in Section 2.1.3.
2.1.2.2.1.23 set_default_topic_qos
This operation sets a default value of the Topic QoS policies which will be used for newly created Topic entities in the case where the QoS policies are not explicitly specified in the create_topic operation. This operation will check that the resulting policies are self-consistent; if they are not, the operation will have no effect and return INCONSISTENT_POLICY.
2.1.2.2.1.24 get_default_topic_qos
This operation retrieves the default value of the Topic QoS, that is, the QoS policies which will be used for newly created Topic entities in the case where the QoS policies are not explicitly specified in the create_topic operation. The values retrieved by get_default_topic_qos will match the set of values specified on the last successful call to set_default_topic_qos, or else, if the call was never made, the default values listed in the QoS table in Section 2.1.3.
· Section 2.1.2.4.1 Publisher Class
· In the Publisher table:
· Add operations set_default_datawriter_qos, get_default_datawriter_qos:
set_default_datawriter_qos ReturnCode_t qos_list QosPolicy []
get_default_datawriter_qos void out: qos_list QosPolicy []
· Add the following subsections:
2.1.2.4.1.14 set_default_datawriter_qos
This operation sets a default value of the DataWriter QoS policies which will be used for newly created DataWriter entities in the case where the QoS policies are not explicitly specified in the create_datawriter operation. This operation will check that the resulting policies are self-consistent; if they are not, the operation will have no effect and return INCONSISTENT_POLICY.
2.1.2.4.1.15 get_default_datawriter_qos
This operation retrieves the default value of the DataWriter QoS, that is, the QoS policies which will be used for newly created DataWriter entities in the case where the QoS policies are not explicitly specified in the create_datawriter operation. The values retrieved by get_default_datawriter_qos will match the set of values specified on the last successful call to set_default_datawriter_qos, or else, if the call was never made, the default values listed in the QoS table in Section 2.1.3.
· Section 2.1.2.5.2 Subscriber Class
· In the Subscriber table:
· Add operations set_default_datareader_qos, get_default_datareader_qos:
set_default_datareader_qos ReturnCode_t qos_list QosPolicy []
get_default_datareader_qos void out: qos_list QosPolicy []
· Add the following subsections:
2.1.2.5.2.1.5 set_default_datareader_qos
This operation sets a default value of the DataReader QoS policies which will be used for newly created DataReader entities in the case where the QoS policies are not explicitly specified in the create_datareader operation. This operation will check that the resulting policies are self-consistent; if they are not, the operation will have no effect and return INCONSISTENT_POLICY.
2.1.2.5.2.1.6 get_default_datareader_qos
This operation retrieves the default value of the DataReader QoS, that is, the QoS policies which will be used for newly created DataReader entities in the case where the QoS policies are not explicitly specified in the create_datareader operation. The values retrieved by get_default_datareader_qos will match the set of values specified on the last successful call to set_default_datareader_qos, or else, if the call was never made, the default values listed in the QoS table in Section 2.1.3.
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Interface DomainParticipant, add the following operations:
ReturnCode_t set_default_publisher_qos(in PublisherQos qos);
void get_default_publisher_qos(inout PublisherQos qos);
ReturnCode_t set_default_subscriber_qos(in SubscriberQos qos);
void get_default_subscriber_qos(inout SubscriberQos qos);
ReturnCode_t set_default_topic_qos(in TopicQos qos);
void get_default_topic_qos(inout TopicQos qos);
· Interface Publisher, add the following operations:
ReturnCode_t set_default_datawriter_qos(in DataWriterQos qos);
void get_default_datawriter_qos(inout DataWriterQos qos);
· Interface Subscriber, add the following operations:
ReturnCode_t set_default_datareader_qos(in DataReaderQos qos);
void get_default_datareader_qos(inout DataReaderQos qos);
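To illustrate the intended usage, a minimal C++ sketch follows (assuming the IDL-to-C++ mapping of the operations above; the participant object, the listener argument convention, and error handling are illustrative only):

    // Centralize QoS management: set a default once, reuse it for many entities.
    DDS::PublisherQos pub_qos;
    participant->get_default_publisher_qos(pub_qos);   // defaults from Section 2.1.3
    pub_qos.partition.name.length(1);
    pub_qos.partition.name[0] = CORBA::string_dup("ControlRoom");
    if (participant->set_default_publisher_qos(pub_qos) != DDS::RETCODE_OK) {
        // the policies were not self-consistent: INCONSISTENT_POLICY
    }
    // Later, possibly in another module: retrieve and use the stored default.
    DDS::PublisherQos qos;
    participant->get_default_publisher_qos(qos);
    DDS::Publisher* pub = participant->create_publisher(qos, NULL /*listener*/);
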
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6817: [DDS ISSUE# 31] Topic QoS refactor (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-86 Topic_qos_refactor and Ref-150
Tracking_of_topic_properties_by_reader_writer


The reason for the DDS specification to add QoS to the Topic was to
allow annotating the information-model with Topic QoS settings. That
way it is possible for the individual applications and their designers
to be relieved from many details and configure the writers/readers
from this information model.


However the definition of the Topic QoS does not match this.


Section 2.1.2.4.3.2 describes a model where if the QoS is 'not set' on
a DataWriter or a DataReader, then the Topic's QoS is used
instead. Furthermore it says that in this situation the QoS of the
Entity will "track" that of the Topic.


These statements are not consistent with the PIM or PSM, as there is no
way provided to 'not set' a QoS. All the APIs force complete setting
of the QoS.


Furthermore the 'tracking' behavior would be very hard to implement,
and it would also be very confusing for a user to have the QoS of an
entity magically change when the QoS of the topic changes. Lastly it
would also be hard to handle the case where, as a result of the
'tracking', the QoS of an entity becomes incompatible with that of
other entities already associated with it.


Given all this it would be desirable to make the use of Topic QoS
consistent with the APIs and also simpler to implement and understand.


***PROPOSAL***


Modify section 2.1.2.4.3.2.


Remove all references to behaviors regarding what happens if some QoS
is "not set"; rather, say that QoS is always explicitly set, although it
can be set from defaults in the factories.


Also remove the sentence regarding how the entity QoS can "track" the
Topic qos.


Explain that the pattern to create an entity with "default" QoS is to
get the default qos from the factory, and also get it from the Topic,
and then modify the desired policies before creating the entity.


To assist this common pattern we recommend adding the following
utility operations:


Publisher::initialize_from_topic_qos(inout DataWriterQos qos, in Topic
a_topic, in long mask)


Subscriber::initialize_from_topic_qos(inout DataReaderQos qos, in
Topic a_topic, in long mask)


Specify that all QoS on Topic is immutable.

Resolution: see below
Revised Text: Resolution: Remove all references to behaviors regarding what happens if some QoS is "not set"; rather, say that QoS is always explicitly set, although it can be set from defaults in the factories, and remove the sentence regarding how the entity QoS can "track" the Topic QoS. Explain that the pattern to create an entity with "default" QoS is to get the default QoS from the factory, and also get it from the Topic, and then modify the desired policies before creating the entity. To assist this common pattern, it is needed to add the following utility operations:
· Publisher::initialize_from_topic_qos(inout DataWriterQos qos, in Topic a_topic, in long mask)
· Subscriber::initialize_from_topic_qos(inout DataReaderQos qos, in Topic a_topic, in long mask)
Specify that all QoS on Topic is immutable. This change concerns the PIM (UML diagram and text) and the IDL.
Revised Text: Changes in PIM
· Section 2.1.2.4.1 Publisher Class
· In the Publisher table:
· Add operation copy_from_topic_qos:
copy_from_topic_qos ReturnCode_t inout: datawriter_qos QosPolicy [] topic_qos QosPolicy []
· Add the following subsection:
2.1.2.4.1.16 copy_from_topic_qos
This operation copies the policies in the topic_qos to the corresponding policies in the datawriter_qos (replacing values in the datawriter_qos, if present). This is a "convenience" operation most useful in combination with the operations get_default_datawriter_qos and Topic::get_qos. The operation copy_from_topic_qos can be used to merge the DataWriter default QoS policies with the corresponding ones on the Topic. The resulting QoS can then be used to create a new DataWriter, or set its QoS. This operation does not check the resulting datawriter_qos for consistency. This is because the 'merged' datawriter_qos may not be the final one, as the application can still modify some settings prior to applying the policies to the DataWriter.
· Section 2.1.2.4.2.3 set_qos (from Entity)
· Delete paragraph: "The setting of QoS on the DataWriter results in a combination of the policies set at DataWriter level and of the ones set at the related Topic level. In case both Topic and DataWriter specify values for the same QosPolicy (identified by its name), the value specified by the DataWriter takes precedence. This applies after creation time as well; if the DataWriter does not specify a policy, the policy value will track changes in the Topic's policy. To be more precise, for both Topic and DataWriter, the value of any QosPolicy can be either 'set' or 'not set.' The following table summarizes the resulting value of the policy depending on how it is specified for Topic and DataWriter:"
· Delete the table that follows the paragraph (only table in section 2.1.2.4.2.3)
· Delete the paragraph: "In a sense, QoS set at Topic level represents a default setting that is inherited and can be 'overridden' by any DataWriter that refers to that Topic."
· Section 2.1.2.5.2 Subscriber Class
· In the Subscriber table:
· Add operation copy_from_topic_qos:
copy_from_topic_qos ReturnCode_t inout: datareader_qos QosPolicy [] topic_qos QosPolicy []
· Add the following subsection:
2.1.2.5.2.17 copy_from_topic_qos
This operation copies the policies in the topic_qos to the corresponding policies in the datareader_qos (replacing values in the datareader_qos, if present). This is a "convenience" operation most useful in combination with the operations get_default_datareader_qos and Topic::get_qos.
The operation copy_from_topic_qos can be used to merge the DataReader default QoS policies with the corresponding ones on the Topic. The resulting QoS can then be used to create a new DataReader, or set its QoS. This operation does not check the resulting datareader_qos for consistency. This is because the 'merged' datareader_qos may not be the final one, as the application can still modify some policies prior to applying the policies to the DataReader.
· Section 2.1.2.5.3.3 set_qos (from Entity)
· Delete paragraph: "The setting of QoS on the DataReader results in a combination of the policies set at DataReader level and of the ones set at the related Topic level. The algorithm used to resolve the case where values for the same QosPolicy are set both on the DataReader and the Topic is the same described for the DataWriter in Section 2.1.2.4.2.3."
· Section 2.1.2.4.1.5 create_datawriter
· At the end of the section add the following paragraph and bullets: "Note that a common application pattern to construct the QoS for the DataWriter is to:
· Retrieve the QoS policies on the associated Topic by means of the get_qos operation on the Topic.
· Retrieve the default DataWriter qos by means of the get_default_datawriter_qos operation on the Publisher.
· Combine those two QoS policies and selectively modify policies as desired. The operation copy_from_topic_qos can be used to assist this task.
· Use the resulting QoS policies to construct the DataWriter."
· Section 2.1.2.5.2.5 create_datareader
· At the end of the section add the following paragraph and bullets: "Note that a common application pattern to construct the QoS for the DataReader is to:
· Retrieve the QoS policies on the associated Topic by means of the get_qos operation on the Topic.
· Retrieve the default DataReader qos by means of the get_default_datareader_qos operation on the Subscriber.
· Combine those two QoS policies and selectively modify policies as desired.
· Use the resulting QoS policies to construct the DataReader."
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Publisher interface, add operation:
ReturnCode_t copy_from_topic_qos(inout DataWriterQos a_datawriter_qos, in TopicQos a_topic_qos);
· Subscriber interface, add operation:
ReturnCode_t copy_from_topic_qos(inout DataReaderQos a_datareader_qos, in TopicQos a_topic_qos);
Disposition: Resolved
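The creation pattern described above can be sketched in C++ as follows (assuming the IDL-to-C++ mapping; the publisher and topic objects are assumed to exist and error handling is elided):

    // Merge factory defaults with the Topic QoS, then selectively override.
    DDS::TopicQos topic_qos;
    topic->get_qos(topic_qos);                          // QoS annotated on the Topic
    DDS::DataWriterQos dw_qos;
    publisher->get_default_datawriter_qos(dw_qos);      // factory defaults
    publisher->copy_from_topic_qos(dw_qos, topic_qos);  // merge Topic policies in
    dw_qos.reliability.kind = DDS::RELIABLE_RELIABILITY_QOS;  // selective override
    // Consistency is only checked here, not inside copy_from_topic_qos.
    DDS::DataWriter* dw =
        publisher->create_datawriter(topic, dw_qos, NULL /*listener*/);
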
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6818: [DDS ISSUE# 32] Create dependencies on type (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-89 State_dependency_register_type_create_datareader_writer


The specification does not say clearly whether a TopicDescription can
be created if the associated type has not been registered and what
happens if the user tries to do that.


***PROPOSAL***


State explicitly that the operations that create specializations of
TopicDescription, that is, create_topic, create_multitopic, and
lookup_topic, will return PRECONDITION_NOT_MET if the type has not been
locally registered.


Resolution: see below
Revised Text: Resolution: State explicitly that the operations that create specializations of TopicDescription, that is, create_topic, create_multitopic, and lookup_topic, will return PRECONDITION_NOT_MET if the type has not been locally registered.
· Section 2.1.2.2.1.5 create_topic already says that.
· There is no need to say anything for ContentFilteredTopic, as the type is not specified directly at creation time. Rather, the ContentFilteredTopic is passed a Topic which is already associated with the type.
The only clarification to be made therefore concerns the MultiTopic behavior. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.2.2.1.9 create_multitopic
· Replace: "The resulting type is specified by the type_name argument. The list of topics and the logic used to combine filter and re-arrange the information from each Topic are specified using the subscription_expression and expression_parameters arguments."
· with the two following paragraphs: "The resulting type is specified by the type_name argument. Prior to creating a MultiTopic the type must have been registered with the Service. This is done using the register_type operation on a derived class of the DataType interface as described in Section 2.1.2.3.6. The list of topics and the logic used to combine filter and re-arrange the information from each Topic are specified using the subscription_expression and expression_parameters arguments."
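For illustration, a short C++ sketch of the required ordering (assuming the IDL-to-C++ mapping; "FooDataType" stands for a hypothetical application class derived from the DataType interface of Section 2.1.2.3.6, and the participant object and topic_qos are assumed to exist):

    // The type must be registered locally before any TopicDescription uses it.
    FooDataType foo_type;
    foo_type.register_type(participant, "FooType");     // precondition
    DDS::Topic* topic = participant->create_topic(
        "FooTopic", "FooType", topic_qos, NULL /*listener*/);
    // Without the register_type call, the creation would fail with the
    // implied error PRECONDITION_NOT_MET.
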
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6819: [DDS ISSUE# 33] Initialization of resources needed (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Initialization of resources needed to implement
DURABILITY TRANSIENT or PERSISTENT


Ref-91 Configuration_of_the_transient_and_persistent_service


The DDS specification does not provide a clear model of the behavior
of the service when the DURABILITY QoS is set to TRANSIENT or
PERSISTENT.


Moreover, the application needs to be able to specify the QoS
parameters and resources that the implementation is allowed to use
when implementing this service.


***PROPOSAL***


Add the following explanation to section 2.1.3.2:


For the purpose of implementing the DURABILITY QoS settings of
TRANSIENT, PERSISTENT the service behaves "as if" it had a "built-in
DataReader and DataWriter" for each Topic that is configured to have
said DURABILITY kind. In other words, it is "as if" somewhere in the
system (possibly on a remote node) there was a "built-in durability"
DataReader that subscribed to that Topic and a "built-in durability"
DataWriter that published that Topic as needed for the new subscribers
that join the system.


For each Topic, the built-in "persistence service"
datareader/datawriter has its QoS configured from the Topic QoS for
that Topic as described in Issue Topic_qos_refactor (Issue #86). In
other words, it is as-if the service first did a
"Participant::lookup_topic" for that Topic, and then used the QoS on
the Topic to configure the built-in entities.


As a consequence of this model, the transient or persistence service
can be configured by means of setting the proper QoS on the Topic.


For a given Topic, the usual request/offered semantics apply to the
matching between any DataWriter in the system that writes the Topic
and the built-in transient/persistent DataReader for that
Topic. Similarly for the builtin transient/persistent DataWriter for
the Topic and any DataReader for the Topic. As a consequence, a
DataWriter that has an incompatible QoS with respect to what the Topic
specified for the built-in transient/persistent DataReader will not
send its data to the transient/persistent service, and a DataReader
that has incompatible QoS with respect to that specified in the Topic
for the transient/persistent DataWriter will not get data from it.


Incompatibilities between local DataReader/DataWriter entities and the
corresponding builtin transient/persistent entities cause the
"incompatible qos" listener to be invoked as they would with any other
entity.


Resolution: see below
Revised Text: Resolution: Add the following explanations to section 2.1.3.2:
· For the purpose of implementing the DURABILITY QoS settings of TRANSIENT, PERSISTENT the service behaves "as if" it had a "built-in DataReader and DataWriter" for each Topic that is configured to have said DURABILITY kind. In other words, it is "as if" somewhere in the system (possibly on a remote node) there was a "built-in durability" DataReader that subscribed to that Topic and a "built-in durability" DataWriter that published that Topic as needed for the new subscribers that join the system.
· For each Topic, the built-in "persistence service" datareader/datawriter has its QoS configured from the Topic QoS for that Topic as described in Issue Topic_qos_refactor (Issue #86). In other words, it is as-if the service first did a "Participant::lookup_topic" for that Topic, and then used the QoS on the Topic to configure the built-in entities.
· As a consequence of this model, the transient or persistence service can be configured by means of setting the proper QoS on the Topic.
· For a given Topic, the usual request/offered semantics apply to the matching between any DataWriter in the system that writes the Topic and the built-in transient/persistent DataReader for that Topic. Similarly for the built-in transient/persistent DataWriter for the Topic and any DataReader for the Topic. As a consequence, a DataWriter that has an incompatible QoS with respect to what the Topic specified for the built-in transient/persistent DataReader will not send its data to the transient/persistent service, and a DataReader that has incompatible QoS with respect to that specified in the Topic for the transient/persistent DataWriter will not get data from it.
· Incompatibilities between local DataReader/DataWriter entities and the corresponding built-in transient/persistent entities cause the "incompatible qos" listener to be invoked as they would with any other entity.
This change only concerns the PIM (text).
Revised Text: Changes in PIM
· At the end of section 2.1.3.2 add the following paragraphs:
For the purpose of implementing the DURABILITY QoS kinds other than VOLATILE, the service behaves "as if" for each Topic that has TRANSIENT or PERSISTENT DURABILITY kind there was a corresponding "built-in" DataReader and DataWriter configured to have the same DURABILITY kind. In other words, it is "as if" somewhere in the system (possibly on a remote node) there was a "built-in durability DataReader" that subscribed to that Topic and a "built-in durability DataWriter" that published that Topic as needed for the new subscribers that join the system.
For each Topic, the built-in fictitious "persistence service" DataReader and DataWriter has its QoS configured from the Topic QoS of the corresponding Topic. In other words, it is "as-if" the service first did lookup_topic to access the Topic, and then used the QoS from the Topic to configure the fictitious built-in entities.
A consequence of this model is that the transient or persistence service can be configured by means of setting the proper QoS on the Topic.
For a given Topic, the usual request/offered semantics apply to the matching between any DataWriter in the system that writes the Topic and the built-in transient/persistent DataReader for that Topic; similarly for the built-in transient/persistent DataWriter for a Topic and any DataReader for the Topic. As a consequence, a DataWriter that has an incompatible QoS with respect to what the Topic specified will not send its data to the transient/persistent service, and a DataReader that has an incompatible QoS with respect to that specified in the Topic will not get data from it.
Incompatibilities between local DataReader/DataWriter entities and the corresponding fictitious "built-in transient/persistent entities" cause the REQUESTED_INCOMPATIBLE_QOS/OFFERED_INCOMPATIBLE_QOS status to change and the corresponding Listener invocations and/or signaling of Condition objects and WaitSets as they would with non-fictitious entities.
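Since the fictitious built-in entities take their QoS from the Topic, the durability service is configured purely through the Topic QoS. A hedged C++ sketch of this (names per the IDL-to-C++ mapping; the participant object, the type registration, and the TRANSIENT durability kind of Issue 6835 are assumed):

    // Configure the transient "durability service" for this Topic via its QoS.
    DDS::TopicQos topic_qos;
    participant->get_default_topic_qos(topic_qos);
    topic_qos.durability.kind = DDS::TRANSIENT_DURABILITY_QOS;
    topic_qos.history.kind = DDS::KEEP_LAST_HISTORY_QOS;
    topic_qos.history.depth = 1;   // the service keeps the last sample per instance
    DDS::Topic* topic = participant->create_topic(
        "VehicleState", "VehicleStateType", topic_qos, NULL /*listener*/);
    // Readers/writers with QoS incompatible with these settings will not
    // exchange data with the transient service, per the matching rules above.
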
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Discussion:


Issue 6820: [DDS ISSUE# 34] Initial data when DataWriter appears (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-90 Initial_data_from_transient_or_persistent_data


The specification mentions that if the DURABILITY is set to TRANSIENT
or PERSISTENT a newly appearing DataReader would be able to get some
amount of history data.


However the precise semantics of this are not clear. For example, is
the application allowed to receive information as soon as the Reader is
active (perhaps from active writers), even before the historical data
has been received?


It would be desirable to offer the application some control about how
to initialize the reader when it first joins the system.


***PROPOSAL***


Introduce an operation DataReader::get_historical_data(in Duration_t
max_time_to_wait);


This operation will block for a maximum of max_time_to_wait until the
system gets the initial data.


Introduce the ReturnCode_t "TIMEOUT" to section 2.1.1.1.


State that the get_historical_data operation will return OK if all the
initial data has been received. Otherwise it will return TIMEOUT if
the system cannot ensure that all historical data has been received.

Resolution: see below
Revised Text: Resolution: Introduce an operation DataReader::get_historical_data(in Duration_t max_time_to_wait). This operation will block for a maximum of max_time_to_wait until the system gets the initial data. Introduce the ReturnCode_t "TIMEOUT" to section 2.1.1.1. State that the get_historical_data operation will return OK if all the initial data has been received. Otherwise it will return TIMEOUT if the system cannot ensure that all historical data has been received. This change concerns the PIM (UML diagram and text) and the IDL.
Revised Text: Changes in PIM
· Section 2.1.2.5.3 DataReader class
· In the DataReader table:
· Add the operation wait_for_historical_data:
wait_for_historical_data ReturnCode_t max_wait Duration_t
· Add the following subsection:
2.1.2.5.3.20 wait_for_historical_data
This operation is intended only for DataReader entities that have a non-VOLATILE DURABILITY QoS kind. As soon as an application enables a non-VOLATILE DataReader it will start receiving both "historical" data, i.e., the data that was written prior to the time the DataReader joined the domain, as well as any new data written by the DataWriter entities. There are situations where the application logic may require the application to wait until all "historical" data is received. This is the purpose of the wait_for_historical_data operation. The operation wait_for_historical_data blocks the calling thread until either all "historical" data is received, or else the duration specified by the max_wait parameter elapses, whichever happens first. A return value of OK indicates that all the "historical" data was received; a return value of TIMEOUT indicates that max_wait elapsed before all the data was received.
· Section 2.1.1.1 Return codes table:
· Add the return code TIMEOUT: "The operation timed out."
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Add (after RETCODE_ALREADY_DELETED): const ReturnCode_t RETCODE_TIMEOUT = 10;
· Interface DataReader
· Add the operation: ReturnCode_t wait_for_historical_data(in Duration_t max_wait);
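A minimal C++ sketch of the new operation (assuming the IDL-to-C++ mapping; the reader object is assumed to be an enabled, non-VOLATILE DataReader):

    // Block until historical data arrives, for at most 30 seconds.
    DDS::Duration_t max_wait;
    max_wait.sec = 30;
    max_wait.nanosec = 0;
    DDS::ReturnCode_t ret = reader->wait_for_historical_data(max_wait);
    if (ret == DDS::RETCODE_TIMEOUT) {
        // max_wait elapsed before all "historical" data was received
    } else if (ret == DDS::RETCODE_OK) {
        // all "historical" data received; new data continues to arrive as usual
    }
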
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6821: Inconsistency on what operations may return NOT_ENABLED (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-204 Entity_table_get_statuscondition_return_not_enabled


Section 2.1.2.1.1, 1st paragraph under the Entity table: In the list
of methods which cannot return a "NOT_ENABLED", the
"get_statuscondition" method is not mentioned.


This is inconsistent with the text in 2.1.2.1.1.7 where it says that
get_statuscondition may be called even if the entity is not enabled.


***PROPOSAL***


Remove the sentence in 2.1.2.1.1, 1st paragraph, "All operations except
for ... return the value NOT_ENABLED", as this is already described in
2.1.2.1.1.7.


Resolution: see below
Revised Text: Resolution: Remove the sentence in 2.1.2.1.1, 1st paragraph, "All operations except for … return the value NOT_ENABLED", as this is already described in 2.1.2.1.1.7. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.2.1.1
· Remove paragraph: "All operations except for set_qos, get_qos, set_listener, get_listener and enable may return the value NOT_ENABLED."
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6822: DDS ISSUE# 36] QoS clarifications (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-208 Latency_budget_description


Section 2.1.3 has a wrong description of LATENCY_BUDGET. It says that
it specifies the acceptable delay from production time until 'it is
received by subscribing application' which might suggest that it
includes the time an application might wait until actually reading the
data.


Rather it should say that it specifies the acceptable delay from
production time until the data is inserted in application-cache and
the application is notified of the fact.


***PROPOSAL***


Update the description in the table to say that "it specifies the
acceptable delay from production time until the data is inserted in
application-cache and the application is notified of the fact".


Resolution: see below
Revised Text: Resolution: Update the description in the table to say that "it specifies the acceptable delay from production time until the data is inserted in application-cache and the application is notified of the fact". This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.3 QoS table, LATENCY_BUDGET QosPolicy:
· Replace: "Provides a hint as to the maximum acceptable delay from the time the data is written to the time it is received by the subscribing applications."
· with: "Specifies the maximum acceptable delay from the time the data is written until the data is inserted in the receiver's application-cache and the receiving application is notified of the fact."
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6823: Ref-210 Clarification_of_responsibility_of_RxO_qos (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
In section 2.1.3 the explanation of the table mentions: "The QosPolicy
objects that need to be set in a compatible manner at the publisher
end are indicated by the setting of the RxO property" This suggests an
asymmetry between publishers and subscribers.  It is not true that
compatibility of policy objects is entirely the responsibility of the
publisher, is it?


***PROPOSAL***


In said sentence replace "at the publisher end" with "between the
publisher and subscriber ends".

Resolution: see below
Revised Text: Resolution: In said sentence replace "at the publisher end" with "between the publisher and subscriber ends". This change concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.3 (above the RxO bullets):
· Replace: "The QosPolicy objects that need to be set in a compatible manner at the publisher end are indicated by the setting of the 'RxO' property:"
· with: "The QosPolicy objects that need to be set in a compatible manner between the publisher and subscriber ends are indicated by the setting of the 'RxO' property:"
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6824: Ref-212 Qos_Coupling_TimeBasedFilter_deadline (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
In section 2.1.3.4 it is unclear whether the service is expected to
check compatibilities between the DEADLINE and TIME_BASED_FILTER
policies. For example, a DataWriter offering a DEADLINE period smaller
than a DataReader's TIME_BASED_FILTER is bound to lead to problems with
reliable transmission.


Another example is a DataReader which has a DEADLINE period smaller
than the TIME_BASED_FILTER period.


***PROPOSAL***


State that a DataReader which has a DEADLINE period smaller than the
TIME_BASED_FILTER period results in INCONSISTENT_POLICY.

Resolution: see below
Revised Text: Resolution: State that a DataReader which has a DEADLINE period smaller than the TIME_BASED_FILTER period results in INCONSISTENT_POLICY. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.3.4 DEADLINE
· Add the following paragraph at the end: "The setting of the DEADLINE policy must be consistent with that of the TIME_BASED_FILTER. For these two policies to be consistent the settings must be such that "deadline >= minimum_separation". An attempt to set these policies in an inconsistent manner will cause the INCONSISTENT_POLICY status to change and any associated Listeners/WaitSets to be triggered."
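For illustration, a C++ sketch of a consistent setting (IDL-to-C++ mapping assumed; the subscriber object is illustrative):

    // DEADLINE and TIME_BASED_FILTER must satisfy deadline >= minimum_separation.
    DDS::DataReaderQos dr_qos;
    subscriber->get_default_datareader_qos(dr_qos);
    dr_qos.time_based_filter.minimum_separation.sec = 1;   // at most 1 sample/s
    dr_qos.time_based_filter.minimum_separation.nanosec = 0;
    dr_qos.deadline.period.sec = 5;                        // expect one every 5 s
    dr_qos.deadline.period.nanosec = 0;
    // 5 s >= 1 s, so the pair is consistent; reversing the two values would
    // yield INCONSISTENT_POLICY.
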
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6825: Ref-104 Coupling_bwn_TIME_BASED_FILTER_and_RELIABILITY (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Section 2.1.3 on the QoS table. The description of the
TIME_BASED_FILTER QoS says:


"The setting of this QoS is incompatible with RELIABILITY policy set
to ALL"


First, there is no such setting for the reliability policy. It was
intended to say it is incompatible with RELIABILITY 'RELIABLE' and
HISTORY 'KEEP_ALL'.


However it is unclear why it is necessary to introduce such an
incompatibility. It would be better to interpret the meaning of
RELIABLE and KEEP_ALL to mean that the application desires that all
samples that pass whichever filters have been specified are propagated
reliably and kept by the middleware until they can be delivered to the
DataReader. In other words, the RELIABILITY and HISTORY policies apply
after the "filter-type" policies apply. The filters determine what is
of interest, the reliability whether samples lost by the transport
should be retried, and the HISTORY whether to keep old values that have
not been delivered to the application once a new value
exists. Interpreted this way, all combinations of the above policies are
sensible. This approach also extends to the ContentFilteredTopic.


***PROPOSAL***


Remove that comment from the TIME_BASED_FILTER QoS


Add a third paragraph to section 2.1.3.8 that explains that it is
indeed possible to specify RELIABILITY RELIABLE, and a HISTORY
KEEP_ALL and still set a TIME_BASED_FILTER. The paragraph would say:


The setting of a TIME_BASED_FILTER, that is, the selection of a
minimum_separation with a value greater than zero is compatible with
all settings of the HISTORY and RELIABILITY QoS. The TIME_BASED_FILTER
specifies the samples that are of interest to the DataReader. The
HISTORY and RELIABILITY affect the behavior of the middleware with
respect to the samples that have been determined to be of interest to
the DataReader, that is, they apply after the TIME_BASED_FILTER has
been applied.


Specify that if the reliability is RELIABLE then in steady-state
it should behave as-if the last sample passes the
TIME_BASED_FILTER. In other words, in steady state the last sample
should eventually become available to the receiver.

Resolution: see below
Revised Text: Resolution: Remove that comment from the TIME_BASED_FILTER QoS. Add a third paragraph to section 2.1.3.8 that explains that it is indeed possible to specify RELIABILITY RELIABLE and a HISTORY KEEP_ALL and still set a TIME_BASED_FILTER. The paragraph would say: "The setting of a TIME_BASED_FILTER, that is, the selection of a minimum_separation with a value greater than zero, is compatible with all settings of the HISTORY and RELIABILITY QoS. The TIME_BASED_FILTER specifies the samples that are of interest to the DataReader. The HISTORY and RELIABILITY affect the behavior of the middleware with respect to the samples that have been determined to be of interest to the DataReader, that is, they apply after the TIME_BASED_FILTER has been applied." Specify that if the reliability is RELIABLE then in steady-state it should behave as-if the last sample passes the TIME_BASED_FILTER. In other words, in steady state the last sample should eventually become available to the receiver. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.3 QoS table, TIME_BASED_FILTER QoS policy:
· Remove sentence: "The setting of this QoS policy is incompatible with RELIABILITY policy set to ALL."
· Section 2.1.3.8 TIME_BASED_FILTER
· Add the following paragraphs at the end: "The setting of a TIME_BASED_FILTER, that is, the selection of a minimum_separation with a value greater than zero, is compatible with all settings of the HISTORY and RELIABILITY QoS. The TIME_BASED_FILTER specifies the samples that are of interest to the DataReader. The HISTORY and RELIABILITY QoS affect the behavior of the middleware with respect to the samples that have been determined to be of interest to the DataReader, that is, they apply after the TIME_BASED_FILTER has been applied. In the case where the reliability QoS kind is RELIABLE then in steady-state, defined as the situation where the DataWriter does not write new samples for a period "long" compared to the minimum_separation, the system should guarantee delivery of the last sample to the DataReader."
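A hedged C++ sketch of the combination this clarification allows (IDL-to-C++ mapping assumed; the subscriber object is illustrative):

    // Filter first, then apply reliability and history to the samples of interest.
    DDS::DataReaderQos dr_qos;
    subscriber->get_default_datareader_qos(dr_qos);
    dr_qos.time_based_filter.minimum_separation.sec = 2;      // one sample per 2 s
    dr_qos.time_based_filter.minimum_separation.nanosec = 0;
    dr_qos.reliability.kind = DDS::RELIABLE_RELIABILITY_QOS;  // reliable delivery
    dr_qos.history.kind = DDS::KEEP_ALL_HISTORY_QOS;          // keep all accepted
    // All three settings are valid together; in steady state the last written
    // sample is still guaranteed to reach the DataReader.
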
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6826: Ref-156 Clarify_TIME_BASED_FILTER (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
The specification does not state clearly whether the TIME_BASED_FILTER
applies on a per-instance basis or for the whole topic.


It should probably be per instance but it should be said explicitly.


***PROPOSAL***


Add the clarification to section 2.1.3.8. State that the filter
applies per instance, that is, the reader is requested not to receive
more than one sample per minimum_separation period for each instance.

Resolution: see below
Revised Text: Resolution: Add the clarification to section 2.1.3.8. State that the filter applies per instance, that is, the reader is requested not to receive more than one sample per minimum_separation period for each instance. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.3.8
· Add the following paragraph after the first paragraph ending in "… at most one change every minimum_separation period.": "The TIME_BASED_FILTER applies to each instance separately, that is, the constraint is that the DataReader does not want to see more than one sample of each instance per minimum_separation period."
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6827: Ref-106 Desc_of_Inconsistent_topic_status::total_count_change (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Section 2.1.4.1. The table that describes the statuses and the meaning
of the attributes has an incorrect description of
"total_count_change".


Currently it talks about topic types, not counts.


***PROPOSAL***


In section 2.1.4.1 replace the text "The type of the last topic
discovered..."


With: "The incremental number of inconsistent topics discovered since
the last time the listener was called or the status was read"

Resolution: see below
Revised Text: Resolution: In section 2.1.4.1 replace the text "The type of the last topic discovered…" with "The incremental number of inconsistent topics discovered since the last time the listener was called or the status was read". This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.4.1 Status table, InconsistentTopicStatus:
· Replace: "The type of the last Topic discovered that had the same name as the Topic to which this status is attached but had an inconsistent type."
· with: "The incremental number of inconsistent topics discovered since the last time the listener was called or the status was read."
Disposition: Resolved
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6828: Ref-108 Ownership_interaction_with_deadline (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
In case where the OWNERSHIP QoS is EXCLUSIVE, the specification
describes that ownership is lost if the DataWriter loses its
liveliness. However it does not describe whether it loses the
ownership if it misses its deadline with regards to writing the
instance.


If this case is left unspecified, the most natural interpretation
would be that the DataWriter does indeed maintain ownership even when
it is missing its deadlines. This seems undesirable.


We do not want a DataWriter that has promised to write samples within
a pre-stated deadline, and fails that contract, to retain ownership of
an instance and by so doing prevent other writers from writing the
instance, thereby starving the reader of data. The
underlying goal is to make the system robust to faults that affect a
single entity.


***PROPOSAL***


Modify section 2.1.3.6.2 (EXCLUSIVE kind) to reflect this. This
affects several paragraphs:


** First paragraph


Replace: The owner is determined by selecting the DataWriter with the
highest value of the strength that is currently "alive" as defined by
the LIVELINESS QoS policy.


With: The owner is determined by selecting the DataWriter with the
highest value of the strength that is both "alive" as defined by the
LIVELINESS QoS policy and has not violated its DEADLINE contract with
regards to the data-instance.


** First paragraph


After "Ownership can therefore change as a result of (a) ..."


Add a fourth case: (d) a deadline with regards to the instance that is
missed by the DataWriter that owns the instance.


** Fifth paragraph


Modify "It is also required that the owner remains the same until
there is a change in strength, liveliness, or a new DataWriter with
higher strength modifies the instance."


To: "It is also required that the owner remains the same until there
is a change in strength or liveliness, the owner misses a deadline on
the instance, or a new DataWriter with higher strength modifies the
instance."


Resolution: see below
Revised Text: Resolution: Modify section 2.1.3.6.2 (EXCLUSIVE kind) to reflect this. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.3.6.2:
· First paragraph, replace: "The owner is determined by selecting the DataWriter with the highest value of the strength that is currently "alive" as defined by the LIVELINESS QoS policy."
· with: "The owner is determined by selecting the DataWriter with the highest value of the strength that is both "alive" as defined by the LIVELINESS QoS policy and has not violated its DEADLINE contract with regards to the data-instance."
· First paragraph, after "Ownership can therefore change as a result of (a) …" add a fourth case, as follows: "(d) a deadline with regards to the instance that is missed by the DataWriter that owns the instance."
· Fifth paragraph, modify: "It is also required that the owner remains the same until there is a change in strength, liveliness, or a new DataWriter with higher strength modifies the instance."
· to: "It is also required that the owner remains the same until there is a change in strength or liveliness, the owner misses a deadline on the instance, or a new DataWriter with higher strength modifies the instance."
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6829: Ref-109 Destination_order_should_be_request_offered (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
In order to support the BY_SOURCE_TIMESTAMP setting the DataWriter
needs to take appropriate actions (e.g. embed timestamps). Therefore
the DESTINATION_ORDER QoS should follow the same request/offered
pattern other QoSs do.


***PROPOSAL***


Modify the QoS table in section 2.1.3 and add DataWriter in the
"Concerns" column for DESTINATION_ORDER. Also replace with "Yes" in
the "RxO" (request/offered) column.


Add a paragraph to section 2.1.3.11 stating the request/offered
compatibility as follows:


The value offered is considered compatible with the value requested if
and only if the inequality "offered kind >= requested kind" evaluates
to 'TRUE'. For the purposes of this inequality, the values of
DESTINATION_ORDER kind are considered ordered such that
BY_DESTINATION_TIMESTAMP < BY_SOURCE_TIMESTAMP.

Resolution: see below
Revised Text: Resolution: Modify the QoS table in section 2.1.3 and add DataWriter in the "Concerns" column for DESTINATION_ORDER. Also replace with "Yes" in the "RxO" (request/offered) column. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.3.11
· Add the following paragraph: "The value offered is considered compatible with the value requested if and only if the inequality "offered kind >= requested kind" evaluates to 'TRUE'. For the purposes of this inequality, the values of DESTINATION_ORDER kind are considered ordered such that BY_DESTINATION_TIMESTAMP < BY_SOURCE_TIMESTAMP."
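The request/offered check this implies can be written as a one-line comparison, relying on the ordering of the kinds stated above. A hedged C++ sketch (the enum type name follows the IDL-to-C++ mapping and is illustrative only):

    // Compatible iff offered kind >= requested kind, given the ordering of
    // the DESTINATION_ORDER kinds described in the paragraph above.
    bool destination_order_compatible(DDS::DestinationOrderQosPolicyKind offered,
                                      DDS::DestinationOrderQosPolicyKind requested)
    {
        return offered >= requested;
    }
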
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6830: Ref-111 Default_values_for_qos (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
The QoS table in section 2.1.3 specifies default values for some of
the QoS. Others however are unspecified.


To be consistent we should provide some default value for all
QoS. Meaning, if the user never specified the QoS (assuming the
Topic_qos_as_default_for_datareader_or_datawriter_qos issue is
resolved), then we should specify what the behavior should be.


***PROPOSAL***


Complete the table in section 2.1.3 with the following values:


USER_DATA: empty sequence (zero-sized)


DURABILITY: VOLATILE


PRESENTATION: access_scope=INSTANCE, coherent_access=FALSE,
ordered_access=FALSE


DEADLINE: (already specified)


LATENCY_BUDGET: says "zero" but that is not really meaningful so
perhaps INFINITE?


OWNERSHIP: (already specified)


OWNERSHIP_STRENGTH: zero


LIVELINESS: (already specified)


TIME_BASED_FILTER: (already specified)


PARTITION: empty sequence (zero length)


RELIABILITY: (already specified)


DESTINATION_ORDER: BY_RECEPTION_TIMESTAMP


HISTORY: (already specified)


RESOURCE_LIMITS: (already specified)

Resolution: see below
Revised Text: Resolution: Complete the table in section 2.1.3 with the following values:
· USER_DATA: empty sequence (zero-sized)
· DURABILITY: VOLATILE
· PRESENTATION: access_scope=INSTANCE, coherent_access=FALSE, ordered_access=FALSE
· OWNERSHIP_STRENGTH: zero
· PARTITION: empty sequence (zero length)
· DESTINATION_ORDER: BY_RECEPTION_TIMESTAMP
· LATENCY_BUDGET: leave default value as zero but fix "duraration" typo
This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.3 QoS table.
· USER_DATA policy, at end of Meaning cell add: "The default value is an empty (zero-sized) sequence."
· DURABILITY policy, VOLATILE row, at end of Meaning cell add: "This is the default kind."
· PRESENTATION policy, INSTANCE row, at end of Meaning cell add: "This is the default access_scope."
· PRESENTATION policy, coherent_access row, at end of Meaning cell add: "The default setting of coherent_access is FALSE."
· PRESENTATION policy, ordered_access row, at end of Meaning cell add: "The default setting of ordered_access is FALSE."
· LATENCY_BUDGET policy, replace "duraration" with "duration".
· OWNERSHIP_STRENGTH policy, at end of Meaning cell add: "The default value of the ownership_strength is zero."
· PARTITION policy, at end of Meaning cell add: "The default value is an empty (zero-sized) sequence. This is treated as a special value that matches any partition."
· DESTINATION_ORDER policy, kind row, at end of Meaning cell add: "The default kind is BY_RECEPTION_TIMESTAMP."
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6831: Ref-144 Wrong_description_of_compatible_DURABILITY (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
In Section 2.1.3, QoS table. Section on DURABILITY setting.


Says compatible only if requested kind > offered kind. Should have
said requested >= offered. Same in PRESENTATION


***PROPOSAL***


Replace "requested kind > offered kind" with "requested kind >=
offered kind"

Resolution: see below
Revised Text: Resolution: Replace "requested kind > offered kind" with "requested kind >= offered kind". This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.3.2 DURABILITY
· Second paragraph, replace: "The value offered is considered compatible with the value requested if and only if the inequality "offered kind > requested kind" evaluates to 'TRUE'."
· with: "The value offered is considered compatible with the value requested if and only if the inequality "offered kind >= requested kind" evaluates to 'TRUE'."
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6832: Ref-165 Make_USER_DATA_changeable (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
In Section 2.1.3, the QoS table states that USER_DATA is immutable.


Many of the use-cases of the USER_DATA require the application to be
able to change the USER_DATA dynamically.


***PROPOSAL***


Make USER_DATA mutable in section 2.1.3.


Resolution: see below
Revised Text: Resolution: Make USER_DATA mutable in section 2.1.3. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.3 QoS table
· USER_DATA policy, Changeable cell, replace "No" with "Yes".
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6833: Ref-144 User_data_on_topic (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Some use-cases would benefit from having USER_DATA also on Topic.


***PROPOSAL***


Add TOPIC_DATA to the TopicQoS with the same definition as the
USER_DATA. The initial value should be an empty sequence.


Add topic_data as a field in the DCPSTopic in the table describing the
built-in topics in section 2.1.5, page 2-90.


Resolution: see below
Revised Text: Resolution: Add TOPIC_DATA to the TopicQoS with the same definition as the USER_DATA. The initial value should be an empty sequence. Add topic_data as a field in the DCPSTopic in the table describing the built-in topics in section 2.1.5, page 2-90. This change concerns the PIM (UML diagram and text).
Revised Text: [Note that the changes listed here, which correspond to the issue at the time of the poll, are incomplete. This problem was resolved by the introduction and resolution of a new issue 7066.]
Changes in PIM
· Section 2.1.3 QoS table
· USER_DATA policy, Concerns cell: add (after DataWriter) ", Topic".
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6834: Ref-142 Confusing_description_of_manual_by_participant (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
In Section 2.1.3, QoS table


Clarify the LIVELINESS "MANUAL_BY_PARTICIPANT". What it says is
correct, but it is getting some people confused.


It says:


"The Service will assume that as long as at least one Entity within
the domain has asserted its liveliness the Entity is also alive."


Some people are mistakenly understanding this as:


"...at least one Entity within the domain has asserted its liveliness
the domain is also alive."?


So maybe it's best to say:


"The Service will assume that as long as at least one Entity within
the domain has asserted its liveliness the other Entities in the
domain are also alive."


***PROPOSAL***


Replace as stated above

Resolution: see below
Revised Text: Resolution: Change to "The Service will assume that as long as at least one Entity within the domain has asserted its liveliness the other Entities in the domain are also alive." This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.3 QoS table
· LIVELINESS policy, MANUAL_BY_PARTICIPANT row, Meaning cell, replace: "The Service will assume that as long as at least one Entity within the domain has asserted its liveliness the Entity is also alive."
· with: "The Service will assume that as long as at least one Entity within the DomainParticipant has asserted its liveliness the other Entities in that same DomainParticipant are also alive."
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6835: Ref-162 Separate_transient_into_two_kinds (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
The description of the DURABILITY Qos in section 2.1.3 is ambiguous
with regards to the TRANSIENT kind.  It would appear that there are in
fact two kinds:


The first interpretation, "TRANSIENT_LOCAL", would have the durability
tied to the liveliness of the DataWriter.


The second interpretation, "TRANSIENT_GLOBAL", would tie the durability
to the fact that some "durability service" in the system is still
executing.


It should be explained which of these interpretations is meant.


***PROPOSAL***


Modify the QoS table in 2.1.3 to separate the TRANSIENT durability
into two kinds: TRANSIENT_LOCAL and TRANSIENT.


TRANSIENT_LOCAL ties the durability to the liveliness of the
writer. This is the mandatory level described in Appendix A.


TRANSIENT allows the durability to survive the liveliness of the
DataWriter. Explain these kinds in the table and in section 2.1.3.2.


Resolution: see below
Revised Text: Modify the QoS table in 2.1.3 to separate the TRANSIENT durability into two kinds: TRANSIENT_LOCAL and TRANSIENT. TRANSIENT_LOCAL ties the durability to the liveliness of the writer. This is the mandatory level described in Appendix A. TRANSIENT allows the durability to survive the liveliness of the DataWriter. Explain these kinds in the table and in section 2.1.3.2. This change concerns the PIM (text) and the IDL.
Revised Text: Changes in PIM
· Section 2.1.3 QoS table
· DURABILITY policy, Value cell, replace:
A "kind": VOLATILE, TRANSIENT, or PERSISTENT
with:
A "kind": VOLATILE, TRANSIENT_LOCAL, TRANSIENT, or PERSISTENT
· DURABILITY policy, TRANSIENT row:
· Value cell, replace:
TRANSIENT
with:
TRANSIENT_LOCAL, TRANSIENT
· Meaning cell, replace:
The Service is only required to keep the data in memory and not in permanent storage.
with the following bulleted paragraphs:
For TRANSIENT_LOCAL, the service is only required to keep the data in the memory of the DataWriter that wrote the data and the data is not required to survive the DataWriter.
For TRANSIENT, the service is only required to keep the data in memory and not in permanent storage; but the data is not tied to the lifecycle of the DataWriter and will survive it.
Support for TRANSIENT kind is optional.
· The resulting table row is:
TRANSIENT_LOCAL, TRANSIENT: The Service will attempt to keep some samples so that they can be delivered to any potential late-joining DataReader. Which particular samples are kept depends on other QoS such as HISTORY and RESOURCE_LIMITS. For TRANSIENT_LOCAL, the service is only required to keep the data in the memory of the DataWriter that wrote the data and the data is not required to survive the DataWriter. For TRANSIENT, the service is only required to keep the data in memory and not in permanent storage; but the data is not tied to the lifecycle of the DataWriter and will, in general, survive it. Support for TRANSIENT kind is optional.
· Section 2.1.3.2 DURABILITY
· Second paragraph, replace:
VOLATILE < TRANSIENT < PERSISTENT
with:
VOLATILE < TRANSIENT_LOCAL < TRANSIENT < PERSISTENT
· Appendix A
· Persistence profile, replace:
This profile adds the optional setting 'PERSISTENT' of the DURABILITY QoS policy kind. This profile enables saving data into permanent storage so that it can survive system outages. See page 2.
with:
This profile adds the optional settings 'TRANSIENT' and 'PERSISTENT' of the DURABILITY QoS policy kind. This profile enables saving data into either TRANSIENT memory, or permanent storage, so that it can survive the lifecycle of the DataWriter and system outages. See section 2.1.3.2.
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Modify enum DurabilityQosPolicyKind to be:
enum DurabilityQosPolicyKind {
    VOLATILE_DURABILITY_QOS,
    TRANSIENT_LOCAL_DURABILITY_QOS,
    TRANSIENT_DURABILITY_QOS,
    PERSISTENT_DURABILITY_QOS
};
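For context, a minimal sketch of the policy structure that carries the new kind, assuming the DurabilityQosPolicy shape implied by the IDL of section 2.2.3 (any additional fields are omitted here):

    struct DurabilityQosPolicy {
        DurabilityQosPolicyKind kind;  // VOLATILE_DURABILITY_QOS,
                                       // TRANSIENT_LOCAL_DURABILITY_QOS (mandatory),
                                       // TRANSIENT_DURABILITY_QOS (optional), or
                                       // PERSISTENT_DURABILITY_QOS (optional)
    };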
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6836: [DDS ISSUE# 37] SAMPLE_LOST_STATUS on DataReader (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-101 Move_sample_lost_status_to_datareader


Section 2.1.4.2 shows in a table that SAMPLE_LOST_STATUS is on the
Subscriber. It would make more sense to have it on the DataReader. It
is not as useful to the application to know that samples have been
lost for DataReaders belonging to the Subscriber as it would be to
know which DataReaders they affected.


When we first specified this it was thought that it would be hard to
implement as a status on the DataReader because, given that the
samples are lost, it would not be clear which DataReader they
affect. As it turns out the anticipated implementation difficulties
are not there.


***PROPOSAL***


Remove the SAMPLE_LOST_STATUS from the status of the Subscriber and
add it to that of the DataReader in the table in 2.1.4.2


Also modify the PIM in section 2.1.2.5.2 and the PSM in section 2.2.3
moving the get_sample_lost_status operation from Subscriber to
DataReader

Resolution: see below
Revised Text: Resolution: Remove the SAMPLE_LOST_STATUS from the status of the Subscriber and add it to that of the DataReader in the table in 2.1.4.2. Also modify the PIM in section 2.1.2.5.2 and the PSM in section 2.2.3, moving the get_sample_lost_status operation from Subscriber to DataReader. This change concerns the PIM (text) and the IDL.
Revised Text: Changes in PIM
· Section 2.1.2.5.7 DomainParticipantListener interface
· DomainParticipantListener table
· Operation "on_sample_lost":
· Change first argument from:
the_subscriber : Subscriber
to:
the_reader : DataReader
· The resulting operation is:
on_sample_lost void
    the_reader : DataReader
    status : SampleLostStatus
· Section 2.1.2.5.6 SubscriberListener interface
· SubscriberListener table
· Remove operation "on_sample_lost"
· Section 2.1.2.5.7 DataReaderListener interface
· DataReaderListener table
· Add operation:
on_sample_lost void
    the_reader : DataReader
    status : SampleLostStatus
· Modify Figure 2-19 to reflect the move of these operations from Subscriber/SubscriberListener to DataReader/DataReaderListener.
· Section 2.1.4.1 Communication status
· Communication status table:
· Row for entity "Subscriber":
· remove "SAMPLE_LOST" Status Name
· remove "A sample has been lost (never received)" from the Meaning column of the SAMPLE_LOST status
· Row for DataReader:
· add "SAMPLE_LOST" Status Name
· add "A sample has been lost (never received)" as the Meaning for the SAMPLE_LOST status
· Status description table:
· SampleLostStatus, description of "total_count", replace:
Total cumulative count of all samples lost across all instances of topics subscribed by the Subscriber.
with:
Total cumulative count of all samples lost across all instances of data published under the Topic.
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Interface SubscriberListener
· Remove operation: on_sample_lost
· Interface Subscriber
· Remove operation: get_sample_lost_status
· Interface DataReaderListener
· Add operation: void on_sample_lost(in DataReader reader, in SampleLostStatus status);
· Interface DataReader
· Add operation: SampleLostStatus get_sample_lost_status();
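A sketch of the status structure involved, using the total_count/total_count_change pair common to the other communication statuses in section 2.2.3 (treat the exact field set as illustrative):

    struct SampleLostStatus {
        long total_count;         // cumulative count of all samples lost
        long total_count_change;  // increment since the status was last read
    };
    // After this resolution the status is obtained per reader, via
    // DataReader::get_sample_lost_status(), rather than on the Subscriber.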
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6837: [DDS ISSUE# 38] Allow application to install a clock (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-114 Behavior_of_instances_when_deleting_datawriter


What happens when an application deletes a datawriter? Do all
registered instances become unregistered? What about samples that may
have been written but not propagated to remote readers?


There are two possibilities:


Approach1: behave as if the application had crashed with regards to the
readers. The application should have been explicit about the contained
entities (e.g. unregister/dispose) before deleting the writer.


Approach2: handle it as any deletion of a container entity. It is a
pre-condition error if it has any registered instances, and the entity
is not deleted. But we also provide helper operations on the
DataWriter to unregister_all_instances() and dispose_all_instances().


***PROPOSAL***


Use Approach2. It's more explicit and less error-prone.


Also say that the infrastructure is not required to immediately
release the resources before returning from the delete but can keep
them around for a while until it properly has a chance to clean up its
state, inform remote nodes, etc.

Resolution: see below
Revised Text: Resolution:
· Initial consensus was to use Approach2. It was deemed more explicit and less error-prone. But further analysis at the teleconference on Feb 24, 2004 revealed a problem with Approach2:
· Due to the resolution of issue 6764, Approach2 is problematic when the application calls delete_contained_entities on the Publisher. In that case there is no opportunity to call unregister_all_instances or dispose_all_instances on each of the DataWriters. So we think that a third approach, Approach3, may be better:
· The FTF resolved to use a different approach: Approach3.
· Approach3: behave as if all instances are "unregistered" prior to 'deletion'. Furthermore, introduce a new DataWriter QoS that indicates whether instances should also be "disposed" prior to unregistering when the DataWriter is deleted. This QoS applies only to the DataWriter and is mutable. The name of this policy should be WRITER_DATA_LIFECYCLE and should be consistent with the name used to resolve 6855.
Revised Text: Changes in PIM
· Section 2.1.2.4.1.6 delete_datawriter
· Add the following paragraph (after the paragraph "The delete_datawriter operation must be called on the same Publisher…"):
The deletion of the DataWriter will automatically unregister all instances. Depending on the settings of the WRITER_DATA_LIFECYCLE QosPolicy, the deletion of the DataWriter may also dispose all instances. Refer to Section 2.1.3.15 for details.
· Section 2.1.3
· Figure 2-12: Add WriterDataLifecycleQosPolicy
· Fields: autodispose_unregistered_instances : Boolean
· QoS table, add QosPolicy (at the bottom):
WRITER_DATA_LIFECYCLE; Value: a boolean "autodispose_unregistered_instances"; Meaning: Specifies the behavior of the DataWriter with regards to the lifecycle of the data-instances it manages; Concerns: DataWriter; RxO: N/A; Changeable: Yes.
autodispose_unregistered_instances: Controls whether a DataWriter will automatically dispose instances each time they are unregistered. The setting autodispose_unregistered_instances = TRUE indicates that unregistered instances will also be considered disposed. By default, TRUE.
· Insert Section 2.1.3.15 WRITER_DATA_LIFECYCLE (old section 2.1.3.15 Relationship between registration, LIVELINESS, and OWNERSHIP becomes 2.1.3.16)
2.1.3.15 WRITER_DATA_LIFECYCLE
This policy controls the behavior of the DataWriter with regards to the lifecycle of the data-instances it manages, that is, the data-instances that have been either explicitly registered with the DataWriter using the register operations (see Section 2.1.2.4.2.5 and Section 2.1.2.4.2.6) or implicitly by directly writing the data (see Section 2.1.2.4.2.10 and Section 2.1.2.4.2.11).
The autodispose_unregistered_instances flag controls the behavior when the DataWriter unregisters an instance by means of the unregister operations (see Section 2.1.2.4.2.7 and Section 2.1.2.4.2.8):
The setting 'autodispose_unregistered_instances = TRUE' causes the DataWriter to dispose the instance each time it is unregistered. The behavior is identical to explicitly calling one of the dispose operations (Section 2.1.2.4.2.12 and Section 2.1.2.4.2.13) on the instance prior to calling the unregister operation.
The setting 'autodispose_unregistered_instances = FALSE' will not cause this automatic disposition upon unregistering. The application can still call one of the dispose operations prior to unregistering the instance and accomplish the same effect. Refer to Section 2.1.3.16.3 for a description of the consequences of disposing and unregistering instances.
Note that the deletion of a DataWriter automatically unregisters all data-instances it manages (Section 2.1.2.4.1.6). Therefore the setting of the autodispose_unregistered_instances flag will determine whether instances are ultimately disposed when the DataWriter is deleted either directly by means of the Publisher::delete_datawriter operation or indirectly as a consequence of calling delete_contained_entities on the Publisher or the DomainParticipant that contains the DataWriter.
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Add:
const string WRITERDATALIFECYCLE_QOS_POLICY_NAME = "WriterDataLifecycle";
const QosPolicyId_t WRITERDATALIFECYCLE_QOS_POLICY_ID = 16;
struct WriterDataLifecycleQosPolicy {
    boolean autodispose_unregistered_instances;
};
· struct DataWriterQos
· Add (at the end of the structure):
WriterDataLifecycleQosPolicy writer_data_lifecycle;
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6838: [DDS ISSUE# 39] Combine module names (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-131 Module_name_dcps


The specification contains two top-level module names, 'DCPS' and
'DLRL'. It is desirable not to have so many modules: it will complicate
the C mapping and also weaken the "brand".


We have verified that there are no naming clashes between the two
modules, so they can be combined.


***PROPOSAL***


In section 2.2.3 replace the module name from 'DCPS' to 'DDS'


In section 3.2.1.2 replace the module name from 'DLRL to 'DDS'


Resolution: see below
Revised Text: Resolution: In section 2.2.3 replace the module name 'DCPS' with 'DDS'. In section 3.2.1.2 replace the module name 'DLRL' with 'DDS'. This change only concerns the IDL.
Revised Text: Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Replace "module DCPS" with "module DDS"
· Section 3.2.1.2.1 Generic DLRL Entities (IDL Description)
· Replace "#include dcps.idl" with "#include dds_dcps.idl"
· Replace "module Dlrl" with "module DDS"
· Remove "typedef sequence <string> StringSeq;"
Changes in implied IDL
· Section 2.2.3 DCPS PSM : IDL
· Interface FooDataType:
· Replace "DCPS::ReturnCode_t" with "DDS::ReturnCode_t"
· Replace "DCPS::DataType" with "DDS::DataType"
· Replace "DCPS::DomainParticipant" with "DDS::DomainParticipant"
· Replace "DCPS::Time_t" with "DDS::Time_t"
· Interface FooDataWriter:
· Replace "DCPS::ReturnCode_t" with "DDS::ReturnCode_t"
· Replace "DCPS::DataWriter" with "DDS::DataWriter"
· Replace "DCPS::InstanceHandle_t" with "DDS::InstanceHandle_t"
· Replace "DCPS::Time_t" with "DDS::Time_t"
· Interface FooDataReader:
· Replace "DCPS::ReturnCode_t" with "DDS::ReturnCode_t"
· Replace "DCPS::DataReader" with "DDS::DataReader"
· Replace "DCPS::InstanceHandle_t" with "DDS::InstanceHandle_t"
· Replace "DCPS::SampleInfoSeq" with "DDS::SampleInfoSeq"
· Replace "DCPS::SampleStateMask" with "DDS::SampleStateMask"
· Replace "DCPS::LifecycleStateMask" with "DDS::LifecycleStateMask"
· Replace "DCPS::ReadCondition" with "DDS::ReadCondition"
· Section 3.2.1.2.2 Implied IDL
· valuetype Foo
· Replace "Dlrl::ObjectRoot" with "DDS::ObjectRoot"
· Replace "Dlrl::CacheAccess" with "DDS::CacheAccess"
· Replace "Dlrl::ObjectScope" with "DDS::ObjectScope"
· Replace "Dlrl::RelatedObjectDepth" with "DDS::RelatedObjectDepth"
· Replace "Dlrl::ReadOnlyMode" with "DDS::ReadOnlyMode"
· Replace "Dlrl::AlreadyClonedInWriteMode" with "DDS::AlreadyClonedInWriteMode"
· interface FooListener
· Replace "Dlrl::SelectionListener" with "DDS::SelectionListener"
· interface FooSelectionListener
· Replace "Dlrl::SelectionListener" with "DDS::SelectionListener"
· interface FooFilter
· Replace "Dlrl::ObjectFilter" with "DDS::ObjectFilter"
· interface FooModifier
· Replace "Dlrl::ObjectModifier" with "DDS::ObjectModifier"
· interface FooQuery
· Replace "Dlrl::ObjectQuery" with "DDS::ObjectQuery"
· interface FooExtent
· Replace "Dlrl::ObjectExtent" with "DDS::ObjectExtent"
· interface FooSelection
· Replace "Dlrl::Selection" with "DDS::Selection"
· interface FooHome
· Replace "Dlrl::ObjectHome" with "DDS::ObjectHome"
· Replace "Dlrl::BadParameter" with "DDS::BadParameter"
· Replace "Dlrl::CacheAccess" with "DDS::CacheAccess"
· Replace "Dlrl::DlrlOid" with "DDS::DlrlOid"
· Replace "Dlrl::AlreadyExisting" with "DDS::AlreadyExisting"
· valuetype FooRef
· Replace "Dlrl::RefRelation" with "DDS::RefRelation"
· valuetype FooList
· Replace "Dlrl::ListRelation" with "DDS::ListRelation"
· Replace "Dlrl::NotFound" with "DDS::NotFound"
· valuetype FooStrMap
· Replace "Dlrl::StrMapRelation" with "DDS::StrMapRelation"
· Replace "Dlrl::NotFound" with "DDS::NotFound"
· valuetype FooIntMap
· Replace "Dlrl::IntMapRelation" with "DDS::IntMapRelation"
· Replace "Dlrl::NotFound" with "DDS::NotFound"
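The net effect, sketched in the document's own IDL conventions (illustrative only):

    // Before: two top-level modules
    //     module DCPS { /* DCPS entities */ };
    //     module Dlrl { /* DLRL entities */ };
    // After: a single top-level module; the DLRL file includes the DCPS one
    #include "dds_dcps.idl"
    module DDS {
        // DCPS and DLRL definitions coexist here; the two name sets
        // were verified not to clash.
    };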
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6839: [DDS ISSUE# 40] Expression syntax is missing enumeration (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-136 Missing_ENUMERATION_from_expression_syntax


ENUMERATION is missing from the terminals of the expressions for
Query/Filters/MultiTopic in the appendix.


***PROPOSAL***


Add the following to the grammars in Appendix B and C.


Under the rule for Parameter ::=


Add the ENUMERATION as a terminal analogous to the INTEGER value.


In the table describing the terminals, explain that the ENUMERATION is
named using the identifier for the enumerated member: either using the
plain label or, if there is a conflict, using the form
EnumerationTypeName::EnumerationLabel.

Resolution: see below
Revised Text: Resolution: Add the following to the grammars in Appendix B and C.
· Under the rule for Parameter ::=, add the ENUMERATION as a terminal analogous to the INTEGER value.
· In the table describing the terminals, explain that the ENUMERATION is named using the identifier for the enumerated member: either using the plain label or, if there is a conflict, using the form EnumerationTypeName::EnumerationLabel.
Revised Text:
· Appendix B
· SQL grammar BNF
· Replace:
Parameter ::= INTEGERVALUE | FLOATVALUE | STRING | PARAMETER .
with:
Parameter ::= INTEGERVALUE | FLOATVALUE | STRING | ENUMERATEDVALUE | PARAMETER .
· Token expression: add another bullet after the bullet "STRING", with the following content:
"ENUMERATEDVALUE - An enumerated value is a reference to a value declared within an enumeration. The double colon '::' symbol is used to separate the name of the enumeration from that of the field. Both the name of the enumeration and the name of the value correspond to the names specified in the IDL definition of the enumeration."
· Appendix C
· SQL grammar BNF
· Replace:
Parameter ::= INTEGERVALUE | FLOATVALUE | STRING | PARAMETER .
with:
Parameter ::= INTEGERVALUE | FLOATVALUE | STRING | ENUMERATEDVALUE | PARAMETER .
· Token expression: add another bullet after the bullet "STRING", with the following content:
"ENUMERATEDVALUE - An enumerated value is a reference to a value declared within an enumeration. A double colon '::' is used to separate the name of the enumeration from that of the field. Both the name of the enumeration and the name of the value correspond to the names specified in the IDL definition of the enumeration."
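A hypothetical example of the new terminal in use; the enumeration and field names below are invented for illustration:

    enum AlarmKind { INFO, WARNING, FAILURE };
    struct Alarm {
        long      id;    // key field
        AlarmKind kind;
    };
    // A filter or query expression may then reference the enumerated value,
    // qualified by the enumeration name (or, per the proposal, by the plain
    // label when there is no conflict):
    //     "kind = AlarmKind::FAILURE"
    //     "kind = FAILURE"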
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6840: [DDS ISSUE# 41] Inconsistent use of instance in datawriter api (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-116 Inconsistent_use_of_instance_in_datawriter_api


Currently Section 2.1.2.5.1 mentions the function dispose_instance
which does not exist.


A broader issue is that we have functions called
register_instance/unregister_instance, but other functions that also
apply to an instance (dispose, write, dispose_w_timestamp,
write_w_timestamp, etc.) do not have "instance" in their name... This
can lead to some confusion


***PROPOSAL***


Rename dispose_instance to dispose


Fix the sequence chart in figure 2.1.6.1


Leave the register_instance alone

Resolution: see below
Revised Text: Resolution: Rename dispose_instance to dispose in the affected sections. Fix the sequence chart in figure 2.1.6.1. This change only concerns the PIM (UML diagram and text).
Revised Text: Changes in PIM
· Section 2.1.2.5.1 Access to the data
· 3rd paragraph: replace "dispose_instance" with "dispose"
· Section 2.1.6.2 Publication View
· 2nd paragraph: replace "dispose_instance" with "dispose"
· Figure 2-21
· Replace "dispose_instance" with "dispose"
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6841: [DDS ISSUE# 42] Clarify how counts in the status accumulate (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-117 Deadlines_accumulate


Some people have been confused reading the specification with regards
to the "total" counts and whether they accumulate.


For example, whether the "deadline count" accumulates missed
deadlines. That is, each deadline-time period that passes without an
instance being written increments the deadline count even if the
application does not have time to read the status


***PROPOSAL***


State this more explicitly in section 2.1.4.1

Resolution: see below
Revised Text: Resolution: State this more explicitly in section 2.1.4.1. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.4.1 Communication Status
· Status table:
· On the RequestedInstanceDeadlineMissedStatus total_count row, add the sentence:
Deadlines accumulate; that is, each deadline period the total_count will be incremented by one for each instance for which data was not received.
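A sketch of the shape of the status being clarified; the status table row names it RequestedInstanceDeadlineMissedStatus, and only the total_count field is taken from the resolution (any other fields are omitted as unknowns):

    struct RequestedInstanceDeadlineMissedStatus {
        // Deadlines accumulate: total_count is incremented by one, per
        // instance, for each deadline period in which no data arrived for
        // that instance, whether or not the application reads the status
        // in between.
        long total_count;
        // ... other fields omitted ...
    };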
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6842: [DDS ISSUE# 43] Bad references (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-137 Bad_reference_to_Condition


Section 2.1.2.1.7.1 says "StatusCondition" when it should say
"Condition"


***PROPOSAL*** Replace StatusCondition with Condition in 2.1.2.1.7.1

Resolution: see below
Revised Text: Resolution: Replace StatusCondition with Condition in 2.1.2.1.7.1. This change only concerns the PIM (UML diagram and text).
Revised Text: Changes in PIM
· Section 2.1.2.1.7.1 get_trigger_value
· Replace the sentence:
This operation retrieves the trigger_value of the StatusCondition.
with:
This operation retrieves the trigger_value of the Condition.
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6843: Ref-139 Bad_reference_to filter_expression (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Section 2.1.2.3.4 says "filter_expression" where it should say
"subscription_expression" in the sentence "The expression_parameters
attribute is a sequence of strings that give values to the
'parameters' (i.e. "%n" tokens) in the subscription_expression. The
number of supplied parameters must fit with the requested values in
the filter_expression (i.e. the number of %n tokens)."


***PROPOSAL***


Replace "filter_expression" with subscription_expression in Section
2.1.2.3.4

Resolution: see below
Revised Text: Resolution: Replace "filter_expression" with "subscription_expression" in Section 2.1.2.3.4 This change only concerns the PIM (text). Revised Text: Changes in PIM · Section 2.1.2.3.4 MultiTopic Class · on the bullet "The expression_parameters attribute…" replace "filter_expression" with "subscription_expression"
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6844: [DDS ISSUE# 44] Errors in figures (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-146 Bad_arrow_direction_in_figure_19


Section 2.1.4.4 Figure 2-20


The arrows between the BLOCKED and UNBLOCKED states are backwards


***PROPOSAL***


Reverse directions in Figure 2-20

Resolution: see below
Revised Text: Resolution: Reverse the arrow directions in Figure 2-20.
Revised Text: Changes
· On Figure 2-20, reverse the direction of the arrows between BLOCKED and UNBLOCKED.
· The resulting Figure 2-20 is: (figure not reproduced in this archive)
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6845: [DDS ISSUE# 45] Is OMG IDL PSM more correct than CORBA PSM? (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-138 CORBA_PSM_OR_OMG_IDL_PSM


Document refers to the PSM as a "CORBA" PSM. Wouldn't it be more
appropriate to call it the OMG IDL PSM?


***PROPOSAL***


Change the references to OMG IDL

Resolution: see below
Revised Text: Resolution: Change the name "CORBA PSM" to "OMG IDL PSM".
Revised Text: Changes
· Section 2 Data-Centric Publish-Subscribe (DCPS)
· Contents section
· Replace "CORBA platform" with "OMG IDL platform"
· Section 2.2 title
· Replace "CORBA Platform Specific Model (PSM)" with "OMG IDL Platform Specific Model (PSM)"
· Section 2.2.1 Introduction
· Replace "CORBA PSM" with "OMG IDL PSM"
· Section 2.1.2.3.6 DataType Interface
· Replace "CORBA mapping" with "OMG IDL mapping"
· Chapter 3 Data Local Reconstruction Layer (DLRL)
· Contents section
· Replace "PIM" with "Platform Independent Model (PIM)"
· Replace "CORBA PSM" with "OMG IDL Platform Specific Model (PSM)"
· Section 3.1 title
· Change "PIM" to "Platform Independent Model (PIM)"
· Section 3.1.4.6
· Last sentence: replace "One syntax is proposed with the CORBA PSM in Section 3.2" with "One syntax is proposed with the OMG IDL PSM in Section 3.2"
· Section 3.2 title
· Change "CORBA PSM" to "OMG IDL Platform Specific Model (PSM)"
· Section 3.2.1 title
· Change "CORBA Run-time Entities" to "Run-time Entities"
· Section 3.2.2.1 Principles
· Caption of Figure 3-7: replace "DLRL Generation Process (CORBA)" with "DLRL Generation Process (OMG IDL)"
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6846: [DDS ISSUE# 46] Use of RETCODE_NOT_IMPLEMENTED (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-152 General_use_of_RETCODE_NOT_IMPLEMENTED


Many operations that may need to return the additional ReturnCode_t
NOT_IMPLEMENTED don't explicitly say so.


***PROPOSAL***


In section 2.1.1.1 add ReturnCode_t NOT_IMPLEMENTED to the list of
codes any function may return, with the provision that it only
applies to functions that are part of some optional compliance point.

Resolution: see below
Revised Text: Resolution: In section 2.1.1.1 add ReturnCode_t UNSUPPORTED to the list of codes any function may return, with the provision that it only applies to functions that are part of some optional compliance point. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.1.1 Format and conventions
· After the sentence "Any operation with return type ReturnCode_t may return OK or ERROR. Any operation that takes an input parameter may additionally return BAD_PARAMETER. Any operation on an object created from any of the factories may additionally return ALREADY_DELETED.", add:
"Any operation that is stated as optional may additionally return UNSUPPORTED."
· Replace:
"The return codes OK, ERROR, ALREADY_DELETED, and BAD_PARAMETER are the standard return codes"
with:
"The return codes OK, ERROR, ALREADY_DELETED, UNSUPPORTED, and BAD_PARAMETER are the standard return codes"
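For reference, a sketch of how the standard return codes are declared in the IDL PSM; the numeric values shown are illustrative, since this resolution does not restate them:

    typedef long ReturnCode_t;
    // illustrative values; section 2.2.3 assigns the normative ones
    const ReturnCode_t RETCODE_OK              = 0;
    const ReturnCode_t RETCODE_ERROR           = 1;
    const ReturnCode_t RETCODE_UNSUPPORTED     = 2;  // optional compliance points only
    const ReturnCode_t RETCODE_BAD_PARAMETER   = 3;
    const ReturnCode_t RETCODE_ALREADY_DELETED = 9;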
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6848: Rename DataType interface to TypeSupport (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-159 Rename_the_interface_DataType


The name DataType used in the PSM and IDL to refer to the interface
with the "register_type" operation, from which the FooDataType derives
for each user-data type 'Foo', is causing confusion.


People think that the FooDataType actually represents the type of the
objects being propagated. In reality the type is 'Foo' and FooDataType
just provides the support to integrate 'Foo' with the middleware.


***PROPOSAL***


Rename DataType to TypeSupport. FooDataType to FooTypeSupport

Resolution: see below
Revised Text: Resolution: Rename "DataType" to "TypeSupport" and "FooDataType" to "FooTypeSupport". This change concerns the PIM (UML diagram and text) and the IDL. Revised Text: Changes in PIM · Section 2.1.2.2.1.9 create_topic · 4th paragraph · Replace "DataType" with "TypeSupport" · Section 2.1.2.2.1.9 create_multitopic · 2nd paragraph: · Replace "DataType" with "TypeSupport" · Section 2.1.2.3 Topic-Definition Module · Bullet list: · Replace "DataType" with "TypeSupport" · Section 2.1.2.3.1 TopicDescription class · 2nd paragraph: · Replace "DataType" with "TypeSupport" · Section 2.1.2.3.6 DataType Interface · Replace section title "DataType Interface" to "TypeSupport Interface" · 1st paragraph · Replace "DataType" with "TypeSupport" · DataType table · Replace table title "DataType" with "TypeSupport" · Section 2.1.2.3.7 Derived Classes for Each Application Class · Replace "DataType" with "TypeSupport" Changes in IDL · Section 2.2.3 DCPS PSM : IDL · Replace: "interface DataType" with: "interface TypeSupport" Changes in implied IDL · Section 2.2.3 DCPS PSM : IDL · Replace: "interface FooDataType : DDS::DataType" with "interface FooTypeSupport : DDS::TypeSupport"
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6849: [DDS ISSUE# 49] Behavior_of_register_type (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-161 Behavior_of_register_type


The specification does not say what happens if the application locally
tries to define the same "type_name" via register_type


The case where a different DataType is being "registered" with a name
that was already used should clearly fail


The case where the same DataType is registered again with the same
name could either fail or be idempotent.


***PROPOSAL***


State in 2.1.2.3.6 that it is a pre-condition that the same name has
not already been used to register a different type. In case this is
attempted, the register_type() operation shall return
PRECONDITION_ERROR.


State in the 2.1.2.3.6 documentation that it is OK to re-register the
same DataType again with the same type_name. In this case the
operation is idempotent and returns OK.

Resolution: see below
Revised Text: Resolution: State in 2.1.2.3.6 that it is a pre-condition that the same name has not already been used to register a different type. In case this is attempted, the register_type() operation shall return PRECONDITION_ERROR. State in the 2.1.2.3.6 documentation that it is OK to re-register the same DataType again with the same type_name; in this case the operation is idempotent and returns OK. This change only concerns the PIM (text).
Revised Text: Changes in PIM
· Section 2.1.2.3.6.1 register_type
· After the 1st paragraph add:
It is a pre-condition error to use the same type_name to register two different TypeSupport objects with the same DomainParticipant. If an application attempts this, the operation will fail and return PRECONDITION_ERROR. However, it is allowed to register the same TypeSupport multiple times with a DomainParticipant using the same or different values for the type_name. If register_type is called multiple times on the same TypeSupport with the same DomainParticipant and type_name, the second (and subsequent) registrations are ignored but the operation returns OK.
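A hypothetical scenario illustrating the pre-condition semantics, written as comments against the register_type operation (the names ts1, ts2, "MyType", and "Alias" are invented):

    // Given one DomainParticipant and two *different* TypeSupport objects:
    //     ts1.register_type(participant, "MyType");  // returns OK
    //     ts1.register_type(participant, "MyType");  // idempotent, returns OK again
    //     ts1.register_type(participant, "Alias");   // OK: same TypeSupport, second name
    //     ts2.register_type(participant, "MyType");  // fails with PRECONDITION_ERROR:
    //                                                // "MyType" is already bound to a
    //                                                // different TypeSupport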
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6853: [DDS ISSUE# 52] Provide for zero copy access to data (data-distribution-ftf)

Click
here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-121 Add_a_return_loan_operation_to_datareader


For high-performance it is desirable to use a zero-copy API on the
receive side where the middleware can "loan" buffers to the
application on the "read" or "take" operations.


However the use of "zero-copy" requires a mechanism for the
application to indicate that it no longer needs access to the "loaned
buffers"


One possibility would be to add an operation DataReader::finish or
DataReader::return_loan that takes the SampleData and SampleInfo
sequences as parameters and indicates to the middleware that the
application is no longer accessing the buffers in the corresponding
sequences.


Another possibility would be to add a separate API for read/take


***PROPOSAL***


No concrete proposal yet


Resolution: see below
Revised Text: Resolution: The resolution of this issue is tied to that of issue 6859. The resolution of 6859 adds the means for the read/take operations to "loan" buffers from the Service. The resolution of 6853 adds a "return_loan" operation to return a "loan" acquired by the read/take.
Revised Text: Changes in PIM
· Section 2.1.2.5.3 DataReader Class
· DataReader table
· Add operation return_loan:
return_loan ReturnCode_t
    inout: data_values Data []
    inout: sample_infos SampleInfo []
· Section 2.1.2.5.3 FooDataReader Class
· FooDataReader table
· Add operation return_loan:
return_loan ReturnCode_t
    inout: data_values Foo []
    inout: sample_info SampleInfo []
· Add section 2.1.2.5.3.12 (previous section 2.1.2.5.3.12 get_liveliness_changed_status becomes 2.1.2.5.3.13)
2.1.2.5.3.12 return_loan
This operation indicates to the DataReader that the application is done accessing the collection of data_values and sample_infos obtained by some earlier invocation of read or take on the DataReader.
The data_values and sample_infos must belong to a single related 'pair'; that is, they should correspond to a pair returned from a single call to read or take. The data_values and sample_infos must also have been obtained from the same DataReader to which they are returned. If either of these conditions is not met the operation will fail and return PRECONDITION_NOT_MET.
The operation return_loan allows implementations of the read and take operations to "loan" buffers from the DataReader to the application and in this manner provide "zero-copy" access to the data. During the loan, the DataReader will guarantee that the data and sample-information are not modified.
It is not necessary for an application to return the loans immediately after the read or take calls. However, as these buffers correspond to internal resources inside the DataReader, the application should not retain them indefinitely.
The use of the return_loan operation is only necessary if the read or take calls "loaned" buffers to the application. As described in Section 2.1.2.5.3.8 this only occurs if the data_values and sample_infos collections had max_len=0 at the time read or take was called. The application may also examine the 'owns' property of the collection to determine whether there is an outstanding loan. However, calling return_loan on a collection that does not have a loan is safe and has no side effects.
If the collections had a loan, upon return from return_loan the collections will have max_len=0.
Similar to read, this operation must be provided on the specialized class that is generated for the particular application data-type that is being taken.
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· DataReader interface:
· Add operation (commented out):
// ReturnCode_t return_loan(inout DataSeq received_data,
//                          inout SampleInfoSeq info_seq);
· FooDataReader interface
· Add operation:
DDS::ReturnCode_t return_loan(
    inout FooSeq received_data,
    inout DDS::SampleInfoSeq info_seq);
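A sketch of the loan discipline this adds, written against the implied IDL of section 2.2.3 (the call sequence is illustrative, not normative):

    // FooSeq data; DDS::SampleInfoSeq infos;   // both created with max_len == 0
    // reader.read(data, infos, ...);           // service may loan internal buffers
    //                                          // (zero-copy: no data is copied out)
    // ... application accesses data/infos; the loaned buffers stay valid
    //     and unmodified for the duration of the loan ...
    // reader.return_loan(data, infos);         // ends the loan; the collections
    //                                          // return to max_len == 0
    // Passing sequences from different read/take calls, or returning them to a
    // different DataReader, fails with PRECONDITION_NOT_MET.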
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6854: [DDS ISSUE# 53] Refactor lifecycle state (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-84 Lifecycle_state_refactor


The precise interpretation and representation of the lifecycle and
sample states in the specification are not clear.


Figure 2-11 is not easy to interpret as it refers to "observable"
lifecycles rather than representing some internal state that the
application accesses.


Several issues are left open, such as:


What happens if the sample history of a single instance contains new,
modified, and deleted samples (i.e. the instance was disposed and
then created again before the reader accessed it). An additional
question is then whether the read/take will return samples belonging
to multiple "generations" of the instance.


How an application can determine that, as a result of a single
read/take operation, multiple samples of the same instance appear. This
may be important because the processing of a new/modified sample may
depend on the number of samples the application has for that instance.


Also, the representation of the instance_state as an enumeration:
NEW/MODIFIED/DELETED/NO_WRITERS is not natural. If this represents the
state of the instance, then the instance can be simultaneously
MODIFIED and DELETED; it can be MODIFIED and have no writers,
etc. That is, logically these are not mutually exclusive.


***PROPOSAL***


Change the description in section 2.1.2.5.1 and the representation of
the lifecycle as described below.


Replace the lifecycle_state with the following two variables (flags)


observation_lifecycle = {NEW, NOT_NEW}


liveness_lifecycle = { ALIVE, DISPOSED_EXPLICIT, DISPOSED_NO_WRITERS }


Define DISPOSED as the union (DISPOSED_EXPLICIT | DISPOSED_NO_WRITERS)


The observation_lifecycle and liveness_lifecycle are independent. All
combinations are possible. So the lifecycle may be simultaneously NEW
& DISPOSED_EXPLICIT, NOT_NEW & ALIVE, etc.


All the samples in the history of an instance have the same
"lifecycle" states. Each time a sample is received (or a loss of
liveliness on a remote DataWriter is detected) the liveness_lifecycle
may change. If it changes, it changes for all samples belonging to the
same instance.


If an instance was DISPOSED (either DISPOSED_EXPLICIT or
DISPOSED_NO_WRITERS) and a sample for that instance is received, the
liveness_lifecycle of the instance changes to ALIVE.


If a sample is received for a DataWriter, indicating that the user has
called "dispose" for that instance, then the liveness_lifecycle of the
instance changes to DISPOSED_EXPLICIT.


If the infrastructure detects the loss of liveness of a DataWriter for
the instance and this is the only DataWriter writing the instance
known to the reader then the liveness_lifecycle of the instance
changes to DISPOSED_NO_WRITERS


Each time the liveliness of an instance changes from ALIVE to
DISPOSED_EXPLICIT an internal count maintained per instance
(disposed_explicit_count) is incremented. Each time the liveliness of
an instance changes from ALIVE to DISPOSED_NO_WRITERS an internal
count maintained per instance (disposed_no_writers_count) is
incremented.


The observation_lifecycle and liveness_lifecycle, as well as the
disposed_explicit_count and disposed_no_writers_count, appear in the
SampleInfo returned when the application reads/takes samples. These
counters are from the perspective of the DataReader and start at 0
when the observation_lifecycle is NEW.


Each time the application reads or takes samples, all returned samples
for any one instance have the same observation_lifecycle and
liveness_lifecycle. They represent the "snapshot" of the
corresponding values for the instance taken at the time the data is
read or taken.


The disposed_explicit_count and disposed_no_writers_count in
SampleInfo are not the same on all samples. They represent the number
of lifecycles of each kind that the instance had gone through at the
time the sample was stored into the history queue. The application can
use this to distinguish samples belonging to different generations. In
addition the SampleInfo contains an additional field, the
"instance_rank", that specifies how many samples for the same instance
follow in the sequence returned as part of the read/take. This helps
the application anticipate that more samples for the same instance
follow. The last sample for the instance returned will always have an
instance_rank==0.


In addition the SampleInfo contains an additional field, the
"generation_rank", that specifies the generation to which the sample
belongs relative to the most recent generation in the returned collection.


The observation_lifecycle, liveness_lifecycle, disposed_explicit_count
and disposed_no_writers_count can be used in the expressions used for
the purposes of making a Query.


Figure 2-11 should be updated to reflect the state-transitions
described above.


Both the read and take operations affect the
observation_lifecycle. The first time the application reads/takes
samples for an instance the observation_lifecycle will be NEW.
Afterwards the observation_lifecycle will be NOT_NEW.


Once the application reads/takes samples with the liveness_lifecycle
DISPOSED (either EXPLICIT or NO_WRITERS), the observation_lifecycle is
'reset' and if new samples are received for that instance the
observation_lifecycle will be set to NEW again.

Resolution: see below
Revised Text: Resolution: Change the description in section 2.1.2.5.1 and representation lifecycle as described below. Replace the lifecycle_state with the following two variables (flags) · view_state = {NEW, NOT_NEW} · instance_state= { ALIVE , NOT_ALIVE_DISPOSE, NOT_ALIVE_NO_WRITERS } Define NOT_ALIVE as the union (NOT_ALIVE_DISPOSED | NOT_ALIVE_NO_WRITERS) The view_state and instance_state are independent. All combinations are possible. So the lifecycle may be simultaneously NEW & NOT_ALIVE_DISPOSED, NOT_NEW & ALIVE, etc. All the samples in the history of an instance have the same "lifecycles" states. Each time a sample is received (or a loss of liveliness on a remote DataWriter is detected) the instance_state may change. If it changes it changes for all samples belonging to the same instance. · If an instance was NOT_ALIVE (either NOT_ALIVE_DISPOSED or NOT_ALIVE_NO_WRITERS) and a sample for that instance is received, the instance_state of the instance changes to ALIVE. · If a sample is received for a DataWriter, indicating that the user has called "dispose" for that instance then the instance_state of the instance changes to NOT_ALIVE_DISPOSED. · If the infrastructure detects the loss of liveliness of a DataWriter for the instance and this is the only DataWriter writing the instance known to the reader then the instance_state of the instance changes to NOT_ALIVE_NO_WRITERS Each time the liveliness of an instance changes from NOT_ALIVE_DISPOSED to ALIVE an internal count maintained per instance (disposed_generation_count) is incremented. Each time the liveliness of an instance changes from NOT_ALIVE_NO_WRITERS to ALIVE an internal count maintained per instance (no_writers_generation_count) is incremented. The view_state and instance_state as well as the disposed_generation_count and no_writers_generation_count appear in the SampleInfo returned when the application reads/takes samples. This counters are from the perspective of the DataReader and start at 0 when the instance_state is NEW Each time the application reads or takes samples, all returned samples for any one instance have the same view_state and instance_state. They represents the "snapshot" of the corresponding values for the instance taken at the time the data is read or taken. The disposed_generation_count and no_writers_generation_count in SampleInfo are not the same on all samples. They represent the number of lifecycles of each kind that the instance had gone through at the time the sample stored into the history queue. The application can use this to distinguish samples belonging to different generations. In addition the SampleInfo contains an additional field. The "sample_rank" that specifies how many samples for the same instance follow in the sequence retuned as part of the read/take. This helps the application anticipate that more samples for the same instance follow. The last sample for the instance returned will always have a sample_rank==0 In addition the SampleInfo contains an additional filed. The "generation_rank" that specifies the generation to which the sample belongs relative to the generations, and an absolute_generation_rank that specified the generation relative to all other samples available in the DataReader. The view_state, instance_state, disposed_generation_count and no_writers_generation_count can be used as in the expressions used for the purposes of making a Query. Figure 2-11 should be updated to reflect the state-transitions described above. 
Both the read and take operations affect the view_state. The first time the application reads/takes samples for an instance the view_state will be NEW. After the view_state will be NOT_NEW. Once the application reads/takes samples with the instance_state NOT_ALIVE (either DISPOSED or NO_WRITERS), the view_state is 'reset' and if new samples are received for that instance the view_state will be set to NEW again. Revised Text: Changes in PIM · Figure 2-10 · Change SampleInfo · Remove lifecycle_state · Add view_state : ViewStateKind instance_state : InstanceStateKind disposed_generation_count : long no_writers_generation_count : long sample_rank : long generation_rank : long absolute_generation_rank : long · Section 2.1.2.3.4 MultiTopic class · 4th bullet Replace (whole bullet): DataReader entities associated with a MultiTopic access instances that are "reconstructed" at the DataReader side from the instances written by multiple DataWriter entities. The lifecycle (cfr. Section 2.1.2.5.1 ) of the MultiTopic instance tracks the combined lifecycles of each of the constituting instances, such that, the MultiTopic instance will be "NEW" once all the constituing Topic instances are received. It will be "MODIFIED" each time any of the constituting instances is modified, it will be "DISPOSED" as soon as any one of the constituting Topic instances is disposed, and be considered as having "NO_WRITERS" as soon as one of constituting instances is detected as having "NO_WRITERS". With: DataReader entities associated with a MultiTopic access instances that are "constructed" at the DataReader side from the instances written by multiple DataWriter entities. The MultiTopic access instance will begin to exist as soon as all the constituting Topic instances are in existence. The view_state and instance_state is computed from the corresponding states of the constituting instances: The view_state (cfr. Section 2.1.2.5.1 ) of the MultiTopic instance is NEW if at least one of the constituting instances has view_state = NEW, otherwise it will be NOT_NEW. The instance_state (cfr. Section 2.1.2.5.1 ) of the MultiTopic instance is "ALIVE" if the instance_state of all the constituting Topic instances is ALIVE. It is "NOT_ALIVE_DISPOSED" if at least one of the constituting Topic instances is NOT_ALIVE_DISPOSED. Otherwise it is NOT_ALIVE_NO_WRITERS. · Section 2.1.2.5 · Replace the last paragraph with: The following section presents how the data can be accessed and introduces the sample_state, view_state, and instance_state. Section 2.1.2.5.2 (Subscriber Class) through Section 2.1.2.5.9 (QueryCondition Class) provide details on each class belonging to this module. · Section 2.1.2.5.1 Access to the data · Replace everything from the beginning to the paragraph that follows (this paragraph remains): Once the data samples are available to the data readers, they can be read or taken by the application. The basic rule is that the application may do this in any order it wishes. This approach is very flexible and allows the application ultimate control. However, the application must use a specific access pattern in case it needs to retrieve samples in the proper order received, or it wants to access a complete set of coherent changes. With the following text: Data is made available to the application by the following operations on DataReader objects: read, read_w_condition, take, and take_w_condition. 
The general semantics of the "read" operations is that the application only gets access to the corresponding data; the data remains the middleware's responsibility and can be read again. The semantics of the "take" operations is that the application takes full responsibility for the data; that data will no longer be accessible to the DataReader. Consequently, it is possible for a DataReader to access the same sample multiple times but only if all previous accesses were read operations. Each of these operations returns an ordered collection of Data values and associated SampleInfo objects. Each data value represents an atom of data information (i.e., a value for one instance). This collection may contain samples related to the same or different instances (identified by the key). Multiple samples can refer to the same instance if the settings of the HISTORY QoS (Section 2.1.3.12 ) allow for it. The SampleInfo contains information pertaining to the associated Data value: · The sample_state of the Data value. I.e., if the sample has already been READ or NOT_READ by that same DataReader. · The view_state of the related instance. I.e., if the instance is NEW, or NOT_NEW for that DataReader- see below. · The instance_state of the related instance. I.e., if the instance is ALIVE, NOT_ALIVE_DISPOSED, or NOT_ALIVE_NO_WRITERS - see below. · The values of disposed_generation_count and no_writers_generation_count for the related instance at the time the sample was received. These counters indicate the number of times the instance had become ALIVE (with instance_state= ALIVE) at the time the sample was received - see below. · The sample_rank and generation_rank of the sample within the returned sequence. These ranks provide a preview of the samples that follow within the sequence returned by the read or take operations. · The absolute_generation_rank of the sample within the DataReader. This rank provides a preview of what is available within the DataReader. · The source_timestamp of the sample. This is the time-stamp provided by the DataWriter at the time the sample was produced. For each sample received, the middleware internally maintains a sample_state relative to each DataReader. The sample_state can either be READ or NOT_READ. · READ indicates that the DataReader has already accessed that sample by means of read. · NOT_READ indicates that the DataReader has not accessed that sample before. The sample_state will, in general, be different for each sample in the collection returned by read or take. For each instance the middleware internally maintains an instance_state. The instance_state can be ALIVE, NOT_ALIVE_DISPOSED or NOT_ALIVE_NO_WRITERS. · ALIVE indicates that (a) samples have been received for the instance, (b) there are live DataWriter entities writing the instance, and (c) the instance has not been explicitly disposed (or else more samples have been received after it was disposed). · NOT_ALIVE_DISPOSED indicates the instance was explicitly disposed by a DataWriter by means of the dispose operation. · NOT_ALIVE_NO_WRITERS indicates the instance has been declared as not-alive by the DataReader because it detected that there are no live DataWriter entities writing that instance. The precise behavior events that cause the instance_state to change depends on the setting of the OWNERSHIP QoS: · If OWNERSHIP is set to EXCLUSIVE, then the instance_state becomes NOT_ALIVE_DISPOSED only if the DataWriter that "owns" the instance explicitly disposes it. 
The instance_state becomes ALIVE again only if the DataWriter that owns the instance writes it. · If OWNERSHIP is set to SHARED, then the instance_state becomes NOT_ALIVE_DISPOSED if any DataWriter explicitly disposes the instance. The instance_state becomes ALIVE as soon as any DataWriter writes the instance again. The instance_state available in the SampleInfo is a snapshot of the instance_state of the instance at the time the collection was obtained (i.e. at the time read or take was called). The instance_state is therefore be the same for all samples in the returned collection that refer to the same instance. For each instance the middleware internally maintains two counts: the disposed_generation_count and no_writers_generation_count, relative to each DataReader: · The disposed_generation_count and no_writers_generation_count are initialized to zero when the DataReader first detects the presence of a never-seen-before instance. · The disposed_generation_count is incremented each time the instance_state of the corresponding instance changes from NOT_ALIVE_DISPOSED to ALIVE. · The no_writers_generation_count is incremented each time the instance_state of the corresponding instance changes from NOT_ALIVE_NO_WRITERS to ALIVE. The disposed_generation_count and no_writers_generation_count available in the SampleInfo capture a snapshot of the corresponding counters at the time the sample was received. · The sample_rank and generation_rank available in the SampleInfo are computed based solely on the actual samples in the ordered collection returned by read or take. · The sample_rank indicates the number or samples of the same instance that follow the current one in the collection. · The generation_rank available in the SampleInfo indicates the difference in 'generations' between the sample (S) and the Most Recent Sample of the same instance that appears In the returned Collection (MRSIC). That is, it counts the number of times the instance transitioned from not-alive to alive in the time from the reception of the S to the reception of MRSIC. The generation_rank is computed using the formula: generation_rank = (MRSIC.disposed_generation_count + MRSIC.no_writers_generation_count) - (S.disposed_generation_count + S.no_writers_generation_count) The absolute_generation_rank available in the SampleInfo indicates the difference in 'generations' between the sample (S) and the Most Recent Sample of the same instance that the middleware has received (MRS). That is, it counts the number of times the instance transitioned from not-alive to alive in the time from the reception of the S to the time when the read or take was called. absolute_generation_rank = (MRS.disposed_generation_count + MRS.no_writers_generation_count) - (S.disposed_generation_count + S.no_writers_generation_count) These counters and ranks allow the application to distinguish samples belonging to different 'generations' of the instance. Note that it is possible for an instance to transition from not-alive to alive (and back) several times before the application accesses the data by means of read or take. In this case the returned collection may contain samples that cross generations (i.e. some samples were received before the instance became not-alive, other after the instance re-appeared again). Using the information in the SampleInfo the application can anticipate what other information regarding the same instance appears in the returned collection, as well as, in the infrastructure and thus make appropriate decisions. 
For example, an application desiring to only consider the most current sample for each instance would only look at samples with sample_rank==0. Similarly an application desiring to only consider samples that correspond to the latest generation in the collection will only look at samples with generation_rank==0. An application desiring only samples pertaining to the latest generation available will ignore samples for which absolute_generation_rank != 0. Other application-defined criteria may also be used. For each instance (identified by the key), the middleware internally maintains a view_state relative to each DataReader. The view_state can either be NEW or NOT_NEW. · NEW indicates that either this is the first time that the DataReader has ever accessed samples of that instance, or else that the DataReader has accessed previous samples of the instance, but the instance has since been reborn (i.e. become not-alive and then alive again). These two cases are distinguished by examining the disposed_generation_count and the no_writers_generation_count. · NOT_NEW indicates that the DataReader has already accessed samples of the same instance and that the instance has not been reborn since. The view_state available in the SampleInfo is a snapshot of view_state of the instance relative to the DataReader used to access the samples at the time the collection was obtained (i.e. at the time read or take was called). The view_state is therefore the same for all samples in the returned collection that refer to the same instance. Once an instance has been detected as not having any "live" writers and all the samples associated with the instance are 'taken' from the DataReader, the middleware can reclaim all local resources regarding the instance. Future samples will be treated as 'never seen' The application accesses data by means of the operations read or take on the DataReader. These operations return an ordered collection of DataSamples consisting of a SampleInfo part and a Data part. The way the middleware builds this collection depends on QoS policies set on the DataReader and Subscriber, as well as the source timestamp of the samples, and the parameters passed to the read/take operations, namely: · the desired sample states (i.e., READ , NOT_READ, or both) · the desired view states (i.e., NEW, NOT_NEW, or both) · the desired instance states (ALIVE, NOT_ALIVE_DISPOSED, NOT_ALIVE_NO_WRITERS, or a combination of these) · The read and take operations are non-blocking and just deliver what is currently available that matches the specified states. The read_w_condition and take_w_condition operations take a ReadCondition object as a parameter instead of sample, view, and instance states. The behavior is that the samples returned will only be those for which the condition is TRUE. These operations, in conjunction with ReadCondition objects and a WaitSet, allow performing waiting reads (see below). · Figure 2-11 · Replace figure with the following: · Replace Figure 2-11 caption. New caption is: Statechart of the instance_state and view_state for a single instance . · Section 2.1.1.5.2 Subscriber Class · Subscriber table: · Operation get_datareaders · Change signature. Resulting operation is: get_datareaders ReturnCode_t out: readers DataReader [] sample_states SampleStateKind [] view_states ViewStateKind [] instance_states InstanceStateKind [] · Section 2.1.2.5.2.10 get_datareaders · Replace: … samples with the specified lifecycle_states and sample_states. 
With: … samples with the specified sample_states, view_states, and instance_states.
· Section 2.1.2.5.2.11 notify_datareaders
· Replace: … objects attached to contained DataReader entities containing samples with any LifecycleState and SampleState 'NOT_READ'.
With: … objects attached to contained DataReader entities containing samples with SampleState 'NOT_READ' and any ViewState and InstanceState.
· Section 2.1.1.5.3 DataReader Class
· DataReader table
· Operation read
· Change signature. Resulting operation is:
read ReturnCode_t
    out: data_values Data []
    out: sample_infos SampleInfo []
    sample_states SampleStateKind []
    view_states ViewStateKind []
    instance_states InstanceStateKind []
· Operation take
· Change signature. Resulting operation is:
take ReturnCode_t
    out: data_values Data []
    out: sample_infos SampleInfo []
    sample_states SampleStateKind []
    view_states ViewStateKind []
    instance_states InstanceStateKind []
· Operation create_readcondition
· Change signature. Resulting operation is:
create_readcondition ReadCondition
    sample_states SampleStateKind []
    view_states ViewStateKind []
    instance_states InstanceStateKind []
· Operation create_querycondition
· Change signature. Resulting operation is:
create_querycondition QueryCondition
    sample_states SampleStateKind []
    view_states ViewStateKind []
    instance_states InstanceStateKind []
    query_expression string
    query_parameters string []
· FooDataReader table
· Operation read
· Change signature. Resulting operation is:
read ReturnCode_t
    out: data_values Foo []
    out: sample_infos SampleInfo []
    sample_states SampleStateKind []
    view_states ViewStateKind []
    instance_states InstanceStateKind []
· Operation take
· Change signature. Resulting operation is:
take ReturnCode_t
    out: data_values Foo []
    out: sample_infos SampleInfo []
    sample_states SampleStateKind []
    view_states ViewStateKind []
    instance_states InstanceStateKind []
· Section 2.1.2.5.3.8 read
· Replace: The act of reading a sample changes its sample_state to READ but it does not affect the lifecycle_state of the instance.
With: The act of reading a sample sets its sample_state to READ. If the sample belongs to the most recent generation of the instance, it will also set the view_state of the instance to NOT_NEW. It will not affect the instance_state of the instance.
· Section 2.1.2.5.3.9 take
· Replace: The act of reading a sample removes it from the middleware so it cannot be 'read' or 'taken' again. It also may change the lifecycle_state of the instance.
With: The act of taking a sample removes it from the DataReader so it cannot be 'read' or 'taken' again. If the sample belongs to the most recent generation of the instance, it will also set the view_state of the instance to NOT_NEW. It will not affect the instance_state of the instance.
· Section 2.1.2.5.3.10 read_w_condition
· 2nd paragraph replace: … and passing as lifecycle_states and sample_states the value …
With: … and passing as sample_states, view_states and instance_states the value …
· Replace: The samples are accessed via read and therefore this operation does not change the lifecycle_state of any instance and leaves the samples under the control of the Service so they can be accessed again.
With: The samples are accessed with the same semantics as the read operation.
· Section 2.1.2.5.3.11 take_w_condition
· Replace: This operation removes samples from the middleware so they cannot be 'read' or 'taken' again. It also may change the lifecycle_state of the instances whose samples are taken.
With: The samples are accessed with the same semantics as the take operation.
· Section 2.1.2.5.5 SampleInfo class
· SampleInfo table
· Remove lifecycle_state
· Add:
view_state : ViewStateKind
instance_state : InstanceStateKind
disposed_generation_count : long
no_writers_generation_count : long
sample_rank : long
generation_rank : long
absolute_generation_rank : long
· Resulting table is:
SampleInfo attributes
sample_state SampleStateKind
view_state ViewStateKind
instance_state InstanceStateKind
disposed_generation_count long
no_writers_generation_count long
sample_rank long
generation_rank long
absolute_generation_rank long
source_timestamp Time_t
instance_handle InstanceHandle_t
No operations
· Replace all the text after the table with:
SampleInfo is the information that accompanies each sample that is 'read' or 'taken'. It contains the following information:
· The sample_state (READ or NOT_READ) that indicates whether or not the corresponding data sample has already been read.
· The view_state (NEW or NOT_NEW) that indicates whether the DataReader has already seen samples for the most-current generation of the related instance.
· The instance_state (ALIVE, NOT_ALIVE_DISPOSED, or NOT_ALIVE_NO_WRITERS) that indicates whether the instance is currently in existence or, if it has been disposed, the reason why it was disposed:
· ALIVE if this instance is currently in existence;
· NOT_ALIVE_DISPOSED if this instance was disposed by a DataWriter;
· NOT_ALIVE_NO_WRITERS if the instance has been disposed by the DataReader because none of the DataWriter objects currently "alive" (according to the LIVELINESS QoS) are writing the instance.
· The disposed_generation_count that indicates the number of times the instance had become alive after it was disposed explicitly by a DataWriter, at the time the sample was received.
· The no_writers_generation_count that indicates the number of times the instance had become alive after it was disposed because there were no writers, at the time the sample was received.
· The sample_rank that indicates the number of samples related to the same instance that follow in the collection returned by read or take.
· The generation_rank that indicates the generation difference (number of times the instance was disposed and became alive again) between the time the sample was received and the time the most recent sample in the collection related to the same instance was received.
· The absolute_generation_rank that indicates the generation difference (number of times the instance was disposed and became alive again) between the time the sample was received and the time the most recent sample (which may not be in the returned collection) related to the same instance was received.
· The source_timestamp that indicates the time provided by the DataWriter when the sample was written.
· The instance_handle that identifies locally the corresponding instance.
Refer to Section 2.1.2.5.1 for a detailed explanation of these states and ranks.
· Section 2.1.2.5.8 ReadCondition Class
· ReadCondition table
· Remove operation get_lifecycle_state_mask
· Add operation (after operation get_sample_state_mask):
· Name: get_view_state_mask
· Return: ViewStateKind []
· Add operation:
· Name: get_instance_state_mask
· Return: InstanceStateKind []
· Resulting table is:
ReadCondition
no attributes
operations
get_datareader DataReader
get_sample_state_mask SampleStateKind []
get_view_state_mask ViewStateKind []
get_instance_state_mask InstanceStateKind []
· 2nd paragraph:
· Replace: (by specifying the desired lifecycle-states, as well as sample-states)
With: (by specifying the desired sample-states, view-states, and instance-states)
· On the footnote to … information is available:
· Replace "lifecycle" with "view".
· Delete section 2.1.2.5.8.2 get_lifecycle_state_mask. Section 2.1.2.5.8.3 becomes 2.1.2.5.8.2.
· Section 2.1.2.5.8.2 get_sample_state_mask
· Replace "lifecycle-states" with "sample-states".
· Add section 2.1.2.5.8.3 (after 2.1.2.5.8.2 get_sample_state_mask):
2.1.2.5.8.3 get_view_state_mask
This operation returns the set of view-states that are taken into account to determine the trigger_value of the ReadCondition. These are the view-states specified when the ReadCondition was created.
· Add section 2.1.2.5.8.4:
2.1.2.5.8.4 get_instance_state_mask
This operation returns the set of instance-states that are taken into account to determine the trigger_value of the ReadCondition. These are the instance-states specified when the ReadCondition was created.
· Section 2.1.4.4.2 Trigger State of the ReadCondition
· 3rd paragraph. Replace: A ReadCondition that has a lifecycle_state_mask = {NEW}, and sample_state_mask = {NOT_READ} will have trigger_value …
With: A ReadCondition that has a sample_state_mask = {NOT_READ}, view_state_mask = {NEW} will have trigger_value …
· Replace: … the sample would still have (LifecycleState, SampleState) = (NEW, READ) …
With: … the sample would still have (SampleState, ViewState) = (READ, NEW) …
· Section 2.1.3.16.2 Detection of loss in topological connectivity
· 5th paragraph:
· Replace "lifecycle state" with "view state".
· Section 2.1.3.16.3 Semantic difference between unregister and dispose
· 3rd paragraph:
· Replace: see the lifecycle as being "DELETED"
With: see the instance_state as being "DISPOSED"
· Section 2.1.4.4.2 Trigger State of the ReadCondition
· 1st paragraph. Replace: … with LifeCycleState and SampleState matching those of the ReadCondition.
With: … with SampleState, ViewState, and InstanceState matching those of the ReadCondition.
· Section 2.2.2 PIM to PSM Mapping Rules
· 4th paragraph. Replace: … StatusKind, SampleStateKind, and LifecycleStateKind
With: … StatusKind, SampleStateKind, ViewStateKind, and InstanceStateKind.
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Replace:
// Sample states to support reads
typedef unsigned long LifecycleStateKind;
typedef sequence<LifecycleStateKind> LifecycleStateSeq;
const LifecycleStateKind NEW_LIFECYCLE_STATE = 0x0001 << 0;
const LifecycleStateKind MODIFIED_LIFECYCLE_STATE = 0x0001 << 1;
const LifecycleStateKind DISPOSED_LIFECYCLE_STATE = 0x0001 << 2;
const LifecycleStateKind NO_WRITERS_LIFECYCLE_STATE = 0x0001 << 3;
// This is a bit-mask LifecycleStateKind
typedef unsigned long LifecycleStateMask;
const LifecycleStateMask ANY_LIFECYCLE_STATE = 0xffff;
With:
// View states to support reads
typedef unsigned long ViewStateKind;
typedef sequence<ViewStateKind> ViewStateSeq;
const ViewStateKind NEW_VIEW_STATE = 0x0001 << 0;
const ViewStateKind NOT_NEW_VIEW_STATE = 0x0001 << 1;
// This is a bit-mask ViewStateKind
typedef unsigned long ViewStateMask;
const ViewStateMask ANY_VIEW_STATE = 0xffff;
// Instance states to support reads
typedef unsigned long InstanceStateKind;
typedef sequence<InstanceStateKind> InstanceStateSeq;
const InstanceStateKind ALIVE_INSTANCE_STATE = 0x0001 << 0;
const InstanceStateKind NOT_ALIVE_DISPOSED_INSTANCE_STATE = 0x0001 << 1;
const InstanceStateKind NOT_ALIVE_NO_WRITERS_INSTANCE_STATE = 0x0001 << 2;
// This is a bit-mask InstanceStateKind
typedef unsigned long InstanceStateMask;
const InstanceStateMask ANY_INSTANCE_STATE = 0xffff;
const InstanceStateMask NOT_ALIVE_INSTANCE_STATE = 0x006;
· Interface Subscriber
· Replace:
ReturnCode_t get_datareaders(out DataReaderSeq readers,
    in SampleStateMask s_state,
    in LifecycleStateMask l_state);
With:
ReturnCode_t get_datareaders(out DataReaderSeq readers,
    in SampleStateMask sample_states,
    in ViewStateMask view_states,
    in InstanceStateMask instance_states);
· Interface DataReader
· Replace (in the comments):
// ReturnCode_t read(out DataSeq received_data,
//     out SampleInfoSeq info_seq,
//     in SampleStateMask s_mask,
//     in LifecycleStateMask l_mask);
With:
// ReturnCode_t read(out DataSeq received_data,
//     out SampleInfoSeq info_seq,
//     in SampleStateMask sample_states,
//     in ViewStateMask view_states,
//     in InstanceStateMask instance_states);
· Replace (in the comments):
// ReturnCode_t take(out DataSeq received_data,
//     out SampleInfoSeq info_seq,
//     in SampleStateMask s_mask,
//     in LifecycleStateMask l_mask);
With:
// ReturnCode_t take(out DataSeq received_data,
//     out SampleInfoSeq info_seq,
//     in SampleStateMask sample_states,
//     in ViewStateMask view_states,
//     in InstanceStateMask instance_states);
· Replace:
ReadCondition create_readcondition(in SampleStateMask mask,
    in LifecycleStateMask l_mask);
With:
ReadCondition create_readcondition(in SampleStateMask sample_states,
    in ViewStateMask view_states,
    in InstanceStateMask instance_states);
· Replace:
QueryCondition create_querycondition(in SampleStateMask mask,
    in LifecycleStateMask l_mask,
    in string query,
    in StringSeq query_parameters);
With:
QueryCondition create_querycondition(in SampleStateMask sample_states,
    in ViewStateMask view_states,
    in InstanceStateMask instance_states,
    in string query_expression,
    in StringSeq query_parameters);
· struct SampleInfo
· Remove: LifecycleStateKind lifecycle_state;
· Add:
ViewStateKind view_state;
InstanceStateKind instance_state;
long disposed_generation_count;
long no_writers_generation_count;
long sample_rank;
long generation_rank;
long absolute_generation_rank;
· Resulting SampleInfo structure is:
struct SampleInfo {
    SampleStateKind sample_state;
    ViewStateKind view_state;
    InstanceStateKind instance_state;
    Time_t source_timestamp;
    InstanceHandle_t instance_handle;
    long disposed_generation_count;
    long no_writers_generation_count;
    long sample_rank;
    long generation_rank;
    long absolute_generation_rank;
};
· Interface FooDataReader
· Replace:
DDS::ReturnCode_t read(out FooSeq received_data,
    out DDS::SampleInfoSeq info_seq,
    in DDS::SampleStateMask s_mask,
    in DDS::LifecycleStateMask l_mask);
With:
DDS::ReturnCode_t read(out FooSeq received_data,
    out DDS::SampleInfoSeq info_seq,
    in DDS::SampleStateMask sample_states,
    in DDS::ViewStateMask view_states,
    in DDS::InstanceStateMask instance_states);
· Replace:
DDS::ReturnCode_t take(out FooSeq received_data,
    out DDS::SampleInfoSeq info_seq,
    in DDS::SampleStateMask s_mask,
    in DDS::LifecycleStateMask l_mask);
With:
DDS::ReturnCode_t take(out FooSeq received_data,
    out DDS::SampleInfoSeq info_seq,
    in DDS::SampleStateMask sample_states,
    in DDS::ViewStateMask view_states,
    in DDS::InstanceStateMask instance_states);
· interface ReadCondition
· Remove operation get_lifecycle_state_mask; add operations get_view_state_mask and get_instance_state_mask (mirroring the PIM changes above).
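For illustration only (this commentary is not part of the revised text): under a hypothetical C++ binding of the IDL above, a condition-driven read using the new three-mask API could look as follows. The constant NOT_READ_SAMPLE_STATE follows the naming pattern of the view/instance state constants but is an assumption here, as is the exact WaitSet wiring.

// Sketch only: hypothetical C++ binding of the IDL above.
DDS::ReadCondition_var cond = reader->create_readcondition(
    DDS::NOT_READ_SAMPLE_STATE,    // sample_states (assumed constant name)
    DDS::ANY_VIEW_STATE,           // view_states
    DDS::ALIVE_INSTANCE_STATE);    // instance_states

DDS::WaitSet waitset;
waitset.attach_condition(cond);
// ... block in waitset.wait() until the condition triggers, then:
FooSeq data;
DDS::SampleInfoSeq infos;
reader->take_w_condition(data, infos, cond);  // only samples matching cond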
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6855: Ref-85 Garbage_collection_of_disposed_instances (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
In the current specification, the applications are fully responsible
for the resource usage. Applications have means to detect when
lifecycles end and resources can be freed. However, even if
applications are written correctly, not all cases are covered, and
some situations potentially produce garbage, that is, memory that
remains in use and is never reclaimed.


The following are potential scenarios:


Instances become disposed but the application does not 'take' the
sample and therefore does not allow the middleware to end the
lifecycle and remove the state regarding the instance


Instances that no longer have any "active" writers and consequently
get no more samples.


Even if the application 'takes' the disposed instance or instances
with no writers, it is not always possible for the middleware to
reclaim the resources. This can occur in two cases: when the service
keeps Transient and Persistent data, and when MultiTopic samples are
incomplete. In both cases an immediate removal of samples is not
desirable, but eventually the samples should be removed.


On the one hand, deleting the instances immediately will potentially
cause problems, since late-coming readers may require the disposed
instances, e.g. reallocating consumers that require disposed instances
to finish interrupted cleanup actions, or MultiTopics joining Topics
with different lifecycles.


On the other hand, the disposed instances cannot be kept
indefinitely. Doing so will eventually flood the system, especially
for Topics with increasing key-values.


The solution is to keep them for a certain duration and then reclaim
the resources; the question is how long this duration should be.


Note that, in general, just because a sample has been disposed the
middleware cannot reclaim all the resources for it, either on the
writer side or on the reader side. For the middleware to reclaim
resources it is necessary that the instance also be unregistered;
otherwise disposing would automatically relinquish ownership, which is
not the desired behavior.


This applies to the writer side, the reader side, and the
transient/persistent durability service.


On the writer side the rule is therefore clear: resources are only
fully reclaimed when the instance is unregistered.
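As a sketch of the writer-side rule just stated (a hypothetical C++ binding; the key field and variable wiring are assumptions):

// Sketch only: dispose alone does not allow the middleware to reclaim
// the instance; the writer must also unregister it.
Foo sample;
sample.key = 42;  // hypothetical key field of Foo
DDS::InstanceHandle_t h = writer->register_instance(sample);
writer->write(sample, h);
writer->dispose(sample, h);              // instance becomes disposed
writer->unregister_instance(sample, h);  // ownership relinquished; resources
                                         // may now be fully reclaimed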


So the tricky issue is how to handle the case where an instance has no
writers, whether it had been disposed or not, whether there are
samples in the queue or not. Nominally we notify the application of
this event by some means (see Issue #84). The application should then
take the samples.


One approach would be for the middleware to keep some resources around
for a certain duration (DISPOSE_LIFESPAN) in case this was just a
transient situation and the instance appears again. This treats
DISPOSED_NO_WRITERS similarly to DISPOSED_EXPLICIT.


***PROPOSAL***


Add a new QoS on the DataReader called NO_WRITERS_LIFESPAN with a
single Duration_t field "duration".  It is mutable with a default
value equal to DURATION_INFINITY. This represents the duration for
which the DataReader should keep the knowledge of an instance once it
detects it has NO_WRITERS.


After the instance has no writers the middleware is not required to
keep the information about the instance any longer than the
NO_WRITERS_LIFESPAN. The implication is that if an instance becomes
DISPOSED_NO_WRITERS and the application does not take the instance for
a time that exceeds the NO_WRITERS_LIFESPAN, the application could
miss the fact that the instance was DISPOSED and has no writers.


Resolution: see below
Revised Text: Resolution: Add a new QoS on the DataReader called READER_DATA_LIFECYCLE with a single Duration_t field "autopurge_nowriter_samples_delay". It is mutable with a default value equal to DURATION_INFINITY. This represents the duration for which the DataReader should keep the knowledge of an instance once it detects it is in the instance_state NOT_ALIVE_NO_WRITERS.
After the instance has no writers the middleware is not required to keep the information about the instance any longer than the READER_DATA_LIFECYCLE autopurge_nowriter_samples_delay. The implication is that if an instance becomes NOT_ALIVE_NO_WRITERS and the application does not take the instance for a time that exceeds the "autopurge_nowriter_samples_delay" duration, the application could miss the fact that the instance has become NOT_ALIVE_NO_WRITERS and never receive any information on that instance again.
The behavior of the transient and persistent services is also affected. The conditions for the transient and persistent services to "garbage-collect" an instance are:
1. the sample should be explicitly disposed by the owning data-writer, AND
2. the data-writer should have relinquished its ownership (by calling unregister, or by the NO_WRITERS status), AND
3. the garbage-collect should be delayed for a given amount of time to allow late-joining applications to 'see' the disposed status.
The latter condition perhaps needs some explanation: in case an application crashes and is (automatically) restarted in the system, it generally asks for initial data to regain its state; for instance, when it detects disposed instances it could perform the necessary clean-up activities. The mentioned delay in the garbage-collect 'preserves' the disposed state 'long enough' to allow restarted (or late-joining) applications to react to this disposed status. The duration of this delay should be specified as an additional field in the DURABILITY QoS. This field should be called "service_cleanup_delay" and is the Duration described in point 3 above.
Revised Text: Changes in PIM
· Section 2.1.3 Supported QoS
· QoS table, add QosPolicy (at the bottom):
QosPolicy: READER_DATA_LIFECYCLE
Value: a duration "autopurge_nowriter_samples_delay"
Meaning: Specifies the behavior of the DataReader with regards to the lifecycle of the data-instances it manages.
Concerns: DataReader. RxO: N/A. Changeable: Yes.
autopurge_nowriter_samples_delay: Indicates the duration the DataReader must retain information regarding instances that have the instance_state NOT_ALIVE_NO_WRITERS. By default, unlimited.
· QoS table
· QosPolicy: DURABILITY
· Add value: a duration "service_cleanup_delay"
service_cleanup_delay: Only needed if kind is TRANSIENT or PERSISTENT. Controls when the service is able to remove all information regarding a data-instance. By default, zero.
· Add section 2.1.3.16
2.1.3.16 READER_DATA_LIFECYCLE
This policy controls the behavior of the DataReader with regards to the lifecycle of the data-instances it manages, that is, the data-instances that have been received and for which the DataReader maintains some internal resources.
The DataReader internally maintains the samples that have not been taken by the application, subject to the constraints imposed by other QoS policies such as HISTORY and RESOURCE_LIMITS.
The DataReader also maintains information regarding the identity, view_state and instance_state of data-instances even after all samples have been 'taken'. This is needed to properly compute the states when future samples arrive.
Under normal circumstances the DataReader can only reclaim all resources for instances whose instance_state = NOT_ALIVE_NO_WRITERS and for which all samples have been 'taken'. This behavior can cause problems if the application "forgets" to 'take' those samples. The 'untaken' samples will prevent the DataReader from reclaiming the resources and they would remain in the DataReader indefinitely.
The autopurge_nowriter_samples_delay defines the maximum duration for which the DataReader will maintain information regarding an instance once its instance_state becomes NOT_ALIVE_NO_WRITERS. After this time elapses, the DataReader will purge all internal information regarding the instance; any untaken samples will also be lost.
· Section 2.1.3.4 DURABILITY
· Add at the end of the section, after paragraph "Incompatibilities between local DataReader/DataWriter entities …":
The setting of the service_cleanup_delay controls when the TRANSIENT or PERSISTENT service is able to remove all information regarding a data-instance. Information on a data-instance is maintained until the following conditions are met:
1. the instance has been explicitly disposed (instance_state = NOT_ALIVE_DISPOSED),
2. and while in the NOT_ALIVE_DISPOSED state the system detects that there are no more "live" DataWriter entities writing the instance, that is, all existing writers either unregister the instance (call unregister) or lose their liveliness,
3. and a time interval longer than service_cleanup_delay has elapsed since the moment the service detected that the previous two conditions were met.
The utility of the service_cleanup_delay is apparent in the situation where an application disposes an instance and crashes before it has a chance to complete additional tasks related to the disposition. Upon restart the application may ask for initial data to regain its state, and the delay introduced by the service_cleanup_delay will allow the restarted application to receive the information on the disposed instance and complete the interrupted tasks.
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Add:
const string READERDATALIFECYCLE_QOS_POLICY_NAME = "ReaderDataLifecycle";
const QosPolicyId_t READERDATALIFECYCLE_QOS_POLICY_ID = 17;
struct ReaderDataLifecycleQosPolicy {
    Duration_t autopurge_nowriter_samples_delay;
};
· struct DataReaderQos
· Add (at the end of the structure):
ReaderDataLifecycleQosPolicy reader_data_lifecycle;
· struct DurabilityQosPolicy
· Add field:
Duration_t service_cleanup_delay;
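For illustration only (not part of the revised text): configuring the new policy under a hypothetical C++ binding; the get_qos/set_qos plumbing and the sec/nanosec fields of Duration_t are assumptions here.

// Sketch only: purge instances 30 seconds after the DataReader detects
// they have no writers, instead of keeping them indefinitely.
DDS::DataReaderQos qos;
reader->get_qos(qos);
qos.reader_data_lifecycle.autopurge_nowriter_samples_delay.sec = 30;
qos.reader_data_lifecycle.autopurge_nowriter_samples_delay.nanosec = 0;
reader->set_qos(qos);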
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6856: Ref-112 Value_of_data_for_DISPOSED_state (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Section 2.1.2.5.3 of the specification does not clearly state that the
data_value and the sample_info sequences returned by the read* and
take* operations must be of the same length and are in one-to-one
correspondence


Furthermore, the specification does not describe what the element of
the DataSeq should be when the SampleInfo states that the
lifecycle_state is DISPOSED


Same issue when the lifecycle_state is NO_WRITERS


These should be clarified


***PROPOSAL***


This must be rephrased in accordance with the proposal in Ref-84.


State in section 2.1.2.5.3 that both sequences are of the same length
and in one-to-one correspondence


State that when the liveness_lifecycle is DISPOSED (NO_WRITERS or
EXPLICIT), the last sample for the instance, that is, the one with
instance_rank==0, has no corresponding data.

Resolution: see below
Revised Text: Resolution: The resolution of this issue must be aligned with that of 6854.
State in section 2.1.2.5.3 that both Data and SampleInfo sequences are of the same length and in one-to-one correspondence.
State that when the instance_state is NOT_ALIVE (NO_WRITERS or DISPOSED) the last sample for the instance, that is, the one with sample_rank==0, has no corresponding data.
Revised Text: Changes in PIM
· Section 2.1.2.5.3.8 read
· Add towards the end of the section (before the last paragraph "This operation must be provided…"):
On output, the collection of Data values and the collection of SampleInfo structures are of the same length and are in one-to-one correspondence. Each SampleInfo provides information, such as the source_timestamp, the sample_state, view_state, and instance_state, etc., about the corresponding sample.
Some elements in the returned collection may not have valid data. If the instance_state in the SampleInfo is NOT_ALIVE_DISPOSED or NOT_ALIVE_NO_WRITERS, then the last sample for that instance in the collection, that is, the one whose SampleInfo has sample_rank==0, does not contain valid data. Samples that contain no data do not count towards the limits imposed by the RESOURCE_LIMITS QoS policy.
· Section 2.1.2.5.3.9 take
· 1st paragraph. Replace: This operation accesses a collection of data-samples from the DataReader.
With: This operation accesses a collection of data-samples from the DataReader and a corresponding collection of SampleInfo structures.
· Add towards the end of the section (before the last paragraph "This operation must be provided…"):
Similar to read, the collection of SampleInfo is in one-to-one correspondence with the collection of samples. Furthermore, if the instance_state in the SampleInfo is NOT_ALIVE_DISPOSED or NOT_ALIVE_NO_WRITERS, then the last sample for that instance in the collection, that is, the one whose SampleInfo has sample_rank==0, does not contain valid data. As was the case for read, samples with no data do not count towards the limits imposed by the RESOURCE_LIMITS QoS policy.
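For illustration only (not part of the revised text): a reader loop honoring the rule above, under a hypothetical C++ binding. The read signature shown predates the max_samples parameter added by issue 6859, and process() is a placeholder.

// Sketch only: skip the data-less entry that accompanies a
// dispose/no-writers notification.
FooSeq data;
DDS::SampleInfoSeq infos;
reader->read(data, infos,
             DDS::ANY_SAMPLE_STATE, DDS::ANY_VIEW_STATE,
             DDS::ANY_INSTANCE_STATE);
for (int i = 0; i < infos.length(); ++i) {
    if (infos[i].instance_state != DDS::ALIVE_INSTANCE_STATE
            && infos[i].sample_rank == 0) {
        continue;  // last sample of a not-alive instance: no valid data
    }
    process(data[i]);
}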
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6857: Ref-113 Meta_sample_accounting_towards_resource_limits (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Related to 112


The specification does not mention whether the DISPOSED samples count
towards the RESOURCE_LIMITS (max samples and such)


Although this detail is unspecified, it is something that would become
observable to the user if, for some reason, an application does not
"take" disposed samples.


***PROPOSAL***


State that these samples don't count towards the limits because they
do not have "data" associated with them. Moreover, in view of Ref-84,
there can be at most one such SampleInfo per instance, so the
worst-case needed resources are small and can be taken into account by
the implementation.


Resolution: duplicate
Revised Text:
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Discussion:
Resolution: 
These samples don't count towards the limits because they do not have "data" associated with them. Moreover, in view of Ref-84, there can be at most one such SampleInfo per instance, so the worst-case needed resources are small and can be taken into account by the implementation.
The resolution of this issue was already included in the resolution of issue  6856.
Disposition:	Duplicate/merged (issue 6856)


Issue 6858: DDS ISSUE# 54] Refactor or extend API used to access samples (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-74 Refactor_the_api_used_to_access_samples


The read() and take() operations return a "one-dimensional" array of
samples.


For the case where the PRESENTATION.access_scope==BY_INSTANCE and
either HISTORY.kind == KEEP_ALL or HISTORY.depth > 1 it is desirable
that the application can easily access all the samples that correspond
to one instance versus the samples that correspond to other
instances. For this purpose a 2-dimensional array (or sequence) is
preferable. That would allow, for example, easily determining how many
samples there are for each instance (before examining them all), and
also easily examining the "first" or "last" sample of each instance
without navigating through all the other instances.


What we would like is something like:


struct FooSample {
    struct SampleInfo *info;
    struct Foo        *data;
};

typedef sequence<FooSample> FooSampleSeq;

read(out FooSampleSeq sample_seq);

typedef sequence<FooSampleSeq> FooSampleCollatedSeq;

read_collated(out FooSampleCollatedSeq collated_seq);

Data would then be accessed as

    sample_seq[i]->data;

or else as

    collated_seq[k][i]->data;


However it is not clear how to map this to the IDL PSM.


***PROPOSAL***


No concrete proposal as it would be hard to represent in IDL but it
would be nice if such API was offered.

Resolution: see below
Revised Text: Resolution: Introduce additional operations to the DataReader to allow reading a single instance, and iteration over each instance. Introduce an additional field in the RELIABILITY QoS that indicates the maximum blocking time for the write operation in case it does not have resources to buffer the write. Rename "register_instance" to be "register" and "unregister_instance" to be "unregister".
Revised Text: Changes in PIM
· Section 2.1.1.1 Format and conventions
· Return codes table
· Add return code NO_DATA:
NO_DATA: Indicates a transient situation where the operation did not return any data but there is no inherent error.
· Section 2.1.3
· QoS table, policy "RELIABILITY"
· Add value "max_blocking_time":
RELIABILITY
Value: a "kind" (RELIABLE, BEST_EFFORT) and a duration "max_blocking_time"
Meaning: Indicates the level of reliability offered/requested by the Service.
Concerns: Topic, DataReader, DataWriter. RxO: Yes. Changeable: No.
<<… same as before …>>
max_blocking_time: This setting applies only to the case where kind=RELIABLE and the HISTORY is KEEP_ALL. The value of max_blocking_time indicates the maximum time the operation DataWriter::write is allowed to block if the DataWriter does not have space to store the value written.
· Section 2.1.2.4.2.10 write
· At the end of the section add:
If the RELIABILITY kind is set to RELIABLE and the HISTORY kind is set to KEEP_ALL, the write operation on the DataWriter may block if the modification would cause data to be lost or else cause one of the limits specified in the RESOURCE_LIMITS to be exceeded. Under these circumstances, the RELIABILITY max_blocking_time configures the maximum time the write operation may block (waiting for space to become available). If max_blocking_time elapses before the DataWriter is able to store the modification without exceeding the limits, the write operation will fail and return TIMEOUT.
· Section 2.1.3.12 RELIABILITY
· 2nd paragraph. Add after "…the write operation on the DataWriter may block if the modification would cause data to be lost or else cause one of the limits specified in the RESOURCE_LIMITS to be exceeded.":
Under these circumstances, the RELIABILITY max_blocking_time configures the maximum duration the write operation may block.
· Section 2.1.2.4.2 DataWriter class
· DataWriter table:
· Change operation name from "register_instance" to "register"
· Change operation name from "unregister_instance" to "unregister"
· Change operation name from "register_instance_w_timestamp" to "register_w_timestamp"
· Change operation name from "unregister_instance_w_timestamp" to "unregister_w_timestamp"
· FooDataWriter table:
· Change operation name from "register_instance" to "register"
· Change operation name from "unregister_instance" to "unregister"
· Change operation name from "register_instance_w_timestamp" to "register_w_timestamp"
· Change operation name from "unregister_instance_w_timestamp" to "unregister_w_timestamp"
· Section 2.1.2.4.2.5 register_instance
· Change section name to "register"
· 5th paragraph: change from "register_instance" to "register"
· 5th paragraph: at the end append the text:
The explicit use of this operation is optional as the application may call the write operation directly and specify a HANDLE_NIL to indicate that the 'key' should be examined to identify the instance.
· Section 2.1.2.4.2.5 register_instance_w_timestamp
· Change section name to "register_w_timestamp"
· 1st paragraph: change from "register_instance" to "register" (2 times)
· Section 2.1.2.4.2.6 unregister_instance
· Change section name to "unregister"
· 1st paragraph: change from "register_instance" to "register" (2 times)
· 2nd paragraph: change from "unregister_instance" to "unregister" and from "register_instance" to "register"
· 5th paragraph: change from "register_instance" to "register"
· 6th paragraph: change from "unregister_instance" to "unregister"
· 8th paragraph: change from "unregister_instance" to "unregister"
· Section 2.1.2.4.2.6 unregister_instance_w_timestamp
· Change section name to "unregister_w_timestamp"
· 1st paragraph: change from "unregister_instance" to "unregister" (2 times)
· Section 2.1.2.4.2.10 write
· 5th paragraph: change from "register_instance" to "register"
· Section 2.1.2.4.2.12 dispose
· 6th paragraph: change from "register_instance" to "register"
· Section 2.2.2
· Add paragraph before the last paragraph "The classes that do not…":
The DataSample class that associates the SampleInfo and Data collections returned from the data-accessing operations (read and take) has not been explicitly mapped into IDL. The collections themselves have been mapped into sequences. The correspondence between each Data and SampleInfo is represented by the use of the same index to access the corresponding elements on each of the collections. It is anticipated that additional data-accessing APIs may be provided on each target language to make this operation as natural and efficient as it can be. The reason is that accessing data is the main purpose of the Data-Distribution service, and the IDL mapping provides a programming-language-neutral representation that cannot take advantage of the strengths of each particular language.
· Section 2.1.2.5.3 DataReader Class
· DataReader table
· Add operations read_next_sample, take_next_sample, read_instance, take_instance, read_next_instance, take_next_instance, read_next_instance_w_condition, take_next_instance_w_condition.
read_next_sample ReturnCode_t
    inout: data_value Data
    inout: sample_info SampleInfo
take_next_sample ReturnCode_t
    inout: data_value Data
    inout: sample_info SampleInfo
read_instance ReturnCode_t
    inout: data_values Data []
    inout: sample_infos SampleInfo []
    max_samples long
    a_handle InstanceHandle_t
    sample_states SampleStateKind []
    view_states ViewStateKind []
    instance_states InstanceStateKind []
take_instance ReturnCode_t
    inout: data_values Data []
    inout: sample_infos SampleInfo []
    max_samples long
    a_handle InstanceHandle_t
    sample_states SampleStateKind []
    view_states ViewStateKind []
    instance_states InstanceStateKind []
read_next_instance ReturnCode_t
    inout: data_values Data []
    inout: sample_infos SampleInfo []
    max_samples long
    previous_handle InstanceHandle_t
    sample_states SampleStateKind []
    view_states ViewStateKind []
    instance_states InstanceStateKind []
take_next_instance ReturnCode_t
    inout: data_values Data []
    inout: sample_infos SampleInfo []
    max_samples long
    previous_handle InstanceHandle_t
    sample_states SampleStateKind []
    view_states ViewStateKind []
    instance_states InstanceStateKind []
read_next_instance_w_condition ReturnCode_t
    inout: data_values Data []
    inout: sample_infos SampleInfo []
    max_samples long
    previous_handle InstanceHandle_t
    a_condition ReadCondition
take_next_instance_w_condition ReturnCode_t
    inout: data_values Data []
    inout: sample_infos SampleInfo []
    max_samples long
    previous_handle InstanceHandle_t
    a_condition ReadCondition
· FooDataReader table
· Add operations read_next_sample, take_next_sample, read_instance, take_instance, read_next_instance, take_next_instance, read_next_instance_w_condition, take_next_instance_w_condition.
read_next_sample ReturnCode_t
    inout: data_value Foo
    inout: sample_info SampleInfo
take_next_sample ReturnCode_t
    inout: data_value Foo
    inout: sample_info SampleInfo
read_instance ReturnCode_t
    inout: data_values Foo []
    inout: sample_infos SampleInfo []
    max_samples long
    a_handle InstanceHandle_t
    sample_states SampleStateKind []
    view_states ViewStateKind []
    instance_states InstanceStateKind []
take_instance ReturnCode_t
    inout: data_values Foo []
    inout: sample_infos SampleInfo []
    max_samples long
    a_handle InstanceHandle_t
    sample_states SampleStateKind []
    view_states ViewStateKind []
    instance_states InstanceStateKind []
read_next_instance ReturnCode_t
    inout: data_values Foo []
    inout: sample_infos SampleInfo []
    max_samples long
    previous_handle InstanceHandle_t
    sample_states SampleStateKind []
    view_states ViewStateKind []
    instance_states InstanceStateKind []
take_next_instance ReturnCode_t
    inout: data_values Foo []
    inout: sample_infos SampleInfo []
    max_samples long
    previous_handle InstanceHandle_t
    sample_states SampleStateKind []
    view_states ViewStateKind []
    instance_states InstanceStateKind []
read_next_instance_w_condition ReturnCode_t
    inout: data_values Foo []
    inout: sample_infos SampleInfo []
    max_samples long
    previous_handle InstanceHandle_t
    a_condition ReadCondition
take_next_instance_w_condition ReturnCode_t
    inout: data_values Foo []
    inout: sample_infos SampleInfo []
    max_samples long
    previous_handle InstanceHandle_t
    a_condition ReadCondition
· Before section 2.1.2.5.3.1
· Replace sentence: All sample-accessing operations, namely: read, take, read_w_condition, take_w_condition may return the error PRECONDITION_NOT_MET
With: All sample-accessing operations, namely all variants of read and take, may return the error PRECONDITION_NOT_MET.
· Section 2.1.2.5.3.8 read
· At the end of the section, add paragraph:
If the DataReader has no samples that meet the constraints, the return value will be NO_DATA.
· Section 2.1.2.5.3.9 take
· At the end of the section, add paragraph:
If the DataReader has no samples that meet the constraints, the return value will be NO_DATA.
· Section 2.1.2.5.3.10 read_w_condition
· At the end of the section, add paragraph:
If the DataReader has no samples that meet the constraints, the return value will be NO_DATA.
· Section 2.1.2.5.3.11 take_w_condition
· At the end of the section, add paragraph:
If the DataReader has no samples that meet the constraints, the return value will be NO_DATA.
· After section 2.1.2.5.3.11 (named "take with condition") add the following sections:
2.1.2.5.3.12 read_next_sample
This operation copies the next, non-previously accessed Data value from the DataReader; the operation also copies the corresponding SampleInfo. The implied order among the samples stored in the DataReader is the same as for the read operation (section 2.1.2.5.3.8).
The read_next_sample operation is semantically equivalent to the read operation where the input Data sequence has max_len=1, the sample_states=NOT_READ, the view_states=ANY_VIEW_STATE, and the instance_states=ANY_INSTANCE_STATE.
The read_next_sample operation provides a simplified API to 'read' samples, avoiding the need for the application to manage sequences and specify states.
If there is no unread data in the DataReader, the operation will return NO_DATA and nothing is copied.
2.1.2.5.3.13 take_next_sample
This operation copies the next, non-previously accessed Data value from the DataReader and 'removes' it from the DataReader so it is no longer accessible. The operation also copies the corresponding SampleInfo. This operation is analogous to read_next_sample except for the fact that the sample is 'removed' from the DataReader.
The take_next_sample operation is semantically equivalent to the take operation where the input sequence has max_len=1, the sample_states=NOT_READ, the view_states=ANY_VIEW_STATE, and the instance_states=ANY_INSTANCE_STATE.
This operation provides a simplified API to 'take' samples, avoiding the need for the application to manage sequences and specify states.
If there is no unread data in the DataReader, the operation will return NO_DATA and nothing is copied.
2.1.2.5.3.14 read_instance
This operation accesses a collection of Data values from the DataReader. The behavior is identical to read except that all samples returned belong to the single specified instance whose handle is a_handle.
Upon successful return, the Data collection will contain samples all belonging to the same instance. The corresponding SampleInfo verifies instance_handle == a_handle.
The semantics are the same as for the read operation, except that in building the collection the DataReader will check that the sample belongs to the specified instance and otherwise it will not place the sample in the returned collection.
The behavior of the read_instance operation follows the same rules as the read operation regarding the pre-conditions and post-conditions for the data_values and sample_infos collections. Similar to read, the read_instance operation may 'loan' elements to the output collections, which must then be returned by means of return_loan. Similar to read, this operation must be provided on the specialized class that is generated for the particular application data-type that is being taken.
If the DataReader has no samples that meet the constraints, the return value will be NO_DATA.
2.1.2.5.3.15 take_instance
This operation accesses a collection of Data values from the DataReader. The behavior is identical to take except that all samples returned belong to the single specified instance whose handle is a_handle.
The semantics are the same as for the take operation, except that in building the collection the DataReader will check that the sample belongs to the specified instance and otherwise it will not place the sample in the returned collection.
The behavior of the take_instance operation follows the same rules as the read operation regarding the pre-conditions and post-conditions for the data_values and sample_infos collections. Similar to read, the take_instance operation may 'loan' elements to the output collections, which must then be returned by means of return_loan. Similar to read, this operation must be provided on the specialized class that is generated for the particular application data-type that is being taken.
If the DataReader has no samples that meet the constraints, the return value will be NO_DATA.
2.1.2.5.3.18 read_next_instance
This operation accesses a collection of Data values from the DataReader where all the samples belong to a single instance. The behavior is similar to read_instance except that the actual instance is not directly specified. Rather, the samples will all belong to the 'next' instance with instance_handle 'greater' than the specified previous_handle that has available samples.
This operation implies the existence of some total-order 'greater-than' relationship between the instance handles. The specifics of this relationship are not important and are implementation specific. The important thing is that, according to the middleware, all instances are ordered relative to each other. This ordering is between the instances, that is, it does not depend on the actual samples received or available. For the purposes of this explanation it is 'as if' each instance handle was represented as a unique integer.
The behavior of read_next_instance is 'as if' the DataReader invoked read_instance passing the smallest instance_handle among all the ones that (a) are greater than previous_handle and (b) have available samples (i.e. samples that meet the constraints imposed by the specified states).
The special value HANDLE_NIL is guaranteed to be 'less than' any valid instance_handle. So the use of the parameter value previous_handle==HANDLE_NIL will return the samples for the instance which has the smallest instance_handle among all the instances that contain available samples.
The behavior of the read_instance operation follows the same rules as the read operation regarding the pre-conditions and post-conditions for the data_values and sample_infos collections. Similar to read, the read_instance operation may 'loan' elements to the output collections, which must then be returned by means of return_loan.
The operation read_next_instance is intended to be used in an application-driven iteration where the application starts by passing previous_handle==HANDLE_NIL, examines the samples returned, and then uses the instance_handle returned in the SampleInfo as the value of the previous_handle argument to the next call to read_next_instance. The iteration continues until read_next_instance returns the value NO_DATA (a sketch of this iteration appears after this revised text).
The behavior of the read_next_instance operation follows the same rules as the read operation regarding the pre-conditions and post-conditions for the data_values and sample_infos collections. Similar to read, the read_next_instance operation may 'loan' elements to the output collections, which must then be returned by means of return_loan. Similar to read, this operation must be provided on the specialized class that is generated for the particular application data-type that is being taken.
If the DataReader has no samples that meet the constraints, the return value will be NO_DATA.
2.1.2.5.3.19 take_next_instance
This operation accesses a collection of Data values from the DataReader and 'removes' them from the DataReader.
This operation has the same behavior as read_next_instance except that the samples are 'taken' from the DataReader such that they are no longer accessible via subsequent 'read' or 'take' operations.
The behavior of the take_next_instance operation follows the same rules as the read operation regarding the pre-conditions and post-conditions for the data_values and sample_infos collections. Similar to read, the take_next_instance operation may 'loan' elements to the output collections, which must then be returned by means of return_loan. Similar to read, this operation must be provided on the specialized class that is generated for the particular application data-type that is being taken.
If the DataReader has no samples that meet the constraints, the return value will be NO_DATA.
2.1.2.5.3.20 read_next_instance_w_condition
This operation accesses a collection of Data values from the DataReader. The behavior is identical to read_next_instance except that all samples returned satisfy the specified condition. In other words, on success all returned samples belong to the same instance, and the instance is the instance with 'smallest' instance_handle among the ones that verify (a) instance_handle >= previous_handle and (b) have samples for which the specified ReadCondition evaluates to TRUE.
The behavior of the read_next_instance_w_condition operation follows the same rules as the read operation regarding the pre-conditions and post-conditions for the data_values and sample_infos collections. Similar to read, the read_next_instance_w_condition operation may 'loan' elements to the output collections, which must then be returned by means of return_loan. Similar to read, this operation must be provided on the specialized class that is generated for the particular application data-type that is being taken.
If the DataReader has no samples that meet the constraints, the return value will be NO_DATA.
2.1.2.5.3.21 take_next_instance_w_condition
This operation accesses a collection of Data values from the DataReader and 'removes' them from the DataReader.
This operation has the same behavior as read_next_instance_w_condition except that the samples are 'taken' from the DataReader such that they are no longer accessible via subsequent 'read' or 'take' operations.
The behavior of the take_next_instance_w_condition operation follows the same rules as the read operation regarding the pre-conditions and post-conditions for the data_values and sample_infos collections. Similar to read, the take_next_instance_w_condition operation may 'loan' elements to the output collections, which must then be returned by means of return_loan. Similar to read, this operation must be provided on the specialized class that is generated for the particular application data-type that is being taken.
If the DataReader has no samples that meet the constraints, the return value will be NO_DATA.
· Figure 2-8
· Add the additional operations to the implied interface FooDataReader
· Resulting figure is:
Changes in IDL
· 2.2.3 DCPS PSM : IDL
· Return codes. Add:
const ReturnCode_t RETCODE_NO_DATA = 11;
· struct ReliabilityQosPolicy
· Add field:
Duration_t max_blocking_time;
· interface DataWriter:
· Change operation name from "register_instance" to "register"
· Change operation name from "unregister_instance" to "unregister"
· Change operation name from "register_instance_w_timestamp" to "register_w_timestamp"
· Change operation name from "unregister_instance_w_timestamp" to "unregister_w_timestamp"
· interface FooDataWriter:
· Change operation name from "register_instance" to "register"
· Change operation name from "unregister_instance" to "unregister"
· Change operation name from "register_instance_w_timestamp" to "register_w_timestamp"
· Change operation name from "unregister_instance_w_timestamp" to "unregister_w_timestamp"
· interface DataReader
· Add (commented out) operations:
// ReturnCode_t read_next_sample(
//     inout Data received_data,
//     inout SampleInfo sample_info);
// ReturnCode_t take_next_sample(
//     inout Data received_data,
//     inout SampleInfo sample_info);
// ReturnCode_t read_instance(
//     inout DataSeq received_data,
//     inout SampleInfoSeq info_seq,
//     in long max_samples,
//     in InstanceHandle_t a_handle,
//     in SampleStateMask sample_states,
//     in ViewStateMask view_states,
//     in InstanceStateMask instance_states);
// ReturnCode_t take_instance(
//     inout DataSeq received_data,
//     inout SampleInfoSeq info_seq,
//     in long max_samples,
//     in InstanceHandle_t a_handle,
//     in SampleStateMask sample_states,
//     in ViewStateMask view_states,
//     in InstanceStateMask instance_states);
// ReturnCode_t read_instance_w_condition(
//     inout DataSeq received_data,
//     inout SampleInfoSeq info_seq,
//     in long max_samples,
//     in InstanceHandle_t a_handle,
//     in ReadCondition condition);
// ReturnCode_t take_instance_w_condition(
//     inout DataSeq received_data,
//     inout SampleInfoSeq info_seq,
//     in long max_samples,
//     in InstanceHandle_t a_handle,
//     in ReadCondition condition);
// ReturnCode_t read_next_instance(
//     inout DataSeq received_data,
//     inout SampleInfoSeq info_seq,
//     in long max_samples,
//     in InstanceHandle_t previous_handle,
//     in SampleStateMask sample_states,
//     in ViewStateMask view_states,
//     in InstanceStateMask instance_states);
// ReturnCode_t take_next_instance(
//     inout DataSeq received_data,
//     inout SampleInfoSeq info_seq,
//     in long max_samples,
//     in InstanceHandle_t previous_handle,
//     in SampleStateMask sample_states,
//     in ViewStateMask view_states,
//     in InstanceStateMask instance_states);
// ReturnCode_t read_next_instance_w_condition(
//     inout DataSeq received_data,
//     inout SampleInfoSeq info_seq,
//     in long max_samples,
//     in InstanceHandle_t previous_handle,
//     in ReadCondition condition);
// ReturnCode_t take_next_instance_w_condition(
//     inout DataSeq received_data,
//     inout SampleInfoSeq info_seq,
//     in long max_samples,
//     in InstanceHandle_t previous_handle,
//     in ReadCondition condition);
· interface FooDataReader
· Add operations:
DDS::ReturnCode_t read_next_sample(
    inout Foo received_data,
    inout DDS::SampleInfo sample_info);
DDS::ReturnCode_t take_next_sample(
    inout Foo received_data,
    inout DDS::SampleInfo sample_info);
DDS::ReturnCode_t read_instance(
    inout FooSeq received_data,
    inout DDS::SampleInfoSeq info_seq,
    in long max_samples,
    in DDS::InstanceHandle_t a_handle,
    in DDS::SampleStateMask sample_states,
    in DDS::ViewStateMask view_states,
    in DDS::InstanceStateMask instance_states);
DDS::ReturnCode_t take_instance(
    inout FooSeq received_data,
    inout DDS::SampleInfoSeq info_seq,
    in long max_samples,
    in DDS::InstanceHandle_t a_handle,
    in DDS::SampleStateMask sample_states,
    in DDS::ViewStateMask view_states,
    in DDS::InstanceStateMask instance_states);
DDS::ReturnCode_t read_instance_w_condition(
    inout FooSeq received_data,
    inout DDS::SampleInfoSeq info_seq,
    in long max_samples,
    in DDS::InstanceHandle_t a_handle,
    in DDS::ReadCondition condition);
DDS::ReturnCode_t take_instance_w_condition(
    inout FooSeq received_data,
    inout DDS::SampleInfoSeq info_seq,
    in long max_samples,
    in DDS::InstanceHandle_t a_handle,
    in DDS::ReadCondition condition);
DDS::ReturnCode_t read_next_instance(
    inout FooSeq received_data,
    inout DDS::SampleInfoSeq info_seq,
    in long max_samples,
    in DDS::InstanceHandle_t previous_handle,
    in DDS::SampleStateMask sample_states,
    in DDS::ViewStateMask view_states,
    in DDS::InstanceStateMask instance_states);
DDS::ReturnCode_t take_next_instance(
    inout FooSeq received_data,
    inout DDS::SampleInfoSeq info_seq,
    in long max_samples,
    in DDS::InstanceHandle_t previous_handle,
    in DDS::SampleStateMask sample_states,
    in DDS::ViewStateMask view_states,
    in DDS::InstanceStateMask instance_states);
DDS::ReturnCode_t read_next_instance_w_condition(
    inout FooSeq received_data,
    inout DDS::SampleInfoSeq info_seq,
    in long max_samples,
    in DDS::InstanceHandle_t previous_handle,
    in DDS::ReadCondition condition);
DDS::ReturnCode_t take_next_instance_w_condition(
    inout FooSeq received_data,
    inout DDS::SampleInfoSeq info_seq,
    in long max_samples,
    in DDS::InstanceHandle_t previous_handle,
    in DDS::ReadCondition condition);
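As an informal sketch (not part of the revised text), the application-driven iteration described for read_next_instance could look as follows under a hypothetical C++ binding of the FooDataReader IDL above:

// Sketch only: visit the available samples instance by instance.
DDS::InstanceHandle_t previous_handle = DDS::HANDLE_NIL;
FooSeq data;
DDS::SampleInfoSeq infos;
for (;;) {
    DDS::ReturnCode_t rc = reader->take_next_instance(
        data, infos, DDS::LENGTH_UNLIMITED, previous_handle,
        DDS::ANY_SAMPLE_STATE, DDS::ANY_VIEW_STATE,
        DDS::ANY_INSTANCE_STATE);
    if (rc == DDS::RETCODE_NO_DATA) break;   // no more instances with data
    // All entries returned in one pass belong to a single instance.
    previous_handle = infos[0].instance_handle;
    // ... process the samples ...
    reader->return_loan(data, infos);        // return loaned buffers, if any
}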
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6859: Ref-231 Provide_a_way_to_limit_count_returned_samples (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
It could be that the result of the read is many samples (100,000s).
This would be bad for the application.


***PROPOSAL***


No concrete proposal as it would be hard to represent in IDL but it
would be nice if such API was offered.

Resolution: see below
Revised Text: Resolution: The resolution of this issue is tied to that of issue 6853. The resolution of 6859 adds the means for the read/take operations to "loan" buffers from the Service. The resolution of 6853 adds a "return_loan" operation to return a "loan" acquired by read/take. This issue has been partially combined with 6853. The API to read/take is changed in two ways:
· Add an extra parameter to read and take that indicates the maximum number of samples.
· Provide the means to do "zero-copy" access from the Service. The Data and SampleInfo sequences that appear as arguments to read/take should be passed as "inout" rather than "out" and, depending on the characteristics of the input sequences, the Service can decide whether to copy the data or to loan the buffers.
Revised Text: Changes in PIM
· Section 2.1.2.5.3 DataReader Class
· DataReader table, for each of the operations "read", "take", "read_w_condition", and "take_w_condition":
· Parameter "data_values" changes from "out" to "inout"
· Parameter "sample_infos" changes from "out" to "inout"
· Add a 3rd parameter (after sample_infos): in max_samples : long
· Resulting operations in the DataReader table:
read ReturnCode_t
    inout: data_values Data []
    inout: sample_infos SampleInfo []
    max_samples long
    sample_states SampleStateKind []
    view_states ViewStateKind []
    instance_states InstanceStateKind []
take ReturnCode_t
    inout: data_values Data []
    inout: sample_infos SampleInfo []
    max_samples long
    sample_states SampleStateKind []
    view_states ViewStateKind []
    instance_states InstanceStateKind []
read_w_condition ReturnCode_t
    inout: data_values Data []
    inout: sample_infos SampleInfo []
    max_samples long
    a_condition ReadCondition
take_w_condition ReturnCode_t
    inout: data_values Data []
    inout: sample_infos SampleInfo []
    max_samples long
    a_condition ReadCondition
· FooDataReader table, for each of the operations "read", "take", "read_w_condition", and "take_w_condition":
· Parameter "data_values" changes from "out" to "inout"
· Parameter "sample_infos" changes from "out" to "inout"
· Add a 3rd parameter (after sample_infos): in max_samples : long
· Resulting operations in the FooDataReader table: identical to the four operations listed above, with data_values typed Foo [] instead of Data []
· Section 2.1.2.5.3.8 read
· Replace: "This operation accesses a collection of data samples from the DataReader. Depending on the setting of the PRESENTATION QoS policy (cf. Section 2.1.3.3), the operation will return either a 'list' of samples or else a single sample."
With: "This operation accesses a collection of Data values from the DataReader. The size of the returned collection will be limited to the specified max_samples. The properties of the data_values collection and the setting of the PRESENTATION QoS policy (cf. Section 2.1.3.4) may impose further limits on the size of the returned 'list'."
· Replace the paragraph: "In any case, the relative order between the samples of one instance is consistent with the DESTINATION_ORDER QosPolicy:"
With: "In any case, the relative order between the samples of one instance is consistent with the DESTINATION_ORDER QosPolicy:
· If DESTINATION_ORDER is BY_RECEPTION_TIMESTAMP, samples belonging to the same instances will appear in the relative order in which they were received (FIFO, earlier samples ahead of the later samples).
· If DESTINATION_ORDER is BY_SOURCE_TIMESTAMP, samples belonging to the same instances will appear in the relative order implied by the source_timestamp (FIFO, smaller values of source_timestamp ahead of the larger values).
In addition to the collection of samples, the read operation also uses a collection of SampleInfo structures (sample_infos); see Section 2.1.2.5.5.
The initial (input) properties of the data_values and sample_infos collections will determine the precise behavior of the read operation. For the purposes of this description the collections are modeled as having three properties: the current length (len), the maximum length (max_len), and whether the collection container owns the memory of the elements within (owns). PSM mappings that do not provide these facilities may need to change the signature of the read operation slightly to compensate for it.
The initial (input) values of the len, max_len, and owns properties for the data_values and sample_infos collections govern the behavior of the read operation, as specified by the following rules:
1. The values of len, max_len, and owns for the two collections must be identical. Otherwise read will fail and return PRECONDITION_NOT_MET.
2. On successful output, the values of len, max_len, and owns will be the same for both collections.
3. If the input max_len==0, then the data_values and sample_infos collections will be filled with elements that are 'loaned' by the DataReader. On output, owns will be FALSE, len will be set to the number of values returned, and max_len will be set to a value verifying max_len >= len. The use of this variant allows for zero-copy access to the data, and the application will need to "return the loan" to the DataReader using the return_loan operation (see Section 2.1.2.5.3.12).
4. If the input max_len>0 and the input owns==FALSE, then the read operation will fail and return PRECONDITION_NOT_MET. This avoids the potential hard-to-detect memory leaks caused by an application forgetting to "return the loan".
5. If the input max_len>0 and the input owns==TRUE, then the read operation will copy the Data values and SampleInfo values into the elements already inside the collections. On output, owns will be TRUE, len will be set to the number of values copied, and max_len will remain unchanged. The use of this variant forces a copy, but the application can control where the copy is placed and the application will not need to "return the loan". The number of samples copied depends on the relative values of max_len and max_samples:
· If max_samples = LENGTH_UNLIMITED, then at most max_len values will be copied. The use of this variant lets the application limit the number of samples returned to what the sequence can accommodate.
· If max_samples <= max_len, then at most max_samples values will be copied. The use of this variant lets the application limit the number of samples returned to fewer than what the sequence can accommodate.
· If max_samples > max_len, then the read operation will fail and return PRECONDITION_NOT_MET. This avoids the potential confusion where the application expects to be able to access up to max_samples samples, but that number can never be returned, even if they are available in the DataReader, because the output sequence cannot accommodate them.
As described above, upon return the data_values and sample_infos collections may contain elements 'loaned' from the DataReader. If this is the case, the application will need to use the return_loan operation (see Section 2.1.2.5.3.12) to return the 'loan' once it is no longer using the Data in the collection. Upon return from return_loan, the collection will have max_len=0 and owns=FALSE.
The application can determine whether it is necessary to 'return the loan' or not based on the state of the collections when the read operation was called, or by accessing the 'owns' property. However, in many cases it may be simpler to always call return_loan, as this operation is harmless (i.e., it leaves all elements unchanged) if the collection does not have a loan.
To avoid potential memory leaks, the implementation of the Data and SampleInfo collections should disallow changing the length of a collection for which owns==FALSE. Furthermore, deleting a collection for which owns==FALSE should be considered an error.
On output, the collection of Data values and the collection of SampleInfo structures are of the same length and are in one-to-one correspondence. Each SampleInfo provides information, such as the source_timestamp, the sample_state, view_state, and instance_state, about the corresponding sample.
Some elements in the returned collection may not have valid data. If the view_state in the SampleInfo is DISPOSED_EXPLICIT or DISPOSED_NO_WRITERS, then the last sample for that instance in the collection, that is, the one whose SampleInfo has sample_rank==0, does not contain valid data. Samples that contain no data do not count towards the limits imposed by the RESOURCE_LIMITS QoS policy."
· Section 2.1.2.5.3.9 take
· 3rd paragraph, replace: "Similar to read, the collection of SampleInfo is on one-to-one correspondence with the collection of Samples. Furthermore, If the view_state in the SampleInfo is DISPOSED_EXPLICIT or DISPOSED_NO_WRITERS, then the last sample for that instance in the collection, that is the one whose SampleInfo has sample_rank==0 does not contain valid data. As was the case for read, samples with no data do not count towards the limits imposed by the RESOURCE_LIMITS QoS policy."
With: "The behavior of the take operation follows the same rules as the read operation regarding the pre-conditions and post-conditions for the data_values and sample_infos collections. Similar to read, the take operation may 'loan' elements to the output collections, which must then be returned by means of return_loan. The only difference with read is that, as stated, the samples returned by take will no longer be accessible to successive calls to read or take."
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Interface DataReader
· Change the signature of the (commented) operation "read" from:
// ReturnCode_t read(out DataSeq received_data,
//     out SampleInfoSeq info_seq,
//     in SampleStateMask sample_states,
//     in ViewStateMask view_states,
//     in InstanceStateMask instance_states);
To:
// ReturnCode_t read(inout DataSeq received_data,
//     inout SampleInfoSeq info_seq,
//     in long max_samples,
//     in SampleStateMask sample_states,
//     in ViewStateMask view_states,
//     in InstanceStateMask instance_states);
· Change the signature of the (commented) operation "take" from:
// ReturnCode_t take(out DataSeq received_data,
//     out SampleInfoSeq info_seq,
//     in SampleStateMask sample_states,
//     in ViewStateMask view_states,
//     in InstanceStateMask instance_states);
To:
// ReturnCode_t take(inout DataSeq received_data,
//     inout SampleInfoSeq info_seq,
//     in long max_samples,
//     in SampleStateMask sample_states,
//     in ViewStateMask view_states,
//     in InstanceStateMask instance_states);
· Change the signature of the (commented) operation "read_w_condition" from:
// ReturnCode_t read_w_condition(
//     out DataSeq received_data,
//     out SampleInfoSeq info_seq,
//     in ReadCondition condition);
To:
// ReturnCode_t read_w_condition(
//     inout DataSeq received_data,
//     inout SampleInfoSeq info_seq,
//     in long max_samples,
//     in ReadCondition condition);
· Change the signature of the (commented) operation "take_w_condition" from:
// ReturnCode_t take_w_condition(
//     out DataSeq received_data,
//     out SampleInfoSeq info_seq,
//     in ReadCondition condition);
To:
// ReturnCode_t take_w_condition(
//     inout DataSeq received_data,
//     inout SampleInfoSeq info_seq,
//     in long max_samples,
//     in ReadCondition condition);
· Interface FooDataReader
· Change the signature of operation "read" from:
DDS::ReturnCode_t read(out FooSeq received_data,
    out DDS::SampleInfoSeq info_seq,
    in DDS::SampleStateMask sample_states,
    in DDS::ViewStateMask view_states,
    in DDS::InstanceStateMask instance_states);
To:
DDS::ReturnCode_t read(inout FooSeq received_data,
    inout DDS::SampleInfoSeq info_seq,
    in long max_samples,
    in DDS::SampleStateMask sample_states,
    in DDS::ViewStateMask view_states,
    in DDS::InstanceStateMask instance_states);
· Change the signature of operation "take" from:
DDS::ReturnCode_t take(out FooSeq received_data,
    out DDS::SampleInfoSeq info_seq,
    in DDS::SampleStateMask sample_states,
    in DDS::ViewStateMask view_states,
    in DDS::InstanceStateMask instance_states);
To:
DDS::ReturnCode_t take(inout FooSeq received_data,
    inout DDS::SampleInfoSeq info_seq,
    in long max_samples,
    in DDS::SampleStateMask sample_states,
    in DDS::ViewStateMask view_states,
    in DDS::InstanceStateMask instance_states);
· Change the signature of operation "read_w_condition" from:
DDS::ReturnCode_t read_w_condition(out FooSeq received_data,
    out DDS::SampleInfoSeq info_seq,
    in DDS::ReadCondition condition);
To:
DDS::ReturnCode_t read_w_condition(inout FooSeq received_data,
    inout DDS::SampleInfoSeq info_seq,
    in long max_samples,
    in DDS::ReadCondition condition);
· Change the signature of operation "take_w_condition" from:
DDS::ReturnCode_t take_w_condition(out FooSeq received_data,
    out DDS::SampleInfoSeq info_seq,
    in DDS::ReadCondition condition);
To:
DDS::ReturnCode_t take_w_condition(inout FooSeq received_data,
    inout DDS::SampleInfoSeq info_seq,
    in long max_samples,
    in DDS::ReadCondition condition);
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6861: [DDS ISSUE# 55] Rename DataType interface to TypeSupport (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-159 Rename_the_interface_DataType


The name DataType used in the PSM and IDL to refer to the interface
with the "register_type"operation from which the FooDataType derives
for each user-data type 'Foo' is causing confusion.


People think that the FooDataType actually represents the type of the
objects being propagated. In reality the type is 'Foo' and FooDataType
just provides the support to integrate 'Foo' with the middleware.


***PROPOSAL***


Rename DataType to TypeSupport and FooDataType to FooTypeSupport.
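For illustration only: under the proposed names, a registration call in a C++ binding would read as below. FooTypeSupportImpl is a hypothetical vendor-generated class; register_type itself is the operation discussed above.

    // 'Foo' remains the data type; FooTypeSupport merely integrates it
    // with the middleware, which is what the new name conveys.
    DDS::ReturnCode_t register_foo_type(DDS::DomainParticipant* participant)
    {
        FooTypeSupport* ts = new FooTypeSupportImpl();   // assumed vendor class
        return ts->register_type(participant, "Foo");
    }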

Resolution: See issue 6848 for disposition -- duplicate
Revised Text:
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6862: [DDS ISSUE# 56] Missing fields in builtin topics (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-??? Missing_fields_in_builtin_topics


It appears that the definition of the built-in topics in section 2.1.5 is
missing some fields.


Related to Ref-68


***PROPOSAL***


Add the missing fields


Define a concrete type for the BuiltinTopicKey_t.  Make it long[3]

Resolution: see below
Revised Text: Resolution: The built-in topics should allow access to the QoS of the remote entity. At a minimum, all QoS that is requested/offered must be accessible, as by definition it will be available at the other end. Other QoS should also appear in the built-in topics if it could be of use to the application to examine it. The FTF resolved that:
· The built-in Topic "DCPSTopic" should contain the key, the participant key, the name of the topic, the type of the Topic, and the following QoS: DurabilityQosPolicy, DeadlineQosPolicy, LatencyBudgetQosPolicy, LivelinessQosPolicy, ReliabilityQosPolicy, DestinationOrderQosPolicy, HistoryQosPolicy, ResourceLimitsQosPolicy, OwnershipQosPolicy.
· The built-in Topic "DCPSPublication" should contain the key, the name of the topic, the type of the Topic, and the following QoS: DurabilityQosPolicy, DeadlineQosPolicy, LivelinessQosPolicy, ReliabilityQosPolicy, UserDataQosPolicy, OwnershipStrengthQosPolicy, PresentationQosPolicy, PartitionQosPolicy.
· The built-in Topic "DCPSSubscription" should contain the key, the name of the topic, the type of the Topic, and the following QoS: DurabilityQosPolicy, DeadlineQosPolicy, LivelinessQosPolicy, ReliabilityQosPolicy, DestinationOrderQosPolicy, UserDataQosPolicy, TimeBasedFilterQosPolicy, PresentationQosPolicy, PartitionQosPolicy.
Revised Text: Changes in PIM
· Section 2.1.5 Built-in Topics
· After the table describing the QoS of the built-in Subscriber and DataReader objects:
· Replace the sentence: "The following tables describe those built-in topics as well as their contents"
With: "The information that is accessible about the remote entities by means of the built-in topics includes all the QoS policies that apply to the corresponding remote Entity. These QoS policies appear as normal 'data' fields inside the data read by means of the built-in Topic. Additional information is provided to identify the Entity and facilitate the application logic. The table below lists the built-in topics, their names, and the additional information--beyond the QoS policies that apply to the remote entity--that appears in the data associated with the built-in topic."
· Replace the table describing the contents of the data associated with the built-in Topics with the following table (columns: field name, type, meaning):

DCPSParticipant (entry created when a DomainParticipant object is created):
    key                 BuiltinTopicKey_t           DCPS key to distinguish entries
    user_data           UserDataQosPolicy           Policy of the corresponding DomainParticipant

DCPSTopic (entry created when a Topic object is created):
    key                 BuiltinTopicKey_t           DCPS key to distinguish entries
    name                string                      Name of the Topic
    type_name           string                      Name of the type attached to the Topic
    durability          DurabilityQosPolicy         Policy of the corresponding Topic
    deadline            DeadlineQosPolicy           Policy of the corresponding Topic
    latency_budget      LatencyBudgetQosPolicy      Policy of the corresponding Topic
    liveliness          LivelinessQosPolicy         Policy of the corresponding Topic
    reliability         ReliabilityQosPolicy        Policy of the corresponding Topic
    destination_order   DestinationOrderQosPolicy   Policy of the corresponding Topic
    history             HistoryQosPolicy            Policy of the corresponding Topic
    resource_limits     ResourceLimitsQosPolicy     Policy of the corresponding Topic
    ownership           OwnershipQosPolicy          Policy of the corresponding Topic

DCPSPublication (entry created when a DataWriter is created in association with its Publisher):
    key                 BuiltinTopicKey_t           DCPS key to distinguish entries
    participant_key     BuiltinTopicKey_t           DCPS key of the participant to which the DataWriter belongs
    topic_name          string                      Name of the related Topic
    type_name           string                      Name of the type attached to the related Topic
    durability          DurabilityQosPolicy         Policy of the corresponding DataWriter
    deadline            DeadlineQosPolicy           Policy of the corresponding DataWriter
    latency_budget      LatencyBudgetQosPolicy      Policy of the corresponding DataWriter
    liveliness          LivelinessQosPolicy         Policy of the corresponding DataWriter
    reliability         ReliabilityQosPolicy        Policy of the corresponding DataWriter
    user_data           UserDataQosPolicy           Policy of the corresponding DataWriter
    ownership_strength  OwnershipStrengthQosPolicy  Policy of the corresponding DataWriter
    presentation        PresentationQosPolicy       Policy of the Publisher to which the DataWriter belongs
    partition           PartitionQosPolicy          Policy of the Publisher to which the DataWriter belongs

DCPSSubscription (entry created when a DataReader is created in association with its Subscriber):
    key                 BuiltinTopicKey_t           DCPS key to distinguish entries
    participant_key     BuiltinTopicKey_t           DCPS key of the participant to which the DataReader belongs
    topic_name          string                      Name of the related Topic
    type_name           string                      Name of the type attached to the related Topic
    durability          DurabilityQosPolicy         Policy of the corresponding DataReader
    deadline            DeadlineQosPolicy           Policy of the corresponding DataReader
    latency_budget      LatencyBudgetQosPolicy      Policy of the corresponding DataReader
    liveliness          LivelinessQosPolicy         Policy of the corresponding DataReader
    reliability         ReliabilityQosPolicy        Policy of the corresponding DataReader
    destination_order   DestinationOrderQosPolicy   Policy of the corresponding DataReader
    user_data           UserDataQosPolicy           Policy of the corresponding DataReader
    time_based_filter   TimeBasedFilterQosPolicy    Policy of the corresponding DataReader
    presentation        PresentationQosPolicy       Policy of the Subscriber to which the DataReader belongs
    partition           PartitionQosPolicy          Policy of the Subscriber to which the DataReader belongs

Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· After the declaration of struct SubscriberQos add:
struct ParticipantBuiltinTopicData {
    BuiltinTopicKey_t key;
    UserDataQosPolicy user_data;
};
struct TopicBuiltinTopicData {
    BuiltinTopicKey_t key;
    string name;
    string type_name;
    DurabilityQosPolicy durability;
    DeadlineQosPolicy deadline;
    LatencyBudgetQosPolicy latency_budget;
    LivelinessQosPolicy liveliness;
    ReliabilityQosPolicy reliability;
    DestinationOrderQosPolicy destination_order;
    HistoryQosPolicy history;
    ResourceLimitsQosPolicy resource_limits;
    OwnershipQosPolicy ownership;
};
struct PublicationBuiltinTopicData {
    BuiltinTopicKey_t key;
    BuiltinTopicKey_t participant_key;
    string topic_name;
    string type_name;
    DurabilityQosPolicy durability;
    DeadlineQosPolicy deadline;
    LatencyBudgetQosPolicy latency_budget;
    LivelinessQosPolicy liveliness;
    ReliabilityQosPolicy reliability;
    UserDataQosPolicy user_data;
    OwnershipStrengthQosPolicy ownership_strength;
    PresentationQosPolicy presentation;
    PartitionQosPolicy partition;
};
struct SubscriptionBuiltinTopicData {
    BuiltinTopicKey_t key;
    BuiltinTopicKey_t participant_key;
    string topic_name;
    string type_name;
    DurabilityQosPolicy durability;
    DeadlineQosPolicy deadline;
    LatencyBudgetQosPolicy latency_budget;
    LivelinessQosPolicy liveliness;
    ReliabilityQosPolicy reliability;
    DestinationOrderQosPolicy destination_order;
    UserDataQosPolicy user_data;
    TimeBasedFilterQosPolicy time_based_filter;
    PresentationQosPolicy presentation;
    PartitionQosPolicy partition;
};
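As an illustration of how an application might consume these structs, here is a hedged C++ sketch that reads the DCPSPublication built-in topic. The typed reader name PublicationBuiltinTopicDataDataReader and its _narrow helper follow the usual IDL-to-C++ mapping conventions and are assumptions, as is the rest of the mapping:

    #include <iostream>

    void dump_remote_writers(DDS::DomainParticipant* participant)
    {
        DDS::Subscriber* builtin = participant->get_builtin_subscriber();
        DDS::DataReader* generic = builtin->lookup_datareader("DCPSPublication");
        DDS::PublicationBuiltinTopicDataDataReader* reader =
            DDS::PublicationBuiltinTopicDataDataReader::_narrow(generic);

        DDS::PublicationBuiltinTopicDataSeq data;
        DDS::SampleInfoSeq infos;
        if (reader->read(data, infos, DDS::LENGTH_UNLIMITED,
                         DDS::ANY_SAMPLE_STATE, DDS::ANY_VIEW_STATE,
                         DDS::ANY_INSTANCE_STATE) == DDS::RETCODE_OK) {
            for (unsigned int i = 0; i < data.length(); ++i) {
                // The remote DataWriter's QoS travels as plain data fields:
                std::cout << data[i].topic_name
                          << " reliability=" << data[i].reliability.kind
                          << std::endl;
            }
            reader->return_loan(data, infos);
        }
    }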
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6863: Ref-224 Built_in_topics_not_in_PSM (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
The built-in Topics are defined in the PIM but not in the PSM.


***PROPOSAL***


Add the definition to the IDL PSM in section 2.2.3


include structures containing the fields in the built-in topics
described in the table in section 2.1.5

Resolution: See issue 6862 for disposition -- duplicate
Revised Text:
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6864: [DDS ISSUE# 57] Clarify creation of waitset and conditions (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-72 Creation_of_waitset_and_guardcondition


The DDS spec already says they are not created from a factory because
they are intended to be base classes that will be extended by
application-defined classes. This makes them similar to the Listener
interfaces.


However, it would be desirable to be more explicit regarding this.


***PROPOSAL***


State in the PSM section that WaitSet and GuardCondition are to be
implemented as classes in the native language that can be created
using the "new" operator natural in the PSM. Furthermore, they should
have at least a constructor that takes no arguments so that
applications can be portable across implementations of the DDS spec.

Resolution: see below
Revised Text: Resolution: State in the PSM section that WaitSet and GuardCondition are to be implemented as classes in the native language that can be created using the "new" operator natural in the PSM. Furthermore, they should have at least a constructor that takes no arguments so that applications can be portable across implementations of the DDS spec.
Revised Text: Changes in PIM
· Section 2.2.2 PIM to PSM Mapping rules
· At the end of the section add the paragraph: "The classes that do not have factory operations, namely WaitSet and GuardCondition, are mapped to IDL interfaces. The intent is that they will be implemented as native classes on each of the implementation languages and that they will be constructed using the "new" operator natural for that language. Furthermore, the implementation language mapping should offer at least a constructor that takes no arguments, such that applications can be portable across different vendor implementations of this mapping."
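In a C++ PSM the mandated pattern would look like the following sketch; the ten-second timeout and the surrounding logic are illustrative only:

    void wait_for_wakeup()
    {
        // No factory involved: both objects come straight from 'new',
        // via the mandated no-argument constructors.
        DDS::WaitSet*        waitset = new DDS::WaitSet();
        DDS::GuardCondition* guard   = new DDS::GuardCondition();

        waitset->attach_condition(guard);

        // Elsewhere, another thread can wake the waiter with:
        //   guard->set_trigger_value(true);

        DDS::ConditionSeq active;
        DDS::Duration_t   timeout = {10, 0};   // 10 s, 0 ns
        DDS::ReturnCode_t rc = waitset->wait(active, timeout);
        // rc is RETCODE_TIMEOUT if no attached condition was triggered in time.
    }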
Actions taken:
December 23, 2003: received issue
September 23, 2004: closed issue

Issue 6867: ref-1032: User-provided oid (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
There is a need for the application to be able to provide the oid of an object at creation time.
That characteristic should be provided on a per-class basis.

Resolution: see below
Revised Text: Resolution: Add two alternative creation mechanisms:
· create_with_oid, when the application developer wishes to provide the oid value;
· create an object based on a filled ObjectRoot; in this case, a two-step process is easier:
· create an unregistered (i.e., with no oid) object;
· register the unregistered object (i.e., allocate the oid).
This change concerns the PIM (UML diagram and text) and the IDL.
Revised Text: Changes in PIM
· In section 3.1.6.3.5 ObjectHome
· In the table, list of operations:
· insert after the description of create_object the description of a new operation "create_object_with_oid", by means of the following entries:
create_object_with_oid ObjectRoot
    access CacheAccess
    oid DLRLOid
· after it, insert a new operation "create_unregistered_object", by means of the following entries:
create_unregistered_object ObjectRoot
    access CacheAccess
· after it, insert a third new operation "register_created_object", by means of the following entries:
register_created_object void
    unregistered_object ObjectRoot
· In the following text starting with "It offers methods to:"
· bullet #5, add the following sentence at the end: [create a new DLRL…creation] "; it raises an exception (ReadOnlyMode) if the CacheAccess is in READ_ONLY mode;"
· add a new bullet with the following text: "create a new DLRL object with a user-provided oid (create_object_with_oid); this operation takes as parameters the CacheAccess concerned by the creation as well as the allocated oid; it raises an exception (ReadOnlyMode) if the CacheAccess is in READ_ONLY mode and another exception (AlreadyExisting) if that oid has already been given;"
· add a new bullet with the following text: "pre-create a new DLRL object in order to fill its content before the allocation of the oid (create_unregistered_object); this method takes as parameter the CacheAccess concerned with this operation; it raises an exception (ReadOnlyMode) if the CacheAccess is in READ_ONLY mode;"
· add a new bullet with the following text: "register an object resulting from such a pre-creation (register_created_object); this operation embeds a logic to derive a suitable oid from the object content; only objects created by create_unregistered_object can be passed as parameter; the method raises an exception (BadParameter) if an attempt is made to pass another kind of object or if the object content is not suitable, and another exception (AlreadyExisting) if the result of the computation leads to an existing oid."
Changes in IDL
· Section 3.2.1.2.1 Generic DLRL Entities
· local interface ObjectHome
· Add operations:
ObjectRoot create_object_with_oid (
    in CacheAccess access,
    in DLRLOid oid)
    raises (
        ReadOnlyMode,
        AlreadyExisting);
ObjectRoot create_unregistered_object (
    in CacheAccess access)
    raises (
        ReadOnlyMode);
void register_created_object (
    in ObjectRoot unregistered_object)
    raises (
        AlreadyExisting,
        BadParameter);
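A hedged C++ sketch of the two new creation paths, assuming the DLRL module maps to a DLRL namespace and the IDL above maps one-to-one (error handling and the attribute-filling step are elided):

    void create_objects(DLRL::ObjectHome* home,
                        DLRL::CacheAccess* access,
                        const DLRL::DLRLOid& oid)
    {
        // Variant 1: the application supplies the oid itself.
        // Raises AlreadyExisting if that oid has already been given.
        DLRL::ObjectRoot* a = home->create_object_with_oid(access, oid);

        // Variant 2: pre-create, fill the content, then let the home
        // derive a suitable oid from that content.
        DLRL::ObjectRoot* b = home->create_unregistered_object(access);
        // ... set b's attributes here ...
        home->register_created_object(b);   // oid is allocated at this point
    }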
Actions taken:
December 19, 2003: received issue
September 23, 2004: closed issue

Issue 7022: ObjectHome index and name (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
For footprint reasons an ObjectHome is designated by its index in the Cache. For convenience, the index should be an attribute of the ObjectHome
Proposal [THALES]
add the index readonly attribute
return the allocated index when the home is registered
add an operation to retrieve the home based on the index and homogenize the names of the find operations
Concrete changes:
IDL
interface ObjectHome {
	...
	readonly attribute unsigned long registration_index;
	...
interface Cache {
	... 
	unsigned long register_home (
		in ObjectHome a_home)
		raises (
			BadHomeDefinition);
	ObjectHome find_home_by_name (
		in ClassName class_name)
		raises (
			BadParameter); 
	ObjectHome find_home_by_index (
		in unsigned long index)
		raises (
			BadParameter);
	...
In section 3.1.6.3.3 Cache
In the table
Change the return type for register_home operation from "void" to "integer"
Change the name of "find_home" to "find_home_by_name"
add the following entry:
find_home_by_index		ObjectHome	
	integer	registration_index
In the following text, starting with "It offers methods"
first bullet: add at the end "this method returns the index under which the ObjectHome is registered by the Cache;"
second bullet: change to "to retrieve an already registered ObjectHome, based on its name (find_home_by_name) or based on its index of registration (find_home_by_index)"
in section 3.1.6.3.5 ObjectHome
in the table
at the end of the attribute list, add the following entry
registration_index	integer
in the following text, in the list starting by "The public attributes gives"
add a last bullet "the index under which the ObjectHome has been registered by the Cache (cf. Cache::register_home)
Correct the UML diagram accordingly

Resolution: see below
Revised Text: Resolution: Add the attribute "index" on the ObjectHome and return the allocated index when the home is registered. Add an operation to retrieve the home based on the index and homogenize the names of the find operations. This change concerns the PIM (UML diagram and text) and the IDL.
Revised Text: Changes in PIM
· In section 3.1.6.3.3 Cache
· In the table:
· change the return type of the register_home operation from "void" to "integer"
· change the name of "find_home" to "find_home_by_name"
· add the following entry:
find_home_by_index ObjectHome
    registration_index integer
· In the following text, starting with "It offers methods":
· first bullet: add at the end "this method returns the index under which the ObjectHome is registered by the Cache;"
· second bullet: change to "to retrieve an already registered ObjectHome, based on its name (find_home_by_name) or based on its index of registration (find_home_by_index)"
· In section 3.1.6.3.5 ObjectHome
· in the table, at the end of the attribute list, add the following entry:
registration_index integer
· in the following text, in the list starting with "The public attributes gives":
· add a last bullet "the index under which the ObjectHome has been registered by the Cache (cf. Cache::register_home)"
Changes in IDL
· Section 3.2.1.2.1 Generic DLRL Entities
· interface ObjectHome
· Add:
readonly attribute unsigned long registration_index;
· interface Cache
· Replace:
void register_home (
    in ObjectHome a_home)
    raises (
        BadHomeDefinition);
ObjectHome find_home (
    in ClassName class_name)
    raises (
        BadParameter);
With:
unsigned long register_home (
    in ObjectHome a_home)
    raises (
        BadHomeDefinition);
ObjectHome find_home_by_name (
    in ClassName class_name)
    raises (
        BadParameter);
· Add:
ObjectHome find_home_by_index (
    in unsigned long index)
    raises (
        BadParameter);
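A short C++ sketch of the revised registration flow, under the same assumed mapping of the DLRL IDL ("Track" is the example class used elsewhere in this document; cache and track_home are presumed to exist):

    #include <cassert>

    void register_and_lookup(DLRL::Cache* cache, DLRL::ObjectHome* track_home)
    {
        unsigned long idx = cache->register_home(track_home);  // index now returned

        DLRL::ObjectHome* by_name  = cache->find_home_by_name("Track");
        DLRL::ObjectHome* by_index = cache->find_home_by_index(idx);
        assert(by_name == by_index);   // both lookups reach the same home

        // After registration: track_home->registration_index() == idx
    }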
Actions taken:
February 25, 2004: received issue
September 23, 2004: closed issue

Issue 7023: ObjectRoot::is_modified (clarification) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Clarification
Severity:
Summary:
Issue [THALES]
return of is_modified() on a newly created object is unspecified
Proposal [THALES]
should return false
concerns the text 
Concrete change
in section 3.1.6.3.11 ObjectRoot
in the text after the table, starting with "it offers methods to:"
last bullet, add the following sentence at the end "in case the object is newly created, this operation returns false."

Resolution: see below
Revised Text: Resolution: This operation returns FALSE in the case of a newly created object. This change only concerns the text. Revised Text: Changes in PIM
· in section 3.1.6.3.11 ObjectRoot,
· in the text after the table, starting with "it offers methods to:", last bullet, add the following sentence at the end: "in case the object is newly created, this operation returns false."
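Expressed as a C++ assertion, assuming is_modified takes the ObjectScope argument of the DLRL IDL (SIMPLE_OBJECT_SCOPE shown) and that create_object takes the CacheAccess as described in issue 6867:

    #include <cassert>

    void check_new_object(DLRL::ObjectHome* home, DLRL::CacheAccess* access)
    {
        DLRL::ObjectRoot* obj = home->create_object(access);
        // Clarified behavior: a newly created object is not "modified".
        assert(obj->is_modified(DLRL::SIMPLE_OBJECT_SCOPE) == false);
    }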
Actions taken:
February 25, 2004: received issue
September 23, 2004: closed issue

Issue 7024: New structure for DLRLOid (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
Just a long seems too small to express a DLRLOid
Proposal [THALES]
Change to a structure with 2 longs, one meant to identify the creator of the Oid and one local to this creator.
Concrete changes
IDL
struct DLRLOid {
	unsigned long creator_id;
	unsigned long local_id;
	};
	[instead typedef long DLRLOid]

Resolution: see below
Revised Text: Resolution: Change to a structure with 2 longs, one meant to identify the creator of the Oid and one local to this creator. This change only concerns the IDL. Revised Text: Changes in IDL
· Section 3.2.1.2.1 Generic DLRL Entities
· Replace:
typedef long DLRLOid;
With:
struct DLRLOid {
    unsigned long creator_id;
    unsigned long local_id;
};
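A minimal C++ illustration of the widened oid; the comparison helper is an application-side convenience under the assumed mapping, not part of the spec:

    // Two fields instead of one long: who allocated the oid, and a value
    // that only needs to be unique within that allocator.
    bool same_oid(const DLRL::DLRLOid& a, const DLRL::DLRLOid& b)
    {
        return a.creator_id == b.creator_id
            && a.local_id   == b.local_id;
    }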
Actions taken:
February 25, 2004: received issue
September 23, 2004: closed issue

Issue 7025: Naming of the private members (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
All private members should be named consistently, starting with m_
Concerns m_ref and m_refs in Relation
Proposal [THALES]
change the names
Concrete changes
IDL
valuetype RefRelation {
	private ObjectReference	m_ref;
		[instead ref]
valuetype ListRelation : ListBase {
	private ObjectReferenceSeq	m_refs;
		[instead refs]
valuetype StrMapRelation : StrMapBase {
	...
	private ItemSeq	m_refs;
		[instead refs]
valuetype IntMapRelation : IntMapBase {
	...
	private ItemSeq	m_refs;
		[instead refs]

Resolution: see below
Revised Text: Resolution: Change the names of the private members that do not follow this naming rule. This only concerns the IDL. Revised Text: Changes in IDL
· Section 3.2.1.2.1 Generic DLRL Entities
· valuetype RefRelation
· Replace: private ObjectReference ref; With: private ObjectReference m_ref;
· valuetype ListRelation
· Replace: private ObjectReferenceSeq refs; With: private ObjectReferenceSeq m_refs;
· valuetype StrMapRelation
· Replace: private ItemSeq refs; With: private ItemSeq m_refs;
· valuetype IntMapRelation
· Replace: private ItemSeq refs; With: private ItemSeq m_refs;
Actions taken:
February 25, 2004: received issue
September 23, 2004: closed issue

Issue 7026: clean_modified (in ObjectRoot, Relations...) (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
These operations are error-prone when they are called by the developer.
Proposal [THALES]
suppress them and specify that the internal cleaning of modifications is performed after the last call to listeners
Concerns the interfaces
ObjectRoot
CollectionBase (was CollectionOperations)
RefRelation (was ReferenceOperations)
Concrete changes:
IDL
suppress the following operations
ObjectRoot::clean_modified
CollectionBase::clean_modified
RefRelation::clean_modified
in section 3.1.6.3.11 ObjectRoot
in the table, suppress the 2 lines that describe the operation  clean_modified
in the following list of methods (starting with "it offers methods to") suppress the last bullet
in section 3.1.6.3.13 Reference 
in the table, suppress the 2 lines that describe the operation  clean_modified
in the following list of methods (starting with "it offers methods to") suppress the last bullet
in section 3.1.6.3.14 Collection
in the table, suppress the 2 lines that describe the operation  clean_modified
in the following list of methods (starting with "it offers methods to") suppress the last bullet
in section 3.1.6.4.1 General Scenario
in the last bullet, suppress the end of the sentence "(if not already done)"
Finally [..] of the updated objects are cleaned.

Resolution: see below
Revised Text: Resolution: Suppress them and specify that the internal cleaning of modifications is performed after the last call to listeners. This affects the interfaces ObjectRoot, CollectionBase (was CollectionOperations), and RefRelation (was ReferenceOperations). This change concerns the PIM (UML diagram and text) and the IDL.
Revised Text: Changes in IDL
· Section 3.2.1.2.1 Generic DLRL Entities
· suppress the following operations:
· ObjectRoot::clean_modified
· CollectionBase::clean_modified
· RefRelation::clean_modified
Changes in PIM
· in section 3.1.6.3.11 ObjectRoot
· in the table, suppress the 2 lines that describe the operation clean_modified
· in the following list of methods (starting with "it offers methods to") suppress the last bullet
· in section 3.1.6.3.13 Reference
· in the table, suppress the 2 lines that describe the operation clean_modified
· in the following list of methods (starting with "it offers methods to") suppress the last bullet
· in section 3.1.6.3.14 Collection
· in the table, suppress the 2 lines that describe the operation clean_modified
· in the following list of methods (starting with "it offers methods to") suppress the last bullet
· in section 3.1.6.4.1 General Scenario
· in the last bullet, suppress the end of the sentence "(if not already done)", to make: "Finally [..] of the updated objects are cleaned."
Actions taken:
February 25, 2004: received issue
September 23, 2004: closed issue

Issue 7057: New definition for ObjectFilter (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]: 
Selection could be slightly enhanced by telling the filter whether the object was previously a member.
proposal [THALES]
carry to the filter the fact that the object was previously part of the selection, so that it may return the result more quickly when appropriate
Concrete changes
IDL
enum MembershipState {
	UNDEFINED_MEMBERSHIP,
	ALREADY_MEMBER,
	NOT_MEMBER
	};

local interface ObjectFilter  {
	/*IMPLIED*
	boolean check_object (
		in Object 			an_object,
		in MembershipState		membership_state);
		[addition of this parameter]
	*IMPLIED*/
	};
in section 3.1.6.3.8 ObjectFilter
in the table, add a parameter to the check_object operation by adding the following entry:
	membership_state	enum MembershipState
in the following text, add the following text to the bullet
[check ... (check_object);] this method is called with as first parameter the object to be checked and as second parameter an indication whether the object previously passed the filter (membership_state); this last parameter (which is actually an enumeration with three possible values - UNDEFINED_MEMBERSHIP, ALREADY_MEMBER and NOT_MEMBER) is useful when the ObjectFilter is attached to a Selection to allow the writing of optimized filters.
ref-1051: New definition for Selections (clarification)
Issue [THALES]
As notification_scope has been removed from the ObjectHome (cf. issue ref-1049), it is necessary to pass the information whether the modification of objects inside a Selection should be evaluated on the object alone, or on the object plus its contained objects
clarification is requested regarding when the SelectionListener is activated
Proposal [THALES]
add a parameter concerns_contained when the Selection is created and add this characteristic in the selection attributes
add a paragraph to specify when the Selection Listener is activated
in SelectionListener::on_object_out, the ObjectRoot may no longer exist; therefore passing the ObjectReference is better
Concrete changes
in IDL
local interface Selection {
	...
	readonly attribute boolean concerns_contained;
		[addition]
	...
	};
local interface SelectionListener {
	...
	[the following method is no longer commented out, for it will not be redefined in the derived implied IDL]
	void on_object_out (
		in ObjectReference the_ref);
			[instead in ObjectRoot the_object]
	};
in section 3.1.6.3.7 Selection
in the table, attribute list, add the following entry (as third attribute)
concerns_contained	boolean
in the following text, starting with "It has the following attributes:"
add the following text at the end of the second bullet:
"; it is given at Selection creation time(cf. ObjectHome::create_selection)"
add a bullet in third position, with the following content:
"a boolean concerns_contained that indicates whether the Selection considers the modification of one of its members based on its content only (FALSE) or based on it content or the content of its contained objects (TRUE); it is given at Selection creation time(cf. ObjectHome::create_selection);"
add at the end of the section, the following paragraph:
"The SelectionListener is activated when the composition of the Selection is modified or when one of its members is modified. A member can be considered as modified, either only when it is itself modified or when itself or one of its contained objects is modifie (depending on the value of concerns_contained). Modifications in the Selection are considered with respects to the state of the Selection last time it was examined, i.e.:
add one bullet with the following text:
" at each incoming updates processing if autro_refresh is TRUE;"
add a second bullet with the following text:
"at each explicit call to refresh, if auto-refresh is FALSE."

Resolution: see below
Revised Text: Resolution: Add a parameter to the check_object operation to carry the fact that the object was previously part of the selection, so that it may return the result more quickly when appropriate. This change concerns the PIM (UML diagram and text) and the IDL.
Revised Text: Changes in PIM
· in section 3.1.6.3.8 ObjectFilter
· in the table, add a parameter to the check_object operation by adding the following entry:
membership_state enum MembershipState
· in the following text, add the following text to the bullet [check ... (check_object);]: "this method is called with as first parameter the object to be checked and as second parameter an indication whether the object previously passed the filter (membership_state); this last parameter (which is actually an enumeration with three possible values - UNDEFINED_MEMBERSHIP, ALREADY_MEMBER and NOT_MEMBER) is useful when the ObjectFilter is attached to a Selection to allow the writing of optimized filters."
Changes in IDL
· Section 3.2.1.2.1 Generic DLRL Entities
· Add:
enum MembershipState {
    UNDEFINED_MEMBERSHIP,
    ALREADY_MEMBER,
    NOT_MEMBER
};
· local interface ObjectFilter:
local interface ObjectFilter {
    /*IMPLIED*
    boolean check_object (
        in ObjectRoot an_object,
        in MembershipState membership_state);
    *IMPLIED*/
};
[addition of the membership_state parameter]
Changes in implied IDL
· Section 3.2.1.2.2 Implied IDL
· local interface FooFilter
· Replace:
boolean check_object (
    in Foo an_object);
With:
boolean check_object (
    in Foo an_object,
    in MembershipState membership_state);
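To illustrate the intended optimization, a hedged C++ sketch of a filter that short-circuits on the new parameter. TrackFilter, the Track accessors x()/y(), and the predicate are hypothetical; only the check_object signature follows the implied IDL above:

    class CheapTrackFilter : public TrackFilter
    {
    public:
        bool check_object(Track* an_object,
                          DLRL::MembershipState membership_state)
        {
            // Fast path: an object that already passed the filter and whose
            // cheap-to-test attribute still holds can be accepted at once.
            if (membership_state == DLRL::ALREADY_MEMBER
                && an_object->x() > 0.0) {
                return true;
            }
            return full_check(an_object);
        }

    private:
        bool full_check(Track* t)
        {
            // Stand-in for a more expensive predicate over several attributes.
            return t->x() > 0.0 && t->y() > 0.0;
        }
    };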
Actions taken:
March 1, 2004: received issue
September 23, 2004: closed issue

Issue 7058: Mapping DCPS-DLRL (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Currently the specified mapping is rather strict with respect to the mapping between the DLRLOid and the DCPS topic keys. When an existing DCPS schema is reused, more flexibility would be desirable.
In particular, there exist cases when there are no DCPS keys at all (should correspond to a singleton class)
In some circumstances, the indication of the class name, while needed in the main topic (to guess the type of the corresponding object), is not needed in the relations to objects of that type.

Resolution: see below
Revised Text: Resolution: Introduce the possibility to manage an explicit key to/from oid mapping and adapt the XML description of the mapping to support the case when the keys cannot transport the oid. This leads to introducing a new XML element "keyDescription", which describes the nature of the key with respect to the oid and allows for a variable number of key fields. In any case, this has no impact on the operations that can be performed on the DLRL objects. This change concerns the text, including the XML description (DTD).
Revised Text: Changes in PIM
· in section 3.1.4.2 Mapping Rules
· insert a new paragraph in third position with the following content: "Generally speaking, there exists some flexibility in designing the DCPS model that can be used to map a DLRL model. Nevertheless, there exist cases when the underlying DCPS model exists with no provision for storing the object references and no possibility to be modified. In that case however, the DCPS topics contain fields that allow instances to be uniquely identified (the keys). With some restrictions concerning inheritance, these models can also be mapped back into DLRL models. Section 3.1.4.5 is specifically dedicated to that issue."
· modify the last paragraph as follows: "The mapping rules when some flexibility is allowed in the DCPS model are as follows:"
· After section 3.1.4.4.5 MultiRelation, insert a new section: "3.1.4.5 Mapping when DCPS Model is Fixed"
· First paragraph with the following text: "In some occasions, it is desirable to map an existing DCPS model to DLRL. It is even desirable to mix in the same system participants that act at DCPS level with others that act at DLRL level. DLRL, by not imposing that the same object model be shared among all participants, is even designed to allow this last feature."
· Second paragraph with the following content: "In this case, it is possible to use the topic keys to identify the objects, but not to store the object references directly. Therefore the DLRL implementation must be told which topic fields are used to store the keys, so that it can manage behind the scenes the keys to/from oid association and perform the needed indirection."
· Third paragraph with the following content: "Because the object model remains local, this is feasible even if supporting inheritance between the applicative classes (beyond the primary inheritance between an applicative class and ObjectRoot) may be tricky. However, an existing DCPS model by construction is unlikely to rely heavily on inheritance between its 'classes'. Therefore such a mapping is supported."
· The next section is now numbered "3.1.4.6 How is this Mapping Indicated?" [instead of formerly 3.1.4.5]
· In section 3.2.3.3.1 Models Tags DTD
· Replace the full description of mainTopic with the following:
"<!ELEMENT mainTopic (keyDescription)>"
"<!ATTLIST mainTopic name CDATA #REQUIRED>"
· Replace the full description of extensionTopic with the following:
"<!ELEMENT extensionTopic (keyDescription)>"
"<!ATTLIST extensionTopic name CDATA #REQUIRED>"
· Replace the element description of monoRelation with the following:
"<!ELEMENT monoRelation (placeTopic?,keyDescription)>"
· Replace the element description of multiRelation with the following:
"<!ELEMENT multiRelation (multiPlaceTopic,keyDescription)>"
· Discard the now useless full description of valueKey, by removing the following lines:
"<!ELEMENT valueKey […] >"
"<!ATTLIST valueKey […]>"
· Replace the full description of placeTopic with the following:
"<!ELEMENT placeTopic (keyDescription)>"
"<!ATTLIST placeTopic name CDATA #REQUIRED>"
· Replace the full description of multiPlaceTopic with the following:
"<!ELEMENT multiPlaceTopic (keyDescription)>"
"<!ATTLIST multiPlaceTopic name CDATA #REQUIRED indexField CDATA #REQUIRED>"
· Add the full description of keyDescription, by inserting the following:
"<!ELEMENT keyDescription (keyField*)>"
"<!ATTLIST keyDescription content (FullOid | SimpleOid | NoOid) #REQUIRED>"
· Add the full description of keyField, by inserting the following:
"<!ELEMENT keyField (#PCDATA)>"
· In section 3.2.2.3.2.6 ClassMapping
· Second bullet, correct "sub-tab" into "sub-tag"
· In section 3.2.2.3.2.7 MainTopic
· Change the second paragraph to the following: "It comprises one attribute name that gives the name of the Topic and:"
· insert then one bullet with the following content: "a mandatory sub-tag keyDescription."
· Discard the rest of the section until the example.
· Replace the example with the following:
"<mainTopic name="TRACK-TOPIC">
	<keyDescription
	...
	</keyDescription>
</mainTopic>"
· Insert then a section "3.2.2.3.2.8 KeyDescription"
· First paragraph, with text: "This tag describes the key to be associated to several elements (mainTopic, extensionTopic, placeTopic and multiPlaceTopic)."
· Second paragraph, with text: "It comprises an attribute that describes the content of the keyDescription, which can be:"
· Followed by 3 bullets
· First bullet: "FullOid, in that case the key description should contain as first keyField the name of the Topic field used to store the class name and as second keyField the name of the Topic field used to store the OID itself;"
· Second bullet: "SimpleOid, in that case the key description should only contain one keyField to contain the OID itself;"
· Third bullet: "NoOid, in that case the key description should contain as many keyField elements as are needed to identify uniquely one row in the related Topic, and it is the responsibility of the DLRL implementation to manage the association between those fields and the DLRLOid as perceived by the application developer."
· Last paragraph as follows: "It also contains as many keyField elements as needed."
· Followed by an example, as follows:
"Example:
<keyDescription content="SimpleOid">
	<keyField>OID</keyField>
</keyDescription>"
· Then the sections 3.2.2.3.2.x, starting from 3.2.2.3.2.9 ExtensionTable, have their number increased by 1
· In section 3.2.2.3.2.10 MonoAttribute
· Change the example to the following:
"<monoAttribute name="y">
	<placeTopic name="Y_TOPIC">
		<keyDescription content="SimpleOid">
			<keyField>OID</keyField>
		</keyDescription>
	</placeTopic>
	<valueField>Y</valueField>
</monoAttribute>"
· In section 3.2.2.3.2.11 MultiAttribute
· Change the example to the following:
"<multiAttribute name="comments">
	<multiPlaceTopic name="COMMENTS-TOPIC" indexField="INDEX">
		<keyDescription content="FullOid">
			<keyField>CLASS</keyField>
			<keyField>OID</keyField>
		</keyDescription>
	</multiPlaceTopic>
	<valueField>COMMENT</valueField>
</multiAttribute>"
· In section 3.2.2.3.2.12 MonoRelation
· Last bullet, change the name of the sub-tag from "valueKey" to "keyDescription"
· Change the example to the following:
"<monoRelation name="a_radar">
	<keyDescription content="SimpleOid">
		<keyField>RADAR_OID</keyField>
	</keyDescription>
</monoRelation>"
· In section 3.2.2.3.2.13 MultiRelation
· Change the example to the following:
"<multiRelation name="tracks">
	<multiPlaceTopic name="RADARTRACKS-TOPIC" indexField="INDEX">
		<keyDescription content="SimpleOid">
			<keyField>RADAR-OID</keyField>
		</keyDescription>
	</multiPlaceTopic>
	<keyDescription content="FullOid">
		<keyField>TRACK-CLASS</keyField>
		<keyField>TRACK-OID</keyField>
	</keyDescription>
</multiRelation>"
· In section 3.2.3.3 XML Model Tags
· Change the first paragraph to the following: "The XML tags to drive the generation process could be as follows:"
· Change the full example to the following:
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE Dlrl SYSTEM "dlrl.dtd">
<Dlrl name="example">
	<templateDef name="StringStrMap" pattern="StrMap" itemType="string"/>
	<templateDef name="RadarRef" pattern="Ref" itemType="Radar"/>
	<templateDef name="TrackList" pattern="List" itemType="Track"/>
	<classMapping name="Track">
		<mainTopic name="TRACK-TOPIC">
			<keyDescription content="FullOid">
				<keyField>CLASS</keyField>
				<keyField>OID</keyField>
			</keyDescription>
		</mainTopic>
		<monoAttribute name="x">
			<valueField>X</valueField>
		</monoAttribute>
		<monoAttribute name="y">
			<placeTopic name="Y_TOPIC">
				<keyDescription content="FullOid">
					<keyField>CLASS</keyField>
					<keyField>OID</keyField>
				</keyDescription>
			</placeTopic>
			<valueField>Y</valueField>
		</monoAttribute>
		<multiAttribute name="comments">
			<multiPlaceTopic name="COMMENTS-TOPIC" indexField="INDEX">
				<keyDescription content="FullOid">
					<keyField>CLASS</keyField>
					<keyField>OID</keyField>
				</keyDescription>
			</multiPlaceTopic>
			<valueField>COMMENT</valueField>
		</multiAttribute>
		<monoRelation name="a_radar">
			<keyDescription content="SimpleOid">
				<keyField>RADAR_OID</keyField>
			</keyDescription>
		</monoRelation>
		<local name="w"/>
	</classMapping>
	<classMapping name="Track3D">
		<mainTopic name="TRACK-TOPIC">
			<keyDescription content="FullOid">
				<keyField>CLASS</keyField>
				<keyField>OID</keyField>
			</keyDescription>
		</mainTopic>
		<extensionTopic name="TRACK3D-TOPIC">
			<keyDescription content="FullOid">
				<keyField>CLASS</keyField>
				<keyField>OID</keyField>
			</keyDescription>
		</extensionTopic>
		<monoAttribute name="z">
			<valueField>Z</valueField>
		</monoAttribute>
	</classMapping>
	<classMapping name="Radar">
		<mainTopic name="RADAR-TOPIC">
			<keyDescription content="SimpleOid">
				<keyField>OID</keyField>
			</keyDescription>
		</mainTopic>
		<multiRelation name="tracks">
			<multiPlaceTopic name="RADARTRACKS-TOPIC" indexField="INDEX">
				<keyDescription content="SimpleOid">
					<keyField>RADAR-OID</keyField>
				</keyDescription>
			</multiPlaceTopic>
			<keyDescription content="FullOid">
				<keyField>TRACK-CLASS</keyField>
				<keyField>TRACK-OID</keyField>
			</keyDescription>
		</multiRelation>
	</classMapping>
	<associationDef>
		<relation class="Track" attribute="a_radar"/>
		<relation class="Radar" attribute="tracks"/>
	</associationDef>
</Dlrl>
Actions taken:
March 1, 2004: received issue
September 23, 2004: closed issue

Discussion:
Proposal [THALES]
Reconsider this point and introduce the possibility to manage an explicit key to/from oid mapping
Adapt the XML description of the mapping to support the case when the keys cannot transport the oid.
introduce a new element "keyDescription" which will describe the nature of the key with respect to the oid and that will allow for a variable number of key fields
In any case, this will have no impact on the operations that can be performed on the DLRL objects
Concrete changes
in section 3.1.4.2 Mapping Rules
Insert a new paragraph in third position with the following content:
"Generally speaking, there exist some flexibility in designing the DCPS model that can be used to map a DLRL model. Nevertheless, there exist cases when the underlying DCPS model exists with no provision for storing the object references and no possibility to be modified. In that case however, the DCPS topics contain fields that allow to uniquely identify instances (the keys). With some restrictions concerning inheritance, these models can also be mapped back into DLRL models. Section 3.1.4.5 is specifically dedicated to that issue."
Modify the last paragraph as follows:
"The mapping rules when some flexibility is allowed in DCPS model are as follows:"
After section 3.1.4.4.5 MultiRelation, insert a new section: "3.1.4.5	Mapping when DCPS Model is Fixed"
First paragraph with the following text:
"In some occasions, it is desirable to map an existing DCPS model to DLRL. It is even desirable to mix in the same system participants that act at DCPS level with other that acts at DLRL level. DLRL, by not imposing the same object model to be shared among all participants is even designed to allow this last feature."
Second paragraph with the following content:
"In this case, it is possible to use the topic keys to identify the objects, but not to store directly the object references. Therefore the DLRL implementation must be indicated the topic fields that are used to store the keys so it can  manage under the scenes the association keys to/from oid and perform the needed indirection."
Third paragraph with the following content:
"Because the object model remains local, this is generally feasible but cannot support inheritance between the applicative classes (beyond the primary inheritance between an applicative class and ObjectRoot).  However an exiting DCPS model will not, by construction, support inheritance between its 'classes'. Therefore such a mapping is supported."
The next section is now numbered " 3.1.4.6 How is this Mapping Indicated?" [instead formerly 3.1.4.5]
In section "3.2.3.3.1 Models Tags DTD
Replace the full description of mainTopic with the following:
"<!ELEMENT mainTopic (keyDescription)>"
"<!ATTLIST mainTopic name CDATA  #REQUIRED>"
Replace the full description of extensionTopic with the following:
"<!ELEMENT extensionTopic (keyDescription)>"
"<!ATTLIST extensionTopic name CDATA  #REQUIRED>"
Replace the element description of monoRelation by the following
"<!ELEMENT monoRelation (placeTopic?,keyDescription)>"
Replace the element description of  multiRelation by the following
"<!ELEMENT multiRelation (multiPlaceTopic,keyDescription)>"
Discard the now useless full description of  valueKey, by removing the following lines:
"<!ELEMENT valueKey […] >"
"<!ATTLIST valueKey […]>"
Replace the full description of placeTopic with the following
"<!ELEMENT placeTopic (keyDescription)>"
"<!ATTLIST placeTopic name CDATA  #REQUIRED>"
Replace the full description of multiPlaceTopic with the following
"<!ELEMENT multiPlaceTopic (keyDescription)>"
"<!ATTLIST multiPlaceTopic name CDATA  #REQUIRED
		      indexField CDATA  #REQUIRED>"
Add the full description of keyDescription, by inserting the following
"<!ELEMENT keyDescription (keyField*)>"
"<!ATTLIST keyDescription content (FullOid | SimpleOid | NoOid) #REQUIRED>"
Add the full description of keyField, by inserting the following
"<!ELEMENT keyField (#PCDATA)>"
In section 3.2.2.3.2.6 ClassMapping
Second bullet, correct "sub-tab" into "sub-tag"
In section 3.2.2.3.2.7 MainTopic
Change the second paragraph to the following
" It comprises one attribute name that gives the name of the Topic and:"
insert then one bullet with the following content:
"a mandatory sub-tag keyDescription."
Discard the rest of the section until the example.
Replace the example to the following:
"<mainTopic name="TRACK-TOPIC">
		<keyDescription 
		...
		</keyDescription>
	</mainTopic>"
Insert then a section "3.2.2.3.2.8 KeyDescription"
First paragraph, with text:
"This tag describes the key to be associated to several elements (mainTopic, extensionTopic, placeTopic and multiPlaceTopic)."
Second paragraph, with text:
"It comprises an attribute that describes the content of the keyDescription, that can be:"
Followed by 3 bullets
First bullet:
"FullOid, in that case, the key description should contain as first keyField the name of the Topic field used to store the class name and as second keyField the name of the Topic field  used to store the OID itself;"
Second  bullet:
"SimpleOid, in that case the key description should only contain one keyField to contain the OID itself;"
Third bullet:
"NoOid, in that case the case description should contain as many keyField that are needed to identify uniquely one row in the related Topic and it is the responsibility of the DLRL implementation to manage the association between those fields and the DLRLOid as perceived by the application developer."
Last paragraph as follows:
"It contains also as many elements keyField as needed."
Followed by an example, as follows:
"Example:
<keyDescription content="SimpleOid">
	<keyField>OID</keyField>
</keyDescription>"
Then sections 3.2.2.3.2.x, starting from 3.2.2.3.2.9 ExtensionTopic, have their numbers increased by 1
In section 3.2.2.3.2.10 MonoAttribute
Change the example to the following
"<monoAttribute name="y">
       <placeTopic name="Y_TOPIC">
	       <keyDescription content="SimpleOID">
			<keyField>OID</keyField>
	       </keyDescription>
	</placeTopic>
      <valueField>Y</valueField>
</monoAttribute>"
In section 3.2.2.3.2.11 MultiAttribute
Change the example to the following:
"<multiAttribute name="comments">
         <multiPlaceTopic name="COMMENTS-TOPIC"
                 <keyDescription content="FullOID">
		         <keyField>CLASS</keyField>
		         <keyField>OID</keyField>
		</keyDescription>
	     </multiPlaceTopic>
     	     <valueField>COMMENT</valueField>
</multiAttribute>"
In section 3.2.2.3.2.12 MonoRelation
Last bullet, change the name of the sub-tag from "valueKey" to "keyDescription"
Change the example to the following:
"<multiAttribute name="comments">
	<multiPlaceTopic name="COMMENTS-TOPIC"
		<keyDescription content="FullOID">
			<keyField>CLASS</keyField>
			<keyField>OID</keyField>
		</keyDescription>
	</multiPlaceTopic>
     	<valueField>COMMENT</valueField>
</multiAttribute>"
In section 3.2.2.3.2.13 MultiRelation
Change the example to the following:
"<multiRelation name="tracks">
	<multiPlaceTopic name="RADARTRACKS-TOPIC"
	         <keyDescription content="SimpleOID">
		     <keyField>RADAR-OID</keyField> 
	         </keyDescription>
	<\multiPlaceTopic>
	<keyDescription content="FullSimpleOID">
	         <keyField>TRACK-CLASS</keyField> 
	         <keyField>TRACK-OID</keyField>
	</keyDescription>
</multiRelation>
In section 3.2.3.3 XML Model Tags
Change the full example to the following
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE Dlrl SYSTEM "dlrl.dtd">
<Dlrl name="example">
   <templateDef name="StringStrMap" pattern="StrMap" itemType="string"/>
   <templateDef name="RadarRef" pattern="Ref" itemType="Radar"/>
   <templateDef name="TrackList" pattern="List" itemType="Track"/>
   <classMapping name="Track">
      <mainTopic name="TRACK-TOPIC">
         <keyDescription content="FullOid">
            <keyField>CLASS</keyField>
            <keyField>OID</keyField>
         </keyDescription>
      </mainTopic>
      <monoAttribute name="x">
         <valueField>X</valueField>
      </monoAttribute>
      <monoAttribute name="y">
         <placeTopic name="Y_TOPIC">
            <keyDescription content="FullOid">
               <keyField>CLASS</keyField>
               <keyField>OID</keyField>
            </keyDescription>
         </placeTopic>
         <valueField>Y</valueField>
      </monoAttribute>
      <multiAttribute name="comments">
         <multiPlaceTopic name="COMMENTS-TOPIC" indexField="INDEX">
            <keyDescription content="FullOid">
               <keyField>CLASS</keyField>
               <keyField>OID</keyField>
            </keyDescription>
         </multiPlaceTopic>
         <valueField>COMMENT</valueField>
      </multiAttribute>
      <monoRelation name="a_radar">
         <keyDescription content="SimpleOid">
            <keyField>RADAR_OID</keyField>
         </keyDescription>
      </monoRelation>
      <local name="w"/>
   </classMapping>
   <classMapping name="Track3D">
      <mainTopic name="TRACK-TOPIC">
         <keyDescription content="FullOid">
            <keyField>CLASS</keyField>
            <keyField>OID</keyField>
         </keyDescription>
      </mainTopic>
      <extensionTopic name="TRACK3D-TOPIC">
         <keyDescription content="FullOid">
            <keyField>CLASS</keyField>
            <keyField>OID</keyField>
         </keyDescription>
      </extensionTopic>
      <monoAttribute name="z">
         <valueField>Z</valueField>
      </monoAttribute>
   </classMapping>
   <classMapping name="Radar">
      <mainTopic name="RADAR-TOPIC">
         <keyDescription content="SimpleOid">
            <keyField>OID</keyField>
         </keyDescription>
      </mainTopic>
      <multiRelation name="tracks">
         <multiPlaceTopic name="RADARTRACKS-TOPIC" indexField="INDEX">
            <keyDescription content="SimpleOid">
               <keyField>RADAR-OID</keyField>
            </keyDescription>
         </multiPlaceTopic>
         <keyDescription content="FullOid">
            <keyField>TRACK-CLASS</keyField>
            <keyField>TRACK-OID</keyField>
         </keyDescription>
      </multiRelation>
   </classMapping>
   <associationDef>
      <relation class="Track" attribute="a_radar"/>
      <relation class="Radar" attribute="tracks"/>
   </associationDef>
</Dlrl>



Issue 7059: clone + deref (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
ref-1041: clone + deref
Issue [THALES]
For performance, it seems good to offer a combined operation clone + deref that returns the instantiated cell instead of the ObjectReference
Proposal [THALES]
Add such a method on ObjectRoot and also in the implied IDL
Concrete changes:
IDL
ObjectRoot {
	...
	ObjectRoot clone_object(
		in CacheAccess access,
		in ObjectScope scope,
		in RelatedObjectDepth depth)
		raises (
			ReadOnlyMode,
			AlreadyClonedInWriteMode);
Foo {
	...
	Foo clone_foo(
		in CacheAccess access,
		in ObjectScope scope,
		in RelatedObjectDepth depth)
		raises (
			ReadOnlyMode,
			AlreadyClonedInWriteMode);
in section 3.1.6.3.11 ObjectRoot
in the table, add the clone_object operation, by adding the following:
clone_object		ObjectRoot
	access	CacheAccess
	scope	ObjectScope
	depth	integer
in the following text, in the list starting with "it offers methods to":
insert a second bullet, with the following contents:
"clone and instantiate the object in the same operation (clone_object), this operations takes the same parameters as the previous one, but returns an ObjectRoot instead of an ObjectReference; it corresponds exactly to the sequence of clone followed by CacheAccess::deref on the reference returned by the clone operation;"
in section 3.1.6.5.1 Read Mode
item #2, change "(ObjectRoot::clone)" to "(ObjectRoot::clone or clone_object)"
in section 3.1.6.5.2 Write Mode
item #2, change "(ObjectRoot::clone)" to "(ObjectRoot::clone or clone_object)"
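For illustration, the stated equivalence can be sketched in C++, assuming a hypothetical C++ mapping of the DLRL IDL (the header name, _var types, the SIMPLE_OBJECT_SCOPE constant and the Foo::_narrow helper are assumptions, not part of the proposal):

    #include "dlrl.h"  // assumed header for a C++ mapping of the DLRL IDL

    // clone followed by CacheAccess::deref on the returned reference...
    Foo_var clone_then_deref(Foo_var obj, CacheAccess_var access)
    {
        ObjectReference_var ref =
            obj->clone(access, SIMPLE_OBJECT_SCOPE, 0);  // existing two-step form
        return Foo::_narrow(access->deref(ref));         // instantiate the clone
    }

    // ...is exactly what the proposed combined operation performs in one call:
    Foo_var clone_in_one_call(Foo_var obj, CacheAccess_var access)
    {
        return obj->clone_foo(access, SIMPLE_OBJECT_SCOPE, 0);
    }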

Resolution: see below
Revised Text: Resolution: Add such a method on ObjectRoot and also in the implied IDL. This change concerns the PIM (UML diagram and text) and the IDL.
Revised Text:
Changes in PIM
· in section 3.1.6.3.11 ObjectRoot
· in the table, add the clone_object operation, by adding the following:
clone_object		ObjectRoot
	access	CacheAccess
	scope	ObjectScope
	depth	integer
· in the following text, in the list starting with "it offers methods to", insert a second bullet, with the following contents:
"clone and instantiate the object in the same operation (clone_object); this operation takes the same parameters as the previous one, but returns an ObjectRoot instead of an ObjectReference; it corresponds exactly to the sequence of clone followed by CacheAccess::deref on the reference returned by the clone operation;"
· in section 3.1.6.5.1 Read Mode
· item #2, change "(ObjectRoot::clone)" to "(ObjectRoot::clone or clone_object)"
· in section 3.1.6.5.2 Write Mode
· item #2, change "(ObjectRoot::clone)" to "(ObjectRoot::clone or clone_object)"
Changes in IDL
· Section 3.2.1.2.1 Generic DLRL Entities
· interface ObjectRoot
· Add
ObjectRoot clone_object(
	in CacheAccess access,
	in ObjectScope scope,
	in RelatedObjectDepth depth)
	raises (
		ReadOnlyMode,
		AlreadyClonedInWriteMode);
Changes in implied IDL
· Section 3.2.1.2.2 Implied IDL
· Interface Foo
· Add:
Foo clone_foo(
	in CacheAccess access,
	in ObjectScope scope,
	in RelatedObjectDepth depth)
	raises (
		ReadOnlyMode,
		AlreadyClonedInWriteMode);
Actions taken:
February 26, 2004: received issue
September 23, 2004: closed issue

Issue 7060: Several instead one listener (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Issue [THALES]
Currently there is only one listener per ObjectHome and per Cache. Users have considered this fairly limited and, given that several Selections are allowed, unjustified
Proposal [THALES]
Allow several listeners. 
Specify the triggering order
Concrete changes
IDL
local interface Selection {
	...
	SelectionListener set_listener (
		in SelectionListener listener);
	[instead attach_listener in the commented-out section]
	...
local interface Cache {
	...
	readonly attribute CacheListenerSeq listeners;
	[instead readonly attribute CacheListener listener]
	...
	void detach_listener (
		in CacheListener listener);
		[instead no parameter]
	...
local interface ObjectHome {
	...
	readonly attribute ObjectListenerSeq listeners;
	[instead readonly attribute ObjectListener listener]
	... 
	void detach_listener (
		in ObjectListener listener);
		[instead no parameter]
	...
in section 3.1.6.3.3 Cache
in the table
list of attributes replace the entry for listener by the following
listeners	CacheListener []
list of operations, replace the entry for detach_listener by the following
detach_listener		void
	listener	CacheListener
in section 3.1.6.3.5 ObjectHome
in the table
list of attributes replace the entry for listener by the following
listeners	ObjectListener []
list of operations, correct the entry for attach_listener by the following (ObjectListener instead of Listener)
attach_listener		void
	listener	ObjectListener
list of operations, replace the entry for detach_listener by the following
detach_listener		void
	listener	ObjectListener
in section 3.1.6.3.7 Selection
in the table, change the entries for attach_listener and detach_listener to only one operation set_listener, as follows:
set_listener		SelectionListener
	listener	SelectionListener
in the following text, in the list starting with "it offers the methods to:"
change the first bullet to:
"set the listener (set_listener) that will be triggered when the composition of the selection changes as well as if one of its members is modified; set_listener returns the previously set listener if any; set_listener, called with a NULL parameter discards the current listener."
in section 3.1.6.4.1 General Scenario
In the list starting with "This set of updates is managed as follows"
change the first bullet to:
"First, all the CacheListener::start_updates operations are triggered; the order in which these listeners are triggered is not specified"
change the last bullet to:
"Finally all the CacheListener::end_updates operations are triggered and the modification state of the object are cleaned; the order in which these listeners are triggered is not specified."
in section 3.1.6.4.2 Object Creation
In the list
change the first bullet to:
"First, the ObjectListener suitable to that object are searched and their on_object_created operations triggered; the search follows the inheritance structure starting from the more specific ObjectHome (e.g., FooHome, for an object typed Foo) to ObjectRoot. The search is stopped when all on_object_created operations at one level return true; inside one level the triggering order is not specified."
in section 3.1.6.4.3 ObjectModification
In the list
change the last bullet to:
"Then, the ObjectListener suitable to that object are searched and their on_object_modified operations triggered; the search follows the inheritance structure starting from the more specific ObjectHome (e.g., FooHome, for an object typed Foo) to ObjectRoot. The search is stopped when all on_object_modified operations at one level return true; inside one level the triggering order is not specified."
in section 3.1.6.4.4 ObjectDeletion
In the list
change the last bullet to:
"the ObjectListener suitable to that object are searched and their on_object_deleted operations triggered; the search follows the inheritance structure starting from the more specific ObjectHome (e.g., FooHome, for an object typed Foo) to ObjectRoot. The search is stopped when all on_object_deleted operations at one level return true; inside one level the triggering order is not specified."

Resolution: see below
Revised Text: Resolution: Allow for several ObjectListener and CacheListener to be attached. This change concerns the PIM (UML diagram and text) and the IDL.
Revised Text:
Changes in PIM
· in section 3.1.6.3.3 Cache
· in the table
· list of attributes, replace the entry for listener by the following
listeners	CacheListener []
· list of operations, replace the entry for detach_listener by the following
detach_listener		void
	listener	CacheListener
· in section 3.1.6.3.5 ObjectHome
· in the table
· list of attributes, replace the entry for listener by the following
listeners	ObjectListener []
· list of operations, correct the entry for attach_listener by the following (ObjectListener instead of Listener)
attach_listener		void
	listener	ObjectListener
· list of operations, replace the entry for detach_listener by the following
detach_listener		void
	listener	ObjectListener
· in section 3.1.6.3.6 ObjectListener
· next to last paragraph, change "will not propagated" to "does not need to be propagated", to make the following sentence: "Each of these methods must return a boolean. TRUE means that the event has been fully taken into account and therefore does not need to be propagated to other ObjectListener objects (of parent classes)."
· in section 3.1.6.3.7 Selection
· in the table, change the entries for attach_listener and detach_listener to only one operation set_listener, as follows:
set_listener		SelectionListener
	listener	SelectionListener
· in the following text, in the list starting with "it offers the methods to:"
· change the first bullet to:
"set the listener (set_listener) that will be triggered when the composition of the selection changes as well as if one of its members is modified; set_listener returns the previously set listener if any; set_listener, called with a NULL parameter, discards the current listener."
· in section 3.1.6.4.1 General Scenario
· in the list starting with "This set of updates is managed as follows"
· change the first bullet to:
"First, all the CacheListener::start_updates operations are triggered; the order in which these listeners are triggered is not specified"
· change the last bullet to:
"Finally all the CacheListener::end_updates operations are triggered and the modification state of the objects is cleaned; the order in which these listeners are triggered is not specified."
· in section 3.1.6.4.2 Object Creation
· in the list
· change the first bullet to:
"First, the ObjectListeners suitable for that object are searched and their on_object_created operations triggered; the search follows the inheritance structure starting from the more specific ObjectHome (e.g., FooHome, for an object typed Foo) to ObjectRoot. The search is stopped when all on_object_created operations at one level return true; inside one level the triggering order is not specified."
· in section 3.1.6.4.3 ObjectModification
· in the list
· change the last bullet to:
"Then, the ObjectListeners suitable for that object are searched and their on_object_modified operations triggered; the search follows the inheritance structure starting from the more specific ObjectHome (e.g., FooHome, for an object typed Foo) to ObjectRoot. The search is stopped when all on_object_modified operations at one level return true; inside one level the triggering order is not specified."
· in section 3.1.6.4.4 ObjectDeletion
· in the list
· change the last bullet to:
"the ObjectListeners suitable for that object are searched and their on_object_deleted operations triggered; the search follows the inheritance structure starting from the more specific ObjectHome (e.g., FooHome, for an object typed Foo) to ObjectRoot. The search is stopped when all on_object_deleted operations at one level return true; inside one level the triggering order is not specified."
Changes in IDL
· Section 3.2.1.2.1 Generic DLRL Entities
· local interface Selection
· Replace:
void attach_listener (
	in SelectionListener listener);
void detach_listener ();
With:
SelectionListener set_listener (
	in SelectionListener listener);
· local interface Cache
· Replace
readonly attribute CacheListener listener;
void detach_listener ();
With:
readonly attribute CacheListenerSeq listeners;
void detach_listener (
	in CacheListener listener);
· local interface ObjectHome
· Replace
readonly attribute ObjectListener listener;
void detach_listener ();
With:
readonly attribute ObjectListenerSeq listeners;
void detach_listener (
	in ObjectListener listener);
Actions taken:
February 26, 2004: received issue
September 23, 2004: closed issue

Issue 7061: delete clone (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
ref-1046: delete_clone
Issue [THALES]
It would be useful to remove a single clone from a CacheAccess, in addition to the existing deletion of all the clones.
Proposal [THALES]
add an operation on CacheAccess to delete a clone
should concern the object itself + all its components, but not the related objects (regardless of what the clone request was)
Concrete changes
IDL
local interface CacheAccess {
	...
	void delete_clone(
		in ObjectReference ref);
	...
in section 3.1.6.3.2 CacheAccess
in the table, after the purge operation, insert the delete_clone operation, by inserting the following entries:
delete_clone		void
	ref	ObjectReference
in the following text, 
introduce a bullet, before the last one, with the following content:
"alternatively, the copy of one object and all its attached contained objects can be detached from the CacheAccess (delete_clone);"


Resolution: see below
Revised Text: Resolution: Add an operation on CacheAccess to delete a clone and all its components. This change concerns the PIM (UML diagram and text) and the IDL.
Revised Text:
Changes in PIM
· in section 3.1.6.3.2 CacheAccess
· in the table, after the purge operation, insert the delete_clone operation, by inserting the following entries:
delete_clone		void
	ref	ObjectReference
· in the following text, introduce a bullet, before the last one, with the following content:
"alternatively, the copy of one object and all its attached contained objects can be detached from the CacheAccess (delete_clone);"
Changes in IDL
· Section 3.2.1.2.1 Generic DLRL Entities
· local interface CacheAccess
· Add operation:
void delete_clone(
	in ObjectReference ref);
Actions taken:
February 26, 2004: received issue
September 23, 2004: closed issue

Issue 7062: New definition for ObjectListener (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
ref-1049: New definition for ObjectListener
Issue [THALES]
When a contained object is modified, it can be difficult to get the information in the ObjectListener.
Getting the old value seems useful in certain cases.
As first parameter, the ObjectReference is more appropriate than the ObjectRoot, for the object may not be instantiated; in addition, this simplifies the design because it is no longer mandatory to generate the implied FooListener.
Whether the notification concerns the object alone, or the object + its contained objects, should be set on a Listener basis rather than on the ObjectHome basis.

Resolution: see below
Revised Text: Resolution: Change slightly the definition of listeners, by passing the ObjectReference instead of the object itself, adding the old value when appropriate, and adding a method on ObjectRoot that returns access to the contained modified objects. Specify at each ObjectListener attachment whether it should apply only to the object or to the object + its contained ones, and discard the attribute notification_scope on ObjectHome. This change concerns the PIM (UML diagram and text) and the IDL.
Revised Text:
Changes in PIM
· in section 3.1.6.3.5 ObjectHome
· in the table,
· in the attribute list, discard the entry for the attribute notification_scope
· in the operations list, add a new parameter to the attach_listener entry; attach_listener is now as follows:
attach_listener		void
	listener	ObjectListener
	concerns_contained_objects	boolean
· in the following list starting with "the public attributes give":
· discard the third bullet ("the scope to [...] has changed.")
· in the following list starting with "it offers methods to":
· append to the second bullet ("attach/detach [...] detach_listener);"), the following sentence:
"when a listener is attached, a boolean parameter specifies, when set to TRUE, that the listener should listen also for the modifications of the contained objects (concerns_contained_objects);"
· in section 3.1.6.3.6 ObjectListener
· in the table, change the list of operations to the following
on_object_created		boolean
	ref	ObjectReference
on_object_modified		boolean
	ref	ObjectReference
	old_value	ObjectRoot
on_object_deleted		boolean
	ref	ObjectReference
· in the following text, starting with "It is defined with three methods:"
· append at the end of the first bullet: "this operation is called with the ObjectReference of the newly created object (ref);"
· append at the end of the second bullet: "this operation is called with the ObjectReference of the newly deleted object (ref);"
· append at the end of the third bullet: "this operation is called with the ObjectReference of the modified object (ref) and its old value (old_value); the old value may be NULL;"
· in section 3.1.6.3.13 ObjectRoot
· in the table, list of operations
· insert a third operation, with the following information
which_contained_modified		RelationDescription []
· in the following text, add a bullet with the following text:
"get which contained objects have been modified (which_contained_modified); this method returns a list of descriptions for the relations that point to the modified objects (each description includes the name of the relation and if appropriate the index or key that corresponds to the modified contained object)."
Changes in IDL
· Section 3.2.1.2.1 Generic DLRL Entities
· Add
typedef string RelationName;
enum RelationKind {
	REF_RELATION,
	LIST_RELATION,
	INT_MAP_RELATION,
	STR_MAP_RELATION};
valuetype RelationDescription {
	public RelationKind kind;
	public RelationName name;
	};
valuetype ListRelationDescription : RelationDescription {
	public long index;
	};
valuetype IntMapRelationDescription : RelationDescription {
	public long key;
	};
valuetype StrMapRelationDescription : RelationDescription {
	public string key;
	};
typedef sequence<RelationDescription> RelationDescriptionSeq;
· local interface ObjectRoot
· Add operation:
RelationDescriptionSeq which_contained_modified ();
· local interface ObjectListener
· Replace commented-out operations
boolean on_object_created (
	in ObjectRoot the_object);
boolean on_object_modified (
	in ObjectRoot the_object);
boolean on_object_deleted (
	in ObjectRoot the_object);
With operations (no longer commented out)
boolean on_object_created(
	in ObjectReference ref);
boolean on_object_modified(
	in ObjectReference ref,
	in ObjectRoot old_value);
boolean on_object_deleted(
	in ObjectReference ref);
· local interface ObjectHome
· Remove
readonly attribute notification_scope;
· Replace commented-out operation
void attach_listener (
	in ObjectListener listener);
With operation (no longer commented out)
void attach_listener (
	in ObjectListener listener,
	in boolean concerns_contained_objects);
Actions taken:
February 26, 2004: received issue
September 23, 2004: closed issue

Discussion:
Proposal [THALES]
Change slightly the definition of listeners, by passing the ObjectReference instead of the object itself, adding the old value when appropriate and introducing a fourth method (on_contained_object_modified).
Specify at each ObjectListener attachment whether it should apply only on the object or on the object + its contained ones and discard the attribute notification_scope on ObjectHome
Concrete changes
IDL
typedef string RelationName;
enum RelationKind {
	REF_RELATION,
	LIST_RELATION,
	INT_MAP_RELATION,
	STR_MAP_RELATION};
valuetype RelationDescription {
	public RelationKind	kind;
	public RelationName	name;
	};
valuetype ListRelationDescription : RelationDescription {
	public long index;
	};
valuetype IntMapRelationDescription : RelationDescription {
	public long key;
	};
valuetype StrMapRelationDescription : RelationDescription {
	public string key;
	};
local interface ObjectListener {
	[methods are no longer commented out]
	boolean on_object_created(
		in ObjectReference ref);
	boolean on_object_modified(
		in ObjectReference ref,
		in ObjectRoot old_value);
	boolean on_contained_object_modified(
		in ObjectReference ref,
		in RelationDescription relation_desc,
		in ObjectReference contained_ref,
		in ObjectRoot old_contained_value);
	boolean on_object_deleted(
		in ObjectReference ref);
local interface ObjectHome {
	...
	[discard the readonly attribute notification_scope]
	...
	[attach_listener is no longer commented out]
	void attach_listener (
		in ObjectListener	listener,
		in boolean		concerns_contained_objects);
		[instead only the first parameter]
	...
in section 3.1.6.3.5 ObjectHome
in the table,
in the attribute list, discard the entry for the attribute notification_scope
in the operations list, add a new parameter to the attach_listener entry; attach_listener is now as follows:
attach_listener		void
	listener	ObjectListener
	concerns_contained_objects	boolean
in the following list starting with "the public attributes give":
discard the third bullet ("the scope to [...] has changed.")
in the following list starting with "it offers methods to":
append to the second bullet ("attach/detach [...] detach_listener);), the following sentence:
"when a listener is attached, a boolean parameter specifies, when set to TRUE, that the listener should listen also for the modifications of the contained objects (concerns_contained_objects);"
in section 3.1.6.3.6 Object Listener
in the table, change the list of operations to the following:
on_object_created		boolean
	ref	ObjectReference
on_object_modified		boolean
	ref	ObjectReference
	old_value	ObjectRoot
on_contained_object_modified		boolean
	ref	ObjectReference
	relation_desc	RelationDescription
	contained_ref	ObjectReference
	old_contained_value	ObjectRoot
on_object_deleted		boolean
	ref	ObjectReference
in the following text, change the header of the list with the following:
"It is defined with four methods:" [instead of 3]
append at the end of first bullet:
"this operations is called with the ObjectReference of the newly created object (ref);"
append to the end of the second bullet:
"this operation is called with the ObjectReference of the newly deleted object (ref);"
append at the end of the third bullet:
" this operation is called with the ObjectReference of the modified object (ref) and its old value (old_value); the old value may be NULL;"
add a fourth bullet, with the following content:
"on_contained_object_modified which is called when the contents of a contained object has been modified; this operation is called with the ObjectReference of the container object (ref), the ObjectReference of the modified contained object (contained_ref), a description of the relation that links those two objects (relation_desc, including the name of the relation and if appropriate the index that corresponds to the contained object) and old value of the contained  object (old_contained_value); the old value may be NULL;


Issue 7064: Ref-170 Missing_description_of_OWNERSHIP_STRENGH (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Section 2.1.3 "Supported QoS" is missing a subsection that describes the OWNERSHIP_STRENGH QoS policy


***Proposal ***
Add section 2.1.3.7 with the text:


2.1.3.7 OWNERSHIP_STRENGTH
This QoS policy should be used in combination with the OWNERSHIP policy. It only applies to the case where OWNERSHIP kind is set to EXCLUSIVE.
The value of the OWNERSHIP_STRENGTH is used to determine the ownership of a data-instance (identified by the key). The arbitration is performed by the DataReader. The rules used to perform the arbitration are described in Section 2.1.3.6.2.
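For illustration, a C++ sketch of a writer configured with this policy, assuming a C++ mapping of the DCPS IDL (the header name is an assumption; the QoS field and constant names follow the PSM):

    #include "dds_dcps.h"  // assumed header for a C++ mapping of the DCPS IDL

    // With OWNERSHIP kind EXCLUSIVE, the DataReader delivers samples of an
    // instance only from the writer that currently has the highest strength.
    DDS::DataWriter_var create_strong_writer(DDS::Publisher_var pub,
                                             DDS::Topic_var topic)
    {
        DDS::DataWriterQos qos;
        pub->get_default_datawriter_qos(qos);
        qos.ownership.kind = DDS::EXCLUSIVE_OWNERSHIP_QOS;
        qos.ownership_strength.value = 10;  // arbitration: higher value wins
        return pub->create_datawriter(topic, qos, NULL);
    }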

Resolution: see below
Revised Text: Resolution: Add section 2.1.3.7 with the text describing OWNERSHIP_STRENGTH.
Revised Text:
Changes in PIM
· Add section 2.1.3.7 with the text:
2.1.3.7 OWNERSHIP_STRENGTH
This QoS policy should be used in combination with the OWNERSHIP policy. It only applies to the case where OWNERSHIP kind is set to EXCLUSIVE.
The value of the OWNERSHIP_STRENGTH is used to determine the ownership of a data-instance (identified by the key). The arbitration is performed by the DataReader. The rules used to perform the arbitration are described in Section 2.1.3.6.2.
Actions taken:
March 3, 2004: received issue
September 23, 2004: closed issue

Issue 7066: ref-171 Rename_Topic_USER_DATA_to_TOPIC_DATA (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
The resolution of issue 6833 added USER_DATA QoS to Topic.
However this name overlaps with the USER_DATA on the DataWriter and DataReader and prevents an application from using both at the same time.


***PROPOSAL***
Rename USER_DATA on Topic to be TOPIC_DATA.
This involves adding a TOPIC_DATA QoS that concerns Topic, DataReader and DataWriter
Add topic_data as a field in the DCPSTopic, DCPSPublication and DCPSSubscription tables describing the built-in topics in section 2.1.5, page 2-90.
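For illustration, a C++ sketch of the intended usage, setting TOPIC_DATA on the Topic, assuming a C++ mapping of the DCPS IDL (the header, the topic and type names, and the payload are made up for the example; the topic_data field is the one this issue adds):

    #include "dds_dcps.h"  // assumed header for a C++ mapping of the DCPS IDL

    DDS::Topic_var create_tagged_topic(DDS::DomainParticipant_var dp)
    {
        DDS::TopicQos qos;
        dp->get_default_topic_qos(qos);
        const char tag[] = "unit=meters";             // application-defined octets
        qos.topic_data.value.length(sizeof tag - 1);  // field added by this issue
        for (CORBA::ULong i = 0; i < sizeof tag - 1; ++i)
            qos.topic_data.value[i] = tag[i];
        // a remote participant can read these octets back from the topic_data
        // field of the DCPSTopic built-in topic
        return dp->create_topic("TRACK-TOPIC", "Track", qos, NULL);
    }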

Resolution: see below
Revised Text: Resolution: Rename USER_DATA on Topic to be TOPIC_DATA. Add topic_data as a field in the DCPSTopic, DCPSPublication and DCPSSubscription in the table describing the built-in topics in section 2.1.5, page 2-90.
Revised Text:
Changes in PIM
· Section 2.1.3 Supported QoS
· QoS Table:
· Add policy "TOPIC_DATA"
QosPolicy: TOPIC_DATA
Value: a sequence of octets
Meaning: User data not known by the middleware, but distributed by means of built-in topics (cf. Section ). The default value is an empty (zero-sized) sequence.
Concerns: Topic
RxO: No
Changeable: Yes
· Add section 2.1.3.2 (changes numbers of subsections that follow)
2.1.3.2 TOPIC_DATA
The purpose of this QoS is to allow the application to attach additional information to the created Topic such that when a remote application discovers their existence it can examine the information and use it in an application-defined way. In combination with the listeners on the DataReader and DataWriter, as well as by means of operations such as ignore_topic, these QoS can assist an application to extend the provided QoS.
This QoS is very similar in intent to USER_DATA. They both concern Topic, DataWriter, and DataReader and are available by means of the built-in topics. Using separate QoS allows both to be set independently. The intended use is that the TOPIC_DATA would be primarily configured on the Topic and the USER_DATA primarily on the DataReader/DataWriter.
· Section 2.1.5 Built-in Topics
· Built-in topic table
· Section describing fields of built-in topic "DCPSTopic"
· Rename "user_data" to be "topic_data"
topic_data	TopicDataQosPolicy	Policy of the corresponding Topic
· Add row to section describing fields of built-in topic "DCPSPublication"
topic_data	TopicDataQosPolicy	Policy of the related Topic.
· Add row to section describing fields of built-in topic "DCPSSubscription"
topic_data	TopicDataQosPolicy	Policy of the related Topic.
Changes in IDL
· Section 2.2.3 DCPS PSM : IDL
· Add:
const string TOPICDATA_QOS_POLICY_NAME = "TopicData";
const QosPolicyId_t TOPICDATA_QOS_POLICY_ID = 18;
struct TopicDataQosPolicy {
	sequence<octet> value;
};
· struct UserDataQosPolicy
· rename field "data" to "value"
· struct TopicQos
· Add:
TopicDataQosPolicy topic_data;
· struct PublicationBuiltinTopicData
· Add:
TopicDataQosPolicy topic_data;
· struct SubscriptionBuiltinTopicData
· Add:
TopicDataQosPolicy topic_data;
· struct TopicBuiltinTopicData
· Add:
TopicDataQosPolicy topic_data;
Actions taken:
March 3, 2004: received issue
September 23, 2004: closed issue

Issue 7067: New definition for Selections (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Clarification
Severity:
Summary:
Issue [THALES]
As notification_scope has been removed from the ObjectHome (cf. issue ref-1049), it is necessary to pass the information whether the modification of objects inside a Selection should be assessed on the object basis only, or on the object + its contained objects basis
Clarification is requested regarding when the SelectionListener is activated
Proposal [THALES]
add a parameter concerns_contained when the Selection is created and add this characteristic in the selection attributes
add a paragraph to specify when the Selection Listener is activated
in SelectionListener::on_object_out, the ObjectRoot may no longer exist; therefore, passing the ObjectReference is better
Concrete changes
in IDL
local interface Selection {
	...
	readonly attribute boolean concerns_contained;
		[addition]
	...
	};
local interface SelectionListener {
	...
	[the following method is no longer commented out, for it will not be redefined in the derived implied IDL]
	void on_object_out (
		in ObjectReference the_ref);
			[instead in ObjectRoot the_object]
	};
in section 3.1.6.3.7 Selection
in the table, attribute list, add the following entry (as third attribute)
concerns_contained	boolean
in the following text, starting with "It has the following attributes:"
add the following text at the end of the second bullet:
"; it is given at Selection creation time(cf. ObjectHome::create_selection)"
add a bullet in third position, with the following content:
"a boolean concerns_contained that indicates whether the Selection considers the modification of one of its members based on its content only (FALSE) or based on it content or the content of its contained objects (TRUE); it is given at Selection creation time(cf. ObjectHome::create_selection);"
add at the end of the section, the following paragraph:
"The SelectionListener is activated when the composition of the Selection is modified or when one of its members is modified. A member can be considered as modified, either only when it is itself modified or when itself or one of its contained objects is modifie (depending on the value of concerns_contained). Modifications in the Selection are considered with respects to the state of the Selection last time it was examined, i.e.:
add one bullet with the following text:
" at each incoming updates processing if autro_refresh is TRUE;"
add a second bullet with the following text:
"at each explicit call to refresh, if auto-refresh is FALSE."

Resolution: see below
Revised Text: Resolution: Add a parameter concerns_contained when the Selection is created, and add this characteristic to the Selection attributes. Add a paragraph to specify when the SelectionListener is activated. Change the signature of on_object_out to pass an ObjectReference rather than an ObjectRoot. This change concerns the PIM (UML diagram and text) and the IDL.
Revised Text:
Changes in PIM
· in section 3.1.6.3.5 ObjectHome
· in the table, list of operations
· add a last parameter to create_selection, by adding the following entry
	concerns_contained_objects	boolean
· in the following list, starting with "It offers methods to:"
· bullet #6 (create_selection), replace by the following:
"create a Selection (create_selection); the filter parameter specifies the ObjectFilter to be attached to the Selection, the auto_refresh parameter specifies if the Selection has to be refreshed automatically or only on demand (cf. Selection), and a boolean parameter specifies, when set to TRUE, that the Selection is concerned not only by its member objects but also by their contained ones (concerns_contained_objects); the attached ObjectFilter belongs to the Selection that itself belongs to its creating ObjectHome."
· in section 3.1.6.3.7 Selection
· in the table, attribute list,
· add the following entry (as third attribute)
	concerns_contained	boolean
· in the following text, starting with "It has the following attributes:"
· add the following text at the end of the second bullet:
"; it is given at Selection creation time (cf. ObjectHome::create_selection)"
· add a bullet in third position, with the following content:
"a boolean concerns_contained that indicates whether the Selection considers the modification of one of its members based on its content only (FALSE) or based on its content or the content of its contained objects (TRUE); it is given at Selection creation time (cf. ObjectHome::create_selection);"
· add at the end of the section, the following paragraph:
"The SelectionListener is activated when the composition of the Selection is modified or when one of its members is modified. A member can be considered as modified either only when it is itself modified, or when itself or one of its contained objects is modified (depending on the value of concerns_contained). Modifications in the Selection are considered with respect to the state of the Selection the last time it was examined, i.e.:
· add one bullet with the following text:
"at each incoming update processing, if auto_refresh is TRUE;"
· add a second bullet with the following text:
"at each explicit call to refresh, if auto_refresh is FALSE."
Changes in IDL
· Section 3.2.1.2.1 Generic DLRL Entities
· local interface Selection
· Add attribute:
readonly attribute boolean concerns_contained;
· local interface SelectionListener
· Replace commented-out operation
void on_object_out (
	in ObjectRoot the_object);
With operation (no longer commented out)
void on_object_out (
	in ObjectReference the_ref);
· local interface ObjectHome
· Replace
Selection create_selection (
	in ObjectFilter filter,
	in boolean auto_refresh)
	raises (BadParameter);
With
Selection create_selection (
	in ObjectFilter filter,
	in boolean auto_refresh,
	in boolean concerns_contained_objects)
	raises (BadParameter);
Changes in implied IDL
· Section 3.2.1.2.2 Implied IDL
· local interface FooHome
· Replace
Selection create_selection (
	in FooFilter filter,
	in boolean auto_refresh)
	raises (BadParameter);
With
Selection create_selection (
	in FooFilter filter,
	in boolean auto_refresh,
	in boolean concerns_contained_objects)
	raises (BadParameter);
Actions taken:
March 4, 2004: received issue
September 23, 2004: closed issue

Issue 7100: Missing operations on DomainParticipantFactory and need for helper values (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
The resolution of issue 6816 added operations to get and set the default QoS on all the entity factories except for the DomainParticipantFactory. This was an omission, as this factory also needs to provide these operations so that DomainParticipant entities can also be created using default QoS.
The operation lookup_participant is defined in the PIM (section 2.1.2.2.2) but does not appear in the PSM.
Furthermore, it would be desirable to have some utility constants in the IDL that can be used to indicate to the factory that default QoS should be used to construct an entity. This avoids having to explicitly get the default QoS in the case where the application does not want to change any of the defaults. Helper constants can also be added for the specific case of constructing DataReader and DataWriter entities, when the application wishes to indicate that the QoS should be obtained by modifying the default values with the ones defined by the Topic QoS.

Resolution: see below
Revised Text: Resolution: The FTF resolved to add the operations set_default_participant_qos and get_default_participant_qos to the DomainParticipantFactory. Furthermore, the FTF resolved to add the following constants: PARTICIPANT_QOS_DEFAULT, TOPIC_QOS_DEFAULT, PUBLISHER_QOS_DEFAULT, SUBSCRIBER_QOS_DEFAULT, DATAWRITER_QOS_DEFAULT, DATAREADER_QOS_DEFAULT, DATAWRITER_QOS_USE_TOPIC_QOS, and DATAREADER_QOS_USE_TOPIC_QOS. These constants can be used to indicate to the factory that entities should be created either with default QoS or with QoS that combines the QoS of the Topic with the default one for the entity (DataReader/DataWriter) associated with that Topic.
Revised Text:
Changes in PIM
· Section 2.1.2.2.2 DomainParticipantFactory Class
· DomainParticipantFactory table:
· Add operations:
set_default_participant_qos		ReturnCode_t
	qos_list	QosPolicy []
get_default_participant_qos		void
	out: qos_list	QosPolicy []
· Add the following sections:
2.1.2.2.2.5 set_default_participant_qos
This operation sets a default value of the DomainParticipant QoS policies, which will be used for newly created DomainParticipant entities in the case where the QoS policies are not explicitly specified in the create_participant operation.
This operation will check that the resulting policies are self consistent; if they are not, the operation will have no effect and return INCONSISTENT_POLICY.
2.1.2.2.2.6 get_default_participant_qos
This operation retrieves the default value of the DomainParticipant QoS, that is, the QoS policies which will be used for newly created DomainParticipant entities in the case where the QoS policies are not explicitly specified in the create_participant operation.
The values retrieved by get_default_participant_qos will match the set of values specified on the last successful call to set_default_participant_qos, or else, if the call was never made, the default values listed in the QoS table in Section 2.1.3.
This operation will check that the resulting policies are self consistent; if they are not, the operation will have no effect and return INCONSISTENT_POLICY.
· Section 2.1.2.2.1.1 create_publisher
· 2nd paragraph: replace "compatible" with "consistent" in the sentence: "If the specified QoS policies are not compatible, the operation will fail…"
· Add paragraph after the 2nd paragraph "If the specified QoS policies are not compatible, the operation will fail…":
The special value PUBLISHER_QOS_DEFAULT can be used to indicate that the Publisher should be created with the default Publisher QoS set in the factory. The use of this value is equivalent to the application obtaining the default Publisher QoS by means of the operation get_default_publisher_qos (Section 2.1.2.2.1.21) and using the resulting QoS to create the Publisher.
· Section 2.1.2.2.1.3 create_subscriber
· 2nd paragraph: replace "compatible" with "consistent" in the sentence: "If the specified QoS policies are not compatible, the operation will fail…"
· Add paragraph after the 2nd paragraph "If the specified QoS policies are not compatible, the operation will fail…":
The special value SUBSCRIBER_QOS_DEFAULT can be used to indicate that the Subscriber should be created with the default Subscriber QoS set in the factory. The use of this value is equivalent to the application obtaining the default Subscriber QoS by means of the operation get_default_subscriber_qos (Section 2.1.2.2.1.21) and using the resulting QoS to create the Subscriber.
· Section 2.1.2.2.1.5 create_topic
· 2nd paragraph: replace "compatible" with "consistent" in the sentence: "If the specified QoS policies are not compatible, the operation will fail…"
· Add paragraph after the 2nd paragraph "If the specified QoS policies are not compatible, the operation will fail…":
The special value TOPIC_QOS_DEFAULT can be used to indicate that the Topic should be created with the default Topic QoS set in the factory. The use of this value is equivalent to the application obtaining the default Topic QoS by means of the operation get_default_topic_qos (Section 2.1.2.2.1.21) and using the resulting QoS to create the Topic.
· Section 2.1.2.2.2.1 create_participant
· Add paragraphs after the 1st paragraph "This operation creates a new DomainParticipant object…":
If the specified QoS policies are not compatible, the operation will fail and no DomainParticipant will be created.
The special value PARTICIPANT_QOS_DEFAULT can be used to indicate that the DomainParticipant should be created with the default DomainParticipant QoS set in the factory. The use of this value is equivalent to the application obtaining the default DomainParticipant QoS by means of the operation get_default_participant_qos (Section 2.1.2.2.2.6) and using the resulting QoS to create the DomainParticipant.
· Section 2.1.2.4.1.5 create_datawriter
· Add paragraph after the bullets that are introduced with the heading "Note that a common application pattern to construct the QoS…":
The special value DATAWRITER_QOS_DEFAULT can be used to indicate that the DataWriter should be created with the default DataWriter QoS set in the factory. The use of this value is equivalent to the application obtaining the default DataWriter QoS by means of the operation get_default_datawriter_qos (Section 2.1.2.4.1.14) and using the resulting QoS to create the DataWriter.
The special value DATAWRITER_QOS_USE_TOPIC_QOS can be used to indicate that the DataWriter should be created with a combination of the default DataWriter QoS and the Topic QoS. The use of this value is equivalent to the application obtaining the default DataWriter QoS and the Topic QoS (by means of the operation Topic::get_qos) and then combining these two QoS using the operation copy_from_topic_qos, whereby any policy that is set on the Topic QoS "overrides" the corresponding policy on the default QoS. The resulting QoS is then applied to the creation of the DataWriter.
· Section 2.1.2.5.2.5 create_datareader
· Add paragraph after the bullets that are introduced with the heading "Note that a common application pattern to construct the QoS…":
The special value DATAREADER_QOS_DEFAULT can be used to indicate that the DataReader should be created with the default DataReader QoS set in the factory. The use of this value is equivalent to the application obtaining the default DataReader QoS by means of the operation get_default_datareader_qos (Section 2.1.2.4.1.14) and using the resulting QoS to create the DataReader.
The special value DATAREADER_QOS_USE_TOPIC_QOS can be used to indicate that the DataReader should be created with a combination of the default DataReader QoS and the Topic QoS. The use of this value is equivalent to the application obtaining the default DataReader QoS and the Topic QoS (by means of the operation Topic::get_qos) and then combining these two QoS using the operation copy_from_topic_qos, whereby any policy that is set on the Topic QoS "overrides" the corresponding policy on the default QoS. The resulting QoS is then applied to the creation of the DataReader.
· Section 2.1.2.2.2.3 get_instance
· After the 1st paragraph "This operation returns…" add the paragraph:
The pre-defined value TheParticipantFactory can also be used as an alias for the singleton factory returned by the operation get_instance.
Changes in IDL
· Interface DomainParticipantFactory, add:
ReturnCode_t set_default_participant_qos(in DomainParticipantQos qos);
void get_default_participant_qos(inout DomainParticipantQos qos);
DomainParticipant lookup_participant(in DomainId_t domainId);
· Add the following lines to the IDL (before module DDS { …):
#define TheParticipantFactory
#define PARTICIPANT_QOS_DEFAULT
#define TOPIC_QOS_DEFAULT
#define PUBLISHER_QOS_DEFAULT
#define SUBSCRIBER_QOS_DEFAULT
#define DATAWRITER_QOS_DEFAULT
#define DATAREADER_QOS_DEFAULT
#define DATAWRITER_QOS_USE_TOPIC_QOS
#define DATAREADER_QOS_USE_TOPIC_QOS
Actions taken:
March 8, 2004: received issue
September 23, 2004: closed issue

Discussion:
***PROPOSAL***


Concrete changes:
Section 2.1.2.2.2 DomainParticipantFactory Class
DomainParticipantFactory table :
Add operation set_default_participant_qos
Parameters:
qos_list : QosPolicy []
Return: ReturnCode_t
Add operation get_default_participant_qos
Parameters:
out qos_list : QosPolicy []
Return: void


Add subsections:


2.1.2.2.2.5 set_default_participant_qos
This operation sets a default value of the DomainParticipant QoS policies which will be used for newly created DomainParticipant entities in the case where the QoS policies are not explicitly specified in the create_participant operation.
This operation will check that the resulting policies are self consistent; if they are not, the operation will have no effect and return INCONSISTENT_POLICY.


2.1.2.2.2.6 get_default_participant_qos
This operation retrieves the default value of the DomainParticipant QoS, that is, the QoS policies which will be used for newly created DomainParticipant entities in the case where the QoS policies are not explicitly specified in the create_participant operation.
The values retrieved by get_default_participant_qos will match the set of values specified on the last successful call to set_default_participant_qos, or else, if the call was never made, the default values listed in the QoS table in Section 2.1.3.


This operation will check that the resulting policies are self consistent; if they are not, the operation will have no effect and return INCONSISTENT_POLICY.
Section 2.1.2.2.1.1 create_publisher
Add paragraph after the 2nd paragraph "If the specified QoS policies are not compatible, the operation will fail…"
The special value PUBLISHER_QOS_DEFAULT can be used to indicate that the Publisher should be created with the default Publisher QoS set in the factory. The use of this value is equivalent to the application obtaining the default Publisher QoS by means of the operation get_default_publisher_qos (Section 2.1.2.2.1.21 ) and using the resulting QoS to create the Publisher.
Section 2.1.2.2.1.3 create_subscriber
Add paragraph after the 2nd paragraph "If the specified QoS policies are not compatible, the operation will fail…"
The special value SUBSCRIBER_QOS_DEFAULT can be used to indicate that the Subscriber should be created with the default Subscriber QoS set in the factory. The use of this value is equivalent to the application obtaining the default Subscriber QoS by means of the operation get_default_subscriber_qos (Section 2.1.2.2.1.21 ) and using the resulting QoS to create the Subscriber.
Section 2.1.2.2.1.5 create_topic
Add paragraph after the 2nd paragraph "If the specified QoS policies are not compatible, the operation will fail…"
The special value TOPIC_QOS_DEFAULT can be used to indicate that the Topic should be created with the default Topic QoS set in the factory. The use of this value is equivalent to the application obtaining the default Topic QoS by means of the operation get_default_topic_qos (Section 2.1.2.2.1.21 ) and using the resulting QoS to create the Topic.
Section 2.1.2.2.2.1 create_participant
Add paragraphs after the 1st paragraph "This operation creates a new DomainParticipant object…"
If the specified QoS policies are not compatible, the operation will fail and no DomainParticipant will be created.
The special value PARTICIPANT_QOS_DEFAULT can be used to indicate that the DomainParticipant should be created with the default DomainParticipant QoS set in the factory. The use of this value is equivalent to the application obtaining the default DomainParticipant QoS by means of the operation get_default_participant_qos (Section 2.1.2.2.2.6 ) and using the resulting QoS to create the DomainParticipant.


Section 2.1.2.4.1.5 create_datawriter
Add paragraph after the bullets that are introduced with the heading "Note that a common application pattern to construct the QoS…"
The special value DATAWRITER_QOS_DEFAULT can be used to indicate that the DataWriter should be created with the default DataWriter QoS set in the factory. The use of this value is equivalent to the application obtaining the default DataWriter QoS by means of the operation get_default_datawriter_qos (Section 2.1.2.4.1.14 ) and using the resulting QoS to create the DataWriter.
The special value DATAWRITER_QOS_USE_TOPIC_QOS can be used to indicate that the DataWriter should be created with a combination of the default DataWriter QoS and the Topic QoS. The use of this value is equivalent to the application obtaining the default DataWriter QoS and the Topic QoS (by means of the operation Topic::get_qos) and then combining these two QoS using the operation copy_from_topic_qos whereby any policy that is set on the Topic QoS “overrides” the corresponding policy on the default QoS. The resulting QoS is then applied to the creation of the DataWriter.


Section 2.1.2.5.2.5 create_datareader
Add paragraph after the bullets that are introduced with the heading "Note that a common application pattern to construct the QoS…"
The special value DATAREADER_QOS_DEFAULT can be used to indicate that the DataReader should be created with the default DataReader QoS set in the factory. The use of this value is equivalent to the application obtaining the default DataReader QoS by means of the operation get_default_datareader_qos (Section 2.1.2.4.1.14 ) and using the resulting QoS to create the DataReader.
The special value DATAREADER_QOS_USE_TOPIC_QOS can be used to indicate that the DataReader should be created with a combination of the default DataReader QoS and the Topic QoS. The use of this value is equivalent to the application obtaining the default DataReader QoS and the Topic QoS (by means of the operation Topic::get_qos) and then combining these two QoS using the operation copy_from_topic_qos, whereby any policy that is set on the Topic QoS "overrides" the corresponding policy on the default QoS. The resulting QoS is then applied to the creation of the DataReader.


Section 2.1.2.2.2.3 get_instance
After the 1st paragraph "This operation returns…" add the paragraph:
The pre-defined value TheParticipantFactory can also be used as an alias for the singleton factory returned by the operation get_instance.


Section 2.2.3 DCPS PSM : IDL
Interface DomainParticipantFactory add:
ReturnCode_t set_default_participant_qos(in DomainParticipantQos qos);
void get_default_participant_qos(inout DomainParticipantQos qos);
DomainParticipant lookup_participant(in DomainId_t domainId);


Add the following lines to the IDL:
#define TheParticipantFactory
#define PARTICIPANT_QOS_DEFAULT
#define TOPIC_QOS_DEFAULT
#define PUBLISHER_QOS_DEFAULT
#define SUBSCRIBER_QOS_DEFAULT
#define DATAWRITER_QOS_DEFAULT
#define DATAREADER_QOS_DEFAULT
#define DATAWRITER_QOS_USE_TOPIC_QOS
#define DATAREADER_QOS_USE_TOPIC_QOS
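For illustration, a C++ sketch of how the helper constants remove the explicit get-default step, assuming a C++ mapping of the DCPS IDL (header name is an assumption; type registration and the topic/type names are elided or made up):

    #include "dds_dcps.h"  // assumed header for a C++ mapping of the DCPS IDL

    DDS::DataWriter_var quick_setup(DDS::DomainId_t domain)
    {
        // Factory defaults, with no explicit get_default_*_qos calls:
        DDS::DomainParticipant_var dp =
            TheParticipantFactory->create_participant(
                domain, PARTICIPANT_QOS_DEFAULT, NULL);
        DDS::Topic_var topic =
            dp->create_topic("TRACK-TOPIC", "Track",  // type registration elided
                             TOPIC_QOS_DEFAULT, NULL);
        DDS::Publisher_var pub =
            dp->create_publisher(PUBLISHER_QOS_DEFAULT, NULL);
        // Default writer QoS combined with the Topic QoS, as if the application
        // had called copy_from_topic_qos itself:
        return pub->create_datawriter(topic, DATAWRITER_QOS_USE_TOPIC_QOS, NULL);
    }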


Issue 7134: ref-1054: Bad which_added operations in IDL (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
The "which_added" operations on the collections were designed in the PIM so
that it is possible not to compute any result when the content of the
collection had been totally changed. This is not present in the IDL. *** Proposal [THALES]
change the operation to get a boolean result (true => the information is
returned in the out parameter; false => no information) instead of sending
the information as the result.


*** Concrete changes
IDL
abstract valuetype ListBase : CollectionBase {
	boolean which_added (out LongSeq indexes);
		[instead of LongSeq which_added ();]
	...
abstract valuetype StrMapBase : CollectionBase {
	boolean which_added (out StringSeq keys);
		[instead of StringSeq which_added ();]
	...
abstract valuetype IntMapBase : CollectionBase {
	boolean which_added (out LongSeq keys); 
		[instead of LongSeq which_added ();]
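For illustration, the intended calling pattern in C++ under an assumed mapping (the header is an assumption; the LongSeq_var/out conversions follow the standard IDL-to-C++ rules):

    #include "dlrl.h"  // assumed header for a C++ mapping of the DLRL IDL

    void process_additions(ListBase* tracks)
    {
        LongSeq_var indexes;
        if (tracks->which_added(indexes.out())) {
            for (CORBA::ULong i = 0; i < indexes->length(); ++i) {
                // handle the item added at index indexes[i]
            }
        } else {
            // false: the content changed totally and no index list was
            // computed; re-scan the whole collection instead
        }
    }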

Resolution: see below
Revised Text: Resolution: Change the operation to get a boolean result (true => the information is returned in the out parameter; false => no information) instead of sending the information as the result. This change only concerns the IDL.
Revised Text:
Changes in IDL
· Section 3.2.1.2.1 Generic DLRL Entities
· abstract valuetype ListBase : CollectionBase {
	boolean which_added (out LongSeq indexes);
		[instead of LongSeq which_added ();]
	…
· abstract valuetype StrMapBase : CollectionBase {
	boolean which_added (out StringSeq keys);
		[instead of StringSeq which_added ();]
	…
· abstract valuetype IntMapBase : CollectionBase {
	boolean which_added (out LongSeq keys);
		[instead of LongSeq which_added ();]
	…
Actions taken:
March 9, 2004: received issue
September 23, 2004: closed issue

Issue 7136: ref-1053 Missing is_composition (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
The is_composition operation is described in the PIM, but is not in the IDL.
It concerns the valuetypes RefRelation, ListRelation, IntMapRelation, and
StrMapRelation.


*** Proposal
Add the following operation to those valuetypes:
        boolean is_composition();
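
For illustration only: a minimal sketch of what the added operation enables at run time, assuming the IDL-to-C++ mapping; rel is a placeholder for any of the four relation valuetypes named above.

    // is_composition() lets generic navigation or deletion code distinguish
    // owned parts from plain references.
    if (rel->is_composition()) {
        // composition: the part's lifetime is bound to its owning object
    } else {
        // plain association: the referenced object lives independently
    }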

Resolution: see below
Revised Text:
Resolution: Add the operation is_composition on the said valuetypes. This change only concerns the IDL.
Changes in IDL, Section 3.2.1.2.1 Generic DLRL Entities:
valuetype RefRelation: add operation boolean is_composition();
valuetype ListRelation: add operation boolean is_composition();
valuetype StrMapRelation: add operation boolean is_composition();
valuetype IntMapRelation: add operation boolean is_composition();
Actions taken:
March 10, 2004: received issue
September 23, 2004: closed issue

Issue 7169: Changing the IDL module (data-distribution-ftf)

Source: 88solutions (Mr. Manfred R. Koethe, koethe(at)88solutions.com)
Nature: Uncategorized Issue
Severity:
Summary:
The module names in the IDL are very terse and not prefixed. In
this form they pose a high risk of a name collision with any other,
even user-written, IDL module. I would suggest prefixing the module
names as explained in the IDL Style Guide (ab/98-06-03). The "Cos"
prefix would be adequate. This means changing, for example,
"module DDS" to "module CosDDS", or even to the more intuitive
"module CosDataDistributionService" for the benefit of the user.
I think none of today's systems have severe name limitations
anymore.


Further email narrowed the proposal to "CosDDS".


Proposed action: No Change


Both RTI and THALES feel that the change would significantly affect users who
have started developing applications using the API. Our companies have
invested significantly in the API, and a change this late would make us
lose a lot of credibility and might really upset some customers. In the
case of RTI this includes documentation that has been in use by
customers over the last 6 months.


In our respective markets there are many people using C and C++ with
compilers that do not support namespaces, so this change affects a very
visible part of the API, namely the prefix of every function. The change
may appear small, but it could have a big impact on user acceptance.


Furthermore, the use of the "Cos" prefix may be misleading as the
Data-Distribution Service was designed so it would be implementable
without an object service.

Resolution: closed no change
Revised Text:
Actions taken:
March 30, 2004: received issue
September 23, 2004: closed issue

Discussion:
The FTF feels that the change would significantly affect users who have started developing applications using the API. The member companies have invested significantly in the API, and a change this late would make them lose a lot of credibility and might really upset some customers. In the case of RTI this includes documentation that has been in use by customers over the last 6 months.
In the target markets there are many people using C and C++ with compilers that do not support namespaces, so this change affects a very visible part of the API, namely the prefix of every function. The change may appear small, but it could have a big impact on user acceptance.
Furthermore, the use of the "Cos" prefix may be misleading, as the Data-Distribution Service was designed so that it would be implementable without an object service.