Issue 7964: no specific mention of interoperability in DDS 04-04-12 standard proposal
Issue 7965: DDS: DCPS generated interface FooTypeSupport
Issue 7966: DDS: DCPS - define the term "plain data structures"
Issue 7974: 2.1.3.20 WRITER_DATA_LIFECYCLE, itemized list, first bullet
Issue 7975: DDS 04-04-12 para. 2.1.1.1 Format and conventions
Issue 7976: DDS 04-04-12 Appendix B, C
Issue 8354: Typographical and grammatical errors
Issue 8355: Spelling inconsistencies between the PIM and IDL PSM
Issue 8358: Operation DataWriter::register
Issue 8359: (T#4) Typo in section 2.1.2.4.2.10 (write) and section 2.1.2.4.2.12 (dispose)
Issue 8360: Typo in section 2.1.2.5.2.5
Issue 8361: Default value for READER_DATA_LIFECYCLE
Issue 8362: Incorrect reference to USER_DATA on TopicQos
Issue 8363: No mention of DESTINATION_ORDER on DataWriterQos
Issue 8364: Formal parameter name improvement in IDL
Issue 8365: Spell fully the names for the DataReader operations
Issue 8366: Missing operations on DomainParticipantFactory
Issue 8367: (T#18, 24) Missing operations and attributes
Issue 8368: (T#28) Typographical and grammatical errors
Issue 8369: (T#29) Missing operations to Topic class
Issue 8370: Formal parameter name change in operations of ContentFilteredTopic
Issue 8371: (T#30) Ambiguous description of TOPIC_DATA
Issue 8372: Confusing description of behavior of Publisher::set_default_datawriter_qos
Issue 8373: (T#33) Clarification in use of set_listener operation
Issue 8374: Missing description of DomainParticipant::get_domain_id
Issue 8375: (T#41) Default value for RELIABILITY max_blocking_time
Issue 8376: (T#42) Behavior when condition is attached to WaitSet multiple times
Issue 8377: Explicit mention of static DomainParticipantFactory::get_instance operation
Issue 8378: (T#45) Clarification of syntax of char constants within query expressions
Issue 8379: (T#52) Allow to explicitly refer to the default QoS
Issue 8380: (T#54) Performance improvement to WaitSet
Issue 8381: (T#55) Modification to how enumeration values are indicated in expressions
Issue 8382: (T#56) Return values of Waitset::detach_condition
Issue 8383: (T#57) Enable status when creating DomainParticipant
Issue 8384: Add autopurge_disposed_samples_delay to READER_DATA_LIFECYCLE QoS
Issue 8388: (R#106b) Parameter passing convention of Subscriber::get_datareaders
Issue 8389: (R#107) Missing Topic operations in IDL PSM
Issue 8390: (R#109) Unused types in IDL
Issue 8391: Incorrect field name for USER_DATA, TOPIC_DATA, and GROUP_DATA
Issue 8392: (R#112) Incorrect SampleRejectedStatusKind constants
Issue 8393: (R#114) Operations should not return void
Issue 8394: (R#115) Destination order missing from PublicationBuiltinTopicData
Issue 8395: TransportPriority QoS range does not specify high/low priority values
Issue 8396: (R#119) Need lookup_instance method on reader and writer
Issue 8397: (R#120) Clarify use of DATAREADER_QOS_USE_TOPIC_QOS
Issue 8398: (R#122) Missing QoS dependencies in table
Issue 8399: Need an extra return code: ILLEGAL_OPERATION
Issue 8417: (R#124) Clarification on the behavior of dispose
Issue 8418: (R#125) Additional operations that can return RETCODE_TIMEOUT
Issue 8419: (R#127) Improve PSM mapping of BuiltinTopicKey_t
Issue 8420: Unspecified behavior of DataReader/DataWriter creation w/t mismatched Topic
Issue 8421: (R#130) Unspecified behavior of delete_datareader with outstanding loans
Issue 8422: (R#131) Clarify behavior of get_status_changes
Issue 8423: Incorrect reference to LIVELINESS_CHANGED in DataWriter::unregister
Issue 8424: (R#135) Add fields to PublicationMatchStatus and SubscriptionMatchStatus
Issue 8425: (R#138) Add instance handle to LivelinessChangedStatus
Issue 8426: (R#139) Rename *MatchStatus to *MatchedStatus
Issue 8427: (R#142) OWNERSHIP QoS policy should concern DataWriter and DataReader
Issue 8428: (R#145,146) Inconsistent description of Topic module in PIM
Issue 8429: (R#147) Inconsistent error code list in description of TypeSupport::registe
Issue 8430: (R#152) Extraneous WaitSet::wakeup
Issue 8431: (R#153) Ambiguous SampleRejectedStatus::last_reason field
Issue 8432: (R#154) Undefined behavior if resume_publications is never called
Issue 8531: DTD Error (mainTopic
Issue 8532: get_all_topic_names operation missing on figure 3-4
Issue 8533: Naming inconsistencies (IDL PSM vs. PIM) for ObjectHome operations
Issue 8534: Naming inconsistencies (IDL PSM vs. PIM) for Cache operation
Issue 8535: Bad cardinality on figure 3-4
Issue 8536: ReadOnly exception on clone operations
Issue 8537: Wrong definition for FooListener
Issue 8538: Typo CacheUsage instead of CacheAccess
Issue 8539: templateDef explanation contains some mistakes
Issue 8540: DlrlOid instead of DLRLOid in implied IDL
Issue 8541: Parameter wrongly named "object" in implied IDL
Issue 8542: Attach_Listener and detach_listener operations on ObjectHome are untyped
Issue 8543: Remove operations badly put on implied classes
Issue 8545: Behavior of DataReaderListener::on_data_available
Issue 8546: Inconsistent naming for status parameters in DataReader operations.
Issue 8547: (T#23) Syntax of partition strings
Issue 8548: Clarification of order preservation on reliable data reception
Issue 8549: (T#37) Clarification on the value of LENGTH_UNLIMITED constant
Issue 8550: (T#38) request-offered behavior for LATENCY_BUDGET
Issue 8551: (T#46) History when DataWriter is deleted
Issue 8552: (T#47) Should a topic returned by lookup_topicdescription be deleted
Issue 8553: (T#51) Identification of the writer of a sample
Issue 8554: (T#53) Cannot set listener mask when creating an entity
Issue 8555: (T#53) Cannot set listener mask when creating an entity
Issue 8556: (T#59) Deletion of disabled entities
Issue 8557: (T#60) Asynchronous write
Issue 8558: (T#61) Restrictive Handle definition
Issue 8559: (T#62, R#141) Unspecified TOPIC semantics
Issue 8560: (T#65) Missing get_current_time() function
Issue 8561: Read or take next instance, and others with an illegal instance_handle
Issue 8562: (T#69) Notification of unsupported QoS policies
Issue 8567: (O#7966) Confusing terminology: "plain data structures"
Issue 8568: (R#104) Inconsistent naming of QueryCondition::get_query_arguments
Issue 8569: (R#115b) Incorrect description of QoS for built-in readers
Issue 8570: (R#117) No way to access Participant and Topic built-in topic data
Issue 8571: (R#126) Correction to DataWriter blocking behavior
Issue 8572: Clarify meaning of LivelinessChangedStatus fields and LIVELINESS le
Issue 8573: (R#133) Clarify meaning of LivelinessLost and DeadlineMissed
Issue 8574: (R#136) Additional operations allowed on disabled entities
Issue 8575: (R#144) Default value for DataWriter RELIABILITY QoS
Issue 8576: (R#150) Ambiguous description of create_topic behavior
Issue 8577: (R#178) Unclear behavior of coherent changes when communication interrupted
Issue 8578: (R#179) Built-in DataReaders should have TRANSIENT_LOCAL durability
Issue 8579: (R#180) Clarify which entities appear as instances to built-in readers
Issue 8580: (R#181) Clarify listener and mask behavior with respect to built-in entitie
Issue 8581: (R#182) Clarify mapping of PIM 'out' to PSM 'inout'
Issue 8582: (T#6) Inconsistent name: StatusKindMask
Issue 8775: Page: 2-8
Issue 8892: subset of OMG IDL
Issue 9478: Inconsistencies between PIM and PSM in the prototype of get_qos() methods
Issue 9479: Inconsistent prototype for Publisher's get_default_datawriter_qos() method
Issue 9480: String sequence should be a parameter and not return value
Issue 9481: Mention of get_instance() operation on DomainParticipantFactory being static
Issue 9482: Improper prototype for get_XXX_status()
Issue 9483: Inconsistent naming in SampleRejectedStatusKind
Issue 9484: OWNERSHIP_STRENGTH QoS is not a QoS on built-in Subscriber of DataReaders
Issue 9485: Consistency between RESOURCE_LIMITS QoS policies
Issue 9486: Blocking of write() call
Issue 9487: Clarify PARTITION QoS and its default value
Issue 9488: Typos in built-in topic table
Issue 9489: Naming of filter_parameters concerning ContentFilteredTopic
Issue 9490: Incorrect prototype for FooDataWriter method register_instance_w_timestamp()
Issue 9491: Compatible versus consistency when talking about QosPolicy
Issue 9492: Incorrect mention of INCONSISTENT_POLICY status
Issue 9493: Typos in QoS sections
Issue 9494: Typos in PIM sections
Issue 9495: Clarify ownership with same-strength writers
Issue 9496: Should write() block when out of instance resources?
Issue 9497: Description of set_default_XXX_qos()
Issue 9498: Naming consistencies in match statuses
Issue 9499: delete_contained_entities() on the Subscriber
Issue 9500: Return of get_matched_XXX_data()
Issue 9501: Need INVALID_QOS_POLICY_ID
Issue 9502: Clarify valid handle when calling write()
Issue 9503: Operation dispose_w_timestamp() should be callable on unregistered instance
Issue 9504: Behavior of dispose with regards to DURABILITY QoS
Issue 9505: Typo in copy_from_topic_qos
Issue 9506: Order of parameters incorrect in PSM
Issue 9507: Typo in get_discovered_participant_data
Issue 9508: Operation wait() on a WaitSet should return TIMEOUT
Issue 9509: Example in 2.1.4.4.2 not quite correct
Issue 9510: Non-intuitive constant names
Issue 9511: Corrections to Figure 2-19
Issue 9516: Simplify Relation Management
Issue 9517: Cache and CacheAccess should have a common parent
Issue 9518: Object notification in manual update mode required
Issue 9519: ObjectExtent and ObjectModifier can be removed
Issue 9520: Introduce the concept of cloning contracts consistently in specification
Issue 9521: Object State Transitions of Figure 3-5 and 3-6 should be corrected
Issue 9522: Add Iterators to Collection types
Issue 9523: Harmonize Collection definitions in PIM and PSM
Issue 9524: Add the Set as a supported Collection type
Issue 9525: Make the ObjectFilter and the ObjectQuery separate Selection Criterions
Issue 9526: Add a static initializer operation to the CacheFactory
Issue 9527: Make update rounds uninterruptable
Issue 9528: Remove lock/unlock due to overlap with updates_enabled
Issue 9529: Add Listener callbacks for changes in the update mode
Issue 9530: Representation of OID should be vendor specific
Issue 9531: define both the Topic name and the Topic type_name separately
Issue 9532: Merge find_object with find_object_in_access
Issue 9533: Clarify which Exceptions exist in DLRL and when to throw them
Issue 9534: Support sequences of primitive types in DLRL Objects
Issue 9535: manual mapping key-fields of registered objects may not be changed
Issue 9536: Specification does not state how to instantiate an ObjectHome
Issue 9537: Raise PreconditionNotMet when changing filter expression on registered Obje
Issue 9538: PIM description of "get_domain_id" method is missing
Issue 9539: PIM and PSM contradicting wrt "get_sample_lost_status" operation
Issue 9540: Small naming inconsistencies between PIM and PSM
Issue 9541: Unlimited setting for Resource limits not clearly explained
Issue 9542: Inconsistent PIM/PSM for RETCODE_ILLEGAL_OPERATION
Issue 9543: Resetting of the statusflag during a listener callback
Issue 9544: Incorrect description of enable precondition
Issue 9545: invalid reference to delete_datareader
Issue 9546: Clarify the meaning of locally
Issue 9547: Invalid DURABILITY_SERVICE reference on the DataWriter
Issue 9548: Missing autopurge_disposed_sample_delay
Issue 9549: Illegal return value register_instance
Issue 9550: Typo in section 2.1.2.5.1
Issue 9551: Extended visibility of instance state changes
Issue 9552: Clarify notification of ownership change
Issue 9553: read/take_next_instance()
Issue 9554: instance resource can be reclaimed in READER_DATA_LIFECYCLE QoS section
Issue 9555: String sequence should be a parameter and not return value
Issue 9574: Need to clarify what is meant by "RELATED_OBJECTS"
Issue 9575: clarify allowable (spec compliant) ways to implement ObjectReference[].
Issue 9964: create_contentfilteredtopic Method Prototype and Description Out
Issue 10357: Section: 2.1.1.2.1
Issue 10358: Section: 2.1.2.2.1.9
Issue 10359: Section: 2.1.2.3.1
Issue 10360: Section: 2.1.2.3.2
Issue 10361: Section: 2.1.2.3.6.1
Issue 10362: Section: 2.1.2.5.1.3
Issue 10363: Section: 2.1.2.5.3
Issue 10364: Section: 2.1.2.5.3.8
Issue 10365: Section: 2.1.2.5.3.9
Issue 10366: Section: 2.1.3
Issue 10367: Section: 2.1.3.5
Issue 10368: Section: 2.1.3.14
Issue 10369: Section: 2.1.3.18
Issue 10370: Section: 3.1.4.5
Issue 10542: PIM Spec should have separate tables for Foo types, like DCPS section does
Issue 10543: DLRL Issue: Diagrams in Fig 3.5 and Fig 3.6 look improperly captioned
Issue 10544: Request clarification on a WRITE_ONLY CacheAccess, cloning, and refresh()
Issue 10545: DLRL Issue: Need clarification on limitations of bi-directional association
Issue 10546: Request clarification of how to handle a dangling related object
Issue 10547: DLRL Issue: In the future, allow DLRL valuetype implementations?
Issue 10548: DLRL Issue: Request clarification on the behavior of is_modified
Issue 10549: Should "set" method called outside of writable CacheAccess throw exception?
Issue 10550: DLRL Issue: Error in the ownership of a SelectionCriterion
Issue 10551: DLRL Issue: Error in the IDL for the SelectionListener
Issue 10552: IDL interfaces for ObjectListener and FooListener are inconsistent
Issue 10553: Request clarification: what can you do with a deleted object?
Issue 10554: Request clarification on interface of DLRL object with multi-attribute
Issue 10555: Can a CacheAccess::refresh() throw an AlreadyClonedInWriteMode exception?
Issue 10556: DLRL Issue: Mismatch between DLRL and CORBA on enumerations
Issue 10557: Proposed Enhancement: allow QoS directly on a DLRL object type?
Issue 10581: Section: 2.2.3
Issue 10661: Unclarities in section 3.1.4.2.3
Issue 10662: Unclarities in table in section 3.1.6.2 the row regarding the CacheAccess
Issue 10663: Clarify usage of create_cache operation
Issue 10664: Clarify usage of refresh operation on the CacheBase, section 3.1.6.3.2
Issue 10665: Clarify usage of cache_usage operation on the CacheBase, section 3.1.6.3.2
Issue 10666: Which exceptions can be raised under which circumstances?
Issue 10667: Clarify what happens in the purge operation of the CacheAccess
Issue 10668: Rewrite sentence on page 3-1, section 3.1
Issue 10669: Clarify exception condition
Issue 10670: Clarify usage of find_home_by_name
Issue 10671: Clarify exceptions with operation register_all_for_pubsub
Issue 10672: Clarify various things with operation enable_all_for_pubsub
Issue 10673: Clarify exceptions for enable_updates and disable_updates operations of Cac
Issue 10674: In section 3.1.6.3.5 regarding the CacheListener clarify some things
Issue 10675: In section 3.1.6.3.6 regarding the Contract clarify some things
Issue 10676: In section 3.1.6.3.7 regarding the ObjectHome clarify some things
Issue 10677: In section 3.1.6.3.11 regarding the FilterCriterion clarify some things
Issue 10678: getter/setter/is_modified operations
Issue 10679: Clarify usage of the destroy() operation on the ObjectRoot
Issue 10680: Clarify usage of the is_modified() operation on the ObjectRoot
Issue 10681: Description not detailed enough
Issue 10682: Clarify text on page 3-35 directly following the operation descriptions.
Issue 10683: Clarify exceptions for add/put operations on List in section 3.1.6.3.16
Issue 10684: Clarify exceptions
Issue 10685: Clarify listeners
Issue 10686: (last) end_updates call on CacheListeners
Issue 10687: Clarify typical scenario for read mode of a CacheAccess in section 3.1.6.5.
Issue 10688: Clarify typical scenario for write mode of CacheAccess in section 3.1.6.5.2
Issue 10689: Clarify what happens with selection->refresh if auto_refresh is true
Issue 10690: The Implied IDL needs to be extended with attribute examples for class Foo
Issue 10691: The generated class FooImpl is not mentioned in the implied idl
Issue 10692: Usage of Undefined unclear
Issue 10693: Clarify exceptions/usage for remove operation on List in section 3.1.6.3.16
Issue 10694: Clarify usage of Fully qualified names in the model tags in section 3.2.2.3
Issue 10695: Section 3.1.3.1 on page 3-3
Issue 10696: Unclear sentence in section 3.1.6.1.1
Issue 10697: Section 3.2.1.2.1 Generic DLRL Entities get_instance
Issue 10698: The read_state of a cache object contains some typos
Issue 10699: Clearly separate default mapping from pre-defined mapping
Issue 10700: Cache
Issue 10701: non-existing elements
Issue 10702: Unregistered objects
Issue 10703: Describe exact event propagation in case of inheritance + multiple listener
Issue 10704: samples from the underlying DataReaders
Issue 10705: Change description on mapping rules for Exceptions or return values
Issue 10706: Add exceptions, clarify usage of other exceptions
Issue 10707: DCPSError also becomes a 'runtime' exception
Issue 10717: Remove the CacheDescription object
Issue 10718: Add attribute to CacheAccess (section 3.1.6.3.3)
Issue 10719: Add an attribute to get the home_index
Issue 10720: Relationships to objects that have been deleted are not allowed.
Issue 10721: getters of relationships
Issue 10722: Prevent writing contents of CacheAccess while 'invalid' relations exists
Issue 10723: Let the attach/detach listener operation return a boolean
Issue 10724: set_query and set_parameters operation
Issue 10725: Introduce the clear() operation on the Collection interface.
Issue 10726: Support local class in Mapping XML
Issue 10727: Enable_all_for_pubsub operation
Issue 10728: set_auto_deref, deref_all, underef_all operations
Issue 10729: Cache should have a getter to obtain its related DomainParticipant.
Issue 10732: Proposal to make the is_xxx_modified operation optional
Issue 10733: The classname in FullOid makes no sense in case of a 'local' Object model
Issue 10734: Add an operation to the Cache
Issue 10735: Selection should have a non-listener way of obtaining the members
Issue 10736: FilterCriterion
Issue 10737: How are deleted objects treated in a CacheBase and a Selection
Issue 10738: objects instances in a writeable CacheAccess
Issue 10739: Describe exact behaviour of compositions and associations in the DLRL.
Issue 10740: DLRL object
Issue 10741: There is a lot of redundancy in the XML file.
Issue 10742: Indicate the semantics of merging separate topics into one single object
Issue 10743: instance_state of a DCPS instance becomes NOT_ALIVE_NO_WRITERS
Issue 10744: Extend the XML to allow optional relationships
Issue 10745: cloned objects
Issue 10746: Minor typos and inconsistencies
Issue 10747: Typo in section 3.1.4.2.1
Issue 10748: Typo in section 3.1.6.1.2.1
Issue 10749: Various typos in section 3.1.6.3.4, Cache
Issue 10750: Various typos in section 3.1.6.3.7, ObjectHome
Issue 10751: which_contained_modified operation should be removed
Issue 10752: Typos in section 3.1.6.4.1
Issue 10753: Typos in section 3.1.6.4.3 & 3.1.6.4.4
Issue 10754: Typos in section 3.1.6.6
Issue 10755: Typos in section 3.2.1.2 IDL description
Issue 10756: Typos in section 3.2.1.2 IDL description
Issue 10757: section 3.2.1.2 IDL description on page 3-52
Issue 10758: section 3.2.1.2.2 Implied IDL
Issue 10759: In section 3.2.2.3.2.11 MultiAttribute.
Issue 10760: In section 3.2.2.3.2.11 MultiAttribute, the example xml code
Issue 10761: section 3.2.2.3.2.13 MultiRelation, 3rd bullet
Issue 10762: section 3.2.2.3.2.10 MonoAttribute, 3.2.2.3.2.12 MonoRelation
Issue 10763: In section 3.2.3.5 Code example, several typos
Issue 10764: Figure 3-4 and section 3.2.1.2.2 Implied IDL
Issue 10765: section 3.2.2.3.2.13 MultiRelation - XML code
Issue 10766: Section 3.2.1.1 Mapping Rules regarding error reporting
Issue 10768: Inconsistency in attribute definitions for valuetype ObjectRoot
Issue 10769: Figure 3-4 on page 3-16 is missing some operations in some classes
Issue 10770: use case for multiple valueFields
Issue 10771: Give each CacheAccess its own Publisher and DataWriters
Issue 10804: A descriptive name
Issue 10805: DataReader semantics for historical data are insufficient
Issue 10806: Invalid DURABILITY_SERVICE reference on the DataWriter
Issue 10807: Add name attribute to Entity
Issue 10808: Semantics instance liveliness and ownership unclear
Issue 10809: Missing TypeSupport operations
Issue 10810: Inconsistent lookup semantics
Issue 10811: Default built-in ReaderDataLifecycle values
Issue 10812: Cancel transaction
Issue 10813: Get entity enabled state
Issue 10980: Section: 2.1.2.5.2
Issue 10993: DDS DCPS Issue: PRESENTATION=GROUP and QoS
Issue 10994: Specify names of mono-relation and multi-relation fields for default mappin
Issue 10995: DDS DLRL Issue: Clarification on the use of a Set in a DLRL Query
Issue 10996: create_object and create_unregistered_object
Issue 10997: clarify behavior of content_filters in an inheritance hierarchy
Issue 10998: DDS DLRL Issue: Clarify behavior of a Composition
Issue 12212: DDS typos and omissions
Issue 12276: DURABILITYSERVICE_POLICY_NAME
Issue 12360: Specify the allowed IDL Types within DDS Topic structs
Issue 12465: 'synchronous' and 'asynchronous' switched
Issue 12539: Deprecated usage of IDL in the DDS spec
Issue 13448: No specific package definition for the entities of the model in case of JAVA language binding
Issue 13839: Mapping of OMG IDL to C++ for DDS
Issue 13950: Introduce new typedef
Issue 13951: For ContentFilteredTopic::get_expression_parameters the argument name is not given in the spec
Issue 14089: [DDS] Data types permissible as topic key fields
Issue 14165: #define HANDLE_TYPE_NATIVE long typedef HANDLE_TYPE_NATIVE InstanceHandle_t;
Issue 14166: DDS defines Time_t with seconds as long, which is 32-bit; this will give an issue after 2038
Issue 14814: Add new mask which will let DDS call back on the listener to get its mask
Issue 14829: Extend Topic with method to retrieve key fields
Issue 15041: Add DDS::STATUS_MASK_NONE
Issue 15834: DDS specification should be more precise on NATIVE defines
Issue 15835: DDS should require to annotate IDL to indicate which IDL types are used for DDS
Issue 15904: Add several with_profile methods
Issue 15945: All IDL should use local interfaces
Issue 16029: DDS Entities should have a name
Issue 16098: behaviour of redefining the same topic multiple times with different QoS not clearly specified
Issue 16262: get_type_name, class or object method
Issue 16266: InstanceHandle_t/Domain ID
Issue 16607: Missing APIs for (read|take)_instance_w_condition
Issue 17284: History and Reliability should be orthogonally independent concerns
Issue 17362: The compatibility rules for the Presentation QoS are too strict
Issue 17363: When using GROUP access scope presentation QoS, allow for read/take outside of begin_access and end_access block
Issue 17364: Allow for more optimized list returned by get_datareaders()
Issue 17412: Spec lacks definition regarding uniqueness of InstanceHandle_t
Issue 17413: Write with handle_nil underspecified
Issue 7964: no specific mention of interoperability in DDS 04-04-12 standard proposal (data-distribution-rtf)
Source: EADS (Mr. Oliver M. Kellogg, oliver.kellogg(at)cassidian.com)
Nature: Uncategorized Issue
Severity:
Summary:
I find no specific mention of interoperability in the DDS 04-04-12 standard proposal. It should be clarified whether the standard is intended to address interoperability, and if so, under what exact conditions (e.g., is it safe to assume that if the DCPS IDL PSM is implemented by IIOP based CORBA ORBs then it will be possible to interoperate?)
RTF Comments: The DDS specification addresses only inter-vendor portability. The specification defines the API and behavior. There is an on-going effort at OMG to address interoperability. In the meantime, implementations could be built on top of IIOP. However, given that the DDS Entities are intended to be local communication endpoints and not references to remote objects, the use of IIOP would not be sufficient to achieve interoperability, as IIOP does not address how to represent the QoS, discovery information, and other behaviors necessary to implement DDS. In addition, the DDS specification was designed to be implementable on top of connectionless unreliable protocols such as IP multicast, and IIOP does not offer direct facilities to do that.
Nature: Enhancement
Summary:
Document 04-04-12 para. 2.2.3 near end
In the implied IDL interface FooTypeSupport for a user type Foo,
there is an operation
DDS::ReturnCode_t register_type(
    in DDS::DomainParticipant participant,
    in string type_name);
IMHO the type_name argument is superfluous:
The generated stub code can fill it in automatically ("Foo").
RTF Comments: The type name is not superfluous; see section 2.1.2.3.6.1. In some applications, it may be desirable to register the same physical type multiple times (with different participants or the same participant) under different names. However, given that different Topics can already be created that use the same type, and given that typedefs can be used to create new type names, a good argument could be made that there is limited use for the added functionality provided by the type-name parameter. A use case could perhaps be used to clarify the need. As a compromise, the standard could be changed to state that a nil type name is permissible, in which case the default name will be used. Alternatively, the FooTypeSupport class could get an additional method get_type_name() that returns the default type name.
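For illustration, a minimal sketch of the discussed usage in a classic C++ binding of the IDL PSM; how FooTypeSupport is instantiated varies by vendor, and the nil-name call is only the compromise proposed above, not current spec behavior:

// Assume 'participant' is a DDS::DomainParticipant* obtained from the factory.
FooTypeSupport foo_ts;
DDS::ReturnCode_t rc = foo_ts.register_type(participant, "Foo");
rc = foo_ts.register_type(participant, "FooAlias"); // same physical type, second name
// Under the proposed compromise, a nil type name would select the default:
// foo_ts.register_type(participant, NULL);  // would register as "Foo"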
OMG document 04-04-12 para. 2.1.1.2.2 Overall Conceptual Model pg. 2-7 states: At the DCPS level, data types represent information that is sent atomically. For performance reasons, only plain data structures are handled by this level. Please define the term "plain data structures".
* The setting 'autodispose_unregistered_instances = FALSE' causes the DataWriter [...]
Change FALSE to TRUE.
The table format used for documenting classes contains an "attributes" and an "operations" section. However, in order for applications to be portable across implementations of the DDS spec, it would be desirable to add a "constructors" section that explicitly states those constructors that take one or more arguments (i.e., non-default constructors).
Filters and Queries are not compile-time checked and are too heavy. The 04-04-12 DDS document proposes a subset of SQL for defining filters and queries. The filter/query expressions are passed into the corresponding methods as type "string". First, this means that conforming implementations need to provide an SQL expression parser/evaluator, a fairly complex piece of software. Second, since the expressions are given as strings, checking them at compile time is not straightforward. We request the Revision Task Force to reconsider this design decision in favor of less heavyweight approaches that allow for compile-time checks.
RTF Comments: The DDS RTF agrees in principle that it would be a good idea. However, we were not able to come up with a suitable proposal that addresses the need for doing the content filtering also at the DataWriter side. Therefore the DDS RTF recommends that this issue be postponed to a future RTF, where more implementation experience may be available to suggest the best approach. Resolution: No change to the specification.
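To make the concern concrete, a hedged sketch of a string-based filter in a classic C++ binding; the topic, field names, and parameter values are hypothetical:

// Assume 'participant' is a DDS::DomainParticipant* and 'foo_topic' an
// existing DDS::Topic*. The filter is only a string: a malformed expression
// such as "x >> %0" is not detected until this call executes, never at
// compile time, which is exactly the issue raised above.
DDS::StringSeq parameters; // would hold the values substituted for %0 and %1
DDS::ContentFilteredTopic* cft =
    participant->create_contentfilteredtopic(
        "FilteredFoo",        // name of the new TopicDescription
        foo_topic,            // related Topic
        "x > %0 AND y < %1",  // SQL-like expression, parsed at run time
        parameters);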
The specification contains a number of misspellings and other minor typographical and grammatical errors.
The typographical and grammatical errors shall be corrected.
Revised Text:
Location: Original Incorrect Text → Corrected Text
2.1.2, fig. 2-4: "Topic Module" → "Topic-Definition Module"
2.1.2.2.2: create_participant parameter "domainId" → "domain_id"
2.1.2.2.2: lookup_participant parameter "domainId" → "domain_id"
2.1.2.2.2.1: "domainId" → "domain_id"
2.1.2.2.2.4: "domainId" (two occurrences) → "domain_id" (two occurrences)
2.1.2.3.7, pg. 2-39: "…for a hypothetical application named "Foo"…" → "…for a hypothetical application data type named "Foo"…"
2.1.2.4.1.15: "…get_default_datawriter_qos will match the set of values specified on the last successful call to get_default_datawriter_qos…" → "…get_default_datawriter_qos will match the set of values specified on the last successful call to set_default_datawriter_qos…"
2.1.2.5, fig. 2-10: SampleInfo attribute "instance_rank" → "sample_rank"
2.1.2.5.1, fig. 2-11: transition from NO_WRITERS to ALIVE "…=++" → "…++"
2.1.2.5.1, pg. 2-57: "time-stamp" → "timestamp"
2.1.2.5.1, pg. 2-59, 2nd-to-last para.: "…snapshot of view_state…" → "…snapshot of the view_state…"
2.1.2.5.1, pg. 2-61, 4th para.: "…multiple DataReader." → "…multiple DataReaders."
2.1.2.5.1, pg. 2-61, list item (1): "…list of DataReader…" (two occurrences) → "…list of DataReaders…" (two occurrences)
2.1.2.5.1, pg. 2-61: "…across DataWriter entities." (two occurrences) → "…across DataReader entities." (two occurrences)
2.1.2.5.2.7: "…multiple DataReader…" → "…multiple DataReaders…"
2.1.3, pg. 2-92: "…ability to: specify and receive coherent changes see the relative order of changes." → "…ability to specify and receive coherent changes and to see the relative order of changes."
2.1.3, pg. 2-98: "time-stamp" → "timestamp"
2.1.3, pg. 2-101, autopurge_nowriter_samples_delay row: "…information regarding instances that have the view_state NOT_ALIVE_NO_WRITERS." → "…information regarding instances that have the instance_state NOT_ALIVE_NO_WRITERS."
2.1.3.6: "TIME_BASED_PERIOD" → "TIME_BASED_FILTER"
2.1.3.17, last para.: "compatible" (two occurrences) → "consistent" (two occurrences)
2.1.3.18, last para.: "compatible" (two occurrences) → "consistent" (two occurrences)
2.1.3.20, itemized list, first bullet: "The setting 'autodispose_unregistered_instances = FALSE' causes the DataWriter…" → "The setting 'autodispose_unregistered_instances = TRUE' causes the DataWriter…"
2.1.3.21, para. 4: "…view_state = NOT_ALIVE_NO_WRITERS…" → "…instance_state = NOT_ALIVE_NO_WRITERS…"
2.1.4.1, RequestedIncompatibleQosStatus::total_count row: "Total cumulative count the concerned DataReader discovered a DataWriter…" → "Total cumulative number of times the concerned DataReader discovered a DataWriter…"
2.1.4.4, before fig. 2-19: reference to figure 2-18 → reference to figure 2-19
2.1.5, para. 3: "get_datareader" → "lookup_datareader"
2.2.3: "const long DURATION_INFINITY_SEC = 0x7ffffff; const unsigned long DURATION_INFINITY_NSEC = 0x7ffffff;" → "const long DURATION_INFINITY_SEC = 0x7fffffff; const unsigned long DURATION_INFINITY_NSEC = 0x7fffffff;"
2.2.3: in interface DomainParticipantFactory, the parameter "domainId" of both create_participant and lookup_participant → "domain_id"
In a number of instances, there are minor inconsistencies in spelling and naming between the specification's platform-independent model (PIM) and the included IDL platform-specific model (PSM). Resolution: In each case, the most descriptive term of the two options was chosen and the other was conformed to it.
The method DataWriter::register conflicts with the C++ 'register' keyword. Resolution: Replace register and unregister by register_instance and unregister_instance. Replace register_w_timestamp and unregister_w_timestamp by register_instance_w_timestamp and unregister_instance_w_timestamp.
Since the revisions are straightforward, only the figures, tables, and paragraphs affected by the above change are indicated here:
- update figures 2-8 and 2-9 accordingly
- update the tables in paragraph 2.1.2.4.2 accordingly
- update the text in paragraphs 2.1.2.4.2.5/6/7/8, 2.1.3.20, and 2.1.3.22.3 accordingly
- update the IDL in paragraph 2.2.3 accordingly
Summary: In par. 2.1.2.4.2.10 (write) and par. 2.1.2.4.2.12 (dispose) the specification does not specify an error code for the case that the specified handle is valid but does not correspond to the given instance (the key values must match), nor for the case that the specified handle is invalid.
Resolution: Specify that in general the result is unspecified, but that depending on vendor-specific implementations, the resulting error code is PRECONDITION_NOT_MET if a wrong instance (i.e., one with a wrong key value) is provided, and BAD_PARAMETER if a bad handle is provided.
Revised Text: Add the following text to the end of 2.1.2.4.2.10 (write): In case the provided handle is valid but does not correspond to the given instance, the resulting error code of the operation will be PRECONDITION_NOT_MET. In case the handle is invalid, the behavior is in general unspecified, but if detectable by a DDS implementation, the returned error code will be BAD_PARAMETER. Replace in 2.1.2.4.2.12 (dispose) the text "Possible error codes returned in addition to the standard ones: PRECONDITION_NOT_MET" by the same text.
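A minimal sketch of the resulting behavior in a classic C++ binding; the data type Foo and its key fields are illustrative only:

// Assume 'writer' is a FooDataWriter* obtained from create_datawriter.
Foo sample;                       // fill in key fields and data
DDS::InstanceHandle_t handle = writer->register_instance(sample);
DDS::ReturnCode_t rc = writer->write(sample, handle);
// Per this resolution:
//   handle valid but for a different instance (key mismatch):
//       rc == DDS::RETCODE_PRECONDITION_NOT_MET
//   handle invalid: unspecified in general, but if the implementation
//       detects it, rc == DDS::RETCODE_BAD_PARAMETER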
In section 2.1.2.5.2.5 (create_datareader) the special value DATAWRITER_QOS_USE_TOPIC_QOS is mistakenly used instead of DATAREADER_QOS_USE_TOPIC_QOS. Resolution: Replace the wrong text with the correct version. Revised Text: In 2.1.2.5.2.5 (create_datareader) replace the text "The special value DATAWRITER_QOS_USE_TOPIC_QOS" with "The special value DATAREADER_QOS_USE_TOPIC_QOS".
Section 2.1.3 (Supported QoS) specifies the default value of the duration attribute of the READER_DATA_LIFECYCLE QoS as "unlimited". Resolution: Replace "unlimited" by "infinite", which is the term generally used in relation to durations. Revised Text: In the QoS table of paragraph 2.1.3, replace the text "By default, unlimited" that belongs to the READER_DATA_LIFECYCLE QoS by the text "By default, infinite".
The table in section 2.1.3 (Supported QoS) wrongly specifies that USER_DATA concerns Topic. Resolution: 'Topic' should be removed from the 'concerns' column. Revised Text: In the table in section 2.1.3 (Supported QoS), remove the word 'Topic' from the "Concerns" column of the "USER_DATA" row.
In the table in section 2.1.3 (Supported QoS) the DESTINATION_ORDER QoS does not mention the DataWriter as a concerned entity. Resolution: Add DataWriter to the 'concerns' column. Revised Text: In the table in section 2.1.3 (Supported QoS), add the word 'DataWriter' to the "Concerns" column of the "DESTINATION_ORDER" row.
In the IDL specification of section 2.2.3, the first parameter of the 'register_type' method is called 'domain' instead of 'participant' (as it is called elsewhere, such as in the table of section 2.1.2.3.6). Resolution: Change the parameter name to 'participant' in the TypeSupport::register_type IDL. Revised Text: In Chapter 2.2.3 (IDL specification), change the register_type parameter called 'domain' into 'participant'.
In some class diagrams, generic operations are indicated using '_xxx_' in their names instead of fully specifying all the real operations, and some operations are missing. Resolution: Add the missing operations for the DataReader and explicitly mention all DataReader operations. Revised Text: In the class diagram Fig. 2-8 on page 2-39: add the missing operations "read_w_condition", "take_w_condition" and "return_loan"; rename "read_xxx_w_condition" into "read_next_w_condition"; rename "take_xxx_w_condition" into "take_next_w_condition".
The class DomainParticipantFactory in figure 2-6, section 2.1.2.2 (Domain Module), misses the operations set_default_participant_qos and get_default_participant_qos. Resolution: Add the missing operations. Revised Text: In the class diagram Fig. 2-6 of section 2.1.2.2 (Domain Module), add the operations 'set_default_participant_qos' and 'get_default_participant_qos'.
In some of the figures some operations are missing.
Resolution: The missing operations shall be added.
Revised Text:
2.1.2.5, fig. 2-10: delete_contained_entities()
2.1.2.2, fig. 2-6: set_default_publisher_qos(), get_default_publisher_qos(), set_default_subscriber_qos(), get_default_subscriber_qos(), set_default_topic_qos(), get_default_topic_qos()
The specification contains a number of misspellings and other minor typographical and grammatical errors.
Resolution: The typographical and grammatical errors shall be corrected.
Revised Text:
2.1.2.1, fig. 2-5: class name "Status" → "Status"
2.1.2.2, fig. 2-6: "domainId" → "domain_id"
In the DCPS PSM the Topic class does not specify the methods set_qos, get_qos, set_listener and get_listener.
Resolution:
The methods set_qos, get_qos, set_listener and get_listener shall be added to the IDL description of the Topic class.
Revised Text:
In the IDL in 2.2.3:
interface Topic : Entity, TopicDescription {
…
ReturnCode_t set_qos(
in TopicQos qos);
void get_qos(
inout TopicQos qos);
ReturnCode_t set_listener(
in TopicListener a_listener,
in StatusMask mask);
TopicListener get_listener();
ReturnCode_t get_inconsistent_topic_status(
inout InconsistentTopicStatus a_status);
…
};
Some of the formal parameter names of ContentFilteredTopic methods are vague.
Resolution: The names shall be changed into more distinct names.
Revised Text:
section 2.1.2.2.1, create_contentfilteredtopic: expression_parameters → filter_parameters
section 2.1.2.2.1.7: topic_name → related_topic
section 2.1.2.2.1.7: expression_parameters → filter_parameters
section 2.1.2.3.3, get_expression_parameters: expression_parameters → filter_parameters
section 2.1.2.3.3, set_expression_parameters: expression_parameters → filter_parameters
The last part of the description states: "They both concern Topic, DataWriter and DataReader…" Although further on the text describes that TOPIC_DATA is only applicable to Topics, it would be better to remove this part of the description. Resolution: The last section of paragraph 2.1.3.2 shall be removed. Revised Text: The text "This QoS is very similar in intent to USER_DATA……primarily on the DataReader/DataWriter." shall be removed.
The description of the Publisher method set_default_datawriter_qos describes its use in case the QoS was not explicitly specified at the create_datawriter operation. However, specifying the QoS policy at create_datawriter is not optional; the description should instead refer to the case where the default is used. Resolution: The description shall be modified to clarify the use case of using the defaults. Revised Text: In section 2.1.2.4.1.15: Replace "in the case where the QoS policies are not explicitly specified" with "in the case where the QoS policies are defaulted".
The description of the Entity method set_listener does not describe the result of this method when the value NIL is passed for the listener parameter. Chapter 2.1.2.1.1.3 (set_listener) should explicitly state that passing the value NIL for the listener is valid and clears the listener. Resolution: The description of this use case shall be added. Revised Text: Only one listener can be attached to each Entity. If a listener was already set, the operation set_listener will replace it with the new one. Consequently, if the value 'nil' is passed for the listener parameter to this method, any existing listener will be removed.
In the class description of the DomainParticipant the description of the attribute domain_id is missing. Resolution: The attribute domain_id shall be added to the table in 2.1.2.2.1. The description of attribute domain_id shall be added as section 2.1.2.2.1.26. Revised Text: 2.1.2.2.1.26 domain_id: The domain_id identifies the Domain of the DomainParticipant. At creation, the DomainParticipant is associated with this domain.
The default value of the RELIABILITY QoS policy attribute max_blocking_time is not specified. Resolution: The default value shall be specified as an arbitrary value greater than zero, so that writers do not encounter timeouts on acceptable temporary buffer saturations; the value should also not be too large, since real-time behavior would expect that anything causing writers to block does not persist for long. Revised Text: In section 2.1.3: Add to the description of the RELIABILITY QosPolicy value RELIABLE in the QosPolicy table the text: "The default max_blocking_time = 100ms."
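A hedged sketch of making that default explicit in a classic C++ binding; the publisher and writer objects are assumed to exist:

// Assume 'publisher' is a DDS::Publisher*.
DDS::DataWriterQos w_qos;
publisher->get_default_datawriter_qos(w_qos);
w_qos.reliability.kind = DDS::RELIABLE_RELIABILITY_QOS;
w_qos.reliability.max_blocking_time.sec = 0;             // the proposed default,
w_qos.reliability.max_blocking_time.nanosec = 100000000; // i.e. 100 ms
// A reliable write() may now block up to 100 ms on temporary buffer
// saturation before giving up with RETCODE_TIMEOUT.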
It is not clearly defined what should happen when the same condition is attached to the same WaitSet multiple times. Resolution: Explicitly state that this has no effect: subsequent attachments of the same Condition will be ignored. Revised Text: Add a small piece of text to section 2.1.2.1.6.1, which explains that adding a Condition that is already attached to that WaitSet has no effect.
The get_instance method is mentioned in the PIM, but not in the IDL PSM. Resolution: Explicitly state that this is a static method and that it is therefore not specified in IDL. Revised Text: Add a piece of text to section 2.1.2.2.2.3 explaining that the get_instance method is a static method implemented as a native language construct and can therefore not be expressed in IDL. Add the "static" keyword before the get_instance method mentioned in the table of section 2.1.2.2.2. Add a piece of text in section 2.2.2 (right after the introduction of default constructors for WaitSet and GuardCondition) explaining that the DomainParticipantFactory has a static method called get_instance in the native classes that implement it.
It is not clear how the value of a char constant should be expressed in a query expression. Resolution: Clarify that a char constant in a query expression must be placed between single quotes. Revised Text: Add a bullet to Appendix B in the section "Token expression" where the char constant is introduced, explaining how it should be defined (between single quotes, just like the string). Keep Appendix C in line with this as well.
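For example, a QueryCondition built this way in a classic C++ binding; the field name 'initial' is hypothetical, and the create_querycondition signature follows the PIM:

// Assume 'reader' is a DDS::DataReader*.
DDS::StringSeq query_params; // no %n parameters in this example
DDS::QueryCondition* qc = reader->create_querycondition(
    DDS::ANY_SAMPLE_STATE, DDS::ANY_VIEW_STATE, DDS::ANY_INSTANCE_STATE,
    "initial = 'X'",   // char constant between single quotes, per this clarification
    query_params);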
It would be nice to be able to use the "<item>_QOS_DEFAULT" constant both in the set_default_<item>_qos method of its factory and in its set_qos method.
Resolution:
Explicitly allow that passing the default qos constant ("<item>_QOS_DEFAULT") to the "set_default_<item>_qos" method in its factory will reset the default qos value for the item to its initial factory default state.
Also state that using the "<item>_QOS_DEFAULT" constant in the set_qos method of an item will change the qos of that item according to the current default of its container entity at the time the call is made.
Revised Text:
For each "set_default_<item>_qos" method in each factory, add the fact that the "<item>_QOS_DEFAULT" constant may be used to revert it back to its factory settings. This impacts sections 2.1.2.2.1.20, 2.1.2.2.1.22, 2.1.2.2.1.24, 2.1.2.4.1.14 and 2.1.2.5.2.15.
For each set_qos method in each entity, state that the corresponding "<item>_QOS_DEFAULT" constant may be used to change the qos of the item according to the current default of its container entity at the time the call is made, provided that this does not change any immutable qos once the entity is enabled. This impacts only section 2.1.2.1.1.1, since the set_qos method explanations are not repeated in the descriptions for every entity specialization.
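A hedged usage sketch in a classic C++ binding; how the constant is exposed is PSM-dependent, and the publisher and writer objects are assumed to exist:

// Reset the factory-level default back to the initial factory settings:
publisher->set_default_datawriter_qos(DDS::DATAWRITER_QOS_DEFAULT);
// Re-apply the current default to an existing writer; once the writer is
// enabled this must not change any immutable policy:
writer->set_qos(DDS::DATAWRITER_QOS_DEFAULT);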
The get_conditions and wait methods of the WaitSet pass the Conditions in which the user is interested back to the application as out parameters. This causes unnecessary memory allocations each time a WaitSet is used for that purpose. Resolution: Make the WaitSet result sequence of the inout type for performance reasons, especially because the application is aware of the desired (worst-case) length; the user is then able to recycle these sequences every time. Revised Text: In the table in section 2.1.2.1.6 change the parameter types of the Condition sequence from out to inout. Explain in sections 2.1.2.1.6.3 and 2.1.2.1.6.4 that the user can either pre-allocate the sequence and force the middleware to overwrite its contents, or not pre-allocate and let the middleware allocate the memory. Also change the IDL definition for both methods in section 2.2.3.
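A minimal sketch of the recycling pattern this enables, assuming a CORBA-style C++ sequence mapping (the maximum-taking sequence constructor) and an existing StatusCondition:

DDS::WaitSet waitset;
waitset.attach_condition(status_condition); // one condition in this example
DDS::Duration_t timeout = { 1, 0 };         // 1 s
DDS::ConditionSeq active(1);                // pre-allocated for the worst case
while (running) {
    // With an inout sequence the middleware fills the existing buffer in
    // place instead of allocating a fresh sequence on every call:
    DDS::ReturnCode_t rc = waitset.wait(active, timeout);
    if (rc != DDS::RETCODE_OK) continue;
    /* dispatch on the triggered conditions in 'active' */
}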
Appendix B describes an enumeration value as name::value; during a telephone conference (in a hurry) this was decided to solve the ambiguity between attribute names and enumeration labels. The description states that the name specifies the field; that should be the enumeration type. In addition, the enumeration type would have to be a fully specified type name including its scope. This is a lot to specify in a query expression, especially because within a query expression the enumeration value is always related to a field that already identifies the type. In addition, in SQL enumeration labels are represented as string literals, i.e., the values are put between single quotes. Resolution: Treat enumeration values as string literals and place them between single quotes instead of using a scope operator. Revised Text: In Appendix B in the section "Token expression" where the ENUMERATEDVALUE is introduced, replace the sentence stating that "A double colon '::' is used to separate the name of the enumeration from that of the field." with a sentence stating that enumeration labels should be treated as string literals and should therefore be put between single quotes. In the next sentence, remove the part which states that the name of the enumeration should correspond to the name specified in the IDL definition (but keep the part stating that the name of the value should correspond to the names of the labels). Keep Appendix C in line with this as well.
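For example (the field name 'color' and the enumeration ColorKind with label RED are hypothetical), the change amounts to:

Before: "color = ColorKind::RED"   (scoped-name form)
After:  "color = 'RED'"            (string-literal form, as in SQL)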
Section 2.1.2.1.6.2 (WaitSet::detach_condition) specifies that BAD_PARAMETER is returned if the given condition is not attached to the WaitSet. It would be more appropriate to return PRECONDITION_NOT_MET. Resolution: Change the return code. Revised Text: In section 2.1.2.1.6.2 (WaitSet::detach_condition), change all mentions of BAD_PARAMETER into PRECONDITION_NOT_MET.
DomainParticipants, being entities, can be either enabled or disabled. Because the DomainParticipantFactory is not an entity and therefore does not have a QoS, it does not support a factory QosPolicy that specifies how to create a DomainParticipant (either enabled or disabled).
Resolution:
Add a DomainParticipantFactoryQos policy to the DomainParticipantFactory, and add the operations set_qos() and get_qos() to the DomainParticipantFactory class. (However, do not make the DomainParticipantFactory an Entity itself!)
Revised Text:
In section 2.1.2.2.2, add the get_qos and set_qos methods to the table. Create two new sections (2.1.2.2.2.7 and 2.1.2.2.2.8) which explain the semantics of these get_qos and set_qos methods. Also explain that although the DomainParticipantFactory has a QoS, it is not an Entity, since it does not have any StatusConditions or Listeners and cannot be enabled.
Add to the table in section 2.1.3 for the ENTITY_FACTORY policy in the "Concerns" column also the DomainParticipantFactory.
Add to the IDL in section 2.2.3 the following things:
struct DomainParticipantFactoryQos {
EntityFactoryQosPolicy entity_factory;
};
interface DomainParticipantFactory {
…..
ReturnCode_t set_qos(in DomainParticipantFactoryQos qos);
ReturnCode_t get_qos(inout DomainParticipantFactoryQos qos);
};
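A sketch of what this resolution enables in a classic C++ binding; autoenable_created_entities is the existing field of EntityFactoryQosPolicy, and the domain id value is arbitrary:

DDS::DomainParticipantFactory* factory =
    DDS::DomainParticipantFactory::get_instance();
DDS::DomainParticipantFactoryQos f_qos;
factory->get_qos(f_qos);
f_qos.entity_factory.autoenable_created_entities = false;
factory->set_qos(f_qos);
// Participants are now created disabled and must be enabled explicitly:
DDS::DomainId_t domain_id = 0;
DDS::DomainParticipant* dp = factory->create_participant(
    domain_id, DDS::PARTICIPANT_QOS_DEFAULT, NULL);
dp->enable();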
The READER_DATA_LIFECYCLE QoS specifies an autopurge_nowriter_samples_delay; for the same reasons there should also be an autopurge_disposed_samples_delay.
Resolution: Add the missing attribute.
Revised Text: In section 2.1.3.21 add at the end: The autopurge_disposed_samples_delay defines the maximum duration for which the DataReader will maintain information regarding an instance once its instance_state becomes NOT_ALIVE_DISPOSED. After this time elapses, the DataReader will purge all internal information regarding the instance; any untaken samples will also be lost. In figure 2-12, class ReaderDataLifecycleQosPolicy, add "autopurge_disposed_samples_delay: Duration_t". In section 2.2.3 (IDL) add the field "Duration_t autopurge_disposed_samples_delay" to struct ReaderDataLifecycleQosPolicy.
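A hedged sketch of setting both purge delays on a DataReaderQos in a classic C++ binding; the 60 s values are arbitrary:

// Assume 'subscriber' is a DDS::Subscriber*.
DDS::DataReaderQos r_qos;
subscriber->get_default_datareader_qos(r_qos);
// Existing field: purge instances that lost all their writers after 60 s.
r_qos.reader_data_lifecycle.autopurge_nowriter_samples_delay.sec = 60;
r_qos.reader_data_lifecycle.autopurge_nowriter_samples_delay.nanosec = 0;
// New symmetric field added by this issue: purge disposed instances too.
r_qos.reader_data_lifecycle.autopurge_disposed_samples_delay.sec = 60;
r_qos.reader_data_lifecycle.autopurge_disposed_samples_delay.nanosec = 0;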
The mapping from PIM to IDL PSM for the operation Subscriber::get_datareaders maps the PIM 'out' sequence parameter to an IDL 'out' parameter. This mapping is inconsistent with other places in the API, in which PIM 'out' parameters are represented as 'inout' in the PSM. An 'out' parameter is also undesirable from a performance perspective. Proposed Resolution: The sequence argument to Subscriber::get_datareaders should be an 'inout' in the IDL PSM. Proposed Revised Text: In section 2.2.3:
ReturnCode_t get_datareaders(
inout DataReaderSeq readers,
in SampleStateMask sample_states,
in ViewStateMask view_states,
in InstanceStateMask instance_states);
The Topic interface in the PSM is missing the following operations which are present in the PIM: get_qos, set_qos, get_listener, and set_listener.
Proposed Resolution:
Add the missing operations to the IDL interface.
Proposed Revised Text:
In section 2.2.3:
interface Topic : Entity, TopicDescription {
ReturnCode_t get_qos(inout TopicQos qos);
ReturnCode_t set_qos(in TopicQos qos);
TopicListener get_listener();
ReturnCode_t set_listener(
in TopicListener a_listener,
in StatusKindMask mask);
};
The types TopicSeq, SampleStateSeq, ViewStateSeq and InstanceStateSeq all appear in the IDL PSM but are never used. Proposed Resolution: Remove the unused types from the IDL PSM. Proposed Revised Text: The following declarations should be removed from the IDL PSM:
typedef sequence<Topic> TopicSeq;
typedef sequence<SampleStateKind> SampleStateSeq;
typedef sequence<ViewStateKind> ViewStateSeq;
typedef sequence<InstanceStateKind> InstanceStateSeq;
The QoS table in section 2.1.3 does not mention the field names in the USER_DATA, TOPIC_DATA, and GROUP_DATA QoS policies. The UML diagram in figure 2-12 gives the names of these fields as "data"; however, that name is inconsistent with the names given in the IDL PSM. Proposed Resolution: The table and figure should indicate that the name of the field in each policy is "value". That name is consistent with the IDL PSM. Proposed Revised Text:
In 2.1.3, figure 2-12, UserDataQosPolicy: value [*] : char
In 2.1.3, figure 2-12, TopicDataQosPolicy: value [*] : char
In 2.1.3, figure 2-12, GroupDataQosPolicy: value [*] : char
In the table in 2.1.3, in the "Value" column of the USER_DATA, TOPIC_DATA, and GROUP_DATA rows: "value": a sequence of octets
The constants in the enumeration SampleRejectedStatusKind should correspond to the fields of the RESOURCE_LIMITS QoS policy.
Proposed Resolution:
Remove the constant REJECTED_BY_TOPIC_LIMIT. Add the constants REJECTED_BY_SAMPLES_LIMIT and REJECTED_BY_SAMPLES_PER_INSTANCE_LIMIT.
Proposed Revised Text:
enum SampleRejectedStatusKind {
REJECTED_BY_INSTANCE_LIMIT,
REJECTED_BY_SAMPLES_LIMIT,
REJECTED_BY_SAMPLES_PER_INSTANCE_LIMIT
};
A number of operations in the specification have a void return type. However, without a specified return type, an implementation cannot indicate that an error occurred.
Proposed Resolution: The following methods currently return void and should return ReturnCode_t instead:
· GuardCondition::set_trigger_value
· DomainParticipant::get_default_publisher_qos
· DomainParticipant::get_default_subscriber_qos
· DomainParticipant::get_default_topic_qos
· DomainParticipant::assert_liveliness
· DomainParticipantFactory::get_default_participant_qos
· Publisher::get_default_datawriter_qos
· Subscriber::get_default_datareader_qos
· DataWriter::assert_liveliness
· Subscriber::notify_datareaders
(The get_qos operations on each concrete Entity type are shown to return void in the IDL PSM but a list of QoS policies in the PIM. That inconsistency is addressed in another issue.)
Proposed Revised Text: In the GuardCondition Class table in 2.1.2.1.8, the void return type of set_trigger_value should be replaced by ReturnCode_t. The return type of that operation must be similarly changed in the IDL PSM in 2.2.3. In the DomainParticipant Class table in 2.1.2.2.1, the void return type of the get_default_*_qos operations and the assert_liveliness operation should be replaced by ReturnCode_t. The return types of those operations should be similarly changed in the IDL PSM in 2.2.3. In the Publisher Class table in 2.1.2.4.1, the void return type of get_default_datawriter_qos should be replaced by ReturnCode_t. The return type of that operation must be similarly changed in the IDL PSM in 2.2.3. In the DataWriter Class table in 2.1.2.4.2, the void return type of assert_liveliness should be replaced by ReturnCode_t. The return type of that operation must be similarly changed in the IDL PSM in 2.2.3. In the Subscriber Class table in 2.1.2.5.2, the void return type of get_default_datareader_qos and notify_datareaders should be replaced by ReturnCode_t. The return types of those operations must be similarly changed in the IDL PSM in 2.2.3.
The PublicationBuiltinTopicData type is missing a destination order field.
Proposed Resolution:
Add the missing field in both the PIM and the IDL PSM.
Proposed Revised Text:
In the "DCPSPublication" row of the table in 2.1.5, pg. 2-131, add a sub-row like the following after the existing "ownership_strength" sub-row:
destination_order DestinationOrderQosPolicy Policy of the corresponding DataWriter
In the IDL PSM, modify the PublicationBuiltinTopicData declaration as follows (the member immediately preceding the new member is shown below in order to demonstrate the position of the new member):
struct PublicationBuiltinTopicData {
OwnershipStrengthQosPolicy ownership_strength;
DestinationOrderQosPolicy destination_order;
};
The specification does not state what the valid range of the transport priority values is, nor does it state whether higher or lower values correspond to higher priorities. Proposed Resolution: Stipulate that the range of TransportPriorityQosPolicy::value is the entire range of a 32-bit signed integer. Larger numbers indicate higher priority. However, the precise interpretation of the value chosen is transport- and implementation-dependent. Proposed Revised Text: The second paragraph of section 2.1.3.14 contains the sentence: "As this is specific to each transport it is not possible to define the behavior generically." This sentence should be rewritten as follows: "Any value within the range of a 32-bit signed integer may be chosen; higher values indicate higher priority. However, any further interpretation of this policy is specific to a particular transport and a particular implementation of the Service. For example, a particular transport is permitted to treat a range of priority values as equivalent to one another."
There are get_key_value operations in the DataReader and DataWriter to translate from an instance handle to a key. However, in order for a client of the Service to use the per-instance read and take operations of a DataReader, it would be convenient to have an operation to translate in the other direction: from key value(s) to an instance handle.
Proposed Resolution: Add operations DataReader::lookup_instance and DataWriter::lookup_instance.
Proposed Revised Text:
Append the following row to the DataWriter Class table in 2.1.2.4.2: lookup_instance InstanceHandle_t Instance Data
Add a new section "2.1.2.4.2.23 lookup_instance" with the following contents: This operation takes as a parameter an instance (to get the key value) and returns a handle that can be used in successive operations that accept an instance handle as an argument. This operation does not register the instance in question. If the instance has not been previously registered, or if for any other reason the Service is unable to provide an instance handle, the Service will return the special value HANDLE_NIL.
Append the following row to the DataReader Class table in 2.1.2.5.3: lookup_instance InstanceHandle_t Instance Data
Add a new section "2.1.2.5.3.33 lookup_instance" with the following contents: This operation takes as a parameter an instance (to get the key value) and returns a handle that can be used in successive operations that accept an instance handle as an argument. If for any reason the Service is unable to provide an instance handle, the Service will return the special value HANDLE_NIL.
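A hedged usage sketch in a classic C++ binding; the data type Foo and its key field 'id' are hypothetical:

// Assume 'reader' is a FooDataReader* obtained from create_datareader.
Foo key_holder;
key_holder.id = 42; // only the key fields need to be filled in
DDS::InstanceHandle_t handle = reader->lookup_instance(key_holder);
if (handle != DDS::HANDLE_NIL) {
    FooSeq samples;
    DDS::SampleInfoSeq infos;
    reader->read_instance(samples, infos, DDS::LENGTH_UNLIMITED, handle,
                          DDS::ANY_SAMPLE_STATE, DDS::ANY_VIEW_STATE,
                          DDS::ANY_INSTANCE_STATE);
    /* process samples[i] / infos[i] */
    reader->return_loan(samples, infos);
}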
Title: (R#120) Clarify use of DATAREADER_QOS_USE_TOPIC_QOS constant when creating DataReader on ContentFilteredTopic or MultiTopic
Summary: The specification defines the constant DATAREADER_QOS_USE_TOPIC_QOS that may be used to specify the QoS of a DataReader. The meaning of such usage is unclear when the DataReader's TopicDescription is a ContentFilteredTopic or a MultiTopic, since those types do not have QoS of their own.
Proposed Resolution: A ContentFilteredTopic is based on a single Topic; therefore, the meaning of DATAREADER_QOS_USE_TOPIC_QOS is well-defined in that case: it refers to the QoS of the Topic accessible via the ContentFilteredTopic::get_related_topic operation. The meaning of DATAREADER_QOS_USE_TOPIC_QOS is not well-defined in the case of a MultiTopic; using it to set the QoS of a DataReader of a MultiTopic is an error. Specifically, passing the constant to Subscriber::create_datareader when a MultiTopic is also passed to that operation will result in the operation returning nil.
Proposed Revised Text: The last paragraph of section "2.1.2.5.2.5 create_datareader" (which begins "The special value…") should be rewritten as follows: Provided that the TopicDescription passed to this method is a Topic or a ContentFilteredTopic, the special value DATAREADER_QOS_USE_TOPIC_QOS can be used to indicate that the DataReader should be created with a combination of the default DataReader QoS and the Topic QoS. (In the case of a ContentFilteredTopic, the Topic in question is the ContentFilteredTopic's "related Topic.") The use of this value is equivalent to the application obtaining the default DataReader QoS and the Topic QoS (by means of the operation Topic::get_qos) and then combining these two QoS using the operation copy_from_topic_qos, whereby any policy that is set on the Topic QoS "overrides" the corresponding policy on the default QoS. The resulting QoS is then applied to the creation of the DataReader. It is an error to use DATAREADER_QOS_USE_TOPIC_QOS when creating a DataReader with a MultiTopic; this method will return a nil value in that case.
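A sketch of the two cases in a classic C++ binding; the topic objects and listener argument are assumed:

// Assume 'subscriber', 'content_filtered_topic', and 'multi_topic' exist.
// Allowed: the ContentFilteredTopic's related Topic supplies the QoS.
DDS::DataReader* dr1 = subscriber->create_datareader(
    content_filtered_topic, DDS::DATAREADER_QOS_USE_TOPIC_QOS, NULL);
// Error under this resolution: a MultiTopic has no single Topic QoS,
// so the call returns nil.
DDS::DataReader* dr2 = subscriber->create_datareader(
    multi_topic, DDS::DATAREADER_QOS_USE_TOPIC_QOS, NULL);
// dr2 == NULL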
A DataReader must specify a TimeBasedFilterQosPolicy::minimum_separation value that is less than or equal to its DeadlineQosPolicy::period value. (Otherwise, all matched DataWriters will be considered to miss every deadline.) There are dependencies among the fields of ResourceLimitsQosPolicy: max_samples >= max_samples_per_instance. The above dependencies are not made explicit in the specification. Proposed Resolution: The above dependencies should be made explicit in the QoS policy table in section 2.1.3. Proposed Revised Text: The following sentence should be added to the "Meaning" column of the "DEADLINE" row: "It is inconsistent for a DataReader to have a deadline period less than its TIME_BASED_FILTER's minimum_separation." The following sentence should be added to the "Meaning" column of the "TIME_BASED_FILTER" row: "It is inconsistent for a DataReader to have a minimum_separation longer than its deadline period." The following sentence should be added to the "Meaning" column of the "max_samples" row: "It is inconsistent for this value to be less than max_samples_per_instance." The following sentence should be added to the "Meaning" column of the "max_samples_per_instance" row: "It is inconsistent for this value to be greater than max_samples."
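As a concrete illustration of the proposed consistency rules, the following reader QoS settings would be consistent (a sketch with assumed values, C++ PSM mapping):
void configure_reader_qos(DDS::Subscriber* subscriber) {
    DDS::DataReaderQos qos;
    subscriber->get_default_datareader_qos(qos);
    qos.deadline.period.sec = 10;                      // deadline period: 10 s
    qos.deadline.period.nanosec = 0;
    qos.time_based_filter.minimum_separation.sec = 2;  // consistent: 2 s <= 10 s
    qos.time_based_filter.minimum_separation.nanosec = 0;
    qos.resource_limits.max_samples = 100;
    qos.resource_limits.max_samples_per_instance = 10; // consistent: 10 <= 100
}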
It would be useful to have an additional return code called RETCODE_ILLEGAL_OPERATION. This return code would be useful, for example, in preventing the user from performing certain operations on the built-in DataReaders. Their QoS values are stated in the specification; vendors need not allow those values to be changed. Users should also not be allowed to delete built-in Entities. If the user tries to perform either of these two operations, the choices of return code we could use that are in accordance with the specification are: · RETCODE_ERROR · RETCODE_UNSUPPORTED · RETCODE_BAD_PARAMETER · RETCODE_PRECONDITION_NOT_MET · RETCODE_IMMUTABLE_POLICY All of the above fall short of helping the user find out what the problem really is. · RETCODE_ERROR: This is the generic error code; it does not give much information as to what might be wrong. · RETCODE_UNSUPPORTED: This choice would be semantically incorrect. The failure is not due to a vendor's failure to support an optional feature of the specification, but rather to the user's violation of a policy, consistent with the specification, that was set by that vendor. · RETCODE_BAD_PARAMETER: This return code is a little confusing. For instance, when trying to delete a built-in DataReader, the reader parameter passed is a valid DataReader and the function is expecting a reader. Such usage would seem to constitute passing a good parameter, not a bad one. · RETCODE_PRECONDITION_NOT_MET: There is no precondition that the user could change that would make the call work. Therefore, this result would be confusing. · RETCODE_IMMUTABLE_POLICY: This return code could potentially work when trying to change the QoS policies of the built-in DataReaders but not when attempting to delete them. However, it would still be semantically incorrect. The problem is not that the user is trying to change immutable QoS policies. The QoS policies being changed may well be mutable; what is not allowed is changing them on the Entity in question. Such a return result could lead the user to think that s/he is confused about which QoS policies are mutable. Proposed Resolution: Add a return code RETCODE_ILLEGAL_OPERATION. This return code indicates a misuse of the API provided by the Service: the user is invoking an operation on an inappropriate Entity or at an inappropriate time, and there is no precondition that could be changed to allow the operation to succeed. Vendors may use this new return code to indicate violations of policies they have set that are consistent with, but not fully described by, the specification. It is therefore necessary that the return code be considered a "standard" return code (like RETCODE_OK, RETCODE_BAD_PARAMETER, and RETCODE_ERROR) that could potentially be returned by any operation having the return type ReturnCode_t. Proposed Revised Text: Add the following row to the "Return codes" table in 2.1.1.1: ILLEGAL_OPERATION An operation was invoked on an inappropriate object or at an inappropriate time (as determined by policies set by the specification or the Service implementation). There is no precondition that could be changed to make the operation succeed. In the paragraph following the table, the sentence "Any operation with return type ReturnCode_t may return OK or ERROR" should be restated "Any operation with return type ReturnCode_t may return OK, ERROR, or ILLEGAL_OPERATION."
The sentence "The return codes OK, ERROR, ALREADY_DELETED, UNSUPPORTED, and BAD_PARAMETER are the standard return codes and the specification won't mention them explicitly for each operation" should be restated as "The return codes OK, ERROR, ILLEGAL_OPERATION, ALREADY_DELETED, UNSUPPORTED, and BAD_PARAMETER are the standard return codes and the specification won't mention them explicitly for each operation".
The description of DataWriter::dispose needs to clarify whether it can be called with a nil handle. Proposed Resolution: DataWriter::dispose should just behave like DataWriter::write in that if the instance is not yet registered, the Service will automatically register it for the user. In that case, the operation should not return PRECONDITION_NOT_MET. Proposed Revised Text: The second-to-last paragraph in section 2.1.2.4.2.12 states "The operation must be only called on registered instances. Otherwise the operation will return the error PRECONDITION_NOT_MET." This paragraph should be removed.
The specification currently states that the DataWriter::write operation may return TIMEOUT under certain circumstances. However, the DataWriter operations dispose, register, unregister, and their variants may also block due to a temporarily full history. Proposed Resolution: Revise the documentation for the listed operations to state that they may return TIMEOUT if the RELIABILITY max_blocking_time elapses. Proposed Revised Text: The following paragraph should be appended to sections 2.1.2.4.2.5, 2.1.2.4.2.6, 2.1.2.4.2.7, 2.1.2.4.2.8, 2.1.2.4.2.12, and 2.1.2.4.2.13: This operation may block if it would cause data to be lost or one of the limits specified in the RESOURCE_LIMITS to be exceeded. Under these circumstances, the RELIABILITY max_blocking_time configures the maximum time this operation may block (waiting for space to become available). If max_blocking_time elapses before the DataWriter is able to store the modification without exceeding the limits, this operation will fail and return TIMEOUT.
The IDL PSM defines the type BuiltinTopicKey_t to be an array of element type BUILTIN_TOPIC_KEY_TYPE_NATIVE. This definition prevents some compilers from permitting shallow copies of instances of this type.
Proposed Resolution:
Redefine BuiltinTopicKey_t to be a structure containing an array rather than the array itself.
Proposed Revised Text:
In 2.2.3, change this:
typedef BUILTIN_TOPIC_KEY_TYPE_NATIVE BuiltinTopicKey_t[3];
to this:
struct BuiltinTopicKey_t {
BUILTIN_TOPIC_KEY_TYPE_NATIVE value[3];
};
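The motivation is visible in the C++ mapping (a sketch; the native type is assumed here to map to long): a bare array cannot be assigned, whereas a struct wrapping the array can be copied shallowly.
typedef long BUILTIN_TOPIC_KEY_TYPE_NATIVE;  // assumed vendor mapping

struct BuiltinTopicKey_t {
    BUILTIN_TOPIC_KEY_TYPE_NATIVE value[3];
};

void copy_key() {
    BuiltinTopicKey_t a = {{1, 2, 3}};
    BuiltinTopicKey_t b;
    b = a;             // legal: struct assignment copies the contained array
    // long raw_a[3] = {1, 2, 3}, raw_b[3];
    // raw_b = raw_a;  // ill-formed: bare arrays are not assignable
}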
The specification does not currently state whether it is permissible to create a DataReader or DataWriter with a TopicDescription that was created from a DomainParticipant other than that used to create the reader or writer's factory. Proposed Resolution: The use case in question is not allowed; create_datareader and create_datawriter should return nil in that case. Proposed Revised Text: The following paragraph should be appended to section 2.1.2.4.1.5 create_datawriter: The Topic passed to this operation must have been created from the same DomainParticipant that was used to create this Publisher. If the Topic was created from a different DomainParticipant, this operation will fail and return a nil result. The following paragraph should be appended to section 2.1.2.5.2.5 create_datareader: The TopicDescription passed to this operation must have been created from the same DomainParticipant that was used to create this Subscriber. If the TopicDescription was created from a different DomainParticipant, this operation will fail and return a nil result.
The specification does not state what should occur if the user attempts to delete a DataReader when it has one or more outstanding loans as a result of a call to DataReader::read, DataReader::take, or a variant thereof. Proposed Resolution: State that Subscriber::delete_datareader should fail and return PRECONDITION_NOT_MET in that case. Proposed Revised Text: In section 2.1.2.5.2.6 delete_datareader, there is a paragraph (beginning "The deletion of a DataReader is not allowed…") that describes the operation's behavior in the event that some conditions created from the reader have not been deleted. Following that paragraph a new paragraph should be added: The deletion of a DataReader is not allowed if it has any outstanding loans as a result of a call to read, take, or one of the variants thereof. If the delete_datareader operation is called on a DataReader with one or more outstanding loans, it will return PRECONDITION_NOT_MET.
The specification does not make clear whether the set of status kinds returned by Entity::get_status_changes when that operation is invoked on a factory Entity (such as a Publisher) should include the changed statuses of the Entities created from that factory (such as a DataWriter). Proposed Resolution: Clarify that the set of status kinds will only contain the statuses that have changed on the Entity on which get_status_changes is invoked and not that Entity's contained Entities. Proposed Revised Text: Append the following sentence to section 2.1.2.1.1.6: "A 'triggered' status on an Entity does not imply that that status is triggered on the Entity's factory."
In the description of DataWriter::unregister in section 2.1.2.4.2.7 it says that if an instance is unregistered via a call to DataWriter::unregister, a matched DataReader will get an indication that its LIVELINESS_CHANGED status has changed. However, unregister refers to an instance; the LIVELINESS_CHANGED status is based on the liveliness of a DataWriter, not an instance. Proposed Resolution: Instead the specification should state that the DataReader will receive a sample with a NOT_ALIVE_NO_WRITERS instance state. Proposed Revised Text: The sentence: DataReader objects that are reading the instance will eventually get an indication that their LIVELINESS_CHANGED status (as defined in Section 2.1.4.1) has changed. …should be rewritten: DataReader objects that are reading the instance will eventually receive a sample with a NOT_ALIVE_NO_WRITERS instance state if no other DataWriter objects are writing the instance.
There are two limitations to the PublicationMatchStatus and SubscriptionMatchStatus that prevent them from being used to detect the loss of a match:
· The specification does not indicate whether those statuses are considered to have changed when a match is lost (e.g. as a result of a loss of liveliness or an incompatible QoS change).
· The status structures contain fields that indicate the total number of matches that have ever occurred, but they lack fields to indicate the number of current matches.
Proposed Resolution:
Two fields should be added to each status structure: current_count and current_count_change. The specification should be updated to state that the publication and subscription match statuses are considered to have changed both when a match is established and when it is lost.
Proposed Revised Text:
Update the table in 2.1.4.1 as follows:
DataReader SUBSCRIPTION_MATCH_STATUS The DataReader has found a DataWriter that matches the Topic and has compatible QoS or has stopped communicating with a DataWriter that was previously considered to have matched.
DataWriter PUBLICATION_MATCH_STATUS The DataWriter has found a DataReader that matches the Topic and has compatible QoS or has stopped communicating with a DataReader that was previously considered to have matched.
Update PublicationMatchStatus and SubscriptionMatchStatus in figure 2-13 to add the following attributes to each:
current_count : long
current_count_change : long
Update the PublicationMatchStatus section of the table on page 2-119 with the following rows:
current_count The number of DataReaders currently matched to the concerned DataWriter.
current_count_change The change in current_count since the last time the listener was called or the status was read.
Update the SubscriptionMatchStatus section of the table on page 2-119 with the following rows:
current_count The number of DataWriters currently matched to the concerned DataReader.
current_count_change The change in current_count since the last time the listener was called or the status was read.
Modify the declarations of the PublicationMatchStatus and SubscriptionMatchStatus structures in the IDL PSM in 2.2.3 as follows:
struct PublicationMatchStatus {
long total_count;
long total_count_change;
long current_count;
long current_count_change;
InstanceHandle_t last_subscription_handle;
};
struct SubscriptionMatchStatus {
long total_count;
long total_count_change;
long current_count;
long current_count_change;
InstanceHandle_t last_publication_handle;
};
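With the new fields, loss of a match can be detected by polling the status, for example (a sketch assuming the C++ PSM mapping; the exact accessor signature is not prescribed here):
void check_matches(DDS::DataWriter* writer) {
    DDS::PublicationMatchStatus status;
    writer->get_publication_match_status(status);
    if (status.current_count_change < 0) {
        // one or more previously matched DataReaders have gone away
    }
    if (status.current_count == 0 && status.total_count > 0) {
        // every DataReader that ever matched has stopped communicating
    }
}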
It would be useful to have a field in LivelinessChangedStatus that provides the instance handle for the last DataWriter for which there was a change in liveliness.
Proposed Resolution:
Add a field last_publication_handle to LivelinessChangedStatus.
Proposed Revised Text:
Add an attribute "last_publication_handle : InstanceHandle_t" to LivelinessChangedStatus in figure 2-13.
Add a row to the LivelinessChangedStatus section of the table on page 2-118:
last_publication_handle Handle to the last DataWriter whose change in liveliness caused this status to change.
Revise the definition of LivelinessChangedStatus in the IDL PSM in 2.2.3:
struct LivelinessChangedStatus {
// ...existing fields unchanged...
InstanceHandle_t last_publication_handle;
};
Most statuses (and the callbacks corresponding to them) have names ending in a past tense verb (e.g. LivelinessLost, LivelinessChanged, *DeadlineMissed, etc.). This convention makes the names very understandable because they refer to an actual thing that happened. The publication and subscription match statuses/callbacks violate this convention, however. They are named after the match itself, not the event of matching. Proposed Resolution: To make the match statuses/callbacks consistent, they should be called PublicationMatchedStatus (on_publication_matched) and SubscriptionMatchedStatus (on_subscription_matched). Proposed Revised Text: Replace "PublicationMatchStatus" with "PublicationMatchedStatus," "on_publication_match" with "on_publication_matched," "SubscriptionMatchStatus" with "SubscriptionMatchedStatus," and "on_subscription_match" with "on_subscription_matched" in the DomainParticipantListener table in 2.1.2.2.3. Perform the same substitutions in the DataWriter table in 2.1.2.4.2 and in the DataWriterListener table in 2.1.2.4.4. Perform the same substitutions in the DataReader table in 2.1.2.5.3 and in the DataReaderListener table in 2.1.2.5.7. Rename "PublicationMatchStatus" to "PublicationMatchedStatus" and "SubscriptionMatchStatus" to "SubscriptionMatchedStatus" in figure 2-13, in the immediately following table of statuses, and in the IDL PSM definitions of the types PublicationMatchStatus, SubscriptionMatchStatus, DataWriterListener, DataReaderListener, DataWriter, and DataReader.
The OWNERSHIP QoS policy only concerns the Topic Entity. It is the only such policy; all other Topic QoS policies also concern the DataReader and DataWriter, which may override the value provided by the Topic.
The OWNERSHIP QoS policy is also missing from the PublicationBuiltinTopicData and SubscriptionBuiltinTopicData structures.
Proposed Resolution:
The OWNERSHIP QoS policy should concern the Topic, DataReader, and DataWriter Entities. It should have requested vs. offered (RxO) semantics: the two sides must agree on its value.
A field of type OwnershipQosPolicy should be added to the PublicationBuiltinTopicData and SubscriptionBuiltinTopicData structures.
Proposed Revised Text:
Change the "Concerns" column of the OWNERSHIP row of the table on page 2-94 to read "Topic, DataReader, DataWriter."
The second paragraph of section 2.1.3.8 OWNERSHIP begins "This QoS policy only applies to Topic and not to DataReader or DataWriter…" This paragraph should be removed.
Add the following rows to the built-in topic table on page 2-131:
DCPSPublication ownership OwnershipQosPolicy Policy of the corresponding DataWriter
DCPSSubscription ownership OwnershipQosPolicy Policy of the corresponding DataReader
Modify the definitions of the DataWriterQos and DataReaderQos structures in the IDL PSM in 2.2.3:
struct DataWriterQos {
// ...existing policies unchanged...
OwnershipQosPolicy ownership;
};
struct DataReaderQos {
// ...existing policies unchanged...
OwnershipQosPolicy ownership;
};
Several members in the Topic module are described as attributes in the UML diagram in 2.1.2.3 but as operations in the following tables and in the IDL PSM. These include: · TopicDescription::type_name · TopicDescription::name · ContentFilteredTopic::filter_expression · ContentFilteredTopic::expression_parameters · MultiTopic::subscription_expression · MultiTopic::expression_parameters Also, the topic name and type name members are needlessly repeated in the tables of all of the TopicDescription subclasses. They are non-abstract; they need only appear in the TopicDescription table. Proposed Resolution: The read-only attributes should appear as such in the PIM tables. "Attributes" that can be changed return ReturnCode_t from the corresponding "set" methods; for clarity, they should consistently appear as operations in both the tables and the UML diagram. The duplicate descriptions of the topic name and type name attributes should be removed. The IDL PSM should continue to express all of the members as methods to preserve the consistency of the naming conventions used in all programming languages that may be generated from the IDL. Proposed Revised Text: In figure 2-7, replace the ContentFilteredTopic attribute expression_parameters with two operations: get_expression_parameters and set_expression_parameters. Replace the MultiTopic attribute expression_parameters with two operations: get_expression_parameters and set_expression_parameters. Revise the TopicDescription Class table in 2.1.2.3.1 as follows: TopicDescription attributes readonly name string readonly type_name string operations get_participant DomainParticipant Rewrite section 2.1.2.3.1.2 as follows: 2.1.2.3.1.2 type_name The type name used to create the TopicDescription. Rewrite section 2.1.2.3.1.3 as follows: 2.1.2.3.1.3 name The name used to create the TopicDescription. Remove the get_type_name and get_name operations from the Topic Class table in 2.1.2.3.2. Remove the get_type_name, get_name, and get_filter_expression operations from the ContentFilteredTopic Class table in 2.1.2.3.3. Add the following attributes to that table: attributes readonly filter_expression string Rewrite section 2.1.2.3.3.2 as follows: 2.1.2.3.3.2 filter_expression The filter_expression associated with the ContentFilteredTopic. That is, the expression specified when the ContentFilteredTopic was created. Remove the get_type_name, get_name, and get_subscription_expression operations from the MultiTopic Class table in 2.1.2.3.4. Add the following attributes to that table: attributes readonly subscription_expression string Rewrite section 2.1.2.3.4.1 as follows: 2.1.2.3.4.1 subscription_expression The subscription_expression associated with the MultiTopic. That is, the expression specified when the MultiTopic was created.
The description of register_type in 2.1.2.3.6.1 first says that the operation may return PRECONDITION_NOT_MET but later says that the only "special" error code that may be returned is OUT_OF_RESOURCES. Proposed Resolution: The operation should be able to return either PRECONDITION_NOT_MET or OUT_OF_RESOURCES. Proposed Revised Text: The last sentence of section 2.1.2.3.6.1 should read "Possible error codes returned in addition to the standard ones: PRECONDITION_NOT_MET and OUT_OF_RESOURCES."
The operation WaitSet::wakeup is listed in the UML diagrams in 2.1.2.1 and 2.1.4.4. This operation is not listed in the WaitSet table in 2.1.2.1.6. Proposed Resolution: The GuardCondition class already provides a mechanism for manually waking up a WaitSet. The wakeup method should be struck from the UML diagrams noted above. Proposed Revised Text: Remove the wakeup operation from the WaitSet class in figure 2-5 and in figure 2-18.
The value of the SampleRejectedStatus::last_reason field is undefined in the case where the user calls get_sample_rejected_status when no samples have been rejected. Proposed Resolution: Introduce a new SampleRejectedStatusKind value NOT_REJECTED and stipulate that it is to be used in the case described above. Proposed Revised Text: Add the following sentence to the description of SampleRejectedStatus::last_reason in the table on page 2-118: "If no samples have been rejected, the reason is the special value NOT_REJECTED." Modify the definition of SampleRejectedStatusKind in 2.2.3 to add a constant NOT_REJECTED.
The specification fails to state what should happen to publications suspended with Publisher::suspend_publications if Publisher::resume_publications has not been called by the time the Publisher is deleted. Proposed Resolution: The Publisher may be deleted in the situation described. Any samples that have not yet been sent will be discarded. Proposed Revised Text: Add the following sentence to the last paragraph of section 2.1.2.4.1.8: "If the Publisher is deleted before resume_publications is called, any suspended updates yet to be published will be discarded."
The specification states that the mainTopic tag in the classMapping XML element is mandatory. However, the example provided afterward does not contain that item, showing that it is actually not mandatory. Resolution: Change the status of that item in the DTD to make it optional.
The get_all_topic_names() operation is mentioned in section 3.1.6.3.5 (ObjectHome) and in the IDL, but not in Figure 3-4. Resolution: Add the missing operation to the UML diagram.
The parameter of "ObjectHome::set_filter" operation is named "expression" in the PIM and "filter" in the IDL. The ObjectHome operation to register operation is named "register_object" in the UML diagram and in the ObjectHome table, but it is named "register_created_object" in the text and in the IDL. Resolution: Name everywhere "expression" the parameter of the set_filter" operation Name everywhere "register_object" the operation to register an object
The parameter of "Cache:: find_home_by_index" is named "registration_index" in the PIM and "index" in the IDL Resolution: Name everywhere that parameter "index"
Bad cardinality for the relations Cache->CacheListener and ObjectHome->ObjectListener in figure 3-4 Summary: While the PIM text and the IDL state that several CacheListeners may be attached to a Cache and several ObjectListeners may be attached to an ObjectHome, the UML diagram shows a cardinality of at most 1 for those relations. Resolution: Correct the figure to be in accordance with the rest of the document.
The IDL section states that the operations "clone" and "clone_object" on "ObjectRoot", as well as the operation "clone_foo" on the implied IDL for "Foo", may raise the exception "ReadOnlyMode"; this is not the case. Resolution: Correct the IDL.
An erroneous copy-paste led to wrong inheritance and methods for the FooListener interface in the implied IDL (it is correct in the PIM). In addition, the IDL for ObjectListener should have commented out the operation that is actually defined in the derived FooListener. FooListener is also mentioned once in the PIM as FooObjectListener. Resolution: Fix the definition (basing it on ObjectListener) and name the class FooListener everywhere.
In section 3.1.6.5, the specification suggests that it is sensible to create a CacheUsage per thread. This should say CacheAccess instead. Resolution: Change CacheUsage to CacheAccess in that sentence.
In section 3.2.2.3.2.3, the templateDef is explained. The second bullet presents the possible values of the pattern attribute, but "Ref" is missing from this list. Furthermore, the example uses the wrong attribute name for its second attribute: it says basis="StrMap" where it should say pattern="StrMap". Resolution: Correct these mistakes.
In the implied IDL, "DlrlOid" is used three times instead of the correct "DLRLOid". Resolution: Use "DLRLOid" everywhere.
The "set" operation in the implied "FooRef" IDL class has a parameter named object. Since IDL may be used both case-sensitive and case-insensitive, this may not be allowed (possible confusion with CORBA::Object) Resolution: Name this parameter "an_object"
The ObjectListeners that need to be registered with an ObjectHome are typed (i.e., a FooListener must be attached to a FooHome), but the definitions of the attach and detach methods can only be found in the generic IDL part. As written, it is possible to attach a BarListener to a FooHome. Resolution: Move those operations from the generic IDL to the implied IDL.
The "remove" operations of the collection types are mentioned in the implied IDL part, while their signatures have no typed parameters. In addition, the parameter for the get operation (key) wrongly starts with a capital letter (while all parameters are supposed to be in small letters) Resolution: Add those operations on the generic roots and remove them from the generated classes (implied IDL). Correct the spelling of the "key" parameter.
It is not clearly defined whether the on_data_available notification should be generated on every arrival of new data, or only on the status change that happens when coming from a no-data situation. Resolution: A notification should be generated for every arrival of new data, regardless of whether previously received data has already been read. Revised Text: Introduce textual changes to 2.1.4.2.2 that describe the complete set of conditions under which the read-communication status will change. The data-available status is considered to have changed each time a new sample becomes available or the ViewState, SampleState, or InstanceState of any existing sample changes for any reason other than a read or take. Specific changes that cause the status to change include: · The arrival of new data · The disposal of an instance · The loss of liveliness of a writer of an instance when no other writer of that instance exists · Unregistration of an instance by the last writer of that instance
In section 2.1.2.1.1.7 it is explained which Entity operations may be invoked on an entity that has not yet been enabled. However, the subsequent sections describing the behavior of operations on specialized entities that are disabled list fewer operations than those mentioned above. Resolution: Add the missing operations for each specialized entity to the list of operations that will never return RETCODE_NOT_ENABLED. Also state explicitly that Conditions obtained from disabled entities will never trigger until the corresponding entities become enabled. Revised Text: On page 2-13, section 2.1.2.1.1.7, it is stated that the operation that gets the StatusCondition can be invoked on a disabled entity. Add text specifying that a StatusCondition obtained this way will not trigger until the corresponding entity becomes enabled. On page 2-21, section 2.1.2.2.1, it is mentioned that for the DomainParticipant all operations except get/set_qos, get/set_listener, and enable may return RETCODE_NOT_ENABLED. To this list should be added: get_statuscondition, all factory methods (create_topic, create_publisher, create_subscriber), and all delete methods (delete_topic, delete_publisher, delete_subscriber). On page 2-35, section 2.1.2.3.2, it is mentioned that for the Topic all operations except get/set_qos, get/set_listener, and enable may return RETCODE_NOT_ENABLED. To this list should be added: get_statuscondition. On page 2-42, section 2.1.2.4.1, it is mentioned that for the Publisher all operations except get/set_qos, get/set_listener, and enable may return RETCODE_NOT_ENABLED. To this list should be added: get_statuscondition, create_datawriter, delete_datawriter. On page 2-48, section 2.1.2.4.2, it is mentioned that for the DataWriter all operations except get/set_qos, get/set_listener, and enable may return RETCODE_NOT_ENABLED. To this list should be added: get_statuscondition. On page 2-63, section 2.1.2.5.2, it is mentioned that for the Subscriber all operations except get/set_qos, get/set_listener, and enable may return RETCODE_NOT_ENABLED. To this list should be added: get_statuscondition, create_datareader, delete_datareader. On page 2-73, section 2.1.2.5.3, it is mentioned that for the DataReader all operations except get/set_qos, get/set_listener, and enable may return RETCODE_NOT_ENABLED. To this list should be added: get_statuscondition.
In section 2.1.3 (Supported QoS), the table describes that specifying the PARTITION QoS by an empty sequence implies all partitions. However, the default partition is specified to be exactly one partition with the name "". Any partition should be specified by means of wildcards. It is unclear how the default partition and wildcards can be used at the publisher and subscriber side. Resolution: Concerning default partitions: the default value for PartitionQosPolicy is an empty sequence of names. The empty sequence of partition names is equivalent to a single partition name, the empty string. Concerning wildcards: "wildcards" refers to the regular expression language defined by the POSIX fnmatch API (1003.2-1992 section B.6). Either Publisher or Subscriber may include regular expressions in partition names, but no two names that both contain wildcards will ever be considered to match. This means that although regular expressions may be used at both the publisher and the subscriber side, the service will not try to match two regular expressions (between publishers and subscribers). Revised Text: Change the PARTITION row of the table in 2.1.3 to state that the default value is an empty sequence, which is equivalent to a sequence containing the single element "". Add the text about the wildcard format and restrictions.
Does reliability include order preservation up to the API level? In other words, should data be made available to applications if older data exists but has not yet arrived (e.g. due to network irregularities)? Note that if a late-arriving sample is accepted even after newer samples have been made available, state inconsistencies may occur. In addition, not accepting a late-arriving sample should generate a sample-lost notification. Resolution: Specify that data from a single writer (reliable and/or best-effort) will NOT be made available out of order. Revised Text: TBD
It is not clear what the value of an unlimited resource limit is. Resolution: Presumably it is the constant LENGTH_UNLIMITED = -1 defined in the IDL; this should be clarified. Revised Text: Replace "unlimited" with "LENGTH_UNLIMITED" in the table in 2.1.3.
Section 2.1.3.7 describes that the latency budget will neither prohibit connectivity nor generate notifications if incompatible. However, it also describes an RxO compatibility rule that will never be visible to applications. This is somewhat confusing, and the description is only required because this QoS attribute is specified to be subject to the RxO pattern. Since the description states that the latency budget is a hint to the service, and that the service may apply an additional delay for optimizations, are we really speaking of RxO between DataWriters and DataReaders? Resolution: Make the latency budget truly RxO by making connectivity dependent on the compatibility rules and adding appropriate error notifications. Revised Text: Remove "therefore the service will not fail to match…" from section 2.1.3.7. Add new text that describes the RxO consequences.
The specification does not clearly describe what should happen to the data in a reliable DataWriter's history when the DataWriter is deleted. Should it disappear immediately, or should deletion wait until all messages in the history are delivered? Resolution: The right thing to do is provide an operation that lets the user wait for all data to be delivered: add an operation Publisher::delete_datawriter_after_acknowledgement(Duration_t timeout). Like DataReader::wait_for_historical_data, this operation takes its own timeout rather than using ReliabilityQosPolicy::max_blocking_time because it potentially has to wait for many writes to complete. As soon as all outstanding reliable samples are acknowledged, the DataWriter will be deleted and the operation will return OK. If the timeout expires before all samples are acknowledged, however, the operation will return TIMEOUT and the writer will not be deleted. The regular Publisher::delete_datawriter should delete the writer immediately without waiting for any reliable samples to be acknowledged; its description should be clarified accordingly. If delete_datawriter_after_acknowledgement fails, the user then has the choice of either calling it again or accepting the possible loss of some samples and calling delete_datawriter. Revised Text: Change the existing text according to the resolution.
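A sketch of the resulting usage pattern (C++ PSM mapping assumed; the operation presumably also takes the DataWriter to delete, which the signature in the resolution elides):
void delete_writer_safely(DDS::Publisher* publisher, DDS::DataWriter* writer) {
    DDS::Duration_t timeout = {10, 0};  // wait up to 10 s for acknowledgments
    DDS::ReturnCode_t rc =
        publisher->delete_datawriter_after_acknowledgement(writer, timeout);
    if (rc == DDS::RETCODE_TIMEOUT) {
        // either retry, or accept possible sample loss and delete immediately
        publisher->delete_datawriter(writer);
    }
}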
It is unclear whether a topic found by lookup_topicdescription (section 2.1.2.2.1.13) should also be deleted when no longer used, similar to find_topic. Resolution: lookup_topicdescription, unlike find_topic, should search only among the locally created topics. Therefore, it should never (at least as far as the user is concerned) create a new topic description, so looking up a topic should not require any extra deletion. (It is of course permitted to delete a topic one has looked up, provided it has no readers or writers, but then it is really deleted and subsequent lookups will fail.) Revised Text: Change the text in section 2.1.2.2.1.13 accordingly.
For applications it is not possible to relate a sample to its DataWriter. There are many use cases where such a relation is required. Resolution: Add an 'InstanceHandle_t publication_handle' field (the handle of the remote writer, not of the data instance) to SampleInfo. The user can use this handle to call get_matched_publication_data(). Revised Text: Change the SampleInfo definition and the related explanation accordingly.
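A sketch of the intended use, assuming the C++ PSM mapping and a generated type Foo:
void identify_writer(FooDataReader* reader) {
    Foo sample;
    DDS::SampleInfo info;
    if (reader->take_next_sample(sample, info) == DDS::RETCODE_OK) {
        DDS::PublicationBuiltinTopicData pub_data;
        reader->get_matched_publication_data(pub_data, info.publication_handle);
        // pub_data now describes the remote DataWriter that produced 'sample'
    }
}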
The Entity::set_listener method sets a listener in combination with a mask that specifies the event interest. Listeners can also be set at construction of entities by passing the listener as a parameter to the entity factory method. However, it is not possible to set a mask during construction. Resolution: Add an event mask parameter (listener_mask) to the entity constructors. Revised Text: Change the signature and description of all entity factory methods.
The Entity::set_listener method sets a listener in combination with a mask that specifies the event interest. Listeners can also be set at construction of entities by passing the listener as a parameter to the entity factory method. However, it is not possible to set a mask during construction. Resolution: Discard as it duplicates Issue#8554.
Currently, entities must be enabled before they can be deleted. Resolution: Specify that entities may be deleted when not enabled. Revised Text: Explicitly state on each class that the delete method can also be called on disabled entities.
Resolution: Specify that entities may be deleted if not enabled. Revised Text: In Section 2.1.2.1.1.7 (enable), at the end of the paragraph "If an Entity has not yet been enabled, the only operations that can be invoked on it are the ones to set or get the QoS policies and the listener, the ones that get the StatusCondition, and the 'factory' operations that create other entities. Other operations will return the error NOT_ENABLED.", add the paragraph: It is legal to delete an Entity that has not been enabled by calling the proper operation on its factory.
Some customers require guarantees on delivery to the network, i.e. a means to block until the service can guarantee that the data will be or has been received by all recipients. Resolution: Add a Publisher::wait_for_acknowledgments(timeout) method that will block until all data written by the Publisher's writers has been acknowledged. If called while the Publisher is suspended, it will return PRECONDITION_NOT_MET. Revised Text: Add a paragraph describing the function, add the function to the Publisher table, and change the PSM to include the new function.
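A sketch of the proposed call (C++ PSM mapping assumed):
void flush_publisher(DDS::Publisher* publisher) {
    DDS::Duration_t timeout = {5, 0};
    DDS::ReturnCode_t rc = publisher->wait_for_acknowledgments(timeout);
    if (rc == DDS::RETCODE_TIMEOUT) {
        // some written data was not acknowledged within 5 seconds
    } else if (rc == DDS::RETCODE_PRECONDITION_NOT_MET) {
        // the Publisher is currently suspended
    }
}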
The current IDL PSM contains the following lines:
#define HANDLE_TYPE_NATIVE long
#define HANDLE_NIL_NATIVE 0
typedef HANDLE_TYPE_NATIVE InstanceHandle_t;
const InstanceHandle_t HANDLE_NIL = HANDLE_NIL_NATIVE;
The two #defines can be vendor-specific. However, the constant definition in the last line restricts HANDLE_TYPE_NATIVE to integer, char, wide_char, boolean, floating_pt, string, wide_string, fixed_pt, or octet types; IDL does not allow any other (e.g. structured) types to be assigned a constant value. Resolution: The PSM contains a number of other elements that cannot be accurately expressed in IDL (e.g. static methods). As in those other cases, a comment should be added stating that structured and other non-primitive types may be used for HANDLE_TYPE_NATIVE and HANDLE_NIL_NATIVE even though IDL cannot express this. Revised Text: Mention the above in the introduction of the PSM.
a) The semantics of the DURABILITY QoS attribute "service_cleanup_delay" in relation to the RxO mechanism are not specified. b) There is no relation between the history and resource limits of the durability service and the history and resource limits of readers and writers. c) The durability service still has to be configured by means of the above-mentioned parameters: 'service_cleanup_delay', 'history', and 'resource-limits'. Resolution: Remove service_cleanup_delay from the DURABILITY QoS policy for readers and writers. Add a new QoS "DURABILITY_SETTINGS" on the Topic whose sole purpose is to configure the durability service. The QoS policy should include the parameters 'service_cleanup_delay', 'history', and 'resource-limits'. Revised Text: Remove the service_cleanup_delay from the DURABILITY QoS for readers and writers. Remove the history and resource-limits QoS policies from the TopicBuiltinTopicData. Add the new DURABILITY_SETTINGS QoS and explain its behavior/meaning. Add this QoS policy also to the TopicBuiltinTopicData. Update the PSM accordingly.
DDS supports timestamping of information, either automatically or manually. These timestamps also appear in the SampleInfo data. What is missing is a get_current_time() call that allows applications to retrieve the current time in the format utilized by DDS; that is, the returned format and time origin are the same as those used by the default/internal DDS timestamping. Resolution: Add such a get_current_time() function to the participant class. Revised Text: Add and explain the method and update the PSM.
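A sketch of the proposed operation (C++ PSM mapping and parameter convention assumed):
void timestamp_now(DDS::DomainParticipant* participant) {
    DDS::Time_t now;
    participant->get_current_time(now);
    // 'now' shares the epoch and representation of SampleInfo::source_timestamp,
    // so it can be compared directly with received sample timestamps
}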
Whenever an instance handle is passed as an input parameter to a DataReader method, it may be invalid, e.g. because the instance has been disposed and its handle reclaimed. The semantics of this case should be clarified. Resolution: Assuming that implementations want to check for invalid handles, they should uniformly return BAD_PARAMETER. Revised Text: Specify that BAD_PARAMETER is returned when an illegal handle is provided to: · read/take_instance · read/take_next_instance · read/take_next_instance_with_condition · get_key_value (on both the DataReader and the DataWriter)
If a QoS policy is not supported (i.e. is part of an unsupported profile) but the user supplies a non-default value for it, how should the system react? Resolution: Simply return UNSUPPORTED. Revised Text: Add a remark to the specification that the return code UNSUPPORTED is returned when supplying a QoS policy that is not supported by the middleware (i.e. one that is part of an optional profile that the specific middleware implementation does not support).
Section 2.1.1.2.2 states: "At the DCPS level, data types represent information that is sent atomically. For performance reasons, only plain data structures are handled by this level." It is not clear what "plain data structures" means. Proposed Resolution: Remove the second sentence quoted above from the specification. Proposed Revised Text: Remove the sentence "For performance reasons, only plain data structures are handled by this level" from section 2.1.1.2.2, page 2-7.
Discard. This issue duplicates Issue#7966.
The operations QueryCondition::get_query_arguments and QueryCondition::set_query_arguments are named inconsistently with respect to similar operations on the ContentFilteredTopic and the MultiTopic. Proposed Resolution: Rename get_query_arguments to get_query_parameters and set_query_arguments to set_query_parameters in both the PIM and the PSM. Proposed Revised Text: Rename get_query_arguments to get_query_parameters and set_query_arguments to set_query_parameters in the table in section 2.1.2.5.9. Rename set_query_arguments to set_query_parameters in the paragraph immediately following the table in the same section. Rename get_query_arguments to get_query_parameters in the title of section 2.1.2.5.9.2. Rename set_query_arguments to set_query_parameters within that section (two occurrences). Rename set_query_arguments to set_query_parameters in the title of section 2.1.2.5.9.3. Rename set_query_arguments to set_query_parameters in figure 2-18. Rename get_query_arguments to get_query_parameters and set_query_arguments to set_query_parameters in the IDL PSM in section 2.2.3, page 2-144.
Summary: In Section 2.1.5, there is a table that lists all the QoS policies that are used to create built-in readers. Since the policies are for creating built-in readers, the table should only list the QoS for the corresponding subscriber, reader, and topic. It shouldn't list any policies that occur only in DataWriterQos. Specifically, TRANSPORT_PRIORITY, LIFESPAN, and OWNERSHIP_STRENGTH, all of which apply only to DataWriters, are currently listed erroneously. The following QoS are supposed to apply to DataReaders (or their related entities) but are missing from the table: ReaderDataLifecycleQosPolicy, EntityFactoryQosPolicy. For the QoS that are already listed in the table, some don't list the default values of all of their fields: DURABILITY is missing the service_cleanup_delay value, and RELIABILITY is missing the max_blocking_time value. Proposed Resolution: Remove TRANSPORT_PRIORITY, LIFESPAN, and OWNERSHIP_STRENGTH from the table. Add the following values: READER_DATA_LIFECYCLE autopurge_nowriter_samples_delay = INFINITE ENTITY_FACTORY autoenable_created_entities = TRUE DURABILITY service_cleanup_delay = 0 RELIABILITY max_blocking_time = 100 milliseconds Proposed Revised Text: Change the table in section 2.1.5, page 2-129, as described above.
The specification already provides the operations get_matched_publication_data and get_matched_subscription_data on the DataReader and DataWriter. These operations allow applications to look up information about entities that exist in the domain without having to use the built-in DataReaders directly. It would be useful to have the corresponding ability to look up information about remote DomainParticipants and Topics; however, no such operations exist.
Proposed Resolution:
Add the following operations:
· ReturnCode_t DomainParticipant::get_discovered_participants(inout InstanceHandle_t[] participant_handles)
· ReturnCode_t DomainParticipant::get_discovered_participant_data(inout ParticipantBuiltinTopicData participant_data, InstanceHandle_t participant_handle)
· ReturnCode_t DomainParticipant::get_discovered_topics(inout InstanceHandle_t[] topic_handles)
· ReturnCode_t DomainParticipant::get_discovered_topic_data(inout TopicBuiltinTopicData topic_data, InstanceHandle_t topic_handle)
Proposed Revised Text:
Add the names of the aforementioned new operations to figure 2-6.
Append the following rows to the DomainParticipant Class table in 2.1.2.2.1:
get_discovered_participant_data ReturnCode_t
inout: participant_data ParticipantBuiltinTopicData
participant_handle InstanceHandle_t
get_discovered_participants ReturnCode_t
inout: participant_handles InstanceHandle_t []
get_discovered_topic_data ReturnCode_t
inout: topic_data TopicBuiltinTopicData
topic_handle InstanceHandle_t
get_discovered_topics ReturnCode_t
inout: topic_handles InstanceHandle_t []
Insert new sections to describe the new operations:
2.1.2.2.1.26 get_discovered_participant_data
This operation retrieves information on a DomainParticipant that has been discovered on the network. The participant must be in the same domain as the participant on which this operation is invoked and must not have been "ignored" by means of the DomainParticipant ignore_participant operation.
The participant_handle must correspond to such a DomainParticipant. Otherwise, the operation will fail and return PRECONDITION_NOT_MET.
Use the operation get_discovered_participants to find the DomainParticipants that are currently discovered.
The operation may also fail if the infrastructure does not hold the information necessary to fill in the participant_data. In this case the operation will return UNSUPPORTED.
2.1.2.2.1.27 get_discovered_participants
This operation retrieves the list of DomainParticipants that have been discovered in the domain and that the application has not indicated should be "ignored" by means of the DomainParticipant ignore_participant operation.
The operation may fail if the infrastructure does not locally maintain the connectivity information. In this case the operation will return UNSUPPORTED.
2.1.2.2.1.28 get_discovered_topic_data
This operation retrieves information on a Topic that has been discovered on the network. The topic must have been created by a participant in the same domain as the participant on which this operation is invoked and must not have been "ignored" by means of the DomainParticipant ignore_topic operation.
The topic_handle must correspond to such a topic. Otherwise, the operation will fail and return PRECONDITION_NOT_MET.
Use the operation get_discovered_topics to find the topics that are currently discovered.
The operation may also fail if the infrastructure does not hold the information necessary to fill in the topic_data. In this case the operation will return UNSUPPORTED.
2.1.2.2.1.29 get_discovered_topics
This operation retrieves the list of Topics that have been discovered in the domain and that the application has not indicated should be "ignored" by means of the DomainParticipant ignore_topic operation.
The operation may fail if the infrastructure does not locally maintain the connectivity information. In this case the operation will return UNSUPPORTED.
The DDS spec currently states that the max_blocking_time parameter of the RELIABILITY QoS only applies for data writers that are RELIABLE and have HISTORY QoS of KEEP_ALL. These assertions are not true. Depending on the RESOURCE_LIMITS QoS, even a KEEP_LAST writer may eventually need to block. Proposed Resolution: The specification needs to be updated in the table of QoS in Section 2.1.3 and in the DataWriter section 2.1.2.4.2.10 for write to account for the case in which (max_samples < max_instances * HISTORY depth). In this case, the writer may attempt to write a new value for an existing instance whose history is not full and fail because it exceeds the max_samples limit. Therefore, if (max_samples < max_instances * HISTORY depth), then in the situation where the max_samples resource limit is exhausted the middleware is allowed to discard samples of some other instance as long as at least one sample remains for that instance. If it is still not possible to make space available for the new sample, the writer is allowed to block. The behavior in the case where max_samples < max_instances must also be described. In that case the writer is allowed to block. Proposed Revised Text: In the QoS table in 2.1.3, change the first sentence of the "Meaning" cell of the RELIABILITY max_blocking_time row to: "This setting applies only to the case where kind=RELIABLE." In section 2.1.2.4.2.10, replace the final paragraph with the following: If the RELIABILITY kind is set to RELIABLE, the write operation on the DataWriter may block if the modification would cause data to be lost or else cause one of the limits specified in the RESOURCE_LIMITS to be exceeded. Under these circumstances, the RELIABILITY max_blocking_time configures the maximum time the write operation may block waiting for space to become available. If max_blocking_time elapses before the DataWriter is able to store the modification without exceeding the limits, the write operation will fail and return TIMEOUT. Specifically, the DataWriter may block in the following situations (although the list may not be exhaustive), even if its HISTORY kind is KEEP_LAST. · If (RESOURCE_LIMITS max_samples < RESOURCE_LIMITS max_instances * HISTORY depth), then in the situation where the max_samples resource limit is exhausted the Service is allowed to discard samples of some other instance as long as at least one sample remains for such an instance. If it is still not possible to make space available to store the modification, the writer is allowed to block. · If (RESOURCE_LIMITS max_samples < RESOURCE_LIMITS max_instances), then the DataWriter may block regardless of the HISTORY depth. In section 2.1.3.13 RELIABILITY, the second paragraph currently states: The setting of this policy has a dependency on the setting of the HISTORY and RESOURCE_LIMITS policies. In case the RELIABILITY kind is set to RELIABLE and the HISTORY kind set to KEEP_ALL the write operation on the DataWriter may block if the modification would cause data to be lost or else cause one of the limits specified in the RESOURCE_LIMITS to be exceeded. The above text should be rewritten as follows: The setting of this policy has a dependency on the RESOURCE_LIMITS policy. In case the RELIABILITY kind is set to RELIABLE the write operation on the DataWriter may block if the modification would cause data to be lost or else cause one of the limits specified in the RESOURCE_LIMITS to be exceeded.
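To make the first bullet concrete with assumed numbers: suppose max_samples = 100, max_instances = 50, and HISTORY depth = 4, so that max_samples (100) is less than max_instances × depth (200). A DataWriter managing 50 instances with two stored samples each has already exhausted max_samples even though no instance's history of 4 is full; a subsequent write may then force the Service to discard a sample of some other instance or, failing that, block.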
(R#132) Clarify meaning of LivelinessChangedStatus fields and LIVELINESS lease_duration Summary: The specification of LivelinessChangedStatus doesn't explain what the terms "active" and "inactive" mean, nor what change is expected when various events occur. For example, the following actions should be accounted for: · Loss of liveliness by a previously alive writer · Re-assertion of liveliness on a previously lost writer · Normal deletion of an alive writer · Normal deletion of a not-alive writer · Assertion of liveliness on a new writer · A new writer is discovered (i.e. on_publication_match) but its liveliness has not yet been asserted The specification is also not clear about the usage of a DataReader's LIVELINESS lease_duration: it is not clear whether this field is used solely for QoS compatibility comparison with matching remote writers or whether it is also the rate at which the reader will update its LivelinessChangedStatus. Proposed Resolution: Change "active" to "alive" and "inactive" to "not_alive" in the LivelinessChangedStatus field names. In response to the list of events above: · Previously alive writer is lost: alive_count_change == -1, not_alive_count_change == 1 · Lost writer re-asserts liveliness: alive_count_change == 1, not_alive_count_change == -1 · Normal deletion of alive writer: alive_count_change == -1, not_alive_count_change == 0 · Normal deletion of not-alive writer: alive_count_change == 0, not_alive_count_change == -1 · New writer asserts liveliness for the first time: alive_count_change == 1, not_alive_count_change == 0 · New writer that hasn't yet asserted liveliness: LivelinessChangedStatus is not changed Specify that the information communicated by a reader's LivelinessChangedStatus is out of date by no more than a lease_duration. That is, the reader commits to update its LivelinessChangedStatus if necessary at least once during its lease_duration, although it may update more often if it chooses. Proposed Revised Text: Add an additional paragraph to the end of section 2.1.3.10 LIVELINESS: The information communicated by a DataReader's LivelinessChangedStatus is out of date by no more than a single lease_duration. That is, the reader commits to updating its LivelinessChangedStatus if necessary at least once during each lease_duration, although it is permitted to update more often. Change active_count to alive_count, inactive_count to not_alive_count, active_count_change to alive_count_change, and inactive_count_change to not_alive_count_change in figure 2-13 on page 2-117. The rows for the LivelinessChangedStatus fields in the table on page 2-118 should be as follows: alive_count The total number of currently alive DataWriters that write the Topic read by the DataReader. This count increases when a newly matched DataWriter asserts its liveliness for the first time or when a DataWriter previously considered to be not alive reasserts its liveliness. The count decreases when a DataWriter considered alive fails to assert its liveliness and becomes not alive, whether because it was deleted normally or for some other reason. not_alive_count The total number of currently matched DataWriters that write the Topic read by the DataReader and that are no longer asserting their liveliness. This count increases when a DataWriter considered alive fails to assert its liveliness and becomes not alive for some reason other than the normal deletion of that DataWriter. It decreases when a previously not-alive DataWriter either reasserts its liveliness or is deleted normally.
alive_count_change The change in the alive_count since the last time the listener was called or the status was read. not_alive_count_change The change in the not_alive_count since the last time the listener was called or the status was read. Change active_count to alive_count, inactive_count to not_alive_count, active_count_change to alive_count_change, and inactive_count_change to not_alive_count_change in the IDL PSM on page 2-141.
The specification does not state whether the LivelinessLostStatus should be considered changed (and the on_liveliness_lost listener callback called) only once, when the writer first loses its liveliness, or whether it should be called after every period in which the writer fails to assert its liveliness. For example, if a writer with liveliness set to MANUAL_BY_TOPIC doesn't write for two LIVELINESS lease_duration periods, should the writer listener's on_liveliness_lost callback be called once per lease_duration, or should the callback be called only once, after the first lease_duration? The analogous ambiguity exists with respect to the *DeadlineMissed statuses. Proposed Resolution: The cases are somewhat different in that the deadline is under the application's control while a loss of liveliness is not (e.g. it may occur as a result of a network failure). Therefore, the *DeadlineMissed statuses should be considered changed (and the listeners invoked) at the end of every deadline period. The LivelinessLostStatus, on the other hand, should be considered changed only when a writer's state changes from alive to not alive, not after every lease_duration period thereafter. Proposed Revised Text: Change the description in the RequestedDeadlineMissed total_count row of the table in 2.1.4.1 (page 2-118) to read: "Total cumulative number of missed deadlines detected for any instance read by the DataReader. Missed deadlines accumulate; that is, each deadline period the total_count will be incremented by one for each instance for which data was not received." Change the description in the LivelinessLostStatus total_count row of the table in 2.1.4.1 (page 2-119) to read: "Total cumulative number of times that a previously-alive DataWriter became not alive due to a failure to actively signal its liveliness within its offered liveliness period. This count does not change when an already not-alive DataWriter simply remains not alive for another liveliness period." Change the description in the OfferedDeadlineMissed total_count row of the table in 2.1.4.1 (page 2-119) to read: "Total cumulative number of offered deadline periods elapsed during which a DataWriter failed to provide data. Missed deadlines accumulate; that is, each deadline period the total_count will be incremented by one."
The specification states that before an entity is enabled, the only operations that can be invoked on it are get/set listener, get/set QoS, get_statuscondition, and factory methods. This list is unnecessarily restrictive.
Proposed Resolution:
The following operations should also be allowed:
· TopicDescription::get_name
· TopicDescription::get_type_name
· DomainParticipant::lookup_topicdescription
· Publisher::lookup_datawriter
· Subscriber::lookup_datareader
· Entity::get_status_changes and all get_*_status operations. Note that no status is considered 'triggered' when an Entity is disabled.
· All get_/set_default_*_qos operations
Proposed Revised Text:
Revise the fourth paragraph of section 2.1.2.1.1.7 enable to read:
If an Entity has not yet been enabled, the following operations may be invoked on it:
· Operations to set or get an Entity's QoS policies (including default QoS policies) and listener
· get_statuscondition
· 'factory' operations
· get_status_changes and other get status operations (although no status of a disabled entity is ever considered changed)
· 'lookup' operations
Other operations may explicitly state that they may be called on disabled entities; those that do not will return the error NOT_ENABLED.
Add the following sentence to sections 2.1.2.3.1.2 (TopicDescription::get_type_name) and 2.1.2.3.1.3 (TopicDescription::get_name): "This operation may be invoked on a Topic that is not yet enabled or on a ContentFilteredTopic or MultiTopic based on such a Topic."
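A sketch of the relaxed rule in application code, assuming a C++ binding generated from the IDL PSM; the entity-creation signatures and the SUBSCRIBER_QOS_DEFAULT shorthand follow that mapping and are assumptions here, not normative text:
    DDS::DomainParticipantQos pqos;
    factory->get_default_participant_qos(pqos);
    pqos.entity_factory.autoenable_created_entities = false;  // children start disabled
    DDS::DomainParticipant* dp = factory->create_participant(0, pqos, NULL, 0);

    // This Subscriber is created disabled:
    DDS::Subscriber* sub = dp->create_subscriber(DDS::SUBSCRIBER_QOS_DEFAULT, NULL, 0);

    DDS::SubscriberQos sqos;
    sub->get_qos(sqos);                                       // allowed while disabled
    DDS::StatusCondition* sc = sub->get_statuscondition();    // allowed while disabled
    DDS::DataReader* dr = sub->lookup_datareader("Example");  // 'lookup', allowed under this proposal
    // An operation outside the list, e.g. sub->begin_access(), would return NOT_ENABLED.
    sub->enable();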
The specification states that the default value of the RELIABILITY QoS policy on a DataWriter is BEST_EFFORT; however, this makes the writer automatically incompatible with readers that request a RELIABLE value. This situation is not a problem per se, but it means that applications desiring RELIABLE communications must change the default configuration in two places: both on the reader and on the writer.
Proposed Resolution:
Changing the default value to RELIABLE, on the DataWriter only, would make the initial configuration of DDS applications simpler. The default behavior would still be the same because the DataReader would still default to BEST_EFFORT and therefore the default communication would be BEST_EFFORT. However, applications desiring a RELIABLE setting would have to change the defaults in only one place: on the DataReader.
Proposed Revised Text:
Append the following sentence to the "Meaning" column of the RELIABILITY RELIABLE row of the table in 2.1.3: "This is the default value for DataWriters."
The final sentence in the "Meaning" column of the RELIABILITY BEST_EFFORT row of the table in 2.1.3 currently states: "This is the default value." This sentence should be amended to read: "This is the default value for DataReaders and Topics."
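Under the proposal, an application wanting reliable communication would touch only the reader side; a minimal sketch, assuming a C++ binding generated from the IDL PSM:
    DDS::DataReaderQos rqos;
    subscriber->get_default_datareader_qos(rqos);
    rqos.reliability.kind = DDS::RELIABLE_RELIABILITY_QOS;  // the only change needed
    DDS::DataReader* reader = subscriber->create_datareader(topic, rqos, NULL, 0);
    // The DataWriter keeps its default QoS, which under this proposal is already RELIABLE.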
The description of the DomainParticipant::create_topic operation in section 2.1.2.2.1.5 states that if an existing topic is found with the same name and QoS, that Topic will be returned; no duplicate Topic will be created. However, the specification fails to describe what happens in the event that the name and QoS match but the listener is different. Additionally, the behavior is confusing to users, because create_topic behaves differently from all other factory methods in this respect.
Proposed Resolution:
Revise the specification to remove the language about reusing Topics. The create_topic operation, like all other 'create' operations in the specification, should always return a new Topic.
Proposed Revised Text:
Section 2.1.2.2.1.5 contains the following paragraphs; they should both be stricken from the specification:
The implementation of create_topic will automatically perform a lookup_topicdescription for the specified topic_name. If a Topic is found, then the QoS and type_name of the found Topic are matched against the ones specified on the create_topic call. If there is an exact match, the existing Topic is returned. If there is no match the operation will fail. The consequence is that the application can never create more than one Topic with the same topic_name per DomainParticipant. Subsequent attempts will either return the existing Topic (i.e., behave like find_topic) or else fail.
If a Topic is obtained multiple times by means of a create_topic, it must also be deleted that same number of times using delete_topic.
The Publisher entity has operations begin_coherent_changes and end_coherent_changes that allow groups of updates to be received by subscriptions as if they were a single update. Although the specification already contains a general statement about receivers not making updates available until all have been received, it makes no specific mention of communication interruptions or configuration changes. This omission has caused questions to be raised with regard to the interactions between coherent changes and partitions, late-joining DataReaders, and network failures.
Proposed Resolution:
The specification should be amended to state that a Publisher should not prevent users from changing its partitions while it is in the middle of publishing a set of coherent changes, as the effect of doing so is no different than that of any other connectivity change. However, in the event that connectivity changes occur between the publishers and receivers of data such that some receiver is not able to obtain the entire set, that receiver must act as if it had received none of the data.
Proposed Revised Text:
Append the following text to the second paragraph of section 2.1.2.4.1.10 begin_coherent_changes:
A connectivity change may occur in the middle of a set of coherent changes; for example, the set of partitions used by the Publisher or one of its Subscribers may change, a late-joining DataReader may appear on the network, or a communication failure may occur. In the event that such a change prevents an entity from receiving the entire set of coherent changes, that entity must behave as if it had received none of the set.
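For reference, a typical use of the operations in question, sketched against an assumed C++ binding (position_writer, velocity_writer, and the samples are placeholder names):
    publisher->begin_coherent_changes();
    position_writer->write(position_sample, DDS::HANDLE_NIL);
    velocity_writer->write(velocity_sample, DDS::HANDLE_NIL);
    publisher->end_coherent_changes();
    // Under the proposed text, a DataReader that cannot obtain this entire
    // set (e.g. because a partition changed mid-set) must behave as if it
    // had received none of it.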
The table in 2.1.5 says that built-in DataReaders should have TRANSIENT durability. However, the description of that durability states that support for it is optional. Proposed Resolution: The specification should be changed to state that built-in readers should have TRANSIENT_LOCAL durability. Proposed Revised Text: Change TRANSIENT to TRANSIENT_LOCAL in the DURABILITY row of the table on page 2-129.
The specification does not explicitly state whether clients of the built-in DataReaders should be able to discover other entities that belong to the same participant by those means. In other words, if a DataReader 'A' belongs (indirectly) to a DomainParticipant 'B', will information about A appear when one reads from the subscription built-in reader of B?
We believe that most users will not want to "discover" entities they created themselves; the purpose of the built-in entities is to discover what exists elsewhere on the network. Furthermore, there is currently no way for a client of the built-in reader to distinguish between entities belonging to its own DomainParticipant and those that exist elsewhere on the network.
Proposed Resolution:
Clarify the descriptions of the built-in topics to indicate that data pertaining to entities of the same participant will not be made available there.
A mechanism to determine whether an instance handle (read from a built-in topic or obtained through any other means) represents a particular known entity is generally useful. Add the following operations:
· InstanceHandle_t Entity::get_instance_handle()
· boolean DomainParticipant::contains_entity(InstanceHandle_t a_handle)
Proposed Revised Text:
Add the following sentence to the end of the first paragraph on page 2-129:
A built-in DataReader object obtained from a given participant will not provide data pertaining to entities created (directly or indirectly) from that same participant, under the assumption that such objects are already known to the application.
Add the following row to the Entity Class table in section 2.1.2.1.1:
get_instance_handle InstanceHandle_t
Add the description of the new operation as a new section:
2.1.2.1.1.8 get_instance_handle
Get the instance handle that represents the Entity in the built-in topic data, in various statuses, and elsewhere.
Add the following row to the DomainParticipant Class table in section 2.1.2.2.1:
contains_entity boolean
a_handle InstanceHandle_t
Add the description of the new operation as a new section:
2.1.2.2.1.26 contains_entity
This operation checks whether or not the given instance handle represents an entity that was created, directly or indirectly, from the DomainParticipant. The instance handle for an Entity may be obtained from built-in topic data, from various statuses, or from the Entity operation get_instance_handle.
Add the new operations to the IDL PSM in section 2.2.3:
interface Entity {
InstanceHandle_t get_instance_handle();
};
interface DomainParticipant : Entity {
boolean contains_entity(InstanceHandle_t a_handle);
};
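A usage sketch of the two new operations, assuming a C++ binding; the instance_handle field of SampleInfo is where a client of a built-in reader would obtain handles:
    // Handle obtained from a sample read from the DCPSSubscription built-in reader:
    DDS::InstanceHandle_t handle = sample_info.instance_handle;
    if (!participant->contains_entity(handle)) {
        // The subscription belongs to some other participant on the network.
    }
    // An entity's own handle, for comparison against built-in topic data:
    DDS::InstanceHandle_t my_handle = reader->get_instance_handle();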
This issue subsumes two related issues.
· Presumably, listener callbacks pertaining to built-in entities should fall back to the DomainParticipantListener of their containing DomainParticipant in the usual way in the event that those built-in entities have not requested to receive the callbacks themselves. However, this behavior may prove inconvenient in practice: users, even those completely uninterested in built-in entities, must recognize the callbacks pertaining to those entities and deal with them in some way whenever they install a participant listener. Implementers are also constrained, as they will find it difficult to ensure the correct listener behavior while preserving the freedom to create built-in entities on demand.
· The specification does not state the behavior of installing a nil listener or what mask values are acceptable in that case.
Proposed Resolution:
Installing a nil listener should be equivalent to installing a listener that does nothing. It is acceptable to provide a mask with a nil listener; in that case, no callback will be delivered to the entity or to its containing entities. A DomainParticipant's built-in Subscriber and all of its built-in Topics should by default have nil listeners with all mask bits set. Therefore their callbacks will not propagate back to the DomainParticipantListener unless the user explicitly calls set_listener on them.
Proposed Revised Text:
Insert a new paragraph after the existing first paragraph of section 2.1.2.1.1.3 set_listener:
It is permitted to set a nil listener with any listener mask; doing so is behaviorally equivalent to installing a listener that does nothing.
Append a new sentence to the final bullet in the list in section 2.1.4.3.1:
Any statuses appearing in the mask associated with a nil listener will neither be dispatched to the entity itself nor propagated to its containing entities.
Insert the following paragraph immediately following the table of built-in entity QoS on page 2-129:
Built-in entities have default listener settings as well. A DomainParticipant's built-in Subscriber and all of its built-in Topics have nil listeners with all statuses appearing in their listener masks. The built-in DataReaders have nil listeners with no statuses in their masks.
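A sketch of the proposed semantics, assuming a C++ binding; ALL_STATUS_MASK stands for "all status bits set" and is an assumed name here, not one the specification defines:
    // A nil listener with a full mask: callbacks for every status are
    // swallowed here and never propagate to the DomainParticipantListener.
    builtin_subscriber->set_listener(NULL, ALL_STATUS_MASK);

    // A nil listener with an empty mask: nothing is intercepted, so
    // callbacks propagate outward in the usual way.
    some_reader->set_listener(NULL, 0);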
There is already a convention in the specification of mapping an out parameter in the PIM to an inout parameter in the IDL PSM. This convention is useful because it preserves more precise semantics in the PIM while allowing for more efficient implementations in language PSMs based on the IDL PSM. However, the convention is never explicitly described in the specification, which could lead to confusion among readers.
Proposed Resolution:
Section 2.2.2 PIM to PSM Mapping Rules should explicitly describe and endorse the aforementioned convention.
Proposed Revised Text:
Insert a new paragraph after the current first paragraph in section 2.2.2:
'Out' parameters in the PIM are conventionally mapped to 'inout' parameters in the PSM in order to minimize the memory allocation performed by the Service and allow for more efficient implementations. The intended meaning is that the caller of such an operation should provide an object to serve as a "container" and that the operation will then "fill in" the state of that object appropriately.
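In code, the convention reads as follows (a sketch against an assumed C++ binding):
    DDS::SubscriberQos sqos;    // caller-provided "container"
    subscriber->get_qos(sqos);  // the Service "fills in" sqos
    // No allocation is performed by the Service; the caller may reuse
    // sqos across calls.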
In most cases in the specification, when constants of a type named like <something>Kind need to be combined together, a type <something>Mask is defined in the IDL PSM in addition to <something>Kind. The case of StatusKind is inconsistent, however: its mask type is called StatusKindMask, not StatusMask. Proposed Resolution: Replace StatusKindMask with StatusMask. Clarify the name mapping convention in section 2.2.2. Proposed Revised Text: Append the following sentence to the fourth paragraph of section 2.2.2: "The name of the mask type is formed by replacing the word 'Kind' with the word 'Mask'." Replace "StatusKindMask" with "StatusMask" everywhere it appears in the IDL PSM in section 2.2.3.
I think the following sentence is incorrect: "a Listener is used to provide a callback for synchronous access and a WaitSet associated with one or several Condition objects". As far as I can tell, these roles should be reversed: the Listener provides asynchronous notification, while the WaitSet provides synchronous access. There are a bunch of other bugs in the spec; I'll try to report them as time permits.
In the DDS specification version 05-03-09, page 2-1 specifies that the PSM is for the OMG IDL platform. In section 2.2.1 (page 2-193), more detail is given as to what this PSM looks like. One area I don't see covered is the subset of OMG IDL that the DDS specification supports from a user's perspective. When a user defines an IDL file, what data types should and shouldn't be used? It looks as though valuetypes are not supported, but is it possible to explicitly specify what OMG IDL types are supported by implementations of the DDS specification?
The RTF agrees that this area is underspecified. However, it could not come to a resolution on this. The types that should be supported by DDS implementations, as well as the mechanisms for extending those types, are currently under investigation by the DDS SIG. It is therefore expected that this issue will be resolved by a subsequent RTF.
Summary:
According to the PIM, the get_qos() method returns QosPolicy []. According to the PSM, the qos is a parameter and the method returns void.
Proposed Resolution:
The PIM should be updated to be consistent with the PSM.
In addition, the return value in both the PIM and PSM should be changed from void to ReturnCode_t.
Proposed Revised Text:
Section 2.1.2.1.1 Entity Class; Entity class table
Change row from:
abstract get_qos QosPolicy []
To
abstract get_qos ReturnCode_t
out: qos_list QosPolicy[]
Section 2.1.2.2.1 Domain Module; DomainParticipant class table
Change row from:
abstract get_qos QosPolicy []
To
abstract get_qos ReturnCode_t
out: qos_list QosPolicy[]
Section 2.1.2.3.1 TopicDescription Class; Topic class table
Change row from:
abstract get_qos QosPolicy []
To
abstract get_qos ReturnCode_t
out: qos_list QosPolicy[]
Section 2.1.2.4.1 Publisher Class; Publisher class table
Change row from:
abstract get_qos QosPolicy []
To
abstract get_qos ReturnCode_t
out: qos_list QosPolicy[]
Section 2.1.2.4.2 DataWriter Class; DataWriter class table
Change row from:
abstract get_qos QosPolicy []
To
abstract get_qos ReturnCode_t
out: qos_list QosPolicy[]
Section 2.1.2.5.2 Subscriber Class; Subscriber class table
Change row from:
abstract get_qos QosPolicy []
To
abstract get_qos ReturnCode_t
out: qos_list QosPolicy[]
Section 2.1.2.5.3 DataReader Class; DataReader class table
Change row from:
abstract get_qos QosPolicy []
To
abstract get_qos ReturnCode_t
out: qos_list QosPolicy[]
Section 2.2.3 DCPS PSM : IDL
interface Entity
Change:
// void get_qos(inout EntityQos qos);
To
// ReturnCode_t get_qos(inout EntityQos qos);
Summary: In the PSM it is returning void. However, in the PIM it is returning ReturnCode_t. Also, all other get_default_xxx_qos() methods return ReturnCode_t in both the PIM and the PSM.
Proposed Resolution: The return type should be changed to ReturnCode_t in the PSM.
Proposed Revised Text:
Section 2.2.3 DCPS PSM : IDL interface Publisher:
Replace void get_default_datawriter_qos(inout DataWriterQos qos);
With ReturnCode_t get_default_datawriter_qos(inout DataWriterQos qos);
Summary:
The string sequence parameter in the get_expression_parameters() method of the ContentFilteredTopic and MultiTopic and in the
get_query_parameters() method of the QueryCondition are all listed as the return value in both the PIM and PSM.
It is desirable for the string sequence to be used as a parameter for consistency and to allow for an error return.
Proposed Resolution:
The PIM and the PSM should have the string sequence as a parameter and the methods should return ReturnCode_t.
Proposed Revised Text:
Section 2.1.2.3.3 ContentFilteredTopic class; ContentFilteredTopic class table
Change row from:
get_expression_parameters string[]
To
get_expression_parameters ReturnCode_t
inout: expression_parameters string[]
Section 2.1.2.3.4 MultiTopic Class [optional]
Change row from:
get_expression_parameters string[]
To
get_expression_parameters ReturnCode_t
inout: expression_parameters string[]
Section 2.2.3 DCPS PSM : IDL
interface ContentFilteredTopic
Replace:
StringSeq get_expression_parameters();
With:
ReturnCode_t get_expression_parameters(inout StringSeq expression_parameters);
interface MultiTopic
Replace:
StringSeq get_expression_parameters();
With:
ReturnCode_t get_expression_parameters(inout StringSeq expression_parameters);
Title: (R#4) Mention of the get_instance() operation on the DomainParticipantFactory being static in the wrong section
Summary: The last paragraph of section 2.1.2.2.2.4 (lookup_participant), mentioning that get_instance() is a static operation, probably belongs in the preceding section 2.1.2.2.2.3 (get_instance).
Proposed Resolution: Move the paragraph to the correct section.
Proposed Revised Text:
Section 2.1.2.2.2.4 lookup_participant
Remove the last paragraph: The get_instance operation is a static operation implemented using the syntax of the native language and can therefore not be expressed in the IDL PSM.
Section 2.1.2.2.2.3 get_instance
Add the paragraph removed from above: The get_instance operation is a static operation implemented using the syntax of the native language and can therefore not be expressed in the IDL PSM.
Summary:
In the PIM, all get_XXX_status() methods return the relevant status by value. This does not allow for an error return and is inconsistent with other operations that accept a parameter.
The same is true for the PSM except for get_inconsistent_topic_status() on the Topic which returns ReturnCode_t and the status is a parameter.
Proposed Resolution:
In the PIM and the PSM, the operations should return ReturnCode_t with the status as a parameter.
Proposed Revised Text:
Section 2.1.2.3.2 Topic Class; Replace
get_inconsistent_topic_status InconsistentTopicStatus
With
get_inconsistent_topic_status ReturnCode_t
inout: status InconsistentTopicStatus
Section 2.1.2.4.2 DataWriter Class;
Replace
get_liveliness_lost_status LivelinessLostStatus
get_offered_deadline_missed_status OfferedDeadlineMissedStatus
get_offered_incompatible_qos_status OfferedIncompatibleQosStatus
get_publication_match_status PublicationMatchedStatus
With
get_liveliness_lost_status ReturnCode_t
inout: status LivelinessLostStatus
get_offered_deadline_missed_status ReturnCode_t
inout: status OfferedDeadlineMissedStatus
get_offered_incompatible_qos_status ReturnCode_t
inout: status OfferedIncompatibleQosStatus
get_publication_match_status ReturnCode_t
inout: status PublicationMatchedStatus
Section 2.1.2.5.2 Subscriber Class;
Replace
get_sample_lost_status SampleLostStatus
With
get_sample_lost_status ReturnCode_t
inout: status SampleLostStatus
Section 2.1.2.5.3 DataReader Class;
Replace
get_liveliness_changed_status LivelinessChangedStatus
get_requested_deadline_missed_status RequestedDeadlineMissedStatus
get_requested_incompatible_qos_status RequestedIncompatibleQosStatus
get_sample_rejected_status SampleRejectedStatus
get_subscription_match_status SubscriptionMatchedStatus
With
get_liveliness_changed_status ReturnCode_t
inout: status LivelinessChangedStatus
get_requested_deadline_missed_status ReturnCode_t
inout: status RequestedDeadlineMissedStatus
get_requested_incompatible_qos_status ReturnCode_t
inout: status RequestedIncompatibleQosStatus
get_sample_rejected_status ReturnCode_t
inout: status SampleRejectedStatus
get_subscription_match_status ReturnCode_t
inout: status SubscriptionMatchedStatus
Section 2.2.3 DCPS PSM : IDL
interface DataWriter; Replace:
LivelinessLostStatus get_liveliness_lost_status();
OfferedDeadlineMissedStatus get_offered_deadline_missed_status();
OfferedIncompatibleQosStatus get_offered_incompatible_qos_status();
PublicationMatchedStatus get_publication_match_status();
With
ReturnCode_t get_liveliness_lost_status(inout LivelinessLostStatus status);
ReturnCode_t get_offered_deadline_missed_status(inout OfferedDeadlineMissedStatus status);
ReturnCode_t get_offered_incompatible_qos_status(inout OfferedIncompatibleQosStatus status);
ReturnCode_t get_publication_match_status(inout PublicationMatchedStatus status);
interface DataReader; Replace:
SampleRejectedStatus get_sample_rejected_status();
LivelinessChangedStatus get_liveliness_changed_status();
RequestedDeadlineMissedStatus get_requested_deadline_missed_status();
RequestedIncompatibleQosStatus get_requested_incompatible_qos_status();
SubscriptionMatchedStatus get_subscription_match_status();
SampleLostStatus get_sample_lost_status();
With:
ReturnCode_t get_sample_rejected_status( inout SampleRejectedStatus status );
ReturnCode_t get_liveliness_changed_status(inout LivelinessChangedStatus status);
ReturnCode_t get_requested_deadline_missed_status(inout RequestedDeadlineMissedStatus status);
ReturnCode_t get_requested_incompatible_qos_status(inout RequestedIncompatibleQosStatus status);
ReturnCode_t get_subscription_match_status(inout SubscriptionMatchedStatus status);
ReturnCode_t get_sample_lost_status(inout SampleLostStatus status);
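With the proposed signatures, a caller can check status retrieval for errors; a sketch assuming a C++ binding generated from the IDL PSM:
    DDS::LivelinessChangedStatus status;
    DDS::ReturnCode_t rc = reader->get_liveliness_changed_status(status);
    if (rc == DDS::RETCODE_OK) {
        // status.alive_count, status.alive_count_change, etc. are usable here.
    }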
Summary: We have REJECTED_BY_SAMPLES_LIMIT, which corresponds to max_samples in the ResourceLimitsQosPolicy, but we have REJECTED_BY_INSTANCE_LIMIT, which corresponds to max_instances.
Proposed Resolution: It should be named REJECTED_BY_INSTANCES_LIMIT.
Proposed Revised Text:
Section 2.2.3 DCPS PSM : IDL enum SampleRejectedStatusKind;
Replace REJECTED_BY_INSTANCE_LIMIT
With REJECTED_BY_INSTANCES_LIMIT
Summary: The OWNERSHIP_STRENGTH QoS only applies to DataWriters, yet it is listed in the table of the QoS of the built-in Subscriber and DataReader objects in Section 2.1.5. Proposed Resolution: Remove OWNERSHIP_STRENGTH from the aforementioned table. Proposed Revised Text: Section 2.1.5 In the table that follows the sentence: The QoS of the built-in Subscriber and DataReader objects is given by the following table: Remove the row for 'OWNERSHIP_STRENGTH'
Summary: The description of the TIME_BASED_FILTER QoS is missing the description of the consistency requirement with the DEADLINE QoS, which is mentioned in the table in Section 2.1.3. Also, we should mention a consistency requirement between max_samples and max_samples_per_instance within the RESOURCE_LIMITS QoS.
Proposed Resolution: In Section 2.1.3.12 on the TIME_BASED_FILTER QoS we should make explicit mention that the minimum_separation must be <= the period of the DEADLINE QoS. In both the table in Section 2.1.3 and in Section 2.1.3.22 on the RESOURCE_LIMITS QoS we should mention the consistency requirement that max_samples >= max_samples_per_instance.
Proposed Revised Text:
Section 2.1.3.12 TIME_BASED_FILTER; add the following paragraph to the end of the section:
The setting of the TIME_BASED_FILTER policy must be consistent with that of the DEADLINE policy. For these two policies to be consistent the settings must be such that "deadline period >= minimum_separation." An attempt to set these policies in an inconsistent manner will cause the INCONSISTENT_POLICY status to change and any associated Listeners/WaitSets to be triggered.
Section 2.1.3.22 RESOURCE_LIMITS; add the following paragraph before the last paragraph in the section:
The setting of RESOURCE_LIMITS max_samples must be consistent with the setting of max_samples_per_instance. For these two values to be consistent they must satisfy max_samples >= max_samples_per_instance.
Section 2.1.3.22 RESOURCE_LIMITS; add the following paragraph at the end of the section:
An attempt to set these policies in an inconsistent manner will cause the INCONSISTENT_POLICY status to change and any associated Listeners/WaitSets to be triggered.
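A sketch of the consistency check an implementation's set_qos might perform; the helper names are illustrative, and the LENGTH_UNLIMITED sentinel for the resource limits is ignored for brevity:
    // Compare two Duration_t values (hypothetical helper).
    bool duration_leq(const DDS::Duration_t& a, const DDS::Duration_t& b) {
        return a.sec < b.sec || (a.sec == b.sec && a.nanosec <= b.nanosec);
    }

    bool qos_is_consistent(const DDS::DataReaderQos& q) {
        // TIME_BASED_FILTER minimum_separation <= DEADLINE period
        bool filter_ok =
            duration_leq(q.time_based_filter.minimum_separation, q.deadline.period);
        // RESOURCE_LIMITS max_samples >= max_samples_per_instance
        bool limits_ok =
            q.resource_limits.max_samples >= q.resource_limits.max_samples_per_instance;
        return filter_ok && limits_ok;  // otherwise set_qos fails with INCONSISTENT_POLICY
    }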
Blocking of the write() call depending on the RESOURCE_LIMITS, HISTORY, and RELIABILITY QoS
Summary: Section 2.1.2.4.2.11 states that even writers with KEEP_LAST HISTORY QoS can block and describes some scenarios. Some of these scenarios may no longer be valid depending on whether the implementation is willing to sacrifice reliability. The table in Section 2.1.3 states that the max_blocking_time in the RELIABILITY QoS only applies for RELIABLE and KEEP_ALL HISTORY QoS. In Section 2.1.3.14 it is only mentioned that the writer can block if the RELIABILITY QoS is set to RELIABLE.
Proposed Resolution: At the very least, remove mention of the requirement that the HISTORY QoS be KEEP_ALL for blocking to apply in the table in Section 2.1.3.
Proposed Revised Text:
Section 2.1.3 QoS Table; on the entry for the RELIABILITY QoS max_blocking_time:
Replace: This setting applies only to the case where kind=RELIABLE and the HISTORY is KEEP_ALL.
With: This setting applies only to the case where kind=RELIABLE.
Summary:
In the table in Section 2.1.3, the default partition value is said to be a zero-length sequence, which "is equivalent to a sequence containing a single element consisting of an empty string", which will match any partition. However, if an empty string will match any partition, it is not consistent with normal regular expression matching.
Proposed Resolution:
It is desirable that if a specific partition is specified, it match only other entities that have that same partition. If the default behavior were for the empty string to match all partitions, there would be no way for a newly created entity to prevent others from matching it except by using some special partition.
Therefore, we should not overload the meaning of the empty string to mean matching everything. Instead, the empty string is the default partition. An empty partition sequence or a partition sequence that consists of wildcards only will automatically be assumed to be in the default empty string partition.
Proposed Revised Text:
Section 2.1.3 Supported QoS PARTITION Table
On the "Meaning" Column for the PARTITION QoS;
Replace the following paragraph:
The default value is an empty (zero-length) sequence. This is treated as a special value that matches any partition. And is equivalent to a sequence containing a single element consisting of the empty string.
With
The empty string ("") is considered a valid partition that is matched with other partition names using the same rules of string matching and regular-expression matching used for any other partition name (see Section 2.1.3.13).
The default value for the PARTITION QoS is a zero-length sequence. The zero-length sequence is treated as a special value equivalent to a sequence containing a single element consisting of the empty string.
Also, in Section 2.1.3.13 PARTITION, the "connection" between a reader and writer is described as an "association". The correct term used elsewhere in the spec is "match". Therefore the use of "association" in this section should be replaced with the term "match".
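Under the proposed semantics, an explicitly named partition matches only peers naming a matching partition; a sketch assuming a C++ binding (the sequence manipulation follows the IDL-to-C++ mapping):
    DDS::PublisherQos pub_qos;
    participant->get_default_publisher_qos(pub_qos);  // zero-length sequence: the "" partition
    pub_qos.partition.name.length(1);
    pub_qos.partition.name[0] = "telemetry";          // matches only peers in "telemetry"
    DDS::Publisher* pub = participant->create_publisher(pub_qos, NULL, 0);
    // A Subscriber left with the default (zero-length) partition sequence is
    // in the "" partition and would not match this Publisher.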
Summary: In the table in Section 2.1.5, for both DCPSPublication and DCPSSubscription there is a typo in that "ownershiph" should be "ownership". Also, the destination_order row in DCPSPublication should be of type "DestinationOrderQosPolicy" and not "QosPolicy", and the presentation row in DCPSPublication should be of type "PresentationQosPolicy" and not "DestinationOrderQosPolicy". Also, in the paragraph at the top of the page containing the table there is a typo where "crated" should be "created".
Proposed Resolution: Fix the typos.
Proposed Revised Text:
Section 2.1.5, 2 paragraphs above the Builtin-Topic table, at the end of the paragraph: Replace "crated" with "created" in the sentence "application that crated them."
Section 2.1.5 Builtin-Topic table:
Replace DCPSPublication field name 'ownershiph' with 'ownership'
Replace DCPSSubscription field name 'ownershiph' with 'ownership'
Replace the type of the DCPSPublication destination_order field from "QosPolicy" to "DestinationOrderQosPolicy"
Replace the type of the DCPSPublication presentation field from "DestinationOrderQosPolicy" to "PresentationQosPolicy"
The method name is get/set_expression_parameters() whereas the parameter passed in is named "filter_parameters". Understandably the full name is filter expression parameters, since the ContentFilteredTopic has a "filter_expression" attribute. Compare this with the MultiTopic, which has the same named methods taking "expression_parameters" and has a "subscription_expression" attribute. The name "filter_parameters" is also used in the create_contentfilteredtopic() method on the DomainParticipant.
Proposed Resolution: Change the name "filter_parameters" to "expression_parameters" for more consistency.
Proposed Revised Text:
Section 2.1.2.2.1 DomainParticipant Class; DomainParticipant class table: On the row describing the operation "create_contentfilteredtopic", replace parameter name "filter_parameters" with "expression_parameters"
Section 2.1.2.2.1.7 create_contentfilteredtopic: In the last paragraph, replace "filter_parameters" with "expression_parameters"
Section 2.1.2.3.3 ContentFilteredTopic Class; ContentFilteredTopic class table: On the row describing the operation "set_expression_parameters", replace parameter name "filter_parameters" with "expression_parameters"
Section 2.1.2.3.3 ContentFilteredTopic Class: On the second bullet towards the end of the section, replace "filter_parameters" with "expression_parameters". In the last paragraph just above section 2.1.2.3.3.1, replace "filter_parameters" with "expression_parameters"
Section 2.1.2.3.3.3 get_expression_parameters: In the first paragraph, replace "filter_parameters" with "expression_parameters"
Section 2.1.2.3.3.4 set_expression_parameters: In the first paragraph, replace "filter_parameters" with "expression_parameters"
Section 2.2.3 DCPS PSM : IDL interface DomainParticipant: On the operation create_contentfilteredtopic, replace formal parameter name "filter_parameters" with "expression_parameters"
Incorrect prototype for the FooDataWriter method register_instance_w_timestamp() in the PSM
Summary: The handle is incorrectly listed as a parameter when it is already the return value.
Proposed Resolution: Remove the incorrect handle parameter.
Proposed Revised Text:
Section 2.2.3 DCPS PSM : IDL interface FooDataWriter
On register_instance_w_timestamp, remove the parameter "in DDS::InstanceHandle_t handle,". The resulting operation is:
DDS::InstanceHandle_t register_instance_w_timestamp(in Foo instance_data, in DDS::Time_t source_timestamp);
In the third paragraph of Section 2.1.3, it is stated that "some QosPolicy values may not be compatible with other ones". In this context we are really talking about the consistency of related QosPolicies as compatibility is already a concept concerning requested/offered semantics. Proposed Resolution: Reword the sentence to use the term "consistency" which is already used later in the paragraph. Proposed Revised Text: Section 2.1.3 Supported QoS 3rd paragraph Replace "compatible" with "consistent" in the sentence: "Some QosPolicy values may not be compatible with other ones." Resulting in: "Some QosPolicy values may not be consistent with other ones."
Summary: In Section 2.1.3.7 concerning the DEADLINE QoS, it is stated that if the QoS is set inconsistently, i.e. the period is less than the minimum_separation of the TIME_BASED_FILTER QoS, the INCONSISTENT_POLICY status will change and any associated Listeners/WaitSets will be triggered. There is no such status. Instead, the set_qos() operation will fail with return code INCONSISTENT_POLICY.
Proposed Resolution: Mention the return code instead.
Proposed Revised Text:
Section 2.1.3.7 DEADLINE
Remove the last sentence in the section: "An attempt to set these policies in an inconsistent manner will cause the INCONSISTENT_POLICY status to change and any associated Listeners/WaitSets to be triggered."
Summary: In Section 2.1.3.11 (LIVELINESS QoS) the second condition for compatibility uses "=<" for less than or equal to, where "<=" is correct. Also, the last paragraph states "equal or greater to" where "greater or equal to" is more readable. In the next-to-last paragraph of Section 2.1.3.14 (RELIABILITY QoS) there is a typo where "change form a newer value" should be "change from a newer value". In Section 2.1.3.22 (READER_DATA_LIFECYCLE QoS) the last two paragraphs mention how "view_state becomes NOT_ALIVE_xxx" where it should be the "instance_state".
Proposed Resolution: Make the aforementioned changes.
Proposed Revised Text:
Section 2.1.3.11 LIVELINESS; second bullet in the enumeration near the end of the section:
Replace "offered lease_duration =< requested lease_duration"
With "offered lease_duration <= requested lease_duration"
Section 2.1.3.11 LIVELINESS; last paragraph, replace:
"Service with a time-granularity equal or greater to the lease_duration."
With: "Service with a time-granularity greater or equal to the lease_duration."
Section 2.1.3.14 RELIABILITY; next-to-last paragraph, replace:
"change form a newer value"
With: "change from a newer value"
Section 2.1.3.22 READER_DATA_LIFECYCLE; next-to-last paragraph:
Replace "view_state" with "instance_state" in: "maintain information regarding an instance once its view_state becomes NOT_ALIVE_NO_WRITERS."
Section 2.1.3.22 READER_DATA_LIFECYCLE; last paragraph:
Replace "view_state" with "instance_state" in: "maintain information regarding an instance once its view_state becomes NOT_ALIVE_DISPOSED."
Summary: In Section 2.1.2.4.1.10 (begin_coherent_changes) there is a typo in the last sentence of the section where "if may be useful" should be "it may be useful". In the second paragraph of Section 2.1.2.2.2.4 (lookup_participant) there is a typo where "multiple DomainParticipant" should be "multiple DomainParticipant entities".
Proposed Resolution: Make the suggested corrections.
Proposed Revised Text:
Section 2.1.2.4.1.10 begin_coherent_changes; last sentence, replace "if may be useful" with "it may be useful"
Section 2.1.2.2.2.4 lookup_participant; second paragraph, replace "If multiple DomainParticipant belonging" with "If multiple DomainParticipant entities belonging"
The first reported typo ("noe") is no longer present; it was already fixed in the last revision. Otherwise, make the suggested corrections.
Summary: In Section 2.1.3.9, in the paragraph dealing with multiple same-strength writers, the next-to-last sentence describes that the owner must remain the same until one of several conditions is met. The condition where "a new DataWriter with the same strength that should be deemed the owner according to the policy of the Service" modifies the instance should be explicitly mentioned, although it may have been implied.
Proposed Resolution: Add explicit mention of the additional condition above.
Proposed Revised Text:
Section 2.1.3.9.2 EXCLUSIVE kind; 5th paragraph, replace the sentence:
It is also required that the owner remains the same until there is a change in strength, liveliness, the owner misses a deadline on the instance, or a new DataWriter with higher strength modifies the instance.
With:
It is also required that the owner remains the same until there is a change in strength, liveliness, the owner misses a deadline on the instance, a new DataWriter with higher strength modifies the instance, or a new DataWriter with the same strength that is deemed by the Service to be the owner modifies the instance.
Currently it is stated that write() and dispose() may block and return TIMEOUT when the RELIABILITY QoS kind is set to RELIABLE and any of the RESOURCE_LIMITS QoS limits is hit. We should reconsider the action taken when it is the instance resource limit that is hit. If instance resources are kept around until they are unregistered (not even yet considering how the RELIABILITY or DURABILITY QoS affects this), then it seems awkward to block when the user is required to take action. Perhaps returning immediately with OUT_OF_RESOURCES makes more sense in this situation.
Proposed Resolution: When the writer is out of instance resources because all max_instances have been registered or written, the write()/dispose() call will return OUT_OF_RESOURCES instead of blocking, if this can be detected.
Proposed Revised Text:
Section 2.1.2.4.2.11 write; above the paragraph starting with "In case the provided handle is valid", add the paragraph:
Instead of blocking, the write operation is allowed to return immediately with the error code OUT_OF_RESOURCES provided the following two conditions are met:
1. The reason for blocking would be that the RESOURCE_LIMITS are exceeded.
2. The service determines that waiting the max_blocking_time has no chance of freeing the necessary resources. For example, if the only way to gain the necessary resources would be for the user to unregister an instance.
Section 2.1.2.4.2.12 write_w_timestamp; after the paragraph "This operation may block…", add the paragraph:
This operation may return OUT_OF_RESOURCES under the same circumstances described for the write operation (Section 2.1.2.4.2.11).
Section 2.1.2.4.2.13 dispose; after the paragraph "This operation may block…", add the paragraph:
This operation may return OUT_OF_RESOURCES under the same circumstances described for the write operation (Section 2.1.2.4.2.11).
Section 2.1.2.4.2.14 dispose_w_timestamp; after the paragraph "This operation may block…", add the paragraph:
This operation may return OUT_OF_RESOURCES under the same circumstances described for the write operation (Section 2.1.2.4.2.11).
Section 2.1.2.4.2.5 register; replace the paragraph:
This operation may block if the RELIABILITY kind is set to RELIABLE and the modification would cause data to be lost or else cause one of the limits specified in the RESOURCE_LIMITS to be exceeded. Under these circumstances, the RELIABILITY max_blocking_time configures the maximum time the write operation may block (waiting for space to become available). If max_blocking_time elapses before the DataWriter is able to store the modification without exceeding the limits, the operation will fail and return TIMEOUT.
With:
This operation may block and return TIMEOUT under the same circumstances described for the write operation (Section 2.1.2.4.2.11). This operation may return OUT_OF_RESOURCES under the same circumstances described for the write operation (Section 2.1.2.4.2.11).
Section 2.1.2.4.2.6 register_w_timestamp; replace the paragraph:
This operation may block and return TIMEOUT under the same circumstances described for the register_instance operation (Section 2.1.2.4.2.5).
With:
This operation may block and return TIMEOUT under the same circumstances described for the write operation (Section 2.1.2.4.2.11). This operation may return OUT_OF_RESOURCES under the same circumstances described for the write operation (Section 2.1.2.4.2.11).
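From the application's perspective, the two failure modes would then be distinguishable; a sketch assuming a C++ binding generated from the IDL PSM:
    DDS::ReturnCode_t rc = writer->write(sample, DDS::HANDLE_NIL);
    if (rc == DDS::RETCODE_TIMEOUT) {
        // Blocked for max_blocking_time; resources may yet free up, so a
        // retry can be sensible.
    } else if (rc == DDS::RETCODE_OUT_OF_RESOURCES) {
        // Waiting cannot help, e.g. all max_instances are in use; the
        // application must unregister an instance before retrying.
    }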
For XXX = participant, topic, publisher, subscriber, and datareader, the specification states "in the case where the QoS policies are not explicitly specified". For XXX = datawriter, the specification states "in the case where the QoS policies are defaulted". The latter is technically more correct.
Proposed Resolution: Use the wording in set_default_datawriter_qos().
Proposed Revised Text: In the first paragraph of each of the following sections, replace "in the case where the QoS policies are not explicitly specified" with "in the case where the QoS policies are defaulted":
Section 2.1.2.2.1.20 set_default_publisher_qos
Section 2.1.2.2.1.21 get_default_publisher_qos
Section 2.1.2.2.1.22 set_default_subscriber_qos
Section 2.1.2.2.1.23 get_default_subscriber_qos
Section 2.1.2.2.1.24 set_default_topic_qos
Section 2.1.2.2.1.25 get_default_topic_qos
Section 2.1.2.2.2.5 set_default_participant_qos
Section 2.1.2.2.2.6 get_default_participant_qos
Section 2.1.2.4.1.16 get_default_datawriter_qos
Section 2.1.2.5.2.15 set_default_datareader_qos
Section 2.1.2.5.2.16 get_default_datareader_qos
For better naming consistency with other statuses, PUBLICATION_MATCH_STATUS and SUBSCRIPTION_MATCH_STATUS should be renamed to PUBLICATION_MATCHED_STATUS and SUBSCRIPTION_MATCHED_STATUS. Likewise, the get_publication_match_status and get_subscription_match_status operations should be renamed to get_publication_matched_status and get_subscription_matched_status. In addition, the callback is already named on_XXX_matched.
Proposed Resolution: Rename PUBLICATION_MATCH_STATUS to PUBLICATION_MATCHED_STATUS and SUBSCRIPTION_MATCH_STATUS to SUBSCRIPTION_MATCHED_STATUS.
Proposed Revised Text:
Section 2.1.2.4 Publication Module, Figure 2-9, DataWriter class: rename get_publication_match_status() to get_publication_matched_status()
Section 2.1.2.4.2 DataWriter Class, DataWriter class table: rename get_publication_match_status() to get_publication_matched_status()
Section 2.1.2.4.2.19 get_publication_match_status: rename the section heading to 2.1.2.4.2.19 get_publication_matched_status; replace "allows access to the PUBLICATION_MATCH_QOS" with "allows access to the PUBLICATION_MATCHED communication status"
Section 2.1.2.5 Subscription Module, Figure 2-9, DataReader class: rename get_subscription_match_status() to get_subscription_matched_status()
Section 2.1.2.5.3 DataReader Class, DataReader class table: rename get_subscription_match_status() to get_subscription_matched_status()
Section 2.1.2.5.3.25 get_subscription_match_status: rename the section heading to 2.1.2.5.3.25 get_subscription_matched_status; rename "SUBSCRIPTION_MATCH_STATUS" to "SUBSCRIPTION_MATCHED_STATUS"
Section 2.1.4.4 Conditions and Wait-sets, Figure 2-19, DataReader class: rename get_publication_match_status() to get_publication_matched_status()
Section 2.1.4.1 Communication Status; in the communication status table, replace PUBLICATION_MATCH with PUBLICATION_MATCHED and SUBSCRIPTION_MATCH with SUBSCRIPTION_MATCHED
Section 2.2.3 DCPS PSM : IDL; Status constants, replace:
const StatusKind PUBLICATION_MATCH_STATUS = 0x0001 << 13;
const StatusKind SUBSCRIPTION_MATCH_STATUS = 0x0001 << 14;
With:
const StatusKind PUBLICATION_MATCHED_STATUS = 0x0001 << 13;
const StatusKind SUBSCRIPTION_MATCHED_STATUS = 0x0001 << 14;
interface DataWriter; replace:
PublicationMatchedStatus get_publication_match_status();
With:
PublicationMatchedStatus get_publication_matched_status();
interface DataReader; replace:
SubscriptionMatchedStatus get_subscription_match_status();
With:
SubscriptionMatchedStatus get_subscription_matched_status();
Should delete_contained_entities() on the Subscriber (and even the DataReader or DomainParticipant) be allowed to return PRECONDITION_NOT_MET?
Summary: As described in Section 2.1.2.5.2.6, delete_datareader() can return PRECONDITION_NOT_MET if there are any outstanding loans. In a similar fashion, should we allow delete_contained_entities() on the Subscriber (and even the DataReader or DomainParticipant, for that matter) to also return PRECONDITION_NOT_MET in this situation?
Proposed Resolution: Return PRECONDITION_NOT_MET when delete_contained_entities() is called on the DataReader, Subscriber, or DomainParticipant while a DataReader has outstanding loans.
Proposed Revised Text:
Section 2.1.2.2.1.18 delete_contained_entities; before the paragraph that starts with "Once delete_contained_entities returns successfully,", add the paragraph:
The operation will return PRECONDITION_NOT_MET if any of the contained entities is not in a state where it can be deleted.
Section 2.1.2.4.1.14 delete_contained_entities; before the paragraph that starts with "Once delete_contained_entities returns successfully,", add the paragraph:
The operation will return PRECONDITION_NOT_MET if any of the contained entities is not in a state where it can be deleted.
Section 2.1.2.5.2.14 delete_contained_entities; before the paragraph that starts with "Once delete_contained_entities returns successfully,", add the paragraph:
The operation will return PRECONDITION_NOT_MET if any of the contained entities is not in a state where it can be deleted. This will occur, for example, if a contained DataReader cannot be deleted because the application has called a read or take operation and has not called the corresponding return_loan operation to return the loaned samples.
Section 2.1.2.5.3.30 delete_contained_entities; before the paragraph that starts with "Once delete_contained_entities returns successfully,", add the paragraph:
The operation will return PRECONDITION_NOT_MET if any of the contained entities is not in a state where it can be deleted.
In get_matched_subscription_data, we return PRECONDITION_NOT_MET in this situation. However, in get_matched_publication_data, we return BAD_PARAMETER. Previously, they both returned PRECONDITION_NOT_MET. In addition, in both sections the sentence "The operation get_matched_XXXs to find the XXXs that are currently matched" should probably read "can be used to find".
Proposed Resolution: Make the two operations consistent by returning BAD_PARAMETER in both.
Proposed Revised Text:
Section 2.1.2.4.2.23 get_matched_subscription_data; in the first sentence of the second paragraph:
Replace "the operation will fail and return PRECONDITION_NOT_MET."
With "the operation will fail and return BAD_PARAMETER."
The Requested/OfferedIncompatibleQosStatus contains the last_policy_id and we need to set this to something in case no QoS policy has ever been incompatible. Proposed Resolution: Add "const QosPolicyId_t INVALID_QOS_POLICY_ID = 0;" to the PSM. Proposed Revised Text: Section 2.2.3 DCPS PSM : IDL In the Qos section add the following to the list of QosPolicyId_t: const QosPolicyId_t INVALID_QOS_POLICY_ID = 0;
In Section 2.1.2.4.2.11 the write() operation will return PRECONDITION_NOT_MET if the handle is "valid but does not correspond to the given instance". Further, it goes on to state that "in the case the handle is invalid, the behavior is in general unspecified, but if detectable by a DDS implementation, the returned error-code will be 'BAD_PARAMETER'." We should clarify what is "valid" versus "invalid": valid means the handle corresponds to a registered instance.
Proposed Resolution: Clarify that valid means the handle corresponds to a registered instance. Whether a valid handle that does not correspond to the given instance is detected should be up to the implementation.
Proposed Revised Text:
Section 2.1.2.4.2.11 write
Remove the last paragraph that reads "In case the provided handle is valid…"
Add a new paragraph directly following the one that reads "If handle is any value other than HANDLE_NIL…" as follows:
In case the provided handle is valid, i.e., corresponds to an existing instance, but does not correspond to the same instance referred to by the 'data' parameter, the behavior is in general unspecified, but if detectable by the Service implementation, the returned error-code will be 'PRECONDITION_NOT_MET'. In case the handle is invalid, the behavior is in general unspecified, but if detectable the returned error-code will be 'BAD_PARAMETER'.
Section 2.1.2.4.2.13 dispose
Replace the next-to-last paragraph that reads "In case the provided handle is valid…" with the same paragraph given above:
In case the provided handle is valid, i.e., corresponds to an existing instance, but does not correspond to the same instance referred to by the 'data' parameter, the behavior is in general unspecified, but if detectable by the Service implementation, the returned error-code will be 'PRECONDITION_NOT_MET'. In case the handle is invalid, the behavior is in general unspecified, but if detectable the returned error-code will be 'BAD_PARAMETER'.
In Section 2.1.2.4.2.14 (dispose_w_timestamp) it states that the operation will return PRECONDITION_NOT_MET if called on an instance that has not yet been registered. This is not true, as the operation will implicitly register the instance just as write does. This restriction was also originally in 2.1.2.4.2.13 (dispose) but has already been removed.
Proposed Resolution: Remove the offending paragraph.
Proposed Revised Text:
Section 2.1.2.4.2.14 dispose_w_timestamp
Remove the last two paragraphs, that is, the text starting from "The operation must be only called on registered instances." to the end of the section.
In addition, specify the behavior when an invalid instance_handle is passed to the dispose_w_timestamp, write_w_timestamp, and unregister_instance operations; the behavior is the same as that specified for write and dispose. Also, align the explanation given for passing an 'invalid' handle to the operation "unregister" with the explanation in the other sections.
In Section 2.1.2.4.2.13 (dispose) it states "in case the DURABILITY QoS policy is TRANSIENT or PERSISTENT, the Service should take care to clean anything related to that instance so that late-joining applications would not see it". Is this really necessary? Is it not acceptable to allow late-joining readers to see an instance with the NOT_ALIVE_DISPOSED instance state? Does this also apply to TRANSIENT_LOCAL? We think disposed instances should be propagated to newly discovered applications; otherwise there would be no way to enforce ownership of a disposed instance. Furthermore, the application should be notified of disposed instances even if this is the first time the middleware sees the instance, because in practice there is no way for the middleware to tell whether the application has already seen the instance. For example, following a network partition the middleware may have notified NOT_ALIVE_NO_WRITERS and, after the application took all the samples, it may have reclaimed the information on that instance; when it sees the instance again it believes it is the first time, while the application could still have information on it. So the use case where a newly joining reader wants not to receive instances that were disposed before it joined should be handled on the writer side, by either explicitly unregistering the instances or having some new QoS that auto-unregisters disposed instances. Another issue is whether the act of disposing on the writer side should automatically remove previous samples for that instance, and whether that is done for particular values of the HISTORY QoS (e.g. only when it is KEEP_LAST, or KEEP_LAST with depth == 1, or even for KEEP_ALL). It seems the control of this should be another QoS on the WRITER_DATA_LIFECYCLE.
Proposed Resolution: For now, eliminate the following text from Section 2.1.2.4.2.13 (dispose): "In case the DURABILITY QoS policy is TRANSIENT or PERSISTENT, the Service should take care to clean anything related to that instance so that late-joining applications would not see it".
Proposed Revised Text:
Section 2.1.2.4.2.13 dispose
Remove the paragraph:
In addition, in case the DURABILITY QoS policy is TRANSIENT or PERSISTENT, the Service should take care to clean anything related to that instance so that late-joining applications would not see it.
In Section 2.1.2.5.2.17 there is a typo in the last paragraph where "datawriter_qos" should be "datareader_qos". Proposed Resolution: Correct the typo. Proposed Revised Text: Section 2.1.2.5.2.17 copy_from_topic_qos Replace "datawriter_qos" with "datareader_qos" in the first sentence of the last paragraph that currently reads "This operation does not check the resulting datawriter_qos for consistency".
In the PSM, for get_discovered_topic_data() and get_discovered_participant_data() on the DomainParticipant, the data parameter should come first, followed by the handle. The order is correct in the PIM.
Proposed Resolution: Make the suggested modifications.
Proposed Revised Text:
Section 2.2.3 DCPS PSM : IDL; in the DomainParticipant interface:
Change the order of the parameters to get_discovered_participant_data from "in InstanceHandle_t participant_handle, inout ParticipantBuiltinTopicData participant_data" to "inout ParticipantBuiltinTopicData participant_data, in InstanceHandle_t participant_handle".
Change the order of the parameters to get_discovered_topic_data from "in InstanceHandle_t topic_handle, inout TopicBuiltinTopicData topic_data" to "inout TopicBuiltinTopicData topic_data, in InstanceHandle_t topic_handle".
In Section 2.1.2.2.1.28 there is a typo in the next to last paragraph where "get_matched_participants" should be "get_discovered_participants". Proposed Resolution: Correct the typo. Proposed Revised Text: Section 2.1.2.2.1.28 get_discovered_participant_data In the next to last paragraph replace "get_matched_participants" with "get_discovered_participants" where it currently reads "Use the operation get_matched_participants to find ".
Currently TIMEOUT is not a specified valid return code for the wait() operation. The specification explicitly states that timeout is conveyed by returning OK with an empty list of conditions. We should consider adding TIMEOUT as an explicit valid return value. Proposed Resolution: Add TIMEOUT as a valid return code to wait(). Proposed Revised Text: Section 2.1.2.1.6.3 wait In the next to last paragraph, replace "If the duration is exceeded, wait will also return with the return code OK. In this case, the resulting list of conditions will be empty." With "If the duration is exceeded, wait will return with return code TIMEOUT."
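For reference, the signature of the affected operation in the IDL PSM is unchanged by this resolution; only the documented return codes change, as sketched below:
// WaitSet::wait
ReturnCode_t wait(
    inout ConditionSeq active_conditions, // left empty on timeout
    in Duration_t timeout);               // now returns RETCODE_TIMEOUT when exceeded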
In Section 2.1.4.4.2 (Trigger State of the ReadCondition) the last paragraph describes an example. However, the example is not quite accurate, because reading samples belonging to the latest generation will cause the view_state to become NOT_NEW.
For the purposes of the example, it may not be necessary to specify the view_state at all: it is not relevant to the desired condition (triggering when a new sample arrives), given that all other samples were previously at least read.
Proposed Resolution:
Remove mention of the view_state.
Proposed Revised Text:
Section 2.1.4.4.2 Trigger State of the ReadCondition
In the last paragraph, change the sentence from
"A ReadCondition that has a sample_state_mask = {NOT_READ}, view_state_mask = {NEW} will have trigger_value of TRUE whenever a new sample arrives and will transition to FALSE as soon as all the NEW samples are either read or taken."
To
"A ReadCondition that has a sample_state_mask = {NOT_READ} will have trigger_value of TRUE whenever a new sample arrives and will transition to FALSE as soon as all the new samples are either read or taken. "
Section 2.1.4.4.2 Trigger State of the ReadCondition
In that last paragraph change the last sentence from
"that would only change the SampleState to READ but the sample would still have (SampleState, ViewState) = (READ, NEW) which overlaps the mask on the ReadCondition".
To
"that would only change the SampleState to READ which still overlaps the mask on the ReadCondition".
The following literals are defined:
DURATION_INFINITY_SEC
DURATION_INFINITY_NSEC
TIMESTAMP_INVALID_SEC
TIMESTAMP_INVALID_NSEC
These are incorrectly named and should be:
DURATION_INFINITE_SEC
DURATION_INFINITE_NSEC
TIME_INVALID_SEC
TIME_INVALID_NSEC
Proposed Resolution: Add the correct names.
Proposed Revised Text:
Section 2.2.3 DCPS PSM : IDL
Replace:
const long DURATION_INFINITY_SEC = 0x7fffffff;
const unsigned long DURATION_INFINITY_NSEC = 0x7fffffff;
const long TIMESTAMP_INVALID_SEC = -1;
const unsigned long TIMESTAMP_INVALID_NSEC = 0xffffffff;
With:
const long DURATION_INFINITE_SEC = 0x7fffffff;
const unsigned long DURATION_INFINITE_NSEC = 0x7fffffff;
const long TIME_INVALID_SEC = -1;
const unsigned long TIME_INVALID_NSEC = 0xffffffff;
In Figure 2-19 in Section 2.1.4.4 (Conditions and Wait-sets): There is no such delete_statuscondition() operation on the Entity. The ReadCondition should have a view_state_mask and an instance_state_mask instead of a lifecycle_state_mask. Proposed Resolution: Make the suggested corrections. Proposed Revised Text: Section 2.1.4.4 Conditions and Wait-sets In Figure 2-19 Remove "delete_statuscondition()" from the operations listed on the Entity. Remove "lifecycle_state_mask [*] : ViewStateKind" from the attributes listed on the ReadCondition. Add "view_state_mask [*] : ViewStateKind" and "instance_state_mask [*] : InstanceStateKind" to the end of the attributes listed on the ReadCondition.
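For reference, these mask attributes correspond in the IDL PSM to the following accessor operations on the ReadCondition:
SampleStateMask get_sample_state_mask();
ViewStateMask get_view_state_mask();
InstanceStateMask get_instance_state_mask();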
The purpose of the DLRL has been described as being able “to provide more direct access to the exchanged data, seamlessly integrated with the native-language constructs”. This means that DLRL should offer applications an OO-view on the information model(s) they use. In this view, objects behave in the same way as ordinary, native language objects.
Providing intuitive object access and object navigation should be key benefits of DLRL compared to plain DCPS usage, where instances and their relations need to be resolved manually. Object navigation in DLRL therefore needs to be simple and intuitive, just like navigating between objects in any ordinary native OO language.
It is in this respect that DLRL falls short: object navigation is not simple and intuitive, since it requires intermediate objects (RefRelations and ObjectReferences) that stand between applications and the navigable objects. The purpose of these intermediate objects was to serve as a sort of smart pointer that shields applications from knowledge about the exact location, and even about the existence, of objects (to allow a form of lazy instantiation).
However, since the potential benefits of smart pointer management depend heavily on the underlying target language, the DLRL specification does not address them and only explains the effort that an application must make in the absence of any smart pointer support. This results in the following problems:
The way in which a DLRL implementation provides smart pointer management is not standardized and may differ from vendor to vendor and from language to language.
When smart pointer management is not available, applications will be expected to do a lot of extra relation management, which is beyond what most application programmers should have to deal with.
Proposed Resolution:
Simplify relation management by removing all intermediate relation objects from the API (Reference, Relation, RefRelation, ObjectReference, ListRelation and MapRelation). Navigation of single relations is done by going directly from ObjectRoot to ObjectRoot (simplifying the IDL object model as well). Implementations can still choose to do smart resource management (e.g. lazy instantiation), but they should do so in a fully transparent way, one that is invisible to applications.
This approach also makes the PIM and PSM (which deviated quite a lot from each other with respect to these intermediate relation-like objects) more consistent.
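As an illustration of the intent, a sketch of the generated code for an example application class Track with a single relation to an example class Radar (both are illustrative types, not part of the specification):
valuetype Radar;   // forward declaration, as in Section 3.2.3.2
valuetype Track : DDS::ObjectRoot {
    // single relation: navigate directly to the related object,
    // with no intermediate RefRelation or ObjectReference
    public Radar a_radar;
};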
Proposed Revised Text:
Section 3.1.5.2, 2nd paragraph, 1st sentence: “DLRL classes are linked to other DLRL classes by means of Relation Objects”. This should be replaced with “… by means of relations.”.
Change the Object Diagram of Figure 3.4. (an alternative Object Diagram will be provided).
Change the table immediately following Figure 3.4 by removing the ObjectReference, Reference, Relation, RefRelation, ListRelation, StrMapRelation and IntMapRelation entries from it.
Remove the footnote directly following this table (the one starting with number 1) that says: “The specification does … (lazy instantiation).”
Section 3.1.6.3.2: Remove the sequence of ObjectReference attribute from the CacheAccess table and from the explanation below it. As a replacement, see T_DLRL#2 and T_DLRL#3.
Section 3.1.6.3.2: Remove the deref method from the CacheAccess table and from the explanation below it.
Section 3.1.6.3.3: Remove the sequence of ObjectReference attribute from the Cache table and from the explanation below it. As a replacement, see T_DLRL#2 and T_DLRL#3.
Section 3.1.6.3.3: Remove the deref method from the Cache table and from the explanation below it.
Section 3.2.1.2.1: Remove the following lines from the CacheAccess and Cache interface:
readonly attribute ObjectReferenceSeq refs;
ObjectRoot deref( in ObjectReference ref) raises (NotFound);
Section 3.1.6.3.5: Remove the sequence of ObjectReference attribute from the ObjectHome table, and from the explanation below it.
Section 3.2.1.2.1: Remove the following line from the ObjectHome interface:
readonly attribute ObjectReferenceSeq refs;
Section 3.1.6.3.5: Change the entire explanation of the auto_deref attribute from:
“a boolean that indicates if ObjectReference corresponding to that type should be implicitly instantiated (TRUE) or if this action should be explicitly done by the application when needed by calling a deref operation (auto_deref). As selections act on instantiated objects (see section 3.1.6.3.7 for details on selections), TRUE is a sensible setting when selections are attached to that home.”
to:
“a boolean that indicates whether the state of a DLRL Object should always be loaded into that Object (auto_deref = TRUE) or whether this state will only be loaded after it has been accessed explicitly by the application (auto_deref = FALSE).”
Section 3.1.6.3.5: Change the entire explanation of the deref_all method from:
“ask for the instantiation of all the ObjectReference that are attached to that home, in the Cache (deref_all).”
To:
“ask to load the most recent state of a DLRL Object into that Object for all objects managed by that home (deref_all).”
Section 3.1.6.3.5: Change the entire explanation of the underef_all method from:
“ask for the removal of non-used ObjectRoot that are attached to this home (underef_all).”
To:
“ask to unload all object states from objects that are attached to this home (underef_all).”
Section 3.1.6.3.6: Replace all occurrences of ObjectReference with ObjectRoot in the ObjectListener table. Also remove the second parameter of the on_object_modified method.
Section 3.1.6.3.6: Change the explanation of on_object_created from:
“… this operation is called with the ObjectReference of the newly created object (ref).”
to:
“… this operation is called with the value of the newly created object (the_object).”
Section 3.1.6.3.6: Change the explanation of on_object_modified from:
“This operation is called with the ObjectReference of the modified object (ref) and its old value (old_value); the old value may be NULL.”
To:
“This operation is called with the new value of the modified object (the_object).”
Section 3.1.6.3.6: Change the explanation of on_object_deleted from:
“… this operation is called with the ObjectReference of the newly deleted object (ref).”
To:
“… this operation is called with the value of the newly deleted object (the_object).”
Section 3.1.6.3.10: Replace all occurrences of ObjectReference with ObjectRoot in the SelectionListener table.
Section 3.2.1.2.1: Change in the IDL interfaces for ObjectListener and SelectionListener the following lines from:
local interface ObjectListener {
boolean on_object_created ( in ObjectReference ref );
/****
* will be generated with the proper Foo type
* in the derived FooListener
* boolean on_object_modified ( in ObjectReference ref,
* in ObjectRoot old_value);
****/
boolean on_object_deleted ( in ObjectReference ref );
};
local interface SelectionListener {
/***
* will be generated with the proper Foo type
* in the derived FooSelectionListener
*
void on_object_in ( in ObjectRoot the_object );
void on_object_modified ( in ObjectRoot the_object );
*
***/
void on_object_out ( in ObjectReference the_ref );
};
To:
local interface ObjectListener {
/****
* will be generated with the proper Foo type
* in the derived FooListener
boolean on_object_created ( in ObjectRoot the_object );
boolean on_object_modified ( in ObjectRoot the_object );
boolean on_object_deleted ( in ObjectRoot the_object );
*
****/
};
local interface SelectionListener {
/***
* will be generated with the proper Foo type
* in the derived FooSelectionListener
*
void on_object_in ( in ObjectRoot the_object );
void on_object_modified ( in ObjectRoot the_object );
void on_object_out (in ObjectRoot the_object );
*
***/
};
Section 3.2.1.2.2: Change in the IDL interfaces for FooListener and FooSelectionListener the following lines from:
local interface FooListener: DDS::ObjectListener {
void on_object_modified ( in DDS::ObjectReference ref,
in Foo old_value );
};
local interface FooSelectionListener : DDS::SelectionListener {
void on_object_in ( in Foo the_object );
void on_object_modified ( in Foo the_object );
};
To:
local interface FooListener: DDS::ObjectListener {
boolean on_object_created ( in Foo the_object );
boolean on_object_modified ( in Foo the_object );
boolean on_object_deleted ( in Foo the_object );
};
local interface FooSelectionListener : DDS::SelectionListener {
void on_object_in ( in Foo the_object );
void on_object_modified ( in Foo the_object );
void on_object_out (in Foo the_object );
};
Section 3.1.6.3.13: Remove the ObjectReference attribute from the ObjectRoot table, and from the explanation below it.
Section 3.2.1.2.1: Remove the following line from the IDL in the ObjectRoot:
readonly attribute ObjectReference ref;
Section 3.1.6.3.13: Change the following sentence from:
“In addition, application classes (i.e., inheriting from ObjectRoot), will be generated with a set of methods dedicated to each shared attribute:”
To:
“In addition, application classes (i.e., inheriting from ObjectRoot), will be generated with a set of methods dedicated to each shared attribute (including single- and multi-relation attributes):”
Section 3.1.6.3.14 can be removed (ObjectReference).
Section 3.2.1.2.1: Remove the following lines from the IDL:
/*****************
* ObjectReference
*****************/
struct ObjectReference {
DLRLOid oid;
unsigned long home_index;
};
typedef sequence<ObjectReference> ObjectReferenceSeq;
Section 3.1.6.3.15 can be removed (Reference).
Section 3.1.6.3.20 can be removed (Relation).
Section 3.1.6.3.21 can be removed (RefRelation).
Section 3.1.6.3.22 - Section 3.1.6.3.24 can be removed (ListRelation, IntMapRelation and StrMapRelation).
Section 3.2.1.2.1: Remove the following lines from the IDL:
/********************************
* Value Bases for Relations
*********************************/
valuetype RefRelation {
private ObjectReference m_ref;
boolean is_composition();
void reset();
boolean is_modified ( in ReferenceScope scope );
};
valuetype ListRelation : ListBase {
private ObjectReferenceSeq m_refs;
boolean is_composition();
};
valuetype StrMapRelation : StrMapBase {
struct Item {
string key;
ObjectReference ref;
};
typedef sequence <Item> ItemSeq;
private ItemSeq m_refs;
boolean is_composition();
};
valuetype IntMapRelation : IntMapBase {
struct Item {
long key;
ObjectReference ref;
};
typedef sequence <Item> ItemSeq;
private ItemSeq m_refs;
boolean is_composition();
};
Section 3.2.1.1: 1st paragraph after the numbered list of DLRL entities, remove the following sentence: “(with the exception of ObjectReference, …. , so that it can be embedded)”.
Section 3.2.1.2.2: Change the following lines in IDL from:
valuetype FooStrMap : DDS::StrMapRelation { // StrMap<Foo>
…
valuetype FooIntMap : DDS::IntMapRelation { // IntMap<Foo>
To:
valuetype FooStrMap : DDS::StrMap { // StrMap<Foo>
…
valuetype FooIntMap : DDS::IntMap { // IntMap<Foo>
Section 3.2.2.3.1: Remove the “Ref” value from the allowed list of patterns in the templateDef. The templateDef then changes from:
<!ATTLIST templateDef name CDATA #REQUIRED
pattern (List | StrMap | IntMap | Ref) #REQUIRED
itemType CDATA #REQUIRED>
To (see also Issues T_DLRL#7 and T_DLRL#8):
<!ATTLIST templateDef name CDATA #REQUIRED
pattern (Set | StrMap | IntMap) #REQUIRED
itemType CDATA #REQUIRED>
Section 3.2.2.3.2.3, 2nd bullet: Remove the “Ref” pattern from the list of supported constructs.
Section 3.2.3.2: Replace the forward valuetype declaration for RadarRef with a forward declaration of type Radar, so change from:
valuetype RadarRef // Ref<Radar>
To:
valuetype Radar;
Section 3.2.3.3: Remove the following line from the XML (in both XML examples):
“<templateDef name=“RadarRef”
pattern=“Ref” itemType=“Radar”/>”
Simplify relation management by removing all intermediate relation objects from the API (Reference, Relation, RefRelation, ObjectReference, ListRelation and MapRelation). Navigation of single relations is done by going directly from ObjectRoot to ObjectRoot (simplifying the IDL object model as well). Implementations can still choose to do smart resource management (e.g. lazy instantiation), but they should do so in a fully transparent way, one that is invisible to applications. This approach also makes the PIM and PSM (which deviated quite a lot from each other with respect to these intermediate relation-like objects) more consistent.
Both the CacheAccess and the Cache have some functional overlap. It would be nice if this overlap were migrated to a common generalization (for a good reason, see also Issue T_DLRL#3). Proposed Resolution: Introduce a new class called CacheBase that represents the common functionality. Both the Cache and the CacheAccess inherit from this common base class.
Introduce a new class called CacheBase that represents the common functionality. Both the Cache and the CacheAccess inherit from this common base class.
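A minimal sketch of this generalization in IDL; the attribute names shown here are indicative only:
local interface CacheBase {
    readonly attribute CacheUsage cache_usage;   // READ_ONLY, WRITE_ONLY or READ_WRITE
    readonly attribute ObjectRootSeq objects;    // all objects managed by this CacheBase
};
local interface Cache : CacheBase {
    // Cache-specific operations (update rounds, listeners, ...) remain here
};
local interface CacheAccess : CacheBase {
    // CacheAccess-specific operations (refresh, write, purge, ...) remain here
};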
The DLRL offers two different update modes for its primary Cache: an automatic mode in which object creations, updates and deletions are pushed into the Cache, and a manual mode in which the Cache contents are refreshed on user demand. From the perspective of a Cache user, it is important to find out what has happened to the contents of the Cache during the latest update session. In automatic update mode, Listeners are triggered for each object creation, modification or deletion in the primary Cache. However, when the Cache is in manual update mode, none of these Listeners are triggered and no means exist to examine what has happened during the last update round. The same can be said for the CacheAccess, which does not have an automatic update mode and has no means either to examine the changes that were applied during the last invocation of the “refresh” method. Proposed Resolution: We therefore propose to add some extra methods to the ObjectHome that allow an application to obtain the list of objects that have been created, modified or deleted in the latest update round of a specific CacheBase.
We therefore propose to add some extra methods to the ObjectHome that allow an application to obtain the list of objects that have been created, modified or deleted in the latest update round of a specific CacheBase.
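A possible shape for these additions on the ObjectHome; the operation names are suggestions only:
ObjectRootSeq get_created_objects(in CacheBase source);
ObjectRootSeq get_modified_objects(in CacheBase source);
ObjectRootSeq get_deleted_objects(in CacheBase source);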
The ObjectExtent is a manager for a set of objects. Basically it is a wrapper that offers functions to modify its contents and to create a sub-set based on a user-defined function. The problem with using an Extent is that it overlaps with the get_objects method introduced in issue T_DLRL#3, and that it is not clear whether a new Extent should be allocated each time the user obtains it from the ObjectHome, or whether the existing Extent should be re-used and therefore its contents overwritten with every update. Furthermore, every application can easily write its own code that modifies every element in this sequence (no specialized ObjectModifier is required for that; a simple for-loop can do the trick), and similarly an application can also write code to filter each element and to store matching results in another sequence. Filtering and modifying objects like this is really business logic, and does not have to be part of a middleware specification. Proposed Resolution: Remove the ObjectModifier and ObjectExtent from the specification. This saves two implied interfaces that are not required for most types of applications, and whose functionality can still be provided very well at application level. Replace the extent on the ObjectHome with a sequence of ObjectRoots.
Remove the ObjectModifier and ObjectExtent from the specification. This saves two implied interfaces that are not required for most types of applications, and whose functionality can still be provided very well at application level. Replace the extent on the ObjectHome with a sequence of ObjectRoots.
Summary: The specification states that it is possible to clone an Object from the primary Cache into a CacheAccess, together with its related or contained objects, for a specified navigable depth. (We will refer to such an Object tree as a cloning contract from now on.) However, while the cloning of objects is done at contract level, the deletion of clones is done at individual object level. What should happen to related objects when the top-level object is deleted? Furthermore, it is unclear what the result should be when a relationship from an object A to an object B changes so that A now refers to an object C. Should the next refresh of the CacheAccess only refresh the states of objects A and B, or should object C be added to and object B be removed from the CacheAccess? Proposed Resolution: Formally introduce the concept of a cloning contract into the API to replace all other clone-related methods. Cloning contracts are defined on the CacheAccess and are evaluated when the CacheAccess is refreshed.
Formally introduce the concept of a cloning contract into the API to replace all other clone-related methods. Cloning contracts are defined on the CacheAccess and are evaluated when the CacheAccess is refreshed.
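A sketch of what such a contract could look like in IDL; all names are indicative and the exact shape is left to the revised text:
local interface Contract {
    readonly attribute long depth;                    // navigation depth to clone
    readonly attribute ObjectRoot contracted_object;  // top-level object of the contract
    void set_depth(in long depth);
};
// Corresponding additions to the CacheAccess interface:
// Contract create_contract(in long depth, in ObjectRoot an_object);
// void delete_contract(in Contract a_contract) raises (PreconditionNotMet);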
Object State Transitions of Figure 3-5 and 3-6 should be corrected and simplified. Summary: The state transition diagrams in Figure 3-5 and 3-6 are difficult to understand, and the 2nd diagram of Figure 3-5 is missing. (Instead of this 2nd diagram, the first diagram of Figure 3-6 has wrongly been duplicated here.) Furthermore, since it is difficult to distinguish between primary and secondary Objects and their primary and secondary states, it would be nice if more intuitive names and states could be used instead. Finally, some of the possible conditions in which a state transition can occur are not mentioned in these state transition diagrams, which would require them to become even more complex. Proposed Resolution: Introduce new names for the different states, and try to re-use the same set of states for each diagram. We propose not to speak about primary and secondary objects, but to speak about Cache Objects (located in a Cache) and CacheAccess objects (located in a CacheAccess). Furthermore, we propose not to speak about primary and secondary states, but to speak about a READ state (with respect to incoming modifications) and a WRITE state (with respect to local modifications). Decouple Objects in the Cache from Objects in a CacheAccess; this makes the idea of what a Cache or a CacheAccess represents more understandable. The Cache represents the global Object states as accepted by the system, a READ_ONLY CacheAccess represents a temporary state of a Cache, and a READ_WRITE or WRITE_ONLY CacheAccess represents the state of what the user intends the system to do in the future. Since a Cache then only represents the global state of the system (and not what the user intends to do), it does not have a WRITE state (it will be VOID). A READ_ONLY CacheAccess also has no WRITE state (VOID), but a WRITE_ONLY CacheAccess has no READ state (VOID). A READ_WRITE CacheAccess has both a WRITE and a READ state; the WRITE state represents what the user has modified but not yet committed, and the READ state represents what the system has modified during its last update.
Introduce new names for the different states, and try to re-use the same set of states for each diagram. We propose not to speak about primary and secondary objects, but to speak about Cache Objects (located in a Cache) and CacheAccess objects (located in a CacheAccess). Furthermore, we propose not to speak about primary and secondary states, but to speak about a READ state (with respect to incoming modifications) and a WRITE state (with respect to local modifications). Decouple Objects in the Cache from Objects in a CacheAccess; this makes the idea of what a Cache or a CacheAccess represents more understandable. The Cache represents the global Object states as accepted by the system, a READ_ONLY CacheAccess represents a temporary state of a Cache, and a READ_WRITE or WRITE_ONLY CacheAccess represents the state of what the user intends the system to do in the future. Since a Cache then only represents the global state of the system (and not what the user intends to do), it does not have a WRITE state (it will be VOID). A READ_ONLY CacheAccess also has no WRITE state (VOID), but a WRITE_ONLY CacheAccess has no READ state (VOID). A READ_WRITE CacheAccess has both a WRITE and a READ state; the WRITE state represents what the user has modified but not yet committed, and the READ state represents what the system has modified during its last update. To really decouple Cache Objects from CacheAccess objects, it is even possible to allow an object to be cloned into multiple CacheAccesses, but this issue is up for discussion. Probably every writeable CacheAccess should then have its own DCPS Publisher, so that an object cloned into multiple writeable CacheAccesses will be seen in the DCPS as being owned by two different DataWriters. The OWNERSHIP QoS will then decide how to handle this situation.
Summary: It would be nice to have an iterator for Collection types to be able to iterate through the entire Collection. For Maps there should be iterators for both the keys and the values. Proposed Resolution: Add an abstract Iterator class to the DLRL, which has typed implementations to access the underlying data.
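A hedged sketch of such an abstract Iterator in IDL; typed specializations would be generated per application type, and all names below are indicative:
local interface Iterator {
    boolean has_next();   // more elements available?
    void reset();         // restart iteration at the first element
};
// A generated, typed specialization would add, e.g.:
// Foo next();            // returns the next Foo in the Collection
// For Maps, separate iterators over the keys and over the values would be provided.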
Summary: The Collection definitions are very different between the PIM and the PSM. Proposed Resolution: Use corresponding Collection definitions in PIM and PSM. Make a strict separation in the IDL between typed operations (to be implemented in the typed specializations, but to be mentioned in the untyped parents) and untyped operations (to be implemented in the untyped parents). Also remove methods that have a functional overlap with other methods.
Use corresponding Collection definitions in PIM and PSM. Make a strict separation in the IDL between typed operations (to be implemented in the typed specializations, but to be mentioned in the untyped parents) and untyped operations (to be implemented in the untyped parents). Also remove methods that have a functional overlap with other methods. Change the Object Diagram of Figure 3.4 accordingly (an alternative Object Diagram will be provided).
Summary: In many applications there is a need for an unordered Collection without keys. Proposed Resolution: Add the Set as a supported Collection type in DLRL.
Add the Set as a supported Collection type in DLRL.
Summary: In the current specification, the ObjectQuery inherits from the ObjectFilter, making it an ObjectFilter as well. That means that performing Queries can no longer be delegated to the DCPS, since the Selection invokes the check_object method on the ObjectFilter for that purpose. Proposed Resolution: Make the ObjectFilter and the ObjectQuery two separate classes with a common parent called SelectionCriterion. A SelectionCriterion can then be attached to a Selection, which will either invoke the check_object method in case of a Filter, or delegate the Query to DCPS in case of a Query.
Make the ObjectFilter and the ObjectQuery two separate classes with a common parent called SelectionCriterion. A SelectionCriterion can then be attached to a Selection, which will either invoke the check_object method in case of a Filter, or delegate the Query to DCPS in case of a Query.
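In IDL, the proposed separation could look as follows (a sketch; the adopted names and signatures may differ):
local interface SelectionCriterion {
};
local interface FilterCriterion : SelectionCriterion {
    // evaluated locally by the Selection
    boolean check_object(in ObjectRoot an_object);
};
local interface QueryCriterion : SelectionCriterion {
    // delegated to the underlying DCPS query machinery
    readonly attribute string expression;
};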
From the current DLRL specification it is not clear how to obtain your initial CacheFactory. Proposed Resolution: Add a static get_instance method to make the CacheFactory a singleton, just like we did for the DomainParticipantFactory in the DCPS.
Add a static get_instance method to make the CacheFactory a singleton, just like we did for the DomainParticipantFactory in the DCPS.
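Since IDL itself has no static operations, the PSM would express this the same way the DCPS does for the DomainParticipantFactory, e.g. (sketch):
local interface CacheFactory {
    // Mapped to a static (class-level) operation in each language binding;
    // always returns the same singleton CacheFactory instance.
    CacheFactory get_instance();
    // existing factory operations (e.g. create_cache) remain unchanged
};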
Summary: According to the current specification, it is possible to interrupt an update round by invoking the disable_updates method in the middle of such an update round. This makes no sense, since it can leave the Cache in an undefined and possibly inconsistent state. The specification also does not explain how to recover from such a state. Proposed Resolution: Make sure that the automatic update mode can never be changed in the middle of an update round. This way, update rounds can never be interrupted and the Cache will always be in a consistent state. This also removes the need for the interrupted and update_round parameters in the callback methods of the CacheListener. Also remove the related_cache parameter from the CacheListener, since it is not needed and is also missing in the IDL.
Make sure that the automatic update mode can never be changed in the middle of an update round. This way, update rounds can never be interrupted and the Cache will always be in a consistent state. This also removes the need for the interrupted and update_round parameters in the callback methods of the CacheListener. Also remove the related_cache parameter from the CacheListener, since it is not needed and is also missing in the IDL.
Summary: It is not clear why we should need a lock/unlock on the Cache when we can turn automatic updates on and off. If an application does not want to be interrupted by incoming updates, it can simply disable the automatic updates and re-enable them afterwards. Proposed Resolution: Remove the lock and unlock methods of the Cache.
Summary: The CacheListener currently supports only two callbacks, to signify the start and end of an update round. However, because listeners are only used in automatic update mode, it is important that the listeners are notified when the DLRL switches between the automatic and manual update modes: the switch does not necessarily originate from the thread that registered the listener, and the fact that updates are enabled or disabled is a major event that should be known to the listeners. Proposed Resolution: Add two methods to the CacheListener interface: one for signalling a switch to automatic update mode, and one for signalling a switch to manual update mode.
Add two methods to the CacheListener interface: one for signalling a switch to automatic update mode, and one for signalling a switch to manual update mode.
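The resulting CacheListener interface would then look approximately like this (the callback names are suggestions):
local interface CacheListener {
    void on_begin_updates();      // start of an update round
    void on_end_updates();        // end of an update round
    void on_updates_enabled();    // switched to automatic update mode
    void on_updates_disabled();   // switched to manual update mode
};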
Summary: The OID currently consists of two numbers: a creator_id and a local_id. The philosophy is that each writer should obtain its own unique creator_id and can then assign a sequence number to each object created with it to obtain unique object identifiers. The specification does not specify how the writers should obtain their unique creator_id. Building a mechanism to distribute unique OIDs requires knowledge about the underlying system characteristics, and this information is only available in DCPS. Proposed Resolution: Make the definition of the OID vendor specific. This allows a vendor to specify its own algorithms to guarantee that each object has a unique identifier. The only location where the application programmer actually has to know the contents of the OID is in the create_object_with_oid method on the ObjectHome. However, we see no use-case for this method and propose to remove it.
Make the definition of the OID vendor specific. This allows a vendor to specify its own algorithms to guarantee that each object has a unique identifier. The only location where the application programmer actually has to know the contents of the OID is in the create_object_with_oid method on the ObjectHome. However, we see no use-case for this method and propose to remove it.
XML mapping file does not allow you to define both the Topic name and the Topic type_name separately. Summary: In the DCPS, there is a clear distinction between a topic name and a topic type (both names must be provided when creating a Topic). However, the DLRL mapping XML only allows us to specify one name attribute, which is called ‘name’. It is unclear whether this name should identify the type name or the topic name. Currently we just have to assume that the topic name and the type name are always chosen to be equal, but that does not have to be the case in a legacy topic model. Proposed Resolution: Add a second (optional) attribute to the mainTopic, extensionTopic, placeTopic and multiPlaceTopic that identifies the type name. If left out, the type name is assumed to be equal to the topic name.
Add a second (optional) attribute to the mainTopic, extensionTopic, placeTopic and multiPlaceTopic that identifies the type name. If left out, the type name is assumed to be equal to the topic name.
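In the XML DTD of the mapping description, this could take the following shape, shown here for mainTopic only; the attribute name typeName is a suggestion:
<!ATTLIST mainTopic name CDATA #REQUIRED
                    typeName CDATA #IMPLIED>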
Summary: Currently there are separate methods to find a specific object based on its OID in the Cache and in a CacheAccess. It would be nice to have one method to search for an Object in any CacheBase. Proposed Resolution: Add a CacheBase parameter to the find_object method and remove the find_object_in_access method.
Add a CacheBase parameter to the find_object method and remove the find_object_in_access method.
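The combined lookup operation on the ObjectHome could then read as follows (a sketch; whether it raises NotFound or returns NULL is to be decided):
ObjectRoot find_object(in DLRLOid oid, in CacheBase source) raises (NotFound);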
Summary: The DLRL PSM specifies a number of Exceptions, but these are not explained in the PIM, and they do not cover the entire range of possible errors. Proposed Resolution: Make an extensive list of all possible Exceptions and explain them in the PIM as well. Add a String message to each exception that can give more details about the context of the exception.
Make an extensive list of all possible Exceptions and explain them in the PIM as well. Add a String message to each exception that can give more details about the context of the exception.
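For example, each DLRL exception would then carry a descriptive message, as sketched below for two representative exceptions:
exception DCPSError { string message; };
exception PreconditionNotMet { string message; };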
Summary: The current Metamodel explains the different BasicTypes that are supported in DLRL. Although in DCPS sequences are supported for all primitive types, the DLRL states that the only sequences that can be supported are sequences of octet. Proposed Resolution: Explicitly state that the DLRL supports sequences of all supported primitive types.
Indicate that in case of manual mapping, key fields of registered objects may not be changed. Summary: When using the DLRL with a pre-defined mapping, key fields of the topic can be mapped to ordinary attributes of a DLRL object. However, changing these attributes on the DLRL object results in a change of identity on DCPS. Proposed Resolution: Do not allow attributes that are mapped to key fields in the underlying Topic to be modified after the DLRL object has been registered. Throw a PreconditionNotMet Exception if this rule is violated.
Do not allow attributes that are mapped to key fields in the underlying Topic to be modified after the DLRL object has been registered. Throw a PreconditionNotMet Exception if this rule is violated.
Summary: There is no (default) constructor specified for the ObjectHome class. Nowhere in the specification is it stated how an ObjectHome should be instantiated and what the default values will be for auto_deref and for the filter expression. Proposed Resolution: Explicitly state that the default constructor should be used to instantiate an ObjectHome. Also state that by default the value of auto_deref will be set to TRUE, and the filter expression will be set to NULL. Setting auto_deref to TRUE by default ensures that the application developer has to make the conscious decision to set auto_deref to FALSE for performance gain, which is more natural than the other way around.
Explicitly state that the default constructor should be used to instantiate an ObjectHome. Also state that by default the value of auto_deref will be set to TRUE, and the filter expression will be set to NULL. Setting auto_deref to TRUE by default ensures that the application developer has to make the conscious decision to set auto_deref to FALSE for performance gain, which is more natural than the other way around.
Raise a PreconditionNotMet when changing a filter expression on a registered ObjectHome. Summary: ObjectHome contains a set_filter method to set the filter attribute. This method may only be called before an ObjectHome is registered. However, the only exception that is thrown is the BadParameter exception. We believe this exception does not cover the case where set_filter is called after the ObjectHome has been registered, as "bad parameter" is not a good description of the error that should be generated then. Proposed Resolution: Raise a PreconditionNotMet Exception when the set_filter method is invoked after the ObjectHome has been registered to a Cache.
To clearly distinguish between a FilterCriterion (used with Selections) and a content-filter (used at the ObjectHome), we propose to rename the attribute named "filter" to "content_filter". Furthermore, raise a PreconditionNotMet Exception when the set_content_filter method is invoked after the ObjectHome has been registered to a Cache.
Summary: In section 2.1.2.2.2, the "get_domain_id" method is mentioned in the table, but is not explained in the following sections. Proposed Resolution: Add a section that explains the "get_domain_id" method. Proposed Revised Text: Replace section 2.1.2.2.1.26 with the following one: 2.1.2.2.1.26 get_domain_id This operation retrieves the domain_id used to create the DomainParticipant. The domain_id identifies the Domain to which the DomainParticipant belongs. As described in the introduction to Section 2.1.2.2.1, each Domain represents a separate data "communication plane" isolated from other domains.
The PIM and PSM contradict each other with respect to the "get_sample_lost_status" operation. Summary: According to the PIM in section 2.1.2.5.2(.12), the Subscriber class has an operation called "get_sample_lost_status". According to the PSM in section 2.2.3, this operation is not part of the Subscriber but of the DataReader. Proposed Resolution: Move the "get_sample_lost_status" operation in the PIM to the DataReader as well. RTI: We propose removing this from the Subscriber altogether and moving it to the DataReader. Proposed Revised Text: In the Subscriber table in section 2.1.2.5.2 Subscriber Class, remove the entry on the operation get_sample_lost_status(). In the DataReader table in section 2.1.2.5.3 DataReader Class, add the entry on the get_sample_lost_status() operation that was removed from the Subscriber class. Add section 2.1.2.5.3.24 (the previous 2.1.2.5.3.24 becomes 2.1.2.5.3.25): 2.1.2.5.3.24 get_sample_lost_status This operation allows access to the SAMPLE_LOST_STATUS communication status. Communication statuses are described in Section 2.1.4.1, "Communication Status," on page 2-125.
Summary: In section 2.1.2.4.1.17, the explanation for the "copy_from_topic_qos" operation mentions two parameters called "topic_qos" and "datawriter_qos_list". Neither parameter name exists. In the PSM (section 2.2.3) the first two parameters for all "read()" and "take()" methods (and their variants) are consistently called "received_data" and "sample_infos". In the DataReader PIM in section 2.1.2.5.3, these same names are only used for the "read()" and "take()" methods; all their variants have a first parameter called "data_values". The FooDataReader PIM has the same issue, but even uses the name "data_values" for the read() and take() methods themselves. Proposed Resolution: Replace "topic_qos" with "a_topic_qos" and "datawriter_qos_list" with "a_datawriter_qos". Consistently use the parameter name "received_data" in both the PIM and the PSM. We propose we either ignore the second change regarding 'data_values' or change it the other way around (from received_data to data_values); this impacts the specification less, as there are a lot of places that would be affected by a change from "data_values" to "received_data". Proposed Revised Text: Section 2.1.2.4.1.17 copy_from_topic_qos: 1st paragraph, replace "topic_qos" with "a_topic_qos"; 1st, 2nd, and 3rd paragraph, replace "datawriter_qos_list" with "a_datawriter_qos". Section 2.2.3: replace the formal parameter name "received_data" with "data_value" or "data_values", depending on whether the type is a sequence or not. This affects DataReader::take*, DataReader::read*, FooDataReader::take* and FooDataReader::read*. Section 2.1.2.5.3 DataReader Class table: replace "received_data" with "data_values". This affects the operations: return_loan, take, read. Section 2.2.3 DCPS PSM : IDL: Change the formal parameter of the read/take operations from "received_data" to "data_values". This affects the operations:
Replace "topic_qos" with "a_topic_qos" and "datawriter_qos_list" with "a_datawriter_qos". Change it the other way around (from received_data to data_values). This impacts the specification less. There are a lot of places that would be affected by the change to "received_data" from "data_values".
Summary: In section 2.1.3.19 it is not clear how to specify unlimited resource limits. (It is mentioned in the QoS table in section 2.1.3 that the default setting for resource_limits is length_unlimited, but in the context of 2.1.3.19 this is not repeated.) Proposed Resolution: Specify in Section 2.1.3.19 that the constant LENGTH_UNLIMITED must be used to specify unlimited resource limits. Proposed Revised Text: In section 2.1.3.19, add the following paragraph before the last paragraph in the section (the one that starts with "The setting of RESOURCE_LIMITS …"): The constant LENGTH_UNLIMITED may be used to indicate the absence of a particular limit. For example, setting max_samples_per_instance to LENGTH_UNLIMITED will cause the middleware to not enforce this particular limit.
Specify in Section 2.1.3.19 that the constant LENGTH_UNLIMITED must be used to specify unlimited resource limits.
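For reference, the constant is already defined in the IDL PSM of Section 2.2.3; the revised text only makes its use explicit:
const long LENGTH_UNLIMITED = -1;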
Summary: See also issue R#123 of our previous Issues document (addition of an IllegalOperation error code). This issue has been solved on the PIM level, but the ReturnCode has not been added to the IDL PSM. Proposed Resolution: Add the RETCODE_ILLEGAL_OPERATION ReturnCode to the PSM in section 2.2.3. Proposed Revised Text: Section 2.2.3 DCPS PSM : IDL: after the line "const ReturnCode_t RETCODE_NO_DATA = 11;" add the line: const ReturnCode_t RETCODE_ILLEGAL_OPERATION = 12;
Add the RETCODE_ILLEGAL_OPERATION ReturnCode to the PSM in section 2.2.3.
Summary: In section 2.1.4.2.1, it is explained that a status flag becomes TRUE if a plain communication status changes, and becomes FALSE again each time the application accesses the plain communication status via the proper get_<plain_communication_status> operation. This is not a complete description, since it only covers an explicit call to read the communication status. It is also possible (by attaching a Listener) to implicitly read the status (it is then passed as a parameter to the registered callback method), and afterwards the status flag should be set to FALSE as well. Furthermore, the Status table in section 2.1.4.1 mentions that all total_count_change fields are reset when a Listener callback is performed. The same thing happens when a get_<plain_communication_status> operation is invoked. It would make sense for a Listener callback to behave in a similar way as explicitly reading the plain communication status. Proposed Resolution: Mention explicitly in section 2.1.4.2.1 that a status flag is also set to FALSE when a listener callback for that status has been performed. (We need to think about what consequences this will have for NIL Listeners, which behave like a no-op. Probably they should also reset the flag in that case.) Proposed Revised Text: In section 2.1.4.2.1, after the paragraph: For the plain communication status, the StatusChangedFlag flag is initially set to FALSE. It becomes TRUE whenever the plain communication status changes and it is reset to FALSE each time the application accesses the plain communication status via the proper get_<plain communication status> operation on the Entity. Add the paragraphs: The communication status is also reset to FALSE whenever the associated listener operation is called, as the listener implicitly accesses the status which is passed as a parameter to the operation. The fact that the status is reset prior to calling the listener means that if the application calls the get_<plain communication status> from inside the listener it will see the status already reset. An exception to this rule is when the associated listener is the 'nil' listener. As described in section 2.1.4.3.1, the 'nil' listener is treated as a NOOP and the act of calling the 'nil' listener does not reset the communication status.
Mention explicitly in section 2.1.4.2.1 that a status flag is also set to FALSE when a listener callback for that status has been performed. (We need to think about what consequences this will have for NIL Listeners, which behave like a no-op. Probably they should also reset the flag in that case.)
Summary: In section 2.1.2.2.1 DomainParticipant Class it says: The following operations may be called even if the DomainParticipant is enabled. Other operations will have the value NOT_ENABLED if called on a disabled DomainParticipant. It should say: The following operations may be called even if the DomainParticipant is not enabled. Other operations will return the value NOT_ENABLED if called on a disabled DomainParticipant. Proposed Resolution: Proposed Revised Text: In section 2.1.2.2.1 DomainParticipant Class, paragraph at the end of the section before the bullet points, replace: The following operations may be called even if the DomainParticipant is enabled. Other operations will have the value NOT_ENABLED if called on a disabled DomainParticipant. With: The following operations may be called even if the DomainParticipant is not enabled. Other operations will return the value NOT_ENABLED if called on a disabled DomainParticipant.
Summary: On page 2-70, at the end of section 2.1.2.5.2 (Subscriber Class), the description exempts a list of operations, including delete_datareader, from returning NOT_ENABLED. The operation delete_datareader should be removed from this list. Proposed Resolution: Proposed Revised Text: In section 2.1.2.5.2 Subscriber Class, at the end right before section 2.1.2.5.2.1, replace the paragraph: All operations except for the base-class operations set_qos, get_qos, set_listener, get_listener, enable, get_statuscondition, create_datareader, and delete_datareader may return the value NOT_ENABLED. With: All operations except for the base-class operations set_qos, get_qos, set_listener, get_listener, enable, get_statuscondition, and create_datareader may return the value NOT_ENABLED.
Summary: On page 2-94, section 2.1.2.5.5 (SampleInfo Class), the description of publication_handle states that it identifies locally the DataWriter that modified the instance. Clarify that 'locally' means the instance_handle from the builtin Publication DataReader belonging to the Participant of the DataReader from which the sample is read. Proposed Resolution: Proposed Revised Text: In section 2.1.2.5.5 SampleInfo Class, replace the bullet: the publication_handle that identifies locally the DataWriter that modified the instance. With the bullet: the publication_handle that identifies locally the DataWriter that modified the instance. The publication_handle is the same InstanceHandle_t that is returned by the operation get_matched_publications on the DataReader and can also be used as a parameter to the DataReader operation get_matched_publication_data. In section 2.1.2.5.3.33 get_matched_publications, after the first paragraph add the paragraph: The handles returned in the 'publication_handles' list are the ones that are used by the DDS implementation to locally identify the corresponding matched DataWriter. These handles match the ones that appear in the 'instance_handle' field of the SampleInfo when reading the "DCPSPublications" builtin topic. In the section on get_matched_subscriptions, after the first paragraph add the paragraph: The handles returned in the 'subscription_handles' list are the ones that are used by the DDS implementation to locally identify the corresponding matched DataReader. These handles match the ones that appear in the 'instance_handle' field of the SampleInfo when reading the "DCPSSubscriptions" builtin topic.
Summary:
In the QoS table (section 2.1.3. Supported QoS) the 'concerns' row illegally specifies the DataWriter for the DURABILITY_SERVICE QoS.
Proposed Resolution:
Proposed Revised Text:
Section 2.1.3 Supported QoS, QoS Table:
Entry for DURABILITY_SERVICE QoS, remove the word 'DataWriter' from the 'concerns' column.
The RTF could not come to a resolution on this issue; it was deferred.
Summary: In the QoS table for built-in Subscriber and DataReader objects (Section 2.1.5 Built-in Topics) the value for autopurge_disposed_samples_delay is missing. Proposed Resolution: Proposed Revised Text: In the UML figure in section 2.1.3 Supported QoS, class ReaderDataLifecycleQoS, add the field: autopurge_disposed_samples_delay : Duration_t In section 2.1.5 Built-in Topics, QoS table, READER_DATA_LIFECYCLE row, add: autopurge_disposed_samples_delay = infinite
Summary: In section 2.1.2.4.2.5 register_instance, the description states that if this operation exceeds the max_blocking_time, it will return TIMEOUT. However, this is not possible because the operation does not return a ReturnCode_t value. Proposed Resolution: Proposed Revised Text: Section 2.1.2.4.2.5 register_instance, at the end of the 5th paragraph, replace: If max_blocking_time elapses before the DataWriter is able to store the modification without exceeding the limits, the operation will fail and return TIMEOUT With: If max_blocking_time elapses before the DataWriter is able to store the modification without exceeding the limits, the operation will fail and return HANDLE_NIL
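This is consistent with the PSM, where register_instance returns an instance handle rather than a ReturnCode_t, shown here for the conventional placeholder type Foo:
// FooDataWriter
InstanceHandle_t register_instance(in Foo instance_data);
// On failure (e.g. max_blocking_time exceeded) the operation returns
// HANDLE_NIL instead of a ReturnCode_t such as TIMEOUT.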
Summary: On page 2-65 the second last bullet states The sample_rank indicates the number or samples of the same instance that follow the current one in the collection. The 'or' should be 'of'. Proposed Resolution: Proposed Revised Text: Section 2.1.2.5.1 Access to the data, second to last bullet Replace 'or' with 'of' in the sentence: The sample_rank indicates the number or samples of the same instance that follow the current one in the collection. Resulting in: The sample_rank indicates the number of samples of the same instance that follow the current one in the collection.
Summary:
The instance state is only accessible via sampleInfo and this requires the availability of data.
This implies that the dispose and the no-writers state of an instance may not be noticed if the application has taken all samples.
Subsequent instance state changes are only notified if all samples are taken.
Consequently, it's very hard to receive notifications on disposal of instances.
Because this requires data, applications should use read instead of take.
But take is required for subsequent notifications.
Applications are not notified on arrival of data if they choose not to take all data (read or take not all)
Occasionally an application may need to react to the disposal or the no-writers state of instances (e.g., to clean up allocated resources), and applications may also continuously take all samples to save resources.
In this case a dispose or no-writers state will only be noticed if a new generation appears, which may never happen.
Occasionally applications may want to keep all read samples and still be notified on data arrival.
Applications should be notified whenever new data arrives whether they have taken all previous data samples or not
According to the spec (section 2.1.2.5.3.8) it is possible to get 'meta samples', that is, samples that have a SampleInfo but no associated data; this can be used to notify of disposal, no writers, and so on.
Proposed Resolution:
Always reset the read communication status flag on any read or take operation.
Provide a notification mechanism on the DataReader that specifies the instance handle of the instance whose state has changed.
-> This is managed by the meta-sample mechanism mentioned above
Provide a method on an instance handle to access the instance state.
Modify figure 2-16 and section 2.1.4.2.2 to state that the ReadCommunicationStatus is reset to FALSE whenever the corresponding listener operation is called, or else if a read or take operation is called on the associated DataReader
In addition the ON_DATA_ON_READERS status is reset if the on_data_available is called. The inverse (resetting the ON_DATA_AVAILABLE status when the on_data_on_readers is called) does not happen.
Proposed Revised Text:
Section 2.1.2.5 Subscription Module, Figure 2-10
Add the following field to the SampleInfo class:
valid_data : boolean
Section 2.1.2.5.1 Access to the data (see attached document access_to_the_data2CMP.pdf for the resulting section with changes)
>>After the 2nd paragraph "Each of these" add the section heading:
2.1.2.5.1.1 Interpretation of the SampleInfo
3rd paragraph; add the following bullet after the bullet that starts with "The instance_state of the related instance"
The valid_data flag. This flag indicates whether there is data associated with the sample. Some samples do not contain data; they indicate only a change in the instance_state of the corresponding instance.
>>Before the paragraph that starts with "For each sample received" add the section headings:
2.1.2.5.1.2 Interpretation of the SampleInfo sample_state
>>Before the paragraph that starts with "For each instance the middleware internally maintains" add the section heading:
2.1.2.5.1.3 Interpretation of the SampleInfo instance_state
>>Before the paragraph that starts with "For each instance the middleware internally maintains two counts: the disposed_generation_count and no_writers_generation_count" add the following subsections (2.1.2.5.1.4, and 2.1.2.5.1.5):
2.1.2.5.1.4 Interpretation of the SampleInfo valid_data
Normally each DataSample contains both a SampleInfo and some Data. However, there are situations where a DataSample contains only the SampleInfo and does not have any associated data. This occurs when the Service notifies the application of a change of state for an instance that was caused by some internal mechanism (such as a timeout) for which there is no associated data. An example of this situation is when the Service detects that an instance has no writers and changes the corresponding instance_state to NOT_ALIVE_NO_WRITERS.
The actual set of scenarios under which the middleware returns DataSamples containing no Data is implementation dependent. The application can distinguish whether a particular DataSample has data by examining the value of the valid_data flag. If this flag is set to TRUE, the DataSample contains valid Data; if the flag is set to FALSE, the DataSample contains no Data.
To ensure correctness and portability, the valid_data flag must be examined by the application prior to accessing the Data associated with the DataSample. If the flag is set to FALSE, the application should not access the Data associated with the DataSample; that is, the application should access only the SampleInfo.
2.1.2.5.1.5 Interpretation of the SampleInfo disposed_generation_count and no_writers_generation_count
Before the paragraph that starts with "The sample_rank and generation_rank available in the SampleInfo are computed …" add the section heading:
2.1.2.5.1.6 Interpretation of the SampleInfo sample_rank, generation_rank, and absolute_generation_rank
>>Before the paragraph that starts with "These counters and ranks allow the application to distinguish" add the section heading:
2.1.2.5.1.7 Interpretation of the SampleInfo counters and ranks
>>Before the paragraph that starts with "For each instance (identified by the key), the middleware internally…" add the section heading:
2.1.2.5.1.8 Interpretation of the SampleInfo view_state
>>Before the paragraph that starts with "The application accesses data by means of the operations read or take on the DataReader" add the section heading:
2.1.2.5.1.9 Data access patterns
Section 2.1.2.5.5 Sample Info class
Add another bullet to the list:
The valid_data flag that indicates whether the DataSample contains data, or else is only used to communicate a change in the instance_state of the instance.
Section 2.2.3 DCPS PSM : IDL
struct SampleInfo
Add the following field at the end of the structure:
boolean valid_data;
The resulting structure is:
struct SampleInfo {
SampleStateKind sample_state;
ViewStateKind view_state;
InstanceStateKind instance_state;
Time_t source_timestamp;
InstanceHandle_t instance_handle;
InstanceHandle_t publication_handle;
long disposed_generation_count;
long no_writers_generation_count;
long sample_rank;
long generation_rank;
long absolute_generation_rank;
boolean valid_data;
};
According to the spec (section 2.1.2.5.3.8) it is possible to get 'meta samples', that is, samples that have a SampleInfo but no associated data; this can be used to notify of disposal, no writers, and so on. So this part is not a problem. The above problems can be solved as follows:
· Always reset the read communication status flag on any read or take operation.
· State that meta-samples (samples with no data) are used to provide a notification mechanism on the DataReader that specifies the instance handle of the instance whose state has changed.
Specifically the issue can be resolved by:
1. Modifying figure 2-16 and section 2.1.4.2.2 to state that the ReadCommunicationStatus is reset to FALSE whenever the corresponding listener operation is called, or else if a read or take operation is called on the associated DataReader
2. Changing the description of the ON_DATA_ON_READERS status such that it is reset if the on_data_available is called. The inverse (resetting the ON_DATA_AVAILABLE status when the on_data_on_readers is called) does not happen.
Summary: In section 2.1.3.9.2 EXCLUSIVE kind (the last sentence on page 2-114) the specification states that ownership changes are notified via a status change. However, there is no status change that notifies of ownership change. The only way to detect it is to look at the SampleInfo and see that the publication_handle has changed.
Proposed Resolution: Remove the sentence. We could add the Status, Listener, and Callback, but it seems unnecessary until we see some actual use-cases that require this.
Proposed Revised Text: In section 2.1.3.9.2 EXCLUSIVE kind, last sentence in last paragraph, remove the sentence: "The DataReader is also notified of this via a status change that is accessible by means of the Listener or Condition mechanisms."
Remove the sentence. We could add the Status, Listener, and Callback, but it seems unnecessary until we see some actual use-cases that require this.
Must read/take_next_instance() require that the handle corresponds to a known data-object?
Summary: The sections for read/take_next_instance() and read/take_next_instance_w_condition() state that, if detectable, the implementation should return BAD_PARAMETER in this case; otherwise the situation is unspecified. It might be desirable to allow an invalid handle to be passed in, especially in the case where the user is iterating through instances and takes all samples of an instance that is NOT_ALIVE and has no writers, in which case that action may actually free the instance, "invalidating" its handle.
Proposed Resolution: Allow passing a handle that does not correspond to any instance currently on the DataReader to read_next_instance/take_next_instance. This handle should be sorted in a deterministic way with respect to the other handles such that the iteration is not interrupted.
Proposed Revised Text:
Section 2.1.2.5.3.16 read_next_instance
Replace the paragraph:
This operation implies the existence of some total order 'greater than' relationship between the instance handles. The specifics of this relationship are not important and are implementation specific. The important thing is that, according to the middleware, all instances are ordered relative to each other. This ordering is between the instances, that is, it does not depend on the actual samples received or available. For the purposes of this explanation it is 'as if' each instance handle was represented as a unique integer.
With:
This operation implies the existence of a total order 'greater-than' relationship between the instance handles. The specifics of this relationship are not all important and are implementation specific. The important thing is that, according to the middleware, all instances are ordered relative to each other. This ordering is between the instance handles: it should not depend on the state of the instance (e.g., whether it has data or not) and must be defined even for instance handles that do not correspond to instances currently managed by the DataReader. For the purposes of the ordering it should be 'as if' each instance handle was represented as a unique integer.
Section 2.1.2.5.3.16 read_next_instance
Remove the paragraph:
The behavior of the read_instance operation follows the same rules as the read operation regarding the pre-conditions and post-conditions for the data_values and sample_infos collections. Similar to read, the read_instance operation may 'loan' elements to the output collections which must then be returned by means of return_loan.
Replace the paragraph:
This operation may return BAD_PARAMETER if the InstanceHandle_t a_handle does not correspond to an existing data-object known to the DataReader. If the implementation is not able to check invalid handles, then the result in this situation is unspecified.
With:
Note that it is possible to call the 'read_next_instance' operation with an instance handle that does not correspond to an instance currently managed by the DataReader. This is because, as stated earlier, the 'greater-than' relationship is defined even for handles not managed by the DataReader. One practical situation where this may occur is when an application is iterating through all the instances, takes all the samples of a NOT_ALIVE_NO_WRITERS instance, returns the loan (at which point the instance information may be removed, and thus the handle becomes invalid), and tries to read the next instance.
Section 2.1.2.5.3.17 take_next_instance
Replace the paragraph:
This operation may return BAD_PARAMETER if the InstanceHandle_t a_handle does not correspond to an existing data-object known to the DataReader. If the implementation is not able to check invalid handles, then the result in this situation is unspecified.
With:
Similar to the operation read_next_instance (see Section 2.1.2.5.3.16) it is possible to call take_next_instance with an instance handle that does not correspond to an instance currently managed by the DataReader.
Section 2.1.2.5.3.18 read_next_instance_w_condition
Replace the paragraph:
This operation may return BAD_PARAMETER if the InstanceHandle_t a_handle does not correspond to an existing data-object known to the DataReader. If the implementation is not able to check invalid handles, then the result in this situation is unspecified.
With:
Similar to the operation read_next_instance (see Section 2.1.2.5.3.16) it is possible to call read_next_instance_w_condition with an instance handle that does not correspond to an instance currently managed by the DataReader.
Section 2.1.2.5.3.19 take_next_instance_w_condition
Replace the paragraph:
This operation may return BAD_PARAMETER if the InstanceHandle_t a_handle does not correspond to an existing data-object known to the DataReader. If the implementation is not able to check invalid handles, then the result in this situation is unspecified.
With:
Similar to the operation read_next_instance (see Section 2.1.2.5.3.16) it is possible to call take_next_instance_w_condition with an instance handle that does not correspond to an instance currently managed by the DataReader.
Allow passing a handle that does not correspond to any instance currently on the DataReader to read_next_instance/take_next_instance. This handle should be sorted in a deterministic way with respect to the other handles such that the iteration is not interrupted.
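For illustration, a C++ sketch of the iteration pattern this resolution is meant to keep working, assuming a generated FooDataReader per the IDL PSM (process_instance is a placeholder):

FooSeq data;
DDS::SampleInfoSeq infos;
DDS::InstanceHandle_t handle = DDS::HANDLE_NIL;  // sorts 'below' every handle

while (reader->take_next_instance(data, infos, DDS::LENGTH_UNLIMITED, handle,
                                  DDS::ANY_SAMPLE_STATE,
                                  DDS::ANY_VIEW_STATE,
                                  DDS::ANY_INSTANCE_STATE) == DDS::RETCODE_OK) {
    // Remember the cursor; per this resolution it remains usable even if
    // taking the samples caused the instance itself to be reclaimed.
    handle = infos[0].instance_handle;
    process_instance(data, infos);
    reader->return_loan(data, infos);
}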
Clarification of when an instance resource can be reclaimed in the READER_DATA_LIFECYCLE QoS section
Summary: In Section 2.1.3.22 (READER_DATA_LIFECYCLE QoS) the fourth paragraph mentions how "the DataReader can only reclaim all resources for instances that instance_state = NOT_ALIVE_NO_WRITERS and for which all samples have been 'taken'". This should be corrected to state "for instances for which all samples have been 'taken' and either instance_state = NOT_ALIVE_NO_WRITERS or instance_state = NOT_ALIVE_DISPOSED and there are no 'live' writers". In light of this, the statement in the last paragraph that once the state becomes NOT_ALIVE_DISPOSED and the autopurge_disposed_samples_delay elapses, "the DataReader will purge all internal information regarding the instance; any untaken samples will also be lost" is not entirely true. If there are other 'live' writers, the DataReader will maintain the state on the instance of which DataWriters are writing to it. We should change the "will purge all" to "may purge all" or even "will purge". Alternatively, we could describe in further detail when it "will purge all", i.e., when there are no 'live' writers. The biggest thing here is to decide whether the instance lifecycle can end directly from the NOT_ALIVE_DISPOSED state (as Figure 2-11 currently states) or whether we must force it to go through NOT_ALIVE_NO_WRITERS; that is, in the case where the last writer unregisters a disposed instance, do we transition to NOT_ALIVE_NO_WRITERS+NOT_ALIVE_DISPOSED or do we finish the lifecycle directly without notifying the user (as is indicated now)? We think the current behavior is better because, from the application reader's point of view, the instance does not exist once it is DISPOSED; the fact that we keep the instance state such that we can retain ownership is a detail inside the middleware, so it would be unnatural to get a further indication that the instance (which it no longer knows about) now has no writers. We suggest the proposed changes should reflect this point of view.
Proposed Resolution: Make the suggested corrections:
(1) Correct when readers can reclaim resources to include the NOT_ALIVE_DISPOSED state when there are no live writers. So we always reclaim when there are no writers and all the samples for that instance are taken; these samples will include a sentinel meta-sample with an instance state that will be either NOT_ALIVE_NO_WRITERS or NOT_ALIVE_DISPOSED.
(2) Clarify that autopurge_disposed_samples_delay removes only the samples, but not the instance; the instance will only be removed in the above case.
Proposed Revised Text:
Section 2.1.3.22 READER_DATA_LIFECYCLE QoS
Replace the paragraph:
Under normal circumstances the DataReader can only reclaim all resources for instances that instance_state = NOT_ALIVE_NO_WRITERS and for which all samples have been 'taken.'
With:
Under normal circumstances the DataReader can only reclaim all resources for instances for which there are no writers and for which all samples have been 'taken.' The last sample the DataReader will have taken for that instance will have an instance_state of either NOT_ALIVE_NO_WRITERS or NOT_ALIVE_DISPOSED depending on whether the last writer that had ownership of the instance disposed it or not. Refer to Figure 2-11 for a statechart describing the transitions possible for the instance_state.
In the paragraph starting with "The autopurge_nowriter_samples_delay defines…"
Replace:
once its view_state becomes NOT_ALIVE_NO_WRITERS
With:
once its instance_state becomes NOT_ALIVE_NO_WRITERS
Replace the paragraph:
The autopurge_disposed_samples_delay defines the maximum duration for which the DataReader will maintain information regarding an instance once its view_state becomes NOT_ALIVE_DISPOSED. After this time elapses, the DataReader will purge all internal information regarding the instance; any untaken samples will also be lost.
With:
The autopurge_disposed_samples_delay defines the maximum duration for which the DataReader will maintain samples for an instance once its instance_state becomes NOT_ALIVE_DISPOSED. After this time elapses, the DataReader will purge all samples for the instance.
Make the suggested corrections:
(1) Correct when readers can reclaim resources to include the NOT_ALIVE_DISPOSED state when there are no live writers. So we always reclaim when there are no writers and all the samples for that instance are taken; these samples will include a sentinel meta-sample with an instance state that will be either NOT_ALIVE_NO_WRITERS or NOT_ALIVE_DISPOSED.
(2) Clarify that autopurge_disposed_samples_delay removes only the samples, but not the instance; the instance will only be removed in the above case.
Summary: In Section 2.1.2.5.2.11 (notify_datareaders) the first sentence states: This operation invokes the operation on_data_available on the DataReaderListener objects attached to contained DataReader entities containing samples with SampleState 'NOT_READ' and any ViewState and InstanceState. In Section 2.1.4.2.2 (Changes in Read Communication Statuses) it states in the first paragraph that the "StatusChangedFlag becomes false again when all samples are removed from the responsibility of the middleware via the take operation on the proper DataReader entities". In Figure 2-16 in the same section, the transition from the TRUE state to FALSE is accompanied by the condition "DataReader:take[all data taken by application]". However, in Section 2.1.4.4 (Conditions and Wait-sets) the last step in the general use pattern deals with using the result of the wait operation, and its third sub-bullet states that if the wait unblocked due to a StatusCondition and the status change is DATA_AVAILABLE, the appropriate action is to call read/take on the relevant DataReader. If only a take of all samples will reset the status, then simply calling read in this use pattern will not reset the status and the given general use pattern will actually spin.
Proposed Resolution: The actual condition for the StatusChangedFlag to become false should then be that the status has been considered read/accessed by the user. This should be considered the case when the listener for a Read Communication Status is called, similar to Plain Communication Statuses (see T#6). In addition, it should be the case if the user calls read/take on the associated DataReader. The Subscriber's DATA_ON_READERS status is reset if on_data_on_readers is called (same as for all listeners). In addition, the Subscriber's DATA_ON_READERS status is reset if the user calls read or take on any of the DataReaders belonging to the Subscriber. In addition, the Subscriber's DATA_ON_READERS status is also reset if the on_data_available callback is called on the DataReaderListener. This is needed so that, if the application calls notify_datareaders, it will reset the status. The inverse (i.e., resetting the DATA_AVAILABLE status when the on_data_on_readers callback is called) does not happen.
Proposed Revised Text:
Section 2.1.2.5.2.11 notify_datareaders
In the first sentence, change:
DataReader entities containing samples with SampleState 'NOT_READ' and any ViewState and InstanceState
To:
DataReader entities with a DATA_AVAILABLE status that is considered changed.
Section 2.1.4.2.2 Changes in Read Communication Statuses
Change the last sentence of the first paragraph from:
The StatusChangedFlag becomes false again when all the samples are removed from the responsibility of the middleware via the take operation on the proper DataReader entities.
To:
The DATA_AVAILABLE StatusChangedFlag becomes false again when either the corresponding listener operation (on_data_available) is called or a read or take operation is called on the associated DataReader. The DATA_ON_READERS StatusChangedFlag becomes false again when any of the following occurs:
o The corresponding listener operation (on_data_on_readers) is called.
o The on_data_available listener operation is called on any DataReader belonging to the Subscriber.
o read or take is called on any DataReader belonging to the Subscriber.
In Figure 2-16 introduce two figures, one for DATA_ON_READERS and the other for DATA_AVAILABLE.
The actual condition for the StatusChangedFlag to become false should then be that the status has been considered read/accessed by the user. This should be considered the case when the listener for a Read Communication Status is called, similar to Plain Communication Statuses (see T#6). In addition, it should be the case if the user calls read/take on the associated DataReader. The Subscriber's DATA_ON_READERS status is reset if on_data_on_readers is called (same as for all listeners). In addition, the Subscriber's DATA_ON_READERS status is reset if the user calls read or take on any of the DataReaders belonging to the Subscriber. In addition, the Subscriber's DATA_ON_READERS status is also reset if the on_data_available callback is called on the DataReaderListener. This is needed so that, if the application calls notify_datareaders, it will reset the status. The inverse (i.e., resetting the DATA_AVAILABLE status when the on_data_on_readers callback is called) does not happen.
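For illustration, a C++ sketch of the 2.1.4.4 use pattern that this resolution un-breaks; with read (not only a take of all samples) now resetting DATA_AVAILABLE, the loop no longer spins. WaitSet construction is implementation specific and handle_data is a placeholder:

DDS::StatusCondition_var cond = reader->get_statuscondition();
cond->set_enabled_statuses(DDS::DATA_AVAILABLE_STATUS);

DDS::WaitSet_var waitset = new DDS::WaitSet();  // construction varies per vendor
waitset->attach_condition(cond.in());

DDS::ConditionSeq active;
DDS::Duration_t timeout = {60, 0};
while (waitset->wait(active, timeout) == DDS::RETCODE_OK) {
    FooSeq data;
    DDS::SampleInfoSeq infos;
    if (reader->read(data, infos, DDS::LENGTH_UNLIMITED,
                     DDS::NOT_READ_SAMPLE_STATE,
                     DDS::ANY_VIEW_STATE,
                     DDS::ANY_INSTANCE_STATE) == DDS::RETCODE_OK) {
        handle_data(data, infos);  // DATA_AVAILABLE is now considered reset
        reader->return_loan(data, infos);
    }
}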
Need to clarify what is meant by "RELATED_OBJECTS" -- ObjectRoot has an is_modified method that takes a scope, OBJECT_ONLY, CONTAINED_OBJECTS and RELATED_OBJECTS. Whereas it is clear that OBJECT_ONLY means only attributes on this ObjectRoot and that CONTAINED_OBJECTS includes any changes to objects that this object refers to, it is not clear what RELATED_OBJECTS means.
Need to clarify and/or enumerate allowable (spec compliant) ways to implement ObjectReference[]. The method definitions have return values or parameters specified by ObjectReference[]. Given that this is part of the platform independent model, it would appear that one could approach the implementation of this in C++ in one of two ways -- as std::vector<ObjectReference *> or as an ObjectReference array. It is unclear which would be the preferred spec-compliant way to do this. Alternatively, one could implement this as a language-specific container, vector, list, map or whatever is the best performing container for the given situation, but would that not be in compliance with the spec at all?
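To make the two readings concrete, a hypothetical C++ sketch (ObjectReference and the operation names are placeholders, not spec API):

#include <cstddef>
#include <vector>

class ObjectReference;

// (a) an STL container of references
std::vector<ObjectReference*> get_related_objects_vec();

// (b) a raw array: a pointer to the elements plus an element count
ObjectReference** get_related_objects_arr(std::size_t& count);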
I was looking at the DDS specification and saw that the name of the first parameter for DomainParticipant's create_contentfilteredtopic method is "name" in the table of methods (section 2.1.2.2.1) and it is "topic_name" in the method description (section 2.1.2.2.1.7). I assume the name should be consistent.
I don't think that it is explicitly stated anywhere in this spec that each DataWriter and each DataReader maintains its own set of samples. Samples are not maintained by Publishers or Subscribers. It is implied in several places, e.g., the fact that RESOURCE_LIMITS applies to DataWriter and DataReader and not to Publisher and Subscriber. It took me a while to figure this out. It is possible for a Publisher to have more than one DataWriter with the same topic, and a Subscriber to have more than one DataReader with the same topic. The semantics of DataReader.take, for instance, are unclear unless it is understood that each DataReader has its own samples. Evidently a take operation on one DataReader does not disturb the samples of another DataReader with the same Subscriber and Topic. Please clarify the specification in this regard. Figure 2-1 and accompanying text would be one place.
Context: "The resulting type is specified by the type_name argument." What happens if the specified type is inconsistent with the subscription_expression?
The class description for TopicDescription says "no attributes" but in fact two attributes are listed immediately below.
Context: "A Topic is identified by its name, which must be unique in the whole Domain." There is nothing in the description of create_topic that indicates that this constraint is enforced. Is it possible for multiple domain_participants to execute create_topic with the same name? What happens if they specify different types?
Context: "The application may pass nil as the value for the type_name. In this case the default typename as defined by the TypeSupport (i.e., the value returned by the get_type_name operation) will be used." What happens if register_type is given a type_name that differs from the result of get_type_name?
Context: "For each instance the middleware internally maintains an instance_state." Obviously this instance_state could be different for different DomainParticipants. Might it be different for two Subscribers of the same DomainParticipant? How about two DataReaders of the same Subscriber?
Context: DataReader.read operation Why are sample_states, view_states, and instance_states provided as separate parameters? Aren't they contained in sample_infos? More explanation required.
Context: "The use of this variant allows for zero-copy access to the data and the application will need to “return the loan” to the DataWriter using the return_loan operation (see Section 2.1.2.5.3.20 )." Should be DataReader?
Context: "The act of taking a sample removes it from the DataReader so it cannot be ‘read’ or ‘taken’ again." See my earlier comment. Apparently if there are other DataReaders for the same Topic and the same Subscriber, their samples are not disturbed. Assuming this is true, it should be stated.
Context: "And three integers: history_depth, max_samples, max_instances, max_samples_per_instance" There are four integers here.
Context: "The “persistence service” is the one responsible for implementing the DURABILITY kinds TRANSIENT and PERSISTENCE." You mean PERSISTENT
Context: "In other words, the DataReader may miss some datasamples but it will never see the value of a data-object change from a newer value to an order value." You mean "older".
Context: "If the kind is set to KEEP_ALL, then the Service will attempt to maintain and deliver all the values of the instance to existing subscribers. The resources that the Service can use to keep this history are limited by the settings of the RESOURCE_LIMITS QoS. If the limit is reached, then the behavior of the Service will depend on the RELIABILITY QoS. If the reliability kind is BEST_EFFORT, then the old values will be discarded." This violates the ordinary English meaning of KEEP_ALL. Can't the same effect be achieved by specifying KEEP_LAST with history_depth=max_samples_per_instance? If so, then KEEP_ALL should not be allowed for RELIABILITY=BEST_EFFORT.
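For comparison, a C++ sketch of the explicit configuration the comment suggests would be equivalent, using the standard DataReader QoS fields (the value 16 is an arbitrary bound):

DDS::DataReaderQos qos;
subscriber->get_default_datareader_qos(qos);

qos.reliability.kind = DDS::BEST_EFFORT_RELIABILITY_QOS;
qos.resource_limits.max_samples_per_instance = 16;  // arbitrary bound

// The claimed equivalent of KEEP_ALL under BEST_EFFORT:
qos.history.kind  = DDS::KEEP_LAST_HISTORY_QOS;
qos.history.depth = qos.resource_limits.max_samples_per_instance;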
Context: "However an exiting DCPS model by construction is unlikely to rely heavily on inheritance between its ‘classes.’" You mean "existing".
It would be helpful if the DLRL section's Platform-Independent Model broke out separate interface tables for its generated types, as the DCPS spec does with FooTypeSupport, FooDataReader, FooDataWriter, etc. This would enable the spec to clarify some slightly confusing tables, such as the table for a SelectionListener that shows on_object_in, on_object_out, and on_object_modified accepting an ObjectRoot parameter, when we really know it accepts a Foo. Breaking out a separate FooSelectionListener to demonstrate this would be useful, as it was for the DCPS section.
It looks like the Fig 3.5 and Fig 3.6 diagrams have incorrect captions. I believe they should be corrected as follows: Fig 3.5: "read_state" and "write_state" should be swapped; the VOID diagram refers to read_state. The caption on the right-hand state chart should be "read_state of a *Cache object* in READ_ONLY or READ_WRITE mode". Fig 3.6's first state chart should have a caption that reads "read_state of a CacheAccess object in READ_ONLY or READ_WRITE mode"; the second state chart's caption should read "CacheAccess object", not just "CacheAccess"
The spec is not completely clear on what you can and cannot do with a WRITE_ONLY CacheAccess. Is it legal to clone an object into a WRITE_ONLY CacheAccess, or are we only permitted to create brand new objects (via create_object) in a WRITE_ONLY CacheAccess? On a related note, is a CacheAccess::refresh() for a WRITE_ONLY CacheAccess always a no-op, or should it throw an exception? It seems like it should throw an exception (such as a WriteOnlyMode exception, which doesn't exist) to be consistent with CacheAccess::write, which throws a ReadOnlyMode exception when you call it from a READ_ONLY CacheAccess.
Bi-directional associations involving multi-relations (either 1-to-N or M-to-N) are underspecified. Modifications to one side of a bi-directional relationship are supposed to be automatically reflected in the other side of the relationship (see 3.1.3.2.2), but that is not always possible given the provided Collection interfaces.
The behavior is clear if one of the multi-relations involved is a Set.
A change to one side of the association causes an add/remove on the Set side of the association.
The behavior is interpretable if one of the multi-relations involved is a List. In a UML 1-to-N or M-to-N relationship, it is expected that there will not be duplicate entries on the "multi" side of the relationship. In other words, in a Foo<->*Bar relationship, the same Bar will not occur more than once in the Foo's list of Bars. Using this interpretation, you can interpret an "add" to the non-List side of the relationship as implying that a new object is added to the end of the list, and a "remove" as implying that the object is removed from the List. In other words, we can treat the List just like a Set and allow changes to the association from both sides.
The caveat is that you can't have duplicate entries in a List that is involved in a bi-directional association. (Then again, what if you modify the association from the List side, and you put duplicate values? Do we allow that?)
Bi-directional associations get very tricky when IntMaps or StrMaps are involved.
If two maps are involved (e.g., IntMap to StrMap), then it is impossible to modify the association. If I add to it from the IntMap side, then I have to somehow specify the key to use on the StrMap side. The Collection interfaces provide no mechanism to do this.
If only one map is involved (either 1-to-N or M-to-N), then I must make all modifications from the map side. Otherwise, I have no way to indicate the Map key when I modify the association.
So, the specification needs to be clarified on bi-directional associations:
1. The behavior of a List in a bi-directional association must be clarified. Is what I said above correct, or should there be limitations on how you can use a List in a bi-directional association?
2. The usage of IntMaps and StrMaps must be clarified. There are several possibilities:
a. IntMaps and StrMaps are not allowed in bi-directional associations
b. IntMaps and StrMaps are allowed, but you must make all modifications to the association from the Map side of the association.
Map-to-Map associations are not allowed.
c. All types of associations are allowed, and the OMG will add to the Collection API to provide the methods needed to set the map keys appropriately from either direction.
How should dangling relationships be handled? For example, suppose I have a Foo->Bar relationship. My Foo has a related Bar.
1. I clone the Foo and the Bar into the CacheAccess.
2. Someone deletes the Bar, but does not update the Foo's relationship.
3. I refresh the CacheAccess, which deletes my Bar in the CacheAccess, but my Foo thinks it's still related.
What should happen when I call Foo.get_bar()?
a. throw an exception? (NotFound? AlreadyDeleted?)
b. return a NULL?
The DLRL spec uses CORBA valuetypes to specify DLRL objects. The implementations of these CORBA valuetypes (e.g., Foo) are completely provided by the DLRL -- the user doesn't have to implement anything, which is great from a simplicity standpoint. However, this limits the DLRL valuetypes to being not much more than fancy structs with inheritance and relationships, but no behavior. One of the capabilities of standard CORBA valuetypes is the ability to specify valuetype operations in IDL and then implement them in the target language. The CORBA valuetype specification permits the user to write the valuetype's implementation class, providing behavior for the methods specified in IDL. The user then implements a valuetype factory as a hook to create instances of the user-written valuetype implementation. I can see a similar concept as useful to DLRL. It would probably be useful for a user to specify valuetype operations in DLRL IDL and implement them in the target language. One way to do this is to have the DLRL compiler generate a Foo class with a pure virtual method for each valuetype operation. The user inherits from it, implementing a MyFoo with implementations of the pure virtual methods. We also need a factory; naturally, we'd use the FooHome. The generated FooHome would include a pure virtual "create()", or something like it, which the user would implement in a derived MyFooHome class to create instances of his MyFoo. Instead of creating a FooHome and registering it with the Cache, the user creates a MyFooHome and registers it with the Cache. The DLRL core uses the MyFooHome's overridden create() method to make new Foo handles (which are actually MyFoo handles, containing the user's behavior). There may be holes in this, but the basic idea is probably useful. It would permit a DLRL object model to be a full object model.
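A self-contained C++ sketch of the proposed pattern (every name here is hypothetical; minimal stand-ins replace the generated classes):

class Foo {                        // stand-in for the DLRL-generated valuetype
public:
    virtual ~Foo() = default;
    virtual double compute_area() = 0;   // operation declared in DLRL IDL
};

class FooHome {                    // stand-in for the generated home
public:
    virtual ~FooHome() = default;
    virtual Foo* create() = 0;     // the proposed pure-virtual factory hook
};

// User-written classes:
class MyFoo : public Foo {
public:
    double compute_area() override { return width_ * height_; }
private:
    double width_ = 2.0, height_ = 3.0;
};

class MyFooHome : public FooHome {
public:
    Foo* create() override { return new MyFoo(); }
};

// The application registers a MyFooHome with the Cache instead of a FooHome;
// the DLRL core then calls create() whenever it materializes a Foo, so every
// handle it hands back carries the user's behavior.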
The spec implies that is_modified is only valid in the context of a listener callback (I suppose that would be an ObjectListener or a SelectionListener callback). See section 3.1.6.4.1, general scenario, which says that after end_updates is called, the "modification states of the updated objects is cleaned". Does that mean that, outside of an ObjectListener or SelectionListener, a call to is_modified always returns false? Also, the spec is not clear about what happens in a CacheAccess. A CacheAccess, of course, has no listeners. So, for the is_modified() methods to be useful from a CacheAccess, they'd have to be valid from outside of a listener for an object in a CacheAccess. Is that the case? There is also the corner case of a manual Selection. A manual Selection might not have a listener. There wouldn't be any way to call is_modified() on an object in a manual Selection that doesn't have a Listener -- you'd have to attach a Listener to the manual selection, and call is_modified in the callback that happens when you call refresh(). Is that what the spec intends?
A set_<attribute> call on a DLRL object is only valid from within a writable CacheAccess. It seems that, if an application calls set_<attribute> on a DLRL object outside of a writable CacheAccess, then set_<attribute> should throw an exception -- probably a PreconditionNotMet exception. However, the spec doesn't indicate that an exception should be thrown in this case. It only mentions the case where the attribute is a key field in the non-default mapping (3.1.6.3.14)
The spec says that the Selection takes ownership of the SelectionCriterion parameter passed in create_selection (see 3.1.6.3.7, ObjectHome, create_selection bullet item). However, this violates CORBA "in" parameter passing semantics, in which the client owns an "in" parameter and is responsible for it. The right way (in CORBA terms) to do this is for the client to create the FooSelectionCriterion on the heap, store it in a FooSelectionCriterion_var smart pointer, and let the smart pointer release it when it goes out of scope. The Selection can make a "copy" of the FooSelectionCriterion, presumably by bumping up its reference count.
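A C++ sketch of that ownership scheme, using the names from this issue (exact signatures are illustrative, not normative):

{
    FooSelectionCriterion_var criterion = new FooSelectionCriterion();
    // 'in' parameter: the caller keeps ownership; the Selection is expected
    // to take its own reference (e.g. via _add_ref) rather than adopt it.
    FooSelection_var selection =
        foo_home->create_selection(criterion.in(), /* auto_refresh */ true);
}   // criterion's _var releases the caller's reference here; the Selection's
    // own reference keeps the criterion alive.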
The IDL for SelectionListener has an error. The base SelectionListener class shows on_object_out() accepting an ObjectRoot; but the generated FooSelectionListener shows on_object_out() (correctly) accepting a Foo. Putting the base SelectionListener's on_object_out() inside of the comment would fix this.
The base IDL and implied IDL are inconsistent for ObjectListener and FooListener: 1. The return values are "boolean" in the base class, "void" in the derived. They should be "boolean" for all. 2. The comments in the ObjectListener IDL imply that only on_object_modified has a Foo-specific version generated in the derived class. But FooListener has Foo-specific versions for all three operations. The FooListener version is correct.
If I have a handle on an object (such as a Foo), and the Foo is deleted from underneath me, I understand that I'll get an AlreadyDeleted exception if I try to do anything (call a setter or a getter) on the Foo. However, it seems like you should be able to get the OID and the read_state (which should be OBJECT_DELETED) from a deleted object. Can you? Can I call oid() and read_state() on a deleted object without getting an AlreadyDeleted exception?
Does a Collection attribute have a setter?
For example, suppose I have a Foo with a List<Long> in it (this isn't valid IDL, please humor me):
valuetype Foo : DDS::ObjectRoot
{
public List<long> my_longs;
};
That generates a Foo::get_my_longs() in the target language. Should it also generate a Foo::set_my_longs() in the target language?
Or should there simply be a Foo::get_my_longs(), and I modify the List<long> via add, put, etc?
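A sketch of the getter-only alternative in C++ (the accessor and the List operations add/put follow the names used in this issue, but all are hypothetical):

LongList* longs = foo->get_my_longs();  // no set_my_longs() is generated
longs->add(42);       // append a new element
longs->put(0, 7);     // overwrite the element at index 0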
Can a CacheAccess::refresh() throw an AlreadyClonedInWriteMode exception? Suppose object1 is cloned into a writable CacheAccess with scope=RELATED_OBJECTS and some depth >1. Now, suppose object2, which is unrelated, is cloned into a different writable CacheAccess. Then, suppose incoming updates cause object1 and object2 to become related, such that a subsequent CacheAccess::refresh() would pull object2 into the first writable CacheAccess. But wait, it's already in the other writable CacheAccess. Does that cause an exception on the refresh()?
The DLRL spec indicates that enumerations are mapped to 8-bit integers or strings, not to IDL enums. See 3.1.4.2.3 Can that be right? An IDL enum is a 32-bit integer. See 3.11.2.4 in the 04-03-01 CORBA spec. It seems strange not to map a DLRL enum to an IDL enum. Is there a reason?
It might be useful to allow QoS settings directly on a DLRL object type. For things like HISTORY, it would be a lot cleaner to apply QoS at the DLRL object type level (say, an object type and everything related to a certain depth) and let DLRL pass it through to DCPS. That way, the user doesn't need to know his application's DLRL-to-DCPS mapping -- the same QoS application code could be used regardless of how the DLRL-to-DCPS mapping is configured. The user's QoS code wouldn't depend on the details of the application's mapping, which could change.
The PSM mapping of BuiltinTopicKey_t is defined as:
struct BuiltinTopicKey_t {
BUILTIN_TOPIC_KEY_TYPE_NATIVE value[3];
};
But the DDS Interoperability Wire Protocol (RTPS) specifies that a GUID consists of a 12-byte GuidPrefix and a 4-byte EntityId. In order to map between GUID and BuiltinTopicKey, we should define BuiltinTopicKey as follows:
struct BuiltinTopicKey_t {
BUILTIN_TOPIC_KEY_TYPE_NATIVE value[4];
};
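The arithmetic behind the proposal, as a C++ compile-time check (this assumes BUILTIN_TOPIC_KEY_TYPE_NATIVE is a 32-bit integer):

#include <cstdint>

// An RTPS GUID is a 12-byte GuidPrefix plus a 4-byte EntityId: 16 bytes,
// which fills value[4] exactly and cannot fit in value[3].
static_assert(12 + 4 == 4 * sizeof(std::int32_t),
              "a 16-byte RTPS GUID needs four 32-bit key fields, not three");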
Problem: Various things are unclear in this section; for example, the last bulleted list in this section does not take the Set type of a multi-valued attribute into account. A bullet needs to be added to state that a set does not contain an index key. Also, in accordance with other issues, replace the word 'row' with 'instance', and replace the word 'cell' with 'field'. It also needs to be indicated that, in the case of the predefined mapping, the user-defined keys identify the object, not the OID.
Solution:
Replace:
Mono-valued attributes and relations are mapped to one (or several) cell(s)7 in a single row whose key is the means to unambiguously reference the DLRL object (i.e., its oid or its full oid, depending on the owner class characteristics as indicated in the previous section):
With:
Mono-valued attributes and relations are mapped to one (or several) field(s)7 in a single instance whose key is the means to unambiguously reference the DLRL object (i.e., its oid, its full oid, or its user-defined keys, depending on the owner class characteristics as indicated in the previous section):
Replace (in the first bulleted list of the section):
reference to another DLRL object (i.e., relation) -> as many cells as needed to reference unambiguously the referenced object (i.e., its oid, or its full oid as indicated in the previous section).
With:
reference to another DLRL object (i.e., relation) -> as many fields as needed to reference unambiguously the referenced object (i.e., its oid, its full oid, or its user defined keys as indicated in the previous section).
Replace:
Multi-valued attributes are mapped to one (or several) cell(s) in a set of rows (as many as there are items in the collection), whose key is the means to unambiguously designate the DLRL object (i.e., oid or full oid) plus an index in the collection.
o For each item, there is one instance that contains the following, based on the type of attribute:
o simple basic type -> one cell of the corresponding DCPS type;
o enumeration -> one cell of type integer or string;
o simple structures -> as many cells as needed to hold the structure;
o reference to another DLRL object -> as many cells as needed to reference unambiguously the referenced object (i.e., its oid, or its full oid as indicated in the previous section).
o The key for that row is the means to designate the owner's object (i.e., its oid or full oid) + an index, which is:
o An integer if the collection basis is a list (to hold the rank of the item in the list).
o A string or an integer9 if the collection basis is a map (to hold the access key of the item in the map).
With:
Multi-valued attributes are mapped to one (or several) field(s) in a set of instances (as many as there are items in the collection), whose key is the means to unambiguously designate the DLRL object (i.e., its oid, its full oid, or its user defined keys) plus an optional index in the collection.
o For each item, there is one instance that contains the following, based on the type of attribute:
o simple basic type -> one field of the corresponding DCPS type;
o enumeration -> one field of type integer or string;
o simple structures -> as many fields as needed to hold the structure;
o reference to another DLRL object -> as many fields as needed to reference unambiguously the referenced object (i.e., its oid, its full oid, or its user defined keys as indicated in the previous section).
o The key for that instance is the means to designate the owner's object (i.e., its oid, full oid, or its user defined keys) + an optional index, which is:
o An integer if the collection basis is a list (to hold the rank of the item in the list).
o A string or an integer9 if the collection basis is a map (to hold the access key of the item in the map).
o No index if the collection basis is a set (the item value field is implicitly the key in this case).
Problem: In the table in section 3.1.6.2, in the row regarding the CacheAccess, it should be clarified that a CacheAccess should be used by one thread only. If multiple threads require access to the same CacheAccess, it is the application's responsibility to ensure thread safety.
Solution:
Replace:
Class that encapsulates the access to a set of objects. It offers methods to refresh and write objects attached to it; CacheAccess objects can be created in read mode, in order to provide a consistent access to a subset of the Cache without blocking the incoming updates or in write mode in order to provide support for concurrent modifications/updates threads.
With (sentence at the end added):
Class that encapsulates the access to a set of objects. It offers methods to refresh and write objects attached to it; CacheAccess objects can be created in read mode, in order to provide a consistent access to a subset of the Cache without blocking the incoming updates or in write mode in order to provide support for concurrent modifications/updates threads. A CacheAccess should only be used by one thread; if multiple threads require access to the same CacheAccess, then it is the responsibility of the application to ensure thread safety.
Problem: The create_cache operation is responsible for creating a DCPS publisher and/or subscriber, depending on the cache usage. If this creation fails, a DCPSError should be raised; the text should state this. It should also be stated that a cache is created by default with updates_enabled() returning false, forcing the application to explicitly enable the cache for updates. This prevents the application from immediately starting to receive updates after the enable_all_for_pubsub. It should also be clarified that the QoS settings on the participant determine whether the publisher/subscriber will be created as enabled or disabled entities, i.e., if the QoS setting for autoenable_created_entities is set to true on the participant, the Subscriber and Publisher are created in an enabled state; if set to false, then both entities will be created in a disabled state.
Solution:
Replace:
Depending on the cache_usage a Publisher, a Subscriber, or both will be created for the unique usage of the Cache. These two objects will be attached to the passed DomainParticipant.
With:
Depending on the cache_usage a Publisher, a Subscriber, or both will be created for the unique usage of the Cache. These two objects will be attached to the passed DomainParticipant. If the creation of the Publisher and/or Subscriber required by the Cache fails, a DCPSError is raised. The Cache is created with updates disabled by default (updates_enabled() returning false). The autoenable_created_entities QoS setting of the entity_factory of the passed DomainParticipant determines whether the Publisher and/or Subscriber will be created in an enabled or a disabled state: if this QoS setting is set to true, these entities will be created in an enabled state; if set to false, they will be created in a disabled state. The Publisher and/or Subscriber themselves will always have their entity_factory.autoenable_created_entities QoS setting set to false, ensuring that DataWriter and DataReader entities are created in a disabled state. This setting may be overridden before the register_all_for_pubsub() call on the created Cache, which will result in the DataWriter and DataReader entities being created as enabled entities; in this scenario updates will be received from the moment register_all_for_pubsub is called, but can only be viewed after a call to enable_all_for_pubsub. The creation of Topic entities during register_all_for_pubsub is also slaved to the passed DomainParticipant's entity_factory.autoenable_created_entities QoS setting at the time register_all_for_pubsub() is called.
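For illustration, a C++ sketch of the lifecycle this text describes, using the operation names from this resolution (the CacheFactory call and exact signatures are illustrative, not normative):

DDS::Cache* cache = cache_factory->create_cache(DDS::READ_WRITE, participant);

// A new Cache starts with updates disabled:
// cache->updates_enabled() == false

cache->register_all_for_pubsub();  // creates the DCPS entities; DCPSError on failure
// ... QoS adjustments on the created entities go here ...
cache->enable_all_for_pubsub();    // enables the DCPS entities
cache->enable_updates();           // only now are incoming updates applied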
Problem: In section 3.1.6.3.2, in the explanation of the refresh operation, it should be stated that a DCPSError may be raised if an error occurred while trying to read data from DCPS. It should also be stated that a PreconditionNotMet exception is raised in case the cache_usage of the CacheBase excludes read operations.
Solution:
Replace:
o Refresh the contents of the Cache with respect to its origins (DCPS in case of a main Cache, Cache in case of a CacheAccess).
With:
o Refresh the contents of the CacheBase with respect to its origins (DCPS in case of a main Cache, Cache in case of a CacheAccess). A PreconditionNotMet is raised if the cache_usage excludes read operations. A DCPSError is raised if an error occurred while trying to read data from DCPS. If the CacheBase represents a Cache that has updates_enabled set to true, then this operation is considered a no-op.
In the section 3.2.1.2 IDL Description on page 3-57, regarding the definition of the local interface CacheBase, replace:
void refresh( ) raises (DCPSError);
With:
void refresh( ) raises (DCPSError, PreconditionNotMet);
Problem: The explanation of the cache usage only talks about the intent to support write operations. However, if the usage is WRITE_ONLY there is no intent to support read operations, such as refresh, either. This should be stated as such.
Solution:
Replace:
The cache_usage indicates whether the cache is intended to support write operations (WRITE_ONLY or READ_WRITE) or not (READ_ONLY). This attribute is given at creation time and cannot be changed afterwards.
With:
The cache_usage indicates whether the cache is intended to support write operations only (WRITE_ONLY), read operations only (READ_ONLY), or both read and write operations (READ_WRITE). This attribute is given at creation time and cannot be changed afterwards.
Problem: The description of the write operation does not specify which exceptions can be thrown. Note that the IDL description on page 3-57 also states that the ReadOnlyMode exception can be raised, but that exception no longer exists! So fix this as well. See issue XXX and XXX for mention of the TimeOut and InvalidObjects exceptions.
Solution:
Replace:
o Write objects (write). If the CacheAccess::cache_usage allows write operation, those objects can be modified and/or new objects created for that access and eventually all the performed modifications written for publications.
With:
o Write objects (write). If the CacheAccess::cache_usage allows write operation, those objects can be modified and/or new objects created for that access and eventually all the performed modifications written for publications. A PreconditionNotMet is raised if the CacheAccess::cache_usage does not allow the write operation (i.e., the usage is READ_ONLY). A TimeOut exception is raised if one of the underlying DCPS DataWriter entities timed out. A DCPSError is raised if the write failed due to an error in the DCPS layer.
In section 3.2.1.2 on page 3-57, regarding the IDL description of the write() operation of the interface CacheAccess, replace:
void write () raises ( ReadOnlyMode, DCPSError);
With:
void write () raises (DCPSError, PreconditionNotMet, InvalidObjects, TimeOut);
Clarify which exceptions can be raised under which circumstances in the write operation (section 3.1.6.3.3, page 3-21)
Problem: The purge operation needs to be further detailed as to what the consequences of the operation are. The wrong quotation mark at the end of the description needs to be removed as well.
Solution:
Replace:
Detach all contracts (including the contracted DLRL Objects themselves) from the CacheAccess (purge)."
With:
Detach all Contracts (including the contracted ObjectRoots themselves) from the CacheAccess (purge). If the CacheAccess is writeable, then the CacheAccess will unregister itself for each purged (previously written) ObjectRoot. If the CacheAccess was the last writeable CacheAccess within the scope of the owning Cache for the purged (previously written) ObjectRoot, then an explicit unregister_instance is performed for that instance at the respective DataWriter entity. A DCPSError is raised if the purge failed due to an error on the DCPS level.
In section 3.2.1.2 on page 3-57, in the IDL description regarding the CacheAccess interface, replace:
void purge ();
With:
void purge () raises (DCPSError);
Clarify what happens in the purge operation of the CacheAccess (section 3.1.6.3.3, page 3-21).
Problem: DLRL is unmistakably linked with DCPS. DLRL requires DCPS entities to create a cache, for example, and one can directly request the publisher or subscriber from such a Cache. A DLRL not built on top of DCPS is not a DLRL at all.
Solution:
Replace:
It is an optional layer that may be built on top of the DCPS layer.
With:
It is an optional layer that is built on top of the DCPS layer.
Problem: The explanation on page 3-24 for the create_access operation states that a PreconditionNotMet is only raised if the usage of the access is not compatible with the usage of the cache. It should also say that a PreconditionNotMet is raised if an attempt is made to create an access while the cache is not yet enabled for pub/sub.
Solution: Add the new reason in the description.
Clarify exception condition for precondition not met in the create_access operation of the Cache in section 3.1.6.3.4 on page 3-24
Problem: In section 3.1.6.3.4 on page 3-23, the description of the operation find_home_by_name of the Cache entity states that an already registered home can be retrieved using its name. It should be clarified that this name is the fully qualified name (in the IDL sense, with '::' as separator). The name of the ObjectHome for object Foo, which is defined in module test, thus becomes 'test::Foo'.
Solution:
Replace:
o retrieve an already registered ObjectHome based on its name (find_home_by_name) or based on its index of registration (find_home_by_index). If no registered home can be found that satisfies the specified name or index, a NULL is returned.
With:
o retrieve an already registered ObjectHome based on its fully qualified name (in the IDL sense, meaning that an ObjectHome representing class 'Foo' defined in module 'test' has the name "test::Foo") (find_home_by_name) or based on its index of registration (find_home_by_index). If no registered home can be found that satisfies the specified name or index, a NULL pointer is returned.
Problem: The description of the register_all_for_pubsub operation on page 3-23 in section 3.1.6.3.4 doesn't say under which conditions the DCPSError exception is raised.
Solution: Add the following sentence to the description of the register_all_for_pubsub operation: A DCPSError is raised if an error was encountered while trying to create the DCPS entities.
Problem: In section 3.1.6.3.4 on page 3-23, regarding the operation enable_all_for_pubsub, several clarifications should be made: 1) In the first sentence, before the word 'QoS' it should say '(immutable)' to clarify that it is mainly the immutable QoS settings that can still be changed at that time. 2) The expression "those two operations" in the second sentence should be replaced by "this operation and the register_all_for_pubsub operation". 3) It should state that a DCPSError is raised if an error occurred while enabling the DCPS entities.
Solution:
Replace:
o enable the derived Pub/Sub infrastructure (enable_all_for_pubsub). QoS setting can be performed between those two operations. One precondition must be satisfied before invoking the enable_all_for_pub_sub method: the pubsub_state must already have been set to REGISTERED before. A PreconditionNotMet Exception is thrown otherwise. Invoking the enable_all_for_pub_sub method on an ENABLED pubsub_state will be considered a no-op.
With:
o enable the derived Pub/Sub infrastructure (enable_all_for_pubsub). Changes to the (immutable) QoS settings can be performed between this operation and the register_all_for_pubsub operation. One precondition must be satisfied before invoking the enable_all_for_pub_sub method: the pubsub_state must already have been set to REGISTERED before. A PreconditionNotMet Exception is thrown otherwise. A DCPSError is raised if an error occurred while enabling the DCPS entities. Invoking the enable_all_for_pub_sub method on an ENABLED pubsub_state will be considered a no-op.
Problem: In section 3.1.6.3.4 on page 3-23, the descriptions of the enable and disable updates operations should state that a PreconditionNotMet is thrown if the cache was created with a usage of WRITE_ONLY. They should also state that multiple calls to these operations are considered a no-op. It should also be stated that calling the disable_updates operation results in all cache listeners being triggered with the on_updates_disabled call, and for the enable the on_updates_enabled is called, in the scope of the thread making the call to enable or disable updates.
Solution:
Replace (see issue XXX (PT-DLRL-TYPO-0011), which already made changes to the disable_updates description):
o disable_updates causes incoming but not yet applied updates to be registered for further application, any update round in progress will be completed before the disable updates instruction is taken into account.
o enable_updates causes the registered (and thus not applied) updates to be taken into account, and thus to trigger the attached Listener, if any.
With:
o disable_updates causes incoming but not yet applied updates to be registered for further application; any update round in progress will be completed before the disable updates instruction is taken into account. All registered CacheListeners will be triggered with the on_updates_disabled call (in the scope of the thread calling the disable_updates operation), signaling to any interested party that updates on the Cache will no longer be automatically processed and will no longer result in listener triggers. If the cache_usage of the Cache is WRITE_ONLY, then a PreconditionNotMet is raised.
o enable_updates causes the registered (and thus not applied) updates to be taken into account, and thus to trigger the attached CacheListeners, if any. All registered CacheListeners will be triggered before any updates are applied with the on_updates_enabled call (in the scope of the thread calling the enable_updates operation), signaling to any interested party that updates on the Cache will be automatically processed and thus result in listener triggers. If the cache_usage of the Cache is WRITE_ONLY, then a PreconditionNotMet is raised.
In the IDL description in section 3.2.1.2 on page 3-58 replace:
// --- Updates management
void enable_updates ();
void disable_updates ();
With:
// --- Updates management
void enable_updates () raises (PreconditionNotMet);
void disable_updates () raises (PreconditionNotMet);
Clarify exceptions for the enable_updates and disable_updates operations of the Cache. Also clarify that the cache listener should be triggered within the scope of these operations
Problem (1/3): The description of on_begin_updates says at the end '(assuming that updates_enabled is true)', which is a bit vague as to the context of that statement. Replace it with something like "This operation will only be triggered for a Cache which has updates_enabled set to true." That statement should also be added at the end of the on_end_updates operation. This clarification is required because the other two operations are not dependent on the state of updates_enabled…
Solution (1/3):
Replace:
o on_begin_updates indicates that updates are following. Actual modifications in the cache will be performed only when exiting this method (assuming that updates_enabled is true).
o on_end_updates indicates that no more update is foreseen.
With:
o on_begin_updates indicates that updates are following. Actual modifications in the Cache will be performed only when exiting this method. This operation will only be triggered for a Cache which has updates_enabled set to true.
o on_end_updates indicates that no more update is foreseen. This operation will only be triggered for a Cache which has updates_enabled set to true.
Problem (2/3): The paragraph following the descriptions of all operations says: "In between, the updates…". Since two new operations were added in the last spec revision, this statement is now unclear. In between which operations? This should be clarified to state: in between the on_begin_updates and on_end_updates calls.
Solution (2/3):
Replace:
In between, the updates are reported on home or selection listeners. Section 3.1.6.4, "Listeners Activation," on page 3-41 describes which notifications are performed and in what order.
With:
In between the on_begin_updates and the on_end_updates calls, the updates are reported on home or selection listeners. Section 3.1.6.4, "Listeners Activation," on page 3-41 describes which notifications are performed and in what order.
Problem (3/3): The descriptions of on_updates_enabled and on_updates_disabled both start with a wrong quotation mark. It should be removed.
Solution (3/3):
Replace:
o "on_updates_enabled - indicates that the Cache has switched to automatic update mode. Incoming data will now trigger the corresponding Listeners.
o "on_updates_disabled - indicates that the Cache has switched to manual update mode. Incoming data will no longer trigger the corresponding Listeners, and will only be taken into account during the next refresh operation.
With:
o on_updates_enabled - indicates that the Cache has switched to automatic update mode. Incoming data will now trigger the corresponding Listeners.
o on_updates_disabled - indicates that the Cache has switched to manual update mode. Incoming data will no longer trigger the corresponding Listeners, and will only be taken into account during the next refresh operation.
In section 3.1.6.3.5 regarding the CacheListener clarify some things… And remove some typos
Problem (1/4) (typo): Each attribute and operation description wrongly starts with a quotation mark; these should be removed.
Problem (2/4) (typo): The description of the attribute depth talks about a RELATED_OBJECT_SCOPE. This should be RELATED_OBJECTS_SCOPE (objects should thus be plural).
Problem (3/4) (clarification): In the description of the scope attribute the various scopes are explained; for clarification purposes, insert the type of scope right after the explanation.
Problem (4/4) (clarification): In the description of the set_depth operation, indicate that the depth is ignored unless the scope is set to RELATED_OBJECTS_SCOPE, just like it says at the getter for the depth attribute.
Solution:
Replace:
o "The top-level object (contracted_object). This is the object that acts as the starting point for the cloning contract.
o "The scope of the cloning request (i.e., the object itself, or the object with all its (nested) compositions, or the object with all its (nested) compositions and all the objects that are navigable from it up till the specified depth).
o "The depth of the cloning contract. This defines how many levels of relationships will be covered by the contract (UNLIMITED_RELATED_OBJECTS when all navigable objects must be cloned recursively). The depth only applies to a RELATED_OBJECT_SCOPE.
It offers methods to:
o "Change the depth of an existing contract (set_depth). This change will only be taken into account at the next refresh of the CacheAccess.
o "Change the scope of an existing contract (set_scope). This change will only be taken into account at the next refresh of the CacheAccess.
With:
o The top-level object (contracted_object). This is the object that acts as the starting point for the cloning contract.
o The scope of the cloning request (i.e., the object itself (SIMPLE_OBJECT_SCOPE), or the object itself along with all its (nested) compositions (CONTAINED_OBJECTS_SCOPE), or the object itself along with all its (nested) compositions and all the objects that are navigable from it up till the specified depth (RELATED_OBJECTS_SCOPE)).
o The depth of the cloning contract. This defines how many levels of relationships will be covered by the contract (UNLIMITED_RELATED_OBJECTS when all navigable objects must be cloned recursively). The depth only applies to a RELATED_OBJECTS_SCOPE.
It offers methods to:
o Change the depth of an existing contract (set_depth). This change will only be taken into account at the next refresh of the CacheAccess. The depth only applies to a RELATED_OBJECTS_SCOPE.
o Change the scope of an existing contract (set_scope). This change will only be taken into account at the next refresh of the CacheAccess.
In section 3.1.6.3.6 regarding the Contract, clarify some things and remove some typos.
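To summarize the resulting shape of the cloning contract, a minimal IDL sketch (the member names come from the PIM description above; the declaration style and the ObjectScope/long types are assumed):
local interface Contract {
    // ignored unless scope is set to RELATED_OBJECTS_SCOPE
    readonly attribute long depth;
    readonly attribute ObjectScope scope;
    // the starting point of the cloning contract
    readonly attribute ObjectRoot contracted_object;
    // both changes are only taken into account at the next refresh
    // of the CacheAccess
    void set_depth (in long depth);
    void set_scope (in ObjectScope scope);
};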
Problem 1: The description for the name attribute (see issue XXX) needs to be clarified into saying that the name attribute gives the fully qualified name using IDL '::' as separators (i.e. the ObjectHome representing class Foo in module test has as name 'test::Foo').
Solution 1: Replace:
o the public name of the application-defined class (name).
With:
o the public fully qualified name (using IDL separators '::') of the application-defined class (name). For an ObjectHome representing class Foo in module test its name becomes 'test::Foo'.
Problem 2: The description for the index attribute of the object home needs to specify what the index is in case the home is not yet registered with any Cache. It is our suggestion to make this operation return -1 as value for an ObjectHome which has not yet been registered to any Cache. The description also contains a typo, as it talks about index where it should be registration_index.
Solution 2: Replace:
o the index under which the ObjectHome has been registered by the Cache (see Cache::register_home operation).
With:
o the index (registration_index) under which the ObjectHome has been registered by the Cache (see Cache::register_home operation). If the ObjectHome was not yet registered to any Cache then -1 is returned as value.
In the IDL description in section 3.2.1.2 on page 3-53 the attribute type of the registration_index needs to be changed from unsigned long to long to allow -1 as return value.
Replace:
readonly attribute unsigned long registration_index;
With:
readonly attribute long registration_index;
In the IDL description in section 3.2.1.2 on page 3-58 regarding the Cache interface description change the return value of the register_home operation from unsigned long into long:
Replace:
unsigned long register_home (
    in ObjectHome a_home)
    raises (PreconditionNotMet);
With:
long register_home (
    in ObjectHome a_home)
    raises (PreconditionNotMet);
Problem 3: The descriptions for the selections and listeners attributes should be in plural; furthermore the attribute names in the descriptions should be bold.
Solution 3: Replace:
o the list of attached Selection (selections).
o the list of attached ObjectListener (listeners).
With:
o the list of attached Selection objects (selections).
o the list of attached ObjectListener objects (listeners).
Problem 4: The description for the set_content_filter operation states that it can only be set before the home is registered to a Cache. But this is not correct; it should state that it must be set while the Cache the home is registered to (if any) has a pubsub state of INITIAL. As long as the readers have not been created it should not be prohibited to set the content filter.
Solution 4: Replace:
o set the content_filter for that ObjectHome (set_content_filter). As a content filter is intended to be mapped on the underlying infrastructure it can be set only before the ObjectHome is registered (see Cache::register_home). An attempt to change the filter expression afterwards will raise a PreconditionNotMet. Using an invald filter expression will raise an SQLError.
With:
o set the content_filter for that ObjectHome (set_content_filter). As a content filter is intended to be mapped on the underlying infrastructure it can be set only if the Cache to which the ObjectHome belongs (if any) has not yet been registered for pubsub (see Cache::register_all_for_pubsub). An attempt to change the filter expression afterwards will raise a PreconditionNotMet. Using an invalid filter expression will raise an SQLError.
Problem 5: The description of the deref_all operation talks about the 'most recent' state, but this is not correct. In 'manual' update mode (using the refresh operation on the Cache) one wants to load the state as known during the last refresh operation, which is not necessarily the most recent state known in DCPS. The reasoning behind this is that one does not want inconsistent states within the DLRL, where one object state is significantly 'newer' than other states, especially since 'dereferencing' is only meant to load in a state, something which is not done at refresh time in order to gain performance. The deref_all is not meant as a refresh on home level!
Solution 5: Replace:
o ask to load the most recent state of a DLRL Object into that Object for all objects managed by that home (deref_all).
With:
o ask to load the last known state (at the time of the last update round) of a DLRL Object into that Object for all objects managed by that home (deref_all).
Problem 6: The description of the create_selection operation talks about a SelectionCriterion (second line) where it should talk about a QueryCriterion. In the last line it should clarify that the PreconditionNotMet is raised if the Cache it belongs to is still set to INITIAL with relation to the DCPS state, as well as if the home does not yet belong to a Cache.
Solution 6: Replace:
o create a Selection (create_selection). The criterion parameter specifies the SelectionCriterion (either a FilterCriterion or an SelectionCriterion) to be attached to the Selection, the auto_refresh parameter specifies if the Selection has to be refreshed automatically or only on demand (see Selection) and a boolean parameter specifies, when set to TRUE, that the Selection is concerned not only by its member objects but also by their contained ones (concerns_contained_objects); attached SelectionCriterion belong to the Selection that itself belongs to its creating ObjectHome. When creating a Selection while the DCPS State of the Cache is still set to INITIAL, a PreconditionNotMet is raised.
With:
o create a Selection (create_selection). The criterion parameter specifies the SelectionCriterion (either a FilterCriterion or a QueryCriterion) to be attached to the Selection, the auto_refresh parameter specifies if the Selection has to be refreshed automatically or only on demand (see Selection) and a boolean parameter specifies, when set to TRUE, that the Selection is concerned not only by its member objects but also by their contained ones (concerns_contained_objects); attached SelectionCriterion belong to the Selection that itself belongs to its creating ObjectHome. When creating a Selection, if the ObjectHome does not yet belong to a Cache or while the DCPS State of the Cache that the ObjectHome belongs to is still set to INITIAL, a PreconditionNotMet is raised.
Problem 7: The description of the create_unregistered_object should be changed to indicate that only the identity should be set/changed for objects created by this operation. One wants to prevent relations and such from being set on unregistered objects. The only reason this operation exists is to allow an application to set the identity of the object if the object is mapped using predefined mapping rules. In this mode the DLRL cannot determine the keys itself and thus needs user input.
Solution 7: Replace the word 'content' on the first line with 'identity'.
Problem 8: Clarify that the register_object method only takes objects created by the create_unregistered_object method of the same home instance.
Solution 8: Replace:
o register an object resulting from such a pre-creation (register_object). This operation embeds a logic to derive from the object content a suitable oid; only objects created by create_unregistered_object can be passed as parameter, a PreconditionNotMet is raised otherwise. If the result of the computation leads to an existing oid, an AlreadyExisting exception is raised. Once an object has been registered, the fields that make up its identity (i.e. the fields that are mapped onto the keyfields of the corresponding topics) may not be changed anymore.
With:
o register an object resulting from such a pre-creation (register_object). This operation embeds a logic to derive from the object content a suitable oid; only objects created by create_unregistered_object (of the same ObjectHome instance) can be passed as parameter, a PreconditionNotMet is raised otherwise. If the result of the computation leads to an existing oid, an AlreadyExisting exception is raised. Once an object has been registered, the fields that make up its identity (i.e. the fields that are mapped onto the keyfields of the corresponding topics) may not be changed anymore.
Problem 9: Clarify the behavior of the find_object operation if no object with the specified OID can be located (return NULL). It is not desirable to throw an exception (NotFound) if no object can be found, as that is too heavyweight just to report that no object with the specified OID exists.
Solution 9: Replace:
o retrieve a DLRL object based on its oid in the in the specified CacheBase (find_object).
With:
o retrieve a DLRL object based on its oid in the specified CacheBase (find_object). If no such object can be located NULL is returned.
In the IDL description on page 3-54 in section 3.2.1.2 regarding the ObjectHome replace:
ObjectRoot find_object (
    in DLRLOid oid,
    in CacheBase source)
    raises (NotFound);
With:
ObjectRoot find_object (
    in DLRLOid oid,
    in CacheBase source);
Problem 10: Clarify the behavior of the get_topic_name operation if the passed attribute is not defined within the home (it should just return NULL).
Solution 10: Replace:
o retrieve the name of the topic that contains the value for one attribute(get_topic_name). If the DCPS State of the Cache is still set to INITIAL, a PreconditionNotMet is raised.
With:
o retrieve the name of the topic that contains the value for one attribute (get_topic_name), if the attribute is unknown then NULL is returned. If the DCPS State of the Cache is still set to INITIAL, a PreconditionNotMet is raised.
Problem 11: It should be clarified that objects with the state DELETED are not contained in the result of the get_objects operation of the ObjectHome. The text about ObjectRoot turning into Foo should also be removed, as it is already covered by the undefined bit, see XXX.
Solution 11: Replace:
o obtain from a CacheBase a (typed) list of all objects that match the type of the selected ObjectHome (get_objects). For example the type ObjectRoot[ ] will be substituted by a type Foo[ ] in a FooHome.
With:
o obtain from a CacheBase a (typed) list of all objects that match the type of the selected ObjectHome (get_objects). Objects with the state DELETED are not contained within this list.
Problem 12: Remove the text about ObjectRoot becoming Foo in the generated home from the descriptions of get_created_objects, get_modified_objects, and get_deleted_objects.
It is already stated with the undefined bit… See XXX
Solution 12: Replace:
o obtain from a CacheBase a (typed) list of all objects that match the type of the selected ObjectHome and that have been created, modified or deleted during the last refresh operation (get_created_objects, get_modified_objects and get_deleted_objects respectively). The type ObjectRoot[ ] will be substituted by a type Foo[ ] in a FooHome.
With:
o obtain from a CacheBase a (typed) list of all objects that match the type of the selected ObjectHome and that have been created, modified or deleted during the last refresh operation (get_created_objects, get_modified_objects and get_deleted_objects respectively).
In section 3.1.6.3.7 regarding the ObjectHome, clarify some things and remove some typos.
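Pulling the IDL changes of Problems 1 through 12 together, the affected ObjectHome members would read roughly as follows (a consolidation sketch; the declaration style and the string type of name are assumed, everything else comes from the issue text above):
local interface ObjectHome {
    // fully qualified name using IDL separators, e.g. 'test::Foo'
    readonly attribute string name;
    // -1 when the ObjectHome has not yet been registered to any Cache
    readonly attribute long registration_index;
    // returns NULL when no object with the given oid exists in the source
    ObjectRoot find_object (
        in DLRLOid oid,
        in CacheBase source);
    // ... remaining attributes and operations unchanged
};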
Problem: In the description for the check_object method it should state that the membership_state is optional in the sense that implementations of the spec may or may not use this parameter. If not used it always states an UNDEFINED_MEMBERSHIP state.
Solution: Replace:
o check if an object passes the filter - return value is TRUE - or not - return value is FALSE (check_object). This method is called with the first parameter set to the object to be checked and the second parameter set to indicate whether the object previously passed the filter (membership_state). The second parameter (which is actually an enumeration with three possible values - UNDEFINED_MEMBERSHIP, ALREADY_MEMBER and NOT_MEMBER) is useful when the FilterCriterion is attached to a Selection to allow writing optimized filters.
With:
o check if an object passes the filter - return value is TRUE - or not - return value is FALSE (check_object). This method is called with the first parameter set to the object to be checked and the second parameter set to indicate whether the object previously passed the filter (membership_state). The second parameter (which is actually an enumeration with three possible values - UNDEFINED_MEMBERSHIP, ALREADY_MEMBER and NOT_MEMBER) is optional and may be useful when the FilterCriterion is attached to a Selection to allow writing optimized filters. The membership_state parameter has a default value of UNDEFINED_MEMBERSHIP in case the implementation does not support this option.
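In IDL terms the operation discussed above would have roughly the following shape (a sketch; the enumeration values and the names check_object and membership_state come from the description above, while the enumeration name, the declaration style and the base interface are assumed):
enum MembershipState {
    UNDEFINED_MEMBERSHIP,
    ALREADY_MEMBER,
    NOT_MEMBER
};
local interface FilterCriterion : SelectionCriterion {
    // membership_state defaults to UNDEFINED_MEMBERSHIP when the
    // implementation does not support this optional parameter
    boolean check_object (
        in ObjectRoot an_object,
        in MembershipState membership_state);
};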
Problem: For each attribute a getter, setter and is_<attribute>_modified (in case of non-keyfields) operation is to be generated in the derived class. This should be made more explicit by adding the operations to the table contents on page 3-34. They should also be added in the IDL description in section 3.2.1.2 on page 3-51 and in figure 3-4 on page 3-16. Since keyfields determine the identity of a DLRL object, they cannot be changed. That means no is_<attribute>_modified operation needs to be generated for them.
Solution: Add the following to the table on page 3-34:
For each attribute defined on the class:
get_<attribute_name>    <undefined attribute type>
set_<attribute_name>    void
    value    <undefined attribute type>
For each attribute defined on the class which is not a relation to another DLRL object:
is_<attribute_name>_modified    boolean
For each attribute defined on the class which is a relation (mono relation / multi relation) to another DLRL object:
is_<attribute_name>_modified    boolean
    scope    ReferenceScope
Section 3.1.6.3.14: Replace:
· is_<attribute>_modified, to get if this attribute has been modified by means of incoming modifications (cf. method is_modified).
With:
· is_<attribute_name>_modified, to get if this attribute has been modified by means of incoming modifications (cf. method is_modified). Since keyfields cannot be changed by incoming modifications, this operation will not be generated for attributes that represent such keyfields.
Add the following text in the description for valuetype ObjectRoot on page 3-51 in section 3.2.1.2 within the closing bracket of the ObjectRoot valuetype:
/* For each attribute of the application type 'Foo' the following operations
 * will be generated:
 *
 * <attribute_type> get_<attribute_name>();
 * void set_<attribute_name>(<attribute_type> value);
 *
 * If the attribute is a MonoAttribute or MultiAttribute the following operation will
 * be generated:
 *
 * boolean is_<attribute_name>_modified();
 *
 * If the attribute is a MonoRelation or MultiRelation the following operation will
 * be generated:
 *
 * boolean is_<attribute_name>_modified(ReferenceScope scope);
 */
Add in figure 3-4 on page 3-16 to the ObjectRoot class operations listing the following operations:
get_<attribute_name>()
set_<attribute_name>()
is_<attribute_name>_modified()
The getter/setter/is_modified operations for attributes and relations should be added to the table listing on page 3-34 and in figure 3-4 on page 3-16
Problem: It should be clarified that a destroyed ObjectRoot is only removed from the CacheAccess after the write() operation has been performed. The ObjectRoot will produce AlreadyDeleted exceptions only after that time.
Solution: Replace:
o mark the object for destruction (destroy), to be executed during a write operation. If the object is not located in a writeable CacheAccess, a PreconditionNotMet is raised.
With:
o mark the object for destruction (destroy), to be executed during a write operation on the owning CacheAccess. After the write operation has been completed the object will be removed from the CacheAccess and subsequent calls to operations on the object may result in AlreadyDeleted exceptions being raised. If the object is not located in a writeable CacheAccess, a PreconditionNotMet is raised.
Problem: Firstly, all references to primary and secondary (or clone) objects should be replaced with cache and cacheaccess objects respectively. Furthermore the usage of the is_modified operation is unclear: it states that it returns false when the read_state is NEW, supposedly because nothing is modified (it is the first time the data is seen). In that light we can only wonder what is_modified should return when the DLRL receives a new sample for the underlying topics of the Object but all the attributes have the exact same value (although the real compare is more complex, as some attributes are actually foreign keys, and if a new generation of a relation appeared then the attribute value may not have changed while the relation represented by the attribute has… but that is more of an implementation issue), or when a dispose is received but no new data sample. In those cases should this operation return false, or true? That is not clear from the current description of the operation.

Basically we see two options for the is_modified operation:
1) true is only returned when one of the is_xxx_modified operations returns true
2) true is only returned if the read_state of the object is something else than NOT_MODIFIED

The first option has some added value, as an application then does not need to call each is_xxx_modified operation to find out that nothing has changed. However, the drawback is that the read_state may be MODIFIED or DELETED while the is_modified operation still indicates nothing has changed. This forces an application to always use the read_state() operation along with the is_modified operation, never being able to use just one or the other to determine whether something changed at the object (because an object becoming new or disposed is usually something an application desires to know). This approach also makes the is_modified operation rather complex, as its implementation has to evaluate each mono attribute by value, and each relation by value AND by reference (by value is needed if it was NotFound before and after an update round, and by reference is needed in case of new generations). This all turns the is_modified operation from a seemingly lightweight operation into a heavyweight one.

Furthermore, if we look at the added benefit of the is_modified operation, it is that we can provide a scope, asking it to check the related/contained objects as well for modifications. To us that is the real advantage of this operation over simply getting the read_state of the object. And that is why we prefer option 2, only returning true if the read_state() is something else than NOT_MODIFIED. Then an application can simply use the is_modified operation to determine what it wants to do with the object: whether it is new, deleted, modified, or one of its related objects has something modified, one call will tell the application whatever it needs to know. If the application does not require modification info on related objects, then it can just use the read_state or the is_modified with a SIMPLE_OBJECT_SCOPE.

Solution: Replace:
o see if the object has been modified by incoming modifications (is_modified). is_modified takes as parameter the scope of the request (i.e., only the object contents, the object and its component objects, the object and all its related objects). In case the object is newly created, this operation returns FALSE; 'incoming modifications' should be understood differently for a primary object and for a clone object.
o For a primary object, they refer to incoming updates (i.e., coming from the infrastructure).
o For a secondary object (cloned), they refer to the modifications applied to the object by the last CacheAccess::refresh operation.
With:
o see if the object has been updated in the current update round (is_modified). is_modified takes as parameter the scope of the request, i.e., only the object contents (SIMPLE_OBJECT_SCOPE), the object contents and its composed objects contents (CONTAINED_OBJECTS_SCOPE, unlimited depth), or the object contents, its composed objects contents and all its related (non-composed) objects contents (RELATED_OBJECTS_SCOPE, depth of 1 for related objects and unlimited depth for contained objects). Incoming modifications should be understood differently for a cache object and for a cacheaccess object.
o For a cache object, they refer to incoming updates (i.e., coming from the infrastructure).
o For a cacheaccess object that is cloned from a cache object, they refer to the modifications applied to the object by the last CacheAccess::refresh operation.
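For clarity, the scope parameter discussed above uses the same three scope values as the cloning contracts; a sketch of the resulting signature (the enumeration name ObjectScope is assumed here, the values come from the text above):
enum ObjectScope {
    SIMPLE_OBJECT_SCOPE,
    CONTAINED_OBJECTS_SCOPE,
    RELATED_OBJECTS_SCOPE
};

// on ObjectRoot; per option 2 above, returns TRUE if and only if the
// read_state of an object within the requested scope is something
// other than NOT_MODIFIED
boolean is_modified (in ObjectScope scope);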
Problem: The descriptions of the getter, setter and is_xxx_modified operations for attributes of a DLRL object are not detailed enough. An exception listing should be added, stating for what type of attributes each exception is valid and under which circumstances.
Exceptions for get_<attribute_name> operations:
· DCPSError (which should be made a 'runtime' exception, to prevent a nasty catch clause being needed for each getter!)
  o For all (shared) attributes:
    § a DCPSError if some error happened in DCPS (state might need to be fetched from DCPS, if the object was not dereffed).
· NotFound
  o For mono relation shared attributes:
    § a NotFound exception if the related attribute could not be located.
Exceptions for set_<attribute_name> operations:
· PreconditionNotMet
  o For all (shared) attributes:
    § the object is not in a (writeable) cacheaccess.
  o For any (shared) attribute that is a key field (predefined mapping only):
    § the object is already registered (i.e. its identity may not be changed anymore).
  o For mono relation shared attributes:
    § if the value of the parameter is NIL, but the relation was modeled as a mandatory relation (see XXX).
    § if the object in the parameter has different keys than the owner object and the relation is mapped using so-called 'shared' keys.
  o For mono/multi relation shared attributes:
    § the 'owning' object (or owner of the collection if multi relation) is not yet registered.
    § the 'target' object (or owner of the collection if multi relation) is not yet registered.
    § if the object (or owner of the collection if multi relation) represented by the parameter has already been deleted (this does not include marked for destruction!).
    § if the object (or owner of the collection if multi relation) in the parameter is not defined in the scope of the same CacheAccess.
  o For multi relation shared attributes:
    § if the parameter value is NIL.
Furthermore the descriptions should also be augmented. The setter description should state that a setter for a collection type basically clears the entire contents of the collection of the 'owner' object and then copies in the contents of the collection in the parameter, making the setter for a collection work like a clear of the contents of the 'old' collection followed by an element-for-element add of the elements of the other collection. The is_xxx_modified operation description should also state that it takes a ReferenceScope as parameter for (mono/multi) relations: SIMPLE_CONTENT_SCOPE only takes the reference to the related object into account (for multi relations this means the elements in the collection, not the multi relation object (pointer) itself, which should never change during the life cycle of the owning object!) and REFERENCED_CONTENTS_SCOPE takes the reference to the related object into account as well as the contents of the object.
Solution: TBD
The descriptions for the get_<attribute_name>, set_<attribute_name> and is_<attribute_name>_modified operations are not detailed enough.
Problem: The text following the operation descriptions of the ObjectRoot on page 3-35 talks about state transitions taking place between the start of an update round and the end of an update round. This is confusing. It should state that state transitions are applied directly following the start of an update round and are cleared directly following the end of an update round.
Solution: Replace:
A Cache Object represents the global system state. It has a read_state whose transitions represent the updates as they are received by the DCPS. Since Cache Objects cannot be modified locally, they have no corresponding write_state (i.e. their write_state is set to VOID). State transitions occur between the start of an update round and the end of of an update round. When in automatic updates mode, the start of the update round is signaled by the invocation of the on_begin_updates callback of the CacheListener, while the end of an update round is signaled by the invocation of the on_end_updates callback of the CacheListener. When in manual update mode, the start of an update round is defined as the start of a refresh operation, while the end of an update round is defined as the invocation of the next refresh operation.
With:
A Cache Object represents the global system state. It has a read_state whose transitions represent the updates as they are received by the DCPS. Since Cache Objects cannot be modified locally, they have no corresponding write_state (i.e. their write_state is set to VOID). State transitions are applied directly following the start of an update round and are cleared directly following the end of an update round. When in automatic updates mode, the start of the update round is signaled by the invocation of the on_begin_updates callback of the CacheListener, while the end of an update round is signaled by the invocation of the on_end_updates callback of the CacheListener. When in manual update mode, the start of an update round is defined as the start of a refresh operation, while the end of an update round is defined as the invocation of the next refresh operation.
Problem: The add/put operations on the List in section 3.1.6.3.16, the add/remove operations on the Set in section 3.1.6.3.17, the put operation on the StrMap in section 3.1.6.3.18 and the put operation on the IntMap in section 3.1.6.3.19 should mention they raise a PreconditionNotMet if:
- The owner ObjectRoot of the Collection is not contained within a (writeable) CacheAccess
- The owner ObjectRoot has not yet been registered (i.e. has no identity)
- Value is NIL
- In case the Collection represents a MultiRelation (instead of a MultiAttribute):
  o The target (to be added) ObjectRoot has not yet been registered (i.e. has no identity)
  o The target (to be added) ObjectRoot is contained within a different CacheAccess or no CacheAccess
  o The target (to be added) ObjectRoot has already been deleted (different than marked for destruction!)
The put operation of the List should also state that a PreconditionNotMet is raised if the index provided is smaller than 0 or larger than the length of the list.
In the IDL description in section 3.2.1.2 on pages 3-55 and 3-56 add the raises clauses to the operations.
For the List valuetype (also fix the type of the first parameter of operation put, which takes an index, not a key as param) replace:
void add( in ObjectRoot value );
void put( in long key, in ObjectRoot value );
With:
void add( in ObjectRoot value ) raises (PreconditionNotMet);
void put( in long index, in ObjectRoot value ) raises (PreconditionNotMet);
For the Set valuetype:
Replace:
void add( ObjectRoot value );
void remove( ObjectRoot value );
With:
void add( in ObjectRoot value ) raises (PreconditionNotMet);
void remove( in ObjectRoot value ) raises (PreconditionNotMet);
For the StrMap valuetype:
Replace:
void put( in string key, in ObjectRoot value );
With:
void put( in string key, in ObjectRoot value ) raises (PreconditionNotMet);
For the IntMap valuetype:
Replace:
void put( in long key, in ObjectRoot value );
With:
void put( in long key, in ObjectRoot value ) raises (PreconditionNotMet);
In section 3.2.1.2.2 Implied IDL on pages 3-61 and 3-62:
Replace for the FooList:
void add( in Foo value );
void put( in long key, in Foo value );
With:
void add( in Foo value ) raises (PreconditionNotMet);
void put( in long index, in Foo value ) raises (PreconditionNotMet);
Replace for the FooSet:
void add( in Foo value );
void remove( in Foo value );
With:
void add( in Foo value ) raises (PreconditionNotMet);
void remove( in Foo value ) raises (PreconditionNotMet);
Replace for the FooStrMap:
void put( in string key, in Foo value );
With:
void put( in string key, in Foo value ) raises (PreconditionNotMet);
Replace for the FooIntMap:
void put( in long key, in Foo value );
With:
void put( in long key, in Foo value ) raises (PreconditionNotMet);
Solution: TBD
Clarify exceptions for the add/put operations on the List in section 3.1.6.3.16. Also for the add/remove operations on the Set in section 3.1.6.3.17. Also for the put operation on the StrMap in section 3.1.6.3.18. Also for the put operation on the IntMap in section 3.1.6.3.19.
Problem: The get operation on the List, StrMap and IntMap classes should mention that a NoSuchElement is raised if the List/IntMap/StrMap does not contain an element for the index/key specified with the get operation.
In the IDL description in section 3.2.1.2 on pages 3-55 and 3-56 add the raises clauses to the operations.
For the List valuetype (also fix the type of the first parameter of operation get, which takes an index, not a key as param) replace:
ObjectRoot get( in long key );
With:
ObjectRoot get( in long index ) raises (NoSuchElement);
For the StrMap valuetype:
Replace:
ObjectRoot get( in string key );
With:
ObjectRoot get( in string key ) raises (NoSuchElement);
For the IntMap valuetype:
Replace:
ObjectRoot get( in long key );
With:
ObjectRoot get( in long key ) raises (NoSuchElement);
In section 3.2.1.2.2 Implied IDL on pages 3-61 and 3-62:
For the FooList (also fix the type of the first parameter of operation get, which takes an index, not a key as param) replace:
Foo get( in long key );
With:
Foo get( in long index ) raises (NoSuchElement);
For the FooStrMap:
Replace:
Foo get( in string key );
With:
Foo get( in string key ) raises (NoSuchElement);
For the FooIntMap:
Replace:
Foo get( in long key );
With:
Foo get( in long key ) raises (NoSuchElement);
Solution: TBD
Clarify exceptions for the get operations on the List in section 3.1.6.3.16, the StrMap in section 3.1.6.3.18 and the IntMap in section 3.1.6.3.19.
Problem: In section 3.1.6.4 it should state that listeners are only triggered if the related Cache has updates_enabled returning true; that is not clear now from the text.
Solution: Replace (on page 3-41):
As described in Section 3.1.6.2, "DLRL Entities," on page 3-15, there are three kinds of listeners that the application developer may implement and attach to DLRL entities: CacheListener, ObjectListener, and SelectionListener. All these listeners are a means for the application to attach specific application code to the arrival of some events. They are therefore only concerned with incoming information.
With:
As described in Section 3.1.6.2, "DLRL Entities," on page 3-15, there are three kinds of listeners that the application developer may implement and attach to DLRL entities: CacheListener, ObjectListener, and SelectionListener. All these listeners are a means for the application to attach specific application code to the arrival of some events. They are therefore only concerned with incoming information. Listeners are only triggered if the related Cache has updates_enabled returning true, with the exception of the operations modifying the result of the updates_enabled operation (disable_updates/enable_updates).
Clarify that listeners are only triggered if the related Cache has updates_enabled returning true. See section 3.1.6.4.
Problem: It is currently not clear that the modification states are cleared after the last call to the CacheListener (multiple cache listeners may be registered!). This should be clarified.
Solution: Replace:
o Finally all the CacheListener::end_updates operations are triggered and the modification states of the updated objects is cleaned; the order in which these listeners are triggered is not specified.
With:
o Finally all the CacheListener::end_updates operations are triggered and the modification states of the updated objects are cleaned after the last CacheListener::end_updates has been triggered; the order in which these listeners are triggered is not specified.
Clarify that after the (last) end_updates call on CacheListeners all modification information is cleared (section 3.1.6.4.1, last bullet).
Problem: Section 3.1.6.5.1 states the typical scenario for a CacheAccess in read mode. It would be nice if it were expanded with a step between 4 and 5 stating that, if desired, new contracts can be created, existing contracts can be modified or deleted, and step 3 can be repeated. Solution: TBD
Problem: Section 3.1.6.5.2 still talks about the clone_object operation, neglects to talk about the create_unregistered_object/register_object operations, and in step 5 it talks about modifying "the attached" (attached what?!); it probably should say objects. Also between step 6 and 7 a step should be added that allows new contracts to be created and existing contracts to be changed or deleted before repeating step 3.
Solution: Replace:
3.1.6.5.2 Write Mode
The typical scenario for write mode is as follows:
1. Create the CacheAccess for write purpose (Cache::create_access).
2. Clone some objects in it (ObjectRoot::clone or clone_object).
3. Refresh them (CacheAccess::refresh).
4. If needed create new ones for that CacheAccess (ObjectHome::create_object).
5. Modify the attached (plain access to the objects).
6. Write the modifications into the underlying infrastructure (CacheAccess::write).
7. Purge the cache (CacheAccess::purge); step 2 can be started again.
8. Eventually, delete the CacheAccess (Cache::delete_access).
With:
3.1.6.5.2 Write Mode
The typical scenario for write mode is as follows:
1. Create the CacheAccess for write purpose (Cache::create_access).
2. Attach some cloning contracts to it (CacheAccess::create_contract).
3. Execute these contracts (CacheAccess::refresh).
4. If needed create new objects for that CacheAccess (ObjectHome::create_object or ObjectHome::create_unregistered_object followed by ObjectHome::register_object).
5. Modify the objects (plain access to the objects).
6. Write the modifications into the underlying infrastructure (CacheAccess::write).
7. If needed create new contracts, delete/change existing contracts and then go to step 3 again.
8. Purge the cache (CacheAccess::purge); step 2 can be started again.
9. Eventually, delete the CacheAccess (Cache::delete_access).
Clarify the typical scenario for write mode of a CacheAccess in section 3.1.6.5.2 and fix typos.
Problem: In section 3.1.6.3.9 regarding the Selection the refresh operation needs to be clarified, i.e. the behavior if the Selection was created with auto_refresh set to true (we suggest making it a no-op).
Solution: On page 3-31 in section 3.1.6.3.9 replace:
o request that the Selection updates its members (refresh).
With:
o request that the Selection updates its members (refresh). If the Selection was created with auto_refresh set to true, then this operation is considered a no-op.
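A sketch of the relevant Selection fragment in IDL (the member names come from the PIM; the declaration style and the surrounding members are assumed):
local interface Selection {
    readonly attribute boolean auto_refresh;
    // considered a no-op when auto_refresh is true (per this issue)
    void refresh ();
    // ... criterion, members, listener, etc. unchanged
};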
Problem:
In the implied IDL section 3.2.1.2.2 on page 3-59 it shows what classes/operations are generated for a fictional type Foo. However, this example is too simplistic; it would be helpful to extend the example for valuetype Foo so that the valuetype contains one simple attribute, one simple keyfield attribute, one mono relation and one multi relation, as an example of what methods are generated because of such attributes.
Our suggestion is to add the following attributes to the Foo class:
public long x; //keyfield of the underlying topic
public long y;
public Bar a_bar;
public BarSet bars;
The Bar class itself is left out of consideration for the example.
Solution:
Replace:
This section contains the implied IDL constructs for an application-defined class named
Foo.
#include "dds_dlrl.idl"
valuetype Foo: DDS::ObjectRoot {
// some attributes and methods
};
With:
This section contains the implied IDL constructs for an application-defined class named
Foo. For example purposes several attributes are defined on Foo. Namely:
· public long x (keyfield of the underlying Foo topic)
· public long y (a regular field)
· public Bar a_bar (mono relation to a Bar DLRL object)
· public BarSet bars (multi relation of Bar DLRL objects)
The related Bar classes are not worked out in the implied IDL, but just mentioned as forward valuetype definitions.
#include "dds_dlrl.idl"
//forward declarations of Bar and its helper classes. Bar itself is not worked out
//in the implied IDL
valuetype Bar;
valuetype BarSet;
valuetype Foo: DDS::ObjectRoot {
//getter methods for all attributes
long get_x();
long get_y();
Bar get_a_bar() raises (DDS::NotFound);
BarSet get_bars();
//setter methods for all attributes
void set_x(long val) raises (DDS::PreconditionNotMet);
void set_y(long val) raises (DDS::PreconditionNotMet);
void set_a_bar(Bar val) raises (DDS::PreconditionNotMet);
void set_bars(BarSet val) raises (DDS::PreconditionNotMet);
//is_xxx_modified methods for all attributes
boolean is_x_modified();
boolean is_y_modified();
boolean is_a_bar_modified(DDS::ReferenceScope scope);
boolean is_bars_modified(DDS::ReferenceScope scope);
};
Problem:
In section 3.2.1.2.2 Implied IDL add directly after the definition of the Foo valuetype:
valuetype FooImpl : Foo {
//place for the application to implement application-defined operations
//(operation signatures are known at Foo level!)
};
Solution:
TBD
Everywhere the word undefined is used it is not clear what it means, and the word may be used not only for Foo types but also for FooSelection classes, which makes it unclear what the word undefined stands for in each context. We suggest putting undefined between '<' and '>' and stating after it the type it stands for, which makes:
<undefined> for Foo
<undefined>Selection for FooSelection
etc.
Problem: The remove operation should do nothing if the collection did not contain an element with the specified key. The remove operation on the collections (List, StrMap, IntMap, but NOT Set) should mention it raises a PreconditionNotMet if:
- The owner ObjectRoot of the collection is not contained within a (writeable) CacheAccess
- The owner ObjectRoot has not yet been registered (i.e. has no identity)
This should also be fixed in the IDL descriptions (normal and implied).
In the section 3.2.1.2 IDL description:
For the List replace:
void remove( );
With:
void remove( ) raises (PreconditionNotMet);
For the StrMap valuetype:
Replace:
void remove( in string key );
With:
void remove( in string key ) raises (PreconditionNotMet);
For the IntMap valuetype:
Replace:
void remove( in long key );
With:
void remove( in long key ) raises (PreconditionNotMet);
Solution: TBD
Clarify exceptions and usage for the remove operation on the List in section 3.1.6.3.16. Also for the remove operation on the StrMap in section 3.1.6.3.18. Also for the remove operation on the IntMap in section 3.1.6.3.19.
Problem: The XML as described in section 3.2.2.3 and its subsections has a problem in relation to IDL modules. How, for example, should two valuetypes named Foo which are defined in different modules be represented? It is currently not clear how this should be done. It is our suggestion to clarify that fully qualified names (in the IDL sense) should be used, where a valuetype Foo in module Test would be referred to as "Test::Foo". The subsections of section 3.2.2.3 should state for each element attribute whether fully qualified names must be used.
Solution: In section 3.2.2.3 replace:
Model tags are specified by means of XML declarations that must be compliant with the DTD listed in the following section; subsequent sections give details on the constructs.
With:
Model tags are specified by means of XML declarations that must be compliant with the DTD listed in the following section; subsequent sections give details on the constructs. The elements in the DTD often have attributes which give the name of IDL-defined entities. To correctly identify such entities, fully qualified names (in the IDL sense) should be used as much as possible. For example a valuetype Foo in module Test would be referred to as "Test::Foo".
In section 3.2.2.3.2.2 EnumDef replace:
This tag contains an attribute name (scoped name of the IDL enumeration) and as many value sub-tags that needed to give values.
With:
This tag contains an attribute name (fully qualified name of the IDL enumeration) and as many value sub-tags as needed to give the values.
In section 3.2.2.3.2.3 TemplateDef replace:
This tag contains three attributes:
o name - gives the scoped name of the type.
o pattern - gives the construct pattern. The supported constructs are: List, StrMap, IntMap, and Set.
o itemType - gives the type of each element in the collection.
With:
This tag contains three attributes:
o name - gives the fully qualified name of the type.
o pattern - gives the construct pattern. The supported constructs are: List, StrMap, IntMap, and Set.
o itemType - gives the fully qualified type name of each element in the collection.
In section 3.2.2.3.2.4 AssociationDef replace:
o class - contains the scoped name of the class.
With:
o class - contains the fully qualified name of the class.
In section 3.2.2.3.2.5 compoRelationDef replace:
o class - contains the scoped name of the class.
With:
o class - contains the fully qualified name of the class.
In section 3.2.2.3.2.6 ClassMapping replace:
This tag contains one attribute name that gives the scoped name of the class and:
With:
This tag contains one attribute name that gives the fully qualified name of the class and:
In section 3.2.2.3.2.7 MainTopic replace:
This tag gives the main DCPS Topic, to which that class refers. The main Topic is the topic that gives the existence of a object (an object is declared as existing if, and only if, there is an instance in that Topic matching its key value. It comprises one attribute (name) that gives the name of the Topic, one (optional) attribute (typename) that gives the name of the type (if this attribute is not supplied the type name is considered to be equal to the topic name) and:
With:
This tag gives the main DCPS Topic, to which that class refers. The main Topic is the topic that gives the existence of an object (an object is declared as existing if, and only if, there is an instance in that Topic matching its key value).
It comprises one attribute (name) that gives the name of the Topic (must adhere to topic naming rules, see section B-3 regarding the TOPICNAME), one (optional) attribute (typename) that gives the fully qualified name of the type (if this attribute is not supplied the type name is considered to be equal to the topic name) and:
Problem: Section 3.1.3.1 on page 3-3 states that a simple type may be a 'simple structure'; a footnote explains that a simple structure is a structure that can be mapped inside one DCPS data. But this is still very unclear. We would like to clarify what a simple structure is by changing the footnote to state: "A simple structure is a structure that only contains members of simple type". This must also be changed on page 3-5, in the last line on the page. Furthermore, the union type should also be added to the list of simple types on page 3-3.
Clarify the term 'simple structure' and add the union type to the listing of simple types at the top of page 3-3 in section 3.1.3.1.
Problem: The sentence "Even if an object is not changeable by several threads at the same time, there is a need to manage concurrent threads of modifications in a consistent manner." is very unclear as to what is meant. This should be clarified. Solution: ?
Problem: For clarification purposes the generic IDL on page 3-59 in section 3.2.1.2.1 regarding the CacheFactory should mention the get_instance operation, in comments of course.
Solution: Add the following code to the CacheFactory local interface definition:
/* To be implemented as a static operation in the implementation:
 *
 * CacheFactory get_instance();
 */
Section 3.2.1.2.1 Generic DLRL Entities: get_instance should be mentioned with the CacheFactory.
If we look at the state diagram of the read_state of the cache object, we can see several typos. With typos we mean transitions that are impossible (when reading other parts of the spec). The specification states that the DLRL works with update rounds: after the start of an update round, updates are processed and applied onto objects, which may lead to state changes. Once the update round ends, this information is cleared again. This last statement is important, because if at the end of an update round the 'modification info' is cleared, then the read_state would return to NOT_MODIFIED after the end-of-updates signal (assuming it was NEW or MODIFIED), or the object would be garbage collected if the state was DELETED. But this means there is no state transition between the NEW and MODIFIED states, nor between the MODIFIED and DELETED states. There is also no transition from NEW to DELETED. These three transitions should thus be removed.

Another 'typo' is that one transition is missing, namely from the start point directly to the DELETED state. It might happen that the DLRL detects a new object which is already disposed, and this is not something that can be ignored! This situation happens when a sample is read which has the following states (on DCPS level): view_state = NEW, sample_state = NOT_READ, instance_state = NOT_ALIVE_DISPOSED. In this case the DLRL cannot give this sample the state NEW, as it is disposed in this update round and thus should not be shown as new, as that is an incorrect view of things; the only way to go is to show it as DELETED, since ignoring such information is not a choice the DLRL can make. So this transition must be added. These changes produce the following diagram:

The things said for the read_state of a cache object can also be said for the read_state of a cacheaccess object. An additional item missing is that the read_state of a cacheaccess object in read_write mode can also go from start to VOID and then to end, in case of a local creation and then destruction through a write. This was not correctly mentioned; this also means the read_state diagram is valid for any usage of the cacheaccess, and the accompanying text must be changed for this as well. The read_state of the cacheaccess object is thus basically the same as that of the cache object, except that the VOID state and its transitions are added and some transitions have different descriptions.
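The corrected diagram itself is not reproduced here; spelled out from the changes described above, the read_state transitions of a cache object would be (a textual reconstruction, not spec text):
start -> NEW               (object detected for the first time)
start -> DELETED           (object detected that is already disposed; to be added)
NOT_MODIFIED -> MODIFIED   (update applied during an update round)
NOT_MODIFIED -> DELETED    (dispose applied during an update round)
NEW -> NOT_MODIFIED        (end of the update round clears the modification info)
MODIFIED -> NOT_MODIFIED   (end of the update round clears the modification info)
DELETED -> end             (object is garbage collected after the update round)
Removed: NEW -> MODIFIED, MODIFIED -> DELETED, NEW -> DELETED.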
Problem: In Section 3.1.6.3.7 of the ObjectHome, there are descriptions for the create_object and the create_unregistered_object methods. They fail to mention that create_object may not be used for classes that have a keyDescription of "NoOid", and that create_unregistered_object may not be used for classes that have a keyDescription of "FullOid" or "SimpleOid". A PreconditionNotMet should be thrown in these cases.
Solution: Section 3.1.6.3.7 on page 3-28
Replace:
· create a new DLRL object (create_object). This operation takes as parameter the CacheAccess concerned by the creation. The following preconditions must be met: the Cache must be set to the DCPS State of ENABLED, and the supplied CacheAccess must writeable. Not satisfying either precondition will raise a PreconditionNotMet.
With:
· create a new DLRL object (create_object). This operation takes as parameter the CacheAccess concerned by the creation. The following preconditions must be met: the Cache must be set to the DCPS State of ENABLED, and the supplied CacheAccess must be writeable. Furthermore, this ObjectHome may not manage any topics that have their keyDescription set to "NoOid" in the XML mapping file. Not satisfying all these preconditions will raise a PreconditionNotMet.
Section 3.1.6.3.7 on page 3-29
Replace:
· pre-create a new DLRL object in order to fill its content before the allocation of the oid (create_unregistered_object); this method takes as parameter the CacheAccess concerned with this operation. The following preconditions must be met: the Cache must be set to the DCPS State of ENABLED, and the supplied CacheAccess must writeable. Not satisfying either precondition will raise a PreconditionNotMet.
With:
· pre-create a new DLRL object in order to fill its content before the allocation of the oid (create_unregistered_object); this method takes as parameter the CacheAccess concerned with this operation. The following preconditions must be met: the Cache must be set to the DCPS State of ENABLED, and the supplied CacheAccess must be writeable. Furthermore, this ObjectHome may only manage topics that have their keyDescription set to "NoOid" in the XML mapping file. Not satisfying all these preconditions will raise a PreconditionNotMet.
Clearly separate default mapping from pre-defined mapping with respect to object creation.
Problem: It is not clear whether it is allowed to create a CacheAccess when the Cache is not yet in enabled mode. Since the CacheAccess is not usable until the Cache is enabled, it makes sense not to allow the creation of a CacheAccess in that case.
Solution: Make clear that a PreconditionNotMet is raised when a CacheAccess is created in a Cache that is not yet enabled.
Section 3.1.6.3.4
Replace:
The purpose of the CacheAccess must be compatible with the usage mode of the Cache: only a Cache that is write-enabled can create a CacheAccess that allows writing. Violating this rule will raise a PreconditionNotMet:
With:
The Cache must have its pubsub_state set to ENABLED before it is allowed to create a CacheAccess. Furthermore, the purpose of the CacheAccess must be compatible with the usage mode of the Cache: only a Cache that is write-enabled can create a CacheAccess that allows writing. Violating any of these rules will raise a PreconditionNotMet.
Cache shall throw a PreconditionNotMet when trying to create a CacheAccess while its pubsub_state is not yet ENABLED (i.e. before enable_all_for_pubsub).
Problem: It is not clear what will happen when someone tries to obtain or remove a non-existing element from a Collection type (i.e. uses the get/remove operation with a non-existing key/index). The NoSuchElement exception was meant for that purpose.
Solution: Explicitly mention the situations in which the NoSuchElement exception will be raised.
Section 3.1.6.3.16
Replace:
· "remove - to remove the item with the highest index from the collection.
· "get - to retrieve an item in the collection (based on its index).
With:
· remove - to remove the item with the highest index from the collection. If the List is already empty, a NoSuchElement is raised.
· get - to retrieve an item in the collection (based on its index). If the specified index is greater than or equal to the length of the List, a NoSuchElement is raised.
Section 3.1.6.3.17
Replace:
· "remove - to remove an element from the Set. If the specified element is not contained in the Set, the operation is ignored.
With:
· remove - to remove an element from the Set. If the specified element is not contained in the Set, a NoSuchElement is raised.
Section 3.1.6.3.18 AND 3.1.6.3.19
Replace:
· "remove - to remove an item from the collection.
· "get - to retrieve an item in the collection (based on its key).
With:
· remove - to remove an item from the collection. If no item matches the specified key, a NoSuchElement is raised.
· get - to retrieve an item in the collection (based on its key). If no item matches the specified key, a NoSuchElement is raised.
When accessing or removing non-existing elements in Collections a NoSuchElement should be thrown.
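Combining the raises clauses proposed here with those of the earlier collection issues, the List valuetype would end up roughly as follows (a consolidation sketch, not text proposed by any single issue; note that remove here follows this issue's NoSuchElement behavior rather than the earlier 'do nothing' suggestion):
valuetype List {
    // other members (e.g. the length attribute) unchanged
    void add( in ObjectRoot value ) raises (PreconditionNotMet);
    void put( in long index, in ObjectRoot value ) raises (PreconditionNotMet);
    ObjectRoot get( in long index ) raises (NoSuchElement);
    // removes the item with the highest index
    void remove( ) raises (PreconditionNotMet, NoSuchElement);
};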
Problem: It is not clear what the read_state and write_state of an unregistered object should be.
Solution: Set both states to VOID until the object gets registered.
Section 3.1.6.3.7
Replace:
· pre-create a new DLRL object in order to fill its content before the allocation of the oid (create_unregistered_object); this method takes as parameter the CacheAccess concerned with this operation. The following preconditions must be met: the Cache must be set to the DCPS State of ENABLED, and the supplied CacheAccess must writeable. Not satisfying either precondition will raise a PreconditionNotMet.
With:
· pre-create a new DLRL object in order to fill its content before the allocation of the oid (create_unregistered_object); this method takes as parameter the CacheAccess concerned with this operation. The following preconditions must be met: the Cache must be set to the DCPS State of ENABLED, and the supplied CacheAccess must be writeable. Not satisfying either precondition will raise a PreconditionNotMet. An unregistered object has both its read_state and its write_state set to VOID, and may only be used to assign a unique combination of key-values to it. The object should be registered before anything else can be done with it.
Unregistered objects should have their READ_STATE and WRITE_STATE set to VOID.
Problem: In section 3.1.6.3.8 it is described how events that are not handled by child objects are propagated to their parent objects. It is clearly described that when the callback function returns TRUE, the event will not be propagated to the parent Listener, otherwise it will. However, what should happen when two listeners are attached to the same Home and one of them returns TRUE and the other one returns FALSE?
Solution: Clearly state that as long as at least one of the Listeners returns FALSE, the event will be propagated to the parent listener.
Section 3.1.6.3.8
Replace:
Each of these methods must return a boolean. TRUE means that the event has been fully taken into account and therefore does not need to be propagated to other ObjectListener objects (of parent classes).
With:
Each of these methods must return a boolean. TRUE means that the event has been fully taken into account and therefore does not need to be propagated to other ObjectListener objects (of parent classes). In case of multiple listeners being attached to the same ObjectHome: as long as one or more of these listeners return FALSE, the event will be propagated to the parent classes, regardless of the number of listeners that return TRUE.
Clarify the propagation of events to parent listeners when multiple listeners are attached to the same ObjectHome.
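For context, a sketch of the boolean-returning listener shape the rule above applies to (the callback names here are assumed for illustration; the normative operation list is in section 3.1.6.3.8):
local interface ObjectListener {
    // TRUE  = event fully taken into account, no propagation needed;
    // FALSE = propagate to the listeners of the parent class. With several
    // listeners on one ObjectHome, the event propagates as long as at least
    // one of them returned FALSE.
    boolean on_object_created (in ObjectRoot the_object);
    boolean on_object_modified (in ObjectRoot the_object);
    boolean on_object_deleted (in ObjectRoot the_object);
};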
Problem: Clearly state that it is not allowed to directly access samples from the underlying DataReaders of a Cache Subscriber. Doing so might modify the status of those samples (for example their view_state and sample_state), potentially corrupting the Cache state as well. Solution: TBD.
Indicate that it is not allowed to directly access samples from the underlying DataReaders
Problem: Section 3.2.1.1: change the description of the mapping rules for Exceptions and return values. Currently it states that return values are used for recoverable errors and exceptions for non-recoverable errors. Remove this description: exceptions are used for both cases. Solution: TBD.
Problem:
Several new Exceptions need to be introduced in the exception list on page 3-18:
· BadParameter - To be thrown when an illegal parameter is used for an operation. An illegal parameter is defined as a NIL pointer
· TimeOut - To be thrown during the write operation of the CacheAccess to indicate a time out occurred while trying to write the contents of the CacheAccess.
On page 3-48 in section 3.2.1.2 IDL description the exception list should be expanded to include the following two definitions:
exception BadParameter {string message;};
exception TimeOut {string message;};
Solution:
TBD
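To illustrate where the new TimeOut exception would surface, the write operation of the CacheAccess could be declared as in the following sketch (the existing ReadOnlyMode and DCPSError raises are assumed from section 3.2.1.2; only the addition of TimeOut comes from this issue):
// on the CacheAccess interface:
void write ()
    raises (
        ReadOnlyMode,  // assumed existing: the CacheAccess is not writeable
        DCPSError,     // assumed existing: error reported by DCPS
        TimeOut);      // new: DCPS timed out while writing the contents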
Problem: Currently the DCPSError is used in various operations to indicate an error occurring in DCPS; however, it is implementation specific for which operations DCPS is accessed. For example the getter of a simple attribute needs to access DCPS if the state of the object was not yet dereffed. And because it is up to the implementation to decide where to keep its management information, it is also possible to get DCPS errors on operations for which the specification does not define them. For example a refresh on a Selection or the get_objects on a CacheBase or ObjectHome may raise DCPSErrors in our implementation. To give better flexibility we suggest making the DCPSError exception runtime as well, especially since such exceptions are often not recoverable. In cases where the exception IS recoverable we suggest introducing a specific exception for that case, for example a timeout exception in the write operation of the CacheAccess to deal with the fact that the DCPS DataWriter timed out, as that is recoverable, whereas an error code indicating the DataWriter is deleted is not recoverable.
Solution: Replace (in section 3.2.1.1):
Exceptions in DLRL will be mapped according to the default language mapping rules, except for the AlreadyDeleted exception. Since this exception can be raised on all methods and attributes (which is not possible to specify in IDL versions older than 3.0), it is not explicitly mentioned in the raise clause of each operation. Implementors may choose to map it onto an exception type that does not need to be caught explicitly, simplifying the DLRL code significantly.
With:
Exceptions in DLRL will be mapped according to the default language mapping rules, except for the AlreadyDeleted and DCPSError exceptions. Since these exceptions can be raised on all methods and attributes (which is not possible to specify in IDL versions older than 3.0), they are not explicitly mentioned in the raise clause of each operation. Implementors may choose to map them onto exception types that do not need to be caught explicitly, simplifying the DLRL code significantly.
Problem:
The CacheDescription object is used only within the create_cache operation on the CacheFactory. It is a strange concept: a holder for a name and a participant only. One should simply provide the name and participant as parameters to the create_cache operation, rather than cumbersomely wrapping them in a CacheDescription object.
Solution:
We propose to change the signature of the create_cache operation from
create_cache(CacheUsage cache_usage, CacheDescription description)
to
create_cache(string name, CacheUsage cache_usage, DomainParticipant participant);
Replace (in the table description of the CacheFactory on page 3-19)
create_cache Cache
cache_usage CacheUsage
description CacheDescription
With:
create_cache Cache
name string
cache_usage CacheUsage
participant DomainParticipant
In the text on page 3-19 describing the create_cache operation, change the text regarding the CacheDescription to reflect the change as well.
Replace:
This method takes as a parameter cache_usage, which indicates the future usage of the Cache (namely WRITE_ONLY-no subscription, READ_ONLY-no publication, or READ_WRITE-both modes) and a description of the Cache (at a minimum, this CacheDescription gathers the concerned DomainParticipant as well as a name allocated to the Cache). Depending on the cache_usage a Publisher, a Subscriber, or both will be created for the unique usage of the Cache. These two objects will be attached to the passed DomainParticipant.
With:
This method takes as a parameter name, which represents the name allocated to the Cache; a parameter cache_usage, which indicates the future usage of the Cache (namely WRITE_ONLY-no subscription, READ_ONLY-no publication, or READ_WRITE-both modes); and a parameter participant, which contains the concerned DomainParticipant. Depending on the cache_usage a Publisher, a Subscriber, or both will be created for the unique usage of the Cache. These two objects will be attached to the passed DomainParticipant.
On page 3-59 in section 3.2.1.2 IDL description replace:
/************************************************
* CacheFactory : Factory to create Cache objects
************************************************/
valuetype CacheDescription {
public CacheName name;
public DDS::DomainParticipant domain;
};
local interface CacheFactory {
Cache create_cache (
in CacheUsage cache_usage,
in CacheDescription cache_description)
raises (
DCPSError,
AlreadyExisting);
Cache find_cache_by_name(
in CacheName name);
void delete_cache (
in Cache a_cache);
};
With:
/************************************************
* CacheFactory : Factory to create Cache objects
************************************************/
local interface CacheFactory {
Cache create_cache (
in CacheName name,
in CacheUsage cache_usage,
in DDS::DomainParticipant participant)
raises (
DCPSError,
AlreadyExisting);
Cache find_cache_by_name(
in CacheName name);
void delete_cache (
in Cache a_cache);
};
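A hypothetical usage sketch of the proposed signature, assuming a Java binding in which the application already holds the CacheFactory and the DomainParticipant (the wrapper class and method are illustrative only):

class CacheSetup {
    Cache createMyCache(CacheFactory factory, DDS.DomainParticipant participant) {
        // No CacheDescription wrapper: name and participant are passed directly.
        return factory.create_cache(
                "my_cache",             // name allocated to the Cache
                CacheUsage.READ_WRITE,  // future usage of the Cache
                participant);           // the concerned DomainParticipant
    }
}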
Problem:
Currently it is only possible to retrieve the names identifying the object homes for which a CacheAccess contains objects, but it would be desirable if instead of getting an array of strings you would get an array of indexes. This would allow a more performance-friendly way for applications to jump to the code that handles objects of a specific type in a CacheAccess.
Solution:
Add the following row to the attribute listing of the CacheAccess directly following the row for attribute type_names:
contained_types integer[]
Add the following description to the list of attribute descriptions on page 3-21 directly following the description of attribute type_names:
o A list of indexes that represents the types for which the CacheAccess contains at least one object (contained_types).
In the IDL description on page 3-57 section 3.2.1.2 add the following attribute definition in the interface CacheAccess:
readonly attribute LongSeq contained_types;
In figure 3-4 on page 3-16 add contained_types to the attribute listing of class CacheAccess.
Add attribute to CacheAccess (section 3.1.6.3.3) to get the contained_types based upon the home indexes instead of the home names.
Problem:
It would be convenient (and, for performance reasons, desirable) to get the index of the home associated with an ObjectRoot in one call; this would be especially useful in combination with the contained_types attribute on the CacheAccess.
Solution:
Add the following row to the attribute listing of the ObjectRoot directly following the row for attribute owner:
home_index integer
Add the following description to the list of attribute descriptions on page 3-21 directly following the description of attribute owner:
o the index (home_index) under which its related ObjectHome is registered to the Cache
In the Section 3.2.1.2 IDL description on page 3-51, regarding valuetype ObjectRoot, add the attribute description:
readonly long home_index;
Add an attribute to get the home_index for the home associated with an ObjectRoot (section 3.1.6.3.14) next to the operation to retrieve the home pointer.
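A sketch of the intended use of the two proposed attributes together, in a hypothetical Java binding (IDL long maps to Java int; the handler method is an illustrative assumption):

class TypeDispatcher {
    void processAccess(CacheAccess access) {
        // Jump straight to the per-type handling code using home indexes
        // instead of comparing type name strings.
        for (int homeIndex : access.contained_types()) {
            handleObjectsOfType(access, homeIndex);
        }
    }

    int typeOf(ObjectRoot object) {
        // One call; no need to fetch the related ObjectHome first.
        return object.home_index();
    }

    void handleObjectsOfType(CacheAccess access, int homeIndex) {
        // application-specific processing per type (hypothetical helper)
    }
}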
Problem:
It is possible for situations to arise where an ObjectRoot has a relation to an ObjectRoot which has already been deleted. Imagine we have a relationship between class Foo and class Bar. In update round 1 we receive both Foo and Bar and create the relationship from Foo to Bar. In update round 2 the Bar object is disposed and thus its read_state is changed to DELETED; at the end of the update round the Bar object is actually deleted, and accessing it from that moment on will raise the AlreadyDeleted exception. In this update round the relationship from Foo to Bar has not been reset (this is not unlikely in hybrid (DCPS and DLRL mixed) systems, or even in DLRL-only systems, when topic samples are lost for example). If we try to navigate from Foo to the Bar object after update round 2, then we will get that Bar object, but invoking any operation on it gives us the (runtime) exception AlreadyDeleted.
Because the above example deals with an error situation (the Foo object should have been updated in update round 2 and the relation to object Bar should have been reset, or the sample doing that should not have been lost), we propose to clarify in the specification that it is NOT allowed to have a relationship to an object with the object state DELETED at the time of a refresh. Such relations should transparently be reset by the DLRL to a NotFound exception, and the Foo object in the example should be marked as MODIFIED during the update round in which the deletion of the related object is detected; a call to the is_xxx_modified operation of the Foo object regarding relation Bar should return true. This also ensures that an application can NEVER get an AlreadyDeleted exception unless the application has specifically done something wrong (like maintaining a reference to the Bar object in its own application code and ignoring the deletion event). Not allowing relationships to objects that are deleted ensures the same behavior is found for all situations involving relations, which is convenient for the application as it does not need to take exceptional situations into account but can be assured the middleware resolves such situations in a uniform manner:
Foo is received but no Bar is available: results in NotFound.
Foo is received, Bar is received as disposed: results in NotFound.
Foo is received, Bar is received, Bar is then disposed: results in NotFound.
Solution:
Explain the above behavior in the specification.
Problem:
It is possible for getters of relationships between DLRL objects to throw a NotFound exception when the related object cannot be located by the DLRL. However, if that related object arrives in a later update round, the NotFound should be cleared and the getter should return the related object. The owner of that relation should be seen as modified, and the is_xxx_modified operation for that relation attribute should return true even if the object itself was not explicitly updated in that update round.
Example:
Update round 1: we receive object Foo and detect it has a relation to object Bar, but this object is not known. The relation is thus set to raise a NotFound exception if accessed.
Update round 2: we now receive the Bar object that is the target of the Foo object's bar relation. We do not receive an update for the Foo object itself though.
We propose in situations like the above to mark object Foo as modified and set the received object Bar as the target of object Foo; is_xxx_modified for that relation returns true. This should be explained in the specification.
Solution: TBD
When an object arrives which causes a NotFound relation to become valid, this is seen as a state change for the owner object of that relation.
Problem:
It is desirable to prevent an application from writing changes that are in themselves inconsistent. Changes are inconsistent when, for example, a NIL pointer is still set for a relation that was modeled as mandatory (in predefined mapping, what would one put into the key values?!), or a relation points to an object that is marked to be disposed in the next write operation (setting such relations is allowed, as long as they are reset again before writing the contents). The latter is to be prevented to ensure receiving applications do not get inconsistent data; i.e., a relation to a deleted object, which could result in an AlreadyDeleted exception, is to be prevented at all costs.
To prevent such inconsistencies from being inserted into the system we propose to let the write operation throw an InvalidObjects exception to indicate the application has created an invalid state within the CacheAccess. To determine which objects are invalid, an operation should be added to retrieve all invalid objects in the CacheAccess. And each ObjectRoot should have an operation to retrieve the names of the relations which cause the object to be marked as an invalid object.
Solution:
Add new Exception in the exception list on page 3-18:
· InvalidObjects - To be thrown during the write operation of the CacheAccess to indicate invalid objects exist in the CacheAccess. An invalid object is defined as an object which has one or more invalid relation(s).
On page 3-48 in section 3.2.1.2 IDL description the exception list should be expanded to include the following definition:
exception InvalidObjects {string message;};
Change the description of the write() operation on the CacheAccess on page 3-21, add the following:
An InvalidObjects exception is raised if one of the objects contained within the CacheAccess has one or more invalid relation(s).
Add the following operation to the CacheAccess class in figure 3-4 on page 3-16:
get_invalid_objects()
Add the following operation description to the end of the table of the CacheAccess in section 3.1.6.3.3, page 3-21.
get_invalid_objects ObjectRoot[]
Add the following description for operation get_invalid_objects() right after the description of operation delete_contract:
· Returns a list of all ObjectRoots which have one or more invalid relation(s). This operation should be used to recover from an InvalidObjects exception thrown by the write() operation. (get_invalid_objects)
Add the following operation to the collection class in figure 3-4 on page 3-16
get_invalid_elements()
get_element_status()
Add the following operations to the ObjectRoot class in figure 3-4 on page 3-16:
get_invalid_relations()
get_relation_status ()
Add the following operation description to the end of the table of the Collection entity in section 3.1.6.1.15 on page 3-38
get_invalid_elements <undefined element type>[]
get_element_status RelationStatus
value <undefined element type>
Add the following description for operation 'get_invalid_elements()' right after the description of operation 'values':
· Returns all elements which are seen as invalid by the DLRL. An element is invalid when it refers to an object with the write_state OBJECT_DELETED; or when the collection is a composition, but the constituent object referred to by the element is also referred to by another composition relation; or when the relation is an association, but the associated object referred to by the element does not correctly point back to the object. (get_invalid_elements)
· Returns an enumeration indicating whether the element indicated by the value parameter is valid or invalid (and the exact reason why it is invalid). If the element is not known within the scope of the Collection then a NoSuchElement exception is raised. (get_element_status)
Add the following operation description to the end of the table of the ObjectRoot entity in section 3.1.6.3.14 on page 3-34:
get_invalid_relations string[]
get_relation_status RelationStatus
name string
Add the following description for operation get_invalid_relations() right after the description of operation is_modified:
· Returns a list of relation names which are seen as invalid by the DLRL. A relation is invalid when it refers to an object with the write_state OBJECT_DELETED; or when the relation is a NIL pointer but modeled as a mandatory relation (cardinality of 1); or when the relation is a composition, but the constituent object is also referred to by another composition relation; or when the relation is an association, but the associated object does not correctly point back to the object. For relations that are collections the cardinality reason cannot result in the relation being seen as invalid. (get_invalid_relations)
· Returns an enumeration indicating whether the relation indicated by the name parameter is valid or invalid (and the exact reason why it is invalid). If no relation is known with that name within the scope of the ObjectRoot then a PreconditionNotMet exception is raised. (get_relation_status)
In the Section 3.2.1.2 IDL description on page 3-47, directly following the ObjectState enum, add the following enum:
enum RelationStatus{
//The relation has no violations.
VALID,
//The relation is a NIL pointer but was modeled as a mandatory relation.
CARDINALITY_VIOLATION,
//The relation points to an object with read_state OBJECT_DELETED or
//an object which was already garbage collected.
LIVELINESS_VIOLATION,
//The related object does not correctly associate itself with the 'owner' object
//of this relation.
ASSOCIATION_VIOLATION,
//The related object is a constituent object in more than one composition relation.
COMPOSITION_VIOLATION
};
In the Section 3.2.1.2 IDL description on page 3-51, regarding valuetype ObjectRoot, add the operation descriptions:
StringSeq get_invalid_relations();
RelationStatus get_relation_status(in string name) raises (PreconditionNotMet);
In the Section 3.2.1.2 IDL description on page 3-55, regarding valuetype List, add the operation descriptions:
LongSeq get_invalid_elements();
RelationStatus get_element_status(in long index) raises (NoSuchElement);
In the Section 3.2.1.2 IDL description on page 3-56, regarding valuetype StrMap, add the operation descriptions:
StringSeq get_invalid_elements();
RelationStatus get_element_status(in string key) raises (NoSuchElement);
In the Section 3.2.1.2 IDL description on page 3-56, regarding valuetype IntMap, add the operation descriptions:
LongSeq get_invalid_elements();
RelationStatus get_element_status(in long key) raises (NoSuchElement);
In the Section 3.2.1.2 IDL description on page 3-56, regarding valuetype Set, add the operation descriptions:
/* To be properly typed in the generated derived classes:
*
* ObjectRootSeq get_invalid_elements();
* RelationStatus get_element_status(in ObjectRoot value) raises (NoSuchElement);
*/
In the Section 3.2.1.2.2 Implied IDL on page 3-62, regarding valuetype FooSet, add the operation descriptions:
FooSeq get_invalid_elements();
RelationStatus get_element_status(in Foo value) raises (NoSuchElement);
In section 3.2.1.2 IDL description on page 3-57 regarding the CacheAccess add the exception InvalidObjects to the raises clause of the write operation.
void write() raises (PreconditionNotMet, DCPSError, InvalidObjects, TimeOut);
See issues XXX, XXX and XXX for details about the other exceptions and why the readonly mode exception was removed.
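The recovery pattern that these operations enable could look as follows in a hypothetical Java binding (a sketch; everything beyond the proposed operations is illustrative):

class WriteHelper {
    void writeWithDiagnostics(CacheAccess access) {
        try {
            access.write();
        } catch (InvalidObjects e) {
            // Locate the offending objects, then ask each one which of its
            // relations violates the consistency rules and why.
            for (ObjectRoot object : access.get_invalid_objects()) {
                for (String relation : object.get_invalid_relations()) {
                    RelationStatus status = object.get_relation_status(relation);
                    System.err.println(relation + " is invalid: " + status);
                }
            }
        }
    }
}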
Problem:
Change the return type of the attach_listener and detach_listener operations on the Cache entity to return a boolean instead of void. True indicates the operation was successful (i.e., the listener was successfully attached or successfully removed). False indicates the operation was not successful (i.e., for the attach operation the listener could not be attached because it was already attached, and for the detach operation the listener could not be detached because it was not attached in the first place).
Solution:
On page 3-22 in the table for section 3.1.6.3.4 replace:
attach_listener void
listener CacheListener
detach_listener void
listener CacheListener
With:
attach_listener boolean
listener CacheListener
detach_listener boolean
listener CacheListener
On page 3-23, for the description of the attach/detach listener, add a sentence indicating the boolean return value and its meaning.
Replace:
· attach/detach a CacheListener (attach_listener, detach_listener).
With:
· attach/detach a CacheListener; true is returned if the operation was successful and false otherwise (attach_listener, detach_listener).
Let the attach/detach listener operation return a boolean instead of void on the Cache entity
Problem:
The set_query and set_parameters operations both have a return value of type boolean indicating success or not, and an exception which is raised if an error was detected. Obviously one of the two is not necessary: if an exception is raised then the return value is irrelevant, and vice versa. So a choice needs to be made for one or the other. We propose to keep the exception, as that is in line with other similar operations (set_content_filter on the ObjectHome for example). Also update the IDL description on page 3-52 in section 3.2.1.2.
Solution:
Change the return value of the set_query and set_parameters operations from boolean to void. And replace the following text:
o set the value of the expression and its parameters (set_query); a TRUE return value indicates that they have been successfully changed.
o set the values of the parameters (set_parameters). The number of parameters must fit with the values required by the expression. A TRUE return value indicates that they have been successfully changed.
With:
o set the value of the expression and its parameters (set_query). If the expression is not valid or if its parameters do not match the number of parameters in the expression then a SQLError is raised.
o set the values of the parameters (set_parameters). The number of parameters must fit with the values required by the expression, else a SQLError is raised.
On page 3-52 in section 3.2.1.2 IDL description replace:
boolean set_query (
in string expression,
in StringSeq parameters)
raises (SQLError);
boolean set_parameters (
in StringSeq parameters)
raises (SQLError);
With:
void set_query (
in string expression,
in StringSeq parameters)
raises (SQLError);
void set_parameters (
in StringSeq parameters)
raises (SQLError);
Remove return value for the set_query and set_parameters operation of the QueryCriterion.
Problem:
It would be convenient to have a clear() operation on the collection interface which simply removes all elements from the collection (and does not affect the added/modified/deleted_elements operation results). While we are at it, remove the text in the table stating there are no attributes as well (since two attributes are listed!).
Solution:
Replace:
Collection
no attributes
length integer
values Undefined [] (e.g. of type ObjectRoot or Primitive type)
It provides the following attributes:
o length - the length of the collection.
o values - a list of all values contained in the Collection.
With:
Collection
attributes
length integer
values Undefined [] (e.g. of type ObjectRoot or Primitive type)
operations
clear void
It provides the following attributes:
o length - the length of the collection.
o values - a list of all values contained in the Collection.
It provides the following methods:
· clear - to clear the contents of the Collection; does not affect the added_elements, modified_elements, deleted_elements results. If the object this Collection belongs to is not registered or does not belong to a (writeable) CacheAccess, a PreconditionNotMet is raised.
In the IDL description on page 3-55 add the following to the abstract valuetype Collection:
void clear() raises(PreconditionNotMet);
In figure 3-4 on page 3-16 the clear() operation should be added to the operation listing of class Collection.
Problem: Currently the Mapping tags do not give any support for local classes; they mention local attributes but ignore locally defined valuetypes in the DLRL IDL. In default mapping these valuetypes would always lead to DCPS topics being generated from them. Solution: Make the local element a sub-tag of the dlrl tag as well. If the local tag is defined as a sub-tag of the dlrl element, then the name should be fully qualified and refer to a class being 'local'; if the local tag is a sub-tag of the classMapping element, then it means the attribute is local and the name does not have to be fully qualified.
Problem: Certain QoS settings on DCPS entities may conflict with DLRL usage, such as a history of samples (which cannot be represented within DLRL), setting auto_purge_disposed_samples_delay to something other than infinite, or the QoS settings for coherent updates. Solution: If such QoS settings conflict with the settings required for DLRL, a PreconditionNotMet will be raised; explain this in the enable_all_for_pubsub operation.
Enable_all_for_pubsub operation throws PreconditionNotMet exception if QoS policies conflict with DLRL required QoS policies
Problem: The descriptions of the deref_all, underef_all and set_auto_deref operations should be stated to be implementation dependent, as these operations are basically only a standardized set of operations which can be used by DLRL applications to optimize the memory usage of the underlying DLRL implementation; their exact effects are impossible to describe at specification level. And whether it is more efficient for an implementation to always load the states or to do so on a per-request basis is an implementation issue. So we would like to state, for the mentioned three operations, that the effects are implementation dependent.
The effects of the set_auto_deref, deref_all and underef_all operations on the ObjectHome should be made implementation dependent.
Problem:
Currently it is not possible to navigate from a Cache to its underlying DomainParticipant. Such navigation may be convenient because the underlying DomainParticipant may be needed to manage statuses, control connectivity, change QoS settings and assert liveliness.
Solution:
Add an attribute to the Cache that refers to the underlying DomainParticipant.
Section 3.1.6.2, Figure 3-4: Add a DomainParticipant class and a reference from the Cache to that DomainParticipant class. Name the reference "the_participant".
Section 3.1.6.3.4: Add the following entry to the table:
Cache attributes
the_participant DDS::DomainParticipant
Section 3.1.6.3.4 Replace:
· the state of the cache with respect to the underlying Pub/Sub infrastructure (pubsub_state), as well as the related Publisher (the_publisher) and Subscriber (the_subscriber).
With:
· the state of the cache with respect to the underlying Pub/Sub infrastructure (pubsub_state), as well as the related DomainParticipant (the_participant), Publisher (the_publisher) and Subscriber (the_subscriber).
Section 3.2.1.2: Add to the IDL of the Cache interface the following line:
readonly attribute DDS::DomainParticipant the_participant;
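With this attribute in place, DCPS facilities become reachable from the Cache itself; for example (a sketch in a hypothetical Java binding, with the accessor named after the PIM attribute):

class LivelinessHelper {
    void assertCacheLiveliness(Cache cache) {
        // Navigate from the Cache to its underlying DomainParticipant and
        // invoke a standard DCPS operation on it.
        DDS.DomainParticipant participant = cache.the_participant();
        participant.assert_liveliness();
    }
}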
Problem: The is_xxx_modified operation (generated for each shared attribute in each DLRL object) is very heavyweight to implement, while it may not be used very much. We propose to make it optional, for example by a setting in the XML file, a setting on the corresponding ObjectHome, or by some QoS mechanism. Solution: TBD.
Problem: When using a keyDescription set to "FullOid" in the XML mapping file, each OID field is accompanied by a class-name field that represents the actual type of a specific object instance (i.e., the name of the class from which that object instance was instantiated). However, in case of a purely 'local' object model, this class name has no meaning outside the scope of the application that uses this local model. If the topics used for transport are also mapped to another 'local' object model, you get a confusing mix of local and global information in the same topic. This sort of confusion should be avoided at all costs: topics may only transport information that refers to the global information model, not to some local representation of it. Solution: Either allow the FullOid tag to be used only in case of a 'global' object model (default mapping, topic model is generated to match it), or describe that the class-name should not represent the name of a 'local' DLRL class, but rather the name of the topic that represents the state of that 'local' class. TBD.
Problem:
Currently, when an application is interested in the creation/modification/deletion of specific objects, it can attach a Listener to the corresponding ObjectHome. It will then get separate callbacks for each object of this type that gets created/deleted/modified. But maybe the user does not want all these separate callbacks and just wants to deal with a subset of all possible events (for example only creations). He could do so by not attaching ObjectListeners to the ObjectHomes, but by using the CacheListener and then iterating through all Homes using, for example, only get_created_objects. It would be nice if the Cache could then return a list of Homes that contain information that needs to be processed, to prevent the user from also having to iterate through all homes that have nothing to report.
Solution:
Add an operation to the Cache that returns a list of indexes for all homes that received updates in the current update round.
Section 3.1.6.2, Figure 3-4: Add an operation to the Cache called "get_updated_home_indexes".
Section 3.1.6.3.9: Add the following entry to the table:
Cache operations
get_updated_home_indexes integer[]
Section 3.1.6.3.4 Add:
· retrieve the indexes of all ObjectHomes that received updates in the current update round (get_updated_home_indexes).
Section 3.2.1.2: Add to the IDL of the Cache interface the following line:
LongSeq get_updated_home_indexes( );
Add an operation to the Cache to retrieve the Homes that have received information that needs to be processed.
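A sketch of the intended usage in a hypothetical Java binding; find_home_by_index is assumed to be the existing Cache operation for retrieving a registered home by its index, and the handler method is illustrative:

class UpdateProcessor {
    void processUpdateRound(Cache cache) {
        // Visit only the homes that actually received updates in this
        // round, instead of iterating over every registered home.
        for (int index : cache.get_updated_home_indexes()) {
            ObjectHome home = cache.find_home_by_index(index);
            handleUpdatedHome(home);
        }
    }

    void handleUpdatedHome(ObjectHome home) {
        // application-specific processing, e.g. via get_created_objects
    }
}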
Problem:
The only way to find out which objects were inserted into/modified in/removed from a Selection is by attaching a SelectionListener to that Selection and by processing the callbacks on these events. However, there should also be a way to obtain this information without having to resort to Listeners.
Solution:
Add operations to the Selection that return a list of objects that are inserted into/modified in/removed from that Selection.
Section 3.1.6.2, Figure 3-4: Add 3 operations to the Selection called "get_inserted_members", "get_modified_members" and "get_removed_members".
Section 3.1.6.3.9: Add the following entry to the table:
Selection operations
get_inserted_members <undefined>[]
get_modified_members <undefined>[]
get_removed_members <undefined>[]
Section 3.1.6.3.9 Add:
· get_inserted_members returns all objects that entered the Selection in the current update round.
· get_modified_members returns all objects still belonging to the Selection whose content has changed in the current update round.
· get_removed_members returns all objects that exited the Selection in the current update round.
Section 3.2.1.2.1: In the following lines of the Generic IDL of the Selection interface:
Replace:
/***
* Following method will be generated properly typed
* in the generated derived classes
* SelectionListener set_listener ( in SelectionListener listener);
* ***/
With:
/***
* Following methods will be generated properly typed
* in the generated derived classes
* SelectionListener set_listener (in SelectionListener listener);
* ObjectRootSeq get_inserted_members ( );
* ObjectRootSeq get_modified_members ( );
* ObjectRootSeq get_removed_members ( );
* ***/
Section 3.2.1.2.2: In the Implied IDL of the FooSelection interface add the following lines:
FooSeq get_inserted_members ( );
FooSeq get_modified_members ( );
FooSeq get_removed_members ( );
Selection should have a non-listener way of obtaining the members that were inserted and removed.
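Polling a Selection could then look as follows in a hypothetical Java binding, using the implied FooSelection operations proposed above (sketch only; Foo is the usual example DLRL class):

class SelectionPoller {
    void pollSelection(FooSelection selection) {
        for (Foo foo : selection.get_inserted_members()) {
            System.out.println("entered selection: " + foo);
        }
        for (Foo foo : selection.get_modified_members()) {
            System.out.println("changed while in selection: " + foo);
        }
        for (Foo foo : selection.get_removed_members()) {
            System.out.println("exited selection: " + foo);
        }
    }
}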
Problem: When a Selection is in auto_refresh mode, normal operation is that it only evaluates objects in the update round in which they receive updates: objects that do not get updated have no reason to enter/exit a Selection. However, that mechanism is based on the assumption that the filter only evaluates the state of the object itself. What if the filter evaluates other things, like the state of related objects? Then the filter should also be re-applied if the state of such a related object changes. Solution: There may be a need to set the scope of a FilterCriterion: this scope will then define when and how objects should be re-evaluated by the Selection. TBD.
The FilterCriterion should have some mechanism to define the scope and granularity of updates it needs to process when in auto_refresh mode
Problem:
It needs to be clarified how deleted objects are treated in a CacheBase and a Selection. In the update round in which they are reported as being deleted, are they still part of the members of that CacheBase/Selection or not? (See also section 3.1.6.4.1 last bullet.)
Solution:
Since deleted objects are no longer alive, we propose not to mention them in the members attribute of CacheBases/Selections. In the update round in which they become deleted they will be reported as being deleted in those CacheBases/Selections, but this deletion is executed immediately.
Section 3.1.6.3.2 Replace:
· "A list of (untyped) objects that are contained in this CacheBase. To obtain objects by type, see the get_objects method in the typed ObjectHome.
With:
· A list of (untyped) objects (excluding the ones that are marked as deleted) that are contained in this CacheBase. To obtain objects by type, see the get_objects method in the typed ObjectHome.
Section 3.1.6.3.7 Replace:
· obtain from a CacheBase a (typed) list of all objects that match the type of the selected ObjectHome (get_objects). For example the type ObjectRoot[ ] will be substituted by a type Foo[ ] in a FooHome.
With:
· obtain from a CacheBase a (typed) list of all objects (excluding the ones that are marked as deleted) that match the type of the selected ObjectHome (get_objects). For example the type ObjectRoot[ ] will be substituted by a type Foo[ ] in a FooHome.
Section 3.1.6.3.9 Replace:
· the list of the objects that are part of the selection (members).
With:
· the list of the objects (excluding the ones that are marked as deleted and the ones that no longer pass the filter) that are part of the selection (members).
It needs to be clarified how deleted objects are treated in a CacheBase and a Selection.
Problem: When should object instances in a writeable CacheAccess be registered to the DataWriter: upon entrance into the writeable CacheAccess, or only when actually performing the write operation? This choice may impact ownership matters. The same question applies to newly created objects in a CacheAccess: when should they be registered? Solution: Our proposal is to only register any changes during the write operation; otherwise the overhead is very heavy when cloning a large tree of objects into a writeable CacheAccess. Probably most of these objects will not be modified, and then doing a register upon entrance of each instance might result in huge numbers of unnecessary register messages. TBD.
It should be clear when object instances in a writeable CacheAccess are registered to the DataWriter.
Problem: It is possible to annotate certain relations in the XML as being compositions and/or associations. Section 3.1.3.2.2 briefly describes how such relationships should behave. However, although it is possible to enforce constraints on certain relationships on the writer side, it is not possible to enforce them on the reader side. Especially not since the constraints are part of the 'local' object model, while the topics could have been written by somebody with another 'local' model where these constraints are not enforced. What should happen to a composition relation where multiple objects claim possession of the same compound object? Or to an association where the associated object does not refer back to the object that associates it? Solution: In our opinion these constraints can only be enforced in the local object model, not on the entire system (unless of course the entire system shares the same object model). Because of this we propose to enforce these constraints only on the writer side of the DLRL: when objects in a writeable CacheAccess are modified and do not adhere to these constraints, the write operation will raise an InvalidObjects (see also issue PT-DLRL-ARCH-0008). TBD.
Problem: What should happen to an object that passes a content filter once, but whose subsequent updates get blocked by that filter? Currently the DLRL behaves as if no update was received, potentially resulting in the wrong assumption that the previous state is still the most recent state available. Solution: This problem cannot be solved without some support on the DCPS ContentFilteredTopic level as well, since a special state needs to be introduced to represent instances that are blocked by a filter. The idea is to introduce a special state called NOT_COMPLIANT, which has similar properties to the DELETED state, except for the fact that it has another meaning. This NOT_COMPLIANT state can also be used for other purposes (see issue PT-DLRL-ARCH-0029). TBD.
Indicate what should happen to a DLRL object that passed a ContentFilter before, but then later on gets blocked by this filter.
Problem: Inheritance is redundantly modeled in both IDL and XML. What to do when there is inheritance according to the IDL but not the XML, or vice versa? Also, it is not clear in case of multiple-level inheritance (C extends B extends A) which topic should act as the main topic: for class C in this example, is the main topic the one corresponding to the highest parent (A), or the topic corresponding to the closest parent (B)? Finally, the place topic description is also very redundant: for each attribute located in the same place topic, the placeTopic definition needs to be repeated. Solution: Our proposal is to not mention the extension topics in the IDL: just use normal class mapping and deduce from the IDL that two classes have an inheritance relationship. Furthermore we propose to define the topic definition (with its keyDescription) outside the scope of the class mapping: each attribute can then just refer to a known topic definition without repeating it. TBD.
Problem: It needs to become clear how a situation is handled where a place or extension topic is disposed, but the main topic is not, or vice versa. What should be the result for the Object representing this combination? Solution: Our proposal is that an object gets deleted if one or more of its constituent topics are deleted, and that it starts a new generation if one or more of its constituent topics starts a new generation. The same behaviour would then apply with respect to Liveliness: the combination loses liveliness if one of the constituent topics does. TBD.
Problem: It should clearly be stated what happens to the lifecycle of a DLRL object when the instance_state of the corresponding DCPS instance becomes NOT_ALIVE_NO_WRITERS. Solution: Our proposal is to mark this situation in the DLRL object, but not to start a new DLRL generation. There must be some mechanism however to notify the application about changes to the Liveliness of such an object. We propose to add a specific state called LIVELINESS_LOST, kept in a state variable separate from the ObjectState. A transition to this state can be signalled by a callback method on the ObjectListener called "on_object_liveliness_lost". When liveliness is regained, the state can revert back to LIVELINESS_ASSURED. A new callback should be added for that situation called "on_object_liveliness_regained". Similarly the ObjectHome should provide methods called "get_liveliness_lost_objects" and "get_liveliness_regained_objects". TBD.
Indicate what to do when the instance_state of a DCPS instance becomes NOT_ALIVE_NO_WRITERS.
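The proposed notification mechanism could be sketched as follows in a hypothetical Java binding; this interface illustrates the proposal above and is not existing specification text:

interface ObjectLivelinessListener {
    // Invoked when the corresponding DCPS instance transitions to
    // NOT_ALIVE_NO_WRITERS; the object is marked LIVELINESS_LOST but
    // no new DLRL generation is started.
    boolean on_object_liveliness_lost(ObjectRoot theObject);

    // Invoked when liveliness is regained and the separate state
    // variable reverts to LIVELINESS_ASSURED.
    boolean on_object_liveliness_regained(ObjectRoot theObject);
}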
Problem: Extend the XML to allow optional relationships (i.e. relationships with a cardinality of 0..1). This way it is always possible to explicitly allow NULL pointer relations, even in case of predefined mapping. Solution: An optional relationship should be accompanied by a boolean field that specifies whether the foreign keyfields should be interpreted or not. TBD.
Extend the XML to allow optional relationships (i.e. relationships with a cardinality of 0..1).
Problem: What should happen to cloned objects that, when the CacheAccess is refreshed, are no longer covered by any contract? You cannot just treat them as if they were deleted. Solution: Probably a special state needs to be introduced for this called NOT_COMPLIANT. TBD.
Indicate what should happen to cloned objects that, when the CacheAccess is refreshed, are no longer covered by any contract
Problem:
Remove quotation marks in the table contained in the Contents chapter (page 3-1).
Solution:
Replace:
Section Title Page
"Platform Independent Model (PIM)" 3-1
"OMG IDL Platform Specific Model (PSM)" 3-45
With:
Section Title Page
Platform Independent Model (PIM) 3-1
OMG IDL Platform Specific Model (PSM) 3-45
Problem:
Footnote 2 is on the wrong page.
Solution:
Move footnote 2 from page 3-3 to page 3-2
Problem:
In section 3.1.3.3, paragraph two, the last sentence says 'They appear in grey on the schema.', referring to the next figure; but no objects are grey in that figure.
Solution:
Remove the sentence 'They appear in grey on the schema.' so that the paragraph reads as follows.
Replace:
Note that two objects that will be part of a DLRL model (namely ObjectRoot that is the
root for all the DLRL classes as well as ObjectHome that is the class responsible for
creating and managing all DLRL objects of a given class) are featured to show the
conceptual relations between the metamodel and the model. They appear in grey on the
schema.
With:
Note that two objects that will be part of a DLRL model (namely ObjectRoot that is the
root for all the DLRL classes as well as ObjectHome that is the class responsible for
creating and managing all DLRL objects of a given class) are featured to show the
conceptual relations between the metamodel and the model.
Problem:
The first sentence of section 3.1.5.1 should not say:
A DLRL class is associated with several DCPS Topic
But say
A DLRL class is associated with several DCPS Topics
I.e., 'Topic' should be made plural.
Solution:
Replace:
A DLRL class is associated with several DCPS Topic
With:
A DLRL class is associated with several DCPS Topics
Problem:
In figure 3-4 the CacheFactory shows an operation called 'find_cache', which should be 'find_cache_by_name'. The ObjectHome shows an operation called 'get_new_objects' which should be 'get_created_objects'
Solution:
Change operation name 'find_cache' in the CacheFactory class into 'find_cache_by_name'.
Change operation name 'get_new_objects' in the ObjectHome class into 'get_created_objects'.
Problem:
In the table in section 3.1.6.2, in the row regarding the Cache, a typo needs to be corrected regarding the location of the word 'first' in the sentence about attaching objects to the CacheAccess, and the last bullet needs to be rewritten to be made more clear.
Solution:
Replace:
Class whose instance represents a set of objects that are locally available. Objects within a Cache can be read directly; however to be modified, they need to be attached first to a CacheAccess. Several Cache objects may be created but in this case, they must be fully isolated:
o A Publisher can only be attached to one Cache.
o A Subscriber can only be attached to one Cache.
o Only DLRL objects belonging to one Cache can be put in relation.
With:
Class whose instance represents a set of objects that are locally available. Objects within a Cache can be read directly; however to be modified, they first need to be attached to a CacheAccess. Several Cache objects may be created but in this case, they must be fully isolated:
o A Publisher can only be attached to one Cache.
o A Subscriber can only be attached to one Cache.
o A DLRL object can only have relationships with DLRL objects in the same Cache.
Problem:
The table in section 3.1.6.2 in the row regarding the ObjectListener talks about a 'peculiar' ObjectHome. This should say 'particular'.
Solution:
Replace:
Interface to be implemented by the application to be made aware of incoming updates on the objects belonging to one peculiar ObjectHome.
With:
Interface to be implemented by the application to be made aware of incoming updates on the objects belonging to one particular ObjectHome.
Problem:
On page 3-18 remove the wrong quotation marks; also, the word identify in the explanation of the AlreadyExisting exception should be identity.
Solution:
Replace:
o "DCPSError: if an unexpected error occured in the DCPS
o "BadHomeDefinition: if a registered ObjectHome has dependencies to other, unregistered ObjectHomes.
o "NotFound: if a reference is encountered to an object that has not (yet) been received by the DCPS.
o "AlreadyExisting: if a new object is created using an identify that is already in use by another object.
o "AlreadyDeleted - if an operation is invoked on an object that has already been deleted.
o "PreconditionNotMet - if a precondition for this operation has not (yet) been met.
o "NoSuchElement - if an attempt is made to retrieve a non-existing element from a Collection.
o "SQLError - if an SQL expression has bad syntax, addresses non-existing fields or is not consistent with its parameters.
With:
o DCPSError: if an unexpected error occured in the DCPS
o BadHomeDefinition: if a registered ObjectHome has dependencies to other, unregistered ObjectHomes.
o NotFound: if a reference is encountered to an object that has not (yet) been received by the DCPS.
o AlreadyExisting: if a new object is created using an identity that is already in use by another object.
o AlreadyDeleted - if an operation is invoked on an object that has already been deleted.
o PreconditionNotMet - if a precondition for this operation has not (yet) been met.
o NoSuchElement - if an attempt is made to retrieve a non-existing element from a Collection.
o SQLError - if an SQL expression has bad syntax, addresses non-existing fields or is not consistent with its parameters.
Problem:
The first sentence in section 3.1.6.3.2, CacheBase, should talk about Cache-like objects, not Cache objects. Various wrong quotation marks are used in the attribute and operation listings. Attribute and operation names are not in bold in the descriptions, as is common throughout the spec. A bullet is missing for the explanation of the kind attribute. A new line is missing before the sentence "It offers methods to:". And the word 'cache' in the explanation of the cache_usage attribute should be CacheBase.
Solution:
Replace:
CacheBase is the base class for all Cache classes. It contains the common functionality that supports Cache and CacheAccess.
With:
CacheBase is the base class for all Cache-like classes. It contains the common functionality that supports Cache and CacheAccess.
Replace:
The public attributes give:
o "The cache_usage indicates whether the cache is intended to support write operations (WRITE_ONLY or READ_WRITE) or not (READ_ONLY). This attribute is given at creation time and cannot be changed afterwards.
o "A list of (untyped) objects that are contained in this CacheBase. To obtain objects by type, see the get_objects method in the typed ObjectHome.
The kind describes whether a CacheBase instance represents a Cache or a CacheAccess.It offers methods to:
o "Refresh the contents of the Cache with respect to its origins (DCPS in case of a main Cache, Cache in case of a CacheAccess).
With:
The public attributes give:
o The cache_usage indicates whether the CacheBase is intended to support write operations (WRITE_ONLY or READ_WRITE) or not (READ_ONLY). This attribute is given at creation time and cannot be changed afterwards.
o A list of (untyped) objects that are contained in this CacheBase. If an error in DCPS occurred, a DCPSError is raised. To obtain objects by type, see the get_objects method in the typed ObjectHome.
o The kind describes whether a CacheBase instance represents a Cache or a CacheAccess.
It offers methods to:
o Refresh the contents of the CacheBase (refresh) with respect to its origins (DCPS in case of a main Cache, Cache in case of a CacheAccess).
Problem:
Several wrong quotation marks in section 3.1.6.3.3, CacheAccess, can be seen in the description of various operations.
Solution:
We propose to remove these quotation marks before the operation descriptions of type_names, create_contract and delete_contract.
Problem (1/2):
On page 3-29 in section 3.1.6.3.8 regarding the ObjectListener, the table incorrectly states the parameter types of the operations as ObjectReference (which does not even exist anymore!), ObjectRoot and ObjectRoot. All these parameter types should be replaced with Undefined, as the operations are generated in the typed ObjectListener class as typed operations.
Solution (1/2):
Replace the parameter types of the operations in the table with Undefined.
Problem (2/2):
On page 3-30 at the top it states that four operations follow, but only three operations are listed.
Solution (2/2):
Replace:
It is defined with four methods:
With:
It is defined with three methods:
Problem:
On page 3-29 in section 3.1.6.3.9 regarding the Selection, the table incorrectly states the types of the attributes 'members' and 'listener' as ObjectRoot[] and SelectionListener respectively. These should be replaced by <undefined>[] and <undefined>SelectionListener respectively.
The operation set_listener also wrongly specifies the return type and parameter type; both should be replaced by <undefined>SelectionListener.
Solution:
Obvious.
Problem:
In the table on page 3-31 regarding the SelectionCriterion it states the type of attribute kind as 'SelectionCriteria'. However, this type does not exist anywhere! It should be CriterionKind. Also, the attribute listing underneath the table does not conform to the style used throughout the DLRL spec.
Solution:
Replace SelectionCriteria with CriterionKind in the table and make the style conform.
Problem:
In the table on page 3-32 in section 3.1.6.3.11 about the FilterCriterion, it states that parameter 'an_object' has type 'ObjectRoot'. However, this should be '<undefined>' as the method is properly generated in the 'FooFilter' class.
Solution:
Replace the parameter type of an_object with the word <undefined>.
Problem:
In the table on page 3-32 in section 3.1.6.3.12 regarding the QueryCriterion, it states that the parameter name for set_query (second param) and set_parameters is 'arguments'. This is not consistent with the IDL PSM, which states 'parameters' as the name. They should be made the same: the names in the table should be replaced with the name 'parameters'.
Solution:
Replace the parameter's name 'arguments' with 'parameters'.
Problem:
In section 3.1.6.3.3 CacheAccess, the table states that the operations' 'the_object' parameter has type ObjectRoot. However, since these operations are generated in the derived class, it should state '<undefined>' as the type for the parameters.
Solution:
Obvious
Problem:
The first paragraph on page 3-33 in section 3.1.6.3.14 still talks about primary and secondary objects (or clones). In the last spec change primary objects were renamed to cache objects and secondary objects (or clones) to cacheaccess objects. Note that cacheaccess objects may be clones of cache objects, but that does not need to be the case.
The text should be revised. The text also talks about an ObjectReference, which no longer exists; that text should be removed.
Solution:
Replace:
ObjectRoot is the abstract root for any DLRL class. It brings all the properties that are needed for DLRL management. ObjectRoot are used to represent either objects that are in the Cache (also called primary objects) or clones that are attached to a CacheAccess (also called secondary objects). Secondary objects refer to a primary one with which they share the ObjectReference.
With:
ObjectRoot is the abstract root for any DLRL class. It brings all the properties that are needed for DLRL management. ObjectRoots are used to represent either objects that are in the Cache (also called cache objects) or objects that are attached to a CacheAccess (also called cacheaccess objects). Cacheaccess objects may be clones of cache objects; in that case they share the same OID.
Problem:
The table in section 3.1.6.3.19 regarding the IntMap has an attribute defined called 'keys'. The type listed is 'string[]', but this should obviously be 'integer[]'.
Solution:
Obvious
Problem:
The text in section 3.1.6.2 states that any entity for which a generated class exists is indicated in italics. However, when we look at the figure we see that this is not correct. The CacheBase class is incorrectly shown in italics, and the Selection class is incorrectly NOT shown in italics. The CacheListener should also not be in italics, as no class is generated for it; the same goes for the SelectionCriterion and the Collection. The ObjectHome however SHOULD be in italics.
Solution:
Put ObjectHome and Selection in italics.
Remove the italics from CacheBase, CacheListener, SelectionCriterion and Collection.
Problem:
In section 3.2.1.2.2 Implied IDL on page 3-60. A white space is missing in the attribute definition of the selections attribute between the attribute name and attribute type.
Solution:
Replace:
readonly attribute FooSelectionSeqselections;
With:
readonly attribute FooSelectionSeq selections;
Problem:
In section 3.2.2.3.2.9 typo in section name ExtensionTable, it should be ExtensionTopic.
Solution:
Replace:
3.2.2.3.2.9 ExtensionTable
With:
3.2.2.3.2.9 ExtensionTopic
Problem:
In section 3.2.3.2 IDL Model description on pages 3-69 and 3-70 a module DLRL is mentioned twice, although it does not exist. Track inherits from DLRL::ObjectRoot; it should be DDS::ObjectRoot. The same goes for valuetype Radar on page 3-70.
Solution:
Replace:
#include "dlrl.idl"
valuetype stringStrMap; // StrMap<string>
valuetype TrackList; // List<Track>
valuetype Radar;
valuetype Track : DLRL::ObjectRoot {
public double x;
public double y;
public stringStrMap comments;
public long w;
public Radar a_radar;
};
valuetype Track3D : Track {
public double z;
};
valuetype Radar : DLRL::ObjectRoot {
public TrackList tracks;
};
With:
#include "dlrl.idl"
valuetype stringStrMap; // StrMap<string>
valuetype TrackList; // List<Track>
valuetype Radar;
valuetype Track : DDS::ObjectRoot {
public double x;
public double y;
public stringStrMap comments;
public long w;
public Radar a_radar;
};
valuetype Track3D : Track {
public double z;
};
valuetype Radar : DDS::ObjectRoot {
public TrackList tracks;
};
Problem:
In section 3.2.3.4 Underlying DCPS Data Model the RADAR-TOPIC table is completely missing.
Solution:
Add:
RADAR-TOPIC Topic to store all Radar objects, as well as the embedded attributes.
OID Field to store the oid identifier.
Problem:
Faulty quotation marks can be found in the method descriptions in sections 3.1.6.3.16, 3.1.6.3.17, 3.1.6.3.18 and 3.1.6.3.19. For example it states:
o "remove - to remove the item with the highest index from the collection.
But it should be:
o remove - to remove the item with the highest index from the collection.
All quotation marks should be removed!
Solution:
Remove quotation marks in section 3.1.6.3.16 for operations:
- remove
- added_elements
- removed_elements
- modified_elements
- add
- put
- get
Remove quotation marks in section 3.1.6.3.17 for operations:
- add
- remove
- contains
- added_elements
- removed_elements
Remove quotation marks in section 3.1.6.3.18 for operations:
- keys
- remove
- added_elements
- removed_elements
- modified_elements
- put
- get
Remove quotation marks in section 3.1.6.3.19 for operations:
- keys
- remove
- added_elements
- removed_elements
- modified_elements
- put
- get
Problem:
Section 3.1.4.2.1 speaks of tables and rows; it is suggested to change this into topics and instances to correspond better with DCPS terminology. It is also suggested to speak of 'fields to uniquely identify the object' instead of 'fields needed to store a reference to that object'. It is also wise to change the word application in the last sentence of the last paragraph to implementation, as this mechanism is worked out by a DLRL implementation, not an application.
Solution:
Replace:
Each DLRL class is associated with at least one DCPS table, which is considered as the 'main' table. A DLRL object is considered to exist if it has a corresponding row in this table. This table contains at least the fields needed to store a reference to that object (see below). To facilitate DLRL management and save memory space, it is generally desirable that a derived class has the same main table as its parent concrete class (if any)5, with the attributes that are specific to the derived class in an extension table. For example, this allows the application to load all the instances of a given class (including its derivations) in a single operation.
With:
Each DLRL class is associated with at least one DCPS topic, which is considered as the 'main' topic. A DLRL object is considered to exist if it has a corresponding instance in this topic. This topic contains at least the fields to uniquely identify the object (see below). To facilitate DLRL management and save memory space, it is generally desirable that a derived class has the same main topic as its parent concrete class (if any)5, with the attributes that are specific to the derived class in an extension topic. For example, this allows the implementation to load all the instances of a given class (including its derivations) in a single operation.
Problem:
The last sentence of section 3.1.6.1.2.1 is in conflict with section 3.1.6.3.4, which states that a missing object home (i.e., no subscription exists) raises a BadHomeDefinition (see the details on the register_all_for_pubsub operation), making navigation to an object for which no subscription exists impossible. The BadHomeDefinition option is superior, as it forces application developers to think about their local object model and remove relations they do not need: not only relations to object homes they happened not to register, but also relations between object homes they have registered. The power of DLRL is in the fact that each application can tailor the object model to its own specific needs, removing relations from DLRL management which are simply not of any interest (this is a performance saver!). It is also undesirable to ignore missing object homes and just return a NotFound exception, as it is not clear to an application developer that these exceptions are occurring because he forgot to register a home; the BadHomeDefinition makes things much more explicit, without a loss in flexibility.
Solution:
Replace:
If a relation points towards an object for which no subscription exists, navigating through that relation will raise an error (NotFound).
With:
If a relation points towards an object for which no subscription exists, a BadHomeDefinition exception is raised when the Cache is registered for publication/subscription.
Problem (1/5):
Section 3.1.6.3.4, in the table on page 3-22, erroneously states the operations load(), lock() and unlock(), which no longer exist since the previous specification update. They have been removed from the IDL as well as from figure 3-4, but not here.
Solution (1/5):
Remove the load(), lock() and unlock() operations from the table on page 3-22.
Problem (2/5):
Section 3.1.6.3.4 on page 3-24 still contains various descriptions of operations that no longer exist, namely load, deref, lock, unlock. All these operations were removed in the previous spec revision.
Solution (2/5):
Remove the following texts:
o explicitly request taking into account the waiting incoming updates (load). In case updates_enabled is TRUE, the load operation does nothing because the updates are taken into account on the fly; in case updates_enabled is FALSE, the load operation 'takes' all the waiting incoming updates and applies them in the Cache. The load operation does not trigger any listener (while automatic taking into account of the updates does - see Section 3.1.6.4, "Listeners Activation," on page 3-41 for more details on listener activation) and may therefore be useful in particular for global initialization of the Cache.
o transform an ObjectReference to the corresponding ObjectRoot. This operation can return the already instantiated ObjectRoot or create one if not already done. These ObjectRoot are not modifiable (modifications are only allowed on cloned objects attached to a CacheAccess in write mode).
o lock the Cache with respect to all other modifications, either from the infrastructure or from other application threads. This operation ensures that several operations can be performed on the same Cache state (i.e., cloning of several objects in a CacheAccess). This operation blocks until the Cache can be allocated to the calling thread and the waiting time is limited by a time-out (to_in_milliseconds). In case the time-out expired before the lock can be granted, an exception (ExpiredTimeOut) is raised.
o unlock the Cache.
Problem (3/5):
In section 3.1.6.3.4 on page 3-24, in the middle of the page, an indent is missing for the following text. The paragraph is also ended with a ':', which should be a '.'. In the middle of the text, behind the first bold, italic word Cache, it also has a ':', which should be a ';'.
Solution (3/5):
Replace:
The purpose of the CacheAccess must be compatible with the usage mode of the Cache: only a Cache that is write-enabled can create a CacheAccess that allows writing. Violating this rule will raise a PreconditionNotMet:
With:
The purpose of the CacheAccess must be compatible with the usage mode of the Cache; only a Cache that is write-enabled can create a CacheAccess that allows writing. Violating this rule will raise a PreconditionNotMet.
Problem (4/5):
Section 3.1.6.3.4 on page 3-22, right underneath the table, gives an overview of the attributes of the Cache. The first bullet combines three attributes into one description, while everywhere else separate bullets are used for each attribute. This should be no different.
Solution (4/5):
Replace:
· the state of the cache with respect to the underlying Pub/Sub infrastructure (pubsub_state), as well as the related Publisher (the_publisher) and Subscriber (the_subscriber).
With:
· the state of the cache with respect to the underlying Pub/Sub infrastructure (pubsub_state)
· the related Publisher (the_publisher)
· the related Subscriber (the_subscriber).
Problem (5/5): The description of the disable_updates() operation of the Cache in section 3.1.6.3.4 on page 3-22 still talks about the possibility of updates being interrupted. This 'possibility' was removed in the previous spec revision; an update round will always be finished normally. Solution (5/5): Replace: disable_updates causes incoming but not yet applied updates to be registered for further application. If it is called in the middle of a set of updates (see Listener operations), the Listener will receive end_updates with a parameter that indicates that the updates have been interrupted. With: disable_updates causes incoming but not yet applied updates to be registered for further application; any update round in progress will be completed before the disable updates instruction is taken into account.
Problem 1: In the table on pages 3-26 and 3-27 regarding the ObjectHome entity, some types of attributes, parameters and return types are shown incorrectly. In the rest of the document, whenever an operation has a return type or parameter of a specialized class, or an attribute is of a specialized class, it does not state the class itself (for example ObjectRoot) but states the word undefined between '<' and '>', appended with the name of the type (Selection, StrMap, etc.), making <undefined>Selection. This is missing several times in the table in section 3.1.6.3.7 regarding the ObjectHome; specifically for the attributes 'selections' and 'listener' and for the operations attach_listener (listener param), detach_listener (listener param), create_selection (return type), delete_selection (a_selection param), create_object (return type), create_unregistered_object (return type), register_object (unregistered_object param), find_object (return type), get_objects (return type), get_created_objects (return type), get_modified_objects (return type), get_deleted_objects (return type). Problem 2: The attribute 'listener' in the table should be 'listeners', as used everywhere else in the ObjectHome definition (see figure 3-4 on page 3-16 and the description of the attribute on page 3-28). Problem 3: The attribute 'class_name' in the table should be 'name', to be consistent with figure 3-4 on page 3-16. It should also be updated in the description on page 3-27. Problem 4: The attributes parent (type: ObjectHome) and children (type: ObjectHome[]) should be added to the table. Their descriptions should also be added in the attribute listing directly following the table (see figure 3-4, where they are mentioned). Solution: XXX TBD
Problem:
The which_contained_modified operation should be removed from the table on page 3-33 as well as from the text description on page 3-35, as the operation no longer exists (see figure 3-4). Getting the modified elements from collections can be done through the relevant collection interface operations (added_elements, modified_elements and removed_elements). In the IDL description in section 3.2.1.2 on page 3-50, the enum RelationKind, the valuetype RelationDescription and its derived valuetypes ListRelationDescription, IntMapRelationDescription and StrMapRelationDescription, as well as the sequence typedef for RelationDescriptions, should be removed.
Solution:
Remove the which_contained_modified operation from the table and remove the following text (at the top of page 3-35):
o get which contained objects have been modified (which_contained_modified). This method returns a list of descriptions for the relations that point to the modified objects (each description includes the name of the relation and if appropriate the index or key that corresponds to the modified contained object).
On page 3-50 in section 3.2.1.2 remove the following:
enum RelationKind {
REF_RELATION,
LIST_RELATION,
INT_MAP_RELATION,
STR_MAP_RELATION};
valuetype RelationDescription {
public RelationKind kind;
public RelationName name;
};
valuetype ListRelationDescription : RelationDescription {
public long index;
};
valuetype IntMapRelationDescription : RelationDescription {
public long key;
};
valuetype StrMapRelationDescription : RelationDescription {
public string key;
};
typedef sequence<RelationDescription> RelationDescriptionSeq;
Problem: Section 3.1.6.4.1 still talks about operations that no longer exist (how_many_added, how_many_removed, etc.). These operations are mentioned in the second bullet. Our suggestion is to remove the mention of those operations and simply state that there are operations to retrieve precisely which parts of the object have been modified, without explicitly naming the operations available for this purpose. Solution: Replace: o Then all the updates are actually applied in the cache16. When an object is modified, several operations allow to get more precisely which parts of the object are concerned (see ObjectRoot::is_modified operations as well as the operations for Collection, namely, is_modified, how_many_added, how_many_removed, removed_values, and which_added); these operations can be called in the listeners. With: o Then all the updates are actually applied in the cache16. When an object is modified, several operations allow to get more precisely which parts of the object are concerned; such operations can be called in the listeners.
Problem: Sections 3.1.6.4.3 and 3.1.6.4.4 describe the sequence of listener triggers; they both state that the selection listener is triggered first and then the object listeners. However, this does not correspond with section 3.1.6.4.2, which states that object listeners are triggered first! Our suggestion is to make everything uniform and let object listeners be triggered first and then selection listeners, as that sequence seems more logical. Naturally, the paragraph about the selection listener should then start with the word 'Then' instead of 'First' if they are swapped, and vice versa for the paragraph about object listeners. Solution: Obvious
Problem:
Section 3.1.6.6 regarding generated classes contains many typos. It mentions that the FooQuery class will be generated, but that is no longer the case! The QueryCriterion class is all that is needed for query filters. In the implied IDL on page 3-60, section 3.2.1.2.2, the local interface for FooQuery should be removed as well!
It also mentions a FooRelation class, which no longer exists, and names the collection types incorrectly (FooListRelation should simply be FooList, etc.).
It also forgets to mention the FooSet class and the Foo class itself.
Solution:
Replace:
Assuming that there is an application class named Foo (that will extend ObjectRoot), the following classes will be generated:
o FooHome : ObjectHome
o FooListener : ObjectListener
o FooSelection : Selection
o FooSelectionListener : SelectionListener
o FooFilter : FilterCriterion
o FooQuery : FooFilter, QueryCriterion
o And for relations to Foo objects (assuming that these relations are described in the applicative mode - note also that the actual name of these classes will be indicated by the application):
o "FooRelation" : RefRelation
o "FooListRelation" : ListRelation
o "FooStrMapRelation" : StrMapRelation
o "FooIntMapRelation" : IntMapRelation
With:
Assuming that there is an application class named Foo (that will extend ObjectRoot), the following classes will be generated:
o Foo: ObjectRoot
o FooHome : ObjectHome
o FooListener : ObjectListener
o FooSelection : Selection
o FooSelectionListener : SelectionListener
o FooFilter : FilterCriterion
o FooList : List
o FooStrMap : StrMap
o FooIntMap : IntMap
o FooSet : Set
On page 3-60 in section 3.2.1.2.2 remove:
local interface FooQuery : DDS::QueryCriterion, FooFilter {
};
Problem:
In the IDL description in section 3.2.1.2 on pages 3-48 and 3-49, the definition of the ObjectListener lists the on_object_created and on_object_deleted operations; however, these operations are generated with the proper Foo type in the derived FooListener and should thus be contained within the comments, just like the operation on_object_modified. This was not correctly revised during the last spec revision.
Solution:
Replace:
local interface ObjectListener {
boolean on_object_created (
in ObjectRoot the_object);
/**** will be generated with the proper Foo type* in the derived
* FooListener
* boolean on_object_modified (
* in ObjectRoot the_object);
****/
boolean on_object_deleted (
in ObjectRoot the_object);
};
With:
local interface ObjectListener {
/* Will be generated with the proper Foo type in the derived FooListener
*
* boolean on_object_modified (in ObjectRoot the_object);
* boolean on_object_created (in ObjectRoot the_object);
* boolean on_object_deleted (in ObjectRoot the_object);
*/
};
Problem:
In the IDL description in section 3.2.1.2 on page 3-49, the definition of the SelectionListener lists the on_object_out operation; however, this operation is generated with the proper Foo type in the derived FooSelectionListener and should thus be contained within the comments, just like the operations on_object_modified and on_object_in. This was not correctly revised during the last spec revision.
Solution:
Replace:
local interface SelectionListener {
/* Will be generated with the proper Foo type
* in the derived FooSelectionListener
*
void on_object_in (
in ObjectRoot the_object);
void on_object_modified (
in ObjectRoot the_object);
*
***/
void on_object_out (
in ObjectRoot the_object);
};
With:
local interface SelectionListener {
/* Will be generated with the proper Foo type in the derived FooSelectionListener
*
* void on_object_in (in ObjectRoot the_object);
* void on_object_modified (in ObjectRoot the_object);
* void on_object_out (in ObjectRoot the_object);
*/
};
Problem:
In the IDL description in section 3.2.1.2 on page 3-52, the criterion attribute of the Selection interface is incorrectly defined within the comments section. It should simply be listed as an attribute, as it won't be generated in the specialized FooSelection. In the implied IDL in section 3.2.1.2.2 on page 3-60, a wrong attribute is listed as well (the filter attribute; it should be removed).
Solution:
In section 3.2.1.2 IDL description on page 3-52 replace (only the beginning of the interface definition is shown):
local interface Selection {
// Attributes
// ----------
readonly attribute boolean auto_refresh;
readonly attribute boolean concerns_contained;
/***
* Following attributes will be generated properly typed
* in the generated derived classes
*
readonly attribute SelectionCriterion criterion;
readonly attribute ObjectRootSeq members;
readonly attribute SelectionListener listener;
*
*/
With:
local interface Selection {
// Attributes
// ----------
readonly attribute boolean auto_refresh;
readonly attribute boolean concerns_contained;
readonly attribute SelectionCriterion criterion;
/* The following attributes will be generated properly typed
* in the generated derived classes
*
* readonly attribute ObjectRootSeq members;
* readonly attribute SelectionListener listener;
*/
In the implied IDL in section 3.2.1.2.2 on page 3-60 replace:
local interface FooSelection : DDS::Selection {
readonly attribute FooFilter filter;
readonly attribute FooSeq members;
readonly attribute FooSelectionListener listener;
FooSelectionListener set_listener (
in FooSelectionListener listener);
};
With:
local interface FooSelection : DDS::Selection {
readonly attribute FooSeq members;
readonly attribute FooSelectionListener listener;
FooSelectionListener set_listener (
in FooSelectionListener listener);
};
In section 3.2.1.2 IDL description on page 3-52 the criterion attribute of the Selection interface is incorrectly defined within the comments section.
Problem: In section 3.2.1.2.2 Implied IDL, there is a typo in the FooHome definition for the create_selection operation: the parameters listed are incorrect. The parameter should be DDS::SelectionCriterion criterion instead of FooFilter filter, and the concerns_contained_objects parameter is missing altogether. Furthermore, the raises clause is wrong as well; the BadParameter exception did not exist in that definition of the specification, it should be PreconditionNotMet. Solution: Replace: FooSelection create_selection ( in FooFilter filter, in boolean auto_refresh) raises (DDS::BadParameter); With: FooSelection create_selection (in DDS::SelectionCriterion criterion, in boolean auto_refresh, in boolean concerns_contained_objects) raises (DDS::PreconditionNotMet);
In section 3.2.1.2.2 Implied IDL, typos in the FooHome definition for the create_selection operation
Problem: The description in section 3.2.2.3.2.11 MultiAttribute still refers to the indexField attribute of the multiPlaceTopic as being mandatory. This is wrong; it is defined as an implied (optional) attribute in the DTD. Solution: Replace: o A mandatory sub-tag to give the DCPS Topic where it is placed (multiPlaceTopic). This sub-tag follows the same pattern as placeTopic, except it has a mandatory attribute in addition to state the field needed for storing the collection index. With: o A mandatory sub-tag to give the DCPS Topic where it is placed (multiPlaceTopic). This sub-tag follows the same pattern as placeTopic, except it has an optional attribute in addition to state the field needed for storing the collection index, which should be used if the collection is a List or Map type.
Problem: The example XML code does not properly fill out the multiPlaceTopic: it omits the indexField attribute of the multiPlaceTopic. Also, the content attribute of the keyDescription has an incorrect value; it should be FullOid, not FullOID (wrong case). Solution: Replace: <multiAttribute name="comments"> <multiPlaceTopic name="COMMENTS-TOPIC" <keyDescription content="FullOID"> <keyField>CLASS</keyField> <keyField>OID</keyField> </keyDescription> </multiPlaceTopic> <valueField>COMMENT</valueField> </multiAttribute> With: <multiAttribute name="comments"> <multiPlaceTopic name="COMMENTS-TOPIC" indexField="INDEX"> <keyDescription content="FullOid"> <keyField>CLASS</keyField> <keyField>OID</keyField> </keyDescription> </multiPlaceTopic> <valueField>COMMENT</valueField> </multiAttribute>
Problem: In section 3.2.2.3.2.13 MultiRelation, the 3rd bullet states a valueKey sub-tag, which does not exist! It should be keyDescription instead! Solution: Replace: This tag gives the mapping for a multi-valued relation. It has: o A mandatory attribute to give the name of the relation. o A mandatory sub-tag to give the DCPS Topic where it is placed (multiPlaceTopic - see Section 3.2.2.3.2.11). o One valueKey sub-tag (see Section 3.2.2.3.2.12). With: This tag gives the mapping for a multi-valued relation. It has: o A mandatory attribute to give the name of the relation. o A mandatory sub-tag to give the DCPS Topic where it is placed (multiPlaceTopic - see Section 3.2.2.3.2.11). o One keyDescription sub-tag (see Section 3.2.2.3.2.12).
In section 3.2.2.3.2.13 MultiRelation, 3rd bullet states a valueKey sub-tag, which does not exist!
Problem: In section 3.2.2.3.2.10 MonoAttribute and section 3.2.2.3.2.12 MonoRelation, the case of the content attribute values of the keyDescription is incorrect. Solution: In section 3.2.2.3.2.10 on page 3-67 replace: <monoAttribute name="y"> <placeTopic name="Y_TOPIC"> <keyDescription content="SimpleOID"> <keyField>OID</keyField> </keyDescription> </placeTopic> <valueField>Y</valueField> </monoAttribute> With: <monoAttribute name="y"> <placeTopic name="Y_TOPIC"> <keyDescription content="SimpleOid"> <keyField>OID</keyField> </keyDescription> </placeTopic> <valueField>Y</valueField> </monoAttribute> In section 3.2.2.3.2.12 on page 3-68 replace: <monoRelation name="a_radar"> <keyDescription content="SimpleOID"> <keyField>RADAR_OID</keyField> </keyDescription> </monoRelation> With: <monoRelation name="a_radar"> <keyDescription content="SimpleOid"> <keyField>RADAR_OID</keyField> </keyDescription> </monoRelation>
In sections 3.2.2.3.2.10 MonoAttribute and 3.2.2.3.2.12 MonoRelation, the case of the content attribute values of the keyDescription is incorrect
Problem:
In section 3.2.3.5 Code example on pages 3-74 and 3-75, several typos can be found. It talks about a DLRL module, which should be DDS, in the definitions of the CacheFactory variable and the Cache variable.
The create_cache operation is missing a parameter (the cache name).
The home objects should have their default constructors called.
The create_object operation sometimes has a '-' as separator instead of a '_'.
The create_object operation should take a cache access as parameter instead of the cache; this cache access should also be created.
The write operation should be performed on the created cache access, not on the cache.
The closing }; should also be removed, as it is never opened.
The setter operations for the attributes are also incorrect: they should be set_x, not x. And the relation is given a put operation, which does not exist.
The first setter for the a_radar attribute should operate on t1, not t2.
Solution:
Replace:
DDS::DomainParticipant_var dp;
DLRL::CacheFactory_var cf;
/*
* Init phase
*/
DLRL::Cache_var c = cf->create_cache (WRITE_ONLY, dp);
RadarHome_var rh;
TrackHome_var th;
Track3DHome_var t3dh;
c->register_home (rh);
c->register_home (th);
c->register_home (t3dh);
c->register_all_for_pubsub();
// some QoS settings if needed
c->enable_all_for_pubsub();
/*
* Creation, modifications and publication
*/
Radar_var r1 = rh->create_object(c);
Track_var t1 = th->create-object (c);
Track3D_var t2 = t3dh->create-object (c);
t1->w(12);// setting of a pure local attribute
t1->x(1000.0);// some DLRL attributes settings
t1->y(2000.0);
t2->a_radar->put(r1);// modifies r1->tracks accordingly
t2->x(1000.0);
t2->y(2000.0);
t2->z(3000.0);
t2->a_radar->put(r1);// modifies r1->tracks accordingly
c->write();// all modifications are published
};
With:
DDS::DomainParticipant_var dp;
DDS::CacheFactory_var cf;
/*
* Init phase
*/
DDS::Cache_var c = cf->create_cache ("a_cache", WRITE_ONLY, dp);
RadarHome_var rh = new RadarHome();
TrackHome_var th = new TrackHome();
Track3DHome_var t3dh = new Track3DHome();
c->register_home (rh);
c->register_home (th);
c->register_home (t3dh);
c->register_all_for_pubsub();
// some QoS settings if needed
c->enable_all_for_pubsub();
/*
* Creation, modifications and publication
*/
DDS::CacheAccess_var ca = c->create_access(WRITE_ONLY);
Radar_var r1 = rh->create_object(ca);
Track_var t1 = th->create_object (ca);
Track3D_var t2 = t3dh->create_object (ca);
t1->w(12);// setting of a pure local attribute
t1->set_x(1000.0);// some DLRL attributes settings
t1->set_y(2000.0);
t1->set_a_radar(r1);// modifies r1->tracks accordingly
t2->set_x(1000.0);
t2->set_y(2000.0);
t2->set_z(3000.0);
t2->set_a_radar(r1);// modifies r1->tracks accordingly
ca->write();// all modifications are published
Problem: The find_object operation is not listed in the ObjectHome class in figure 3-4 on page 3-16. Solution: Obvious Problem: In section 3.2.1.2.2 Implied IDL, there are typos in the FooHome definition for find_object_in_access: the operation no longer exists. The find_object operation is missing a parameter as well. Solution: Replace: Foo find_object_in_access (in DDS::DLRLOid oid, in DDS::CacheAccess access) raises (DDS::NotFound); Foo find_object (in DDS::DLRLOid oid); With: Foo find_object (in DDS::DLRLOid oid, in DDS::CacheBase source);
In figure 3-4 and section 3.2.1.2.2 Implied IDL, typos in the FooHome definition: the find_object_in_access operation no longer exists, and the find_object operation is missing a parameter as well
Problem: In section 3.2.2.3.2.13 MultiRelation, the XML code still talks about a FullSimpleOID; it should be FullOid. The case of SimpleOid is also wrong. Solution: Replace: <multiRelation name="tracks"> <multiPlaceTopic name="RADARTRACKS-TOPIC" <keyDescription content="SimpleOID"> <keyField>RADAR-OID</keyField> </keyDescription> <\multiPlaceTopic> <keyDescription content="FullSimpleOID"> <keyField>TRACK-CLASS</keyField> <keyField>TRACK-OID</keyField> </keyDescription> </multiRelation> With: <multiRelation name="tracks"> <multiPlaceTopic name="RADARTRACKS-TOPIC"> <keyDescription content="SimpleOid"> <keyField>RADAR-OID</keyField> </keyDescription> </multiPlaceTopic> <keyDescription content="FullOid"> <keyField>TRACK-CLASS</keyField> <keyField>TRACK-OID</keyField> </keyDescription> </multiRelation>
In section 3.2.2.3.2.13 MultiRelation, the XML code still talks about a FullSimpleOID. The case of SimpleOid is also wrong.
Problem: Section 3.2.1.1 Mapping Rules regarding error reporting states that an exception is only raised when future behavior is affected. This is not exactly the case: exceptions are also thrown when unrecoverable errors have occurred. Solution: TBD.
Problem: Section 3.2.1.2 on page 3-51 regarding valuetype ObjectRoot contains the attribute class_name, but it is missing in figure 3-4 and in the ObjectRoot description in the entity listings. It should either be removed from the IDL or be added to the PIM. Solution: We propose to remove it from the IDL, since it can also be obtained through the related home: in section 3.2.1.2 on page 3-51, regarding valuetype ObjectRoot, remove: readonly attribute ClassName class_name;
Inconsistency in attribute definitions for valuetype ObjectRoot in section 3.2.1.2 IDL description
Problem: The find_object operation is not listed in the ObjectHome class in figure 3-4 on page 3-16. Solution: Obvious
Problem: In the DTD, the monoAttribute and multiAttribute elements allow multiple valueField elements to be specified inside them. It is not explained for which use cases something like that would be convenient. Solution: In case the attribute is a struct, it could be used to map the struct directly onto multiple value fields in the topic, using the order in which the members of the struct are mentioned. (However, for such cases it might be better to map each struct member individually using its scoped name, to avoid confusion.) It could also make sense to use it for elements inside an array. In any case, the exact use case for which this is allowed should be mentioned explicitly, preferably with good examples. TBD.
Explain the use case for multiple valueFields in the monoAttribute and multiAttribute elements.
Problem: If one of the purposes of the CacheAccess is to separate sets of objects that are being manipulated by different threads from each other, then it makes sense to also give them a separate Publisher. Otherwise different threads might try to write different coherent sets of information using the same DataWriters at the same time, thus mixing up two separate coherent sets as one big chunk. Also with respect to tailoring QoS settings, the modifying threads might influence each other. Solution: Giving each CacheAccess its own Publisher and DataWriters allows you to completely separate each thread of modification. Coherent sets that are being written always have exactly the right coherence scope, and it is possible to have different CacheAccess objects for different purposes: for example one CacheAccess to write the objects reliably, another one to send them with best effort. Creating a CacheAccess then implicitly creates a Publisher and all the required DataWriters for all ObjectHomes attached to the main Cache, using the same QoS settings as the writers already attached to the main Cache. To allow the user of a CacheAccess to tailor these QoS settings, all these DCPS entities should be created in a disabled state. That means that the CacheAccess would require a new operation called 'enable' to enable them all. It also means that, since the CacheAccess will have its own Publisher, the 'the_publisher' attribute can be moved from the Cache to the CacheBase interface. The question is why a Cache still requires a Publisher if you are only allowed to write information from a CacheAccess. We see no reason why a Cache could not be used to write information as well: there are sufficient mechanisms available in DLRL to prevent manual modifications being overwritten by automatic updates: one could disable the automatic updates for this purpose, or one could do the modification during a listener callback, thus blocking new updates from being applied. Also for a write-only application it makes sense to have a writeable Cache. We therefore also propose to move the write operation from the CacheAccess to the CacheBase class and to change the CacheAccess parameter of the create_object and create_unregistered_object operations into a CacheBase parameter, as sketched below. TBD.
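A hypothetical IDL sketch of the proposed restructuring (the placement of the attribute and operations is our reading of the proposal above, not part of the specification):
local interface CacheBase {
    readonly attribute DDS::Publisher the_publisher; // moved here from Cache
    void write();                                    // moved here from CacheAccess
};
local interface CacheAccess : CacheBase {
    // Proposed: enable the implicitly created Publisher and DataWriters
    // once their QoS has been tailored.
    void enable();
};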
The following IDs can be used in PT issue IDs: TYPO = to reflect that the issue is about a typo in the specification; CLAR = to reflect that the issue clarifies something in the specification; ARCH = to reflect that the issue highlights an architecturally significant problem in the specification.
Problem: The current specification provides no solution for the following two use cases: · A Volatile DataReader can receive data from a Transient or Persistent DataWriter, so it may also be interested in historical data. However, a DataReader cannot receive historical data. Just making the DataReader Transient does not solve the problem, because the DataReader may also be interested in volatile data. · A Transient or Persistent DataReader may not be interested in historical data, yet there is no way to avoid receiving historical data. A DataReader may be Transient, for example, to avoid receiving data from Volatile DataWriters, while in addition having no interest in historical data. Solution: Separate the control of receiving historical data from the DataReader's durability interest. The DataReader's durability interest will specify from which DataWriters it will accept data. A separate call will control (ask for) the retrieval of historical data. Add the following method: ReturnCode_t get_historical_data(Duration_t max_wait). Furthermore, Transient and Persistent DataReaders should not automatically receive historical data. A consequence is that this invalidates the method ReturnCode_t wait_for_historical_data(Duration_t max_wait).
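A minimal IDL sketch of the proposed operation, assuming it is added to the DataReader interface (placement and parameter name are illustrative, not taken from the specification):
local interface DataReader : Entity {
    // Proposed: explicitly request delivery of historical data,
    // independently of the reader's DURABILITY kind.
    ReturnCode_t get_historical_data(
        in Duration_t max_wait);
};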
Problem: In the QoS table (Section 2.1.3 Supported QoS), the 'Concerns' row illegally specifies the DataWriter for the DURABILITY_SERVICE QoS. Solution: Section 2.1.3 Supported QoS, QoS table: in the entry for the DURABILITY_SERVICE QoS, remove the word 'DataWriter' from the 'Concerns' column. If we do this, it would also need to be removed from the PublicationBuiltinTopicData.
Problem: For system management it is very useful for Entities to be identifiable by a logical, application-defined name. This attribute should ideally be set via the factory create call, but if changing APIs is not desired it can also be set via a QoS policy. This attribute should also be supported by the built-in Topics. Solution: Introduce a new Properties QoS that can be used for a name, but also for other things. The new QoS would contain a sequence of "properties", where each property is a pair of strings: (property_name, property_value). This Properties QoS can be used in much the same way as the USER_DATA, but because each property is named, multiple things can be stored without conflicting with each other. This also provides an extensibility mechanism for the DDS spec. We can reserve the property names with the prefix "DDS." to indicate "built-in" properties that should not be used by applications. We can use this mechanism with the built-in property name "DDS.EntityName" to implement the name attribute.
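A minimal IDL sketch of such a Properties QoS (all type names are illustrative, not part of the specification):
struct Property_t {
    string name;   // e.g. the reserved built-in property name "DDS.EntityName"
    string value;
};
typedef sequence<Property_t> PropertySeq;
struct PropertyQosPolicy {
    PropertySeq value;
};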
Problem: From the OMG DDS specification, the semantics of the liveliness of a reader instance are not clear when it is exclusively owned by a writer. The LivelinessChangedStatus of the reader indicates how many active and inactive writers communicate with the reader. In the case of exclusive ownership it is unclear whether the reader sees only the strongest writer as an active writer, or whether it must see all available writers, since the reader will only receive samples from the strongest writer. Solution: Do we need to clarify this?
Problem: In the IDL, the operations on the TypeSupport interface are commented out: all operations are defined in its specialized sub-classes. That is strange, since all TypeSupport operations have a signature that is completely independent of the specific type. Solution: We propose to promote all these operations to the TypeSupport interface itself, and thus to uncomment them there.
Problem: There is an inconsistency in the spec: on page 2-24 there is a list of operations that are allowed when the DomainParticipant is in the disabled state. This list does not include any lookup operations. However, on page 2-14 there is also a list of operations which may be invoked when an Entity has not yet been enabled, and here the 'lookup' operations are mentioned. So the question is whether the DomainParticipant should be allowed to perform the find_topic and lookup_topicdescription operations when it is in the disabled state. Solution: Our proposal is that find_topic should not be allowed in this case, but lookup_topicdescription should be allowed. Also, all delete operations, including delete_contained_entities, should be allowed.
Problem: QoS settings of builtin Readers: according to section 2.1.5, builtin Readers should have their ReaderDataLifecycle delays set to infinite, meaning that builtin topics that are disposed and/or unregistered will never be removed from the system unless explicitly 'taken'. If an application never bothers to look at the builtin readers, they will never clean up resources, and these readers will use up more and more memory if entities keep on joining and leaving the network. Solution: We propose to give the builtin readers a finite duration for both auto_purge variables (for example, something like 5 minutes).
Problem: It is possible that a transaction needs to be canceled, for example because one of the participating writers gets blocked and finally stops while returning a timeout. This might lead to a situation in which you want to cancel all preceding writes as well. Solution: To be able to cancel such a transaction, we propose to add an additional operation, e.g. cancel_coherent_changes, which would remove all samples that have already been written since the last begin_coherent_changes.
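A minimal IDL sketch of the proposed operation, assuming it sits on the Publisher next to the existing begin_coherent_changes and end_coherent_changes operations (name and placement are illustrative):
local interface Publisher : Entity {
    // Proposed: discard all samples written since the last
    // begin_coherent_changes instead of publishing them.
    ReturnCode_t cancel_coherent_changes();
};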
Problem: It should be possible to obtain the "enabled" state of an Entity. Solution: We propose to add a boolean operation to the Entity called something like is_enabled().
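In IDL the proposal amounts to a single operation (the name is the one suggested above):
local interface Entity {
    // Proposed: report whether enable() has been successfully
    // called on this Entity.
    boolean is_enabled();
};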
Page 2-70: the get_sample_lost method seems to be part of the Subscriber class. However, according to the UML diagram on page 2-62 and the IDL listing on page 2-166, the get_sample_lost method should belong to the DataReader class, not the Subscriber.
Section 7.1.3.6 of the DCPS spec should indicate what happens under the PRESENTATION=GROUP policy when different DataWriters in the group have different QoS settings. Whose settings are followed by the group? The most stringent? The least stringent? Group membership is dynamic, so the group members are not known until each write() happens.
In section 8.1.4.3, add: o For a mono-relation called "bar", the mono-relation's oid fields are "bar_class" and "bar_oid". o For a multi-relation, the related object's oid and class fields are "value_oid" and "value_class".
Specify names of mono-relation and multi-relation fields for default mapping
In a DLRL Query, there are [] operators to access members of a collection. For a Set, is the [] operator used to determine whether the value is in the set or not? Or does a "contains" operator need to be added to the query language?
DDS DLRL Issue: create_object and create_unregistered_object should throw PreconditionNotMet if the keyType is incompatible. In section 8.1.6.3.7, ObjectHome, add the following to these method descriptions: create_object: "The method throws a PreconditionNotMet if the Home is a NoOid Home." create_unregistered_object: "The method throws a PreconditionNotMet if the Home is a FullOid or SimpleOid Home." A sketch of the resulting signatures is given below.
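A sketch of how the implied IDL could carry these preconditions; the signatures follow the FooHome definitions discussed elsewhere in this document, and only the raises clauses are the proposed addition:
local interface FooHome : DDS::ObjectHome {
    Foo create_object (in DDS::CacheAccess access)
        raises (DDS::PreconditionNotMet); // thrown if this is a NoOid home
    Foo create_unregistered_object (in DDS::CacheAccess access)
        raises (DDS::PreconditionNotMet); // thrown if this is a FullOid or SimpleOid home
};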
Add the following to 8.1.6.3.7, ObjectHome::set_content_filter: "An object must pass all content filters in its Home and its base Homes to be accepted into the Cache"
Add a new subsection: 8.1.3.2.3 Compositions o A composition implies cascading deletion. When the owning object is deleted, the composed (i.e., owned) object is also deleted. When the owning object is deleted, its composed objects must also be present in the same CacheAccess so they too can be deleted. Failure to do this will result in a PreconditionNotMet exception on the ObjectRoot::destroy() call. o A composed (i.e., owned) object may only have one owner. Any attempt to set an owned ObjectRoot on two different owning objects will result in a PreconditionNotMet exception. Modify 8.1.6.3.14 ObjectRoot by adding to the descriptions of the relevant operations: destroy: If the object has composition relations, all composed objects must be present in the CacheAccess. Otherwise, a PreconditionNotMet is thrown. set_attribute: If the attribute is a composition relation, the composed object must not be owned by any other object. Otherwise, a PreconditionNotMet is thrown.
1. Section 7.1.4.4 Conditions and Wait-sets: two paragraphs above figure 7.19 is the sentence "The blocking behavior of the WaitSet is illustrated in Figure 7.18". I think this is meant to reference figure 7.19 instead. 2. Section 7.1.6.2.2 Notifications via Conditions and Wait-Sets starts out with "The first part of Figure 7.21"; I think it should be figure 7.22. 3. In section 7.2.3 DCPS PSM: IDL, pages 152-153 contain definitions of all the IDs and names of the QoS policies, except that TRANSPORTPRIORITY has an ID definition but is missing a name definition. 4. Section 7.1.2.2.2.7 set_qos reads: "This operation sets the value of the DomainParticipantFactory QoS policies. These policies control the behavior of the object a factory for entities." The last sentence appears to be missing a word ("the object as a factory for entities").
I noticed an inconsistency while reading the DDS spec (version 1.2) last night. My question relates to the following quality of service policy: DURABILITYSERVICE_POLICY_NAME. The pattern in all QoS names is *_QOS_POLICY_NAME, except for DURABILITYSERVICE. The pattern in all QoS IDs is *_QOS_POLICY_ID. DURABILITYSERVICE does adhere to the pattern/convention for QoS IDs: DURABILITYSERVICE_QOS_POLICY_ID. But it does NOT adhere to the pattern/convention for QoS names. So, is there a typo in DURABILITYSERVICE_POLICY_NAME, i.e., should it be DURABILITYSERVICE_QOS_POLICY_NAME, or is the lack of _QOS_ intentional?
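If the omission is indeed a typo, the fix would be a one-line rename in the PSM, keeping the existing string value (shown here under that assumption):
const string DURABILITYSERVICE_QOS_POLICY_NAME = "DurabilityService";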
The DDS Spec (as of v1.2) does not specify the set of IDL types that are allowed in a DDS Topic struct. This should be defined and made normative instead of being left as an implementation detail. Discussion: Because the set of IDL types allowed in a DDS Topic struct is not defined, implementors are at liberty to decide which set of IDL types to support. Concerns about the impact of the lack of standardization in this area have been raised by the computing infrastructure teams on major Navy programs on several occasions, particularly in regard to the potential impact on code portability across DDS implementations and interoperability between DDS implementations. In reviewing the DDS C++ Native Language Mapping RFP, we commented that the RFP provided an opportunity to insert a requirement to define the allowed C++ types within a DDS Topic struct, which would indirectly result in defining the set of allowed DDS IDL types due to the RFP requirement for the C++ mapping to be consistent with the IDL mapping. This suggestion was rejected because it was viewed that the correct forum and mechanism for resolving this issue was the DDS RTF. We were requested to submit this as an issue to the RTF, since the RTF could probably resolve this issue within the next 6 months.
While reading the DDS specification, 'Data Distribution Service for Real-time Systems Version 1.2 OMG Available Specification formal/07-01-01', I found that 'synchronous' and 'asynchronous' are switched, as highlighted in red in the paragraph below from the spec. <para #4, page 11> On the subscriber’s side however, there are more choices: relevant information may arrive when the application is busy doing something else or when the application is just waiting for that information. Therefore, depending on the way the application is designed, asynchronous notifications or synchronous access may be more appropriate. Both interaction modes are allowed, a Listener is used to provide a callback for synchronous access and a WaitSet associated with one or several Condition objects provides asynchronous data access.
The CORBA specification (08-01-04) in section 7.11.6 deprecates the use of anonymous types, for example the type of the struct field "seq" below:
struct Foo {
sequence<octet> seq;
};
The DDS DCPS IDL uses these in multiple places (the first is DDS::UserDataQosPolicy). These should be replaced with non-deprecated usage such as:
typedef sequence<octet> OctetSeq;
struct Foo {
OctetSeq seq;
};
This will also increase internal consistency of the DDS spec, since it already uses a DDS::StringSeq typedef in the PartitionQosPolicy struct.
Furthermore, if the element type of the sequence is a Basic Type or a String Type, the CORBA module already provides these typedefs, so it would be preferable to use them. The example above becomes:
struct Foo {
CORBA::OctetSeq seq;
};
The DDS specification document does not prescribe a specific package definition for the entities of the model in the case of the Java language binding; in the IDL only a "DDS" module is defined, which does not result in a strictly specified package structure (it might only imply that the entities belong to the DDS package/namespace). It would be useful to have a precise statement indicating, at least for the Java binding, the packages to which the DDS Entities must belong, so that all implementations use the same packages for the same Entities (e.g. org.omg.dds.infrastructure, org.omg.dds.domain, etc.).
The mapping of user-written IDL to C++ is not described in version 1.2 of the DDS standard. Is it expected that DDS use the existing CORBA C++ mapping for data types? If so, then the standard should state this requirement. On the other hand, the CORBA IDL to C++ mapping is fairly old. The new DDS PSM for C++ would suggest a more modern mapping. For example, for bounded strings and bounded sequences, C++ classes inspired by the Standard Template Library (STL) could be used. These classes need not necessarily break the DCPS compatibility with the C language mentioned in 8.2.1.1 (use fixed buffers, avoid virtual methods). It is not clear whether unbounded data types need be supported; see issues 8892 and 12360. In case a new mapping is defined which is independent of the CORBA C++ mapping, there is a problem to address: bridge applications which use both CORBA and DDS would need to translate a single IDL file twice, once for CORBA and again for DDS. Then there is an overlap in the generated names. This problem could be solved by encapsulating all C++ code generated for DDS from user IDL in an extra namespace.
UserDataQosPolicy, TopicDataQosPolicy, and GroupDataQosPolicy use an anonymous sequence; this is deprecated in IDL. A new typedef for this should be introduced.
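A sketch of the non-anonymous form for the first of the three policies (the typedef name is illustrative; TopicDataQosPolicy and GroupDataQosPolicy would follow the same pattern):
typedef sequence<octet> OctetSeq;
struct UserDataQosPolicy {
    OctetSeq value;
};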
For ContentFilteredTopic::get_expression_parameters, the argument name is not given in the spec; this way the IDL is neither complete nor compilable. MultiTopic::get_expression_parameters has the same issue, as does DataWriter::get_liveliness_lost_status.
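A sketch of the completed signatures; the parameter names chosen here are illustrative, the point is only that they must be present:
// ContentFilteredTopic and MultiTopic
ReturnCode_t get_expression_parameters(
    inout StringSeq expression_parameters);
// DataWriter
ReturnCode_t get_liveliness_lost_status(
    inout LivelinessLostStatus status);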
The DDS 1.2 standard does not mention which IDL data types are permissible as key fields for topic data types. As data types for keys, at least enumeration types and integral types (octet/short/long, etc.) should be permissible. However, it would be desirable to also allow simple (non-array) typedefs of these types.
The DDS spec defines:
#define HANDLE_TYPE_NATIVE long
typedef HANDLE_TYPE_NATIVE InstanceHandle_t;
We see that some vendors use a long, others a struct containing an octet array. This is causing problems when integrating DDS into CCM: a CORBA::Long is passed by value, while a struct is passed as const &. At the very least, the way the InstanceHandle_t is passed to methods and returned should be the same. We propose to change this type to:
struct NativeInstanceHandle_t {
    // Vendor defined
};
typedef NativeInstanceHandle_t InstanceHandle_t;
That way we always have a struct passed as const &.
DDS defines Time_t with seconds as a long, which is 32-bit. This will give an issue after 2038; almost all operating systems now define time as 64-bit. Shouldn't DDS do the same?
DDS has several methods to create entities and pass a listener. Along with the listener, a mask has to be passed. It would be much cleaner for some C++ systems if we could pass a special mask which means that DDS will call back on the listener to get its mask; this reduces the application code and could lead to a cleaner user code architecture.
For the template meta-programming we are doing for the dds4ccm spec, it is necessary to know at run time of the user program whether a topic has a key or not, because behaviour depends on that. We propose to add methods to the Topic to get its key fields.
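A hypothetical IDL sketch of such an addition (the operation name and return type are ours, not taken from the specification):
local interface Topic : Entity, TopicDescription {
    // Proposed: return the names of the key fields of the topic's type;
    // an empty sequence would mean the topic is keyless.
    StringSeq get_key_fields();
};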
We want to propose extending the DDS status masks with DDS::STATUS_MASK_NONE, which is defined as 0. This is cleaner for application code.
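In IDL the proposal amounts to a single constant:
const StatusMask STATUS_MASK_NONE = 0;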
DDS leaves the domain id, instance handle, and topic key to the DDS vendor; each vendor can define their own type for these. The specification has defines for them, but the default is long. If a vendor uses, for example, a struct for the InstanceHandle_t, this leads to challenges when writing portable C++ code, because the argument passing differs between a long (by value) and a fixed struct (const &). In my view the DDS specification should be more precise on the type, maybe a fixed struct, to achieve more portability of user code.
In a CCM/DDS4CCM-based system, not all IDL-defined types are intended to be used with DDS. At this moment there is no standardized way to indicate that a type should be usable with DDS. If I have a large file with a lot of types, DDS just assumes that everything should be transmittable through DDS and generates a lot of code. Just like one can indicate the keys of a struct, DDS should define a standardized way to annotate the IDL so that only some types in a file are handled by the DDS tooling.
DDS4CCM adds support for QoS XML files. There is no API in DDS to create a DomainParticipant, Subscriber, Publisher, DataReader, or DataWriter with a QoS taken from the XML file. We propose to add the following methods to the DDS IDL to create DDS entities with a QoS XML profile.
local interface DomainParticipant : Entity {
Publisher create_publisher_with_profile(
in string library_name,
in string profile_name,
in PublisherListener a_listener,
in StatusMask mask);
Subscriber create_subscriber_with_profile(
in string library_name,
in string profile_name,
in SubscriberListener a_listener,
in StatusMask mask);
Topic create_topic_with_profile(
in string topic_name,
in string type_name,
in string library_name,
in string profile_name,
in TopicListener a_listener,
in StatusMask mask);
};
local interface DomainParticipantFactory {
DomainParticipant create_participant_with_profile(
in DomainId_t domain_id,
in string library_name,
in string profile_name,
in DomainParticipantListener a_listener,
in StatusMask mask);
ReturnCode_t set_default_participant_qos_with_profile(
in string library_name,
in string profile_name);
};
local interface Publisher : Entity {
DataWriter create_datawriter_with_profile(
in Topic a_topic,
in string library_name,
in string profile_name,
in DataWriterListener a_listener,
in StatusMask mask);
};
local interface Subscriber : Entity {
DataReader create_datareader_with_profile(
in TopicDescription a_topic,
in string library_name,
in string profile_name,
in DataReaderListener a_listener,
in StatusMask mask);
};
DDS uses IDL and defines its interfaces as regular interfaces, but these should all be local interfaces.
DDS Entities (DomainParticipant, Publisher, Subscriber, DataReader and DataWriter) have an associated GUID, but the current API does not provide any way of associating a symbolic name with them, such as "com.acme.mycoolapp.DomainParticipantFoo". The DDS PIM should be extended to support the explicit setting of entity names. When a name is not set explicitly, vendors should be free to pick meaningful names.
The DDS v1.2 specification does not clearly specify the behaviour of redefining the same topic multiple times with different QoS. As a Topic represents a global assertion, having different applications associate different QoS with the same topic should be flagged as an error.
It seems that DDS vendors are implementing get_type_name of the TypeSupport interfaces differently: some as a static class method in C++, some as a regular method. The spec uses regular IDL, which gives the idea that it is a regular method needing a concrete TypeSupport instance, but I doubt whether that is intended.
InstanceHandle_t is underspecified. The IDL PSM should use a struct, so that argument passing is the same for all values. In the IDL PSM, copying and comparing instance handles is also underspecified. (The same issue applies to domain IDs, which likewise have unspecified contents in that PSM.)
For keyed topics, the core set of read/take APIs supports read|take, read|take_instance, and read|take_next_instance. The corresponding set of APIs for read or take with conditions only specifies read|take_w_condition and read|take_next_instance_w_condition. The specification lacks the APIs read|take_instance_w_condition. The lack of these APIs deprives the user of the ability to read or take from a specific instance while limiting the sample set to specific conditions.
The current DDS v1.2 specification provides slightly different semantics for the history on the writer side and on the reader side. In addition, the DDS v1.2 specification allows the data-writer history to be used as the "reliability send queue"; combined with the fact that History is (correctly) not an RxO policy, this leads to a serious bug in the spec (luckily not all DDS implementations follow this path). Let's try to understand why. First, it is important to comprehend that unless a data writer uses KeepAll history, reliable data delivery is not guaranteed to matching data readers. This cannot be guaranteed even when using Reliable communication (obviously ignoring data-writer crashes for the time being). Essentially, under this scheme a DataWriter is allowed to send/re-send only what is in its history cache. If a sample is out of the history, a DataWriter won't do anything, thus leading to potential inconsistencies in case some reader has lost the message. Things get very messy when a DataReader with KeepAll history matches a DataWriter that uses KeepLast(n). In this case, the poor DataReader, which might legitimately expect to have in its history all the samples written by the writers, without holes, might find itself surprised by the fact that some samples are missing.
The compatibility rules for the Presentation QosPolicy, specified in section 7.1.3.6 of the DDS Specification, prevent a subscriber configured with GROUP access scope from simultaneously matching a publisher configured with GROUP access scope and a publisher configured with TOPIC or INSTANCE access scope. Proposed Resolution: Add a new boolean field called "use_highest_offered_access_scope" to the Presentation QosPolicy. This boolean is not propagated as part of the discovery information and remains local to the subscriber. Users configure the subscriber to the minimum accepted access scope. When the boolean field is set to true, the subscriber will provide the access_scope offered by the publisher as opposed to its own access_scope value. For example, a subscriber that wants to provide GROUP access scope when matched with a GROUP access scope publisher, while also matching a publisher providing INSTANCE access scope, will use INSTANCE access scope and set the use_highest_offered_access_scope boolean to true.
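A minimal IDL sketch of the proposed field; the first three members are reproduced from the existing PRESENTATION policy, only the last one is new:
struct PresentationQosPolicy {
    PresentationQosPolicyAccessScopeKind access_scope;
    boolean coherent_access;
    boolean ordered_access;
    // Proposed: local to the subscriber, not propagated via discovery.
    boolean use_highest_offered_access_scope;
};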
Found by: Fernando Crespo. Problem: The DDS specification (section 7.1.2.5.2.8) does not allow access to the DDS cache when the application wants to access data but does not care about the order across DataWriters. Proposed Resolution: When using GROUP access scope, allow both access patterns and do not return the PRECONDITION_NOT_MET error code when read, take, etc. are called without first calling begin_access. Note: This also allows portability for applications which do not care about the order across topics.
Found by: Fernando Crespo. Problem: The OMG DDS spec (section 7.1.2.5.2.10) states that the sequence returned by get_datareaders() will contain each DataReader one or more times. For example, if multiple consecutive samples in a group belong to the same DataReader, the DataReader is repeated in the list returned by get_datareaders(). Having to process each element, even when consecutive elements belong to the same DataReader, is less performant. Proposed Resolution: Modify the specification to return one DataReader element, instead of a list where a DataReader is repeated multiple times, when multiple subsequent samples belong to the same DataReader. This allows for more optimized processing, where the user calls read/take until the return code is NO_DATA.
The spec defines the following: 7.1.2.1.1.8 get_instance_handle: "This operation returns the InstanceHandle_t that represents the Entity." But it doesn't specify to what extent this InstanceHandle_t has to be unique. When the user receives an InstanceHandle_t, is it then unique within the same domain participant, within the process, or within the complete DDS domain? If it is, for example, only unique within one domain participant, we can't use it as, say, a key in a map which could contain entities from multiple domain participants.
For write, the DDS spec says: "The special value HANDLE_NIL can be used for the parameter handle. This indicates that the identity of the instance should be automatically deduced from the instance_data (by means of the key)." The case which is not specified is where the handle is HANDLE_NIL but the key we use hasn't been registered with DDS yet. Will DDS then return an error, or automatically register a new instance?