Issues for Data Distribution Service 3 Revision Task Force
To comment on any of these issues, send email to data-distribution-rtf@omg.org. (Please include the issue number in the Subject: header, like this: [Issue ###].) To submit a new issue, send email to issues@omg.org.
List of issues (green=resolved, yellow=pending Board vote, red=unresolved)
Issue 7964: no specific mention of interoperability in DDS 04-04-12 standard proposal
Issue 7965: DDS: DCPS generated interface FooTypeSupport
Issue 7966: DDS: DCPS - define the term "plain data structures"
Issue 7974: 2.1.3.20 WRITER_DATA_LIFECYCLE, itemized list, first bullet
Issue 7975: DDS 04-04-12 para. 2.1.1.1 Format and conventions
Issue 7976: DDS 04-04-12 Appendix B, C
Issue 8354: Typographical and grammatical errors
Issue 8355: Spelling inconsistencies between the PIM and IDL PSM
Issue 8358: Operation DataWriter::register
Issue 8359: (T#4) Typo in section 2.1.2.4.2.10 (write) and section 2.1.2.4.12 (dispose)
Issue 8360: Typo in section 2.1.2.5.2.5
Issue 8361: Default value for READER_DATA_LIFECYCLE
Issue 8362: Incorrect reference to USER_DATA on TopicQos
Issue 8363: No mention of DESTINATION_ORDER on DataWriterQos
Issue 8364: Formal parameter name improvement in IDL
Issue 8365: Spell fully the names for the DataReader operations
Issue 8366: Missing operations on DomainParticipantFactory
Issue 8367: (T#18,24) Missing operations and attributes
Issue 8368: (T#28) Typographical and grammatical errors
Issue 8369: (T#29) Missing operations to Topic class
Issue 8370: Formal parameter name change in operations of ContentFilteredTopic
Issue 8371: (T#30) Ambiguous description of TOPIC_DATA
Issue 8372: Confusing description of behavior of Publisher::set_default_datawriter_qos
Issue 8373: (T#33) Clarification in use of set_listener operation
Issue 8374: Missing description of DomainParticipant::get_domain_id
Issue 8375: (T#41) Default value for RELIABILITY max_blocking_time
Issue 8376: (T#42) Behavior when condition is attached to WaitSet multiple times
Issue 8377: Explicit mention of static DomainParticipantFactory::get_instance operation
Issue 8378: (T#45) Clarification of syntax of char constants within query expressions
Issue 8379: (T#52) Allow to explicitly refer to the default QoS
Issue 8380: (T#54) Performance improvement to WaitSet
Issue 8381: (T#55) Modification to how enumeration values are indicated in expressions
Issue 8382: (T#56) Return values of WaitSet::detach_condition
Issue 8383: (T#57) Enable status when creating DomainParticipant
Issue 8384: Add autopurge_disposed_samples_delay to READER_DATA_LIFECYCLE QoS
Issue 8388: (R#106b) Parameter passing convention of Subscriber::get_datareaders
Issue 8389: (R#107) Missing Topic operations in IDL PSM
Issue 8390: (R#109) Unused types in IDL
Issue 8391: Incorrect field name for USER_DATA, TOPIC_DATA, and GROUP_DATA
Issue 8392: (R#112) Incorrect SampleRejectedStatusKind constants
Issue 8393: (R#114) Operations should not return void
Issue 8394: (R#115) Destination order missing from PublicationBuiltinTopicData
Issue 8395: TransportPriority QoS range does not specify high/low priority values
Issue 8396: (R#119) Need lookup_instance method on reader and writer
Issue 8397: (R#120) Clarify use of DATAREADER_QOS_USE_TOPIC_QOS
Issue 8398: (R#122) Missing QoS dependencies in table
Issue 8399: Need an extra return code: ILLEGAL_OPERATION
Issue 8417: (R#124) Clarification on the behavior of dispose
Issue 8418: (R#125) Additional operations that can return RETCODE_TIMEOUT
Issue 8419: (R#127) Improve PSM mapping of BuiltinTopicKey_t
Issue 8420: Unspecified behavior of DataReader/DataWriter creation with mismatched Topic
Issue 8421: (R#130) Unspecified behavior of delete_datareader with outstanding loans
Issue 8422: (R#131) Clarify behavior of get_status_changes
Issue 8423: Incorrect reference to LIVELINESS_CHANGED in DataWriter::unregister
Issue 8424: (R#135) Add fields to PublicationMatchStatus and SubscriptionMatchStatus
Issue 8425: (R#138) Add instance handle to LivelinessChangedStatus
Issue 8426: (R#139) Rename *MatchStatus to *MatchedStatus
Issue 8427: (R#142) OWNERSHIP QoS policy should concern DataWriter and DataReader
Issue 8428: (R#145,146) Inconsistent description of Topic module in PIM
Issue 8429: (R#147) Inconsistent error code list in description of TypeSupport::register_type
Issue 8430: (R#152) Extraneous WaitSet::wakeup
Issue 8431: (R#153) Ambiguous SampleRejectedStatus::last_reason field
Issue 8432: (R#154) Undefined behavior if resume_publications is never called
Issue 8531: DTD Error (mainTopic)
Issue 8532: get_all_topic_names operation missing on figure 3-4
Issue 8533: Naming inconsistencies (IDL PSM vs. PIM) for ObjectHome operations
Issue 8534: Naming inconsistencies (IDL PSM vs. PIM) for Cache operation
Issue 8535: Bad cardinality on figure 3-4
Issue 8536: ReadOnly exception on clone operations
Issue 8537: Wrong definition for FooListener
Issue 8538: Typo CacheUsage instead of CacheAccess
Issue 8539: templateDef explanation contains some mistakes
Issue 8540: DlrlOid instead of DLRLOid in implied IDL
Issue 8541: Parameter wrongly named "object" in implied IDL
Issue 8542: Attach_Listener and detach_listener operations on ObjectHome are untyped
Issue 8543: Remove operations badly put on implied classes
Issue 8545: Behavior of DataReaderListener::on_data_available
Issue 8546: Inconsistent naming for status parameters in DataReader operations.
Issue 8547: (T#23) Syntax of partition strings
Issue 8548: Clarification of order preservation on reliable data reception
Issue 8549: (T#37) Clarification on the value of LENGTH_UNLIMITED constant
Issue 8550: (T#38) request-offered behavior for LATENCY_BUDGET
Issue 8551: (T#46) History when DataWriter is deleted
Issue 8552: (T#47) Should a topic returned by lookup_topicdescription be deleted
Issue 8553: (T#51) Identification of the writer of a sample
Issue 8554: (T#53) Cannot set listener mask when creating an entity
Issue 8555: (T#53) Cannot set listener mask when creating an entity
Issue 8556: (T#59) Deletion of disabled entities
Issue 8557: (T#60) Asynchronous write
Issue 8558: (T#61) Restrictive Handle definition
Issue 8559: (T#62, R#141) Unspecified TOPIC semantics
Issue 8560: (T#65) Missing get_current_time() function
Issue 8561: Read or take next instance, and others with an illegal instance_handle
Issue 8562: (T#69) Notification of unsupported QoS policies
Issue 8567: (O#7966) Confusing terminology: "plain data structures"
Issue 8568: (R#104) Inconsistent naming of QueryCondition::get_query_arguments
Issue 8569: (R#115b) Incorrect description of QoS for built-in readers
Issue 8570: (R#117) No way to access Participant and Topic built-in topic data
Issue 8571: (R#126) Correction to DataWriter blocking behavior
Issue 8572: Clarify meaning of LivelinessChangedStatus fields and LIVELINESS le
Issue 8573: (R#133) Clarify meaning of LivelinessLost and DeadlineMissed
Issue 8574: (R#136) Additional operations allowed on disabled entities
Issue 8575: (R#144) Default value for DataWriter RELIABILITY QoS
Issue 8576: (R#150) Ambiguous description of create_topic behavior
Issue 8577: (R#178) Unclear behavior of coherent changes when communication interrupted
Issue 8578: (R#179) Built-in DataReaders should have TRANSIENT_LOCAL durability
Issue 8579: (R#180) Clarify which entities appear as instances to built-in readers
Issue 8580: (R#181) Clarify listener and mask behavior with respect to built-in entities
Issue 8581: (R#182) Clarify mapping of PIM 'out' to PSM 'inout'
Issue 8582: (T#6) Inconsistent name: StatusKindMask
Issue 8775: Page: 2-8
Issue 9478: Inconsistencies between PIM and PSM in the prototype of get_qos() methods
Issue 9479: Inconsistent prototype for Publisher's get_default_datawriter_qos() method
Issue 9480: String sequence should be a parameter and not return value
Issue 9481: Mention of get_instance() operation on DomainParticipantFactory being static
Issue 9482: Improper prototype for get_XXX_status()
Issue 9483: Inconsistent naming in SampleRejectedStatusKind
Issue 9484: OWNERSHIP_STRENGTH QoS is not a QoS on built-in Subscriber of DataReaders
Issue 9485: Consistency between RESOURCE_LIMITS QoS policies
Issue 9486: Blocking of write() call
Issue 9487: Clarify PARTITION QoS and its default value
Issue 9488: Typos in built-in topic table
Issue 9489: Naming of filter_parameters concerning ContentFilteredTopic
Issue 9490: Incorrect prototype for FooDataWriter method register_instance_w_timestamp()
Issue 9491: Compatible versus consistency when talking about QosPolicy
Issue 9492: Incorrect mention of INCONSISTENT_POLICY status
Issue 9493: Typos in QoS sections
Issue 9494: Typos in PIM sections
Issue 9495: Clarify ownership with same-strength writers
Issue 9496: Should write() block when out of instance resources?
Issue 9497: Description of set_default_XXX_qos()
Issue 9498: Naming consistencies in match statuses
Issue 9499: delete_contained_entities() on the Subscriber
Issue 9500: Return of get_matched_XXX_data()
Issue 9501: Need INVALID_QOS_POLICY_ID
Issue 9502: Clarify valid handle when calling write()
Issue 9503: Operation dispose_w_timestamp() should be callable on unregistered instances
Issue 9504: Behavior of dispose with regards to DURABILITY QoS
Issue 9505: Typo in copy_from_topic_qos
Issue 9506: Order of parameters incorrect in PSM
Issue 9507: Typo in get_discovered_participant_data
Issue 9508: Operation wait() on a WaitSet should return TIMEOUT
Issue 9509: Example in 2.1.4.4.2 not quite correct
Issue 9510: Non-intuitive constant names
Issue 9511: Corrections to Figure 2-19
Issue 9516: Simplify Relation Management
Issue 9517: Cache and CacheAccess should have a common parent
Issue 9518: Object notification in manual update mode required
Issue 9519: ObjectExtent and ObjectModifier can be removed
Issue 9520: Introduce the concept of cloning contracts consistently in specification
Issue 9521: Object State Transitions of Figure 3-5 and 3-6 should be corrected
Issue 9522: Add Iterators to Collection types
Issue 9523: Harmonize Collection definitions in PIM and PSM
Issue 9524: Add the Set as a supported Collection type
Issue 9525: Make the ObjectFilter and the ObjectQuery separate Selection Criterions
Issue 9526: Add a static initializer operation to the CacheFactory
Issue 9527: Make update rounds uninterruptible
Issue 9528: Remove lock/unlock due to overlap with updates_enabled
Issue 9529: Add Listener callbacks for changes in the update mode
Issue 9530: Representation of OID should be vendor specific
Issue 9531: define both the Topic name and the Topic type_name separately
Issue 9532: Merge find_object with find_object_in_access
Issue 9533: Clarify which Exceptions exist in DLRL and when to throw them
Issue 9534: Support sequences of primitive types in DLRL Objects
Issue 9535: manual mapping key-fields of registered objects may not be changed
Issue 9536: Specification does not state how to instantiate an ObjectHome
Issue 9537: Raise PreconditionNotMet when changing filter expression on registered Objects
Issue 9538: PIM description of "get_domain_id" method is missing
Issue 9539: PIM and PSM contradicting wrt "get_sample_lost_status" operation
Issue 9540: Small naming inconsistencies between PIM and PSM
Issue 9541: Unlimited setting for Resource limits not clearly explained
Issue 9542: Inconsistent PIM/PSM for RETCODE_ILLEGAL_OPERATION
Issue 9543: Resetting of the statusflag during a listener callback
Issue 9544: Incorrect description of enable precondition
Issue 9545: invalid reference to delete_datareader
Issue 9546: Clarify the meaning of locally
Issue 9548: Missing autopurge_disposed_samples_delay
Issue 9549: Illegal return value register_instance
Issue 9550: Typo in section 2.1.2.5.1
Issue 9551: Extended visibility of instance state changes
Issue 9552: Clarify notification of ownership change
Issue 9553: read/take_next_instance()
Issue 9554: instance resource can be reclaimed in READER_DATA_LIFECYCLE QoS section
Issue 9555: String sequence should be a parameter and not return value
Issue 7964: no specific mention of interoperability in DDS 04-04-12 standard proposal (data-distribution-rtf)
Source: EADS (Mr. Oliver M. Kellogg, oliver.kellogg(at)cassidian.com)
Nature: Uncategorized Issue
Severity:
Summary:
I find no specific mention of interoperability in the DDS 04-04-12
standard proposal.
It should be clarified whether the standard is intended to address
interoperability, and if so, under what exact conditions (e.g., is it
safe to assume that if the DCPS IDL PSM is implemented by IIOP based
CORBA ORBs then it will be possible to interoperate?)
Resolution:
Add clarifying text to the specification.
Revised Text:
At the end of section 1.2 "Purpose" add the following text:
This specification focuses on the portability of applications using the Data-Distribution Service. This is consistent with the requirements expressed in the RFP. Wire-protocol interoperability between vendor implementations is planned as an extension.
Actions taken:
December 2, 2004: received issue
August 1, 2005: closed issue
Discussion: RTF Comments:
The DDS specification addresses only inter-vendor portability. The specification defines the API and behavior. There is an ongoing effort at OMG to address interoperability. In the meantime, implementations could be built on top of IIOP. However, given that DDS Entities are intended to be local communication endpoints and not references to remote objects, the use of IIOP alone would not be sufficient to achieve interoperability: IIOP does not address how to represent the QoS, discovery information, and other behaviors necessary to implement DDS. In addition, the DDS specification was designed to be implementable on top of connectionless, unreliable protocols such as IP multicast, and IIOP does not offer direct facilities for that.
Issue 7965: DDS: DCPS generated interface FooTypeSupport (data-distribution-rtf)
Source: EADS (Mr. Oliver M. Kellogg, oliver.kellogg(at)cassidian.com)
Nature: Enhancement
Severity:
Summary:
Document 04-04-12 para. 2.2.3 near end
In the implied IDL interface FooTypeSupport for a user type Foo,
there is an operation
DDS::ReturnCode_t register_type(in DDS::DomainParticipant participant,
in string type_name);
IMHO the type_name argument is superfluous:
The generated stub code can fill it in automatically ("Foo").
Resolution:
Add the get_type_name operation to the FooTypeSupport; its result can be used as the type name. In addition, state that if the type_name is nil, the value returned by get_type_name will be used.
Revised Text:
Section 2.1.2.3.6 TypeSupport Interface.
· TypeSupport table. Add the operation:
get_type_name string
In section 2.1.2.3.6.1 Before the paragraph "Possible error codes returned…" Add the paragraph:
The application may pass nil as the value for the type_name. In this case the default type-name as defined by the TypeSupport (i.e. the value returned by the get_type_name operation) will be used.
Add section 2.1.2.3.6.2
2.1.2.3.6.2 get_type_name
This operation returns the default name for the data-type represented by the TypeSupport.
Figure 2-8 Add get_type_name() operation to TypeSupport and FooTypeSupport
Section 2.2.3 DCPS PSM : IDL, Add get_type_name() operation to TypeSupport and FooTypeSupport
interface TypeSupport. Add commented-out line:
// string get_type_name();
interface FooTypeSupport : DDS::TypeSupport . Add operation
string get_type_name();
Actions taken:
December 2, 2004: received issue
August 1, 2005: closed issue
Discussion: RTF Comments:
The type name is not superfluous; see section 2.1.2.3.6.1. In some applications, it may be desirable to register the same physical type multiple times (with different participants or the same participant) under different names.
However, given that different Topics can already be created that use the same type, and given that typedefs can be used to create new type names, a good argument could be made that there is limited use for the added functionality provided by the type-name parameter. A use case could perhaps be used to clarify the need.
As a compromise, the standard could be changed to state that a nil type name is permissible, in which case the default name will be used. Alternatively, the FooTypeSupport class could get an additional method get_type_name() that returns the default type name.
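The adopted compromise can be sketched in a few lines (a hypothetical Python model of the resolved behavior, not spec text; the participant is represented by a plain dictionary purely for illustration):

```python
# Minimal model of the resolved FooTypeSupport behavior: register_type
# falls back to get_type_name() when the caller passes a nil type name.
class FooTypeSupport:
    def get_type_name(self):
        # Default name for the data-type represented by this TypeSupport.
        return "Foo"

    def register_type(self, participant, type_name=None):
        # A nil type_name means "use the default name from get_type_name()".
        effective_name = type_name if type_name is not None else self.get_type_name()
        participant.setdefault("registered_types", set()).add(effective_name)
        return "RETCODE_OK"
```

With this shape, register_type(participant) registers the type under its default name, while register_type(participant, "MyAlias") preserves the use case of registering the same physical type under several names.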
Issue 7966: DDS: DCPS - define the term "plain data structures" (data-distribution-rtf)
Source: EADS (Mr. Oliver M. Kellogg, oliver.kellogg(at)cassidian.com)
Nature: Clarification
Severity:
Summary: OMG document 04-04-12 para. 2.1.1.2.2 Overall Conceptual Model
pg. 2-7 states:
At the DCPS level, data types represent information that is sent
atomically. For performance reasons, only plain data structures
are handled by this level.
Please define the term "plain data structures".
Resolution:
Remove the second sentence quoted above from the specification.
Revised Text:
Remove the sentence "For performance reasons, only plain data structures are handled by this level" from section 2.1.1.2.2, page 2-7.
Actions taken:
December 2, 2004: received issue
August 1, 2005: closed issue
Discussion:
Issue 7974: 2.1.3.20 WRITER_DATA_LIFECYCLE, itemized list, first bullet (data-distribution-rtf)
Source: EADS (Mr. Oliver M. Kellogg, oliver.kellogg(at)cassidian.com)
Nature: Uncategorized Issue
Severity: Minor
Summary: * The setting 'autodispose_unregistered_instances = FALSE' causes the
DataWriter [...]
Change FALSE to TRUE.
Resolution:
Revised Text:
Actions taken:
December 10, 2004: received issue
August 1, 2005: closed issue
Issue 7975: DDS 04-04-12 para. 2.1.1.1 Format and conventions (data-distribution-rtf)
Source: EADS (Mr. Oliver M. Kellogg, oliver.kellogg(at)cassidian.com)
Nature: Revision
Severity:
Summary: The table format used for documenting classes contains an
"attributes" and an "operations" section.
However, in order for applications to be portable across
implementations of the DDS spec, it would be desirable to add
a "constructors" section that explicitly states those constructors
that take one or more arguments (i.e. non-default constructors.)
Resolution:
Revised Text:
Actions taken:
December 14, 2004: received issue
August 1, 2005: closed issue
Issue 7976: DDS 04-04-12 Appendix B, C (data-distribution-rtf)
Source: EADS (Mr. Oliver M. Kellogg, oliver.kellogg(at)cassidian.com)
Nature: Revision
Severity: Significant
Summary: Filters and Queries are not compile-time checked and are too heavy.
The 04-04-12 DDS document proposes a subset of SQL for defining filters and queries.
The filter/query expressions are passed into the corresponding methods
as type "string".
First, this means that conforming implementations need to provide an SQL
expression parser/evaluator - a fairly complex piece of software.
Second, since the expressions are given as strings, checking them at
compile time is not straightforward.
We request the Revision Task Force to reconsider this design decision
in favor of less heavyweight approaches that allow for compile-time
checks.
Resolution:
Revised Text:
Actions taken:
Discussion: RTF Comments:
The DDS RTF agrees in principle that this would be a good idea. However, we were not able to come up with a suitable proposal that also addresses the need to perform the content filtering at the DataWriter side.
Therefore the DDS RTF recommends that this issue be postponed to a future RTF, where more implementation experience may be available to suggest the best approach.
Resolution:
No change to the specification.
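The trade-off the RTF weighed can be illustrated with a small sketch (hypothetical Python, not part of the specification): the string form is only validated when the implementation parses it at runtime, whereas a builder-style filter cannot be constructed malformed in the first place.

```python
# String-based filter (the approach in the spec): validity is only
# discovered when the implementation parses the expression at runtime.
def evaluate_string_filter(expression, sample):
    field, sep, literal = expression.partition(" > ")
    if not sep:
        raise ValueError("unsupported filter expression: " + expression)
    return sample[field.strip()] > float(literal)

# Builder-style filter: a malformed filter simply cannot be built, so the
# check happens when the program is compiled/constructed, not when it runs.
class GreaterThan:
    def __init__(self, field, literal):
        self.field, self.literal = field, literal

    def matches(self, sample):
        return sample[self.field] > self.literal
```

A real SQL subset needs far more than one operator; the point is only that the second form moves the failure from runtime expression parsing to program construction.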
Issue 8354: Typographical and grammatical errors (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The specification contains a number of misspellings and other minor typographical and grammatical errors.
Resolution:
The typographical and grammatical errors shall be corrected.
Revised Text:
Location Original Incorrect Text Corrected Text
2.1.2, fig. 2-4 "Topic Module" "Topic-Definition Module"
2.1.2.2.2 create_participant parameter "domainId" create_participant parameter "domain_id"
2.1.2.2.2 lookup_participant parameter "domainId" lookup_participant parameter "domain_id"
2.1.2.2.2.1 "domainId" "domain_id"
2.1.2.2.2.4 "domainId" (two occurrences) "domain_id" (two occurrences)
2.1.2.3.7, pg. 2-39 "…for a hypothetical application named "Foo"…" "…for a hypothetical application data-type named "Foo"…"
2.1.2.4.1.15 "…get_default_datawriter_qos will match the set of values specified on the last successful call to get_default_datawriter_qos…" "…get_default_datawriter_qos will match the set of values specified on the last successful call to set_default_datawriter_qos…"
2.1.2.5, fig. 2-10 SampleInfo attribute "instance_rank" SampleInfo attribute "sample_rank"
2.1.2.5.1, fig. 2-11 transition from NO_WRITERS to ALIVE "…=++" transition from NO_WRITERS to ALIVE "…++"
2.1.2.5.1, pg. 2-57 "time-stamp" "timestamp"
2.1.2.5.1, pg. 2-59, 2nd to last para. "…snapshot of view_state…" "…snapshot of the view_state…"
2.1.2.5.1, pg. 2-61, 4th para. "…multiple DataReader." "…multiple DataReaders."
2.1.2.5.1, pg. 2-61, list item (1) "…list of DataReader…" (two occurrences) "…list of DataReaders…" (two occurrences)
2.1.2.5.1, pg. 2-61 "…across DataWriter entities." (two occurrences) "…across DataReader entities." (two occurrences)
2.1.2.5.2.7 "…multiple DataReader…" "…multiple DataReaders…"
2.1.3, pg. 2-92 "…ability to: specify and receive coherent changes see the relative order of changes." "…ability to specify and receive coherent changes and to see the relative order of changes."
2.1.3, pg. 2-98 "time-stamp" "timestamp"
2.1.3, pg. 2-101, autopurge_ nowriter_ samples_ delay row "…information regarding instances that have the view_state NOT_ALIVE_NO_WRITERS." "…information regarding instances that have the instance_state NOT_ALIVE_NO_WRITERS."
2.1.3.6 "TIME_BASED_PERIOD" "TIME_BASED_FILTER"
2.1.3.17 last para. "compatible" (two occurrences) "consistent" (two occurrences)
2.1.3.18 last para. "compatible" (two occurrences) "consistent" (two occurrences)
2.1.3.20 itemized list, first bullet "The setting 'autodispose_unregistered_ instances = FALSE' causes the DataWriter…" "The setting 'autodispose_unregistered_ instances = TRUE' causes the DataWriter…"
2.1.3.21, para. 4 "… view_state = NOT_ALIVE_NO_WRITERS…" "… instance_state = NOT_ALIVE_NO_WRITERS…"
2.1.4.1 Requested-Incom-patible-Qos-Status:: total_count row "Total cumulative count the concerned DataReader discovered a DataWriter…" "Total cumulative number of times the concerned DataReader discovered a DataWriter…"
2.1.4.4, before fig. 2-19 Reference to figure 2-18 Reference to figure 2-19
2.1.5 para. 3 "get_datareader" "lookup_datareader"
2.2.3 const long DURATION_INFINITY_SEC = 0x7ffffff;const unsigned long DURATION_INFINITY_NSEC = 0x7ffffff; const long DURATION_INFINITY_SEC = 0x7fffffff;const unsigned long DURATION_INFINITY_NSEC = 0x7fffffff;
2.2.3 interface DomainParticipantFactory { DomainParticipant create_participant( in DomainId_t domainId, in DomainParticipantQos qos, in DomainParticipantListener a_listener); … DomainParticipant lookup_participant( in DomainId_t domainId); … interface DomainParticipantFactory { DomainParticipant create_participant( in DomainId_t domain_id, in DomainParticipantQos qos, in DomainParticipantListener a_listener); … DomainParticipant lookup_participant( in DomainId_t domain_id); …
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8355: Spelling inconsistencies between the PIM and IDL PSM (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: In a number of instances, there are minor inconsistencies in spelling and naming between the specification's platform-independent model (PIM) and the included IDL platform-specific model (PSM).
Resolution:
In each case, the most descriptive term of the two options was chosen and the other was conformed to it.
Revised Text:
Location PIM Text PSM Text Replacement Text(used for both)
2.1.2.2.1 create_topic parameter "name" create_topic parameter "topic_name" create_topic parameter "name"
2.1.2.4.1 copy_from_topic_qos parameter "topic_qos" copy_from_topic_qos parameter "a_topic_qos" copy_from_topic_qos parameter "a_topic_qos"
2.1.2.4.1 copy_from_topic_qos parameter "datawriter_qos" copy_from_topic_qos parameter "a_datawriter _qos" copy_from_topic_qos parameter "a_datawriter _qos"
2.1.2.4.1.16 "datawriter_qos_list" (N/A) "a_datawriter_qos"
2.1.2.5.3 read_/ take_next_sample parameter "data_value" (in both DataReader and FooDataReader) read_/ take_next_sample parameter "received_data" (in both DataReader and FooDataReader) read_/ take_next_sample parameter "received_data" (in both DataReader and FooDataReader)
2.1.2.5.3 read, take, return_loan parameters "data_values" (in both DataReader and FooDataReader) read, take, return_loan parameters "received_data" (in both DataReader and FooDataReader) read, take, return_loan parameters "received_data" (in both DataReader and FooDataReader)
2.1.2.5.3 read, take, return_loan parameters "sample_infos" (in both DataReader and FooDataReader) read, take, return_loan parameters "info_seq" (in both DataReader and FooDataReader) read, take, return_loan parameters "sample_infos" (in both DataReader and FooDataReader)
2.1.2.5.3 *_w_condition and delete_readconditions parameters "a_condition" (in both DataReader and FooDataReader) *_w_condition and delete_readconditions parameters "condition" (in both DataReader and FooDataReader) *_w_condition and delete_readconditions parameters "a_condition" (in both DataReader and FooDataReader)
2.1.2.5.3 read_/ take_next_instance (_w_condition) parameters "previous_handle" (in both DataReader and FooDataReader) read_/ take_next_instance (_w_condition) parameters "a_handle" (in both DataReader and FooDataReader) read_/ take_next_instance (_w_condition) parameters "previous_handle" (in both DataReader and FooDataReader)
2.1.2.5.6 on_data_on_readers parameter "the_subscriber" on_data_on_readers parameter "subs" on_data_on_readers parameter "the_subscriber"
2.1.2.5.7 DataReaderListener method parameters "the_reader" DataReaderListener method parameters "reader" DataReaderListener method parameters "the_reader"
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8358: Operation DataWriter::register (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: The method DataWriter::register conflicts with the C++ 'register' keyword.
Resolution:
Replace register and unregister by register_instance and unregister_instance
Replace register_w_timestamp and unregister_w_timestamp by register_instance_w_timestamp and unregister_instance_w_timestamp
Revised Text:
Since the revisions are straightforward, here only the figures, tables and paragraphs are indicated which are affected by the above indicated change.
· update figures 2-8, 2-9, accordingly
· update tables in paragraph 2.1.2.4.2 accordingly
· update text in paragraphs 2.1.2.4.2.5/6/7/8, 2.1.3.20 and 2.1.3.22.3 accordingly
· update IDL in paragraph 2.2.3 accordingly
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8359: (T#4) Typo in section 2.1.2.4.2.10 (write) and section 2.1.2.4.12 (dispose) (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary:
In par. 2.1.2.4.2.10 (write) and par. 2.1.2.4.2.12 (dispose), the specification does not specify an error code for the case where the specified handle is valid but does not correspond to the given instance (the key value must match), nor for the case where the specified handle is invalid.
Resolution:
Specify that in general, the result is unspecified, but that depending on vendor-specific implementations, the resulting error-code is 'PRECONDITION_NOT_MET' if a wrong instance (i.e. with a wrong key-value) is provided and that the resulting error-code is 'BAD_PARAMETER' if a bad handle is provided
Revised Text:
Add the following text to the end of 2.1.2.4.2.10 (write)
In case the provided handle is valid but does not correspond to the given instance, the resulting error-code of the operation will be 'PRECONDITION_NOT_MET'. In case the handle is invalid, the behavior is in general unspecified, but if detectable by a DDS-implementation, the returned error-code will be 'BAD_PARAMETER'.
Replace in 2.1.2.4.2.12 (dispose), the text "Possible error codes returned in addition to the standard ones: PRECONDITION_NOT_MET" by the following text:
In case the provided handle is valid but does not correspond to the given instance, the resulting error-code of the operation will be 'PRECONDITION_NOT_MET'. In case the handle is invalid, the behavior is in general unspecified, but if detectable by a DDS-implementation, the returned error-code will be 'BAD_PARAMETER'.
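The error-code rule above can be sketched as a toy model (plain Python, not a real DDS API; the names and string return values are illustrative stand-ins for the spec's ReturnCode_t):

```python
# Toy model of the write() precondition rule: a valid handle whose key does
# not match yields PRECONDITION_NOT_MET; an unknown handle, if detectable,
# yields BAD_PARAMETER.
OK, PRECONDITION_NOT_MET, BAD_PARAMETER = "OK", "PRECONDITION_NOT_MET", "BAD_PARAMETER"


class ToyWriter:
    def __init__(self):
        self.handles = {}  # handle -> registered key

    def write(self, key, handle):
        if handle not in self.handles:
            return BAD_PARAMETER          # invalid handle (if detectable)
        if self.handles[handle] != key:
            return PRECONDITION_NOT_MET   # valid handle, wrong instance
        return OK


w = ToyWriter()
w.handles[7] = "keyA"
```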
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8360: Typo in section 2.1.2.5.2.5 (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: In section 2.1.2.5.2.5 (create_datareader), the special value DATAWRITER_QOS_USE_TOPIC_QOS is mistakenly used instead of DATAREADER_QOS_USE_TOPIC_QOS.
Resolution:
Replace the wrong text with the correct version.
Revised Text:
In 2.1.2.5.2.5 (create_datareader), replace the text "The special value DATAWRITER_QOS_USE_TOPIC_QOS" with "The special value DATAREADER_QOS_USE_TOPIC_QOS".
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8361: Default value for READER_DATA_LIFECYCLE (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: In Section 2.1.3 (Supported QoS), the default value of the duration attribute of the READER_DATA_LIFECYCLE QoS is specified as "unlimited".
Resolution:
Replace "unlimited" by "infinite", which is generally used in relation to durations.
Revised Text:
In the QoS table of paragraph 2.1.3, replace the text "By default, unlimited" belonging to the READER_DATA_LIFECYCLE QoS with the text "By default, infinite".
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8362: Incorrect reference to USER_DATA on TopicQos (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: The table in section 2.1.3 (Supported QoS) wrongly specifies that USER_DATA concerns Topic.
Resolution:
'Topic' should be removed from the 'concerns' column.
Revised Text:
In the table in section 2.1.3 (Supported QoS), remove the word 'Topic' from the "USER_DATA" row in the "Concerns" column.
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8363: No mention of DESTINATION_ORDER on DataWriterQos (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: In the table in section 2.1.3 (Supported QoS), the DESTINATION_ORDER QoS does not mention DataWriter as a concerned entity.
Resolution:
Add DataWriter to the 'concerns' column.
Revised Text:
In the table in section 2.1.3 (Supported QoS), add the word 'DataWriter' to the "DESTINATION_ORDER" row in the "Concerns" column.
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8364: Formal parameter name improvement in IDL (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: In the IDL specification of section 2.2.3, the first parameter of the 'register_type' method is called 'domain' instead of 'participant' (as it is called elsewhere, such as in the table of section 2.1.2.3.6).
Resolution:
Change the parameter name to 'participant' in the TypeSupport::register_type IDL.
Revised Text:
In Chapter 2.2.3 (IDL specification), change the register_type parameter called 'domain' into 'participant', resulting in:
interface TypeSupport {
    ReturnCode_t register_type(
        in DomainParticipant participant,
        in string type_name);
};
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8365: Spell fully the names for the DataReader operations (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: In some class diagrams, generic operations are indicated using '_xxx_' in their names instead of fully specifying all the real operations; some operations are also missing.
Resolution:
- add the missing operations for the dataReader
- explicitly mention all operations for the dataReader
Revised Text:
In the class diagram Fig. 2-8 on page 2-39:
- add the missing operations "read_w_condition", "take_w_condition" and "return_loan"
- rename "read_xxx_w_condition" into "read_next_w_condition"
- rename "take_xxx_w_condition" into "take_next_w_condition"
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8366: Missing operations on DomainParticipantFactory (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: The class DomainParticipantFactory in figure 2-6, section 2.1.2.2 (Domain Module), is missing the operations set_default_participant_qos and get_default_participant_qos.
Resolution:
Add the missing operations.
Revised Text:
In the class diagram Fig. 2-6 of section 2.1.2.2 (Domain Module), add the operations 'set_default_participant_qos' and 'get_default_participant_qos'.
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8367: (T#18,24) Missing operations and attributes (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: In some of the figures some operations are missing.
Resolution:
The missing operations shall be added.
Revised Text:
Location            Missing operation
2.1.2.5, fig. 2-10  delete_contained_entities()
2.1.2.2, fig. 2-6   set_default_publisher_qos(), get_default_publisher_qos(), set_default_subscriber_qos(), get_default_subscriber_qos(), set_default_topic_qos(), get_default_topic_qos()
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8368: (T#28) Typographical and grammatical errors (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: The specification contains a number of misspellings and other minor typographical and grammatical errors.
Resolution:
The typographical and grammatical errors shall be corrected.
Revised Text:
Location Original Incorrect Text Corrected Text
2.1.2.1, fig. 2-5 "Status" (the class name) "Status"
2.1.2.2, fig. 2-6 "domainId" "domain_id"
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8369: (T#29) Missing operations to Topic class (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: In the DCPS PSM the Topic class does not specify the methods set_qos, get_qos, set_listener and get_listener.
Resolution:
The methods set_qos, get_qos, set_listener and get_listener shall be added to the IDL description of the Topic class.
Revised Text:
In the IDL in 2.2.3, the resulting interface Topic is:
interface Topic : Entity, TopicDescription {
    ReturnCode_t set_qos(
        in TopicQos qos);
    void get_qos(
        inout TopicQos qos);
    ReturnCode_t set_listener(
        in TopicListener a_listener,
        in StatusMask mask);
    TopicListener get_listener();
    // Access the status
    ReturnCode_t get_inconsistent_topic_status(
        inout InconsistentTopicStatus a_status);
};
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8370: Formal parameter name change in operations of ContentFilteredTopic (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: Some of the formal parameter names of ContentFilteredTopic methods are vague.
Resolution:
The names shall be changed into more distinct names.
Revised Text:
Location Original incorrect name Corrected name
section 2.1.2.2.1, create_contentfilteredtopic expression_parameters filter_parameters
section 2.1.2.2.1.7 topic_name related_topic
section 2.1.2.2.1.7 expression_parameters filter_parameters
section 2.1.2.3.3, get_expression_parameters expression_parameters filter_parameters
section 2.1.2.3.3, set_expression_parameters expression_parameters filter_parameters
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8371: (T#30) Ambiguous description of TOPIC_DATA (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: The last part of the description states: "They both concern Topic, DataWriter and DataReader…" Since further in the text it is described that TOPIC_DATA is only applicable to Topics, it would be better to remove this part of the description.
Resolution:
The last section of paragraph 2.1.3.2 shall be removed.
Revised Text:
The text: "This QoS is very similar in intent to USER_DATA……primarily on the DataReader/DataWriter." shall be removed.
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8372: Confusing description of behavior of Publisher::set_default_datawriter_qos (data-distribution-rtf)
Nature: Uncategorized Issue
Severity:
Summary: The description of the Publisher method set_default_datawriter_qos describes its use in the case where the qos was not explicitly specified in the create_datawriter operation. However, specifying the qos policy at create_datawriter is not optional; the description should instead refer to the case where the default is used.
Resolution:
The description shall be modified to clarify the use-case of using the defaults.
Revised Text:
In section 2.1.2.4.1.14:
Replace "in the case where the QoS policies are not explicitly specified" with "in the case where the QoS policies are defaulted".
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8373: (T#33) Clarification in use of set_listener operation (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: The description of the Entity method set_listener does not describe the result of this method if the value NIL is passed for the listener parameter. Chapter 2.1.2.1.1.3 (set_listener): Explicitly state that passing the value NIL for the listener is valid and clears the listener.
Resolution:
The description of this use-case shall be added to the description. Chapter 2.1.2.1.1.3 (set_listener): Explicitly state that passing the value NIL for the listener is valid and clears the listener.
Revised Text:
In section 2.1.2.1.1.3, after: "Only one listener can be attached to each Entity. If a listener was already set, the operation set_listener will replace it with the new one."
Add the sentence:
"Consequently, if the value 'nil' is passed for the listener parameter to the set_listener operation, any existing listener will be removed."
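The clarified semantics can be sketched as a toy Entity class (plain Python, not a real DDS API; names are illustrative, with None standing in for the 'nil' listener):

```python
class Entity:
    """Toy sketch of the clarified set_listener semantics."""

    def __init__(self):
        self._listener = None

    def set_listener(self, listener):
        # Only one listener per Entity: a new listener replaces the old one,
        # so passing None (the 'nil' listener) removes any existing listener.
        self._listener = listener

    def get_listener(self):
        return self._listener
```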
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8374: Missing description of DomainParticipant::get_domain_id (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: In the class description of the DomainParticipant, the description of the attribute domain_id is missing.
Resolution:
The attribute domain_id shall be added to the table in 2.1.2.2.1.
The description of attribute domain_id shall be added as section 2.1.2.2.1.26.
Revised Text:
Add section 2.1.2.2.1.26:
2.1.2.2.1.26 domain_id
The domain_id identifies the Domain of the DomainParticipant. At creation, the DomainParticipant is associated with this domain.
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8375: (T#41) Default value for RELIABILITY max_blocking_time (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: The default value of the RELIABILITY qos policy attribute max_blocking_time is not specified.
Resolution:
The default value shall be specified as an arbitrary value greater than zero, to avoid writers encountering timeouts on acceptable temporary buffer saturations; the value should also not be too big, since real-time behavior would expect that anything causing writers to block would not persist for long.
Revised Text:
In section 2.1.3:
Add to the description of the RELIABILITY QosPolicy value RELIABLE in the QosPolicy table the text:
"The default max_blocking_time=100ms."
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8376: (T#42) Behavior when condition is attached to WaitSet multiple times (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: It is not clearly defined what should happen when the same condition is attached to the same WaitSet multiple times.
Resolution:
Explicitly state that this has no effect: subsequent attachments of the same Condition will be ignored.
Revised Text:
In section 2.1.2.1.6.1, after the paragraph "It is possible to attach … the WaitSet.", add the paragraph:
Adding a Condition that is already attached to that WaitSet has no effect.
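The idempotent-attach rule can be modeled in a few lines (a toy WaitSet in plain Python, not a real DDS API; names are illustrative):

```python
class WaitSet:
    """Toy WaitSet: attaching an already-attached Condition has no effect."""

    def __init__(self):
        self._conditions = []

    def attach_condition(self, cond):
        if cond not in self._conditions:  # duplicate attachments are ignored
            self._conditions.append(cond)

    def get_conditions(self):
        return list(self._conditions)
```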
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8377: Explicit mention of static DomainParticipantFactory::get_instance operation (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: The get_instance method is mentioned in the PIM, but not in the IDL PSM.
Resolution:
Explicitly state that this is a static method and that it is therefore not specified in IDL.
Revised Text:
In section 2.1.2.2.2.3, add to the end of the section:
The get_instance operation is a static operation implemented using the syntax of the native language and can therefore not be expressed in the IDL PSM.
In the table of section 2.1.2.2.2, add the "static" keyword before the get_instance method.
In Section 2.2.2 (right after the introduction of default constructors for WaitSet and GuardCondition), add:
The language implementations of the DomainParticipantFactory interface should have the static operation get_instance described in Section 2.1.2.2.2. This operation does not appear in the IDL interface DomainParticipantFactory, as static operations cannot be expressed in IDL.
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8378: (T#45) Clarification of syntax of char constants within query expressions (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: It is not clear how the value of a char constant should be expressed in a query expression.
Resolution:
Clarify that a char constant in a query expression must be placed between single quotes.
Revised Text:
Add a bullet to Appendix B in the section "Token expression" where the char constant is introduced and where it is explained how it should be defined (between single quotes, just like the string).
In Appendix B modify:
Parameter ::= INTEGERVALUE
| FLOATVALUE
| STRING
| ENUMERATEDVALUE
| PARAMETER
to:
Parameter ::= INTEGERVALUE
| CHARVALUE
| FLOATVALUE
| STRING
| ENUMERATEDVALUE
| PARAMETER
In Appendix B, after bullet INTEGERVALUE, add:
o CHARVALUE - A single character enclosed between single quotes.
In Appendix C modify:
Parameter ::= INTEGERVALUE
| FLOATVALUE
| STRING
| ENUMERATEDVALUE
| PARAMETER
to:
Parameter ::= INTEGERVALUE
| CHARVALUE
| FLOATVALUE
| STRING
| ENUMERATEDVALUE
| PARAMETER
In Appendix C, after bullet INTEGERVALUE, add:
o CHARVALUE - A single character enclosed between single quotes.
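The new CHARVALUE token can be sketched as a one-line recognizer (a Python regex illustration of the grammar rule; the function name is ours, not part of the spec):

```python
import re

# CHARVALUE per the added bullet: exactly one character enclosed between
# single quotes, alongside the existing STRING token of Appendix B.
CHARVALUE = re.compile(r"'([^'])'")


def is_charvalue(tok):
    """Return True if tok is a single character between single quotes."""
    return CHARVALUE.fullmatch(tok) is not None
```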
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8379: (T#52) Allow to explicitly refer to the default QoS (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: It would be nice to be able to use the "<item>_QOS_DEFAULT" constant in both the set_default_<item>_qos method of its factory and in its set_qos method as well.
Resolution:
Explicitly allow that passing the default qos constant ("<item>_QOS_DEFAULT") to the "set_default_<item>_qos" method in its factory will reset the default qos value for the item to its initial factory default state.
Also state that using the "<item>_QOS_DEFAULT" constant in the set_qos method of an item will change the qos of that item according to the current default of its container entity at the time the call is made.
Revised Text:
Section 2.1.2.2.1.20, at the end add the paragraph:
The special value PUBLISHER_QOS_DEFAULT may be passed to this operation to indicate that the default QoS should be reset back to the initial values the factory would use, that is, the values that would be used if the set_default_publisher_qos operation had never been called.
Section 2.1.2.2.1.22, at the end add the paragraph:
The special value SUBSCRIBER_QOS_DEFAULT may be passed to this operation to indicate that the default QoS should be reset back to the initial values the factory would use, that is, the values that would be used if the set_default_subscriber_qos operation had never been called.
Section 2.1.2.2.1.24, at the end add the paragraph:
The special value TOPIC_QOS_DEFAULT may be passed to this operation to indicate that the default QoS should be reset back to the initial values the factory would use, that is, the values that would be used if the set_default_topic_qos operation had never been called.
Section 2.1.2.4.1.14, at the end add the paragraph:
The special value DATAWRITER_QOS_DEFAULT may be passed to this operation to indicate that the default QoS should be reset back to the initial values the factory would use, that is, the values that would be used if the set_default_datawriter_qos operation had never been called.
Section 2.1.2.5.2.15, at the end add the paragraph:
The special value DATAREADER_QOS_DEFAULT may be passed to this operation to indicate that the default QoS should be reset back to the initial values the factory would use, that is, the values that would be used if the set_default_datareader_qos operation had never been called.
Section 2.1.2.1.1.1, before the last paragraph "Possible error codes returned…", add the paragraph:
Each derived Entity class (DomainParticipant, Topic, Publisher, DataWriter, Subscriber, DataReader) has a corresponding special value of the QoS (PARTICIPANT_QOS_DEFAULT, PUBLISHER_QOS_DEFAULT, SUBSCRIBER_QOS_DEFAULT, TOPIC_QOS_DEFAULT, DATAWRITER_QOS_DEFAULT, DATAREADER_QOS_DEFAULT). This special value may be used as a parameter to the set_qos operation to indicate that the QoS of the Entity should be changed to match the current default QoS set in the Entity's factory. The operation set_qos cannot modify the immutable QoS, so a successful return of the operation indicates that the mutable QoS for the Entity has been modified to match the current default for the Entity's factory.
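The two uses of the <item>_QOS_DEFAULT constant can be modeled in a toy factory/entity pair (plain Python, not a real DDS API; dicts stand in for QoS structures and the class names are illustrative):

```python
# Toy model: passing PUBLISHER_QOS_DEFAULT to set_default_publisher_qos resets
# the factory default to its initial value; passing it to set_qos copies the
# factory's *current* default at the time of the call.
PUBLISHER_QOS_DEFAULT = object()   # sentinel constant
INITIAL_QOS = {"partition": ""}    # illustrative initial factory default


class ToyFactory:
    def __init__(self):
        self.default_publisher_qos = dict(INITIAL_QOS)

    def set_default_publisher_qos(self, qos):
        self.default_publisher_qos = (
            dict(INITIAL_QOS) if qos is PUBLISHER_QOS_DEFAULT else dict(qos)
        )


class ToyPublisher:
    def __init__(self, factory):
        self.factory = factory
        self.qos = dict(factory.default_publisher_qos)

    def set_qos(self, qos):
        self.qos = (
            dict(self.factory.default_publisher_qos)
            if qos is PUBLISHER_QOS_DEFAULT else dict(qos)
        )
```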
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8380: (T#54) Performance improvement to WaitSet (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: The get_conditions and wait methods of the WaitSet pass the Conditions in which the user is interested back to the application as out-parameters. This causes unnecessary memory allocations each time a WaitSet is used for that purpose.
Resolution:
Make the WaitSet result sequence of the inout type for performance reasons, especially because the application is aware of the desired (worst-case) length. The user is then able to recycle these sequences every time.
Revised Text:
In the table in section 2.1.2.1.6 change the parameter types of the Condition Sequence from out to inout. Explain in sections 2.1.2.1.6.3 and 2.1.2.1.6.4 that the user can either pre-allocate the sequence and force the middleware to overwrite its contents, or to not to pre-allocate and let the middleware allocate the memory for him.
Also change the IDL definition for both methods in section 2.2.3.
Resolution:
Revised Text: Resolution:
Make the WaitSet result sequence of the inout type for performance reasons, especially because the application is aware of the desired (worst-case) length. The user is then able to recycle these sequences every time.
Revised Text:
In the WaitSet table in section 2.1.2.1.6
change the parameter type of the wait operation: from "out: active_conditions" to "inout: active_conditions "
change the parameter types of get_conditions operation from "out: attached_conditions" to "inout: attached_conditions"
In Section 2.2.3 DCPS PSM : IDL
Change:
interface WaitSet {
…
ReturnCode_t wait(out ConditionSeq active_conditions,
in Duration_t timeout);
ReturnCode_t get_conditions(out ConditionSeq attached_conditions);
…
};
To:
interface WaitSet {
…
ReturnCode_t wait(inout ConditionSeq active_conditions,
in Duration_t timeout);
ReturnCode_t get_conditions(inout ConditionSeq attached_conditions);
…
};
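As an illustration only (a hypothetical Python sketch, not the specified API — the WaitSet class and dictionary conditions below are invented), the inout convention lets the caller allocate one result sequence and have wait() refill it in place on every call, instead of a fresh allocation per call as with the out mapping:

```python
# Hypothetical sketch of the recycling pattern the inout mapping enables.
class WaitSet:
    def __init__(self):
        self.conditions = []

    def attach_condition(self, cond):
        self.conditions.append(cond)
        return "RETCODE_OK"

    def wait(self, active_conditions, timeout):
        # inout-style: overwrite the caller's sequence in place rather
        # than allocating and returning a new one.
        active_conditions.clear()
        active_conditions.extend(c for c in self.conditions if c["trigger"])
        return "RETCODE_OK"

ws = WaitSet()
cond = {"trigger": True}
ws.attach_condition(cond)
recycled = []              # allocated once, recycled on every wait() call
ws.wait(recycled, timeout=0)
```

Because the application knows the worst-case length (the number of attached conditions), the same sequence can be reused indefinitely with no further allocation.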
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8381: (T#55) Modification to how enumeration values are indicated in expressions (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: Appendix B describes an enumeration value as name::value; this was decided during a telephone conference (in a hurry) to resolve the ambiguity between attribute names and enumeration labels. The description states that the name specifies the field, but it should specify the enumeration type. In addition, the enumeration type would have to be a fully scoped type name. This is a lot to specify in a query expression, especially because within a query expression the enumeration value is always related to a field that already identifies the type. Moreover, in SQL enumeration labels are represented as string literals, i.e., the values are placed between single quotes.
Proposed Resolution:
Treat enumeration values as string literals and place them between single quotes instead of using a scope operator.
Proposed Revised Text:
In Appendix B, in the "Token expression" section where ENUMERATEDVALUE is introduced, replace the sentence stating that "A double colon '::' is used to separate the name of the enumeration from that of the field." with a sentence stating that enumeration labels are treated as string literals and must therefore be placed between single quotes. In the next sentence, remove the part stating that the name of the enumeration should correspond to the name specified in the IDL definition. (But keep the part of the sentence stating that the name of the value should correspond to the names of the labels.)
Keep Appendix C in line with this as well.
Resolution:
Treat enumeration values as string literals and place them between single quotes instead of using a scope operator.
Revised Text:
In Appendix B replace:
o ENUMERATEDVALUE - An enumerated value is a reference to a value declared within an enumeration. A double colon '::' is used to separate the name of the enumeration from that of the field. Both the name of the enumeration and the name of the value correspond to the names specified in the IDL definition of the enumeration.
With:
o ENUMERATEDVALUE - An enumerated value is a reference to a value declared within an enumeration. Enumerated values consist of the name of the enumeration label enclosed in single quotes. The name used for the enumeration label must correspond to the label names specified in the IDL definition of the enumeration.
In Appendix C replace:
ENUMERATEDVALUE - An enumerated value is a reference to a value declared within an enumeration. A double colon '::' is used to separate the name of the enumeration from that of the field. Both the name of the enumeration and the name of the value correspond to the names specified in the IDL definition of the enumeration.
With:
o ENUMERATEDVALUE - An enumerated value is a reference to a value declared within an enumeration. Enumerated values consist of the name of the enumeration label enclosed in single quotes. The name used for the enumeration label must correspond to the label names specified in the IDL definition of the enumeration.
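As an illustration only (a hypothetical Python sketch; the mini-evaluator and sample representation are invented, not part of the specification), the new convention writes an enumeration label as a quoted string literal — the field on the left already identifies the enumeration type, so no scope operator is needed:

```python
# Hypothetical evaluator for a single "field = 'LABEL'" comparison,
# following the quoted-literal convention adopted in this resolution.
import re

def eval_condition(expression, sample):
    # Matches e.g. "color = 'RED'": a field name, '=', a quoted label.
    m = re.fullmatch(r"(\w+)\s*=\s*'(\w+)'", expression)
    if not m:
        raise ValueError("unsupported expression")
    field, label = m.groups()
    # The field identifies the enumeration type, so the bare label
    # (no EnumName:: prefix) is unambiguous.
    return sample[field] == label

sample = {"color": "RED", "shape": "SQUARE"}
```

With the old convention the same expression would have read "color = ColorKind::RED", requiring the fully scoped type name inside the expression.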
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Discussion:
Issue 8382: (T#56) Return values of Waitset::detach_condition (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: Section 2.1.2.1.6.2 (WaitSet::detach_condition) specifies that BAD_PARAMETER is returned if the given condition is not attached to the WaitSet. It would be more appropriate to return PRECONDITION_NOT_MET.
Proposed Resolution:
Change the return code.
Proposed Revised Text:
In section 2.1.2.1.6.2 (WaitSet::detach_condition), change all mentions of BAD_PARAMETER to PRECONDITION_NOT_MET.
Resolution:
Change the return code.
Revised Text:
In section 2.1.2.1.6.2 (WaitSet::detach_condition), change:
If the Condition was not attached to the WaitSet the operation will return BAD_PARAMETER.
Possible error codes returned in addition to the standard ones: BAD_PARAMETER.
To:
If the Condition was not attached to the WaitSet the operation will return PRECONDITION_NOT_MET.
Possible error codes returned in addition to the standard ones: PRECONDITION_NOT_MET.
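As an illustration only (a hypothetical Python sketch; the WaitSet class below is invented and the return codes are plain strings), the corrected behavior treats detaching a never-attached condition as a precondition failure rather than a bad parameter:

```python
# Hypothetical sketch of the corrected detach_condition return code.
class WaitSet:
    def __init__(self):
        self.attached = []

    def attach_condition(self, cond):
        self.attached.append(cond)
        return "RETCODE_OK"

    def detach_condition(self, cond):
        # The condition is a valid object; it simply does not satisfy the
        # precondition "is attached to this WaitSet" — hence the new code.
        if cond not in self.attached:
            return "RETCODE_PRECONDITION_NOT_MET"   # was BAD_PARAMETER
        self.attached.remove(cond)
        return "RETCODE_OK"

ws = WaitSet()
ws.attach_condition("cond-a")
```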
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8383: (T#57) Enable status when creating DomainParticipant (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: DomainParticipants, being Entities, can be either enabled or disabled. Because the DomainParticipantFactory is not an Entity and therefore does not have a QoS, it does not support a factory QosPolicy that specifies whether a DomainParticipant is created enabled or disabled.
Proposed Resolution:
Add a DomainParticipantFactoryQos policy to the DomainParticipantFactory, and add the operations set_qos() and get_qos() to the DomainParticipantFactory class. (However, do not make the DomainParticipantFactory an Entity itself!)
Proposed Revised Text:
In section 2.1.2.2.2, add the get_qos and set_qos methods to the table. Create two new sections (2.1.2.2.2.7 and 2.1.2.2.2.8) that explain the semantics of get_qos and set_qos. Also explain that although the DomainParticipantFactory has a QoS, it is not an Entity, since it does not have any StatusConditions or Listeners and cannot be enabled.
In the table in section 2.1.3, add the DomainParticipantFactory to the "Concerns" column for the ENTITY_FACTORY policy.
Add to the IDL in section 2.2.3 the following things:
struct DomainParticipantFactoryQos {
EntityFactoryQosPolicy entity_factory;
};
interface DomainParticipantFactory {
…..
ReturnCode_t set_qos(in DomainParticipantFactoryQos qos);
ReturnCode_t get_qos(inout DomainParticipantFactoryQos qos);
};
Resolution:
Add a DomainParticipantFactoryQos policy to the DomainParticipantFactory, and add the operations set_qos() and get_qos() to the DomainParticipantFactory class. (However, do not make the DomainParticipantFactory an Entity itself!)
Revised Text:
In section 2.1.2.2.2, add the get_qos and set_qos methods to the end of the DomainParticipantFactory table:
get_qos QosPolicy []
set_qos ReturnCode_t
qos_list QosPolicy []
Add new sections (2.1.2.2.2.7 and 2.1.2.2.2.8)
2.1.2.2.2.7 set_qos
This operation sets the value of the DomainParticipantFactory QoS policies. These policies control the behavior of the object as a factory for entities.
Note that despite having QoS, the DomainParticipantFactory is not an Entity.
This operation will check that the resulting policies are self-consistent; if they are not, the operation will have no effect and will return INCONSISTENT_POLICY.
2.1.2.2.2.8 get_qos
This operation returns the value of the DomainParticipantFactory QoS policies.
Figure 2-6
Add operations set_qos and get_qos to DomainParticipantFactory
Section 2.1.3, QoS Table for the ENTITY_FACTORY policy in the "Concerns" column
Add "DomainParticipantFactory" to that cell.
Add to the IDL in section 2.2.3 DCPS PSM : IDL
o add struct DomainParticipantFactoryQos:
struct DomainParticipantFactoryQos {
EntityFactoryQosPolicy entity_factory;
};
o add operations to interface DomainParticipantFactory
old interface:
interface DomainParticipantFactory {
DomainParticipant create_participant(in DomainId_t domain_id,
in DomainParticipantQos qos,
in DomainParticipantListener a_listener);
ReturnCode_t delete_participant(in DomainParticipant a_participant);
DomainParticipant lookup_participant(in DomainId_t domain_id);
ReturnCode_t set_default_participant_qos(in DomainParticipantQos qos);
void get_default_participant_qos(inout DomainParticipantQos qos);
};
new interface:
interface DomainParticipantFactory {
DomainParticipant create_participant(in DomainId_t domain_id,
in DomainParticipantQos qos,
in DomainParticipantListener a_listener);
ReturnCode_t delete_participant(in DomainParticipant a_participant);
DomainParticipant lookup_participant(in DomainId_t domain_id);
ReturnCode_t set_default_participant_qos(in DomainParticipantQos qos);
void get_default_participant_qos(inout DomainParticipantQos qos);
ReturnCode_t set_qos(in DomainParticipantFactoryQos qos);
ReturnCode_t get_qos(inout DomainParticipantFactoryQos qos);
};
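As an illustration only (a hypothetical Python sketch; the classes, the autoenable_created_entities field name, and the behavior shown are modeled on the ENTITY_FACTORY policy but are not the specified API), the new factory-level QoS lets an application decide whether participants are created enabled or disabled:

```python
# Hypothetical sketch of DomainParticipantFactoryQos controlling whether
# created participants start enabled (the ENTITY_FACTORY concern).
class DomainParticipantFactoryQos:
    def __init__(self, autoenable_created_entities=True):
        self.autoenable_created_entities = autoenable_created_entities

class DomainParticipant:
    def __init__(self, enabled):
        self.enabled = enabled

    def enable(self):
        self.enabled = True
        return "RETCODE_OK"

class DomainParticipantFactory:
    # Has QoS, but is deliberately NOT an Entity: no StatusCondition,
    # no Listener, and it cannot itself be enabled.
    def __init__(self):
        self.qos = DomainParticipantFactoryQos()

    def set_qos(self, qos):
        self.qos = qos
        return "RETCODE_OK"

    def create_participant(self, domain_id):
        return DomainParticipant(self.qos.autoenable_created_entities)

factory = DomainParticipantFactory()
factory.set_qos(DomainParticipantFactoryQos(autoenable_created_entities=False))
p = factory.create_participant(0)   # created disabled; enable() later
```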
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Issue 8384: Add autopurge_disposed_samples_delay to READER_DATA_LIFECYCLE QoS (data-distribution-rtf)
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: The READER_DATA_LIFECYCLE QoS specifies an autopurge_nowriter_samples_delay; for the same reasons, there should also be an autopurge_disposed_samples_delay.
Proposed Resolution:
Add the missing field.
Proposed Revised Text:
In section 2.1.3.21 add at the end:
The autopurge_disposed_samples_delay defines the maximum duration for which the DataReader will maintain information regarding an instance once its view_state becomes DISPOSED. After this time elapses, the DataReader will purge all internal information regarding the instance; any untaken samples will also be lost.
In figure 2-12, class ReaderDataLifecycleQosPolicy, add "autopurge_disposed_samples_delay: Duration_t"
In section 2.2.3 (IDL) add the field "Duration_t autopurge_disposed_samples_delay" to struct ReaderDataLifecycleQosPolicy
Resolution:
Add the missing field.
Revised Text:
In the section 2.1.3 QoS table, in the rows describing the READER_DATA_LIFECYCLE QoS:
o Modify the entry of the Value column from:
A duration: "autopurge_nowriter_samples_delay"
To:
Two durations: "autopurge_nowriter_samples_delay" and "autopurge_disposed_samples_delay"
o Add a row below the one describing READER_DATA_LIFECYCLE:
autopurge_disposed_samples_delay - Indicates the duration the DataReader must retain information regarding instances that have the instance_state NOT_ALIVE_DISPOSED. By default, infinite.
Section 2.1.3.21 Add paragraph to the end:
The autopurge_disposed_samples_delay defines the maximum duration for which the DataReader will maintain information regarding an instance once its instance_state becomes NOT_ALIVE_DISPOSED. After this time elapses, the DataReader will purge all internal information regarding the instance; any untaken samples will also be lost.
Figure 2-12, class ReaderDataLifecycleQosPolicy, add
"autopurge_disposed_samples_delay: Duration_t"
Section 2.2.3 DCPS PSM : IDL struct ReaderDataLifecycleQosPolicy
add the field "Duration_t autopurge_disposed_samples_delay" to the structure resulting in:
struct ReaderDataLifecycleQosPolicy {
Duration_t autopurge_nowriter_samples_delay;
Duration_t autopurge_disposed_samples_delay;
};
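As an illustration only (a hypothetical Python sketch; the classes and the plain-number timestamps are invented), the new field bounds how long a DataReader keeps bookkeeping for a disposed instance before purging it, untaken samples included:

```python
# Hypothetical sketch of the autopurge_disposed_samples_delay purge rule.
class ReaderDataLifecycleQosPolicy:
    def __init__(self, autopurge_disposed_samples_delay):
        self.autopurge_disposed_samples_delay = autopurge_disposed_samples_delay

class DataReader:
    def __init__(self, qos):
        self.qos = qos
        # key -> {"disposed_at": time or None, "samples": [...]}
        self.instances = {}

    def dispose_instance(self, key, now):
        self.instances[key]["disposed_at"] = now

    def purge(self, now):
        delay = self.qos.autopurge_disposed_samples_delay
        for key in list(self.instances):
            t = self.instances[key]["disposed_at"]
            if t is not None and now - t >= delay:
                # All internal information goes, untaken samples included.
                del self.instances[key]

reader = DataReader(ReaderDataLifecycleQosPolicy(autopurge_disposed_samples_delay=5))
reader.instances["k1"] = {"disposed_at": None, "samples": ["s1"]}
reader.dispose_instance("k1", now=10)
reader.purge(now=12)                       # delay not yet elapsed: kept
still_there = "k1" in reader.instances
reader.purge(now=15)                       # 5 time units elapsed: purged
gone = "k1" not in reader.instances
```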
Actions taken:
February 25, 2005: received issue
August 1, 2005: closed issue
Discussion:
Issue 8388: (R#106b) Parameter passing convention of Subscriber::get_datareaders (data-distribution-rtf)
Nature: Uncategorized Issue
Severity:
Summary: The mapping from PIM to IDL PSM for the operation Subscriber::get_datareaders maps the PIM 'out' sequence parameter to an IDL 'out' parameter. This mapping is inconsistent with other places in the API, in which PIM 'out' parameters are represented as 'inout' in the PSM. An 'out' parameter is also undesirable from a performance perspective.
Proposed Resolution:
The sequence argument to Subscriber::get_datareaders should be an 'inout' in the IDL PSM.
Proposed Revised Text:
In section 2.2.3:
ReturnCode_t get_datareaders(
inout DataReaderSeq readers,
in SampleStateMask sample_states,
in ViewStateMask view_states,
in InstanceStateMask instance_states);
Resolution:
The sequence argument to Subscriber::get_datareaders should be an 'inout' in the IDL PSM.
Revised Text:
Section 2.2.3 DCPS PSM : IDL interface Subscriber modify get_datareaders operation:
From:
ReturnCode_t get_datareaders(out DataReaderSeq readers,
in SampleStateMask sample_states,
in ViewStateMask view_states,
in InstanceStateMask instance_states);
To:
ReturnCode_t get_datareaders(inout DataReaderSeq readers,
in SampleStateMask sample_states,
in ViewStateMask view_states,
in InstanceStateMask instance_states);
Actions taken:
February 28, 2005: received issue
August 1, 2005: closed issue
Issue 8389: (R#107) Missing Topic operations in IDL PSM (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The Topic interface in the PSM is missing the following operations which are present in the PIM: get_qos, set_qos, get_listener, and set_listener.
Proposed Resolution:
Add the missing operations to the IDL interface.
Proposed Revised Text:
In section 2.2.3:
interface Topic : Entity, TopicDescription {
ReturnCode_t get_qos(inout TopicQos qos);
ReturnCode_t set_qos(in TopicQos qos);
TopicListener get_listener();
ReturnCode_t set_listener(
in TopicListener a_listener,
in StatusKindMask mask);
};
Resolution:
Add the missing operations to the IDL interface.
Revised Text:
Section 2.2.3 DCPS PSM : IDL interface Topic add operations get_qos , set_qos, get_listener, set_listener:
interface Topic : Entity, TopicDescription {
…
ReturnCode_t get_qos(inout TopicQos qos);
ReturnCode_t set_qos(in TopicQos qos);
ReturnCode_t set_listener(
in TopicListener a_listener, in StatusKindMask mask);
TopicListener get_listener();
…
};
Actions taken:
February 28, 2005: received issue
August 1, 2005: closed issue
Issue 8390: (R#109) Unused types in IDL (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The types TopicSeq, SampleStateSeq, ViewStateSeq and InstanceStateSeq all appear in the IDL PSM but are never used.
Proposed Resolution:
Remove the unused types from the IDL PSM.
Proposed Revised Text:
The following declarations should be removed from the IDL PSM:
typedef sequence<Topic> TopicSeq;
typedef sequence <SampleStateKind> SampleStateSeq;
typedef sequence<ViewStateKind> ViewStateSeq;
typedef sequence<InstanceStateKind> InstanceStateSeq;
Resolution:
Remove the unused types from the IDL PSM.
Revised Text:
The following declarations should be removed from Section 2.2.3 DCPS PSM : IDL
typedef sequence<Topic> TopicSeq;
typedef sequence <SampleStateKind> SampleStateSeq;
typedef sequence<ViewStateKind> ViewStateSeq;
typedef sequence<InstanceStateKind> InstanceStateSeq;
Actions taken:
February 28, 2005: received issue
August 1, 2005: closed issue
Issue 8391: Incorrect field name for USER_DATA, TOPIC_DATA, and GROUP_DATA (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The QoS table in section 2.1.3 does not mention the field names in the USER_DATA, TOPIC_DATA, and GROUP_DATA QoS policies. The UML diagram in figure 2-12 gives the names of these fields as "data"; however, that name is inconsistent with the names given in the IDL PSM.
Proposed Resolution:
The table and figure should indicate that the name of the field in each policy is "value." That name is consistent with the IDL PSM.
Proposed Revised Text:
In 2.1.3 figure 2-12, UserDataQosPolicy:
value [*] : char
In 2.1.3 figure 2-12, TopicDataQosPolicy:
value [*] : char
In 2.1.3 figure 2-12, GroupDataQosPolicy:
value [*] : char
In the table in 2.1.3, in the "Value" column of the USER_DATA, TOPIC_DATA, and GROUP_DATA rows:
"value": a sequence of octets
Resolution:
The table and figure should indicate that the name of the field in each policy is "value." That name is consistent with the IDL PSM.
Revised Text:
In 2.1.3 figure 2-12, UserDataQosPolicy:
value [*] : char
In 2.1.3 figure 2-12, TopicDataQosPolicy:
value [*] : char
In 2.1.3 figure 2-12, GroupDataQosPolicy:
value [*] : char
In the table in 2.1.3, in the "Value" column of the USER_DATA, TOPIC_DATA, and GROUP_DATA rows, modify the contents to:
A sequence of octets: "value"
Actions taken:
February 28, 2005: received issue
August 1, 2005: closed issue
Issue 8392: (R#112) Incorrect SampleRejectedStatusKind constants (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The constants in the enumeration SampleRejectedStatusKind should correspond to the fields of the RESOURCE_LIMITS QoS policy.
Proposed Resolution:
Remove the constant REJECTED_BY_TOPIC_LIMIT. Add the constants REJECTED_BY_SAMPLES_LIMIT and REJECTED_BY_SAMPLES_PER_INSTANCE_LIMIT.
Proposed Revised Text:
enum SampleRejectedStatusKind {
REJECTED_BY_INSTANCE_LIMIT,
REJECTED_BY_SAMPLES_LIMIT,
REJECTED_BY_SAMPLES_PER_INSTANCE_LIMIT
};
Resolution:
Remove the constant REJECTED_BY_TOPIC_LIMIT. Add the constants REJECTED_BY_SAMPLES_LIMIT and REJECTED_BY_SAMPLES_PER_INSTANCE_LIMIT.
Revised Text:
Section 2.2.3 DCPS PSM : IDL modify SampleRejectedStatusKind
From:
enum SampleRejectedStatusKind {
REJECTED_BY_INSTANCE_LIMIT,
REJECTED_BY_TOPIC_LIMIT
};
To:
enum SampleRejectedStatusKind {
REJECTED_BY_INSTANCE_LIMIT,
REJECTED_BY_SAMPLES_LIMIT,
REJECTED_BY_SAMPLES_PER_INSTANCE_LIMIT
};
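As an illustration only (a hypothetical Python sketch; the admission-check function and limits dictionary are invented), the corrected constants line up one-to-one with the RESOURCE_LIMITS fields max_instances, max_samples, and max_samples_per_instance:

```python
# Hypothetical sketch of how each rejection kind corresponds to one
# RESOURCE_LIMITS field when a DataReader decides whether to accept a sample.
REJECTED_BY_INSTANCE_LIMIT = "REJECTED_BY_INSTANCE_LIMIT"
REJECTED_BY_SAMPLES_LIMIT = "REJECTED_BY_SAMPLES_LIMIT"
REJECTED_BY_SAMPLES_PER_INSTANCE_LIMIT = "REJECTED_BY_SAMPLES_PER_INSTANCE_LIMIT"

def check_admission(limits, instance_count, total_samples, samples_in_instance):
    # A sample for a new instance would exceed max_instances:
    if instance_count >= limits["max_instances"]:
        return REJECTED_BY_INSTANCE_LIMIT
    # One more sample would exceed the reader-wide max_samples:
    if total_samples >= limits["max_samples"]:
        return REJECTED_BY_SAMPLES_LIMIT
    # One more sample would exceed max_samples_per_instance:
    if samples_in_instance >= limits["max_samples_per_instance"]:
        return REJECTED_BY_SAMPLES_PER_INSTANCE_LIMIT
    return None   # accepted

limits = {"max_instances": 2, "max_samples": 10, "max_samples_per_instance": 4}
```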
Actions taken:
February 28, 2005: received issue
August 1, 2005: closed issue
Issue 8393: (R#114) Operations should not return void (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: A number of operations in the specification have a void return type. However, with a void return type, an implementation cannot indicate that an error occurred.
Proposed Resolution:
The following methods currently return void and should return ReturnCode_t instead.
· GuardCondition::set_trigger_value
· DomainParticipant::get_default_publisher_qos
· DomainParticipant::get_default_subscriber_qos
· DomainParticipant::get_default_topic_qos
· DomainParticipant::assert_liveliness
· DomainParticipantFactory::get_default_participant_qos
· Publisher::get_default_datawriter_qos
· Subscriber::get_default_datareader_qos
· DataWriter::assert_liveliness
· Subscriber::notify_datareaders
(The get_qos operations on each concrete Entity type are shown to return void in the IDL PSM but a list of QoS policies in the PIM. That inconsistency is addressed in another issue.)
Proposed Revised Text:
In the GuardCondition Class table in 2.1.2.1.8, the void return type of set_trigger_value should be replaced by ReturnCode_t. The return type of that operation must be similarly changed in the IDL PSM in 2.2.3.
In the DomainParticipant Class table in 2.1.2.2.1, the void return type of the get_default_*_qos operations and the assert_liveliness operation should be replaced by ReturnCode_t. The return types of those operations should be similarly changed in the IDL PSM in 2.2.3.
In the Publisher Class table in 2.1.2.4.1, the void return type of get_default_datawriter_qos should be replaced by ReturnCode_t. The return type of that operation must be similarly changed in the IDL PSM in 2.2.3.
In the DataWriter Class table in 2.1.2.4.2, the void return type of assert_liveliness should be replaced by ReturnCode_t. The return type of that operation must be similarly changed in the IDL PSM in 2.2.3.
In the Subscriber Class table in 2.1.2.5.2, the void return type of get_default_datareader_qos and notify_datareaders should be replaced by ReturnCode_t. The return type of those operations must be similarly changed in the IDL PSM in 2.2.3.
Resolution:
The following methods currently return void and should return ReturnCode_t instead.
· GuardCondition::set_trigger_value
· DomainParticipant::get_default_publisher_qos
· DomainParticipant::get_default_subscriber_qos
· DomainParticipant::get_default_topic_qos
· DomainParticipant::assert_liveliness
· DomainParticipantFactory::get_default_participant_qos
· Publisher::get_default_datawriter_qos
· Subscriber::get_default_datareader_qos
· DataWriter::assert_liveliness
· Subscriber::notify_datareaders
(The get_qos operations on each concrete Entity type are shown to return void in the IDL PSM but a list of QoS policies in the PIM. That inconsistency is addressed in another issue.)
Revised Text:
In the GuardCondition Class table in 2.1.2.1.8, the void return type of set_trigger_value should be replaced by ReturnCode_t.
In the GuardCondition interface in Section 2.2.3 DCPS PSM : IDL the void return type of set_trigger_value should be replaced by ReturnCode_t.
In the DomainParticipant Class table in 2.1.2.2.1, the void return type of the get_default_publisher_qos, get_default_subscriber_qos, get_default_topic_qos, and assert_liveliness operations should be replaced by ReturnCode_t.
In the DomainParticipant interface in Section 2.2.3 DCPS PSM : IDL, the void return type of the get_default_publisher_qos, get_default_subscriber_qos, get_default_topic_qos, and assert_liveliness operations should be replaced by ReturnCode_t.
In the DomainParticipantFactory Class table in 2.1.2.2.2, the void return type of get_default_participant_qos should be replaced by ReturnCode_t.
In the DomainParticipantFactory interface in Section 2.2.3 DCPS PSM : IDL, the void return type of get_default_participant_qos should be replaced by ReturnCode_t.
In the Publisher Class table in 2.1.2.4.1, the void return type of get_default_datawriter_qos should be replaced by ReturnCode_t.
In the Publisher interface in Section 2.2.3 DCPS PSM : IDL the void return type of get_default_datawriter_qos should be replaced by ReturnCode_t.
In the DataWriter Class table in 2.1.2.4.2, the void return type of assert_liveliness should be replaced by ReturnCode_t.
In the DataWriter interface in Section 2.2.3 DCPS PSM : IDL, the void return type of assert_liveliness should be replaced by ReturnCode_t.
In the Subscriber Class table in 2.1.2.5.2, the void return type of get_default_datareader_qos and notify_datareaders should be replaced by ReturnCode_t.
In the Subscriber interface in Section 2.2.3 DCPS PSM : IDL, the void return type of get_default_datareader_qos and notify_datareaders should be replaced by ReturnCode_t.
Actions taken:
December 14, 2004: received issue
February 28, 2005: received issue
August 1, 2005: closed issue
Issue 8394: (R#115) Destination order missing from PublicationBuiltinTopicData (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The PublicationBuiltinTopicData type is missing a destination order field.
Proposed Resolution:
Add the missing field in both the PIM and the IDL PSM.
Proposed Revised Text:
In the "DCPSPublication" row of the table in 2.1.5, pg. 2-131, add a sub-row like the following after the existing "ownership_strength" sub-row:
destination_order DestinationOrderQosPolicy Policy of the corresponding DataWriter
In the IDL PSM, modify the PublicationBuiltinTopicData declaration as follows (the member immediately preceding the new member is shown below in order to demonstrate the position of the new member):
struct PublicationBuiltinTopicData {
OwnershipStrengthQosPolicy ownership_strength;
DestinationOrderQosPolicy destination_order;
};
Resolution:
Add the missing field in both the PIM and the IDL PSM.
Revised Text:
In the "DCPSPublication" row of the table in 2.1.5, pg. 2-131, add a sub-row like the following after the existing "ownership_strength" sub-row:
destination_order DestinationOrderQosPolicy Policy of the corresponding DataWriter
In Section 2.2.3 DCPS PSM : IDL, modify the PublicationBuiltinTopicData declaration as follows (the member immediately preceding the new member is shown below in order to demonstrate the position of the new member):
struct PublicationBuiltinTopicData {
…
OwnershipStrengthQosPolicy ownership_strength;
DestinationOrderQosPolicy destination_order;
…
};
Actions taken:
February 28, 2005: received issue
August 1, 2005: closed issue
Issue 8395: TransportPriority QoS range does not specify high/low priority values (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The specification does not state what the valid range of the transport priority values is, nor does it state whether higher or lower values correspond to higher priorities.
Proposed Resolution:
Stipulate that the range of TransportPriorityQosPolicy::value is the entire range of a 32 bit signed integer. Larger numbers indicate higher priority. However, the precise interpretation of the value chosen is transport- and implementation-dependent.
Proposed Revised Text:
The second paragraph of section 2.1.3.14 contains the sentence:
"As this is specific to each transport it is not possible to define the behavior generically."
This sentence should be rewritten as follows:
"Any value within the range of a 32-bit signed integer may be chosen; higher values indicate higher priority. However, any further interpretation of this policy is specific to a particular transport and a particular implementation of the Service. For example, a particular transport is permitted to treat a range of priority values as equivalent to one another."
Resolution:
Stipulate that the range of TransportPriorityQosPolicy::value is the entire range of a 32 bit signed integer. Larger numbers indicate higher priority. However, the precise interpretation of the value chosen is transport- and implementation-dependent.
Revised Text:
In the second paragraph of section 2.1.3.14, replace the sentence:
As this is specific to each transport it is not possible to define the behavior generically.
with the following:
Any value within the range of a 32-bit signed integer may be chosen; higher values indicate higher priority. However, any further interpretation of this policy is specific to a particular transport and a particular implementation of the Service. For example, a particular transport is permitted to treat a range of priority values as equivalent to one another.
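As an illustration only (a hypothetical Python sketch; the three-class bucketing is an invented example of transport-specific behavior, not mandated anywhere), a transport may accept any 32-bit signed value, treat larger as higher priority, and still collapse whole ranges into a few internal classes:

```python
# Hypothetical transport-side interpretation of TransportPriority values.
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def to_transport_class(value):
    if not (INT32_MIN <= value <= INT32_MAX):
        raise ValueError("TransportPriority value must fit a 32-bit signed int")
    # This particular (illustrative) transport only distinguishes three
    # classes, treating whole ranges of priorities as equivalent —
    # which the resolution explicitly permits.
    if value < 0:
        return "low"
    if value < 1000:
        return "normal"
    return "high"
```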
Actions taken:
February 28, 2005: received issue
August 1, 2005: closed issue
Issue 8396: (R#119) Need lookup_instance method on reader and writer (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: There are get_key_value operations in the DataReader and DataWriter to translate from an instance handle to a key. However, in order for a client of the Service to use the per-instance read and take operations of a DataReader, it would be convenient to have an operation to translate in the other direction: from key value(s) to an instance handle.
Proposed Resolution:
Add operations DataReader::lookup_instance and DataWriter::lookup_instance.
Proposed Revised Text:
Append the following rows to the DataWriter Class table in 2.1.2.4.2:
lookup_instance InstanceHandle_t
Instance Data
Add a new section "2.1.2.4.2.23 lookup_instance" with the following contents:
This operation takes as a parameter an instance (to get the key value) and returns a handle that can be used in successive operations that accept an instance handle as an argument.
This operation does not register the instance in question. If the instance has not been previously registered, or if for any other reason the Service is unable to provide an instance handle, the Service will return the special value HANDLE_NIL.
Append the following rows to the DataReader Class table in 2.1.2.5.3:
lookup_instance InstanceHandle_t
Instance Data
Add a new section "2.1.2.5.3.33 lookup_instance" with the following contents:
This operation takes as a parameter an instance (to get the key value) and returns a handle that can be used in successive operations that accept an instance handle as an argument.
If for any reason the Service is unable to provide an instance handle, the Service will return the special value HANDLE_NIL.
Resolution:
Add operations DataReader::lookup_instance and DataWriter::lookup_instance.
Revised Text:
Append the following rows to the DataWriter Class table in 2.1.2.4.2 after the rows that describe get_key_value:
lookup_instance InstanceHandle_t
instance Data
Append the following rows to the FooDataWriter Class table in 2.1.2.4.2 after the rows that describe get_key_value:
lookup_instance InstanceHandle_t
instance Foo
Insert a new section "2.1.2.4.2.10 lookup_instance". Previous section 2.1.2.4.2.10 "write" becomes 2.1.2.4.2.11:
2.1.2.4.2.10 lookup_instance
This operation takes as a parameter an instance and returns a handle that can be used in subsequent operations that accept an instance handle as an argument. The instance parameter is only used for the purpose of examining the fields that define the key.
This operation does not register the instance in question. If the instance has not been previously registered, or if for any other reason the Service is unable to provide an instance handle, the Service will return the special value HANDLE_NIL.
Append the following rows to the DataReader Class table in 2.1.2.5.3:
lookup_instance InstanceHandle_t
instance Data
Append the following rows to the FooDataReader Class table in 2.1.2.5.3 after the rows that describe get_key_value:
lookup_instance InstanceHandle_t
instance Foo
Add a new section "2.1.2.5.3.29 lookup_instance". Previous section 2.1.2.5.3.29 "delete_contained_entities" becomes 2.1.2.5.3.30:
2.1.2.5.3.29 lookup_instance
This operation takes as a parameter an instance and returns a handle that can be used in subsequent operations that accept an instance handle as an argument. The instance parameter is only used for the purpose of examining the fields that define the key.
This operation does not register the instance in question. If the instance has not been previously registered, or if for any other reason the Service is unable to provide an instance handle, the Service will return the special value HANDLE_NIL.
Figure 2-8: add operation lookup_instance to the FooDataWriter and FooDataReader interfaces
Figure 2-9: add operation lookup_instance to the DataWriter interface
Figure 2-10: add operation lookup_instance to the DataReader interface
Section 2.2.3 DCPS PSM : IDL interface DataWriter add commented-out operation:
// InstanceHandle_t lookup_instance(in Data instance_data);
Section 2.2.3 DCPS PSM : IDL interface FooDataWriter add operation:
DDS::InstanceHandle_t lookup_instance(in Foo key_holder);
Section 2.2.3 DCPS PSM : IDL interface DataReader add commented-out operation:
// InstanceHandle_t lookup_instance(in Data instance_data);
Section 2.2.3 DCPS PSM : IDL interface FooDataReader add operation:
DDS::InstanceHandle_t lookup_instance(in Foo key_holder);
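The semantics above can be illustrated with a minimal Python sketch. This is not any real DDS binding; the FooDataWriter class and its fields are hypothetical mocks showing only that lookup_instance derives the handle from the key fields alone, never registers, and yields HANDLE_NIL for unknown instances.

```python
# Illustrative mock of the lookup_instance semantics; all names here are
# hypothetical, not taken from a real DDS API.

HANDLE_NIL = None

class FooDataWriter:
    def __init__(self):
        self._handles = {}      # key fields -> instance handle
        self._next_handle = 1

    def register_instance(self, instance):
        key = instance["key"]   # only the key fields identify the instance
        if key not in self._handles:
            self._handles[key] = self._next_handle
            self._next_handle += 1
        return self._handles[key]

    def lookup_instance(self, instance):
        # Does NOT register: an unregistered instance maps to HANDLE_NIL,
        # and non-key fields of the parameter are ignored.
        return self._handles.get(instance["key"], HANDLE_NIL)
```

A DataReader-side lookup_instance would behave the same way with respect to the instances the reader has seen.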
Actions taken:
February 28, 2005: received issue
August 1, 2005: closed issue
Issue 8397: (R#120) Clarify use of DATAREADER_QOS_USE_TOPIC_QOS (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: Title: (R#120) Clarify use of DATAREADER_QOS_USE_TOPIC_QOS constant when creating DataReader on ContentFilteredTopic or MultiTopic
The specification defines the constant DATAREADER_QOS_USE_TOPIC_QOS that may be used to specify the QoS of a DataReader. The meaning of such usage is unclear when the DataReader's TopicDescription is a ContentFilteredTopic or a MultiTopic since those types do not have QoS of their own.
Proposed Resolution:
A ContentFilteredTopic is based on a single Topic; therefore, the meaning of DATAREADER_QOS_USE_TOPIC_QOS is well-defined in that case: it refers to the QoS of the Topic accessible via the ContentFilteredTopic::get_related_topic operation.
The meaning of DATAREADER_QOS_USE_TOPIC_QOS is not well-defined in the case of a MultiTopic; using it to set the QoS of a DataReader of a MultiTopic is an error. Specifically, passing the constant to Subscriber::create_datareader when a MultiTopic is also passed to that operation will result in the operation returning nil.
Proposed Revised Text:
The last paragraph of section "2.1.2.5.2.5 create_datareader" (which begins "The special value…") should be rewritten as follows:
Provided that the TopicDescription passed to this method is a Topic or a ContentFilteredTopic, the special value DATAREADER_QOS_USE_TOPIC_QOS can be used to indicate that the DataReader should be created with a combination of the default DataReader QoS and the Topic QoS. (In the case of a ContentFilteredTopic, the Topic in question is the ContentFilteredTopic's "related Topic.") The use of this value is equivalent to the application obtaining the default DataReader QoS and the Topic QoS (by means of the operation Topic::get_qos) and then combining these two QoS using the operation copy_from_topic_qos whereby any policy that is set on the Topic QoS "overrides" the corresponding policy on the default QoS. The resulting QoS is then applied to the creation of the DataReader. It is an error to use DATAREADER_QOS_USE_TOPIC_QOS when creating a DataReader with a MultiTopic; this method will return a nil value in that case.
Resolution:
A ContentFilteredTopic is based on a single Topic; therefore, the meaning of DATAREADER_QOS_USE_TOPIC_QOS is well-defined in that case: it refers to the QoS of the Topic accessible via the ContentFilteredTopic::get_related_topic operation.
The meaning of DATAREADER_QOS_USE_TOPIC_QOS is not well-defined in the case of a MultiTopic; using it to set the QoS of a DataReader of a MultiTopic is an error. Specifically, passing the constant to Subscriber::create_datareader when a MultiTopic is also passed to that operation will result in the operation returning nil.
Revised Text:
The last paragraph of section "2.1.2.5.2.5 create_datareader" (which begins "The special value…") should be modified from:
The special value DATAREADER_QOS_USE_TOPIC_QOS can be used to indicate that the DataReader should be created with a combination of the default DataReader QoS and the Topic QoS. The use of this value is equivalent to the application obtaining the default DataReader QoS and the Topic QoS (by means of the operation Topic::get_qos) and then combining these two QoS using the operation copy_from_topic_qos whereby any policy that is set on the Topic QoS "overrides" the corresponding policy on the default QoS. The resulting QoS is then applied to the creation of the DataReader.
To (inserted text is shown in blue):
Provided that the TopicDescription passed to this method is a Topic or a ContentFilteredTopic, the special value DATAREADER_QOS_USE_TOPIC_QOS can be used to indicate that the DataReader should be created with a combination of the default DataReader QoS and the Topic QoS. (In the case of a ContentFilteredTopic, the Topic in question is the ContentFilteredTopic's "related Topic.") The use of this value is equivalent to the application obtaining the default DataReader QoS and the Topic QoS (by means of the operation Topic::get_qos) and then combining these two QoS using the operation copy_from_topic_qos whereby any policy that is set on the Topic QoS "overrides" the corresponding policy on the default QoS. The resulting QoS is then applied to the creation of the DataReader. It is an error to use DATAREADER_QOS_USE_TOPIC_QOS when creating a DataReader with a MultiTopic; this method will return a 'nil' value in that case.
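The "combine and override" equivalence described above can be sketched in Python. The dictionary-based QoS values and the copy_from_topic_qos helper below are illustrative stand-ins for the real structures, with None marking a policy that is not set on the Topic QoS.

```python
# Sketch of the DATAREADER_QOS_USE_TOPIC_QOS equivalence: every policy
# set on the Topic QoS overrides the corresponding default DataReader
# policy. Dictionary representation is illustrative only.

def copy_from_topic_qos(default_qos, topic_qos):
    """Return a DataReader QoS in which Topic-set policies win."""
    combined = dict(default_qos)
    combined.update({k: v for k, v in topic_qos.items() if v is not None})
    return combined
```

For example, a Topic that sets only RELIABILITY leaves the default deadline untouched while overriding the default reliability.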
Actions taken:
February 28, 2005: received issue
August 1, 2005: closed issue
Issue 8398: (R#122) Missing QoS dependencies in table (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: A DataReader must specify a TimeBasedFilterQosPolicy::minimum_separation value that is less than or equal to its DeadlineQosPolicy::period value. (Otherwise, all matched DataWriters will be considered to miss every deadline.)
There are dependencies among the fields of ResourceLimitsQosPolicy: max_samples >= max_samples_per_instance.
The above dependencies are not made explicit in the specification.
Proposed Resolution:
The above dependencies should be made explicit in the QoS policy table in section 2.1.3.
Proposed Revised Text:
The following sentence should be added to the "Meaning" column of the "DEADLINE" row: "It is inconsistent for a DataReader to have a deadline period less than its TIME_BASED_FILTER's minimum_separation."
The following sentence should be added to the "Meaning" column of the "TIME_BASED_FILTER" row: "It is inconsistent for a DataReader to have a minimum_separation longer than its deadline period."
The following sentence should be added to the "Meaning" column of the "max_samples" row: "It is inconsistent for this value to be less than max_samples_per_instance."
The following sentence should be added to the "Meaning" column of the "max_samples_per_instance" row: "It is inconsistent for this value to be greater than max_samples."
Resolution:
The above dependencies should be made explicit in the QoS policy table in section 2.1.3.
Revised Text:
The following sentence should be added to the "Meaning" column of the "DEADLINE" row: "It is inconsistent for a DataReader to have a deadline period less than its TIME_BASED_FILTER's minimum_separation."
The following sentence should be added to the "Meaning" column of the "TIME_BASED_FILTER" row: "It is inconsistent for a DataReader to have a minimum_separation longer than its deadline period."
The following sentence should be added to the "Meaning" column of the "max_samples" row: "It is inconsistent for this value to be less than max_samples_per_instance."
The following sentence should be added to the "Meaning" column of the "max_samples_per_instance" row: "It is inconsistent for this value to be greater than max_samples."
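The two consistency rules can be stated compactly as predicates. The following Python sketch uses plain values rather than a real QoS API; the function names are hypothetical.

```python
# Illustrative consistency checks for the QoS dependencies above.

def datareader_qos_consistent(deadline_period, minimum_separation):
    # TIME_BASED_FILTER minimum_separation must not exceed the DEADLINE
    # period, or every matched DataWriter would appear to miss deadlines.
    return minimum_separation <= deadline_period

def resource_limits_consistent(max_samples, max_samples_per_instance):
    # max_samples must be able to hold at least one full instance.
    return max_samples >= max_samples_per_instance
```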
Actions taken:
February 28, 2005: received issue
August 1, 2005: closed issue
Issue 8399: Need an extra return code: ILLEGAL_OPERATION (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: It would be useful to have an additional return code called RETCODE_ILLEGAL_OPERATION. This return code would be useful, for example, in preventing the user from performing certain operations on the built-in DataReaders. Their QoS values are stated in the specification; vendors need not allow those values to be changed. Users should also not be allowed to delete built-in Entities. If the user tries to perform either of these two operations, the choices of return code we could use that are in accordance with the spec are:
· RETCODE_ERROR
· RETCODE_UNSUPPORTED
· RETCODE_BAD_PARAMETER
· RETCODE_PRECONDITION_NOT_MET
· RETCODE_IMMUTABLE_POLICY
All of the above fall short of helping the user find out what the problem really is.
· RETCODE_ERROR: This is the generic error code; it does not give much information as to what might be wrong.
· RETCODE_UNSUPPORTED: This choice would be semantically incorrect. The failure is not due to a vendor's failure to support an optional feature of the specification, but rather to the user's violation of a policy consistent with the specification that was set by that vendor.
· RETCODE_BAD_PARAMETER: This return code is a little confusing. For instance, when trying to delete a built-in DataReader, the reader parameter passed is a valid DataReader and the function is expecting a reader. Such usage would seem to constitute passing a good parameter, not a bad one.
· RETCODE_PRECONDITION_NOT_MET: There is no precondition that the user could change that would make the call work. Therefore, this result would be confusing.
· RETCODE_IMMUTABLE_POLICY: This return code could potentially work when trying to change the QoS policies of the built-in DataReaders but not when attempting to delete them. However, it would still be semantically incorrect. The problem is not that the user is trying to change immutable QoS policies.
The QoS policies being changed may well be mutable; what is not allowed is the operation on the Entity whose policies are in question. Such a return result could lead the user to think that s/he is confused about which QoS policies are mutable.
Proposed Resolution:
Add a return code RETCODE_ILLEGAL_OPERATION. This return code indicates a misuse of the API provided by the Service. The user is invoking an operation on an inappropriate Entity or at an inappropriate time. There is no precondition that could be changed to allow the operation to succeed.
Vendors may use this new return code to indicate violations of policies they have set that are consistent with, but not fully described by, the specification. It is therefore necessary that the return code be considered a "standard" return code (like RETCODE_OK, RETCODE_BAD_PARAMETER, and RETCODE_ERROR) that could potentially be returned by any operation having the return type ReturnCode_t.
Proposed Revised Text:
Add the following row to the "Return codes" table in 2.1.1.1:
ILLEGAL_OPERATION An operation was invoked on an inappropriate object or at an inappropriate time (as determined by policies set by the specification or the Service implementation). There is no precondition that could be changed to make the operation succeed.
In the paragraph following the table, the sentence "Any operation with return type ReturnCode_t may return OK or ERROR" should be restated "Any operation with return type ReturnCode_t may return OK, ERROR, or ILLEGAL_OPERATION." The sentence "The return codes OK, ERROR, ALREADY_DELETED, UNSUPPORTED, and BAD_PARAMETER are the standard return codes and the specification won't mention them explicitly for each operation" should be restated as "The return codes OK, ERROR, ILLEGAL_OPERATION, ALREADY_DELETED, UNSUPPORTED, and BAD_PARAMETER are the standard return codes and the specification won't mention them explicitly for each operation".
Resolution:
Add a return code RETCODE_ILLEGAL_OPERATION. This return code indicates a misuse of the API provided by the Service. The user is invoking an operation on an inappropriate Entity or at an inappropriate time. There is no precondition that could be changed to allow the operation to succeed.
Vendors may use this new return code to indicate violations of policies they have set that are consistent with, but not fully described by, the specification. It is therefore necessary that the return code be considered a "standard" return code (like RETCODE_OK, RETCODE_BAD_PARAMETER, and RETCODE_ERROR) that could potentially be returned by any operation having the return type ReturnCode_t.
Revised Text:
Add the following row to the "Return codes" table in 2.1.1.1:
ILLEGAL_OPERATION An operation was invoked on an inappropriate object or at an inappropriate time (as determined by policies set by the specification or the Service implementation). There is no precondition that could be changed to make the operation succeed.
In the paragraph following the table,
the sentence
"Any operation with return type ReturnCode_t may return OK or ERROR"
should be replaced with:
"Any operation with return type ReturnCode_t may return OK, ERROR, or ILLEGAL_OPERATION."
The sentence
"The return codes OK, ERROR, ALREADY_DELETED, UNSUPPORTED, and BAD_PARAMETER are the standard return codes and the specification won't mention them explicitly for each operation"
should be replaced with
"The return codes OK, ERROR, ILLEGAL_OPERATION, ALREADY_DELETED, UNSUPPORTED, and BAD_PARAMETER are the standard return codes and the specification won't mention them explicitly for each operation".
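The intended use of the new return code might look like the following Python sketch. The Subscriber and DataReader classes are illustrative mocks; only the return-code names come from the specification.

```python
# Sketch: rejecting deletion of a built-in DataReader with the new
# ILLEGAL_OPERATION return code. Classes are hypothetical mocks.

from enum import Enum

class ReturnCode(Enum):
    OK = 0
    ERROR = 1
    BAD_PARAMETER = 2
    PRECONDITION_NOT_MET = 3
    ILLEGAL_OPERATION = 4

class Subscriber:
    def delete_datareader(self, reader):
        if reader.is_builtin:
            # The parameter is a valid DataReader, so BAD_PARAMETER would
            # mislead; no precondition could be changed, so
            # PRECONDITION_NOT_MET would mislead. The operation itself is
            # simply not permitted on this Entity.
            return ReturnCode.ILLEGAL_OPERATION
        return ReturnCode.OK

class DataReader:
    def __init__(self, is_builtin=False):
        self.is_builtin = is_builtin
```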
Actions taken:
February 28, 2005: received issue
August 1, 2005: closed issue
Issue 8417: (R#124) Clarification on the behavior of dispose (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The description of DataWriter::dispose needs to clarify whether it can be called with a nil handle.
Proposed Resolution:
DataWriter::dispose should just behave like DataWriter::write in that if the instance is not yet registered, the Service will automatically register it for the user. In that case, the operation should not return PRECONDITION_NOT_MET.
Proposed Revised Text:
The second-to-last paragraph in section 2.1.2.4.2.12 states "The operation must be only called on registered instances. Otherwise the operation will return the error PRECONDITION_NOT_MET." This paragraph should be removed.
Resolution:
DataWriter::dispose should just behave like DataWriter::write in that if the instance is not yet registered, the Service will automatically register it for the user. In that case, the operation should not return PRECONDITION_NOT_MET.
Revised Text:
Remove the second-to-last paragraph in section 2.1.2.4.2.13 "dispose"
The operation must be only called on registered instances. Otherwise the operation will return the error PRECONDITION_NOT_MET.
Remove the second-to-last paragraph in section 2.1.2.4.2.14 "dispose_w_timestamp"
The operation must be only called on registered instances. Otherwise the operation will return the error PRECONDITION_NOT_MET.
Actions taken:
March 1, 2005: received issue
August 1, 2005: closed issue
Issue 8418: (R#125) Additional operations that can return RETCODE_TIMEOUT (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The specification currently states that the DataWriter::write operation may return TIMEOUT under certain circumstances. However, the DataWriter operations dispose, register, unregister, and their variants may also block due to a temporarily full history.
Proposed Resolution:
Revise the documentation for the listed operations to state that they may return TIMEOUT if the RELIABILITY max_blocking_time elapses.
Proposed Revised Text:
The following paragraph should be appended to sections 2.1.2.4.2.5, 2.1.2.4.2.6, 2.1.2.4.2.7, 2.1.2.4.2.8, 2.1.2.4.2.12, and 2.1.2.4.2.13:
This operation may block if it would cause data to be lost or one of the limits specified in the RESOURCE_LIMITS to be exceeded. Under these circumstances, the RELIABILITY max_blocking_time configures the maximum time this operation may block (waiting for space to become available). If max_blocking_time elapses before the DataWriter is able to store the modification without exceeding the limits, this operation will fail and return TIMEOUT.
Resolution:
Revise the documentation for the listed operations to state that they may return TIMEOUT if the RELIABILITY max_blocking_time elapses.
Revised Text:
The following paragraph should be appended to section 2.1.2.4.2.5 "register_instance":
This operation may block if the RELIABILITY kind is set to RELIABLE and the modification would cause data to be lost or else cause one of the limits specified in the RESOURCE_LIMITS to be exceeded. Under these circumstances, the RELIABILITY max_blocking_time configures the maximum time the write operation may block (waiting for space to become available). If max_blocking_time elapses before the DataWriter is able to store the modification without exceeding the limits, the operation will fail and return TIMEOUT.
The following paragraph should be appended to sections 2.1.2.4.2.6 "register_instance_w_timestamp", 2.1.2.4.2.7 "unregister_instance", and 2.1.2.4.2.8 "unregister_instance_w_timestamp":
This operation may block and return TIMEOUT under the same circumstances described for the register_instance operation (Section 2.1.2.4.2.5).
The following paragraph should be appended to sections 2.1.2.4.2.12 "write_w_timestamp", 2.1.2.4.2.13 "dispose", and 2.1.2.4.2.14 "dispose_w_timestamp":
This operation may block and return TIMEOUT under the same circumstances described for the write operation (Section 2.1.2.4.2.11).
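The bounded-blocking behavior these paragraphs describe can be sketched in Python with a condition variable whose wait is capped at max_blocking_time. The History class below is an illustrative simplification (it ignores the RELIABILITY kind and models only a max_samples resource limit); names are hypothetical.

```python
# Sketch of a writer-side history with a resource limit: an operation
# that cannot obtain space within max_blocking_time (seconds) fails
# with TIMEOUT. Illustrative only, not a real DDS implementation.

import threading

RETCODE_OK, RETCODE_TIMEOUT = 0, 10

class History:
    def __init__(self, max_samples):
        self._samples = []
        self._max = max_samples
        self._cv = threading.Condition()

    def store(self, sample, max_blocking_time):
        with self._cv:
            # Wait for space, but no longer than max_blocking_time.
            ok = self._cv.wait_for(lambda: len(self._samples) < self._max,
                                   timeout=max_blocking_time)
            if not ok:
                return RETCODE_TIMEOUT
            self._samples.append(sample)
            return RETCODE_OK

    def acknowledge(self):
        # Space freed (e.g. sample acknowledged by reliable readers).
        with self._cv:
            self._samples.pop(0)
            self._cv.notify_all()
```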
Actions taken:
March 1, 2005: received issue
August 1, 2005: closed issue
Issue 8419: (R#127) Improve PSM mapping of BuiltinTopicKey_t (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The IDL PSM defines the type BuiltinTopicKey_t to be an array of element type BUILTIN_TOPIC_KEY_TYPE_NATIVE. This definition prevents some compilers from permitting shallow copies of instances of this type.
Proposed Resolution:
Redefine BuiltinTopicKey_t to be a structure containing an array rather than the array itself.
Proposed Revised Text:
In 2.2.3, change this:
typedef BUILTIN_TOPIC_KEY_TYPE_NATIVE BuiltinTopicKey_t[3];
to this:
struct BuiltinTopicKey_t {
BUILTIN_TOPIC_KEY_TYPE_NATIVE value[3];
};
Resolution:
Redefine BuiltinTopicKey_t to be a structure containing an array rather than the array itself.
Revised Text:
Section 2.2.3 DCPS PSM : IDL change this:
typedef BUILTIN_TOPIC_KEY_TYPE_NATIVE BuiltinTopicKey_t[3];
…to this:
struct BuiltinTopicKey_t {
BUILTIN_TOPIC_KEY_TYPE_NATIVE value[3];
};
Actions taken:
March 1, 2005: received issue
August 1, 2005: closed issue
Issue 8420: Unspecified behavior of DataReader/DataWriter creation w/t mismatched Topic (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The specification does not currently state whether it is permissible to create a DataReader or DataWriter with a TopicDescription that was created from a DomainParticipant other than that used to create the reader or writer's factory.
Proposed Resolution:
The use case in question is not allowed; create_datareader and create_datawriter should return nil in that case.
Proposed Revised Text:
The following paragraph should be appended to section 2.1.2.4.1.5 create_datawriter:
The Topic passed to this operation must have been created from the same DomainParticipant that was used to create this Publisher. If the Topic was created from a different DomainParticipant, this operation will fail and return a nil result.
The following paragraph should be appended to section 2.1.2.5.2.5 create_datareader:
The TopicDescription passed to this operation must have been created from the same DomainParticipant that was used to create this Subscriber. If the TopicDescription was created from a different DomainParticipant, this operation will fail and return a nil result.
Resolution:
The use case in question is not allowed; create_datareader and create_datawriter should return nil in that case.
Revised Text:
The following paragraph should be appended to section 2.1.2.4.1.5 create_datawriter:
The Topic passed to this operation must have been created from the same DomainParticipant that was used to create this Publisher. If the Topic was created from a different DomainParticipant, the operation will fail and return a nil result.
The following paragraph should be appended to section 2.1.2.5.2.5 create_datareader:
The TopicDescription passed to this operation must have been created from the same DomainParticipant that was used to create this Subscriber. If the TopicDescription was created from a different DomainParticipant, the operation will fail and return a nil result.
Actions taken:
March 1, 2005: received issue
August 1, 2005: closed issue
Issue 8421: (R#130) Unspecified behavior of delete_datareader with outstanding loans (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The specification does not state what should occur if the user attempts to delete a DataReader when it has one or more outstanding loans as a result of a call to DataReader::read, DataReader::take, or a variant thereof.
Proposed Resolution:
State that Subscriber::delete_datareader should fail and return PRECONDITION_NOT_MET in that case.
Proposed Revised Text:
In section 2.1.2.5.2.6 delete_datareader, there is a paragraph (beginning "The deletion of a DataReader is not allowed…") that describes the operation's behavior in the event that some conditions created from the reader have not been deleted. Following that paragraph a new paragraph should be added:
The deletion of a DataReader is not allowed if it has any outstanding loans as a result of a call to read, take, or one of the variants thereof. If the delete_datareader operation is called on a DataReader with one or more outstanding loans, it will return PRECONDITION_NOT_MET.
Resolution:
State that Subscriber::delete_datareader should fail and return PRECONDITION_NOT_MET in that case.
Revised Text:
In section 2.1.2.5.2.6 delete_datareader, there is a paragraph (beginning "The deletion of a DataReader is not allowed…") that describes the operation's behavior in the event that some conditions created from the reader have not been deleted. Following that paragraph a new paragraph should be added:
The deletion of a DataReader is not allowed if it has any outstanding loans as a result of a call to read, take, or one of the variants thereof. If the delete_datareader operation is called on a DataReader with one or more outstanding loans, it will return PRECONDITION_NOT_MET.
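The outstanding-loan guard can be sketched as simple bookkeeping: take/read increment a loan count, return_loan decrements it, and deletion is refused while the count is nonzero. The Python classes below are illustrative mocks; only the return-code name comes from the specification.

```python
# Sketch of the outstanding-loan precondition on delete_datareader.
# Classes and fields are hypothetical mocks.

RETCODE_OK, RETCODE_PRECONDITION_NOT_MET = 0, 4

class DataReader:
    def __init__(self):
        self.outstanding_loans = 0

    def take(self):
        self.outstanding_loans += 1   # samples loaned to the application
        return ["sample"]

    def return_loan(self, samples):
        self.outstanding_loans -= 1
        return RETCODE_OK

class Subscriber:
    def delete_datareader(self, reader):
        if reader.outstanding_loans > 0:
            return RETCODE_PRECONDITION_NOT_MET
        return RETCODE_OK
```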
Actions taken:
March 1, 2005: received issue
August 1, 2005: closed issue
Issue 8422: (R#131) Clarify behavior of get_status_changes (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The specification does not make clear whether the set of status kinds returned by Entity::get_status_changes when that operation is invoked on a factory Entity (such as a Publisher) should include the changed statuses of the Entities created from that factory (such as a DataWriter).
Proposed Resolution:
Clarify that the set of status kinds will only contain the statuses that have changed on the Entity on which get_status_changes is invoked and not that Entity's contained Entities.
Proposed Revised Text:
Append the following sentence to section 2.1.2.1.1.6: "A 'triggered' status on an Entity does not imply that that status is triggered on the Entity's factory."
Resolution:
Clarify that the set of status kinds will only contain the statuses that have changed on the Entity on which get_status_changes is invoked and not that Entity's contained Entities.
Revised Text:
Append the following sentence to section 2.1.2.1.1.6 "get_status_changes":
The list of statuses returned by the get_status_changes operation refers to the statuses that are triggered on the Entity itself and does not include statuses that apply to contained entities.
Actions taken:
March 1, 2005: received issue
August 1, 2005: closed issue
Issue 8423: Incorrect reference to LIVELINESS_CHANGED in DataWriter::unregister (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: In the description of DataWriter::unregister in section 2.1.2.4.2.7 it says that if an instance is unregistered via a call to DataWriter::unregister, a matched DataReader will get an indication that its LIVELINESS_CHANGED status has changed.
However, unregister refers to an instance; the LIVELINESS_CHANGED status is based on the liveliness of a DataWriter, not an instance.
Proposed Resolution:
Instead the specification should state that the DataReader will receive a sample with a NOT_ALIVE_NO_WRITERS instance state.
Proposed Revised Text:
The sentence:
DataReader objects that are reading the instance will eventually get an indication that their LIVELINESS_CHANGED status (as defined in Section 2.1.4.1) has changed.
…should be rewritten:
DataReader objects that are reading the instance will eventually receive a sample with a NOT_ALIVE_NO_WRITERS instance state if no other DataWriter objects are writing the instance.
Resolution:
Instead the specification should state that the DataReader will receive a sample with a NOT_ALIVE_NO_WRITERS instance state.
Revised Text:
Section 2.1.2.4.2.7 "unregister_instance": replace the sentence:
DataReader objects that are reading the instance will eventually get an indication that their LIVELINESS_CHANGED status (as defined in Section 2.1.4.1) has changed.
…with:
DataReader entities that are reading the instance will eventually receive a sample with a NOT_ALIVE_NO_WRITERS instance state if no other DataWriter entities are writing the instance.
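The instance-state transition described by the revised sentence can be modeled directly: an instance stays ALIVE while any writer has it registered, and becomes NOT_ALIVE_NO_WRITERS when the last writer unregisters it. The Python class below is an illustrative model, not a real DDS implementation.

```python
# Sketch of the NOT_ALIVE_NO_WRITERS transition on unregister.
# Instance class and field names are hypothetical.

ALIVE, NOT_ALIVE_NO_WRITERS = "ALIVE", "NOT_ALIVE_NO_WRITERS"

class Instance:
    def __init__(self):
        self.writers = set()
        self.state = ALIVE

    def register(self, writer):
        self.writers.add(writer)
        self.state = ALIVE

    def unregister(self, writer):
        self.writers.discard(writer)
        if not self.writers:
            # No remaining writers: readers eventually observe this state.
            self.state = NOT_ALIVE_NO_WRITERS
```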
Actions taken:
March 1, 2005: received issue
August 1, 2005: closed issue
Issue 8424: (R#135) Add fields to PublicationMatchStatus and SubscriptionMatchStatus (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: There are two limitations to the PublicationMatchStatus and SubscriptionMatchStatus that prevent them from being used to detect the loss of a match:
· The specification does not indicate whether those statuses are considered to have changed when a match is lost (e.g. as a result of a loss of liveliness or an incompatible QoS change).
· The status structures contain fields that indicate the total number of matches that have ever occurred, but they lack fields to indicate the number of current matches.
Proposed Resolution:
Two fields should be added to each status structure: current_count and current_count_change. The specification should be updated to state that the publication and subscription match statuses are considered to have changed both when a match is established and when it is lost.
Proposed Revised Text:
Update the table in 2.1.4.1 as follows:
DataReader SUBSCRIPTION_MATCH_STATUS The DataReader has found a DataWriter that matches the Topic and has compatible QoS or has stopped communicating with a DataWriter that was previously considered to have matched.
DataWriter PUBLICATION_MATCH_STATUS The DataWriter has found a DataReader that matches the Topic and has compatible QoS or has stopped communicating with a DataReader that was previously considered to have matched.
Update PublicationMatchStatus and SubscriptionMatchStatus in figure 2-13 to add the following attributes to each:
current_count : long
current_count_change : long
Update the PublicationMatchStatus section of the table on page 2-119 with the following rows:
current_count The number of DataReaders currently matched to the concerned DataWriter.
current_count_change The change in current_count since the last time the listener was called or the status was read.
Update the SubscriptionMatchStatus section of the table on page 2-119 with the following rows:
current_count The number of DataWriters currently matched to the concerned DataReader.
current_count_change The change in current_count since the last time the listener was called or the status was read.
Modify the declarations of the PublicationMatchStatus and SubscriptionMatchStatus structures in the IDL PSM in 2.2.3 as follows:
struct PublicationMatchStatus {
long total_count;
long total_count_change;
long current_count;
long current_count_change;
InstanceHandle_t last_subscription_handle;
};
struct SubscriptionMatchStatus {
long total_count;
long total_count_change;
long current_count;
long current_count_change;
InstanceHandle_t last_publication_handle;
};
Resolution:
Two fields should be added to each status structure: current_count and current_count_change. The specification should be updated to state that the publication and subscription match statuses are considered to have changed both when a match is established and when it is lost.
Revised Text:
Update the table in 2.1.4.1 as follows:
DataReader SUBSCRIPTION_MATCH The DataReader has found a DataWriter that matches the Topic and has compatible QoS, or has ceased to be matched with a DataWriter that was previously considered to be matched.
DataWriter PUBLICATION_MATCH The DataWriter has found a DataReader that matches the Topic and has compatible QoS, or has ceased to be matched with a DataReader that was previously considered to be matched.
Figure 2-13: update PublicationMatchStatus and SubscriptionMatchStatus to add the following attributes to each:
current_count : long
current_count_change : long
Add the following rows to the PublicationMatchStatus section of the table on page 2-119:
current_count The number of DataReaders currently matched to the concerned DataWriter.
current_count_change The change in current_count since the last time the listener was called or the status was read.
Add the following rows to the SubscriptionMatchStatus section of the table on page 2-119:
current_count The number of DataWriters currently matched to the concerned DataReader.
current_count_change The change in current_count since the last time the listener was called or the status was read.
Section 2.2.3 DCPS PSM : IDL:
Modify the definition of the PublicationMatchStatus and SubscriptionMatchStatus structures. New structures follow:
struct PublicationMatchStatus {
long total_count;
long total_count_change;
long current_count;
long current_count_change;
InstanceHandle_t last_subscription_handle;
};
struct SubscriptionMatchStatus {
long total_count;
long total_count_change;
long current_count;
long current_count_change;
InstanceHandle_t last_publication_handle;
};
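The bookkeeping implied by the new fields can be sketched as follows: total_count only grows, current_count tracks live matches, and the *_change deltas reset each time the listener is called or the status is read. The Python class is an illustrative model of the PublicationMatchStatus semantics, not a real DDS binding (last_subscription_handle is omitted for brevity).

```python
# Sketch of current_count / current_count_change bookkeeping as matches
# are gained and lost. Illustrative model only.

class PublicationMatchedStatus:
    def __init__(self):
        self.total_count = 0
        self.total_count_change = 0
        self.current_count = 0
        self.current_count_change = 0

    def on_match_gained(self):
        self.total_count += 1
        self.total_count_change += 1
        self.current_count += 1
        self.current_count_change += 1

    def on_match_lost(self):
        # total_count never decreases; current_count tracks live matches.
        self.current_count -= 1
        self.current_count_change -= 1

    def read(self):
        # Reading the status resets the *_change deltas.
        snap = (self.total_count, self.total_count_change,
                self.current_count, self.current_count_change)
        self.total_count_change = 0
        self.current_count_change = 0
        return snap
```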
Actions taken:
March 1, 2005: received issue
August 1, 2005: closed issue
Issue 8425: (R#138) Add instance handle to LivelinessChangedStatus (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: It would be useful to have a field in LivelinessChangedStatus that provides the instance handle for the last DataWriter for which there was a change in liveliness.
Proposed Resolution:
Add a field last_publication_handle to LivelinessChangedStatus.
Proposed Revised Text:
Add an attribute "last_publication_handle : InstanceHandle_t" to LivelinessChangedStatus in figure 2-13.
Add a row to the LivelinessChangedStatus section of the table on page 2-118:
last_publication_handle Handle to the last DataWriter whose change in liveliness caused this status to change.
Revise the definition of LivelinessChangedStatus in the IDL PSM in 2.2.3:
struct LivelinessChangedStatus {
InstanceHandle_t last_publication_handle;
};
Resolution:
Add a field last_publication_handle to LivelinessChangedStatus.
Revised Text:
Figure 2-13: Add an attribute "last_publication_handle : InstanceHandle_t" to LivelinessChangedStatus.
Add a row to the LivelinessChangedStatus section of the table on page 2-118:
last_publication_handle Handle to the last DataWriter whose change in liveliness caused this status to change.
Section 2.2.3 DCPS PSM : IDL:
Modify the definition of LivelinessChangedStatus by adding the last_publication_handle field. The new structure follows:
struct LivelinessChangedStatus {
long active_count;
long inactive_count;
long active_count_change;
long inactive_count_change;
InstanceHandle_t last_publication_handle;
};
Actions taken:
March 1, 2005: received issue
August 1, 2005: closed issue
Issue 8426: (R#139) Rename *MatchStatus to *MatchedStatus (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: Most statuses (and the callbacks corresponding to them) have names ending in a past tense verb (e.g. LivelinessLost, LivelinessChanged, *DeadlineMissed, etc.). This convention makes the names very understandable because they refer to an actual thing that happened.
The publication and subscription match statuses/callbacks violate this convention, however. They are named after the match itself, not the event of matching.
Proposed Resolution:
To make the match statuses/callbacks consistent, they should be called PublicationMatchedStatus (on_publication_matched) and SubscriptionMatchedStatus (on_subscription_matched).
Proposed Revised Text:
Replace "PublicationMatchStatus" with "PublicationMatchedStatus," "on_publication_match" with "on_publication_matched," "SubscriptionMatchStatus" with "SubscriptionMatchedStatus," and "on_subscription_match" with "on_subscription_matched" in the DomainParticipantListener table in 2.1.2.2.3.
Perform the same substitutions in the DataWriter table in 2.1.2.4.2 and in the DataWriterListener table in 2.1.2.4.4.
Perform the same substitutions in the DataReader table in 2.1.2.5.3 and in the DataReaderListener table in 2.1.2.5.7.
Rename "PublicationMatchStatus" to "PublicationMatchedStatus" and "SubscriptionMatchStatus" to "SubscriptionMatchedStatus" in figure 2-13, in the immediately following table of statuses, and in the IDL PSM definitions of the types PublicationMatchStatus, SubscriptionMatchStatus, DataWriterListener, DataReaderListener, DataWriter, and DataReader.
Resolution:
To make the match statuses/callbacks consistent, they should be called PublicationMatchedStatus (on_publication_matched) and SubscriptionMatchedStatus (on_subscription_matched).
Revised Text:
Apply the following changes to DomainParticipantListener table in Section 2.1.2.2.3, DataWriter table in 2.1.2.4.2, DataWriterListener table in 2.1.2.4.4, DataReader table in 2.1.2.5.3 and DataReaderListener table in 2.1.2.5.7
Replace "PublicationMatchStatus" with "PublicationMatchedStatus," "on_publication_match" with "on_publication_matched," "SubscriptionMatchStatus" with "SubscriptionMatchedStatus," and "on_subscription_match" with "on_subscription_matched" in the DomainParticipantListener table in 2.1.2.2.3.
Figure 2-13
Rename "PublicationMatchStatus" to "PublicationMatchedStatus" and "SubscriptionMatchStatus" to "SubscriptionMatchedStatus"
Section 2.1.4.1 "Communication Status"
Apply the following change to the status table that follows Figure 2-13
Rename "PublicationMatchStatus" to "PublicationMatchedStatus" and "SubscriptionMatchStatus" to "SubscriptionMatchedStatus"
Section 2.2.3 DCPS PSM : IDL:
Rename "PublicationMatchStatus" to "PublicationMatchedStatus" and "SubscriptionMatchStatus" to "SubscriptionMatchedStatus". This affects the definitions of PublicationMatchStatus, SubscriptionMatchStatus, DataWriterListener, DataReaderListener, DataWriter, and DataReader.
Actions taken:
March 1, 2005: received issue
August 1, 2005: closed issue
Issue 8427: (R#142) OWNERSHIP QoS policy should concern DataWriter and DataReader (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The OWNERSHIP QoS policy only concerns the Topic Entity. It is the only such policy; all other Topic QoS policies also concern the DataReader and DataWriter, which may override the value provided by the Topic.
The OWNERSHIP QoS policy is also missing from the PublicationBuiltinTopicData and SubscriptionBuiltinTopicData structures.
Proposed Resolution:
The OWNERSHIP QoS policy should concern the Topic, DataReader, and DataWriter Entities. It should have requested vs. offered (RxO) semantics: the two sides must agree on its value.
A field of type OwnershipQosPolicy should be added to the PublicationBuiltinTopicData and SubscriptionBuiltinTopicData structures.
Proposed Revised Text:
Change the "Concerns" column of the OWNERSHIP row of the table on page 2-94 to read "Topic, DataReader, DataWriter."
The second paragraph of section 2.1.3.8 OWNERSHIP begins "This QoS policy only applies to Topic and not to DataReader or DataWriter…" This paragraph should be removed.
Add the following rows to the built-in topic table on page 2-131:
DCPSPublication ownership OwnershipQosPolicy Policy of the corresponding DataWriter
DCPSSubscription ownership OwnershipQosPolicy Policy of the corresponding DataReader
Modify the definitions of the DataWriterQos and DataReaderQos structures in the IDL PSM in 2.2.3:
struct DataWriterQos {
OwnershipQosPolicy ownership;
};
struct DataReaderQos {
OwnershipQosPolicy ownership;
};
Resolution:
The OWNERSHIP QoS policy should concern the Topic, DataReader, and DataWriter Entities. It should have requested vs. offered (RxO) semantics: the two sides must agree on its value.
A field of type OwnershipQosPolicy should be added to the PublicationBuiltinTopicData and SubscriptionBuiltinTopicData structures.
Revised Text:
Section 2.1.3: QoS Table
Change the "Concerns" column of the OWNERSHIP row of the table to read "Topic, DataReader, DataWriter."
The second paragraph of section 2.1.3.8 OWNERSHIP should be removed:
This QoS policy only applies to Topic and not to DataReader or DataWriter. The reason for this is that it would make no sense for a DataReader or a DataWriter to override the setting in the Topic.
Section 2.1.5: Add the following row to the built-in topic table on page 2-131, in the DCPSPublication section, above the row that describes destination_order:
DCPSPublication ownership OwnershipQosPolicy Policy of the corresponding DataWriter
Section 2.1.5: Add the following row to the built-in topic table on page 2-131, in the DCPSSubscription section, above the row that describes destination_order:
DCPSSubscription ownership OwnershipQosPolicy Policy of the corresponding DataReader
Section 2.2.3 DCPS PSM : IDL:
o Modify the definitions of the DataWriterQos and DataReaderQos structures to add the OwnershipQosPolicy ownership field (the new field is shown in context):
struct DataWriterQos {
…
UserDataQosPolicy user_data;
OwnershipQosPolicy ownership;
OwnershipStrengthQosPolicy ownership_strength;
…
};
struct DataReaderQos {
…
UserDataQosPolicy user_data;
OwnershipQosPolicy ownership;
TimeBasedFilterQosPolicy time_based_filter;
…
};
o Modify the definitions of the PublicationBuiltinTopicData and SubscriptionBuiltinTopicData structures to add the OwnershipQosPolicy ownership field (the new field is shown in context):
struct PublicationBuiltinTopicData {
…
UserDataQosPolicy user_data;
OwnershipQosPolicy ownership;
OwnershipStrengthQosPolicy ownership_strength;
…
};
struct SubscriptionBuiltinTopicData {
…
ReliabilityQosPolicy reliability;
OwnershipQosPolicy ownership;
DestinationOrderQosPolicy destination_order;
…
};
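The requested-vs-offered (RxO) rule this resolution gives OWNERSHIP can be sketched as follows (illustrative Python; the function and enum names are stand-ins, not from any vendor API). Unlike policies such as RELIABILITY, where a weaker request is compatible with a stronger offer, OWNERSHIP requires the two sides to agree exactly:

```python
from enum import Enum

class OwnershipQosPolicyKind(Enum):
    SHARED = 0
    EXCLUSIVE = 1

def ownership_compatible(offered: OwnershipQosPolicyKind,
                         requested: OwnershipQosPolicyKind) -> bool:
    # RxO with exact agreement: the DataWriter's offered kind and the
    # DataReader's requested kind must be identical, or they do not match.
    return offered is requested
```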
Actions taken:
March 1, 2005: received issue
August 1, 2005: closed issue
Issue 8428: (R#145,146) Inconsistent description of Topic module in PIM (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: Several members in the Topic module are described as attributes in the UML diagram in 2.1.2.3 but as operations in the following tables and in the IDL PSM. These include:
· TopicDescription::type_name
· TopicDescription::name
· ContentFilteredTopic::filter_expression
· ContentFilteredTopic::expression_parameters
· MultiTopic::subscription_expression
· MultiTopic::expression_parameters
Also, the topic name and type name members are needlessly repeated in the tables of all of the TopicDescription subclasses. They are non-abstract; they need only appear in the TopicDescription table.
Proposed Resolution:
The read-only attributes should appear as such in the PIM tables. "Attributes" that can be changed return ReturnCode_t from the corresponding "set" methods; for clarity, they should consistently appear as operations in both the tables and the UML diagram. The duplicate descriptions of the topic name and type name attributes should be removed.
The IDL PSM should continue to express all of the members as methods to preserve the consistency of the naming conventions used in all programming languages that may be generated from the IDL.
Proposed Revised Text:
In figure 2-7, replace the ContentFilteredTopic attribute expression_parameters with two operations: get_expression_parameters and set_expression_parameters. Replace the MultiTopic attribute expression_parameters with two operations: get_expression_parameters and set_expression_parameters.
Revise the TopicDescription Class table in 2.1.2.3.1 as follows:
TopicDescription
attributes
readonly name string
readonly type_name string
operations
get_participant DomainParticipant
Rewrite section 2.1.2.3.1.2 as follows:
2.1.2.3.1.2 type_name
The type name used to create the TopicDescription.
Rewrite section 2.1.2.3.1.3 as follows:
2.1.2.3.1.3 name
The name used to create the TopicDescription.
Remove the get_type_name and get_name operations from the Topic Class table in 2.1.2.3.2.
Remove the get_type_name, get_name, and get_filter_expression operations from the ContentFilteredTopic Class table in 2.1.2.3.3. Add the following attributes to that table:
attributes
readonly filter_expression string
Rewrite section 2.1.2.3.3.2 as follows:
2.1.2.3.3.2 filter_expression
The filter_expression associated with the ContentFilteredTopic. That is, the expression specified when the ContentFilteredTopic was created.
Remove the get_type_name, get_name, and get_subscription_expression operations from the MultiTopic Class table in 2.1.2.3.4. Add the following attributes to that table:
attributes
readonly subscription_expression string
Rewrite section 2.1.2.3.4.1 as follows:
2.1.2.3.4.1 subscription_expression
The subscription_expression associated with the MultiTopic. That is, the expression specified when the MultiTopic was created.
Resolution:
The read-only attributes should appear as such in the PIM tables. "Attributes" that can be changed return ReturnCode_t from the corresponding "set" methods; for clarity, they should consistently appear as operations in both the tables and the UML diagram. The duplicate descriptions of the topic name and type name attributes should be removed.
The IDL PSM should continue to express all of the members as methods to preserve the consistency of the naming conventions used in all programming languages that may be generated from the IDL.
Revised Text:
Figure 2-7,
o replace the ContentFilteredTopic attribute expression_parameters with two operations: get_expression_parameters and set_expression_parameters.
o Replace the MultiTopic attribute expression_parameters with two operations: get_expression_parameters and set_expression_parameters.
Section in 2.1.2.3.1 TopicDescription Class table.
o Replace table:
TopicDescription
attributes
readonly: name string
readonly: type_name string
operations
get_participant DomainParticipant
get_type_name string
get_name string
o With:
TopicDescription
attributes
readonly name string
readonly type_name string
operations
get_participant DomainParticipant
Section 2.1.2.3.1.2 "get_type_name"
o Replace
2.1.2.3.1.2 get_type_name
This operation returns the type_name used to create the TopicDescription.
o With:
2.1.2.3.1.2 type_name
The type_name used to create the TopicDescription.
Section 2.1.2.3.1.3 "get_name"
o Replace
2.1.2.3.1.3 get_name
This operation returns the name used to create the TopicDescription.
o With
2.1.2.3.1.3 name
The name used to create the TopicDescription.
Section 2.1.2.3.2 Topic Class table
Remove the get_type_name and get_name operations.
Section 2.1.2.3.3 ContentFilteredTopic Class table
Remove the get_type_name, get_name, and get_filter_expression operations
Section 2.1.2.3.3 ContentFilteredTopic Class table
Add the following attributes to the table:
attributes
readonly filter_expression string
Section 2.1.2.3.3.2 get_filter_expression
o Replace
2.1.2.3.3.2 get_filter_expression
This operation returns the filter_expression associated with the ContentFilteredTopic. That is, the expression specified when the ContentFilteredTopic was created.
o With
2.1.2.3.3.2 filter_expression
The filter_expression associated with the ContentFilteredTopic. That is, the expression specified when the ContentFilteredTopic was created.
Section 2.1.2.3.4 MultiTopic Class table
Remove the get_type_name, get_name, and get_subscription_expression operations from the MultiTopic Class table in 2.1.2.3.4.
Section 2.1.2.3.4 MultiTopic Class table
Add the following attributes to the table:
attributes
readonly subscription_expression string
Section 2.1.2.3.4.1 get_subscription_expression
o Replace
2.1.2.3.4.1 get_subscription_expression
This operation returns the subscription_expression associated with the MultiTopic. That is, the expression specified when the MultiTopic was created.
o With
2.1.2.3.4.1 subscription_expression
The subscription_expression associated with the MultiTopic. That is, the expression specified when the MultiTopic was created.
Actions taken:
March 1, 2005: received issue
August 1, 2005: closed issue
Issue 8429: (R#147) Inconsistent error code list in description of TypeSupport::register_type (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The description of register_type in 2.1.2.3.6.1 first says that the operation may return PRECONDITION_NOT_MET but later says that the only "special" error code that may be returned is OUT_OF_RESOURCES.
Proposed Resolution:
The operation should be able to return either PRECONDITION_NOT_MET or OUT_OF_RESOURCES.
Proposed Revised Text:
The last sentence of section 2.1.2.3.6.1 should read "Possible error codes returned in addition to the standard ones: PRECONDITION_NOT_MET and OUT_OF_RESOURCES."
Resolution:
The operation should be able to return either PRECONDITION_NOT_MET or OUT_OF_RESOURCES.
Revised Text:
Change the last sentence of section 2.1.2.3.6.1 from:
Possible error codes returned in addition to the standard ones: OUT_OF_RESOURCES.
To:
Possible error codes returned in addition to the standard ones: PRECONDITION_NOT_MET and OUT_OF_RESOURCES.
Actions taken:
March 1, 2005: received issue
August 1, 2005: closed issue
Issue 8430: (R#152) Extraneous WaitSet::wakeup (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The operation WaitSet::wakeup is listed in the UML diagrams in 2.1.2.1 and 2.1.4.4. This operation is not listed in the WaitSet table in 2.1.2.1.6.
Proposed Resolution:
The GuardCondition class already provides a mechanism for manually waking up a WaitSet. The wakeup method should be struck from the UML diagrams noted above.
Proposed Revised Text:
Remove the wakeup operation from the WaitSet class in figure 2-5 and in figure 2-18.
Resolution:
The GuardCondition class already provides a mechanism for manually waking up a WaitSet. The wakeup method should be struck from the UML diagrams noted above.
Revised Text:
Remove the wakeup operation from the WaitSet class in figure 2-5 and in figure 2-18.
Actions taken:
March 1, 2005: received issue
August 1, 2005: closed issue
Issue 8431: (R#153) Ambiguous SampleRejectedStatus::last_reason field (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The value of the SampleRejectedStatus::last_reason field is undefined in the case where the user calls get_sample_rejected_status when no samples have been rejected.
Proposed Resolution:
Introduce a new SampleRejectedStatusKind value NOT_REJECTED and stipulate that it is to be used in the case described above.
Proposed Revised Text:
Add the following sentence to the description of SampleRejectedStatus::last_reason in the table on page 2-118: "If no samples have been rejected, the reason is the special value NOT_REJECTED."
Modify the definition of SampleRejectedStatusKind in 2.2.3 to add a constant NOT_REJECTED.
Resolution:
Introduce a new SampleRejectedStatusKind value NOT_REJECTED and stipulate that it is to be used in the case described above.
Revised Text:
Add the following sentence to the description of SampleRejectedStatus::last_reason in the table on page 2-118:
"If no samples have been rejected, the reason is the special value NOT_REJECTED."
Section 2.2.3 DCPS PSM : IDL: In the enum SampleRejectedStatusKind, add the constant NOT_REJECTED. The new definition of the enumeration is:
enum SampleRejectedStatusKind {
NOT_REJECTED,
REJECTED_BY_INSTANCE_LIMIT,
REJECTED_BY_SAMPLES_LIMIT,
REJECTED_BY_SAMPLES_PER_INSTANCE_LIMIT
};
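A minimal model of the resolved behavior (illustrative Python; class and method names are stand-ins, not a vendor API): before any sample has been rejected, get_sample_rejected_status reports the new NOT_REJECTED kind instead of leaving last_reason undefined.

```python
from dataclasses import dataclass, field
from enum import Enum

class SampleRejectedStatusKind(Enum):
    NOT_REJECTED = 0
    REJECTED_BY_INSTANCE_LIMIT = 1
    REJECTED_BY_SAMPLES_LIMIT = 2
    REJECTED_BY_SAMPLES_PER_INSTANCE_LIMIT = 3

@dataclass
class SampleRejectedStatus:
    total_count: int = 0
    total_count_change: int = 0
    # Defaults to NOT_REJECTED so the field is well-defined from the start.
    last_reason: SampleRejectedStatusKind = SampleRejectedStatusKind.NOT_REJECTED
    last_instance_handle: int = 0  # stand-in for InstanceHandle_t

class DataReaderModel:
    """Hypothetical reader-side bookkeeping for the rejected-sample status."""

    def __init__(self):
        self._status = SampleRejectedStatus()

    def reject_sample(self, reason: SampleRejectedStatusKind,
                      instance_handle: int) -> None:
        s = self._status
        s.total_count += 1
        s.total_count_change += 1
        s.last_reason = reason
        s.last_instance_handle = instance_handle

    def get_sample_rejected_status(self) -> SampleRejectedStatus:
        return self._status
```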
Actions taken:
March 1, 2005: received issue
August 1, 2005: closed issue
Issue 8432: (R#154) Undefined behavior if resume_publications is never called (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The specification fails to state what should happen to publications suspended with Publisher::suspend_publications if Publisher::resume_publications is never called by the time the Publisher is deleted.
Proposed Resolution:
The Publisher may be deleted in the situation described. Any samples that have not yet been sent will be discarded.
Proposed Revised Text:
Add the following sentence to the last paragraph of section 2.1.2.4.1.8: "If the Publisher is deleted before resume_publications is called, any suspended updates yet to be published will be discarded."
Resolution:
The Publisher may be deleted in the situation described. Any samples that have not yet been sent will be discarded.
Revised Text:
Section 2.1.2.4.1.8, last paragraph, starting "The use of this operation must be matched by a corresponding call to resume_publications indicating …": append the following sentence to the end of the paragraph.
If the Publisher is deleted before resume_publications is called, any suspended updates yet to be published will be discarded.
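The clarified lifecycle can be modeled as follows (illustrative Python, not vendor code): writes made while a Publisher is suspended are held back; resume_publications flushes them, but deleting the Publisher first discards them.

```python
class Publisher:
    """Toy model of suspend/resume semantics; names mirror the spec's
    operations, but the implementation is purely illustrative."""

    def __init__(self):
        self._suspended = False
        self._pending = []    # updates held while suspended
        self.delivered = []   # updates actually published

    def suspend_publications(self):
        self._suspended = True

    def resume_publications(self):
        # Flush everything held back since suspend_publications.
        self._suspended = False
        self.delivered.extend(self._pending)
        self._pending.clear()

    def write(self, sample):
        if self._suspended:
            self._pending.append(sample)
        else:
            self.delivered.append(sample)

    def delete(self):
        # Per the resolution: suspended updates not yet published
        # are discarded when the Publisher is deleted.
        self._pending.clear()
```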
Actions taken:
March 1, 2005: received issue
August 1, 2005: closed issue
Issue 8531: DTD Error (mainTopic) (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary: The specification states that the mainTopic tag in the classMapping XML element is mandatory. However, the example provided afterward does not contain that item, showing that it is actually not mandatory.
Resolution:
Change the status of that item in the DTD, to make it optional.
Revised Text:
Location Original Incorrect Text Corrected Text
p. 3-66 <!ELEMENT classMapping (mainTopic, extensionTopic?, (monoAttribute | multiAttribute | monoRelation | multiRelation | local))> <!ELEMENT classMapping (mainTopic?, extensionTopic?, (monoAttribute | multiAttribute | monoRelation | multiRelation | local))>
in 3.2.2.3.2.6 ClassMapping, p. 3-68 a mandatory sub-tag mainTopic an optional sub-tag mainTopic
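Under the corrected content model, a classMapping with no mainTopic child validates. The fragment below is purely illustrative: the attribute names and the monoAttribute content are placeholders, not taken from the specification.

```xml
<!-- Hypothetical DLRL mapping fragment: with mainTopic now optional,
     this classMapping needs no mainTopic child to satisfy the DTD. -->
<classMapping name="Track">
  <monoAttribute name="x"/>
</classMapping>
```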
Actions taken:
March 10, 2005: received issue
August 1, 2005: closed issue
Issue 8532: get_all_topic_names operation missing on figure 3-4 (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary: The get_all_topic_names() operation is mentioned in section 3.1.6.3.5 (ObjectHome) and in the IDL, but not in Figure 3-4.
Resolution:
Add the missing operation on the UML diagram
Revised Text:
Location Original Incorrect Text Corrected Text
Figure 3-4, p. 3-17 add a "get_all_topic_names" operation to the class "ObjectHome" (in position 4, after "get_topic_name")
Actions taken:
March 10, 2005: received issue
August 1, 2005: closed issue
Issue 8533: Naming inconsistencies (IDL PSM vs. PIM) for ObjectHome operations (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary: The parameter of the "ObjectHome::set_filter" operation is named "expression" in the PIM and "filter" in the IDL.
The ObjectHome operation to register operation is named "register_object" in the UML diagram and in the ObjectHome table, but it is named "register_created_object" in the text and in the IDL.
Resolution:
Name the parameter of the "set_filter" operation "expression" everywhere.
Name the operation to register an object "register_object" everywhere.
Revised Text:
Location Original Incorrect Text Corrected Text
in 3.1.6.3.5 ObjectHome, p. 3-27 register an object resulting from such a pre-creation (register_created_object). register an object resulting from such a pre-creation (register_object).
in 3.2.1.2.1 Generic DLRL Entities, ObjectHome, p. 3-55 local interface ObjectHome {[…] void set_filter ( in string filter) local interface ObjectHome {[…] void set_filter ( in string expression)
in 3.2.1.2.1 Generic DLRL Entities, ObjectHome and FooHome interfaces void register_created_object ( void register_object (
Actions taken:
March 10, 2005: received issue
August 1, 2005: closed issue
Issue 8534: Naming inconsistencies (IDL PSM vs. PIM) for Cache operation (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary: The parameter of "Cache::find_home_by_index" is named "registration_index" in the PIM and "index" in the IDL.
Resolution:
Name that parameter "index" everywhere.
Revised Text:
Location Original Incorrect Text Corrected Text
in 3.1.6.3.3 Cache, table of operations, line after "find_home_by_index", p. 3-21 registration_index index
Actions taken:
March 10, 2005: received issue
August 1, 2005: closed issue
Issue 8535: Bad cardinality on figure 3-4 (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary: Bad cardinality for relations Cache -> CacheListener and ObjectHome -> ObjectListener on figure 3-4.
While the PIM text and the IDL state that several CacheListeners may be attached to a Cache and several ObjectListeners may be attached to an ObjectHome, the UML diagram shows a cardinality of at most 1 for those relations
Resolution:
Correct the figure to be in accordance with the rest of the document.
Revised Text:
Location Original Incorrect Text Corrected Text
Figure 3-4, p. 3-17 "0..1" as cardinality of the relations Cache -> CacheListener and ObjectHome -> ObjectListener "*"
Actions taken:
March 10, 2005: received issue
August 1, 2005: closed issue
Issue 8536: ReadOnly exception on clone operations (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary: The IDL section states that the operations "clone" and "clone_object" on "ObjectRoot" as well as the operation "clone_foo" on the implied IDL for "Foo" may raise the exception "ReadOnlyMode", while this is not true.
Resolution:
Correct the IDL
Revised Text:
Location Original Incorrect Text Corrected Text
in 3.2.1.2 IDL Description, p. 3-52, ObjectRoot interface
Original:
ObjectReference clone ([…] raises ( ReadOnlyMode, AlreadyClonedInWriteMode);
ObjectRoot clone_object ([...] raises ( ReadOnlyMode, AlreadyClonedInWriteMode);
Corrected:
ObjectReference clone ([…] raises ( AlreadyClonedInWriteMode);
ObjectRoot clone_object ([...] raises ( AlreadyClonedInWriteMode);
in 3.2.1.2.2 Implied IDL, p. 3-59
Original:
Foo clone_foo ([...] raises ( DDS::ReadOnlyMode, DDS::AlreadyClonedInWriteMode);
Corrected:
Foo clone_foo ([...] raises ( DDS::AlreadyClonedInWriteMode);
Actions taken:
March 10, 2005: received issue
August 1, 2005: closed issue
Issue 8537: Wrong definition for FooListener (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary: An incorrect copy-paste led to wrong inheritance and methods for the FooListener interface in the implied IDL (while it is correct in the PIM). In addition, the IDL for ObjectListener should have commented out the operation that is actually defined in the derived FooListener.
FooListener is mentioned once in the PIM as FooObjectListener.
Resolution:
Fix the definition (based on ObjectListener) and name the class FooListener everywhere
Revised Text:
Location Original Incorrect Text Corrected Text
in 3.1.6.3.6 ObjectListener, p. 3-28 This interface is an abstract root, from which a typed interface will be derived for each application type. This typed interface (named FooObjectListener, … This interface is an abstract root, from which a typed interface will be derived for each application type. This typed interface (named FooListener, …
in 3.2.1.2 IDL Description, p. 3-48
Original:
local interface ObjectListener {
boolean on_object_created ( in ObjectReference ref);
boolean on_object_modified ( in ObjectReference ref, in ObjectRoot old_value);
boolean on_object_deleted ( in ObjectReference ref);
};
Corrected:
local interface ObjectListener {
boolean on_object_created ( in ObjectReference ref);
/****
* will be generated with the proper Foo type
* in the derived FooListener
* boolean on_object_modified ( in ObjectReference ref, in ObjectRoot old_value);
****/
boolean on_object_deleted ( in ObjectReference ref);
};
in 3.2.1.2.2 Implied IDL, pp. 3-59 & 3-60
Original:
local interface FooListener : DDS::SelectionListener {
void on_object_in ( in Foo the_object);
void on_object_modified ( in Foo the_object);
};
Corrected:
local interface FooListener : DDS::ObjectListener {
boolean on_object_modified ( in DDS::ObjectReference ref, in Foo old_value);
};
Actions taken:
March 10, 2005: received issue
August 1, 2005: closed issue
Issue 8538: Typo CacheUsage instead of CacheAccess (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary: In section 3.1.6.5, the specification suggests that it is sensible to create a CacheUsage per thread. This should say a CacheAccess instead.
Resolution:
Change CacheUsage to CacheAccess in the sentence.
Revised Text:
Location Original Incorrect Text Corrected Text
in 3.1.6.5 Cache Accesses Management, p. 3-45 It should be noted that, even though a sensible design is to create a CacheUsage per thread, DLRL does not enforce this rule by any means. It should be noted that, even though a sensible design is to create a CacheAccess per thread, DLRL does not enforce this rule by any means.
Actions taken:
March 10, 2005: received issue
August 1, 2005: closed issue
Issue 8539: templateDef explanation contains some mistakes (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary: In section 3.2.2.3.2.3, the templateDef is explained. The 2nd bullet presents the possible values of the pattern attribute, but in this list "Ref" is missing. Furthermore, the example uses the wrong attribute name in its 2nd attribute: it says basis="StrMap", while this should be pattern="StrMap"
Resolution:
Correct the mistakes.
Revised Text:
Location Original Incorrect Text Corrected Text
In section 3.2.2.3.2.3, p. 3-64 2nd bullet: "o pattern, that gives the collection pattern (are supported List, StrMap and IntMap);" o pattern, that gives the construct pattern. The supported constructs are: Ref, List, StrMap and IntMap.
In section 3.2.2.3.2.3, p. 3-64 Example: <templateDef name="BarStrMap" basis="StrMap" itemType="Bar"/> Example: <templateDef name="BarStrMap" pattern="StrMap" itemType="Bar"/>
Actions taken:
March 10, 2005: received issue
August 1, 2005: closed issue
Issue 8540: DlrlOid instead of DLRLOid in implied IDL (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary: In the implied IDL, "DlrlOid" is used 3 times instead of the correct "DLRLOid"
Resolution:
Use "DLRLOid" everywhere
Revised Text:
Location Original Incorrect Text Corrected Text
in 3.2.1.2.2 Implied IDL, p. 3-63 Foo create_object_with_oid( in DDS::CacheAccess access, in DDS::DlrlOid oid)[...] Foo create_object_with_oid( in DDS::CacheAccess access, in DDS::DLRLOid oid)[...]
in 3.2.1.2.2 Implied IDL, p. 3-63 Foo find_object_in_access ( in DDS::DlrlOid oid,[...] Foo find_object_in_access ( in DDS::DLRLOid oid,[...]
in 3.2.1.2.2 Implied IDL, p. 3-63 Foo find_object ( in DDS::DlrlOid oid); Foo find_object ( in DDS::DLRLOid oid);
Actions taken:
March 10, 2005: received issue
August 1, 2005: closed issue
Issue 8541: Parameter wrongly named "object" in implied IDL (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary: The "set" operation in the implied "FooRef" IDL class has a parameter named object. Since IDL identifiers may be treated both case-sensitively and case-insensitively, this may not be allowed (possible confusion with CORBA::Object).
Resolution:
Name this parameter "an_object"
Revised Text:
Location Original Incorrect Text Corrected Text
In 3.2.1.2.2 Implied IDL, p. 3-64:
Original: valuetype FooRef : DDS::RefRelation { // Ref<Foo> void set( in Foo object);
Corrected: valuetype FooRef : DDS::RefRelation { // Ref<Foo> void set( in Foo an_object);
Actions taken:
March 10, 2005: received issue
August 1, 2005: closed issue
Issue 8542: Attach_Listener and detach_listener operations on ObjectHome are untyped (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary: The ObjectListeners that need to be registered to an ObjectHome are typed (i.e. a FooListener must be attached to a FooHome), but the definition of the attach and detach methods can only be found in the generic IDL part. This way it is possible to attach a BarListener to a FooHome.
Resolution:
Move those operations from the generic IDL to the implied one.
Revised Text:
Location Original Incorrect Text Corrected Text
In 3.2.1.2.1 Generic DLRL Entities, p. 3-54, ObjectHome interface:
Original:
void attach_listener ( in ObjectListener listener, in boolean concerns_contained_objects);
void detach_listener ( in ObjectListener listener);
Corrected:
/****
* Following methods will be generated properly typed
* in the generated derived classes
*
* void attach_listener ( in ObjectListener listener, in boolean concerns_contained_objects);
* void detach_listener ( in ObjectListener listener);
****/
In 3.2.1.2.2 Implied IDL, p. 3-61:
Original:
readonly attribute FooListenerSeq listeners;
FooSelection create_selection
Corrected:
readonly attribute FooListenerSeq listeners;
void attach_listener ( in FooListener listener, in boolean concerns_contained_objects);
void detach_listener ( in FooListener listener);
FooSelection create_selection
Actions taken:
March 10, 2005: received issue
August 1, 2005: closed issue
Issue 8543: Remove operations badly put on implied classes (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary: The "remove" operations of the collection types are mentioned in the implied IDL part, while their signatures have no typed parameters.
In addition, the parameter for the get operation (key) wrongly starts with a capital letter (while all parameters are supposed to be in lower case).
Resolution:
Add those operations on the generic roots and remove them from the generated classes (implied IDL).
Correct the spelling of the "key" parameter.
Revised Text:
Location Original Incorrect Text Corrected Text
In 3.2.1.2.1 Generic DLRL Entities, p. 3-57:
Original:
abstract valuetype StrMapBase : CollectionBase { boolean which_added (out StringSeq keys); StringSeq get_all_keys ();};
abstract valuetype IntMapBase : CollectionBase { boolean which_added (out LongSeq keys); LongSeq get_all_keys ();};
Corrected:
abstract valuetype StrMapBase : CollectionBase { boolean which_added (out StringSeq keys); StringSeq get_all_keys (); void remove ( in string key);};
abstract valuetype IntMapBase : CollectionBase { boolean which_added (out LongSeq keys); LongSeq get_all_keys (); void remove ( in long key);};
In 3.2.1.2.2 Implied IDL, p. 3-62:
Original:
valuetype FooStrMap : DDS::StrMapRelation {... void put ( in string key, in Foo a_foo); Foo get ( in string Key) raises ( DDS::NotFound); void remove ( in string Key);};
valuetype FooIntMap : DDS::IntMapRelation { ... void put ( in long key, in Foo a_foo); Foo get ( in long Key) raises ( DDS::NotFound); void remove ( in long Key);};
Corrected:
valuetype FooStrMap : DDS::StrMapRelation {... void put ( in string key, in Foo a_foo); Foo get ( in string key) raises ( DDS::NotFound);};
valuetype FooIntMap : DDS::IntMapRelation { ... void put ( in long key, in Foo a_foo); Foo get ( in long key) raises ( DDS::NotFound);};
Actions taken:
March 10, 2005: received issue
August 1, 2005: closed issue
Issue 8545: Behavior of DataReaderListener::on_data_available (data-distribution-rtf)
Click here for this issue's archive.
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: It is not clearly defined whether the on_data_available notification should be generated on every arrival of new data, or just on the status change that happens when coming from a no data situation.
Resolution:
For every arrival of new data, a notification should be generated, regardless of whether the previous data has already been read before.
Introduce textual changes to 2.1.4.2.2 that describe the complete set of conditions under which the read-communication status will change. The data-available status is considered to have changed each time a new sample becomes available, or when the ViewState, SampleState, or InstanceState of any existing sample changes for any reason other than a read or take.
Specific changes that cause the status to change include:
· The arrival of new data
· The disposal of an instance
· The loss of liveliness of a writer of an instance when no other writer of that instance exists
· Unregistration of an instance by the last writer of that instance
Revised Text:
In section 2.1.4.2.2 replace sentence:
It becomes TRUE when data arrives and it is reset to FALSE when all the data is removed from the responsibility of the middleware via the take operation on the proper DataReader entities
With the paragraphs:
It becomes TRUE when a data-sample arrives, or when the ViewState, SampleState, or InstanceState of any existing sample changes for any reason other than a call to DataReader::read, DataReader::take, or their variants. The StatusChangedFlag becomes FALSE again when all the samples are removed from the responsibility of the middleware via the take operation on the proper DataReader entities.
Specific events detected by the DataReader that will cause the StatusChangedFlag to become TRUE include:
· The arrival of new data.
· The arrival of the notification that an instance has been disposed.
· The loss of liveliness of the DataWriter of an instance for which there is no other DataWriter.
· The arrival of the notification that an instance has been unregistered by the only DataWriter that is known to be writing the instance.
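The rules above can be illustrated with a minimal sketch. This is a toy model of the DATA_AVAILABLE StatusChangedFlag behavior, not DDS API; all class and method names here are hypothetical.

```python
class ReaderStatusModel:
    """Toy model of the DATA_AVAILABLE StatusChangedFlag for one DataReader."""

    def __init__(self):
        self.samples = []            # samples the middleware is responsible for
        self.status_changed = False  # the StatusChangedFlag

    def _on_state_change(self):
        # Any arrival or view/sample/instance state change (other than a
        # read/take) sets the flag, even if unread data was already present.
        self.status_changed = True

    def on_new_data(self, sample):
        self.samples.append(sample)
        self._on_state_change()

    def on_instance_disposed(self):
        self._on_state_change()

    def on_liveliness_lost_last_writer(self):
        self._on_state_change()

    def on_instance_unregistered_by_last_writer(self):
        self._on_state_change()

    def take(self):
        # take removes all samples from the middleware's responsibility,
        # which resets the StatusChangedFlag to FALSE.
        taken, self.samples = self.samples, []
        self.status_changed = False
        return taken

reader = ReaderStatusModel()
reader.on_new_data("a")
reader.on_new_data("b")        # flag stays TRUE for every arrival, read or not
assert reader.status_changed
data = reader.take()           # all samples taken: flag resets to FALSE
assert data == ["a", "b"] and not reader.status_changed
reader.on_instance_disposed()  # a dispose notification also changes the status
assert reader.status_changed
```

The point of the model is that the flag is set per event, not merely on the transition from "no data" to "data", which is exactly the clarification this issue introduces.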
Actions taken:
March 11, 2005: received issue
August 1, 2005: closed issue
Issue 8546: Inconsistent naming for status parameters in DataReader operations. (data-distribution-rtf)
Click here for this issue's archive.
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: Section 2.1.2.1.1.7 explains which Entity operations may be invoked on an entity that has not yet been enabled. However, the subsequent sections describing the behavior of operations on the specialized entities when disabled list fewer operations than those mentioned above.
Resolution:
Add the missing operations for each specialized entity to the list of operations that will never return RETCODE_NOT_ENABLED. Also state explicitly that Conditions obtained from disabled entities will never trigger, until the corresponding entities become enabled.
On page 2-13, section 2.1.2.1.1.7, it is stated that the operation that gets the StatusCondition can be invoked on a disabled entity. Add text specifying that a StatusCondition obtained this way will not trigger until the corresponding entity becomes enabled.
On page 2-21, section 2.1.2.2.1, it is mentioned that for the DomainParticipant all operations except get/set_qos, get/set_listener and enable may return a RETCODE_NOT_ENABLED. To this list should be added: get_statuscondition, all factory methods (create_topic, create_publisher, create_subscriber) and all delete methods (delete_topic, delete_publisher, delete_subscriber).
On page 2-35, section 2.1.2.3.2, it is mentioned that for the Topic all operations except get/set_qos, get/set_listener and enable may return a RETCODE_NOT_ENABLED. To this list should be added: get_statuscondition.
On page 2-42, section 2.1.2.4.1, it is mentioned that for the Publisher all operations except get/set_qos, get/set_listener and enable may return a RETCODE_NOT_ENABLED. To this list should be added: get_statuscondition, create_datawriter, delete_datawriter.
On page 2-48, section 2.1.2.4.2, it is mentioned that for the DataWriter all operations except get/set_qos, get/set_listener and enable may return a RETCODE_NOT_ENABLED. To this list should be added: get_statuscondition.
On page 2-63, section 2.1.2.5.2, it is mentioned that for the Subscriber all operations except get/set_qos, get/set_listener and enable may return a RETCODE_NOT_ENABLED. To this list should be added: get_statuscondition, create_datareader and delete_datareader.
On page 2-73, section 2.1.2.5.3, it is mentioned that for the DataReader all operations except get/set_qos, get/set_listener and enable may return a RETCODE_NOT_ENABLED. To this list should be added: get_statuscondition.
Revised Text:
Section 2.1.2.2.1 DomainParticipant class.
Replace:
All the operations except the ones defined at the base-class level (namely, set_qos, get_qos, set_listener, get_listener and enable) may return the value NOT_ENABLED.
With
The following operations may be invoked even if the DomainParticipant is not enabled. Other operations will return the value NOT_ENABLED if called on a disabled DomainParticipant:
o Operations defined at the base-class level, namely set_qos, get_qos, set_listener, get_listener and enable
o Factory methods: create_topic, create_publisher, create_subscriber, delete_topic, delete_publisher, delete_subscriber
o Operations that access the status: get_statuscondition
Section 2.1.2.3.2 Topic Class
Replace
All operations except for the base-class operations set_qos, get_qos, set_listener, get_listener, and enable may return the value NOT_ENABLED.
With
All operations except for the base-class operations set_qos, get_qos, set_listener, get_listener, enable and get_status_condition may return the value NOT_ENABLED.
Section 2.1.2.4.1 Publisher Class
Replace
All operations except for the base-class operations set_qos, get_qos, set_listener, get_listener and enable may return the value NOT_ENABLED.
With
All operations except for the base-class operations set_qos, get_qos, set_listener, get_listener, enable, get_statuscondition, create_datawriter, and delete_datawriter may return the value NOT_ENABLED.
Section 2.1.2.4.2 DataWriter Class
Replace
All operations except for the base-class operations set_qos, get_qos, set_listener, get_listener and enable may return the value NOT_ENABLED.
With
All operations except for the base-class operations set_qos, get_qos, set_listener, get_listener, enable and get_statuscondition may return the value NOT_ENABLED.
Section 2.1.2.5.2 Subscriber Class
Replace
All operations except for the base-class operations set_qos, get_qos, set_listener, get_listener and enable may return the value NOT_ENABLED.
With
All operations except for the base-class operations set_qos, get_qos, set_listener, get_listener, enable, get_statuscondition, create_datareader, and delete_datareader may return the value NOT_ENABLED.
Section 2.1.2.5.3 DataReader Class
Replace
All operations except for the base-class operations set_qos, get_qos, set_listener, get_listener and enable may return the value NOT_ENABLED.
With
All operations except for the base-class operations set_qos, get_qos, set_listener, get_listener, enable, and get_statuscondition may return the value NOT_ENABLED.
Actions taken:
March 11, 2005: received issue
August 1, 2005: closed issue
Discussion:
Issue 8547: (T#23) Syntax of partition strings (data-distribution-rtf)
Click here for this issue's archive.
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: In section 2.1.3 (Supported Qos) the table describes that specifying the PARTITION QOS by an empty sized sequence implies all partitions. However the default partition is specified to be exactly one partition with the name "". Any partition should be specified by means of wildcards. It is unclear how the default partition and wildcards can be used at publisher and subscriber side.
Resolution:
Concerning default partitions:
The default value for PartitionQosPolicy is an empty sequence of names. The empty sequence of partition names is equivalent to a single partition name, the empty string.
Concerning wildcards:
"Wildcards" refers to the regular expression language defined by the POSIX fnmatch API (1003.2-1992 section B.6). Either Publisher or Subscriber may include regular expressions in partition names, but no two names that both contain wildcards will ever be considered to match. This means that although regular expressions may be used both at publisher as well as subscriber side, the service will not try to match 2 regular expressions (between publishers and subscribers).
Change the PARTITION row of the table in 2.1.3 to state that the default value is an empty sequence, which is equivalent to a sequence containing the single element "".
Add text describing the wildcard format and its restrictions.
Revised Text:
On Section 2.1.3.12 "PARTITION" Replace the sentence:
By default, DataWriter and DataReader objects belonging to a Publisher or Subscriber that do not specify a PARTITION policy will participate in the default partition (whose name is "").
…with:
PARTITION names can be regular expressions and include wildcards as defined by the POSIX fnmatch API (1003.2-1992 section B.6). Either Publisher or Subscriber may include regular expressions in partition names, but no two names that both contain wildcards will ever be considered to match. This means that although regular expressions may be used both at publisher as well as subscriber side, the service will not try to match two regular expressions (between publishers and subscribers).
On Section 2.1.3 QoS table
In the PARTITION row of the table, replace the final sentences:
The default value is an empty (zero-sized) sequence. This is treated as a special value that matches any partition.
…with:
The default value is an empty (zero-length) sequence. This is treated as a special value that matches any partition, and is equivalent to a sequence containing a single element consisting of the empty string.
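The matching rules above can be sketched with Python's fnmatch module as a stand-in for the POSIX fnmatch API the specification cites. This is an illustrative model, not DDS API; the helper names (has_wildcards, partitions_match) are hypothetical.

```python
# Sketch of PARTITION matching: an empty sequence equals [""], and two names
# that both contain wildcards are never matched against each other.
from fnmatch import fnmatchcase

WILDCARD_CHARS = set("*?[")

def has_wildcards(name):
    return any(c in WILDCARD_CHARS for c in name)

def partitions_match(pub_names, sub_names):
    # An empty sequence is equivalent to the single empty-string partition.
    pub = pub_names or [""]
    sub = sub_names or [""]
    for p in pub:
        for s in sub:
            if has_wildcards(p) and has_wildcards(s):
                continue  # the service never matches two regular expressions
            if has_wildcards(p) and fnmatchcase(s, p):
                return True
            if has_wildcards(s) and fnmatchcase(p, s):
                return True
            if p == s:
                return True
    return False

assert partitions_match([], [])                      # both in default partition ""
assert partitions_match(["sensors.*"], ["sensors.temp"])
assert not partitions_match(["sensors.*"], ["s*"])   # wildcard vs wildcard: no match
```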
Actions taken:
March 11, 2005: received issue
August 1, 2005: closed issue
Issue 8548: Clarification of order preservation on reliable data reception (data-distribution-rtf)
Click here for this issue's archive.
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: Does reliability include order preservation up to API level?
In other words, should data be made available to applications if older data exists but has not yet arrived (e.g. due to network irregularities)? Note that if a late-arriving sample is accepted after newer samples have been made available, state inconsistencies may occur. In addition, not accepting a late-arriving sample should generate a sample-lost notification.
Resolution:
Specify that data from a single writer (reliable and/or best-effort) will NOT be made available out-of-order.
Revised Text:
In section 2.1.3.13 RELIABILITY before the last paragraph "The value offered…" insert the paragraphs:
If the RELIABILITY kind is set to RELIABLE, data-samples originating from a single DataWriter cannot be made available to the DataReader if there are previous data-samples that have not been received yet due to a communication error. In other words, the service will repair the error and re-transmit data-samples as needed in order to re-construct a correct snapshot of the DataWriter history before it is accessible by the DataReader.
If the RELIABILITY kind is set to BEST_EFFORT, the service will not re-transmit missing data-samples. However, for data-samples originating from any one DataWriter, the service will ensure they are stored in the DataReader history in the same order they originated in the DataWriter. In other words, the DataReader may miss some data-samples, but it will never see the value of a data-object change from a newer value to an older value.
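The two ordering behaviors can be sketched with per-writer sequence numbers. This is a toy model of the rules above, not DDS API; the function names are hypothetical.

```python
def deliver_best_effort(arrivals):
    """BEST_EFFORT: never re-orders; a sample older than the newest
    already-delivered sample is dropped (and would count as lost)."""
    delivered, newest = [], -1
    for seq in arrivals:
        if seq > newest:
            delivered.append(seq)
            newest = seq
    return delivered

def deliver_reliable(arrivals):
    """RELIABLE: later samples are held back until earlier ones are
    repaired, so the reader only ever sees a contiguous history."""
    delivered, pending, next_seq = [], set(), 0
    for seq in arrivals:
        pending.add(seq)
        while next_seq in pending:      # release contiguous prefix only
            pending.remove(next_seq)
            delivered.append(next_seq)
            next_seq += 1
    return delivered

arrivals = [0, 2, 3, 1, 4]              # sample 1 arrives late
assert deliver_best_effort(arrivals) == [0, 2, 3, 4]   # 1 dropped, order kept
assert deliver_reliable(arrivals) == [0, 1, 2, 3, 4]   # gap repaired first
```

In both cases the reader never observes a data-object move backwards in time, which is the guarantee this clarification pins down.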
Actions taken:
March 11, 2005: received issue
August 1, 2005: closed issue
Discussion:
Issue 8549: (T#37) Clarification on the value of LENGTH_UNLIMITED constant (data-distribution-rtf)
Click here for this issue's archive.
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: It is not clear what the value of unlimited resource limits is.
Resolution:
It is defined by the IDL constant LENGTH_UNLIMITED = -1; this should be clarified by replacing "unlimited" with "LENGTH_UNLIMITED" in the table in 2.1.3.
Revised Text:
Section 2.1.3 Supported Qos Policies. Qos Table
Replace "unlimited" with "LENGTH_UNLIMITED"
This affects the following rows:
RESOURCE_LIMITS/ max_samples
RESOURCE_LIMITS/ max_instances
RESOURCE_LIMITS/ max_samples_per_instance
Section 2.1.5 Built-in Topics table, RESOURCE_LIMITS row
Replace "unlimited" with "LENGTH_UNLIMITED"
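A one-line sketch shows how a resource-limit check can treat the constant. The value LENGTH_UNLIMITED = -1 is from the specification's IDL; the helper name within_limit is hypothetical.

```python
LENGTH_UNLIMITED = -1  # IDL-defined sentinel meaning "no limit"

def within_limit(current_count, max_limit):
    """True if one more sample/instance may be stored under this limit."""
    return max_limit == LENGTH_UNLIMITED or current_count < max_limit

assert within_limit(10_000_000, LENGTH_UNLIMITED)  # unlimited never rejects
assert within_limit(4, 5)
assert not within_limit(5, 5)
```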
Actions taken:
March 11, 2005: received issue
August 1, 2005: closed issue
Discussion:
Issue 8550: (T#38) request-offered behavior for LATENCY_BUDGET (data-distribution-rtf)
Click here for this issue's archive.
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: Paragraph 2.1.3.7 describes that the latency budget will neither prohibit connectivity nor trigger notifications when incompatible. However, it also describes the RxO compatibility rule, which will never be visible to applications. This is somewhat confusing: the description is only required because this QoS attribute is specified to be subject to the RxO pattern. Since the description states that the latency budget is a hint to the service, and that the service may apply an additional delay for optimization, are we really speaking of RxO between DataWriters and DataReaders?
Resolution:
Make latency budget truly RxO by making connectivity dependent of compatibility rules and adding appropriate error notifications.
Remove the "Therefore the Service will not fail to match…" sentence from section 2.1.3.7 and add new text that describes the RxO consequences.
Revised Text:
On section 2.1.3.7 replace the paragraph:
This policy is considered a hint. Therefore the Service will not fail to match a DataReader with a DataWriter due to incompatibility on this QoS, rather it will automatically adapt its behavior on the publishing end to meet the requirements of all subscribers. Consequently this QoS will never trigger an incompatible QoS notification,
With:
This policy is considered a hint to the service. There is no specified mechanism as to how the service should take advantage of this hint.
Actions taken:
March 11, 2005: received issue
August 1, 2005: closed issue
Issue 8551: (T#46) History when DataWriter is deleted (data-distribution-rtf)
Click here for this issue's archive.
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: The specification does not clearly describe what should happen with the data in a reliable DataWriter's history when the DataWriter is deleted. Should it disappear immediately, or remain until all messages in the history are delivered?
Resolution:
The right thing to do is provide some operation such that the user can wait for all data to be delivered:
Add an operation DataWriter::wait_for_acknowledgments(Duration_t timeout) that will block a reliable DataWriter until all data written has been acknowledged by the reliable readers. Like DataReader::wait_for_historical_data, this operation takes its own timeout rather than using ReliabilityQosPolicy::max_blocking_time because it potentially has to wait for many writes to complete. As soon as all outstanding reliable samples are acknowledged, the operation will return OK. If the timeout expires before all samples are acknowledged, however, the operation will return TIMEOUT.
The Publisher::delete_datawriter should delete the writer immediately without waiting for any reliable samples to be acknowledged. Its description should be clarified accordingly.
Revised Text:
In section 2.1.2.4.2 DataWriter table.
Add the following rows before the row that describes "get_liveliness_lost_status":
wait_for_acknowledgments ReturnCode_t
max_wait Duration_t
Add section 2.1.2.4.2.15 (the previous section 2.1.2.4.2.15, get_liveliness_lost_status, becomes 2.1.2.4.2.16):
2.1.2.4.2.15 wait_for_acknowledgments
This operation is intended to be used only if the DataWriter has RELIABILITY QoS kind set to RELIABLE. Otherwise the operation will return immediately with RETCODE_OK.
The operation wait_for_acknowledgments blocks the calling thread until either all data written by the DataWriter is acknowledged by all matched DataReader entities that have RELIABILITY QoS kind RELIABLE, or else the duration specified by the max_wait parameter elapses, whichever happens first. A return value of OK indicates that all the samples written have been acknowledged by all reliable matched data readers; a return value of TIMEOUT indicates that max_wait elapsed before all the data was acknowledged.
Section 2.2.3 DCPS PSM : IDL, Interface DataWriter add operation:
ReturnCode_t wait_for_acknowledgments(in Duration_t max_wait);
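The blocking semantics of wait_for_acknowledgments can be modeled with a condition variable. This is a minimal sketch, not the DDS API; the AckTracker class and its methods are hypothetical stand-ins for the middleware's internal acknowledgment bookkeeping.

```python
import threading

class AckTracker:
    """Toy model: blocks until every written sample has been acknowledged."""

    def __init__(self):
        self._cond = threading.Condition()
        self._unacked = 0

    def on_write(self):
        with self._cond:
            self._unacked += 1

    def on_ack(self):
        with self._cond:
            self._unacked -= 1
            if self._unacked == 0:
                self._cond.notify_all()

    def wait_for_acknowledgments(self, max_wait):
        """Return 'OK' once all samples are acknowledged within max_wait
        seconds, 'TIMEOUT' otherwise (mirroring the return codes above)."""
        with self._cond:
            ok = self._cond.wait_for(lambda: self._unacked == 0,
                                     timeout=max_wait)
            return "OK" if ok else "TIMEOUT"

tracker = AckTracker()
tracker.on_write()
assert tracker.wait_for_acknowledgments(0.05) == "TIMEOUT"  # ack still pending
threading.Timer(0.01, tracker.on_ack).start()               # ack arrives later
assert tracker.wait_for_acknowledgments(1.0) == "OK"
```

Note how the operation uses its own timeout parameter, independent of any per-write blocking time, exactly as the resolution specifies.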
Actions taken:
March 11, 2005: received issue
August 1, 2005: closed issue
Issue 8552: (T#47) Should a topic returned by lookup_topicdescription be deleted (data-distribution-rtf)
Click here for this issue's archive.
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: It is unclear if a topic found by lookup_topicdescription (section 2.1.2.2.1.13.) should also be deleted if not used anymore similar to find_topic.
Resolution:
lookup_topicdescription, unlike find_topic, should search only among the locally created topics. Therefore, it should never (at least as far as the user is concerned) create a new topic description. So looking up the topic should not require any extra deletion. (It is of course permitted to delete a topic one has looked up, provided it has no readers or writers, but then it is really deleted and subsequent lookups will fail).
Revised Text:
In section 2.1.2.2.1.12 lookup_topicdescription
add the following paragraph at the end of the section, before the last paragraph starting "If the operation fails to locate a TopicDescription a 'nil' value …":
Unlike find_topic, the operation lookup_topicdescription searches only among the locally created topics. Therefore, it should never create a new TopicDescription. The TopicDescription returned by lookup_topicdescription does not require any extra deletion. It is still possible to delete the TopicDescription returned by lookup_topicdescription, provided it has no readers or writers, but then it is really deleted and subsequent lookups will fail.
Actions taken:
March 11, 2005: received issue
August 1, 2005: closed issue
Discussion:
Issue 8553: (T#51) Identification of the writer of a sample (data-distribution-rtf)
Click here for this issue's archive.
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: It is not possible for applications to relate a sample to its DataWriter. There are many use cases where establishing such a relation is required.
Resolution:
Add an 'InstanceHandle_t publication_handle' field (the handle to the remote writer, not the data instance) to SampleInfo. The user can use this handle to call get_matched_publication_data ()
Revised Text:
In section 2.1.2.5.5 SampleInfo class.
On the SampleInfo table add the row:
publication_handle InstanceHandle_t
In section 2.1.2.5.5 SampleInfo class.
Add a bullet after the one with contents "the instance_handle that identifies locally the corresponding instance":
· the publication_handle that identifies locally the DataWriter that modified the instance.
Section 2.2.3 DCPS PSM : IDL, struct SampleInfo add field:
InstanceHandle_t publication_handle;
(add the field after the field InstanceHandle_t instance_handle;)
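How an application could use the new field can be sketched as follows. The SampleInfo field name comes from the resolution above; the lookup table stands in for get_matched_publication_data(), and everything else here is hypothetical, not DDS API.

```python
from dataclasses import dataclass

@dataclass
class SampleInfo:
    instance_handle: int      # locally identifies the data instance
    publication_handle: int   # new field: locally identifies the DataWriter

# Stand-in for discovered publication data, keyed by publication_handle,
# as would be returned by get_matched_publication_data().
matched_publications = {7: {"participant": "nodeA", "topic": "Alarm"}}

def writer_of(info):
    """Relate a received sample back to the DataWriter that produced it."""
    return matched_publications[info.publication_handle]

info = SampleInfo(instance_handle=42, publication_handle=7)
assert writer_of(info)["participant"] == "nodeA"
```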
Actions taken:
March 11, 2005: received issue
August 1, 2005: closed issue
Issue 8554: (T#53) Cannot set listener mask when creating an entity (data-distribution-rtf)
Click here for this issue's archive.
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: The Entity::set_listener method sets a listener in combination with a mask that specifies the event interest. Listeners can also be set at construction of entities by passing the listener as a parameter to the entity factory method. However, it is not possible to set a mask during construction.
Resolution:
Add an event mask parameter (listener_mask) to entity constructors.
Change the signature and description of all entity factory methods accordingly.
Revised Text:
In section 2.1.2.2.1 DomainParticipant Class. DomainParticipant table. Add the following row indicating an additional parameter to the operation create_publisher:
a_mask StatusKind []
In section 2.1.2.2.1 DomainParticipant Class. DomainParticipant table. Add the following row indicating an additional parameter to the operation create_subscriber:
a_mask StatusKind []
In section 2.1.2.2.1 DomainParticipant Class. DomainParticipant table. Add the following row indicating an additional parameter to the operation create_topic:
a_mask StatusKind []
In section 2.1.2.2.2 DomainParticipantFactory Class. DomainParticipantFactory table. Add the following row indicating an additional parameter to the operation create_participant:
a_mask StatusKind []
In section 2.1.2.4.1 Publisher Class. Publisher table. Add the following row indicating an additional parameter to the operation create_datawriter:
a_mask StatusKind []
In section 2.1.2.5.2 Subscriber Class. Subscriber table. Add the following row indicating an additional parameter to the operation create_datareader:
a_mask StatusKind []
Section 2.2.3 DCPS PSM : IDL, Interface DomainParticipant modify the following operations.
Publisher create_publisher(in PublisherQos qos, in PublisherListener a_listener);
Subscriber create_subscriber(in SubscriberQos qos, in SubscriberListener a_listener);
Topic create_topic(in string topic_name, in string type_name, in TopicQos qos, in
TopicListener a_listener);
To
Publisher create_publisher(in PublisherQos qos, in PublisherListener a_listener,
in StatusMask mask);
Subscriber create_subscriber(in SubscriberQos qos, in SubscriberListener a_listener,
in StatusMask mask);
Topic create_topic(in string topic_name, in string type_name, in TopicQos qos, in
TopicListener a_listener, in StatusMask mask);
Section 2.2.3 DCPS PSM : IDL, Interface DomainParticipantFactory modify the operation.
DomainParticipant create_participant(in DomainParticipantQos qos,
in DomainParticipantListener a_listener);
To
DomainParticipant create_participant(in DomainParticipantQos qos,
in DomainParticipantListener a_listener,
in StatusMask mask);
Section 2.2.3 DCPS PSM : IDL, Interface Publisher modify the operation.
DataWriter create_datawriter(in DataWriterQos qos, in DataWriterListener a_listener);
To
DataWriter create_datawriter(in DataWriterQos qos, in DataWriterListener a_listener,
in StatusMask mask);
Section 2.2.3 DCPS PSM : IDL, Interface Subscriber modify the operation.
DataReader create_datareader(in DataReaderQos qos, in DataReaderListener a_listener);
To
DataReader create_datareader(in DataReaderQos qos, in DataReaderListener a_listener,
in StatusMask mask);
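The effect of the new mask parameter can be illustrated with a small sketch: the mask supplied at creation determines which status changes are communicated to the listener. The class, constant values, and method names below are hypothetical, not part of the DDS API.

```python
# Hypothetical sketch: a status mask supplied at entity creation gates
# which status changes are dispatched to the listener.
# Bit values are illustrative, not taken from the specification.
DATA_AVAILABLE_STATUS = 1 << 0
LIVELINESS_CHANGED_STATUS = 1 << 1
REQUESTED_DEADLINE_MISSED_STATUS = 1 << 2

class SketchEntity:
    def __init__(self, listener, mask):
        self.listener = listener
        self.mask = mask

    def notify(self, status_kind):
        """Invoke the listener only if status_kind is enabled in the mask."""
        if self.listener is not None and (self.mask & status_kind):
            self.listener(status_kind)
            return True
        return False  # status change is not communicated via this listener

received = []
entity = SketchEntity(received.append,
                      DATA_AVAILABLE_STATUS | LIVELINESS_CHANGED_STATUS)
entity.notify(DATA_AVAILABLE_STATUS)             # dispatched to the listener
entity.notify(REQUESTED_DEADLINE_MISSED_STATUS)  # filtered out by the mask
```

With the pre-resolution API this filtering could only be configured after construction via set_listener; the added parameter makes it available at creation time.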
Actions taken:
March 11, 2005: received issue
August 1, 2005: closed issue
Issue 8555: (T#53) Cannot set listener mask when creating an entity (data-distribution-rtf)
Click here for this issue's archive.
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: The Entity::set_listener operation sets a listener together with a mask that specifies the event interest. Listeners can also be set at construction of entities by passing the listener as a parameter to the entity factory operation. However, it is not possible to set a mask during construction.
Resolution:
Add an event mask parameter (listener_mask) to entity constructors.
Revised Text:
Change signature and description of all entity factory methods.
Resolution:
Revised Text:
Actions taken:
August 1, 2005: closed issue
Discussion: Resolution:
Discard as it duplicates Issue#8554
Issue 8556: (T#59) Deletion of disabled entities (data-distribution-rtf)
Click here for this issue's archive.
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: Currently entities must be enabled before they can be deleted.
Resolution:
Specify that entities may be deleted if not enabled.
Revised Text:
Explicitly state on each class that the delete operation can also be called on disabled entities.
Resolution:
Revised Text:
Actions taken:
March 11, 2005: received issue
August 1, 2005: closed issue
Discussion: Resolution:
Specify that entities may be deleted if not enabled.
Revised Text:
In Section 2.1.2.1.1.7 enable; at the end of the paragraph:
"If an Entity has not yet been enabled, the only operations that can be invoked on it are the ones to set or get the QoS policies and the listener, the ones that get the StatusCondition, and the 'factory' operations that create other entities. Other operations will return the error NOT_ENABLED."
Add the paragraph:
It is legal to delete an Entity that has not been enabled by calling the proper operation on its factory.
Issue 8557: (T#60) Asynchronous write (data-distribution-rtf)
Click here for this issue's archive.
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: Some customers require guarantees on delivery to the network, i.e. a means to block until the service can guarantee that the data will be or is received by all recipients.
Resolution:
Add a Publisher::wait_for_acknowledgments(timeout) method that will block until all data written by all its writers has been acknowledged. If called while the Publisher is suspended, it will return PRECONDITION_NOT_MET
Revised Text:
Add a paragraph describing the function, add the function to the publisher-table, change the PSM to include the new function.
Resolution:
Revised Text: Resolution:
Add a Publisher::wait_for_acknowledgments(timeout) method that will block until all data written by all its writers has been acknowledged. If called while the Publisher is suspended, it will return PRECONDITION_NOT_MET
Revised Text:
In section 2.1.2.4.1 Publisher table. Add the following rows before the row that describes "get_participant":
wait_for_acknowledgments ReturnCode_t
max_wait Duration_t
Add section 2.1.2.4.2.12 (Previous section 2.1.2.4.2.12 get_participant becomes 2.1.2.4.2.13)
2.1.2.4.2.12 wait_for_acknowledgments
This operation blocks the calling thread until either all data written by the reliable DataWriter entities is acknowledged by all matched reliable DataReader entities, or else the duration specified by the max_wait parameter elapses, whichever happens first. A return value of OK indicates that all the samples written have been acknowledged by all reliable matched data readers; a return value of TIMEOUT indicates that max_wait elapsed before all the data was acknowledged.
Section 2.2.3 DCPS PSM : IDL, Interface Publisher add operation:
ReturnCode_t wait_for_acknowledgments(in Duration_t max_wait);
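The blocking semantics described in 2.1.2.4.2.12 can be sketched with a condition variable: the call returns OK once the count of unacknowledged samples reaches zero, or TIMEOUT once max_wait elapses. The class below is an illustrative model, not a real DDS implementation.

```python
# Illustrative model of Publisher::wait_for_acknowledgments: block until
# every written sample is acknowledged, or max_wait (seconds) elapses.
import threading

class SketchPublisher:
    def __init__(self):
        self._cond = threading.Condition()
        self._unacked = 0  # samples written but not yet acknowledged

    def write(self):
        with self._cond:
            self._unacked += 1

    def acknowledge(self):
        with self._cond:
            self._unacked -= 1
            if self._unacked == 0:
                self._cond.notify_all()

    def wait_for_acknowledgments(self, max_wait):
        """Return 'OK' if all samples are acknowledged before max_wait,
        'TIMEOUT' otherwise (mirroring the return codes in the spec)."""
        with self._cond:
            if self._cond.wait_for(lambda: self._unacked == 0,
                                   timeout=max_wait):
                return "OK"
            return "TIMEOUT"

pub = SketchPublisher()
pub.write()
threading.Timer(0.05, pub.acknowledge).start()  # ack arrives asynchronously
result = pub.wait_for_acknowledgments(max_wait=1.0)  # "OK"
```

Note that the operation as specified applies only to reliable DataWriter/DataReader pairs; the sketch abstracts the matching away into a single counter.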
Actions taken:
March 11, 2005: received issue
August 1, 2005: closed issue
Issue 8558: (T#61) Restrictive Handle definition (data-distribution-rtf)
Click here for this issue's archive.
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: The current IDL PSM contains the following lines:
#define HANDLE_TYPE_NATIVE long
#define HANDLE_NIL_NATIVE 0
typedef HANDLE_TYPE_NATIVE InstanceHandle_t;
const InstanceHandle_t HANDLE_NIL = HANDLE_NIL_NATIVE;
The two #defines can be vendor-specific. However, the constant definition in the last line restricts the HANDLE_TYPE_NATIVE to be of integer, char, wide_char, boolean, floating_pt, string, wide_string, fixed_pt or octet type; IDL does not allow any other (e.g. structured) types to be assigned a constant value.
Resolution:
The PSM contains a number of other elements that cannot be accurately expressed in IDL (e.g. static methods). As in those other cases, a comment should be added stating that structured and other non-primitive types may be used for HANDLE_TYPE_NATIVE and HANDLE_NIL_NATIVE even though IDL cannot express this.
Revised Text:
Mention the above in the introduction of the PSM.
Resolution:
Revised Text: Resolution:
The PSM contains a number of other elements that cannot be accurately expressed in IDL (e.g. static methods). As in those other cases, a comment should be added stating that structured and other non-primitive types may be used for HANDLE_TYPE_NATIVE and HANDLE_NIL_NATIVE even though IDL cannot express this.
Revised Text:
Section 2.2.2 PIM to PSM Mapping rules. At the end of the section add the paragraph:
The IDL PSM introduces a number of types that are intended to be defined in a native way. As these are opaque types, the actual definition of the type does not affect portability and is implementation dependent. For completeness, the names of the types appear as typedefs in the IDL, and a #define with the suffix "_TYPE_NATIVE" is used as a placeholder for the actual type. The type used in the IDL by this means is not normative, and an implementation is allowed to use any other type, including non-scalar (i.e., structured) types.
Section 2.2.3 DCPS PSM : IDL Replace
#define HANDLE_NIL_NATIVE 0
With:
#define HANDLE_NIL_NATIVE
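The opaque-handle idea behind this change can be illustrated in Python: a vendor may map InstanceHandle_t to any native type, including a structured one, with HANDLE_NIL as a distinguished "no instance" value. The type and field below are hypothetical, chosen only to show a non-scalar handle.

```python
# Hypothetical illustration: a vendor maps the opaque InstanceHandle_t to a
# structured native type; HANDLE_NIL is a distinguished "no instance" value.
from dataclasses import dataclass

@dataclass(frozen=True)
class InstanceHandle:              # stand-in for HANDLE_TYPE_NATIVE
    value: bytes = b""

HANDLE_NIL = InstanceHandle()      # stand-in for HANDLE_NIL_NATIVE

h = InstanceHandle(b"\x00\x01\x02\x03")
assert h != HANDLE_NIL             # a real handle is never the nil handle
```

Because application code only compares handles and passes them back to the service, such a structured mapping remains portable at the API level, which is the point of the revised PSM wording.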
Actions taken:
March 11, 2005: received issue
August 1, 2005: closed issue
Issue 8559: (T#62, R#141) Unspecified TOPIC semantics (data-distribution-rtf)
Click here for this issue's archive.
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: a) The semantics of the DURABILITY QoS attribute "service_cleanup_delay" in relation to the RxO mechanism are not specified.
b) There is no relation between the history and resource limits of the durability service and the history and resource limits of readers and writers.
c) The durability service still has to be configured by means of the above-mentioned parameters: 'service_cleanup_delay', 'history', and 'resource-limits'.
Resolution:
Remove service_cleanup_delay from the DURABILITY QoS policy for readers and writers.
Add a new QoS "DURABILITY_SETTINGS" on the Topic whose sole purpose is to configure the durability service. The QoS policy should include the parameters: 'service_cleanup_delay', 'history' and 'resource-limits'.
Revised Text:
Remove the service_cleanup_delay from the DURABILITY QoS for readers and writers
Remove the history QoS and resource-limits QoS policies from the TopicBuiltinTopicData
Add the new DURABILITY_SETTINGS QoS and explain its behavior/meaning.
Add this QoS policy also to the TopicBuiltinTopicData.
Update the PSM accordingly
Resolution:
Revised Text: Resolution:
Remove service_cleanup_delay from the DURABILITY QoS policy for readers and writers.
Add a new QoS "DURABILITY_SERVICE" on the Topic whose sole purpose is to configure the durability service. The QoS policy should include the parameters: 'service_cleanup_delay', 'history' and 'resource-limits'.
Revised Text:
Section 2.1.3 Supported QoS Figure 2-12: Remove the service_cleanup_delay from the DurabilityQosPolicy
Section 2.1.3 Supported QoS Figure 2-12: Add new policy DurabilityServiceQosPolicy with fields:
struct DurabilityServiceQosPolicy {
Duration_t service_cleanup_delay;
HistoryQosPolicyKind history_kind;
long history_depth;
long max_samples;
long max_instances;
long max_samples_per_instance;
};
Section 2.1.3 QoS Table: DURABILITY row:
Remove sentence "And a duration service_cleanup_delay".
Section 2.1.3 QoS Table: remove row:
service_cleanup_delay Only needed if kind is TRANSIENT or PERSISTENT. Controls when the service is able to remove all information regarding a data-instance. By default, zero
Section 2.1.3 QoS Table: Add the following row:
DURABILITY_SERVICE A duration "service_cleanup_delay", a HistoryQosPolicyKind "history_kind", and four integers: history_depth, max_samples, max_instances, max_samples_per_instance Specifies the configuration of the durability service; that is, the service that implements the DURABILITY kinds TRANSIENT and PERSISTENT Topic, DataWriter No No
service_cleanup_delay Controls when the service is able to remove all information regarding a data-instance. By default, zero
history_kind, history_depth Control the HISTORY QoS of the fictitious DataReader that stores the data within the durability service (see Section 2.1.3.4). The default settings are history_kind=KEEP_LAST, history_depth=1
max_samples, max_instances, max_samples_per_instance Control the RESOURCE_LIMITS QoS of the implied DataReader that stores the data within the durability service. By default they are all LENGTH_UNLIMITED.
Add section 2.1.3.5. Previous section 2.1.3.5 PRESENTATION becomes 2.1.3.6
2.1.3.5 DURABILITY_SERVICE
This policy is used to configure the HISTORY QoS and the RESOURCE_LIMITS QoS used by the fictitious DataReader and DataWriter used by the "persistence service". The "persistence service" is the one responsible for implementing the DURABILITY kinds TRANSIENT and PERSISTENT. See Section 2.1.3.4.
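The fields and defaults of the new policy, as given in the QoS table, can be summarized in a small sketch. The Python representation is illustrative; the field names and default values are those stated in the resolution.

```python
# Sketch of DurabilityServiceQosPolicy with the defaults from the QoS table:
# service_cleanup_delay = 0, history KEEP_LAST with depth 1, and unlimited
# resource limits. Python types stand in for the IDL ones.
from dataclasses import dataclass

LENGTH_UNLIMITED = -1          # conventional sentinel for "unlimited"
KEEP_LAST = "KEEP_LAST"        # HistoryQosPolicyKind value

@dataclass
class DurabilityServiceQosPolicy:
    service_cleanup_delay: float = 0.0       # default: zero
    history_kind: str = KEEP_LAST            # default: KEEP_LAST
    history_depth: int = 1                   # default: 1
    max_samples: int = LENGTH_UNLIMITED
    max_instances: int = LENGTH_UNLIMITED
    max_samples_per_instance: int = LENGTH_UNLIMITED

qos = DurabilityServiceQosPolicy()           # all defaults
```

These defaults mean that, unless configured otherwise, the durability service keeps only the most recent sample per instance and never purges information on its own.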
Section 2.1.5 Built-in Topics. In the table that follows the sentence "The QoS of the built-in Subscriber and DataReader objects is given by the following table:".
· Modify row
DURABILITY TRANSIENT_LOCAL
· Add the row:
DURABILITY_SERVICE Does not apply as DURABILITY is TRANSIENT_LOCAL
Section 2.1.5 Built-in Topics. In the table that follows the sentence "The table below lists the built-in topics, their names, and the additional information--beyond the QoS policies that apply to the remote entity--that appears in the data associated with the built-in topic"
· In the rows describing DCPSTopic, add the row:
durability_service DurabilityServiceQosPolicy Policy of the corresponding Topic
· In the rows describing DCPSPublication, add the row:
durability_service DurabilityServiceQosPolicy Policy of the corresponding DataWriter
Section 2.2.3 DCPS PSM : IDL
· Add constants:
const string DURABILITYSERVICE_POLICY_NAME = "DurabilityService";
const QosPolicyId_t DURABILITYSERVICE_POLICY_ID = 22;
· Remove field service_cleanup_delay from structure DurabilityQosPolicy:
struct DurabilityQosPolicy {
DurabilityQosPolicyKind kind;
Duration_t service_cleanup_delay;
};
· Add structure DurabilityServiceQosPolicy:
struct DurabilityServiceQosPolicy {
Duration_t service_cleanup_delay;
HistoryQosPolicyKind history_kind;
long history_depth;
long max_samples;
long max_instances;
long max_samples_per_instance;
};
· Modify structure DataWriterQos to. Add field DurabilityServiceQosPolicy:
struct DataWriterQos {
DurabilityQosPolicy durability;
DurabilityServiceQosPolicy durability_service;
DeadlineQosPolicy deadline;
LatencyBudgetQosPolicy latency_budget;
LivelinessQosPolicy liveliness;
ReliabilityQosPolicy reliability;
DestinationOrderQosPolicy destination_order;
HistoryQosPolicy history;
ResourceLimitsQosPolicy resource_limits;
TransportPriorityQosPolicy transport_priority;
LifespanQosPolicy lifespan;
UserDataQosPolicy user_data;
OwnershipStrengthQosPolicy ownership_strength;
WriterDataLifecycleQosPolicy writer_data_lifecycle;
};
· Modify structure TopicQos. Add field DurabilityServiceQosPolicy:
struct TopicQos {
TopicDataQosPolicy topic_data;
DurabilityQosPolicy durability;
DurabilityServiceQosPolicy durability_service;
DeadlineQosPolicy deadline;
LatencyBudgetQosPolicy latency_budget;
LivelinessQosPolicy liveliness;
ReliabilityQosPolicy reliability;
DestinationOrderQosPolicy destination_order;
HistoryQosPolicy history;
ResourceLimitsQosPolicy resource_limits;
TransportPriorityQosPolicy transport_priority;
LifespanQosPolicy lifespan;
OwnershipQosPolicy ownership;
};
· Modify structure TopicBuiltinTopicData. Add field DurabilityServiceQosPolicy:
struct TopicBuiltinTopicData {
BuiltinTopicKey_t key;
string name;
string type_name;
DurabilityQosPolicy durability;
DurabilityServiceQosPolicy durability_service;
DeadlineQosPolicy deadline;
LatencyBudgetQosPolicy latency_budget;
LivelinessQosPolicy liveliness;
ReliabilityQosPolicy reliability;
TransportPriorityQosPolicy transport_priority;
LifespanQosPolicy lifespan;
DestinationOrderQosPolicy destination_order;
HistoryQosPolicy history;
ResourceLimitsQosPolicy resource_limits;
OwnershipQosPolicy ownership;
TopicDataQosPolicy topic_data;
};
· Modify structure PublicationBuiltinTopicData. Add field DurabilityServiceQosPolicy:
struct PublicationBuiltinTopicData {
BuiltinTopicKey_t key;
BuiltinTopicKey_t participant_key;
string topic_name;
string type_name;
DurabilityQosPolicy durability;
DurabilityServiceQosPolicy durability_service;
DeadlineQosPolicy deadline;
LatencyBudgetQosPolicy latency_budget;
LivelinessQosPolicy liveliness;
ReliabilityQosPolicy reliability;
LifespanQosPolicy lifespan;
UserDataQosPolicy user_data;
OwnershipStrengthQosPolicy ownership_strength;
PresentationQosPolicy presentation;
PartitionQosPolicy partition;
TopicDataQosPolicy topic_data;
GroupDataQosPolicy group_data;
};
Appendix A Compliance Points
· Add DURABILITY_SERVICE QoS to the Persistence profile. New text is:
o Persistence profile: This profile adds the optional QoS policy DURABILITY_SERVICE as well as the optional settings 'TRANSIENT' and 'PERSISTENT' of the DURABILITY QoS policy kind. This profile enables saving data into either TRANSIENT memory or permanent storage so that it can survive the lifecycle of the DataWriter and system outages. See Section 2.1.3.4.
Actions taken:
March 11, 2005: received issue
August 1, 2005: closed issue
Issue 8560: (T#65) Missing get_current_time() function (data-distribution-rtf)
Click here for this issue's archive.
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: The DDS supports timestamping of information, either automatically or manually. These timestamps also appear in the SampleInfo data. What is missing is a get_current_time() call that allows applications to retrieve the current time in the format utilized by the DDS; that is, the returned format and starting time are the same as those used by the default/internal DDS timestamping.
Resolution:
Add such a 'get_current_time()' function to the participant class
Revised Text:
Add and explain the method and update the PSM.
Resolution:
Revised Text: Resolution:
Add such a 'get_current_time()' function to the participant class
Revised Text:
In section 2.1.2.2.1 DomainParticipant Class. DomainParticipant table. Add the following operation to the table:
get_current_time ReturnCode_t
out:current_time Time_t
Add section 2.1.2.2.1.26
2.1.2.2.1.26 get_current_time
This operation returns the current value of the time that the service uses to time-stamp data-writes and to set the reception-timestamp for the data-updates it receives.
Section 2.2.3 DCPS PSM : IDL, Interface DomainParticipant add the operation:
void get_current_time(inout Time_t current_time);
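A typical use of this operation, since it reads the same clock the service uses for source timestamps, is computing reception latency. The sketch below uses hypothetical stand-in classes; only the operation name get_current_time and the SampleInfo source timestamp come from the specification.

```python
# Hypothetical usage sketch: with a participant-level get_current_time(),
# an application can compute latency against the same clock the service
# uses for its source timestamps (wall-clock time would not be comparable).
def reception_latency(participant, sample_info):
    now = participant.get_current_time()        # service clock reading
    return now - sample_info.source_timestamp   # latency in service time units

class FakeParticipant:                          # stand-in for DomainParticipant
    def get_current_time(self):
        return 105.0

class FakeSampleInfo:                           # stand-in for SampleInfo
    source_timestamp = 100.0

latency = reception_latency(FakeParticipant(), FakeSampleInfo())  # 5.0
```

Without get_current_time, an application would have to assume the service clock matches its own, which is exactly the gap this issue closes.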
Actions taken:
March 11, 2005: received issue
August 1, 2005: closed issue
Issue 8561: Read or take next instance, and others with an illegal instance_handle (data-distribution-rtf)
Click here for this issue's archive.
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: Whenever an instance handle is passed as an input parameter to a DataReader operation, it may be invalid, e.g. because the instance has been disposed and its handle reclaimed. The semantics of this case should be clarified.
Resolution:
Assuming that implementations want to check for invalid handles, they should generically return 'BAD_PARAMETER'.
Revised Text:
Specify that BAD_PARAMETER is returned when providing illegal handles to:
· Read/Take_instance
· Read/Take_next_instance
· Read/Take_next_instance_with_condition
· Get_key_value (both on DataReader and DataWriter)
Resolution:
Revised Text: Resolution:
Assuming that implementations want to check for invalid handles, they should generically return 'BAD_PARAMETER'.
Revised Text:
Section 2.1.2.4.2.9 get_key_value. At the very end add the paragraph:
This operation may return BAD_PARAMETER if the InstanceHandle_t a_handle does not correspond to an existing data-object known to the DataWriter. If the implementation is not able to check invalid handles then the result in this situation is unspecified.
Section 2.1.2.5.3.14 read_instance. At the very end add the paragraph:
This operation may return BAD_PARAMETER if the InstanceHandle_t a_handle does not correspond to an existing data-object known to the DataReader. If the implementation is not able to check invalid handles then the result in this situation is unspecified.
Section 2.1.2.5.3.15 take_instance. At the very end add the paragraph:
This operation may return BAD_PARAMETER if the InstanceHandle_t a_handle does not correspond to an existing data-object known to the DataReader. If the implementation is not able to check invalid handles then the result in this situation is unspecified.
Section 2.1.2.5.3.16 read_next_instance. At the very end add the paragraph:
This operation may return BAD_PARAMETER if the InstanceHandle_t a_handle does not correspond to an existing data-object known to the DataReader. If the implementation is not able to check invalid handles then the result in this situation is unspecified.
Section 2.1.2.5.3.17 take_next_instance. At the very end add the paragraph:
This operation may return BAD_PARAMETER if the InstanceHandle_t a_handle does not correspond to an existing data-object known to the DataReader. If the implementation is not able to check invalid handles then the result in this situation is unspecified.
Section 2.1.2.5.3.18 read_next_instance_w_condition. At the very end add the paragraph:
This operation may return BAD_PARAMETER if the InstanceHandle_t a_handle does not correspond to an existing data-object known to the DataReader. If the implementation is not able to check invalid handles then the result in this situation is unspecified.
Section 2.1.2.5.3.19 take_next_instance_w_condition. At the very end add the paragraph:
This operation may return BAD_PARAMETER if the InstanceHandle_t a_handle does not correspond to an existing data-object known to the DataReader. If the implementation is not able to check invalid handles then the result in this situation is unspecified.
Section 2.1.2.5.3.28 get_key_value. At the very end add the paragraph:
This operation may return BAD_PARAMETER if the InstanceHandle_t a_handle does not correspond to an existing data-object known to the DataReader. If the implementation is not able to check invalid handles then the result in this situation is unspecified.
Section 2.1.2.5.3.32 get_matched_publication_data. Replace 'PRECONDITION_NOT_MET' with 'BAD_PARAMETER' in the sentence:
The publication_handle must correspond to a publication currently associated with the DataReader, otherwise the operation will fail and return BAD_PARAMETER.
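The "may return BAD_PARAMETER" wording added by this resolution can be modeled with a reader that tracks its known instance handles and rejects lookups on handles it no longer knows. The class and methods below are an illustrative sketch, not the DDS API.

```python
# Sketch of the added behavior: operations taking an InstanceHandle_t may
# return BAD_PARAMETER when the handle no longer names a known data-object
# (e.g. the instance was disposed and the handle reclaimed).
BAD_PARAMETER, OK = "BAD_PARAMETER", "OK"

class SketchDataReader:
    def __init__(self):
        self._instances = {}  # handle -> key value of a known data-object

    def register(self, handle, key):
        self._instances[handle] = key

    def dispose_and_reclaim(self, handle):
        self._instances.pop(handle, None)   # handle becomes invalid

    def get_key_value(self, handle):
        if handle not in self._instances:   # invalid or reclaimed handle
            return BAD_PARAMETER, None
        return OK, self._instances[handle]

r = SketchDataReader()
r.register(7, "sensor-A")
status, key = r.get_key_value(7)            # OK, "sensor-A"
r.dispose_and_reclaim(7)
status, key = r.get_key_value(7)            # BAD_PARAMETER, None
```

As the resolution notes, an implementation that cannot detect invalid handles is permitted to leave the result unspecified; the check above is optional.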
Actions taken:
March 11, 2005: received issue
August 1, 2005: closed issue
Discussion:
Issue 8562: (T#69) Notification of unsupported QoS policies (data-distribution-rtf)
Click here for this issue's archive.
Source: PrismTech (Mr. Hans van't Hag, hans.vanthag(at)prismtech.com)
Nature: Uncategorized Issue
Severity:
Summary: If a QoS policy is not supported (i.e. is part of an unsupported profile) but the user supplies a non-default value for it, how should the system react?
Resolution:
Just return UNSUPPORTED
Revised Text:
Add a remark in the specification that the return code UNSUPPORTED is returned when supplying a QoS policy that is not supported by the middleware (i.e. is part of an optional profile that is not supported by the specific middleware implementation)
Resolution:
Revised Text: Resolution:
Just return UNSUPPORTED.
Revised Text:
Section 2.1.2.1.1.1 set_qos.
Before paragraph "The existing set of policies are only changed…" Add the paragraph:
If the application supplies a non-default value for a QoS policy that is not supported by the implementation of the service, the set_qos operation will fail and return UNSUPPORTED.
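The rule can be sketched directly: a non-default value for an unsupported policy fails, while default values (and supported policies) succeed. The policy names, defaults, and profile split below are hypothetical examples.

```python
# Sketch of the added set_qos rule: supplying a non-default value for a
# policy outside the implementation's supported profiles yields UNSUPPORTED.
UNSUPPORTED, OK = "UNSUPPORTED", "OK"

DEFAULTS = {"durability": "VOLATILE", "transport_priority": 0}
SUPPORTED_POLICIES = {"durability"}   # e.g., no Transport Priority support

def set_qos(requested):
    for policy, value in requested.items():
        if policy not in SUPPORTED_POLICIES and value != DEFAULTS[policy]:
            return UNSUPPORTED        # non-default value, unsupported policy
    return OK

set_qos({"transport_priority": 10})   # UNSUPPORTED
set_qos({"transport_priority": 0})    # OK: default value is always accepted
set_qos({"durability": "VOLATILE"})   # OK: supported policy
```

Accepting default values for unsupported policies keeps portable applications working unchanged on minimal-profile implementations.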
Actions taken:
March 11, 2005: received issue
August 1, 2005: closed issue
Issue 8567: (O#7966) Confusing terminology: "plain data structures" (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: Section 2.1.1.2.2 states: "At the DCPS level, data types represent information that is sent atomically. For performance reasons, only plain data structures are handled by this level." It is not clear what "plain data structures" means.
Proposed Resolution:
Remove the second sentence quoted above from the specification.
Proposed Revised Text:
Remove the sentence "For performance reasons, only plain data structures are handled by this level" from section 2.1.1.2.2, page 2-7.
Resolution:
Revised Text:
Actions taken:
March 14, 2005: received issue
August 1, 2005: closed issue
Discussion: Discard. This issue duplicates Issue#7966
Issue 8568: (R#104) Inconsistent naming of QueryCondition::get_query_arguments (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The operations QueryCondition::get_query_arguments and QueryCondition::set_query_arguments are named inconsistently with respect to similar operations on the ContentFilteredTopic and the MultiTopic.
Proposed Resolution:
Rename get_query_arguments to get_query_parameters and set_query_arguments to set_query_parameters both in the PIM and PSM.
Proposed Revised Text:
Rename get_query_arguments to get_query_parameters and set_query_arguments to set_query_parameters in the table in section 2.1.2.5.9. Rename set_query_arguments to set_query_parameters in the paragraph immediately following the table in the same section.
Rename get_query_arguments to get_query_parameters in the title of section 2.1.2.5.9.2. Rename set_query_arguments to set_query_parameters within that section (two occurrences).
Rename set_query_arguments to set_query_parameters in the title to section 2.1.2.5.9.3.
Rename set_query_arguments to set_query_parameters in figure 2-18.
Rename get_query_arguments to get_query_parameters and set_query_arguments to set_query_parameters in the IDL PSM in section 2.3.3, page 2-144.
Resolution:
Revised Text: Resolution:
Rename get_query_arguments to get_query_parameters, set_query_arguments to set_query_parameters both in the PIM and PSM, and 'query_arguments' to 'query_parameters'
Revised Text:
Table in section 2.1.2.5.9
Rename get_query_arguments to get_query_parameters and set_query_arguments to set_query_parameters.
Paragraph following Table in section 2.1.2.5.9
Rename set_query_arguments to set_query_parameters.
Section 2.1.2.5.9.2
Rename get_query_arguments to get_query_parameters in the title of section 2.1.2.5.9.2. Rename set_query_arguments to set_query_parameters within that section (two occurrences). Rename 'query_arguments' to query_parameters'
Section 2.1.2.5.9.3
Rename set_query_arguments to set_query_parameters in the title to section 2.1.2.5.9.3. Rename 'query_arguments' to query_parameters'
Figure 2-18
Rename set_query_arguments to set_query_parameters in figure 2-18.
Section 2.2.3 DCPS PSM : IDL interface QueryCondition
Rename get_query_arguments to get_query_parameters and set_query_arguments to set_query_parameters, 'query_arguments' to query_parameters'.
Actions taken:
March 14, 2005: received issue
August 1, 2005: closed issue
Issue 8569: (R#115b) Incorrect description of QoS for built-in readers (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: In Section 2.1.5, there is a table that lists all the QoS policies that are used to create built-in readers. Since the policies are for creating built-in readers, the table should only list the QoS for the corresponding subscriber, reader, and topic. It shouldn't list any policies that occur only in DataWriterQos. Specifically, TRANSPORT_PRIORITY, LIFESPAN, and OWNERSHIP_STRENGTH, all of which apply only to DataWriters, are currently listed erroneously.
The following QoS are supposed to apply to DataReaders (or their related entities) but are missing from the table: ReaderDataLifecycleQosPolicy, EntityFactoryQosPolicy
For the QoS that are already listed in the table, some of them don't list the default values of some of the fields.
DURABILITY: missing service_cleanup_delay value
RELIABILITY: missing max_blocking_time value
Proposed Resolution:
Remove TRANSPORT_PRIORITY, LIFESPAN, and OWNERSHIP_STRENGTH from the table.
Add the following values:
READER_DATA_LIFECYCLE autopurge_nowriter_samples_delay = INFINITE
ENTITY_FACTORY autoenable_created_entities = TRUE
DURABILITY service_cleanup_delay = 0
RELIABILITY max_blocking_time = 100 milliseconds
Proposed Revised Text:
Change the table in section 2.1.5, page 2-129, as described above.
Resolution:
Revised Text: Resolution:
Remove TRANSPORT_PRIORITY, LIFESPAN, and OWNERSHIP_STRENGTH from the table.
Add the following values:
READER_DATA_LIFECYCLE autopurge_nowriter_samples_delay = INFINITE
ENTITY_FACTORY autoenable_created_entities = TRUE
DURABILITY service_cleanup_delay = 0
RELIABILITY max_blocking_time = 100 milliseconds
Revised Text:
Change the table in section 2.1.5, page 2-129, apply the following modifications:
Remove TRANSPORT_PRIORITY, LIFESPAN, and OWNERSHIP_STRENGTH from the table.
Add the following values:
READER_DATA_LIFECYCLE autopurge_nowriter_samples_delay = infinite
ENTITY_FACTORY autoenable_created_entities = TRUE
DURABILITY service_cleanup_delay = 0
RELIABILITY max_blocking_time = 100 milliseconds
Actions taken:
March 14, 2005: received issue
August 1, 2005: closed issue
Issue 8570: (R#117) No way to access Participant and Topic built-in topic data (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The specification already provides the operations get_matched_publication_data and get_matched_subscription_data on the DataReader and DataWriter. These operations allow applications to look up information about entities that exist in the domain without having to use the built-in DataReaders directly. It would be useful to have the corresponding ability to look up information about remote DomainParticipants and Topics; however, no such operations exist.
Proposed Resolution:
Add the following operations:
· ReturnCode_t DomainParticipant::get_discovered_participants(inout InstanceHandle_t[] participant_handles)
· ReturnCode_t DomainParticipant::get_discovered_participant_data(inout ParticipantBuiltinTopicData publication_data, InstanceHandle_t participant_handle)
· ReturnCode_t DomainParticipant::get_discovered_topics(inout InstanceHandle_t[] topic_handles)
· ReturnCode_t DomainParticipant::get_discovered_topic_data(inout TopicBuiltinTopicData topic_data, InstanceHandle_t topic_handle)
Proposed Revised Text:
Add the names of the aforementioned new operations to figure 2-6.
Append the following rows to the DomainParticipant Class table in 2.1.2.2.1:
get_discovered_participant_data ReturnCode_t
inout: publication_data ParticipantBuiltinTopicData
participant_handle InstanceHandle_t
get_discovered_participants ReturnCode_t
inout: participant_handles InstanceHandle_t []
get_discovered_topic_data ReturnCode_t
inout: topic_data TopicBuiltinTopicData
topic_handle InstanceHandle_t
get_discovered_topics ReturnCode_t
inout: topic_handles InstanceHandle_t []
Insert new sections to describe the new operations:
2.1.2.2.1.26 get_discovered_participant_data
This operation retrieves information on a DomainParticipant that has been discovered on the network. The participant must be in the same domain as the participant on which this operation is invoked and must not have been "ignored" by means of the DomainParticipant ignore_participant operation.
The participant_handle must correspond to such a DomainParticipant. Otherwise, the operation will fail and return PRECONDITION_NOT_MET.
Use the operation get_discovered_participants to find the DomainParticipants that are currently discovered.
The operation may also fail if the infrastructure does not hold the information necessary to fill in the participant_data. In this case the operation will return UNSUPPORTED.
2.1.2.2.1.27 get_discovered_participants
This operation retrieves the list of DomainParticipants that have been discovered in the domain and that the application has not indicated should be "ignored" by means of the DomainParticipant ignore_participant operation.
The operation may fail if the infrastructure does not locally maintain the connectivity information. In this case the operation will return UNSUPPORTED.
2.1.2.2.1.28 get_discovered_topic_data
This operation retrieves information on a Topic that has been discovered on the network. The topic must have been created by a participant in the same domain as the participant on which this operation is invoked and must not have been "ignored" by means of the DomainParticipant ignore_topic operation.
The topic_handle must correspond to such a topic. Otherwise, the operation will fail and return PRECONDITION_NOT_MET.
Use the operation get_discovered_topics to find the topics that are currently discovered.
The operation may also fail if the infrastructure does not hold the information necessary to fill in the topic_data. In this case the operation will return UNSUPPORTED.
2.1.2.2.1.29 get_discovered_topics
This operation retrieves the list of Topics that have been discovered in the domain and that the application has not indicated should be "ignored" by means of the DomainParticipant ignore_topic operation.
The operation may fail if the infrastructure does not locally maintain the connectivity information. In this case the operation will return UNSUPPORTED.
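The lookup pattern these operations enable — list discovered handles, then fetch built-in data for one, with "ignored" entities excluded and unknown handles rejected with PRECONDITION_NOT_MET — can be sketched as follows. The class is an illustrative model; only the operation names and return codes come from the proposal.

```python
# Hypothetical sketch of the proposed discovery lookup pattern on a
# DomainParticipant: enumerate discovered participant handles, then fetch
# the built-in data for one; ignored participants are excluded.
PRECONDITION_NOT_MET, OK = "PRECONDITION_NOT_MET", "OK"

class SketchParticipant:
    def __init__(self):
        self._discovered = {}   # handle -> ParticipantBuiltinTopicData (dict)
        self._ignored = set()

    def discover(self, handle, data):          # discovery happens internally
        self._discovered[handle] = data

    def ignore_participant(self, handle):
        self._ignored.add(handle)

    def get_discovered_participants(self):
        return OK, [h for h in self._discovered if h not in self._ignored]

    def get_discovered_participant_data(self, handle):
        if handle in self._ignored or handle not in self._discovered:
            return PRECONDITION_NOT_MET, None  # not a known, non-ignored peer
        return OK, self._discovered[handle]

p = SketchParticipant()
p.discover(1, {"name": "peer-1"})
p.discover(2, {"name": "peer-2"})
p.ignore_participant(2)
status, handles = p.get_discovered_participants()        # OK, [1]
status, data = p.get_discovered_participant_data(2)      # PRECONDITION_NOT_MET
```

The same two-step pattern applies to get_discovered_topics and get_discovered_topic_data, mirroring the existing get_matched_publication_data / get_matched_subscription_data operations.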
Resolution:
Revised Text: Resolution:
Add the following operations:
· ReturnCode_t DomainParticipant::get_discovered_participants(inout InstanceHandle_t[] participant_handles)
· ReturnCode_t DomainParticipant::get_discovered_participant_data(inout ParticipantBuiltinTopicData publication_data, InstanceHandle_t participant_handle)
· ReturnCode_t DomainParticipant::get_discovered_topics(inout InstanceHandle_t[] topic_handles)
· ReturnCode_t DomainParticipant::get_discovered_topic_data(inout TopicBuiltinTopicData topic_data, InstanceHandle_t topic_handle)
Revised Text:
Figure 2-6: Add the names of the aforementioned new operations.
Section 2.1.2.2.1 DomainParticipant Class table:
Append the following rows to the table:
get_discovered_participants ReturnCode_t
inout: participant_handles InstanceHandle_t []
get_discovered_participant_data ReturnCode_t
inout: participant_data ParticipantBuiltinTopicData
participant_handle InstanceHandle_t
get_discovered_topics ReturnCode_t
inout: topic_handles InstanceHandle_t []
get_discovered_topic_data ReturnCode_t
inout: topic_data TopicBuiltinTopicData
topic_handle InstanceHandle_t
Insert new sections to describe the new operations:
2.1.2.2.1.27 get_discovered_participants
This operation retrieves the list of DomainParticipants that have been discovered in the domain and that the application has not indicated should be "ignored" by means of the DomainParticipant ignore_participant operation.
The operation may fail if the infrastructure does not locally maintain the connectivity information. In this case the operation will return UNSUPPORTED.
2.1.2.2.1.28 get_discovered_participant_data
This operation retrieves information on a DomainParticipant that has been discovered on the network. The participant must be in the same domain as the participant on which this operation is invoked and must not have been "ignored" by means of the DomainParticipant ignore_participant operation.
The participant_handle must correspond to such a DomainParticipant. Otherwise, the operation will fail and return PRECONDITION_NOT_MET.
Use the operation get_discovered_participants to find the DomainParticipants that are currently discovered.
The operation may also fail if the infrastructure does not hold the information necessary to fill in the participant_data. In this case the operation will return UNSUPPORTED.
2.1.2.2.1.29 get_discovered_topics
This operation retrieves the list of Topics that have been discovered in the domain and that the application has not indicated should be "ignored" by means of the DomainParticipant ignore_topic operation.
The operation may fail if the infrastructure does not locally maintain the connectivity information. In this case the operation will return UNSUPPORTED.
2.1.2.2.1.30 get_discovered_topic_data
This operation retrieves information on a Topic that has been discovered on the network. The topic must have been created by a participant in the same domain as the participant on which this operation is invoked and must not have been "ignored" by means of the DomainParticipant ignore_topic operation.
The topic_handle must correspond to such a topic. Otherwise, the operation will fail and return PRECONDITION_NOT_MET.
Use the operation get_discovered_topics to find the topics that are currently discovered.
The operation may also fail if the infrastructure does not hold the information necessary to fill in the topic_data. In this case the operation will return UNSUPPORTED.
Section 2.2.3 DCPS PSM : IDL interface DomainParticipant
Add the operations:
ReturnCode_t get_discovered_participants(
inout InstanceHandleSeq participant_handles);
ReturnCode_t get_discovered_participant_data(
in InstanceHandle_t participant_handle,
inout ParticipantBuiltinTopicData participant_data);
ReturnCode_t get_discovered_topics(inout InstanceHandleSeq topic_handles);
ReturnCode_t get_discovered_topic_data(
in InstanceHandle_t topic_handle,
inout TopicBuiltinTopicData topic_data);
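The discovery semantics above (discovered entities are returned only if they have not been "ignored", and data lookup on anything else fails with PRECONDITION_NOT_MET) can be sketched with a small illustrative model. This is not the DDS API: the class, method names, and return conventions here are hypothetical stand-ins for the operations being added.

```python
# Illustrative model only: tracks discovered participants and honors
# ignore_participant, mirroring the semantics of the new
# get_discovered_participants / get_discovered_participant_data operations.
# (Topics would be handled analogously with ignore_topic.)

class MockDomainParticipant:
    def __init__(self):
        self._participants = {}          # handle -> participant data
        self._ignored_participants = set()

    def discover_participant(self, handle, data):
        # Simulates the infrastructure discovering a remote participant.
        self._participants[handle] = data

    def ignore_participant(self, handle):
        self._ignored_participants.add(handle)

    def get_discovered_participants(self):
        # Only discovered participants the application has not ignored.
        return [h for h in self._participants
                if h not in self._ignored_participants]

    def get_discovered_participant_data(self, handle):
        # Fails with PRECONDITION_NOT_MET unless the handle corresponds to
        # a discovered, non-ignored participant.
        if handle not in self.get_discovered_participants():
            return "PRECONDITION_NOT_MET", None
        return "OK", self._participants[handle]
```

A participant ignored via ignore_participant disappears from the discovered list, and subsequent data lookups on its handle fail, as the revised text requires.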
Actions taken:
March 14, 2005: received issue
August 1, 2005: closed issue
Issue 8571: (R#126) Correction to DataWriter blocking behavior (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The DDS spec currently states that the max_blocking_time parameter of the RELIABILITY QoS only applies for data writers that are RELIABLE and have HISTORY QoS of KEEP_ALL.
These assertions are not true. Depending on the RESOURCE_LIMITS QoS, even a KEEP_LAST writer may eventually need to block.
Proposed Resolution:
The specification needs to be updated in the table of QoS in Section 2.1.3 and in the DataWriter section 2.1.2.4.2.10 for write to account for the case in which (max_samples < max_instances * HISTORY depth). In this case, the writer may attempt to write a new value for an existing instance whose history is not full and fail because it exceeds the max_samples limit. Therefore, if (max_samples < max_instances * HISTORY depth), then in the situation where the max_samples resource limit is exhausted the middleware is allowed to discard samples of some other instance as long as at least one sample remains for that instance. If it is still not possible to make space available for the new sample, the writer is allowed to block.
The behavior in the case where max_samples < max_instances must also be described. In that case the writer is allowed to block.
Proposed Revised Text:
In the QoS table in 2.1.3, change the first sentence of the "Meaning" cell of the RELIABILITY max_blocking_time row to: "This setting applies only to the case where kind=RELIABLE."
In section 2.1.2.4.2.10, replace the final paragraph with the following:
If the RELIABILITY kind is set to RELIABLE, the write operation on the DataWriter may block if the modification would cause data to be lost or else cause one of the limits specified in the RESOURCE_LIMITS to be exceeded. Under these circumstances, the RELIABILITY max_blocking_time configures the maximum time the write operation may block waiting for space to become available. If max_blocking_time elapses before the DataWriter is able to store the modification without exceeding the limits, the write operation will fail and return TIMEOUT.
Specifically, the DataWriter may block in the following situations (although the list may not be exhaustive), even if its HISTORY kind is KEEP_LAST.
· If (RESOURCE_LIMITS max_samples < RESOURCE_LIMITS max_instances * HISTORY depth), then in the situation where the max_samples resource limit is exhausted the Service is allowed to discard samples of some other instance as long as at least one sample remains for such an instance. If it is still not possible to make space available to store the modification, the writer is allowed to block.
· If (RESOURCE_LIMITS max_samples < RESOURCE_LIMITS max_instances), then the DataWriter may block regardless of the HISTORY depth.
In section 2.1.3.13 RELIABILITY, the second paragraph currently states:
The setting of this policy has a dependency on the setting of the HISTORY and RESOURCE_LIMITS policies. In case the RELIABILITY kind is set to RELIABLE and the HISTORY kind set to KEEP_ALL the write operation on the DataWriter may block if the modification would cause data to be lost or else cause one of the limits specified in the RESOURCE_LIMITS to be exceeded.
The above text should be rewritten as follows:
The setting of this policy has a dependency on the RESOURCE_LIMITS policy. In case the RELIABILITY kind is set to RELIABLE the write operation on the DataWriter may block if the modification would cause data to be lost or else cause one of the limits specified in the RESOURCE_LIMITS to be exceeded.
Resolution:
The specification needs to be updated in the table of QoS in Section 2.1.3 and in the DataWriter section 2.1.2.4.2.10 for write to account for the case in which (max_samples < max_instances * HISTORY depth). In this case, the writer may attempt to write a new value for an existing instance whose history is not full and fail because it exceeds the max_samples limit. Therefore, if (max_samples < max_instances * HISTORY depth), then in the situation where the max_samples resource limit is exhausted the middleware is allowed to discard samples of some other instance as long as at least one sample remains for that instance. If it is still not possible to make space available for the new sample, the writer is allowed to block.
The behavior in the case where max_samples < max_instances must also be described. In that case the writer is allowed to block.
Revised Text:
In the QoS table in 2.1.3, change the first sentence:
This setting applies only to the case where kind=RELIABLE and the HISTORY is KEEP_ALL
With:
This setting applies only to the case where kind=RELIABLE.
In section 2.1.2.4.2.10 write
replace the paragraph
If the RELIABILITY kind is set to RELIABLE and the HISTORY kind is set to KEEP_ALL the write operation on the DataWriter may block if the modification would cause data to be lost or else cause one of the limits specified in the RESOURCE_LIMITS to be exceeded. Under these circumstances, the RELIABILITY max_blocking_time configures the maximum time the write operation may block (waiting for space to become available). If max_blocking_time elapses before the DataWriter is able to store the modification without exceeding the limits, the write operation will fail and return TIMEOUT.
with the following:
If the RELIABILITY kind is set to RELIABLE, the write operation may block if the modification would cause data to be lost or else cause one of the limits specified in the RESOURCE_LIMITS to be exceeded. Under these circumstances, the RELIABILITY max_blocking_time configures the maximum time the write operation may block waiting for space to become available. If max_blocking_time elapses before the DataWriter is able to store the modification without exceeding the limits, the write operation will fail and return TIMEOUT.
Specifically, the DataWriter write operation may block in the following situations (note that the list may not be exhaustive), even if its HISTORY kind is KEEP_LAST.
· If (RESOURCE_LIMITS max_samples < RESOURCE_LIMITS max_instances * HISTORY depth), then in the situation where the max_samples resource limit is exhausted the Service is allowed to discard samples of some other instance as long as at least one sample remains for such an instance. If it is still not possible to make space available to store the modification, the writer is allowed to block.
· If (RESOURCE_LIMITS max_samples < RESOURCE_LIMITS max_instances), then the DataWriter may block regardless of the HISTORY depth.
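The two blocking conditions enumerated above reduce to simple arithmetic on the QoS settings. The following predicate is an illustrative sketch (the function itself is not part of the DDS API; parameter names mirror RESOURCE_LIMITS max_samples, max_instances, and HISTORY depth):

```python
# Hedged sketch: captures the two situations above in which a RELIABLE
# DataWriter may block even with HISTORY kind KEEP_LAST. Not a DDS API call.

def writer_may_block(max_samples: int, max_instances: int, depth: int) -> bool:
    if max_samples < max_instances:
        # The DataWriter may block regardless of the HISTORY depth.
        return True
    if max_samples < max_instances * depth:
        # max_samples may be exhausted while some instance's history is not
        # yet full; the Service may discard samples of other instances
        # (keeping at least one per instance), and may still need to block.
        return True
    return False
```

For example, with max_instances = 5 and depth = 4, a max_samples of 20 can never force blocking, while max_samples = 19 can.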
In section 2.1.3.13 RELIABILITY, replace the second paragraph:
The setting of this policy has a dependency on the setting of the HISTORY and RESOURCE_LIMITS policies. In case the RELIABILITY kind is set to RELIABLE and the HISTORY kind set to KEEP_ALL the write operation on the DataWriter may block if the modification would cause data to be lost or else cause one of the limits specified in the RESOURCE_LIMITS to be exceeded. Under these circumstances, the RELIABILITY max_blocking_time configures the maximum duration the write operation may block.
With:
The setting of this policy has a dependency on the RESOURCE_LIMITS policy. In case the RELIABILITY kind is set to RELIABLE the write operation on the DataWriter may block if the modification would cause data to be lost or else cause one of the limits specified in the RESOURCE_LIMITS to be exceeded. Under these circumstances, the RELIABILITY max_blocking_time configures the maximum duration the write operation may block.
Actions taken:
March 14, 2005: received issue
August 1, 2005: closed issue
Issue 8572: Clarify meaning of LivelinessChangedStatus fields and LIVELINESS lease_duration (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: (R#132) Clarify meaning of LivelinessChangedStatus fields and LIVELINESS lease_duration
The specification of LivelinessChangedStatus doesn't explain what the terms "active" and "inactive" mean nor what change is expected when various events occur. For example, the following actions should be accounted for:
· Loss of liveliness by a previously alive writer
· Re-assertion of liveliness on a previously lost writer
· Normal deletion of an alive writer
· Normal deletion of a not-alive writer
· Assertion of liveliness on a new writer
· A new writer is discovered (i.e. on_publication_match) but its liveliness has not yet been asserted
The specification is also unclear about the usage of a DataReader's LIVELINESS lease_duration: it is not clear whether this field is used solely for QoS compatibility comparison with matching remote writers or whether it also determines the rate at which the reader updates its LivelinessChangedStatus.
Proposed Resolution:
Change "active" to "alive" and "inactive" to "not_alive" in the LivelinessChangedStatus field names.
In response to the list of events above:
· Previously alive writer is lost: alive_count_change == -1, not_alive_count_change == 1
· Lost writer re-asserts liveliness: alive_count_change == 1, not_alive_count_change == -1
· Normal deletion of alive writer: alive_count_change == -1, not_alive_count_change == 0
· Normal deletion of not alive writer: alive_count_change == 0, not_alive_count_change == -1
· New writer asserts liveliness for first time: alive_count_change == 1, not_alive_count_change == 0
· New writer but hasn't yet asserted liveliness: LivelinessChangedStatus is not changed
Specify that the information communicated by a reader's LivelinessChangedStatus is out of date by no more than a lease_duration. That is, the reader commits to update its LivelinessChangedStatus if necessary at least once during its lease_duration, although it may update more often if it chooses.
Proposed Revised Text:
Add an additional paragraph to the end of section 2.1.3.10 LIVELINESS:
The information communicated by a DataReader's LivelinessChangedStatus is out of date by no more than a single lease_duration. That is, the reader commits to updating its LivelinessChangedStatus if necessary at least once during each lease_duration, although it is permitted to update more often.
Change active_count to alive_count, inactive_count to not_alive_count, active_count_change to alive_count_change, and inactive_count_change to not_alive_count_change in figure 2-13 on page 2-117.
The rows for the LivelinessChangedStatus fields in the table on page 2-118 should be as follows:
alive_count The total number of currently active DataWriters that write the Topic read by the DataReader. This count increases when a newly matched DataWriter asserts its liveliness for the first time or when a DataWriter previously considered to be not alive reasserts its liveliness. The count decreases when a DataWriter considered alive fails to assert its liveliness and becomes not alive, whether because it was deleted normally or for some other reason.
not_alive_count The total count of DataWriters currently writing the Topic read by the DataReader that are no longer asserting their liveliness. This count increases when a DataWriter considered alive fails to assert its liveliness and becomes not alive for some reason other than the normal deletion of that DataWriter. It decreases when a previously not alive DataWriter either reasserts its liveliness or is deleted normally.
alive_count_change The change in the alive_count since the last time the listener was called or the status was read.
not_alive_count_change The change in the not_alive_count since the last time the listener was called or the status was read.
Change active_count to alive_count, inactive_count to not_alive_count, active_count_change to alive_count_change, and inactive_count_change to not_alive_count_change in the IDL PSM on page 2-141.
Resolution:
Change "active" to "alive" and "inactive" to "not_alive" in the LivelinessChangedStatus field names.
In response to the list of events above:
· Previously alive writer is lost: alive_count_change == -1, not_alive_count_change == 1
· Lost writer re-asserts liveliness: alive_count_change == 1, not_alive_count_change == -1
· Normal deletion of alive writer: alive_count_change == -1, not_alive_count_change == 0
· Normal deletion of not alive writer: alive_count_change == 0, not_alive_count_change == -1
· New writer asserts liveliness for first time: alive_count_change == 1, not_alive_count_change == 0
· New writer but hasn't yet asserted liveliness: LivelinessChangedStatus is not changed
Specify that the information communicated by a reader's LivelinessChangedStatus is out of date by no more than a lease_duration. That is, the reader commits to update its LivelinessChangedStatus if necessary at least once during its lease_duration, although it may update more often if it chooses.
Revised Text:
Add an additional paragraph to the end of section 2.1.3.10 LIVELINESS:
Changes in LIVELINESS must be detected by the Service with a time-granularity greater than or equal to the lease_duration. This ensures that the value of the LivelinessChangedStatus is updated at least once during each lease_duration and the related Listeners and WaitSets are notified within a lease_duration from the time the LIVELINESS changed.
figure 2-13 on page 2-117
Change active_count to alive_count, inactive_count to not_alive_count, active_count_change to alive_count_change, and inactive_count_change to not_alive_count_change.
Section 2.1.4.1 Communication Status
The rows for the LivelinessChangedStatus fields in the Communication Status table should be as follows:
alive_count The total number of currently active DataWriters that write the Topic read by the DataReader. This count increases when a newly matched DataWriter asserts its liveliness for the first time or when a DataWriter previously considered to be not alive reasserts its liveliness. The count decreases when a DataWriter considered alive fails to assert its liveliness and becomes not alive, whether because it was deleted normally or for some other reason.
not_alive_count The total count of DataWriters currently writing the Topic read by the DataReader that are no longer asserting their liveliness. This count increases when a DataWriter considered alive fails to assert its liveliness and becomes not alive for some reason other than the normal deletion of that DataWriter. It decreases when a previously not alive DataWriter either reasserts its liveliness or is deleted normally.
alive_count_change The change in the alive_count since the last time the listener was called or the status was read.
not_alive_count_change The change in the not_alive_count since the last time the listener was called or the status was read.
Section 2.2.3 DCPS PSM : IDL interface LivelinessChangedStatus
Change active_count to alive_count, inactive_count to not_alive_count, active_count_change to alive_count_change, and inactive_count_change to not_alive_count_change.
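The accounting rules resolved above (which events change alive_count and not_alive_count, and by how much) can be checked against a small illustrative model. The event names below are hypothetical labels for the scenarios listed in the resolution; only the field names mirror the spec.

```python
# Illustrative model of the LivelinessChangedStatus accounting rules.
# Not a DDS implementation; event labels are made up for this sketch.

class LivelinessChangedStatus:
    def __init__(self):
        self.alive_count = 0
        self.not_alive_count = 0
        self.alive_count_change = 0
        self.not_alive_count_change = 0

    def _apply(self, d_alive, d_not_alive):
        self.alive_count += d_alive
        self.not_alive_count += d_not_alive
        self.alive_count_change += d_alive
        self.not_alive_count_change += d_not_alive

    def on_event(self, event):
        deltas = {
            "writer_lost_liveliness": (-1, +1),
            "writer_reasserted_liveliness": (+1, -1),
            "alive_writer_deleted": (-1, 0),
            "not_alive_writer_deleted": (0, -1),
            "new_writer_asserted_liveliness": (+1, 0),
            # A newly discovered writer that has not yet asserted liveliness
            # leaves the status unchanged, so it has no entry here.
        }
        if event in deltas:
            self._apply(*deltas[event])

    def read(self):
        # Reading resets the *_change fields, per the "since the last time
        # the listener was called or the status was read" semantics.
        snapshot = (self.alive_count, self.not_alive_count,
                    self.alive_count_change, self.not_alive_count_change)
        self.alive_count_change = 0
        self.not_alive_count_change = 0
        return snapshot
```

Walking a writer through assert, lose, and reassert reproduces exactly the deltas enumerated in the resolution.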
Actions taken:
March 14, 2005: received issue
August 1, 2005: closed issue
Issue 8573: (R#133) Clarify meaning of LivelinessLost and DeadlineMissed (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The specification does not state whether the LivelinessLostStatus should be considered changed (and the on_liveliness_lost listener callback called) once, when the writer first loses its liveliness, or after every period in which the writer fails to assert its liveliness. For example, if a writer with liveliness set to MANUAL_BY_TOPIC does not write for two LIVELINESS lease_duration periods, should the writer listener's on_liveliness_lost callback be called once per lease_duration, or only once after the first lease_duration?
The analogous ambiguity exists with respect to the *DeadlineMissedStatuses.
Proposed Resolution:
The cases are somewhat different in that the deadline is under the application's control while a loss of liveliness is not (e.g. it may occur as a result of a network failure). Therefore, the *DeadlineMissed statuses should be considered changed (and the listeners invoked) at the end of every deadline period. The LivelinessLostStatus, on the other hand, should be considered changed only when a writer's state changes from alive to not alive, not after every lease_duration period thereafter.
Proposed Revised Text:
Change the description in the RequestedDeadlineMissed total_count row of the table in 2.1.4.1 (page 2-118) to read: "Total cumulative number of missed deadlines detected for any instance read by the DataReader. Missed deadlines accumulate; that is, each deadline period the total_count will be incremented by one for each instance for which data was not received."
Change the description in the LivelinessLostStatus total_count row of the table in 2.1.4.1 (page 2-119) to read: "Total cumulative number of times that a previously-alive DataWriter became not alive due to a failure to actively signal its liveliness within its offered liveliness period. This count does not change when an already not alive DataWriter simply remains not alive for another liveliness period."
Change the description in the OfferedDeadlineMissed total_count row of the table in 2.1.4.1 (page 2-119) to read: "Total cumulative number of offered deadline periods elapsed during which a DataWriter failed to provide data. Missed deadlines accumulate; that is, each deadline period the total_count will be incremented by one."
Resolution:
The cases are somewhat different in that the deadline is under the application's control while a loss of liveliness is not (e.g. it may occur as a result of a network failure). Therefore, the *DeadlineMissed statuses should be considered changed (and the listeners invoked) at the end of every deadline period. The LivelinessLostStatus, on the other hand, should be considered changed only when a writer's state changes from alive to not alive, not after every lease_duration period thereafter.
Revised Text:
Change the description in the RequestedDeadlineMissedStatus total_count row of the table in 2.1.4.1 (page 2-118)
From
Total cumulative count of the missed deadlines detected for any instance read by the DataReader. Missed deadlines accumulate, that is, each deadline period the total_count will be incremented by one for each instance for which data was not received.
To
Total cumulative number of missed deadlines detected for any instance read by the DataReader. Missed deadlines accumulate; that is, each deadline period the total_count will be incremented by one for each instance for which data was not received.
Change the description in the LivelinessLostStatus total_count row of the table in 2.1.4.1 (page 2-119)
From
Total cumulative count of the number of times the DataWriter failed to actively signal its liveliness within the offered liveliness period
To
Total cumulative number of times that a previously-alive DataWriter became not alive due to a failure to actively signal its liveliness within its offered liveliness period. This count does not change when an already not alive DataWriter simply remains not alive for another liveliness period.
Change the description in the OfferedDeadlineMissed total_count row of the table in 2.1.4.1 (page 2-119)
From:
Total cumulative number of times the DataWriter failed to write within its offered deadline.
To:
Total cumulative number of offered deadline periods elapsed during which a DataWriter failed to provide data. Missed deadlines accumulate; that is, each deadline period the total_count will be incremented by one.
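The counting distinction resolved above (missed deadlines accumulate every period, while liveliness is lost only once at the alive-to-not-alive transition) can be stated as two trivial functions. These are illustrative only; the names and signatures are not part of the DDS API.

```python
# Sketch of the resolved counting rules. Hypothetical helper functions.

def deadline_missed_total(periods_without_data: int) -> int:
    # *DeadlineMissed total_count accumulates: one increment per elapsed
    # deadline period during which no data was provided.
    return periods_without_data

def liveliness_lost_total(periods_without_assertion: int) -> int:
    # LivelinessLostStatus total_count changes only on the alive ->
    # not-alive transition; remaining not alive for further lease_duration
    # periods adds nothing.
    return 1 if periods_without_assertion >= 1 else 0
```

So a MANUAL_BY_TOPIC writer silent for two lease_duration periods loses liveliness once, but a writer silent for two deadline periods misses two deadlines.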
Actions taken:
March 14, 2005: received issue
August 1, 2005: closed issue
Issue 8574: (R#136) Additional operations allowed on disabled entities (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The specification states that before an entity is enabled, the only operations that can be invoked on it are get/set listener, get/set QoS, get_statuscondition, and factory methods. This list is unnecessarily restrictive.
Proposed Resolution:
The following operations should also be allowed:
· TopicDescription::get_name
· TopicDescription::get_type_name
· DomainParticipant::lookup_topicdescription
· Publisher::lookup_datawriter
· Subscriber::lookup_datareader
· Entity::get_status_changes and all get_*_status operations. Note that no status is considered 'triggered' when an Entity is disabled.
· All get_/set_default_*_qos operations
Proposed Revised Text:
Revise the fourth paragraph of section 2.1.2.1.1.7 enable to read:
If an Entity has not yet been enabled, the following operations may be invoked on it in general:
· Operations to set or get an Entity's QoS policies (including default QoS policies) and listener
· get_statuscondition
· 'factory' operations
· get_status_changes and other get status operations (although no status of a disabled entity is ever considered changed)
· 'lookup' operations
Other operations may explicitly state that they may be called on disabled entities; those that do not will return the error NOT_ENABLED.
Add the following sentence to sections 2.1.2.3.1.2 (TopicDescription::get_type_name) and 2.1.2.3.1.3 (TopicDescription::get_name): "This operation may be invoked on a Topic that is not yet enabled or on a ContentFilteredTopic or MultiTopic based on such a Topic."
Resolution:
The following operations should also be allowed:
· TopicDescription::get_name
· TopicDescription::get_type_name
· DomainParticipant::lookup_topicdescription
· Publisher::lookup_datawriter
· Subscriber::lookup_datareader
· Entity::get_status_changes and all get_*_status operations. Note that no status is considered 'triggered' when an Entity is disabled.
· All get_/set_default_*_qos operations
Revised Text:
section 2.1.2.1.1.7 enable
Replace the fourth paragraph
If an Entity has not yet been enabled, the only operations that can be invoked on it are the ones to set or get the QoS policies and the listener, the ones that get the StatusCondition, and the 'factory' operations that create other entities. Other operations will return the error NOT_ENABLED.
With:
If an Entity has not yet been enabled, the following kinds of operations may be invoked on it:
· Operations to set or get an Entity's QoS policies (including default QoS policies) and listener
· get_statuscondition
· 'factory' operations
· get_status_changes and other get status operations (although the status of a disabled entity never changes)
· 'lookup' operations
Other operations may explicitly state that they may be called on disabled entities; those that do not will return the error NOT_ENABLED.
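The gating rule above (a closed list of operation kinds permitted before enable, everything else returning NOT_ENABLED) can be sketched as a minimal model. This is not the DDS Entity API; the operation labels and the invoke helper are hypothetical.

```python
# Hedged sketch of the enable-gating rule: operations outside the permitted
# kinds return NOT_ENABLED until enable() is called. Illustrative names only.

class Entity:
    ALLOWED_WHEN_DISABLED = {
        "set_qos", "get_qos",                # QoS access (incl. defaults)
        "set_listener", "get_listener",      # listener access
        "get_statuscondition",
        "get_status_changes",                # status reads (never "changed"
                                             # while disabled)
        "create_child",                      # 'factory' operations
        "lookup",                            # 'lookup' operations
    }

    def __init__(self):
        self.enabled = False

    def enable(self):
        self.enabled = True
        return "OK"

    def invoke(self, operation: str):
        if not self.enabled and operation not in self.ALLOWED_WHEN_DISABLED:
            return "NOT_ENABLED"
        return "OK"
```

Before enable(), a QoS read succeeds while a write is rejected; after enable(), both succeed.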
Actions taken:
March 14, 2005: received issue
August 1, 2005: closed issue
Issue 8575: (R#144) Default value for DataWriter RELIABILITY QoS (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The specification states that the default value of the RELIABILITY QoS policy on a DataWriter is BEST_EFFORT; however, this makes it automatically incompatible with readers that request a RELIABLE value. This situation is not a problem per se, but it means that applications desiring RELIABLE communications must change the default configuration in two places: both with the reader and with the writer.
Proposed Resolution:
Changing the default value (on the DataWriter only) to RELIABLE would make the initial configuration of DDS applications simpler. The default behavior would still be the same because the DataReader would still default to BEST_EFFORT and therefore the default communication would be BEST_EFFORT. However, applications desiring a RELIABLE setting would have to change the defaults in only one place: with the DataReader.
Proposed Revised Text:
Append the following sentence to the "Meaning" column of the RELIABILITY RELIABLE row of the table in 2.1.3: "This is the default value for DataWriters."
The final sentence in the "Meaning" column of the RELIABILITY BEST_EFFORT row of the table in 2.1.3 currently states: "This is the default value." This sentence should be amended: "This is the default value for DataReaders and Topics."
Resolution:
Changing the default value (on the DataWriter only) to RELIABLE would make the initial configuration of DDS applications simpler. The default behavior would still be the same because the DataReader would still default to BEST_EFFORT and therefore the default communication would be BEST_EFFORT. However, applications desiring a RELIABLE setting would have to change the defaults in only one place: with the DataReader.
Revised Text:
Section 2.1.3 Supported Qos Policies
Append the following sentence to the "Meaning" column of the RELIABILITY RELIABLE row of the table in 2.1.3: "This is the default value for DataWriters."
Replace final sentence in the "Meaning" column of the RELIABILITY BEST_EFFORT row of the table in 2.1.3
From: "This is the default value."
To: "This is the default value for DataReaders and Topics."
Actions taken:
March 14, 2005: received issue
August 1, 2005: closed issue
Issue 8576: (R#150) Ambiguous description of create_topic behavior (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The description of the DomainParticipant::create_topic operation in section 2.1.2.2.1.5 states that if an existing topic is found with the same name and QoS, that Topic will be returned; no duplicate Topic will be created. However, the specification fails to describe what will happen in the event that the name and QoS match but the listener is different. Additionally, the behavior places a barrier of understanding before the user, because create_topic behaves differently from all other factory methods in this respect.
Proposed Resolution:
Revise the specification to remove the language about reusing Topics. The create_topic operation, like all other 'create' operations in the specification, should always return a new Topic.
Proposed Revised Text:
Section 2.1.2.2.1.5 contains the following paragraphs; they should both be stricken from the specification:
The implementation of create_topic will automatically perform a lookup_topicdescription for the specified topic_name. If a Topic is found, then the QoS and type_name of the found Topic are matched against the ones specified on the create_topic call. If there is an exact match, the existing Topic is returned. If there is no match the operation will fail. The consequence is that the application can never create more than one Topic with the same topic_name per DomainParticipant. Subsequent attempts will either return the existing Topic (i.e., behave like find_topic) or else fail.
If a Topic is obtained multiple times by means of a create_topic, it must also be deleted that same number of times using delete_topic.
Resolution:
Revise the specification to remove the language about reusing Topics. The create_topic operation, like all other 'create' operations in the specification, should always return a new Topic.
Revised Text:
Section 2.1.2.2.1.5 contains the following paragraphs; they should both be removed from the specification:
The implementation of create_topic will automatically perform a lookup_topicdescription for the specified topic_name. If a Topic is found, then the QoS and type_name of the found Topic are matched against the ones specified on the create_topic call. If there is an exact match, the existing Topic is returned. If there is no match the operation will fail. The consequence is that the application can never create more than one Topic with the same topic_name per DomainParticipant. Subsequent attempts will either return the existing Topic (i.e., behave like find_topic) or else fail.
If a Topic is obtained multiple times by means of a create_topic, it must also be deleted that same number of times using delete_topic.
Actions taken:
March 14, 2005: received issue
August 1, 2005: closed issue
Issue 8577: (R#178) Unclear behavior of coherent changes when communication interrupted (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The Publisher entity has operations begin_coherent_changes and end_coherent_changes that allow groups of updates to be received by subscriptions as if they were a single update. Although the specification already contains a general statement about receivers not making updates available until all have been received, no specific mention of communication interruptions or configuration changes is made. This omission has caused questions to be raised with regard to the interactions between coherent changes and partitions, late-joining DataReaders, and network failures.
Proposed Resolution:
The specification should be amended to state that a Publisher should not prevent users from changing its partitions while it is in the middle of publishing a set of coherent changes, as the effect of doing so is no different than that of any other connectivity change. However, in the event that connectivity changes occur between the publishers and receivers of data such that some receiver is not able to obtain the entire set, that receiver must act as if it had received none of the data.
Proposed Revised Text:
Append the following text to the second paragraph of section 2.1.2.4.1.10 begin_coherent_changes:
A connectivity change may occur in the middle of a set of coherent changes; for example, the set of partitions used by the Publisher or one of its Subscribers may change, a late-joining DataReader may appear on the network, or a communication failure may occur. In the event that such a change prevents an entity from receiving the entire set of coherent changes, that entity must behave as if it had received none of the set.
Resolution:
The specification should be amended to state that a Publisher should not prevent users from changing its partitions while it is in the middle of publishing a set of coherent changes, as the effect of doing so is no different than that of any other connectivity change. However, in the event that connectivity changes occur between the publishers and receivers of data such that some receiver is not able to obtain the entire set, that receiver must act as if it had received none of the data.
Revised Text:
Append the following text to the second paragraph of section 2.1.2.4.1.10 begin_coherent_changes:
A connectivity change may occur in the middle of a set of coherent changes; for example, the set of partitions used by the Publisher or one of its Subscribers may change, a late-joining DataReader may appear on the network, or a communication failure may occur. In the event that such a change prevents an entity from receiving the entire set of coherent changes, that entity must behave as if it had received none of the set.
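The required receiver behavior can be sketched with a small simulation (plain Python, not a DDS API; the CoherentReader class and its method names are invented for illustration): samples belonging to a coherent set are buffered and delivered only when the set completes, and an interruption discards the partial set.

```python
class CoherentReader:
    """Toy model of a reader receiving a coherent set of changes.

    Samples arriving between begin_set() and end_set() are buffered and
    made available only when the set completes. An interruption (e.g. a
    partition change or communication failure) discards the partial set,
    so the reader behaves as if it had received none of it.
    """

    def __init__(self):
        self.delivered = []   # samples made available to the application
        self._pending = []    # samples of the in-progress coherent set
        self._in_set = False

    def begin_set(self):
        self._in_set = True
        self._pending = []

    def receive(self, sample):
        if self._in_set:
            self._pending.append(sample)
        else:
            self.delivered.append(sample)

    def end_set(self):
        # Set completed: make all buffered samples available at once.
        self.delivered.extend(self._pending)
        self._pending = []
        self._in_set = False

    def connectivity_lost(self):
        # Interruption mid-set: act as if none of the set was received.
        self._pending = []
        self._in_set = False


reader = CoherentReader()
reader.begin_set()
reader.receive("x=1")
reader.connectivity_lost()      # partial set is dropped entirely
reader.begin_set()
reader.receive("x=2")
reader.receive("y=2")
reader.end_set()                # complete set delivered atomically
print(reader.delivered)         # ['x=2', 'y=2']
```

The first, interrupted set leaves no trace in the delivered samples, matching the "all or nothing" requirement in the revised text.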
Actions taken:
March 14, 2005: received issue
August 1, 2005: closed issue
Issue 8578: (R#179) Built-in DataReaders should have TRANSIENT_LOCAL durability (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The table in 2.1.5 says that built-in DataReaders should have TRANSIENT durability. However, the description of that durability states that support for it is optional.
Proposed Resolution:
The specification should be changed to state that built-in readers should have TRANSIENT_LOCAL durability.
Proposed Revised Text:
Change TRANSIENT to TRANSIENT_LOCAL in the DURABILITY row of the table on page 2-129.
Resolution:
The specification should be changed to state that built-in readers should have TRANSIENT_LOCAL durability.
Revised Text:
Section 2.1.5 Built-in Topics, Built-In Subscriber and DataReader QoS table
Change TRANSIENT to TRANSIENT_LOCAL in the DURABILITY row
Actions taken:
March 14, 2005: received issue
August 1, 2005: closed issue
Issue 8579: (R#180) Clarify which entities appear as instances to built-in readers (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The specification does not explicitly state whether clients of the built-in DataReaders should be able to discover other entities that belong to the same participant by those means. In other words, if a DataReader 'A' belongs (indirectly) to a DomainParticipant 'B', will information about A appear when one reads from the subscription built-in reader of B?
We believe that most users will not want to "discover" entities they created themselves; the purpose of the built-in entities is to discover what exists elsewhere on the network. Furthermore, there is currently no way for a client of the built-in reader to distinguish between entities belonging to its own DomainParticipant and those that exist elsewhere on the network.
Proposed Resolution:
Clarify the descriptions of the built-in topics to indicate that data pertaining to entities of the same participant will not be made available there.
A mechanism to determine whether an instance handle (read from a built-in topic or obtained through any other means) represents a particular known entity is generally useful. Add the following operations:
· InstanceHandle_t Entity::get_instance_handle()
· boolean DomainParticipant::contains_entity(InstanceHandle_t a_handle)
Proposed Revised Text:
Add the following sentence to the end of the first paragraph on page 2-129:
A built-in DataReader object obtained from a given participant will not provide data pertaining to other entities created (directly or indirectly) from that participant under the assumption that such objects are already known to the application.
Add the following row to the Entity Class table in section 2.1.2.1.1:
get_instance_handle InstanceHandle_t
Add the description of the new operation as a new section:
2.1.2.1.1.8 get_instance_handle
Get the instance handle that represents the Entity in the built-in topic data, in various statuses, and elsewhere.
Add the following row to the DomainParticipant Class table in section 2.1.2.2.1:
contains_entity boolean
a_handle InstanceHandle_t
Add the description of the new operation as a new section:
2.1.2.2.1.26 contains_entity
This operation checks whether or not the given instance handle represents an entity that was created, directly or indirectly, from the DomainParticipant. The instance handle for an Entity may be obtained from built-in topic data, from various statuses, or from the Entity operation get_instance_handle.
Add the new operations to the IDL PSM in section 2.2.3:
interface Entity {
InstanceHandle_t get_instance_handle();
};
interface DomainParticipant : Entity {
boolean contains_entity(InstanceHandle_t a_handle);
};
Resolution:
Clarify the descriptions of the built-in topics to indicate that data pertaining to entities of the same participant will not be made available there.
A mechanism to determine whether an instance handle (read from a built-in topic or obtained through any other means) represents a particular known entity is generally useful. Add the following operations:
· InstanceHandle_t Entity::get_instance_handle()
· boolean DomainParticipant::contains_entity(InstanceHandle_t a_handle)
Revised Text:
Section 2.1.5 Built-in Topics
After the paragraph:
The information that is accessible about the remote entities by means of the built-in topics includes all the QoS policies that apply to the corresponding remote Entity. This QoS policies appear as normal 'data' fields inside the data read by means of the built-in Topic. Additional information is provided to identify the Entity and facilitate the application logic.
Add the paragraph:
A built-in DataReader obtained from a given Participant will not provide data pertaining to Entities created from that same Participant under the assumption that such entities are already known to the application that created them.
Section 2.1.2.1.1 the Entity Class table
Add the following row to the table
get_instance_handle InstanceHandle_t
Add section 2.1.2.1.1.8
2.1.2.1.1.8 get_instance_handle
This operation returns the InstanceHandle_t that represents the Entity.
section 2.1.2.2.1 DomainParticipant Class table
Add the following row to the table:
contains_entity boolean
a_handle InstanceHandle_t
Add section 2.1.2.2.1.31:
2.1.2.2.1.31 contains_entity
This operation checks whether or not the given a_handle represents an Entity that was created from the DomainParticipant. The containment applies recursively: it applies both to entities (TopicDescription, Publisher, or Subscriber) created directly using the DomainParticipant and to entities created using a contained Publisher or Subscriber as the factory, and so forth.
The instance handle for an Entity may be obtained from built-in topic data, from various statuses, or from the Entity operation get_instance_handle.
Section 2.2.3 DCPS PSM : IDL interface Entity
Add the operation get_instance_handle:
interface Entity {
…
InstanceHandle_t get_instance_handle();
…
};
Section 2.2.3 DCPS PSM : IDL interface DomainParticipant
Add the operation contains_entity:
interface DomainParticipant : Entity {
…
boolean contains_entity(InstanceHandle_t a_handle);
…
};
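The recursive containment check can be illustrated with a small Python sketch (a toy model, not a DDS API; the classes and the handle scheme are invented for illustration). A participant owns its direct children and, transitively, entities created from those children, and contains_entity walks that tree:

```python
import itertools

_next_handle = itertools.count(1)   # toy instance-handle allocator


class Entity:
    """Toy model of a DDS Entity with an instance handle."""

    def __init__(self):
        self._handle = next(_next_handle)
        self._children = []

    def get_instance_handle(self):
        return self._handle


class Publisher(Entity):
    def create_datawriter(self):
        w = Entity()
        self._children.append(w)
        return w


class DomainParticipant(Entity):
    def create_publisher(self):
        p = Publisher()
        self._children.append(p)
        return p

    def contains_entity(self, a_handle):
        # Recursive containment: direct children, their children, etc.
        stack = list(self._children)
        while stack:
            e = stack.pop()
            if e.get_instance_handle() == a_handle:
                return True
            stack.extend(e._children)
        return False


dp = DomainParticipant()
pub = dp.create_publisher()
writer = pub.create_datawriter()     # created indirectly via the publisher
other = Entity()                     # created outside the participant

print(dp.contains_entity(writer.get_instance_handle()))  # True
print(dp.contains_entity(other.get_instance_handle()))   # False
```

The DataWriter is found even though it was created through a contained Publisher, while the unrelated entity's handle is rejected.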
Actions taken:
March 14, 2005: received issue
August 1, 2005: closed issue
Issue 8580: (R#181) Clarify listener and mask behavior with respect to built-in entities (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: This issue subsumes two related issues.
· Presumably, listener callbacks pertaining to built-in entities should fall back to the DomainParticipantListener of their containing DomainParticipant in the usual way in the event that those built-in entities have not requested to receive the callbacks themselves. However, this behavior may prove inconvenient in practice: users, even those completely uninterested in built-in entities, must recognize the callbacks pertaining to those entities and deal with them in some way whenever they install a participant listener. Implementers are also constrained, as they will find it difficult to ensure the correct listener behavior while preserving the freedom to create built-in entities on demand.
· The specification does not state the behavior of installing a nil listener or what mask values are acceptable in that case.
Proposed Resolution:
Installing a nil listener should be equivalent to installing a listener that does nothing. It is acceptable to provide a mask with a nil listener; in that case, no callback will be delivered to the entity or to its containing entities.
A DomainParticipant's built-in Subscriber and all of its built-in Topics should by default have nil listeners with all mask bits set. Therefore their callbacks will not propagate back to the DomainParticipantListener unless the user explicitly calls set_listener on them.
Proposed Revised Text:
Insert a new paragraph after the existing first paragraph of section 2.1.2.1.1.3 set_listener:
It is permitted to set a nil listener with any listener mask; it is behaviorally equivalent to installing a listener that does nothing.
Append a new sentence to the final bullet in the list in section 2.1.4.3.1:
Any statuses appearing in the mask associated with a nil listener will neither be dispatched to the entity itself nor propagated to its containing entities.
Insert the following paragraph immediately following the table of built-in entity QoS on page 2-129:
Built-in entities have default listener settings as well. A DomainParticipant's built-in Subscriber and all of its built-in Topics have nil listeners with all statuses appearing in their listener masks. The built-in DataReaders have nil listeners with no statuses in their masks.
Resolution:
Installing a nil listener (i.e., 'clearing' the listener) should be equivalent to installing a listener that does nothing. It is acceptable to provide a mask with a nil listener; in that case, no callback will be delivered to the entity or to its containing entities.
A DomainParticipant's built-in Subscriber and all of its built-in Topics should by default have nil listeners with all mask bits set. Therefore their callbacks will not propagate back to the DomainParticipantListener unless the user explicitly calls set_listener on them.
Revised Text:
Section 2.1.2.1.1.3 set_listener:
Insert a new paragraph after the existing first paragraph "This operation installs …"
It is permitted to use 'nil' as the value of the listener. The 'nil' listener behaves as a Listener whose operations perform no action.
Section 2.1.4.3.1 Listener Access to Plain Communication Status
Append a new sentence to the final bullet in the list "When a plain communication status changes…". The resulting bullet is:
When a plain communication status changes, the middleware triggers the most 'specific' relevant listener operation that is enabled. In case the most specific relevant listener operation corresponds to an application-installed 'nil' listener, the operation will be considered handled by a NO-OP operation.
Section 2.1.5 Built-in Topics
Insert the following paragraph immediately following the table of built-in entity QoS on page 2-128:
Built-in entities have default listener settings as well. The built-in Subscriber and all of its built-in Topics have nil listeners with all statuses appearing in their listener masks. The built-in DataReaders have nil listeners with no statuses in their masks.
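The resulting dispatch rule can be sketched as a small Python model (not a DDS API; the Entity class, the mask sets, and the status name are invented for illustration). A status is consumed by the most specific entity whose mask contains it; a nil listener with the status in its mask is a NO-OP that still consumes the status, so it never reaches the participant listener:

```python
class Entity:
    """Toy model of listener dispatch along the containment chain."""

    def __init__(self, parent=None, listener=None, mask=()):
        self.parent = parent
        self.listener = listener   # None models a 'nil' listener
        self.mask = set(mask)      # statuses this listener handles

    def notify(self, status, log):
        # Walk from the most specific entity up toward the participant.
        e = self
        while e is not None:
            if status in e.mask:
                if e.listener is not None:
                    e.listener(status, log)
                # A nil listener whose mask contains the status acts as
                # a NO-OP: the status is handled and does not propagate.
                return
            e = e.parent
        log.append("unhandled: " + status)


log = []
participant = Entity(listener=lambda s, l: l.append("participant saw " + s),
                     mask={"DATA_ON_READERS"})

# Built-in Subscriber: nil listener with all statuses in its mask, so
# its callbacks never fall back to the DomainParticipantListener.
builtin_sub = Entity(parent=participant, listener=None,
                     mask={"DATA_ON_READERS"})
builtin_sub.notify("DATA_ON_READERS", log)
print(log)                      # [] -- swallowed by the nil listener

# A user subscriber with an empty mask propagates to the participant.
user_sub = Entity(parent=participant, mask=set())
user_sub.notify("DATA_ON_READERS", log)
print(log)                      # ['participant saw DATA_ON_READERS']
```

This is exactly the effect the resolution intends: built-in callbacks are silently absorbed by default, while ordinary entities still fall back to their containing entities' listeners.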
Actions taken:
March 14, 2005: received issue
August 1, 2005: closed issue
Issue 8581: (R#182) Clarify mapping of PIM 'out' to PSM 'inout' (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: There is already a convention in the specification of mapping an out parameter in the PIM to an inout parameter in the IDL PSM. This convention is useful because it preserves more precise semantics in the PIM while allowing for more performant implementations in language PSMs based on the IDL PSM. However, the convention is never explicitly described in the specification, which could lead to confusion among readers.
Proposed Resolution:
The section 2.2.2 PIM to PSM Mapping Rules should explicitly describe and endorse the aforementioned convention.
Proposed Revised Text:
Insert a new paragraph after the current first paragraph in section 2.2.2:
'Out' parameters in the PIM are conventionally mapped to 'inout' parameters in the PSM in order to minimize the memory allocation performed by the Service and allow for more efficient implementations. The intended meaning is that the caller of such an operation should provide an object to serve as a "container" and that the operation will then "fill in" the state of that object appropriately.
Resolution:
The section 2.2.2 PIM to PSM Mapping Rules should explicitly describe and endorse the aforementioned convention.
Revised Text:
Section 2.2.2
Insert a new paragraph after the current first paragraph "A key concern in the development …"
'Out' parameters in the PIM are conventionally mapped to 'inout' parameters in the PSM in order to minimize the memory allocation performed by the Service and allow for more efficient implementations. The intended meaning is that the caller of such an operation should provide an object to serve as a "container" and that the operation will then "fill in" the state of that object appropriately.
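The caller-provided "container" convention can be illustrated with a Python sketch (the names mirror the specification's IDL, but this is a toy model, not a DDS API, and the QoS fields shown are invented for illustration):

```python
OK, ERROR = "RETCODE_OK", "RETCODE_ERROR"   # illustrative return codes


class DomainParticipantQos:
    """Caller-allocated 'container' object for QoS values."""

    def __init__(self):
        self.user_data = b""
        self.entity_factory_autoenable = True


class DomainParticipant:
    def __init__(self):
        # Internal QoS state held by the Service (illustrative values).
        self._qos = {"user_data": b"app-1",
                     "entity_factory_autoenable": False}

    def get_qos(self, qos):
        """PIM 'out' parameter mapped to an inout container: the caller
        allocates `qos` and the operation fills in its state, so the
        Service performs no memory allocation of its own."""
        if qos is None:
            return ERROR   # nothing to fill in
        qos.user_data = self._qos["user_data"]
        qos.entity_factory_autoenable = self._qos["entity_factory_autoenable"]
        return OK


dp = DomainParticipant()
qos = DomainParticipantQos()       # caller provides the container...
retcode = dp.get_qos(qos)          # ...and the operation fills it in
print(retcode, qos.user_data)      # RETCODE_OK b'app-1'
```

The container can be reused across calls, which is the performance benefit the mapping rule is intended to allow.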
Actions taken:
March 14, 2005: received issue
August 1, 2005: closed issue
Issue 8582: (T#6) Inconsistent name: StatusKindMask (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: In most cases in the specification, when constants of a type named like <something>Kind need to be combined together, a type <something>Mask is defined in the IDL PSM in addition to <something>Kind. The case of StatusKind is inconsistent, however: its mask type is called StatusKindMask, not StatusMask.
Proposed Resolution:
Replace StatusKindMask with StatusMask. Clarify the name mapping convention in section 2.2.2.
Proposed Revised Text:
Append the following sentence to the fourth paragraph of section 2.2.2: "The name of the mask type is formed by replacing the word 'Kind' with the word 'Mask'."
Replace "StatusKindMask" with "StatusMask" everywhere it appears in the IDL PSM in section 2.2.3.
Resolution:
Replace StatusKindMask with StatusMask. Clarify the name mapping convention in section 2.2.2.
Revised Text:
Section 2.2.2
Append the following sentence to the fourth paragraph of section 2.2.2 starting "Enumerations have been mapped …"
The name of the mask type is formed by replacing the word 'Kind' with the word 'Mask' as in StatusMask, SampleStateMask, etc.
Section 2.2.3 DCPS PSM : IDL
Replace:
typedef unsigned long StatusKindMask; // bit-mask StatusKind
With
typedef unsigned long StatusMask; // bit-mask StatusKind
interface StatusCondition replace
StatusKindMask get_enabled_statuses();
ReturnCode_t set_enabled_statuses(in StatusKindMask mask);
With
StatusMask get_enabled_statuses();
ReturnCode_t set_enabled_statuses(in StatusMask mask);
interface Entity replace
// ReturnCode_t set_listener(in Listener l, in StatusKindMask mask);
StatusKindMask get_status_changes();
With
// ReturnCode_t set_listener(in Listener l, in StatusMask mask);
StatusMask get_status_changes();
interface DomainParticipant replace
ReturnCode_t set_listener(in DomainParticipantListener a_listener,
in StatusKindMask mask);
With
ReturnCode_t set_listener(in DomainParticipantListener a_listener,
in StatusMask mask);
interface Publisher replace
ReturnCode_t set_listener(in PublisherListener a_listener,
in StatusKindMask mask);
With
ReturnCode_t set_listener(in PublisherListener a_listener,
in StatusMask mask);
interface DataWriter replace
ReturnCode_t set_listener(in DataWriterListener a_listener,
in StatusKindMask mask);
With
ReturnCode_t set_listener(in DataWriterListener a_listener,
in StatusMask mask);
interface Subscriber replace:
ReturnCode_t set_listener(in SubscriberListener a_listener,
in StatusKindMask mask);
With
ReturnCode_t set_listener(in SubscriberListener a_listener,
in StatusMask mask);
interface DataReader replace:
ReturnCode_t set_listener(in DataReaderListener a_listener,
in StatusKindMask mask);
With
ReturnCode_t set_listener(in DataReaderListener a_listener,
in StatusMask mask);
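The Kind/Mask relationship the renamed type expresses can be sketched in a few lines of Python (the bit values shown are illustrative only, not the normative constants): each StatusKind occupies a distinct bit, and a StatusMask is the bitwise OR of the kinds it contains.

```python
# Each StatusKind is a distinct bit (values illustrative only).
INCONSISTENT_TOPIC_STATUS      = 0x0001
OFFERED_DEADLINE_MISSED_STATUS = 0x0002
LIVELINESS_LOST_STATUS         = 0x0004
DATA_AVAILABLE_STATUS          = 0x0008


def mask_contains(mask, kind):
    """True if the StatusKind bit is set in the StatusMask."""
    return (mask & kind) != 0


# A StatusMask is formed by OR-ing StatusKind constants together.
mask = LIVELINESS_LOST_STATUS | DATA_AVAILABLE_STATUS

print(mask_contains(mask, DATA_AVAILABLE_STATUS))       # True
print(mask_contains(mask, INCONSISTENT_TOPIC_STATUS))   # False
```

The naming rule in the resolution (replace 'Kind' with 'Mask') makes this pairing mechanical: SampleStateKind combines into SampleStateMask, StatusKind into StatusMask, and so on.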
Actions taken:
March 14, 2005: received issue
August 1, 2005: closed issue
Issue 8775: Page: 2-8 (data-distribution-rtf)
Click here for this issue's archive.
Source: Vanderbilt University (Dr. Douglas C. Schmidt, schmidt(at)dre.vanderbilt.edu)
Nature: Clarification
Severity: Minor
Summary: I think the following sentence is incorrect: "a Listener is used to provide a callback for synchronous access and a WaitSet associated with one or several Condition objects". As far as I can tell, these roles should be reversed. There are a number of other bugs in the spec; I'll try to report them as time permits.
Resolution:
Revised Text:
Actions taken:
May 10, 2005: received issue
August 1, 2005: closed issue
Issue 9478: Inconsistencies between PIM and PSM in the prototype of get_qos() methods (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
According to the PIM, the get_qos() method returns QosPolicy []. According to the PSM, the qos is a parameter and the method returns void.
Proposed Resolution:
The PIM should be updated to be consistent with the PSM.
In addition, the return value in both the PIM and PSM should be changed from void to ReturnCode_t.
Proposed Revised Text:
Section 2.1.2.1.1 Entity Class; Entity class table
Change row from:
abstract get_qos QosPolicy []
To
abstract get_qos ReturnCode_t
out: qos_list QosPolicy[]
Section 2.1.2.2.1 Domain Module; DomainParticipant class table
Change row from:
abstract get_qos QosPolicy []
To
abstract get_qos ReturnCode_t
out: qos_list QosPolicy[]
Section 2.1.2.3.1 TopicDescription Class; Topic class table
Change row from:
abstract get_qos QosPolicy []
To
abstract get_qos ReturnCode_t
out: qos_list QosPolicy[]
Section 2.1.2.4.1 Publisher Class; Publisher class table
Change row from:
abstract get_qos QosPolicy []
To
abstract get_qos ReturnCode_t
out: qos_list QosPolicy[]
Section 2.1.2.4.2 DataWriter Class; DataWriter class table
Change row from:
abstract get_qos QosPolicy []
To
abstract get_qos ReturnCode_t
out: qos_list QosPolicy[]
Section 2.1.2.5.2 Subscriber Class; Subscriber class table
Change row from:
abstract get_qos QosPolicy []
To
abstract get_qos ReturnCode_t
out: qos_list QosPolicy[]
Section 2.1.2.5.3 DataReader Class; DataReader class table
Change row from:
abstract get_qos QosPolicy []
To
abstract get_qos ReturnCode_t
out: qos_list QosPolicy[]
Section 2.2.3 DCPS PSM : IDL
interface Entity
Change:
// void get_qos(inout EntityQos qos);
To
// ReturnCode_t get_qos(inout EntityQos qos);
Resolution:
The PIM should be updated to be consistent with the PSM.
In addition, the return value in both the PIM and the PSM should be changed from void to ReturnCode_t.
Revised Text:
In Entity Class table in 2.1.2.1.1; replace:
abstract get_qos QosPolicy []
With:
abstract get_qos ReturnCode_t
out: qos_list QosPolicy []
In DomainParticipant Class table in 2.1.2.2.1, DomainParticipantFactory Class table in 2.1.2.2.2, Topic Class table in 2.1.2.3.2, Publisher Class table in 2.1.2.4.1, DataWriter Class table in 2.1.2.4.2, Subscriber Class table in 2.1.2.5.2, and DataReader Class table in 2.1.2.5.3: replace:
(inherited) get_qos QosPolicy []
With:
(inherited) get_qos ReturnCode_t
out: qos_list QosPolicy []
In Section 2.2.3 DCPS PSM : IDL
interface Entity ; replace
// void get_qos(inout EntityQos qos);
With
// ReturnCode_t get_qos(inout EntityQos qos);
interface DomainParticipant ; replace
void get_qos(inout DomainParticipantQos qos);
With
ReturnCode_t get_qos(inout DomainParticipantQos qos);
interface Topic ; replace
void get_qos( inout TopicQos qos);
With
ReturnCode_t get_qos( inout TopicQos qos);
interface Publisher : ; replace
void get_qos(inout PublisherQos qos);
With
ReturnCode_t get_qos(inout PublisherQos qos);
interface DataWriter ; replace
void get_qos(inout DataWriterQos qos);
With
ReturnCode_t get_qos(inout DataWriterQos qos);
interface Subscriber ; replace
void get_qos(inout SubscriberQos qos);
With
ReturnCode_t get_qos(inout SubscriberQos qos);
interface DataReader ; replace
void get_qos(inout DataReaderQos qos);
With
ReturnCode_t get_qos(inout DataReaderQos qos);
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Discussion: The PIM should be updated to be consistent with the PSM.
In addition, the return value in both the PIM and the PSM should be changed from void to ReturnCode_t.
Issue 9479: Inconsistent prototype for Publisher's get_default_datawriter_qos() method (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
In the PSM it returns void. However, in the PIM it returns ReturnCode_t. Also, all other get_default_xxx_qos() methods return ReturnCode_t in both the PIM and the PSM.
Proposed Resolution:
The return code should be changed to ReturnCode_t in the PSM.
Proposed Revised Text:
Section 2.2.3 DCPS PSM : IDL
interface Publisher :
Replace
void get_default_datawriter_qos(inout DataWriterQos qos);
With
ReturnCode_t get_default_datawriter_qos(inout DataWriterQos qos);
Resolution: The return code should be changed to ReturnCode_t in the PSM.
Revised Text:
interface Publisher; replace
void get_default_datawriter_qos(inout DataWriterQos qos);
With
ReturnCode_t get_default_datawriter_qos(inout DataWriterQos qos);
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Issue 9480: String sequence should be a parameter and not return value (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
The string sequence parameter in the get_expression_parameters() method of the ContentFilteredTopic and MultiTopic, and in the get_query_parameters() method of the QueryCondition, is listed as the return value in both the PIM and the PSM.
It is desirable for the string sequence to be used as a parameter, for consistency and to allow for an error return.
Proposed Resolution:
The PIM and the PSM should have the string sequence as a parameter and the methods should return ReturnCode_t.
Proposed Revised Text:
Section 2.1.2.3.3 ContentFilteredTopic class; ContentFilteredTopic class table
Change row from:
get_expression_parameters string[]
To
get_expression_parameters ReturnCode_t
inout: expression_parameters string[]
Section 2.1.2.3.4 MultiTopic Class [optional]
Change row from:
get_expression_parameters string[]
To
get_expression_parameters ReturnCode_t
inout: expression_parameters string[]
Section 2.2.3 DCPS PSM : IDL
interface ContentFilteredTopic
Replace:
StringSeq get_expression_parameters();
With:
ReturnCode_t get_expression_parameters(inout StringSeq expression_parameters);
interface MultiTopic
Replace:
StringSeq get_expression_parameters();
With:
ReturnCode_t get_expression_parameters(inout StringSeq expression_parameters);
Resolution: see above
Revised Text:
Section 2.1.2.3.3 ContentFilteredTopic class; ContentFilteredTopic class table
Change row from:
get_expression_parameters String []
To
get_expression_parameters ReturnCode_t
out: expression_parameters String []
Section 2.1.2.3.4, MultiTopic class; MultiTopic class table:
Change row from:
get_expression_parameters String []
To
get_expression_parameters ReturnCode_t
out: expression_parameters String []
Section 2.1.2.5.9 QueryCondition Class table:
Change row from:
get_query_parameters String []
To
get_query_parameters ReturnCode_t
out: query_parameters String []
Section 2.2.3 DCPS PSM : IDL
interface ContentFilteredTopic
Replace:
StringSeq get_expression_parameters();
With:
ReturnCode_t get_expression_parameters(inout StringSeq expression_parameters);
interface MultiTopic
Replace:
StringSeq get_expression_parameters();
With:
ReturnCode_t get_expression_parameters(inout StringSeq expression_parameters);
interface QueryCondition
Replace:
StringSeq get_query_parameters();
With:
ReturnCode_t get_query_parameters(inout StringSeq query_parameters);
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Discussion: The PIM and the PSM should have the string sequence as a parameter and the methods should return ReturnCode_t.
Issue 9481: Mention of get_instance() operation on DomainParticipantFactory being static (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: (R#4) Mention of get_instance() operation on the DomainParticipantFactory being static in the wrong section.
The last paragraph of section 2.1.2.2.2.4 (lookup_participant), mentioning that get_instance() is a static operation, probably belongs in the preceding section 2.1.2.2.2.3 (get_instance).
Proposed Resolution:
Move the paragraph to the correct section.
Proposed Revised Text:
Section 2.1.2.2.2.4 lookup_participant
Remove the last paragraph:
The get_instance operation is a static operation implemented using the syntax of the native language and can therefore not be expressed in the IDL PSM.
Section 2.1.2.2.2.3 get_instance
Add the paragraph removed from above:
The get_instance operation is a static operation implemented using the syntax of the native language and can therefore not be expressed in the IDL PSM.
Resolution: Move the paragraph to the correct section.
Revised Text:
Section 2.1.2.2.2.4 lookup_participant; remove the last paragraph:
The get_instance operation is a static operation implemented using the syntax of the native language and can therefore not be expressed in the IDL PSM.
Section 2.1.2.2.2.3 get_instance; Add the paragraph removed from above:
The get_instance operation is a static operation implemented using the syntax of the native language and can therefore not be expressed in the IDL PSM.
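The reason a static operation cannot appear in the IDL PSM is that it is invoked on the class itself rather than on an object reference. A minimal Python sketch of the idea (a toy model, not a DDS API; the singleton bookkeeping is invented for illustration):

```python
class DomainParticipantFactory:
    """Toy model of get_instance() as a static (class-level) operation.

    IDL can only declare operations invoked on an object reference, so a
    class-level accessor like this has no IDL representation and must be
    expressed in each native language's own syntax.
    """

    _instance = None   # the single factory for the process

    @classmethod
    def get_instance(cls):
        # Static access: no pre-existing object is needed to call it.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance


f1 = DomainParticipantFactory.get_instance()
f2 = DomainParticipantFactory.get_instance()
print(f1 is f2)   # True: every call yields the same factory
```

Each supported language binding expresses this in its own idiom (e.g. a static member function in C++ or a static method in Java), which is exactly why the sentence belongs with get_instance rather than lookup_participant.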
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Issue 9482: Improper prototype for get_XXX_status() (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
In the PIM, all get_XXX_status() methods return the relevant status by value. This does not allow for an error return and is inconsistent with other operations that accept a parameter.
The same is true for the PSM except for get_inconsistent_topic_status() on the Topic which returns ReturnCode_t and the status is a parameter.
Proposed Resolution:
In the PIM and the PSM, the operations should return ReturnCode_t with the status as a parameter.
Proposed Revised Text:
Section 2.1.2.3.2 Topic Class; Replace
get_inconsistent_topic_status InconsistentTopicStatus
With
get_inconsistent_topic_status ReturnCode_t
inout: status InconsistentTopicStatus
Section 2.1.2.4.2 DataWriter Class;
Replace
get_liveliness_lost_status LivelinessLostStatus
get_offered_deadline_missed_status OfferedDeadlineMissedStatus
get_offered_incompatible_qos_status OfferedIncompatibleQosStatus
get_publication_match_status PublicationMatchedStatus
With
get_liveliness_lost_status ReturnCode_t
inout: status LivelinessLostStatus
get_offered_deadline_missed_status ReturnCode_t
inout: status OfferedDeadlineMissedStatus
get_offered_incompatible_qos_status ReturnCode_t
inout: status OfferedIncompatibleQosStatus
get_publication_match_status ReturnCode_t
inout: status PublicationMatchedStatus
Section 2.1.2.5.2 Subscriber Class;
Replace
get_sample_lost_status SampleLostStatus
With
get_sample_lost_status ReturnCode_t
inout: status SampleLostStatus
Section 2.1.2.5.3 DataReader Class;
Replace
get_liveliness_changed_status LivelinessChangedStatus
get_requested_deadline_missed_status RequestedDeadlineMissedStatus
get_requested_incompatible_qos_status RequestedIncompatibleQosStatus
get_sample_rejected_status SampleRejectedStatus
get_subscription_match_status SubscriptionMatchedStatus
With
get_liveliness_changed_status ReturnCode_t
inout: status LivelinessChangedStatus
get_requested_deadline_missed_status ReturnCode_t
inout: status RequestedDeadlineMissedStatus
get_requested_incompatible_qos_status ReturnCode_t
inout: status RequestedIncompatibleQosStatus
get_sample_rejected_status ReturnCode_t
inout: status SampleRejectedStatus
get_subscription_match_status ReturnCode_t
inout: status SubscriptionMatchedStatus
Section 2.2.3 DCPS PSM : IDL
interface DataWriter; Replace:
LivelinessLostStatus get_liveliness_lost_status();
OfferedDeadlineMissedStatus get_offered_deadline_missed_status();
OfferedIncompatibleQosStatus get_offered_incompatible_qos_status();
PublicationMatchedStatus get_publication_match_status();
With
ReturnCode_t get_liveliness_lost_status(inout LivelinessLostStatus status);
ReturnCode_t get_offered_deadline_missed_status(inout OfferedDeadlineMissedStatus status);
ReturnCode_t get_offered_incompatible_qos_status(inout OfferedIncompatibleQosStatus status);
ReturnCode_t get_publication_match_status(inout PublicationMatchedStatus status);
interface DataReader; Replace:
SampleRejectedStatus get_sample_rejected_status();
LivelinessChangedStatus get_liveliness_changed_status();
RequestedDeadlineMissedStatus get_requested_deadline_missed_status();
RequestedIncompatibleQosStatus get_requested_incompatible_qos_status();
SubscriptionMatchedStatus get_subscription_match_status();
SampleLostStatus get_sample_lost_status();
With:
ReturnCode_t get_sample_rejected_status( inout SampleRejectedStatus status );
ReturnCode_t get_liveliness_changed_status(inout LivelinessChangedStatus status);
ReturnCode_t get_requested_deadline_missed_status(inout RequestedDeadlineMissedStatus status);
ReturnCode_t get_requested_incompatible_qos_status(inout RequestedIncompatibleQosStatus status);
ReturnCode_t get_subscription_match_status(inout SubscriptionMatchedStatus status);
ReturnCode_t get_sample_lost_status(inout SampleLostStatus status);
Resolution: In the PIM and the PSM, the operations should return ReturnCode_t with the status as an out parameter.
Revised Text:
Section 2.1.2.3.2 Topic Class; Topic class table Replace
get_inconsistent_topic_status InconsistentTopicStatus
With
get_inconsistent_topic_status ReturnCode_t
out: status InconsistentTopicStatus
Section 2.1.2.4.2 DataWriter Class; DataWriter class table Replace
get_liveliness_lost_status LivelinessLostStatus
get_offered_deadline_missed_status OfferedDeadlineMissedStatus
get_offered_incompatible_qos_status OfferedIncompatibleQosStatus
get_publication_match_status PublicationMatchedStatus
With
get_liveliness_lost_status ReturnCode_t
out: status LivelinessLostStatus
get_offered_deadline_missed_status ReturnCode_t
out: status OfferedDeadlineMissedStatus
get_offered_incompatible_qos_status ReturnCode_t
out: status OfferedIncompatibleQosStatus
get_publication_match_status ReturnCode_t
out: status PublicationMatchedStatus
Section 2.1.2.5.2 Subscriber Class; Subscriber class table replace
get_sample_lost_status SampleLostStatus
With:
get_sample_lost_status ReturnCode_t
out: status SampleLostStatus
Section 2.1.2.5.3 DataReader Class; DataReader class table replace
get_liveliness_changed_status LivelinessChangedStatus
get_requested_deadline_missed_status RequestedDeadlineMissedStatus
get_requested_incompatible_qos_status RequestedIncompatibleQosStatus
get_sample_rejected_status SampleRejectedStatus
get_subscription_match_status SubscriptionMatchedStatus
With
get_liveliness_changed_status ReturnCode_t
out: status LivelinessChangedStatus
get_requested_deadline_missed_status ReturnCode_t
out: status RequestedDeadlineMissedStatus
get_requested_incompatible_qos_status ReturnCode_t
out: status RequestedIncompatibleQosStatus
get_sample_rejected_status ReturnCode_t
out: status SampleRejectedStatus
get_subscription_match_status ReturnCode_t
out: status SubscriptionMatchedStatus
Section 2.2.3 DCPS PSM : IDL
interface DataWriter; Replace:
LivelinessLostStatus get_liveliness_lost_status();
OfferedDeadlineMissedStatus get_offered_deadline_missed_status();
OfferedIncompatibleQosStatus get_offered_incompatible_qos_status();
PublicationMatchedStatus get_publication_match_status();
With
ReturnCode_t get_liveliness_lost_status(
inout LivelinessLostStatus status);
ReturnCode_t get_offered_deadline_missed_status(
inout OfferedDeadlineMissedStatus status);
ReturnCode_t get_offered_incompatible_qos_status(
inout OfferedIncompatibleQosStatus status);
ReturnCode_t get_publication_match_status(
inout PublicationMatchedStatus status);
interface DataReader; Replace:
SampleRejectedStatus get_sample_rejected_status();
LivelinessChangedStatus get_liveliness_changed_status();
RequestedDeadlineMissedStatus get_requested_deadline_missed_status();
RequestedIncompatibleQosStatus get_requested_incompatible_qos_status();
SubscriptionMatchedStatus get_subscription_match_status();
SampleLostStatus get_sample_lost_status();
With:
ReturnCode_t get_sample_rejected_status(
inout SampleRejectedStatus status );
ReturnCode_t get_liveliness_changed_status(
inout LivelinessChangedStatus status);
ReturnCode_t get_requested_deadline_missed_status(
inout RequestedDeadlineMissedStatus status);
ReturnCode_t get_requested_incompatible_qos_status(
inout RequestedIncompatibleQosStatus status);
ReturnCode_t get_subscription_match_status(
inout SubscriptionMatchedStatus status);
ReturnCode_t get_sample_lost_status(
inout SampleLostStatus status);
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Discussion: In the PIM and the PSM, the operations should return ReturnCode_t with the status as an out parameter.
Issue 9483: Inconsistent naming in SampleRejectedStatusKind (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
We have REJECTED_BY_SAMPLES_LIMIT which comes from the max_samples in the ResourceLimitsQosPolicy.
However, we have REJECTED_BY_INSTANCE_LIMIT which comes from the max_instances.
Proposed Resolution:
It should be named REJECTED_BY_INSTANCES_LIMIT.
Proposed Revised Text:
Section 2.2.3 DCPS PSM : IDL
enum SampleRejectedStatusKind; Replace
REJECTED_BY_INSTANCE_LIMIT
With
REJECTED_BY_INSTANCES_LIMIT
Resolution: It should be named REJECTED_BY_INSTANCES_LIMIT
Revised Text: Section 2.2.3 DCPS PSM : IDL
enum SampleRejectedStatusKind; Replace
REJECTED_BY_INSTANCE_LIMIT,
With
REJECTED_BY_INSTANCES_LIMIT,
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Issue 9484: OWNERSHIP_STRENGTH QoS is not a QoS on built-in Subscriber of DataReaders (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
The OWNERSHIP_STRENGTH QoS only applies to DataWriters, yet it is listed in the table of the QoS of the built-in Subscriber and DataReader objects in Section 2.1.5.
Proposed Resolution:
Remove OWNERSHIP_STRENGTH from the aforementioned table.
Proposed Revised Text:
Section 2.1.5
In the table that follows the sentence:
The QoS of the built-in Subscriber and DataReader objects is given by the following table:
Remove the row for 'OWNERSHIP_STRENGTH'
Resolution: Remove OWNERSHIP_STRENGTH from the aforementioned table
Revised Text: Section 2.1.5 Built-in Topics;
In the table that follows the sentence: "The QoS of the built-in Subscriber and DataReader objects is given by the following table:"
Remove the following row:
OWNERSHIP_STRENGTH <unspecified>
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Issue 9485: Consistency between RESOURCE_LIMITS QoS policies (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
In the description of the TIME_BASED_FILTER QoS, we are missing the description of the consistency requirements with the DEADLINE QoS, which is mentioned in the table in Section 2.1.3.
Also, we should mention some consistency requirements between max_samples and max_samples_per_instance within the RESOURCE_LIMITS QoS.
Proposed Resolution:
In Section 2.1.3.12 on the TIME_BASED_FILTER QoS we should make explicit mention that the minimum_separation must be <= the period of the DEADLINE QoS.
In both the table in Section 2.1.3 and in Section 2.1.3.22 on the RESOURCE_LIMITS QoS we should mention the consistency requirements that max_samples >= max_samples_per_instance.
Proposed Revised Text:
Section 2.1.3.12 TIME_BASED_FILTER;
Add the following paragraph to the end of the section:
The TIME_BASED_FILTER policy must be set consistently with the DEADLINE policy. For these two policies to be consistent the settings must be such that "deadline period >= minimum_separation." An attempt to set these policies in an inconsistent manner will cause the INCONSISTENT_POLICY status to change and any associated Listeners/WaitSets to be triggered.
Section 2.1.3.22 RESOURCE_LIMITS
Add the following paragraph before the last paragraph in the section:
The setting of RESOURCE_LIMITS max_samples must be consistent with the setting of the max_samples_per_instance. For these two values to be consistent they must verify that max_samples >= max_samples_per_instance.
Section 2.1.3.22 RESOURCE_LIMITS
Add the following paragraph at the end of the section:
An attempt to set these policies in an inconsistent manner will cause the INCONSISTENT_POLICY status to change and any associated Listeners/WaitSets to be triggered.
Resolution:
Revised Text: Section 2.1.3.12 TIME_BASED_FILTER; Add the following paragraph to the end of the section:
The setting of the TIME_BASED_FILTER minimum_separation must be consistent with the DEADLINE period. For these two QoS policies to be consistent they must verify that "deadline period >= minimum_separation." An attempt to set these policies in an inconsistent manner when an entity is created or via a set_qos operation will cause the operation to fail.
Section 2.1.3.22 RESOURCE_LIMITS; Add the following paragraph before the last paragraph in the section:
The setting of RESOURCE_LIMITS max_samples must be consistent with the max_samples_per_instance. For these two values to be consistent they must verify that "max_samples >= max_samples_per_instance." An attempt to set these policies in an inconsistent manner when an entity is created or via a set_qos operation will cause the operation to fail.
Section 2.1.3.22 RESOURCE_LIMITS; Add the following paragraph at the end of the section:
An attempt to set this policy to inconsistent values when an entity is created or via a set_qos operation will cause the operation to fail.
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Discussion: In Section 2.1.3.12 on the TIME_BASED_FILTER QoS we should make explicit mention that the minimum_separation must be <= the period of the DEADLINE QoS.
In both the table in Section 2.1.3 and in Section 2.1.3.22 on the RESOURCE_LIMITS QoS we should mention the consistency requirements that max_samples >= max_samples_per_instance.
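The two consistency rules this resolution adds can be sketched as a single validity check an implementation might apply at entity creation or in set_qos. This is a minimal illustrative sketch, not spec API; the function name and return values are hypothetical.

```python
# Hypothetical sketch (not spec API): the two QoS consistency rules from this
# resolution, checked the way an entity-creation or set_qos call might.
INCONSISTENT_POLICY = "INCONSISTENT_POLICY"
OK = "OK"

def check_qos_consistency(deadline_period, minimum_separation,
                          max_samples, max_samples_per_instance):
    # TIME_BASED_FILTER vs DEADLINE: deadline period >= minimum_separation
    if deadline_period < minimum_separation:
        return INCONSISTENT_POLICY
    # RESOURCE_LIMITS: max_samples >= max_samples_per_instance
    if max_samples < max_samples_per_instance:
        return INCONSISTENT_POLICY
    return OK
```

Per the resolution, a failure here makes the creating or set_qos operation fail rather than changing a status and triggering listeners.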
Issue 9486: Blocking of write() call (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: Blocking of write() call depending on RESOURCE_LIMITS, HISTORY, and RELIABILITY QoS
Section 2.1.2.4.2.11 states that even writers with KEEP_LAST HISTORY QoS can block and describes some scenarios.
Some of these scenarios may no longer be valid depending on whether the implementation is willing to sacrifice reliability.
In the table in Section 2.1.3, it states that the max_blocking_time in the RELIABILITY QoS only applies for RELIABLE and KEEP_ALL HISTORY QoS.
In Section 2.1.3.14 it is only mentioned that the writer can block if the RELIABILITY QoS is set to RELIABLE.
Proposed Resolution:
At the very least, remove mention of the requirement that the HISTORY QoS be KEEP_ALL for blocking to apply in the table in Section 2.1.3.
Proposed Revised Text:
Section 2.1.3 QoS Table
On the entry for the RELIABILITY QoS max_blocking_time
Replace:
This setting applies only to the case where kind=RELIABLE and the HISTORY is KEEP_ALL.
With:
This setting applies only to the case where kind=RELIABLE.
Resolution: see above
Revised Text: Section 2.1.3 QoS Table
On the entry for the RELIABILITY QoS max_blocking_time
Replace:
This setting applies only to the case where kind=RELIABLE and the HISTORY is KEEP_ALL.
With:
This setting applies only to the case where kind=RELIABLE.
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Discussion: Remove mention of the requirement that the HISTORY QoS be KEEP_ALL for blocking to apply in the table in Section 2.1.3.
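The effect of this change is that whether write() may block no longer depends on the HISTORY kind. A minimal sketch of the post-resolution rule, with hypothetical names (not spec API):

```python
# Hypothetical sketch: after this resolution, write() blocking (up to
# max_blocking_time) is tied only to the RELIABILITY kind and resource
# availability, not to whether HISTORY is KEEP_ALL or KEEP_LAST.
def write_may_block(reliability_kind, history_kind, buffer_full):
    # A BEST_EFFORT writer never blocks; a RELIABLE writer may block when
    # resource limits leave no space for the new sample, regardless of history.
    return reliability_kind == "RELIABLE" and buffer_full
```

Before the fix, the table would also have required history_kind == "KEEP_ALL" for blocking to apply.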
Issue 9487: Clarify PARTITION QoS and its default value (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
In the table in Section 2.1.3, the default partition value is said to be a zero-length sequence, which "is equivalent to a sequence containing a single element consisting of an empty string", which will match any partition. However, having an empty string match any partition is not consistent with normal regular-expression matching.
Proposed Resolution:
It is desirable to have the behavior that if a special partition is specified, it only matches others that have that special partition. If the default behavior were to match all partitions, there would be no way for a newly created entity to prevent others from matching it, unless the special partition were used.
Therefore, we should not overload the meaning of the empty string to mean matching everything. Instead, the empty string is the default partition. An empty partition sequence, or a partition sequence that consists of wildcards only, is automatically assumed to be in the default empty-string partition.
Proposed Revised Text:
Section 2.1.3 Supported QoS PARTITION Table
On the "Meaning" Column for the PARTITION QoS;
Replace the following paragraph:
The default value is an empty (zero-length) sequence. This is treated as a special value that matches any partition. And is equivalent to a sequence containing a single element consisting of the empty string.
With
The empty string ("") is considered a valid partition that is matched with other partition names using the same rules of string matching and regular-expression matching used for any other partition name (see Section 2.1.3.13)
The default value for the PARTITION QoS is a zero-length sequence. The zero-length sequence is treated as a special value equivalent to a sequence containing a single element consisting of the empty string.
Resolution: see above
Revised Text: Section 2.1.3 Supported QoS PARTITION Table
On the "Meaning" Column for the PARTITION QoS;
Replace the following paragraph:
The default value is an empty (zero-length) sequence. This is treated as a special value that matches any partition. And is equivalent to a sequence containing a single element consisting of the empty string.
With
The empty string ("") is considered a valid partition that is matched with other partition names using the same rules of string matching and regular-expression matching used for any other partition name (see Section 2.1.3.13)
The default value for the PARTITION QoS is a zero-length sequence. The zero-length sequence is treated as a special value equivalent to a sequence containing a single element consisting of the empty string.
Section 2.1.3.13 PARTITION, Replace "association" with "match" in the following sentences, resulting in:
This policy is changeable. A change of this policy can potentially modify the "match" of existing DataReader and DataWriter entities. It may establish new "matches" that did not exist before, or break existing matches.
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Discussion: It is desirable to have the behavior that if a special partition is specified, it only matches others that have that special partition. If the default behavior were to match all partitions, there would be no way for a newly created entity to prevent others from matching it, unless the special partition were used.
Therefore, we should not overload the meaning of the empty string to mean matching everything. Instead, the empty string is the default partition. An empty partition sequence, or a partition sequence that consists of wildcards only, is automatically assumed to be in the default empty-string partition.
Also, in Section 2.1.3.13 PARTITION, the "connection" between a reader and writer is described as an "association". The correct term used elsewhere in the spec is "match". Therefore the use of "association" in this section should be replaced with the term "match".
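The resolved semantics can be sketched as follows: the zero-length sequence normalizes to [""], and the empty string is matched like any other literal name rather than matching everything. This is an illustrative sketch, assuming Python's fnmatch-style wildcards approximate the spec's name-matching rules; all function names are hypothetical.

```python
# Hypothetical sketch of PARTITION matching after this resolution.
import fnmatch

def normalize(partitions):
    # A zero-length sequence is equivalent to a sequence containing
    # a single element consisting of the empty string.
    return partitions if partitions else [""]

def is_literal(name):
    # Names without wildcard characters are literal partition names.
    return not any(c in name for c in "*?[")

def partitions_match(pub, sub):
    # A pair matches if some publisher name pairs with some subscriber name;
    # the empty string is matched by the same rules as any other name.
    for p in normalize(pub):
        for s in normalize(sub):
            if is_literal(p) and is_literal(s):
                if p == s:
                    return True
            elif is_literal(s) and fnmatch.fnmatchcase(s, p):
                return True  # publisher name is a wildcard expression
            elif is_literal(p) and fnmatch.fnmatchcase(p, s):
                return True  # subscriber name is a wildcard expression
    return False
```

Note that two defaulted entities still match each other ("" == ""), but a defaulted entity no longer matches an entity in partition "A".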
Issue 9488: Typos in built-in topic table (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
In the table in Section 2.1.5, for both the DCPSPublication and DCPSSubscription there is a typo in that "ownershiph" should be "ownership".
Also, the destination_order row in the DCPSPublication should be of type "DestinationOrderQosPolicy" and not "QosPolicy".
Also, the presentation row in the DCPSPublication should be of type "PresentationQosPolicy" and not "DestinationOrderQosPolicy".
Also, in the paragraph at the top of the page containing the table there is a typo where "crated" should be "created".
Proposed Resolution:
Fix the typos.
Proposed Revised Text:
Section 2.1.5, 2 paragraphs above the Builtin-Topic table; at the end of the paragraph:
Replace "crated" with "created" in the sentence:
"application that crated them."
Section 2.1.5 Builtin-Topic table;
Replace DCPSPublication fieldname 'ownershiph' with 'ownership'
Replace DCPSSubscription fieldname 'ownershiph' with 'ownership'
Replace the type of the DCPSPublication, destination_order field from 'QosPolicy" to "DestinationOrderQosPolicy"
Replace the type of the DCPSPublication presentation field from 'DestinationOrderQosPolicy" to "PresentationQosPolicy
Resolution: Fix the typos
Revised Text: Section 2.1.5, 2 paragraphs above the Builtin-Topic table; at the end of the paragraph:
Replace "crated" with "created" in the sentence, resulting in:
"application that created them."
Section 2.1.5 Builtin-Topic table;
Replace DCPSPublication field name 'ownershiph' with 'ownership', resulting in:
ownership OwnershipQosPolicy Policy of the corresponding DataWriter
Replace DCPSSubscription field name 'ownershiph' with 'ownership', resulting in:
ownership OwnershipQosPolicy Policy of the corresponding DataReader
Replace the type of the DCPSPublication destination_order field from 'QosPolicy' to 'DestinationOrderQosPolicy', resulting in:
destination_order DestinationOrderQosPolicy Policy of the corresponding DataWriter
Replace the type of the DCPSPublication presentation field from 'DestinationOrderQosPolicy' to 'PresentationQosPolicy', resulting in:
presentation PresentationQosPolicy Policy of the Publisher to which the DataWriter belongs
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Issue 9489: Naming of filter_parameters concerning ContentFilteredTopic (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The method name is get/set_expression_parameters() whereas the parameter passed in is the "filter_parameters". Understandably the full name is filter expression parameters since the ContentFilteredTopic has a "filter_expression" attribute.
Compare this with the MultiTopic which has the same named methods which take in "expression_parameters" and has a "subscription_expression" attribute.
The name "filter_parameters" is also used in the create_contentfilteredtopic() method on the DomainParticipant.
Proposed Resolution:
Change the name of "filter_parameters" to "expression_parameters" for more consistency.
Proposed Revised Text:
Section 2.1.2.2.1 DomainParticipant Class; DomainParticipant class table
On the row describing the operation "create_contentfilteredtopic"
Replace parameter name "filter_parameters"
With parameter name "expression_parameters"
Section 2.1.2.2.1.7 create_contentfilteredtopic
Last paragraph replace "filter_parameters" with "expression_parameters"
Section 2.1.2.3.3 ContentFilteredTopic Class; ContentFilteredTopic class table
On the row describing the operation "set_expression_parameters"
Replace parameter name "filter_parameters"
With parameter name "expression_parameters"
Section 2.1.2.3.3 ContentFilteredTopic Class
On the second bullet towards the end of the section:
Replace "filter_parameters" with "expression_parameters"
On the last paragraph just above section 2.1.2.3.3.1:
Replace "filter_parameters" with "expression_parameters"
Section 2.1.2.3.3.3 get_expression_parameters
On the first paragraph:
Replace "filter_parameters" with "expression_parameters"
Section 2.1.2.3.3.4 set_expression_parameters
On the first paragraph:
Replace "filter_parameters" with "expression_parameters"
Section 2.2.3 DCPS PSM : IDL
interface DomainParticipant
On the operation create_contentfilteredtopic
Replace formal parameter name "filter_parameters" with "expression_parameters"
Resolution: see above
Revised Text: Section 2.1.2.2.1 DomainParticipant Class; DomainParticipant class table
On the row describing the operation "create_contentfilteredtopic"
Replace parameter name "filter_parameters" with parameter name "expression_parameters", resulting in:
expression_parameters string []
Section 2.1.2.2.1.7 create_contentfilteredtopic
2nd paragraph replace "filter_parameters" with "expression_parameters", resulting in:
The logical expression is derived from the filter_expression and expression_parameters arguments.
Section 2.1.2.3.3 ContentFilteredTopic Class; ContentFilteredTopic class table
On the row describing the operation "set_expression_parameters"
Replace parameter name "filter_parameters" with parameter name "expression_parameters", resulting in:
expression_parameters string []
Section 2.1.2.3.3 ContentFilteredTopic Class
Replace "filter_parameters" with "expression_parameters" (3 occurrences), resulting in:
The selection of the content is done using the filter_expression with parameters expression_parameters.
o The expression_parameters attribute is a sequence of ….
Appendix B describes the syntax of filter_expression and expression_parameters.
Section 2.1.2.3.3.3 get_expression_parameters
On the first paragraph:
Replace "filter_parameters" with "expression_parameters", resulting in:
This operation returns the expression_parameters associated with the ContentFilteredTopic.
Section 2.1.2.3.3.4 set_expression_parameters
On the first paragraph:
Replace "filter_parameters" with "expression_parameters", resulting in:
This operation changes the expression_parameters associated with the ContentFilteredTopic.
Section 2.2.3 DCPS PSM : IDL
interface DomainParticipant
On the operation create_contentfilteredtopic
Replace formal parameter name "filter_parameters" with "expression_parameters", resulting in:
ContentFilteredTopic create_contentfilteredtopic(
in string name,
in Topic related_topic,
in string filter_expression,
in StringSeq expression_parameters);
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Discussion: Change the name of "filter_parameters" to "expression_parameters" for more consistency.
Issue 9490: Incorrect prototype for FooDataWriter method register_instance_w_timestamp() (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: Incorrect prototype for the FooDataWriter method register_instance_w_timestamp() in the PSM
Summary:
The handle is incorrectly a parameter when it is already the return.
Proposed Resolution:
Remove the incorrect handle parameter.
Proposed Revised Text:
Section 2.2.3 DCPS PSM : IDL
interface FooDataWriter
On the register_instance_w_timestamp remove the parameter
"in DDS::InstanceHandle_t handle,"
The resulting operation is:
DDS::InstanceHandle_t register_instance_w_timestamp(in Foo instance_data, in DDS::Time_t source_timestamp);
Resolution: Remove the incorrect handle parameter.
Revised Text: Section 2.2.3 DCPS PSM : IDL
interface FooDataWriter
On the register_instance_w_timestamp remove the parameter
"in DDS::InstanceHandle_t handle,"
DDS::InstanceHandle_t register_instance_w_timestamp(
in Foo instance_data,
in DDS::InstanceHandle_t handle,
in DDS::Time_t source_timestamp);
The resulting operation is:
DDS::InstanceHandle_t register_instance_w_timestamp(
in Foo instance_data,
in DDS::Time_t source_timestamp);
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Issue 9491: Compatible versus consistency when talking about QosPolicy (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: In the third paragraph of Section 2.1.3, it is stated that "some QosPolicy values may not be compatible with other ones". In this context we are really talking about the consistency of related QosPolicies as compatibility is already a concept concerning requested/offered semantics.
Proposed Resolution:
Reword the sentence to use the term "consistency" which is already used later in the paragraph.
Proposed Revised Text:
Section 2.1.3 Supported QoS
3rd paragraph
Replace "compatible" with "consistent" in the sentence:
"Some QosPolicy values may not be compatible with other ones."
Resulting in:
"Some QosPolicy values may not be consistent with other ones."
Resolution: see above
Revised Text: Section 2.1.3 Supported QoS
3rd paragraph ; Replace "compatible" with "consistent" in the sentence:
"Some QosPolicy values may not be consistent with other ones."
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Discussion: Reword the sentence to use the term "consistency" which is already used later in the paragraph
Issue 9492: Incorrect mention of INCONSISTENT_POLICY status (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
In Section 2.1.3.7 concerning the DEADLINE QoS, it is stated that if the QoS is set inconsistently (i.e., the period is less than the minimum_separation of the TIME_BASED_FILTER QoS), the INCONSISTENT_POLICY status will change and any associated Listeners/WaitSets will be triggered.
There is no such status. Instead, the set_qos() operation will fail with return code INCONSISTENT_POLICY.
Proposed Resolution:
Mention the return code instead.
Proposed Revised Text:
Section 2.1.3.7 DEADLINE
Remove the last sentence in the section:
"An attempt to set these policies in
an inconsistent manner will cause the INCONSISTENT_POLICY status to change and any associated Listeners/WaitSets to be triggered."
Resolution: Mention the return code instead
Revised Text: Section 2.1.3.7 DEADLINE
Remove the last sentence in the section:
An attempt to set these policies in an inconsistent manner will cause the INCONSISTENT_POLICY status to change and any associated Listeners/WaitSets to be triggered.
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Issue 9493: Typos in QoS sections (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
In Section 2.1.3.11 (LIVELINESS QoS) the second condition for compatibility uses "=<" for less than or equal to where "<=" might be more readable.
Also, the last paragraph states "equal or greater to" where "equal or greater than" might be more readable.
In the next-to-last paragraph of Section 2.1.3.14 (RELIABILITY QoS), there is a typo where "change form a newer value" should be "change from a newer value".
In Section 2.1.3.22 (READER_DATA_LIFECYCLE QoS) the last two paragraphs mention how "view_state becomes NOT_ALIVE_xxx" where it should be the "instance_state".
Proposed Resolution:
Make the aforementioned changes
Proposed Revised Text:
Section 2.1.3.11 LIVELINESS
Second bullet in the enumeration near the end of the section:
Replace "offered lease_duration =< requested lease_duration"
With "offered lease_duration <= requested lease_duration"
Section 2.1.3.11 LIVELINESS
Last paragraph; replace:
"Service with a time-granularity equal or greater to the lease_duration."
With:
"Service with a time-granularity greater or equal to the lease_duration."
Section 2.1.3.14 RELIABILITY
Next to last paragraph. Replace:
"change form a newer value"
With:
"change from a newer value".
Section 2.1.3.22 READER_DATA_LIFECYCLE
Paragraph before the last
Replace "view_state" with "instance_state" in:
"maintain information regarding an instance once its view_state becomes NOT_ALIVE_NO_WRITERS."
Section 2.1.3.22 READER_DATA_LIFECYCLE
Last paragraph:
Replace "view_state" with "instance_state" in:
"maintain information regarding an instance once its view_state becomes NOT_ALIVE_DISPOSED."
Resolution: Fix the typos.
Revised Text: Section 2.1.3.11 LIVELINESS
Second bullet in the enumeration near the end of the section:
Replace "offered lease_duration =< requested lease_duration"
With "offered lease_duration <= requested lease_duration"
Last paragraph; replace:
"Service with a time-granularity equal or greater to the lease_duration."
With:
"Service with a time-granularity greater or equal to the lease_duration."
Section 2.1.3.14 RELIABILITY
Next to last paragraph. Replace:
"change form a newer value"
With:
"change from a newer value".
Section 2.1.3.22 READER_DATA_LIFECYCLE
Paragraph before the last
Replace "view_state" with "instance_state" as shown below:
"maintain information regarding an instance once its instance_state becomes NOT_ALIVE_NO_WRITERS."
Last paragraph:
Replace "view_state" with "instance_state" as shown below:
"maintain information regarding an instance once its instance_state becomes NOT_ALIVE_DISPOSED."
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Issue 9494: Typos in PIM sections (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
In Section 2.1.2.4.1.10 (begin_coherent_changes) there is a typo in the last sentence of the section where "if may be useful" should be "it may be useful".
In the second paragraph of Section 2.1.2.2.2.4 (lookup_participant) there is a typo where "multiple DomainParticipant" should be "multiple DomainParticipants".
Proposed Resolution:
Make the suggested corrections.
Proposed Revised Text:
Section 2.1.2.4.1.10 begin_coherent_changes
Last sentence, replace:
"if may be useful"
With
"it may be useful"
Section 2.1.2.2.2.4 lookup_participant
Second paragraph replace
"If multiple DomainParticipant belonging"
With
"If multiple DomainParticipant entities belonging"
Resolution: see above
Revised Text: Section 2.1.2.4.1.10 begin_coherent_changes
Last sentence, replace:
"if may be useful"
With
"it may be useful"
Section 2.1.2.2.2.4 lookup_participant
Second paragraph replace
"If multiple DomainParticipant belonging"
With
"If multiple DomainParticipant entities belonging"
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Discussion: The first typo ("noe") is invalid; it was already fixed in the last revision.
Otherwise, make the suggested corrections.
Issue 9495: Clarify ownership with same-strength writers (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
In Section 2.1.3.9, in the paragraph dealing with when there are multiple same-strength writers, the next-to-last sentence describes that the owner must remain the same until one of several conditions is met.
The condition where "a new DataWriter with the same strength that should be deemed the owner according to the policy of the Service" should be explicitly mentioned although it may have been implied.
Proposed Resolution:
Add the explicit mention of the additional condition above.
Proposed Revised Text:
Section 2.1.3.9.2 EXCLUSIVE kind
5th paragraph; replace the sentence:
It is also required that the owner remains the same until there is a change in strength, liveliness, the owner misses a deadline on the instance, or a new DataWriter with higher strength modifies the instance.
With:
It is also required that the owner remains the same until there is a change in strength, liveliness, the owner misses a deadline on the instance, a new DataWriter with higher strength modifies the instance, or a new owner with the same strength that is deemed by the Service to be the owner modifies the instance.
Resolution: Add the explicit mention of the additional condition above.
Revised Text: Section 2.1.3.9.2 EXCLUSIVE kind
5th paragraph; replace the sentence:
It is also required that the owner remains the same until there is a change in strength, liveliness, the owner misses a deadline on the instance, or a new DataWriter with higher strength modifies the instance.
With
It is also required that the owner remains the same until there is a change in strength, liveliness, the owner misses a deadline on the instance, a new DataWriter with higher strength modifies the instance, or another DataWriter with the same strength that is deemed by the Service to be the new owner modifies the instance.
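The arbitration rule above can be sketched as a small model. This is a hypothetical illustration only: the spec leaves the same-strength tie-breaker to the Service, so the "smallest writer id wins" rule below is a modeling assumption, as is the `(writer_id, strength)` representation.

```python
def determine_owner(writers):
    """Pick the owning writer of an instance for EXCLUSIVE ownership.

    `writers` is a list of (writer_id, strength) tuples for writers that are
    alive and meeting their deadline on the instance. Highest strength wins;
    among equal strengths the Service applies its own deterministic rule,
    modeled here as the smallest writer_id.
    """
    if not writers:
        return None  # no eligible writer, so no owner
    # Sort key: prefer higher strength, then lower writer_id as tie-breaker.
    return max(writers, key=lambda w: (w[1], -w[0]))[0]
```

Under this model, a new same-strength writer that the tie-breaker favors does take over ownership when it modifies the instance, which is exactly the condition the resolution makes explicit.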
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Issue 9496: Should write() block when out of instance resources? (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: Currently it is stated that write() and dispose() may block and return TIMEOUT when the RELIABILITY QoS kind is set to RELIABLE and any of the RESOURCE_LIMITS QoS is hit.
We should reconsider the action taken when it is instance resource limits that are hit. If instance resources are kept around until they are unregistered (even before considering how the RELIABILITY or DURABILITY QoS affects this), then it seems awkward to block when the user is required to take action. Perhaps returning immediately with OUT_OF_RESOURCES makes more sense in this situation.
Proposed Resolution:
When the writer is out of instance resources because all max_instances have been registered or written, the write/dispose() call will return OUT_OF_RESOURCES instead of blocking if it can be detected.
Proposed Revised Text:
Section 2.1.2.4.2.11 write
Above the paragraph starting with "In case the provided handle is valid"; add the paragraph:
Instead of blocking, the write operation is allowed to return immediately with the error code OUT_OF_RESOURCES provided the following two conditions are met:
1. The reason for blocking would be that the RESOURCE_LIMITS are exceeded.
2. The service determines that waiting the 'max_blocking_time' has no chance of freeing the necessary resources. For example, if the only way to gain the necessary resources would be for the user to unregister an instance.
Section 2.1.2.4.2.12 write_w_timestamp
After the paragraph "This operation may block" add the paragraph:
This operation may return OUT_OF_RESOURCES under the same circumstances described for the write operation (Section 2.1.2.4.2.11).
Section 2.1.2.4.2.13 dispose
After the paragraph "This operation may block…" add the paragraph:
This operation may return OUT_OF_RESOURCES under the same circumstances described for the write operation (Section 2.1.2.4.2.11).
Section 2.1.2.4.2.14 dispose_w_timestamp
After the paragraph "This operation may block…" add the paragraph:
This operation may return OUT_OF_RESOURCES under the same circumstances described for the write operation (Section 2.1.2.4.2.11).
Section 2.1.2.4.2.5 register_instance
Replace the paragraph:
This operation may block if the RELIABILITY kind is set to RELIABLE and the modification would cause data to be lost or else cause one of the limits specified in the RESOURCE_LIMITS to be exceeded. Under these circumstances, the RELIABILITY max_blocking_time configures the maximum time the write operation may block (waiting for space to become available). If max_blocking_time elapses before the DataWriter is able to store the modification without exceeding the limits, the operation will fail and return TIMEOUT.
With:
This operation may block and return TIMEOUT under the same circumstances described for the write operation (Section 2.1.2.4.2.11).
This operation may return OUT_OF_RESOURCES under the same circumstances described for the write operation (Section 2.1.2.4.2.11).
Section 2.1.2.4.2.5 register_instance_w_timestamp
Replace the paragraph:
This operation may block and return TIMEOUT under the same circumstances described for the register_instance operation (Section 2.1.2.4.2.5 ).
With:
This operation may block and return TIMEOUT under the same circumstances described for the write operation (Section 2.1.2.4.2.11).
This operation may return OUT_OF_RESOURCES under the same circumstances described for the write operation (Section 2.1.2.4.2.11).
Resolution: see above
Revised Text: Section 2.1.2.4.2.11 write
Above the paragraph starting with "In case the provided handle is valid…"; add the paragraph:
Instead of blocking, the write operation is allowed to return immediately with the error code OUT_OF_RESOURCES provided the following two conditions are met:
1. The reason for blocking would be that the RESOURCE_LIMITS are exceeded.
2. The service determines that waiting the 'max_blocking_time' has no chance of freeing the necessary resources. For example, if the only way to gain the necessary resources would be for the user to unregister an instance.
Section 2.1.2.4.2.12 write_w_timestamp
After the paragraph "This operation may block…" add the paragraph:
This operation may return OUT_OF_RESOURCES under the same circumstances described for the write operation (Section 2.1.2.4.2.11).
Section 2.1.2.4.2.13 dispose
After the paragraph "This operation may block…" add the paragraph:
This operation may return OUT_OF_RESOURCES under the same circumstances described for the write operation (Section 2.1.2.4.2.11).
Section 2.1.2.4.2.14 dispose_w_timestamp
After the paragraph "This operation may block…" add the paragraph:
This operation may return OUT_OF_RESOURCES under the same circumstances described for the write operation (Section 2.1.2.4.2.11).
Section 2.1.2.4.2.5 register_instance
Replace the paragraph:
This operation may block if the RELIABILITY kind is set to RELIABLE and the modification would cause data to be lost or else cause one of the limits specified in the RESOURCE_LIMITS to be exceeded. Under these circumstances, the RELIABILITY max_blocking_time configures the maximum time the write operation may block (waiting for space to become available). If max_blocking_time elapses before the DataWriter is able to store the modification without exceeding the limits, the operation will fail and return TIMEOUT.
With:
This operation may block and return TIMEOUT under the same circumstances described for the write operation (Section 2.1.2.4.2.11).
This operation may return OUT_OF_RESOURCES under the same circumstances described for the write operation (Section 2.1.2.4.2.11).
Section 2.1.2.4.2.5 register_instance_w_timestamp
Replace the paragraph:
This operation may block and return TIMEOUT under the same circumstances described for the register_instance operation (Section 2.1.2.4.2.5 ).
With:
This operation may block and return TIMEOUT under the same circumstances described for the write operation (Section 2.1.2.4.2.11).
This operation may return OUT_OF_RESOURCES under the same circumstances described for the write operation (Section 2.1.2.4.2.11).
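The decision the resolution describes, block (and eventually return TIMEOUT) only when waiting could plausibly free resources, and return OUT_OF_RESOURCES immediately when only user action such as unregistering an instance could free them, can be sketched as follows. All names and the boolean parameters are illustrative modeling assumptions, not spec-defined API.

```python
RETCODE_OK = "OK"
RETCODE_TIMEOUT = "TIMEOUT"
RETCODE_OUT_OF_RESOURCES = "OUT_OF_RESOURCES"

def write_outcome(registered_instances, max_instances, queue_full, can_drain):
    """Model the return path of write()/dispose() under RESOURCE_LIMITS.

    `can_drain` stands in for whether the reliable protocol could free
    sample-queue space within max_blocking_time.
    """
    if registered_instances >= max_instances:
        # Only unregister_instance() can free an instance slot; blocking
        # cannot help, so fail fast.
        return RETCODE_OUT_OF_RESOURCES
    if queue_full:
        # Sample-queue space may free up as acknowledgments arrive.
        return RETCODE_OK if can_drain else RETCODE_TIMEOUT
    return RETCODE_OK
```

The key distinction is between resources that the middleware can reclaim on its own (worth blocking for) and resources held until the application acts (not worth blocking for).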
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Discussion: When the writer is out of instance resources because all max_instances have been registered or written, the write/dispose() call will return OUT_OF_RESOURCES instead of blocking if it can be detected.
Issue 9497: Description of set_default_XXX_qos() (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: For XXX = participant, topic, publisher, subscriber, and datareader, the specification states "in the case where the QoS policies are not explicitly specified".
For XXX = datawriter, the specification states "in the case where the QoS policies are defaulted".
The latter is technically more correct.
Proposed Resolution:
Use the wording in set_default_datawriter_qos().
Proposed Revised Text:
Section 2.1.2.2.1.20 set_default_publisher_qos
First paragraph replace:
in the case where the QoS policies are not explicitly specified
With
in the case where the QoS policies are defaulted
Section 2.1.2.2.1.21 get_default_publisher_qos
First paragraph replace:
in the case where the QoS policies are not explicitly specified
With
in the case where the QoS policies are defaulted
Section 2.1.2.2.1.22 set_default_subscriber_qos
First paragraph replace:
in the case where the QoS policies are not explicitly specified
With
in the case where the QoS policies are defaulted
Section 2.1.2.2.1.23 get_default_subscriber_qos
First paragraph replace:
in the case where the QoS policies are not explicitly specified
With
in the case where the QoS policies are defaulted
Section 2.1.2.2.1.24 set_default_topic_qos
First paragraph replace:
in the case where the QoS policies are not explicitly specified
With
in the case where the QoS policies are defaulted
Section 2.1.2.2.1.25 get_default_topic_qos
First paragraph replace:
in the case where the QoS policies are not explicitly specified
With
in the case where the QoS policies are defaulted
Section 2.1.2.2.2.5 set_default_participant_qos
First paragraph replace:
in the case where the QoS policies are not explicitly specified
With
in the case where the QoS policies are defaulted
Section 2.1.2.2.2.6 get_default_participant_qos
First paragraph replace:
in the case where the QoS policies are not explicitly specified
With
in the case where the QoS policies are defaulted
Section 2.1.2.4.1.16 get_default_datawriter_qos
First paragraph replace:
in the case where the QoS policies are not explicitly specified
With
in the case where the QoS policies are defaulted
Section 2.1.2.5.2.15 set_default_datareader_qos
First paragraph replace:
in the case where the QoS policies are not explicitly specified
With
in the case where the QoS policies are defaulted
Section 2.1.2.5.2.16 get_default_datareader_qos
First paragraph replace:
in the case where the QoS policies are not explicitly specified
With
in the case where the QoS policies are defaulted
Resolution: Use the wording in set_default_datawriter_qos().
Revised Text: Section 2.1.2.2.1.20 set_default_publisher_qos
First paragraph replace "explicitly specified" with "defaulted" as shown:
in the case where the QoS policies are not explicitly specified defaulted
Section 2.1.2.2.1.21 get_default_publisher_qos
First paragraph replace "explicitly specified" with "defaulted" as shown:
in the case where the QoS policies are not explicitly specified defaulted
Section 2.1.2.2.1.22 set_default_subscriber_qos
First paragraph replace "explicitly specified" with "defaulted" as shown:
in the case where the QoS policies are not explicitly specified defaulted
Section 2.1.2.2.1.23 get_default_subscriber_qos
First paragraph replace "explicitly specified" with "defaulted" as shown:
in the case where the QoS policies are not explicitly specified defaulted
Section 2.1.2.2.1.24 set_default_topic_qos
First paragraph replace "explicitly specified" with "defaulted" as shown:
in the case where the QoS policies are not explicitly specified defaulted
Section 2.1.2.2.1.25 get_default_topic_qos
First paragraph replace "explicitly specified" with "defaulted" as shown:
in the case where the QoS policies are not explicitly specified defaulted
Section 2.1.2.2.2.5 set_default_participant_qos
First paragraph replace "explicitly specified" with "defaulted" as shown:
in the case where the QoS policies are not explicitly specified defaulted
Section 2.1.2.2.2.6 get_default_participant_qos
First paragraph replace "explicitly specified" with "defaulted" as shown:
in the case where the QoS policies are not explicitly specified defaulted
Section 2.1.2.4.1.16 get_default_datawriter_qos
First paragraph replace "explicitly specified" with "defaulted" as shown:
in the case where the QoS policies are not explicitly specified defaulted
Section 2.1.2.5.2.15 set_default_datareader_qos
First paragraph replace "explicitly specified" with "defaulted" as shown:
in the case where the QoS policies are not explicitly specified defaulted
Section 2.1.2.5.2.16 get_default_datareader_qos
First paragraph replace "explicitly specified" with "defaulted" as shown:
in the case where the QoS policies are not explicitly specified defaulted
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Issue 9498: Naming consistencies in match statuses (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: For better naming consistency with other statuses, the PUBLICATION_MATCH_STATUS and SUBSCRIPTION_MATCH_STATUS may be renamed to PUBLICATION_MATCHED_STATUS and SUBSCRIPTION_MATCHED_STATUS. Also the get_publication_match_status and get_subscription_match_status operations may be renamed to get_publication_matched_status and get_subscription_matched_status.
In addition, the callback is named on_XXX_matched.
Proposed Resolution:
Rename PUBLICATION_MATCH_STATUS to PUBLICATION_MATCHED_STATUS, SUBSCRIPTION_MATCH_STATUS to SUBSCRIPTION_MATCHED_STATUS
Proposed Revised Text:
Section 2.1.2.4 Publication Module
Figure 2-9; DataWriter class
Rename
get_publication_match_status()
To
get_publication_matched_status()
Section 2.1.2.4.2 DataWriter Class
DataWriter class table
Rename
get_publication_match_status()
To
get_publication_matched_status()
Section 2.1.2.4.2.19 get_publication_match_status
Rename section heading to:
2.1.2.4.2.19 get_publication_matched_status
Replace
"allows access to the PUBLICATION_MATCH_QOS"
With:
"allows access to the PUBLICATION_MATCHED communication status "
Section 2.1.2.5 Subscription Module
Figure 2-10; DataReader class
Rename
get_subscription_match_status()
To
get_subscription_matched_status()
Section 2.1.2.5.3 DataReader Class
DataReader class table
Rename
get_subscription_match_status()
To
get_subscription_matched_status()
Section 2.1.2.5.3.25 get_subscription_match_status
Rename section heading to:
2.1.2.5.3.25 get_subscription_matched_status
Section 2.1.2.5.3.25 get_subscription_match_status
Rename "SUBSCRIPTION_MATCH_STATUS" to "SUBSCRIPTION_MATCHED_STATUS"
Section 2.1.4.4 Conditions and Wait-sets
Figure 2-19; DataReader class
Rename
get_publication_match_status()
To
get_publication_matched_status()
Section 2.1.4.1 Communication Status
Communication status table replace:
PUBLICATION_MATCH
With
PUBLICATION_MATCHED
Communication status table replace:
SUBSCRIPTION_MATCH
With
SUBSCRIPTION_MATCHED
Section 2.2.3 DCPS PSM : IDL
Status constants
Replace:
const StatusKind PUBLICATION_MATCH_STATUS = 0x0001 << 13;
const StatusKind SUBSCRIPTION_MATCH_STATUS = 0x0001 << 14;
With
const StatusKind PUBLICATION_MATCHED_STATUS = 0x0001 << 13;
const StatusKind SUBSCRIPTION_MATCHED_STATUS = 0x0001 << 14;
interface DataWriter
Replace:
PublicationMatchedStatus get_publication_match_status();
With
PublicationMatchedStatus get_publication_matched_status();
interface DataReader
Replace:
SubscriptionMatchedStatus get_subscription_match_status();
With
SubscriptionMatchedStatus get_subscription_matched_status();
Resolution: see above
Revised Text: Section 2.1.2.4 Publication Module
Figure 2-9; DataWriter class; Replace
get_publication_match_status()
With
get_publication_matched_status()
Resulting figure 2-9 is:
Section 2.1.2.4.2 DataWriter Class
DataWriter class table ; Replace
get_publication_match_status()
With
get_publication_matched_status()
Section 2.1.2.4.2.19 get_publication_match_status
Rename section heading to:
2.1.2.4.2.19 get_publication_matched_status
Section 2.1.2.4.2.19 get_publication_match_status; Replace
"allows access to the PUBLICATION_MATCH_QOS"
With:
"allows access to the PUBLICATION_MATCHED communication status "
Section 2.1.2.5 Subscription Module
Figure 2-10; DataReader class; Replace
get_subscription_match_status()
With
get_subscription_matched_status()
Resulting figure 2-10 is shown in resolution of issue 9551
Section 2.1.2.5.3 DataReader Class
DataReader class table ; Replace
get_subscription_match_status()
With
get_subscription_matched_status()
Section 2.1.2.5.3.25 get_subscription_match_status
Rename section heading to:
2.1.2.5.3.25 get_subscription_matched_status
Section 2.1.2.5.3.25 get_subscription_match_status
Replace
SUBSCRIPTION_MATCH_STATUS
With
SUBSCRIPTION_MATCHED_STATUS
Section 2.1.4.4 Conditions and Wait-sets
Figure 2-19; DataReader class; Replace
get_publication_match_status()
With
get_publication_matched_status()
Resulting figure 2-19 is shown along with the resolution of 9511:
Section 2.1.4.1 Communication Status
Communication status table replace:
PUBLICATION_MATCH
With
PUBLICATION_MATCHED
Communication status table replace:
SUBSCRIPTION_MATCH
With
SUBSCRIPTION_MATCHED
Section 2.2.3 DCPS PSM : IDL
Status constants ; Replace:
const StatusKind PUBLICATION_MATCH_STATUS = 0x0001 << 13;
const StatusKind SUBSCRIPTION_MATCH_STATUS = 0x0001 << 14;
With
const StatusKind PUBLICATION_MATCHED_STATUS = 0x0001 << 13;
const StatusKind SUBSCRIPTION_MATCHED_STATUS = 0x0001 << 14;
interface DataWriter ; Replace:
PublicationMatchedStatus get_publication_match_status();
With
PublicationMatchedStatus get_publication_matched_status();
interface DataReader; Replace:
SubscriptionMatchedStatus get_subscription_match_status();
With
SubscriptionMatchedStatus get_subscription_matched_status();
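The renamed status-kind constants keep their IDL bit positions, so they still occupy distinct bits of a status mask. A quick transcription into Python (the class names and mask helper are illustrative, not part of the PSM):

```python
# StatusKind bit flags from the IDL PSM, after the rename.
PUBLICATION_MATCHED_STATUS = 0x0001 << 13
SUBSCRIPTION_MATCHED_STATUS = 0x0001 << 14

def status_set(mask, kind):
    """True if the given status kind is set in a communication-status mask."""
    return bool(mask & kind)
```

Because each constant is a distinct power of two, an application can OR several kinds together (e.g. when building a StatusMask for a listener) and test them independently.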
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Discussion: Rename PUBLICATION_MATCH_STATUS to PUBLICATION_MATCHED_STATUS, SUBSCRIPTION_MATCH_STATUS to SUBSCRIPTION_MATCHED_STATUS
Issue 9499: delete_contained_entities() on the Subscriber (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: Should delete_contained_entities() on the Subscriber (and even the DataReader or DomainParticipant) be allowed to return PRECONDITION_NOT_MET?
As described in Section 2.1.2.5.2.6, delete_datareader() can return PRECONDITION_NOT_MET if there are any outstanding loans. In a similar fashion, should we allow delete_contained_entities() on the Subscriber (and even the DataReader or DomainParticipant for that matter) to also return PRECONDITION_NOT_MET in this situation?
Proposed Resolution:
Return PRECONDITION_NOT_MET when delete_contained_entities() is called on either the DataReader, Subscriber, or DomainParticipant when a DataReader has outstanding loans.
Proposed Revised Text:
Section 2.1.2.2.1.18 delete_contained_entities
Before the paragraph that starts with "Once delete_contained_entities returns successfully," add the paragraph
The operation will return PRECONDITION_NOT_MET if any of the contained entities is in a state where it cannot be deleted.
Section 2.1.2.4.1.14 delete_contained_entities
Before the paragraph that starts with "Once delete_contained_entities returns successfully," add the paragraph
The operation will return PRECONDITION_NOT_MET if any of the contained entities is not in a state where it can be deleted.
Section 2.1.2.5.2.14 delete_contained_entities
Before the paragraph that starts with "Once delete_contained_entities returns successfully," add the paragraph
The operation will return PRECONDITION_NOT_MET if any of the contained entities is not in a state where it can be deleted. This will occur, for example, if a contained DataReader cannot be deleted because the application has called a read or take operation and has not called the corresponding return_loan operation to return the loaned samples.
Section 2.1.2.5.3.30 delete_contained_entities
Before the paragraph that starts with "Once delete_contained_entities returns successfully," add the paragraph
The operation will return PRECONDITION_NOT_MET if any of the contained entities is not in a state where it can be deleted.
Resolution: see above
Revised Text: Section 2.1.2.2.1.18 delete_contained_entities
Before the paragraph that starts with "Once delete_contained_entities returns successfully," add the paragraph
The operation will return PRECONDITION_NOT_MET if any of the contained entities is in a state where it cannot be deleted.
Section 2.1.2.4.1.14 delete_contained_entities
Before the paragraph that starts with "Once delete_contained_entities returns successfully," add the paragraph
The operation will return PRECONDITION_NOT_MET if any of the contained entities is not in a state where it can be deleted.
Section 2.1.2.5.2.14 delete_contained_entities
Before the paragraph that starts with "Once delete_contained_entities returns successfully," add the paragraph
The operation will return PRECONDITION_NOT_MET if any of the contained entities is not in a state where it can be deleted. This will occur, for example, if a contained DataReader cannot be deleted because the application has called a read or take operation and has not called the corresponding return_loan operation to return the loaned samples.
Section 2.1.2.5.3.30 delete_contained_entities
Before the paragraph that starts with "Once delete_contained_entities returns successfully," add the paragraph
The operation will return PRECONDITION_NOT_MET if any of the contained entities is not in a state where it can be deleted.
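The precondition this resolution adds can be modeled in a few lines. This is a hypothetical sketch: the per-reader loan count is an assumed representation of "outstanding read/take loans without a matching return_loan".

```python
def delete_contained_entities(readers_outstanding_loans):
    """Model the precondition on deleting a container entity.

    `readers_outstanding_loans` holds, for each contained DataReader, the
    number of loaned sample collections not yet returned via return_loan().
    """
    if any(count > 0 for count in readers_outstanding_loans):
        # A contained DataReader cannot be deleted while loans are out.
        return "PRECONDITION_NOT_MET"
    return "OK"
```

The same check cascades upward: a Subscriber or DomainParticipant fails to delete its contained entities for exactly the reason one of its DataReaders would.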
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Discussion: Return PRECONDITION_NOT_MET when delete_contained_entities() is called on either the DataReader, Subscriber, or DomainParticipant when a DataReader has outstanding loans.
Issue 9500: Return of get_matched_XXX_data() (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: In get_matched_subscription_data, we return PRECONDITION_NOT_MET in this situation. However, in get_matched_publication_data, we return BAD_PARAMETER. Previously, they were both returning PRECONDITION_NOT_MET.
In addition, in both sections we state "The operation get_matched_XXXs to find the XXXs that are currently matched" should probably read "can be used to find".
Proposed Resolution:
Make it consistent by returning BAD_PARAMETER in both.
Proposed Revised Text:
Section 2.1.4.2.23 get_matched_subscription_data
In the first sentence of the second paragraph, replace
"the operation will fail and return PRECONDITION_NOT_MET."
With "the operation will fail and return BAD_PARAMETER."
Resolution: Make it consistent by returning BAD_PARAMETER in both
Revised Text: Section 2.1.4.2.23 get_matched_subscription_data
In the first sentence of the second paragraph, Replace
"the operation will fail and return PRECONDITION_NOT_MET."
With
"the operation will fail and return BAD_PARAMETER."
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Issue 9501: Need INVALID_QOS_POLICY_ID (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The Requested/OfferedIncompatibleQosStatus contains the last_policy_id and we need to set this to something in case no QoS policy has ever been incompatible.
Proposed Resolution:
Add "const QosPolicyId_t INVALID_QOS_POLICY_ID = 0;" to the PSM.
Proposed Revised Text:
Section 2.2.3 DCPS PSM : IDL
In the Qos section add the following to the list of QosPolicyId_t:
const QosPolicyId_t INVALID_QOS_POLICY_ID = 0;
Resolution: Add "const QosPolicyId_t INVALID_QOS_POLICY_ID = 0;" to the PSM
Revised Text: Section 2.2.3 DCPS PSM : IDL
In the Qos section
Add the following to the list of QosPolicyId_t
const QosPolicyId_t INVALID_QOS_POLICY_ID = 0;
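The reason a zero sentinel is needed: last_policy_id in a Requested/OfferedIncompatibleQosStatus must hold a defined value before any incompatibility has ever occurred. A sketch of that lifecycle (the Python class and field names informally mirror the PIM and are not spec API):

```python
INVALID_QOS_POLICY_ID = 0  # matches the IDL constant added by this issue

class IncompatibleQosStatus:
    def __init__(self):
        self.total_count = 0
        # No QoS policy has ever been incompatible yet, so use the sentinel.
        self.last_policy_id = INVALID_QOS_POLICY_ID

    def record_incompatibility(self, policy_id):
        self.total_count += 1
        self.last_policy_id = policy_id
```

Reserving 0 also keeps the sentinel out of the range of real QosPolicyId_t values, which the PSM numbers starting at 1.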
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Issue 9502: Clarify valid handle when calling write() (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: In Section 2.1.2.4.2.11 the write() operation will return PRECONDITION_NOT_MET if the handle is "valid but does not correspond to the given instance". Further, it goes on to state that "in the case the handle is invalid, the behavior is in general unspecified, but if detectable by a DDS implementation, the returned error-code will be 'BAD_PARAMETER'." We should clarify what is "valid" versus "invalid".
Valid means the handle corresponds to a registered instance.
Proposed Resolution:
Clarify that valid means the handle corresponds to a registered instance.
Whether a handle that is valid but does not correspond to the given instance can be detected should be left up to the implementation.
Proposed Revised Text:
Section 2.1.2.4.2.11 write
Remove the last paragraph that reads "In case the provided handle is valid"
Add a new paragraph directly following the one that reads "If handle is any value other than HANDLE_NIL" as follows:
In case the provided handle is valid, i.e., corresponds to an existing instance, but does not correspond to the same instance referred to by the 'data' parameter, the behavior is in general unspecified, but if detectable by the Service implementation, the return error-code will be 'PRECONDITION_NOT_MET'. In case the handle is invalid, the behavior is in general unspecified, but if detectable the returned error-code will be 'BAD_PARAMETER'.
Section 2.1.2.4.2.13 dispose
Replace the next to last paragraph that reads "In case the provided handle is valid."
With the same paragraph above:
In case the provided handle is valid, i.e., corresponds to an existing instance, but does not correspond to the same instance referred to by the 'data' parameter, the behavior is in general unspecified, but if detectable by the Service implementation, the return error-code will be 'PRECONDITION_NOT_MET'. In case the handle is invalid, the behavior is in general unspecified, but if detectable the returned error-code will be 'BAD_PARAMETER'.
Resolution: see above
Revised Text: Section 2.1.2.4.2.11 write
Remove the last paragraph that reads "In case the provided handle is valid …"
In case the provided handle is valid but does not correspond to the given instance, the resulting error-code of the operation will be 'PRECONDITION_NOT_MET.' In case the handle is invalid, the behavior is in general unspecified, but if detectable by a DDS implementation, the returned error-code will be 'BAD_PARAMETER.'
Add a new paragraph directly following the one that reads "If handle is any value other than HANDLE_NIL …" as follows:
In case the provided handle is valid, i.e., corresponds to an existing instance, but does not correspond to the same instance referred to by the 'data' parameter, the behavior is in general unspecified, but if detectable by the Service implementation, the return error-code will be 'PRECONDITION_NOT_MET'. In case the handle is invalid, the behavior is in general unspecified, but if detectable the returned error-code will be 'BAD_PARAMETER'.
Section 2.1.2.4.2.13 dispose
Replace the next to last paragraph that reads "In case the provided handle is valid. …"
In case the provided handle is valid but does not correspond to the given instance, the resulting error-code of the operation will be 'PRECONDITION_NOT_MET.' In case the handle is invalid, the behavior is in general unspecified, but if detectable by a DDS implementation, the returned error-code will be 'BAD_PARAMETER.'
With the same paragraph above
In case the provided handle is valid, i.e., corresponds to an existing instance, but does not correspond to the same instance referred to by the 'data' parameter, the behavior is in general unspecified, but if detectable by the Service implementation, the return error-code will be 'PRECONDITION_NOT_MET'. In case the handle is invalid, the behavior is in general unspecified, but if detectable the returned error-code will be 'BAD_PARAMETER'.
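The clarified valid/invalid distinction can be captured in a short model: a handle is "valid" if and only if it corresponds to a registered instance. HANDLE_NIL, the registry dictionary, and the function name below are modeling assumptions, not DDS API.

```python
HANDLE_NIL = None

def check_write_handle(registry, handle, instance_key):
    """Return the error code write()/dispose() should report, or 'OK'.

    `registry` maps instance handles to the key of the registered instance.
    """
    if handle is HANDLE_NIL:
        return "OK"                    # service deduces the instance from the data
    if handle not in registry:
        return "BAD_PARAMETER"         # invalid: no registered instance has it
    if registry[handle] != instance_key:
        return "PRECONDITION_NOT_MET"  # valid handle, but for a different instance
    return "OK"
```

Note that, per the resolution, both error paths apply only "if detectable"; an implementation is not obliged to perform these checks.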
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Discussion: Clarify that valid means the handle corresponds to a registered instance.
Whether a handle that is valid but does not correspond to the given instance can be detected should be left up to the implementation.
Issue 9503: Operation dispose_w_timestamp() should be callable on unregistered instance (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: In Section 2.1.2.4.2.14 (dispose_w_timestamp) it states that the operation will return PRECONDITION_NOT_MET if called on an instance that has not yet been registered. This is not true as the operation will implicitly register the instance just as write does. This restriction was also originally in 2.1.2.4.2.13 (dispose) but has already been removed.
Proposed Resolution:
Remove the offending paragraph.
Proposed Revised Text:
Section 2.1.2.4.2.14 dispose_w_timestamp
Remove the last two paragraphs, that is, the text starting from "The operation must be only called on registered instances." until the end of the section.
Resolution: see above
Revised Text: Section 2.1.2.4.2.14 dispose_w_timestamp
Remove the paragraph:
The operation must be only called on registered instances. Otherwise the operation will return the error PRECONDITION_NOT_MET.
Section 2.1.2.4.2.14 dispose_w_timestamp
Add the following paragraph before the last paragraph in the section:
If handle is any value other than HANDLE_NIL, then it must correspond to the value
returned by register_instance when the instance (identified by its key) was registered.
In case the provided handle is valid but does not correspond to the given instance, the
resulting error-code of the operation will be 'PRECONDITION_NOT_MET.' In case the
handle is invalid, the behavior is in general unspecified, but if detectable by a DDS implementation, the returned error-code will be 'BAD_PARAMETER.'
This operation may block and return TIMEOUT under the same circumstances described
for the write operation (Section 2.1.2.4.2.11 ).
Section 2.1.2.4.2.12 write_w_timestamp
Add the following paragraph after the first paragraph in the section:
If handle is any value other than HANDLE_NIL, then it must correspond to the value
returned by register_instance when the instance (identified by its key) was registered.
In case the provided handle is valid but does not correspond to the given instance, the
resulting error-code of the operation will be 'PRECONDITION_NOT_MET.' In case the
handle is invalid, the behavior is in general unspecified, but if detectable by a DDS implementation, the returned error-code will be 'BAD_PARAMETER.'
Section 2.1.2.4.2.7 unregister_instance
In the paragraph that starts with
"If handle is any value other than HANDLE_NIL, then it must correspond to the value returned by register_instance when the instance (identified by its key) was registered."
Replace the sentence:
Then if there is no correspondence, the result of the operation is unspecified.
With the paragraph:
In case the provided handle is valid but does not correspond to the given instance, the
resulting error-code of the operation will be 'PRECONDITION_NOT_MET.' In case the
handle is invalid, the behavior is in general unspecified, but if detectable by a DDS implementation, the returned error-code will be 'BAD_PARAMETER.'
Section 2.1.2.4.2.8 unregister_instance_w_timestamp
Add the following paragraph before the last paragraph in the section:
If handle is any value other than HANDLE_NIL, then it must correspond to the value
returned by register_instance when the instance (identified by its key) was registered.
In case the provided handle is valid but does not correspond to the given instance, the
resulting error-code of the operation will be 'PRECONDITION_NOT_MET.' In case the
handle is invalid, the behavior is in general unspecified, but if detectable by a DDS implementation, the returned error-code will be 'BAD_PARAMETER.'
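The handle-validation rule repeated in the revised sections above can be summarized in a small model. The sketch below is purely illustrative (the class and return-code names are placeholders, not a normative DDS binding): a HANDLE_NIL handle is always accepted, a valid handle that belongs to a different instance yields PRECONDITION_NOT_MET, and a handle that is detectably invalid yields BAD_PARAMETER.

```python
# Toy model of the instance-handle validation rule described above.
# DataWriter/ReturnCode are illustrative names, not a real DDS binding.

HANDLE_NIL = None

class ReturnCode:
    OK = "OK"
    PRECONDITION_NOT_MET = "PRECONDITION_NOT_MET"
    BAD_PARAMETER = "BAD_PARAMETER"

class DataWriter:
    def __init__(self):
        self._handles = {}   # instance key -> handle
        self._next = 1

    def register_instance(self, key):
        # Returns the existing handle, or assigns a fresh one.
        h = self._handles.setdefault(key, self._next)
        if h == self._next:
            self._next += 1
        return h

    def _check_handle(self, key, handle):
        if handle is HANDLE_NIL:
            return ReturnCode.OK                    # handle inferred from the key
        if handle not in self._handles.values():
            return ReturnCode.BAD_PARAMETER         # invalid handle, if detectable
        if self._handles.get(key) != handle:
            return ReturnCode.PRECONDITION_NOT_MET  # valid handle, wrong instance
        return ReturnCode.OK

    def dispose(self, key, handle=HANDLE_NIL):
        rc = self._check_handle(key, handle)
        if rc != ReturnCode.OK:
            return rc
        # ... the actual dispose logic would go here ...
        return ReturnCode.OK
```

The same check applies equally to dispose_w_timestamp, write_w_timestamp, unregister_instance, and unregister_instance_w_timestamp, which is the alignment the resolution establishes.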
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Discussion: Remove the offending paragraph.
In addition, specify the behavior with regard to passing an invalid instance_handle to the operations dispose_w_timestamp, write_w_timestamp, and unregister_instance; the behavior is the same as that specified for write and dispose. Also align the explanation given for passing an 'invalid' handle to the operation "unregister" with the explanation in the other sections.
Issue 9504: Behavior of dispose with regards to DURABILITY QoS (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: In Section 2.1.2.4.2.13 (dispose) it states "in case the DURABILITY QoS policy is TRANSIENT or PERSISTENT, the Service should take care to clean anything related to that instance so that late-joining applications would not see it".
Is this really necessary? Is it not acceptable to allow late-joining readers to see an instance with the NOT_ALIVE_DISPOSED instance state?
Does this also apply to TRANSIENT_LOCAL?
We think disposed instances should be propagated to newly discovered applications; otherwise there would be no way to enforce ownership of a disposed instance.
Furthermore, the application should be notified of disposed instances even if this is the first time the middleware sees the instance, because in practice there is no way for the middleware to tell whether the application has already seen it. For example, following a network partition the middleware may have notified NOT_ALIVE_NO_WRITERS and, once the application had taken all the samples, reclaimed the information on that instance; when the middleware sees the instance again it treats it as new, while the application may still hold information on it…
So the use case where a newly joining reader does not want to receive instances that were disposed before it joined should be handled on the writer side, either by explicitly unregistering the instances or by some new QoS that auto-unregisters disposed instances.
Another issue is whether the act of disposing on the writer side should automatically remove previous samples for that instance, and whether that is done for particular values of the HISTORY QoS (e.g., only when it is KEEP_LAST, or KEEP_LAST with depth==1, or even for KEEP_ALL). It seems the control of this should be another QoS on the WRITER_LIFECYCLE.
Proposed Resolution:
For now eliminate the following text from Section 2.1.2.4.2.13 (dispose)
"In case the DURABILITY QoS policy is TRANSIENT or PERSISTENT, the Service should take care to clean anything related to that instance so that late-joining applications would not see it".
Proposed Revised Text:
Section 2.1.2.4.2.13 dispose
Remove the paragraph:
In addition, in case the DURABILITY QoS policy is TRANSIENT or PERSISTENT, the Service should take care to clean anything related to that instance so that late-joining applications would not see it.
Resolution: see above
Revised Text: Section 2.1.2.4.2.13 dispose
Remove the paragraph:
In addition, in case the DURABILITY QoS policy is TRANSIENT or PERSISTENT, the Service should take care to clean anything related to that instance so that late-joining applications would not see it.
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Discussion: Eliminate the following text from Section 2.1.2.4.2.13 (dispose)
"In case the DURABILITY QoS policy is TRANSIENT or PERSISTENT, the Service should take care to clean anything related to that instance so that late-joining applications would not see it".
Issue 9505: Typo in copy_from_topic_qos (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: In Section 2.1.2.5.2.17 there is a typo in the last paragraph where "datawriter_qos" should be "datareader_qos".
Proposed Resolution:
Correct the typo.
Proposed Revised Text:
Section 2.1.2.5.2.17 copy_from_topic_qos
Replace "datawriter_qos" with "datareader_qos" in the first sentence of the last paragraph that currently reads "This operation does not check the resulting datawriter_qos for consistency".
Resolution: Correct the typo
Revised Text: Section 2.1.2.5.2.17 copy_from_topic_qos
Replace "datawriter_qos" with "datareader_qos" in the first sentence of the last paragraph as shown below:
This operation does not check the resulting datawriter_qos datareader_qos for consistency
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Issue 9506: Order of parameters incorrect in PSM (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: In the PSM for get_discovered_topic_data() and get_discovered_participant_data() on the DomainParticipant, the data parameter should be first followed by the handle. The order is correct in the PIM.
Proposed Resolution:
Make the suggested modifications.
Proposed Revised Text:
Section 2.2.3 DCPS PSM : IDL
In the DomainParticipant interface:
Change the order of the parameters to get_discovered_participant_data from "in InstanceHandle_t participant_handle, inout ParticipantBuiltinTopicData participant_data" to "inout ParticipantBuiltinTopicData participant_data, in InstanceHandle_t participant_handle".
Change the order of the parameters to get_discovered_topic_data from "in InstanceHandle_t topic_handle, inout TopicBuiltinTopicData topic_data" to "inout TopicBuiltinTopicData topic_data, in InstanceHandle_t topic_handle".
Resolution:
Revised Text: Section 2.2.3 DCPS PSM : IDL
In the DomainParticipant interface
Change the order of the parameters to get_discovered_participant_data from
in InstanceHandle_t participant_handle, inout ParticipantBuiltinTopicData participant_data
To
inout ParticipantBuiltinTopicData participant_data, in InstanceHandle_t participant_handle
Change the order of the parameters to get_discovered_topic_data from
in InstanceHandle_t topic_handle, inout TopicBuiltinTopicData topic_data
To
inout TopicBuiltinTopicData topic_data, in InstanceHandle_t topic_handle
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
Issue 9507: Typo in get_discovered_participant_data (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: In Section 2.1.2.2.1.28 there is a typo in the next to last paragraph where "get_matched_participants" should be "get_discovered_participants".
Proposed Resolution:
Correct the typo.
Proposed Revised Text:
Section 2.1.2.2.1.28 get_discovered_participant_data
In the next to last paragraph replace "get_matched_participants" with "get_discovered_participants" where it currently reads "Use the operation get_matched_participants to find ".
Resolution: Correct the typo.
Revised Text: Section 2.1.2.2.1.28 get_discovered_participant_data
In the next to last paragraph replace "get_matched_participants" with "get_discovered_participants" as shown below:
Use the operation get_matched_participants get_discovered_participants to find...
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Issue 9508: Operation wait() on a WaitSet should return TIMEOUT (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: Currently TIMEOUT is not a specified valid return code for the wait() operation. The specification explicitly states that timeout is conveyed by returning OK with an empty list of conditions. We should consider adding TIMEOUT as an explicit valid return value.
Proposed Resolution:
Add TIMEOUT as a valid return code to wait().
Proposed Revised Text:
Section 2.1.2.1.6.3 wait
In the next to last paragraph, replace
"If the duration is exceeded, wait will also return with the return code OK. In this case, the resulting list of conditions will be empty."
With
"If the duration is exceeded, wait will return with return code TIMEOUT."
Resolution: Add TIMEOUT as a valid return code to wait().
Revised Text: Section 2.1.2.1.6.3 wait
In the next to last paragraph, replace
If the duration is exceeded …, wait will also return with the return code OK. In this case, the resulting list of conditions will be empty.
With
If the duration is exceeded …, wait will return with return code TIMEOUT.
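The revised semantics can be modeled with a toy wait-set. This is an illustrative sketch under assumed names, not a real DDS WaitSet: wait() now returns TIMEOUT when the duration elapses without any condition triggering, instead of OK with an empty condition list.

```python
# Minimal sketch of the revised wait() semantics: a timeout yields an
# explicit TIMEOUT return code instead of OK plus an empty condition list.
# Illustrative model only; not a real DDS WaitSet implementation.
import threading

TIMEOUT, OK = "TIMEOUT", "OK"

class WaitSet:
    def __init__(self):
        self._cv = threading.Condition()
        self._triggered = set()

    def trigger(self, condition):
        # Mark a condition's trigger_value as TRUE and wake waiters.
        with self._cv:
            self._triggered.add(condition)
            self._cv.notify_all()

    def wait(self, duration):
        with self._cv:
            signaled = self._cv.wait_for(lambda: self._triggered,
                                         timeout=duration)
            if not signaled:
                return TIMEOUT, []           # duration exceeded
            return OK, sorted(self._triggered)
```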
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Issue 9509: Example in 2.1.4.4.2 not quite correct (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: In Section 2.1.4.4.2 (Trigger State of the ReadCondition) the last paragraph describes an example. However, the example is not quite correct, because reading samples belonging to the latest generation will cause the view_state to become NOT_NEW.
For the example considered, it may not be necessary to specify the view_state, since it is not relevant to the desired condition being triggered when a new sample arrives, given that all other samples were previously at least read.
Proposed Resolution:
Remove mention of the view_state.
Proposed Revised Text:
Section 2.1.4.4.2 Trigger State of the ReadCondition
In the last paragraph, change the sentence from
"A ReadCondition that has a sample_state_mask = {NOT_READ}, view_state_mask = {NEW} will have trigger_value of TRUE whenever a new sample arrives and will transition to FALSE as soon as all the NEW samples are either read or taken."
To
"A ReadCondition that has a sample_state_mask = {NOT_READ} will have trigger_value of TRUE whenever a new sample arrives and will transition to FALSE as soon as all the new samples are either read or taken. "
Section 2.1.4.4.2 Trigger State of the ReadCondition
In that last paragraph change the last sentence from
"that would only change the SampleState to READ but the sample would still have (SampleState, ViewState) = (READ, NEW) which overlaps the mask on the ReadCondition".
To
"that would only change the SampleState to READ which still overlaps the mask on the ReadCondition".
Resolution: Remove mention of the view_state.
Revised Text: Section 2.1.4.4.2 Trigger State of the ReadCondition
Change the last paragraph as shown below
To elaborate further, consider the following example: A ReadCondition that has a sample_state_mask = {NOT_READ}, view_state_mask = {NEW} will have trigger_value of TRUE whenever a new sample arrives and will transition to FALSE as soon as all the NEW samples are either read (so their status changes to READ) or taken (so they are no longer managed by the Service). However if the same ReadCondition had a sample_state_mask = {READ, NOT_READ}, then the trigger_value would only become FALSE once all the new samples are taken (it is not sufficient to read them as that would only change the SampleState to READ but the sample would still have (SampleState, ViewState) = (READ, NEW) which overlaps the mask on the ReadCondition.
To
To elaborate further, consider the following example: A ReadCondition that has a sample_state_mask = {NOT_READ} will have trigger_value of TRUE whenever a new sample arrives and will transition to FALSE as soon as all the new samples are either read (so their status changes to READ) or taken (so they are no longer managed by the Service).
However if the same ReadCondition had a sample_state_mask = {READ, NOT_READ}, then the trigger_value would only become FALSE once all the new samples are taken (it is not sufficient to read them as that would only change the SampleState to READ which still overlaps the mask on the ReadCondition".
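The corrected trigger logic reduces to a simple rule: the trigger_value is TRUE as long as at least one sample matches the condition's sample_state_mask. The sketch below is an illustrative model of that rule (the names are placeholders, not a DDS binding).

```python
# Toy model of the ReadCondition trigger logic discussed above.
NOT_READ, READ = "NOT_READ", "READ"

def trigger_value(samples, sample_state_mask):
    """samples: list of sample states; sample_state_mask: set of
    accepted states. TRUE while any sample matches the mask."""
    return any(state in sample_state_mask for state in samples)

# With mask {NOT_READ}: reading every sample clears the trigger.
# With mask {READ, NOT_READ}: samples must be *taken* (removed from
# the list) for the trigger to clear, exactly as the text describes.
```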
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Issue 9510: Non intuitive constant names (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: The following literals are defined:
DURATION_INFINITY_SEC
DURATION_INFINITY_NSEC
TIMESTAMP_INVALID_SEC
TIMESTAMP_INVALID_NSEC
These are incorrectly named and should be:
DURATION_INFINITE_SEC
DURATION_INFINITE_NSEC
TIME_INVALID_SEC
TIME_INVALID_NSEC
Proposed Resolution:
Add the correct names.
Proposed Revised Text:
Section 2.2.3 DCPS PSM : IDL
Replace:
const long DURATION_INFINITY_SEC = 0x7fffffff;
const unsigned long DURATION_INFINITY_NSEC = 0x7fffffff;
const long TIMESTAMP_INVALID_SEC = -1;
const unsigned long TIMESTAMP_INVALID_NSEC = 0xffffffff;
With:
const long DURATION_INFINITE_SEC = 0x7fffffff;
const unsigned long DURATION_INFINITE_NSEC = 0x7fffffff;
const long TIME_INVALID_SEC = -1;
const unsigned long TIME_INVALID_NSEC = 0xffffffff;
Resolution: Use the correct names
Revised Text: Section 2.2.3 DCPS PSM : IDL
Replace:
const long DURATION_INFINITY_SEC = 0x7fffffff;
const unsigned long DURATION_INFINITY_NSEC = 0x7fffffff;
const long TIMESTAMP_INVALID_SEC = -1;
const unsigned long TIMESTAMP_INVALID_NSEC = 0xffffffff;
With
const long DURATION_INFINITE_SEC = 0x7fffffff;
const unsigned long DURATION_INFINITE_NSEC = 0x7fffffff;
const long TIME_INVALID_SEC = -1;
const unsigned long TIME_INVALID_NSEC = 0xffffffff;
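For illustration, the renamed constants can be written down directly in a host language; the sentinel values are unchanged (0x7fffffff is the largest 32-bit signed value). The is_infinite helper below is hypothetical, sketching how an implementation might test for the infinite-duration sentinel.

```python
# The renamed constants above, modeled in Python for illustration.
DURATION_INFINITE_SEC = 0x7fffffff    # largest 32-bit signed value
DURATION_INFINITE_NSEC = 0x7fffffff
TIME_INVALID_SEC = -1
TIME_INVALID_NSEC = 0xffffffff        # all-ones nanosecond field

def is_infinite(sec, nsec):
    # Hypothetical helper: a duration is infinite when both fields
    # carry the sentinel value.
    return sec == DURATION_INFINITE_SEC and nsec == DURATION_INFINITE_NSEC
```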
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Issue 9511: Corrections to Figure 2-19 (data-distribution-rtf)
Click here for this issue's archive.
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: In Figure 2-19 in Section 2.1.4.4 (Conditions and Wait-sets):
There is no such delete_statuscondition() operation on the Entity.
The ReadCondition should have a view_state_mask and an instance_state_mask instead of a lifecycle_state_mask.
Proposed Resolution:
Make the suggested corrections.
Proposed Revised Text:
Section 2.1.4.4 Conditions and Wait-sets
In Figure 2-19
Remove "delete_statuscondition()" from the operations listed on the Entity.
Remove "lifecycle_state_mask [*] : ViewStateKind" from the attributes listed on the ReadCondition.
Add "view_state_mask [*] : ViewStateKind" and "instance_state_mask [*] : InstanceStateKind" to the end of the attributes listed on the ReadCondition.
Resolution: Make the suggested corrections
Revised Text: Section 2.1.4.4 Conditions and Wait-sets
In Figure 2-19
Remove "delete_statuscondition()" from the operations listed on the Entity.
Remove "lifecycle_state_mask [*] : ViewStateKind" from the attributes listed on the ReadCondition.
Add "view_state_mask [*] : ViewStateKind" and "instance_state_mask [*] : InstanceStateKind" to the end of the attributes listed on the ReadCondition. The resulting figure 2-19 is (this is also affected by the resolution of issue 9498):
Disposition: Resolved
Actions taken:
April 2, 2006: received issue
August 23, 2006: closed issue
Issue 9516: Simplify Relation Management (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary: The purpose of the DLRL has been described as being able “to provide more direct access to the exchanged data, seamlessly integrated with the native-language constructs”. This means that DLRL should offer applications an OO-view on the information model(s) they use. In this view, objects behave in the same way as ordinary, native language objects.
Providing intuitive object access and object navigation should be key benefits of DLRL compared to plain DCPS usage, where instances and their relations need to be resolved manually. Object navigation in DLRL therefore needs to be simple and intuitive, just like navigating between objects in any ordinary native OO language.
It is in this aspect that DLRL falls short: object navigation is not simple and intuitive, since it requires intermediate objects (RefRelations and ObjectReferences) that abstract applications from the navigable objects. The purpose of these intermediate objects was to serve as a sort of smart pointer that abstracts applications from knowledge about the exact location, and even the existence, of objects (to allow a form of lazy instantiation).
However, since the potential benefits of smart pointer management depend heavily on the underlying target language, the DLRL specification does not address them and only explains the effort an application must make in the absence of any smart pointer support. This results in the following problems:
The way in which a DLRL implementation solves pointer arithmetic is not standardized and may change from vendor to vendor and from language to language.
When smart pointer arithmetic is not available, applications will be expected to perform a lot of extra relation management, which is outside the scope of most application programmers.
Proposed Resolution:
Simplify relation management by removing all intermediate relation objects from the API (Reference, Relation, RefRelation, ObjectReference, ListRelation and MapRelation). Navigation of single relations is done by going directly from ObjectRoot to ObjectRoot (simplifying the IDL object model as well). Implementations can still choose to do smart resource management (e.g. lazy instantiation), but they should do so in a fully transparent way, one that is invisible to applications.
This approach also makes the PIM and PSM (which deviated quite a lot from each other with respect to these intermediate relation-like objects) more consistent.
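The effect of the proposed simplification can be sketched informally: with the intermediate objects removed, a single relation is just a direct reference from one ObjectRoot to another, and navigation becomes plain attribute access, as in any native OO language. The Track/Radar classes below are illustrative only (loosely modeled on the Radar example of Section 3.2.3), not generated DLRL code.

```python
# Illustrative sketch (not spec API) of the simplification: a relation
# attribute holds the target ObjectRoot directly, with no intervening
# ObjectReference/RefRelation object to dereference.
class ObjectRoot:
    pass

class Radar(ObjectRoot):
    def __init__(self, name):
        self.name = name

class Track(ObjectRoot):
    def __init__(self, radar=None):
        # Single relation: direct reference to another ObjectRoot.
        self.radar = radar

radar = Radar("r1")
track = Track(radar=radar)
# Navigation is now plain attribute access:
name = track.radar.name
```

An implementation may still instantiate the target object lazily behind this attribute access, but that resource management stays fully transparent to the application, as the resolution requires.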
Proposed Revised Text:
Section 3.1.5.2, 2nd paragraph, 1st sentence: “DLRL classes are linked to other DLRL classes by means of Relation Objects”. This should be replaced with “… by means of relations.”.
Change the Object Diagram of Figure 3.4. (an alternative Object Diagram will be provided).
Change the table immediately following Figure 3.4 by removing the ObjectReference, Reference, Relation, RefRelation, ListRelation, StrMapRelation and IntMapRelation entries from it.
Remove the foot-note directly following this table (starting with number 1) that says: “The specification does … (lazy instantiation).”
Section 3.1.6.3.2: Remove the sequence of ObjectReference attribute from the CacheAccess table and from the explanation below it. As a replacement, see T_DLRL#2 and T_DLRL#3.
Section 3.1.6.3.2: Remove the deref method from the CacheAccess table and from the explanation below it.
Section 3.1.6.3.3: Remove the sequence of ObjectReference attribute from the Cache table and from the explanation below it. As a replacement, see T_DLRL#2 and T_DLRL#3.
Section 3.1.6.3.3: Remove the deref method from the Cache table and from the explanation below it.
Section 3.2.1.2.1: Remove the following lines from the CacheAccess and Cache interface:
readonly attribute ObjectReferenceSeq refs;
ObjectRoot deref( in ObjectReference ref) raises (NotFound);
Section 3.1.6.3.5: Remove the sequence of ObjectReference attribute from the ObjectHome table, and from the explanation below it.
Section 3.2.1.2.1: Remove the following line from the ObjectHome interface:
readonly attribute ObjectReferenceSeq refs;
Section 3.1.6.3.5: Change the entire explanation of the auto_deref attribute from:
“a boolean that indicates if ObjectReference corresponding to that type should be implicitly instantiated (TRUE) or if this action should be explicitly done by the application when needed by calling a deref operation (auto_deref). As selections act on instantiated objects (see section 3.1.6.3.7 for details on selections), TRUE is a sensible setting when selections are attached to that home.”
to:
“a boolean that indicates whether the state of a DLRL Object should always be loaded into that Object (auto_deref = TRUE) or whether this state will only be loaded after it has been accessed explicitly by the application (auto_deref = FALSE).”
Section 3.1.6.3.5: Change the entire explanation of the deref_all method from:
“ask for the instantiation of all the ObjectReference that are attached to that home, in the Cache (deref_all).”
To:
“ask to load the most recent state of a DLRL Object into that Object for all objects managed by that home (deref_all).”
Section 3.1.6.3.5: Change the entire explanation of the underef_all method from:
“ask for the removal of non-used ObjectRoot that are attached to this home (underef_all).”
To:
“ask to unload all object states from objects that are attached to this home (underef_all).”
Section 3.1.6.3.6: Replace all occurrences of ObjectReference with ObjectRoot in the ObjectListener table. Also remove the second parameter of the on_object_modified method.
Section 3.1.6.3.6: Change the explanation of on_object_created from:
“… this operation is called with the ObjectReference of the newly created object (ref).”
to:
“… this operation is called with the value of the newly created object (the_object).”
Section 3.1.6.3.6: Change the explanation of on_object_modified from:
“This operation is called with the ObjectReference of the modified object (ref) and its old value (old_value); the old value may be NULL.”
To:
“This operation is called with the new value of the modified object (the_object).”
Section 3.1.6.3.6: Change the explanation of on_object_deleted from:
“… this operation is called with the ObjectReference of the newly deleted object (ref).”
To:
“… this operation is called with the value of the newly deleted object (the_object).”
Section 3.1.6.3.10: Replace all occurrences of ObjectReference with ObjectRoot in the SelectionListener table.
Section 3.2.1.2.1: In the IDL interfaces for ObjectListener and SelectionListener, change the following lines from:
local interface ObjectListener {
boolean on_object_created ( in ObjectReference ref );
/****
* will be generated with the proper Foo type
* in the derived FooListener
* boolean on_object_modified ( in ObjectReference ref,
* in ObjectRoot old_value);
****/
boolean on_object_deleted ( in ObjectReference ref );
};
local interface SelectionListener {
/***
* will be generated with the proper Foo type
* in the derived FooSelectionListener
*
void on_object_in ( in ObjectRoot the_object );
void on_object_modified ( in ObjectRoot the_object );
*
***/
void on_object_out ( in ObjectReference the_ref );
};
To:
local interface ObjectListener {
/****
* will be generated with the proper Foo type
* in the derived FooListener
boolean on_object_created ( in ObjectRoot the_object );
boolean on_object_modified ( in ObjectRoot the_object );
boolean on_object_deleted ( in ObjectRoot the_object );
*
****/
};
local interface SelectionListener {
/***
* will be generated with the proper Foo type
* in the derived FooSelectionListener
*
void on_object_in ( in ObjectRoot the_object );
void on_object_modified ( in ObjectRoot the_object );
void on_object_out (in ObjectRoot the_object );
*
***/
};
Section 3.2.1.2.2: In the IDL interfaces for FooListener and FooSelectionListener, change the following lines from:
local interface FooListener: DDS::ObjectListener {
void on_object_modified ( in DDS::ObjectReference ref,
in Foo old_value );
};
local interface FooSelectionListener : DDS::SelectionListener {
void on_object_in ( in Foo the_object );
void on_object_modified ( in Foo the_object );
};
To:
local interface FooListener: DDS::ObjectListener {
boolean on_object_created ( in Foo the_object );
boolean on_object_modified ( in Foo the_object );
boolean on_object_deleted ( in Foo the_object );
};
local interface FooSelectionListener : DDS::SelectionListener {
void on_object_in ( in Foo the_object );
void on_object_modified ( in Foo the_object );
void on_object_out (in Foo the_object );
};
Section 3.1.6.3.13: Remove the ObjectReference attribute from the ObjectRoot table, and from the explanation below it.
Section 3.2.1.2.1: Remove the following line from the IDL in the ObjectRoot:
readonly attribute ObjectReference ref;
Section 3.1.6.3.13: Change the following sentence from:
“In addition, application classes (i.e., inheriting from ObjectRoot), will be generated with a set of methods dedicated to each shared attribute:”
To:
“In addition, application classes (i.e., inheriting from ObjectRoot), will be generated with a set of methods dedicated to each shared attribute (including single- and multi-relation attributes):”
Section 3.1.6.3.14 can be removed (ObjectReference).
Section 3.2.1.2.1: Remove the following lines from the IDL:
/*****************
* ObjectReference
*****************/
struct ObjectReference {
DLRLOid oid;
unsigned long home_index;
};
typedef sequence<ObjectReference> ObjectReferenceSeq;
Section 3.1.6.3.15 can be removed (Reference).
Section 3.1.6.3.20 can be removed (Relation).
Section 3.1.6.3.21 can be removed (RefRelation).
Section 3.1.6.3.22 - Section 3.1.6.3.24 can be removed (ListRelation, IntMapRelation and StrMapRelation).
Section 3.2.1.2.1: Remove the following lines from the IDL:
/********************************
* Value Bases for Relations
*********************************/
valuetype RefRelation {
private ObjectReference m_ref;
boolean is_composition();
void reset();
boolean is_modified ( in ReferenceScope scope );
};
valuetype ListRelation : ListBase {
private ObjectReferenceSeq m_refs;
boolean is_composition();
};
valuetype StrMapRelation : StrMapBase {
struct Item {
string key;
ObjectReference ref;
};
typedef sequence <Item> ItemSeq;
private ItemSeq m_refs;
boolean is_composition();
};
valuetype IntMapRelation : IntMapBase {
struct Item {
long key;
ObjectReference ref;
};
typedef sequence <Item> ItemSeq;
private ItemSeq m_refs;
boolean is_composition();
};
Section 3.2.1.1: 1st paragraph after the numbered list of DLRL entities, remove the following sentence: “(with the exception of ObjectReference, …. , so that it can be embedded)”.
Section 3.2.1.2.2: Change the following lines in IDL from:
valuetype FooStrMap : DDS::StrMapRelation { // StrMap<Foo>
…
valuetype FooIntMap : DDS::IntMapRelation { // IntMap<Foo>
To:
valuetype FooStrMap : DDS::StrMap { // StrMap<Foo>
…
valuetype FooIntMap : DDS::IntMap { // IntMap<Foo>
Section 3.2.2.3.1: Remove the “Ref” value from the allowed list of patterns in the templateDef. The templateDef then changes from:
<!ATTLIST templateDef name CDATA #REQUIRED
pattern (List | StrMap | IntMap | Ref) #REQUIRED
itemType CDATA #REQUIRED>
To (see also Issues T_DLRL#7 and T_DLRL#8):
<!ATTLIST templateDef name CDATA #REQUIRED
pattern (Set | StrMap | IntMap) #REQUIRED
itemType CDATA #REQUIRED>
Section 3.2.2.3.2.3, 2nd bullet: Remove the “Ref” pattern from the list of supported constructs.
Section 3.2.3.2: Replace the forward valuetype declaration for RadarRef with a forward declaration of type Radar, so change from:
valuetype RadarRef // Ref<Radar>
To:
valuetype Radar;
Section 3.2.3.3: Remove the following line from the XML (in both XML examples):
“<templateDef name=“RadarRef”
pattern=“Ref” itemType=“Radar”/>”
Resolution: see above
Revised Text: Section 3.1.5.1, 2nd paragraph,
1st sentence:
"DLRL classes are linked to other DLRL classes by means of Relation Objects".
This should be replaced with
"… by means of relations.".
Change the Object Diagram of Figure 3.4 (this diagram is also influenced by the other issues):
Old diagram:
New Diagram:
Change the table immediately following Figure 3.4
Remove the following entries from the table:
ObjectReference, Reference, Relation, RefRelation, ListRelation,
StrMapRelation and IntMapRelation.
Remove the foot-note directly following this table (starting with number 1) that says:
"The specification does … (lazy instantiation)."
Section 3.1.6.3.2:
Remove the sequence of ObjectReference attribute from the CacheAccess table and from the explanation below it. As a replacement, see T_DLRL#2 and T_DLRL#3.
refs ObjectReference []
Once the CacheAccess is created for a given purpose, copies of DLRL objects can be attached to it (see ObjectRoot::clone method), by means of references (refs) and then
Section 3.1.6.3.2:
Remove the deref method from the CacheAccess table and from the explanation below it.
deref ObjectRoot
ref ObjectReference
· a method allows transformation of an ObjectReference in the ObjectRoot which is valid for this CacheAccess (deref).
Section 3.1.6.3.3:
Remove the sequence of ObjectReference attribute from the Cache table and from the explanation below it. As a replacement, see T_DLRL#2 and T_DLRL#3.
refs ObjectReference []
· the attached ObjectReference (refs).
Section 3.1.6.3.3:
Remove the deref method from the Cache table and from the explanation below it.
deref ObjectRoot
ref ObjectReference
Section 3.2.1.2.1:
Remove the following lines from the CacheAccess interface
readonly attribute ObjectReferenceSeq refs;
ObjectRoot deref( in ObjectReference ref) raises (NotFound);
Remove the following lines from the Cache interface:
readonly attribute ObjectReferenceSeq refs;
ObjectRoot deref( in ObjectReference ref);
Section 3.1.6.3.5:
Remove the sequence of ObjectReference attribute from the ObjectHome table, and from the explanation below it.
refs ObjectReference []
· the list of ObjectReference that correspond to objects of that class (refs).
Section 3.2.1.2.1:
Remove the following line from the ObjectHome interface:
readonly attribute ObjectReferenceSeq refs;
Section 3.1.6.3.5:
Change the entire explanation of the auto_deref attribute from:
"a boolean that indicates if ObjectReference corresponding to that type should be implicitly instantiated (TRUE) or if this action should be explicitly done by the application when needed by calling a deref operation (auto_deref). As selections act on instantiated objects (see section 3.1.6.3.7 for details on selections), TRUE is a sensible setting when selections are attached to that home."
to:
"a boolean that indicates whether the state of a DLRL Object should always be loaded into that Object (auto_deref = TRUE) or whether this state will only be loaded after it has been accessed explicitly by the application (auto_deref = FALSE)."
Section 3.1.6.3.5:
Change the entire explanation of the deref_all method from:
"ask for the instantiation of all the ObjectReference that are attached to that home, in the Cache (deref_all)."
To:
"ask to load the most recent state of a DLRL Object into that Object for all objects managed by that home (deref_all)."
Section 3.1.6.3.5:
Change the entire explanation of the underef_all method from:
"ask for the removal of non-used ObjectRoot that are attached to this home (underef_all)."
To:
"ask to unload all object states from objects that are attached to this home (underef_all)."
Section 3.1.6.3.6:
Replace all occurrences of ObjectReference with ObjectRoot in the ObjectListener table.
Also remove the second parameter of the on_object_modified method, resulting in the table:
ObjectListener
operations
on_object_created boolean
the_object ObjectRoot
on_object_modified boolean
the_object ObjectRoot
on_object_deleted boolean
the_object ObjectRoot
Section 3.1.6.3.6:
Change the explanation of on_object_created from:
"… this operation is called with the ObjectReference of the newly created object (ref)."
to:
"… this operation is called with the newly created object (the_object)."
Section 3.1.6.3.6:
Change the explanation of on_object_modified from:
"This operation is called with the ObjectReference of the modified object (ref) and its old value (old_value); the old value may be NULL."
To:
"This operation is called with the modified object (the_object)."
Section 3.1.6.3.6:
Change the explanation of on_object_deleted from:
"… this operation is called with the ObjectReference of the newly deleted object (ref)."
To:
"… this operation is called with the newly deleted object (the_object)."
Section 3.1.6.3.10:
Replace all occurrences of ObjectReference with ObjectRoot in the SelectionListener table resulting in:
SelectionListener
operations
on_object_in void
the_object ObjectRoot
on_object_out void
the_object ObjectRoot
on_object_modified void
the_object ObjectRoot
Section 3.2.1.2.1:
In the IDL interfaces for ObjectListener and SelectionListener, change the following lines from:
local interface ObjectListener {
boolean on_object_created ( in ObjectReference ref );
/****
* will be generated with the proper Foo type
* in the derived FooListener
* boolean on_object_modified ( in ObjectReference ref,
in ObjectRoot old_value);
****/
boolean on_object_deleted ( in ObjectReference ref );
};
local interface SelectionListener {
/***
* will be generated with the proper Foo type
* in the derived FooSelectionListener
*
void on_object_in ( in ObjectRoot the_object );
void on_object_modified ( in ObjectRoot the_object );
*
***/
void on_object_out ( in ObjectReference the_ref );
};
To:
local interface ObjectListener {
/****
* will be generated with the proper Foo type
* in the derived FooListener
boolean on_object_created ( in ObjectRoot the_object );
boolean on_object_modified ( in ObjectRoot the_object );
boolean on_object_deleted ( in ObjectRoot the_object );
*
****/
};
local interface SelectionListener {
/***
* will be generated with the proper Foo type
* in the derived FooSelectionListener
*
void on_object_in ( in ObjectRoot the_object );
void on_object_modified ( in ObjectRoot the_object );
void on_object_out (in ObjectRoot the_object );
*
***/
};
Section 3.2.1.2.2: In the IDL interfaces for FooListener and FooSelectionListener, change the following lines from:
local interface FooListener: DDS::ObjectListener {
void on_object_modified ( in DDS ::ObjectReference ref,
in Foo old_value );
};
local interface FooSelectionListener : DDS::SelectionListener {
void on_object_in ( in Foo the_object );
void on_object_modified ( in Foo the_object );
};
To:
local interface FooListener: DDS::ObjectListener {
boolean on_object_created ( in Foo the_object );
boolean on_object_modified ( in Foo the_object );
boolean on_object_deleted ( in Foo the_object );
};
local interface FooSelectionListener : DDS::SelectionListener {
void on_object_in ( in Foo the_object );
void on_object_modified ( in Foo the_object );
void on_object_out (in Foo the_object );
};
Section 3.1.6.3.13:
Remove the ObjectReference attribute from the ObjectRoot table, and from the explanation below it.
ref ObjectReference
· the full ObjectReference that corresponds to it (ref).
Section 3.2.1.2.1:
Remove the following line from the IDL in the ObjectRoot:
readonly attribute ObjectReference ref;
Section 3.1.6.3.13: Change the following sentence from:
"In addition, application classes (i.e., inheriting from ObjectRoot), will be generated with a set of methods dedicated to each shared attribute:"
To:
"In addition, application classes (i.e., inheriting from ObjectRoot), will be generated with a set of methods dedicated to each shared attribute (including single- and multi-relation attributes):"
Section 3.1.6.3.14 (ObjectReference):
Remove the section
Section 3.2.1.2.1:
Remove the following lines from the IDL:
/*****************
* ObjectReference
*****************/
struct ObjectReference {
DLRLOid oid;
unsigned long home_index;
};
typedef sequence<ObjectReference> ObjectReferenceSeq;
Remove Section 3.1.6.3.15 (Reference).
Remove Section 3.1.6.3.20 (Relation).
Remove Section 3.1.6.3.21 (RefRelation).
Remove Section 3.1.6.3.22 - Section 3.1.6.3.24 (ListRelation, IntMapRelation and StrMapRelation).
Section 3.2.1.2.1: Remove the following lines from the IDL:
/********************************
* Value Bases for Relations
*********************************/
valuetype RefRelation {
private ObjectReference m_ref;
boolean is_composition();
void reset();
boolean is_modified ( in ReferenceScope scope );
};
valuetype ListRelation : ListBase {
private ObjectReferenceSeq m_refs;
boolean is_composition();
};
valuetype StrMapRelation : StrMapBase {
struct Item {
string key;
ObjectReference ref;
};
typedef sequence <Item> ItemSeq;
private ItemSeq m_refs;
boolean is_composition();
};
valuetype IntMapRelation : IntMapBase {
struct Item {
long key;
ObjectReference ref;
};
typedef sequence <Item> ItemSeq;
private ItemSeq m_refs;
boolean is_composition();
};
Section 3.2.1.1:
1st paragraph after the numbered list of DLRL entities, remove the following sentence:
"(with the exception of ObjectReference, …, so that it can be embedded)."
Section 3.2.1.2.2:
Remove the following lines:
valuetype FooRef : DDS::RefRelation { // Ref<Foo>
void set(
in Foo an_object);
};
Section 3.2.1.2.2:
Change the following lines in IDL from:
valuetype FooStrMap : DDS::StrMapRelation { // StrMap<Foo>
…
valuetype FooIntMap : DDS::IntMapRelation { // IntMap<Foo>
…
valuetype FooList : DDS::ListRelation { // List<Foo>
To:
valuetype FooStrMap : DDS::StrMap { // StrMap<Foo>
…
valuetype FooIntMap : DDS::IntMap { // IntMap<Foo>
…
valuetype FooList : DDS::List { // List<Foo>
Section 3.2.2.3.1:
Remove the "Ref" value from the allowed list of patterns, so change the templateDef. The templateDef then changes from:
<!ATTLIST templateDef name CDATA #REQUIRED
pattern (List | StrMap | IntMap | Ref) #REQUIRED
itemType CDATA #REQUIRED>
To (see also Issue T_DLRL#10):
<!ATTLIST templateDef name CDATA #REQUIRED
pattern (List | StrMap | IntMap | Set) #REQUIRED
itemType CDATA #REQUIRED>
Section 3.2.2.3.2.3,
2nd bullet: Remove the "Ref" pattern from the list of supported constructs.
· pattern - gives the construct pattern. The supported constructs are: Ref, List, StrMap and IntMap.
Section 3.2.3.2:
Replace the forward valuetype declaration for RadarRef with a forward declaration of type Radar, so change from:
valuetype RadarRef // Ref<Radar>
public RadarRef a_radar;
To:
valuetype Radar;
public Radar a_radar;
Section 3.2.3.3:
Remove the following line from the XML (in both XML examples):
"<templateDef name="RadarRef"
pattern="Ref" itemType="Radar"/>"
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: Simplify relation management by removing all intermediate relation objects from the API (Reference, Relation, RefRelation, ObjectReference, ListRelation and MapRelation). Navigation of single relations is done by going directly from ObjectRoot to ObjectRoot (simplifying the IDL object model as well). Implementations can still choose to do smart resource management (e.g. lazy instantiation), but they should do so in a fully transparent way, one that is invisible to applications.
This approach also makes the PIM and PSM (which deviated quite a lot from each other with respect to these intermediate relation-like objects) more consistent.
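As an illustration of the simplification described above, the following non-normative Python mock contrasts the old reference-and-deref style with direct ObjectRoot-to-ObjectRoot navigation (Track, Radar and a_radar are hypothetical application types, not part of the specification):

```python
# Illustrative mock only, not the normative DLRL API: after the resolution,
# a single relation holds the related object itself, so navigation needs no
# ObjectReference/deref round-trip.

class ObjectRoot:
    """Minimal stand-in for a DLRL object."""
    def __init__(self, name):
        self.name = name
        self.a_radar = None   # single relation, navigated directly

class Track(ObjectRoot):
    pass

class Radar(ObjectRoot):
    pass

radar = Radar("radar-1")
track = Track("track-42")
track.a_radar = radar     # the relation stores the related ObjectRoot itself

# Navigation goes directly from object to object:
related = track.a_radar
```

Any lazy instantiation an implementation performs behind such an attribute access stays invisible to the application, as the discussion requires.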
Issue 9517: Cache and CacheAccess should have a common parent (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary: Both the CacheAccess and Cache have some functional overlap. It would be nice if this overlap would be migrated to a common generalization (for a good reason, see also Issue T_DLRL#3).
Proposed Resolution:
Introduce a new class called CacheBase that represents the common functionality. Both the Cache and the CacheAccess inherit from this common base class.
Resolution: see above
Revised Text: Object Diagram of Figure 3.4. (We have an alternative Object Diagram).
Section 3.1.6.2: Add the CacheBase class to the table at page 3-18. Describe that it is a "Base class for all Cache types."
Section 3.1.6.3.???: Add a section that describes the CacheBase class. Make a table with the following information and underlying explanation:
CacheBase
attributes
cache_usage CacheUsage
objects ObjectRoot[ ]
kind CacheKind
operations
refresh void
The public attributes give:
· The cache_usage indicates whether the cache is intended to support write operations (WRITE_ONLY or READ_WRITE) or not (READ_ONLY). This attribute is given at creation time and cannot be changed afterwards.
· A list of (untyped) objects that are contained in this CacheBase. To obtain objects by type, see the get_objects method in the typed ObjectHomes.
· The kind describes whether a CacheBase instance represents a Cache or a CacheAccess (kind).
It offers methods to:
· Refresh the contents of the Cache with respect to its origins (DCPS in case of a main Cache, Cache in case of a CacheAccess).
Section 3.1.6.3.2:
State in the CacheAccess table that CacheAccess inherits from CacheBase.
Remove the cache_usage attribute and the refresh operation from the table and the explanation below it.
access_usage CacheUsage
refresh void
The attribute access_usage indicates whether the cache is intended to support write operations (WRITE_ONLY or READ_WRITE) or not (READ_ONLY). This attribute is given at creation time and must be compatible with the value of the owning Cache (see Cache::create_access).
· the attached objects can be refreshed (refresh). This operation takes new values from the Cache for all attached objects, following the former clone directives; this can lead to discard changes on the cloned objects if they haven't been saved by writing the CacheAccess.
Section 3.2.1.2.1:
Remove the following lines from the IDL definition of the CacheAccess interface:
readonly attribute CacheUsage access_usage;
void refresh () raises (DCPSError);
Section 3.1.6.3.3:
State in the Cache table that Cache inherits from CacheBase.
Remove the cache_usage attribute and the load operation from the table and from the explanation below it.
cache_usage CacheUsage
· the usage mode of the cache (WRITE_ONLY-no subscription, READ_ONLY-no publication, or READ_WRITE-both modes). This mode applies to all objects in the cache and has to be given at creation time (cache_usage).
Section 3.2.1.2.1: Remove the following lines from the IDL definition of the Cache interface:
readonly attribute CacheUsage cache_usage;
void load () raises (DCPSError);
Section 3.1.6.3.5:
Add the get_objects operation to the ObjectHome table and the explanation below it:
ObjectHome
operations
get_objects ObjectRoot[ ]
source CacheBase
It offers methods to:
· Obtain from a CacheBase a (typed) list of all objects that match the type of the selected ObjectHome. (The type ObjectRoot[ ] will be substituted by a type Foo[ ] in a FooHome for example).
Section 3.2.1.2.1:
Modify the IDL for the ObjectHome interface by adding the get_objects method to the commented-out typed pre-declarations:
local interface ObjectHome {
…
/***
* The following methods will be generated properly typed
* in the generated derived classes
*
ObjectRootSeq get_objects( in CacheBase source );
…
*
***/
…
};
Section 3.2.1.2.2:
Modify the IDL for the FooHome interface by adding the get_objects method:
local interface FooHome {
FooSeq get_objects( in CacheBase source );
…
};
Section 3.1.6.3.13:
Change the cache_access attribute in the ObjectRoot table into an owner attribute of type CacheBase:
ObjectRoot
attributes
owner CacheBase
Section 3.1.6.3.13:
Change the explanation of the cache_access attribute from:
"the CacheAccess it belongs to (cache_access), when the ObjectRoot is a primary object directly attached to the Cache, cache_access is set to NULL;"
to:
"the cache it belongs to (owner), this can be either a Cache or a CacheAccess;"
Section 3.2.1.2.1:
Change the following line in the IDL definition for the ObjectRoot from:
readonly attribute CacheAccess cache_access;
to:
readonly attribute CacheBase owner;
Section 3.2.1.2.1: Add the IDL for the CacheBase interface:
local interface CacheBase;
typedef sequence<CacheBase> CacheBaseSeq;
enum CacheKind {
CACHE_KIND,
CACHEACCESS_KIND
};
local interface CacheBase {
readonly attribute CacheUsage cache_usage;
readonly attribute ObjectRootSeq objects;
readonly attribute CacheKind kind;
void refresh( ) raises (DCPSError);
};
Section 3.2.1.2.1:
Change the following lines in the IDL definition of the Cache and CacheAccess interface from:
local interface Cache {
local interface CacheAccess {
To:
local interface Cache : CacheBase {
local interface CacheAccess : CacheBase {
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: Introduce a new class called CacheBase that represents the common functionality. Both the Cache and the CacheAccess inherit from this common base class.
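The proposed factoring can be sketched as follows (a hedged, non-normative Python mock; names follow the resolution text, but the bookkeeping is invented for illustration):

```python
# Sketch of the proposed CacheBase common parent: cache_usage, objects, kind
# and refresh() move to the base; Cache and CacheAccess specialize refresh().
from enum import Enum

class CacheUsage(Enum):
    READ_ONLY = 0
    WRITE_ONLY = 1
    READ_WRITE = 2

class CacheKind(Enum):
    CACHE_KIND = 0
    CACHEACCESS_KIND = 1

class CacheBase:
    def __init__(self, cache_usage, kind):
        self.cache_usage = cache_usage   # given at creation time, immutable
        self.kind = kind
        self.objects = []                # untyped list of contained objects

    def refresh(self):
        raise NotImplementedError        # origin differs per subclass

class Cache(CacheBase):
    def __init__(self, cache_usage):
        super().__init__(cache_usage, CacheKind.CACHE_KIND)

    def refresh(self):
        pass  # would pull new object states from DCPS

class CacheAccess(CacheBase):
    def __init__(self, cache_usage, owner):
        super().__init__(cache_usage, CacheKind.CACHEACCESS_KIND)
        self.owner = owner               # the Cache it was created from

    def refresh(self):
        pass  # would re-clone contracted objects from the owning Cache

cache = Cache(CacheUsage.READ_WRITE)
access = CacheAccess(CacheUsage.READ_ONLY, owner=cache)
```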
Issue 9518: Object notification in manual update mode required (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary: The DLRL offers two different update modes for its Primary Cache: an automatic mode in which object creations, updates and deletions are pushed into the Cache, and a manual mode in which the Cache contents are refreshed on user demand.
From the perspective of a Cache user, it is important to find out what has happened to the contents of the Cache during the latest update session. In automatic update mode, Listeners are triggered for each Object creation, modification or deletion in the primary Cache. However, when the Cache is in manual update mode, none of these Listeners are triggered and no means exist to examine what has happened during the last update round. The same can be said for the CacheAccess, which has no automatic update mode and no means to examine the changes that were applied during the last invocation of the “refresh” method.
Proposed Resolution:
We therefore propose to add some extra methods to the ObjectHome that allow an application to obtain the lists of Objects that have been created, modified or deleted in the latest update round of a specific CacheBase.
Resolution: see above
Revised Text: Object Diagram of Figure 3.4. (We have an alternative Object Diagram).
Section 3.1.6.3.5: Add the following getter-operations to the table and the underlying explanation:
ObjectHome
operations
get_created_objects ObjectRoot[ ]
source CacheBase
get_modified_objects ObjectRoot[ ]
source CacheBase
get_deleted_objects ObjectRoot[ ]
source CacheBase
It offers methods to:
· Obtain from a CacheBase a (typed) list of all objects that match the type of the selected ObjectHome and that have been created, modified or deleted during the last refresh operation (get_created_objects, get_modified_objects and get_deleted_objects respectively). The type ObjectRoot[ ] will be substituted by a type Foo[ ] in a FooHome.
Section 3.2.1.2.1:
Modify the IDL for the ObjectHome interface by adding the appropriate getter-methods:
local interface ObjectHome {
…
/***
* The following methods will be generated properly typed
* in the generated derived classes
*
…
ObjectRootSeq get_created_objects( in CacheBase source );
ObjectRootSeq get_modified_objects( in CacheBase source );
ObjectRootSeq get_deleted_objects( in CacheBase source );
*
***/
…
};
Section 3.2.1.2.2:
Modify the IDL for the FooHome interface by adding the appropriate getter-methods:
local interface FooHome {
…
FooSeq get_created_objects( in CacheBase source );
FooSeq get_modified_objects( in CacheBase source );
FooSeq get_deleted_objects( in CacheBase source );
…
};
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: We therefore propose to add some extra methods to the ObjectHome that allow an application to obtain the lists of Objects that have been created, modified or deleted in the latest update round of a specific CacheBase.
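The intended usage of these getters can be sketched like this (a non-normative Python mock; the _note_refresh hook is a hypothetical stand-in for the bookkeeping the middleware would do during a refresh round):

```python
# Illustrative mock of the proposed getters: an ObjectHome records, per
# CacheBase, which of its objects were created, modified or deleted in the
# last refresh round, and hands those lists to the application on request.

class ObjectHome:
    def __init__(self):
        self._created = {}
        self._modified = {}
        self._deleted = {}

    def _note_refresh(self, source, created=(), modified=(), deleted=()):
        # hypothetical hook: called by the middleware during refresh
        self._created[id(source)] = list(created)
        self._modified[id(source)] = list(modified)
        self._deleted[id(source)] = list(deleted)

    def get_created_objects(self, source):
        return list(self._created.get(id(source), []))

    def get_modified_objects(self, source):
        return list(self._modified.get(id(source), []))

    def get_deleted_objects(self, source):
        return list(self._deleted.get(id(source), []))

home = ObjectHome()
cache = object()   # stands in for a CacheBase
home._note_refresh(cache, created=["track-1"], modified=["track-2"])
```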
Issue 9519: ObjectExtent and ObjectModifier can be removed (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary: The ObjectExtent is a manager for a set of objects. Basically it is a wrapper that offers functions to modify its contents and to create a sub-set based on a user-defined function. The problem with using an Extent is that it overlaps with the get_objects method introduced in issue T_DLRL#3, and that it is not clear whether a new Extent should be allocated each time the user obtains it from the ObjectHome, or whether the existing Extent should be re-used and therefore its contents be overwritten with every update.
Furthermore, every application can easily write its own code that modifies every element in this sequence (no specialized ObjectModifier is required for that, a simple for-loop can do the trick), and similarly an application can also write code to filter each element and to store matching results in another sequence. Filtering and modifying objects like this are really business logic, and do not have to be part of a Middleware specification.
Proposed Resolution:
Remove the ObjectModifier and ObjectExtent from the specification. This saves two implied interfaces that are not required for most types of applications, but which can still be solved very well at application level. Replace the extent on the ObjectHome with a sequence of ObjectRoots.
Resolution: see above
Revised Text:
Object Diagram of Figure 3.4. (We have an alternative Object Diagram).
Section 3.1.6.2 (DLRL Entities):
DLRL entities table. Remove entries for ObjectModifier and ObjectExtent.
ObjectModifier Class whose instances represent modifiers to be applied to a set of objects.
ObjectExtent Class to manage a set of instances. ObjectExtent objects are used to represent all the instances managed by an ObjectHome as well as all the members of a Selection. They can also be used in conjunction with ObjectFilter and/or ObjectModifier to allow collective operations on sets of objects.
Section 3.1.6.3.5:
Remove the full_extent and extent entries from the ObjectHome table, and the corresponding explanations below it.
extent ObjectExtent
full_extent ObjectExtent
· the manager for the list of all the instantiated objects of that class (extent).
· the manager for the list of all the instantiated objects of that class and all its derived classes (full_extent).
Section 3.2.1.2.1:
Remove the following lines from the IDL interface for the ObjectHome:
readonly attribute ObjectExtent extent;
readonly attribute ObjectExtent full_extent;
Section 3.2.1.2.2:
Remove the following lines from the IDL interface for the FooHome:
readonly attribute FooExtent extent;
readonly attribute FooExtent full_extent;
Section 3.1.6.3.7 Selection:
Replace the membership attribute with a members attribute of type ObjectRoot[ ]:
Selection
attributes
members ObjectRoot[ ]
Section 3.1.6.3.7 Selection:
Change the explanation for membership from:
"the manager of the list of the objects that are part of the selection (membership)."
To:
"The list of the objects that are part of the selection (members)."
Section 3.2.1.2.1:
Change the following line in the IDL interface for the Selection from:
readonly attribute ObjectExtent membership;
to:
readonly attribute ObjectRootSeq members;
Section 3.2.1.2.2:
Change the following line in the IDL interface for the FooSelection from:
readonly attribute FooExtent membership;
to:
readonly attribute FooSeq members;
Remove section 3.1.6.3.11 (ObjectModifier) and 3.1.6.3.12 (ObjectExtent).
Section 3.2.1.2.1:
Remove the interface definitions for ObjectModifier and for ObjectExtent.
/***************************************************
* ObjectModifier: Root of all the objects modifiers
***************************************************/
local interface ObjectModifier {
/***
* Following method will be generated properly typed
* in the generated derived classes
*
void modify_object (
in ObjectRoot an_object);
*
***/
};
/**********************************************************
* ObjectExtent : Root of all the extent (lists of objects)
**********************************************************/
local interface ObjectExtent {
/***
* Following method will be generated properly typed
* in the generated derived classes
*
readonly attribute ObjectRootSeq objects;
ObjectExtent find_objects (
in ObjectFilter filter
);
void modify_objects (
in ObjectFilter filter,
in ObjectModifier modifier
);
*
***/
};
Section 3.2.1.2.2:
Remove the interface definitions for FooModifier and for FooExtent.
local interface FooModifier: DDS::ObjectModifier {
void modify_object (
in Foo an_object);
};
local interface FooExtent: DDS::ObjectExtent {
readonly attribute FooSeq objects;
FooExtent find_objects (
in FooFilter filter
);
void modify_objects (
in FooFilter filter,
in FooModifier modifier
);
};
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: Remove the ObjectModifier and ObjectExtent from the specification. This saves two implied interfaces that are not required for most types of applications, but which can still be solved very well at application level. Replace the extent on the ObjectHome with a sequence of ObjectRoots.
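The "simple for-loop can do the trick" point from the summary can be illustrated as follows (a non-normative Python sketch; Track and its fields are hypothetical application types, and the starting list stands in for a FooHome::get_objects result):

```python
# Application-level replacement for ObjectExtent/ObjectModifier: filtering and
# modifying a set of objects with ordinary loops, as the discussion suggests.

class Track:
    def __init__(self, ident, speed):
        self.ident = ident
        self.speed = speed

objects = [Track("t1", 100), Track("t2", 300), Track("t3", 250)]

# find_objects replacement: a comprehension with a user-defined predicate
fast = [o for o in objects if o.speed > 200]

# modify_objects replacement: a plain for-loop applying the "modifier"
for o in fast:
    o.speed = 200
```

This is exactly the business logic the resolution argues does not belong in a middleware specification.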
Issue 9520: Introduce the concept of cloning contracts consistently in specification (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
The specification states that it is possible to clone an Object from the primary Cache into a CacheAccess, together with its related or contained objects up to a specified navigable depth. (We will refer to such an Object tree as a cloning contract from now on.) However, while the cloning of objects is done on contract level, the deletion of clones is done on individual object level. What should happen to related objects when the top-level object is deleted? Furthermore, it is unclear what the result should be when a relationship that pointed from an object A to an object B is changed to point from A to an object C. Should the next refresh of the CacheAccess only refresh the states of objects A and B, or should object C be added to and object B be removed from the CacheAccess?
Proposed Resolution:
Formally introduce the concept of a cloning contract into the API to replace all other clone-related methods. Cloning contracts are defined on the CacheAccess and are evaluated when the CacheAccess is refreshed.
Resolution: see above
Revised Text: Object Diagram of Figure 3.4. (We have an alternative Object Diagram).
Section 3.1.6.2:
Add the Contract class to the table at page 3-18. Add the following description:
"Class that represents a contract between a CacheAccess and a Cache that defines which objects will be cloned from the Cache into the CacheAccess when the latter is refreshed."
Section 3.1.6.3.???:
Add a section that describes the Contract class.
Make a table with the following information and underlying explanation:
Contract
attributes
depth integer
scope ObjectScope
contracted_object ObjectRoot
operations
set_depth void
depth integer
set_scope void
scope ObjectScope
The public attributes give:
· The top-level object (contracted_object). This is the object that acts as the starting point for the cloning contract.
· The scope of the cloning request (i.e., the object itself, or the object with all its (nested) compositions, or the object with all its (nested) compositions and all the objects that are navigable from it up to the specified depth).
· The depth of the cloning contract. This defines how many levels of relationships will be covered by the contract (UNLIMITED_RELATED_OBJECTS when all navigable objects must be cloned recursively). The depth only applies to a RELATED_OBJECT_SCOPE.
It offers methods to:
· Change the depth of an existing contract (set_depth). This change will only be taken into account at the next refresh of the CacheAccess.
· Change the scope of an existing contract (set_scope). This change will only be taken into account at the next refresh of the CacheAccess.
Section 3.2.1.2.1:
Add the IDL for the Contract interface:
local interface Contract {
readonly attribute long depth;
readonly attribute ObjectScope scope;
readonly attribute ObjectRoot contracted_object;
void set_depth(in long depth);
void set_scope(in ObjectScope scope);
};
typedef sequence<Contract> ContractSeq;
Section 3.1.6.3.2:
Add to the CacheAccess table the following entries with their underlying explanation:
CacheAccess
attributes
contracts Contract[ ]
type_names string[ ]
operations
create_contract Contract
object ObjectRoot
scope ObjectScope
depth long
delete_contract void
a_contract Contract
The public attributes give:
· The contracted objects (contracts). This is the list of all Contracts that are attached to this CacheAccess.
· A list of names that represents the types for which the CacheAccess contains at least one object (type_names).
It offers methods to:
· Create a Contract (create_contract). This method defines a contract that covers the specified object with all the objects in its specified scope. When a CacheAccess is refreshed, all contracted objects will be cloned into it. The contracted object must be located in the Cache that owns the CacheAccess. If this is not the case, a PreconditionNotMet is raised.
· Delete a Contract (delete_contract). This method deletes a contract from the CacheAccess. When the CacheAccess is refreshed, the objects covered by the specified contract will no longer appear in the CacheAccess (unless also covered in another Contract). The specified Contract must be attached to this CacheAccess, otherwise a PreconditionNotMet is raised.
Section 3.1.6.3.2: remove the delete_clone method from the table and from the explanation below it.
Section 3.1.6.3.2:
Change the explanation of the purge operation from:
"the copies can be detached from the CacheAccess (purge)."
To:
"all contracts (including the contracted DLRL Objects themselves) can be detached from the CacheAccess (purge)."
Section 3.2.1.2.1:
Add to the IDL definition of the CacheAccess interface the following lines:
readonly attribute ContractSeq contracts;
readonly attribute StringSeq type_names;
Contract create_contract( in ObjectRoot object,
in ObjectScope scope, in long depth )
raises (PreconditionNotMet);
void delete_contract( in Contract a_contract )
raises (PreconditionNotMet);
Section 3.1.6.3.13:
Remove the clone and clone_object methods from the ObjectRoot table and from the explanation below it.
clone ObjectReference
access CacheAccess
scope ObjectScope
depth integer
clone_object ObjectRoot
access CacheAccess
scope ObjectScope
depth integer
· create a copy of the object and attach it to a CacheAccess (clone). An object can be cloned to only one CacheAccess allowing write operations; the operation takes as parameters the CacheAccess, the scope of the request (i.e., the object itself or the object and its components or the object and all the objects that are related) and an integer (depth).
· clone and instantiate the object in the same operation (clone_object). This operation takes the same parameters as the previous one, but returns an ObjectRoot instead only an ObjectReference; it corresponds exactly to the sequence of clone followed by CacheAccess::deref on the reference returned by the clone operation.
Section 3.2.1.2.1:
Remove the following lines from the IDL interface definition of the ObjectRoot:
ObjectReference clone ( in CacheAccess access,
in ObjectScope scope,
in RelatedObjectDepth depth )
raises (AlreadyClonedInWriteMode );
ObjectRoot clone_object ( in CacheAccess access,
in ObjectScope scope,
in RelatedObjectDepth depth )
raises (AlreadyClonedInWriteMode);
Section 3.2.1.2.2:
Remove the following lines from the IDL interface definition of the Foo object:
Foo clone_foo ( in DDS::CacheAccess access,
in DDS::ObjectScope scope,
in DDS::RelatedObjectDepth depth)
raises ( DDS::AlreadyClonedInWriteMode );
Sections 3.1.6.5.1 and 3.1.6.5.2: Change steps 2 and 3 from:
2. Clone some objects in it (ObjectRoot::clone or clone_object).
3. Refresh them (CacheAccess::refresh).
To:
2. Attach some cloning contracts to it (CacheAccess::create_contract).
3. Execute these contracts (CacheAccess::refresh).
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: Formally introduce the concept of a cloning contract into the API to replace all other clone-related methods. Cloning contracts are defined on the CacheAccess and are evaluated when the CacheAccess is refreshed.
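The contract lifecycle described in the resolution can be sketched as follows (a hedged, non-normative Python mock; scope and depth handling is deliberately simplified to cloning only the contracted object itself, and the Cache is modeled as a plain list):

```python
# Sketch of the cloning-contract lifecycle: contracts are attached to a
# CacheAccess and are only evaluated when the CacheAccess is refreshed.

class Contract:
    def __init__(self, contracted_object, scope, depth):
        self.contracted_object = contracted_object
        self.scope = scope
        self.depth = depth

    def set_depth(self, depth):
        self.depth = depth   # takes effect at the next refresh

class CacheAccess:
    def __init__(self, owner):
        self.owner = owner   # the Cache this access was created from
        self.contracts = []
        self.objects = []    # clones, (re)built on each refresh

    def create_contract(self, obj, scope="RELATED_OBJECTS_SCOPE", depth=1):
        if obj not in self.owner:           # must live in the owning Cache
            raise RuntimeError("PreconditionNotMet")
        contract = Contract(obj, scope, depth)
        self.contracts.append(contract)
        return contract

    def delete_contract(self, contract):
        self.contracts.remove(contract)     # clones vanish at next refresh

    def refresh(self):
        # Simplified: a real implementation would clone each contracted
        # object together with everything in its scope, up to its depth.
        self.objects = [c.contracted_object for c in self.contracts]

cache = ["trackA", "trackB"]
access = CacheAccess(cache)
contract = access.create_contract("trackA")
access.refresh()
```

Note how deleting a contract does not immediately remove the clones; they disappear only at the next refresh, matching the delete_contract description above.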
Issue 9521: Object State Transitions of Figure 3-5 and 3-6 should be corrected (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary: Object State Transitions of Figure 3-5 and 3-6 should be corrected and simplified
The state transition diagrams in Figure 3-5 and 3-6 are difficult to understand, and the 2nd diagram of Figure 3-5 is missing. (Instead of this 2nd diagram, the first diagram of Figure 3-6 has wrongly been duplicated here).
Furthermore, since it is difficult to distinguish between primary and secondary Objects and their primary and secondary states, it would be nice if more intuitive names and states could be used instead.
Finally, some of the possible conditions in which a state transition can occur are not mentioned in these state transition diagrams, which would even require for them to become more complex.
Proposed Resolution:
Introduce new names for the different states, and try to re-use the same set of states for each diagram. We propose not to speak about primary and secondary objects, but to speak about Cache Objects (located in a Cache) and CacheAccess objects (located in a CacheAccess). Furthermore, we propose not to speak about primary and secondary states, but to speak about a READ state (with respect to incoming modifications) and a WRITE state (with respect to local modifications).
Decoupling Objects in the Cache from Objects in a CacheAccess makes the idea of what a Cache or CacheAccess represents more understandable. The Cache represents the global Object states as accepted by the System, a READ_ONLY CacheAccess represents a temporary state of a Cache, and a READ_WRITE or WRITE_ONLY CacheAccess represents the state of what the user intends the system to do in the future.
Since a Cache then only represents the global state of the system (and not what the user intends to do), it does not have a WRITE state (it will be VOID). A READ_ONLY CacheAccess also has no WRITE state (VOID), but a WRITE_ONLY CacheAccess has no READ state (VOID). A READ_WRITE CacheAccess has both a WRITE and a READ state, and the WRITE state represents what the user has modified but not yet committed, and the READ state represent what the system has modified during its last update.
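The mode-dependent availability of the two states can be sketched as a small rule table. A minimal illustration in Python (the names `void_states` and `AccessMode` are ours, not taken from the specification's IDL):

```python
from enum import Enum

class AccessMode(Enum):
    READ_ONLY = 0
    WRITE_ONLY = 1
    READ_WRITE = 2

def void_states(owner, mode=None):
    """Which of the two lifecycle states is forced to VOID.

    Encodes the rules above: a Cache never has a WRITE state,
    a READ_ONLY CacheAccess has no WRITE state, a WRITE_ONLY
    CacheAccess has no READ state, and a READ_WRITE CacheAccess
    has both states.
    """
    if owner == "Cache" or mode is AccessMode.READ_ONLY:
        return {"write_state"}
    if mode is AccessMode.WRITE_ONLY:
        return {"read_state"}
    return set()  # READ_WRITE CacheAccess: both states are meaningful
```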
Resolution: see above
Revised Text: Section 3.1.6.3.13:
Replace the ObjectState attribute with two other attributes in the ObjectRoot table:
state ObjectState
read_state ObjectState
write_state ObjectState
Section 3.2.1.13: Change the explanation of the state attribute from:
"its lifecycle state (state);"
to:
"its lifecycle states (read_state and write_state);"
Section 3.2.1.2.1,
Change the definitions for Primary and Secondary Object states from:
// States of an object
// -------------------
typedef unsigned short ObjectSubState;
// Primary object state
const ObjectSubState OBJECT_NEW = 0x0001 << 0;
const ObjectSubState OBJECT_MODIFIED = 0x0001 << 1;
const ObjectSubState OBJECT_READ = 0x0001 << 2;
const ObjectSubState OBJECT_DELETED = 0x0001 << 3;
// Secondary object state
const ObjectSubState OBJECT_CREATED = 0x0001 << 8;
const ObjectSubState OBJECT_CHANGED = 0x0001 << 9;
const ObjectSubState OBJECT_WRITTEN = 0x0001 << 10;
const ObjectSubState OBJECT_DESTROYED = 0x0001 << 11;
to:
// Object State
enum ObjectState { OBJECT_VOID, OBJECT_NEW, OBJECT_NOT_MODIFIED, OBJECT_MODIFIED, OBJECT_DELETED };
Section 3.2.1.2.1, Change the following lines in the IDL interface of the ObjectRoot from:
readonly attribute ObjectSubState primary_state;
readonly attribute ObjectSubState secondary_state;
to:
readonly attribute ObjectState read_state;
readonly attribute ObjectState write_state;
State Transition diagrams of Figure 3-5 and 3-6. We have alternative diagrams:
New Figure 3-5 (left): READ state of a Cache Object (below).
New Figure 3-5 (right): WRITE state of a Cache Object (below).
New Figure 3-6 (left): READ state of a CacheAccess Object (in READ_ONLY or READ_WRITE mode) (below).
New Figure 3-6 (right): WRITE state of a CacheAccess Object (in WRITE_ONLY or READ_WRITE mode) (below).
Section 3.2.1.13: Change the explanation of Figure 3-5 from:
"The primary_state that refers to incoming modifications (i.e., incoming updates for a primary object or modifications resulting from CacheAccess::refresh operations for a secondary object); even if the events that trigger the state change are different for both kinds of objects, the state values are the same."
To:
"A Cache Object represents the global system state. It has a READ state whose transitions represent the updates as they are received by the DCPS. Since Cache Objects cannot be modified locally, they have no corresponding WRITE state (i.e. their WRITE state is set to VOID). State transitions occur between the start of an update round and the end of an update round. When in automatic updates mode, the start of the update round is signaled by the invocation of the on_begin_updates callback of the CacheListener, while the end of an update round is signaled by the invocation of the on_end_updates callback of the CacheListener. When in manual update mode, the start of an update round is defined as the start of a refresh operation, while the end of an update round is defined as the invocation of the next refresh operation."
Section 3.2.1.13: Change the explanation of Figure 3-6 from:
"The secondary_state that refers to modifications performed by the application. For a secondary object, the state describes the taking into account of the changes asked by the application (set_xxx or destroy and then write of the CacheAccess); for a primary object, it tracks if the object has been cloned for modification purpose."
To:
"A CacheAccess Object either represents a temporary system state (a so-called 'snapshot' of the Cache) when in READ_ONLY mode, or it represents an intended system state when in WRITE_ONLY or READ_WRITE mode. In READ_ONLY mode, a CacheAccess object has no WRITE state (it is VOID, not depicted), while in WRITE_ONLY mode it has no READ state (it is VOID, not depicted). Transitions to the READ state occur during an update round (caused by invocation of the refresh method), or when the CacheAccess is purged. Changes to the WRITE state are caused either by local modifications (which can be done at any time), by committing the local changes to the system (during a write operation), by purging the CacheAccess, or by starting a new update round (by invoking the refresh method and thus rolling back any uncommitted changes). Since a refresh operation validates contracts, and both these contracts and the relationships between their targeted objects may change, two results are possible: an object can be contracted as a result of the refresh operation, thus (re-)appearing in the CacheAccess, or an object cannot be contracted as a result of a refresh operation, thus disappearing from a CacheAccess."
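One plausible reading of the WRITE-state transitions described in the text above, sketched in Python. The function names (`on_modify`, `on_write`, `on_refresh`) and the exact transitions chosen are illustrative assumptions; the normative transitions are those of the new figures:

```python
from enum import Enum

class WriteState(Enum):
    VOID = 0
    NEW = 1            # created locally, not yet committed
    NOT_MODIFIED = 2   # in sync with the last committed state
    MODIFIED = 3       # locally changed, not yet committed
    DELETED = 4        # destroyed locally, not yet committed

def on_modify(state):
    # a local set_xxx call: a NEW or DELETED object keeps its state,
    # anything else becomes MODIFIED
    if state in (WriteState.NEW, WriteState.DELETED):
        return state
    return WriteState.MODIFIED

def on_write(state):
    # committing the local changes to the system (CacheAccess::write)
    if state is WriteState.DELETED:
        return WriteState.VOID          # the object is gone after commit
    return WriteState.NOT_MODIFIED

def on_refresh(state):
    # a new update round rolls back any uncommitted local changes
    return WriteState.NOT_MODIFIED
```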
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: Introduce new names for the different states, and try to re-use the same set of states for each diagram. We propose not to speak about primary and secondary objects, but to speak about Cache Objects (located in a Cache) and CacheAccess objects (located in a CacheAccess). Furthermore, we propose not to speak about primary and secondary states, but to speak about a READ state (with respect to incoming modifications) and a WRITE state (with respect to local modifications).
Decoupling Objects in the Cache from Objects in a CacheAccess makes the idea of what a Cache or CacheAccess represents more understandable. The Cache represents the global Object states as accepted by the System, a READ_ONLY CacheAccess represents a temporary state of a Cache, and a READ_WRITE or WRITE_ONLY CacheAccess represents the state of what the user intends the system to do in the future.
Since a Cache then only represents the global state of the system (and not what the user intends to do), it does not have a WRITE state (it will be VOID). A READ_ONLY CacheAccess also has no WRITE state (VOID), but a WRITE_ONLY CacheAccess has no READ state (VOID). A READ_WRITE CacheAccess has both a WRITE and a READ state; the WRITE state represents what the user has modified but not yet committed, and the READ state represents what the system has modified during its last update.
To really decouple Cache Objects from CacheAccess objects, it is even possible to allow an object to be cloned in multiple CacheAccesses. But this issue is up for discussion. Probably every writeable CacheAccess should then have its own DCPS Publisher, so that an object cloned in multiple Writable CacheAccesses will be seen in the DCPS as being owned by two different DataWriters. The Ownership QoS will then decide how to handle this situation.
Issue 9522: Add Iterators to Collection types (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
It would be nice to have an iterator for Collection types to be able to iterate through the entire Collection. For Maps there should be iterators for both the keys and the values.
Proposed Resolution:
Add an abstract Iterator class to the DLRL, which has typed implementations to access the underlying data.
Resolution: Issue was subsequently withdrawn from the RTF by the submitters of the issue
Revised Text:
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Issue 9523: Harmonize Collection definitions in PIM and PSM (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
The Collection definitions are very different between the PIM and the PSM.
Proposed Resolution:
Use corresponding Collection definitions in PIM and PSM. Make a strict separation in the IDL between typed operations (to be implemented in the typed specializations, but to be mentioned in the untyped parents) and untyped operations (to be implemented in the untyped parents). Also remove methods that have a functional overlap with other methods.
Resolution: see above
Revised Text: Object Diagram of Figure 3.4. (We have an alternative Object Diagram).
Replace entire Section 3.1.6.3.16 (Collection) with:
This class is the abstract root for all collections (sets and maps).
Collection
attributes
length integer
values Undefined[ ] (e.g. of type ObjectRoot or of Primitive type).
It provides the following attributes:
· length - the length of the Collection.
· values - a list of all values contained in the Collection.
Replace in Section 3.1.6.3.17 (List) the table and the explanation below it with the following:
List : Collection
no attributes
operations
remove void
added_elements integer[ ]
removed_elements integer[ ]
modified_elements integer[ ]
add void
value Undefined (e.g. ObjectRoot or Primitive type).
put void
index integer
value Undefined (e.g. ObjectRoot or Primitive type).
get Undefined (e.g. ObjectRoot or Primitive type).
index integer
It provides the following methods:
· remove - to remove the item with the highest index from the collection.
· added_elements - get a list that contains the indexes of the added elements.
· removed_elements - get a list that contains the indexes of the removed elements.
· modified_elements - get a list that contains the indexes of the modified elements.
· add - to add an item to the end of the list.
· put - to put an item in the collection at a specified index.
· get - to retrieve an item in the collection (based on its index).
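The List semantics above (keyed by index, with per-update-round change tracking) can be modeled with a small toy class. This is a hypothetical implementation for illustration only; the specification defines only the interface:

```python
class TrackedList:
    """Toy model of the DLRL List: items plus bookkeeping of the
    indexes of added, removed and modified elements."""
    def __init__(self):
        self._items = []
        self._added, self._removed, self._modified = [], [], []

    @property
    def length(self):
        return len(self._items)

    @property
    def values(self):
        return list(self._items)

    def add(self, value):                 # append at the end of the list
        self._items.append(value)
        self._added.append(len(self._items) - 1)

    def put(self, index, value):          # overwrite at a specified index
        self._items[index] = value
        self._modified.append(index)

    def get(self, index):                 # retrieve by index
        return self._items[index]

    def remove(self):                     # drop the item with the highest index
        self._removed.append(len(self._items) - 1)
        self._items.pop()

    def added_elements(self):
        return list(self._added)

    def removed_elements(self):
        return list(self._removed)

    def modified_elements(self):
        return list(self._modified)
```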
Replace in Section 3.1.6.3.18 (StrMap) the table and the explanation below it with the following:
StrMap : Collection
attributes
keys string[ ]
operations
remove void
key string
added_elements string[ ]
removed_elements string[ ]
modified_elements string[ ]
put void
key string
value Undefined (e.g. ObjectRoot or Primitive type).
get Undefined (e.g. ObjectRoot or Primitive type).
key string
The public attributes give:
· keys - a list that contains all the keys of the items belonging to the map.
It provides the following methods:
· remove - to remove an item from the collection.
· added_elements - get a list that contains the keys of the added elements.
· removed_elements - get a list that contains the keys of the removed elements.
· modified_elements - get a list that contains the keys of the modified elements.
· put - to put an item in the collection.
· get - to retrieve an item in the collection (based on its key).
Replace in Section 3.1.6.3.19 (IntMap) the table and the explanation below it with the following:
IntMap : Collection
attributes
keys integer[ ]
operations
remove void
key integer
added_elements integer[ ]
removed_elements integer[ ]
modified_elements integer[ ]
put void
key integer
value Undefined (e.g. ObjectRoot or Primitive type).
get Undefined (e.g. ObjectRoot or Primitive type).
key integer
The public attributes give:
· keys - a list that contains all the keys of the items belonging to the map.
It provides the following methods:
· remove - to remove an item from the collection.
· added_elements - get a list that contains the keys of the added elements.
· removed_elements - get a list that contains the keys of the removed elements.
· modified_elements - get a list that contains the keys of the modified elements.
· put - to put an item in the collection.
· get - to retrieve an item in the collection (based on its key).
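The map variants (StrMap and IntMap) differ from List only in the key type. A sketch of the common keyed-collection semantics with change tracking, again as a hypothetical illustration rather than the specification's own code:

```python
class TrackedMap:
    """Toy model of StrMap/IntMap: a keyed collection that records
    which keys were added, removed or modified."""
    def __init__(self):
        self._items = {}
        self._added, self._removed, self._modified = [], [], []

    @property
    def keys(self):
        # all keys of the items belonging to the map
        return list(self._items)

    def put(self, key, value):
        # a put on an existing key is a modification, otherwise an addition
        (self._modified if key in self._items else self._added).append(key)
        self._items[key] = value

    def get(self, key):
        return self._items[key]

    def remove(self, key):
        del self._items[key]
        self._removed.append(key)

    def added_elements(self):
        return list(self._added)

    def removed_elements(self):
        return list(self._removed)

    def modified_elements(self):
        return list(self._modified)
```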
Section 3.2.1.2.1: Replace CollectionBase, List, StrMap and IntMap definitions with the following IDL definitions:
abstract valuetype CollectionBase {
long length();
boolean is_modified (
in ReferenceScope scope);
long how_many_added ();
long how_many_removed ();
};
abstract valuetype ListBase : CollectionBase {
boolean which_added (out LongSeq indexes);
void remove ();
};
abstract valuetype StrMapBase : CollectionBase {
boolean which_added (out StringSeq keys);
StringSeq get_all_keys ();
void remove ( in string key );
};
abstract valuetype IntMapBase : CollectionBase {
boolean which_added (out LongSeq keys);
LongSeq get_all_keys ();
void remove ( in long key );
};
abstract valuetype Collection {
readonly attribute long length;
/***
* The following methods will be generated properly typed
* in the generated derived classes
*
readonly attribute ObjectRootSeq values;
*
***/
};
abstract valuetype List : Collection {
void remove( );
LongSeq added_elements( );
LongSeq removed_elements( );
LongSeq modified_elements( );
/***
* The following methods will be generated properly typed
* in the generated derived classes
*
void add( in ObjectRoot value );
void put( in long key, in ObjectRoot value );
ObjectRoot get( in long key );
*
***/
};
abstract valuetype StrMap : Collection {
readonly attribute StringSeq keys;
void remove( in string key );
StringSeq added_elements( );
StringSeq removed_elements( );
StringSeq modified_elements( );
/***
* The following methods will be generated properly typed
* in the generated derived classes
*
void put( in string key, in ObjectRoot value );
ObjectRoot get( in string key );
*
***/
};
abstract valuetype IntMap : Collection {
readonly attribute LongSeq keys;
void remove( in long key );
LongSeq added_elements( );
LongSeq removed_elements( );
LongSeq modified_elements( );
/***
* The following methods will be generated properly typed
* in the generated derived classes
*
void put( in long key, in ObjectRoot value );
ObjectRoot get( in long key );
*
***/
};
Section 3.2.1.2.2: Change the FooList, FooStrMap and FooIntMap definitions from the following IDL definitions:
valuetype FooList : DDS::List { // List<Foo>
void put (
in long index,
in Foo a_foo);
Foo get (
in long index)
raises (
DDS::NotFound);
};
valuetype FooStrMap : DDS::StrMap { // StrMap<Foo>
void put (
in string key,
in Foo a_foo);
Foo get (
in string Key)
raises (
DDS::NotFound);
};
valuetype FooIntMap : DDS::IntMap { // IntMap<Foo>
void put (
in long key,
in Foo a_foo);
Foo get (
in long Key)
raises (
DDS::NotFound);
};
To:
valuetype FooList : DDS::List { //List<Foo>
readonly attribute FooSeq values;
void add( in Foo value );
void put( in long key, in Foo value );
Foo get( in long key );
};
valuetype FooStrMap : DDS::StrMap { //StrMap<Foo>
readonly attribute FooSeq values;
void put( in string key, in Foo value );
Foo get( in string key );
};
valuetype FooIntMap : DDS::IntMap { //IntMap<Foo>
readonly attribute FooSeq values;
void put( in long key, in Foo value );
Foo get( in long key );
};
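The pattern behind these generated valuetypes — untyped operations in the abstract parent, typed put/get "generated properly typed" in the specialization — can be mimicked in a language without IDL compilers. A sketch (class names and the TypeError check are our own choices):

```python
class Foo:
    """Stand-in for an application-defined DLRL class."""
    pass

class IntMapBase:
    """Untyped parent: generic storage, analogous to DDS::IntMap."""
    def __init__(self):
        self._items = {}

    def _put(self, key, value):
        self._items[key] = value

    def _get(self, key):
        return self._items[key]

    @property
    def keys(self):
        return list(self._items)

class FooIntMap(IntMapBase):
    """'Generated' typed specialization: exposes only typed put/get,
    mirroring how FooIntMap adds typed operations to DDS::IntMap."""
    def put(self, key: int, value: Foo):
        if not isinstance(value, Foo):
            raise TypeError("FooIntMap only stores Foo instances")
        self._put(key, value)

    def get(self, key: int) -> Foo:
        return self._get(key)
```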
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: Use corresponding Collection definitions in PIM and PSM. Make a strict separation in the IDL between typed operations (to be implemented in the typed specializations, but to be mentioned in the untyped parents) and untyped operations (to be implemented in the untyped parents). Also remove methods that have a functional overlap with other methods. Object Diagram of Figure 3.4. (We have an alternative Object Diagram).
Issue 9524: Add the Set as a supported Collection type (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
In many applications there is a need for an unordered Collection without keys.
Proposed Resolution:
Add the Set as a supported Collection type in DLRL.
Resolution: see above
Revised Text: see pages 141 - 146 of ptc/2006-04-08
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: Add the Set as a supported Collection type in DLRL
Issue 9525: Make the ObjectFilter and the ObjectQuery separate Selection Criterions (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
In the current specification, the ObjectQuery inherits from the ObjectFilter, making it an ObjectFilter as well. That means that performing Queries can no longer be delegated to the DCPS, since the Selection invokes the check_object method on the ObjectFilter for that purpose.
Proposed Resolution:
Make the ObjectFilter and the ObjectQuery two separate classes with a common parent called SelectionCriterion. A SelectionCriterion can then be attached to a Selection, which will either invoke the check_object method in the case of a Filter, or delegate the Query to the DCPS in the case of a Query.
Resolution: see above
Revised Text:
Object Diagram of Figure 3.4. (We have an alternative Object Diagram).
Section 3.1.6.3.5 (ObjectHome): Change the filter parameter from type ObjectFilter to type SelectionCriterion and change its name to "criterion" in the table:
ObjectHome
Operations
create_selection Selection
criterion SelectionCriterion
auto_refresh boolean
concerns_contained_objects boolean
Section 3.1.6.3.5: Change the explanation of the create_selection method from:
"create a Selection (create_selection). The filter parameter specifies the ObjectFilter to be attached to the Selection, …"
To:
"create a Selection (create_selection). The criterion parameter specifies the Criterion (either a FilterCriterion or a QueryCriterion) to be attached to the Selection, …"
Section 3.1.6.3.5: Add to the explanation of the create_selection method the following line:
"When creating a Selection while the DCPS State of the Cache is still set to INITIAL, a PreconditionNotMet is raised."
Section 3.2.1.2.1: Change in the IDL interface definition of the ObjectHome the following line from:
Selection create_selection ( in ObjectFilter filter,
in boolean auto_refresh)
raises ( BadParameter );
To (see also issue T_DLRL#19):
Selection create_selection ( in SelectionCriterion criterion,
in boolean auto_refresh,
in boolean concerns_contained_objects )
raises ( PreconditionNotMet );
Section 3.1.6.3.7 (Selection): Change the filter attribute from type ObjectFilter to type SelectionCriterion and change its name to "criterion" in the table:
Selection
Attributes
criterion SelectionCriterion
Section 3.1.6.3.7: Change the explanation of the filter attribute from:
· the corresponding ObjectFilter (filter). It is given at Selection creation time (see ObjectHome::create_selection).
To:
· the corresponding SelectionCriterion (criterion). It is given at Selection creation time (see ObjectHome::create_selection).
Section 3.2.1.2.1: Change in the IDL the Selection interface from:
readonly attribute ObjectFilter filter;
To
readonly attribute SelectionCriterion criterion;
Section 3.1.6.3.??: Add a new section to introduce the SelectionCriterion class:
A SelectionCriterion is an object (attached to a Selection) that gives the criterion to be applied to make the Selection. It is the abstract base-class for both the FilterCriterion and the QueryCriterion.
SelectionCriterion
Attributes
kind CriterionKind
It has one attribute (kind) that describes whether a SelectionCriterion instance represents a FilterCriterion or a QueryCriterion.
Section 3.1.6.3.8: (ObjectFilter)
Change section name to FilterCriterion
Change the text above the table from:
An ObjectFilter is an object (attached to a Selection) that gives the criterion to be applied to make the Selection.
To:
FilterCriterion is a specialization of SelectionCriterion that performs the object check based on a user-defined filter algorithm.
Section 3.1.6.3.8: Change the title of the table from:
ObjectFilter
To:
FilterCriterion : SelectionCriterion
Section 3.1.6.3.8: Change the text below the table from:
"The ObjectFilter class is a root from which are derived classes dedicated to application classes (for an application class named Foo, FooFilter will be derived)."
To:
"The FilterCriterion class is a root from which are derived classes dedicated to application classes (for an application class named Foo, FooFilter will be derived)."
Section 3.1.6.3.9 (ObjectQuery): Change the section name to QueryCriterion
Section 3.1.6.3.9: Change the text above the table from:
"ObjectQuery is a specialization of ObjectFilter that perform the object check based on a query expression."
To:
"QueryCriterion is a specialization of SelectionCriterion that performs the object check based on a query expression."
Section 3.1.6.3.9: Change the title of the table from:
ObjectQuery
To:
QueryCriterion : SelectionCriterion
Section 3.2.1.2.1: Change in the IDL the following interface definitions from:
/***********************************************
* ObjectFilter: Root of all the objects filters
***********************************************/
enum MembershipState {
UNDEFINED_MEMBERSHIP,
ALREADY_MEMBER,
NOT_MEMBER
};
local interface ObjectFilter {
/***
* Following method will be generated properly typed
* in the generated derived classes
*
boolean check_object (
in ObjectRoot an_object,
in MembershipState membership_state);
*
***/
};
/*******************************************************
* ObjectQuery : Specialization of the above to make a Query
******************************************************/
local interface ObjectQuery {
// Attributes
// ---------
readonly attribute string expression;
readonly attribute StringSeq parameters;
//--- Methods
boolean set_query (
in string expression,
in StringSeq parameters);
boolean set_parameters ( in StringSeq parameters );
};
To (See also Issue T_DLRL#19):
/***********************************************
* SelectionCriterion: Root of all filters and queries
***********************************************/
enum CriterionKind {
QUERY,
FILTER
};
local interface SelectionCriterion {
readonly attribute CriterionKind kind;
};
/***********************************************
* FilterCriterion: Root of all the objects filters
***********************************************/
enum MembershipState {
UNDEFINED_MEMBERSHIP,
ALREADY_MEMBER,
NOT_MEMBER
};
local interface FilterCriterion : SelectionCriterion {
/***
* Following method will be generated properly typed
* in the generated derived classes
*
boolean check_object (
in ObjectRoot an_object,
in MembershipState membership_state);
*
***/
};
/*******************************************************
* QueryCriterion : Specialized SelectionCriterion to make a
* Query
******************************************************/
local interface QueryCriterion : SelectionCriterion {
// Attributes
// ---------
readonly attribute string expression;
readonly attribute StringSeq parameters;
//--- Methods
boolean set_query (
in string expression,
in StringSeq parameters) raises (SQLError);
boolean set_parameters ( in StringSeq parameters ) raises (SQLError);
};
Section 3.2.1.2.2: Change in the IDL the following interface definitions from:
local interface FooFilter: DDS::ObjectFilter {
To:
local interface FooFilter: DDS::FilterCriterion {
Section 3.2.1.2.2: Change in the IDL the following interface definitions from:
local interface FooQuery : DDS::ObjectQuery, FooFilter {
To:
local interface FooQuery : DDS::QueryCriterion, FooFilter {
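The dispatch the resolution describes — a Selection invoking check_object for a FilterCriterion but delegating a QueryCriterion to the DCPS — can be sketched as follows. The `evaluate` method and the `dcps_query` callable are illustrative stand-ins, not part of the specified API:

```python
from enum import Enum

class CriterionKind(Enum):
    QUERY = 0
    FILTER = 1

class FilterCriterion:
    kind = CriterionKind.FILTER
    def check_object(self, an_object, membership_state):
        # the generated typed subclass supplies the real filter algorithm
        raise NotImplementedError

class QueryCriterion:
    kind = CriterionKind.QUERY
    def __init__(self, expression, parameters):
        self.expression = expression
        self.parameters = parameters

class Selection:
    def __init__(self, criterion):
        self.criterion = criterion

    def evaluate(self, candidates, dcps_query=None):
        # FILTER: evaluate locally through the user-supplied check_object.
        # QUERY: hand the expression and parameters to the DCPS layer
        # (stubbed here by the dcps_query callable).
        if self.criterion.kind is CriterionKind.FILTER:
            return [o for o in candidates
                    if self.criterion.check_object(o, "UNDEFINED_MEMBERSHIP")]
        return dcps_query(self.criterion.expression, self.criterion.parameters)
```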
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: Make the ObjectFilter and the ObjectQuery two separate classes with a common parent called SelectionCriterion. A SelectionCriterion can then be attached to a Selection, which will either invoke the check_object method in the case of a Filter, or delegate the Query to the DCPS in the case of a Query
Issue 9526: Add a static initializer operation to the CacheFactory (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary: From the current DLRL specification it is not clear how to obtain your initial CacheFactory.
Proposed Resolution:
Add a static get_instance method to make the CacheFactory a singleton, just like we did for the DomainParticipantFactory in the DCPS.
Resolution: see above
Revised Text: Object Diagram of Figure 3.4. (We have an alternative Object Diagram).
Section 3.1.6.3.1: Add the following text before the table:
CacheFactory itself has no factory. It is a pre-existing singleton object that can be accessed by means of the get_instance class operation on the CacheFactory.
Section 3.1.6.3.1: add the static get_instance method to the CacheFactory table:
CacheFactory
Operations
(static) get_instance CacheFactory
Section 3.1.6.3.1 Add a bullet that explains the get_instance method:
· To retrieve the CacheFactory singleton. The operation is idempotent, that is, it can be called multiple times without side-effects and it will return the same CacheFactory instance. The get_instance operation is a static operation implemented using the syntax of the native language and can therefore not be expressed in the IDL PSM.
Section 3.2.1.1: Add the following paragraph:
The language implementation of the CacheFactory interface should have the static operation get_instance described in Section 3.1.6.3.1, "CacheFactory" class, on page ???. This operation does not appear in the IDL CacheFactory interface, as static operations cannot be expressed in IDL.
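Since get_instance must be expressed natively per language, a minimal Python rendering of the prescribed singleton (the constructor guard is our own choice; the resolution only requires that get_instance be idempotent and always return the same instance):

```python
class CacheFactory:
    """Sketch of the singleton the resolution prescribes."""
    _instance = None

    def __init__(self):
        # direct construction is disallowed; the factory has no factory
        raise RuntimeError("use CacheFactory.get_instance()")

    @classmethod
    def get_instance(cls):
        # idempotent: repeated calls return the same pre-existing object
        if cls._instance is None:
            cls._instance = object.__new__(cls)
        return cls._instance
```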
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: Add a static get_instance method to make the CacheFactory a singleton, just like we did for the DomainParticipantFactory in the DCPS
Issue 9527: Make update rounds uninterruptable (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
According to the current specification, it is possible to interrupt an update round by invoking the disable_update method in the middle of such an update round. This makes no sense, since it can leave the Cache in an undefined and possibly inconsistent state. The specification also does not explain how to recover from such a state.
Proposed Resolution:
Make sure that the automatic update mode can never be changed while in the middle of an update round. This way, update rounds can never be interrupted and the Cache will always be in a consistent state. This also removes the need for the interrupted and update_round parameters in the callback methods of the CacheListener.
Also remove the related_cache parameter from the CacheListener, since it is not needed and is also missing in the IDL.
Resolution: see above
Revised Text: Section 3.1.6.3.4 (CacheListener): remove all function-parameters from the CacheListener table:
CacheListener
Operations
on_begin_updates void
on_end_updates void
Section 3.1.6.3.4: Change the on_begin_updates explanation (because of typo) from:
"on_begin_updates to indicates that…."
To:
"on_begin_updates indicates that…."
Section 3.1.6.3.4: Change the on_end_updates explanation from:
"on_end_updates that indicates that no more update is foreseen (either because no more update has been received - interrupted FALSE, or because the updates have been disabled for that Cache - interrupted = TRUE)."
To:
"on_end_updates indicates that no more updates are foreseen."
Section 3.2.1.2.1: Change the IDL Description of the CacheListener from:
local interface CacheListener {
void begin_updates ( in long update_round );
void end_updates ( in long update_round );
};
To:
local interface CacheListener {
void on_begin_updates ( );
void on_end_updates ( );
};
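The rule that an update round can never be interrupted can be enforced in several ways; one sketch is to reject a mode change while a round is in progress. The class shape below (including `_run_update_round` and `set_updates_enabled`) is illustrative, not the specified API:

```python
class Cache:
    """Sketch: the automatic-update mode cannot change while an
    update round is in progress. Raising an error is one possible
    enforcement; the resolution only mandates that rounds are
    never interrupted."""
    def __init__(self, listener):
        self._listener = listener
        self._in_round = False
        self.updates_enabled = True

    def _run_update_round(self, apply_updates):
        # bracket the round with the (now parameterless) callbacks
        self._in_round = True
        self._listener.on_begin_updates()
        try:
            apply_updates()
        finally:
            self._listener.on_end_updates()
            self._in_round = False

    def set_updates_enabled(self, enabled):
        if self._in_round:
            raise RuntimeError("update mode cannot change during an update round")
        self.updates_enabled = enabled
```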
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: Make sure that the automatic update mode can never be changed while in the middle of an update round. This way, update rounds can never be interrupted and the Cache will always be in a consistent state. This also removes the need for the interrupted and update_round parameters in the callback methods of the CacheListener.
Also remove the related_cache parameter from the CacheListener, since it is not needed and is also missing in the IDL
Issue 9528: Remove lock/unlock due to overlap with updates_enabled (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
It is not clear why we should need a lock/unlock on the Cache when we can turn on and off the automatic updates. If an application does not want to be interrupted by incoming updates, it can simply disable the automatic updates, re-enabling them afterwards.
Proposed Resolution:
Remove the lock and unlock methods of the Cache.
Resolution: Remove the lock and unlock methods of the Cache
Revised Text: Object Diagram of Figure 3.4. (We have an alternative Object Diagram).
Section 3.1.6.3.3: Remove the lock and unlock from the table and from the underlying explanation.
Section 3.2.1.2.1: Remove the following lines from the IDL:
// Time-out
// --------
typedef long TimeOutDuration;
const TimeOutDuration INFINITE_TIME_OUT = -1;
Section 3.2.1.2.1: Remove the following lines from Cache interface in the IDL:
void lock ( in TimeOutDuration to_in_milliseconds )
raises (ExpiredTimeOut);
void unlock ();
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Issue 9529: Add Listener callbacks for changes in the update mode (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
The CacheListener currently supports only two call-backs, to signify the start and end of an update round. However, because listeners are only used in enabled update mode, it is important that the listeners are notified when the DLRL switches between the enabled and disabled update modes: the switch does not necessarily originate from the thread that registered the listener, and the fact that updates are enabled or disabled is a major event that the listeners should know about.
Proposed Resolution:
Add two methods to the CacheListener interface, one for signalling a switch to automatic update mode, and one for signalling a switch to manual update mode.
Resolution: see above
Revised Text: Object Diagram of Figure 3.4. (We have an alternative Object Diagram).
Section 3.1.6.3.4: Add the following two entries to the table, with the corresponding underlying explanation:
CacheListener
Operations
on_updates_enabled void
on_updates_disabled void
· on_updates_enabled - indicates that the Cache has switched to automatic update mode. Incoming data will now trigger the corresponding Listeners.
· on_updates_disabled - indicates that the Cache has switched to manual update mode. Incoming data will no longer trigger the corresponding Listeners, and will only be taken into account during the next refresh operation.
Section 3.2.1.2.1: Add the following lines to the CacheListener interface in IDL:
void on_updates_enabled( );
void on_updates_disabled( );
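How a Cache might invoke the two new callbacks when the update mode switches, sketched below. The `switch_update_mode` helper is hypothetical; the specification only defines the listener operations themselves:

```python
class CacheListener:
    """Records invocations of the two new callbacks (illustrative)."""
    def __init__(self):
        self.events = []

    def on_updates_enabled(self):
        self.events.append("enabled")

    def on_updates_disabled(self):
        self.events.append("disabled")

def switch_update_mode(listeners, enabled):
    # hypothetical Cache-side helper: notify every attached listener
    # that the update mode has changed
    for listener in listeners:
        if enabled:
            listener.on_updates_enabled()
        else:
            listener.on_updates_disabled()
```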
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: Add two methods to the CacheListener interface, one for signalling a switch to automatic update mode, and one for signalling a switch to manual update mode.
Issue 9530: Representation of OID should be vendor specific (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
The OID currently consists of two numbers: a creator_id and a local_id. The philosophy is that each writer should obtain its own unique creator_id, and can then sequence number each object created with it to obtain unique object identifiers. The specification does not specify how the writers should obtain their unique creator_id. Building a mechanism to distribute unique OIDs requires knowledge about the underlying system characteristics, and this information is only available in DCPS.
Proposed Resolution:
Make the definition of the OID vendor specific. This allows a vendor to specify its own algorithms to guarantee that each object has got a unique identifier.
The only location where the application programmer actually has to know the contents of the OID is in the create_object_with_oid method on the ObjectHome. However, we see no use-case for this method and propose to remove it.
Resolution: see above
Revised Text: Object Diagram of Figure 3.4. (We have an alternative Object Diagram).
Section 3.1.6.3.5 (ObjectHome): Remove the create_object_with_oid from the table and from the underlying explanation and from the corresponding IDL.
create_object_with_oid ObjectRoot
access CacheAccess
oid DLRLOid
· create a new DLRL object with a user-provided oid (create_object_with_oid). This operation takes as parameters the CacheAccess concerned by the creation as well as the allocated oid. It raises an exception (ReadOnlyMode) if the CacheAccess is in READ_ONLY mode and another exception (AlreadyExisting) if that oid has already been given.
ObjectRoot create_object_with_oid(
in CacheAccess access,
in DLRLOid oid)
raises (
ReadOnlyMode,
AlreadyExisting);
Section 3.2.1.1: Add the following text:
The IDL PSM introduces a number of types that are intended to be defined in a native way. As these are opaque types, the actual definition of the type does not affect portability and is implementation dependent. For completeness the names of the types appear as typedefs in the IDL and a #define with the suffix "_TYPE_NATIVE" is used as a place-holder for the actual type. The type used in the IDL by this means is not normative and an implementation is allowed to use any other type, including non-scalar (i.e., structured types).
Section 3.2.1.2.1: Add the following #define statement before the opening of the DDS module:
#define DLRL_OID_TYPE_NATIVE long
Section 3.2.1.2.1: Change the struct DLRLOid definition in the IDL from:
struct DLRLOid {
unsigned long creator_id;
unsigned long local_id;
};
to:
struct DLRLOid {
DLRL_OID_TYPE_NATIVE value[3];
};
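With the revised struct, application code should treat the OID as an opaque, vendor-defined value and only compare it for equality (or use it as a look-up key). A Python stand-in for the native type, purely illustrative:

```python
class DLRLOid:
    """Opaque object identifier: three vendor-defined native values.
    Applications must not interpret the contents, only compare them."""
    def __init__(self, v0, v1, v2):
        self.value = (v0, v1, v2)  # layout and meaning are vendor specific

    def __eq__(self, other):
        return isinstance(other, DLRLOid) and self.value == other.value

    def __hash__(self):
        # Hashable so an OID can serve as a dictionary key in a cache.
        return hash(self.value)
```

Because the representation is opaque, a vendor is free to derive the three values from, for example, DCPS-level GUIDs, without any portable code depending on that choice.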
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: Make the definition of the OID vendor specific. This allows a vendor to specify its own algorithms to guarantee that each object has got a unique identifier.
The only location where the application programmer actually has to know the contents of the OID is in the create_object_with_oid method on the ObjectHome. However, we see no use-case for this method and propose to remove it.
Issue 9531: define both the Topic name and the Topic type_name separately (data-distribution-rtf)
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary: XML mapping file does not allow you to define both the Topic name and the Topic type_name separately
In the DCPS, there is a clear distinction between a topic name and a topic type (both names must be provided when creating a Topic). However, the DLRL mapping XML only allows one name attribute, called ‘name’, and it is unclear whether this name identifies the type name or the topic name. Currently we just have to assume that the topic name and type name are always chosen to be equal, but that does not have to be the case in a legacy topic model.
Proposed Resolution:
Add a second (optional) attribute to the mainTopic, extensionTopic, placeTopic and multiPlaceTopic that identifies the type name. If left out, the type is assumed to be equal to the topic name.
Resolution: see above
Revised Text: Section 3.2.2.3.1: Change the following lines in the DTD:
Change:
<!ELEMENT mainTopic (keyDescription)>
<!ATTLIST mainTopic name CDATA #REQUIRED>
To:
<!ELEMENT mainTopic (keyDescription)>
<!ATTLIST mainTopic name CDATA #REQUIRED
typename CDATA #IMPLIED>
Change:
<!ELEMENT extensionTopic (keyDescription)>
<!ATTLIST extensionTopic name CDATA #REQUIRED>
To:
<!ELEMENT extensionTopic (keyDescription)>
<!ATTLIST extensionTopic name CDATA #REQUIRED
typename CDATA #IMPLIED>
Change:
<!ELEMENT placeTopic (keyDescription)>
<!ATTLIST placeTopic name CDATA #REQUIRED>
To:
<!ELEMENT placeTopic (keyDescription)>
<!ATTLIST placeTopic name CDATA #REQUIRED
typename CDATA #IMPLIED>
Change:
<!ELEMENT multiPlaceTopic (keyDescription)>
<!ATTLIST multiPlaceTopic name CDATA #REQUIRED
indexField CDATA #REQUIRED>
To (see also issue 10):
<!ELEMENT multiPlaceTopic (keyDescription)>
<!ATTLIST multiPlaceTopic name CDATA #REQUIRED
typename CDATA #IMPLIED
indexField CDATA #IMPLIED>
Section 3.2.2.3.2.7 Change:
"It comprises one attribute name that gives the name of the Topic and:"
To:
"It comprises one attribute name that gives the name of the Topic, one optional attribute typename that gives the name of the type (if this attribute is not supplied, the type name is considered to be equal to the topic name) and:"
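The intended fall-back behaviour (the optional typename defaults to the topic name when absent) can be illustrated by reading a mapping fragment. The XML content below is a hypothetical example, not taken from the specification:

```python
import xml.etree.ElementTree as ET

# Hypothetical mapping fragment: one topic without, one with a typename.
MAPPING = """
<mapping>
  <mainTopic name="TRACK_TOPIC">
    <keyDescription content="FullOid"/>
  </mainTopic>
  <extensionTopic name="TRACK_EXT_TOPIC" typename="TrackExtType">
    <keyDescription content="FullOid"/>
  </extensionTopic>
</mapping>
"""

def topic_type_name(element):
    """typename is optional (#IMPLIED in the DTD); default to the topic name."""
    return element.get("typename", element.get("name"))

root = ET.fromstring(MAPPING)
main = root.find("mainTopic")
ext = root.find("extensionTopic")
```

This keeps existing mapping files (which never set typename) valid, while letting a legacy topic model register a type name that differs from the topic name.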
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: Add a second (optional) attribute to the mainTopic, extensionTopic, placeTopic and multiPlaceTopic that identifies the type name. If left out, the type is assumed to be equal to the topic name.
Issue 9532: Merge find_object with find_object_in_access (data-distribution-rtf)
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
Currently there are separate methods to find a specific object based on its OID in the Cache and in a CacheAccess. It would be nice to have one method to search for an Object in any CacheBase.
Proposed Resolution:
Add a CacheBase parameter to the find_object method and remove the find_object_in_access method.
Resolution: see above
Revised Text: Object Diagram of Figure 3.4. (We have an alternative Object Diagram).
Section 3.1.6.3.5 (ObjectHome): Change the find_object entry in the table and remove the find_object_in_access from the table and change the underlying text accordingly:
ObjectHome
Operations
find_object_in_access ObjectRoot
oid DLRLOid
source CacheAccess
find_object ObjectRoot
oid DLRLOid
find_object ObjectRoot
oid DLRLOid
source CacheBase
· retrieve a DLRL object based on its oid in a given CacheAccess (find_object_in_access).
· retrieve a DLRL object based on its oid in the main Cache (find_object).
· retrieve a DLRL Object based on its oid in the specified CacheBase (find_object).
Section 3.2.1.2.1: Change the IDL definition of the ObjectHome from:
ObjectRoot find_object ( in DLRLOid oid);
to:
ObjectRoot find_object( in DLRLOid oid, in CacheBase source ) raises (NotFound);
Section 3.2.1.2.1: Remove in the IDL definition of the ObjectHome the following line:
ObjectRoot find_object_in_access (in DLRLOid oid, in CacheAccess access)
raises (NotFound);
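A sketch of the merged look-up follows. This is a Python model of the revised signature; the class bodies and the dictionary-based storage are stand-ins for the real IDL types, not normative:

```python
class NotFound(Exception):
    """Stand-in for the DLRL NotFound exception."""
    pass


class CacheBase:
    """Stand-in for the common base of Cache and CacheAccess:
    both hold objects indexed by oid."""
    def __init__(self):
        self.objects = {}


class ObjectHome:
    def find_object(self, oid, source):
        """Single look-up that works on any CacheBase, replacing the
        separate find_object / find_object_in_access pair."""
        try:
            return source.objects[oid]
        except KeyError:
            raise NotFound("no object with oid %r in this CacheBase" % (oid,))
```

Because both Cache and CacheAccess derive from CacheBase, one operation now covers both look-up use cases.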
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: Add a CacheBase parameter to the find_object method and remove the find_object_in_access method.
Issue 9533: Clarify which Exceptions exist in DLRL and when to throw them (data-distribution-rtf)
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
The DLRL PSM specifies a number of Exceptions, but these are not explained in the PIM, and they do not cover the entire range of all possible errors.
Proposed Resolution:
Make an extensive list of all possible Exceptions and explain them in the PIM as well.
Add a String message to the exception that can give more details about the context of the exception.
Resolution: see above
Revised Text: Section 3.1.6.2: Add a paragraph that explains the various Exceptions and when they will be thrown:
"The DLRL API may raise Exceptions under certain conditions. What follows is an extensive list of all possible Exceptions and the conditions in which they will be raised:
· DCPSError - if an unexpected error occurred in the DCPS.
· BadHomeDefinition - if a registered ObjectHome has dependencies on other, unregistered ObjectHomes.
· NotFound - if a reference is encountered to an object that has not (yet) been received by the DCPS.
· AlreadyExisting - if a new object is created using an identity that is already in use by another object.
· AlreadyDeleted - if an operation is invoked on an object that has already been deleted.
· PreconditionNotMet - if a precondition for this operation has not (yet) been met.
· NoSuchElement - if an attempt is made to retrieve a non-existing element from a Collection.
· SQLError - if an SQL expression has bad syntax, addresses non-existing fields, or is not consistent with its parameters.
Each exception contains a string attribute named 'message', that gives a more precise explanation of the reason for the exception."
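In a language binding, the shared 'message' attribute could map, for instance, onto a small exception hierarchy with a common base. This is an illustrative sketch only; the actual mapping follows the default language mapping rules:

```python
class DLRLError(Exception):
    """Illustrative common base carrying the explanatory message.
    (The common base itself is a sketch convenience, not in the IDL.)"""
    def __init__(self, message=""):
        super().__init__(message)
        self.message = message


class DCPSError(DLRLError): pass
class BadHomeDefinition(DLRLError): pass
class NotFound(DLRLError): pass
class AlreadyExisting(DLRLError): pass
class AlreadyDeleted(DLRLError): pass
class PreconditionNotMet(DLRLError): pass
class NoSuchElement(DLRLError): pass
class SQLError(DLRLError): pass
```

A caller can then report the precise failure context carried in the message rather than only the exception type.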
Section 3.2.1.1: Add the following paragraph:
"Exceptions in DLRL will be mapped according to the default language mapping rules, except for the AlreadyDeleted exception. Since this exception can be raised on all methods and attributes (which is not possible to specify in IDL versions older than 3.0), it is not explicitly mentioned in the raise clause of each operation. Implementors may choose to map it onto an exception type that does not need to be caught explicitly, simplifying the DLRL code significantly."
Section 3.2.1.2.1: Change the following lines in the IDL from:
// Exceptions
// ==========
exception DCPSError {};
exception BadHomeDefinition {};
exception BadParameter {};
exception NotFound {};
exception ReadOnlyMode {};
exception WriteOnlyMode {};
exception AlreadyExisting {};
exception AlreadyClonedInWriteMode {};
exception ExpiredTimeOut {};
To:
// Exceptions
// ==========
exception DCPSError { string message; };
exception BadHomeDefinition { string message; };
exception NotFound { string message; };
exception AlreadyExisting { string message; };
exception AlreadyDeleted { string message; };
exception PreconditionNotMet { string message; };
exception NoSuchElement { string message; };
exception SQLError { string message; };
Section 3.1.6.3.13: Change the explanation for destroy from:
"destroy itself."
To:
"Mark the object for destruction, to be executed during a write operation. If the object is not located in a writeable CacheAccess, a PreconditionNotMet is raised."
Section 3.2.1.2.1: Change the following lines in the IDL definition of the ObjectRoot from:
void destroy ( ) raises ( DCPSError, ReadOnlyMode );
To:
void destroy ( ) raises (PreconditionNotMet);
Section 3.1.6.3.5 (ObjectHome): Add to the explanations for get_topic_name and get_all_topic_names the following line:
"If the DCPS State of the Cache is still set to INITIAL, a PreconditionNotMet is raised."
Section 3.1.6.3.5 (ObjectHome): Add to the explanation for delete_selection the following line:
"If the Selection was not created by this ObjectHome, a PreconditionNotMet is raised."
Section 3.1.6.3.5 (ObjectHome): Change in the explanation for create_object and create_unregistered_object the following line from:
"It raises an exception (ReadOnlyMode) if the CacheAccess is in READ_ONLY mode."
To:
"The following preconditions must be met: the Cache must be set to the DCPS State of ENABLED, and the supplied CacheAccess must be writeable. Not satisfying either precondition will raise a PreconditionNotMet."
Section 3.1.6.3.5 (ObjectHome): Change in the explanation for register_object the following line from:
"… only objects created by create_unregistered_object can be passed as parameter. The method raises an exception (BadParameter) if an attempt is made to pass another kind of object or if the object content is not suitable and another exception (AlreadyExisting) if the result of the computation leads to an existing oid."
To:
"… only objects created by create_unregistered_object can be passed as parameter, a PreconditionNotMet is raised otherwise. If the result of the computation leads to an existing oid, an AlreadyExisting exception is raised."
Section 3.2.1.2.1 (IDL Description): Change the following lines in the IDL definition of the ObjectHome from:
string get_topic_name (in string attribute_name)
raises (BadParameter);
StringSeq get_all_topic_names ();
void delete_selection (in Selection a_selection) raises (BadParameter);
ObjectRoot create_object(in CacheAccess access) raises (ReadOnlyMode);
ObjectRoot create_unregistered_object (in CacheAccess access)
raises (ReadOnlyMode);
void register_object (in ObjectRoot unregistered_object)
raises (AlreadyExisting, BadParameter);
to:
string get_topic_name (in string attribute_name) raises (PreconditionNotMet);
StringSeq get_all_topic_names () raises (PreconditionNotMet);
void delete_selection (in Selection a_selection) raises (PreconditionNotMet);
ObjectRoot create_object(in CacheAccess access) raises (PreconditionNotMet);
ObjectRoot create_unregistered_object (in CacheAccess access)
raises (PreconditionNotMet);
void register_object (in ObjectRoot unregistered_object)
raises (AlreadyExisting, PreconditionNotMet);
Section 3.2.1.2.2 (Implied IDL): Change the following lines in the IDL definition of the FooHome from:
void delete_selection (
in FooSelection a_selection)
raises (
DDS::PreconditionNotMet);
Foo create_object(
in DDS::CacheAccess access)
raises (
DDS::ReadOnlyMode);
Foo create_unregistered_object (
in DDS::CacheAccess access)
raises (
DDS::ReadOnlyMode);
void register_object (
in Foo unregistered_object)
raises (
DDS::AlreadyExisting,
DDS::BadParameter);
To:
void delete_selection (
in FooSelection a_selection)
raises (
DDS::PreconditionNotMet);
Foo create_object(
in DDS::CacheAccess access)
raises (
DDS::PreconditionNotMet);
Foo create_unregistered_object (
in DDS::CacheAccess access)
raises (
DDS::PreconditionNotMet);
void register_object (
in Foo unregistered_object)
raises (
DDS::AlreadyExisting,
DDS::PreconditionNotMet);
Section 3.1.6.3.2: Add to the explanation for write the following line:
"When invoking this operation on a READ_ONLY CacheAccess, a PreconditionNotMet is raised."
Section 3.2.1.2.1: Change the following lines in the IDL definition of the CacheAccess from:
void write () raises (ReadOnlyMode, DCPSError);
to:
void write () raises (PreconditionNotMet, DCPSError);
Section 3.1.6.3.3 (Cache): Add to the explanation for register_all_for_pubsub the following lines:
"When an ObjectHome still refers to another ObjectHome that has not yet been registered, a BadHomeDefinition is raised. A number of preconditions must also be satisfied before invoking the register_all_for_pubsub method: at least one ObjectHome needs to have been registered, and the pubsub_state may not yet be ENABLED. If these preconditions are not satisfied, a PreconditionNotMet will be raised. Invoking the register_all_for_pubsub on a REGISTERED pubsub_state will be considered a no-op."
Section 3.1.6.3.3 (Cache): Add to the explanation for enable_all_for_pubsub the following lines:
"One precondition must be satisfied before invoking the enable_all_for_pubsub method: the pubsub_state must already have been set to REGISTERED before. A PreconditionNotMet Exception is thrown otherwise. Invoking the enable_all_for_pubsub method on an ENABLED pubsub_state will be considered a no-op."
Section 3.1.6.3.3 (Cache): Add to the explanation for register_home the following lines:
"A number of preconditions must be satisfied when invoking the register_home method: the Cache must have a pubsub_state set to INITIAL, the specified ObjectHome may not already be registered (either to this Cache or to another Cache), and no other instance of the same class as the specified ObjectHome may already have been registered to this Cache. If these preconditions are not satisfied, a PreconditionNotMet is raised."
Section 3.1.6.3.3 (Cache): Add to the explanation for find_home_by_name and find_home_by_index the following lines:
"If no registered home can be found that satisfies the specified name or index, a NULL is returned."
Section 3.1.6.3.3 (Cache): Change in the explanation for create_access the following line from:
"The purpose of the CacheAccess must be compatible with the usage mode of the Cache: only a Cache that is write-enabled can create sub-accesses that allow writing:"
to:
"The purpose of the CacheAccess must be compatible with the usage mode of the Cache: only a Cache that is write-enabled can create a CacheAccess that allows writing. Violating this rule will raise a PreconditionNotMet."
Section 3.1.6.3.3 (Cache): Add to the explanation for delete_access the following lines:
"Deleting a CacheAccess will purge all its contents. Deleting a CacheAccess that is not created by this Cache will raise a PreconditionNotMet."
Section 3.2.1.2.1: Change the following lines in the IDL definition of the Cache from:
void register_all_for_pubsub() raises (BadHomeDefinition, DCPSError);
void enable_all_for_pubsub() raises (DCPSError);
unsigned long register_home (in ObjectHome a_home) raises (BadHomeDefinition);
ObjectHome find_home_by_name (in ClassName class_name) raises (BadParameter);
ObjectHome find_home_by_index (in unsigned long index) raises (BadParameter);
CacheAccess create_access (in CacheUsage purpose) raises (ReadOnlyMode);
void delete_access (in CacheAccess access) raises (BadParameter);
to:
void register_all_for_pubsub() raises (BadHomeDefinition, DCPSError, PreconditionNotMet);
void enable_all_for_pubsub() raises (DCPSError, PreconditionNotMet);
unsigned long register_home (in ObjectHome a_home) raises (PreconditionNotMet);
ObjectHome find_home_by_name (in ClassName class_name);
ObjectHome find_home_by_index (in unsigned long index);
CacheAccess create_access (in CacheUsage purpose) raises (PreconditionNotMet);
void delete_access (in CacheAccess access) raises (PreconditionNotMet);
Section 3.1.6.3.1: Add to the explanation for find_cache_by_name the following lines:
"If the specified name does not identify an existing Cache, a NULL is returned."
Section 3.2.1.2.1: Change the following lines in the IDL definition of the CacheFactory from:
Cache find_cache_by_name(in CacheName name) raises (BadParameter);
To:
Cache find_cache_by_name(in CacheName name);
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: Make an extensive list of all possible Exceptions and explain them in the PIM as well.
Add a String message to the exception that can give more details about the context of the exception
Issue 9534: Support sequences of primitive types in DLRL Objects (data-distribution-rtf)
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
The current Metamodel explains the different BasicTypes supported in DLRL. Although DCPS supports sequences of all primitive types, the DLRL states that the only supported sequences are sequences of octet.
Proposed Resolution:
Explicitly state that the DLRL supports sequences of all supported primitive types.
Resolution: Explicitly state that the DLRL supports sequences of all supported primitive types
Revised Text: Section 3.1.3.3 (Metamodel): change
sequence of octet
into
sequence of any of the above.
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Issue 9535: manual mapping key-fields of registered objects may not be changed (data-distribution-rtf)
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary: Indicate that in case of manual mapping key-fields of registered objects may not be changed
When using the DLRL with pre-defined mapping, keyfields of the topic can be mapped to ordinary attributes of a DLRL object. However, changing these attributes on the DLRL object results in a change of identity on DCPS.
Proposed Resolution:
Do not allow attributes that are mapped to key fields in the underlying Topic to be modified after the DLRL object has been registered. Throw a PreconditionNotMet Exception if this rule is violated.
Resolution:
Revised Text: Section 3.1.6.3.5 (ObjectHome): Add to the explanation of the register_object the following sentence:
"Once an object has been registered, the fields that make up its identity (i.e. the fields that are mapped onto the keyfields of the corresponding topics) may not be changed anymore."
Section 3.1.6.3.13 (ObjectRoot): Add to the description of the set_<attribute> method the following sentence:
"Since the identity of DLRL Objects that are generated using predefined mapping (i.e. with a keyDescription content of "NoOid") is determined by the value of its key fields, changing these key fields means changing their identity. For this reason these keyfields are considered read-only: any attempt to change them will raise a PreconditionNotMet. The only exception to this rule is when locally created objects have not yet been registered and therefore do not have an identity yet."
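The read-only rule for key fields can be modelled as follows. Attribute names and class shape are hypothetical; the point is that the set_<attribute> check depends on registration state:

```python
class PreconditionNotMet(Exception):
    """Stand-in for the DLRL PreconditionNotMet exception."""
    pass


class Track:
    """Sketch of a DLRL object using predefined ('NoOid') mapping,
    where track_id is mapped onto a keyfield of the underlying topic."""
    def __init__(self, track_id):
        self._registered = False
        self._track_id = track_id

    def register(self):
        self._registered = True

    def set_track_id(self, value):
        # Key fields determine the object's identity: once the object is
        # registered they are read-only, per the revised text above.
        if self._registered:
            raise PreconditionNotMet("keyfield of a registered object")
        self._track_id = value

    def get_track_id(self):
        return self._track_id
```

Locally created objects may still change their key fields freely before registration, since they have no identity yet.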
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: Do not allow attributes that are mapped to key fields in the underlying Topic to be modified after the DLRL object has been registered. Throw a PreconditionNotMet Exception if this rule is violated.
Issue 9536: Specification does not state how to instantiate an ObjectHome (data-distribution-rtf)
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
There is no (default) constructor specified for the ObjectHome class. Nowhere does the specification state how an ObjectHome should be instantiated, or what the default values are for auto_deref and for the filter expression.
Proposed Resolution:
Explicitly state that the default constructor should be used to instantiate an ObjectHome. Also state that by default the value of auto_deref will be set to true, and the filter expression will be set to NULL. Setting auto_deref to true by default ensures that the application developer has to make a conscious decision to set auto_deref to false for performance gain, which is more natural than the other way around.
Resolution: see above
Revised Text: Section 3.1.6.5 (ObjectHome): Add the following text, just above the ObjectHome table:
"A derived ObjectHome (e.g. a FooHome) has no factory. It is created as an object directly by the natural means in each language binding (e.g., using "new" in C++ or Java)."
Section 3.1.6.5 (ObjectHome): Add to the explanation of the filter attribute the following sentence:
"The filter attribute is set to NULL by default."
Section 3.1.6.5 (ObjectHome): Add to the explanation of the auto_deref attribute the following sentence:
"The auto_deref attribute is set to TRUE by default."
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: Explicitly state that the default constructor should be used to instantiate an ObjectHome. Also state that by default the value of auto_deref will be set to true, and the filter expression will be set to NULL. Setting auto_deref to true by default ensures that the application developer has to make a conscious decision to set auto_deref to false for performance gain, which is more natural than the other way around.
Issue 9537: Raise PreconditionNotMet when changing filter expression on registered Obje (data-distribution-rtf)
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary: Raise a PreconditionNotMet when changing a filter expression on a registered ObjectHome
ObjectHome contains a set_filter method to set the filter attribute. This method may only be called before the ObjectHome is registered, yet the only exception that can be thrown is BadParameter. We believe this exception does not cover the case where set_filter is called after the ObjectHome has been registered: "bad parameter" is not a good description of the error that should be generated then.
Proposed Resolution:
Raise a PreconditionNotMet Exception when the set_filter method is invoked after the ObjectHome has been registered to a Cache.
Resolution: see above
Revised Text: Object Diagram of Figure 3.4. (We have an alternative Object Diagram).
Section 3.1.6.5 (ObjectHome): change the name of the filter attribute into content_filter and the name of the set_filter method into set_content_filter in the table of the ObjectHome:
ObjectHome
Attributes
content_filter String
Operations
set_content_filter void
expression string
Section 3.1.6.5 (ObjectHome): Change the following line in the explanation of the filter attribute from:
"a filter that is used to filter incoming objects. It only concerns subscribing applications; only the incoming objects that pass the filter will be created in the Cache and by that ObjectHome. This filter is given by means …"
to:
"a content filter (content_filter) that is used to filter incoming objects. It only concerns subscribing applications; only the incoming objects that pass the content filter will be created in the Cache and by that ObjectHome. This content filter is given by means …"
Section 3.1.6.5: Change the following line in the explanation of the set_filter method from:
"set the filter for that ObjectHome (set_filter). As a filter is intended …"
to:
"set the content filter for that ObjectHome (set_content_filter). As a content filter is intended …"
Section 3.1.6.5: Add the following text to the set_content_filter method explanation:
"An attempt to change the filter expression afterwards will raise a PreconditionNotMet. Using an invalid filter expression will raise an SQLError."
Section 3.2.1.2.1: Change filter attribute and the set_filter method in the IDL definition of the ObjectHome from:
readonly attribute string filter;
void set_filter( in string expression ) raises (BadParameter);
to:
readonly attribute string content_filter;
void set_content_filter( in string expression) raises (SQLError, PreconditionNotMet);
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: To clearly distinguish between a FilterCriterion (used with Selections) and a content-filter (used at the ObjectHome), we propose to rename the attribute named "filter" to "content_filter".
Furthermore, raise a PreconditionNotMet Exception when the set_content_filter method is invoked after the ObjectHome has been registered to a Cache.
Issue 9538: PIM description of "get_domain_id" method is missing (data-distribution-rtf)
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
In section 2.1.2.2.2, the "get_domain_id" method is mentioned in the table, but is not explained in the following sections.
Proposed Resolution:
Add a section that explains the "get_domain_id" method.
Proposed Revised Text:
Replace section 2.1.2.2.1.26 with the following one:
2.1.2.2.1.26 get_domain_id
This operation retrieves the domain_id used to create the DomainParticipant. The domain_id identifies the Domain to which the DomainParticipant belongs. As described in the introduction to Section 2.1.2.2.1 each Domain represents a separate data "communication plane" isolated from other domains.
Resolution: Add a section that explains the "get_domain_id" method
Revised Text: 2.1.2.2.1.26 get_domain_id
This operation retrieves the domain_id used to create the DomainParticipant. The domain_id identifies the Domain to which the DomainParticipant belongs. As described in the introduction to Section 2.1.2.2.1 each Domain represents a separate data "communication plane" isolated from other domains.
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Issue 9539: PIM and PSM contradicting wrt "get_sample_lost_status" operation (data-distribution-rtf)
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary: PIM and PSM are contradicting with respect to the "get_sample_lost_status" operation.
According to the PIM in section 2.1.2.5.2(.12), the Subscriber class has an operation called "get_sample_lost_status". According to the PSM in section 2.2.3, this operation is not part of the Subscriber, but of the DataReader.
Proposed Resolution:
Move the "get_sample_lost_status" operation in the PIM to the DataReader as well.
RTI: We propose removing this from the Subscriber altogether and moving it to the DataReader.
Proposed Revised Text:
In the Subscriber table in section 2.1.2.5.2 Subscriber Class
Remove the entry on the operation get_sample_lost_status()
In the DataReader table in section 2.1.2.5.3 DataReader Class
Add the entry on the get_sample_lost_status() operation that was removed from the Subscriber class
Add section 2.1.2.5.3.24, previous 2.1.2.5.3.24 becomes 2.1.2.5.3.25:
2.1.2.5.3.24 get_sample_lost_status
This operation allows access to the SAMPLE_LOST_STATUS communication status. Communication statuses are described in Section 2.1.4.1, "Communication Status," on page 2-125.
Resolution: Move the operation from the Subscriber to the DataReader
Revised Text: Section 2.1.2.5.2 Subscriber Class; in the Subscriber table
Remove the entry on the operation get_sample_lost_status()
get_sample_lost_status SampleLostStatus
section 2.1.2.5.3 DataReader Class ; in the DataReader table
Add the entry on the get_sample_lost_status() operation that was removed from the Subscriber class
get_sample_lost_status ReturnCode_t
out: status SampleLostStatus
Add section 2.1.2.5.3.24, previous 2.1.2.5.3.24 becomes 2.1.2.5.3.25:
2.1.2.5.3.24 get_sample_lost_status
This operation allows access to the SAMPLE_LOST_STATUS communication status. Communication statuses are described in Section 2.1.4.1, "Communication Status," on page 2-125.
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion:
Issue 9540: Small naming inconsistentcies between PIM and PSM (data-distribution-rtf)
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
In section 2.1.2.4.1.17, the explanation for the "copy_from_topic_qos" operation mentions two parameters called "topic_qos" and "datawriter_qos_list". Neither parameter name exists.
In the PSM (section 2.2.3) the first two parameters of all "read()" and "take()" methods (and their variants) are consistently called "received_data" and "sample_infos". In the DataReader PIM in section 2.1.2.5.3, these names are only used for the "read()" and "take()" methods; all their variants have a first parameter called "data_values". The FooDataReader PIM has the same issue, but even uses the name "data_values" for the read() and take() methods themselves.
Proposed Resolution:
Replace "topic_qos" with "a_topic_qos" and "datawriter_qos_list" with "a_datawriter_qos".
Consistently use the parameter name "received_data" in both the PIM and the PSM.
We propose we either ignore the second change regarding 'data_values' or change it the other way around (from received_data to data_values). This impacts the specification less. There are a lot of places that would be affected by the change to "received_data" from "data_values"
Proposed Revised Text:
Section 2.1.2.4.1.17 copy_from_topic_qos:
1st paragraph, replace: "topic_qos" with "a_topic_qos"
1st, 2nd, and 3rd paragraph, replace: "datawriter_qos_list" with "a_datawriter_qos"
Section 2.2.3
Replace formal parameter name "received_data" with "data_value" or "data_values", depending on whether the type is a sequence or not. This affects DataReader::take*, DataReader::read*, FooDataReader::take*, and FooDataReader::read*.
Section 2.1.2.5.3 DataReader Class table replace "received_data" with "data_values". This affects the operations:
return_loan
take
read
Section 2.2.3 DCPS PSM : IDL
Change the formal parameter of the read/take operations from "received_data" to "data_values". This affects the operations:
Resolution: see above
Revised Text: interface DataReader;
Replace:
// ReturnCode_t read(inout DataSeq received_data,
With
// ReturnCode_t read(inout DataSeq data_values,
Replace:
// ReturnCode_t take(inout DataSeq received_data,
With
// ReturnCode_t take(inout DataSeq data_values,
Section 2.1.2.4.1.17 copy_from_topic_qos:
1st paragraph, replace: "topic_qos" with "a_topic_qos"; 1st and 3rd paragraph, replace: "datawriter_qos_list" with "a_datawriter_qos". The result is shown below:
This operation copies the policies in the a_topic_qos to the corresponding policies in the a_datawriter_qos (replacing values in the a_datawriter_qos, if present).
This operation does not check the resulting a_datawriter_qos for consistency. This is because the 'merged' a_datawriter_qos may not be the final one, as the application can still modify some policies prior to applying the policies to the DataWriter.
Section 2.1.2.5.3 DataReader Class; DataReader Class table, replace "received_data" with "data_values". This affects the operations: take, read, return_loan.
read ReturnCode_t
inout: data_values Data []
take ReturnCode_t
inout: data_values Data []
return_loan ReturnCode_t
inout: data_values Data []
Section 2.2.3 DCPS PSM : IDL
Replace formal parameter name "received_data" with "data_value" or "data_values", depending on whether the type is a sequence or not. This affects DataReader::take*, DataReader::read*, FooDataReader::take*, and FooDataReader::read*.
inout DataSeq data_values,
Replace:
// ReturnCode_t read_w_condition (inout DataSeq received_data,
With
// ReturnCode_t read_w_condition (inout DataSeq data_values,
Replace:
// ReturnCode_t take_w_condition (inout DataSeq received_data,
With
// ReturnCode_t take_w_condition (inout DataSeq data_values,
Replace:
// ReturnCode_t read_next_sample (inout DataSeq received_data,
With
// ReturnCode_t read_next_sample (inout DataSeq data_value,
Replace:
// ReturnCode_t take_next_sample (inout DataSeq received_data,
With
// ReturnCode_t take_next_sample (inout DataSeq data_value,
Replace:
// ReturnCode_t read_instance(inout DataSeq received_data,
With
// ReturnCode_t read_instance (inout DataSeq data_values,
Replace:
// ReturnCode_t take_instance(inout DataSeq received_data,
With
// ReturnCode_t take_instance (inout DataSeq data_values,
Replace:
// ReturnCode_t read_next_instance(inout DataSeq received_data,
With
// ReturnCode_t read_next_instance (inout DataSeq data_values,
Replace:
// ReturnCode_t take_next_instance(inout DataSeq received_data,
With
// ReturnCode_t take_next_instance (inout DataSeq data_values,
Replace:
// ReturnCode_t read_next_instance_w_condition(inout DataSeq received_data,
With
// ReturnCode_t read_next_instance_w_condition (inout DataSeq data_values,
Replace:
// ReturnCode_t take_next_instance_w_condition (inout DataSeq received_data,
With
// ReturnCode_t take_next_instance_w_condition (inout DataSeq data_values,
Replace:
// ReturnCode_t return_loan( inout DataSeq received_data,
With:
// ReturnCode_t return_loan( inout DataSeq data_values,
interface FooDataReader;
Replace:
DDS::ReturnCode_t read(inout DataSeq received_data,
With
DDS::ReturnCode_t read(inout DataSeq data_values,
Replace:
DDS::ReturnCode_t take(inout DataSeq received_data,
With
DDS::ReturnCode_t take(inout DataSeq data_values,
Replace:
DDS::ReturnCode_t read_w_condition (inout DataSeq received_data,
With
DDS::ReturnCode_t read_w_condition (inout DataSeq data_values,
Replace:
DDS::ReturnCode_t take_w_condition (inout DataSeq received_data,
With
DDS::ReturnCode_t take_w_condition (inout DataSeq data_values,
Replace:
DDS::ReturnCode_t read_next_sample (inout DataSeq received_data,
With
DDS::ReturnCode_t read_next_sample (inout DataSeq data_value,
Replace:
DDS::ReturnCode_t take_next_sample (inout DataSeq received_data,
With
DDS::ReturnCode_t take_next_sample (inout DataSeq data_value,
Replace:
DDS::ReturnCode_t read_instance(inout DataSeq received_data,
With
DDS::ReturnCode_t read_instance (inout DataSeq data_values,
Replace:
DDS::ReturnCode_t take_instance(inout DataSeq received_data,
With
DDS::ReturnCode_t take_instance (inout DataSeq data_values,
Replace:
DDS::ReturnCode_t read_next_instance(inout DataSeq received_data,
With
DDS::ReturnCode_t read_next_instance (inout DataSeq data_values,
Replace:
DDS::ReturnCode_t take_next_instance(inout DataSeq received_data,
With
DDS::ReturnCode_t take_next_instance (inout DataSeq data_values,
Replace:
DDS::ReturnCode_t read_next_instance_w_condition(inout DataSeq received_data,
With
DDS::ReturnCode_t read_next_instance_w_condition (inout DataSeq data_values,
Replace:
DDS::ReturnCode_t take_next_instance_w_condition (inout DataSeq received_data,
With
DDS::ReturnCode_t take_next_instance_w_condition (inout DataSeq data_values,
Replace:
DDS::ReturnCode_t return_loan( inout DataSeq received_data,
With:
DDS::ReturnCode_t return_loan( inout DataSeq data_values,
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: Replace "topic_qos" with "a_topic_qos" and "datawriter_qos_list" with "a_datawriter_qos".
Change it the other way around (from "received_data" to "data_values"). This impacts the specification less: many places would be affected by a change from "data_values" to "received_data".
Issue 9541: Unlimited setting for Resource limits not clearly explained (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
In section 2.1.3.19 it is not clear how to specify unlimited resource limits. (It is mentioned in the QoS table in section 2.1.3 that the default setting for resource_limits is length_unlimited, but in the context of 2.1.3.19 this is not repeated).
Proposed Resolution:
Specify in Section 2.1.3.19 that the constant LENGTH_UNLIMITED must be used to specify unlimited resource limits.
Proposed Revised Text:
In section 2.1.3.19 add the following paragraph before the last paragraph in the section (the one that starts with "The setting of RESOURCE_LIMITS …"):
The constant LENGTH_UNLIMITED may be used to indicate the absence of a particular limit. For example, setting max_samples_per_instance to LENGTH_UNLIMITED will cause the middleware not to enforce this particular limit.
Resolution: see above
Revised Text: Section 2.1.3.19 RESOURCE_LIMITS
Add the following paragraph before the last paragraph in the section (the one that starts with "The setting of RESOURCE_LIMITS …":
The constant LENGTH_UNLIMITED may be used to indicate the absence of a particular limit. For example, setting max_samples_per_instance to LENGTH_UNLIMITED will cause the middleware not to enforce this particular limit.
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: Specify in Section 2.1.3.19 that the constant LENGTH_UNLIMITED must be used to specify unlimited resource limits.
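A minimal sketch of the resolved semantics, assuming a middleware-internal admission check; the value -1 stands in for the IDL constant LENGTH_UNLIMITED, and the function name is illustrative, not part of the DDS API:

```python
# Illustrative only: how a middleware might apply a RESOURCE_LIMITS value,
# treating the LENGTH_UNLIMITED sentinel as "no limit enforced".
LENGTH_UNLIMITED = -1  # mirrors the IDL constant; -1 is an assumption here

def within_limit(current_count, limit):
    """True if storing one more sample would stay within the configured limit."""
    if limit == LENGTH_UNLIMITED:
        return True          # absence of a limit: never reject
    return current_count < limit

# e.g. max_samples_per_instance = LENGTH_UNLIMITED accepts any sample count
```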
Issue 9542: Inconsistent PIM/PSM for RETCODE_ILLEGAL_OPERATION (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
See also issue R#123 of our previous Issues document. (Addition of an IllegalOperation Errorcode). This issue has been solved on the PIM level, but the ReturnCode has not been added to the IDL PSM.
Proposed Resolution:
Add the RETCODE_ILLEGAL_OPERATION ReturnCode to the PSM in section 2.2.3.
Proposed Revised Text:
Section 2.2.3 DCPS PSM : IDL
after the line "const ReturnCode_t RETCODE_NO_DATA = 11;" add the line:
const ReturnCode_t RETCODE_ILLEGAL_OPERATION = 12;
Resolution: see above
Revised Text: Section 2.2.3 DCPS PSM : IDL
After the line "const ReturnCode_t RETCODE_NO_DATA = 11;" add the line:
const ReturnCode_t RETCODE_ILLEGAL_OPERATION = 12;.
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: Add the RETCODE_ILLEGAL_OPERATION ReturnCode to the PSM in section 2.2.3.
Issue 9543: Resetting of the statusflag during a listener callback (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
In section 2.1.4.2.1, it is explained that a status flag becomes TRUE if a plain communication status changes, and becomes FALSE again each time the application accesses the plain communication status via the proper get_<plain_communication_status> operation. This is not a complete description, since it only covers an explicit call to read the communication status. It is also possible (by attaching a Listener) to implicitly read the status (it is then passed as a parameter to the registered callback method), and afterwards the status flag should be set to FALSE as well.
Furthermore, the Status table in section 2.1.4.1 mentions that all total_count_change fields are reset when a Listener callback is performed. The same thing happens when a get_<plain_communication_status> operation is invoked. It would make sense for a Listener callback to behave in a similar way as explicitly reading the plain communication status.
Proposed Resolution::
Mention explicitly in section 2.1.4.2.1 that a status flag is also set to FALSE when a listener callback for that status has been performed. (We need to think what consequences this will have for NIL-Listeners, that behave like a no-op. Probably they should also reset the flag in that case.)
Proposed Revised Text::
In section 2.1.4.2.1 after the paragraph:
For the plain communication status, the StatusChangedFlag flag is initially set to FALSE. It becomes TRUE whenever the plain communication status changes and it is reset to FALSE each time the application accesses the plain communication status via the proper get_<plain communication status> operation on the Entity.
Add the paragraphs:
The communication status is also reset to FALSE whenever the associated listener operation is called as the listener implicitly accesses the status which is passed as a parameter to the operation. The fact that the status is reset prior to calling the listener means that if the application calls the get_<plain communication status> from inside the listener it will see the status already reset.
An exception to this rule is when the associated listener is the 'nil' listener. As described in section 2.1.4.3.1 the 'nil' listener is treated as a NOOP and the act of calling the 'nil' listener does not reset the communication status.
Resolution: see above
Revised Text: In section 2.1.4.2.1 after the paragraph:
"For the plain communication status, the StatusChangedFlag flag is initially set to FALSE. It becomes TRUE whenever the plain communication status changes and it is reset to FALSE each time the application accesses the plain communication status via the proper get_<plain communication status> operation on the Entity. "
Add the paragraphs:
The communication status is also reset to FALSE whenever the associated listener operation is called as the listener implicitly accesses the status which is passed as a parameter to the operation. The fact that the status is reset prior to calling the listener means that if the application calls the get_<plain communication status> from inside the listener it will see the status already reset.
An exception to this rule is when the associated listener is the 'nil' listener. As described in section 2.1.4.3.1 the 'nil' listener is treated as a NOOP and the act of calling the 'nil' listener does not reset the communication status.
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: Mention explicitly in section 2.1.4.2.1 that a status flag is also set to FALSE when a listener callback for that status has been performed. (We need to think what consequences this will have for NIL-Listeners, that behave like a no-op. Probably they should also reset the flag in that case.)
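The resolved StatusChangedFlag behavior can be sketched as follows (hypothetical Python, not a DDS binding): the flag is cleared both by an explicit get_<status> call and by invoking a non-nil listener, while a 'nil' listener is a no-op and leaves the flag set.

```python
# Sketch of the StatusChangedFlag semantics for one plain communication status.
# All names are illustrative; None models the 'nil' listener.
class PlainStatusSketch:
    def __init__(self, listener=None):
        self.status_changed_flag = False
        self._status = {"total_count": 0}
        self._listener = listener

    def _status_changes(self):
        # Called internally when the plain communication status changes.
        self._status["total_count"] += 1
        self.status_changed_flag = True
        if self._listener is not None:
            # Reset BEFORE the callback: a get_status() issued from inside
            # the listener already sees the flag cleared.
            self.status_changed_flag = False
            self._listener(dict(self._status))
        # 'nil' listener: treated as a NOOP, the flag stays TRUE

    def get_status(self):
        # Explicit access also resets the flag.
        self.status_changed_flag = False
        return dict(self._status)
```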
Issue 9544: Incorrect description of enable precondition (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
In section 2.1.2.2.1. DomainParticipant Class it says:
The following operations may be called even if the DomainParticipant is enabled. Other operations will have the value NOT_ENABLED if called on a disabled
It should say:
The following operations may be called even if the DomainParticipant is not enabled. Other operations will return the value NOT_ENABLED if called on a disabled
Proposed Resolution:
Proposed Revised Text:
In section 2.1.2.2.1. DomainParticipant Class, paragraph at the end of the section before the bullet points
Replace:
The following operations may be called even if the DomainParticipant is enabled. Other operations will have the value NOT_ENABLED if called on a disabled
With:
The following operations may be called even if the DomainParticipant is not enabled. Other operations will return the value NOT_ENABLED if called on a disabled
Resolution: Perform the above change
Revised Text: Section 2.1.2.2.1. DomainParticipant Class, paragraph at the end of the section before the bullet points
Replace:
The following operations may be called even if the DomainParticipant is enabled. Other operations will have the value NOT_ENABLED if called on a disabled
With:
The following operations may be called even if the DomainParticipant is not enabled. Other operations will return the value NOT_ENABLED if called on a disabled.
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Issue 9545: invalid reference to delete_datareader (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
On page 2-70 at the end of section 2.1.2.5.2 (Subscriber Class) the description states that a list of operations including delete_datareader may return NOT_ENABLED. The operation delete_datareader should be removed from this list.
Proposed Resolution:
Proposed Revised Text:
In section 2.1.2.5.2 Subscriber Class, at the end right before section 2.1.2.5.2.1 replace paragraph:
All operations except for the base-class operations set_qos, get_qos, set_listener, get_listener, enable, get_statuscondition, create_datareader, and delete_datareader may return the value NOT_ENABLED.
With:
All operations except for the base-class operations set_qos, get_qos, set_listener, get_listener, enable, get_statuscondition, and create_datareader may return the value NOT_ENABLED.
Resolution: Remove delete_datareader from said list
Revised Text: Section 2.1.2.5.2 Subscriber Class, at the end right before section 2.1.2.5.2.1 replace paragraph:
All operations except for the base-class operations set_qos, get_qos, set_listener, get_listener, enable, get_statuscondition, create_datareader, and delete_datareader may return the value NOT_ENABLED.
With
All operations except for the base-class operations set_qos, get_qos, set_listener, get_listener, enable, get_statuscondition, and create_datareader may return the value NOT_ENABLED.
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Issue 9546: Clarify the meaning of locally (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
On 2-94 section 2.1.2.5.5 (SampleInfo Class) the description of publication_handle states that it identifies locally the DataWriter that modified the instance.
Clarify that locally means the instance_handle from the builtin Publication DataReader belonging to the Participant of the DataReader from which the sample is read.
Proposed Resolution:
Proposed Revised Text:
In section 2.1.2.5.5 SampleInfo Class, replace the bullet:
the publication_handle that identifies locally the DataWriter that modified the instance.
With the bullet:
the publication_handle that identifies locally the DataWriter that modified the instance. The publication_handle is the same InstanceHandle_t that is returned by the operation get_matched_publications on the DataReader and can also be used as a parameter to the DataReader operation get_matched_publication_data.
In section 2.1.2.5.3.33 get_matched_publications after the first paragraph add the paragraph.
The handles returned in the 'publication_handles' list are the ones that are used by the DDS implementation to locally identify the corresponding matched DataWriter. These handles match the ones that appear in the 'instance_handle' field of the SampleInfo when reading the "DCPSPublications" builtin topic.
In section 2.1.2.4.2.24 get_matched_subscriptions after the first paragraph add the paragraph:
The handles returned in the 'subscription_handles' list are the ones that are used by the DDS implementation to locally identify the corresponding matched DataReader. These handles match the ones that appear in the 'instance_handle' field of the SampleInfo when reading the "DCPSSubscriptions" builtin topic.
Resolution: Add the stated clarification
Revised Text: Section 2.1.2.5.5 SampleInfo Class, replace the bullet:
· the publication_handle that identifies locally the DataWriter that modified the instance.
With the bullet:
· the publication_handle that identifies locally the DataWriter that modified the instance. The publication_handle is the same InstanceHandle_t that is returned by the operation get_matched_publications on the DataReader and can also be used as a parameter to the DataReader operation get_matched_publication_data.
Section 2.1.2.5.3.34 get_matched_publications ; After the first paragraph add the paragraph.
The handles returned in the 'publication_handles' list are the ones that are used by the DDS implementation to locally identify the corresponding matched DataWriter entities. These handles match the ones that appear in the 'instance_handle' field of the SampleInfo when reading the "DCPSPublications" builtin topic.
Section 2.1.2.4.2.24 get_matched_subscriptions ; After the first paragraph add the paragraph:
The handles returned in the 'subscription_handles' list are the ones that are used by the DDS implementation to locally identify the corresponding matched DataReader entities. These handles match the ones that appear in the 'instance_handle' field of the SampleInfo when reading the "DCPSSubscriptions" builtin topic.
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
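The clarified meaning of "locally" can be illustrated with a small sketch (hypothetical Python, handles as plain integers): the publication_handle in a SampleInfo is the same local handle returned by get_matched_publications and accepted by get_matched_publication_data.

```python
# Illustrative reader-side handle bookkeeping; not a DDS binding.
class ReaderSketch:
    def __init__(self):
        self._matched = {}   # local handle -> builtin publication data

    def _match_writer(self, handle, pub_data):
        # Called internally when a DataWriter is discovered and matched.
        self._matched[handle] = pub_data

    def get_matched_publications(self):
        return sorted(self._matched)

    def get_matched_publication_data(self, publication_handle):
        # The handle from SampleInfo.publication_handle can be used directly.
        return self._matched[publication_handle]
```

The same handle would also appear in the 'instance_handle' field when reading the "DCPSPublications" builtin topic.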
Issue 9548: Missing autopurge_disposed_sample_delay (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
In the QoS table for built-in Subscriber and DataReader objects (Section 2.1.5 Built-in Topics) the value for autopurge_disposed_sample_delay is missing.
Proposed Resolution:
Proposed Revised Text:
In the UML figure in section 2.1.3 Supported QoS
Class ReaderDataLifecycleQoS, Add the field:
autopurge_disposed_sample_delay : Duration_t
In section 2.1.5 Built-in Topics, QoS table, READER_DATA_LIFECYCLE row, add:
autopurge_disposed_sample_delay = infinite
Resolution: Add the missing field.
Revised Text: see pages 28/29 of ptc/2006-04-08
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Issue 9549: Illegal return value register_instance (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
In section 2.1.2.4.2.5 register_instance the description states that if this operation exceeds the max_blocking_time this operation will return TIMEOUT. However this is not possible because the operation cannot return a ReturnCode_t value.
Proposed Resolution:
Proposed Revised Text:
Section 2.1.2.4.2.5 register_instance
At the end of the 5th paragraph Replace:
If max_blocking_time elapses before the DataWriter is able to store the modification without exceeding the limits, the operation will fail and return TIMEOUT
With:
If max_blocking_time elapses before the DataWriter is able to store the modification without exceeding the limits, the operation will fail and return HANDLE_NIL
Resolution: State that in this case the operation will return HANDLE_NIL instead
Revised Text: Section 2.1.2.4.2.5 register_instance
At the end of the 5th paragraph, replace TIMEOUT with HANDLE_NIL, resulting in:
If max_blocking_time elapses before the DataWriter is able to store the modification without exceeding the limits, the operation will fail and return HANDLE_NIL.
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
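The corrected behavior can be sketched as follows (hypothetical Python, not a DDS binding): register_instance returns an InstanceHandle_t rather than a ReturnCode_t, so when it cannot store the modification it returns HANDLE_NIL instead of TIMEOUT.

```python
# Illustrative writer sketch. A real writer would block up to
# max_blocking_time waiting for resources; here a simple instance-limit
# check stands in for that wait failing.
HANDLE_NIL = 0  # assumed nil-handle encoding for this sketch

class WriterSketch:
    def __init__(self, max_instances):
        self._max_instances = max_instances
        self._handles = {}   # instance key -> InstanceHandle_t

    def register_instance(self, key):
        if key not in self._handles and len(self._handles) >= self._max_instances:
            return HANDLE_NIL   # could not store without exceeding the limits
        return self._handles.setdefault(key, len(self._handles) + 1)
```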
Issue 9550: Typo in section 2.1.2.5.1 (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
On page 2-65 the second last bullet states
The sample_rank indicates the number or samples of the same instance that follow the current one in the collection.
The 'or' should be 'of'.
Proposed Resolution:
Proposed Revised Text:
Section 2.1.2.5.1 Access to the data, second to last bullet
Replace 'or' with 'of' in the sentence:
The sample_rank indicates the number or samples of the same instance that follow the current one in the collection.
Resulting in:
The sample_rank indicates the number of samples of the same instance that follow the current one in the collection.
Resolution: Fix typo.
Revised Text: Section 2.1.2.5.1 Access to the data, second to last bullet
Replace 'or' with 'of' in the sentence that starts with "The sample_rank indicates the number or samples…", resulting in:
The sample_rank indicates the number of samples of the same instance that follow the current one in the collection.
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Issue 9551: Extended visibility of instance state changes (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
The instance state is only accessible via SampleInfo, and this requires the availability of data.
This implies that the disposed and the no-writers state of an instance may not be noticed if the application has taken all samples.
Subsequent instance state changes are only notified if all samples are taken.
Consequently, it is very hard to receive notifications on disposal of instances.
This requires data, so applications should use read instead of take.
But take is required for subsequent notifications.
Applications are not notified on arrival of data if they choose not to take all data (read, or take only part of it).
Occasionally an application may need to react to disposal or the no-writers state of instances (e.g., to clean up allocated resources), and applications may also continuously take all samples to save resources.
In this case a dispose or no-writers state will only be noticed if a new generation appears, which may never happen.
Occasionally applications may want to keep all read samples and still be notified on data arrival.
Applications should be notified whenever new data arrives, whether they have taken all previous data samples or not.
According to the spec (section 2.1.2.5.3.8) it is possible to get 'meta samples', that is, samples that have a SampleInfo but no associated data; this can be used to notify of disposal, no writers, and such.
Proposed Resolution:
Always reset the read communication status flag on any read or take operation.
Provide a notification mechanism on the DataReader that specifies the instance handle of the instance whose state has changed.
-> This is managed by the meta-sample mechanism mentioned above
Provide a method on an instance handle to access the instance state.
Modify figure 2-16 and section 2.1.4.2.2 to state that the ReadCommunicationStatus is reset to FALSE whenever the corresponding listener operation is called, or else if a read or take operation is called on the associated DataReader
In addition the ON_DATA_ON_READERS status is reset if the on_data_available is called. The inverse (resetting the ON_DATA_AVAILABLE status when the on_data_on_readers is called) does not happen.
Proposed Revised Text:
Section 2.1.2.5 Subscription Module, Figure 2-10
Add the following field to the SampleInfo class:
valid_data : boolean
Section 2.1.2.5.1 Access to the data (see attached document access_to_the_data2CMP.pdf for the resulting section with changes)
>>After the 2nd paragraph "Each of these" add the section heading:
2.1.2.5.1.1 Interpretation of the SampleInfo
3rd paragraph; add the following bullet after the bullet that starts with "The instance_state of the related instance"
The valid_data flag. This flag indicates whether there is data associated with the sample. Some samples do not contain data indicating only a change on the instance_state of the corresponding instance.
>>Before the paragraph that starts with "For each sample received" add the section headings:
2.1.2.5.1.2 Interpretation of the SampleInfo sample_state
>>Before the paragraph that starts with "For each instance the middleware internally maintains" add the section heading:
2.1.2.5.1.3 Interpretation of the SampleInfo instance_state
>>Before the paragraph that starts with "For each instance the middleware internally maintains two counts: the disposed_generation_count and no_writers_generation_count" add the following subsections (2.1.2.5.1.4, and 2.1.2.5.1.5):
2.1.2.5.1.4 Interpretation of the SampleInfo valid_data
Normally each DataSample contains both a SampleInfo and some Data. However there are situations where a DataSample contains only the SampleInfo and does not have any associated data. This occurs when the Service notifies the application of a change of state for an instance that was caused by some internal mechanism (such as a timeout) for which there is no associated data. An example of this situation is when the Service detects that an instance has no writers and changes the corresponding instance_state to NOT_ALIVE_NO_WRITERS.
The actual set of scenarios under which the middleware returns DataSamples containing no Data is implementation dependent. The application can distinguish whether a particular DataSample has data by examining the value of the valid_data flag. If this flag is set to TRUE, the DataSample contains valid Data; if the flag is set to FALSE, the DataSample contains no Data.
To ensure correctness and portability, the valid_data flag must be examined by the application prior to accessing the Data associated with the DataSample. If the flag is set to FALSE, the application should not access the Data associated with the DataSample; that is, the application should access only the SampleInfo.
2.1.2.5.1.5 Interpretation of the SampleInfo disposed_generation_count and no_writers_generation_count
>>Before the paragraph that starts with "The sample_rank and generation_rank available in the SampleInfo are computed …" add the section heading:
2.1.2.5.1.6 Interpretation of the SampleInfo sample_rank, generation_rank, and absolute_generation_rank
>>Before the paragraph that starts with "These counters and ranks allow the application to distinguish" add the section heading:
2.1.2.5.1.7 Interpretation of the SampleInfo counters and ranks
>>Before the paragraph that starts with "For each instance (identified by the key), the middleware internally…" add the section heading:
2.1.2.5.1.8 Interpretation of the SampleInfo view_state
>>Before the paragraph that starts with "The application accesses data by means of the operations read or take on the DataReader" add the section heading:
2.1.2.5.1.9 Data access patterns
Section 2.1.2.5.5 Sample Info class
Add another bullet to the list:
The valid_data flag that indicates whether the DataSample contains data or is only used to communicate a change in the instance_state of the instance.
Section 2.2.3 DCPS PSM : IDL
struct SampleInfo
Add the following field at the end of the structure:
boolean valid_data;
The resulting structure is:
struct SampleInfo {
SampleStateKind sample_state;
ViewStateKind view_state;
InstanceStateKind instance_state;
Time_t source_timestamp;
InstanceHandle_t instance_handle;
InstanceHandle_t publication_handle;
long disposed_generation_count;
long no_writers_generation_count;
long sample_rank;
long generation_rank;
long absolute_generation_rank;
boolean valid_data;
};
Resolution: see above
Revised Text: see pages 33 - 36 of ptc/2006-04-08
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: According to the spec (section 2.1.2.5.3.8) it is possible to get 'meta samples', that is, samples that have a SampleInfo but no associated data; this can be used to notify of disposal, no writers, and such. So this part is not a problem.
The above problems can be solved as follows:
· Always reset the read communication status flag on any read or take operation.
· State that meta-samples (samples with no data) are used to provide a notification mechanism on the DataReader that specifies the instance handle of the instance whose state has changed.
Specifically the issue can be resolved by:
1. Modifying figure 2-16 and section 2.1.4.2.2 to state that the ReadCommunicationStatus is reset to FALSE whenever the corresponding listener operation is called, or else if a read or take operation is called on the associated DataReader
2. Changing the description of the ON_DATA_ON_READERS status such that it is reset if the on_data_available is called. The inverse (resetting the ON_DATA_AVAILABLE status when the on_data_on_readers is called) does not happen.
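The portable access pattern introduced by the valid_data flag can be sketched as follows (illustrative Python, not a DDS binding): examine SampleInfo.valid_data before touching the data, since meta-samples (valid_data == FALSE) carry only an instance_state change.

```python
# Sketch of reading a mixed stream of data samples and meta-samples.
from dataclasses import dataclass

@dataclass
class SampleInfoSketch:
    instance_state: str
    valid_data: bool

def process(samples):
    """samples: list of (SampleInfoSketch, data-or-None) pairs."""
    seen = []
    for info, data in samples:
        if info.valid_data:
            seen.append(("data", data))
        else:
            # No associated data: only the instance_state change is meaningful,
            # so the Data must not be accessed.
            seen.append(("state", info.instance_state))
    return seen
```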
Issue 9552: Clarify notification of ownership change (data-distribution-rtf)
Click here for this issue's archive.
Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
In section 2.1.3.9.2 EXCLUSIVE kind (the last sentence on page 2-114) the specification states that ownership changes are notified via a status change. However there is no status change that notifies of ownership change. The only way to detect it is to look at the SampleInfo and see that the publication_handle has changed.
Proposed Resolution:
Remove the sentence. We could add the Status, Listener, and Callback, but it seems unnecessary until we see some actual use-cases that require this…
Proposed Revised Text:
In section 2.1.3.9.2 EXCLUSIVE kind, last sentence in last paragraph, remove the sentence:
"The DataReader is also notified of this via a status change that is accessible by means of the Listener or Condition mechanisms."
Resolution: see above
Revised Text: Section 2.1.3.9.2 EXCLUSIVE kind, last sentence in last paragraph, remove the sentence:
The DataReader is also notified of this via a status change that is accessible by means of the Listener or Condition mechanisms.
Disposition: Resolved
Actions taken:
April 3, 2006: received issue
August 23, 2006: closed issue
Discussion: Remove the sentence. We could add the Status, Listener, and Callback, but it seems unnecessary until we see some actual use-cases that require this.
Issue 9553: read/take_next_instance() (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: Must read/take_next_instance() require that the handle corresponds to a known data-object?
The sections for read/take_next_instance() and read/take_next_instance_w_condition() state that, if the implementation can detect an invalid handle, it should return BAD_PARAMETER in this case; otherwise the behavior is unspecified.
It might be desirable to allow an invalid handle to be passed in, especially when the user is iterating through instances and takes all samples of an instance that is NOT_ALIVE and has no writers; that action may actually free the instance, 'invalidating' its handle.
Proposed Resolution:
Allow passing a handle that does not correspond to any instance currently on the DataReader to read_next_instance/take_next_instance. This handle should be sorted in a deterministic way with regards to the other handles such that the iteration is not interrupted.
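The proposed total ordering can be sketched with a toy model. This is not DDS API code: handles are modeled as plain integers (as the spec's own 'as if' explanation suggests), and `next_instance` is a hypothetical helper standing in for read/take_next_instance. The point is that the 'greater-than' relation is defined over handle values, not over currently-managed instances, so iteration survives a handle being invalidated mid-loop.

```python
# Toy sketch (NOT a real DDS API) of the 'greater-than' ordering over
# instance handles, treating each handle 'as if' it were a unique integer.
# The ordering is defined even for handles the reader no longer manages,
# so iteration is not interrupted when a handle becomes invalid.

def next_instance(managed_handles, previous_handle):
    """Return the smallest managed handle greater than previous_handle,
    or None when the iteration is finished."""
    greater = [h for h in managed_handles if h > previous_handle]
    return min(greater) if greater else None

handles = {3, 7, 12}
visited = []
h = float("-inf")  # stand-in for HANDLE_NIL: precedes every handle
while (h := next_instance(handles, h)) is not None:
    visited.append(h)
    if h == 7:
        handles.discard(7)  # instance reclaimed; handle 7 is now 'invalid'
# Iteration still reaches 12: the ordering depends only on the handle
# value passed in, not on whether that handle is still managed.
```

This mirrors the practical scenario in the proposed text: taking the last samples of a NOT_ALIVE_NO_WRITERS instance frees it, yet the next read_next_instance call with the stale handle still advances correctly.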
Proposed Revised Text:
Section 2.1.2.5.3.16 read_next_instance
Replace the paragraph:
This operation implies the existence of some total order 'greater than' relationship between the instance handles. The specifics of this relationship are not important and are implementation specific. The important thing is that, according to the middleware, all instances are ordered relative to each other. This ordering is between the instances, that is, it does not depend on the actual samples received or available. For the purposes of this explanation it is 'as if' each instance handle was represented as a unique integer.
With:
This operation implies the existence of a total order 'greater-than' relationship between the instance handles. The specifics of this relationship are not important and are implementation specific. The important thing is that, according to the middleware, all instances are ordered relative to each other. This ordering is between the instance handles: it should not depend on the state of the instance (e.g., whether it has data or not) and must be defined even for instance handles that do not correspond to instances currently managed by the DataReader. For the purposes of the ordering it should be 'as if' each instance handle was represented as a unique integer.
Section 2.1.2.5.3.16 read_next_instance
Remove the paragraph:
The behavior of the read_instance operation follows the same rules as the read operation regarding the pre-conditions and post-conditions for the data_values and sample_infos collections. Similar to read, the read_instance operation may 'loan' elements to the output collections which must then be returned by means of return_loan.
Replace the paragraph:
This operation may return BAD_PARAMETER if the InstanceHandle_t a_handle does not correspond to an existing data-object known to the DataReader. If the implementation is not able to check invalid handles, then the result in this situation is unspecified.
With
Note that it is possible to call the 'read_next_instance' operation with an instance handle that does not correspond to an instance currently managed by the DataReader. This is because, as stated earlier, the 'greater-than' relationship is defined even for handles not managed by the DataReader. One practical situation where this may occur is when an application is iterating through all the instances, takes all the samples of a NOT_ALIVE_NO_WRITERS instance, returns the loan (at which point the instance information may be removed, and thus the handle becomes invalid), and tries to read the next instance.
Section 2.1.2.5.3.17 take_next_instance
Replace the paragraph:
This operation may return BAD_PARAMETER if the InstanceHandle_t a_handle does not correspond to an existing data-object known to the DataReader. If the implementation is not able to check invalid handles, then the result in this situation is unspecified.
With
Similar to the operation read_next_instance (see Section 2.1.2.5.3.16) it is possible to call take_next_instance with an instance handle that does not correspond to an instance currently managed by the DataReader.
Section 2.1.2.5.3.18 read_next_instance_w_condition
Replace the paragraph:
This operation may return BAD_PARAMETER if the InstanceHandle_t a_handle does not correspond to an existing data-object known to the DataReader. If the implementation is not able to check invalid handles, then the result in this situation is unspecified.
With
Similar to the operation read_next_instance (see Section 2.1.2.5.3.16), it is possible to call read_next_instance_w_condition with an instance handle that does not correspond to an instance currently managed by the DataReader.
Section 2.1.2.5.3.19 take_next_instance_w_condition
Replace the paragraph:
This operation may return BAD_PARAMETER if the InstanceHandle_t a_handle does not correspond to an existing data-object known to the DataReader. If the implementation is not able to check invalid handles, then the result in this situation is unspecified.
With
Similar to the operation read_next_instance (see Section 2.1.2.5.3.16) it is possible to call take_next_instance_w_condition with an instance handle that does not correspond to an instance currently managed by the DataReader.
Resolution: see above
Revised Text: Section 2.1.2.5.3.16 read_next_instance
Replace the paragraph:
This operation implies the existence of some total order 'greater than' relationship between the instance handles. The specifics of this relationship are not important and are implementation specific. The important thing is that, according to the middleware, all instances are ordered relative to each other. This ordering is between the instances, that is, it does not depend on the actual samples received or available. For the purposes of this explanation it is 'as if' each instance handle was represented as a unique integer.
With:
This operation implies the existence of a total order 'greater-than' relationship between the instance handles. The specifics of this relationship are not important and are implementation specific. The important thing is that, according to the middleware, all instances are ordered relative to each other. This ordering is between the instance handles: it should not depend on the state of the instance (e.g., whether it has data or not) and must be defined even for instance handles that do not correspond to instances currently managed by the DataReader. For the purposes of the ordering it should be 'as if' each instance handle was represented as a unique integer.
Section 2.1.2.5.3.16 read_next_instance
Remove the paragraph:
The behavior of the read_instance operation follows the same rules as the read operation regarding the pre-conditions and post-conditions for the data_values and sample_infos collections. Similar to read, the read_instance operation may 'loan' elements to the output collections which must then be returned by means of return_loan.
Replace the paragraph:
This operation may return BAD_PARAMETER if the InstanceHandle_t a_handle does not correspond to an existing data-object known to the DataReader. If the implementation is not able to check invalid handles, then the result in this situation is unspecified.
With
Note that it is possible to call the 'read_next_instance' operation with an instance handle that does not correspond to an instance currently managed by the DataReader. This is because, as stated earlier, the 'greater-than' relationship is defined even for handles not managed by the DataReader. One practical situation where this may occur is when an application is iterating through all the instances, takes all the samples of a NOT_ALIVE_NO_WRITERS instance, returns the loan (at which point the instance information may be removed, and thus the handle becomes invalid), and tries to read the next instance.
Section 2.1.2.5.3.17 take_next_instance
Replace the paragraph:
This operation may return BAD_PARAMETER if the InstanceHandle_t a_handle does not correspond to an existing data-object known to the DataReader. If the implementation is not able to check invalid handles, then the result in this situation is unspecified.
With
Similar to the operation read_next_instance (see Section 2.1.2.5.3.16) it is possible to call take_next_instance with an instance handle that does not correspond to an instance currently managed by the DataReader.
Section 2.1.2.5.3.18 read_next_instance_w_condition
Replace the paragraph:
This operation may return BAD_PARAMETER if the InstanceHandle_t a_handle does not correspond to an existing data-object known to the DataReader. If the implementation is not able to check invalid handles, then the result in this situation is unspecified.
With
Similar to the operation read_next_instance (see Section 2.1.2.5.3.16), it is possible to call read_next_instance_w_condition with an instance handle that does not correspond to an instance currently managed by the DataReader.
Section 2.1.2.5.3.19 take_next_instance_w_condition
Replace the paragraph:
This operation may return BAD_PARAMETER if the InstanceHandle_t a_handle does not correspond to an existing data-object known to the DataReader. If the implementation is not able to check invalid handles, then the result in this situation is unspecified.
With
Similar to the operation read_next_instance (see Section 2.1.2.5.3.16) it is possible to call take_next_instance_w_condition with an instance handle that does not correspond to an instance currently managed by the DataReader.
Disposition: Resolved
Actions taken:
April 6, 2006: received issue
August 23, 2006: closed issue
Discussion: Allow passing a handle that does not correspond to any instance currently on the DataReader to read_next_instance/take_next_instance. This handle should be sorted in a deterministic way with regards to the other handles such that the iteration is not interrupted.
Issue 9554: instance resource can be reclaimed in READER_DATA_LIFECYCLE QoS section (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary: Clarification of when an instance resource can be reclaimed in the READER_DATA_LIFECYCLE QoS section
In Section 2.1.3.22 (READER_DATA_LIFECYCLE QoS), the fourth paragraph mentions how "the DataReader can only reclaim all resources for instances that instance_state = NOT_ALIVE_NO_WRITERS and for which all samples have been 'taken'".
This should be corrected to state "for instances for which all samples have been 'taken' and either instance_state = NOT_ALIVE_NO_WRITERS or instance_state = NOT_ALIVE_DISPOSED and there are no 'live' writers".
In light of this the statement in the last paragraph stating that once the state becomes NOT_ALIVE_DISPOSED after the autopurge_disposed_samples_delay elapses, "the DataReader will purge all internal information regarding the instance; any untaken samples will also be lost" is not entirely true. If there are other 'live' writers, the DataReader will maintain the state on the instance of which DataWriters are writing to it.
We should change the "will purge all" to "may purge all" or even "will purge". Alternatively, we could describe in further detail when it "will purge all", i.e. when there are no 'live' writers.
The biggest thing here is to decide whether the instance lifecycle can end directly from the NOT_ALIVE_DISPOSED state (as Figure 2-11 currently states) or whether we must force it to go through NOT_ALIVE_NO_WRITERS; that is, in the case where the last writer unregisters a disposed instance, do we transition to NOT_ALIVE_NO_WRITERS+NOT_ALIVE_DISPOSED or do we finish the lifecycle directly without notifying the user (as is indicated now)?
We think the current behavior is better because, from the application reader's point of view, the instance does not exist once it is DISPOSED. The fact that we keep the instance state so that we can retain ownership is a detail inside the middleware, so it would be unnatural to get a further indication that the instance (which the reader no longer knows about) now has no writers.
We suggest the proposed changes should reflect this point of view.
Proposed Resolution:
Make the suggested corrections:
(1) Correct when readers can reclaim resources to include the NOT_ALIVE_DISPOSED state when there are no live writers. So we always reclaim when there are no writers and all the samples for that instance are taken; these samples will include a sentinel meta-sample with an instance state of either NOT_ALIVE_NO_WRITERS or NOT_ALIVE_DISPOSED.
(2) Clarify that autopurge_disposed_samples_delay removes only the samples, not the instance; the instance will only be removed in the above case.
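The two corrections can be condensed into a small sketch. This is a toy Python model, not DDS middleware code: the `Instance` fields and helper names are hypothetical, and the delay itself is elided — `autopurge_disposed` models only what happens once autopurge_disposed_samples_delay has elapsed.

```python
# Toy model (NOT a DDS API) of the proposed reclamation rule: a reader may
# reclaim an instance only when it has no live writers AND all samples --
# including the sentinel meta-sample carrying the final instance_state
# (NOT_ALIVE_NO_WRITERS or NOT_ALIVE_DISPOSED) -- have been taken.
from dataclasses import dataclass

@dataclass
class Instance:
    live_writers: int
    untaken_samples: int
    instance_state: str

def can_reclaim(inst: Instance) -> bool:
    return (inst.live_writers == 0
            and inst.untaken_samples == 0
            and inst.instance_state in ("NOT_ALIVE_NO_WRITERS",
                                        "NOT_ALIVE_DISPOSED"))

def autopurge_disposed(inst: Instance) -> None:
    """After autopurge_disposed_samples_delay elapses: purge the samples
    only, NOT the instance record itself."""
    if inst.instance_state == "NOT_ALIVE_DISPOSED":
        inst.untaken_samples = 0

inst = Instance(live_writers=1, untaken_samples=2,
                instance_state="NOT_ALIVE_DISPOSED")
autopurge_disposed(inst)             # samples purged after the delay...
reclaimable_now = can_reclaim(inst)  # ...but a live writer still pins the instance
inst.live_writers = 0
reclaimable_later = can_reclaim(inst)
```

The separation of `autopurge_disposed` from `can_reclaim` is the point of correction (2): purging disposed samples never implies the instance record is gone while writers remain.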
Proposed Revised Text:
Section 2.1.3.22 READER_DATA_LIFECYCLE QoS
Replace the paragraph:
Under normal circumstances the DataReader can only reclaim all resources for instances that instance_state = NOT_ALIVE_NO_WRITERS and for which all samples have been 'taken.'
With
Under normal circumstances the DataReader can only reclaim all resources for instances for which there are no writers and for which all samples have been 'taken.' The last sample the DataReader will have taken for that instance will have an instance_state of either NOT_ALIVE_NO_WRITERS or NOT_ALIVE_DISPOSED depending on whether the last writer that had ownership of the instance disposed it or not. Refer to Figure 2-11 for a statechart describing the transitions possible for the instance_state.
In the Paragraph starting with "The autopurge_nowriter_samples_delay defines.."
Replace
once its view_state becomes NOT_ALIVE_NO_WRITERS
With
once its instance_state becomes NOT_ALIVE_NO_WRITERS
Replace the paragraph:
The autopurge_disposed_samples_delay defines the maximum duration for which the DataReader will maintain information regarding an instance once its view_state becomes NOT_ALIVE_DISPOSED. After this time elapses, the DataReader will purge all internal information regarding the instance; any untaken samples will also be lost
With
The autopurge_disposed_samples_delay defines the maximum duration for which the DataReader will maintain samples for an instance once its instance_state becomes NOT_ALIVE_DISPOSED. After this time elapses, the DataReader will purge all samples for the instance.
Resolution: see above
Revised Text: Section 2.1.3.22 READER_DATA_LIFECYCLE QoS
Replace the paragraph:
Under normal circumstances the DataReader can only reclaim all resources for instances that instance_state = NOT_ALIVE_NO_WRITERS and for which all samples have been 'taken.'
With
Under normal circumstances the DataReader can only reclaim all resources for instances for which there are no writers and for which all samples have been 'taken.' The last sample the DataReader will have taken for that instance will have an instance_state of either NOT_ALIVE_NO_WRITERS or NOT_ALIVE_DISPOSED depending on whether the last writer that had ownership of the instance disposed it or not. Refer to Figure 2-11 for a statechart describing the transitions possible for the instance_state.
In the Paragraph starting with "The autopurge_nowriter_samples_delay defines.."
Replace
once its view_state becomes NOT_ALIVE_NO_WRITERS
With
once its instance_state becomes NOT_ALIVE_NO_WRITERS
Replace the paragraph:
The autopurge_disposed_samples_delay defines the maximum duration for which the DataReader will maintain information regarding an instance once its view_state becomes NOT_ALIVE_DISPOSED. After this time elapses, the DataReader will purge all internal information regarding the instance; any untaken samples will also be lost
With
The autopurge_disposed_samples_delay defines the maximum duration for which the DataReader will maintain samples for an instance once its instance_state becomes NOT_ALIVE_DISPOSED. After this time elapses, the DataReader will purge all samples for the instance.
Disposition: Resolved
Actions taken:
April 6, 2006: received issue
August 23, 2006: closed issue
Discussion: Make the suggested corrections:
(1) Correct when readers can reclaim resources to include the NOT_ALIVE_DISPOSED state when there are no live writers. So we always reclaim when there are no writers and all the samples for that instance are taken; these samples will include a sentinel meta-sample with an instance state of either NOT_ALIVE_NO_WRITERS or NOT_ALIVE_DISPOSED.
(2) Clarify that autopurge_disposed_samples_delay removes only the samples, not the instance; the instance will only be removed in the above case.
Issue 9555: String sequence should be a parameter and not return value (data-distribution-rtf)
Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
In Section 2.1.2.5.2.11 (notify_datareaders) the first sentence states
This operation invokes the operation on_data_available on the DataReaderListener objects attached to contained DataReader entities containing samples with SampleState 'NOT_READ' and any ViewState and InstanceState.
In Section 2.1.4.2.2 (Changes in Read Communication Statuses) it states in the first paragraph that the "StatusChangedFlag becomes false again when all samples are removed from the responsibility of the middleware via the take operation on the proper DataReader entities".
In Figure 2-16 in the same section, the transition from the TRUE state to FALSE is accompanied by the condition "DataReader:take[all data taken by application]".
However, Section 2.1.4.4 (Condition and Wait-sets) describes a general use pattern whose last step deals with using the result of the wait operation; its third sub-bullet states that if the wait unblocked due to a StatusCondition and the status change is DATA_AVAILABLE, the appropriate action is to call read/take on the relevant DataReader.
If only a take of all samples will reset the status, then simply calling read in this use pattern will not reset the status, and the given general use pattern will actually spin.
Proposed Resolution:
The actual condition for the StatusChangedFlag to become false should then be that the status has been considered read/accessed by the user. This should be the case when the listener for a Read Communication Status is called, similar to Plain Communication Statuses (see T#6).
In addition, the status should be reset if the user calls read/take on the associated DataReader.
The Subscriber's DATA_ON_READERS status is reset if on_data_on_readers is called (same as for all listeners).
In addition, the Subscriber's DATA_ON_READERS status is reset if the user calls read or take on any of the DataReaders belonging to the Subscriber.
The Subscriber's DATA_ON_READERS status is also reset if the on_data_available callback is called on the DataReaderListener. This is needed so that a call to notify_datareaders will reset the status.
The inverse (i.e., resetting the DATA_AVAILABLE status when the on_data_on_readers callback is called) does not happen.
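The asymmetry in these reset rules can be made concrete with a toy sketch. This is a simplified Python model, not a DDS implementation: class and method names mirror the spec's concepts (`on_data_available`, `on_data_on_readers`, `take`) but are otherwise hypothetical, and only the flag bookkeeping is modeled.

```python
# Toy model (NOT real DDS) of the proposed StatusChangedFlag reset rules:
# - DATA_AVAILABLE resets on on_data_available OR read/take on that reader.
# - DATA_ON_READERS resets on on_data_on_readers, on any contained
#   reader's on_data_available, or on any read/take.
# - The inverse does NOT hold: on_data_on_readers leaves the readers'
#   DATA_AVAILABLE flags untouched.

class Subscriber:
    def __init__(self):
        self.data_on_readers = False

    def on_data_on_readers(self):
        # Resets only the Subscriber-level flag.
        self.data_on_readers = False

class Reader:
    def __init__(self, subscriber):
        self.subscriber = subscriber
        self.data_available = False

    def new_data_arrives(self):
        self.data_available = True
        self.subscriber.data_on_readers = True

    def on_data_available(self):
        # Listener callback: resets both, so notify_datareaders works.
        self.data_available = False
        self.subscriber.data_on_readers = False

    def take(self):
        # read/take and their variants reset both flags as well.
        self.data_available = False
        self.subscriber.data_on_readers = False

sub = Subscriber()
reader = Reader(sub)
reader.new_data_arrives()
sub.on_data_on_readers()
# DATA_ON_READERS is now False, but the reader's DATA_AVAILABLE stays True;
# a subsequent reader.take() would clear DATA_AVAILABLE too.
```

Under these rules the wait-set pattern from Section 2.1.4.4 no longer spins: a read or take on the reader clears both flags regardless of whether every sample was taken.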
Proposed Revised Text:
Section 2.1.2.5.2.11 notify_datareaders
In the first sentence, change
DataReader entities containing samples with SampleState 'NOT_READ' and any ViewState and InstanceState
To
DataReader entities with a DATA_AVAILABLE status that is considered changed.
Section 2.1.4.2.2 Changes in Read Communication Statuses
Change the last sentence of the first paragraph from
The StatusChangedFlag becomes false again when all the samples are removed from the responsibility of the middleware via the take operation on the proper DataReader entities.
To
The DATA_AVAILABLE StatusChangedFlag becomes false again when either the corresponding listener operation (on_data_available ) is called or a read or take operation is called on the associated DataReader.
The DATA_ON_READERS StatusChangedFlag becomes false again when any of the following occurs:
o The corresponding listener operation (on_data_on_readers) is called.
o The on_data_available listener operation is called on any DataReader belonging to the Subscriber.
o The read or take operation is called on any DataReader belonging to the Subscriber.
In Figure 2-16
Introduce two figures: one for the DATA_ON_READERS status and the other for the DATA_AVAILABLE status.
Resolution: see above
Revised Text: Section 2.1.2.5.2.11 notify_datareaders
In the first sentence, change
… DataReader entities containing samples with SampleState 'NOT_READ' and any ViewState and InstanceState
To
… DataReader entities with a DATA_AVAILABLE status that is considered changed.
Section 2.1.4.2.2 Changes in Read Communication Statuses
Replace the text in the section with:
For the read communication status, the StatusChangedFlag flag is initially set to FALSE.
The StatusChangedFlag becomes TRUE when either a data-sample arrives or else the ViewState, SampleState, or InstanceState of any existing sample changes for any reason other than a call to DataReader::read, DataReader::take or their variants. Specifically any of the following events will cause the StatusChangedFlag to become TRUE:
· The arrival of new data.
· A change in the InstanceState of a contained instance. This can be caused by either:
o The arrival of the notification that an instance has been disposed by:
§ the DataWriter that owns it if OWNERSHIP QoS kind=EXCLUSIVE
§ or by any DataWriter if OWNERSHIP QoS kind=SHARED.
o The loss of liveliness of the DataWriter of an instance for which there is no other DataWriter.
o The arrival of the notification that an instance has been unregistered by the only DataWriter that is known to be writing the instance.
Depending on the kind of StatusChangedFlag, the flag transitions to FALSE again as follows:
· The DATA_AVAILABLE StatusChangedFlag becomes FALSE when either the corresponding listener operation (on_data_available) is called or the read or take operation (or their variants) is called on the associated DataReader.
· The DATA_ON_READERS StatusChangedFlag becomes FALSE when any of the following events occurs:
o The corresponding listener operation (on_data_on_readers) is called.
o The on_data_available listener operation is called on any DataReader belonging to the Subscriber.
o The read or take operation (or their variants) is called on any DataReader belonging to the Subscriber.
In Figure 2-16
Introduce two figures: one for the DATA_ON_READERS status and the other for the DATA_AVAILABLE status. The new figure 2-16 is shown below:
Disposition: Resolved
Actions taken:
April 6, 2006: received issue
August 23, 2006: closed issue
Discussion: The actual condition for the StatusChangedFlag to become false should then be that the status has been considered read/accessed by the user. This should be the case when the listener for a Read Communication Status is called, similar to Plain Communication Statuses (see T#6).
In addition, the status should be reset if the user calls read/take on the associated DataReader.
The Subscriber's DATA_ON_READERS status is reset if on_data_on_readers is called (same as for all listeners).
In addition, the Subscriber's DATA_ON_READERS status is reset if the user calls read or take on any of the DataReaders belonging to the Subscriber.
The Subscriber's DATA_ON_READERS status is also reset if the on_data_available callback is called on the DataReaderListener. This is needed so that a call to notify_datareaders will reset the status.
The inverse (i.e., resetting the DATA_AVAILABLE status when the on_data_on_readers callback is called) does not happen.