Issues for Data Distribution Service Finalization Task Force

To comment on any of these issues, send email to data-distribution-ftf@omg.org. (Please include the issue number in the Subject: header, thusly: [Issue ###].) To submit a new issue, send email to issues@omg.org.

List of issues

Issue 6732: Extension_to_the_query_language
Issue 6733: Attributes_on_the_data
Issue 6734: Filtered_out_lifecycle_state Issue
Issue 6737: Detection_of_dynamic_qos_failure Issue
Issue 6741: Additional_qos_THROUGHPUT Issue
Issue 6742: Key_separate_from_data Issue
Issue 6847: [DDS ISSUE# 47] Allow application to not specify a timestamp
Issue 6850: [DDS ISSUE# 50] Multiple observers sharing a datareader
Issue 6851: [DDS ISSUE# 51] Avoid use of dynamic memory for manipulating QoS
Issue 6852: Ref-167 Malloc_required_for_get_default_qos
Issue 6860: Ref-232 Allow_reader_to_access_partition_of_writer
Issue 7065: ref-1053 Missing is_composition

Issue 6732: Extension_to_the_query_language (data-distribution-ftf)

Nature: Enhancement
Severity: Significant
Summary:
Issue# 2035 Extension_to_the_query_language [Boeing SOSCOE]
- SOSCOE has a need to allow function expressions to be added to the query language.
- SOSCOE has created a provider property class that allows applications to have attributes with typed values; an attribute becomes a triplet (name, type, value). Currently the “type” only supports simple types, but the intent is to extend it to well-known structured types.
Proposal [Boeing]
- Extend the query language to allow user-defined function expressions.
- Extend the query language to allow user-defined structured types.
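
For illustration, a possible IDL sketch of how user-defined function expressions might be registered with the middleware is shown below. The QueryFunction and QueryFunctionRegistry names, and the example expression in the comment, are hypothetical and not part of the submitted proposal; the sketch assumes the standard DDS PSM type ReturnCode_t is in scope.

    // Hypothetical sketch only -- not part of the proposal text.
    module DDS {
        // A user-supplied function that could then be referenced from a query or
        // filter expression, e.g. "within_region(x, y, %0)" on a ContentFilteredTopic.
        local interface QueryFunction {
            boolean evaluate(in string arguments);
        };
        // Registry through which an application (or a layer such as SOSCOE) makes
        // its functions visible to the query-language evaluator.
        local interface QueryFunctionRegistry {
            ReturnCode_t register_function(in string name, in QueryFunction func);
        };
    };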

Resolution:
Revised Text:
Actions taken:
December 18, 2003: received issue

Discussion:
The FTF recognizes this would be a useful facility. However, the FTF could not find a suitable resolution within its time-frame and therefore resolved to defer the issue to a future F/RTF or revision.


Issue 6733: Attributes_on_the_data (data-distribution-ftf)

Nature: Enhancement
Severity: Significant
Summary:
Attributes_on_the_data [Boeing SOSCOE]
- The DDS API forces the data to be written to be encapsulated into a single IDL structure. Moreover, IDL-generated structures do not support pointers to other structures.
- These limitations constrain the kinds of things that a layer such as SOSCOE can do. For example, it would be desirable to allow the DDS DataWriter to filter or perform other actions on information that is not “part of the data”. Not “being part of the data” means the attributes:
  • Are propagated along with the message (1-to-1 relationship) to the reader.
  • May be used for filtering, or intercepted by layers above DDS (e.g. SOSCOE).
  • Do not get passed to the read or take calls.
- The problem is that since the DataWriter takes a single “compacted” data-structure, any additional information, whether introduced by the user application or the SOSCOE layer, must somehow be copied into the data and thus forces the introduction of new data-types.
- In other words, it would be desirable for DDS to provide some hook that would allow a user or a layer such as SOSCOE to add additional information to the “user-level” data that is then used by DDS to filter on. The filtering would then occur on the additional information provided by the SOSCOE layer and not by the application that is writing the data. There are three main ways to use this:
  • The additional SOSCOE information would be supplied in conjunction with every write operation, so the filtering is evaluated on every write. This may be too inefficient in some cases, so there should also be a way to turn the filtering off (e.g. by providing an empty set of attributes).
  • The additional SOSCOE information is provided by some means other than a parameter to the write (e.g. a set_attributes(InstanceHandle_t, Attributes) call on the DataWriter) so that the filtering does not examine every data-sample; rather, it is performed on information that changes at a much slower rate. For example, there is the concept of a “geographical region” in which the information lives; the filtering applies to getting information that is within our region of interest, and re-evaluation of the filter only occurs each time the region changes, which is much rarer than the actual data changing.
  • The attributes could be attached to the instance by means of a separate API. These additional attributes would then be passed to the serialization as well as the filter operations. The deserialization would also need to handle them. This approach would meet SOSCOE’s requirements and has the advantage of not forcing filters to be re-evaluated on each write; they would only be evaluated if the attributes change.
Proposal [Boeing SOSCOE]
- Add a set of attributes (name-value pairs) (ref Issue# 2035) that are provided separately from the data and can then be used to do the filtering.
- This allows reusing the same data with different attributes and thus filtering it differently.
- A name-value pair representation would also potentially allow sending a partial list of attributes.
- Filtering can be done on these name-value pairs. This is similar to ContentFilteredTopic, but the filtering is done on the attributes, not the data. Note that the ContentFilteredTopic may not know enough about the data: the data may be marshaled and encrypted, and the brokering of the data may be done by nodes that do not know how to unmarshal/decrypt the data.
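
For illustration, a minimal IDL sketch of the instance-level variant (attributes attached through a separate call rather than passed on every write) is shown below. Only the set_attributes(InstanceHandle_t, Attributes) idea comes from the summary; the Property/PropertySeq types and the interface name are hypothetical, and the sketch assumes the standard DDS PSM types (ReturnCode_t, InstanceHandle_t) are in scope.

    // Hypothetical sketch -- not part of the proposal text.
    module DDS {
        struct Property {
            string name;
            string value;
        };
        typedef sequence<Property> PropertySeq;

        // Possible DataWriter extension: attach attributes to an instance so that
        // content filters are evaluated on the attributes rather than on the data,
        // and only re-evaluated when the attributes change.
        local interface DataWriterAttributesExt {
            ReturnCode_t set_attributes(in InstanceHandle_t handle,
                                        in PropertySeq      attributes);
        };
    };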

Resolution:
Revised Text:
Actions taken:
December 18, 2003: received issue

Discussion:
The FTF recognizes this would be a useful facility. However, the FTF could not find a suitable resolution. It appears that the best solution would be either to use value-types to describe the data, or to extend the IDL language to allow expressing that a structure can reference other structures (not just contain them). Both approaches are beyond the scope of the FTF.


Issue 6734: Filtered_out_lifecycle_state Issue (data-distribution-ftf)

Nature: Enhancement
Severity: Significant
Summary:
Issue# 2050 Filtered_out_lifecycle_state [Boeing SOSCOE]
- Sometimes the receiving application needs to know that data is being filtered out. In some use cases this situation is different from the case where data is not being produced.
- A related question is whether the presence of filters (content-based, time-based) can cause a DataReader to miss a requested deadline. It depends on whether the deadline is interpreted to mean that data is produced at that rate, or that data that passes the filter must be produced at that rate.
- SOSCOE thinks that the filter should not cause a deadline to be missed; rather, the reader should receive explicit notification that the data was filtered out. At least that data is starting to be filtered out, not necessarily each time something is filtered out.
- In any case filters should not cause loss of liveliness.
- A typical use case may be that a display device is showing the tanks in a certain area (the one relevant to that particular display). The display has a filter to indicate that region of interest. When a tank leaves the region of interest, the display wants to know that so it can stop displaying it; however, the display does not want to immediately discard all the information about the tank internally, in case the tank appears again. The display therefore needs to know that the data for that tank instance is being filtered out. This situation is different from missing a deadline, which may indicate that the data is not being generated as intended.
- Note that if a filter is present and a deadline is set, it would be necessary for the implementation to send some information to the reader so that the deadline would not fire. However, bandwidth is very important in some cases, so sending any information when the data is being filtered may be too expensive.
- One option may be to have the application change the deadline to a larger value when it finds out that the data is being filtered. Another possibility is to specify two values of the deadline, one for when the data is not filtered and another for when the data is being filtered (the second one could be specified along with the filter). In this case the information that the middleware implementation sends to avoid missing the deadline could be sent at the lower rate.
- This facility may be hard to implement for all cases. For example, in the case where the filter is applied at the source and the reliability QoS is BEST_EFFORTS, it is not guaranteed that said notification would be received by the reader. If this is indeed the case, then we would not require the filtered-out notification to be guaranteed if the reliability setting is BEST_EFFORTS.
Proposal [Boeing SOSCOE]
- Add a FILTERED_OUT lifecycle state to allow the user to know that data is being filtered out. The usage of this state should be an option for the user.
- If the option is on, the state is set on a sample when the sample was filtered out and the previous sample was not filtered out. Only one notification of filtering is required, not one per sample being filtered.
- Optionally introduce a deadline_while_filtered QoS on the ContentFilteredTopic which would transition the deadline to a larger value when the data is being filtered.
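
For illustration, an IDL sketch of one way the proposal could be realized is shown below: the proposed FILTERED_OUT lifecycle state is mapped onto the existing InstanceStateKind bit-mask, and the optional QoS is expressed as a policy on the ContentFilteredTopic. The constant value and the policy name/shape are hypothetical; the sketch assumes the DDS PSM types (InstanceStateKind, Duration_t) are in scope.

    // Hypothetical sketch -- not part of the proposal text.
    module DDS {
        // Reported to the reader when samples of an instance start being filtered
        // out (one notification, not one per filtered sample).
        const InstanceStateKind FILTERED_OUT_INSTANCE_STATE = 0x0001 << 3;

        // Optional QoS on ContentFilteredTopic: larger deadline that applies while
        // the instance is being filtered.
        struct DeadlineWhileFilteredQosPolicy {
            Duration_t period;
        };
    };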

Resolution:
Revised Text:
Actions taken:
December 18, 2003: received issue

Discussion:
The FTF can see how it may be useful for some applications to know that data is being filtered. However, the FTF is unsure whether such functionality can be implemented without introducing significant overhead. There is concern regarding the additional messages that may be required, as well as the potential coupling with HISTORY and other QoS. Given this, the FTF resolved to defer this issue until the requirements and impact are better understood.


Issue 6737: Detection_of_dynamic_qos_failure Issue (data-distribution-ftf)

Nature: Enhancement
Severity: Significant
Summary:
Issue# 2080 Detection_of_dynamic_qos_failure [Boeing SOSCOE]
- DDS does not provide complete means for a user of the DDS API to detect “dynamic” failure of QoS and other “configuration” changes that may be important. For example:
  • LATENCY_BUDGET (on the receiver side).
  • Messages dropped due to lack of resources (on the receiver side).
  • Messages lost when QoS is RELIABLE KEEP_ALL (related to Issue# 2070).
  • Changes in the ownership of data-instances.
  • Addition of a remote DataReader that matches a local DataWriter; removal of a remote DataReader that matches a local DataWriter.
  • Addition of a remote DataWriter to a local DataReader; removal of a remote DataWriter from a local DataReader.
  • Changes in “liveliness” of a remote DataReader of a local DataWriter. (The symmetric situation, that is, changes in liveliness of a remote DataWriter of a local DataReader, does have a listener in the DataReader.)
Proposal [Boeing SOSCOE]
- Add the missing operations to the proper listeners.
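
For illustration, an IDL sketch of the kind of listener additions being requested is shown below. The status and operation names are hypothetical (the match-related cases were later covered by the resolution of Issue 6730, as noted in the discussion below); the sketch assumes the DDS PSM declarations (DataWriter) are in scope.

    // Hypothetical sketch -- names are illustrative only.
    module DDS {
        struct ReaderMatchStatus {
            long total_count;         // cumulative number of matched remote DataReaders
            long total_count_change;  // change since the listener was last invoked
        };

        local interface DataWriterListenerExt {
            // Called when a remote DataReader matching this DataWriter appears,
            // disappears, or changes liveliness.
            void on_reader_match_changed(in DataWriter        the_writer,
                                         in ReaderMatchStatus status);
        };
    };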

Resolution:
Revised Text:
Actions taken:
December 18, 2003: received issue

Discussion:
Some of the above requirements were addressed in the resolution of Issue 6730, in particular the changes in the associations (“match”) of local DataWriters with remote DataReaders and, similarly, the associations of local DataReaders with remote DataWriters. In addition, a change introduced as part of the resolution of Issue 6736 (moving the on_sample_lost operation from SubscriberListener to DataReaderListener) partially addresses the need to provide more detailed information when samples are lost.
The FTF recognizes that it would be useful to provide more information to the application regarding changes stated in this issue. However, the FTF did not have enough time to assess how offering these additional facilities would impact the implementation and therefore resolved to defer the issue for a future F/RTF or revision.


Issue 6741: Additional_qos_THROUGHPUT Issue (data-distribution-ftf)

Nature: Enhancement
Severity: Minor
Summary:
Issue# 2110 Additional_qos_THROUGHPUT [Boeing SOSCOE]
- The DDS specification has no provision to control the amount of bandwidth that the different entities can consume.
- Ideally the user of the DDS API could indicate bandwidth limits and also reserve bandwidth in a way that could then be mapped by the service into the underlying transport facilities, for the cases where those facilities exist.
- At a minimum the user would like to indicate bandwidth limits in bytes per second. Although low-level, this kind of unit would make more sense than something like messages per second, because each data-type, or maybe even each particular write to an instance, may be of a different size.
- There is also the case where the communication infrastructure needs to communicate to the application how much bandwidth it can expect to have. This can also change dynamically based on current network conditions. The application can then take advantage of this knowledge to configure itself so that only the more important messages are sent.
- All we need is something that can be passed to the API; the middleware does not need to do anything with it.
- It is not clear how this can be implemented or how it interacts with other features, but there is a requirement for a way to specify this QoS. The requirement comes from streaming-type applications, which want to be able to reserve some bandwidth.

Proposal [Boeing SOSCOE]
- No concrete action is proposed at this time. The precise definition is fairly involved. However, there is a general desire to be able to allocate and control bandwidth utilization, so it would be nice if approaches were explored.
Comment [RTI]
- The fundamental problem is how to map this to the DDS model. The DDS specification does not have a model for the Transport, nor does it expose to the user which entities (writers/readers) are associated with each transport. It is in fact possible that a single write to an Entity results in multiple messages, each over a different transport; it is all hidden from the application.
- So the first thing would be to introduce some model of how the entities interact with the transport: where the TransportPlugins are installed (globally, per participant, per Connector), what the transport “resources” are (e.g. in RTI’s TPI the SendResource and ReceiveResource), and how they map to the DDS entities.
- Introducing a QoS that limits the bandwidth used by each DataWriter would be straightforward, and similarly for a QoS that attempts to reserve a certain amount of bandwidth for a particular DataWriter. The DDS implementation, which knows what transports it is associated with, would then map it to the appropriate transport calls. The problem is that it would apply indiscriminately to all transports.
- For the case of EndpointConnectors, if transports were explicitly associated with the connector, then it may also be possible to set this kind of QoS. It would then apply to all the DataReaders and DataWriters in the EndpointConnector.
- Regarding the listeners, presumably the callbacks would refer to the bandwidth changes on each transport resource. So for the user of the DDS API to make sense of this they would need that model/map to DDS entities.
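
For illustration, a minimal IDL sketch of what a per-DataWriter bandwidth QoS could look like is shown below. The policy name, fields and units are hypothetical; only the bytes-per-second idea comes from the summary.

    // Hypothetical sketch -- not a proposed specification change.
    module DDS {
        struct ThroughputQosPolicy {
            long max_bytes_per_second;       // upper bound the DataWriter may consume
            long reserved_bytes_per_second;  // bandwidth the DataWriter asks the
                                             // underlying transport(s) to reserve
        };
    };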

Resolution:
Revised Text:
Actions taken:
December 18, 2003: received issue

Discussion:
The FTF agrees this is a limitation of the specification. However, the concrete meaning of this QoS and how it impacts the other QoS is not well understood and therefore the FTF resolved to defer this issue until more application use-cases are available.


Issue 6742: Key_separate_from_data Issue (data-distribution-ftf)

Nature: Enhancement
Severity: Minor
Summary:
Issue# 2120 Key_separate_from_data [Boeing SOSCOE]
- Keyed data is important, but having the key be part of the data leads to duplication or copying into other types of structures.
- Note that this issue is not too critical. SOSCOE has worked around it by creating a container type that copies the data inside.
Proposal [Boeing SOSCOE]
- Split the keys out from the data type. The idea would be to have the write operations take two parameters, one for the data and the other for the key. The same would apply to the reader side.
Comment [RTI]
- Maybe this can already be accommodated with a small extension of the DDS API. If we had a DataReader::register_type that took only the key, then we could say that, provided the InstanceHandle_t is passed to the write() operation, the data is not required to contain the key.
- This issue is exacerbated by the fact that IDL does not allow structures to contain pointers to other structures. If this limitation were not present, it would be reasonable for the user of the DDS API to define a wrapper data-type that would just contain pointers to the key and to the data blob. Note that there are other languages, such as ITU’s ODL (object description language), that are extensions to OMG’s IDL and do allow this pointer syntax. However, for now we would have to rely on the individual vendors to implement this feature, which would be technically quite simple.
- Separating the key from the data would require that the definition of a Topic involve not just the specification of the data-type but also the key-type. Also, the implied IDL that represents the type-specific data-writers and data-readers would need to be generated for each combination of data-type and key-type. There is no standard way to indicate in the IDL file what those combinations are, so it would not be so simple for the code generator to determine this. These problems do not arise if we followed the first approach of allowing pointers within the structures.
Comment [Boeing SOSCOE]
- A register_instance_by_key operation would also help this particular scenario.
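
For illustration, an IDL sketch of the register-by-key idea on the implied type-specific writer is shown below. Foo, FooKey and the register_instance_by_key/write signatures are hypothetical placeholders, and the sketch assumes the DDS PSM types (InstanceHandle_t, ReturnCode_t) are in scope.

    // Hypothetical sketch -- Foo/FooKey stand for a user data-type and its key-type.
    module Example {
        struct FooKey {
            long id;
        };
        struct Foo {
            long value;   // note: the key fields are no longer duplicated here
        };

        local interface FooDataWriterExt {
            // Register an instance giving only the key; the returned handle is then
            // passed to write(), so the data itself need not contain the key.
            DDS::InstanceHandle_t register_instance_by_key(in FooKey key);
            DDS::ReturnCode_t     write(in Foo data, in DDS::InstanceHandle_t handle);
        };
    };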

Resolution:
Revised Text:
Actions taken:
December 18, 2003: received issue

Discussion:
The work-around of writing a container type that SOSCOE has implemented is adequate, except that it forces an additional copy of the data and key.
This is therefore part of a more general issue of how to avoid additional copies when aggregating types into a container type for the purpose of sending them using DDS. The FTF agrees that this is a significant limitation, but the resolution would require either the use of value-types or extensions to IDL. These changes are beyond the scope of the FTF.


Issue 6847: [DDS ISSUE# 47] Allow application to not specify a timestamp (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-157 Ability_of_the_application_to_not_specify_a_timestamp


Getting a timestamp can be an expensive operation. It is desirable that the application can configure a DataWriter such that it does not get/send the timestamp.

The option of letting the application not set a timestamp by means of calling write_w_timestamp() and passing TIME_INVALID is not good, because certain QoS such as DESTINATION_ORDER BY_SOURCE_TIMESTAMP require a timestamp. Also, it would be hard to manage if the application could sometimes specify a timestamp and sometimes not for the same DataWriter/instance.

***PROPOSAL***

Add a "receptionTimestamp" field to the SampleInfo.

Make the DESTINATION_ORDER also an offered QoS on the DataWriter. For compatibility BY_SOURCE_TIMESTAMP > BY_DESTINATION_TIMESTAMP, that is, if you offer SOURCE_TIMESTAMP you can accommodate both kinds of readers.

Add the QoS WRITER_TIMESTAMP and READER_TIMESTAMP.

The SOURCE_TIMESTAMP has a kind that can be NOT_PROVIDED or PROVIDED. It is set on the DataWriter, the DataReader and also on the Topic. It has request/offered semantics where PROVIDED > NOT_PROVIDED.

The RECEPTION_TIMESTAMP is only set on the DataReader or Topic and has a kind that can be NOT_PROVIDED or PROVIDED.

The SOURCE_TIMESTAMP indicates that data must be timestamped when sent.

The RECEPTION_TIMESTAMP indicates that data must be timestamped when received.

DESTINATION_ORDER BY_SOURCE_TIMESTAMP requires that the SOURCE_TIMESTAMP is set to PROVIDED; otherwise an INCOMPATIBLE_QOS will be flagged.

If SOURCE_TIMESTAMP.kind == NOT_PROVIDED, then the DataWriter::write operation does not put any timestamp, and the xxx_w_timestamp operations silently ignore the timestamp and behave normally. Upon reception, the sourceTimestamp field in the SampleInfo will be TIME_INVALID.

If SOURCE_TIMESTAMP.kind == PROVIDED, then the write operation will automatically get the timestamp by some means (i.e. the middleware will do it), and the xxx_w_timestamp operations will allow the application to provide the timestamp. In either case the data will be sent with a timestamp and the SampleInfo.sourceTimestamp field will never be TIME_INVALID.

It is allowed for RECEPTION_TIMESTAMP to be NOT_PROVIDED and the DESTINATION_ORDER to be BY_RECEPTION_TIMESTAMP, because what matters is the relative order, and that does not require obtaining a true timestamp. If this is too confusing we could rename BY_RECEPTION_TIMESTAMP to BY_RECEPTION_ORDER.

If RECEPTION_TIMESTAMP is NOT_PROVIDED then the SampleInfo.receptionTimestamp will always be TIME_INVALID; otherwise it will never be TIME_INVALID. By default the source timestamp is provided.
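
For illustration, a condensed IDL sketch of the proposed timestamp policies is shown below. The kind values, policy names and the receptionTimestamp field follow the proposal text above, but the concrete shapes are hypothetical; the sketch assumes the DDS PSM type Time_t is in scope.

    // Hypothetical sketch -- not adopted specification text.
    module DDS {
        enum SourceTimestampQosPolicyKind {
            NOT_PROVIDED_SOURCE_TIMESTAMP_QOS,
            PROVIDED_SOURCE_TIMESTAMP_QOS       // default; RxO: PROVIDED > NOT_PROVIDED
        };
        struct SourceTimestampQosPolicy {
            SourceTimestampQosPolicyKind kind;  // settable on DataWriter, DataReader, Topic
        };

        enum ReceptionTimestampQosPolicyKind {
            NOT_PROVIDED_RECEPTION_TIMESTAMP_QOS,
            PROVIDED_RECEPTION_TIMESTAMP_QOS
        };
        struct ReceptionTimestampQosPolicy {
            ReceptionTimestampQosPolicyKind kind;  // settable on DataReader and Topic only
        };

        // SampleInfo would gain a reception_timestamp field (Time_t) next to
        // source_timestamp; it is TIME_INVALID whenever the corresponding policy
        // kind is NOT_PROVIDED.
    };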


Resolution:
Revised Text:
Actions taken:
December 23, 2003: received issue

Discussion:
Resolution: 
The FTF recognizes that this would be a useful facility. However, the FTF has resolved not to modify the specification. Offering this facility would require many changes to the specification, and it is not clear how important it would be to users of the specification. This feature could be included in a future revision of the specification once the need and requirements are better defined.


Issue 6850: [DDS ISSUE# 50] Multiple observers sharing a datareader (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-166 Allow_multiple_observers_on_a_datareader


Currently there can only be one "observer" on each DataReader. In other words, it is not possible to have some independent application observe the data that a DataReader gets without at the same time affecting the sample-state and thus the behavior of other data-readers.

It is often the case that a debugging tool, an interceptor or some other utility would like to access the data available on the DataReader without making its presence noted and thus changing the behavior of other readers. This is particularly relevant for the built-in topics.

***PROPOSAL***

Add an operation DataReader::create_view that returns a DataReader. This affects 2.2.2.4.1 and the IDL in 2.2.3.

This DataReader is a view on the same DataReader, so it has the same QoS and listeners.

The application can use the original DataReader or a view to perform any operations allowed on a DataReader.

A change of QoS or listener in one view or on the main object affects the main object and all views.

Read and take operations act independently on each view. The application must take the data from all views before it can be removed from the infrastructure and the resources reclaimed.
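
For illustration, the proposed addition expressed in IDL is shown below. The operation name comes from the proposal; its placement in a separate extension interface is a sketch, and the DataReader type from the DDS PSM is assumed to be in scope.

    // Sketch of the proposed addition to the DataReader interface (IDL in 2.2.3).
    module DDS {
        local interface DataReaderViewExt {
            // Returns a view sharing this DataReader's QoS and listeners; read/take
            // sample-state is tracked independently for each view.
            DataReader create_view();
        };
    };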

Resolution:
Revised Text:
Actions taken:
December 23, 2003: received issue

Discussion:
Resolution: 
The FTF recognizes that this would be a useful facility. However, the FTF has resolved not to modify the specification. This concept significantly complicates the model, and its value remains unclear. This feature could be included in a future revision of the specification once the need and requirements are better defined.


Issue 6851: [DDS ISSUE# 51] Avoid use of dynamic memory for manipulating QoS (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Ref-122 Make_incompatible_qos_fixed_size


The structures OfferedIncompatibleQosStatus and RequestedIncompatibleQosStatus each contain a sequence listing each QoS and the corresponding count.

However, given that we have explicitly enumerated all the QoS policies, it would be far simpler to replace this hard-to-use sequence with the actual counts for each QoS.

One possibility would be to replace this "policies" sequence with an explicit count for each QoS. Another possibility would be to use the mask to state which QoS are incompatible, but lose the count.

***PROPOSAL***

No concrete proposal yet.
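
Although no concrete proposal was made, a hypothetical IDL sketch of the first alternative (a fixed count per policy instead of the "policies" sequence) is shown below; the struct name and the abridged field list are illustrative only, and QosPolicyId_t from the DDS PSM is assumed to be in scope.

    // Hypothetical sketch -- fixed-size status, no sequence and hence no malloc on copy.
    module DDS {
        struct RequestedIncompatibleQosStatusFixed {
            long          total_count;
            long          total_count_change;
            QosPolicyId_t last_policy_id;
            long          deadline_count;      // incompatibilities due to DEADLINE
            long          reliability_count;   // incompatibilities due to RELIABILITY
            long          ownership_count;     // incompatibilities due to OWNERSHIP
            // ... one fixed count per remaining enumerated policy
        };
    };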

Resolution:
Revised Text:
Actions taken:
December 23, 2003: received issue

Discussion:
Resolution: 
The FTF recognizes it would be desirable to find a way to not force the use of dynamic memory. However, the FTF could not agree on a resolution to this issue and agreed to defer the issue for a future revision.


Issue 6852: Ref-167 Malloc_required_for_get_default_qos (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
The problem is with the QoS that have dynamically-sized things inside, namely PARTITION (string) and USER_DATA (sequence). These fields force a "malloc" each time the structure is copied out (even if a pre-allocated structure is passed in).

Moreover, copies may also result in memory allocations and frees. Can't this be avoided?

In the C mapping, copies would then require the use of an explicit function rather than direct assignment.

One possibility would be to refactor the USER_DATA and PARTITION into a more generic name-value pair infrastructure and allow these to be set with a different API (i.e. not by means of QoS, but as direct operations on the Entity). This avoids all the above problems.

***PROPOSAL***

No concrete proposal yet.
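
Although no concrete proposal was made, a hypothetical IDL sketch of the name-value refactoring mentioned above is shown below; the Property shape and the Entity-level operations are illustrative only (compare the similar sketch under Issue 6733), and ReturnCode_t from the DDS PSM is assumed to be in scope.

    // Hypothetical sketch -- properties set directly on the Entity rather than carried
    // inside the QoS structures, so copying a QoS no longer requires dynamic memory.
    module DDS {
        struct Property {
            string name;
            string value;
        };
        typedef sequence<Property> PropertySeq;

        local interface EntityPropertyExt {
            ReturnCode_t set_property(in Property prop);
            ReturnCode_t get_properties(inout PropertySeq props);  // caller-provided buffer
        };
    };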

Resolution:
Revised Text:
Actions taken:
December 23, 2003: received issue

Discussion:
Resolution: 
The FTF recognizes it would be desirable to find a way to not force the use of dynamic memory. However, the FTF could not agree on a resolution to this issue and agreed to defer the issue for a future revision.
Disposition:	Deferred


Issue 6860: Ref-232 Allow_reader_to_access_partition_of_writer (data-distribution-ftf)

Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D., gerardo(at)rti.com)
Nature: Uncategorized Issue
Severity:
Summary:
Some applications would like to see the originating partition as part of the sample info. This is especially important if the application subscribes with wildcards to any partition.

There is a concern, however, with putting the information in the SampleInfo, as it is a cost that has to be paid every time a sample is returned.

Perhaps the SampleInfo could instead be an interface, so that there are operations to access the information and the cost is only incurred when needed.

***PROPOSAL***

No concrete proposal, as it would be hard to represent in IDL, but it would be nice if such an API were offered.
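
For illustration, a hypothetical IDL sketch of the "SampleInfo as an interface" idea, where the partition is only retrieved on demand, is shown below; the interface and operation names are illustrative, and StringSeq from the DDS PSM is assumed to be in scope.

    // Hypothetical sketch -- accessor-style SampleInfo so the partition cost is only
    // paid when the application actually asks for it.
    module DDS {
        local interface SampleInfoAccessor {
            // Partition name(s) of the DataWriter that published the sample,
            // evaluated lazily.
            StringSeq get_publication_partition();
        };
    };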

Resolution:
Revised Text:
Actions taken:
December 23, 2003: received issue

Discussion:
Resolution: 
The FTF recognizes it would be desirable to give the application access to the originating partition. However, the FTF could not agree on a resolution to this issue and agreed to defer the issue for a future revision.


Issue 7065: ref-1053 Missing is_composition (data-distribution-ftf)

Source: THALES (Ms. Virginie Watine, virginie.watine(at)thalesgroup.com)
Nature: Uncategorized Issue
Severity:
Summary:
The is_composition operation is described in the PIM, but is not in the IDL.
It concerns the valuetypes RefRelation, ListRelation, IntMapRelation, and
StrMapRelation.


***PROPOSAL***
Add the following operation on those valuetypes:
        boolean is_composition();
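
For illustration, the addition as it would appear on one of the affected valuetypes; only the is_composition() operation comes from the proposal, and the rest of the RefRelation body is elided.

    // Sketch: RefRelation with the operation described in the PIM added.
    valuetype RefRelation {
        // ... existing RefRelation state and operations elided ...
        boolean is_composition();
    };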

Resolution:
Revised Text:
Actions taken:
March 4, 2004: received issue