
Data Distribution Service — Closed Issues

  • Acronym: DDS
  • Issues Count: 72
  • Description: Issues resolved by a task force and approved by Board

Issues Summary

Key Issue Reported Fixed Disposition Status
DDS12-72 String sequence should be a parameter and not return value DDS 1.1 DDS 1.2 Resolved closed
DDS12-5 Improper prototype for get_XXX_status() DDS 1.1 DDS 1.2 Resolved closed
DDS12-4 Mention of get_instance() operation on DomainParticipantFactory being static DDS 1.1 DDS 1.2 Resolved closed
DDS12-3 String sequence should be a parameter and not return value DDS 1.1 DDS 1.2 Resolved closed
DDS12-2 Inconsistent prototype for Publisher's get_default_datawriter_qos() method DDS 1.1 DDS 1.2 Resolved closed
DDS12-1 Inconsistencies between PIM and PSM in the prototype of get_qos() methods DDS 1.1 DDS 1.2 Resolved closed
DDS12-70 read/take_next_instance() DDS 1.1 DDS 1.2 Resolved closed
DDS12-69 Clarify notification of ownership change DDS 1.1 DDS 1.2 Resolved closed
DDS12-59 Unlimited setting for Resource limits not clearly explained DDS 1.1 DDS 1.2 Resolved closed
DDS12-58 Small naming inconsistencies between PIM and PSM DDS 1.1 DDS 1.2 Resolved closed
DDS12-68 Extended visibility of instance state changes DDS 1.1 DDS 1.2 Resolved closed
DDS12-67 Typo in section 2.1.2.5.1 DDS 1.1 DDS 1.2 Resolved closed
DDS12-71 instance resource can be reclaimed in READER_DATA_LIFECYCLE QoS section DDS 1.1 DDS 1.2 Resolved closed
DDS12-62 Incorrect description of enable precondition DDS 1.1 DDS 1.2 Resolved closed
DDS12-61 Resetting of the statusflag during a listener callback DDS 1.1 DDS 1.2 Resolved closed
DDS12-64 Clarify the meaning of locally DDS 1.1 DDS 1.2 Resolved closed
DDS12-63 invalid reference to delete_datareader DDS 1.1 DDS 1.2 Resolved closed
DDS12-57 PIM and PSM contradicting wrt "get_sample_lost_status" operation DDS 1.1 DDS 1.2 Resolved closed
DDS12-56 PIM description of "get_domain_id" method is missing DDS 1.1 DDS 1.2 Resolved closed
DDS12-66 Illegal return value register_instance DDS 1.1 DDS 1.2 Resolved closed
DDS12-65 Missing autopurge_disposed_sample_delay DDS 1.1 DDS 1.2 Resolved closed
DDS12-60 Inconsistent PIM/PSM for RETCODE_ILLEGAL_OPERATION DDS 1.1 DDS 1.2 Resolved closed
DDS12-21 Naming consistencies in match statuses DDS 1.1 DDS 1.2 Resolved closed
DDS12-20 Description of set_default_XXX_qos() DDS 1.1 DDS 1.2 Resolved closed
DDS12-19 Should write() block when out of instance resources? DDS 1.1 DDS 1.2 Resolved closed
DDS12-18 Clarify ownership with same-strength writers DDS 1.1 DDS 1.2 Resolved closed
DDS12-12 Naming of filter_parameters concerning ContentFilteredTopic DDS 1.1 DDS 1.2 Resolved closed
DDS12-11 Typos in built-in topic table DDS 1.1 DDS 1.2 Resolved closed
DDS12-10 Clarify PARTITION QoS and its default value DDS 1.1 DDS 1.2 Resolved closed
DDS12-9 Blocking of write() call DDS 1.1 DDS 1.2 Resolved closed
DDS12-17 Typos in PIM sections DDS 1.1 DDS 1.2 Resolved closed
DDS12-16 Typos in QoS sections DDS 1.1 DDS 1.2 Resolved closed
DDS12-8 Consistency between RESOURCE_LIMITS QoS policies DDS 1.1 DDS 1.2 Resolved closed
DDS12-15 Incorrect mention of INCONSISTENT_POLICY status DDS 1.1 DDS 1.2 Resolved closed
DDS12-14 Compatible versus consistency when talking about QosPolicy DDS 1.1 DDS 1.2 Resolved closed
DDS12-7 OWNERSHIP_STRENGTH QoS is not a QoS on built-in Subscriber of DataReaders DDS 1.1 DDS 1.2 Resolved closed
DDS12-6 Inconsistent naming in SampleRejectedStatusKind DDS 1.1 DDS 1.2 Resolved closed
DDS12-13 Incorrect prototype for FooDataWriter method register_instance_w_timestamp() DDS 1.1 DDS 1.2 Resolved closed
DDS12-35 Cache and CacheAccess should have a common parent DDS 1.1 DDS 1.2 Resolved closed
DDS12-34 Simplify Relation Management DDS 1.1 DDS 1.2 Resolved closed
DDS12-39 Object State Transitions of Figure 3-5 and 3-6 should be corrected DDS 1.1 DDS 1.2 Resolved closed
DDS12-38 Introduce the concept of cloning contracts consistently in specification DDS 1.1 DDS 1.2 Resolved closed
DDS12-37 ObjectExtent and ObjectModifier can be removed DDS 1.1 DDS 1.2 Resolved closed
DDS12-36 Object notification in manual update mode required DDS 1.1 DDS 1.2 Resolved closed
DDS12-26 Operation dispose_w_timestamp() should be callable on unregistered instance DDS 1.1 DDS 1.2 Resolved closed
DDS12-25 Clarify valid handle when calling write() DDS 1.1 DDS 1.2 Resolved closed
DDS12-33 Corrections to Figure 2-19 DDS 1.1 DDS 1.2 Resolved closed
DDS12-32 Non intuitive constant names DDS 1.1 DDS 1.2 Resolved closed
DDS12-31 Example in 2.1.4.4.2 not quite correct DDS 1.1 DDS 1.2 Resolved closed
DDS12-28 Typo in copy_from_topic_qos DDS 1.1 DDS 1.2 Resolved closed
DDS12-27 Behavior of dispose with regards to DURABILITY QoS DDS 1.1 DDS 1.2 Resolved closed
DDS12-22 delete_contained_entities() on the Subscriber DDS 1.1 DDS 1.2 Resolved closed
DDS12-24 Need INVALID_QOS_POLICY_ID DDS 1.1 DDS 1.2 Resolved closed
DDS12-23 Return of get_matched_XXX_data() DDS 1.1 DDS 1.2 Resolved closed
DDS12-30 Operation wait() on a WaitSet should return TIMEOUT DDS 1.1 DDS 1.2 Resolved closed
DDS12-29 Typo in get_discovered_participant_data DDS 1.1 DDS 1.2 Resolved closed
DDS12-52 Support sequences of primitive types in DLRL Objects DDS 1.1 DDS 1.2 Resolved closed
DDS12-51 Clarify which Exceptions exist in DLRL and when to throw them DDS 1.1 DDS 1.2 Resolved closed
DDS12-43 Make the ObjectFilter and the ObjectQuery separate Selection Criterions DDS 1.1 DDS 1.2 Resolved closed
DDS12-42 Add the Set as a supported Collection type DDS 1.1 DDS 1.2 Resolved closed
DDS12-41 Harmonize Collection definitions in PIM and PSM DDS 1.1 DDS 1.2 Resolved closed
DDS12-40 Add Iterators to Collection types DDS 1.1 DDS 1.2 Resolved closed
DDS12-48 Representation of OID should be vendor specific DDS 1.1 DDS 1.2 Resolved closed
DDS12-47 Add Listener callbacks for changes in the update mode DDS 1.1 DDS 1.2 Resolved closed
DDS12-50 Merge find_object with find_object_in_access DDS 1.1 DDS 1.2 Resolved closed
DDS12-49 define both the Topic name and the Topic type_name separately DDS 1.1 DDS 1.2 Resolved closed
DDS12-54 Specification does not state how to instantiate an ObjectHome DDS 1.1 DDS 1.2 Resolved closed
DDS12-53 manual mapping key-fields of registered objects may not be changed DDS 1.1 DDS 1.2 Resolved closed
DDS12-46 Remove lock/unlock due to overlap with updates_enabled DDS 1.1 DDS 1.2 Resolved closed
DDS12-45 Make update rounds uninterruptable DDS 1.1 DDS 1.2 Resolved closed
DDS12-55 Raise PreconditionNotMet when changing filter expression on registered Obje DDS 1.1 DDS 1.2 Resolved closed
DDS12-44 Add a static initializer operation to the CacheFactory DDS 1.1 DDS 1.2 Resolved closed

Issues Descriptions

String sequence should be a parameter and not return value

  • Key: DDS12-72
  • Legacy Issue Number: 9555
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    Summary:

    In Section 2.1.2.5.2.11 (notify_datareaders) the first sentence states
    This operation invokes the operation on_data_available on the DataReaderListener objects attached to contained DataReader entities containing samples with SampleState 'NOT_READ' and any ViewState and InstanceState.
    In Section 2.1.4.2.2 (Changes in Read Communication Statuses) it states in the first paragraph that the "StatusChangedFlag becomes false again when all samples are removed from the responsibility of the middleware via the take operation on the proper DataReader entities".
    In Figure 2-16 in the same section, the transition from the TRUE state to FALSE is accompanied by the condition "DataReader:take[all data taken by application]".

    However, in Section 2.1.4.4 (Condition and Wait-sets) the last step of the general use pattern deals with using the result of the wait operation, and the third sub-bullet states that if the wait unblocked due to a StatusCondition and the status change is DATA_AVAILABLE, the appropriate action is to call read/take on the relevant DataReader.
    If only a take of all samples will reset the status, then simply calling read in this use pattern will not reset the status and the given general use pattern will actually spin. (A sketch of a non-spinning wait/take loop follows this entry.)

    Proposed Resolution:

    The actual condition for the StatusChangedFlag to become false should then be that the status has been considered read/accessed by the user. This should be considered as such when the listener for a Read Communication Status is called similar to Plain Communication Statuses (see T#6).
    In addition, it should be such if the user calls read/take on the associated DataReader.

    Subscriber's DATA_ON_READERS status is reset if the on_data_on_readers is called (same as for all listeners).

    In addition Subscriber's DATA_ON_READERS status is reset if the user calls read or take on any of the DataReaders belonging to the Subscriber.
    In addition, the Subscriber's DATA_ON_READERS status is also reset if the on_data_available callback is called on the DataReaderListener. This is needed such that if the application calls notify_datareaders it will reset the status.
    The inverse (i.e., resetting the DATA_AVAILABLE status when the on_data_on_readers callback is called) does not happen.

    Proposed Revised Text:

    Section 2.1.2.5.2.11 notify_datareaders

    In the first sentence, change

    DataReader entities containing samples with SampleState 'NOT_READ' and any ViewState and InstanceState

    To

    DataReader entities with a DATA_AVAILABLE status that is considered changed.

    Section 2.1.4.2.2 Changes in Read Communication Statuses

    Change the last sentence of the first paragraph from
    The StatusChangedFlag becomes false again when all the samples are removed from the responsibility of the middleware via the take operation on the proper DataReader entitites.

    To

    The DATA_AVAILABLE StatusChangedFlag becomes false again when either the corresponding listener operation (on_data_available ) is called or a read or take operation is called on the associated DataReader.

    The DATA_ON_READERS StatusChangedFlag becomes false again when any of the following occurs:

    o The corresponding listener operation (on_data_on_readers) is called.
    o The on_data_available listener operation is called on any DataReader belonging to the Subscriber.
    o A read or take operation is called on any DataReader belonging to the Subscriber.

    In Figure 2-16

    Introduce two figures: one for the DATA_ON_READERS status and the other for the DATA_AVAILABLE status.

  • Reported: DDS 1.1 — Thu, 6 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT
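
To make the revised reset semantics concrete, the following is a minimal, hedged C++-style sketch of the WaitSet use pattern discussed above, written against the classic IDL-to-C++ mapping of the PSM in Section 2.2.3 and the specification's placeholder 'Foo' type (FooDataReader, FooSeq). Vendor headers and exact type spellings vary; this is an illustration of the pattern, not a normative example.

    // Hedged sketch: a WaitSet loop that does not spin under the resolution above.
    // Assumes the classic IDL-to-C++ mapping and the spec's placeholder 'Foo' type.
    void wait_and_take(DDS::WaitSet* waitset, FooDataReader_ptr reader)
    {
        DDS::ConditionSeq active_conditions;
        DDS::Duration_t timeout = {10, 0};                 // 10 seconds

        while (waitset->wait(active_conditions, timeout) == DDS::RETCODE_OK) {
            FooSeq data;
            DDS::SampleInfoSeq infos;
            // Under the revised wording, a read or take on the DataReader resets
            // the DATA_AVAILABLE StatusChangedFlag, so the next wait() blocks
            // again instead of returning immediately.
            if (reader->take(data, infos, DDS::LENGTH_UNLIMITED,
                             DDS::ANY_SAMPLE_STATE, DDS::ANY_VIEW_STATE,
                             DDS::ANY_INSTANCE_STATE) == DDS::RETCODE_OK) {
                // ... process data[i] guarded by infos[i] ...
                reader->return_loan(data, infos);
            }
        }
        // wait() returned TIMEOUT or an error; leave the loop.
    }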

Improper prototype for get_XXX_status()

  • Key: DDS12-5
  • Legacy Issue Number: 9482
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    Summary:
    In the PIM, all get_XXX_status() methods return the relevant status by value. This does not allow for an error return and is inconsistent with other operations that accept a parameter.
    The same is true for the PSM except for get_inconsistent_topic_status() on the Topic which returns ReturnCode_t and the status is a parameter.

    Proposed Resolution:
    In the PIM and the PSM, the operations should return ReturnCode_t with the status as a parameter. (A usage sketch follows this entry.)

    Proposed Revised Text:

    Section 2.1.2.3.2 Topic Class; Replace
    get_inconsistent_topic_status InconsistentTopicStatus
    With
    get_inconsistent_topic_status ReturnCode_t
    inout: status InconsistentTopicStatus
    Section 2.1.2.4.2 DataWriter Class;
    Replace
    get_liveliness_lost_status LivelinessLostStatus
    get_offered_deadline_missed_status OfferedDeadlineMissedStatus
    get_offered_incompatible_qos_status OfferedIncompatibleQosStatus
    get_publication_match_status PublicationMatchedStatus
    With
    get_liveliness_lost_status ReturnCode_t
    inout: status LivelinessLostStatus
    get_offered_deadline_missed_status ReturnCode_t
    inout: status OfferedDeadlineMissedStatus
    get_offered_incompatible_qos_status ReturnCode_t
    inout: status OfferedIncompatibleQosStatus
    get_publication_match_status ReturnCode_t
    inout: status PublicationMatchedStatus
    Section 2.1.2.5.2 Subscriber Class;
    Replace
    get_sample_lost_status SampleLostStatus
    With
    get_sample_lost_status ReturnCode_t
    inout: status SampleLostStatus
    Section 2.1.2.5.3 DataReader Class;
    Replace
    get_liveliness_changed_status LivelinessChangedStatus
    get_requested_deadline_missed_status RequestedDeadlineMissedStatus
    get_requested_incompatible_qos_status RequestedIncompatibleQosStatus
    get_sample_rejected_status SampleRejectedStatus
    get_subscription_match_status SubscriptionMatchedStatus
    With
    get_liveliness_changed_status ReturnCode_t
    inout: status LivelinessChangedStatus
    get_requested_deadline_missed_status ReturnCode_t
    inout: status RequestedDeadlineMissedStatus
    get_requested_incompatible_qos_status ReturnCode_t
    inout: status RequestedIncompatibleQosStatus
    get_sample_rejected_status ReturnCode_t
    inout: status SampleRejectedStatus
    get_subscription_match_status ReturnCode_t
    inout: status SubscriptionMatchedStatus
    Section 2.2.3 DCPS PSM : IDL

    interface DataWriter; Replace:
    LivelinessLostStatus get_liveliness_lost_status();
    OfferedDeadlineMissedStatus get_offered_deadline_missed_status();
    OfferedIncompatibleQosStatus get_offered_incompatible_qos_status();
    PublicationMatchedStatus get_publication_match_status();
    With
    ReturnCode_t get_liveliness_lost_status(inout LivelinessLostStatus status);
    ReturnCode_t get_offered_deadline_missed_status(inout OfferedDeadlineMissedStatus status);
    ReturnCode_t get_offered_incompatible_qos_status(inout OfferedIncompatibleQosStatus status);
    ReturnCode_t get_publication_match_status(inout PublicationMatchedStatus status);

    interface DataReader; Replace:
    SampleRejectedStatus get_sample_rejected_status();
    LivelinessChangedStatus get_liveliness_changed_status();
    RequestedDeadlineMissedStatus get_requested_deadline_missed_status();
    RequestedIncompatibleQosStatus get_requested_incompatible_qos_status();
    SubscriptionMatchedStatus get_subscription_match_status();
    SampleLostStatus get_sample_lost_status();
    With:
    ReturnCode_t get_sample_rejected_status( inout SampleRejectedStatus status );
    ReturnCode_t get_liveliness_changed_status(inout LivelinessChangedStatus status);
    ReturnCode_t get_requested_deadline_missed_status(inout RequestedDeadlineMissedStatus status);
    ReturnCode_t get_requested_incompatible_qos_status(inout RequestedIncompatibleQosStatus status);
    ReturnCode_t get_subscription_match_status(inout SubscriptionMatchedStatus status);
    ReturnCode_t get_sample_lost_status(inout SampleLostStatus status);

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT
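
As an illustration of why the revised prototype matters, here is a hedged sketch (classic IDL-to-C++ mapping assumed; exact names vary by implementation) of error-checked status access; with the original by-value prototype the middleware had no way to report a failure.

    // Hedged sketch: error-checked status access with the revised prototype.
    void check_liveliness(DDS::DataReader_ptr reader)
    {
        DDS::LivelinessChangedStatus status;
        // Revised prototype: ReturnCode_t get_liveliness_changed_status(inout status)
        DDS::ReturnCode_t rc = reader->get_liveliness_changed_status(status);
        if (rc != DDS::RETCODE_OK) {
            // With the old prototype (status returned by value) this error
            // could not have been reported.
            return;
        }
        if (status.alive_count_change < 0) {
            // ... a matched writer is no longer alive ...
        }
    }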

Mention of get_instance() operation on DomainParticipantFactory being static

  • Key: DDS12-4
  • Legacy Issue Number: 9481
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    Title: R#4 Mention of get_instance() operation on the DomainParticipantFactory being static in the wrong section

    Summary:
    The last paragraph of section 2.1.2.2.2.4 (lookup_participant) mentioning that get_instance() is a static operation probably belongs in the preceding section 2.1.2.2.2.3 (get_instance).

    Proposed Resolution:
    Move the paragraph to the correct section

    Proposed Revised Text:

    Section 2.1.2.2.2.4 lookup_participant
    Remove the last paragraph:
    The get_instance operation is a static operation implemented using the syntax of the native language and can therefore not be expressed in the IDL PSM.

    Section 2.1.2.2.2.3 get_instance
    Add the paragraph removed from above:
    The get_instance operation is a static operation implemented using the syntax of the native language and can therefore not be expressed in the IDL PSM.

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Move the paragraph to the correct section

  • Updated: Fri, 6 Mar 2015 20:58 GMT

String sequence should be a parameter and not return value

  • Key: DDS12-3
  • Legacy Issue Number: 9480
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    Summary:
    The string sequence parameters in the get_expression_parameters() method of the ContentFilteredTopic and MultiTopic, and in the get_query_parameters() method of the QueryCondition, are all listed as the return value in both the PIM and PSM.
    It is desirable for the string sequence to be used as a parameter for consistency and to allow for an error return.

    Proposed Resolution:
    The PIM and the PSM should have the string sequence as a parameter and the methods should return ReturnCode_t.

    Proposed Revised Text:

    Section 2.1.2.3.3 ContentFilteredTopic class; ContentFilteredTopic class table
    Change row from:
    get_expression_parameters string[]
    To
    get_expression_parameters ReturnCode_t
    inout: expression_parameters string[]

    Section 2.1.2.3.4 MultiTopic Class [optional]
    Change row from:
    get_expression_parameters string[]
    To
    get_expression_parameters ReturnCode_t
    inout: expression_parameters string[]

    Section 2.2.3 DCPS PSM : IDL

    interface ContentFilteredTopic
    Replace:
    StringSeq get_expression_parameters();
    With:
    ReturnCode_t get_expression_parameters(inout StringSeq expression_parameters);

    interface MultiTopic
    Replace:
    StringSeq get_expression_parameters();
    With:
    ReturnCode_t get_expression_parameters(inout StringSeq expression_parameters);

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Inconsistent prototype for Publisher's get_default_datawriter_qos() method

  • Key: DDS12-2
  • Legacy Issue Number: 9479
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    Summary:
    In the PSM it is returning void. However, in the PIM it is returning ReturnCode_t. Also, all other get_default_xxx_qos() methods return ReturnCode_t in both the PIM and the PSM.

    Proposed Resolution:
    The return code should be changed to ReturnCode_t in the PSM.

    Proposed Revised Text:

    Section 2.2.3 DCPS PSM : IDL
    interface Publisher :
    Replace
    void get_default_datawriter_qos(inout DataWriterQos qos);
    With
    ReturnCode_t get_default_datawriter_qos(inout DataWriterQos qos);

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    The return code should be changed to ReturnCode_t in the PSM.

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Inconsistencies between PIM and PSM in the prototype of get_qos() methods

  • Key: DDS12-1
  • Legacy Issue Number: 9478
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    Summary:
    According to the PIM, the get_qos() method returns QosPolicy []. According to the PSM, the qos is a parameter and the method returns void.

    Proposed Resolution:
    The PIM should be updated to be consistent with the PSM.
    In addition, the return value in both the PIM and PSM should be changed from void to ReturnCode_t.

    Proposed Revised Text:

    Section 2.1.2.1.1 Entity Class; Entity class table
    Change row from:
    abstract get_qos QosPolicy []
    To
    abstract get_qos ReturnCode_t
    out: qos_list QosPolicy[]

    Section 2.1.2.2.1 Domain Module; DomainParticipant class table
    Change row from:
    abstract get_qos QosPolicy []
    To
    abstract get_qos ReturnCode_t
    out: qos_list QosPolicy[]

    Section 2.1.2.3.1 TopicDescription Class; Topic class table
    Change row from:
    abstract get_qos QosPolicy []
    To
    abstract get_qos ReturnCode_t
    out: qos_list QosPolicy[]

    Section 2.1.2.4.1 Publisher Class; Publisher class table
    Change row from:
    abstract get_qos QosPolicy []
    To
    abstract get_qos ReturnCode_t
    out: qos_list QosPolicy[]

    Section 2.1.2.4.2 DataWriter Class; DataWriter class table
    Change row from:
    abstract get_qos QosPolicy []
    To
    abstract get_qos ReturnCode_t
    out: qos_list QosPolicy[]

    Section 2.1.2.5.2 Subscriber Class; Subscriber class table
    Change row from:
    abstract get_qos QosPolicy []
    To
    abstract get_qos ReturnCode_t
    out: qos_list QosPolicy[]

    Section 2.1.2.5.3 DataReader Class; DataReader class table
    Change row from:
    abstract get_qos QosPolicy []
    To
    abstract get_qos ReturnCode_t
    out: qos_list QosPolicy[]

    Section 2.2.3 DCPS PSM : IDL
    interface Entity
    Change:
    // void get_qos(inout EntityQos qos);
    To
    // ReturnCode_t get_qos(inout EntityQos qos);

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

read/take_next_instance()

  • Key: DDS12-70
  • Legacy Issue Number: 9553
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    Must read/take_next_instance() require that the handle corresponds to a known data-object?

    Summary:

    In the sections for read/take_next_instance() and read/take_next_instance_w_condition() it states that, if detectable, the implementation should return BAD_PARAMETER in this case, or otherwise the situation is unspecified.
    It might be desirable to allow an invalid handle to be passed in, especially when the user is iterating through instances and takes all samples of an instance that is NOT_ALIVE and has no writers, in which case that action may actually free the instance, "invalidating" the handle of that instance. (An iteration sketch follows this entry.)

    Proposed Resolution:

    Allow passing a handle that does not correspond to any instance currently on the DataReader to read_next_instance/take_next_instance. This handle should be sorted in a deterministic way with regard to the other handles such that the iteration is not interrupted.

    Proposed Revised Text:

    Section 2.1.2.5.3.16 read_next_instance

    Replace the paragraph:

    This operation implies the existence of some total order 'greater than' relationship between the instance handles. The specifics of this relationship are not important and are implementation specific. The important thing is that, according to the middleware, all instances are ordered relative to each other. This ordering is between the instances, that is, it does not depend on the actual samples received or available. For the purposes of this explanation it is 'as if' each instance handle was represented as a unique integer.

    With:

    This operation implies the existence of a total order 'greater-than' relationship between the instance handles. The specifics of this relationship are not all important and are implementation specific. The important thing is that, according to the middleware, all instances are ordered relative to each other. This ordering is between the instance handles: It should not depend on the state of the instance (e.g. whether it has data or not) and must be defined even for instance handles that do not correspond to instances currently managed by the DataReader. For the purposes of the ordering it should be 'as if' each instance handle was represented as a unique integer.

    Section 2.1.2.5.3.16 read_next_instance

    Remove the paragraph:

    The behavior of the read_instance operation follows the same rules as the read operation regarding the pre-conditions and post-conditions for the data_values and sample_infos collections. Similar to read, the read_instance operation may 'loan' elements to the output collections which must then be returned by means of return_loan.

    Replace the paragraph:

    This operation may return BAD_PARAMETER if the InstanceHandle_t a_handle does not correspond to an existing data-object known to the DataReader. If the implementation is not able to check invalid handles, then the result in this situation is unspecified.

    With

    Note that it is possible to call the 'read_next_instance' operation with an instance handle that does not correspond to an instance currently managed by the DataReader. This is because, as stated earlier, the 'greater-than' relationship is defined even for handles not managed by the DataReader. One practical situation where this may occur is when an application is iterating through all the instances, takes all the samples of a NOT_ALIVE_NO_WRITERS instance, returns the loan (at which point the instance information may be removed, and thus the handle becomes invalid), and tries to read the next instance.

    Section 2.1.2.5.3.17 take_next_instance

    Replace the paragraph:

    This operation may return BAD_PARAMETER if the InstanceHandle_t a_handle does not correspond to an existing data-object known to the DataReader. If the implementation is not able to check invalid handles, then the result in this situation is unspecified.

    With

    Similar to the operation read_next_instance (see Section 2.1.2.5.3.16) it is possible to call take_next_instance with an instance handle that does not correspond to an instance currently managed by the DataReader.

    Section 2.1.2.5.3.18 read_next_instance_w_condition

    Replace the paragraph:

    This operation may return BAD_PARAMETER if the InstanceHandle_t a_handle does not correspond to an existing data-object known to the DataReader. If the implementation is not able to check invalid handles, then the result in this situation is unspecified.

    With

    Similar to the operation read_next_instance (see Section 2.1.2.5.3.16) it is possible to call read_next_instance_w_condition with an instance handle that does not correspond to an instance currently managed by the DataReader.

    Section 2.1.2.5.3.19 take_next_instance_w_condition
    Replace the paragraph:

    This operation may return BAD_PARAMETER if the InstanceHandle_t a_handle does not correspond to an existing data-object known to the DataReader. If the implementation is not able to check invalid handles, then the result in this situation is unspecified.

    With

    Similar to the operation read_next_instance (see Section 2.1.2.5.3.16) it is possible to call take_next_instance_w_condition with an instance handle that does not correspond to an instance currently managed by the DataReader.

  • Reported: DDS 1.1 — Thu, 6 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT
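
The iteration scenario that motivates this issue can be sketched as follows (hedged; classic IDL-to-C++ mapping and the spec's placeholder 'Foo' type assumed). Taking every sample of a NOT_ALIVE_NO_WRITERS instance may free the instance, so the handle passed to the next call may no longer correspond to a managed instance, which the revised text explicitly permits.

    // Hedged sketch: iterating over all instances with take_next_instance().
    void take_all_instances(FooDataReader_ptr reader)
    {
        DDS::InstanceHandle_t previous = DDS::HANDLE_NIL;   // start before the first instance
        for (;;) {
            FooSeq data;
            DDS::SampleInfoSeq infos;
            DDS::ReturnCode_t rc = reader->take_next_instance(
                data, infos, DDS::LENGTH_UNLIMITED, previous,
                DDS::ANY_SAMPLE_STATE, DDS::ANY_VIEW_STATE, DDS::ANY_INSTANCE_STATE);
            if (rc != DDS::RETCODE_OK) {
                break;                // RETCODE_NO_DATA: no instance greater than 'previous'
            }
            // All returned samples belong to a single instance.
            previous = infos[0].instance_handle;
            // ... process the samples; after return_loan the instance may be
            // reclaimed, leaving 'previous' "invalid" for the next call ...
            reader->return_loan(data, infos);
        }
    }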

Clarify notification of ownership change

  • Key: DDS12-69
  • Legacy Issue Number: 9552
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:
    In section 2.1.3.9.2 EXCLUSIVE kind (the last sentence on page 2-114) the specification states that ownership changes are notified via a status change. However there is no status change that notifies of an ownership change. The only way to detect it is to look at the SampleInfo and see that the publication_handle has changed.

    Proposed Resolution:
    Remove the sentence. We could add the Status, Listener, and Callback, but it seems unnecessary until we see some actual use-cases that require this…

    Proposed Revised Text:

    In section 2.1.3.9.2 EXCLUSIVE kind, last sentence in last paragraph, remove the sentence:
    "The DataReader is also notified of this via a status change that is accessible by means of the Listener or Condition mechanisms."

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Unlimited setting for Resource limits not clearly explained

  • Key: DDS12-59
  • Legacy Issue Number: 9541
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:
    In section 2.1.3.19 it is not clear how to specify unlimited resource limits. (It is mentioned in the QoS table in section 2.1.3 that the default setting for resource_limits is length_unlimited, but in the context of 2.1.3.19 this is not repeated).

    Proposed Resolution:
    Specify in Section 2.1.3.19 that the constant LENGTH_UNLIMITED must be used to specify unlimited resource limits. (A QoS sketch follows this entry.)

    Proposed Revised Text:
    In section 2.1.3.19 add the following paragraph before the last paragraph in the section (the one that starts with "The setting of RESOURCE_LIMITS …"):

    The constant LENGTH_UNLIMITED may be used to indicate the absence of a particular limit. For example setting max_samples_per_instance to LENGTH_UNLIMITED will cause the middleware to not enforce this particular limit.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT
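
A hedged sketch of the clarified usage (classic IDL-to-C++ mapping assumed; field and constant spellings vary by vendor): LENGTH_UNLIMITED is simply assigned to each limit that should not be enforced.

    // Hedged sketch: requesting unlimited resource limits on a DataReader QoS.
    void configure_unlimited_limits(DDS::Subscriber_ptr subscriber,
                                    DDS::DataReaderQos& qos)
    {
        subscriber->get_default_datareader_qos(qos);
        // LENGTH_UNLIMITED indicates the absence of a particular limit.
        qos.resource_limits.max_samples              = DDS::LENGTH_UNLIMITED;
        qos.resource_limits.max_instances            = DDS::LENGTH_UNLIMITED;
        qos.resource_limits.max_samples_per_instance = DDS::LENGTH_UNLIMITED;
    }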

Small naming inconsistencies between PIM and PSM

  • Key: DDS12-58
  • Legacy Issue Number: 9540
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:
    In section 2.1.2.4.1.17, the explanation for the "copy_from_topic_qos" operation mentions two parameters called "topic_qos" and "datawriter_qos_list". Neither parameter name exists.

    In the PSM (section 2.2.3) the first two parameters for all "read()" and "take()" methods (and their variants) are consistently called "received_data" and "sample_infos". In the DataReader PIM in section 2.1.2.5.3, these same names are only used for the "read()" and "take()" methods; all their variants have a first parameter called "data_values". The FooDataReader PIM has the same issue, but even uses the name "data_values" for the read() and take() methods themselves.

    Proposed Resolution:
    Replace "topic_qos" with "a_topic_qos" and "datawriter_qos_list" with "a_datawriter_qos".

    Consistently use the parameter name "received_data" in both the PIM and the PSM.

    We propose to either ignore the second change regarding 'data_values' or change it the other way around (from "received_data" to "data_values"). This impacts the specification less: there are a lot of places that would be affected by changing "data_values" to "received_data".

    Proposed Revised Text:

    Section 2.1.2.4.1.17 copy_from_topic_qos:
    1st paragraph, replace: "topic_qos" with "a_topic_qos"
    1st, 2nd, and 3rd paragraph, replace: "datawriter_qos_list" with "a_datawriter_qos"

    Section 2.2.3
    replace the formal parameter name "received_data" with "data_value" or "data_values", depending on whether the type is a sequence or not. This affects DataReader::take*, DataReader::read*, FooDataReader::take*, and FooDataReader::read*.

    Section 2.1.2.5.3 DataReader Class table replace "received_data" with "data_values". This affects the operations:
    return_loan
    take
    read

    Section 2.2.3 DCPS PSM : IDL
    Change the formal parameter of the read/take operations from "received_data" to "data_values". This affects the operations:

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Extended visibility of instance state changes

  • Key: DDS12-68
  • Legacy Issue Number: 9551
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:
    The instance state is only accessible via the SampleInfo, and this requires the availability of data.
    This implies that the disposed and no-writers states of an instance may not be noticed if the application has taken all samples.
    Subsequent instance state changes are only notified if all samples are taken.

    Consequently, it's very hard to receive notifications on disposal of instances.

    Because accessing the instance state requires data, applications should use read instead of take.
    But take is required for subsequent notifications.
    Applications are not notified on arrival of data if they choose not to take all data (read, or take not all).

    Occasionally an application may need to react to disposal or the no-writers state of instances (e.g., to clean up allocated resources), and applications may also continuously take all samples to save resources.

    In this case a dispose or no writer state will only be noticed if a new generation appears, which may never happen.
    Occasionally applications may want to keep all read samples and still be notified on data arrival.

    Applications should be notified whenever new data arrives, whether they have taken all previous data samples or not.
    According to the spec (section 2.1.2.5.3.8) it is possible to get 'meta samples', that is, samples that have a SampleInfo but no associated data; this can be used to notify of disposal, no writers, and so on. (A usage sketch follows this entry.)

    Proposed Resolution:
    Always reset the read communication status flag on any read or take operation.

    Provide a notification mechanism on the DataReader that specifies the instance handle of the instance whose state has changed.
    -> This is managed by the meta-sample mechanism mentioned above

    Provide a method on an instance handle to access the instance state.

    Modify figure 2-16 and section 2.1.4.2.2 to state that the ReadCommunicationStatus is reset to FALSE whenever the corresponding listener operation is called, or else if a read or take operation is called on the associated DataReader

    In addition the ON_DATA_ON_READERS status is reset if the on_data_available is called. The inverse (resetting the ON_DATA_AVAILABLE status when the on_data_on_readers is called) does not happen.

    Proposed Revised Text:

    Section 2.1.2.5 Subscription Module, Figure 2-10
    Add the following field to the SampleInfo class:
    valid_data : boolean

    Section 2.1.2.5.1 Access to the data (see attached document access_to_the_data2CMP.pdf for the resulting section with changes)

    >>After the 2nd paragraph ("Each of these…") add the section heading:
    2.1.2.5.1.1 Interpretation of the SampleInfo
    3rd paragraph; add the following bullet after the bullet that starts with "The instance_state of the related instance"
    The valid_data flag. This flag indicates whether there is data associated with the sample. Some samples do not contain data indicating only a change on the instance_state of the corresponding instance.

    >>Before the paragraph that starts with "For each sample received" add the section headings:
    2.1.2.5.1.2 Interpretation of the SampleInfo sample_state

    >>Before the paragraph that starts with "For each instance the middleware internally maintains" add the section heading:
    2.1.2.5.1.3 Interpretation of the SampleInfo instance_state

    >>Before the paragraph that starts with "For each instance the middleware internally maintains two counts: the disposed_generation_count and no_writers_generation_count" add the following subsections (2.1.2.5.1.4, and 2.1.2.5.1.5):

    2.1.2.5.1.4 Interpretation of the SampleInfo valid_data
    Normally each DataSample contains both a SampleInfo and some Data. However there are situations where a DataSample contains only the SampleInfo and does not have any associated data. This occurs when the Service notifies the application of a change of state for an instance that was caused by some internal mechanism (such as a timeout) for which there is no associated data. An example of this situation is when the Service detects that an instance has no writers and changes the corresponding instance_state to NOT_ALIVE_NO_WRITERS.

    The actual set of scenarios under which the middleware returns DataSamples containing no Data is implementation dependent. The application can distinguish whether a particular DataSample has data by examining the value of the valid_data flag. If this flag is set to TRUE, the DataSample contains valid Data; if the flag is set to FALSE, the DataSample contains no Data.
    To ensure correctness and portability, the valid_data flag must be examined by the application prior to accessing the Data associated with the DataSample, and if the flag is set to FALSE, the application should not access the Data associated with the DataSample; that is, the application should access only the SampleInfo.

    2.1.2.5.1.5 Interpretation of the SampleInfo disposed_generation_count and no_writers_generation_count

    >>Before the paragraph that starts with "The sample_rank and generation_rank available in the SampleInfo are computed …" add the section heading:
    2.1.2.5.1.6 Interpretation of the SampleInfo sample_rank, generation_rank, and absolute_generation_rank

    >>Before the paragraph that starts with "These counters and ranks allow the application to distinguish" add the section heading:
    2.1.2.5.1.7 Interpretation of the SampleInfo counters and ranks

    >>Before the paragraph that starts with "For each instance (identified by the key), the middleware internallyÂ…" add the section heading:
    2.1.2.5.1.8 Interpretation of the SampleInfo view_state

    >>Before the paragraph that starts with "The application accesses data by means of the operations read or take on the DataReader" add the section heading:
    2.1.2.5.1.9 Data access patterns

    Section 2.1.2.5.5 Sample Info class

    Add another bullet to the list:
    The valid_data flag that indicates whether the DataSample contains data or is only used to communicate a change in the instance_state of the instance.

    Section 2.2.3 DCPS PSM : IDL

    struct SampleInfo
    Add the following field at the end of the structure:
    boolean valid_data

    The resulting structure is:

    struct SampleInfo {
        SampleStateKind sample_state;
        ViewStateKind view_state;
        InstanceStateKind instance_state;
        Time_t source_timestamp;
        InstanceHandle_t instance_handle;
        InstanceHandle_t publication_handle;
        long disposed_generation_count;
        long no_writers_generation_count;
        long sample_rank;
        long generation_rank;
        long absolute_generation_rank;
        boolean valid_data;
    };

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT
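
The new valid_data flag changes how applications should consume samples: data must only be touched when the flag is TRUE, while FALSE samples carry only state changes. A hedged sketch (classic IDL-to-C++ mapping and the spec's placeholder 'Foo' type assumed):

    // Hedged sketch: honoring the new valid_data flag when taking samples.
    void process_samples(FooDataReader_ptr reader)
    {
        FooSeq data;
        DDS::SampleInfoSeq infos;
        if (reader->take(data, infos, DDS::LENGTH_UNLIMITED,
                         DDS::ANY_SAMPLE_STATE, DDS::ANY_VIEW_STATE,
                         DDS::ANY_INSTANCE_STATE) != DDS::RETCODE_OK) {
            return;
        }
        for (unsigned int i = 0; i < infos.length(); ++i) {
            if (infos[i].valid_data) {
                // ... process data[i] ...
            } else if (infos[i].instance_state ==
                       DDS::NOT_ALIVE_NO_WRITERS_INSTANCE_STATE) {
                // Meta-sample: no data, only an instance_state change
                // (e.g. clean up resources for infos[i].instance_handle).
            }
        }
        reader->return_loan(data, infos);
    }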

Typo in section 2.1.2.5.1

  • Key: DDS12-67
  • Legacy Issue Number: 9550
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:
    On page 2-65 the second last bullet states
    The sample_rank indicates the number or samples of the same instance that follow the current one in the collection.
    The 'or' should be 'of'.

    Proposed Resolution:

    Proposed Revised Text:
    Section 2.1.2.5.1 Access to the data, second to last bullet
    Replace 'or' with 'of' in the sentence:
    The sample_rank indicates the number or samples of the same instance that follow the current one in the collection.

    Resulting in:
    The sample_rank indicates the number of samples of the same instance that follow the current one in the collection.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Fix typo.

  • Updated: Fri, 6 Mar 2015 20:58 GMT

instance resource can be reclaimed in READER_DATA_LIFECYCLE QoS section

  • Key: DDS12-71
  • Legacy Issue Number: 9554
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    Clarification of when an instance resource can be reclaimed in the READER_DATA_LIFECYCLE QoS section

    Summary:

    In Section 2.1.3.22 (READER_DATA_LIFECYCLE QoS) the fourth paragraph mentions how "the DataReader can only reclaim all resources for instances that instance_state = NOT_ALIVE_NO_WRITERS and for which all samples have been 'taken'".
    This should be corrected to state "for instances for which all samples have been 'taken' and either instance_state = NOT_ALIVE_NO_WRITERS or instance_state = NOT_ALIVE_DISPOSED and there are no 'live' writers".
    In light of this the statement in the last paragraph stating that once the state becomes NOT_ALIVE_DISPOSED after the autopurge_disposed_samples_delay elapses, "the DataReader will purge all internal information regarding the instance; any untaken samples will also be lost" is not entirely true. If there are other 'live' writers, the DataReader will maintain the state on the instance of which DataWriters are writing to it.

    We should change the "will purge all" to "may purge all" or even "will purge". Alternatively, we could describe in further detail when it "will purge all", i.e. when there are no 'live' writers.

    The biggest thing here is to decide whether the instance lifecycle can end directly from the NOT_ALIVE_DISPOSED state (as Figure 2-11 currently states) or whether we must force it to go through NOT_ALIVE_NO_WRITERS; that is, in the case where the last writer unregisters a disposed instance, do we transition to NOT_ALIVE_NO_WRITERS+NOT_ALIVE_DISPOSED or do we finish the lifecycle directly without notifying the user (as is indicated now)?

    We think the current behavior is better because, from the application reader's point of view, the instance does not exist once it is DISPOSED; the fact that we keep the instance state so that we can retain ownership is a detail inside the middleware, so it would be unnatural to get a further indication that an instance (which the reader no longer knows about) now has no writers.

    We suggest the proposed changes should reflect this point of view.

    Proposed Resolution:

    Make the suggested corrections:

    (1) Correct when readers can claim resources to include the NOT_ALIVE_DISPOSED state when there are no live writers. So we always reclaim when there are no writers and all the samples for that instance are taken; these samples will include a sentinel meta-sample with an instance state that will be either NOT_ALIVE_NO_WRITERS or NOT_ALIVE_DISPOSED.

    (2) Clarify that the autopurge of disposed samples removes only the samples, but not the instance; the instance will only be removed in the above case. (A QoS sketch follows this entry.)
    Proposed Revised Text:

    Section 2.1.3.22 READER_DATA_LIFECYCLE QoS

    Replace the paragraph:

    Under normal circumstances the DataReader can only reclaim all resources for instances that instance_state = NOT_ALIVE_NO_WRITERS and for which all samples have been 'taken.'

    With

    Under normal circumstances the DataReader can only reclaim all resources for instances for which there are no writers and for which all samples have been 'taken.' The last sample the DataReader will have taken for that instance will have an instance_state of either NOT_ALIVE_NO_WRITERS or NOT_ALIVE_DISPOSED depending on whether the last writer that had ownership of the instance disposed it or not. Refer to Figure 2-11 for a statechart describing the transitions possible for the instance_state.

    In the Paragraph starting with "The autopurge_nowriter_samples_delay defines.."
    Replace

    once its view_state becomes NOT_ALIVE_NO_WRITERS

    With

    once its instance_state becomes NOT_ALIVE_NO_WRITERS

    Replace the paragraph:

    The autopurge_disposed_samples_delay defines the maximum duration for which the DataReader will maintain information regarding an instance once its view_state becomes NOT_ALIVE_DISPOSED. After this time elapses, the DataReader will purge all internal information regarding the instance; any untaken samples will also be lost

    With

    The autopurge_disposed_samples_delay defines the maximum duration for which the DataReader will maintain samples for an instance once its instance_state becomes NOT_ALIVE_DISPOSED. After this time elapses, the DataReader will purge all samples for the instance.

  • Reported: DDS 1.1 — Thu, 6 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT
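
To show how the clarified autopurge settings are used in practice, here is a hedged QoS sketch (classic IDL-to-C++ mapping assumed; Duration_t field spellings vary by vendor). Per the resolution, the delays purge samples of dead instances; the instance itself is reclaimed only once all its samples have been taken and it has no writers.

    // Hedged sketch: bounding how long a DataReader keeps samples of dead instances.
    void configure_reader_lifecycle(DDS::DataReaderQos& qos)
    {
        // Purge samples of instances with no writers after 30 s ...
        qos.reader_data_lifecycle.autopurge_nowriter_samples_delay.sec     = 30;
        qos.reader_data_lifecycle.autopurge_nowriter_samples_delay.nanosec = 0;
        // ... and samples of disposed instances after 10 s; this removes the
        // samples, not the instance itself.
        qos.reader_data_lifecycle.autopurge_disposed_samples_delay.sec     = 10;
        qos.reader_data_lifecycle.autopurge_disposed_samples_delay.nanosec = 0;
    }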

Incorrect description of enable precondition

  • Key: DDS12-62
  • Legacy Issue Number: 9544
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:
    In section 2.1.2.2.1. DomainParticipant Class it says:
    The following operations may be called even if the DomainParticipant is enabled. Other operations will have the value NOT_ENABLED if called on a disabled

    It should say:
    The following operations may be called even if the DomainParticipant is not enabled. Other operations will return the value NOT_ENABLED if called on a disabled

    Proposed Resolution:

    Proposed Revised Text:
    In section 2.1.2.2.1. DomainParticipant Class, paragraph at the end of the section before the bullet points

    Replace:
    The following operations may be called even if the DomainParticipant is enabled. Other operations will have the value NOT_ENABLED if called on a disabled

    With:
    The following operations may be called even if the DomainParticipant is not enabled. Other operations will return the value NOT_ENABLED if called on a disabled

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Perform the above change

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Resetting of the statusflag during a listener callback

  • Key: DDS12-61
  • Legacy Issue Number: 9543
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:
    In section 2.1.4.2.1, it is explained that a statusflag becomes TRUE if a plain communication status changes, and becomes FALSE again each time the application accesses the plain communication status via the proper get_<plain_communication_status> operation. This is not a complete description, since it only assumes an explicit call to read the communication status. It is also possible (by attaching a Listener) to implicitly read the status (it is then passed as a parameter to the registered callback method), and afterwards the status flag should be set to FALSE as well.
    Furthermore, the Status table in section 2.1.4.1 mentions that all total_count_change fields are reset when a Listener callback is performed. The same thing happens when a get_<plain_communication_status> operation is invoked. It would make sense for a Listener callback to behave in a similar way as when explicitly reading the plain communication status. (A listener sketch follows this entry.)

    Proposed Resolution:
    Mention explicitly in section 2.1.4.2.1 that a status flag is also set to FALSE when a listener callback for that status has been performed. (We need to think about what consequences this will have for NIL-Listeners, which behave like a no-op. Probably they should also reset the flag in that case.)

    Proposed Revised Text:

    In section 2.1.4.2.1 after the paragraph:
    For the plain communication status, the StatusChangedFlag flag is initially set to FALSE. It becomes TRUE whenever the plain communication status changes and it is reset to FALSE each time the application accesses the plain communication status via the proper get_<plain communication status> operation on the Entity.

    Add the paragraphs:

    The communication status is also reset to FALSE whenever the associated listener operation is called as the listener implicitly accesses the status which is passed as a parameter to the operation. The fact that the status is reset prior to calling the listener means that if the application calls the get_<plain communication status> from inside the listener it will see the status already reset.

    An exception to this rule is when the associated listener is the 'nil' listener. As described in section 2.1.4.3.1 the 'nil' listener is treated as a NOOP and the act of calling the 'nil' listener does not reset the communication status.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT
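
A hedged sketch of the listener behavior described above (classic IDL-to-C++ mapping assumed; only one callback is shown, and a real listener must implement the remaining DataReaderListener operations). The status is delivered as a callback parameter, and per the resolution the StatusChangedFlag and the *_change counters are reset before the callback runs, exactly as after an explicit get_<status> call.

    // Hedged sketch: consuming a plain communication status inside a listener.
    class MyReaderListener : public virtual DDS::DataReaderListener {
    public:
        virtual void on_requested_deadline_missed(
            DDS::DataReader_ptr /*reader*/,
            const DDS::RequestedDeadlineMissedStatus& status)
        {
            // total_count_change is reset by the middleware when this callback
            // is invoked, just as after get_requested_deadline_missed_status().
            if (status.total_count_change > 0) {
                // ... react to the missed deadlines ...
            }
        }
        // ... the other DataReaderListener operations are omitted here ...
    };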

Clarify the meaning of locally

  • Key: DDS12-64
  • Legacy Issue Number: 9546
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:
    On page 2-94, in section 2.1.2.5.5 (SampleInfo Class), the description of publication_handle states that it identifies locally the DataWriter that modified the instance.

    Clarify that "locally" means the instance_handle from the builtin Publication DataReader belonging to the Participant of the DataReader from which the sample is read. (A usage sketch follows this entry.)

    Proposed Resolution:

    Proposed Revised Text:

    In section 2.1.2.5.5 SampleInfo Class, replace the bullet:
    the publication_handle that identifies locally the DataWriter that modified the instance.

    With the bullet:
    the publication_handle that identifies locally the DataWriter that modified the instance. The publication_handle is the same InstanceHandle_t that is returned by the operation get_matched_publications on the DataReader and can also be used as a parameter to the DataReader operation get_matched_publication_data.

    In section 2.1.2.5.3.33 get_matched_publications after the first paragraph add the paragraph.
    The handles returned in the 'publication_handles' list are the ones that are used by the DDS implementation to locally identify the corresponding matched DataWriter. These handles match the ones that appear in the 'instance_handle' field of the SampleInfo when reading the "DCPSPublications" builtin topic.

    In the corresponding section on get_matched_subscriptions (DataWriter class), after the first paragraph add the paragraph:
    The handles returned in the 'subscription_handles' list are the ones that are used by the DDS implementation to locally identify the corresponding matched DataReader. These handles match the ones that appear in the 'instance_handle' field of the SampleInfo when reading the "DCPSSubscriptions" builtin topic.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Add the state clarification

  • Updated: Fri, 6 Mar 2015 20:58 GMT
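
A hedged sketch of what the clarified wording enables (classic IDL-to-C++ mapping and the spec's placeholder 'Foo' type assumed): the publication_handle taken from a SampleInfo can be fed directly to get_matched_publication_data to learn about the writer that produced the sample.

    // Hedged sketch: using SampleInfo::publication_handle to identify the writer.
    void identify_writer(FooDataReader_ptr reader, const DDS::SampleInfo& info)
    {
        // publication_handle locally identifies the matched DataWriter; it is the
        // same handle returned by get_matched_publications.
        DDS::PublicationBuiltinTopicData pub_data;
        if (reader->get_matched_publication_data(pub_data, info.publication_handle)
                == DDS::RETCODE_OK) {
            // ... inspect pub_data (topic_name, partition, offered QoS, ...) ...
        }
    }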

invalid reference to delete_datareader

  • Key: DDS12-63
  • Legacy Issue Number: 9545
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:
    On page 2-70 at the end of section 2.1.2.5.2 (Subscriber Class) the description states that a list of operations including delete_datareader may return NOT_ENABLED. The operation delete_datareader should be removed from this list.

    Proposed Resolution:

    Proposed Revised Text:
    In section 2.1.2.5.2 Subscriber Class, at the end right before section 2.1.2.5.2.1 replace paragraph:

    All operations except for the base-class operations set_qos, get_qos, set_listener, get_listener, enable, get_statuscondition, create_datareader, and delete_datareader may return the value NOT_ENABLED.

    With:

    All operations except for the base-class operations set_qos, get_qos, set_listener, get_listener, enable, get_statuscondition, and create_datareader may return the value NOT_ENABLED.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Remove delete_datareader from said list

  • Updated: Fri, 6 Mar 2015 20:58 GMT

PIM and PSM contradicting wrt "get_sample_lost_status" operation

  • Key: DDS12-57
  • Legacy Issue Number: 9539
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    The PIM and PSM contradict each other with respect to the "get_sample_lost_status" operation.

    Summary:
    According to the PIM in section 2.1.2.5.2(.12), the Subscriber class has an operation called "get_sample_lost_status". According to the PSM in section 2.2.3, this operation is not part of the Subscriber, but of the DataReader.

    Proposed Resolution:
    Move the "get_sample_lost_status" operation in the PIM to the DataReader as well.
    RTI: We propose removing this from the Subscriber altogether and moving it to the DataReader.

    Proposed Revised Text:

    In the Subscriber table in section 2.1.2.5.2 Subscriber Class
    Remove the entry on the operation get_sample_lost_status()

    In the DataReader table in section 2.1.2.5.3 DataReader Class
    Add the entry on the get_sample_lost_status() operation that was removed from the Subscriber class

    Add section 2.1.2.5.3.24, previous 2.1.2.5.3.24 becomes 2.1.2.5.3.25:
    2.1.2.5.3.24 get_sample_lost_status
    This operation allows access to the SAMPLE_LOST_STATUS communication status. Communication statuses are described in Section 2.1.4.1, "Communication Status," on page 2-125.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Move the operation from the Subscriber to the DataReader

  • Updated: Fri, 6 Mar 2015 20:58 GMT

PIM description of "get_domain_id" method is missing

  • Key: DDS12-56
  • Legacy Issue Number: 9538
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:
    In section 2.1.2.2.2, the "get_domain_id" method is mentioned in the table, but is not explained in the following sections.

    Proposed Resolution:
    Add a section that explains the "get_domain_id" method.

    Proposed Revised Text:
    Replace section 2.1.2.2.1.26 with the following one:

    2.1.2.2.1.26 get_domain_id
    This operation retrieves the domain_id used to create the DomainParticipant. The domain_id identifies the Domain to which the DomainParticipant belongs. As described in the introduction to Section 2.1.2.2.1 each Domain represents a separate data "communication plane" isolated from other domains.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Add a section that explains the "get_domain_id" method

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Illegal return value register_instance

  • Key: DDS12-66
  • Legacy Issue Number: 9549
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:
    In section 2.1.2.4.2.5 register_instance the description states that if the operation exceeds max_blocking_time it will return TIMEOUT. However, this is not possible because the operation does not return a ReturnCode_t value (it returns an InstanceHandle_t). (A usage sketch follows this entry.)

    Proposed Resolution:

    Proposed Revised Text:

    Section 2.1.2.4.2.5 register_instance
    At the end of the 5th paragraph Replace:
    If max_blocking_time elapses before the DataWriter is able to store the modification without exceeding the limits, the operation will fail and return TIMEOUT

    With:
    If max_blocking_time elapses before the DataWriter is able to store the modification without exceeding the limits, the operation will fail and return HANDLE_NIL

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    State that in this case the operation will return HANDLE_NIL instead

  • Updated: Fri, 6 Mar 2015 20:58 GMT
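
Under the resolution, applications detect the failure by comparing the returned handle against HANDLE_NIL rather than expecting a TIMEOUT return code. A hedged sketch (classic IDL-to-C++ mapping and the spec's placeholder 'Foo' type assumed; how handles are compared is mapping-specific):

    // Hedged sketch: detecting a failed register_instance() via HANDLE_NIL.
    DDS::ReturnCode_t write_registered(FooDataWriter_ptr writer, const Foo& sample)
    {
        DDS::InstanceHandle_t handle = writer->register_instance(sample);
        if (handle == DDS::HANDLE_NIL) {            // comparison is mapping-specific
            // max_blocking_time elapsed before the instance could be stored.
            return DDS::RETCODE_TIMEOUT;
        }
        return writer->write(sample, handle);
    }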

Missing autopurge_disposed_sample_delay

  • Key: DDS12-65
  • Legacy Issue Number: 9548
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:
    In the QoS table for built-in Subscriber and DataReader objects (Section 2.1.5 Built-in Topics) the value for autopurge_disposed_sample_delay is missing.

    Proposed Resolution:

    Proposed Revised Text:
    In the UML figure in section 2.1.3 Supported QoS
    Class ReaderDataLifecycleQoS, Add the field:
    autopurge_disposed_sample_delay : Duration_t

    In section 2.1.5 Built-in Topics, QoS table, READER_DATA_LIFECYCLE row, add:
    autopurge_disposed_sample_delay = infinite

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Add the missing field.

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Inconsistent PIM/PSM for RETCODE_ILLEGAL_OPERATION

  • Key: DDS12-60
  • Legacy Issue Number: 9542
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:
    See also issue R#123 of our previous Issues document (addition of an IllegalOperation error code). This issue has been solved at the PIM level, but the ReturnCode has not been added to the IDL PSM.

    Proposed Resolution:
    Add the RETCODE_ILLEGAL_OPERATION ReturnCode to the PSM in section 2.2.3.

    Proposed Revised Text:
    Section 2.2.3 DCPS PSM : IDL
    after the line "const ReturnCode_t RETCODE_NO_DATA = 11;" add the line:
    const ReturnCode_t RETCODE_ILLEGAL_OPERATION = 12;

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Naming consistencies in match statuses

  • Key: DDS12-21
  • Legacy Issue Number: 9498
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    For better naming consistency with other statuses, the PUBLICATION_MATCH_STATUS and SUBSCRIPTION_MATCH_STATUS may be renamed to PUBLICATION_MATCHED_STATUS and SUBSCRIPTION_MATCHED_STATUS. Also the get_publication_match_status and get_subscription_match_status operations may be renamed to get_publication_matched_status and get_subscription_matched_status.
    In addition the callback is named on_XXX_matched.

    Proposed Resolution:
    Rename PUBLICATION_MATCH_STATUS to PUBLICATION_MATCHED_STATUS, SUBSCRIPTION_MATCH_STATUS to SUBSCRIPTION_MATCHED_STATUS

    Proposed Revised Text:
    Section 2.1.2.4 Publication Module
    Figure 2-9; DataWriter class
    Rename
    get_publication_match_status()
    To
    get_publication_matched_status()

    Section 2.1.2.4.2 DataWriter Class
    DataWriter class table
    Rename
    get_publication_match_status()
    To
    get_publication_matched_status()

    Section 2.1.2.4.2.19 get_publication_match_status
    Rename section heading to:
    2.1.2.4.2.19 get_publication_matched_status

    Replace
    "allows access to the PUBLICATION_MATCH_QOS"
    With:
    "allows access to the PUBLICATION_MATCHED communication status "

    Section 2.1.2.5 Subscription Module
    Figure 2-9; DataReader class
    Rename
    get_subscription_match_status()
    To
    get_subscription_matched_status()

    Section 2.1.2.4.2 DataReader Class
    DataReader class table
    Rename
    get_subscription_match_status()
    To
    get_subscription_matched_status()

    Section 2.1.2.5.3.25 get_subscription_match_status
    Rename section heading to:
    2.1.2.5.3.25 get_subscription_matched_status

    Section 2.1.2.5.3.25 get_subscription_match_status
    Rename "SUBSCRIPTION_MATCH_STATUS" to "SUBSCRIPTION_MATCHED_STATUS"

    Section 2.1.4.4 Conditions and Wait-sets
    Figure 2-19; DataReader class
    Rename
    get_publication_match_status()
    To
    get_publication_matched_status()

    Section 2.1.4.1 Communication Status
    Communication status table replace:
    PUBLICATION_MATCH
    With
    PUBLICATION_MATCHED

    Communication status table replace:
    SUBSCRIPTION_MATCH
    With
    SUBSCRIPTION_MATCHED

    Section 2.2.3 DCPS PSM : IDL
    Status constants
    Replace:
    const StatusKind PUBLICATION_MATCH_STATUS = 0x0001 << 13;
    const StatusKind SUBSCRIPTION_MATCH_STATUS = 0x0001 << 14;
    With
    const StatusKind PUBLICATION_MATCHED_STATUS = 0x0001 << 13;
    const StatusKind SUBSCRIPTION_MATCHED_STATUS = 0x0001 << 14;

    interface DataWriter
    Replace:
    PublicationMatchedStatus get_publication_match_status();
    With
    PublicationMatchedStatus get_publication_matched_status();

    interface DataReader
    Replace:
    SubscriptionMatchedStatus get_subscription_match_status();
    With
    SubscriptionMatchedStatus get_subscription_matched_status();
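
    For illustration, a hedged sketch of application code after the rename (classic IDL-to-C++ mapping assumed; entity creation and the WaitSet loop are not shown):

    // Select the renamed status on the writer's StatusCondition ...
    DDS::StatusCondition* condition = writer->get_statuscondition();
    condition->set_enabled_statuses(DDS::PUBLICATION_MATCHED_STATUS);
    // ... and, after a WaitSet wakes up, test which statuses changed:
    if (writer->get_status_changes() & DDS::PUBLICATION_MATCHED_STATUS) {
        // a DataReader matching this DataWriter's Topic and QoS was discovered
        // (or an existing match went away)
    }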

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Description of set_default_XXX_qos()

  • Key: DDS12-20
  • Legacy Issue Number: 9497
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    For XXX = participant, topic, publisher, subscriber, and datareader, the specification states "in the case where the QoS policies are not explicitly specified".
    For XXX = datawriter, the specification states "in the case where the QoS policies are defaulted".
    The latter is technically more correct.

    Proposed Resolution:
    Use the wording in set_default_datawriter_qos().

    Proposed Revised Text:
    Section 2.1.2.2.1.20 set_default_publisher_qos
    First paragraph replace:
    in the case where the QoS policies are not explicitly specified
    With
    in the case where the QoS policies are defaulted

    Section 2.1.2.2.1.21 get_default_publisher_qos
    First paragraph replace:
    in the case where the QoS policies are not explicitly specified
    With
    in the case where the QoS policies are defaulted

    Section 2.1.2.2.1.22 set_default_subscriber_qos
    First paragraph replace:
    in the case where the QoS policies are not explicitly specified
    With
    in the case where the QoS policies are defaulted

    Section 2.1.2.2.1.23 get_default_subscriber_qos
    First paragraph replace:
    in the case where the QoS policies are not explicitly specified
    With
    in the case where the QoS policies are defaulted

    Section 2.1.2.2.1.24 set_default_topic_qos
    First paragraph replace:
    in the case where the QoS policies are not explicitly specified
    With
    in the case where the QoS policies are defaulted

    Section 2.1.2.2.1.25 get_default_topic_qos
    First paragraph replace:
    in the case where the QoS policies are not explicitly specified
    With
    in the case where the QoS policies are defaulted

    Section 2.1.2.2.2.5 set_default_participant_qos
    First paragraph replace:
    in the case where the QoS policies are not explicitly specified
    With
    in the case where the QoS policies are defaulted

    Section 2.1.2.2.2.6 get_default_participant_qos
    First paragraph replace:
    in the case where the QoS policies are not explicitly specified
    With
    in the case where the QoS policies are defaulted

    Section 2.1.2.4.1.16 get_default_datawriter_qos
    First paragraph replace:
    in the case where the QoS policies are not explicitly specified
    With
    in the case where the QoS policies are defaulted

    Section 2.1.2.5.2.15 set_default_datareader_qos
    First paragraph replace:
    in the case where the QoS policies are not explicitly specified
    With
    in the case where the QoS policies are defaulted

    Section 2.1.2.5.2.16 get_default_datareader_qos
    First paragraph replace:
    in the case where the QoS policies are not explicitly specified
    With
    in the case where the QoS policies are defaulted

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Use the wording in set_default_datawriter_qos().

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Should write() block when out of instance resources?

  • Key: DDS12-19
  • Legacy Issue Number: 9496
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    Currently it is stated that write() and dispose() may block and return TIMEOUT when the RELIABILITY QoS kind is set to RELIABLE and any of the RESOURCE_LIMITS QoS is hit.
    We should reconsider the action taken when it is instance resource limits that are hit. If instance resources are kept around until they are unregistered (and not even yet considering how RELIABILITY or DURABILITY QoS affects this), then it seems awkward to block when the user is required to take action. Perhaps returning immediately with OUT_OF_RESOURCES makes more sense in this situation.

    Proposed Resolution:
    When the writer is out of instance resources because all max_instances have been registered or written, the write/dispose() call will return OUT_OF_RESOURCES instead of blocking if it can be detected.

    Proposed Revised Text:

    Section 2.1.2.4.2.11 write
    Above the paragraph starting with "In case the provided handle is valid"; add the paragraph:
    Instead of blocking, the write operation is allowed to return immediately with the error code OUT_OF_RESOURCES provided the following two conditions are met:
    1. The reason for blocking would be that the RESOURCE_LIMITS are exceeded.
    2. The service determines that waiting the 'max_waiting_time' has no chance of freeing the necessary resources. For example, if the only way to gain the necessary resources would be for the user to unregister an instance.

    Section 2.1.2.4.2.12 write_w_timestamp
    After the paragraph "This operation may block" add the paragraph:
    This operation may return OUT_OF_RESOURCES under the same circumstances described for the write operation (Section 2.1.2.4.2.11).

    Section 2.1.2.4.2.13 dispose
    After the paragraph "This operation may blockÂ…" add the paragraph:
    This operation may return OUT_OF_RESOURCES under the same circumstances described for the write operation (Section 2.1.2.4.2.11).

    Section 2.1.2.4.2.14 dispose_w_timestamp
    After the paragraph "This operation may blockÂ…" add the paragraph:
    This operation may return OUT_OF_RESOURCES under the same circumstances described for the write operation (Section 2.1.2.4.2.11).

    Section 2.1.2.4.2.5 register

    Replace the paragraph:
    This operation may block if the RELIABILITY kind is set to RELIABLE and the modification would cause data to be lost or else cause one of the limits specified in the RESOURCE_LIMITS to be exceeded. Under these circumstances, the RELIABILITY max_blocking_time configures the maximum time the write operation may block (waiting for space to become available). If max_blocking_time elapses before the DataWriter is able to store the modification without exceeding the limits, the operation will fail and return TIMEOUT.

    With:
    This operation may block and return TIMEOUT under the same circumstances described for the write operation (Section 2.1.2.4.2.11).
    This operation may return OUT_OF_RESOURCES under the same circumstances described for the write operation (Section 2.1.2.4.2.11).

    Section 2.1.2.4.2.5 register_w_timestamp

    Replace the paragraph:
    This operation may block and return TIMEOUT under the same circumstances described for the register_instance operation (Section 2.1.2.4.2.5 ).
    With:
    This operation may block and return TIMEOUT under the same circumstances described for the write operation (Section 2.1.2.4.2.11).
    This operation may return OUT_OF_RESOURCES under the same circumstances described for the write operation (Section 2.1.2.4.2.11).
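
    For illustration, a hedged sketch of the resulting error handling (classic IDL-to-C++ mapping assumed; 'foo_writer', 'sample' and the Foo type are hypothetical):

    DDS::ReturnCode_t retcode = foo_writer->write(sample, DDS::HANDLE_NIL);
    switch (retcode) {
    case DDS::RETCODE_OK:
        break;
    case DDS::RETCODE_TIMEOUT:
        // blocked for max_blocking_time without space becoming available;
        // a later retry may succeed without application intervention
        break;
    case DDS::RETCODE_OUT_OF_RESOURCES:
        // RESOURCE_LIMITS exceeded and waiting cannot free the resources;
        // the application must unregister or dispose instances first
        break;
    default:
        // other error handling
        break;
    }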

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Clarify ownership with same-strength writers

  • Key: DDS12-18
  • Legacy Issue Number: 9495
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    Summary:
    In Section 2.1.3.9 in the paragraph dealing with when there are multiple same-strength writers, the next to last sentence describes that the owner must remain the same until one of several conditions are met.
    The condition where "a new DataWriter with the same strength that should be deemed the owner according to the policy of the Service" should be explicitly mentioned although it may have been implied.

    Proposed Resolution:
    Add the explicit mention of the additional condition above.

    Proposed Revised Text:

    Section 2.1.3.9.2 EXCLUSIVE kind

    5th paragraph; replace the sentence:
    It is also required that the owner remains the same until there is a change in strength, liveliness, the owner misses a deadline on the instance, or a new DataWriter with higher strength modifies the instance.

    With:
    It is also required that the owner remains the same until there is a change in strength, liveliness, the owner misses a deadline on the instance, a new DataWriter with higher strength modifies the instance, or a new owner with the same strength that is deemed by the Service to be the owner modifies the instance.

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Add the explicit mention of the additional condition above.

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Naming of filter_parameters concerning ContentFilteredTopic

  • Key: DDS12-12
  • Legacy Issue Number: 9489
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    The method name is get/set_expression_parameters() whereas the parameter passed in is the "filter_parameters". Understandably the full name is filter expression parameters since the ContentFilteredTopic has a "filter_expression" attribute.
    Compare this with the MultiTopic which has the same named methods which take in "expression_parameters" and has a "subscription_expression" attribute.
    The name "filter_parameters" is also used in the create_contentfilteredtopic() method on the DomainParticipant.

    Proposed Resolution:
    Change the name of "filter_parameters" to "expression_parameters" for more consistency.

    Proposed Revised Text:

    Section 2.1.2.2.1 DomainParticipant Class; DomainParticipant class table
    On the row describing the operation "create_contentfilteredtopic"
    Replace parameter name "filter_ parameters"
    With parameter name "expression_ parameters"

    Section 2.1.2.2.1.7 create_contentfilteredtopic
    Last paragraph replace "filter_ parameters" with "expression_ parameters"

    Section 2.1.2.3.3 ContentFilteredTopic Class; ContentFilteredTopic class table
    On the row describing the operation "set_expression_parameters"
    Replace parameter name "filter_ parameters"
    With parameter name "expression_ parameters"

    Section 2.1.2.3.3 ContentFilteredTopic Class
    On the second bullet towards the end of the section:
    Replace "filter_ parameters" with "expression_ parameters"
    On the last paragraph just above section 2.1.2.3.3.1:
    Replace "filter_ parameters" with "expression_ parameters"

    Section 2.1.2.3.3.3 get_expression_parameters
    On the first paragraph:
    Replace "filter_ parameters" with "expression_ parameters"

    Section 2.1.2.3.3.4 set_expression_parameters
    On the first paragraph:
    Replace "filter_ parameters" with "expression_ parameters"

    Section 2.2.3 DCPS PSM : IDL
    interface DomainParticipant
    On the operation create_contentfilteredtopic
    Replace formal parameter name "filter_ parameters" with "expression_ parameters"

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Typos in built-in topic table

  • Key: DDS12-11
  • Legacy Issue Number: 9488
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    Summary:
    In the table in Section 2.1.5, for both the DCPSPublication and DCPSSubscription there is a typo in that "ownershiph" should be "ownership".
    Also, the destination_order row in the DCPSPublication should be of type "DestinationOrderQosPolicy" and not "QosPolicy".
    Also, the presentation row in the DCPSPublication should be of type "PresentationQosPolicy" and not "DestinationOrderQosPolicy".
    Also, in the paragraph at the top of the page containing the table there is a typo where "crated" should be "created".

    Proposed Resolution:
    Fix the typos.

    Proposed Revised Text:

    Section 2.1.5, 2 paragraphs above the Builtin-Topic table; at the end of the paragraph:
    Replace "crated" with "created" in the sentence:
    "application that crated them."

    Section 2.1.5 Builtin-Topic table;
    Replace DCPSPublication fieldname 'ownershiph' with 'ownership'
    Replace DCPSSubscription fieldname 'ownershiph' with 'ownership'
    Replace the type of the DCPSPublication destination_order field from "QosPolicy" to "DestinationOrderQosPolicy"
    Replace the type of the DCPSPublication presentation field from "DestinationOrderQosPolicy" to "PresentationQosPolicy"

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Fix the typos

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Clarify PARTITION QoS and its default value

  • Key: DDS12-10
  • Legacy Issue Number: 9487
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    Summary:
    In the table in Section 2.1.3, the default partition value is said to be a zero-length sequence, which "is equivalent to a sequence containing a single element consisting of an empty string", which will match any partition. However, if an empty string will match any partition, it is not consistent with normal regular expression matching.

    Proposed Resolution:
    It is desirable that, if a special partition is specified, it only matches others that have that same special partition. If the default behavior were to match all partitions, there would be no way for a newly created entity to prevent others from matching it, unless a special partition were used.
    Therefore, we should not overload the meaning of the empty string to mean matching everything. Instead, the empty string is the default partition. An empty partition sequence, or a partition sequence that consists of wildcards only, will automatically be assumed to be in the default empty-string partition.

    Proposed Revised Text:

    Section 2.1.3 Supported QoS PARTITION Table
    On the "Meaning" Column for the PARTITION QoS;

    Replace the following paragraph:

    The default value is an empty (zero-length) sequence. This is treated as a special value that matches any partition. And is equivalent to a sequence containing a single element consisting of the empty string.

    With

    The empty string ("") is considered a valid partition that is matched with other partition names using the same rules of string matching and regular-expression matching used for any other partition name (see Section 2.1.3.13)
    The default value for the PARTITION QoS is a zero-length sequence. The zero-length sequence is treated as a special value equivalent to a sequence containing a single element consisting of the empty string.
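
    For illustration, a hedged sketch of the two equivalent defaults described above (classic IDL-to-C++ mapping assumed, with the DDS 1.2 prototype that takes the QoS as an output parameter):

    DDS::PublisherQos pub_qos;
    participant->get_default_publisher_qos(pub_qos);
    // The default is a zero-length partition.name sequence, which by the
    // revised text is equivalent to explicitly naming the empty-string partition:
    pub_qos.partition.name.length(1);
    pub_qos.partition.name[0] = "";
    // "" is an ordinary partition name: it only matches remote entities whose
    // PARTITION also contains "" (or is left at the default).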

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Blocking of write() call

  • Key: DDS12-9
  • Legacy Issue Number: 9486
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    Blocking of write() call depending on RESOURCE_LIMITS, HISTORY, and RELIABILITY QoS

    Summary:
    Section 2.1.2.4.2.11 states that even writers with KEEP_LAST HISTORY QoS can block and describes some scenarios.
    Some of these scenarios may no longer be valid depending on whether the implementation is willing to sacrifice reliability.

    In the table in Section 2.1.3, it states that the max_blocking_time in the RELIABILITY QoS only applies for RELIABLE and KEEP_ALL HISTORY QoS.
    In Section 2.1.3.14 it is only mentioned that the writer can block if the RELIABILITY QoS is set to RELIABLE.

    Proposed Resolution:
    At the very least, remove mention of the requirement that the HISTORY QoS be KEEP_ALL for blocking to apply in the table in Section 2.1.3.

    Proposed Revised Text:

    Section 2.1.3 QoS Table
    On the entry for the RELIABILITY QoS max_blocking_time
    Replace:
    This setting applies only to the case where kind=RELIABLE and the HISTORY is KEEP_ALL.
    With:
    This setting applies only to the case where kind=RELIABLE.
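
    For illustration, a hedged sketch of a writer QoS to which max_blocking_time now applies even though the HISTORY kind is KEEP_LAST (classic IDL-to-C++ mapping assumed; creation of the entities is not shown):

    DDS::DataWriterQos dw_qos;
    publisher->get_default_datawriter_qos(dw_qos);
    dw_qos.reliability.kind = DDS::RELIABLE_RELIABILITY_QOS;
    dw_qos.reliability.max_blocking_time.sec = 0;
    dw_qos.reliability.max_blocking_time.nanosec = 100000000;  // 100 ms budget
    dw_qos.history.kind = DDS::KEEP_LAST_HISTORY_QOS;  // blocking is no longer
    dw_qos.history.depth = 1;                           // limited to KEEP_ALL
    // pass dw_qos to create_datawriter(); write() may now block up to 100 ms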

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Typos in PIM sections

  • Key: DDS12-17
  • Legacy Issue Number: 9494
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    Summary:
    In Section 2.1.2.4.1.10 (begin_coherent_changes) there is a typo in the last sentence of the section where "if may be useful" should be "it may be useful".
    In the second paragraph of Section 2.1.2.2.2.4 (lookup_participant) there is a typo where "multiple DomainParticipant" should be "multiple DomainParticipants".

    Proposed Resolution:
    Make the suggested corrections.

    Proposed Revised Text:

    Section 2.1.2.4.1.10 begin_coherent_changes
    Last sentence, replace:
    "if may be useful"
    With
    "it may be useful"

    Section 2.1.2.2.2.4 lookup_participant
    Second paragraph replace
    "If multiple DomainParticipant belonging"
    With
    "If multiple DomainParticipant entities belonging"

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Typos in QoS sections

  • Key: DDS12-16
  • Legacy Issue Number: 9493
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    Summary:
    In Section 2.1.3.11 (LIVELINESS QoS) the second condition for compatibility uses "=<" for less than or equal to where "<=" might be more readable.
    Also, the last paragraph states "equal or greater to" where "equal or greater than" might be more readable.
    In next to last paragraph of Section 2.1.3.14 (RELIABILITY QoS), there is a typo where "change form a newer value" should be "change from a newer value".
    In Section 2.1.3.22 (READER_DATA_LIFECYCLE QoS) the last two paragraphs mention how "view_state becomes NOT_ALIVE_xxx" where it should be the "instance_state".

    Proposed Resolution:
    Make the aforementioned changes

    Proposed Revised Text:

    Section 2.1.3.11 LIVELINESS
    Second bullet in the enumeration near the end of the section:
    Replace "offered lease_duration =< requested lease_duration"
    With "offered lease_duration <= requested lease_duration"

    Section 2.1.3.11 LIVELINESS
    Last paragraph; replace:
    "Service with a time-granularity equal or greater to the lease_duration."
    With:
    "Service with a time-granularity greater or equal to the lease_duration."

    Section 2.1.3.14 RELIABILITY
    Next to last paragraph. Replace:
    "change form a newer value"
    With:
    "change from a newer value".

    Section 2.1.3.22 READER_DATA_LIFECYCLE
    Paragraph before the last
    Replace "view_state" with "inatance_state" in:
    "maintain information regarding an instance once its view_state becomes NOT_ALIVE_NO_WRITERS."

    Section 2.1.3.22 READER_DATA_LIFECYCLE
    Last paragraph:
    Replace "view_state" with "inatance_state" in:
    "maintain information regarding an instance once its view_state becomes NOT_ALIVE_DISPOSED."

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Fix the typos.

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Consistency between RESOURCE_LIMITS QoS policies

  • Key: DDS12-8
  • Legacy Issue Number: 9485
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    Summary:
    In the description of the TIME_BASED_FILTER QoS, we are missing the description of the consistency requirements with the DEADLINE QoS, which is mentioned in the table in Section 2.1.3.
    Also, we should mention some consistency requirements between max_samples and max_samples_per_instance within the RESOURCE_LIMITS QoS.

    Proposed Resolution:
    In Section 2.1.3.12 on the TIME_BASED_FILTER QoS we should make explicit mention that the minimum_separation must be <= the period of the DEADLINE QoS.
    In both the table in Section 2.1.3 and in Section 2.1.3.22 on the RESOURCE_LIMITS QoS we should mention the consistency requirements that max_samples >= max_samples_per_instance.

    Proposed Revised Text:

    Section 2.1.3.12 TIME_BASED_FILTER;
    Add the following paragraph to the end of the section:

    The setting of the TIME_BASED_FILTER policy must be set consistently with that of the DEADLINE policy. For these two policies to be consistent the settings must be such that "deadline period>= minimum_separation." An attempt to set these policies in an inconsistent manner will cause the INCONSISTENT_POLICY status to change and any associated Listeners/WaitSets to be triggered.

    Section 2.1.3.22 RESOURCE_LIMITS
    Add the following paragraph before the last paragraph in the section:

    The setting of RESOURCE_LIMITS max_samples must be consistent with the setting of the max_samples_per_instance. For these two values to be consistent they must verify that max_samples >= max_samples_per_instance.

    Section 2.1.3.22 RESOURCE_LIMITS
    Add the following paragraph at the end of the section:

    An attempt to set these policies in an inconsistent manner will cause the INCONSISTENT_POLICY status to change and any associated Listeners/WaitSets to be triggered.
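
    For illustration, a hedged sketch of reader QoS values that satisfy both consistency rules above (classic IDL-to-C++ mapping assumed; creation of the entities is not shown):

    DDS::DataReaderQos dr_qos;
    subscriber->get_default_datareader_qos(dr_qos);
    dr_qos.deadline.period.sec = 1;                          // deadline period = 1 s
    dr_qos.deadline.period.nanosec = 0;
    dr_qos.time_based_filter.minimum_separation.sec = 0;     // 250 ms <= 1 s: consistent
    dr_qos.time_based_filter.minimum_separation.nanosec = 250000000;
    dr_qos.resource_limits.max_samples = 100;                // 100 >= 10: consistent
    dr_qos.resource_limits.max_instances = 10;
    dr_qos.resource_limits.max_samples_per_instance = 10;
    // Violating either rule would make set_qos()/create_datareader() fail
    // with INCONSISTENT_POLICY.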

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    No Data Available

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Incorrect mention of INCONSISTENT_POLICY status

  • Key: DDS12-15
  • Legacy Issue Number: 9492
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    Summary:
    In Section 2.1.3.7 concerning the DEADLINE QoS, it is stated that if the QoS is set inconsistently, i.e. period is < minimum_separation of the TIME_BASED_FILTER QoS, the INCONSISTENT_POLICY status will change and any associated Listeners/WaitSets will be triggered.
    There is no such status. Instead the set_qos() operation will error with return code INCONSISTENT_POLICY.

    Proposed Resolution:
    Mention the return code instead.

    Proposed Revised Text:

    Section 2.1.3.7 DEADLINE
    Remove the last sentence in the section:
    "An attempt to set these policies in
    an inconsistent manner will cause the INCONSISTENT_POLICY status to change and any associated Listeners/WaitSets to be triggered."

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Mention the return code instead

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Compatible versus consistency when talking about QosPolicy

  • Key: DDS12-14
  • Legacy Issue Number: 9491
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    In the third paragraph of Section 2.1.3, it is stated that "some QosPolicy values may not be compatible with other ones". In this context we are really talking about the consistency of related QosPolicies as compatibility is already a concept concerning requested/offered semantics.

    Proposed Resolution:
    Reword the sentence to use the term "consistency" which is already used later in the paragraph.

    Proposed Revised Text:

    Section 2.1.3 Supported QoS
    3rd paragraph
    Replace "compatible" with "consistent" in the sentence:
    "Some QosPolicy values may not be compatible with other ones."
    Resulting in:
    "Some QosPolicy values may not be consistent with other ones."

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

OWNERSHIP_STRENGTH QoS is not a QoS on built-in Subscriber of DataReaders

  • Key: DDS12-7
  • Legacy Issue Number: 9484
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    Summary:
    The OWNERSHIP_STRENGTH QoS only applies to DataWriters, yet it is listed in the table of the QoS of the built-in Subscriber and DataReader objects in Section 2.1.5.

    Proposed Resolution:
    Remove OWNERSHIP_STRENGTH from the aforementioned table.

    Proposed Revised Text:

    Section 2.1.5
    In the table that follows the sentence:
    The QoS of the built-in Subscriber and DataReader objects is given by the following table:
    Remove the row for 'OWNERSHIP_STRENGTH'

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Remove OWNERSHIP_STRENGTH from the aforementioned table

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Inconsistent naming in SampleRejectedStatusKind

  • Key: DDS12-6
  • Legacy Issue Number: 9483
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    Summary:
    We have REJECTED_BY_SAMPLES_LIMIT which comes from the max_samples in the ResourceLimitsQosPolicy.
    However, we have REJECTED_BY_INSTANCE_LIMIT which comes from the max_instances.

    Proposed Resolution:
    It should be named REJECTED_BY_INSTANCES_LIMIT.

    Proposed Revised Text:

    Section 2.2.3 DCPS PSM : IDL
    enum SampleRejectedStatusKind; Replace
    REJECTED_BY_INSTANCE_LIMIT
    With
    REJECTED_BY_INSTANCES_LIMIT

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    It should be named REJECTED_BY_INSTANCES_LIMIT

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Incorrect prototype for FooDataWriter method register_instance_w_timestamp()

  • Key: DDS12-13
  • Legacy Issue Number: 9490
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    Incorrect prototype for the FooDataWriter method register_instance_w_timestamp() in the PSM

    Summary:
    The handle is incorrectly listed as a parameter when it is already the return value.

    Proposed Resolution:
    Remove the incorrect handle parameter.

    Proposed Revised Text:

    Section 2.2.3 DCPS PSM : IDL
    interface FooDataWriter
    On the register_instance_w_timestamp remove the parameter
    "in DDS::InstanceHandle_t handle,"

    The resulting operation is:
    DDS::InstanceHandle_t register_instance_w_timestamp(in Foo instance_data, in DDS::Time_t source_timestamp);
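
    For illustration, a hedged usage sketch of the corrected prototype (classic IDL-to-C++ mapping assumed; 'foo_writer', 'sample' and the way the timestamp is obtained are hypothetical):

    DDS::Time_t source_timestamp;
    participant->get_current_time(source_timestamp);   // or any other time source
    DDS::InstanceHandle_t handle =
        foo_writer->register_instance_w_timestamp(sample, source_timestamp);
    // the handle is obtained only through the return value, not via a parameter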

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Remove the incorrect handle parameter.

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Cache and CacheAccess should have a common parent

  • Key: DDS12-35
  • Legacy Issue Number: 9517
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    The CacheAccess and Cache have some functional overlap. It would be nice if this overlap were migrated to a common generalization (for a good reason, see also Issue T_DLRL#3).

    Proposed Resolution:

    Introduce a new class called CacheBase that represents the common functionality. Both the Cache and the CacheAccess inherit from this common base-class.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Simplify Relation Management

  • Key: DDS12-34
  • Legacy Issue Number: 9516
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    The purpose of the DLRL has been described as being able “to provide more direct access to the exchanged data, seamlessly integrated with the native-language constructs”. This means that DLRL should offer applications an OO-view on the information model(s) they use. In this view, objects behave in the same way as ordinary, native language objects.
    Providing intuitive object access and object navigation should be key benefits of DLRL compared to plain DCPS usage, where instances and their relations need to be resolved manually. Object navigation in DLRL therefore needs to be simple and intuitive, just like navigating between objects in any ordinary native OO-language.
    It is in this aspect that DLRL falls short: object navigation is not simple and intuitive, since it requires intermediate objects (RefRelations and ObjectReferences) that abstract applications from the navigable objects. The purpose of these intermediate objects was to serve as some sort of smart pointers, which abstract applications from knowledge about the exact location and even about the existence of objects (to allow a form of lazy instantiation).
    However, since the potential benefits of smart pointer management are rather dependent on the underlying target language, the DLRL specification does not address them and only describes the effort that an application must make in the absence of any smart pointer support. This results in the following problems:
    The way in which a DLRL implementation solves pointer arithmetic is not standardized and may change from vendor to vendor and from language to language.
    When smart pointer arithmetic is not available, applications will be expected to do a great deal of extra relation management, which is outside the scope of most application programmers.

    Proposed Resolution:

    Simplify relation management by removing all intermediate relation objects from the API (Reference, Relation, RefRelation, ObjectReference, ListRelation and MapRelation). Navigation of single relations is done by going directly from ObjectRoot to ObjectRoot (simplifying the IDL object model as well). Implementations can still choose to do smart resource management (e.g. lazy instantiation), but they should do so in a fully transparent way, one that is invisible to applications.
    This approach also makes the PIM and PSM (which deviated quite a lot from each other with respect to these intermediate relation-like objects) more consistent.

    Proposed Revised Text:

    Section 3.1.5.2, 2nd paragraph, 1st sentence: “DLRL classes are linked to other DLRL classes by means of Relation Objects”. This should be replaced with “… by means of relations.”.

    Change the Object Diagram of Figure 3.4. (an alternative Object Diagram will be provided).

    Change the table immediately following Figure 3.4 by removing the ObjectReference, Reference, Relation, RefRelation, ListRelation, StrMapRelation and IntMapRelation entries from it.

    Remove the foot-note directly following this table (starting with number 1) that says: “The specification does … (lazy instantiation).”

    Section 3.1.6.3.2: Remove the sequence of ObjectReference attribute from the CacheAccess table and from the explanation below it. As a replacement, see T_DLRL#2 and T_DLRL#3.

    Section 3.1.6.3.2: Remove the deref method from the CacheAccess table and from the explanation below it.

    Section 3.1.6.3.3: Remove the sequence of ObjectReference attribute from the Cache table and from the explanation below it. As a replacement, see T_DLRL#2 and T_DLRL#3.

    Section 3.1.6.3.3: Remove the deref method from the Cache table and from the explanation below it.

    Section 3.2.1.2.1: Remove the following lines from the CacheAccess and Cache interface:
    readonly attribute ObjectReferenceSeq refs;
    ObjectRoot deref( in ObjectReference ref) raises (NotFound);

    Section 3.1.6.3.5: Remove the sequence of ObjectReference attribute from the ObjectHome table, and from the explanation below it.

    Section 3.2.1.2.1: Remove the following line from the ObjectHome interface:
    readonly attribute ObjectReferenceSeq refs;

    Section 3.1.6.3.5: Change the entire explanation of the auto_deref attribute from:
    “a boolean that indicates if ObjectReference corresponding to that type should be implicitly instantiated (TRUE) or if this action should be explicitly done by the application when needed by calling a deref operation (auto_deref). As selections act on instantiated objects (see section 3.1.6.3.7 for details on selections), TRUE is a sensible setting when selections are attached to that home.”
    to:
    “a boolean that indicates whether the state of a DLRL Object should always be loaded into that Object (auto_deref = TRUE) or whether this state will only be loaded after it has been accessed explicitly by the application (auto_deref = FALSE).”

    Section 3.1.6.3.5: Change the entire explanation of the deref_all method from:
    “ask for the instantiation of all the ObjectReference that are attached to that home, in the Cache (deref_all).”
    To:
    “ask to load the most recent state of a DLRL Object into that Object for all objects managed by that home (deref_all).”

    Section 3.1.6.3.5: Change the entire explanation of the underef_all method from:
    “ask for the removal of non-used ObjectRoot that are attached to this home (underef_all).”
    To:
    “ask to unload all object states from objects that are attached to this home (underef_all).”

    Section 3.1.6.3.6: Replace all occurrences of ObjectReference with ObjectRoot in the ObjectListener table. Also remove the second parameter of the on_object_modified method.

    Section 3.1.6.3.6: Change the explanation of on_object_created from:
    “… this operation is called with the ObjectReference of the newly created object (ref).”
    to:
    “… this operation is called with the value of the newly created object (the_object).”

    Section 3.1.6.3.6: Change the explanation of on_object_modified from:
    “This operation is called with the ObjectReference of the modified object (ref) and its old value (old_value); the old value may be NULL.”
    To:
    “This operation is called with the new value of the modified object (the_object).”

    Section 3.1.6.3.6: Change the explanation of on_object_deleted from:
    “… this operation is called with the ObjectReference of the newly deleted object (ref).”
    To:
    “… this operation is called with the value of the newly deleted object (the_object).”

    Section 3.1.6.3.10: Replace all occurrences of ObjectReference with ObjectRoot in the SelectionListener table.

    Section 3.2.1.2.1: Change in the IDL interfaces for ObjectListener and SelectionListener the following lines from:
    local interface ObjectListener {
        boolean on_object_created (
            in ObjectReference ref );
        /****
         * will be generated with the proper Foo type
         * in the derived FooListener
         * boolean on_object_modified (
         *     in ObjectReference ref,
         *     in ObjectRoot old_value);
         ****/
        boolean on_object_deleted (
            in ObjectReference ref );
    };

    local interface SelectionListener {
        /***
         * will be generated with the proper Foo type
         * in the derived FooSelectionListener
         * void on_object_in ( in ObjectRoot the_object );
         * void on_object_modified ( in ObjectRoot the_object );
         ***/
        void on_object_out ( in ObjectReference the_ref );
    };

    To:

    local interface ObjectListener {
        /****
         * will be generated with the proper Foo type
         * in the derived FooListener
         * boolean on_object_created ( in ObjectRoot the_object );
         * boolean on_object_modified ( in ObjectRoot the_object );
         * boolean on_object_deleted ( in ObjectRoot the_object );
         ****/
    };

    local interface SelectionListener {
        /***
         * will be generated with the proper Foo type
         * in the derived FooSelectionListener
         * void on_object_in ( in ObjectRoot the_object );
         * void on_object_modified ( in ObjectRoot the_object );
         * void on_object_out ( in ObjectRoot the_object );
         ***/
    };

    Section 3.2.1.2.2: Change in the IDL interfaces for ObjectListener and SelectionListener the following lines from:

    local interface FooListener: DDS::ObjectListener {
        void on_object_modified (
            in DDS::ObjectReference ref,
            in Foo old_value );
    };

    local interface FooSelectionListener : DDS::SelectionListener {
        void on_object_in ( in Foo the_object );
        void on_object_modified ( in Foo the_object );
    };

    To:

    local interface FooListener: DDS::ObjectListener {
        boolean on_object_created ( in Foo the_object );
        boolean on_object_modified ( in Foo the_object );
        boolean on_object_deleted ( in Foo the_object );
    };

    local interface FooSelectionListener : DDS::SelectionListener {
        void on_object_in ( in Foo the_object );
        void on_object_modified ( in Foo the_object );
        void on_object_out ( in Foo the_object );
    };

    Section 3.1.6.3.13: Remove the ObjectReference attribute from the ObjectRoot table, and from the explanation below it.

    Section 3.2.1.2.1: Remove the following line from the IDL in the ObjectRoot:
    readonly attribute ObjectReference ref;

    Section 3.1.6.3.13: Change the following sentence from:
    “In addition, application classes (i.e., inheriting from ObjectRoot), will be generated with a set of methods dedicated to each shared attribute:”
    To:
    “In addition, application classes (i.e., inheriting from ObjectRoot), will be generated with a set of methods dedicated to each shared attribute (including single- and multi-relation attributes):”

    Section 3.1.6.3.14 can be removed (ObjectReference).

    Section 3.2.1.2.1: Remove the following lines from the IDL:

    /*****************
     * ObjectReference
     *****************/
    struct ObjectReference {
        DLRLOid oid;
        unsigned long home_index;
    };
    typedef sequence<ObjectReference> ObjectReferenceSeq;

    Section 3.1.6.3.15 can be removed (Reference).

    Section 3.1.6.3.20 can be removed (Relation).

    Section 3.1.6.3.21 can be removed (RefRelation).

    Section 3.1.6.3.22 - Section 3.1.6.3.24 can be removed (ListRelation, IntMapRelation and StrMapRelation).

    Section 3.2.1.2.1: Remove the following lines from the IDL:

    /********************************
     * Value Bases for Relations
     ********************************/

    valuetype RefRelation {
        private ObjectReference m_ref;
        boolean is_composition();
        void reset();
        boolean is_modified ( in ReferenceScope scope );
    };

    valuetype ListRelation : ListBase {
        private ObjectReferenceSeq m_refs;
        boolean is_composition();
    };

    valuetype StrMapRelation : StrMapBase {
        struct Item {
            string key;
            ObjectReference ref;
        };
        typedef sequence <Item> ItemSeq;
        private ItemSeq m_refs;
        boolean is_composition();
    };

    valuetype IntMapRelation : IntMapBase {
        struct Item {
            long key;
            ObjectReference ref;
        };
        typedef sequence <Item> ItemSeq;
        private ItemSeq m_refs;
        boolean is_composition();
    };

    Section 3.2.1.1: 1st paragraph after the numbered list of DLRL entities, remove the following sentence: “(with the exception of ObjectReference, … , so that it can be embedded)”.

    Section 3.2.1.2.2: Change the following lines in IDL from:

    valuetype FooStrMap : DDS::StrMapRelation { // StrMap<Foo>
    …
    valuetype FooIntMap : DDS::IntMapRelation { // IntMap<Foo>

    To:

    valuetype FooStrMap : DDS::StrMap { // StrMap<Foo>
    …
    valuetype FooIntMap : DDS::IntMap { // IntMap<Foo>

    Section 3.2.2.3.1: Remove the “Ref” value from the allowed list of patterns, so change the templateDef. The templateDef then changes from:
    <!ATTLIST templateDef name CDATA #REQUIRED
    pattern (List | StrMap | IntMap | Ref) #REQUIRED
    itemType CDATA #REQUIRED>

    To (see also Issues T_DLRL#7 and T_DLRL#8):

    <!ATTLIST templateDef name CDATA #REQUIRED
    pattern (Set | StrMap | IntMap) #REQUIRED
    itemType CDATA #REQUIRED>

    Section 3.2.2.3.2.3, 2nd bullet: Remove the “Ref” pattern from the list of supported constructs.

    Section 3.2.3.2: Replace the forward valuetype declaration for RadarRef with a forward declaration of type Radar, so change from:
    valuetype RadarRef // Ref<Radar>
    To:
    valuetype Radar;

    Section 3.2.3.3: Remove the following line from the XML (in both XML examples):
    “<templateDef name=“RadarRef”
    pattern=“Ref” itemType=“Radar”/>”

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Object State Transitions of Figure 3-5 and 3-6 should be corrected

  • Key: DDS12-39
  • Legacy Issue Number: 9521
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Object State Transitions of Figure 3-5 and 3-6 should be corrected and simplified

    Summary:

    The state transition diagrams in Figure 3-5 and 3-6 are difficult to understand, and the 2nd diagram of Figure 3-5 is missing. (Instead of this 2nd diagram, the first diagram of Figure 3-6 has wrongly been duplicated here).
    Furthermore, since it is difficult to distinguish between primary and secondary Objects and their primary and secondary states, it would be nice if more intuitive names and states could be used instead.
    Finally, some of the possible conditions in which a state transition can occur are not mentioned in these state transition diagrams, which would even require for them to become more complex.

    Proposed Resolution:

    Introduce new names for the different states, and try to re-use the same set of states for each diagram. We propose not to speak about primary and secondary objects, but to speak about Cache Objects (located in a Cache) and CacheAccess objects (located in a CacheAccess). Furthermore, we propose not to speak about primary and secondary states, but to speak about a READ state (with respect to incoming modifications) and a WRITE state (with respect to local modifications).
    Decoupling Objects in the Cache from Objects in a CacheAccess makes the idea of what a Cache or CacheAccess represents more understandable. The Cache represents the global Object states as accepted by the System, a READ_ONLY CacheAccess represents a temporary state of a Cache, and a READ_WRITE or WRITE_ONLY CacheAccess represents the state of what the user intends the system to do in the future.
    Since a Cache then only represents the global state of the system (and not what the user intends to do), it does not have a WRITE state (it will be VOID). A READ_ONLY CacheAccess also has no WRITE state (VOID), but a WRITE_ONLY CacheAccess has no READ state (VOID). A READ_WRITE CacheAccess has both a WRITE and a READ state: the WRITE state represents what the user has modified but not yet committed, and the READ state represents what the system has modified during its last update.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Introduce the concept of cloning contracts consistently in specification

  • Key: DDS12-38
  • Legacy Issue Number: 9520
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:

    The specification states that it is possible to clone an Object from the primary Cache into a CacheAccess, together with its related or contained objects for a specified navigable depth. (We will refer to such an Object tree as a cloning contract from now on). However, while the cloning of objects is done on contract level, the deletion of clones is done on individual object level. What should happen to related objects when the top level object is deleted? Furthermore, it is unclear what the result should be when a relationship from an object A to an object B is changed so that A now refers to an object C. Should the next refresh of the CacheAccess only refresh the states of objects A and B, or should object C be added and object B be removed from the CacheAccess?

    Proposed Resolution:

    Formally introduce the concept of a cloning contract into the API to replace all other clone-related methods. Cloning contracts are defined on the CacheAccess and are evaluated when the CacheAccess is refreshed.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

ObjectExtent and ObjectModifier can be removed

  • Key: DDS12-37
  • Legacy Issue Number: 9519
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    The ObjectExtent is a manager for a set of objects. Basically it is a wrapper that offers functions to modify its contents and to create a sub-set based on a user-defined function. The problem with using an Extent is that it overlaps with the get_objects method introduced in issue T_DLRL#3, and that it is not clear whether a new Extent should be allocated each time the user obtains it from the ObjectHome, or whether the existing Extent should be re-used and therefore its contents be overwritten with every update.
    Furthermore, every application can easily write its own code that modifies every element in this sequence (no specialized ObjectModifier is required for that, a simple for-loop can do the trick), and similarly an application can also write code to filter each element and to store matching results in another sequence. Filtering and modifying objects in this way is really business logic, and does not have to be part of a Middleware specification.

    Proposed Resolution:

    Remove the ObjectModifier and ObjectExtent from the specification. This saves two implied interfaces that are not required for most types of applications, but which can still be solved very well at application level. Replace the extent on the ObjectHome with a sequence of ObjectRoots.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Object notification in manual update mode required

  • Key: DDS12-36
  • Legacy Issue Number: 9518
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    The DLRL offers two different update modes for its Primary Cache: an automatic mode in which object creations, updates and deletions are pushed into the Cache, and a manual mode in which the Cache contents are refreshed on user demand.
    From the perspective of a Cache user, it is important to find out what has happened to the contents of the Cache during the latest update session. In automatic update mode, Listeners are triggered for each Object creation, modification or deletion in the primary Cache. However, when the Cache is in manual update mode none of these Listeners are triggered and no means exist to examine what has happened during the last update round. The same can be said for the CacheAccess, which does not have an automatic update mode and has no means either to examine the changes that were applied during the last invocation of the “refresh” method.

    Proposed Resolution:

    We therefore propose to add some extra methods to the ObjectHome that allow an application to obtain the list of Objects that have been created, modified or deleted in the latest update round of a specific CacheBase.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Operation dispose_w_timestamp() should be callable on unregistered instance

  • Key: DDS12-26
  • Legacy Issue Number: 9503
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    In Section 2.1.2.4.2.14 (dispose_w_timestamp) it states that the operation will return PRECONDITION_NOT_MET if called on an instance that has not yet been registered. This is not true as the operation will implicitly register the instance just as write does. This restriction was also originally in 2.1.2.4.2.13 (dispose) but has already been removed.

    Proposed Resolution:
    Remove the offending paragraph.

    Proposed Revised Text:
    Section 2.1.2.4.2.14 dispose_w_timestamp
    Remove the last two paragraphs , that is the text starting from "The operation must be only called on registered instances." till the end of the section.

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Clarify valid handle when calling write()

  • Key: DDS12-25
  • Legacy Issue Number: 9502
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    In Section 2.1.2.4.2.11 the write() operation will return PRECONDITION_NOT_MET if the handle is "valid but does not correspond to the given instance". Further, it goes on to state that "in the case the handle is invalid, the behavior is in general unspecified, but if detectable by a DDS implementation, the returned error-code will be 'BAD_PARAMETER'." We should clarify what is "valid" versus "invalid".
    Valid means the handle corresponds to a registered instance.

    Proposed Resolution:
    Clarify that valid means the handle corresponds to a registered instance.
    When the handle is valid but does not correspond to the given instance, it should be up to the implementation whether or not it can detect this.

    Proposed Revised Text:

    Section 2.1.2.4.2.11 write
    Remove the last paragraph that reads "In case the provided handle is valid"

    Add a new paragraph directly following the one that reads "If handle is any value other than HANDLE_NIL" as follows:
    In case the provided handle is valid, i.e. corresponds to an existing instance, but does not correspond to same instance referred by the 'data' parameter, the behavior is in general unspecified, but if detectable by the Service implementation, the return error-code will be 'PRECONDITION_NOT_MET'. In case the handle is invalid, the behavior is in general unspecified, but if detectable the returned error-code will be 'BAD_PARAMETER'.

    Section 2.1.2.4.2.13 dispose
    Replace the next to last paragraph that reads "In case the provided handle is valid."
    With the same paragraph above:
    In case the provided handle is valid, i.e. corresponds to an existing instance, but does not correspond to same instance referred by the 'data' parameter, the behavior is in general unspecified, but if detectable by the Service implementation, the return error-code will be 'PRECONDITION_NOT_MET'. In case the handle is invalid, the behavior is in general unspecified, but if detectable the returned error-code will be 'BAD_PARAMETER'.

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Corrections to Figure 2-19

  • Key: DDS12-33
  • Legacy Issue Number: 9511
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    In Figure 2-19 in Section 2.1.4.4 (Conditions and Wait-sets):
    There is no such delete_statuscondition() operation on the Entity.
    The ReadCondition should have a view_state_mask and an instance_state_mask instead of a lifecycle_state_mask.

    Proposed Resolution:
    Make the suggested corrections.

    Proposed Revised Text:
    Section 2.1.4.4 Conditions and Wait-sets
    In Figure 2-19
    Remove "delete_statuscondition()" from the operations listed on the Entity.
    Remove "lifecycle_state_mask [*] : ViewStateKind" from the attributes listed on the ReadCondition.
    Add "view_state_mask [*] : ViewStateKind" and "instance_state_mask [*] : InstanceStateKind" to the end of the attributes listed on the ReadCondition.

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Make the suggested corrections

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Non intuitive constant names

  • Key: DDS12-32
  • Legacy Issue Number: 9510
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    The following literals are defined:
    DURATION_INFINITY_SEC
    DURATION_INFINITY_NSEC
    TIMESTAMP_INVALID_SEC
    TIMESTAMP_INVALID_NSEC

    These are incorrectly named and should be:
    DURATION_INFINITE_SEC
    DURATION_INFINITE_NSEC
    TIME_INVALID_SEC
    TIME_INVALID_NSEC

    Proposed Resolution:
    Add the correct names.

    Proposed Revised Text:

    Section 2.2.3 DCPS PSM : IDL

    Replace:
    const long DURATION_INFINITY_SEC = 0x7fffffff;
    const unsigned long DURATION_INFINITY_NSEC = 0x7fffffff;
    const long TIMESTAMP_INVALID_SEC = -1;
    const unsigned long TIMESTAMP_INVALID_NSEC = 0xffffffff;

    With:
    const long DURATION_INFINITE_SEC = 0x7fffffff;
    const unsigned long DURATION_INFINITE_NSEC = 0x7fffffff;
    const long TIME_INVALID_SEC = -1;
    const unsigned long TIME_INVALID_NSEC = 0xffffffff;

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Use the correct names

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Example in 2.1.4.4.2 not quite correct

  • Key: DDS12-31
  • Legacy Issue Number: 9509
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    In Section 2.1.4.4.2 (Trigger State of the ReadCondition) the last paragraph describes an example. However, it is not quite true because reading samples belonging to the latest generation will cause the view_state to become NOT_NEW.
    For the example considered, it may not be necessary to specify the view_state, since it is not strictly needed for the condition to trigger when a new sample arrives, given that all other samples were previously at least read.

    Proposed Resolution:
    Remove mention of the view_state.

    Proposed Revised Text:
    Section 2.1.4.4.2 Trigger State of the ReadCondition
    In the last paragraph, change the sentence from
    "A ReadCondition that has a sample_state_mask =

    {NOT_READ}, view_state_mask = {NEW} will have trigger_value of TRUE whenever a new sample arrives and will transition to FALSE as soon as all the NEW samples are either read or taken."


    To
    "A ReadCondition that has a sample_state_mask = {NOT_READ}

    will have trigger_value of TRUE whenever a new sample arrives and will transition to FALSE as soon as all the new samples are either read or taken. "

    Section 2.1.4.4.2 Trigger State of the ReadCondition
    In that last paragraph change the last sentence from
    "that would only change the SampleState to READ but the sample would still have (SampleState, ViewState) = (READ, NEW) which overlaps the mask on the ReadCondition".
    To
    "that would only change the SampleState to READ which still overlaps the mask on the ReadCondition".

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Remove mention of the view_state.

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Typo in copy_from_topic_qos

  • Key: DDS12-28
  • Legacy Issue Number: 9505
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    In Section 2.1.2.5.2.17 there is a typo in the last paragraph where "datawriter_qos" should be "datareader_qos".

    Proposed Resolution:
    Correct the typo.

    Proposed Revised Text:
    Section 2.1.2.5.2.17 copy_from_topic_qos
    Replace "datawriter_qos" with "datareader_qos" in the first sentence of the last paragraph that currently reads "This operation does not check the resulting datawriter_qos for consistency".

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Correct the typo

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Behavior of dispose with regards to DURABILITY QoS

  • Key: DDS12-27
  • Legacy Issue Number: 9504
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    In Section 2.1.2.4.2.13 (dispose) it states "in case the DURABILITY QoS policy is TRANSIENT or PERSISTENT, the Service should take care to clean anything related to that instance so that late-joining applications would not see it".
    Is this really necessary? Is it not acceptable to allow late-joining readers to see an instance with the NOT_ALIVE_DISPOSED instance state?
    Does this also apply to TRANSIENT_LOCAL?

    We think disposed instances should be propagated to newly discovered applications, otherwise there would be no way to enforce ownership of a disposed instance.
    Furthermore, the application should be notified of disposed instances even if this is the first time the middleware sees the instance, because in practice there is no way for the middleware to tell if the application has seen the instance already; for example, following a network partition the middleware may have notified of NOT_ALIVE_NO_WRITERS and, following the application taking all the samples, it could have reclaimed the information on that instance, so when it sees it again it thinks it is the first time; the application meanwhile could still have information on that instance…
    So the use case where a newly joining reader does not want to receive instances that have been disposed before it joined should be handled on the writer side by either explicitly unregistering the instances, or having some new QoS that auto-unregisters disposed instances.

    Another issue is whether the act of disposing on the writer side should automatically remove previous samples for that instance, and whether that is done for particular values of the HISTORY (e.g. when it is KEEP_LAST only, or KEEP_LAST with depth==1, or, even for KEEP_ALL). Seems like the control of this should be another QoS on the WRITER_LIFECYCLE.

    Proposed Resolution:
    For now eliminate the following text from Section 2.1.2.4.2.13 (dispose)
    "In case the DURABILITY QoS policy is TRANSIENT or PERSISTENT, the Service should take care to clean anything related to that instance so that late-joining applications would not see it".

    Proposed Revised Text:
    Section 2.1.2.4.2.13 dispose
    Remove the paragraph:
    In addition, in case the DURABILITY QoS policy is TRANSIENT or PERSISTENT, the Service should take care to clean anything related to that instance so that late-joining applications would not see it.
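
    The writer-side alternative mentioned in the summary can be sketched in C++ as follows (hypothetical 'Foo'/'FooDataWriter' types, classic mapping assumed):

    // Dispose the instance, then explicitly unregister it so the writer no
    // longer maintains (or propagates) state for that instance.
    writer->dispose(sample, handle);
    writer->unregister_instance(sample, handle);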

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

delete_contained_entities() on the Subscriber

  • Key: DDS12-22
  • Legacy Issue Number: 9499
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    Should delete_contained_entities() on the Subscriber (and even the DataReader or DomainParticipant) be allowed to return PRECONDITION_NOT_MET?

    Summary:
    As described in Section 2.1.2.5.2.6, delete_datareader() can return PRECONDITION_NOT_MET if there are any outstanding loans. In a similar fashion, should we allow delete_contained_entities() on the Subscriber (and even the DataReader or DomainParticipant for that matter) to also return PRECONDITION_NOT_MET in this situation?

    Proposed Resolution:
    Return PRECONDITION_NOT_MET when delete_contained_entities() is called on either the DataReader, Subscriber, or DomainParticipant when a DataReader has outstanding loans.

    Proposed Revised Text:

    Section 2.1.2.2.1.18 delete_contained_entities
    Before the paragraph that starts with "Once delete_contained_entities returns successfully," add the paragraph
    The operation will return PRECONDITION_NOT_MET if any of the contained entities is in a state where it cannot be deleted.

    Section 2.1.2.4.1.14 delete_contained_entities
    Before the paragraph that starts with "Once delete_contained_entities returns successfully," add the paragraph
    The operation will return PRECONDITION_NOT_MET if any of the contained entities is not in a state where it can be deleted.

    Section 2.1.2.5.2.14 delete_contained_entities
    Before the paragraph that starts with "Once delete_contained_entities returns successfully," add the paragraph
    The operation will return PRECONDITION_NOT_MET if any of the contained entities is not in a state where it can be deleted. This will occur, for example, if a contained DataReader cannot be deleted because the application has called a read or take operation and has not called the corresponding return_loan operation to return the loaned samples.

    Section 2.1.2.5.3.30 delete_contained_entities
    Before the paragraph that starts with "Once delete_contained_entities returns successfully," add the paragraph
    The operation will return PRECONDITION_NOT_MET if any of the contained entities is not in a state where it can be deleted.
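
    A minimal C++ sketch of the outstanding-loan case described above (classic mapping; 'FooSeq' and 'FooDataReader' are hypothetical generated types):

    FooSeq data_seq;
    DDS::SampleInfoSeq info_seq;
    reader->take(data_seq, info_seq, DDS::LENGTH_UNLIMITED,
                 DDS::ANY_SAMPLE_STATE, DDS::ANY_VIEW_STATE, DDS::ANY_INSTANCE_STATE);

    // With the loan still outstanding, deletion is expected to fail:
    DDS::ReturnCode_t rc = subscriber->delete_contained_entities();
    // rc == DDS::RETCODE_PRECONDITION_NOT_MET

    reader->return_loan(data_seq, info_seq);      // give the loaned samples back
    rc = subscriber->delete_contained_entities(); // now the deletion can succeed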

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Need INVALID_QOS_POLICY_ID

  • Key: DDS12-24
  • Legacy Issue Number: 9501
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    The Requested/OfferedIncompatibleQosStatus contains the last_policy_id and we need to set this to something in case no QoS policy has ever been incompatible.

    Proposed Resolution:
    Add "const QosPolicyId_t INVALID_QOS_POLICY_ID = 0;" to the PSM.

    Proposed Revised Text:
    Section 2.2.3 DCPS PSM : IDL
    In the Qos section add the following to the list of QosPolicyId_t:
    const QosPolicyId_t INVALID_QOS_POLICY_ID = 0;
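
    A hedged C++ illustration of why the sentinel is useful (assuming the prototype where the status is returned through an out parameter):

    DDS::OfferedIncompatibleQosStatus status;
    writer->get_offered_incompatible_qos_status(status);

    if (status.last_policy_id == DDS::INVALID_QOS_POLICY_ID) {
        // no QoS policy has ever been found incompatible for this writer
    } else {
        // status.last_policy_id identifies the most recent offending policy
    }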

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Add "const QosPolicyId_t INVALID_QOS_POLICY_ID = 0;" to the PSM

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Return of get_matched_XXX_data()

  • Key: DDS12-23
  • Legacy Issue Number: 9500
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    In get_matched_subscription_data, we return PRECONDITION_NOT_MET in this situation. However, in get_matched_publication_data, we return BAD_PARAMETER. Previously, they were both returning PRECONDITION_NOT_MET.
    In addition, in both sections the text "The operation get_matched_XXXs to find the XXXs that are currently matched" should probably read "can be used to find".

    Proposed Resolution:
    Make it consistent by returning BAD_PARAMETER in both.

    Proposed Revised Text:

    Section 2.1.4.2.23 get_matched_subscription_data
    In the first sentence of the second paragraph, replace
    "the operation will fail and return PRECONDITION_NOT_MET."
    With "the operation will fail and return BAD_PARAMETER."

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Make it consistent by returning BAD_PARAMETER in both

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Operation wait() on a WaitSet should return TIMEOUT

  • Key: DDS12-30
  • Legacy Issue Number: 9508
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    Currently TIMEOUT is not a specified valid return code for the wait() operation. The specification explicitly states that timeout is conveyed by returning OK with an empty list of conditions. We should consider adding TIMEOUT as an explicit valid return value.

    Proposed Resolution:
    Add TIMEOUT as a valid return code to wait().

    Proposed Revised Text:
    Section 2.1.2.1.6.3 wait

    In the next to last paragraph, replace
    "If the duration is exceeded, wait will also return with the return code OK. In this case, the resulting list of conditions will be empty."
    With
    "If the duration is exceeded, wait will return with return code TIMEOUT."

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Add TIMEOUT as a valid return code to wait().

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Typo in get_discovered_participant_data

  • Key: DDS12-29
  • Legacy Issue Number: 9507
  • Status: closed  
  • Source: Real-Time Innovations ( Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:

    In Section 2.1.2.2.1.28 there is a typo in the next to last paragraph where "get_matched_participants" should be "get_discovered_participants".

    Proposed Resolution:
    Correct the typo.

    Proposed Revised Text:
    Section 2.1.2.2.1.28 get_discovered_participant_data
    In the next to last paragraph replace "get_matched_participants" with "get_discovered_participants" where it currently reads "Use the operation get_matched_participants to find ".

  • Reported: DDS 1.1 — Sun, 2 Apr 2006 05:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Correct the typo.

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Support sequences of primitive types in DLRL Objects

  • Key: DDS12-52
  • Legacy Issue Number: 9534
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:

    The current Metamodel explains the different BasicTypes that are supported in DLRL. Although in DCPS sequences are supported for all primitive types, the DLRL states that the only sequences that can be supported are sequences of octet.

    Proposed Resolution:

    Explicitly state that the DLRL supports sequences of all supported primitive types.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Explicitly state that the DLRL supports sequences of all supported primitive types

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Clarify which Exceptions exist in DLRL and when to throw them

  • Key: DDS12-51
  • Legacy Issue Number: 9533
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:

    The DLRL PSM specifies a number of Exceptions, but these are not explained in the PIM, and they do not cover the entire range of all possible errors.

    Proposed Resolution:

    Make an extensive list of all possible Exceptions and explain them in the PIM as well.
    Add a String message to the exception that can give more details about the context of the exception.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Make the ObjectFilter and the ObjectQuery separate Selection Criterions

  • Key: DDS12-43
  • Legacy Issue Number: 9525
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:

    In the current specification, the ObjectQuery inherits from the ObjectFilter, making it an ObjectFilter as well. That means that performing Queries can no longer be delegated to the DCPS, since the Selection invokes the check_object method on the ObjectFilter for that purpose.

    Proposed Resolution:

    Make the ObjectFilter and the ObjectQuery separate classes with a common parent called SelectionCriterion. A SelectionCriterion can then be attached to a Selection, which will either invoke the check_object method in case of a Filter, or delegate the Query to DCPS in case of a Query.
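
    A rough C++ sketch of the proposed split (illustrative class shapes only; the set_query signature is an assumption, and the actual DLRL operations may differ):

    #include <string>
    #include <vector>

    class ObjectRoot;                            // DLRL object base class, declared elsewhere

    // Common parent for both criterion kinds.
    class SelectionCriterion {
    public:
        virtual ~SelectionCriterion() {}
    };

    // Evaluated locally by the Selection via check_object().
    class ObjectFilter : public SelectionCriterion {
    public:
        virtual bool check_object(const ObjectRoot& an_object) = 0;
    };

    // Expressed as a query so the Selection can delegate evaluation to DCPS.
    class ObjectQuery : public SelectionCriterion {
    public:
        virtual void set_query(const std::string& expression,
                               const std::vector<std::string>& parameters) = 0;
    };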

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Add the Set as a supported Collection type

  • Key: DDS12-42
  • Legacy Issue Number: 9524
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:

    In many applications there is a need for an unordered Collection without keys.

    Proposed Resolution:

    Add the Set as a supported Collection type in DLRL.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Harmonize Collection definitions in PIM and PSM

  • Key: DDS12-41
  • Legacy Issue Number: 9523
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:

    The Collection definitions are very different between the PIM and the PSM.

    Proposed Resolution:

    Use corresponding Collection definitions in PIM and PSM. Make a strict separation in the IDL between typed operations (to be implemented in the typed specializations, but to be mentioned in the untyped parents) and untyped operations (to be implemented in the untyped parents). Also remove methods that have a functional overlap with other methods.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Add Iterators to Collection types

  • Key: DDS12-40
  • Legacy Issue Number: 9522
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:

    It would be nice to have an iterator for Collection types to be able to iterate through the entire Collection. For Maps there should be iterators for both the keys and the values.

    Proposed Resolution:

    Add an abstract Iterator class to the DLRL, which has typed implementations to access the underlying data.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Issue was subsequently withdrawn from the RTF by the submitters of the issue

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Representation of OID should be vendor specific

  • Key: DDS12-48
  • Legacy Issue Number: 9530
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:

    The OID currently consists of two numbers: a creator_id and a local_id. The philosophy is that each writer should obtain its own unique creator_id, and can then sequence number each object created with it to obtain unique object identifiers. The specification does not specify how the writers should obtain their unique creator_id. Building a mechanism to distribute unique OIDs requires knowledge about the underlying system characteristics, and this information is only available in DCPS.

    Proposed Resolution:

    Make the definition of the OID vendor specific. This allows a vendor to specify its own algorithms to guarantee that each object has got a unique identifier.
    The only location where the application programmer actually has to know the contents of the OID is in the create_object_with_oid method on the ObjectHome. However, we see no use-case for this method and propose to remove it.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Add Listener callbacks for changes in the update mode

  • Key: DDS12-47
  • Legacy Issue Number: 9529
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:

    The CacheListener currently supports only two call-backs, to signify the start and end of an update round. However, because listeners are only used in the enabled update mode, it is important that the listeners be notified when the DLRL switches between the enabled and disabled update modes: the switch does not necessarily originate from the thread that registered the listener, and the fact that updates are enabled or disabled is a major event that the listeners should know about.

    Proposed Resolution:

    Add two methods to the CacheListener interface, one for signalling a switch to automatic update mode, and one for signalling a switch to manual update mode.
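
    A hedged C++ sketch of what the extended listener interface might look like; the callback names below are assumptions made for illustration only:

    class CacheListener {
    public:
        virtual ~CacheListener() {}

        // existing callbacks marking the start and end of an update round
        virtual void on_begin_updates() = 0;
        virtual void on_end_updates()   = 0;

        // proposed additions: notification of a change of update mode
        virtual void on_updates_enabled()  = 0;  // switched to automatic (enabled) updates
        virtual void on_updates_disabled() = 0;  // switched to manual (disabled) updates
    };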

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Merge find_object with find_object_in_access

  • Key: DDS12-50
  • Legacy Issue Number: 9532
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:

    Currently there are separate methods to find a specific object based on its OID in the Cache and in a CacheAccess. It would be nice to have one method to search for an Object in any CacheBase.

    Proposed Resolution:

    Add a CacheBase parameter to the find_object method and remove the find_object_in_access method.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

define both the Topic name and the Topic type_name separately

  • Key: DDS12-49
  • Legacy Issue Number: 9531
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    XML mapping file does not allow you to define both the Topic name and the Topic type_name separately

    Summary:

    In the DCPS, there is a clear distinction between a topic name and a topic type (both names must be provided when creating a Topic). However, the DLRL mapping XML only allows us to specify one name attribute, which is called ‘name’. It is unclear whether this name should identify the type name or the topic name. Currently we just have to assume that the topic name and type name are always chosen to be equal, but that does not have to be the case in a legacy topic model.

    Proposed Resolution:

    Add a second (optional) attribute to the mainTopic, extensionTopic, placeTopic and multiPlaceTopic that identifies the type name. If left out, the type name is assumed to be equal to the topic name.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Specification does not state how to instantiate an ObjectHome

  • Key: DDS12-54
  • Legacy Issue Number: 9536
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:

    There is no (default) constructor specified for the ObjectHome class. Nowhere in the specification is it stated how an ObjectHome should be instantiated and what the default values will be for auto_deref and for the filter expression.

    Proposed Resolution:

    Explicitly state that the default constructor should be used to instantiate an ObjectHome. Also state that by default the value of auto_deref will be set to true, and the filter expression will be set to NULL. Setting auto_deref to true by default ensures that the application developer has to make the conscious decision to set the auto_deref functionality to false for performance gain, which is more natural than the other way around.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

manual mapping key-fields of registered objects may not be changed

  • Key: DDS12-53
  • Legacy Issue Number: 9535
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Indicate that in case of manual mapping key-fields of registered objects may not be changed

    Summary:

    When using the DLRL with a pre-defined mapping, key fields of the topic can be mapped to ordinary attributes of a DLRL object. However, changing these attributes on the DLRL object results in a change of identity on DCPS.

    Proposed Resolution:

    Do not allow attributes that are mapped to key fields in the underlying Topic to be modified after the DLRL object has been registered. Throw a PreconditionNotMet Exception if this rule is violated.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    No Data Available

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Remove lock/unlock due to overlap with updates_enabled

  • Key: DDS12-46
  • Legacy Issue Number: 9528
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:

    It is not clear why we should need a lock/unlock on the Cache when we can turn on and off the automatic updates. If an application does not want to be interrupted by incoming updates, it can simply disable the automatic updates, re-enabling them afterwards.

    Proposed Resolution:

    Remove the lock and unlock methods of the Cache.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    Remove the lock and unlock methods of the Cache

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Make update rounds uninterruptable

  • Key: DDS12-45
  • Legacy Issue Number: 9527
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Summary:

    According to the current specification, it is possible to interrupt an update round by invoking the disable_update method in the middle of such an update round. This makes no sense, since it can leave the Cache in an undefined and possibly inconsistent state. The specification also does not explain how to recover from such a state.

    Proposed Resolution:

    Make sure that the automatic update mode can never be changed while in the middle of an update round. This way, update rounds can never be interrupted and the Cache will always be in a consistent state. This also removes the need for the interrupted and update_round parameters in the callback methods of the CacheListener.
    Also remove the related_cache parameter from the CacheListener, since it is not needed and is also missing in the IDL.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Raise PreconditionNotMet when changing filter expression on registered ObjectHome

  • Key: DDS12-55
  • Legacy Issue Number: 9537
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    Raise a PreconditionNotMet when changing a filter expression on a registered ObjectHome

    Summary:

    ObjectHome contains a set_filter method to set the filter attribute. This method may only be called before an ObjectHome is registered. However, the only exception that is thrown is the BadParameter exception. We believe this exception does not cover the case where set_filter is called after the ObjectHome has been registered, as BadParameter is not a good description of the error that should be generated then.

    Proposed Resolution:

    Raise a PreconditionNotMet Exception when the set_filter method is invoked after the ObjectHome has been registered to a Cache.

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT

Add a static initializer operation to the CacheFactory

  • Key: DDS12-44
  • Legacy Issue Number: 9526
  • Status: closed  
  • Source: THALES ( Virginie Watine)
  • Summary:

    From the current DLRL specification it is not clear how to obtain your initial CacheFactory.

    Proposed Resolution:

    Add a static get_instance method to make the CacheFactory a singleton, just like we did for the DomainParticipantFactory in the DCPS.
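
    A minimal C++ sketch of the singleton accessor being proposed (illustrative only; the concrete DLRL language mapping may differ):

    class CacheFactory {
    public:
        // Static accessor returning the single CacheFactory instance,
        // analogous to DomainParticipantFactory::get_instance() in DCPS.
        static CacheFactory* get_instance() {
            static CacheFactory instance;        // created on first use
            return &instance;
        }

    private:
        CacheFactory() {}                        // prevent direct construction
        CacheFactory(const CacheFactory&);       // non-copyable
    };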

  • Reported: DDS 1.1 — Mon, 3 Apr 2006 04:00 GMT
  • Disposition: Resolved — DDS 1.2
  • Disposition Summary:

    see above

  • Updated: Fri, 6 Mar 2015 20:58 GMT