IDL 4.1 RTF
  OMG Issue

IDL41 — The IDL4 spec syntax for bitsets departed from the one used in DDS-XTYPES

  • Key: IDL41-5
  • Status: closed
  • Source: Real-Time Innovations (Dr. Gerardo Pardo-Castellote, Ph.D.)
  • Summary:


    The IDL4 spec syntax for bitsets departed from the one used in DDS-XTYPES.


    In DDS XTYPES a bitset is defined like an enumeration with some extra annotations. For example:

    @BitSet @BitBound(16)
    enum MyBitSet {
        @Value(7) FLAG_7,
        @Value(15) FLAG_LAST
    };

    This defined the total number of bits in the bitset (16) and provided symbolic names for the bits (FLAG_0), ... It also allowed the bit assignment to be defaulted (e.g. FLAG_0, FLAG_1) or explicitly assigned.

    It is unclear how to do this in IDL4.

    On the one hand one could write

    struct MyBitSet {
        bitset<1> FLAG_0;
        bitset<1> FLAG_1;
        bitset<1> FLAG_7;
        bitset<1> FLAG_LAST;
    };

    This would indicate each flag uses one bit. But it would not constrain the size of the overall bitset, nor the bit values for the flags.

    One could also say:

    struct MyBitSet {
        bitset<1, 16> FLAG_0;
        bitset<1, 16> FLAG_1;
        bitset<1, 16> FLAG_7;
        bitset<1, 16> FLAG_LAST;
    };

    Here the second integer would specify that the representation should use a 16-bit integer. But nowhere is it stated that all these flags will be reusing the same 16-bit integer...

    Bitsets can also appear inside other structures, as in:

    struct MyStruct {
        bitset<1> foo1;
        bitset<3> foo2;
        bitset<4> foo3;
        long      anotherMember;
        bitset<1> bar1;
        bitset<1> bar2;
    };

    The spec states that bitsets that appear sequentially should be packed together. This seems non-intuitive and brittle... In fact, the spec recommends not mixing bitsets with other types, for readability...

    This syntax seems a bit arbitrary and is also hard for an IDL compiler to process. The XTYPES approach was simpler. If the goal was to avoid "re-purposing" the "enum", then IDL4 could have defined a new keyword but still used a syntax/approach similar to XTYPES, as in:

    bitset MyBitSet {
        @value(0) FLAG_0;
        @value(1) FLAG_1;
        @value(7) FLAG_7;
        @value(15) FLAG_LAST;
    };

    // OR

    bitset MyBitSet {
        FLAG_0 = 0;
        FLAG_1 = 1;
        FLAG_7 = 7;
        FLAG_LAST = 15;
    };

    This is less ambiguous, as it clearly separates the size of the bitset from the actual flag names used for each bit. It also allows the user not to bother assigning explicit values to each bit. Finally, the syntax is closer to that of an enumeration, so it is much easier for IDL parsers to handle.

  • Reported: IDL 4.0 — Wed, 10 Aug 2016 15:15 GMT
  • Disposition: Resolved — IDL 4.1
  • Disposition Summary:

    Redesign of bitsets

    In the previous version of IDL4, bit sets were created to serve two main purposes in a single construct, namely:

    • Bit fields (à la C/C++)
    • Named accesses to individual bits (as in XTypes)

    The rationale for that "all-in-one" approach was to avoid introducing too many new IDL keywords. However, the result was deemed difficult to understand and to implement (notably because a struct then had different meanings and implementations depending on whether or not it contained bit fields) and not fully compliant with what was designed in XTypes.

    The proposal is thus to separate those two things and to design two different constructs accordingly.
    The consequence is that it introduces two new keywords (bitfield and bitmask) in addition to the existing one (bitset), which is not that annoying, as they should not collide too often with existing identifiers (and if they do, IDL provides a mechanism to mitigate the collision).
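    For illustration, the two separated constructs look roughly as follows under the adopted resolution (the type names are illustrative; annotation spellings follow the adopted `@bit_bound`/`@position` style, so treat this as a sketch rather than normative spec text):

    // Bit fields (à la C/C++): a bitset groups bitfield members,
    // each declared with an explicit width in bits.
    bitset MyBitSet {
        bitfield<3> flags;          // 3 bits
        bitfield<1> enabled;        // 1 bit
        bitfield<4, short> level;   // 4 bits, mapped to a short
    };

    // Named access to individual bits (as in XTypes): a bitmask
    // names bit positions, much like an enumeration names values.
    @bit_bound(16)
    bitmask MyBitMask {
        @position(0)  FLAG_0,
        @position(7)  FLAG_7,
        @position(15) FLAG_LAST
    };

    This directly addresses the original complaint: the bitmask form keeps the enum-like XTypes style (total size via the bound annotation, symbolic names with optional explicit positions), while the bitset form covers C/C++-style bit fields without overloading struct.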

  • Updated: Thu, 6 Apr 2017 13:50 GMT