Issue 3516: following question regarding modifications to CORBA core
Issue 3747: define the State as typedef any State
Issue 3778: Issue with 'factory'
Issue 3856: Propose Remove use of Filter
Issue 3908: Encoding of Service Contexts in Fault Tolerant CORBA specification missing
Issue 3910: typedef issue
Issue 3920: FT-FTF Issue: Request more powerful property management
Issue 3921: FT-FTF Issue: Intelligent factory selection
Issue 3976: Harmful deprecation of LOCATE_FORWARD_PERM for Fault Tolerant CORBA
Issue 4066: term "method" used wrongly
Issue 4109: On page 27-9 of the FT CORBA spec, under "Application-Controlled Membership
Issue 3516: following question regarding modifications to CORBA core (ft-ftf)
Source: Oracle (Dr. Anita Jindal, nobody)
Nature: Clarification
Severity:
Summary:
Basically, the OBJ_ADAPTER failure is considered a failover condition in the document that was sent out. In most cases, the OBJ_ADAPTER exception is thrown when there is an internal ORB error. In case of an internal ORB error, the retry on the TAG_ALTERNATE_IIOP_ADDRESS may still yield the same exception. This may be inefficient. Do you see situations where doing a failover on this particular exception is useful?
Issue 3747: define the State as typedef any State
The Fault Tolerant CORBA specification defines the State used by the get_state(), set_state(), get_update(), and set_update() methods as
typedef sequence<octet> State;
Those methods must be implemented by application programmers. They will find their task easier if we define the State as
typedef any State;
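For context, a minimal sketch of the change against the state-transfer operations (the Checkpointable/Updateable interfaces and their exceptions are as recalled from the FT spec; the any-based typedef is only the proposal, not adopted text):
// Current FT spec definition (sketch):
//     typedef sequence<octet> State;
// Proposed replacement:
typedef any State;

// Operations that application programmers must implement (sketch):
interface Checkpointable {
    State get_state() raises (NoStateAvailable);
    void set_state(in State s) raises (InvalidState);
};
interface Updateable : Checkpointable {
    State get_update() raises (NoUpdateAvailable);
    void set_update(in State s) raises (InvalidUpdate);
};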
Issue 3778: Issue with 'factory'
The Fault Tolerant CORBA specification contains the following struct.
struct FactoryInfo
{
    GenericFactory factory;
    Location the_location;
    Criteria the_criteria;
};
This causes a problem for the IDL compilers of some vendors, because
"factory" is a keyword in CORBA V2.3. See CORBA V2.3, page 3-8, Lexical
Conventions, June 1999.
We need to change "factory" in this struct to "fact", "fctry",
"generic_factory", or whatever. What is your preference?
Issue 3856: Propose Remove use of Filter
Motivation: The Notifier will be easier to replicate if it is a single
object. At present, all Filters created by the Notifier must also be
replicated. Furthermore, there is no requirement that a Filter be destroyed
by the client that created it (once it is done using it), raising a garbage
collection issue. For a connected consumer, if the consumer no longer exists, the Notifier can discard the connection. There is no analogous test for Filters.
The Notifier interface is already a collapsed version of multiple
CosNotification APIs to get rid of the channel/admin/proxy objects in favor
of one object, so I am just proposing we carry through on that approach.
One proposal:
First, remove method create_subscription_filter.
Second, change the 2 connect_foo_fault_consumer methods
(connect_structured_fault_consumer + connect_sequence_fault_consumer) to
take just a consumer and a grammar:
ConsumerId connect_foo_fault_consumer (in CosNotifyComm::FooPushConsumer,
in string constraint_grammar)
raises (InvalidGrammar, AlreadyConnected)
One or more event forwarding constraints are associated with each connected consumer, with the default being a constraint that matches all events. The ConsumerId returned from a call can be passed into methods that modify these constraints. When a consumer is disconnected, the associated constraints are discarded.
Third, add methods for manipulating constraints associated with a ConsumerId:
string constraint_grammar(in ConsumerId)
void add_constraints(in ConsumerId, ...)
void remove_constraints(in ConsumerId, ...)
void remove_all_constraints(in ConsumerId)
void modify_constraints(in ConsumerId, ...)
ConstraintExpSeq get_constraints(in ConsumerId)
where ... means the normal arguments that are in the corresponding methods
in the Filter spec.
The above methods correspond to the Filter methods that are required in the current version of the spec, except that I left out two of them: match_structured and destroy. I do not think we need to support match_structured -- only the Notifier needs to be able to do matching. destroy is not needed because there is no filter object to be destroyed. (disconnect is sufficient.)
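Pulling this first proposal together, a hedged IDL sketch of what the simplified Notifier could look like (shown for the structured variant only; parameter names, the ConsumerId representation, and the scoping of the exception types are assumptions, not part of the proposal text):
// Sketch only; assumes the CosNotifyComm, CosNotifyFilter, and
// CosEventChannelAdmin modules are #included.
typedef unsigned long ConsumerId;   // representation is an assumption

interface Notifier {
    ConsumerId connect_structured_fault_consumer(
            in CosNotifyComm::StructuredPushConsumer push_consumer,
            in string constraint_grammar)
        raises (CosNotifyFilter::InvalidGrammar,
                CosEventChannelAdmin::AlreadyConnected);

    // Constraint manipulation for a connected consumer; argument types are
    // taken from the corresponding CosNotifyFilter::Filter operations.
    string constraint_grammar(in ConsumerId id);
    void add_constraints(in ConsumerId id,
                         in CosNotifyFilter::ConstraintExpSeq constraints)
        raises (CosNotifyFilter::InvalidConstraint);
    void remove_all_constraints(in ConsumerId id);
    CosNotifyFilter::ConstraintExpSeq get_constraints(in ConsumerId id);
    // remove_constraints and modify_constraints would follow the same pattern.
};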
ALTERNATE PROPOSAL
A simpler scheme is to associate a single constraint with each consumer.
This is not very restrictive, especially when you consider that there is
currently only one event type in use in the FT spec. The default would
still be a constraint that matched all events. In this case the only method
needed to modify this constraint is:
void replace_constraint(in ConsumerID,
in EventTypeSeq event_types,
in string constraint_expression)
Further, if we are willing to stick to the default constraint grammar, no
grammar needs to be specified, which simplifies connect_foo_consumer -- not
only by removing the constraint_grammar argument but also by removing the
InvalidGrammar exception, which comes from CosNotifyFilter. I believe one
could simplify things enough to get rid of any dependencies on
CosNotifyFilter. It is not clear how important this is, but I thought I
should mention the possibility.
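Under this alternate, filter-free scheme, the two relevant operations might reduce to something like the following (a sketch only; parameter names and the ConsumerId type are assumptions, and the operations would live on the Notifier interface):
// Connect with no grammar argument and no InvalidGrammar exception:
ConsumerId connect_structured_fault_consumer(
        in CosNotifyComm::StructuredPushConsumer push_consumer)
    raises (CosEventChannelAdmin::AlreadyConnected);

// Replace the consumer's single constraint (default: match all events):
void replace_constraint(in ConsumerId id,
                        in CosNotification::EventTypeSeq event_types,
                        in string constraint_expression);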
Issue 3908: Encoding of Service Contexts in Fault Tolerant CORBA specification missing
Section 13.6.7 of the CORBA 2.3 specification states: "The context data for a particular service will be encoded as specified for its service-specific OMG IDL definition, and that encoded representation will be encapsulated in the context_data member of IOP::ServiceContext. (See Section 15.3.3, Encapsulation, on page 15-13)."
The descriptions of service contexts in the FT spec are missing an explicit statement of the encoding of the service context data.
Proposed Resolution: Add the following sentence in all appropriate sections: "When encoded in a request or reply message header, the context_data component of the ServiceContext struct will contain a CDR encapsulation of the xxxxxx struct."
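For reference, a sketch of the IOP structure involved; its context_data member is where the CDR encapsulation goes:
// From the CORBA core IOP module (sketch). The context_data member carries
// a CDR encapsulation of the service-specific struct, per CORBA 2.3, 13.6.7.
module IOP {
    typedef unsigned long ServiceId;
    struct ServiceContext {
        ServiceId       context_id;    // identifies the service
        sequence<octet> context_data;  // CDR encapsulation of the service-specific struct
    };
    typedef sequence<ServiceContext> ServiceContextList;
};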
Issue 3910: typedef issue
One additional issue I have is that ReplicationStyleValue, MembershipStyleValue, ConsistencyStyleValue, FaultMonitoringStyleValue, and FaultMonitoringGranularityValue are typedefed to long, whereas InitialNumberReplicasValue and MinimumNumberReplicasValue are typedefed to unsigned short. It might be more appropriate to typedef all of these to unsigned short.
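A sketch of the suggested unification (the typedef names are from the FT spec; making them all unsigned short is the suggestion, not adopted text):
// Suggested: use unsigned short uniformly for the style/number value typedefs.
typedef unsigned short ReplicationStyleValue;
typedef unsigned short MembershipStyleValue;
typedef unsigned short ConsistencyStyleValue;
typedef unsigned short FaultMonitoringStyleValue;
typedef unsigned short FaultMonitoringGranularityValue;
typedef unsigned short InitialNumberReplicasValue;   // already unsigned short today
typedef unsigned short MinimumNumberReplicasValue;   // already unsigned short today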
Issue 3920: FT-FTF Issue: Request more powerful property management
There are 3 problems w.r.t. property management which I will list
together since a solution could/should address all of them. I will
send a proposed solution in another message.
---------------------------------------------------------------
** Problem A **
For sequence-valued properties, there should be a way to add or remove
one or more elements from a sequence value without having to resort to
using more than one method call.
Notes:
Currently, one has to 'get' the current sequence value, modify it, and
then 'set' the sequence value. This results in a nasty race
condition: it is not safe to have independent threads of control doing
get-set combinations.
A simple solution is to have new methods for element update. A
complex solution is to allow any number of updates, including property
set, element addition, element removal, for default and type and
dynamic, all to be grouped and sent to the manager in one
property_update request. For example, one might want to group the
removal of a property P from type T and the addition of property P as
a default property.
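As a purely illustrative sketch of the simple solution, hypothetical element-update operations for a sequence-valued default property might look like this (the operation and parameter names are invented; Name and Value are meant as the FT spec's property name and value types; the grouped property_update variant is not sketched):
// Hypothetical additions to the property management interface: add or remove
// elements of a sequence-valued property in one call, avoiding the
// get/modify/set race described above.
void add_default_property_elements(
        in Name  property_name,      // property whose value is a sequence
        in Value elements_to_add);   // elements to append to that sequence
void remove_default_property_elements(
        in Name  property_name,
        in Value elements_to_remove);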
---------------------------------------------------------------
** Problem B **
The property management interface has insufficient power.
Notes:
One can query or update over a single type or a single object group,
but not over a set of types or a set of object groups. Further, one
cannot (an illustrative sketch of a few such operations follows this list):
* get a list of types that have
    + at least 1 property defined;
    + specific propert[y|ies] defined;
    + at least 1 factory at specific location[s];
  [ or modify properties for the specified types ]
* get a list of existing object groups that have
    + specific propert[y|ies] defined;
    + specific type[s];
    + an active replica at specific location[s];
  [ or modify properties for the specified object groups ]
* get a list of active replicas that have
    + specific type[s];
    + specific location[s];
  [ or modify properties for the specified replicas ]
* get a list of locations that have
    + at least 1 property defined;
    + specific propert[y|ies] defined;
    + at least 1 active replica;
    + an active replica for specific object group[s];
    + an active replica of specific type[s];
  [ or modify properties for the specified locations ]
* other query/update cases that should be supported?
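Purely for illustration, a couple of the query operations listed above might be declared along these lines (all names are hypothetical; TypeId, ObjectGroup, Location, and Properties are meant as the FT spec's existing types):
// Hypothetical bulk-query additions (names invented for illustration).
typedef sequence<TypeId>      TypeIds;
typedef sequence<ObjectGroup> ObjectGroups;
typedef sequence<Location>    Locations;

TypeIds      get_types_with_properties(in Properties props);
ObjectGroups get_object_groups_at_location(in Location the_location);
Locations    get_locations_with_factories_for_type(in TypeId type_id);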
---------------------------------------------------------------
** Problem C **
The property management interface does not sufficiently distinguish
between high-level FT QoS properties used to manage entire object
groups and low-level object construction properties used to
select factories and create individual replicas.
Notes:
High-level QoS properties change infrequently, and never differ across
replicas. Low-level construction properties change more frequently as
factories are created/destroyed/lost, and they do differ across
locations/replicas (different factories, different criteria).
In each case, one must distinguish between properties for an existing
object group and properties to be used for future object groups. Even
for replica construction properties, one should be able to assign a
different set of locations/factories to be used for new replica
creation for existing object group[s] and for future object groups.
Currently, low-level properties are buried in a single value that is
stored with a single property (FactoryInfos), either for a specific
type or for a specific object group. This makes it very hard to do
lookup or modification of these properties by location or by the pair
type x location or object group x location. To replace an Info for a
single location one must replace the entire Infos sequence. Even with
the ability to add/remove a member of a sequence, to replace either
the factory or the criteria within a given Info one would have to
remove the current Info and replace it with a new Info, where the Info
would need to contain a copy of the part(s) that are not to be
modified together with the modified part.
BTW I am leaning towards splitting the PropertyManager into a
GroupQoSManager and a FactoryManager, but other approaches are
possible. One argument for the split is that it seems to make sense
for a FactoryManager to monitor the liveness of registered factories
and to provide logic for selecting an appropriate factory and
associated criteria for construction of a new replica for a given
group or type. In contrast, it does not make sense for a generic
property manager to do monitoring (or to know anything about the
values stored in properties).
This issue was determined to be out-of-scope of the Fault Tolerant CORBA Finalization Task Force.
Issue 3921: FT-FTF Issue: Intelligent factory selection
An FT-FTF (Fault Tolerance Finalization Task Force) Issue:
GOAL: Introduce intelligent factory selection.
On a single machine (perhaps an N-way multiprocessor, but even for a
uniprocessor) one might want to have N factories, corresponding to N
processes that will have replicas created in them. Ideally, only one
replica for a given group should be started for the entire machine.
Similarly, if one has several subnets, one might have factories on all
machines, but ideally only one replica should be started per subnet,
if appropriate factories are available. If the only factories
available for a given type happen to be on the same subnet or same
machine, then it should be possible to specify either that it is OK to
go ahead with replicas on the same subnet or same machine or it is not
OK. Alternatively, I might want all replicas to be on the same subnet, if possible, to reduce coordination costs, while still requiring that they be on different hosts.
How to extend the specification to enable this feature?
One proposal is to take advantage of the fact that location names are
structured. While any structuring is allowed, we could declare that
if you want to use an intelligent factory selection policy you must
use names that capture the hierarchical nature of fault domains.
E.g., for my scenario I could use names that capture
subnet/host/processor distinctions:
sA.h1.p1, sA.h1.p2, sA.h2.p1, sA.h2.p2, ... sA.hJ.p1, sA.hJ.p2
sB.h1.p1, sB.h1.p2, sB.h2.p1, sB.h2.p2, ... sB.hK.p1, sB.hK.p2
I believe there should be a LocationHints property for types or groups
that is distinct from the issue of how many actual locations have
available factories, where hints are like location names but can have
wildcards. Thus, I could specify sA.*.* and sB.*.* as LocationHints
for type T to indicate that I prefer replicas for type T to be started
on machines on subnets sA and sB. Note that this is very different
from giving a list of specific locations. (I certainly do not want to
specify which processor number to use!) While the set of available
factories might change frequently, the hints should be relatively
stable.
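A hedged sketch of how such a property value might be declared (the typedef names are assumptions; since FT locations are CosNaming::Name values, a hint is shown as a name whose parts may be wildcards):
// Hypothetical LocationHints property value (illustration only).
// A hint looks like a location name, e.g. sA.*.* in dotted form,
// where "*" matches any value of that part.
typedef Location LocationHint;
typedef sequence<LocationHint> LocationHintsValue;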
Assume that as factories are created at specific locations (such as a
new factory F1 at location sA.h3.p1) they could be registered with a
FactoryManager. This manager knows all the location names that have
factories registered for a given group or object type. One algorithm
to select a location, given a set of existing replica locations and possibly some location hints, is to choose a location name that matches one of the hints and has the greatest difference from the existing names, where a difference in an earlier part of a name dominates a difference in any later part.
Alternative algorithms are possible, e.g., one might prefer to keep
replica groups in the same subnet but on different machines, which
corresponds to a rule that says equality of the first part of the
name is the primary determinant, while for positions 2 and on, use the
greatest difference rule above.
We could have a QoS property called FactorySelectionPolicy which is a
string and have some predefined algorithms (+ algorithm names).
Vendors could define additional algorithms.
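A sketch of that property (the value typedef and the example algorithm names are assumptions, not drawn from the spec):
// Hypothetical FactorySelectionPolicy QoS property (illustration only).
typedef string FactorySelectionPolicyValue;
// Example predefined algorithm names; vendors could add their own:
const string FACTORY_SELECTION_RANDOM            = "Random";
const string FACTORY_SELECTION_GREATEST_DISTANCE = "GreatestLocationDifference";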
An alternative to having a fixed number of predefined algorithms is to
introduce a means of describing a whole class of algorithms. Here is
one approach.
For a given part, one of 5 requirements holds:
. NC : no constraint
. EB : equality best, inequality allowed
. ER : equality required
. DB : difference best, equality allowed
. DR : difference required
A policy string is a sequence of <requirement> specs separated by dots ("."). Each requirement applies to the name part at the corresponding position, while the final <requirement> applies to the part at its position and all subsequent positions. E.g., the spec ER.DB.DR requires equality for part 1, prefers (but does not require) difference for part 2, and requires difference for all remaining parts (3, 4, ...).
DR/ER constraints have higher priority than DB/EB constraints (all
DR/ER constraints must be met).
When there are optional constraints, a solution that satisfies an
earlier optional constraint has priority over a solution that
satisfies a later optional constraint. This is true regardless of how
many optional constraints can be satisfied, e.g., satisfying the first
optional constraint but not the second or third has priority over
satisfying both the second and third optional constraint but not the
first. The reverse ordering (favoring later optional constraints over
earlier ones) can be selected by adding a less-than ("<") sign at the
end of the policy string.
For solutions that satisfy the same earliest (or latest in the case of
"<") optional constraint, solutions that satisfy more optional
constraints have priority over solutions that satisfy fewer optional
constraints. This rule can be overridden by adding "MIN:" as a prefix
to the policy string (indicating that the minimal number of optional
constraints should be met --- i.e., at least one optional constraint
should be met, if possible, but beyond this, solutions that satisfy
the fewest additional optional constraints are favored).
The resulting location selection policy implicitly includes a final
global constraint: the locations chosen for a given group must be
unique.
N.B. When locations have different numbers of parts, EB and DB requirements are ignored for parts that are missing, while if one location has a part that another lacks, this satisfies a DR requirement and fails an ER requirement.
Some example selection policies:
[1] NC
No part is constrained. Due to the implicit global
constraint, NC selects unique locations,
but selection is otherwise random.
[2] DR
*Every* part must differ. This policy is not
often used; it is more common to follow one or more
DR constraints with some optional constraints
or with NC, as in the next example.
[3] DR.NC
The first part must differ, while there are no
constraints on the other parts.
[4] DB
A difference is best for each part, but not required
for any given part. The result is a selection algorithm
that attempts to find a difference in the earliest
possible part. When several locations differ
starting at the same earliest part, the algorithm favors
selecting locations that differ in as many subsequent
parts as possible.
[5] MIN:DB
Like DB, except when several locations differ
starting at the same earliest part, the algorithm favors
selecting locations that differ in as few subsequent
parts as possible.
[6] DB<
Like DB, except the algorithm favors
locations that differ in the latest possible part.
[7] EB
Equality is best for every part, but not required
for any part. The result is a selection algorithm
that attempts to find equality in the earliest
possible part. When several locations are
equal starting at the same earliest part, the algorithm favors
selecting locations that are equal in as many subsequent
parts as possible.
[8] ER.DB
Equality of the first part required, while differences
in other parts are preferred but not required, with
earlier optional differences dominating later ones.
[9] EB.DB
Equality of the first part is preferred, while differences
in other parts are preferred but not required, with
earlier optional differences dominating later ones
(EB dominates DB and earlier DB differences dominate
later ones).
Consider the subnet.host.processor location naming scheme.
+ DR.NC would choose a different subnet for each replica
and otherwise choose an arbitrary factory in each subnet.
+ EB.DB would choose the same subnet for all replicas,
if possible, but if necessary would use different
subnets. For locations in the same subnet,
it would attempt to use different hosts and different
processors, with higher priority given to using
different hosts.
+ EB.EB.DB< would attempt to find locations that differ
in the processor part but have the same host and
subnet, where the processor difference has highest
priority, host equality has next highest priority, and
subnet equality has least priority. This would tend to
cluster replicas as close together as possible, optimizing
coordination cost while sacrificing some reliability.
+ MIN:DB< has the same effect as EB.EB.DB< :
it specifies minimal DB matches (beyond 1 match)
with priority given to later parts over earlier ones.
MIN:DB< has the advantage that it works with locations
of any length, while EB.EB.DB< is only useful for
locations of length 3.
This issue was determined to be out-of-scope of the Fault Tolerant CORBA Finalization Task Force.
Issue 3976: Harmful deprecation of LOCATE_FORWARD_PERM for Fault Tolerant CORBA
Earlier this year, the interop FTF deprecated the LOCATE_FORWARD_PERM exception for several reasons:
- it was badly specified;
- it made the implementation of hash() difficult, and broke most of the existing ones.
It turns out that the Fault Tolerance specification, published a little earlier, crucially requires a similar mechanism. In normal life, most applications can rely on plain LOCATE_FORWARD because there is no reason to expect the death of the originally pointed-to component. In the case of Fault Tolerant CORBA, this is entirely different: it is precisely when we issue a LOCATE_FORWARD_PERM that we know for sure that the original component is dead, and might never return. If all the backup profiles of an IOR enjoy the same death, all hope is gone. This means that without a mechanism similar to LOCATE_FORWARD_PERM, the Fault Tolerant CORBA spec cannot address the requirements of real fault-tolerant systems. This is why the Fault-Tolerant CORBA FTF would like to see LOCATE_FORWARD_PERM re-introduced in some way. Here are a few ideas that might help:
Issue of scope: The scope of LOCATE_FORWARD_PERM is ORB lifetime.
Issue of hash(): Recall that the Fault-Tolerant CORBA spec defines the concept of an Interoperable Object Group Reference (IOGR). The IOGR contains a specific profile that contains a group identifier.
- When an ORB receives an IOGR, it should compute the value of hash() based on the GroupID contained in the IOGR, and perform LOCATE_FORWARD_PERMs if requested.
- When an ORB receives a normal IOR (i.e. an IOR lacking a group profile), it computes hash() in the customary way, and does not have to respond to LOCATE_FORWARD_PERMs.
Issue 4066: term "method" used wrongly
Throughout the document, the authors use the term "method" several times where they should be talking about "operations" instead. This violates the general understanding of OMG terminology, where IDL interfaces contain "operations", not "methods"; the term "method" is usually reserved as a concept of object-oriented programming languages. I recommend that for the next revision the authors run a global search-and-replace and identify where they mean methods and where they mean operations.
Issue 4109: On page 27-9 of the FT CORBA spec, under "Application-Controlled Membership", "The application-controlled (MEMB_INF_CTRL) Membership Style" should be corrected to read "The application-controlled (MEMB_APP_CTRL) Membership Style".