| Workshop on Real-time, Embedded and Enterprise-Scale Time-Critical Systems | | April 17-19, 2012, Concorde La Fayette, Paris, France | | Program | TUESDAY April 17, 2012 – Tutorial Day | | 09:30 – 12:50 | Getting Started with DDS in C++ and Java | TRACK 1 | Angelo Corsaro, Chief Technology Officer, PrismTech The Data Distribution Service for Real-Time Systems (DDS) is an Object Management Group (OMG) standard for Publish/Subscribe that addresses the needs of mission- and business-critical applications, such as financial trading, air traffic control and management, and complex supervisory and telemetry systems. DDS provides a powerful set of abstractions for building distributed systems as autonomous and asynchronous entities that cooperate by reading and writing a shared (and distributed) data model. DDS is also equipped with a rich set of Quality of Service (QoS) Policies that control resource usage, timeliness, replication and data availability. This tutorial will cover the DDS foundation concepts and will illustrate how DDS functionalities are accessible through the new ISO C++ and Java 5 APIs. Regardless of whether you are an experienced DDS programmer or a newbie, this Tutorial will give you the foundations for effectively writing efficient and safe DDS applications in C++ and Java. The Tutorial will also cover the key programming idioms and patterns that are supported and promoted by the new APIs and explain how and when to use them. The only pre-requisite is a reasonable level of proficiency with C++ / Java. Several code examples will be provided to highlight the key DDS concepts (a minimal illustrative sketch in the ISO C++ API appears below, after the tutorial listings). Code examples used throughout the tutorial will be freely available for download. | | 09:30 – 12:50 | Using the Lightweight CORBA Component Model and DDS for Lightweight CCM to Develop Distributed Real-time Systems | TRACK 2 | Dr. Douglas C. Schmidt, Professor, EECS Department, Vanderbilt University William R. Otte, Research Assistant, EECS Department, Vanderbilt University Johnny Willemsen, Technical Director, Remedy IT Distributed Real-time & Embedded (DRE) applications, such as flight avionics systems and financial trading systems, typically have stringent Quality of Service (QoS) requirements, which, if not met, can lead to undesirable or even catastrophic consequences. Research in QoS-enabled middleware has shown that maintaining QoS in such systems largely depends on proper application and system resource management, which often cross-cuts system and application boundaries. As DRE applications become increasingly complex, managing these resources from application code imposes an increasing burden on the programmer. Work presented at previous OMG Real-Time Workshops has shown that as much as 80 percent of DDS-related code in a typical application is associated with configuring the middleware. The OMG Lightweight CORBA Component Model (CCM) and DDS for Lightweight CCM (DDS4CCM) specifications address several of these difficulties. They provide a clear definition of a software component as an independent unit of reuse and composition, standardized component run-time container environments, standardized interaction models among components and run-time environments, and standardized mechanisms for configuring and deploying applications from reusable software components. This tutorial will explain the key features of the Lightweight CCM specification, including the new IDL3+ language for defining data types and interfaces. 
It will also cover the new DDS4CCM specification in depth. Examples will demonstrate how to develop CORBA components, how to assemble these components into applications, and how to deploy these applications in the CCM run-time environment. Other examples will show how real-time extensions to Lightweight CCM enable the development of robust, adaptive, and complex DRE applications. The tutorial will also cover the Deployment and Configuration for Component Based Applications specification, and how its data model and run-time interfaces can be used to deploy and configure distributed component-based systems. | | 11:00 – 11:20 | Morning Refreshments | | 12:50 – 14:05 | Lunch | | 14:05 – 17:25 | Advanced DDS Tutorial: Best Practice Data-Centric Programming with DDS | TRACK 1 | Gerardo Pardo-Castellote, Chief Technology Officer, Real-Time Innovations Users upgrading to DDS from a homegrown solution or a legacy-messaging infrastructure often limit themselves to using its most basic publish-subscribe features. This allows applications to take advantage of reliable multicast and other performance and scalability features of the DDS wire protocol (DDS-RTPS), as well as the enhanced robustness of the DDS peer-to-peer architecture. However, applications that do not use DDS's data-centricity do not take advantage of many of its QoS-related, scalability and availability features, such as the KeepLast History Cache, Instance Ownership and Deadline Monitoring. As a consequence some developers duplicate these features in custom application code, resulting in increased costs, lower performance, and compromised portability and interoperability. This tutorial will formally define the data-centric publish-subscribe model as specified in the OMG DDS specification and define a set of best-practice guidelines and patterns for the design and implementation of systems based on DDS. | | 14:05 – 17:25 | Lightweight Fault Tolerance at Work | TRACK 2 | Hakim Souami, Technical Architect, Thales The Lightweight Fault Tolerance (LwFT) for Distributed RT Systems specification provides solutions for building highly fault-tolerant systems using application-provided fault detectors, fault analyzers and recovery mechanisms. Applications can integrate the FT-enabled middleware into their own infrastructures with their own supervision systems. Though CORBA-based, the LwFT specification can be used for DDS-based applications and can cope with multiple replica consistency styles. This tutorial will detail use of LwFT APIs to develop a simple primary/backup application: - Startup of a primary replica
- Backup insertion
- Replica recovery
- Integration with an application fault-detector
- Integration with application consistency management (example with DDS and/or replicated database)
- Use of FT CORBA IOGR and enforcing at-most-once semantics
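As a taste of the material in the "Getting Started with DDS in C++ and Java" tutorial above, the sketch below shows the general shape of a publish/subscribe program written against the ISO C++ API (DDS-PSM-Cxx). It is an illustrative sketch only: the TempSensor topic type is hypothetical and would normally be generated from IDL by a vendor's code generator, and the domain id, topic name and field names are arbitrary assumptions rather than anything prescribed by the tutorial.

```cpp
#include <dds/dds.hpp>   // ISO C++ DDS API umbrella header shipped by DDS vendors
#include <chrono>
#include <iostream>
#include <thread>

// Hypothetical topic type, normally generated from an IDL definition such as:
//   struct TempSensor { long id; float temp; };
// The vendor-generated header for it is assumed to be included here.

void publisher() {
    dds::domain::DomainParticipant dp(0);                     // join domain 0 (arbitrary)
    dds::topic::Topic<TempSensor> topic(dp, "TempReadings");  // typed, named topic
    dds::pub::Publisher pub(dp);
    dds::pub::DataWriter<TempSensor> writer(pub, topic);
    writer.write(TempSensor(42, 21.5f));                      // publish one sample
}

void subscriber() {
    dds::domain::DomainParticipant dp(0);
    dds::topic::Topic<TempSensor> topic(dp, "TempReadings");
    dds::sub::Subscriber sub(dp);
    dds::sub::DataReader<TempSensor> reader(sub, topic);

    // Naive polling loop; a listener or WaitSet would normally be used instead.
    for (;;) {
        for (const auto& sample : reader.take()) {
            if (sample.info().valid()) {
                std::cout << "sensor " << sample.data().id()
                          << " reports " << sample.data().temp() << "\n";
                return;
            }
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}

int main(int argc, char**) {
    // Run with any argument to act as publisher, without arguments as subscriber;
    // the two roles would normally be separate processes, possibly on separate hosts.
    argc > 1 ? publisher() : subscriber();
    return 0;
}
```
QoS policies such as Reliability, Durability or History would be passed as additional arguments to the entity constructors; the Java 5 PSM follows the same entity model.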
| | 15:35 – 15:55 | Afternoon Refreshments | | | | | WEDNESDAY April 18, 2012 – Workshop Day 1 | | 08:55 – 09:00 | Welcome & Opening Remarks | | | Program Chair: Andrew Watson, Object Management Group | | 09:00 – 10:30 | SESSION 1: Components | | | Chair: Virginie Watine, Thales A Real-time Middleware and Component Model for Fractionated Spacecraft Abhishek Dubey, William Emfinger, Aniruddha Gokhale, Gabor Karsai, William R. Otte, Jeffrey Parsons & Csanad Szabo, Institute for Software Integrated Systems Alessandro Coglio & Eric Smith, Kestrel Institute Prasanta Bose, Advanced Technology Center (ATC) Accompanying Background Design Paper A fractionated spacecraft is a cluster of independent modules that interact wirelessly to maintain cluster flight and realize the functions usually performed by a monolithic satellite. This spacecraft architecture poses novel software challenges because the hardware platform is inherently distributed, with highly fluctuating connectivity among the modules. It is critical for mission success to support autonomous fault management and to satisfy real-time performance requirements. We will provide an overview of our Information Architecture for fractionated spacecraft, which is a specific instantiation of OMG's Model-Driven Architecture (MDA). Called F6MDA ("F6" from "Future, Fast, Flexible, Fractionated, Free-Flying Spacecraft united by information exchange"), it is based on number of open standards, including UML, CORBA, CCM, DDS, DDS4CCM, MARTE, SysML, ARINC-653, and POSIX. We will focus specifically on the features and architecture of its restricted middleware layer that implements only the essential communication services for the distributed system (F6ORB), and the component model that defines how components are built and how applications are constructed from components (F6COM). | | | Toward a Unified Component & Deployment Model for Distributed Real-Time Systems Sumant Tambe, Software Engineer, RTI Heidi Schubert, Director of Research, RTI Johnny Willemsen, Technical Manager, Remedy IT Component-based software engineering (CBSE) holds the promise of reducing the complexity of software development and increasing the reuse of software assets. Several component frameworks have been successfully defined in both the enterprise and real-time systems domains. However, to date no component model has emerged as the clear choice for general distributed real-time systems, because the existing real-time component frameworks do not meet the desirable level of abstraction and separation of concerns. For instance, OMG CCM and lwCCM depend on a single common middleware (CORBA). The OMG DDS4CCM framework broadens the scope of CCM to incorporate OMG Data Distribution Service (DDS), but it still depends heavily on CORBA. This talk will present the lessons learned from the existing component frameworks and identify places where further generalization is necessary to realize a unified component model for distributed real-time systems. This session will give an overview of the ideas for a new CCM and D&C specification. On the CCM side we will go into the ideas to migrate from CCM to a unified component model that is focused on Real Time and embedded systems. On the D&C side we will describe the ideas to create a specification that allows more dynamic system behavior as part of the specification. New techniques for declarative specification of interaction patterns (request/reply, pub/sub, queue, point-to-point, etc.) 
and general-purpose deployment infrastructure will be discussed. The presentation will conclude with a set of recommendations that could be used as guidance for a potential future OMG Unified/Common Component Model standard. | | | iCCM: A Framework for Servant-based Integration of DDS into the CORBA Component Model James H. Hill, Assistant Professor, Indiana University-Purdue University Indianapolis The DDS4CCM specification aims to provide the benefits of a component model to the DDS programmer, abstracting away low-level programming tasks such as manually constructing the publisher/subscriber entities and binding data types to them. It uses a connector-based approach for integrating DDS into the CORBA Component Model (CCM). Connectors, which reside outside the actual CCM architecture, act as a gateway between a CCM component and the DDS middleware. The connector-based approach, however, is not the only way to integrate DDS into CCM. This talk presents current work on our iCCM framework, based on the alternative approach of integrating DDS at the servant level in CCM. It will discuss the design challenges faced in designing and implementing iCCM while attempting not to modify either the existing CCM or DDS specification. This presentation will also compare performance results of iCCM, DDS4CCM, and direct integration of DDS into the component's "business-logic". To date, we have integrated RTI-DDS and OpenSplice into CCM using the iCCM framework. Finally, this presentation will summarize our work extending iCCM to integrate other distributed middleware technologies into CCM. | | 10:30 – 10:50 | Morning Refreshments | | 10:50 – 11:50 | SESSION 2: Instrumentation | | | Chair: Nick Stavros, Jackrabbit Consulting OASIS: A Framework for Real-time Instrumentation of Distributed Real-time and Embedded Systems James H. Hill, Assistant Professor, Indiana University-Purdue University Indianapolis Dennis Feiock, Indiana University-Purdue University Indianapolis Tanumoy Pati, Indiana University-Purdue University Indianapolis Software instrumentation is the process of monitoring a DRE system's behaviour to collect metrics of interest, such as CPU usage, memory allocation or event arrival rate. There are two dominant approaches: intrusive and non-intrusive. In intrusive software instrumentation, developers modify existing source code to collect the sought-after metrics. In non-intrusive instrumentation the application source code is not modified, but, for example, the binaries are modified by tools that inject instrumentation points (this is called "dynamic binary instrumentation"). We have implemented a real-time instrumentation architecture and framework named the Open-source Architecture for Software Instrumentation of Systems (OASIS). OASIS enables DRE system developers to collect metrics of interest without knowing a priori the structure and complexity of what is being collected. OASIS accomplishes this feat by (1) giving DRE system developers the ability to express, through software probes, what metrics should be collected, either manually or autonomously; and (2) providing a general-purpose framework for collecting, extracting, and disseminating the collected metrics on different distributed middleware platforms, such as CORBA, DDS, or TENA. Because OASIS's instrumentation architecture and framework are decoupled from the DRE system's implementation, DRE system developers can defer decisions about what metrics to collect. 
We will present our work on OASIS, and show how it addresses many of the challenges associated with real-time instrumentation of DRE systems. We will show how we have applied OASIS to different DRE system case studies (both software-only and hardware/software) that have had major influence on OASIS's design and implementation to support the masses while continuing to meet its mission - general-purpose instrumentation middleware that can fit the needs of any application domain. Finally, we highlight open challenges in real-time instrumentation for next-generation enterprise DRE systems. | | | Minimally Intrusive Real-time Software Instrumentation Gerardo Pardo-Castellote, Chief Technology Officer, RTI Andrea Sorbini, Software Engineer, RTI Application instrumentation is key to understanding the factors that affect DRE system performance (such as processor load, network performance and undiagnosed application bugs), and hence both maximizing that performance and ensuring correct operation. However instrumenting distributed real-time applications presents significant technical challenges; the instrumentation must be minimally intrusive to avoid perturbing the operation of the system. This presentation introduces recent research on a new architecture for instrumenting distributed real-time systems. The approach combines a minimally-intrusive API with a distribution back-end based on the OMG DDS specification. With this design, developers can directly see and record key internal application data, dynamically monitor network and process statistics, and analyze system operational performance - all in real time. This design supports direct code instrumentation, operating-system statistical collection, and integration with post-processing tools. The architecture provides visibility into all important system data, in real-time, with minimal developer effort. We will describe the API model, how it maps to DDS, and present benchmarks illustrating the impact of using this API on application performance as a function of collection frequency and number of variables collected. This work is very timely in view of the on-going OMG work to create an application instrumentation specification. | | 11:50 – 12:50 | SESSION 3: High-integrity and High-assurance | | | Chair: Angelo Corsaro, Chief Technology Officer, PrismTech Model-based development of ARINC653 using UML and SysML Andreas Korff, Product Manager, Atego Systems The session presentation shows how to bridge the worlds of IMA, ARINC 653 and UML/SysML modeling of systems and software in a pragmatic way. Integrated Modular Avionics (IMA) has goals similar to normal systems or software modeling using UML and SysML; minimize life cycle costs and enhance system and software quality. ARINC 653 aims to standardise the integration of IMA concepts into RTOS usage, including the definition of configuration data. In this presentation, we will illustrate the transition from a traditional, federated architecture, illustrating the typical obstacles to achieving an object-oriented IMA network architecture with appropriate layers, modeled using best practice UML/SysML concepts. We will show how IMA hardware and software elements in a model are annotated using an ARINC 653 profile. This meta-model extension supports generating both code and configuration data from one model, using all the traceability offered by SysML. | | | Building High Integrity Systems out of Independently Developed Java Components of Unknown Pedigree Dr. 
Kelvin Nilsen, Chief Technology Officer for Java, Atego Systems Tom Grosman, Senior Real-Time Consultant, Atego France A critical contributor to the popularity of the Java language for enterprise-critical applications has been the ease with which independently-developed software components can be ported and integrated into new software contexts running on new hardware platforms. Surprisingly, even dedicated real-time systems deployed in mission-critical applications are often constructed from a combination of open-source software and commercial off-the-shelf components licensed from third parties. Although the cost savings are appealing, this approach complicates validation and verification, since software developed by outsiders may not have been developed to the same quality standards as that built in-house. This talk discusses common approaches to partitioning Java software so as to limit the reliability risks posed by externally-developed software. The surveyed approaches include careful application of object-oriented encapsulation techniques, the use of special Java virtual machine implementations, instantiation of multiple Java virtual machines for different integrity levels, and the use of static analysis techniques to audit potential information flows within off-the-shelf software components. | | 12:00 – 19:00 | Exhibition Area Open | | 12:50 – 14:05 | Lunch | | 14:05 – 15:35 | SESSION 4: Future Directions | | | Chair: Angelo Corsaro, Chief Technology Officer, PrismTech DDS for SCADA Erik Boasson, Senior Engineer, PrismTech Ltd Supporting SCADA systems in DDS poses some unique challenges because of the massive number of individual data sources and sinks in those systems. The unit of subscription in DDS is the topic, but each topic, reader and writer incurs a significant overhead. We have designed and built a layer that sits atop DDS, provides a very simple interface to the application, and allows applications to publish and/or subscribe to millions of individual parameters with very low overhead. Our layer uses a simple DDS-based discovery protocol to determine what needs to go where at a logical level, dynamically mapping parameter values to DDS samples and leaving all the intricacies of real networks to DDS. This results in a much smaller memory footprint, excellent network utilization, and CPU requirements that vary from negligible for publishing updates to parameters to which there are no subscribers, to merely modest for those that are needed elsewhere. This presentation will showcase its use and provide an empirical evaluation of its time and space efficiency. | | | Data-centric Invocable Services Dr. Rajive Joshi, Principal Solution Architect, Real-Time Innovations Inc. An invocable service is an abstraction for a collection of functionality expressed as a set of operations. Each operation represents a request-response interaction between two parties. Traditional implementations of invocable services are tightly coupled - the interaction is dependent on the locations of the server and client, requires a reliable unicast network transport, and relies on a "point-to-point" session being established between the clients and the server. While these choices may be reasonable in a tightly managed "data-center" environment, they lead to several limitations for systems on the operational or tactical edge. This presentation takes a different approach. 
By defining an invocable service as a set of data-centric interactions, we address these limitations while preserving the logical service-centric remote-invocation. Moreover by implementing the data-centric interactions on top of standard technologies such as the OMG Data Distribution Service, we provide several additional benefits, including the ability to cancel requests, offer partial implementation of the service interfaces, remote auditability, and control over the quality of service and prioritization. | | | The New IDL to C++11 Language Specification Johnny Willemsen, Technical Manager, Remedy IT One of the main goals of the new IDL to C++11 language mapping is to create a mapping which feels natural for a C++ programmer. As a result the development of CORBA, DDS, and CCM based applications will be much easier and safer. This reduces development time and costs. This session will give an overview of the basic concepts of this new language mapping. It will clearly demonstrate the simplicity and ease of use. We will finish with an overview of the standardization effort and the schedule for the availability of a full implementation. | | 15:35 – 15:55 | Afternoon Refreshments in Exhibit Area | | 15:55 – 16:55 | SESSION 5: MARTE | | | Chair: Sébastien Gérard, Senior Expert, CEA UML, SysML and MARTE in Use, a High Level Methodology for Real-time and Embedded Systems Alessandra Bagnato, Project Manager, TXT e-solutions Andrey Sadovykh, International Project Manager and Imran Quadri, Engineer, Softeam R&D We will present the development context and needs that have fostered the creation of a methodology and a set of UML, SysML and MARTE model-based diagrams within the EU-funded MADES project [http://www.mades-project.org/]. MADES aims to develop novel model-driven techniques to improve existing RTES development practices for avionics and embedded systems industries. We will illustrate modeling scenarios created with the SOFTEAM ModelioSoft Modeling Tool and analyze typical real-life modeling situations, taking advantage of UML, SysML and MARTE UML profile modeling capabilities. | | | PRESTO: Improvements of Industrial Real-time Embedded Systems Design and Development Imran Quadri, R&D Engineer, Softeam Shuai Li, PhD Student, Thales Communications and Security Due to continuous evolution in the industrial process developments of real time and embedded systems, new challenges have risen in their design and development. Constraints such as related to limited resources and effective allocations of application functionalities on execution platforms are some of the issues that need to be carefully addressed, as early as possible, during the design stages. A high level model-driven methodology thus seems effective as it provides solutions to respond to these design challenges at initial development phases, while reducing development costs and decreasing time to market. The PRESTO project (http://www.presto-embedded.eu/) inspires from these aspects and proposes a complete tools set integrating test traces exploitation, platform models and design space exploration techniques to provide design-time functional and performance analysis; along with platform optimization. Particular attention has been given to industrial development constraints such as reducing the costs of increased design time and expertise. We aim for simple-to-use tools which can be smoothly integrated into current design process based on a variety of different process methodologies, design languages and integration test frameworks. 
Analysis results are validated by comparison with real platform results, and platform modeling for fast prototyping can be continuously improved from these comparisons. In addition to the OMG MARTE profile, aspects of domain-specific languages such as SDL, EAST-ADL2 and AADL are used in the PRESTO project. | | 16:55 – 17:55 | SESSION 6: Performance Evaluation | | | Chair: Johnny Willemsen, Technical Manager, Remedy IT Empirical Evaluation of RMI Frameworks Nawel Hamouche, PrismTech Angelo Corsaro, Chief Technology Officer, PrismTech Ramzi Karoui, PrismTech The abstraction of Remote Method Invocations over distributed objects was made popular and widely available in the 90s thanks to the definition of the Common Object Request Broker Architecture (CORBA) standard and the consequent availability of quality implementations -- many of which are open source. RMI frameworks have become foundational building blocks of modern programming language libraries such as Java RMI and .Net Remoting. More recently, new language-independent RMI frameworks such as ICE have emerged, and OpenSplice RMI has introduced several interesting innovations for QoS control over RMI and for time decoupling of client/server interactions. However, with all these different technologies providing a solution to roughly the same problem, users are often confused by what the differences are and what might be best for them. We will provide a systematic characterization of the most popular RMI frameworks, review their functional and non-functional properties, and evaluate their performance. | | | Performance Evaluation of Publish/Subscribe Middleware Technologies for ATM (Air Traffic Management) Systems Juan M. Lopez-Soler, Jose M. Lopez-Vega, Javier Povedano-Molina and Juan J. Ramos-Munoz, Signal Theory, Telematics and Communications Dept., University of Granada The EU Single European Sky is an ambitious initiative launched by the European Commission to reform the architecture of European air traffic management. Within this framework, SESAR (Single European Sky ATM Research) aims to eliminate the fragmented approach to European ATM, transform the ATM system, synchronize all stakeholders and federate resources. Different technologies are under evaluation to deal with the plethora of challenging ATM requirements. Although the evaluation will be performed at several levels (performance, portability, interoperability, security, availability, etc), in this presentation we will focus on pure performance evaluation. In particular, we will report on benchmarking to provide quantitative performance data for ground/ground ATM communications via the Data Distribution Service (DDS). Instead of using generic benchmarks, scenarios, data-sizes and usage patterns have been chosen to match expected SESAR deployments. We will also report on performance comparisons with Web Services Notification (WS-N) and Java Message Service (JMS) for the same logical scenarios and experimental set-up. 
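The talks in this session report quantitative latency and throughput figures. As background only, the sketch below shows one common minimal pattern for measuring end-to-end publish/subscribe latency with the ISO C++ DDS API: the sender embeds a send timestamp in each sample and the receiver compares it against its own clock on arrival. The Ping type, topic name and sample count are hypothetical assumptions, not the benchmarks used by the presenters, and the single-process setup measures only the local middleware path, not network latency.

```cpp
#include <dds/dds.hpp>
#include <chrono>
#include <iostream>
#include <vector>

// Hypothetical IDL-generated type for this sketch:
//   struct Ping { long long seq; long long send_ns; };

int main() {
    using clock = std::chrono::steady_clock;

    dds::domain::DomainParticipant dp(0);
    dds::topic::Topic<Ping> topic(dp, "LatencyPing");
    dds::sub::Subscriber sub(dp);
    dds::pub::Publisher pub(dp);
    dds::sub::DataReader<Ping> reader(sub, topic);   // reader exists before any write
    dds::pub::DataWriter<Ping> writer(pub, topic);

    std::vector<double> latencies_us;
    for (long long i = 0; i < 1000; ++i) {
        const long long t0 = std::chrono::duration_cast<std::chrono::nanoseconds>(
                                 clock::now().time_since_epoch()).count();
        writer.write(Ping(i, t0));                   // timestamp travels inside the sample

        // Poll until the sample arrives; a WaitSet would avoid busy-waiting.
        bool received = false;
        while (!received) {
            for (const auto& s : reader.take()) {
                if (!s.info().valid()) continue;
                const long long t1 = std::chrono::duration_cast<std::chrono::nanoseconds>(
                                         clock::now().time_since_epoch()).count();
                latencies_us.push_back((t1 - s.data().send_ns()) / 1000.0);
                received = true;
            }
        }
    }

    double sum = 0.0;
    for (double l : latencies_us) sum += l;
    std::cout << "mean in-process latency: " << sum / latencies_us.size()
              << " us over " << latencies_us.size() << " samples\n";
    return 0;
}
```
A cross-host variant would split sender and receiver into separate processes and either rely on synchronized clocks or use a round-trip (ping/pong) measurement instead.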
| | 18:00 – 19:00 | Attendee Evening Reception | | | | | THURSDAY April 19, 2012 – Workshop Day 2 | | 09:00 – 10:30 | SESSION 7: Large-scale Systems & Systems of Systems | | | Chair: Rick Warren, Director of Technology Solutions, RTI Scalable and Interoperable DDS Security Angelo Corsaro, Chief Technology Officer, PrismTech With the adoption of the OMG DDS standard as the technology of choice for distributing operational data in systems such as SESAR (the next generation European Air Traffic Control System), interoperable and scalable DDS security is becoming a crucial need. However, traditional approaches such as TLS or DTLS are inherently point-to-point and would lead to unacceptable overheads and loss of scalability. Using SESAR as a running case study, this presentation will survey key DDS security requirements for systems of systems, and describe how the Secure Real-time Transport Protocol (SRTP) can be projected onto DDSI/RTPS to obtain a secure, scalable, efficient and interoperable DDS wire-protocol. | | | Applying DDS to Large Scale Mission Critical Distributed Systems: An Experience Report Niels Korstee, Technical Lead OpenSplice DDS, PrismTech Angelo Corsaro, Chief Technology Officer, PrismTech Hans van't Hag, OpenSplice DDS Product Manager, PrismTech The Object Management Group (OMG) Data Distribution Service for Real-Time Systems (DDS) is today used in a large number of large-scale mission-critical systems. Examples include: - The control system for the Grand Coulee Dam, the largest hydroelectric power plant in the United States and the fifth largest in the world
- The CoFlight Air Traffic Control System, a next-generation air traffic control system for France, Italy and Switzerland
- SESAR SWIM, the integration of all the European Air Traffic control centers.
All of these projects share some common challenges as they are (1) very large in scale, some of them spanning nations, (2) mission-critical, as the system needs to remain functional during operations regardless of failures, overloads, etc., (3) long-lived, as once deployed these systems remain operational for several decades, and (4) heterogeneous, as resourceful subsystems need to interact with resource-constrained devices. This presentation will introduce the use cases listed above, crystallize the DDS challenges they have in common, and present solutions for each of them. These solutions have been proven to work on several systems and distill our experience in working with very complex distributed mission-critical systems. | | | DDS Interoperability in SWIM Hakim Souami, Technical Architect, Thales SWIM (System Wide Information Management) is a key enabler for SESAR, the European Air Traffic Management (ATM) modernization program. SWIM builds a system of systems for the sharing of common information in an interoperable and standardized way. The Ground-Ground SWIM is built around a constellation of automated systems collaborating for the provision of a shared and coherent view of Flight Objects to ATM users. This presentation will cover the main technical challenges that have been encountered during integration of two SWIM Infrastructure prototypes developed by two industrial partners in the context of SESAR WP14. DDS interoperability through OMG DDSI has already been demonstrated at the OMG, but we will report how integration of SWIM prototypes has revealed both limitations of the current standard and interoperability problems between two widely-used DDS implementations. | | 10:30 – 10:50 | Morning Refreshments | | 10:50 – 12:50 | SESSION 8: Model-driven Techniques | | | Chair: Andrew Watson, OMG Model Driven Development of High Integrity Applications Based on Reusable Assets Emilio Salazar, Researcher, Universidad Politecnica de Madrid Miguel A. de Miguel, Associate Professor, Universidad Politecnica de Madrid This presentation introduces solutions for the integration of different MDA technologies based on Reusable Modeling tool Assets (RMA), and solutions for deploying artifacts so as to provide modeling tool independence. We introduce approaches for the construction of a set of reusable assets oriented to the development of high-integrity applications. The assets included in the presentations provide support for: - UML extensions for the representation of high-integrity concepts (e.g. the MARTE and Safety-aware profiles)
- High integrity analysis languages such as FMECA, FTA and RMA
- Bridges to specific analysis tools such as MAST and Item Toolkit
- Code generators for real-time execution platforms: RTSJ and Ada-Ravenscar
The application of RMA techniques makes possible the application of these artefacts in different modeling tools, which avoids tool dependencies in model-based software applications. The presentation introduces the problems and solutions for the portability and interoperability of MDA artifacts (e.g. UML profiles and model libraries, EMOF metamodels, QVT and MOF2Text transformations). The presentation will be based on a set of practical RMAs, which support the development of real-time systems on platforms such as RTSJ (Real-Time Specification for Java) and Ada Ravenscar. Examples of modeling tools used in the portability will be Papyrus, RSA and MagicDraw. | | | An MDA Approach to Designing and Implementing Publish-subscribe Applications Andreas Korff, Consultant, Atego Systems James Hummell, Principal Engineer, Atego Systems This presentation investigates how Model Driven Architecture (MDA) can be applied to the design of distributed publish-subscribe applications and how automated transformation can be used to map the architecture to a variety of publish-subscribe technologies, including DDS and JMS. We will cover recent research on the use of UML and SysML to model publish-subscribe applications and the kinds of profiles and extensions that are necessary to properly model these systems. In addition we will describe the model transformations required to generate the artifacts necessary to map the model into various publish-subscribe technologies. Special attention is given to transformations that map to platforms compliant with the OMG DDS specification. Unlike the emerging UML4DDS specification (which is still in progress), this work leverages new artifacts introduced by the SysML, DDS for light-weight CCM, and DDS Extensible Types specifications to define a more abstract mapping that significantly eases the burden on the modeling tools and model-transformation engines. | | | Open Source DDS Modeling Toolkit Mike Martinez, Principal Software Engineer, Object Computing, Inc. We present the results of SBIR-funded research to create a viable Open Source modeling toolkit for DDS. Specifically we discuss the meta-model used to capture semantic middleware information using an Eclipse EMF/GMF-based editor suite; binding deployment information to middleware models; and generation of C++ code to provide model-defined middleware support for applications. The modeling toolkit was created as a bundle of Eclipse plug-ins. These include UML model capture of the data types, Quality of Service (QoS) policies, and middleware connectivity for all or portions of a system. Model information is graphically captured and stored as an instance document of an XML schema defined by the model capture meta-model. The diagram is stored in XMI format with external references to the separate middleware semantic content. This is then used, either from within the plug-ins or scripted externally, to generate C++ code targeted to a modeling-toolkit-specific support library. Targeting a support library instead of generating straight-line C++ code allows the implementation to mature independently of any captured models. In addition to the C++ code, the data is defined in IDL format and additional build dependency information is included allowing diverse build systems to be used for actual compilation and linking of the captured middleware model. 
| | | Using MDE Approaches in Compiler Research for High Performance Computing Tomofumi Yuki, PhD Candidate, Colorado State University Sanjay Rajopadhye, Associate Professor, Colorado State University The emergence of multicore machines has led to renewed research on parallelizing compilers. However, these are very complicated software systems, and research compiler infrastructures are even more intricate, having usually been produced incrementally by graduate students who see them as a vehicle to demonstrate research ideas, not as a product. In this context, Model Driven Software Engineering (MDE) offers unique techniques and tools to reduce and streamline compiler development. We will report on our recent work on parallel code generation using MDE. We are aiming for a compiler infrastructure which will support multiple Domain Specific Languages (DSLs) without having to be completely reimplemented. | | 12:50 – 14:05 | Lunch | | 14:05 – 15:05 | SESSION 9: DDS | | | Chair: Gerardo Pardo-Castellote, CTO, RTI Regression Testing of DDS-based Distributed Systems Hans van't Hag, OpenSplice DDS Product Manager, PrismTech With the growing adoption of DDS as a crucial building block of business- and mission-critical systems there is an increasing need to systematically test correct functioning of distributed DDS systems in a non-intrusive and repeatable way. This talk discusses the experiences of system-integrators at large-scale deployments of DDS in the area of naval combat systems and deduces from those experiences a set of requirements for a regression-test toolsuite for DDS-based distributed systems. Whereas most DDS vendors have product-specific 'white-box' testing-tools (that utilize specific protocols) the aimed-for regression tool-suite is intended to treat the system-under-test as a 'black-box' where standardized DDS communication is utilized to 'stimulate' and 'monitor' the systems behavior. The requirements to be discussed can be grouped into: - (non-)intrusiveness: a black-box approach where the DDS system 'under test' should be left 'untouched' w.r.t. configuration, execution and performance
- platform/connection: location of where the test-engine would need to be executed and how to access the 'target-system-under-test'
- regression-test nature: repeated and automated regression-testing
- scripting: script-engines that allow the user to inject, capture and process data from the system-under-test
- browsing: different views on the system, such as the physical view (apps and where they're running) as well as the logical DDS view (DDS entities: topics/writers/etc.)
- visualization: the way DDS-data can be captured (e.g. as a timeline) and visualized (lists and/or charts)
- analysis: aids that facilitate easy analysis of the data
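To make the black-box "stimulate and monitor" idea concrete, here is a minimal hedged sketch (ISO C++ DDS API, assuming the DDS-PSM-Cxx WaitSet/ReadCondition interfaces) of a single test step that injects a stimulus sample on one topic and waits for the system under test to publish an expected response on another. The Stimulus/Response types, topic names, timeout and pass/fail convention are illustrative assumptions, not part of the tool-suite described above.

```cpp
#include <dds/dds.hpp>
#include <iostream>

// Hypothetical IDL-generated types used only for this sketch:
//   struct Stimulus { long test_id; };
//   struct Response { long test_id; long status; };

bool run_test_step(int domain_id) {
    dds::domain::DomainParticipant dp(domain_id);

    dds::topic::Topic<Stimulus> stim_topic(dp, "TestStimulus");
    dds::topic::Topic<Response> resp_topic(dp, "TestResponse");

    dds::pub::Publisher pub(dp);
    dds::sub::Subscriber sub(dp);
    dds::pub::DataWriter<Stimulus> stim_writer(pub, stim_topic);
    dds::sub::DataReader<Response> resp_reader(sub, resp_topic);

    // Prepare to block until the system under test delivers new response data.
    dds::core::cond::WaitSet ws;
    dds::sub::cond::ReadCondition new_data(resp_reader,
                                           dds::sub::status::DataState::new_data());
    ws.attach_condition(new_data);

    // Inject the stimulus through plain, standardized DDS communication.
    stim_writer.write(Stimulus(1));

    try {
        ws.wait(dds::core::Duration::from_secs(5));   // time out if nothing arrives
    } catch (const dds::core::TimeoutError&) {
        std::cerr << "no response within 5 s\n";
        return false;
    }

    // Check the captured response(s) against the expected outcome.
    for (const auto& s : resp_reader.take()) {
        if (s.info().valid() && s.data().test_id() == 1) {
            return s.data().status() == 0;   // 0 == pass, by convention in this sketch
        }
    }
    return false;
}
```
In a real regression tool-suite the stimulus/response topics, expected values and timeouts would come from test scripts rather than being hard-coded, so that the same harness can drive repeated, automated runs against an untouched system under test.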
| | | DDS in Low-Bandwidth Environments Jaime Martin Losa, CEO, eProsima DDS middleware uses the RTPS protocol, which allows fine-tuning of timing and other parameters, making it a good choice for different kinds of links, including low-bandwidth links. DDS should therefore work over low-bandwidth radio links found in aerospace and defense applications. However, in practice, DDS often achieves either very poor performance or no communication at all in these environments. There are several reasons for this behavior: - The DDS Discovery protocol may require dozens of high-payload messages,
- RTPS Headers are large compared to those of lower-level transport protocols,
- DDS does not compress data,
- DDS QoS defaults may or may not be the best choice for this kind of link,
- And the DDS wire protocol needs special tuning to work well with the TDMA (Time Division Multiple Access) algorithm often used to multiplex low-bandwidth channels.
However, all these problems can be solved, and DDS is being used successfully with shared bandwidths as low as 2400 bps. This presentation will discuss how to use DDS in these environments. | | 15:05 – 15:25 | Afternoon Refreshments | | 15:25 – 16:55 | SESSION 10: DDS Patterns | | | Chair: Hans van't Hag, OpenSplice DDS Product Manager, PrismTech Data Distribution Patterns Rick Warren, Director of Technology Solutions, RTI Gerardo Pardo-Castellote, Chief Technology Officer, RTI Sophisticated distributed applications include a variety of interaction patterns, for example Publish-Subscribe, Request-Reply, and Point-to-Point. Often these patterns are baked into middleware in ad hoc ways. For example, in JMS, a given data-stream may operate according to a Publish-Subscribe pattern or a Point-to-Point pattern, but not both, while a Request-Reply pattern may be dynamically layered atop either. In CORBA, exchanges are coupled to either a Point-to-Point or Request-Reply pattern, and no others, at design time. Significant gains in flexibility and interoperability can be had by applying interaction patterns on top of more general data access and distribution mechanisms and by decoupling them from specific middleware technologies. This talk will describe how system architects and platform builders can improve their solutions by decomposing their data streams on the basis of common data elements and simple "atomic" patterns -- and then using these primitive elements to construct more complex patterns. The applicability of various middleware technologies to this approach will also be discussed. | | | Classical Distributed Algorithms with DDS Sara Tucci-Piergiovanni, Research Engineer, CEA LIST's Laboratory of Model-driven Engineering Angelo Corsaro, Chief Technology Officer, PrismTech The OMG DDS standard has achieved very strong adoption as the distribution middleware of choice for a large class of mission- and business-critical systems, such as Air Traffic Control, Automated Trading, SCADA, Smart Energy, etc. The main reason for choosing DDS lies in its efficiency, scalability, high-availability and configurability - through the 20+ QoS policies. However, this comes at the cost of a relaxed consistency model with no strong guarantees over global invariants. As a result, many architects have had to independently devise the correct algorithms for classical problems like fault-detection, leader election, consensus, distributed mutual exclusion and snapshot, using the DDS primitives. In this presentation we will describe DDS-based distributed algorithms for many classical problems in distributed systems. We'll start with algorithms that, for simplicity, ignore the presence of failures, and then show how these algorithms have to evolve in order to deal with failure. We'll also show how these classical algorithms can be used to implement useful extensions of the DDS semantics, such as global take and global write. | | | Integration Patterns for Mission Critical System of Systems Julien Enoch, Engineering Team Lead, PrismTech Angelo Corsaro, Chief Technology Officer, PrismTech An increasing number of mission- and business-critical systems rely on OMG DDS for distributing and managing data. DDS trivially addresses system integration where a shared data model exists. 
However, building systems that rely on different information models, or interfacing DDS-based systems with those using other technologies, is often done either with point-to-point communication, or via integration technologies such as an Enterprise Service Bus (ESB). Neither is satisfactory; point-to-point integration introduces quadratic complexity, while ESBs are characterised by inefficiency and lack of QoS preservation and transformation. This presentation will introduce a pattern language for Integration of Mission Critical System of Systems; explain for each pattern the problem it solves, the context in which it arises and how the conflicting forces are often balanced; and present an implementation of these patterns as provided by the OpenSplice Gateway route definition DSL. | | 16:55 | Close | Chair: Andrew Watson, Object Management Group |