Vol 28, No 2 (2016)
5-32
Abstract
It is generally considered that object-oriented (OO) languages provide weaker support for generic programming (GP) than functional languages such as Haskell or SML, and several comparative studies have shown this. But many new object-oriented languages have appeared in recent years. Have they improved their support for generic programming? And if not, is there a reason why OO languages yield to functional ones in this respect? Earlier comparative studies usually did not treat object-oriented languages in any special way. However, OO features affect both the language facilities for GP and the style in which people write generic programs in such languages. In this paper we compare ten modern object-oriented languages and language extensions with respect to their support for generic programming. We found that each of these languages strictly follows one of two approaches to constraining type parameters, so the first design challenge we consider is which approach is better. It turns out that most of the explored OO languages use the less powerful one. The second factor with a big impact on the expressive power of a programming language is support for multiple models. We discuss the pros and cons of this feature and its relation to other language facilities for generic programming.
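For illustration only (the code is not taken from the paper), the two approaches to constraining type parameters can be sketched in Scala: a subtype bound ties the constraint to the element type itself, whereas a separate "concept" object (type class) keeps the constraint outside the type and naturally admits multiple models for the same type.

    // Illustrative sketch; names are hypothetical.

    // Approach 1: constraint via a subtype bound -- the element type itself
    // must implement the required interface.
    trait MyComparable[T] { def compareTo(other: T): Int }

    def maxBySubtyping[T <: MyComparable[T]](xs: List[T]): T =
      xs.reduceLeft((a, b) => if (a.compareTo(b) >= 0) a else b)

    // Approach 2: constraint via a separate "concept" (type class) -- the
    // ordering lives outside the type, so several models can coexist.
    trait Ord[T] { def compare(a: T, b: T): Int }

    def maxByConcept[T](xs: List[T])(implicit ord: Ord[T]): T =
      xs.reduceLeft((a, b) => if (ord.compare(a, b) >= 0) a else b)

    // Multiple models for the same type: pick the one you need explicitly.
    val byValue: Ord[Int]    = (a, b) => Integer.compare(a, b)
    val byAbsolute: Ord[Int] = (a, b) => Integer.compare(math.abs(a), math.abs(b))

    val xs = List(-7, 3, 5)
    maxByConcept(xs)(byValue)     // 5
    maxByConcept(xs)(byAbsolute)  // -7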
Alexander Tchitchigin,
Larisa Safina,
Mohamed Elwakil,
Manuel Mazzara,
Fabrizio Montesi,
Victor Rivera
33-44
Abstract
Jolie is the first language designed for microservices, and it is currently dynamically type checked. This paper considers the opportunity to integrate dynamic and static type checking through the introduction of refinement types, verified via an SMT solver. The integration of the two aspects allows a scenario where the static verification of internal services and the dynamic verification of (potentially malicious) external services cooperate to reduce testing effort and enhance security. Refinement types are a well-known technique for numeric, array, and algebraic data types; they rely on the corresponding SMT theories. Recently, SMT solvers gained support for a theory of strings and regular expressions. In this paper, we describe a possible application of this theory to string refinement types. We use the Jolie programming language to illustrate the feasibility and usefulness of such an extension: first, because Jolie already has a syntax extension to support string refinements, on top of which we build static type checking; second, because in the realm of microservices the need for improved checking of string data is much higher, since most external communication goes through text-based protocols. We present a simplified but real-world example from the domain of web development and intentionally introduce a bug into it, demonstrating how easily such a bug can slip past a conventional type system. The proposed solution is feasible, as it does not accept the program containing the bug. A complete solution will need enhancements in precision and error reporting.
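To make the kind of property concrete (this is not Jolie syntax and not the paper's implementation), a string refinement such as "matches an e-mail regular expression" can be sketched in Scala with a smart constructor; the paper's point is that an SMT theory of strings and regular expressions lets such a predicate be discharged statically rather than checked at run time.

    // Hypothetical sketch of a string refinement; in the paper's setting the
    // predicate would be verified statically by an SMT solver with a theory
    // of strings and regular expressions, not checked at run time.
    final case class Email private (value: String)

    object Email {
      // The refinement predicate: the string must match this (simplified) regex.
      private val pattern = "[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}".r

      def fromString(s: String): Either[String, Email] =
        if (pattern.pattern.matcher(s).matches()) Right(Email(s))
        else Left(s"not a valid e-mail address: $s")
    }

    // A bug such as swapping a user-name field and an e-mail field is caught
    // at the refinement boundary instead of slipping past a plain String type.
    Email.fromString("alice@example.org") // Right(...)
    Email.fromString("alice")             // Left(not a valid e-mail address: alice)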
45-62
Abstract
Visual domain-specific languages usually have a low entry barrier; sometimes even children can program in such languages by working with visual representations. This is widely exploited in the educational robotics domain, where the most commonly used programming environments are visual. The paper describes a novel dataflow visual programming environment for embedded robotic platforms. Complex dataflow languages are, of course, not easy to understand. The purpose of our tool is to bridge the gap between lightweight educational robotics programming tools (which commonly provide languages based on a control flow model) and complex industrial tools (which provide languages based on the more complex dataflow execution model). We compare the programming environments most widely used by the robotics community with our tool. After a brief review of behavioural robotic architectures, we give some thoughts on expressing them in terms of our dataflow language. The visual language described here makes it possible to mix dataflow and control flow models for robotics programming, which we believe is important for educational purposes. A program in our language consists of blocks (visual representations of data transformation processes) and links that represent the data flow between them. A domain-specific modelling approach was used to develop the language. The paper also provides examples of solving two typical robot control tasks in our language.
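A rough textual analogue of the block-and-link idea (purely illustrative, not the environment's actual execution model or API) can be written in Scala: blocks are data transformations and links feed the output of one block into the input of the next.

    // Purely illustrative sketch; not the tool's implementation.

    // A block is a named data transformation.
    final case class Block[A, B](name: String, transform: A => B) {
      // A "link" feeds this block's output into the next block's input.
      def linkTo[C](next: Block[B, C]): Block[A, C] =
        Block(s"$name -> ${next.name}", transform andThen next.transform)
    }

    // Example: distance sensor reading -> filtered value -> motor command.
    val readSensor = Block[Unit, Double]("sensor", _ => 42.0 /* cm, stubbed */)
    val smooth     = Block[Double, Double]("filter", d => d * 0.9)
    val toMotorCmd = Block[Double, Int]("motor", d => if (d < 20) 0 else 100)

    val pipeline = readSensor.linkTo(smooth).linkTo(toMotorCmd)
    pipeline.transform(())   // => 100 (drive forward; stop if an obstacle is near)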
63-78
Abstract
In this paper we describe our approach to representing concerns in the interface of an IDE to make navigation across crosscutting concerns faster and easier. Concerns are represented as a tree of arbitrary structure, and each node of the tree can be bound to a fragment of code. This allows one to quickly locate fragments in the source code and makes switching between software development tasks easier. We describe a model that specifies the data structures used to store information about these code fragments and the algorithms used to find a code fragment in the original or modified source code. The model describes the information about code fragments as a set of contexts. Another important feature of the model is language independence: it supports different programming, mark-up, and DSL languages, as well as any structured text such as documentation. The main goal is to keep the concern tree consistent with the evolving source code, so the search algorithm is designed to work with modified source code in which the code fragment may have changed. The model is implemented as a tool that supports different programming languages and integrates into different editors and integrated development environments. Source code analysis is performed by a set of lightweight parsers. If the code fragment cannot be found automatically because of significant changes, the tool helps the programmer locate it by suggesting possible places in the source code based on the stored information.
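A hypothetical Scala sketch of the data model described above (names, structure and the naive re-location search are illustrative only, not the tool's implementation):

    // A code fragment is stored together with its context: the text of the
    // fragment and a few surrounding lines used to re-locate it later.
    final case class CodeFragment(
      file: String,
      text: String,
      contextBefore: List[String],
      contextAfter: List[String]
    )

    // A concern is a node of an arbitrary tree; a node may be bound to a fragment.
    final case class Concern(
      title: String,
      fragment: Option[CodeFragment] = None,
      children: List[Concern] = Nil
    )

    // A very naive re-location search for a one-line fragment: among exact text
    // matches, prefer the one whose neighbourhood best matches the stored context
    // (the real algorithm is far more robust).
    def relocate(fragment: CodeFragment, modifiedSource: List[String]): Option[Int] = {
      val candidates = modifiedSource.zipWithIndex.collect {
        case (line, idx) if line.trim == fragment.text.trim => idx
      }
      candidates.sortBy { idx =>
        val before = modifiedSource.slice(idx - fragment.contextBefore.size, idx)
        -before.intersect(fragment.contextBefore).size
      }.headOption
    }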
79-96
Abstract
A service-based approach is a method to develop and integrate software products in a modular manner, where each component is available over a network and can dynamically collaborate with other services of the system at run time. This approach is becoming widely adopted in the software engineering industry because it allows the implementation of distributed systems characterized by high quality. Quality attributes can concern the system (e.g., availability, modifiability), business-related aspects (e.g., time to market), or the architecture (e.g., correctness, consistency). Maintaining quality attributes at a high level is a critical issue, because service-based systems lack central control and authority, have limited end-to-end visibility of services, are subject to unpredictable usage scenarios, and support dynamic system composition. The constant evolution of such systems can easily deteriorate the overall architecture, and thus bad design choices, known as anti-patterns, may appear. These are the patterns to be avoided: if we study them and are able to recognize them, we should be able to avoid them. Knowing bad practices is perhaps as valuable as knowing good practices, since with this knowledge we can refactor the solution when we are heading towards an anti-pattern. As with patterns, anti-pattern catalogues are available. When systems evolve continuously, metric-based techniques can be applied to obtain valuable, factual insights into how services work. Given the clear negative impact of anti-patterns, there is a clear and urgent need for techniques and tools to detect them. The article focuses on rules for recognizing symptoms of anti-patterns in a service-based environment, on automated approaches to their detection, and on applying metric-based techniques to this analysis.
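To make the idea of a metric-based detection rule concrete, here is a Scala sketch; the metric names and thresholds are invented for this illustration and are not taken from the article or from any particular catalogue.

    // Illustrative sketch of a metric-based symptom rule.
    final case class ServiceMetrics(
      name: String,
      numOperations: Int,   // interface size
      cohesion: Double,     // 0.0 .. 1.0, how related the operations are
      numConsumers: Int     // how many other services depend on it
    )

    // Symptom rule for a "god service"-style anti-pattern: a huge, weakly
    // cohesive interface that many other services depend on.
    def looksLikeGodService(m: ServiceMetrics): Boolean =
      m.numOperations > 20 && m.cohesion < 0.3 && m.numConsumers > 5

    def detect(system: Seq[ServiceMetrics]): Seq[String] =
      system.filter(looksLikeGodService).map(_.name)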
97-110
Abstract
The theme of code reuse in software development remains important. It is often hard to determine what exactly should be reused in isolation from context. However, the context problem can be narrowed if applications within one given domain are considered: the same features in different applications of one domain share the same context, so their common part should be reused. Hence the problem of domain analysis arises. On the other hand, metaCASE technology makes it possible to generate the code of an application from diagrams that describe this application. The main objective of this article is to present a technology for creating application families that connects metaCASE technology with the results of domain analysis. We propose to use ideas from the FODA (feature-oriented domain analysis) approach for domain analysis and to use feature diagrams to describe variability in a domain. We then generate a metamodel of a domain-specific visual language from the feature diagram, after which a domain-specific visual language editor is generated from this metamodel with the aid of a metaCASE tool. With this language, a user can connect and configure existing feature implementations, thus producing an application. This technology is expected to be especially useful for software product lines.
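A small Scala sketch of what a feature diagram and a configuration check boil down to (this is not the generated metamodel or the metaCASE tool's API; the "robot" product line and its features are invented for illustration):

    sealed trait Feature { def name: String }
    final case class Mandatory(name: String, children: List[Feature] = Nil) extends Feature
    final case class OptionalFeature(name: String, children: List[Feature] = Nil) extends Feature
    final case class XorGroup(name: String, alternatives: List[Feature]) extends Feature

    // A tiny feature diagram for a hypothetical "robot" product line.
    val diagram: Feature =
      Mandatory("Robot", List(
        Mandatory("Chassis"),
        OptionalFeature("Camera"),
        XorGroup("Drive", List(Mandatory("Wheels"), Mandatory("Tracks")))
      ))

    // A configuration is the set of selected feature names; the check enforces
    // that mandatory features are present and XOR groups pick exactly one option.
    def isValid(f: Feature, selected: Set[String]): Boolean = f match {
      case Mandatory(n, cs)       => selected(n) && cs.forall(isValid(_, selected))
      case OptionalFeature(n, cs) => !selected(n) || cs.forall(isValid(_, selected))
      case XorGroup(_, alts)      => alts.count(a => selected(a.name)) == 1 &&
                                     alts.filter(a => selected(a.name)).forall(isValid(_, selected))
    }

    isValid(diagram, Set("Robot", "Chassis", "Wheels"))           // true
    isValid(diagram, Set("Robot", "Chassis", "Wheels", "Tracks")) // false: XOR violated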
111-126
Abstract
Verification tools are often the result of several years of research effort. Their development happens as a distributed effort inside academic institutions, relying on the ability of senior investigators to ensure continuity. Quality attributes such as usability are unlikely to be targeted with the same accuracy required for commercial software, where those factors make a financial difference. For such tools to become widespread, it is therefore necessary to spend extra effort and attention on the users' experience and to allow software engineers to benefit from them without having to understand the mathematical machinery in full detail. To put the spotlight on the usability of verification tools, we chose an automated verifier for the Eiffel programming language, AutoProof, and a well-known benchmark, the Tokeneer problem. The tool is used to verify parts of the Tokeneer implementation so as to identify AutoProof's strengths and weaknesses and, finally, to propose the necessary updates. The results show the efficacy of the tool in verifying a real piece of software and in automatically discharging nearly two thirds of the verification conditions. At the same time, the case study shows the demand for improved documentation and emphasizes the need for improvements in the tool itself and in the Eiffel IDE.
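The verification conditions AutoProof discharges come from contracts (require/ensure clauses) attached to Eiffel routines. As a rough illustration of the style of specification involved, the sketch below shows analogous pre- and postconditions as run-time checks in Scala; the class and the checks are hypothetical, not taken from the Tokeneer code, and AutoProof verifies such contracts statically rather than executing them.

    // Illustrative only: contract-style pre/postconditions as run-time checks.
    final class AccessController(private var attemptsLeft: Int) {

      def tryUnlock(pinOk: Boolean): Boolean = {
        require(attemptsLeft > 0, "precondition: attempts must remain")   // require-clause
        val before = attemptsLeft
        if (!pinOk) attemptsLeft -= 1
        val opened = pinOk
        assert(attemptsLeft >= 0 && attemptsLeft <= before,               // ensure-clause
          "postcondition: attempt counter never increases or goes negative")
        opened
      }
    }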
127-138
Abstract
Certified programming makes it possible to prove that a program meets its specification. The correctness of a program is checked at compile time, which guarantees that the program always runs as specified; hence, there is no need to test certified programs to ensure they work correctly. There are numerous toolchains designed for certified programming, but F* is the only language that supports both general-purpose programming and semi-automated proving. The latter means that F* infers proofs where possible and a user can specify more complex proofs when necessary. We work on applying this technique to a grammarware research and development project, YaccConstructor. We present a work-in-progress verified implementation of the transformation of a context-free grammar to Chomsky normal form, which is a step toward certification of the entire project. Among other features, F* allows extracting F# or OCaml code from a program written in F*. The YaccConstructor project is mostly written in F#, so this feature of F* is of particular importance because it allows maintaining compatibility between certified modules and the existing modules of the project that are not yet certified. We also discuss advantages and disadvantages of this approach and formulate topics for further research.
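A rough idea of the artefact being certified, sketched here as plain data and a checkable property (names are hypothetical; in F* the property would appear as a refinement in the type of the transformation and would be proved, not merely tested):

    // A context-free grammar as data, plus the property "is in Chomsky normal form".
    sealed trait Symbol
    final case class Terminal(value: String)   extends Symbol
    final case class NonTerminal(name: String) extends Symbol

    final case class Rule(lhs: NonTerminal, rhs: List[Symbol])
    final case class Grammar(start: NonTerminal, rules: List[Rule])

    // Simplified CNF check: every rule is either A -> B C (two non-terminals)
    // or A -> a (a single terminal).
    def inChomskyNormalForm(g: Grammar): Boolean =
      g.rules.forall { r =>
        r.rhs match {
          case List(_: NonTerminal, _: NonTerminal) => true
          case List(_: Terminal)                    => true
          case _                                    => false
        }
      }

    // Informally, the certified transformation would carry a specification like:
    //   toCNF : g:Grammar -> Tot (g':Grammar{ inCNF g' /\ language g' == language g })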
139-156
Abstract
The development of Cyber-Physical Systems often involves cyber elements controlling physical entities, and this interaction is challenging because of the multi-disciplinary nature of such systems. It can be useful to create models of the constituent components and simulate them in what is called a co-simulation, as this can help to identify undesired behaviour. The Functional Mock-up Interface describes a tool-independent standard for constituent components participating in such a co-simulation and can support different formalisms. This paper explores whether different concurrency features in Scala (actors, parallel collections, and futures) increase the performance of an existing application, the Co-Simulation Orchestration Engine, which performs co-simulations. The investigation was conducted by refactoring the existing application to make it suitable for implementing functionality that takes advantage of the concurrency features. In order to compare the different implementations, testing was carried out using four test co-simulations. These test co-simulations were executed using the concurrent implementations and the original sequential implementation, verifying the simulation results and recording the execution times of the simulations. The analysis showed that concurrency can be used to increase performance in terms of execution time in some cases, but in order to achieve optimal performance it is necessary to combine different strategies. Based on these results, future work tasks have been proposed.
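As a minimal illustration of one of the strategies compared (futures), stepping several constituent models concurrently within one co-simulation time step might look as follows in Scala; the simulation-unit interface here is a stub, not the actual COE or FMI API.

    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration._

    // Stub of a constituent model; the real COE talks to FMUs via FMI.
    trait SimUnit {
      def doStep(time: Double, stepSize: Double): Double   // returns an output value
    }

    // One co-simulation step: ask all units to advance concurrently (futures),
    // then wait for every unit before exchanging values and moving on.
    def coSimStep(units: Seq[SimUnit], time: Double, h: Double): Seq[Double] = {
      val stepped: Seq[Future[Double]] = units.map(u => Future(u.doStep(time, h)))
      Await.result(Future.sequence(stepped), 30.seconds)
    }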
157-172
Abstract
Modelling is vital in the development of modern avionics systems and other mission-critical systems. Models can be used for checking and validation of the developed system, including early validation, which is very important because the cost of errors rises exponentially with the development stage. The Architecture Analysis and Design Language (AADL) is widely used for modelling such systems: it allows modelling both the architecture of the developed system and some behavioural characteristics of its components. This paper considers the task of automatically checking the consistency of some behavioural properties of a model. In particular, we focus on the problem of estimating the working time of model components and on the correspondence between this time and other properties of the model. This problem is close to the worst-case execution time (WCET) problem, but it has its own specifics in this application. We consider a static approach that works with the standard specification of component behaviour in AADL models using specialized extended finite automata. The paper discusses the peculiarities of the behaviour model used (specialized finite automata), including the treatment of time and external events, and the problem of working time estimation for such models, which is complicated by the non-local nature of this property. We propose an algorithm for time estimation for such behaviour models. The algorithm has been implemented in MASIW, a framework for the development of AADL models.
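A much-simplified illustration of the estimation problem (assuming an acyclic automaton and ignoring external events and other features of the paper's behaviour model): the worst-case working time is the longest accumulated transition time from the initial state to any final state.

    // Simplified sketch only; the actual behaviour model and algorithm are richer.
    final case class Transition(to: String, duration: Double)

    def worstCaseTime(
        transitions: Map[String, List[Transition]],
        initial: String,
        finals: Set[String]): Double = {

      val memo = scala.collection.mutable.Map.empty[String, Double]

      def longestFrom(state: String): Double = memo.get(state) match {
        case Some(t) => t
        case None =>
          val base = if (finals(state)) 0.0 else Double.NegativeInfinity
          val viaSuccessors =
            transitions.getOrElse(state, Nil).map(t => t.duration + longestFrom(t.to))
          val result = (base +: viaSuccessors).max
          memo(state) = result
          result
      }

      longestFrom(initial)
    }

    worstCaseTime(
      Map("s0" -> List(Transition("s1", 2.0), Transition("s2", 5.0)),
          "s1" -> List(Transition("s2", 4.0))),
      initial = "s0", finals = Set("s2"))   // => 6.0 (path s0 -> s1 -> s2)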
173-180
Abstract
The article describes evolution trends in the technical world that call for proper software and systems engineering approaches to the creation of complex systems, for example aircraft. The importance and relevance of applying the requirements management discipline in software development are substantiated, and the main principles of software and systems engineering approaches are set out. Systems engineering is a discipline that integrates and harmonizes all activities across the entire area of systems creation. The article describes information systems that have been created at GosNIIAS and are now actively used in internal and external projects: a requirements management information system, a problem report management information system, and a technological environment for test method preparation and test result registration. The requirements management information system contains special predefined documents and templates required by the standards DO-178, DO-254, DO-330, ARP4754 and the State Standards GOST 51904 and GOST 34; its use at GosNIIAS and at external enterprises is described. The problem report management information system registers and supports the lifecycle of problem reports that appear during the work process. The technological environment for test method preparation and test result registration supports activities such as the preparation of test methods, test cases and test procedures, testing on the integration stand, registration of test results and preparation of test protocols. Some promising directions for applying software and systems engineering approaches at GosNIIAS are listed.
181-192
Abstract
Modern airliners such as the Airbus A320, the Boeing 787 and the Russian MS-21 use the so-called Integrated Modular Avionics (IMA) architecture for airborne systems. This architecture is based on interconnecting devices and on-board computers by means of a uniform real-time network. It allows a significant reduction in cable usage, thus reducing the takeoff weight of an airplane. IMA separates the functions of collecting information (sensors), acting (actuators), and the avionics logic implemented by applied avionics software in on-board computers. The international standard ARINC 653 defines constraints on the underlying real-time operating system and the programming interfaces between the operating system and the associated applications. The standard regulates space and time partitioning of applied IMA-related tasks. Most existing operating systems with ARINC 653 support are commercial, proprietary software. In this paper, we present JetOS, an open-source real-time operating system with complete support for ARINC 653 part 1 rev 3. JetOS originates from the open-source project POK, created by French researchers; at that time, POK was the only open-source OS with at least partial support for ARINC 653. Despite this, POK was not suitable for practical use: it failed to meet a number of fundamental requirements and ran only in an emulator. During JetOS development the POK code was significantly redesigned. The paper discusses the disadvantages of POK and shows how we solved those problems and what changes we have made in the POK kernel and individual subsystems. In particular, we fully rewrote the real-time scheduler, the network stack and memory management, and we have added some new features to the OS. One of the most important features is system partitions. A system partition is a specialized application with extended capabilities, such as access to hardware (network card, PCI controller, etc.). The introduction of system partitions allowed us to move large subsystems out of the kernel and to limit the kernel to minimal functionality: context switching, scheduling and message passing. In particular, we have moved the network subsystem into a system partition. This reduces the kernel size, potentially reduces the probability of bugs in the kernel, and simplifies the verification process.
193-204
Abstract
In this paper, we report on work in progress on a debugger for JetOS, a real-time operating system for civil airborne systems. JetOS is designed to work within the Integrated Modular Avionics (IMA) architecture and implements the ARINC 653 API specification. The operating system is being developed at the Institute for System Programming of the Russian Academy of Sciences, and the next step in its development is to create a tool for debugging user-space applications running on it. We discuss the major requirements for such a debugger and show how it differs from a typical debugger used by desktop developers. We also review a number of debuggers for various embedded systems and study their functionality. Finally, we present our solution, which works both in the QEMU emulator, used to emulate the environment for our system, and on the target hardware. The presented debugger is based on the GDB debugging framework but contains a number of extensions specific to debugging embedded applications. The implementation of the debugger is not complete yet, and a number of features could still improve its usability. Nevertheless, it is already more functional than the common GDB debugger for QEMU and, in contrast to other systems whose debuggers provide only some of the functions we need, it meets the majority of our requirements and restrictions.
205-220
Abstract
We present three generations of prototypes for a contactless admission control system that recognizes people from visual features while they walk towards the sensor. The system is meant to require as little interaction as possible to improve comfort for its users; especially for people with impairments, such a system can make a major difference. For data acquisition, we use the Microsoft Kinect 2, a low-cost depth sensor, and its SDK. We extract comprehensible geometric features and apply aggregation methods over a sequence of consecutive frames to obtain a compact and characteristic representation of each individual approaching the sensor. All three prototypes implement a data processing pipeline that transforms the acquired sensor data into a compact and characteristic representation through a sequence of small data transformations. Every single transformation takes one or more of the previously computed representations as input and computes a new representation from them. In the example models presented in this paper, we focus on the generation of frontal view images of people's faces, which is part of the processing pipeline of our newest prototype. These frontal view images can be obtained from colour, infrared and depth data by rendering the scene from a changed viewport. The pipeline can be modelled by considering only the data flow between data transformations. We show how the prototypes can be modelled using modelling frameworks and tools such as Cinco or the Cinco product DIME, which allow the data flow of the data processing pipeline to be modelled in an intuitive way.
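The pipeline idea ("every transformation takes previously computed representations and produces a new one") can be sketched in Scala as follows; this is an illustration only, as the actual prototypes are modelled graphically in Cinco/DIME and operate on Kinect data rather than this toy representation.

    // Illustrative sketch of the data processing pipeline idea.
    final case class Representation(kind: String, payload: Vector[Double])

    // A data transformation: takes previously computed representations,
    // produces a new one that later stages can consume.
    final case class Transformation(
      name: String,
      inputs: List[String],
      compute: List[Representation] => Representation
    )

    // Run the pipeline in order, keeping every intermediate representation by kind.
    def runPipeline(initial: Representation, stages: List[Transformation]): Map[String, Representation] =
      stages.foldLeft(Map(initial.kind -> initial)) { (acc, stage) =>
        val ins = stage.inputs.map(acc)      // assumes each input was produced earlier
        val out = stage.compute(ins)
        acc + (out.kind -> out)
      }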
221-242
Abstract
In this paper the authors present the "mmdlab" library for the interpreted programming language Python. The library allows reading, processing and visualizing the results of numerical calculations in molecular dynamics (MD) simulation tasks. Considering the large volume of data obtained from such simulations, there is a need for a parallel implementation of the processing algorithms. Parallel processing should be possible both on multicore systems, such as a common scientific workstation, and on supercomputer systems and clusters where the MD simulations were carried out. During development we studied the effectiveness of the Python language for such tasks and examined tools for its acceleration, as well as the multiprocessing capabilities and tools for cluster computation available in this language. We also investigated the problems of receiving and processing data located on multiple computational nodes; this was prompted by the need to process data produced by a parallel algorithm that was executed on multiple computational nodes and saved its output on each of them. The open-source "Mayavi2" package was chosen as the tool for scientific visualization. The developed "mmdlab" library was used to analyse the results of an MD simulation of the interaction between a gas and a metal plate. As a result, we managed to observe the adsorption effect in detail, which is important for many practical applications.
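The multi-node reading problem can be pictured roughly as "load each node's piece of a timestep in parallel, then merge"; the sketch below is generic and in Scala (mmdlab itself is a Python library with a different API), and all names are illustrative.

    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration._

    final case class Particle(id: Long, x: Double, y: Double, z: Double)

    // Stub: in reality this would parse one node's output file for one timestep.
    def readNodePart(node: Int, timestep: Int): Vector[Particle] = Vector.empty

    // Each computational node saved its own part of the output, so the reader
    // loads the per-node pieces in parallel and merges them into one timestep.
    def readTimestep(nodes: Seq[Int], timestep: Int): Vector[Particle] = {
      val parts = nodes.map(n => Future(readNodePart(n, timestep)))
      Await.result(Future.sequence(parts), 10.minutes).flatten.toVector
    }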
243-258
Abstract
This paper reports on a study in progress that considers the development of a framework and environment for modelling hardware memristor-based neural networks. An extensive review of the domain has been performed and is partly reported in this work. Attention is given to fundamental papers on memristors and memristor-related technologies. Various physical implementations of memristors are mentioned, together with several mathematical models of the metal-dioxide memristor group. One of the latter is examined more closely by briefly describing the model's mechanisms and some important observations. The paper also considers a recently proposed architecture of memristor-based neural networks and suggests enhancing it by replacing the memristor model it uses with a more accurate one. Based on this review, a number of development requirements were derived and formally specified. Ontological and functional models of the domain at hand are proposed to foster understanding of the field from different points of view: the ontological model is supposed to shed light on the object-oriented structure of a memristor-based neural network, whereas the functional model exposes the underlying behaviour of the network's components, described in terms of mathematical equations. Finally, the paper briefly speculates about the development platform for the framework and its prospects.
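For orientation only: a canonical example of a metal-oxide memristor model (not necessarily the one examined in the paper) is the linear ion drift model of the HP memristor, which in LaTeX notation reads

    v(t) = \left( R_{\mathrm{ON}}\,\frac{w(t)}{D} + R_{\mathrm{OFF}}\left(1 - \frac{w(t)}{D}\right) \right) i(t),
    \qquad
    \frac{dw(t)}{dt} = \mu_{v}\,\frac{R_{\mathrm{ON}}}{D}\, i(t),

where w(t) is the width of the doped region, D the device thickness, R_ON and R_OFF the limiting resistances, and mu_v the ion mobility. More accurate models of the kind mentioned in the abstract refine this behaviour, e.g. with nonlinear drift.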
259-270
Abstract
The paper presents a composition model of function-oriented information resources as well as a method for creating this model; together they provide enhanced capabilities for building information security systems. The composition model corresponds to a data model in non-first normal form, is formed by a functional complex of data, and is presented as a poly-basic algebraic structure that reflects the structure and relationships of complex structured data as well as the specifics of the operations for handling and processing such data. The method for creating function-oriented information resources is based on the deductive method of constructing axiomatic theories. A formalized description of function-oriented information resources is provided, including the functional complex of data of the poly-basic algebraic structure of the composition model, and the functional dependences and relations that form the system of axioms of the composition model. The paper also presents graphic elements depicting the components of the subsystems; a structural-functional chart of the interaction of components across control levels; the structure of the composition model of function-oriented information resources, represented as a table of complex structured data in non-first normal form; and a structural logic diagram of the method for creating the composition model of function-oriented information resources.
ISSN 2079-8156 (Print)
ISSN 2220-6426 (Online)