
Proceedings of the Institute for System Programming of the RAS (Proceedings of ISP RAS)

Vol 27, No 3 (2015)
9-28
Abstract
This report deals with the construction of a language service for extended support of the Fortran programming language in the Microsoft Visual Studio integrated development environment (IDE). A model and a general approach to language service construction are proposed. The report focuses on the organization of this model and on a proof of its operability, given using the example of the FRIS language service developed by the author. The material can be applied equally to the construction of language services for other programming languages and for other development environments.
29-46
Abstract
In comparison with Haskell type classes and C++ concepts, object-oriented languages such as C# and Java provide much more limited mechanisms of generic programming based on F-bounded polymorphism. The main pitfalls of C# generics are considered in this paper. To solve the problems of generics, extending the C# language with concepts that can be used alongside interfaces is proposed; a design and translation of concepts are outlined.
47-56
Abstract
In this paper we present the visual approach to parallel programming provided by Graph-Symbolic Programming Technology. The basics of this technology are described, as well as the advantages and disadvantages of visual parallel programming. The technology is being implemented as a PaaS cloud service that provides tools for the creation, validation and execution of parallel programs on cluster systems. The current state of this work is also presented.
57-72
Abstract
Requirements and code, in conventional software engineering wisdom, belong to entirely different worlds. Is it possible to unify these two worlds? A unified framework could help make software easier to change and reuse. To explore the feasibility of such an approach, the case study reported here takes a classic example from the requirements engineering literature and describes it using a programming language framework to express both domain and machine properties. The paper describes the solution, discusses its benefits and limitations, and assesses its scalability.
73-86
Abstract
This paper investigates a formal approach that supports a critically significant step in object-oriented analysis and software engineering. It is proposed to create an object class structure model based on Ontological Data Analysis. A review of pragmatically important attributes and of the basic stages of Ontological Data Analysis is given.
87-100
Abstract
Optimizing compilers make a significant contribution to the performance of modern computer systems. Among modern processors, VLIW architectures are the most compiler-dependent, since their performance relies on effective compile-time scheduling of multiple operations within a single clock cycle. This eventually complicates VLIW compilers: for example, the optimizing compiler developed for the Elbrus family of processors runs over 300 code optimization stages in its basic mode. Such a number of stages is needed to obtain decent performance, but it also makes compilation quite time-consuming. It turns out that the main reason for the increase in compilation time at high optimization levels is the application of some aggressive irreversible code transformations, which also lead to an unwanted increase in code size. In addition, there remains the problem of using a number of optimizations that are useful only in rare contexts. To reach the objectives, namely increasing performance while decreasing compilation time and code size, it is reasonable to choose an appropriate strategy at an early compilation stage according to procedure-specific characteristics. This paper discusses the problems of classifying procedures for this task and suggests several possible solutions.
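A minimal sketch of the idea of choosing a compilation strategy from procedure characteristics is given below; the metrics, thresholds and strategy names are hypothetical illustrations and are not taken from the Elbrus compiler itself.

    # Hypothetical procedure metrics and thresholds; not the actual Elbrus
    # compiler classification, only an illustration of early strategy selection.
    from dataclasses import dataclass

    @dataclass
    class ProcedureInfo:
        instruction_count: int   # static size of the procedure
        has_loops: bool          # whether the procedure contains loops
        hotness: float           # profile-based execution frequency estimate

    def choose_strategy(proc: ProcedureInfo) -> str:
        """Pick an optimization strategy before running the expensive pipeline."""
        if proc.hotness < 0.01:
            return "size"        # cold code: favor code size and compilation time
        if proc.has_loops and proc.instruction_count < 5000:
            return "aggressive"  # hot, loop-heavy code: the full optimization set
        return "balanced"        # default pipeline without irreversible transforms

    print(choose_strategy(ProcedureInfo(1200, True, 0.3)))   # -> aggressive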
101-114
Abstract
The paper presents a unified model for testing tools for object-oriented application development. Based on the available literature, shortcomings of existing work were identified, together with the following criteria that the resulting model should satisfy: 1) deep inheritance hierarchies; 2) presence of multiple inheritance hierarchies; 3) presence of abstract classes in the hierarchy; 4) presence of multiple (n-ary) associations; 5) presence of associations with attributes; 6) presence of a composition between classes; 7) presence of recursive associations; 8) presence of associations between classes belonging to the same inheritance hierarchy; 9) presence of association classes; 10) presence of associations between an association class and other classes; 11) presence of enumerations in the model. The unified test model is expressed in the unified graphical language of UML class diagrams, and the paper verifies that the resulting implementation complies with the selected criteria. Currently, applications are implemented using object-oriented programming languages together with relational databases; to overcome the object-relational mismatch, object-relational mapping patterns must be applied. The paper presents three methods used to represent a class hierarchy in a relational database and highlights the advantages and disadvantages of each method. To test the feasibility of the unified model, the SharpArchitect RAD Studio development environment was chosen, which is designed for object-oriented applications in C# implemented over a relational database. The paper presents the developed object model in the form of a class diagram showing the interfaces and inheritance relations, and a diagram containing all the tables and columns of the resulting database. The conclusion gives recommendations on directions for further work and identifies the need to combine the unified model with other approaches proposed by the authors.
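As a side note, one of the standard object-relational mapping patterns for class hierarchies is single-table inheritance; the minimal sketch below (with hypothetical table, class and column names, not those used in the paper) shows how a whole hierarchy can be stored in one table with a discriminator column.

    import sqlite3

    # Single-table inheritance: the whole class hierarchy goes into one table,
    # a discriminator column records the concrete class, and subclass-specific
    # columns stay NULL for rows of the other classes.
    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE person (
            id INTEGER PRIMARY KEY,
            kind TEXT NOT NULL,      -- discriminator: 'student' or 'teacher'
            name TEXT NOT NULL,
            group_no TEXT,           -- student-only column
            department TEXT          -- teacher-only column
        )""")
    conn.execute("INSERT INTO person (kind, name, group_no) VALUES (?, ?, ?)",
                 ("student", "Ivanov", "IU7-41"))
    conn.execute("INSERT INTO person (kind, name, department) VALUES (?, ?, ?)",
                 ("teacher", "Petrov", "Computer Science"))
    for row in conn.execute("SELECT kind, name FROM person"):
        print(row)   # the discriminator tells the application which class to build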
115-124
Abstract
The paper describes an approach to the concretization of symbolic test scenarios within an automated software verification and testing technology. Tools that automate the concretization process based on user-defined settings are presented.
125-138
Abstract
In this work, an approach to generating test programs for functional verification of memory management units of microprocessors is proposed. The approach is based on formal specification of memory access instructions, namely load and store instructions, and of memory devices such as cache units and address translation buffers. The use of formal specifications helps automate the development of test program generators and makes verification systematic due to a clear definition of testing goals. In the suggested approach, test programs are constructed using combinatorial techniques: stimuli (sequences of loads and stores) are created by enumerating all feasible combinations of instructions, situations (instruction execution paths) and dependencies (sets of conflicts between instructions). Importantly, test situations and dependencies are automatically extracted from the specifications. The approach has been used in a number of industrial projects and has helped discover critical bugs in memory management mechanisms.
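A schematic sketch of the combinatorial construction of stimuli is shown below; the instruction, situation and dependency names are hypothetical placeholders, whereas in the approach itself they are extracted automatically from the formal specifications.

    import itertools

    instructions = ["LOAD", "STORE"]
    situations   = ["L1_HIT", "L1_MISS", "TLB_MISS"]         # execution paths
    dependencies = ["NONE", "SAME_CACHE_LINE", "SAME_PAGE"]  # inter-instruction conflicts

    def feasible(instruction, situation, dependency):
        # Placeholder feasibility check; a real generator consults the formal
        # specification of the memory subsystem here.
        return True

    stimuli = [combo for combo in
               itertools.product(instructions, situations, dependencies)
               if feasible(*combo)]
    # Test programs are sequences of such tuples; here we only count the
    # single-instruction building blocks.
    print(len(stimuli), "feasible instruction/situation/dependency combinations")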
139-148
Abstract
The paper describes a method of direct memory access (DMA) subsystem verification used for the Elbrus series of microprocessors. A peripheral controller imitator has been developed in order to reduce the verification overhead. The imitator model has been included in the functional machine simulator. A pseudorandom test generator for verification of the DMA subsystem has been built on top of this simulator.
149-160
Abstract
The paper describes a method for constructing test oracles for memory subsystems of multicore microprocessors. The method is based on using nondeterministic reference models of the systems under test. The key idea of the approach is on-the-fly determinization of the model behavior using reactions from the system. Every time a nondeterministic choice appears in the reference model, additional model instances are created and launched, each simulating a possible variant of the system behavior. When the testbench receives a reaction from the system under test, it terminates all model instances whose behavior is inconsistent with that reaction. An error is detected if no active instance of the reference model remains. The suggested method has been used in the verification of the L3 cache of the Elbrus-8C microprocessor and has helped find three bugs.
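A minimal sketch of the on-the-fly determinization idea, with a toy nondeterministic model (a read may legally hit or miss in the cache); the function and state names are illustrative and do not reflect the actual testbench interface.

    def possible_reactions(state, stimulus):
        # Toy nondeterministic reference model: a read may be answered either
        # from the cache ("hit") or from memory ("miss"); both are legal.
        return ["hit", "miss"] if stimulus == "read" else ["ack"]

    def apply_stimulus(instances, stimulus):
        """Fork every model instance into one copy per possible reaction."""
        return [(state + [reaction], reaction)
                for state in instances
                for reaction in possible_reactions(state, stimulus)]

    def observe_reaction(forked, dut_reaction):
        """Terminate instances whose predicted reaction disagrees with the DUT."""
        alive = [state for state, predicted in forked if predicted == dut_reaction]
        if not alive:
            raise AssertionError("error: no active reference-model instance left")
        return alive

    instances = [[]]                               # single initial instance
    forked = apply_stimulus(instances, "read")     # forks into 'hit' and 'miss'
    instances = observe_reaction(forked, "miss")   # the DUT reaction picks a branch
    print(instances)                               # -> [['miss']]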
161-182
Abstract
Model-based test generation is widely used in functional verification of hardware designs. The extended finite state machine (EFSM) is known to be a powerful formalism for modelling digital hardware. As opposed to conventional finite state machines, EFSM models separate the datapath and the control, which makes it possible to represent systems in a more compact way and, in a sense, reduces the risk of state explosion during verification. In this paper, a new EFSM-based test generation approach is proposed and compared with existing solutions. It combines a random walk on the state graph with a directed search for feasible paths. The first phase covers “easy-to-fire” transitions. The second one is aimed at “hard-to-fire” cases: the algorithm tries to build a path that enables a given transition, which is done by analyzing control and data dependencies and applying symbolic execution techniques. Experiments show that the suggested approach provides better transition coverage with shorter test sequences compared to known methods and achieves a high level of code coverage.
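A simplified sketch of the random-walk phase over a toy EFSM is shown below; the machine, its guards and the coverage bookkeeping are made up for illustration, and the directed-search phase (symbolic execution of guards for hard-to-fire transitions) is not modeled.

    import random

    # state -> list of (guard over counter x, next state, transition name)
    efsm = {
        "IDLE": [(lambda x: True,   "BUSY", "start")],
        "BUSY": [(lambda x: x < 3,  "BUSY", "work"),
                 (lambda x: x >= 3, "IDLE", "finish")],
    }

    def random_walk(steps=20, seed=0):
        random.seed(seed)
        state, x, covered = "IDLE", 0, set()
        for _ in range(steps):
            enabled = [t for t in efsm[state] if t[0](x)]   # fireable transitions
            if not enabled:
                break
            _, state, name = random.choice(enabled)
            covered.add(name)
            x = x + 1 if state == "BUSY" else 0             # toy datapath update
        return covered

    print(random_walk())   # transitions covered by the random phase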
183-196
Abstract
This article analyzes existing methods of verification of cache coherence protocols of scalable systems. Based on the research literature, the paper describes a method of formal parameterized verification of safety properties of cache coherence protocols. The paper proposes a design of a verification system for cache coherence protocols. The article evaluates the method by developing and examining the corresponding Promela model of the German cache coherence protocol and discusses the extensions and automation of the method needed to adapt it to the verification challenges of the Elbrus microprocessors.
197-218
Abstract
The language of message sequence charts (MSC) is a popular scenario-based specification language used to describe the interaction of components in distributed systems. However, methods for the validation of MSC diagrams are underdeveloped. This paper describes a method for translating MSC diagrams into coloured Petri nets (CPN). The method is applied to the verification of properties of these diagrams. The considered set of diagram elements is extended with elements of UML sequence diagrams and compositional MSC diagrams. The properties of the resulting CPN are analyzed and verified using the well-known CPN Tools system and a CPN verifier based on the SPIN tool. The application of the method is illustrated with an example.
219-236
Abstract
Process models and graphs are commonly used for the modeling and visualization of processes. They may represent sets of objects or events linked with each other in some way. The wide use of models in such languages creates a need for tools for creating and editing them. This paper describes Carassius, a model editor that supports classical graphs, Petri nets, finite-state machines and systems thereof. Additionally, the tool provides features such as simulation of Petri nets and import and export of models in different storage formats. Carassius is a modular tool that can be extended with, for example, new formalisms. The paper also gives a detailed description of a couple of layout algorithms that can be used for visualizing Petri nets and graphs. Carassius may be useful for educational and research purposes because of its simplicity, range of features and variety of supported notations.
237-254
Abstract
This paper is dedicated to a tool whose aim is to facilitate process mining experiments and the evaluation of repair algorithms. Process mining is a set of approaches which provides solutions and algorithms for the discovery and analysis of business process models based on event logs. Process mining has three main areas of interest: model discovery, conformance checking and enhancement. The paper focuses on the latter. The goal of enhancement is to refine an existing process model in order to make it conform to event logs. The particular enhancement approach considered in the paper is called decomposed model repair. Although the paper is devoted to the implementation part of the approach, the theoretical preliminaries essential for understanding the domain are provided. Moreover, a typical use case of the tool is shown, as well as guides to extending the tool and enriching it with extra algorithms and functionality. Finally, other solutions which could be used for implementing repair schemes are considered, and the pros and cons of using them are discussed.
255-266
Abstract
Comparing business process models is one of the most significant challenges for business and systems analysts. The complexity of the problem is explained by the fact that there is a lack of tools that can be used for comparing business process models, and there is no universally accepted standard for modeling them. EPC, YAWL, BPEL, XPDL and BPMN are only a small fraction of the available notations that have found acceptance among developers. Every process modeling standard has its advantages and disadvantages, but almost all of them comprise an XML schema which defines process serialization rules. Because XML naturally represents the hierarchical and reference structure of business process models, these models can be compared using their XML representations. In this paper we propose a generic comparison approach applicable to XML representations of business process models. Using this approach we have developed a tool which currently supports BPMN 2.0 [1] (one of the most popular business process modeling notations) but can be extended to support other business process modeling standards.
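The sketch below illustrates the general idea of comparing process models through their XML serializations by walking two trees in parallel; the BPMN-like fragments are toy examples, and the actual tool performs a more elaborate, reference-aware matching.

    import xml.etree.ElementTree as ET

    def diff(a, b, path=""):
        """Yield textual differences between two XML elements, recursively."""
        here = f"{path}/{a.tag}"
        if a.tag != b.tag:
            yield f"{path}: element {a.tag} != {b.tag}"
            return
        if a.attrib != b.attrib:
            yield f"{here}: attributes {a.attrib} != {b.attrib}"
        for child_a, child_b in zip(a, b):
            yield from diff(child_a, child_b, here)
        if len(a) != len(b):
            yield f"{here}: different number of child elements"

    m1 = ET.fromstring('<process id="p1"><task name="Check order"/></process>')
    m2 = ET.fromstring('<process id="p1"><task name="Check invoice"/></process>')
    for d in diff(m1, m2):
        print(d)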
267-278
Abstract
This paper presents further development of the Sevigator hypervisor-based security system. The original design of Sevigator confines user applications to a separate virtual machine that has no network interfaces. For trusted applications, Sevigator intercepts network-related system calls and routes them to a dedicated virtual machine that services those calls. This design allows Sevigator to protect networking from malicious applications, including high-level intruders residing in the kernel. Modern microkernel-based hypervisors have opened the door to a redesign of Sevigator. Such hypervisors are small operating systems by nature, where the management of virtual machines, as well as most hardware operations, is isolated in processes with a low privilege level, so compromising such a process does not compromise the whole hypervisor. In this paper we present an experimental design of Sevigator based on the NOVA hypervisor, where system calls of trusted applications are serviced by a dedicated process in the hypervisor rather than a separate VM. The experiment shows about a 25% performance gain due to the reduced number of context switches.
279-290
Abstract
The paper describes a private messaging service with client-side encryption and decryption of data. The focus is on direct client-to-client connections. The article provides the algorithm of the programs' operation. The authors describe methods of protection against some network attacks and an experiment with the prototype. Additionally, we consider some potential external threats that can compromise the confidentiality of communication data.
291-302
Abstract
The paper describes a new approach to filtering website messages using a combined classifier. Information security standards for Internet resources require user data protection; however, the increasing volume of spam messages in interactive sections of websites poses a special problem. Unlike many email filtering solutions, the proposed approach is based on an effective combination of the Bayes and Fisher methods, which allows us to build an accurate and stable spam filter. In this paper we consider the organization of the combined classifier according to specified optimization criteria based on statistical methods, probability calculations and decision rules.
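A minimal sketch of the Fisher component of such a combined classifier is shown below: hypothetical per-token probabilities (as a naive Bayes step might produce) are combined via Fisher's method using an inverse chi-square function. It illustrates the general technique only, not the filter developed in the paper.

    import math

    def inv_chi2(chi, df):
        """P(chi-square variable with df degrees of freedom >= chi), df even."""
        m = chi / 2.0
        term = prob = math.exp(-m)
        for i in range(1, df // 2):
            term *= m / i
            prob += term
        return min(prob, 1.0)

    def fisher_combine(probs):
        """Combine p-values; a result near 1 strongly rejects the 'ham' hypothesis."""
        statistic = -2.0 * sum(math.log(p) for p in probs)
        return 1.0 - inv_chi2(statistic, 2 * len(probs))

    ham_probs = [0.02, 0.10, 0.05, 0.30]   # hypothetical P(ham | token) values
    print(f"spam indicator: {fisher_combine(ham_probs):.3f}")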
303-314
Abstract
The paper presents a plugin for the Wireshark traffic analyzer that calculates the moments of the random variable describing the interval between packets of incoming traffic. The article also presents an analytical solution for the average waiting time in a queueing system of type H2/M/1, where H2 is the second-order hyperexponential distribution of the time intervals of the input flow. The final result is obtained by solving Lindley's integral equation using the method of spectral decomposition. It is shown that in this case the distribution of the intervals between input flow arrivals can be approximated at the level of their first three moments. The joint use of these results allows incoming traffic to be fully analyzed by queueing methods.
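A small sketch of the measurement side is given below: the first three raw moments of the inter-packet interval are computed from captured timestamps, which is the information needed to fit a three-parameter H2 approximation of the input flow. The timestamps are made-up numbers, not a real capture.

    # Made-up packet arrival timestamps, in seconds.
    timestamps = [0.000, 0.012, 0.031, 0.035, 0.090, 0.094, 0.151]

    intervals = [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]
    moments = [sum(x ** k for x in intervals) / len(intervals) for k in (1, 2, 3)]

    print("mean interval    :", moments[0])
    print("second raw moment:", moments[1])
    print("third raw moment :", moments[2])
    # An H2 distribution has three free parameters (p, lambda1, lambda2), so it
    # can be matched against these three moments when approximating the input flow.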
315-328
Abstract
Existing approaches to the use of cloud computing resources are not efficient. Modern multimedia services require significant computing power, which is not always available. In this paper, we introduce an approach that allows more efficient use of limited resources by dynamically scheduling the distribution of data flows at several levels: between physical computing nodes, virtual machines, and multimedia applications.
329-342
Abstract
When information is exchanged between departments, the problem of personal identification arises. It concerns people whose personal details partially or completely do not coincide. In the presented work, a new method and algorithm for identifying such people are elaborated. The method is based on fuzzy comparison and on the Levenshtein metric. The algorithm, developed in the form of a Data Mining process, allows people to be identified quickly based on previously performed searches. A built-in system of personal detail priorities makes it possible to identify a person in such cases as a change of surname or name, relocation, mistakes from manual data input, and partially missing personal details. The algorithm was implemented in PL/SQL in the Oracle 11g database management system.
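A minimal sketch of fuzzy record comparison with the Levenshtein metric and per-field priorities (weights) is shown below; the field names, weights and threshold are hypothetical, and the actual algorithm in the paper is implemented in PL/SQL.

    def levenshtein(a: str, b: str) -> int:
        """Classic edit distance with a two-row dynamic programming table."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                  # deletion
                               cur[j - 1] + 1,               # insertion
                               prev[j - 1] + (ca != cb)))    # substitution
            prev = cur
        return prev[-1]

    def similarity(a: str, b: str) -> float:
        return 1.0 if not (a or b) else 1.0 - levenshtein(a, b) / max(len(a), len(b))

    WEIGHTS = {"surname": 0.4, "name": 0.3, "birth_date": 0.3}   # field priorities

    def same_person(rec1: dict, rec2: dict, threshold: float = 0.8) -> bool:
        score = sum(w * similarity(rec1.get(f, ""), rec2.get(f, ""))
                    for f, w in WEIGHTS.items())
        return score >= threshold

    print(same_person(
        {"surname": "Ivanova", "name": "Anna", "birth_date": "1985-03-12"},
        {"surname": "Ivanov",  "name": "Anna", "birth_date": "1985-03-12"}))  # True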
343-350
Abstract
The article reviews the tools used in a new type of object DBMS to increase the efficiency of data access. Some features of the object DIM DBMS are described, based on the use of classes of object relations as object sets (inheritance, inclusion, interaction and history) and object relations (inheritance, internal inheritance, inclusion, internal inclusion, interaction and history). The subject domain is described by means of an object and dynamic data model (OD-model), and the completeness of DIM DBMS for any OD-model is justified. The ODQL object query language is described, which combines precise descriptive power with simplicity of use thanks to the introduction of two query levels. To determine the most effective way of accessing DIM DBMS, various query technologies for this environment are studied, and mechanisms for user work with it are developed and implemented. The software tool “Generator of ODQL queries” is considered, which simplifies the creation of queries to DIM DBMS without requiring the user to know the syntax of a modern query language. Problems of converting data from existing DBMSs into DIM DBMS are also considered.
351-364
Abstract
Crowdsourcing is an established approach to producing and analyzing data that can be represented as a human-assisted computation system. This paper presents a crowdsourcing engine that makes it possible to run a highly customizable hosted crowdsourcing platform controlling the entire annotation process, including such elements as task allocation, worker ranking and result aggregation. The approach and its implementation are described, and the conducted experiment shows promising preliminary results.
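As a toy illustration of one of the listed elements, result aggregation, the sketch below applies simple majority voting to worker answers; the engine itself supports configurable aggregation, and this is not its actual implementation.

    from collections import Counter

    # Hypothetical worker answers collected for two annotation tasks.
    answers = {
        "task-1": ["cat", "cat", "dog"],
        "task-2": ["spam", "ham", "spam", "spam"],
    }

    # Majority voting: the most frequent answer wins for each task.
    aggregated = {task: Counter(votes).most_common(1)[0][0]
                  for task, votes in answers.items()}
    print(aggregated)   # -> {'task-1': 'cat', 'task-2': 'spam'}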
365-378
Abstract
The article discusses the architecture and structural scheme of the distance learning system «3Ducation», as well as the purpose and capabilities of all its constituent software components, and describes its main features. The system «3Ducation» is built on two pillars: the game-based approach and virtual world technologies. Virtual reality technology makes it possible to move the learning process into three-dimensional space, making the learning space more interesting and the educational process more fun. The game-based approach, through which active methods of pedagogical work are implemented, engages students in the learning process and constantly maintains and even increases their interest in learning.
379-390
Abstract
The report discusses the optimization of an image similarity metric computation method for three-dimensional vector video using general-purpose computing on graphics processing units (GPGPU). The use of the stream processors of graphics accelerators and the Compute Unified Device Architecture (CUDA) platform allows a significant performance gain in comparison to calculations on general-purpose processors when solving problems of computer vision and image similarity determination. The performance of the GPGPU metric computation is measured and analyzed.
391-406
Abstract
This paper addresses the problem of creating computer music without human intervention in the operation of the generating algorithm. It contains a review of existing solutions, a description of their key features, and an attempt to model a composer's functions on a computer. Modeling opuses on the basis of unifying musical rhythm and melodic line makes it possible to produce computer music with given parameters of composition. This fresh approach leads to results that differ from those of its predecessors and suggests new directions for further research and development in the sphere of computer art.


This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2079-8156 (Print)
ISSN 2220-6426 (Online)