Vol 28, No 3 (2016)
7-20
Abstract
The concept of Z-numbers, introduced by Zadeh in 2011, is widely discussed nowadays due to its aptitude for dealing with the nonlinearities and uncertainties that are common in real life. It was a large step forward in representing fuzzy logic; however, these numbers created much harder problems of how to compute with them and how to aggregate multiple numbers of this type. Z-numbers have significant potential for describing the uncertainty of human knowledge, because both an expert assessment and a Z-number consist of a restraint on the measured value and the reliability of that restraint. In this paper, a method of converting an expert opinion into a Z-number according to a set of specific questions is proposed. In addition, an approach to Z-number aggregation is introduced. Finally, the proposed methods are demonstrated on a real example. The topicality of the research lies in developing a new algorithm, and software based on it, that could help people make decisions under messy uncertainty with many parameters and factors that are themselves defined imprecisely. In this work, we present research aimed at finding the most efficient methods to operate on Z-numbers (aggregate, add, divide). Firstly, it is important to identify all existing methods of aggregating Z-numbers. Secondly, new methods of working with them need to be invented. The most interesting techniques should be detailed and summarized. A program has also been developed to model a real-world decision-making task.
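As a rough illustration of what operating on Z-numbers can look like in code, the sketch below represents a Z-number as a pair of triangular fuzzy numbers and aggregates expert opinions by naive component-wise averaging; both the triangular representation and the averaging rule are assumptions for illustration, not the aggregation method proposed in the paper.

```python
# A Z-number Z = (A, B): A restrains the value, B grades its reliability.
from dataclasses import dataclass

@dataclass
class Triangular:
    a: float  # left support bound
    m: float  # peak (membership = 1)
    b: float  # right support bound

@dataclass
class ZNumber:
    restriction: Triangular  # fuzzy restraint on the measured value
    reliability: Triangular  # fuzzy confidence in that restraint

def aggregate(zs):
    """Naive component-wise averaging of expert opinions (illustration only)."""
    n = len(zs)
    def avg(parts):
        return Triangular(sum(p.a for p in parts) / n,
                          sum(p.m for p in parts) / n,
                          sum(p.b for p in parts) / n)
    return ZNumber(avg([z.restriction for z in zs]),
                   avg([z.reliability for z in zs]))

# Two experts: "about 5, quite sure" and "about 7, fairly sure".
z1 = ZNumber(Triangular(4, 5, 6), Triangular(0.7, 0.8, 0.9))
z2 = ZNumber(Triangular(6, 7, 8), Triangular(0.5, 0.6, 0.7))
print(aggregate([z1, z2]))
```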
21-34
Abstract
Privacy enhancing technologies (PETs) are ubiquitous nowadays. They are beneficial for a wide range of users: businesses, journalists, bloggers, etc. However, PETs are not always used for legal activity. There are a lot of anonymous networks and technologies which grant anonymous access to digital resources. The most popular anonymous network nowadays is Tor. Tor is also a valuable tool for hackers and for drug and gun dealers. The present paper is focused on deanonymization of Tor users with out-of-the-box technologies and a basic machine learning algorithm. The aim of the work is to show that it is possible to deanonymize a small fraction of users without a lot of resources or state-of-the-art machine learning techniques. The first stage of the research was an investigation of contemporary anonymous networks. The second stage was an investigation of deanonymization techniques: traffic analysis, timing attacks, and attacks involving autonomous systems. For our system, we used a website fingerprinting attack, because it requires the smallest amount of resources for a successful implementation. Finally, an experiment was held with 5 persons in one room and one corrupted entry Tor relay. We achieved quite good accuracy (70%) in classifying the webpage a user visits, using the set of resources provided by a global cybersecurity company. Deanonymization is a very important task from the point of view of national security.
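The following sketch shows the general shape of a website fingerprinting classifier of the kind described: per-trace features are extracted from captured packet sequences and fed to a standard classifier. The specific feature set and the random forest model are assumptions for illustration; the paper's exact pipeline is not reproduced here.

```python
# Feature extraction and classification for website fingerprinting.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def extract_features(trace):
    """trace: list of (timestamp, signed_size); the sign encodes direction."""
    sizes = [s for _, s in trace]
    incoming = [s for s in sizes if s < 0]
    outgoing = [s for s in sizes if s > 0]
    duration = trace[-1][0] - trace[0][0] if len(trace) > 1 else 0.0
    return [len(trace), len(incoming), len(outgoing),
            sum(incoming), sum(outgoing), duration]

def train_fingerprinter(traces, labels):
    """traces: captured at a corrupted entry relay; labels: visited pages."""
    X = [extract_features(t) for t in traces]
    X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3)
    clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
    return clf, clf.score(X_test, y_test)  # held-out classification accuracy
```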
35-50
Abstract
The article describes two approaches to controlling access rights: one based on the role-based approach (RBAC) and one based on access control lists (ACL). First, an overview of modern approaches to information security and to controlling user access rights in applications with different architectures is provided. After that, two of the authors' methods of data protection are described. The first approach was developed for the protection of object-oriented applications; the second was developed for object-attribute applications used to operate network (graph) databases and knowledge bases. The focus of attention is the first author's approach, based on describing access rights for classes, class attributes and objects that satisfy a certain criterion. The approach is implemented through a class hierarchy whose composition and structure are described in detail in the article. The article gives examples of specific information systems developed by the first author: an information system for scientific conferences that has been repeatedly used at the "Object Systems" conference (objectsystems.ru) and an information system for a beauty salon. The further focus is on the second approach, which required the development of a new technique for securing network (graph) information structures. The approach developed by the second author fully duplicates the functionality of the first. In particular, it copies permissions when a network data structure is copied, just as properties are transferred from a parent class to a child class in the object-oriented paradigm; the article gives a detailed description of this mechanism. For access control, the method involves the use of a special virtual device. Information about access rights is linked to a node of the network (graph) whenever access to it needs to be restricted.
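A minimal sketch of the kind of class- and attribute-level permission check described for the first approach might look as follows; the fallback from attribute-level to class-level rules is an illustrative assumption, not the article's exact mechanism.

```python
# ACL with rules at class and attribute granularity (illustration only).
class ACL:
    def __init__(self):
        self._rules = {}  # (role, resource) -> set of permitted operations

    def grant(self, role, resource, operation):
        self._rules.setdefault((role, resource), set()).add(operation)

    def is_allowed(self, role, resource, operation):
        # Fall back from attribute-level to class-level rules,
        # e.g. "Client.phone" -> "Client".
        while True:
            if operation in self._rules.get((role, resource), set()):
                return True
            if "." not in resource:
                return False
            resource = resource.rsplit(".", 1)[0]

acl = ACL()
acl.grant("manager", "Client", "read")
acl.grant("admin", "Client.phone", "write")
print(acl.is_allowed("manager", "Client.phone", "read"))   # True (inherited)
print(acl.is_allowed("manager", "Client.phone", "write"))  # False
```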
51-64
Abstract
The article analyses the problem of data protection while transmitting messages and looks into possible solutions. The central part of the article describes an algorithm for data encryption and digital signing that is keyed to the starting time of the session. In the algorithm, a symmetric session key is generated for each pair of subscribers, and the data are encrypted with this key. In its turn, the session key is encrypted with the recipient's public key under an asymmetric encryption algorithm. Then the encrypted session key, together with the encrypted message, is sent to the recipient. The recipient employs the same asymmetric encryption algorithm and his/her secret key to decrypt the session key, which is then used to decrypt the received message. Thus, new symmetric keys are generated for every session according to its starting time, which enables a high speed of encryption together with public transmission of the temporary encryption keys. Besides, the article contains worked examples of the Diffie-Hellman protocol and the MD5 hash function, which are used for encrypting the generated temporary keys and for delivering a common private key to both clients. Following the suggested algorithm, a prototype for key and signature generation has been created and tested. The article illustrates the stages of the Diffie-Hellman and MD5 protocols. The prototype was tested on a computer and two phones (produced in 2013 and 2015).
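The generic hybrid scheme the abstract describes (a fresh symmetric session key, itself encrypted with the recipient's public key) can be sketched as follows. This uses RSA-OAEP and Fernet from the Python cryptography library purely for illustration; it does not reproduce the paper's time-based key derivation or its Diffie-Hellman/MD5 machinery.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Sender: generate a fresh symmetric session key, encrypt the message with it,
# then encrypt the session key with the recipient's public RSA key.
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"hello")
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
encrypted_key = recipient_key.public_key().encrypt(session_key, oaep)

# Recipient: recover the session key with the private RSA key, then the message.
recovered_key = recipient_key.decrypt(encrypted_key, oaep)
print(Fernet(recovered_key).decrypt(ciphertext))  # b'hello'
```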
65-84
Abstract
The Nested Petri net (NP-net) formalism is an extension of the coloured Petri net formalism that uses Petri nets as tokens. The formalism allows creating comprehensive models of multi-agent systems and simulating, verifying and analyzing them in a formal and rigorous way. Multi-agent systems are found in many different fields, from safety-critical systems to everyday networks of personal computational devices, and their presence in the real world is increasing with the growing number of mobile computational devices. While several methods and tools have been developed for modelling and analysis of NP-net models, the synthesis part of multi-agent systems development via NP-nets is still under active development. The widely used method of automatically generating target system code from designed and verified formal models ensures obtaining correct systems from correct models. In this paper, we demonstrate how the Nested Petri net formalism can be applied to model search-and-rescue coordination systems and to automatically generate an implementation in the form of executable code for event-driven systems based on the Telegram platform. We augment the NP-net models with Action Language annotations, which enable us to link transition firings on the model level to Telegram Bot API calls on the implementation level. The suggested approach is illustrated by an example annotated model of a search-and-rescue coordination system.
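A sketch of the code-generation idea: an annotated transition firing on the model level is mapped to a Telegram Bot API call on the implementation level. The annotation format and the notify_rescuer transition below are hypothetical; sendMessage is a real Bot API method.

```python
# Mapping a model-level transition firing to a Telegram Bot API call.
import requests

BOT_TOKEN = "<token>"  # placeholder, supplied at deployment time

def send_message(chat_id, text):
    # sendMessage is a standard Telegram Bot API method.
    requests.post(f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
                  json={"chat_id": chat_id, "text": text})

# Hypothetical action annotations keyed by transition name.
ACTIONS = {
    "notify_rescuer": lambda b: send_message(b["chat_id"],
                                             f"New search area: {b['area']}"),
}

def fire_transition(name, binding):
    """Called when the NP-net transition `name` fires with variable `binding`."""
    if name in ACTIONS:
        ACTIONS[name](binding)
```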
85-102
Abstract
In this paper, we consider an approach to reverse engineering of UML sequence diagrams from event logs of information systems with a service-oriented architecture (SOA). UML sequence diagrams are graphical models quite suitable for representing interactions in heterogeneous component systems; in particular, the latter include the increasingly popular SOA-based information systems. The approach deals with execution traces of SOA systems represented in the form of event logs. Event logs are created by almost all modern information systems, primarily for debugging purposes. In contrast with conventional reverse engineering techniques that require source code for analysis, our approach to inferring UML sequence diagrams deals only with available logs and some heuristic knowledge. Our method consists of several stages of building UML sequence diagrams according to different perspectives set by the analyst. These include mapping log attributes to diagram elements, thereby determining the level of abstraction, grouping several components of a diagram, and building hierarchical diagrams. We propose to group some of the diagram components (messages and lifelines) based on regular expressions and to build hierarchical diagrams using nested fragments. The approach is evaluated in a software prototype implemented as a Microsoft Visio add-in. The add-in builds a UML sequence diagram from a given event log according to a set of customizable settings.
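The regex-based grouping step might be sketched as follows: several component names from the log collapse onto one lifeline, raising the diagram's level of abstraction. The rules and event attribute names are illustrative assumptions, not the add-in's actual settings format.

```python
# Collapsing component names onto grouped lifelines via regular expressions.
import re

grouping_rules = [
    (re.compile(r"^AuthService.*"), "Authentication"),
    (re.compile(r"^Billing.*"), "Billing"),
]

def lifeline_for(component):
    for pattern, group in grouping_rules:
        if pattern.match(component):
            return group
    return component  # ungrouped components keep their own lifeline

# Each log event becomes a message between the sender's and receiver's lifelines.
log = [{"from": "AuthServiceV2", "to": "BillingGateway", "op": "charge"}]
for e in log:
    print(f"{lifeline_for(e['from'])} -> {lifeline_for(e['to'])}: {e['op']}")
```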
103-122
Abstract
Process mining is a relatively new research field offering methods for the analysis and improvement of business processes based on studying their execution history (event logs). Conformance checking is one of the main sub-fields of process mining. Conformance checking algorithms aim to assess how well a given process model, typically represented by a Petri net, and a corresponding event log fit each other. Alignment-based conformance checking is the most advanced and frequently used type of such algorithms. This paper deals with the high computational complexity of the alignment-based conformance checking algorithm. Currently, alignment-based conformance checking is quite inefficient in terms of memory consumption and computation time. Solving this problem is highly important for checking conformance between real-life business process models and event logs, which can be quite problematic with existing approaches. MapReduce is a popular model of parallel computing which allows for simple implementation of efficient and scalable distributed calculations. In this paper, a MapReduce version of the alignment-based conformance checking algorithm is described and evaluated. We show that conformance checking can be distributed using MapReduce and can benefit from it. Moreover, it is demonstrated that computation time scales linearly with the growth of event log size.
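The distribution idea can be sketched as a map phase that aligns each trace against the model independently and a reduce phase that aggregates the per-trace costs. The placeholder cost function below merely counts non-replayable events; it stands in for the actual (expensive) alignment algorithm.

```python
# A map/reduce skeleton for distributing conformance checking over traces.
def alignment_cost(trace, model_activities):
    # Placeholder: count events the model cannot replay at all ("log moves").
    return sum(1 for event in trace if event not in model_activities)

def map_phase(event_log, model_activities):
    # Each trace is aligned independently, so this phase parallelizes trivially.
    return [(tuple(trace), alignment_cost(trace, model_activities))
            for trace in event_log]

def reduce_phase(pairs):
    costs = [cost for _, cost in pairs]
    return sum(costs) / len(costs) if costs else 0.0  # average cost per trace

log = [["a", "b", "c"], ["a", "x", "c"]]
print(reduce_phase(map_phase(log, {"a", "b", "c"})))  # 0.5
```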
123-144
Abstract
The derivation of checking sequences for Finite State Machines (FSMs) has a long history. There are many papers devoted to deriving a checking sequence that can distinguish a complete deterministic specification FSM from any non-equivalent FSM with the same number of states. To the best of our knowledge, for nondeterministic FSMs the topic appeared only recently; the authors started with preset checking sequences for FSMs where the initial state is known but the reset is very expensive. In this paper, a technique is proposed for deriving an adaptive checking sequence for a complete nondeterministic finite state machine with respect to the reduction relation. The main contribution of the paper is the use of an (adaptive) distinguishing test case instead of a separating sequence. Such a test case is usually shorter than a separating sequence (when the latter exists) and can exist even when there is no separating sequence. We also discuss the possibility of using adaptive transfer sequences instead of deterministic ones, which also allows extending the set of FSMs for which the strategy can be used and reducing the length of a checking sequence. The application of the proposed strategy to partial, possibly nondeterministic FSMs is briefly discussed.
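Conceptually, an adaptive test case is a tree rather than a fixed sequence: the next input depends on the output observed so far. The sketch below illustrates applying such a tree to an FSM; the interface and the tiny example are illustrative assumptions, not the paper's construction.

```python
class FSM:
    """Mealy machine: transitions[(state, input)] = (output, next_state)."""
    def __init__(self, transitions, state):
        self.t, self.s = transitions, state
    def step(self, x):
        y, self.s = self.t[(self.s, x)]
        return y

def apply_adaptive_test(fsm, node):
    """node: (input, {output: subtree}) with string verdicts at the leaves."""
    observed = []
    while not isinstance(node, str):
        x, branches = node
        y = fsm.step(x)                  # apply input, observe output
        observed.append((x, y))
        node = branches.get(y, "fail")   # unexpected output -> not a reduction
    return node, observed

# After 'a', the observed output decides whether 'b' still needs to be applied.
tree = ("a", {0: ("b", {1: "pass", 0: "fail"}), 1: "pass"})
m = FSM({("s0", "a"): (0, "s1"), ("s1", "b"): (1, "s0")}, "s0")
print(apply_adaptive_test(m, tree))      # ('pass', [('a', 0), ('b', 1)])
```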
145-160
Abstract
In this article, an approach to detailing verified test scenarios for a software system under development without losing the model's semantics is proposed. The existing problem of generating test cases for real software systems is solved by using a multi-level paradigm to obtain the real system's signals, transactions and states. Accordingly, the process is divided into several steps. Initial abstract traces (test cases) with symbolic values are generated from the verified behavioral model of the software product. On the next step, called concretization, these symbolic values in the test scenarios are replaced with concrete ones. The resulting concrete traces are then used as input for the next step, data structure conversion. This step is needed because concrete traces do not contain all the information required for communicating with the developed software and are represented differently, with different data structures. After concrete test scenarios are detailed, they can be used for the generation of executable test cases for information and control systems. In this paper, a software tool for detailing test scenarios is suggested. It consists of several modules: a Lowering editor that allows the user to create rules for detailing a signal, a Signals editor used to define complex data structures inside a signal, and a Templates editor that eases work with similar signals. The process of translating abstract data structures into the detailed data structures used in the system implementation is presented with examples.
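The concretization step can be sketched as rule-driven substitution of symbolic values in an abstract trace. The rule names and trace format below are illustrative, not the tool's actual formats.

```python
# Replacing symbolic values in abstract traces with concrete ones.
import random

concretization_rules = {
    "SYM_ADDR": lambda: 0x1000 + 4 * random.randrange(256),  # aligned address
    "SYM_ID": lambda: random.randrange(1, 100),
}

def concretize(abstract_trace):
    return [{k: (concretization_rules[v]() if v in concretization_rules else v)
             for k, v in step.items()}
            for step in abstract_trace]

abstract = [{"signal": "write", "addr": "SYM_ADDR", "id": "SYM_ID"}]
print(concretize(abstract))  # e.g. [{'signal': 'write', 'addr': 4432, 'id': 17}]
```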
161-172
Abstract
The paper presents an overview of approaches used in verifying the correctness of multicore microprocessor caches. Common properties of memory subsystem devices and those specific to caches are described. We describe the method of supporting memory consistency in a system using a cache coherence protocol. Approaches to designing a test system, generating valid stimuli and checking the correctness of the device under verification (DUV) are introduced. Adjustments to the approach for supporting generation of out-of-order test stimuli are provided. Methods of test system development on different abstraction levels are presented. We present the basic approach to checking device behavior: a functional reference model is implemented, the reactions of this model are compared to the device reactions, and a mismatch denotes an error. Methods for verification of functionally nondeterministic devices are described: the "gray box" method, based on eliminating nondeterministic behavior using internal interfaces of the implementation, and a novel approach based on dynamic refinement of the behavioral model using device reactions. We also provide a way to augment a stimulus generator with assertions to further increase the error detection capabilities of the test system. Additionally, we describe how test systems for devices that support out-of-order execution can be designed. We present an approach that simplifies checking of nondeterministic devices with out-of-order execution of requests by using a reference order of instructions. In conclusion, we provide a case study of using these approaches to verify caches of microprocessors with the "Elbrus" and "SPARC-V9" architectures.
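The basic reference-model checking loop can be sketched as follows: the same stimuli are fed to a functional reference model and to the device, and every mismatch is reported. The interfaces and the toy models are assumptions for illustration only.

```python
# Comparing DUV reactions against a functional reference model.
def check(stimuli, duv_react, ref_react):
    errors = []
    for i, stimulus in enumerate(stimuli):
        expected = ref_react(stimulus)
        actual = duv_react(stimulus)
        if actual != expected:
            errors.append((i, stimulus, expected, actual))
    return errors  # an empty list means no miscompares were observed

# Toy example: a "device" that mishandles address 0 versus the reference model.
ref = lambda addr: addr % 4                 # reference: cache bank = addr mod 4
duv = lambda addr: addr % 4 if addr else 1  # injected bug at address 0
print(check(range(4), duv, ref))            # [(0, 0, 0, 1)]
```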
173-188
Abstract
In many application areas it is necessary to perform symbolic-numerical calculations over large volumes of data. Examples of such areas are robotics, speech recognition, recognition of graphical information, automation and others. Symbolic computation systems, also called computer algebra systems, have been actively developed since the late eighties. Well-known systems are Mathematica, Maple, Reduce, and many others. Almost all of these systems were not originally designed for large-scale mathematical objects or for multiprocessor clusters. The FORM system is a unique exception: it was conceived as a system that can operate with objects exceeding RAM, placing such objects on the hard drive. We describe the algorithms of the MathPartner web service that are designed to interact with a computing cluster. We give an algorithm for the socket server, which is the link between MathPartner and supercomputers and which provides execution of parallel programs on a cluster. We explain in detail the mechanism that abstracts away the specific features of supercomputers and the installed PBS package. The user can run on the cluster either programs of the MathPartner package or their own programs. To run their own programs, users can send the compiled classes to the computing cluster in a zip archive through the MathPartner web interface. We show examples of using the parallel algorithms included in the MathPartner package. Some of the MathPartner parallel programs are implemented with the DDP (dynamic decentralized parallelization) paradigm. DDP is designed as a framework that allows writing efficient parallel programs for working with non-homogeneous data such as sparse matrices. We demonstrate examples of using DDP programs that are integrated into MathPartner.
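A generic sketch of the socket-server pattern described, receiving a zip archive with compiled classes and submitting a job to the batch system, is shown below. The port, file names and qsub invocation are illustrative assumptions; the actual MathPartner protocol and PBS configuration are not reproduced here.

```python
# Receive a job archive over a socket and submit it through PBS (illustrative).
import socket, subprocess, tempfile, zipfile

def serve(port=5555):
    srv = socket.create_server(("", port))
    while True:
        conn, _ = srv.accept()
        with conn, tempfile.NamedTemporaryFile(suffix=".zip", delete=False) as f:
            while chunk := conn.recv(65536):
                f.write(chunk)          # stream the zip archive to disk
        job_dir = tempfile.mkdtemp()
        zipfile.ZipFile(f.name).extractall(job_dir)
        # qsub is the standard PBS submission command; "job.sh" is assumed
        # to be a launch script shipped inside the archive.
        subprocess.run(["qsub", "job.sh"], cwd=job_dir)
```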
189-208
Abstract
This paper regards problems of analysis and verification of complex modern operating systems that should take into account the variability and configurability of those systems. The main problems of current interest are related to conditional compilation as the variability mechanism most widely used in the system software domain. It makes fruitful analysis of the separate pieces of code combined into system variants impossible, because most of these pieces have no interface or behavior of their own. On the other hand, analysis of all variants separately is also impossible due to their enormous number. The paper provides an overview of analysis methods able to cope with the stated problems, distinguishing two classes of approaches: analysis of a sample of variants chosen by some variant coverage criteria, and variation-aware analysis that processes many variants simultaneously, using the similarities between them to minimize the resources required. For future development we choose the most scalable techniques: sampling analysis based on code coverage and on coverage of feature combinations, and variation-aware analysis using the counterexample-guided abstraction refinement approach.
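As a hedged illustration of sampling by coverage of feature combinations, the sketch below greedily picks configurations until every pair of feature settings occurs at least once; the greedy strategy and feature names are assumptions for illustration, not the techniques selected in the paper.

```python
# Greedy pairwise sampling: cover every pair of feature settings at least once.
from itertools import combinations, product

def pairwise_sample(features, candidates):
    needed = {(f1, v1, f2, v2)
              for f1, f2 in combinations(features, 2)
              for v1, v2 in product([False, True], repeat=2)}
    sample = []
    for cfg in candidates:
        covered = {(f1, cfg[f1], f2, cfg[f2])
                   for f1, f2 in combinations(features, 2)}
        if covered & needed:            # keep configs covering new pairs
            sample.append(cfg)
            needed -= covered
        if not needed:
            break
    return sample

features = ["SMP", "DEBUG", "NET"]
candidates = [dict(zip(features, bits))
              for bits in product([False, True], repeat=len(features))]
print(len(pairwise_sample(features, candidates)))  # sample size, not all 2**n
```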
209-230
Abstract
With the growing volume of and demand for data, a major concern for an organization trying to implement data-driven projects is not only how to technically collect, cleanse, integrate and access data, but even more so how and why to use it. There is a lack of unification on the logical and technical level between data scientists, IT departments and business departments, as it is often unclear where the data comes from, what it looks like, what it contains and how to process it in the context of existing systems. In this paper we therefore present a platform for data exploration and processing which enables data-driven projects without requiring a complete organizational revamp, providing a workflow and technical basis for such projects.
231-240
Abstract
The authors are developing a precedent-based approach to the problem of optimal decision making. The method they develop makes it possible to select the most adequate precedent in conditions where the object under consideration is not fully described and cannot be assessed unambiguously. The originality of the approach lies in its focus on operating with a varying set of features (attributes). This is important for many applications, but it is especially important when supporting the decision making of a physician, who often lacks time and resources. The method presumes the need to differentiate possible object membership, which may be done by widening the object's feature space. This task may in turn be reduced to investigating the roles of features and their combinations (as in differential diagnosis and semiotics in medicine). To determine how missing features should be retrieved, the authors propose to use the following concepts: range, persistent feature combination, frequency of occurrence, availability of a feature, and object category.
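One way to sketch precedent selection with a varying feature set: only the features present in both the query and a precedent are compared, with per-feature weights standing in for notions like availability and persistence. The weights and data below are illustrative assumptions, not the authors' method.

```python
# Precedent retrieval over incompletely described objects (illustration only).
def similarity(query, precedent, weights):
    shared = set(query) & set(precedent)   # compare only mutually known features
    if not shared:
        return 0.0
    score = sum(weights.get(f, 1.0) for f in shared if query[f] == precedent[f])
    return score / sum(weights.get(f, 1.0) for f in shared)

precedents = [
    {"fever": True, "cough": True, "rash": False},
    {"fever": False, "rash": True},
]
query = {"fever": True, "cough": True}     # incomplete object description
best = max(precedents, key=lambda p: similarity(query, p, {"rash": 2.0}))
print(best)
```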
Y. A. Rumyanstev,
P. N. Zakharov,
N. A. Abrashitova,
A. V. Shmatok,
V. O. Ryzhikh,
N. B. Gudimchuk,
F. I. Ataullakhanov
241-266
Abstract
This paper presents a high performance simulation of microtubule molecular dynamics implemented on a Xilinx Virtex-7 FPGA using the high-level synthesis tool Vivado HLS. The FPGA implementation is compared to multicore Intel Xeon CPU and Nvidia K40 GPU implementations in terms of performance and energy efficiency. The algorithm takes Brownian motion into account and thus heavily uses normally distributed random numbers. The original sequential code was optimized for the different platforms using OpenMP for the CPU, OpenCL for the GPU and Vivado HLS for the FPGA. We show that in terms of performance the FPGA achieved a 17x speedup over the CPU and an 11x speedup over the GPU for our best optimized CPU and GPU versions. As to power efficiency, the FPGA outperformed the CPU 227 times and the GPU 75 times. The FPGA application is developed using an SDK with a Board Support Package that includes an FPGA project framework, into which the accelerator kernel IP core (designed in Vivado HLS) is integrated, and host-side libraries used to communicate with the FPGA via PCI Express. The developed flow does not require expert FPGA skills and can be used by a programmer with little knowledge of hardware design methodology, using the C/C++ language for the complete development of an FPGA-accelerated solution.
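The kind of Brownian dynamics update that makes such a simulation hungry for normally distributed random numbers can be sketched as follows; the constants and the force law are placeholders for illustration, not the paper's microtubule model.

```python
# Overdamped Brownian dynamics step: deterministic drift plus Gaussian noise.
import numpy as np

def brownian_step(x, forces, dt=1e-9, gamma=1e-8, kT=4.1e-21, rng=np.random):
    drift = forces(x) * dt / gamma                       # force-driven motion
    noise = rng.normal(0.0, np.sqrt(2 * kT * dt / gamma), size=x.shape)
    return x + drift + noise                             # thermal fluctuations

x = np.zeros(1000)                         # coordinates of model particles
x = brownian_step(x, lambda x: -1e-6 * x)  # placeholder harmonic restoring force
```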
267-326
Abstract
A hybrid method for the approximation of advective terms is proposed in order to resolve flows over a wide range of Mach numbers. The hybrid method is based on the Kurganov-Tadmor (KT) scheme and the projection method PISO (Pressure Implicit with Splitting of Operators). To construct this method, the Kurganov-Tadmor scheme for convective fluxes was formulated in an implicit manner, together with an introduced blending function which switches between the compressible regime (KT) and the incompressible regime (PISO) depending on local characteristics of the flow. Such a hybrid scheme gives the following benefits: a) implicit treatment of diffusive terms removes the time step restrictions that such processes impose when approximated with an explicit scheme; b) the implicit formulation of convective terms, together with the blending between PISO and KT, produces better stability that depends only on the flow Courant number, removing acoustic Courant number restrictions in low-Mach-number flows; c) acoustic flows can nevertheless also be reproduced; in this case, the local acoustic Courant number must be decreased to values less than 1 in the whole computational domain; d) using the KT scheme as the basis for the approximation of convective terms makes it possible to achieve non-oscillating solutions for both acoustic and compressible cases (Mach number larger than 0.3). In order to study the properties of the hybrid method, a set of cases with analytical solutions or experimental data for different classes of flows was considered: a) compressible flows: propagation of a wave in a straight channel (Sod's problem), supersonic flow over a flat wedge, supersonic flow over a backward step, flow over a forward step at supersonic velocities, flow in a supersonic converging-diverging nozzle with a shock wave; b) incompressible flows: subsonic flow of a laminar viscous fluid in a channel with circular cross-section, flow around a cylinder in laminar and turbulent regimes, mixing of two gases in a 2D flat channel; c) industrial and academic verification tests: supersonic flow of air in a NASA nozzle at a pressure ratio of 5, expansion of stationary equilibrium hot plasma into vacuum; d) qualitative assessment of the adequacy of the hybrid method for industrial cases: numerical simulation of flows in a high-speed micro-compressor and simulation of two-phase flow in a liquid ring pump. All materials are available for public access through the GitHub project https://github.com/unicfdlab.
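As a hedged illustration of the blending idea, the convective flux can be written as a convex combination controlled by a function of the local Mach number; the switch function sigma below and the reference Mach number are assumptions, not the paper's exact definition.

```latex
% Illustrative blended convective flux; \sigma and its dependence on the local
% Mach number M are assumptions, not the paper's exact switch.
F_{\mathrm{conv}}
  = \sigma(M)\,F_{\mathrm{KT}} + \bigl(1 - \sigma(M)\bigr)\,F_{\mathrm{PISO}},
\qquad
\sigma(M) = \min\bigl(1,\; M / M_{\mathrm{ref}}\bigr)
```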
ISSN 2079-8156 (Print)
ISSN 2220-6426 (Online)