The structure of a process model discovered directly from an event log of a multi-agent system often does not reflect the behavior of individual agents and their interactions. We suggest analyzing the relations between events in an event log to localize actions executed by different agents and involved in their asynchronous interaction. A process model of the multi-agent system is then composed from individual agent models, between which we add channels to model asynchronous message exchange. We consider agent interaction within both the acyclic and the cyclic behavior of different agents. We develop an algorithm that supports the analysis of event relations between different interacting agents and study its correctness. Experimental results demonstrate an overall improvement in the quality of process models discovered by the proposed approach compared to monolithic models discovered directly from event logs of multi-agent systems.
Any state-of-the-art integrated development environment (IDE) should provide software developers with services for quick and correct code transformation. Such services can be used both for refactoring a program to improve its quality and for quickly fixing syntax errors in source code. This paper presents a method for constructing a subsystem that makes it possible to create such services and is also quickly extensible to support different programming languages. We propose a method for transforming the Program Structure Interface (PSI), a special data structure that provides an API for the development of IDE services. In addition, we propose a method for generating types for the PSI in accordance with the syntax of the supported programming language. The approach was developed for a multi-language platform of a large telecommunications company. Refactoring and Quick Fix features are implemented using the proposed generator for two IDEs: a Python IDE and a Java IDE.
The article studies the processes that arise during the formation, transmission, and reading of information signals in the acoustic paths of magnetostrictive linear and angular displacement transducers. Mathematical models are given for calculating the magnetic fields of annular permanent magnets and the fields formed by current pulses flowing in a waveguide medium. To calculate the magnetization of the waveguide, a numerical method was developed that takes into account the magnetization of the waveguide material at the preceding time step. Mathematical models are also given for calculating the parameters of the magnetic flux of the solenoid and of the output signal. The models for permanent-magnet fields, the developed numerical method, and the models for the formation of the magnetic flux and output signal were implemented in software used in the educational process. The results, as well as the refined and newly developed methods for calculating magnetic fields and the numerical method, can be used to study magnetostrictive devices both at the design stage and in operation, which reduces their final cost. The article does not address the processing of the output signal, which leaves room for further research and further modification of the software.
The article provides an overview of the main methods of steganography, on the basis of which a new method was developed: embedding additional text (pseudo-information) in parallel with the transmitted message. An algorithm for this method has been developed, in which the placement of the bit sequence is determined by generated pseudo-random numbers. Based on the algorithm, an application has been developed that allows the sender to encrypt a message and place it in a container image, and the recipient to detect the presence of a message and, if one exists, extract it. A computational experiment showed that an image with a fairly large embedded text does not visually differ from the original image.
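As a rough illustration of the scheme described above: the abstract does not specify the exact embedding technique, so the sketch below assumes a common least-significant-bit (LSB) approach, with embedding positions chosen by a pseudo-random generator seeded with a shared key. All function names and the byte-array image model are hypothetical.

```python
import random

def embed(pixels: bytearray, message: bytes, seed: int) -> bytearray:
    """Hide message bits in the LSBs of pseudo-randomly chosen pixel bytes."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    rng = random.Random(seed)                       # shared key = PRNG seed
    positions = rng.sample(range(len(pixels)), len(bits))
    out = bytearray(pixels)
    for pos, bit in zip(positions, bits):
        out[pos] = (out[pos] & 0xFE) | bit          # overwrite only the LSB
    return out

def extract(pixels: bytearray, length: int, seed: int) -> bytes:
    """Recover `length` bytes using the same seed (hence the same positions)."""
    rng = random.Random(seed)
    positions = rng.sample(range(len(pixels)), length * 8)
    bits = [pixels[p] & 1 for p in positions]
    msg = bytearray()
    for i in range(0, len(bits), 8):
        msg.append(sum(b << j for j, b in enumerate(bits[i:i + 8])))
    return bytes(msg)
```

Because each embedded bit changes a pixel byte by at most 1, a moderately sized message leaves the cover image visually unchanged, which matches the experiment reported in the abstract.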
Big Data technologies have traditionally focused on processing human-generated data, while neglecting the vast amounts of data generated by Machine-to-Machine (M2M) interactions and Internet-of-Things (IoT) platforms. These interactions generate real-time data streams that are highly structured, often in the form of a series of event occurrences. In this paper, we aim to provide a comprehensive overview of the main research issues in Complex Event Processing (CEP) techniques, with a special focus on optimizing the distribution of event handlers between working nodes. We introduce and compare different deployment strategies for CEP event handlers. These strategies define how the event handlers are distributed over different working nodes. In this paper we consider the distributed approach, because it ensures that the event handlers are scalable, fault-tolerant, and can handle large volumes of data.
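To make the notion of a deployment strategy concrete, here is a minimal sketch of two common ways to map event handlers onto working nodes. The abstract does not name specific strategies, so these two (round-robin and hash-based) are illustrative assumptions, not the paper's actual algorithms.

```python
from itertools import cycle
from hashlib import sha256

def round_robin(handlers, nodes):
    """Assign handlers to nodes in turn: balances handler counts,
    but the assignment changes whenever the handler list is reordered."""
    node_iter = cycle(nodes)
    return {h: next(node_iter) for h in handlers}

def hash_based(handlers, nodes):
    """Assign each handler by hashing its name: stable across restarts,
    so the same handler always lands on the same node."""
    return {h: nodes[int(sha256(h.encode()).hexdigest(), 16) % len(nodes)]
            for h in handlers}
```

A real CEP deployment would additionally weigh node load and the event rates each handler consumes, which is where the optimization problem discussed in the paper arises.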
The paper briefly discusses advanced technologies for treating wastewater containing heavy and non-ferrous metal ions at industrial and small enterprises of urban agglomerations. An analysis of the efficiency of three-dimensional flow-through electrodes for removing harmful reagents from wastewater is given. Mathematical models of electrochemical processes in three-dimensional flow-through electrodes, as applied to extracting metals from the solutions of galvanochemical industries, are presented. A set of programs developed in Object Pascal for computational experiments with the obtained mathematical models is described. A numerical solution of a scientific problem of practical importance has been obtained using this program complex. Good agreement between the calculated and experimental results is shown.
Dynamic symbolic execution is a well-known technique for testing applications. It introduces symbolic variables – program data with no concrete value at the moment of instantiation – and uses them to systematically explore the execution paths in a program under analysis. However, not every value can easily be modelled as symbolic: for instance, some values are drawn from restricted domains or have complex invariants that are hard to model using existing logic theories, even though they pose no problem for concrete computation. In this paper, we propose an implementation of infrastructure for dealing with such “hard-to-model” values. We take the approach known as symcrete execution and implement a robust and scalable version of it in the well-known KLEE symbolic execution engine. We use this infrastructure to support symbolic execution of LLVM programs with complex input data structures and input buffers of indeterminate size.
The Regular Expression Denial of Service (REDoS) problem refers to a time explosion caused by the high computational complexity of matching a string against a regex pattern. This issue is prevalent in popular regex engines, such as those of Python, JavaScript, and C++. In this paper, we examine several existing open-source tools for detecting REDoS and identify a class of regexes that can create REDoS situations in popular regex engines but are not detected by these tools. To address this gap, we propose a new approach based on ambiguity analysis, which combines a strong star-normal form test with an analysis of the transformation monoids of Glushkov automata orbits. Our experiments demonstrate that our implementation outperforms the existing tools on regexes with polynomial matching complexity and complex subexpression overlap structures.
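The time explosion mentioned above can be observed directly in Python's backtracking engine. The snippet below is only a generic demonstration of the REDoS effect (the specific patterns `a*a*$` and `(a+)+$` are textbook examples, not regexes from the paper): on a failing input, the first pattern backtracks quadratically and the second exponentially in the length of the `a` run.

```python
import re
import time

def match_time(pattern: str, text: str) -> float:
    """Wall-clock seconds spent (failing to) match pattern against text."""
    start = time.perf_counter()
    re.match(pattern, text)
    return time.perf_counter() - start

# Polynomial blow-up: 'a*a*$' tries every split of the 'a' run -> O(n^2).
poly = [match_time(r"a*a*$", "a" * n + "b") for n in (1000, 2000, 4000)]

# Exponential blow-up: '(a+)+$' backtracks over every partition of the run,
# so even a short input (18 chars here) already costs ~2^17 attempts.
expo = match_time(r"(a+)+$", "a" * 18 + "b")
```

The trailing `"b"` forces the overall match to fail, which is what triggers the exhaustive backtracking; on a matching input both patterns finish quickly.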
We present a straightforward implementation of a simplified imperative programming language with direct memory access and address arithmetic, together with a simple static analyzer for memory leaks. Our study continues a line of research conducted at Innopolis University in 2016-2022 on alias calculi for imperative programming languages with decidable pointer arithmetic, but differs in its memory address model: we study a segmented memory model instead of a linear one.
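As a toy illustration of what a leak analyzer over a segmented model must track (the abstract gives no details of the actual analyzer, so the operation set and representation here are entirely hypothetical): each allocation creates a fresh segment, aliases are tracked per variable, and any segment still live at the end of the program is reported as leaked.

```python
def find_leaks(program):
    """Toy leak check over a straight-line program given as a list of
    ('alloc', var) / ('assign', dst, src) / ('free', var) operations.
    Returns the set of segment ids that are never freed."""
    owner = {}   # variable -> id of the segment it points to
    live = set() # allocated, not-yet-freed segments
    seg = 0
    for op in program:
        if op[0] == "alloc":           # var = new segment
            seg += 1
            owner[op[1]] = seg
            live.add(seg)
        elif op[0] == "assign":        # dst = src (aliasing)
            owner[op[1]] = owner.get(op[2])
        elif op[0] == "free":          # release the pointed-to segment
            live.discard(owner.get(op[1]))
    return live
```

A real analyzer must additionally handle branching control flow and address arithmetic within a segment; the point here is only that freeing through an alias (`free r` after `r = p`) correctly releases the segment, which a purely name-based check would miss.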
This article explores the relevance of using design patterns in the development of the architecture of monitoring systems. The increasing complexity of modern monitoring systems has made it challenging to maintain and evolve them. The use of design patterns can address these challenges by providing reusable solutions to common problems in monitoring system architecture. This article reviews the literature on monitoring systems and design patterns and identifies appropriate design patterns for monitoring system architecture. The article also analyzes the requirements for monitoring systems and demonstrates how design patterns can be used to meet these requirements. The results show that the use of design patterns can improve the maintainability, flexibility, reliability, compatibility and scalability of monitoring systems. This article is relevant to software architects, developers, and system administrators who are involved in the development and maintenance of monitoring systems.
Many verification tasks in model checking (one of the formal software verification approaches) cannot be solved within bounded time due to combinatorial state-space explosion. To find a bug in the verified program within a given time, a simplified version of the program can be analyzed instead. This paper presents DD** algorithms (based on the Delta Debugging approach) for iterating over simplified versions of a given program. These algorithms were implemented in the software verification tool CPAchecker. Our experiments showed that this technique can be used to find new bugs in real software.
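The Delta Debugging idea underlying the DD** algorithms can be sketched with Zeller's classic `ddmin`: repeatedly drop chunks of the program's elements while the property of interest (here, a `fails` predicate standing in for "the verifier still reports the bug") keeps holding. This is a generic textbook sketch, not the paper's DD** variants themselves.

```python
def ddmin(items, fails):
    """Shrink `items` to a smaller subset on which `fails` still holds,
    by repeatedly testing complements of ever-finer chunks."""
    n = 2                                   # current number of chunks
    while len(items) >= 2:
        chunk = len(items) // n
        subsets = [items[i:i + chunk] for i in range(0, len(items), chunk)]
        reduced = False
        for i in range(len(subsets)):
            # try removing chunk i entirely
            complement = [x for j, s in enumerate(subsets) if j != i for x in s]
            if fails(complement):
                items = complement          # bug persists without chunk i
                n = max(n - 1, 2)
                reduced = True
                break
        if not reduced:
            if n >= len(items):             # already at single-element chunks
                break
            n = min(n * 2, len(items))      # refine the partition
    return items
```

Applied to program simplification, `items` would be program fragments (statements, functions) and `fails` a run of the verifier on the reassembled program, so each kept version is a simplified program that still exhibits the bug.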
When migrating software to new hardware architectures, including the development of optimizing compilers for new platforms, there is a need for statistical analysis of data on the use of different machine instructions, or groups of them, in the machine code of programs. This paper describes a new extensible framework for statistical research on machine opcodes, together with a dataset that other researchers can use. We automatically collect data on different GNU/Linux distributions and architectures and provide facilities for its statistical analysis.
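The core of such data collection reduces to counting mnemonics in disassembly. The sketch below parses `objdump -d`-style output with a regular expression and tallies mnemonics with a `Counter`; the regex and the sample listing are illustrative assumptions, not the framework's actual collector.

```python
import re
from collections import Counter

def count_opcodes(disasm: str) -> Counter:
    """Count mnemonics in `objdump -d`-style lines: 'addr: hex-bytes mnemonic operands'."""
    pat = re.compile(r"\s*[0-9a-f]+:\s+(?:[0-9a-f]{2}\s)+\s*([a-z][a-z0-9.]*)(?=\s|$)")
    counts = Counter()
    for line in disasm.splitlines():
        m = pat.match(line)
        if m:
            counts[m.group(1)] += 1
    return counts

# A tiny hand-made x86-64 listing in objdump format, for illustration only.
sample = (
    "   0:\t55                   \tpush   %rbp\n"
    "   1:\t48 89 e5             \tmov    %rsp,%rbp\n"
    "   4:\tb8 2a 00 00 00       \tmov    $0x2a,%eax\n"
    "   9:\t5d                   \tpop    %rbp\n"
    "   a:\tc3                   \tret\n"
)
counts = count_opcodes(sample)
```

Aggregating such counters over the binaries of whole distributions yields the kind of per-architecture opcode statistics the dataset is meant to support.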
Enterprises often provide their services via a family of applications based on various platforms. Applications in such a family can behave differently, and their development processes can differ as well. Moreover, modern development processes are often complex and sometimes vague. This can lead to bugs, defects, and unwanted discrepancies in applications. In this paper, we show that process mining can be applied to support development in such a case. Real-life models can be discovered and investigated by developer teams in order to reveal differences in application behaviour, find bugs, and highlight inefficiencies. We consider datasets with event data of two types. First, we analyse event logs of Android and iOS applications from the same product family. Second, we consider event data from the working repositories of these applications. We show how, by analysing such datasets, the real-life development process can be discovered. Moreover, application event logs can help to find bugs and unwanted behaviour of varying severity.
The thread-modular approach over predicate abstraction is an efficient technique for software verification of complicated real-world source code. One of the main problems in this technique is predicate abstraction refinement in the multithreaded case. A default predicate refiner considers only a path related to one thread and does not refine the thread-modular environment. For instance, if we have applied an effect from the second thread to the current one, the path in the second thread leading to the applied effect is not refined. Our goal was to develop a more precise refinement procedure, reusing a default predicate refiner to refine both the path in the current thread and the path to the effect in the environment. The idea is to construct a joint boolean formula from these two paths. Since some variables may be common, a key challenge is to correctly rename and equate variables in the two parts of the formula so that they accurately represent the way the threads interact. This is essential for obtaining reliable predicates that can potentially prove the spuriousness of the path.
The proposed approach is implemented on top of the CPAchecker framework. It is evaluated on the standard SV-COMP benchmark set, and the results show some benefit. Evaluation on real-world software does not demonstrate a significant increase in accuracy, as the described flaw of predicate refinement is not the only source of false positives. While the proposed approach can successfully prove some specific paths to be spurious, this is not enough to fully prove the correctness of some programs. However, the approach has further potential for improvement.
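The renaming-and-equating step described above can be sketched as follows. The tuple encoding of constraints, the suffixing scheme, and the treatment of shared variables are all simplifying assumptions for illustration; the actual implementation works over SMT formulas inside CPAchecker.

```python
def rename_apart(formula, suffix, shared):
    """Suffix thread-local variables in a path formula (a list of
    (op, lhs, rhs) constraints); shared variables keep their names."""
    def r(term):
        if isinstance(term, str) and term not in shared:
            return f"{term}_{suffix}"
        return term  # shared variable or a constant
    return [(op, r(lhs), r(rhs)) for (op, lhs, rhs) in formula]

def join_paths(path_current, path_env, shared):
    """Conjoin the current thread's path with the environment path to the
    applied effect. Locals are renamed apart so they cannot clash; shared
    variables keep a common name and are thereby implicitly equated."""
    return rename_apart(path_current, "t1", shared) + \
           rename_apart(path_env, "t2", shared)
```

Feeding the joined constraint list to a solver as one conjunction is what lets the refiner derive predicates that mention both the current path and the environment path at once.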
Development of telecommunication product lines is still a very labor-intensive task, involving a great amount of human resources and producing a large number of development artifacts — code, models, tests, etc. Declarative domain-specific languages (DSLs) may considerably simplify this process by raising the level of abstraction. We use the term “declarative” to mean that such a DSL does not enable the development of a closed software application, but rather supports the creation, generation and maintenance of various kinds of software assets — product databases, events and event handlers, target code data structures, etc. At the same time, such a DSL may have some executable semantics, but it can be very specific and have many environment-specific requirements. Thus, execution and debugging of such DSL specifications is a meaningful task with no common solution, due to the unique executable semantics. Consequently, it is not possible to use the debug facilities of known DSL environments, such as Xtext, MPS, etc., for such a case. In this paper, we present a debugger for DevM — a declarative DSL intended to support device management in software development in the context of a router product line by a large telecommunication company. We clarify the executable semantics of DevM, making it possible to execute DevM specifications in an isolated environment, i.e. in simulation mode, without generation of target code. We use a graphical model-based notation to depict every step of execution. Finally, we implement and integrate the debugger into the DevM IDE, using the Debug Adapter Protocol and a language server architecture combined with the Eclipse Xtext/EMF tool chain.
In system software environments, a vast amount of information circulates, making it crucial to utilize this information in order to enhance the operation of such systems. One such system is the Linux kernel, which not only boasts a completely open-source nature, but also provides a comprehensive history through its git repository. Here, every logical code change is accompanied by a message written by the developer in natural language. Within this expansive repository, our focus lies on error-correction messages from fixing commits, since analyzing their text can help identify the most common types of errors. Building upon our previous work, this paper proposes the use of data analysis methods for this purpose. To achieve our objective, we explore various techniques for processing repository messages and employ automated methods to pinpoint the prevalent bugs within them. By calculating distances between vectorized bug-fixing messages and grouping them into clusters, we can effectively categorize and isolate the most frequently occurring errors. Our approach is applied to several prominent parts of the Linux kernel, yielding comprehensive results and insights into bugs in different subsystems. As a result, we present a summary of bug fixes in such parts of the Linux kernel as kernel, sched, mm, net, irq, x86 and arm64.
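The vectorize-measure-cluster pipeline described above can be sketched in a few lines of pure Python. The abstract does not name the specific vectorization or clustering algorithm, so this sketch assumes TF-IDF weighting, cosine similarity, and a simple greedy threshold clustering; real pipelines would typically use library implementations and tuned parameters.

```python
import math
import re
from collections import Counter

def tfidf(messages):
    """Vectorize commit messages as word -> TF-IDF weight dictionaries."""
    docs = [Counter(re.findall(r"[a-z]+", m.lower())) for m in messages]
    df = Counter(w for d in docs for w in d)      # document frequency
    n = len(docs)
    return [{w: tf * math.log(n / df[w]) for w, tf in d.items()} for d in docs]

def cosine(u, v):
    dot = sum(weight * v.get(w, 0.0) for w, weight in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def cluster(messages, threshold=0.3):
    """Greedy clustering: attach each message to the first cluster whose
    representative is close enough, otherwise start a new cluster."""
    vecs = tfidf(messages)
    clusters = []  # list of (representative index, member indices)
    for i, v in enumerate(vecs):
        for rep, members in clusters:
            if cosine(vecs[rep], v) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((i, [i]))
    return [members for _, members in clusters]
```

Run over a subsystem's fixing-commit messages, the resulting clusters group messages about the same kind of error, and cluster sizes give the frequency ranking the paper reports per subsystem.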
ISSN 2220-6426 (Online)