Tolerant parsing is a form of syntax analysis aimed at capturing the structure of certain points of interest in source code. While these points should be well described in a tolerant grammar of the language, other parts of the program may be described coarsely, so the parser remains tolerant to possible variations of the irrelevant areas. Island grammars are one of the basic tolerant parsing techniques: the term “islands” denotes the relevant code, while the irrelevant code is called “water”. The effort required to write water rules should be as small as possible. Previously, we extended island grammar theory and introduced a novel formal concept of a simplified grammar, based on the idea of eliminating the water description by replacing it with a special “Any” symbol. To work with this concept, a standard LL(1) parsing algorithm was modified and the LanD parser generator was developed. In this paper, the “Any”-based modification is described for the LR(1) parsing algorithm. In comparison with LL(1) tolerant grammars, LR(1) tolerant grammars are easier to develop and explore due to their solid island rules. Supplementary “Any” processing techniques are introduced to make this symbol easier to use while staying within the boundaries of the given simplified grammar definition. Specific error recovery algorithms are presented for both LL and LR tolerant parsing; they allow one to further minimize the number and complexity of water rules and make tolerant grammars extensible. In the experiments section, the results of large-scale testing of LL and LR tolerant parsers on 9 open-source project repositories are presented.
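To make the “Any” idea concrete, here is a minimal sketch (not the actual LanD implementation) of how a tolerant parser can process the “Any” symbol: on encountering it, the parser skips tokens until it reaches a token from the follow set computed for the current position; everything skipped is water. The token stream and follow set below are hypothetical.

```python
# Hypothetical sketch of "Any" processing in an LL-style tolerant parser.
def match_any(tokens, pos, follow_set):
    """Consume water tokens until a token from follow_set is reached."""
    while pos < len(tokens) and tokens[pos] not in follow_set:
        pos += 1  # everything skipped here is "water"
    return pos

# Example: skip a method body (water) up to the closing brace that
# the grammar expects after Any.
tokens = ["{", "x", "=", "f", "(", "1", ")", ";", "}"]
print(match_any(tokens, 1, {"}"}))  # -> 8, the index of "}"
```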
The subject of this paper is the development of a software framework for real-time management of intelligent devices. The framework enables intelligent management of IoT devices in cyber-physical systems using models based on recurrence relations and differential equations. The platform was developed using the Python programming language, the Django framework, and a wide range of modules and libraries that support continuous simulation. The framework also incorporates an application programming interface for specifying system behaviour, transmitting input parameters and output results, and sending control actions to the IoT system via web services.
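As an illustration of the two model classes the framework works with, the sketch below (not the paper's actual code; all names and coefficients are hypothetical) shows a recurrence relation step and a forward-Euler step for a differential equation, the kind of primitives a continuous simulation of an IoT device relies on.

```python
# Hypothetical model primitives; not taken from the described framework.
def recurrence_step(x_prev, a=0.9, b=1.0, u=0.0):
    """Discrete model x[n] = a*x[n-1] + b*u[n]."""
    return a * x_prev + b * u

def euler_step(x, dt, f):
    """One forward-Euler step of the differential equation dx/dt = f(x)."""
    return x + dt * f(x)

x = 20.0  # e.g., a temperature managed by an IoT device
for _ in range(10):
    x = euler_step(x, dt=0.1, f=lambda t: -0.5 * (t - 18.0))
print(x)  # state after one second of simulated cooling toward 18.0
```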
The development of systems-on-chip (SoC) and networks-on-chip requires verification of standalone units (peripherals and switches) and of the system as a whole. An approach to test development for the verification of programmable standalone units is presented. The tests are written in C++ using a specific API to program the device under test (DUT) and the test environment. The API functions are implemented in the standard environment library; the specific implementation depends on the test environment structure: a standalone device, a device as a part of a controller block, or a device as a part of the whole SoC. For system-level verification, the test program, together with the standard environment library, is translated for execution on a general-purpose core of the verified SoC. The testbench for unit-level verification consists of the environment library and the test linked to the testbench as a PLI application, an adapter for the DUT system bus interface and, possibly, a specific imitator of an external device. Different devices with one programming interface can be tested by the same test program even if they have different bus interfaces; different bus interfaces only require different adapters to be implemented. The presented approach makes it possible to use the same test program both for standalone and for system-level verification (as an integration test). The implementation of the presented approach and its application to the verification of microprocessors of the Elbrus family are described.
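The paper's API is a C++ one; the Python sketch below only illustrates the central design idea under stated assumptions: the test body is written against an abstract environment API, so the same test runs unchanged whether the library implementation targets a standalone testbench or a general-purpose core of the SoC. All names are invented.

```python
# Invented names; a schematic restatement of the "one test, many
# environment library implementations" idea, not the real C++ API.
class StandaloneEnv:
    def write_reg(self, addr, value):
        print(f"PLI write {value:#x} -> {addr:#x}")   # via bus adapter

class SystemEnv:
    def write_reg(self, addr, value):
        print(f"core store {value:#x} -> {addr:#x}")  # runs on SoC core

def test_program(env):
    """The same test body serves unit-level and system-level runs."""
    env.write_reg(0x1000, 0x1)  # program the DUT via the common API

test_program(StandaloneEnv())
test_program(SystemEnv())
```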
State-of-the-art microprocessor systems usually include a complex cache memory hierarchy, and coherence protocols are used to maintain memory consistency. Implementing a memory subsystem in a hardware description language (HDL) is a complex and error-prone task, so ensuring the correct functioning of the memory subsystem is one of the cornerstones of modern microprocessor system development; functional verification is used for this purpose. In this paper, we present several approaches to the verification of memory subsystem units of multi-core microprocessors. We describe the characteristics of memory subsystems that need to be taken into account in the process of verification. The general structure of a test environment for standalone verification of memory subsystem units is presented. A classification of checking model types, with their advantages and disadvantages, is given. The construction of a standalone verification environment using the Universal Verification Methodology (UVM) is presented, and the restrictions that should be taken into account when verifying a memory subsystem unit are listed. The stages of the stimulus generation algorithm are described. A method of using “hints” from the design under verification to eliminate nondeterminism is employed in the implementation of the checking module. We also review several other techniques for checking the correctness of memory subsystem units, which can be useful at different stages of project development. A case study of applying the suggested approaches to the verification of the Home Memory Unit of microprocessors with the Elbrus architecture is presented, together with a classification of the errors detected and corrected in different submodules of the verified device and a plan for further enhancement of the test system.
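The “hints” technique can be illustrated schematically as follows (this is not the actual UVM environment; the transactions and the hint format are invented): when the reference model cannot predict which of several legal orderings the device chooses, a hint taken from the design under verification selects the matching expected transaction.

```python
# Invented example of hint-driven checking, not the real environment.
expected = [("rd", 0x10), ("rd", 0x20)]  # order not fixed by the spec

def check(observed, hint_index):
    """hint_index comes from DUT internals and removes the ambiguity."""
    want = expected.pop(hint_index)
    assert observed == want, f"mismatch: {observed} != {want}"

check(("rd", 0x20), 1)  # the DUT reordered the reads; the hint says so
check(("rd", 0x10), 0)
```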
This article presents a modular approach that reduces the labor costs of the technological preparation of small-scale metalworking production. Its idea is to formalize technological processes so that they, together with their documentation, can be generated from pre-prepared parameterized templates stored in a special database. The parts to be machined are represented as structures of their basic geometric components. For the template of machining operations for each component, symbolic parameters are fixed that define the workpiece used, the cutting tool options, the machining modes, etc. The result of the formalization is an automatically generated technological route in the form of an MSC diagram encoding it as a sequence of macro-operations for the machinery. This symbolic model is adapted to a specific instance of the part being manufactured by replacing the symbolic variables with specific values set by the technologist. The MSC diagram is supplemented with the results of time and cost calculations of technological routes, which allows the most efficient one to be selected. The correctness of the technological routes is ensured by symbolic verification, which checks the permissible ranges of the parameters of the MSC diagram as well as the correctness of the order and the compatibility of the operations in the sequence. The results of the whole process, obtained from the MSC diagram, are the set of preproduction technological documentation, which in particular includes a set of operating cards, and a fine-tuned production schedule after digital modeling that takes the real resources of the workshop into account. According to technologists, applying the described automation reduces the time to prepare documentation for parts of medium complexity from several weeks to 1-2 days.
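A much simplified sketch of the template mechanism is given below (the field names and values are hypothetical): a macro-operation template with symbolic parameters is instantiated by substituting the values set by the technologist.

```python
# Hypothetical macro-operation template with symbolic parameters.
template = {
    "operation": "turning",
    "workpiece": "$BLANK",
    "tool": "$CUTTER",
    "spindle_rpm": "$RPM",
}

def instantiate(tmpl, params):
    """Replace $-prefixed symbolic parameters with concrete values."""
    return {key: params.get(val, val) for key, val in tmpl.items()}

op = instantiate(template, {"$BLANK": "rod D40", "$CUTTER": "T15K6",
                            "$RPM": 1200})
print(op)  # one concrete macro-operation of the technological route
```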
E-commerce is a runaway activity growing at an unprecedented rate all over the world and drawing millions of people from different spots on the globe. At the same time, e-commerce provides ground for malicious behavior, which has become a subject of principal concern. One way to minimize this threat is to use reputation systems for trust management across the users of the network. Most existing reputation systems are feedback-based, and they work with feedback expressed in the form of numbers (e.g., integers from 0 to 5). In general, the notions of trust and reputation exemplify the uncertain (imprecise) pieces of information that are typical for the field of e-commerce. We suggest using a fuzzy logic approach to take into account the inherent vagueness of user feedback expressing the degree of satisfaction after completion of a regular transaction. A brief comparative analysis of well-known reputation systems, namely EigenTrust, HonestPeer, Absolute Trust, PowerTrust, and PeerTrust, is presented. Based on criteria such as convergence speed, robustness, and the presence of hyperparameters, the most robust and scalable algorithm is chosen on the basis of a series of computer experiments. Fuzzy versions of the chosen algorithm (PeerTrust), for both the Type-1 and the Interval Type-2 case, are implemented and analysed.
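A minimal Type-1 sketch of the fuzzification idea follows (it is not the paper's exact PeerTrust modification; the term set and scores are illustrative): a user's vague satisfaction is modeled by triangular membership functions and defuzzified into a crisp feedback score that a PeerTrust-style aggregation could consume.

```python
# Illustrative Type-1 fuzzy feedback; not the paper's implementation.
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def crisp_feedback(x):
    """Centroid defuzzification over 'low', 'medium', 'high' terms."""
    terms = {0.0: tri(x, -0.5, 0.0, 0.5),   # low    -> score 0.0
             0.5: tri(x, 0.0, 0.5, 1.0),    # medium -> score 0.5
             1.0: tri(x, 0.5, 1.0, 1.5)}    # high   -> score 1.0
    weight = sum(terms.values())
    return sum(s * mu for s, mu in terms.items()) / weight if weight else 0.0

print(crisp_feedback(0.8))  # a vague "rather satisfied" -> 0.8
```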
Data access conflicts may arise in hardware designs. One way of detecting such conflicts is static analysis of hardware descriptions in an HDL. We propose a static-analysis-based approach to the extraction of data conflicts from HDL descriptions; it has been implemented in the Retrascope tool. The following types of conflicts are considered: simultaneous reads and writes, simultaneous writes, reading of uninitialized data, and no reads between two writes. Conflict assertions are formulated as conditions on variables. HDL descriptions are automatically translated into formal models suitable for the nuXmv model checker. The translation process consists of the following steps: 1) preliminary processing; 2) control flow graph (CFG) building; 3) CFG transformation into a guarded actions decision diagram (GADD); 4) GADD translation into the nuXmv format. Conflict assertions are automatically built by static analysis of the GADD model and passed to the nuXmv model checker. Bounded model checking is used to check whether these assertions are satisfiable; if they are, counterexamples are generated and then translated into HDL testbenches by the Retrascope tool. The proposed approach was applied to several open-source HDL benchmarks, such as Texas-97, Verilog2SMV, VCEGAR, and mips16 modules. Potential conflicts have been detected in all of these benchmarks. Future work includes the propagation of conflict assertions to the interface level (thus obtaining assertions on the modules' communication protocols) and the generation of built-in HDL checkers.
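For illustration, the sketch below generates nuXmv specifications of the kind such conflict assertions may correspond to; the signal names are hypothetical, and the real Retrascope flow builds assertions from the GADD model rather than from strings.

```python
# Hypothetical nuXmv specs for two of the conflict types; counter-
# examples found by bounded model checking witness potential conflicts.
def conflict_specs(read_en, write_en):
    return [
        # simultaneous read and write must never happen:
        f"LTLSPEC G !({read_en} & {write_en})",
        # after a write, no second write until a read occurs:
        f"LTLSPEC G ({write_en} -> X (!{write_en} U {read_en}))",
    ]

for spec in conflict_specs("mem_rd", "mem_wr"):
    print(spec)
```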
Blockchain technologies are gradually finding application in many areas, especially in FinTech. As a result, many blockchain platforms have emerged that support smart contracts, which are intended to automate party interactions. However, it has been shown that smart contracts are prone to attacks and errors that lead to monetary losses. To date, a wide range of approaches for making smart contracts safer has been proposed, including analysis tools, reasoning models, and safer and more rigorous programming languages. In this paper, we provide an overview of the design principles of smart contract programming languages, related vulnerabilities, and future research areas. The overview is meant to outline the current state of these languages, to serve as a possible basis for future work, and to show the approaches used by the community to reach a safe and usable language for smart contracts. We have classified all of the found vulnerabilities by the source of their origin. Various language characteristics, such as abstraction level, paradigm, Turing completeness, and main features, are summarized in a table. Additional information about the languages, e.g., the execution model and the tools for static analysis, is also provided.
Due to the huge amounts of code in modern software products, there is always a variety of subtle errors or flaws in programs that are hard to discover during everyday use or through conventional testing. Many such errors can become a potential attack vector if they can be exploited by a remote user via manipulation of program input. This paper presents an approach for the automatic detection of security vulnerabilities using interprocedural static taint analysis. The goal of this study is to develop a taint analysis infrastructure applicable to the detection of vulnerabilities in C and C++ programs and extensible with separate detectors. The tool is based on the Interprocedural Finite Distributive Subset (IFDS) algorithm and is able to perform interprocedural, context-sensitive, path-insensitive analysis of programs represented in the LLVM form. According to our research, it is not possible to achieve good results using pure taint analysis, so, together with several enhancements of existing techniques, we propose to supplement it with an additional analysis stage based on static symbolic execution, which is path-sensitive and considers memory region sizes when filtering the results found by the first stage. The evaluation of the results was performed on the Juliet Test Suite and on open-source projects with publicly known vulnerabilities from the CVE database.
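The dataflow idea behind taint analysis can be shown on a toy example (far simpler than the IFDS-based engine described in the paper; the three-address instructions and the source/sink names are invented): taint is introduced at a source, propagated through assignments, and reported when it reaches a sink.

```python
# Toy intraprocedural taint propagation; invented IR, source, and sink.
program = [
    ("call", "x", "read_input"),    # taint source
    ("assign", "y", "x"),           # taint flows x -> y
    ("assign", "z", "const"),       # untainted
    ("call", None, "system", "y"),  # sink reached with a tainted arg
]

tainted = set()
for instr in program:
    if instr[0] == "call" and instr[2] == "read_input":
        tainted.add(instr[1])
    elif instr[0] == "assign" and instr[2] in tainted:
        tainted.add(instr[1])
    elif instr[0] == "call" and instr[2] == "system":
        if any(arg in tainted for arg in instr[3:]):
            print("potential vulnerability: tainted data reaches a sink")
```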
Cryptographic protocols are the core of any secure system; with their help, data is transmitted securely and protected from the negative impact of third parties. As a rule, a cryptographic protocol is developed, analyzed by means of formal verification and, if it is found safe, implemented in the programming language in which the system is developed. However, in the practical implementation of a cryptographic protocol, errors may occur due to the human factor and to the assumptions that are necessary to make the protocol implementable, and such errors undermine its security. Thus, a protocol that was initially considered safe may have an implementation that is in fact not safe. In addition, formal verification uses rather abstract concepts and does not allow the protocol to be analyzed in full. This paper presents an algorithm for analyzing C# source code in order to extract the structure of cryptographic protocols. The features of implementing protocols in practice are described. The algorithm is based on searching for important code sections that contain constructions specific to cryptographic protocols and on tracing the chains of variable transformations from the point where messages are sent or received back to their initial initialization, taking possible cryptographic transformations into account, in order to compose a tree from which a simplified structure of the cryptographic protocol is extracted. The algorithm is implemented in the C# programming language using the Roslyn parser. As an example, a cryptographic protocol is presented that contains the basic operations and functions, namely asymmetric and symmetric encryption, hashing, signing, random number generation, and data concatenation. The operation of the analyzer is shown using this protocol as an example. Future work is described.
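The backward tracing step can be restated schematically in Python (the real analyzer is written in C# on top of Roslyn; the assignment map and operation names here are invented): starting from a sent message, assignments are walked backwards, recording cryptographic transformations, until the initial initialization is reached.

```python
# Invented assignment map; schematic restatement of the tracing step.
assignments = {
    "msg": ("encrypt", "key_ab", "payload"),  # msg = Enc(key_ab, payload)
    "payload": ("concat", "nonce", "id_a"),   # payload = nonce || id_a
    "nonce": ("random",),                     # initial initialization
    "id_a": ("literal", "A"),
    "key_ab": ("literal", "k"),
}

def trace(var):
    """Build the transformation tree for a variable, back to its origin."""
    op, *args = assignments[var]
    if op in ("random", "literal"):
        return (var, op)
    return (var, op, [trace(arg) for arg in args])

print(trace("msg"))  # a simplified structure of one protocol message
```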
Multiway trees are one of the most popular solutions for big data indexing, and the most commonly used kind of multiway tree is the B-tree. Different modifications of B-trees exist, including the B+-trees, B*-trees, and B*+-trees considered in this work; however, these modifications are not supported by the popular open-source relational DBMS SQLite. This work builds on previous research on the performance of multiway trees in the problem of structured data indexing and uses the previously developed C++ library of multiway trees. In that research, the B*+-tree was developed as a data structure that combines the main features of the B+-tree and the B*-tree; the empirical computational complexities of different operations on the B-tree and its modifications, as well as their memory usage, were also measured. The purpose of the current work is to develop a SQLite RDBMS extension that allows B-tree modifications (the B+-tree, B*-tree, and B*+-tree) to be used as index structures in SQLite. The modifications of the base data structure were developed as a C++ library, which is connected to SQLite through a C/C++ cross-language API developed in the current work. The SQLite extension implements a novel algorithm for selecting the index structure (one of the B-tree modifications) for a given table of a database. The provided SQLite extension is adopted by the SQLite EventLog component of the LDOPA process mining library. In addition, an experiment measuring the empirical computational complexities of operations on trees of different types was conducted using the SQLite extension developed in this work.
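The paper's selection algorithm is not reproduced here; the sketch below is only a hypothetical illustration of the kind of decision such an extension could make, based on the well-known trade-offs of the structures (B+-trees favor range scans through linked leaves, B*-trees keep nodes fuller on inserts).

```python
# Purely illustrative heuristic; not the novel algorithm from the paper.
def choose_index(range_scan_share, insert_share):
    """Pick a B-tree modification for a table by its workload profile."""
    if range_scan_share > 0.5 and insert_share > 0.5:
        return "B*+-tree"  # combines the B+ and B* features
    if range_scan_share > 0.5:
        return "B+-tree"   # linked leaves speed up range queries
    if insert_share > 0.5:
        return "B*-tree"   # denser nodes, fewer splits
    return "B-tree"

print(choose_index(range_scan_share=0.7, insert_share=0.6))  # B*+-tree
```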
This paper presents an approach to the description of cellular automata using tensors. The approach makes it possible to employ various frameworks for scientific computing on high-performance graphics processors, that is, to automatically build parallel software implementations of cellular automata. In our work, we use the TensorFlow framework to organize computations on NVIDIA graphics adapters. As an example cellular automaton, we used Conway's Game of Life. The effect of the described approach on the implementation of cellular automata is estimated experimentally.
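A minimal runnable sketch of the tensor formulation is given below: one Game of Life step is expressed as a 2-D convolution that counts the neighbours of every cell, which TensorFlow executes on a GPU when one is available. The exact code in the paper may differ.

```python
import tensorflow as tf

def life_step(grid):
    """One generation; grid has shape [1, H, W, 1] with 0/1 values."""
    kernel = tf.constant([1., 1., 1.,
                          1., 0., 1.,
                          1., 1., 1.], shape=[3, 3, 1, 1])
    n = tf.nn.conv2d(grid, kernel, strides=1, padding="SAME")
    born = tf.logical_and(tf.equal(grid, 0.), tf.equal(n, 3.))
    survive = tf.logical_and(
        tf.equal(grid, 1.),
        tf.logical_or(tf.equal(n, 2.), tf.equal(n, 3.)))
    return tf.cast(tf.logical_or(born, survive), tf.float32)

glider = tf.constant([[0, 1, 0], [0, 0, 1], [1, 1, 1]], tf.float32)
grid = tf.reshape(tf.pad(glider, [[0, 5], [0, 5]]), [1, 8, 8, 1])
grid = life_step(grid)  # advance the automaton by one generation
```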
Cluster systems whose nodes use processors with a large number of cores are now in wide use. An effective software implementation on such computing systems requires that the corresponding mathematical models have a significant parallelism resource. Problems of modeling hybrid dynamical systems (HDS) typically possess such a resource, since this class of mathematical models isolates the (theoretically infinite-dimensional) phase space of control objects with spatially distributed parameters. The purpose of this work is to study the effectiveness of the software implementation, on parallel computing systems, of the class of problems concerned with modeling the influence of typical nonlinearities and nonstationarity on the output vector function of an HDS. As an example, a nonlinear stabilization system for a mobile control object (a rocket, taking the elastic deformations of its body into account) is considered.
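Schematically, models of this class couple finite-dimensional dynamics with a distributed component; the form below is a generic illustration (the paper's exact equations are not reproduced), where $x(t)$ is the lumped state, $w(z,t)$ the elastic deformation of the body, and $y(t)$ the output vector function.

```latex
% Generic shape of a hybrid dynamical system with an elastic element;
% illustrative only, not the equations studied in the paper.
\begin{aligned}
\dot{x}(t) &= f\bigl(x(t),\, u(t),\, w(\cdot, t)\bigr), \\
\frac{\partial^{2} w}{\partial t^{2}}
  &= -\,a^{2}\, \frac{\partial^{4} w}{\partial z^{4}}
     + g\bigl(x(t),\, z\bigr), \\
y(t) &= h\bigl(x(t),\, w(\cdot, t)\bigr).
\end{aligned}
```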